diff --git a/.buildinfo b/.buildinfo new file mode 100644 index 000000000..f20b93365 --- /dev/null +++ b/.buildinfo @@ -0,0 +1,4 @@ +# Sphinx build info version 1 +# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. +config: 801ad5d59720abd82bdd89fc0284a5b4 +tags: 645f666f9bcd5a90fca523b33c5a78b7 diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 000000000..e69de29bb diff --git a/404.html b/404.html new file mode 100644 index 000000000..cff9e6429 --- /dev/null +++ b/404.html @@ -0,0 +1,116 @@ + + + + + + + 404 Page not found. — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

404 Page not found.

+

Please use the left menu or the search box to find the page you are interested in.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/Checksums.html b/Basic Concepts/Checksums.html new file mode 100644 index 000000000..49ee06a71 --- /dev/null +++ b/Basic Concepts/Checksums.html @@ -0,0 +1,324 @@ + + + + + + + Checksums and Their Use in ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Checksums and Their Use in ZFS

+

End-to-end checksums are a key feature of ZFS and an important +differentiator for ZFS over other RAID implementations and filesystems. +Advantages of end-to-end checksums include:

+
    +
  • data corruption is detected upon reading from media

  • +
  • blocks that are detected as corrupt are automatically repaired if +possible, by using the RAID protection in suitably configured pools, +or redundant copies (see the zfs copies property)

  • +
  • periodic scrubs can check data to detect and repair latent media +degradation (bit rot) and corruption from other sources

  • +
  • checksums on ZFS replication streams, zfs send and +zfs receive, ensure the data received is not corrupted by +intervening storage or transport mechanisms

  • +
+
+

Checksum Algorithms

+

The checksum algorithms in ZFS can be changed for datasets (filesystems +or volumes). The checksum algorithm used for each block is stored in the +block pointer (metadata). The block checksum is calculated when the +block is written, so changing the algorithm only affects writes +occurring after the change.

+

The checksum algorithm for a dataset can be changed by setting the +checksum property:

+
zfs set checksum=sha256 pool_name/dataset_name
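# To confirm which algorithm the dataset now uses, and where the value is
# inherited from, read the property back (same placeholder names as above):
zfs get checksum pool_name/dataset_name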
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Checksum

Ok for dedup +and nopwrite?

Compatible with +other ZFS +implementations?

Notes

on

see notes

yes

on is shorthand for fletcher4 for non-deduped datasets and sha256 for deduped datasets

off

no

yes

Do not use off

fletcher2

no

yes

Deprecated +implementation +of Fletcher +checksum, use +fletcher4 +instead

fletcher4

no

yes

Fletcher +algorithm, also +used for +zfs send +streams

sha256

yes

yes

Default for +deduped +datasets

noparity

no

yes

Do not use +noparity

sha512

yes

requires pool +feature +org.illumos:sha512

salted +sha512 +currently not +supported for +any filesystem +on the boot +pools

skein

yes

requires pool +feature +org.illumos:skein

salted +skein +currently not +supported for +any filesystem +on the boot +pools

edonr

see notes

requires pool +feature +org.illumos:edonr

salted +edonr +currently not +supported for +any filesystem +on the boot +pools

+

In an abundance of +caution, Edon-R requires +verification when used +with dedup, so it will +automatically use +verify.

+

blake3

yes

requires pool +feature +org.openzfs:blake3

salted +blake3 +currently not +supported for +any filesystem +on the boot +pools

+
+
+

Checksum Accelerators

+

ZFS has the ability to offload checksum operations to the Intel +QuickAssist Technology (QAT) adapters.

+
+
+

Checksum Microbenchmarks

+

Some ZFS features use microbenchmarks when the zfs.ko kernel module +is loaded to determine the optimal algorithm for checksums. The results +of the microbenchmarks are observable in the /proc/spl/kstat/zfs +directory. The winning algorithm is reported as the “fastest” and +becomes the default. The default can be overridden by setting zfs module +parameters.
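For example, the Fletcher4 results can be inspected and the selected implementation overridden at runtime; the implementation name avx2 below is only an example, since the available names depend on the CPU and the ZFS build:

cat /proc/spl/kstat/zfs/fletcher_4_bench                     # per-implementation benchmark results
cat /sys/module/zfs/parameters/zfs_fletcher_4_impl           # currently selected implementation
echo avx2 > /sys/module/zfs/parameters/zfs_fletcher_4_impl   # override the default (example value)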

+ + + + + + + + + + + + + + + + + +

Checksum

Results Filename

zfs module parameter

Fletcher4

/proc/spl/kstat/zfs/fletcher_4_bench

zfs_fletcher_4_impl

all-other

/proc/spl/kstat/zfs/chksum_bench

zfs_blake3_impl, +zfs_sha256_impl, +zfs_sha512_impl

+
+
+

Disabling Checksums

+

While it may be tempting to disable checksums to improve CPU performance, it is widely considered by the ZFS community to be an extraordinarily bad idea. Don’t disable checksums.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/Feature Flags.html b/Basic Concepts/Feature Flags.html new file mode 100644 index 000000000..5d96879f1 --- /dev/null +++ b/Basic Concepts/Feature Flags.html @@ -0,0 +1,289 @@ + + + + + + + Feature Flags — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Feature Flags

+

ZFS on-disk formats were originally versioned with a single number, +which increased whenever the format changed. The numbered approach was +suitable when development of ZFS was driven by a single organisation.

+

For distributed development of OpenZFS, version numbering was +unsuitable. Any change to the number would have required agreement, +across all implementations, of each change to the on-disk format.

+

OpenZFS feature flags – an alternative to traditional version numbering +– allow a uniquely named pool property for each change to the on-disk +format. This approach supports:

+
    +
  • format changes that are independent

  • +
  • format changes that depend on each other.

  • +
+
+

Compatibility

+

Where all features that are used by a pool are supported by multiple +implementations of OpenZFS, the on-disk format is portable across those +implementations.
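As a practical check, the feature state of an existing pool can be compared against what the locally installed software supports; the pool name tank is a placeholder:

zpool get all tank | grep feature@   # per-feature state on this pool: disabled, enabled, or active
man zpool-features                   # features supported by the locally installed OpenZFS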

+

Features that are exclusive when enabled should be periodically ported +to all distributions.

+
+
+

Reference materials

+

ZFS Feature Flags +(Christopher Siden, 2012-01, in the Internet +Archive Wayback Machine) in particular: “… Legacy version numbers still +exist for pool versions 1-28 …”.

+

zpool-features(7) man page - OpenZFS

+

zpool-features (5) – illumos

+
+
+

Feature flags implementation per OS

+
+ZFS Feature Matrix + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Feature FlagRead-Only
Compatible
OpenZFS (Linux, FreeBSD 13+)FreeBSD pre OpenZFSIllumosJoyentNetBSDNexentaOmniOS CEOpenZFS on OS X
0.6.5.110.7.130.8.62.0.72.1.142.2.2master12.1.012.2.0mastermaster9.3main4.0.5-FPmasterr151046r151048master2.1.62.2.02.2.2main
org.zfsonlinux:allocation_classesyesnonoyesyesyesyesyesnoyesyesyesnonononoyesyesyesyesyesyesyes
com.delphix:async_destroyyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
org.openzfs:blake3nonononononoyesyesnononononononononononoyesyesyesyes
com.fudosecurity:block_cloningyesnononononoyesyesnonononononononononononoyesyesyes
com.datto:bookmark_v2nononoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
com.delphix:bookmark_writtennonononoyesyesyesyesnononononononononononoyesyesyesyes
com.delphix:bookmarksyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.nexenta:class_of_storageyesnononononononononononononoyesyesnonononononono
org.openzfs:device_rebuildyesnononoyesyesyesyesnononononononononononoyesyesyesyes
com.delphix:device_removalnononoyesyesyesyesyesyesyesyesyesnononoyesyesyesyesyesyesyesyes
org.openzfs:draidnononononoyesyesyesnononononononononononoyesyesyesyes
org.illumos:edonrnoyes1yes1yes1yes1yes1yes1yesnonoyesyesnononoyesyesyesyesyesyesyesyes
com.delphix:embedded_datanoyesyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesyes
com.delphix:empty_bpobjyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.delphix:enabled_txgyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.datto:encryptionnononoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
com.delphix:extensible_datasetnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.joyent:filesystem_limitsyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.delphix:head_errlognonononononoyesyesnononononononononononoyesyesyesyes
com.delphix:hole_birthnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
org.open-zfs:large_blocksnoyesyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesyes
org.zfsonlinux:large_dnodenonoyesyesyesyesyesyesnoyesyesyesnonononoyesyesyesyesyesyesyes
com.delphix:livelistyesnononoyesyesyesyesnononononononononononoyesyesyesyes
com.delphix:log_spacemapyesnononoyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
org.illumos:lz4_compressnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.nexenta:meta_devicesyesnononononononononononononoyesyesnonononononono
com.joyent:multi_vdev_crash_dumpnonoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.delphix:obsolete_countsyesnonoyesyesyesyesyesyesyesyesyesnononoyesyesyesyesyesyesyesyes
org.zfsonlinux:project_quotayesnonoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
org.openzfs:raidz_expansionnononononononoyesnononononononononononononoyesyes
com.delphix:redacted_datasetsnonononoyesyesyesyesnononononononononononoyesyesyesyes
com.delphix:redaction_bookmarksnonononoyesyesyesyesnononononononononononoyesyesyesyes
com.delphix:redaction_list_spillnononononononoyesnonononononononononononoyesyesyes
com.datto:resilver_deferyesnonoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
org.illumos:sha512nonoyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesyes
org.illumos:skeinnonoyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesyes
com.delphix:spacemap_histogramyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.delphix:spacemap_v2yesnonoyesyesyesyesyesyesyesyesyesnonononoyesyesyesyesyesyesyes
org.zfsonlinux:userobj_accountingyesnoyesyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
com.nexenta:vdev_propertiesyesnononononononononononononoyesyesnonononononono
com.klarasystems:vdev_zaps_v2nonononononoyesyesnonononononononononononoyesyesyes
com.nexenta:wbcnononononononononononononononoyesnonononononono
org.openzfs:zilsaxattryesnononononoyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
com.delphix:zpool_checkpointyesnonoyesyesyesyesyesyesyesyesyesnonononoyesyesyesyesyesyesyes
org.freebsd:zstd_compressnonononoyesyesyesyesnononononononononononoyesyesyesyes
+ +

Table is generated by parsing manpages for feature flags, and is entirely dependent on good, accurate documentation.
Last updated on 2023-12-25T19:17:15.361178Z using compatibility_matrix.py.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/RAIDZ.html b/Basic Concepts/RAIDZ.html new file mode 100644 index 000000000..8663b9060 --- /dev/null +++ b/Basic Concepts/RAIDZ.html @@ -0,0 +1,200 @@ + + + + + + + RAIDZ — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

RAIDZ

+

tl;dr: RAIDZ is effective for large block sizes and sequential workloads.

+
+

Introduction

+

RAIDZ is a variation on RAID-5 that allows for better distribution of parity and eliminates the RAID-5 “write hole” (in which data and parity become inconsistent after a power loss). Data and parity are striped across all disks within a raidz group.

+

A raidz group can have single, double, or triple parity, meaning that the raidz +group can sustain one, two, or three failures, respectively, without losing any +data. The raidz1 vdev type specifies a single-parity raidz group; the raidz2 +vdev type specifies a double-parity raidz group; and the raidz3 vdev type +specifies a triple-parity raidz group. The raidz vdev type is an alias for +raidz1.

+

A raidz group of N disks of size X with P parity disks can hold +approximately (N-P)*X bytes and can withstand P devices failing without +losing data. The minimum number of devices in a raidz group is one more +than the number of parity disks. The recommended number is between 3 and 9 +to help increase performance.
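For example, a double-parity group can be created as follows; the device names are placeholders, and raidz, raidz2, or raidz3 selects the parity level:

zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf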

+
+
+

Space efficiency

+

Actual used space for a block in RAIDZ is based on several points:

+
    +
  • minimal write size is disk sector size (can be set via ashift vdev parameter)

  • +
  • stripe width in RAIDZ is dynamic: a stripe holds at least one part of a data block, and at most as many parts as there are disks minus the number of parity disks

  • +
  • one block of data of recordsize bytes is split into equal sector-size parts, which are written across the stripes of the RAIDZ vdev

  • +
  • each stripe of data will hold a part of the block

  • +
  • in addition to the data, one, two, or three blocks of parity are written, one per disk; so, for a raidz2 of 5 disks there will be 3 blocks of data and 2 blocks of parity

  • +
+

Due to these inputs, if recordsize is less than or equal to the sector size, then RAIDZ’s parity overhead is effectively the same as that of a mirror with the same redundancy. For example, for a raidz1 of 3 disks with ashift=12 and recordsize=4K we will allocate on disk:

+
    +
  • one 4K block of data

  • +
  • one 4K parity block

  • +
+

and the usable space ratio will be 50%, the same as with a double (two-way) mirror.

+

Another example for ashift=12 and recordsize=128K for raidz1 of 3 disks:

+
    +
  • total stripe width is 3

  • +
  • one stripe can have up to 2 data parts of 4K size because of the 1 parity block

  • +
  • we will have 128K/8K = 16 stripes, each with 8K of data and 4K of parity

  • +
  • 16 stripes of 12K each means we write 192K to store 128K

  • +
+

so the usable space ratio in this case will be 66%.
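One rough way to observe this overhead on a real pool is to compare the raw allocation reported by zpool list, which includes parity, with the usable space reported by zfs list; the pool name is a placeholder:

zpool list -v tank   # SIZE/ALLOC count raw space, including parity and padding
zfs list tank        # USED/AVAIL reflect usable space after parity overhead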

+

The more disks RAIDZ has, the wider the stripe, the greater the space +efficiency.

+

You can find actual parity cost per RAIDZ size here:

+

(source)

+
+
+

Performance considerations

+
+

Write

+

Because of the full stripe width, writing one block writes a part of the stripe to every disk. As a result, in the worst case a single RAIDZ vdev delivers the write IOPS of its slowest disk.

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/Troubleshooting.html b/Basic Concepts/Troubleshooting.html new file mode 100644 index 000000000..834f4402c --- /dev/null +++ b/Basic Concepts/Troubleshooting.html @@ -0,0 +1,226 @@ + + + + + + + Troubleshooting — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Troubleshooting

+
+

Todo

+

This page is a draft.

+
+

This page contains tips for troubleshooting ZFS on Linux and what info +developers might want for bug triage.

+ +
+
+

About Log Files

+

Log files can be very useful for troubleshooting. In some cases, +interesting information is stored in multiple log files that are +correlated to system events.

+

Pro tip: logging infrastructure tools like elasticsearch, fluentd, +influxdb, or splunk can simplify log analysis and event correlation.

+
+

Generic Kernel Log

+

Typically, Linux kernel log messages are available from dmesg -T, /var/log/syslog, or wherever kernel log messages are sent (e.g. by rsyslogd).
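For example, ZFS and SPL related messages can be pulled out of the kernel log like this (log file paths vary by distribution):

dmesg -T | grep -iE 'zfs|spl'
grep -iE 'zfs|spl' /var/log/syslog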

+
+
+

ZFS Kernel Module Debug Messages

+

The ZFS kernel modules use an internal log buffer for detailed logging information. This log information is available in the pseudo file /proc/spl/kstat/zfs/dbgmsg for ZFS builds where the ZFS module parameter zfs_dbgmsg_enable = 1.
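A minimal sketch, assuming the module parameter is writable on your build and the commands are run as root:

echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable   # enable the debug log (the default on recent builds)
cat /proc/spl/kstat/zfs/dbgmsg                          # read the accumulated debug messages
echo 0 > /proc/spl/kstat/zfs/dbgmsg                     # clear the buffer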

+
+
+
+
+

Unkillable Process

+

Symptom: a zfs or zpool command appears hung, does not return, and is not killable

+

Likely cause: kernel thread hung or panic

+

Log files of interest: Generic Kernel Log, +ZFS Kernel Module Debug Messages

+

Important information: if a kernel thread is stuck, then a backtrace of the stuck thread may appear in the logs. In some cases, the stuck thread is not logged until the deadman timer expires. See also debug tunables.

+
+
+
+

ZFS Events

+

ZFS uses an event-based messaging interface for communication of +important events to other consumers running on the system. The ZFS Event +Daemon (zed) is a userland daemon that listens for these events and +processes them. zed is extensible so you can write shell scripts or +other programs that subscribe to events and take action. For example, +the script usually installed at /etc/zfs/zed.d/all-syslog.sh writes +a formatted event message to syslog. See the man page for zed(8) +for more information.

+

A history of events is also available via the zpool events command. +This history begins at ZFS kernel module load and includes events from +any pool. These events are stored in RAM and limited in count to a value +determined by the kernel tunable +zfs_event_len_max. +zed has an internal throttling mechanism to prevent overconsumption +of system resources processing ZFS events.

+

More detailed information about events is observable using zpool events -v. The contents of the verbose events are subject to change, based on the event and the information available at the time of the event.
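For example, the following invocations show the main ways to inspect the event log:

zpool events        # summary list of recent events from all pools
zpool events -v     # verbose output with the full payload of each event
zpool events -f     # follow mode: print new events as they are posted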

+

Each event has a class identifier used for filtering event types. +Commonly seen events are those related to pool management with class +sysevent.fs.zfs.* including import, export, configuration updates, +and zpool history updates.

+

Events related to errors are reported as class ereport.*. These can be invaluable for troubleshooting. Some faults can cause multiple ereports as various layers of the software deal with the fault. For example, on a simple pool without parity protection, a faulty disk could cause an ereport.io during a read from the disk that results in an ereport.fs.zfs.checksum at the pool level. These events are also reflected by the error counters observed in zpool status. If you see checksum or read/write errors in zpool status, then there should be one or more corresponding ereports in the zpool events output.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/dRAID Howto.html b/Basic Concepts/dRAID Howto.html new file mode 100644 index 000000000..b459b0d59 --- /dev/null +++ b/Basic Concepts/dRAID Howto.html @@ -0,0 +1,351 @@ + + + + + + + dRAID — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

dRAID

+
+

Note

+

This page describes functionality which was added for the OpenZFS 2.1.0 release; it is not in the OpenZFS 2.0.0 release.

+
+
+

Introduction

+

dRAID is a variant of raidz that provides integrated distributed hot spares, which allow for faster resilvering while retaining the benefits of raidz. A dRAID vdev is constructed from multiple internal raidz groups, each with D data devices and P parity devices. These groups are distributed over all of the children in order to fully utilize the available disk performance. This is known as parity declustering and it has been an active area of research. The image below is simplified, but it helps illustrate this key difference between dRAID and raidz.

+

draid1

+

Additionally, a dRAID vdev must shuffle its child vdevs in such a way +that regardless of which drive has failed, the rebuild IO (both read +and write) will distribute evenly among all surviving drives. This +is accomplished by using carefully chosen precomputed permutation +maps. This has the advantage of both keeping pool creation fast and +making it impossible for the mapping to be damaged or lost.

+

Another way dRAID differs from raidz is that it uses a fixed stripe width (padding as necessary with zeros). This allows a dRAID vdev to be sequentially resilvered; however, the fixed stripe width significantly affects both usable capacity and IOPS. For example, with the default D=8 and 4k disk sectors the minimum allocation size is 32k. If using compression, this relatively large allocation size can reduce the effective compression ratio. When using ZFS volumes and dRAID the default volblocksize property is increased to account for the allocation size. If a dRAID pool will hold a significant amount of small blocks, it is recommended to also add a mirrored special vdev to store those blocks.
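A hedged sketch of adding such a special vdev to an existing dRAID pool; the device names and the small-block threshold below are examples only:

zpool add tank special mirror /dev/sdx /dev/sdy   # mirrored special vdev for metadata and small blocks
zfs set special_small_blocks=32K tank             # route blocks up to 32K to the special vdev (example value)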

+

In terms of IOPS, performance is similar to raidz, since for any read all D data disks must be accessed. Delivered random IOPS can be reasonably approximated as floor((N-S)/(D+P))*<single-drive-IOPS>.

+

In summary dRAID can provide the same level of redundancy and +performance as raidz, while also providing a fast integrated distributed +spare.

+
+
+

Create a dRAID vdev

+

A dRAID vdev is created like any other by using the zpool create +command and enumerating the disks which should be used.

+
# zpool create <pool> draid[1,2,3] <vdevs...>
+
+
+

Like raidz, the parity level is specified immediately after the draid vdev type. However, unlike raidz, additional colon-separated options can be specified. The most important of these is the :<spares>s option, which controls the number of distributed hot spares to create. By default, no spares are created. The :<data>d option can be specified to set the number of data devices to use in each RAID stripe (D+P). When unspecified, reasonable defaults are chosen.

+
# zpool create <pool> draid[<parity>][:<data>d][:<children>c][:<spares>s] <vdevs...>
+
+
+
    +
  • parity - The parity level (1-3). Defaults to one.

  • +
  • data - The number of data devices per redundancy group. In general +a smaller value of D will increase IOPS, improve the compression ratio, +and speed up resilvering at the expense of total usable capacity. +Defaults to 8, unless N-P-S is less than 8.

  • +
  • children - The expected number of children. Useful as a cross-check +when listing a large number of devices. An error is returned when the +provided number of children differs.

  • +
  • spares - The number of distributed hot spares. Defaults to zero.

  • +
+

For example, to create an 11 disk dRAID pool with 4+1 redundancy and a +single distributed spare the command would be:

+
# zpool create tank draid:4d:1s:11c /dev/sd[a-k]
+# zpool status tank
+
+  pool: tank
+ state: ONLINE
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        tank                  ONLINE       0     0     0
+          draid1:4d:11c:1s-0  ONLINE       0     0     0
+            sda               ONLINE       0     0     0
+            sdb               ONLINE       0     0     0
+            sdc               ONLINE       0     0     0
+            sdd               ONLINE       0     0     0
+            sde               ONLINE       0     0     0
+            sdf               ONLINE       0     0     0
+            sdg               ONLINE       0     0     0
+            sdh               ONLINE       0     0     0
+            sdi               ONLINE       0     0     0
+            sdj               ONLINE       0     0     0
+            sdk               ONLINE       0     0     0
+        spares
+          draid1-0-0          AVAIL
+
+
+

Note that the dRAID vdev name, draid1:4d:11c:1s, fully describes the configuration, and all of the disks which are part of the dRAID are listed. Furthermore, the logical distributed hot spare is shown as an available spare disk.

+
+
+

Rebuilding to a Distributed Spare

+

One of the major advantages of dRAID is that it supports both sequential +and traditional healing resilvers. When performing a sequential resilver +to a distributed hot spare the performance scales with the number of disks +divided by the stripe width (D+P). This can greatly reduce resilver times +and restore full redundancy in a fraction of the usual time. For example, +the following graph shows the observed sequential resilver time in hours +for a 90 HDD based dRAID filled to 90% capacity.

+

draid-resilver

+

When using dRAID and a distributed spare, the process for handling a +failed disk is almost identical to raidz with a traditional hot spare. +When a disk failure is detected the ZFS Event Daemon (ZED) will start +rebuilding to a spare if one is available. The only difference is that +for dRAID a sequential resilver is started, while a healing resilver must +be used for raidz.

+
# echo offline >/sys/block/sdg/device/state
+# zpool replace -s tank sdg draid1-0-0
+# zpool status
+
+  pool: tank
+ state: DEGRADED
+status: One or more devices is currently being resilvered.  The pool will
+        continue to function, possibly in a degraded state.
+action: Wait for the resilver to complete.
+  scan: resilver (draid1:4d:11c:1s-0) in progress since Tue Nov 24 14:34:25 2020
+        3.51T scanned at 13.4G/s, 1.59T issued 6.07G/s, 6.13T total
+        326G resilvered, 57.17% done, 00:03:21 to go
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        tank                  DEGRADED     0     0     0
+          draid1:4d:11c:1s-0  DEGRADED     0     0     0
+            sda               ONLINE       0     0     0  (resilvering)
+            sdb               ONLINE       0     0     0  (resilvering)
+            sdc               ONLINE       0     0     0  (resilvering)
+            sdd               ONLINE       0     0     0  (resilvering)
+            sde               ONLINE       0     0     0  (resilvering)
+            sdf               ONLINE       0     0     0  (resilvering)
+            spare-6           DEGRADED     0     0     0
+              sdg             UNAVAIL      0     0     0
+              draid1-0-0      ONLINE       0     0     0  (resilvering)
+            sdh               ONLINE       0     0     0  (resilvering)
+            sdi               ONLINE       0     0     0  (resilvering)
+            sdj               ONLINE       0     0     0  (resilvering)
+            sdk               ONLINE       0     0     0  (resilvering)
+        spares
+          draid1-0-0          INUSE     currently in use
+
+
+

While both types of resilvering achieve the same goal it’s worth taking +a moment to summarize the key differences.

+
    +
  • A traditional healing resilver scans the entire block tree. This +means the checksum for each block is available while it’s being +repaired and can be immediately verified. The downside is this +creates a random read workload which is not ideal for performance.

  • +
  • A sequential resilver instead scans the space maps in order to determine what space is allocated and what must be repaired. This rebuild process is not limited to block boundaries and can sequentially read from the disks and make repairs using larger I/Os. The price to pay for this performance improvement is that the block checksums cannot be verified while resilvering. Therefore, a scrub is started to verify the checksums after the sequential resilver completes.

  • +
+

For a more in depth explanation of the differences between sequential +and healing resilvering check out these sequential resilver slides +which were presented at the OpenZFS Developer Summit.

+
+
+

Rebalancing

+

Distributed spare space can be made available again by simply replacing +any failed drive with a new drive. This process is called rebalancing +and is essentially a resilver. When performing rebalancing a healing +resilver is recommended since the pool is no longer degraded. This +ensures all checksums are verified when rebuilding to the new disk +and eliminates the need to perform a subsequent scrub of the pool.

+
# zpool replace tank sdg sdl
+# zpool status
+
+  pool: tank
+ state: DEGRADED
+status: One or more devices is currently being resilvered.  The pool will
+        continue to function, possibly in a degraded state.
+action: Wait for the resilver to complete.
+  scan: resilver in progress since Tue Nov 24 14:45:16 2020
+        6.13T scanned at 7.82G/s, 6.10T issued at 7.78G/s, 6.13T total
+        565G resilvered, 99.44% done, 00:00:04 to go
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        tank                  DEGRADED     0     0     0
+          draid1:4d:11c:1s-0  DEGRADED     0     0     0
+            sda               ONLINE       0     0     0  (resilvering)
+            sdb               ONLINE       0     0     0  (resilvering)
+            sdc               ONLINE       0     0     0  (resilvering)
+            sdd               ONLINE       0     0     0  (resilvering)
+            sde               ONLINE       0     0     0  (resilvering)
+            sdf               ONLINE       0     0     0  (resilvering)
+            spare-6           DEGRADED     0     0     0
+              replacing-0     DEGRADED     0     0     0
+                sdg           UNAVAIL      0     0     0
+                sdl           ONLINE       0     0     0  (resilvering)
+              draid1-0-0      ONLINE       0     0     0  (resilvering)
+            sdh               ONLINE       0     0     0  (resilvering)
+            sdi               ONLINE       0     0     0  (resilvering)
+            sdj               ONLINE       0     0     0  (resilvering)
+            sdk               ONLINE       0     0     0  (resilvering)
+        spares
+          draid1-0-0          INUSE     currently in use
+
+
+

After the resilvering completes the distributed hot spare is once again +available for use and the pool has been restored to its normal healthy +state.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/index.html b/Basic Concepts/index.html new file mode 100644 index 000000000..6e7749a4b --- /dev/null +++ b/Basic Concepts/index.html @@ -0,0 +1,164 @@ + + + + + + + Basic Concepts — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/Buildbot Options.html b/Developer Resources/Buildbot Options.html new file mode 100644 index 000000000..a85702ac1 --- /dev/null +++ b/Developer Resources/Buildbot Options.html @@ -0,0 +1,383 @@ + + + + + + + Buildbot Options — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Buildbot Options

+

There are a number of ways to control the ZFS Buildbot at a commit +level. This page provides a summary of various options that the ZFS +Buildbot supports and how it impacts testing. More detailed information +regarding its implementation can be found at the ZFS Buildbot Github +page.

+
+

Choosing Builders

+

By default, all commits in your ZFS pull request are compiled by the +BUILD builders. Additionally, the top commit of your ZFS pull request is +tested by TEST builders. However, there is the option to override which +types of builder should be used on a per commit basis. In this case, you +can add +Requires-builders: <none|all|style|build|arch|distro|test|perf|coverage|unstable> +to your commit message. A comma separated list of options can be +provided. Supported options are:

+
    +
  • all: This commit should be built by all available builders

  • +
  • none: This commit should not be built by any builders

  • +
  • style: This commit should be built by STYLE builders

  • +
  • build: This commit should be built by all BUILD builders

  • +
  • arch: This commit should be built by BUILD builders tagged as +‘Architectures’

  • +
  • distro: This commit should be built by BUILD builders tagged as +‘Distributions’

  • +
  • test: This commit should be built and tested by the TEST builders +(excluding the Coverage TEST builders)

  • +
  • perf: This commit should be built and tested by the PERF builders

  • +
  • coverage: This commit should be built and tested by the Coverage TEST builders

  • +
  • unstable: This commit should be built and tested by the Unstable TEST builders (currently only the Fedora Rawhide TEST builder)

  • +
+

A couple of examples on how to use Requires-builders: in commit +messages can be found below.

+
+

Preventing a commit from being built and tested.

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Requires-builders: none
+
+
+
+
+

Submitting a commit to STYLE and TEST builders only.

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Requires-builders: style test
+
+
+
+
+
+

Requiring SPL Versions

+

Currently, the ZFS Buildbot attempts to choose the correct SPL branch to build based on a pull request’s base branch. In the cases where a specific SPL version needs to be built, the ZFS buildbot supports specifying an SPL version for pull request testing. By opening a pull request against ZFS and adding Requires-spl: in a commit message, you can instruct the buildbot to use a specific SPL version. Below are examples of commit messages that specify the SPL version.

+
+

Build SPL from a specific pull request

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Requires-spl: refs/pull/123/head
+
+
+
+
+

Build SPL branch spl-branch-name from zfsonlinux/spl repository

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Requires-spl: spl-branch-name
+
+
+
+
+
+

Requiring Kernel Version

+

Currently, Kernel.org builders will clone and build the master branch of +Linux. In cases where a specific version of the Linux kernel needs to be +built, the ZFS buildbot supports specifying the Linux kernel to be built +via commit message. By opening a pull request against ZFS and adding +Requires-kernel: in a commit message, you can instruct the buildbot +to use a specific Linux kernel. Below is an example commit message that +specifies a specific Linux kernel tag.

+
+

Build Linux Kernel Version 4.14

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Requires-kernel: v4.14
+
+
+
+
+
+

Build Steps Overrides

+

Each builder will execute or skip build steps based on its default +preferences. In some scenarios, it might be possible to skip various +build steps. The ZFS buildbot supports overriding the defaults of all +builders in a commit message. The list of available overrides are:

+
    +
  • Build-linux: <Yes|No>: All builders should build Linux for this +commit

  • +
  • Build-lustre: <Yes|No>: All builders should build Lustre for this +commit

  • +
  • Build-spl: <Yes|No>: All builders should build the SPL for this +commit

  • +
  • Build-zfs: <Yes|No>: All builders should build ZFS for this +commit

  • +
  • Built-in: <Yes|No>: All Linux builds should build in SPL and ZFS

  • +
  • Check-lint: <Yes|No>: All builders should perform lint checks for +this commit

  • +
  • Configure-lustre: <options>: Provide <options> as configure +flags when building Lustre

  • +
  • Configure-spl: <options>: Provide <options> as configure +flags when building the SPL

  • +
  • Configure-zfs: <options>: Provide <options> as configure +flags when building ZFS

  • +
+

A couple of examples on how to use overrides in commit messages can be +found below.

+
+

Skip building the SPL and build Lustre without ldiskfs

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Build-lustre: Yes
+Configure-lustre: --disable-ldiskfs
+Build-spl: No
+
+
+
+
+

Build ZFS Only

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Build-lustre: No
+Build-spl: No
+
+
+
+
+
+

Configuring Tests with the TEST File

+

At the top level of the ZFS source tree, there is the TEST +file which +contains variables that control if and how a specific test should run. +Below is a list of each variable and a brief description of what each +variable controls.

+
    +
  • TEST_PREPARE_WATCHDOG - Enables the Linux kernel watchdog

  • +
  • TEST_PREPARE_SHARES - Start NFS and Samba servers

  • +
  • TEST_SPLAT_SKIP - Determines if splat testing is skipped

  • +
  • TEST_SPLAT_OPTIONS - Command line options to provide to splat

  • +
  • TEST_ZTEST_SKIP - Determines if ztest testing is skipped

  • +
  • TEST_ZTEST_TIMEOUT - The length of time ztest should run

  • +
  • TEST_ZTEST_DIR - Directory where ztest will create vdevs

  • +
  • TEST_ZTEST_OPTIONS - Options to pass to ztest

  • +
  • TEST_ZTEST_CORE_DIR - Directory for ztest to store core dumps

  • +
  • TEST_ZIMPORT_SKIP - Determines if zimport testing is skipped

  • +
  • TEST_ZIMPORT_DIR - Directory used during zimport

  • +
  • TEST_ZIMPORT_VERSIONS - Source versions to test

  • +
  • TEST_ZIMPORT_POOLS - Names of the pools for zimport to use +for testing

  • +
  • TEST_ZIMPORT_OPTIONS - Command line options to provide to +zimport

  • +
  • TEST_XFSTESTS_SKIP - Determines if xfstest testing is skipped

  • +
  • TEST_XFSTESTS_URL - URL to download xfstest from

  • +
  • TEST_XFSTESTS_VER - Name of the tarball to download from +TEST_XFSTESTS_URL

  • +
  • TEST_XFSTESTS_POOL - Name of pool to create and used by +xfstest

  • +
  • TEST_XFSTESTS_FS - Name of dataset for use by xfstest

  • +
  • TEST_XFSTESTS_VDEV - Name of the vdev used by xfstest

  • +
  • TEST_XFSTESTS_OPTIONS - Command line options to provide to +xfstest

  • +
  • TEST_ZFSTESTS_SKIP - Determines if zfs-tests testing is +skipped

  • +
  • TEST_ZFSTESTS_DIR - Directory to store files and loopback devices

  • +
  • TEST_ZFSTESTS_DISKS - Space delimited list of disks that +zfs-tests is allowed to use

  • +
  • TEST_ZFSTESTS_DISKSIZE - File size of file based vdevs used by +zfs-tests

  • +
  • TEST_ZFSTESTS_ITERS - Number of times test-runner should +execute its set of tests

  • +
  • TEST_ZFSTESTS_OPTIONS - Options to provide zfs-tests

  • +
  • TEST_ZFSTESTS_RUNFILE - The runfile to use when running +zfs-tests

  • +
  • TEST_ZFSTESTS_TAGS - List of tags to provide to test-runner

  • +
  • TEST_ZFSSTRESS_SKIP - Determines if zfsstress testing is +skipped

  • +
  • TEST_ZFSSTRESS_URL - URL to download zfsstress from

  • +
  • TEST_ZFSSTRESS_VER - Name of the tarball to download from +TEST_ZFSSTRESS_URL

  • +
  • TEST_ZFSSTRESS_RUNTIME - Duration to run runstress.sh

  • +
  • TEST_ZFSSTRESS_POOL - Name of pool to create and use for +zfsstress testing

  • +
  • TEST_ZFSSTRESS_FS - Name of dataset for use during zfsstress +tests

  • +
  • TEST_ZFSSTRESS_FSOPT - File system options to provide to +zfsstress

  • +
  • TEST_ZFSSTRESS_VDEV - Directory to store vdevs for use during +zfsstress tests

  • +
  • TEST_ZFSSTRESS_OPTIONS - Command line options to provide to +runstress.sh

  • +
+
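As an illustration only, a TEST file that tweaks a few of the variables listed above might contain something like the following; the values shown are hypothetical:

TEST_ZTEST_SKIP="yes"             # skip ztest for this run
TEST_ZFSTESTS_ITERS="2"           # run the ZFS Test Suite twice
TEST_ZFSTESTS_TAGS="functional"   # tags passed to test-runner
TEST_ZFSSTRESS_SKIP="yes"         # skip zfsstress testing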
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/Building ZFS.html b/Developer Resources/Building ZFS.html new file mode 100644 index 000000000..2005ba8a0 --- /dev/null +++ b/Developer Resources/Building ZFS.html @@ -0,0 +1,388 @@ + + + + + + + Building ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Building ZFS

+
+

GitHub Repositories

+

The official source for OpenZFS is maintained at GitHub by the +openzfs organization. The primary +git repository for the project is the zfs repository.

+

There are two main components in this repository:

+
    +
  • +
    ZFS: The ZFS repository contains a copy of the upstream OpenZFS

    code which has been adapted and extended for Linux and FreeBSD. The +vast majority of the core OpenZFS code is self-contained and can be +used without modification.

    +
    +
    +
  • +
  • +
    SPL: The SPL is a thin shim layer which is responsible for

    implementing the fundamental interfaces required by OpenZFS. It’s +this layer which allows OpenZFS to be used across multiple +platforms. SPL used to be maintained in a separate repository, but +was merged into the zfs +repository in the 0.8 major release.

    +
    +
    +
  • +
+
+
+

Installing Dependencies

+

The first thing you’ll need to do is prepare your environment by +installing a full development tool chain. In addition, development +headers for both the kernel and the following packages must be +available. It is important to note that if the development kernel +headers for the currently running kernel aren’t installed, the modules +won’t compile properly.

+

The following dependencies should be installed to build the latest ZFS +2.1 release.

+
    +
  • RHEL/CentOS 7:

  • +
+
sudo yum install epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel git ncompress libcurl-devel
+sudo yum install --enablerepo=epel python-packaging dkms
+
+
+
    +
  • RHEL/CentOS 8, Fedora:

  • +
+
sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python3 python3-devel python3-setuptools python3-cffi libffi-devel git ncompress libcurl-devel
+sudo dnf install --skip-broken --enablerepo=epel --enablerepo=powertools python3-packaging dkms
+
+
+
    +
  • Debian, Ubuntu:

  • +
+
sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-generic python3 python3-dev python3-setuptools python3-cffi libffi-dev python3-packaging git libcurl4-openssl-dev debhelper-compat dh-python po-debconf python3-all-dev python3-sphinx
+
+
+
    +
  • FreeBSD:

  • +
+
pkg install autoconf automake autotools git gmake python devel/py-sysctl sudo
+
+
+
+
+

Build Options

+

There are two options for building OpenZFS; the correct one largely +depends on your requirements.

+
    +
  • Packages: Often it can be useful to build custom packages from git which can be installed on a system. This is the best way to perform integration testing with systemd, dracut, and udev. The downside to using packages is that it greatly increases the time required to build, install, and test a change.

  • +
  • +
    In-tree: Development can be done entirely in the SPL/ZFS source

    tree. This speeds up development by allowing developers to rapidly +iterate on a patch. When working in-tree developers can leverage +incremental builds, load/unload kernel modules, execute utilities, +and verify all their changes with the ZFS Test Suite.

    +
    +
    +
  • +
+

The remainder of this page focuses on the in-tree option which is +the recommended method of development for the majority of changes. See +the custom packages page for additional +information on building custom packages.

+
+
+

Developing In-Tree

+
+

Clone from GitHub

+

Start by cloning the ZFS repository from GitHub. The repository has a +master branch for development and a series of *-release +branches for tagged releases. After checking out the repository your +clone will default to the master branch. Tagged releases may be built +by checking out zfs-x.y.z tags with matching version numbers or +matching release branches.

+
git clone https://github.com/openzfs/zfs
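# To build a tagged release instead of master, check out the matching tag
# after cloning (substitute a real version for x.y.z):
cd zfs
git checkout zfs-x.y.z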
+
+
+
+
+

Configure and Build

+

For developers working on a change, always create a new topic branch based on master. This will make it easy to open a pull request with your change later. The master branch is kept stable with extensive regression testing of every pull request before and after it’s merged. Every effort is made to catch defects as early as possible and to keep them out of the tree. Developers should be comfortable frequently rebasing their work against the latest master branch.

+

In this example we’ll use the master branch and walk through a stock +in-tree build. Start by checking out the desired branch then build +the ZFS and SPL source in the traditional autotools fashion.

+
cd ./zfs
+git checkout master
+sh autogen.sh
+./configure
+make -s -j$(nproc)
+
+
+
+
tip: --with-linux=PATH and --with-linux-obj=PATH can be +passed to configure to specify a kernel installed in a non-default +location.
+
tip: --enable-debug can be passed to configure to enable all ASSERTs and +additional correctness tests.
+
+

Optional Build packages

+
make rpm #Builds RPM packages for CentOS/Fedora
+make deb #Builds RPM converted DEB packages for Debian/Ubuntu
+make native-deb #Builds native DEB packages for Debian/Ubuntu
+
+
+
+
tip: Native Debian packages build with pre-configured paths for +Debian and Ubuntu. It’s best not to override the paths during +configure.
+
tip: For native Debian packages, the KVERS, KSRC and KOBJ environment variables can be exported to specify a kernel installed in a non-default location.
+
+
+

Note

+

Support for native Debian packaging will be available starting from +openzfs-2.2 release.

+
+
+
+

Install

+

You can run zfs-tests.sh without installing ZFS, see below. If you +have reason to install ZFS after building it, pay attention to how your +distribution handles kernel modules. On Ubuntu, for example, the modules +from this repository install in the extra kernel module path, which +is not in the standard depmod search path. Therefore, for the +duration of your testing, edit /etc/depmod.d/ubuntu.conf and add +extra to the beginning of the search path.

+

You may then install using +sudo make install; sudo ldconfig; sudo depmod. You’d uninstall with +sudo make uninstall; sudo ldconfig; sudo depmod.
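A hedged sketch of that workflow on Ubuntu; the sed expression assumes ubuntu.conf contains a single search line, so inspect the file before and after editing:

sudo sed -i 's/^search /search extra /' /etc/depmod.d/ubuntu.conf   # put "extra" first in the depmod search path
sudo make install && sudo ldconfig && sudo depmod                   # install and register the modules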

+
+
+

Running zloop.sh and zfs-tests.sh

+

If you wish to run the ZFS Test Suite (ZTS), then ksh and a few +additional utilities must be installed.

+
    +
  • RHEL/CentOS 7:

  • +
+
sudo yum install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr nfs-utils samba rng-tools pax perf
+sudo yum install --enablerepo=epel dbench
+
+
+
    +
  • RHEL/CentOS 8, Fedora:

  • +
+
sudo dnf install --skip-broken ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr nfs-utils samba rng-tools pax perf
+sudo dnf install --skip-broken --enablerepo=epel dbench
+
+
+
    +
  • Debian:

  • +
+
sudo apt install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-perf selinux-utils quota
+
+
+
    +
  • Ubuntu:

  • +
+
sudo apt install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-tools-common selinux-utils quota
+
+
+
    +
  • FreeBSD:

  • +
+
pkg install base64 bash checkbashisms fio hs-ShellCheck ksh93 pamtester devel/py-flake8 sudo
+
+
+

There are a few helper scripts provided in the top-level scripts +directory designed to aid developers working with in-tree builds.

+
    +
  • zfs-helpers.sh: Certain functionality (i.e. /dev/zvol/) depends on the ZFS provided udev helper scripts being installed on the system. This script can be used to create symlinks on the system from the installation location to the in-tree helper. These links must be in place to successfully run the ZFS Test Suite. The -i and -r options can be used to install and remove the symlinks.

  • +
+
sudo ./scripts/zfs-helpers.sh -i
+
+
+
    +
  • zfs.sh: The freshly built kernel modules can be loaded using +zfs.sh. This script can later be used to unload the kernel +modules with the -u option.

  • +
+
sudo ./scripts/zfs.sh
+
+
+
    +
  • zloop.sh: A wrapper to run ztest repeatedly with randomized +arguments. The ztest command is a user space stress test designed to +detect correctness issues by concurrently running a random set of +test cases. If a crash is encountered, the ztest logs, any associated +vdev files, and core file (if one exists) are collected and moved to +the output directory for analysis.

  • +
+
sudo ./scripts/zloop.sh
+
+
+
    +
  • zfs-tests.sh: A wrapper which can be used to launch the ZFS Test +Suite. Three loopback devices are created on top of sparse files +located in /var/tmp/ and used for the regression test. Detailed +directions for the ZFS Test Suite can be found in the +README +located in the top-level tests directory.

  • +
+
./scripts/zfs-tests.sh -vx
+
+
+

tip: The delegate tests will be skipped unless group read +permission is set on the zfs directory and its parents.

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/Custom Packages.html b/Developer Resources/Custom Packages.html new file mode 100644 index 000000000..0f0c5f7e9 --- /dev/null +++ b/Developer Resources/Custom Packages.html @@ -0,0 +1,359 @@ + + + + + + + Custom Packages — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Custom Packages

+

The following instructions assume you are building from an official +release tarball +(version 0.8.0 or newer) or directly from the git +repository. Most users should not +need to do this and should preferentially use the distribution packages. +As a general rule the distribution packages will be more tightly +integrated, widely tested, and better supported. However, if your +distribution of choice doesn’t provide packages, or you’re a developer +and want to roll your own, here’s how to do it.

+

The first thing to be aware of is that the build system is capable of +generating several different types of packages. Which type of package +you choose depends on what’s supported on your platform and exactly what +your needs are.

+
    +
  • DKMS packages contain only the source code and scripts for +rebuilding the kernel modules. When the DKMS package is installed +kernel modules will be built for all available kernels. Additionally, +when the kernel is upgraded new kernel modules will be automatically +built for that kernel. This is particularly convenient for desktop +systems which receive frequent kernel updates. The downside is that +because the DKMS packages build the kernel modules from source a full +development environment is required which may not be appropriate for +large deployments.

  • +
  • kmods packages are binary kernel modules which are compiled +against a specific version of the kernel. This means that if you +update the kernel you must compile and install a new kmod package. If +you don’t frequently update your kernel, or if you’re managing a +large number of systems, then kmod packages are a good choice.

  • +
  • kABI-tracking kmod Packages are similar to standard binary kmods +and may be used with Enterprise Linux distributions like Red Hat and +CentOS. These distributions provide a stable kABI (Kernel Application +Binary Interface) which allows the same binary modules to be used +with new versions of the distribution provided kernel.

  • +
+

By default the build system will generate user packages and both DKMS +and kmod style kernel packages if possible. The user packages can be +used with either set of kernel packages and do not need to be rebuilt +when the kernel is updated. You can also streamline the build process by +building only the DKMS or kmod packages as shown below.

+

Be aware that when building directly from a git repository you must +first run the autogen.sh script to create the configure script. This +will require installing the GNU autotools packages for your +distribution. To perform any of the builds, you must install all the +necessary development tools and headers for your distribution.

+

It is important to note that if the development kernel headers for the +currently running kernel aren’t installed, the modules won’t compile +properly.

+ +
+

RHEL, CentOS and Fedora

+

Make sure that the required packages are installed to build the latest +ZFS 2.1 release:

+
    +
  • RHEL/CentOS 7:

  • +
+
sudo yum install epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel ncompress
+sudo yum install --enablerepo=epel dkms python-packaging
+
+
+
    +
  • RHEL/CentOS 8, Fedora:

  • +
+
sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build kernel-rpm-macros libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) kernel-abi-stablelists-$(uname -r | sed 's/\.[^.]\+$//') python3 python3-devel python3-setuptools python3-cffi libffi-devel ncompress
+sudo dnf install --skip-broken --enablerepo=epel --enablerepo=powertools python3-packaging dkms
+
+
+
    +
  • RHEL/CentOS 9:

  • +
+
sudo dnf config-manager --set-enabled crb
+sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build kernel-rpm-macros libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) kernel-abi-stablelists-$(uname -r | sed 's/\.[^.]\+$//') python3 python3-devel python3-setuptools python3-cffi libffi-devel
+sudo dnf install --skip-broken --enablerepo=epel python3-packaging dkms
+
+
+

Get the source code.

+
+

DKMS

+

Building rpm-based DKMS and user packages can be done as follows:

+
$ cd zfs
+$ ./configure
+$ make -j1 rpm-utils rpm-dkms
+$ sudo yum localinstall *.$(uname -p).rpm *.noarch.rpm
+
+
+
+
+

kmod

+

The key thing to know when building a kmod package is that a specific Linux kernel must be specified. At configure time the build system will make an educated guess as to which kernel you want to build against. However, if configure is unable to locate your kernel development headers, or you want to build against a different kernel, you must specify the exact path with the --with-linux and --with-linux-obj options.

+
$ cd zfs
+$ ./configure
+$ make -j1 rpm-utils rpm-kmod
+$ sudo yum localinstall *.$(uname -p).rpm
+
+
+
+
+

kABI-tracking kmod

+

The process for building kABI-tracking kmods is almost identical to building normal kmods. However, it will only produce binaries which can be used by multiple kernels if the distribution supports a stable kABI. In order to request a kABI-tracking package, the --with-spec=redhat option must be passed to configure.

+

NOTE: This type of package is not available for Fedora.

+
$ cd zfs
+$ ./configure --with-spec=redhat
+$ make -j1 rpm-utils rpm-kmod
+$ sudo yum localinstall *.$(uname -p).rpm
+
+
+
+
+
+

Debian and Ubuntu

+

Make sure that the required packages are installed:

+
sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-generic python3 python3-dev python3-setuptools python3-cffi libffi-dev python3-packaging debhelper-compat dh-python po-debconf python3-all-dev python3-sphinx
+
+
+

Get the source code.

+
+

kmod

+

The key thing to know when building a kmod package is that a specific Linux kernel must be specified. At configure time the build system will make an educated guess as to which kernel you want to build against. However, if configure is unable to locate your kernel development headers, or you want to build against a different kernel, you must specify the exact path with the --with-linux and --with-linux-obj options.

+

To build RPM converted Debian packages:

+
$ cd zfs
+$ ./configure --enable-systemd
+$ make -j1 deb-utils deb-kmod
+$ sudo apt-get install --fix-missing ./*.deb
+
+
+

Starting from openzfs-2.2 release, native Debian packages can be built +as follows:

+
$ cd zfs
+$ ./configure
+$ make native-deb-utils native-deb-kmod
+$ rm ../openzfs-zfs-dkms_*.deb
+$ sudo apt-get install --fix-missing ../*.deb
+
+
+

Native Debian packages build with pre-configured paths for Debian and +Ubuntu. It’s best not to override the paths during configure. +KVERS, KSRC and KOBJ environment variables can be exported +to specify the kernel installed in non-default location.

+
+
+

DKMS

+

Building RPM converted deb-based DKMS and user packages can be done as +follows:

+
$ cd zfs
+$ ./configure --enable-systemd
+$ make -j1 deb-utils deb-dkms
+$ sudo apt-get install --fix-missing ./*.deb
+
+
+

Starting from openzfs-2.2 release, native deb-based DKMS and user +packages can be built as follows:

+
$ sudo apt-get install dh-dkms
+$ cd zfs
+$ ./configure
+$ make native-deb-utils
+$ sudo apt-get install --fix-missing ../*.deb
+
+
+
+
+
+

Get the Source Code

+
+

Released Tarball

+

The released tarball contains the latest fully tested and released +version of ZFS. This is the preferred source code location for use in +production systems. If you want to use the official released tarballs, +then use the following commands to fetch and prepare the source.

+
$ wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-x.y.z.tar.gz
+$ tar -xzf zfs-x.y.z.tar.gz
+
+
+
+
+

Git Master Branch

+

The Git master branch contains the latest version of the software, and will probably contain fixes that, for some reason, weren't included in the released tarball. This is the preferred source code location for developers who intend to modify ZFS. If you would like to use the git version, you can clone it from GitHub and prepare the source like this.

+
$ git clone https://github.com/zfsonlinux/zfs.git
+$ cd zfs
+$ ./autogen.sh
+
+
+

Once the source has been prepared you'll need to decide what kind of packages you're building and jump to the appropriate section above. Note that not all package types are supported for all platforms.

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/Git and GitHub for beginners.html b/Developer Resources/Git and GitHub for beginners.html new file mode 100644 index 000000000..57d302d93 --- /dev/null +++ b/Developer Resources/Git and GitHub for beginners.html @@ -0,0 +1,315 @@ + + + + + + + Git and GitHub for beginners (ZoL edition) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Git and GitHub for beginners (ZoL edition)

+

This is a very basic rundown of how to use Git and GitHub to make +changes.

+

Recommended reading: ZFS on Linux +CONTRIBUTING.md

+
+

First time setup

+

If you’ve never used Git before, you’ll need a little setup to start +things off.

+
git config --global user.name "My Name"
+git config --global user.email myemail@noreply.non
+
+
+
+
+

Cloning the initial repository

+

The easiest way to get started is to click the fork icon at the top of +the main repository page. From there you need to download a copy of the +forked repository to your computer:

+
git clone https://github.com/<your-account-name>/zfs.git
+
+
+

This sets the "origin" repository to your fork. This will come in handy when creating pull requests. To make it easy to pull in changes from the "upstream" repository as they are made, it is very useful to establish the upstream repository as another remote (man git-remote):

+
cd zfs
+git remote add upstream https://github.com/zfsonlinux/zfs.git
+
+
+
+
+
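Before moving on you can confirm both remotes are configured and fetch the upstream branches; the output shown in the comments is only illustrative:

git remote -v
# origin    https://github.com/<your-account-name>/zfs.git (fetch)
# origin    https://github.com/<your-account-name>/zfs.git (push)
# upstream  https://github.com/zfsonlinux/zfs.git (fetch)
# upstream  https://github.com/zfsonlinux/zfs.git (push)
git fetch upstream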

Preparing and making changes

+

In order to make changes it is recommended to create a branch; this lets you work on several unrelated changes at once. It is also not recommended to make changes to the master branch unless you own the repository.

+
git checkout -b my-new-branch
+
+
+

From here you can make your changes and move on to the next step.

+

Recommended reading: C Style and Coding Standards for +SunOS, +ZFS on Linux Developer +Resources, +OpenZFS Developer +Resources

+
+
+

Testing your patches before pushing

+

Before committing and pushing, you may want to test your patches. There are several tests you can run against your branch, such as style checking and functional tests. All pull requests go through these tests before being merged into the main repository; however, testing locally takes the load off the build/test servers. This step is optional but highly recommended. Note that the test suite should be run on a virtual machine or a host that does not currently use ZFS. You may need to install shellcheck and flake8 to run the checkstyle target correctly.

+
sh autogen.sh
+./configure
+make checkstyle
+
+
+
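If you also want to run the functional tests locally, the ZFS Test Suite is driven by helper scripts shipped in the source tree. The commands below are a rough sketch of a typical run; consult the ZFS Test Suite README linked underneath for the authoritative options, and remember this should only be done on a disposable virtual machine:

sudo ./scripts/zfs-helpers.sh -iv
sudo ./scripts/zfs.sh
./scripts/zfs-tests.sh -vx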

Recommended reading: Building +ZFS, ZFS Test +Suite +README

+
+
+

Committing your changes to be pushed

+

When you are done making changes to your branch there are a few more +steps before you can make a pull request.

+
git commit --all --signoff
+
+
+

This command stages all modified files in your branch and opens an editor for the commit message. Here you need to describe your change and add a few things:

+
# Please enter the commit message for your changes. Lines starting
+# with '#' will be ignored, and an empty message aborts the commit.
+# On branch my-new-branch
+# Changes to be committed:
+#   (use "git reset HEAD <file>..." to unstage)
+#
+#   modified:   hello.c
+#
+
+
+

The first thing we need to add is the commit message. This is what is +displayed on the git log, and should be a short description of the +change. By style guidelines, this has to be less than 72 characters in +length.

+

Underneath the commit message you can add a more descriptive text to +your commit. The lines in this section have to be less than 72 +characters.

+

When you are done, the commit should look like this:

+
Add hello command
+
+This is a test commit with a descriptive commit message.
+This message can be more than one line as shown here.
+
+Signed-off-by: My Name <myemail@noreply.non>
+Closes #9998
+Issue #9999
+# Please enter the commit message for your changes. Lines starting
+# with '#' will be ignored, and an empty message aborts the commit.
+# On branch my-new-branch
+# Changes to be committed:
+#   (use "git reset HEAD <file>..." to unstage)
+#
+#   modified:   hello.c
+#
+
+
+

You can also reference issues and pull requests if you are filing a pull +request for an existing issue as shown above. Save and exit the editor +when you are done.

+
+
+

Pushing and creating the pull request

+

Home stretch. You’ve made your change and made the commit. Now it’s time +to push it.

+
git push --set-upstream origin my-new-branch
+
+
+

This should ask you for your GitHub credentials and upload your changes to your repository.

+

The last step is to go to either your repository or the upstream repository on GitHub, where you should see a button for making a new pull request from your recently pushed branch.

+
+
+

Correcting issues with your pull request

+

Sometimes things don't go as planned and you may need to update your pull request with a correction to either your commit message or your changes. This can be accomplished by re-pushing your branch. If you need to make code changes or git add a file, you can do those now, along with the following:

+
git commit --amend
+git push --force
+
+
+

This will return you to the commit editor screen, and push your changes in place of the old ones. Do note that this will restart any build/test jobs currently running for the pull request, and pushing excessively can cause delays in processing of all pull requests.

+
+
+

Maintaining your repository

+

When you wish to make changes in the future you will want to have an +up-to-date copy of the upstream repository to make your changes on. Here +is how you keep updated:

+
git checkout master
+git pull upstream master
+git push origin master
+
+
+

This will make sure you are on the master branch of the repository, grab +the changes from upstream, then push them back to your repository.

+
+
+

Final words

+

This is a very basic introduction to Git and GitHub, but it should get you on your way to contributing to many open source projects. Not all projects have style requirements and some may have different processes for getting changes committed, so please refer to their documentation to see if you need to do anything differently. One topic we have not touched on is the git rebase command, which is a little too advanced for this wiki article.

+

Additional resources: Github Help, +Atlassian Git Tutorials

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/OpenZFS Exceptions.html b/Developer Resources/OpenZFS Exceptions.html new file mode 100644 index 000000000..b2794f5aa --- /dev/null +++ b/Developer Resources/OpenZFS Exceptions.html @@ -0,0 +1,1426 @@ + + + + + + + OpenZFS Exceptions — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

OpenZFS Exceptions

+

Commit exceptions are used to explicitly reference a given Linux commit. These exceptions are useful for a variety of reasons.

+

This page is used to generate the OpenZFS Tracking page.

+
+

Format:

+
    +
  • <openzfs issue>|-|<comment> - The OpenZFS commit isn’t applicable +to Linux, or the OpenZFS -> ZFS on Linux commit matching is unable to +associate the related commits due to lack of information (denoted by +a -).

  • +
  • <openzfs issue>|<commit>|<comment> - The fix was merged to Linux prior to there being an OpenZFS issue.

  • +
  • <openzfs issue>|!|<comment> - The commit is applicable but not +applied for the reason described in the comment.

  • +
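For illustration, exception entries in the three forms described above would look roughly like the following; the issue numbers, commits and comments are taken from the table below, and the exact file syntax simply follows the <field>|<field>|<field> pattern given in the format list:

10154|-|Not applicable to Linux
11276|da68988|
11453|!|check_disk() on illumos isn't available on ZoL / OpenZFS 2.0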
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

OpenZFS issue id

status/ZFS commit

comment

11453

!

check_disk() on illumos +isn’t available on ZoL / +OpenZFS 2.0

11276

da68988

11052

2efea7c

11051

3b61ca3

10853

8dc2197

10844

61c3391

10842

d10b2f1

10841

944a372

10809

ee36c70

10808

2ef0f8c

10701

0091d66

10601

cc99f27

10573

48d3eb4

10572

edc1e71

10566

ab7615d

10554

bec1067

10500

03916905

10449

379ca9c

10406

da2feb4

10154

    +
  • +
+

Not applicable to Linux

10067

    +
  • +
+

The only ZFS change was to +zfs remap, which was +removed on Linux.

9884

    +
  • +
+

Not applicable to Linux

9851

    +
  • +
+

Not applicable to Linux

9691

d9b4bf0

9683

    +
  • +
+

Not applicable to Linux due +to devids not being used

9680

    +
  • +
+

Applied and rolled back in +OpenZFS, additional changes +needed.

9672

29445fe3

9647

a448a25

9626

59e6e7ca

9635

    +
  • +
+

Not applicable to Linux

9623

22448f08

9621

305bc4b3

9539

5228cf01

9512

b4555c77

9487

48fbb9dd

9466

272b5d73

9440

f664f1e

Illumos ticket 9440 never +landed in openzfs/openzfs, +but in ZoL / OpenZFS 2.0

9433

0873bb63

9421

64c1dcef

9237

    +
  • +
+

Introduced by 8567 which +was never applied to Linux

9194

    +
  • +
+

Not applicable; the ‘-o ashift=value’ option is provided on Linux

9077

    +
  • +
+

Not applicable to Linux

9027

4a5d7f82

9018

3ec34e55

8984

!

WIP to support NFSv4 ACLs

8969

    +
  • +
+

Not applicable to Linux

8942

650258d7

8941

390d679a

8862

3b9edd7

8858

    +
  • +
+

Not applicable to Linux

8856

    +
  • +
+

Not applicable to Linux due +to Encryption (b525630)

8809

!

Adding libfakekernel needs +to be done by refactoring +existing code.

8727

b525630

8713

871e0732

8661

1ce23dca

8648

f763c3d1

8602

a032ac4

8601

d99a015

Equivalent fix included in +initial commit

8590

935e2c2

8569

    +
  • +
+

This change isn’t relevant +for Linux.

8567

    +
  • +
+

An alternate fix was +applied for Linux.

8552

935e2c2

8521

ee6370a7

8502

!

Apply when porting OpenZFS +7955

9485

1258bd7

8477

92e43c1

8454

    +
  • +
+

An alternate fix was +applied for Linux.

8423

50c957f

8408

5f1346c

8379

    +
  • +
+

This change isn’t relevant +for Linux.

8376

    +
  • +
+

This change isn’t relevant +for Linux.

8311

!

Need to assess +applicability to Linux.

8304

    +
  • +
+

This change isn’t relevant +for Linux.

8300

44f09cd

8265

    +
  • +
+

The large_dnode feature has +been implemented for Linux.

8168

78d95ea

8138

44f09cd

The spelling fix to the zfs +man page came in with the +mdoc conversion.

8108

    +
  • +
+

An equivalent Linux +specific fix was made.

8068

a1d477c24c

merged with zfs device +evacuation/removal

8064

    +
  • +
+

This change isn’t relevant +for Linux.

8022

e55ebf6

8021

7657def

8013

    +
  • +
+

The change is illumos +specific and not applicable +for Linux.

7982

    +
  • +
+

The change is illumos +specific and not applicable +for Linux.

7970

c30e58c

7956

cda0317

7955

!

Need to assess +applicability to Linux. If +porting, apply 8502.

7869

df7eecc

7816

    +
  • +
+

The change is illumos +specific and not applicable +for Linux.

7803

    +
  • +
+

This functionality is provided by update_vdev_config_dev_strs() on Linux.

7801

0eef1bd

Commit f25efb3 in +openzfs/master has a small +change for linting which is +being ported.

7779

    +
  • +
+

The change isn’t relevant, +zfs_ctldir.c was +rewritten for Linux.

7740

32d41fb

7739

582cc014

7730

e24e62a

7710

    +
  • +
+

None of the illumos build +system is used under Linux.

7602

44f09cd

7591

541a090

7586

c443487

7570

    +
  • +
+

Due to differences in the +block layer all discards +are handled asynchronously +under Linux. This +functionality could be +ported but it’s unclear to +what purpose.

7542

    +
  • +
+

The Linux libshare code +differs significantly from +the upstream OpenZFS code. +Since this change doesn’t +address a Linux specific +issue it doesn’t need to be +ported. The eventual plan +is to retire all of the +existing libshare code and +use the ZED to more +flexibly control filesystem +sharing.

7512

    +
  • +
+

None of the illumos build +system is used under Linux.

7497

    +
  • +
+

DTrace isn’t readily available under Linux.

7446

!

Need to assess +applicability to Linux.

7430

68cbd56

7402

690fe64

7345

058ac9b

7278

    +
  • +
+

Dynamic ARC tuning is +handled slightly +differently under Linux and +this case is covered by +arc_tuning_update()

7238

    +
  • +
+

zvol_swap test already +disabled in ZoL

7194

d7958b4

7164

b1b85c87

7041

33c0819

7016

d3c2ae1

6914

    +
  • +
+

Under Linux the +arc_meta_limit can be tuned +with the +zfs_arc_meta_limit_percent +module option.

6875

!

WIP to support NFSv4 ACLs

6843

f5f087e

6841

4254acb

6781

15313c5

6765

!

WIP to support NFSv4 ACLs

6764

!

WIP to support NFSv4 ACLs

6763

!

WIP to support NFSv4 ACLs

6762

!

WIP to support NFSv4 ACLs

6648

6bb24f4

6578

6bb24f4

6577

6bb24f4

6575

6bb24f4

6568

6bb24f4

6528

6bb24f4

6494

    +
  • +
+

The vdev_disk.c and +vdev_file.c files have +been reworked extensively +for Linux. The proposed +changes are not needed.

6468

6bb24f4

6465

6bb24f4

6434

472e7c6

6421

ca0bf58

6418

131cc95

6391

ee06391

6390

85802aa

6388

0de7c55

6386

485c581

6385

f3ad9cd

6369

6bb24f4

6368

2024041

6346

058ac9b

6334

1a04bab

6290

017da6

6250

    +
  • +
+

Linux handles crash dumps +in a fundamentally +different way than Illumos. +The proposed changes are +not needed.

6249

6bb24f4

6248

6bb24f4

6220

    +
  • +
+

The b_thawed debug code was +unused under Linux and +removed.

6209

    +
  • +
+

The Linux user space mutex implementation is based on pthread primitives.

6095

f866a4ea

6091

c11f100

6037

a8bd6dc

5984

480f626

5966

6bb24f4

5961

22872ff

5882

83e9986

5815

    +
  • +
+

This patch could be adapted if needed to use equivalent Linux functionality.

5770

c3275b5

5769

dd26aa5

5768

    +
  • +
+

The change isn’t relevant, +zfs_ctldir.c was +rewritten for Linux.

5766

4dd1893

5693

0f7d2a4

5692

!

This functionality should +be ported in such a way +that it can be integrated +with filefrag(8).

5684

6bb24f4

5503

0f676dc

Proposed patch in 5503 +never upstreamed, +alternative fix deployed +with OpenZFS 7072

5502

f0ed6c7

Proposed patch in 5502 +never upstreamed, +alternative fix deployed +in ZoL with commit f0ed6c7

5410

0bf8501

5409

b23d543

5379

    +
  • +
+

This particular issue never +impacted Linux due to the +need for a modified +zfs_putpage() +implementation.

5316

    +
  • +
+

The illumos idmap facility +isn’t available under +Linux. This patch could +still be applied to +minimize code delta or all +HAVE_IDMAP chunks could be +removed on Linux for better +readability.

5313

ec8501e

5312

!

This change should be made +but the ideal time to do it +is when the spl repository +is folded in to the zfs +repository (planned for +0.8). At this time we’ll +want to cleanup many of the +includes.

5219

ef56b07

5179

3f4058c

5154

9a49d3f

Illumos ticket 5154 never +landed in openzfs/openzfs, +alternative fix deployed +in ZoL with commit 9a49d3f

5149

    +
  • +
+

Equivalent Linux +functionality is provided +by the +zvol_max_discard_blocks +module option.

5148

    +
  • +
+

Discards are handled +differently under Linux, +there is no DKIOCFREE +ioctl.

5136

e8b96c6

4752

aa9af22

4745

411bf20

4698

4fcc437

4620

6bb24f4

4573

10b7549

4571

6e1b9d0

4570

b1d13a6

4391

78e2739

4465

cda0317

4263

6bb24f4

4242

    +
  • +
+

Neither vnodes nor their associated events exist under Linux.

4206

2820bc4

4188

2e7b765

4181

44f09cd

4161

    +
  • +
+

The Linux user space reader/writer implementation is based on pthread primitives.

4128

!

The +ldi_ev_register_callbacks() +interface doesn’t exist +under Linux. It may be +possible to receive similar +notifications via the scsi +error handlers or possibly +a different interface.

4072

    +
  • +
+

None of the illumos build +system is used under Linux.

3998

417104bd

Illumos ticket 3998 never +landed in openzfs/openzfs, +alternative fix deployed +in ZoL.

3947

7f9d994

3928

    +
  • +
+

Neither vnodes nor their associated events exist under Linux.

3871

d1d7e268

3747

090ff09

3705

    +
  • +
+

The Linux implementation +uses the lz4 workspace kmem +cache to resolve the stack +issue.

3606

c5b247f

3580

    +
  • +
+

Linux provides generic ioctl handlers to get/set block device information.

3543

8dca0a9

3512

67629d0

3507

43a696e

3444

6bb24f4

3371

44f09cd

3311

6bb24f4

3301

    +
  • +
+

The Linux implementation of +vdev_disk.c does not +include this comment.

3258

9d81146

3254

!

WIP to support NFSv4 ACLs

3246

cc92e9d

2933

    +
  • +
+

None of the illumos build +system is used under Linux.

2897

fb82700

2665

32a9872

2130

460a021

1974

    +
  • +
+

This change was entirely +replaced in the ARC +restructuring.

1898

    +
  • +
+

The zfs_putpage() function +was rewritten to properly +integrate with the Linux +VM.

1700

    +
  • +
+

Not applicable to Linux, +the discard implementation +is entirely different.

1618

ca67b33

1337

2402458

1126

e43b290

763

3cee226

742

!

WIP to support NFSv4 ACLs

701

460a021

348

    +
  • +
+

The Linux implementation of vdev_disk.c must handle this differently.

243

    +
  • +
+

Manual updates have been +made separately for Linux.

184

    +
  • +
+

The zfs_putpage() function +was rewritten to properly +integrate with the Linux +VM.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/OpenZFS Patches.html b/Developer Resources/OpenZFS Patches.html new file mode 100644 index 000000000..82083e956 --- /dev/null +++ b/Developer Resources/OpenZFS Patches.html @@ -0,0 +1,419 @@ + + + + + + + OpenZFS Patches — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

OpenZFS Patches

+

The ZFS on Linux project is an adaptation of the upstream OpenZFS repository designed to work in a Linux environment. This upstream repository acts as a location where new features, bug fixes, and performance improvements from all the OpenZFS platforms can be integrated. Each platform is responsible for tracking the OpenZFS repository and merging the relevant improvements back into their release.

+

For the ZFS on Linux project this tracking is managed through an +OpenZFS tracking +page. The page is updated regularly and shows a list of OpenZFS commits +and their status in regard to the ZFS on Linux master branch.

+

This page describes the process of applying outstanding OpenZFS commits +to ZFS on Linux and submitting those changes for inclusion. As a +developer this is a great way to familiarize yourself with ZFS on Linux +and to begin quickly making a valuable contribution to the project. The +following guide assumes you have a github +account, +are familiar with git, and are used to developing in a Linux +environment.

+
+

Porting OpenZFS changes to ZFS on Linux

+
+

Setup the Environment

+

Clone the source. Start by making a local clone of the +spl and +zfs repositories.

+
$ git clone -o zfsonlinux https://github.com/zfsonlinux/spl.git
+$ git clone -o zfsonlinux https://github.com/zfsonlinux/zfs.git
+
+
+

Add remote repositories. Using the GitHub web interface fork the zfs repository into your personal GitHub account. Add your new zfs fork and the openzfs repository as remotes and then fetch both repositories. The OpenZFS repository is large and the initial fetch may take some time over a slow connection.

+
$ cd zfs
+$ git remote add <your-github-account> git@github.com:<your-github-account>/zfs.git
+$ git remote add openzfs https://github.com/openzfs/openzfs.git
+$ git fetch --all
+
+
+

Build the source. Compile the spl and zfs master branches. These branches are always kept stable and this is a useful verification that you have a full build environment installed and all the required dependencies are available. This may also speed up the compile time later for small patches where incremental builds are an option.

+
$ cd ../spl
+$ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc)
+$
+$ cd ../zfs
+$ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc)
+
+
+
+
+

Pick a patch

+

Consult the OpenZFS +tracking page and +select a patch which has not yet been applied. For your first patch you +will want to select a small patch to familiarize yourself with the +process.

+
+
+

Porting a Patch

+

There are 2 methods:

+ +

Please read about manual merge first to learn the +whole process.

+
+

Cherry-pick

+

You can cherry-pick on your own, but we have made a special script, which tries to cherry-pick the patch automatically and generates the description.

+
    +
  1. Prepare environment:

  2. +
+

Mandatory git settings (add to ~/.gitconfig):

+
[merge]
+    renameLimit = 999999
+[user]
+    email = mail@yourmail.com
+    name = Your Name
+
+
+

Download the script:

+
wget https://raw.githubusercontent.com/zfsonlinux/zfs-buildbot/master/scripts/openzfs-merge.sh
+
+
+
    +
  1. Run:

  2. +
+
./openzfs-merge.sh -d path_to_zfs_folder -c openzfs_commit_hash
+
+
+

This command will fetch all repositories, create a new branch +autoport-ozXXXX (XXXX - OpenZFS issue number), try to cherry-pick, +compile and check cstyle on success.

+

If it succeeds without any merge conflicts, go to the autoport-ozXXXX branch; it will have a ready-to-pull commit. Congratulations, you can go to step 7!

+

Otherwise you should go to step 2.

+
    +
  1. Resolve all merge conflicts manually. Easy method - install +Meld or any other diff tool and run +git mergetool.

  2. +
  3. Check all compile and cstyle errors (See Testing a +patch).

  4. +
  5. Commit your changes with any description.

  6. +
  7. Update commit description (last commit will be changed):

  8. +
+
./openzfs-merge.sh -d path_to_zfs_folder -g openzfs_commit_hash
+
+
+
    +
  1. Add any porting notes (if you have modified something): +git commit --amend

  2. +
  3. Push your commit to github: +git push <your-github-account> autoport-ozXXXX

  4. +
  5. Create a pull request to ZoL master branch.

  6. +
  7. Go to Testing a patch section.

  8. +
+
+
+

Manual merge

+

Create a new branch. It is important to create a new branch for +every commit you port to ZFS on Linux. This will allow you to easily +submit your work as a GitHub pull request and it makes it possible to +work on multiple OpenZFS changes concurrently. All development branches +need to be based off of the ZFS master branch and it’s helpful to name +the branches after the issue number you’re working on.

+
$ git checkout -b openzfs-<issue-nr> master
+
+
+

Generate a patch. One of the first things you'll notice about the ZFS on Linux repository is that it is laid out differently than the OpenZFS repository. Organizationally it is much flatter; this is possible because it only contains the code for OpenZFS, not an entire OS. That means that in order to apply a patch from OpenZFS the path names in the patch must be changed. A script called zfs2zol-patch.sed has been provided to perform this translation. Use the git format-patch command and this script to generate a patch.

+
$ git format-patch --stdout <commit-hash>^..<commit-hash> | \
+    ./scripts/zfs2zol-patch.sed >openzfs-<issue-nr>.diff
+
+
+

Apply the patch. In many cases the generated patch will apply cleanly to the repository. However, it's important to keep in mind the zfs2zol-patch.sed script only translates the paths. There are often additional reasons why a patch might not apply. In some cases hunks of the patch may not be applicable to Linux and should be dropped. In other cases a patch may depend on other changes which must be applied first. The changes may also conflict with Linux specific modifications. In all of these cases the patch will need to be manually modified to apply cleanly while preserving its original intent.

+
$ git am ./openzfs-<commit-nr>.diff
+
+
+

Update the commit message. By using git format-patch to generate the patch and then git am to apply it, the original comment and authorship will be preserved. However, due to the formatting of the OpenZFS commit you will likely find that the entire commit comment has been squashed into the subject line. Use git commit --amend to clean up the comment and be careful to follow these standard guidelines.

+

The summary line of an OpenZFS commit is often very long and you should truncate it to 50 characters. This is useful because it preserves the correct formatting of the git log --pretty=oneline command. Make sure to leave a blank line between the summary and body of the commit. Then include the full OpenZFS commit message, wrapping any lines which exceed 72 characters. Finally, add a Ported-by tag with your contact information and both an OpenZFS-issue and an OpenZFS-commit tag with appropriate links. You'll want to verify your commit contains all of the following information:

+
    +
  • The subject line from the original OpenZFS patch in the form: +“OpenZFS <issue-nr> - short description”.

  • +
  • The original patch authorship should be preserved.

  • +
  • The OpenZFS commit message.

  • +
  • The following tags:

    +
      +
    • Authored by: Original patch author

    • +
    • Reviewed by: All OpenZFS reviewers from the original patch.

    • +
    • Approved by: All OpenZFS reviewers from the original patch.

    • +
    • Ported-by: Your name and email address.

    • +
    • OpenZFS-issue: https://www.illumos.org/issues/issue

    • +
    • OpenZFS-commit: https://github.com/openzfs/openzfs/commit/hash

    • +
    +
  • +
  • Porting Notes: An optional section describing any changes +required when porting.

  • +
+

For example, OpenZFS issue 6873 was applied to +Linux from this +upstream OpenZFS +commit.

+
OpenZFS 6873 - zfs_destroy_snaps_nvl leaks errlist
+
+Authored by: Chris Williamson <chris.williamson@delphix.com>
+Reviewed by: Matthew Ahrens <mahrens@delphix.com>
+Reviewed by: Paul Dagnelie <pcd@delphix.com>
+Ported-by: Denys Rtveliashvili <denys@rtveliashvili.name>
+
+lzc_destroy_snaps() returns an nvlist in errlist.
+zfs_destroy_snaps_nvl() should nvlist_free() it before returning.
+
+OpenZFS-issue: https://www.illumos.org/issues/6873
+OpenZFS-commit: https://github.com/openzfs/openzfs/commit/ee06391
+
+
+
+
+
+

Testing a Patch

+

Build the source. Verify the patched source compiles without errors +and all warnings are resolved.

+
$ make -s -j$(nproc)
+
+
+

Run the style checker. Verify the patched source passes the style +checker, the command should return without printing any output.

+
$ make cstyle
+
+
+

Open a Pull Request. When your patch builds cleanly and passes the style checks, open a new pull request. The pull request will be queued for automated testing. As part of the testing the change is built for a wide range of Linux distributions and a battery of functional and stress tests are run to detect regressions.

+
$ git push <your-github-account> openzfs-<issue-nr>
+
+
+

Fix any issues. Testing takes approximately 2 hours to fully complete and the results are posted in the GitHub pull request. All the tests are expected to pass and you should investigate and resolve any test failures. The test scripts are all available and designed to run locally in order to reproduce an issue. Once you've resolved the issue force update the pull request to trigger a new round of testing. Iterate until all the tests are passing.

+
# Fix issue, amend commit, force update branch.
+$ git commit --amend
+$ git push --force <your-github-account> openzfs-<issue-nr>
+
+
+
+
+

Merging the Patch

+

Review. Lastly one of the ZFS on Linux maintainers will make a final +review of the patch and may request additional changes. Once the +maintainer is happy with the final version of the patch they will add +their signed-off-by, merge it to the master branch, mark it complete on +the tracking page, and thank you for your contribution to the project!

+
+
+
+

Porting ZFS on Linux changes to OpenZFS

+

Often an issue will be first fixed in ZFS on Linux or a new feature +developed. Changes which are not Linux specific should be submitted +upstream to the OpenZFS GitHub repository for review. The process for +this is described in the OpenZFS +README.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/index.html b/Developer Resources/index.html new file mode 100644 index 000000000..b70ca6f7d --- /dev/null +++ b/Developer Resources/index.html @@ -0,0 +1,183 @@ + + + + + + + Developer Resources — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Alpine Linux/Root on ZFS.html b/Getting Started/Alpine Linux/Root on ZFS.html new file mode 100644 index 000000000..b61bec547 --- /dev/null +++ b/Getting Started/Alpine Linux/Root on ZFS.html @@ -0,0 +1,537 @@ + + + + + + + Alpine Linux Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Alpine Linux Root on ZFS

+

ZFSBootMenu

+

This tutorial is based on the GRUB bootloader. Due to its independent implementation of a read-only ZFS driver, GRUB only supports a subset of ZFS features on the boot pool. [In general, bootloaders treat disks as read-only to minimize the risk of damaging on-disk data.]

+

ZFSBootMenu is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details.

+

Customization

+

Unless stated otherwise, it is not recommended to customize system +configuration before reboot.

+

Only use well-tested pool features

+

You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, this comment.

+
+
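As a sketch of one way to do this, the zpool compatibility property can pin a pool to a known feature set. The compatibility file name below is only an example shipped with recent OpenZFS releases (check /usr/share/zfs/compatibility.d on your system for what is available); this guide itself only sets compatibility=legacy on the boot pool.

# example: limit an existing pool to features understood by OpenZFS 2.0 on Linux
zpool set compatibility=openzfs-2.0-linux rpool
zpool get compatibility rpool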

Preparation

+
    +
  1. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled.

  2. +
  3. Download latest extended variant of Alpine Linux +live image, +verify checksum +and boot from it.

    +
    gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc
    +
    +dd if=input-file of=output-file bs=1M
    +
    +
    +
  4. +
  5. Login as root user. There is no password.

  6. +
  7. Configure Internet

    +
    setup-interfaces -r
    +# You must use "-r" option to start networking services properly
    +# example:
    +network interface: wlan0
    +WiFi name:         <ssid>
    +ip address:        dhcp
    +<enter done to finish network config>
    +manual netconfig:  n
    +
    +
    +
  8. +
  9. If you are using wireless network and it is not shown, see Alpine +Linux wiki for +further details. wpa_supplicant can be installed with apk +add wpa_supplicant without internet connection.

  10. +
  11. Configure SSH server

    +
    setup-sshd
    +# example:
    +ssh server:        openssh
    +allow root:        "prohibit-password" or "yes"
    +ssh key:           "none" or "<public key>"
    +
    +
    +

    Configurations set here will be copied verbatim to the installed system.

    +
  12. +
  13. Set root password or /root/.ssh/authorized_keys.

    +

    Choose a strong root password, as it will be copied to the +installed system. However, authorized_keys is not copied.

    +
  14. +
  15. Connect from another computer

    +
    ssh root@192.168.1.91
    +
    +
    +
  16. +
  17. Configure NTP client for time synchronization

    +
    setup-ntp busybox
    +
    +
    +
  18. +
  19. Set up apk-repo. A list of available mirrors is shown. +Press space bar to continue

    +
    setup-apkrepos
    +
    +
    +
  20. +
  21. Throughout this guide, we use predictable disk names generated by +udev

    +
    apk update
    +apk add eudev
    +setup-devd udev
    +
    +
    +

    It can be removed after reboot with setup-devd mdev && apk del eudev.

    +
  22. +
  23. Target disk

    +

    List available disks with

    +
    find /dev/disk/by-id/
    +
    +
    +

    If virtio is used as disk bus, power off the VM and set serial numbers for disk. +For QEMU, use -drive format=raw,file=disk2.img,serial=AaBb. +For libvirt, edit domain XML. See this page for examples.

    +

    Declare disk array

    +
    DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
    +
    +
    +

    For single disk installation, use

    +
    DISK='/dev/disk/by-id/disk1'
    +
    +
    +
  24. +
  25. Set a mount point

    +
    MNT=$(mktemp -d)
    +
    +
    +
  26. +
  27. Set partition size:

    +

    Set swap size in GB, set to 1 if you don’t want swap to +take up too much space

    +
    SWAPSIZE=4
    +
    +
    +

    Set how much space should be left at the end of the disk, minimum 1GB

    +
    RESERVE=1
    +
    +
    +
  28. +
  29. Install ZFS support from live media:

    +
    apk add zfs
    +
    +
    +
  30. +
  31. Install bootloader programs and partition tool

    +
    apk add grub-bios grub-efi parted e2fsprogs cryptsetup util-linux
    +
    +
    +
  32. +
+
+
+

System Installation

+
    +
  1. Partition the disks.

    +

    Note: you must clear all existing partition tables and data structures from target disks.

    +

    For flash-based storage, this can be done by the blkdiscard command below:

    +
    partition_disk () {
    + local disk="${1}"
    + blkdiscard -f "${disk}" || true
    +
    + parted --script --align=optimal  "${disk}" -- \
    + mklabel gpt \
    + mkpart EFI 2MiB 1GiB \
    + mkpart bpool 1GiB 5GiB \
    + mkpart rpool 5GiB -$((SWAPSIZE + RESERVE))GiB \
    + mkpart swap  -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \
    + mkpart BIOS 1MiB 2MiB \
    + set 1 esp on \
    + set 5 bios_grub on \
    + set 5 legacy_boot on
    +
    + partprobe "${disk}"
    +}
    +
    +for i in ${DISK}; do
    +   partition_disk "${i}"
    +done
    +
    +
    +
  2. +
  3. Setup encrypted swap. This is useful if the available memory is +small:

    +
    for i in ${DISK}; do
    +   cryptsetup open --type plain --key-file /dev/random "${i}"-part4 "${i##*/}"-part4
    +   mkswap /dev/mapper/"${i##*/}"-part4
    +   swapon /dev/mapper/"${i##*/}"-part4
    +done
    +
    +
    +
  4. +
  5. Load ZFS kernel module

    +
    modprobe zfs
    +
    +
    +
  6. +
  7. Create boot pool

    +
    # shellcheck disable=SC2046
    +zpool create -o compatibility=legacy  \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -O acltype=posixacl \
    +    -O canmount=off \
    +    -O devices=off \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O xattr=sa \
    +    -O mountpoint=/boot \
    +    -R "${MNT}" \
    +    bpool \
    +           mirror \
    +    $(for i in ${DISK}; do
    +       printf '%s ' "${i}-part2";
    +      done)
    +
    +
    +

    If not using a multi-disk setup, remove mirror.

    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features.

    +
  8. +
  9. Create root pool

    +
    # shellcheck disable=SC2046
    +zpool create \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -R "${MNT}" \
    +    -O acltype=posixacl \
    +    -O canmount=off \
    +    -O compression=zstd \
    +    -O dnodesize=auto \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O xattr=sa \
    +    -O mountpoint=/ \
    +    rpool \
    +    mirror \
    +   $(for i in ${DISK}; do
    +      printf '%s ' "${i}-part3";
    +     done)
    +
    +
    +

    If not using a multi-disk setup, remove mirror.

    +
  10. +
  11. Create root system container:

    +
      +
    • Unencrypted

      +
      zfs create \
      + -o canmount=off \
      + -o mountpoint=none \
      +rpool/alpinelinux
      +
      +
      +
    • +
    • Encrypted:

      +

Avoid ZFS send/recv when using native encryption; see a ZFS developer's comment on this issue and this spreadsheet of bugs. A LUKS-based guide has yet to be written. Once compromised, changing the password will not keep your data safe. See zfs-change-key(8) for more info

      +
      zfs create \
      +  -o canmount=off \
      +         -o mountpoint=none \
      +         -o encryption=on \
      +         -o keylocation=prompt \
      +         -o keyformat=passphrase \
      +rpool/alpinelinux
      +
      +
      +
    • +
    +

    You can automate this step (insecure) with: echo POOLPASS | zfs create ....

    +

    Create system datasets, +manage mountpoints with mountpoint=legacy

    +
    zfs create -o canmount=noauto -o mountpoint=/  rpool/alpinelinux/root
    +zfs mount rpool/alpinelinux/root
    +zfs create -o mountpoint=legacy rpool/alpinelinux/home
    +mkdir "${MNT}"/home
    +mount -t zfs rpool/alpinelinux/home "${MNT}"/home
    +zfs create -o mountpoint=legacy  rpool/alpinelinux/var
    +zfs create -o mountpoint=legacy rpool/alpinelinux/var/lib
    +zfs create -o mountpoint=legacy rpool/alpinelinux/var/log
    +zfs create -o mountpoint=none bpool/alpinelinux
    +zfs create -o mountpoint=legacy bpool/alpinelinux/root
    +mkdir "${MNT}"/boot
    +mount -t zfs bpool/alpinelinux/root "${MNT}"/boot
    +mkdir -p "${MNT}"/var/log
    +mkdir -p "${MNT}"/var/lib
    +mount -t zfs rpool/alpinelinux/var/lib "${MNT}"/var/lib
    +mount -t zfs rpool/alpinelinux/var/log "${MNT}"/var/log
    +
    +
    +
  12. +
  13. Format and mount ESP

    +
    for i in ${DISK}; do
    + mkfs.vfat -n EFI "${i}"-part1
    + mkdir -p "${MNT}"/boot/efis/"${i##*/}"-part1
    + mount -t vfat -o iocharset=iso8859-1 "${i}"-part1 "${MNT}"/boot/efis/"${i##*/}"-part1
    +done
    +
    +mkdir -p "${MNT}"/boot/efi
    +mount -t vfat -o iocharset=iso8859-1 "$(echo "${DISK}" | sed "s|^ *||"  | cut -f1 -d' '|| true)"-part1 "${MNT}"/boot/efi
    +
    +
    +
  14. +
+
+
+

System Configuration

+
    +
  1. Workaround for GRUB to recognize predictable disk names:

    +
    export ZPOOL_VDEV_NAME_PATH=YES
    +
    +
    +
  2. +
  3. Install system to disk

    +
    BOOTLOADER=grub setup-disk -k lts -v "${MNT}"
    +
    +
    +

GRUB installation will fail at this point; GRUB will be reinstalled later. The error message about the ZFS kernel module can be ignored.

    +
  4. +
  5. Allow EFI system partition to fail at boot:

    +
    sed -i "s|vfat.*rw|vfat rw,nofail|" "${MNT}"/etc/fstab
    +
    +
    +
  6. +
  7. Chroot

    +
    for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done
    +chroot "${MNT}" /usr/bin/env DISK="${DISK}" sh
    +
    +
    +
  8. +
  9. Apply GRUB workaround

    +
    echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile.d/zpool_vdev_name_path.sh
    +# shellcheck disable=SC1091
    +. /etc/profile.d/zpool_vdev_name_path.sh
    +
    +# GRUB fails to detect rpool name, hard code as "rpool"
    +sed -i "s|rpool=.*|rpool=rpool|"  /etc/grub.d/10_linux
    +
    +# BusyBox stat does not recognize zfs, replace fs detection with ZFS
    +sed -i 's|stat -f -c %T /|echo zfs|' /usr/sbin/grub-mkconfig
    +
    +# grub-probe fails to identify fs mounted at /boot
    +BOOT_DEVICE=$(zpool status -P bpool | grep -- -part2 | head -n1 | sed  "s|.*/dev*|/dev|" | sed "s|part2.*|part2|")
    +sed -i "s|GRUB_DEVICE_BOOT=.*|GRUB_DEVICE_BOOT=${BOOT_DEVICE}|"  /usr/sbin/grub-mkconfig
    +
    +
    +

    The sed workaround for grub-mkconfig needs to be applied +for every GRUB update, as the update will overwrite the changes.

    +
  10. +
  11. Install GRUB:

    +
    mkdir -p /boot/efi/alpine/grub-bootdir/i386-pc/
    +mkdir -p /boot/efi/alpine/grub-bootdir/x86_64-efi/
    +for i in ${DISK}; do
    + grub-install --target=i386-pc --boot-directory \
    +     /boot/efi/alpine/grub-bootdir/i386-pc/  "${i}"
    +done
    +grub-install --target x86_64-efi --boot-directory \
    +  /boot/efi/alpine/grub-bootdir/x86_64-efi/ --efi-directory \
    +  /boot/efi --bootloader-id alpine --removable
    +if test -d /sys/firmware/efi/efivars/; then
    +  apk add efibootmgr
    +  grub-install --target x86_64-efi --boot-directory \
    +    /boot/efi/alpine/grub-bootdir/x86_64-efi/ --efi-directory \
    +    /boot/efi --bootloader-id alpine
    +fi
    +
    +
    +
  12. +
  13. Generate GRUB menu:

    +
    mkdir -p /boot/grub
    +grub-mkconfig -o /boot/grub/grub.cfg
    +cp /boot/grub/grub.cfg \
    + /boot/efi/alpine/grub-bootdir/x86_64-efi/grub/grub.cfg
    +cp /boot/grub/grub.cfg \
    + /boot/efi/alpine/grub-bootdir/i386-pc/grub/grub.cfg
    +
    +
    +
  14. +
  15. For both legacy and EFI booting: mirror ESP content:

    +
    espdir=$(mktemp -d)
    +find /boot/efi/ -maxdepth 1 -mindepth 1 -type d -print0 \
    +| xargs -t -0I '{}' cp -r '{}' "${espdir}"
    +find "${espdir}" -maxdepth 1 -mindepth 1 -type d -print0 \
    +| xargs -t -0I '{}' sh -vxc "find /boot/efis/ -maxdepth 1 -mindepth 1 -type d -print0 | xargs -t -0I '[]' cp -r '{}' '[]'"
    +
    +
    +
  16. +
  17. Exit chroot

    +
    exit
    +
    +
    +
  18. +
  19. Unmount filesystems and create initial system snapshot +You can later create a boot environment from this snapshot. +See Root on ZFS maintenance page.

    +
    umount -Rl "${MNT}"
    +zfs snapshot -r rpool@initial-installation
    +zfs snapshot -r bpool@initial-installation
    +zpool export -a
    +
    +
    +
  20. +
  21. Reboot

    +
    reboot
    +
    +
    +
  22. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Alpine Linux/index.html b/Getting Started/Alpine Linux/index.html new file mode 100644 index 000000000..c5b1f8d34 --- /dev/null +++ b/Getting Started/Alpine Linux/index.html @@ -0,0 +1,179 @@ + + + + + + + Alpine Linux — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Alpine Linux

+
+

Contents

+ +
+
+

Installation

+

Note: this is for installing ZFS on an existing Alpine +installation. To use ZFS as root file system, +see below.

+
    +
  1. Install ZFS package:

    +
    apk add zfs zfs-lts
    +
    +
    +
  2. +
  3. Load kernel module:

    +
    modprobe zfs
    +
    +
    +
  4. +
+
+
+

Root on ZFS

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Arch Linux/Root on ZFS.html b/Getting Started/Arch Linux/Root on ZFS.html new file mode 100644 index 000000000..f20319086 --- /dev/null +++ b/Getting Started/Arch Linux/Root on ZFS.html @@ -0,0 +1,667 @@ + + + + + + + Arch Linux Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Arch Linux Root on ZFS

+

ZFSBootMenu

+

This tutorial is based on the GRUB bootloader. Due to its independent implementation of a read-only ZFS driver, GRUB only supports a subset of ZFS features on the boot pool. [In general, bootloaders treat disks as read-only to minimize the risk of damaging on-disk data.]

+

ZFSBootMenu is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details.

+

Customization

+

Unless stated otherwise, it is not recommended to customize system +configuration before reboot.

+

Only use well-tested pool features

+

You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, this comment.

+
+

Preparation

+
    +
  1. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled.

  2. +
  3. Because the kernel of latest Live CD might be incompatible with +ZFS, we will use Alpine Linux Extended, which ships with ZFS by +default.

    +

    Download latest extended variant of Alpine Linux +live image, +verify checksum +and boot from it.

    +
    gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc
    +
    +dd if=input-file of=output-file bs=1M
    +
    +
    +
  4. +
  5. Login as root user. There is no password.

  6. +
  7. Configure Internet

    +
    setup-interfaces -r
    +# You must use "-r" option to start networking services properly
    +# example:
    +network interface: wlan0
    +WiFi name:         <ssid>
    +ip address:        dhcp
    +<enter done to finish network config>
    +manual netconfig:  n
    +
    +
    +
  8. +
  9. If you are using wireless network and it is not shown, see Alpine +Linux wiki for +further details. wpa_supplicant can be installed with apk +add wpa_supplicant without internet connection.

  10. +
  11. Configure SSH server

    +
    setup-sshd
    +# example:
    +ssh server:        openssh
    +allow root:        "prohibit-password" or "yes"
    +ssh key:           "none" or "<public key>"
    +
    +
    +
  12. +
  13. Set root password or /root/.ssh/authorized_keys.

  14. +
  15. Connect from another computer

    +
    ssh root@192.168.1.91
    +
    +
    +
  16. +
  17. Configure NTP client for time synchronization

    +
    setup-ntp busybox
    +
    +
    +
  18. +
  19. Set up apk-repo. A list of available mirrors is shown. +Press space bar to continue

    +
    setup-apkrepos
    +
    +
    +
  20. +
  21. Throughout this guide, we use predictable disk names generated by +udev

    +
    apk update
    +apk add eudev
    +setup-devd udev
    +
    +
    +
  22. +
  23. Target disk

    +

    List available disks with

    +
    find /dev/disk/by-id/
    +
    +
    +

    If virtio is used as disk bus, power off the VM and set serial numbers for disk. +For QEMU, use -drive format=raw,file=disk2.img,serial=AaBb. +For libvirt, edit domain XML. See this page for examples.

    +

    Declare disk array

    +
    DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
    +
    +
    +

    For single disk installation, use

    +
    DISK='/dev/disk/by-id/disk1'
    +
    +
    +
  24. +
  25. Set a mount point

    +
    MNT=$(mktemp -d)
    +
    +
    +
  26. +
  27. Set partition size:

    +

    Set swap size in GB, set to 1 if you don’t want swap to +take up too much space

    +
    SWAPSIZE=4
    +
    +
    +

    Set how much space should be left at the end of the disk, minimum 1GB

    +
    RESERVE=1
    +
    +
    +
  28. +
  29. Install ZFS support from live media:

    +
    apk add zfs
    +
    +
    +
  30. +
  31. Install partition tool

    +
    apk add parted e2fsprogs cryptsetup util-linux
    +
    +
    +
  32. +
+
+
+

System Installation

+
    +
  1. Partition the disks.

    +

    Note: you must clear all existing partition tables and data structures from target disks.

    +

    For flash-based storage, this can be done by the blkdiscard command below:

    +
    partition_disk () {
    + local disk="${1}"
    + blkdiscard -f "${disk}" || true
    +
    + parted --script --align=optimal  "${disk}" -- \
    + mklabel gpt \
    + mkpart EFI 2MiB 1GiB \
    + mkpart bpool 1GiB 5GiB \
    + mkpart rpool 5GiB -$((SWAPSIZE + RESERVE))GiB \
    + mkpart swap  -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \
    + mkpart BIOS 1MiB 2MiB \
    + set 1 esp on \
    + set 5 bios_grub on \
    + set 5 legacy_boot on
    +
    + partprobe "${disk}"
    +}
    +
    +for i in ${DISK}; do
    +   partition_disk "${i}"
    +done
    +
    +
    +
  2. +
  3. Setup encrypted swap. This is useful if the available memory is +small:

    +
    for i in ${DISK}; do
    +   cryptsetup open --type plain --key-file /dev/random "${i}"-part4 "${i##*/}"-part4
    +   mkswap /dev/mapper/"${i##*/}"-part4
    +   swapon /dev/mapper/"${i##*/}"-part4
    +done
    +
    +
    +
  4. +
  5. Load ZFS kernel module

    +
    modprobe zfs
    +
    +
    +
  6. +
  7. Create boot pool

    +
    # shellcheck disable=SC2046
    +zpool create -o compatibility=legacy  \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -O acltype=posixacl \
    +    -O canmount=off \
    +    -O devices=off \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O xattr=sa \
    +    -O mountpoint=/boot \
    +    -R "${MNT}" \
    +    bpool \
    +           mirror \
    +    $(for i in ${DISK}; do
    +       printf '%s ' "${i}-part2";
    +      done)
    +
    +
    +

    If not using a multi-disk setup, remove mirror.

    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features.

    +
  8. +
  9. Create root pool

    +
    # shellcheck disable=SC2046
    +zpool create \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -R "${MNT}" \
    +    -O acltype=posixacl \
    +    -O canmount=off \
    +    -O compression=zstd \
    +    -O dnodesize=auto \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O xattr=sa \
    +    -O mountpoint=/ \
    +    rpool \
    +    mirror \
    +   $(for i in ${DISK}; do
    +      printf '%s ' "${i}-part3";
    +     done)
    +
    +
    +

    If not using a multi-disk setup, remove mirror.

    +
  10. +
  11. Create root system container:

    +
      +
    • Unencrypted

      +
      zfs create \
      + -o canmount=off \
      + -o mountpoint=none \
      +rpool/archlinux
      +
      +
      +
    • +
    • Encrypted:

      +

Avoid ZFS send/recv when using native encryption; see a ZFS developer's comment on this issue and this spreadsheet of bugs. A LUKS-based guide has yet to be written. Once compromised, changing the password will not keep your data safe. See zfs-change-key(8) for more info

      +
      zfs create \
      +  -o canmount=off \
      +         -o mountpoint=none \
      +         -o encryption=on \
      +         -o keylocation=prompt \
      +         -o keyformat=passphrase \
      +rpool/archlinux
      +
      +
      +
    • +
    +

    You can automate this step (insecure) with: echo POOLPASS | zfs create ....

    +

    Create system datasets, +manage mountpoints with mountpoint=legacy

    +
    zfs create -o canmount=noauto -o mountpoint=/  rpool/archlinux/root
    +zfs mount rpool/archlinux/root
    +zfs create -o mountpoint=legacy rpool/archlinux/home
    +mkdir "${MNT}"/home
    +mount -t zfs rpool/archlinux/home "${MNT}"/home
    +zfs create -o mountpoint=legacy  rpool/archlinux/var
    +zfs create -o mountpoint=legacy rpool/archlinux/var/lib
    +zfs create -o mountpoint=legacy rpool/archlinux/var/log
    +zfs create -o mountpoint=none bpool/archlinux
    +zfs create -o mountpoint=legacy bpool/archlinux/root
    +mkdir "${MNT}"/boot
    +mount -t zfs bpool/archlinux/root "${MNT}"/boot
    +mkdir -p "${MNT}"/var/log
    +mkdir -p "${MNT}"/var/lib
    +mount -t zfs rpool/archlinux/var/lib "${MNT}"/var/lib
    +mount -t zfs rpool/archlinux/var/log "${MNT}"/var/log
    +
    +
    +
  12. +
  13. Format and mount ESP

    +
    for i in ${DISK}; do
    + mkfs.vfat -n EFI "${i}"-part1
    + mkdir -p "${MNT}"/boot/efis/"${i##*/}"-part1
    + mount -t vfat -o iocharset=iso8859-1 "${i}"-part1 "${MNT}"/boot/efis/"${i##*/}"-part1
    +done
    +
    +mkdir -p "${MNT}"/boot/efi
    +mount -t vfat -o iocharset=iso8859-1 "$(echo "${DISK}" | sed "s|^ *||"  | cut -f1 -d' '|| true)"-part1 "${MNT}"/boot/efi
    +
    +
    +
  14. +
+
+
+

System Configuration

+
    +
  1. Download and extract minimal Arch Linux root filesystem:

    +
    apk add curl
    +
    +curl --fail-early --fail -L \
    +https://america.archive.pkgbuild.com/iso/2023.09.01/archlinux-bootstrap-x86_64.tar.gz \
    +-o rootfs.tar.gz
    +curl --fail-early --fail -L \
    +https://america.archive.pkgbuild.com/iso/2023.09.01/archlinux-bootstrap-x86_64.tar.gz.sig \
    +-o rootfs.tar.gz.sig
    +
    +apk add gnupg
    +gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify rootfs.tar.gz.sig
    +
    +ln -s "${MNT}" "${MNT}"/root.x86_64
    +tar x  -C "${MNT}" -af rootfs.tar.gz root.x86_64
    +
    +
    +
  2. +
  3. Enable community repo

    +
    sed -i '/edge/d' /etc/apk/repositories
    +sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories
    +
    +
    +
  4. +
  5. Generate fstab:

    +
    apk add arch-install-scripts
    +genfstab -t PARTUUID "${MNT}" \
    +| grep -v swap \
    +| sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \
    +> "${MNT}"/etc/fstab
    +
    +
    +
  6. +
  7. Chroot

    +
    cp /etc/resolv.conf "${MNT}"/etc/resolv.conf
    +for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done
    +chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash
    +
    +
    +
  8. +
  9. Add archzfs repo to pacman config

    +
    pacman-key --init
    +pacman-key --refresh-keys
    +pacman-key --populate
    +
    +curl --fail-early --fail -L https://archzfs.com/archzfs.gpg \
    +|  pacman-key -a - --gpgdir /etc/pacman.d/gnupg
    +
    +pacman-key \
    +--lsign-key \
    +--gpgdir /etc/pacman.d/gnupg \
    +DDF7DB817396A49B2A2723F7403BD972F75D9D76
    +
    +tee -a /etc/pacman.d/mirrorlist-archzfs <<- 'EOF'
    +## See https://github.com/archzfs/archzfs/wiki
    +## France
    +#,Server = https://archzfs.com/$repo/$arch
    +
    +## Germany
    +#,Server = https://mirror.sum7.eu/archlinux/archzfs/$repo/$arch
    +#,Server = https://mirror.biocrafting.net/archlinux/archzfs/$repo/$arch
    +
    +## India
    +#,Server = https://mirror.in.themindsmaze.com/archzfs/$repo/$arch
    +
    +## United States
    +#,Server = https://zxcvfdsa.com/archzfs/$repo/$arch
    +EOF
    +
    +tee -a /etc/pacman.conf <<- 'EOF'
    +
    +#[archzfs-testing]
    +#Include = /etc/pacman.d/mirrorlist-archzfs
    +
    +#,[archzfs]
    +#,Include = /etc/pacman.d/mirrorlist-archzfs
    +EOF
    +
    +# this #, prefix is a workaround for ci/cd tests
    +# remove them
    +sed -i 's|#,||' /etc/pacman.d/mirrorlist-archzfs
    +sed -i 's|#,||' /etc/pacman.conf
    +sed -i 's|^#||' /etc/pacman.d/mirrorlist
    +
    +
    +
  10. +
  11. Install base packages:

    +
    pacman -Sy
    +pacman -S --noconfirm mg mandoc grub efibootmgr mkinitcpio
    +
    +kernel_compatible_with_zfs="$(pacman -Si zfs-linux \
    +| grep 'Depends On' \
    +| sed "s|.*linux=||" \
    +| awk '{ print $1 }')"
    +pacman -U --noconfirm https://america.archive.pkgbuild.com/packages/l/linux/linux-"${kernel_compatible_with_zfs}"-x86_64.pkg.tar.zst
    +
    +
    +
  12. +
  13. Install zfs packages:

    +
    pacman -S --noconfirm zfs-linux zfs-utils
    +
    +
    +
  14. +
  15. Configure mkinitcpio:

    +
    sed -i 's|filesystems|zfs filesystems|' /etc/mkinitcpio.conf
    +mkinitcpio -P
    +
    +
    +
  16. +
  17. For physical machine, install firmware

    +
    pacman -S linux-firmware intel-ucode amd-ucode
    +
    +
    +
  18. +
  19. Enable internet time synchronisation:

    +
    systemctl enable systemd-timesyncd
    +
    +
    +
  20. +
  21. Generate host id:

    +
    zgenhostid -f -o /etc/hostid
    +
    +
    +
  22. +
  23. Generate locales:

    +
    echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen
    +locale-gen
    +
    +
    +
  24. +
  25. Set locale, keymap, timezone, hostname

    +
    rm -f /etc/localtime
    +systemd-firstboot \
    +--force \
    +--locale=en_US.UTF-8 \
    +--timezone=Etc/UTC \
    +--hostname=testhost \
    +--keymap=us
    +
    +
    +
  26. +
  27. Set root passwd

    +
    printf 'root:yourpassword' | chpasswd
    +
    +
    +
  28. +
+
+
+

Bootloader

+
    +
  1. Apply GRUB workaround

    +
    echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile.d/zpool_vdev_name_path.sh
    +# shellcheck disable=SC1091
    +. /etc/profile.d/zpool_vdev_name_path.sh
    +
    +# GRUB fails to detect rpool name, hard code as "rpool"
    +sed -i "s|rpool=.*|rpool=rpool|"  /etc/grub.d/10_linux
    +
    +
    +

    This workaround needs to be re-applied after every GRUB update, as the +update will overwrite the changes.

    +
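    A pacman hook can re-apply the change automatically after each grub upgrade.
    This is only a sketch; the hook file name and the use of /etc/pacman.d/hooks/
    are assumptions, adjust them to your setup:

    +
    mkdir -p /etc/pacman.d/hooks
    +tee /etc/pacman.d/hooks/grub-rpool-name.hook <<- 'EOF'
    +[Trigger]
    +Operation = Install
    +Operation = Upgrade
    +Type = Package
    +Target = grub
    +
    +[Action]
    +Description = Re-apply hard-coded rpool name in 10_linux
    +When = PostTransaction
    +Exec = /usr/bin/sed -i s|rpool=.*|rpool=rpool| /etc/grub.d/10_linux
    +EOF
    +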
  2. +
  3. Install GRUB:

    +
    mkdir -p /boot/efi/archlinux/grub-bootdir/i386-pc/
    +mkdir -p /boot/efi/archlinux/grub-bootdir/x86_64-efi/
    +for i in ${DISK}; do
    + grub-install --target=i386-pc --boot-directory \
    +     /boot/efi/archlinux/grub-bootdir/i386-pc/  "${i}"
    +done
    +grub-install --target x86_64-efi --boot-directory \
    + /boot/efi/archlinux/grub-bootdir/x86_64-efi/ --efi-directory \
    + /boot/efi --bootloader-id archlinux --removable
    +if test -d /sys/firmware/efi/efivars/; then
    +   grub-install --target x86_64-efi --boot-directory \
    +    /boot/efi/archlinux/grub-bootdir/x86_64-efi/ --efi-directory \
    +    /boot/efi --bootloader-id archlinux
    +fi
    +
    +
    +
  4. +
  5. Import both bpool and rpool at boot:

    +
    echo 'GRUB_CMDLINE_LINUX="zfs_import_dir=/dev/"' >> /etc/default/grub
    +
    +
    +
  6. +
  7. Generate GRUB menu:

    +
    mkdir -p /boot/grub
    +grub-mkconfig -o /boot/grub/grub.cfg
    +cp /boot/grub/grub.cfg \
    + /boot/efi/archlinux/grub-bootdir/x86_64-efi/grub/grub.cfg
    +cp /boot/grub/grub.cfg \
    + /boot/efi/archlinux/grub-bootdir/i386-pc/grub/grub.cfg
    +
    +
    +
  8. +
  9. For both legacy and EFI booting: mirror ESP content:

    +
    espdir=$(mktemp -d)
    +find /boot/efi/ -maxdepth 1 -mindepth 1 -type d -print0 \
    +| xargs -t -0I '{}' cp -r '{}' "${espdir}"
    +find "${espdir}" -maxdepth 1 -mindepth 1 -type d -print0 \
    +| xargs -t -0I '{}' sh -vxc "find /boot/efis/ -maxdepth 1 -mindepth 1 -type d -print0 | xargs -t -0I '[]' cp -r '{}' '[]'"
    +
    +
    +
  10. +
  11. Exit chroot

    +
    exit
    +
    +
    +
  12. +
  13. Unmount filesystems and create initial system snapshot +You can later create a boot environment from this snapshot. +See Root on ZFS maintenance page.

    +
    umount -Rl "${MNT}"
    +zfs snapshot -r rpool@initial-installation
    +zfs snapshot -r bpool@initial-installation
    +
    +
    +
  14. +
  15. Export all pools

    +
    zpool export -a
    +
    +
    +
  16. +
  17. Reboot

    +
    reboot
    +
    +
    +
  18. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Arch Linux/index.html b/Getting Started/Arch Linux/index.html new file mode 100644 index 000000000..d90a86d5f --- /dev/null +++ b/Getting Started/Arch Linux/index.html @@ -0,0 +1,209 @@ + + + + + + + Arch Linux — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Arch Linux

+
+

Contents

+ +
+
+

Support

+

Reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat.

+

If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @ne9z.

+
+
+

Overview

+

Due to license incompatibility, +ZFS is not available in the official Arch Linux repositories.

+

ZFS support is provided by the third-party archzfs repo.

+
+
+

Installation

+

See Archlinux Wiki.

+
+
+

Root on ZFS

+

ZFS can be used as the root file system for Arch Linux. +An installation guide is available.

+ +
+
+

Contribute

+
    +
  1. Fork and clone this repo.

  2. +
  3. Install the tools:

    +
    sudo pacman -S --needed python-pip make
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your "${PATH}", e.g. by adding this to ~/.bashrc:
    +[ -d "${HOME}"/.local/bin ] && export PATH="${HOME}"/.local/bin:"${PATH}"
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @ne9z.

  10. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/Debian Bookworm Root on ZFS.html b/Getting Started/Debian/Debian Bookworm Root on ZFS.html new file mode 100644 index 000000000..804884de7 --- /dev/null +++ b/Getting Started/Debian/Debian Bookworm Root on ZFS.html @@ -0,0 +1,1330 @@ + + + + + + + Debian Bookworm Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian Bookworm Root on ZFS

+ +
+

Overview

+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the Debian GNU/Linux Live CD. If prompted, login with the username +user and password live. Connect your system to the Internet as +appropriate (e.g. join your WiFi network). Open a terminal.

  2. +
  3. Setup and update the repositories:

    +
    sudo vi /etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
    +
    +
    +
    sudo apt update
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    sudo apt install --yes openssh-server
    +
    +sudo systemctl restart ssh
    +
    +
    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh user@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    apt install --yes debootstrap gdisk zfsutils-linux
    +
    +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +
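    For example, to confirm that an alias points at the disk you intend to use
    (the alias below is only an illustration, match it to your hardware):

    +
    ls -la /dev/disk/by-id
    +DISK=/dev/disk/by-id/scsi-SATA_disk1
    +readlink -f $DISK   # should print the expected /dev/sdX node
    +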

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio. Also when using /dev/vda, the partitions used later will be named +differently. Otherwise, read the troubleshooting +section.

    • +
    • For a mirror or raidz topology, use DISK1, DISK2, etc.

    • +
    • When choosing a boot pool size, consider how you will use the space. A +kernel and initrd may consume around 100M. If you have multiple kernels +and take snapshots, you may find yourself low on boot pool space, +especially if you need to regenerate your initramfs images, which may be +around 85M each. Size your boot pool appropriately for your needs.

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    Ensure swap partitions are not in use:

    +
    swapoff --all
    +
    +
    +

    If the disk was previously used in an MD array:

    +
    apt install --yes mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition:
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    If the disk was previously used with zfs:

    +
    wipefs -a $DISK
    +
    +
    +

    For flash-based storage, if the disk was previously used, you may wish to +do a full-disk discard (TRIM/UNMAP), which can improve performance:

    +
    blkdiscard -f $DISK
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Partition your disk(s):

    +

    Run this if you need legacy (BIOS) booting:

    +
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
    +
    +
    +

    Run this for UEFI booting (for use now or in the future):

    +
    sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
    +
    +
    +

    Run this for the boot pool:

    +
    sgdisk     -n3:0:+1G      -t3:BF01 $DISK
    +
    +
    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

    +
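    For example, with two disks and the unencrypted or ZFS native encryption
    layout (this assumes DISK1 and DISK2 were set as suggested in the hints
    above):

    +
    for d in $DISK1 $DISK2; do
    +    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $d
    +    sgdisk     -n2:1M:+512M   -t2:EF00 $d
    +    sgdisk     -n3:0:+1G      -t3:BF01 $d
    +    sgdisk     -n4:0:0        -t4:BF00 $d
    +done
    +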
  6. +
  7. Create the boot pool:

    +
    zpool create \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -o compatibility=grub2 \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -O devices=off \
    +    -O acltype=posixacl -O xattr=sa \
    +    -O compression=lz4 \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O canmount=off -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    Note: GRUB does not support all zpool features (see +spa_feature_names in +grub-core/fs/zfs/zfs.c). +We create a separate zpool for /boot here, specifying the +-o compatibility=grub2 property which restricts the pool to only those +features that GRUB supports, allowing the root pool to use any/all features.

    +

    See the section on Compatibility feature sets in the zpool-features +man page for more information.

    +
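    To review exactly which features the grub2 compatibility set permits, and to
    confirm the property on the new pool (the compatibility.d path is where
    zfsutils-linux normally installs these files):

    +
    cat /usr/share/zfs/compatibility.d/grub2
    +zpool get compatibility bpool
    +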

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

    • +
    +
  8. +
  9. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      apt install --yes cryptsetup
      +
      +cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
  10. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +

    On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality was implemented in Ubuntu with the +zsys tool, though its dataset layout is more complicated, and zsys +is on life support. Even +without such a tool, the rpool/ROOT and bpool/BOOT containers can still +be used for manually created clones. That said, this HOWTO assumes a single +filesystem for /boot for simplicity.

    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
    +zfs mount rpool/ROOT/debian
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/debian
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
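    To confirm the property and the current mount state of both filesystems:

    +
    zfs get canmount,mounted rpool/ROOT/debian bpool/BOOT/debian
    +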
  4. +
  5. Create datasets:

    +
    zfs create                     rpool/home
    +zfs create -o mountpoint=/root rpool/home/root
    +chmod 700 /mnt/root
    +zfs create -o canmount=off     rpool/var
    +zfs create -o canmount=off     rpool/var/lib
    +zfs create                     rpool/var/log
    +zfs create                     rpool/var/spool
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to separate these to exclude them from snapshots:

    +
    zfs create -o com.sun:auto-snapshot=false rpool/var/cache
    +zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
    +zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create rpool/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create -o canmount=off rpool/usr
    +zfs create                 rpool/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create rpool/var/games
    +
    +
    +

    If this system will have a GUI:

    +
    zfs create rpool/var/lib/AccountsService
    +zfs create rpool/var/lib/NetworkManager
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create rpool/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create rpool/var/snap
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create rpool/var/www
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
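    For example, to cap the optional /tmp dataset at 4 GiB (the value is only an
    illustration):

    +
    zfs set quota=4G rpool/tmp
    +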

    Note: If you separate a directory required for booting (e.g. /etc) +into its own dataset, you must add it to +ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs. Datasets +with canmount=off (like rpool/usr above) do not matter for this.

    +
  6. +
  7. Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +
  8. +
  9. Install the minimal system:

    +
    debootstrap bookworm /mnt
    +
    +
    +

    The debootstrap command leaves the new system in an unconfigured state. +An alternative to using debootstrap is to copy the entirety of a +working system into the new ZFS root.

    +
  10. +
  11. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  12. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Configure the network interface:

    +

    Find the interface name:

    +
    ip addr show
    +
    +
    +

    Adjust NAME below to match your interface name:

    +
    vi /mnt/etc/network/interfaces.d/NAME
    +
    +
    +
    auto NAME
    +iface NAME inet dhcp
    +
    +
    +

    Customize this file if the system is not a DHCP client.

    +
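    For example, a static configuration might look like this (the addresses are
    illustrations only):

    +
    auto NAME
    +iface NAME inet static
    +    address 192.168.1.50/24
    +    gateway 192.168.1.1
    +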
  4. +
  5. Configure the package sources:

    +
    vi /mnt/etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
    +deb-src http://deb.debian.org/debian bookworm main contrib non-free-firmware
    +
    +deb http://deb.debian.org/debian-security bookworm-security main contrib non-free-firmware
    +deb-src http://deb.debian.org/debian-security bookworm-security main contrib non-free-firmware
    +
    +deb http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware
    +deb-src http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware
    +
    +
    +
  6. +
  7. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  8. +
  9. Configure a basic system environment:

    +
    apt update
    +
    +apt install --yes console-setup locales
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales tzdata keyboard-configuration console-setup
    +
    +
    +
  10. +
  11. Install ZFS in the chroot environment for the new system:

    +
    apt install --yes dpkg-dev linux-headers-generic linux-image-generic
    +
    +apt install --yes zfs-initramfs
    +
    +echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup does +not support ZFS.

    +
  12. +
  13. For LUKS installs only, setup /etc/crypttab:

    +
    apt install --yes cryptsetup cryptsetup-initramfs
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \
    +    none luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for cryptsetup does not support +ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
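    For example, for a second disk (this assumes DISK2 is set in the chroot; the
    chroot command earlier only passed DISK):

    +
    echo luks2 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK2}-part4) \
    +    none luks,discard,initramfs >> /etc/crypttab
    +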
  14. +
  15. Install an NTP service to synchronize time. +This step is specific to Bookworm, which does not install the package during +bootstrap. +Although this step is not necessary for ZFS, it is useful for internet +browsing, where local clock drift can cause login failures:

    +
    apt install systemd-timesyncd
    +
    +
    +
  16. +
  17. Install GRUB

    +

    Choose one of the following options:

    +
      +
    • Install GRUB for legacy (BIOS) booting:

      +
      apt install --yes grub-pc
      +
      +
      +
    • +
    • Install GRUB for UEFI booting:

      +
      apt install dosfstools
      +
      +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
      +mkdir /boot/efi
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \
      +   /boot/efi vfat defaults 0 0 >> /etc/fstab
      +mount /boot/efi
      +apt install --yes grub-efi-amd64 shim-signed
      +
      +
      +

      Notes:

      +
        +
      • The -s 1 for mkdosfs is only necessary for drives which present +4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size +(given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

      • +
      • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later.

      • +
      +
    • +
    +
  18. +
  19. Optional: Remove os-prober:

    +
    apt purge --yes os-prober
    +
    +
    +

    This avoids error messages from update-grub. os-prober is only +necessary in dual-boot configurations.

    +
  20. +
  21. Set a root password:

    +
    passwd
    +
    +
    +
  22. +
  23. Enable importing bpool

    +

    This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

    +
    vi /etc/systemd/system/zfs-import-bpool.service
    +
    +
    +
    [Unit]
    +DefaultDependencies=no
    +Before=zfs-import-scan.service
    +Before=zfs-import-cache.service
    +
    +[Service]
    +Type=oneshot
    +RemainAfterExit=yes
    +ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    +# Work-around to preserve zpool cache:
    +ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
    +ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
    +
    +[Install]
    +WantedBy=zfs-import.target
    +
    +
    +
    systemctl enable zfs-import-bpool.service
    +
    +
    +

    Note: For some disk configurations (NVMe?), this service may fail with an error +indicating that the bpool cannot be found. If this happens, add +-d DISK-part3 (replace DISK with the correct device path) to the +zpool import command.

    +
  24. +
  25. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  26. +
  27. Optional: Install SSH:

    +
    apt install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  28. +
  29. Optional: For ZFS native encryption or LUKS, configure Dropbear for remote +unlocking:

    +
    apt install --yes --no-install-recommends dropbear-initramfs
    +mkdir -p /etc/dropbear/initramfs
    +
    +# Optional: Convert OpenSSH server keys for Dropbear
    +for type in ecdsa ed25519 rsa ; do
    +    cp /etc/ssh/ssh_host_${type}_key /tmp/openssh.key
    +    ssh-keygen -p -N "" -m PEM -f /tmp/openssh.key
    +    dropbearconvert openssh dropbear \
    +        /tmp/openssh.key \
    +        /etc/dropbear/initramfs/dropbear_${type}_host_key
    +done
    +rm /tmp/openssh.key
    +
    +# Add user keys in the same format as ~/.ssh/authorized_keys
    +vi /etc/dropbear/initramfs/authorized_keys
    +
    +# If using a static IP, set it for the initramfs environment:
    +vi /etc/initramfs-tools/initramfs.conf
    +# The syntax is: IP=ADDRESS::GATEWAY:MASK:HOSTNAME:NIC
    +# For example:
    +# IP=192.168.1.100::192.168.1.1:255.255.255.0:myhostname:ens3
    +# HOSTNAME and NIC are optional.
    +
    +# Rebuild the initramfs (required when changing any of the above):
    +update-initramfs -u -k all
    +
    +
    +

    Notes:

    +
      +
    • Converting the server keys makes Dropbear use the same keys as OpenSSH, +avoiding host key mismatch warnings. Currently, dropbearconvert doesn’t +understand the new OpenSSH private key format, so the +keys need to be converted to the old PEM format first using +ssh-keygen. The downside of using the same keys for both OpenSSH and +Dropbear is that the OpenSSH keys are then available on-disk, unencrypted +in the initramfs.

    • +
    • Later, to use this functionality, SSH to the system (as root) while it is +prompting for the passphrase during the boot process. For ZFS native +encryption, run zfsunlock. For LUKS, run cryptroot-unlock.

    • +
    • You can optionally add command="/usr/bin/zfsunlock" or +command="/bin/cryptroot-unlock" in front of the authorized_keys +line to force the unlock command. This way, the unlock command runs +automatically and is all that can be run. An example line is shown after these notes.

    • +
    +
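    For example, a complete authorized_keys line forcing zfsunlock might look
    like this (the public key is shortened; use your own):

    +
    command="/usr/bin/zfsunlock" ssh-ed25519 AAAAC3NzaC1lZDI1... user@workstation
    +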
  30. +
  31. Optional (but kindly requested): Install popcon

    +

    The popularity-contest package reports the list of packages installed +on your system. Showing that ZFS is popular may be helpful in terms of +long-term attention from the distro.

    +
    apt install --yes popularity-contest
    +
    +
    +

    Choose Yes at the prompt.

    +
  32. +
+
+
+

Step 5: GRUB Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub-probe /boot
    +
    +
    +
  2. +
  3. Refresh the initrd files:

    +
    update-initramfs -c -k all
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup +does not support ZFS.

    +
  4. +
  5. Workaround GRUB’s missing zpool-features support:

    +
    vi /etc/default/grub
    +# Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Install the boot loader:

    +
      +
    1. For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub-install $DISK
      +
      +
      +
    2. +
    +

    Note that you are installing GRUB to the whole disk, not a partition.

    +

    If you are creating a mirror or raidz topology, repeat the grub-install +command for each disk in the pool.

    +
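    For example, with two disks (this assumes DISK1 and DISK2 are set inside the
    chroot; the chroot command earlier only passed DISK):

    +
    for d in $DISK1 $DISK2; do
    +    grub-install $d
    +done
    +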
      +
    1. For UEFI booting, install GRUB to the ESP:

      +
      grub-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=debian --recheck --no-floppy
      +
      +
      +

      It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

      +
    2. +
    +
  12. +
  13. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on     bpool/BOOT/debian
    +zfs set canmount=noauto rpool/ROOT/debian
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Once the files have data, stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  14. +
+
+
+

Step 6: First Boot

+
    +
  1. Optional: Snapshot the initial installation:

    +
    zfs snapshot bpool/BOOT/debian@install
    +zfs snapshot rpool/ROOT/debian@install
    +
    +
    +

    In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

    +
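    For example, before a future upgrade (the snapshot name is only an
    illustration):

    +
    zfs snapshot bpool/BOOT/debian@pre-upgrade-$(date +%Y%m%d)
    +zfs snapshot rpool/ROOT/debian@pre-upgrade-$(date +%Y%m%d)
    +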
  2. +
  3. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  4. +
  5. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  6. +
  7. If this fails for rpool, mounting it on boot will fail and you will need to +zpool import -f rpool, then exit in the initramfs prompt.

  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  10. +
  11. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +zfs create rpool/home/$username
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username
    +
    +
    +
  12. +
  13. Mirror GRUB

    +

    If you installed to multiple disks, install GRUB on the additional +disks.

    +
      +
    • For legacy (BIOS) booting:

      +
      dpkg-reconfigure grub-pc
      +
      +
      +

      Hit enter until you get to the device selection screen. +Select (using the space bar) all of the disks (not partitions) in your pool.

      +
    • +
    • For UEFI booting:

      +
      umount /boot/efi
      +
      +
      +

      For the second and subsequent disks (increment debian-2 to -3, etc.):

      +
      dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
      +   of=/dev/disk/by-id/scsi-SATA_disk2-part2
      +efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
      +    -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
      +
      +mount /boot/efi
      +
      +
      +
    • +
    +
  14. +
+
+
+

Step 7: Optional: Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is a bug report upstream.

+
    +
  1. Create a volume dataset (zvol) for use as a swap device:

    +
    zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    +    -o logbias=throughput -o sync=always \
    +    -o primarycache=metadata -o secondarycache=none \
    +    -o com.sun:auto-snapshot=false rpool/swap
    +
    +
    +

    You can adjust the size (the 4G part) to your needs.

    +

    The compression algorithm is set to zle because it is the cheapest +available algorithm. As this guide recommends ashift=12 (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior.

    +
  2. +
  3. Configure the swap device:

    +

    Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

    +
    mkswap -f /dev/zvol/rpool/swap
    +echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
    +echo RESUME=none > /etc/initramfs-tools/conf.d/resume
    +
    +
    +

    The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

    +
  4. +
  5. Enable the swap device:

    +
    swapon -av
    +
    +
    +
  6. +
+
+
+

Step 8: Full Software Installation

+
    +
  1. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  2. +
  3. Install a regular set of software:

    +
    tasksel --new-install
    +
    +
    +

    Note: This will check “Debian desktop environment” and “print server” +by default. If you want a server installation, unselect those.

    +
  4. +
  5. Optional: Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +
  8. +
+
+
+

Step 9: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Delete the snapshots of the initial installation:

    +
    sudo zfs destroy bpool/BOOT/debian@install
    +sudo zfs destroy rpool/ROOT/debian@install
    +
    +
    +
  4. +
  5. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    sudo vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +sudo systemctl restart ssh
    +
    +
    +
  8. +
  9. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
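    For example, for a second disk, and to optionally wrap each backup file in
    passphrase-based encryption with gpg:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk2-part4 \
    +    --header-backup-file luks2-header.dat
    +
    +gpg -c luks1-header.dat
    +gpg -c luks2-header.dat
    +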
  12. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
apt install --yes cryptsetup
+
+cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+zfs mount rpool/ROOT/debian
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+mount -t tmpfs tmpfs /mnt/run
+mkdir /mnt/run/lock
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+
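
For example:

+
echo arcsas >> /etc/initramfs-tools/modules
+update-initramfs -c -k all
+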

Upgrade or downgrade the Areca driver if something like +RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
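
For example (15 seconds is only an illustration; rebuild the initramfs so the
setting is picked up):

+
echo ZFS_INITRD_PRE_MOUNTROOT_SLEEP=15 >> /etc/default/zfs
+update-initramfs -u -k all
+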
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/Debian Bullseye Root on ZFS.html b/Getting Started/Debian/Debian Bullseye Root on ZFS.html new file mode 100644 index 000000000..e5ae990ae --- /dev/null +++ b/Getting Started/Debian/Debian Bullseye Root on ZFS.html @@ -0,0 +1,1378 @@ + + + + + + + Debian Bullseye Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian Bullseye Root on ZFS

+ +
+

Overview

+
+

Newer release available

+
    +
  • See Debian Bookworm Root on ZFS for +new installs. This guide is no longer receiving most updates. It continues +to exist for reference for existing installs that followed it.

  • +
+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the Debian GNU/Linux Live CD. If prompted, login with the username +user and password live. Connect your system to the Internet as +appropriate (e.g. join your WiFi network). Open a terminal.

  2. +
  3. Setup and update the repositories:

    +
    sudo vi /etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian bullseye main contrib
    +
    +
    +
    sudo apt update
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    sudo apt install --yes openssh-server
    +
    +sudo systemctl restart ssh
    +
    +
    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh user@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    apt install --yes debootstrap gdisk zfsutils-linux
    +
    +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    • For a mirror or raidz topology, use DISK1, DISK2, etc.

    • +
    • When choosing a boot pool size, consider how you will use the space. A +kernel and initrd may consume around 100M. If you have multiple kernels +and take snapshots, you may find yourself low on boot pool space, +especially if you need to regenerate your initramfs images, which may be +around 85M each. Size your boot pool appropriately for your needs.

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    Ensure swap partitions are not in use:

    +
    swapoff --all
    +
    +
    +

    If the disk was previously used in an MD array:

    +
    apt install --yes mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition:
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    If the disk was previously used with zfs:

    +
    wipefs -a $DISK
    +
    +
    +

    For flash-based storage, if the disk was previously used, you may wish to +do a full-disk discard (TRIM/UNMAP), which can improve performance:

    +
    blkdiscard -f $DISK
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Partition your disk(s):

    +

    Run this if you need legacy (BIOS) booting:

    +
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
    +
    +
    +

    Run this for UEFI booting (for use now or in the future):

    +
    sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
    +
    +
    +

    Run this for the boot pool:

    +
    sgdisk     -n3:0:+1G      -t3:BF01 $DISK
    +
    +
    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

    +
  6. +
  7. Create the boot pool:

    +
    zpool create \
    +    -o ashift=12 \
    +    -o autotrim=on -d \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o feature@async_destroy=enabled \
    +    -o feature@bookmarks=enabled \
    +    -o feature@embedded_data=enabled \
    +    -o feature@empty_bpobj=enabled \
    +    -o feature@enabled_txg=enabled \
    +    -o feature@extensible_dataset=enabled \
    +    -o feature@filesystem_limits=enabled \
    +    -o feature@hole_birth=enabled \
    +    -o feature@large_blocks=enabled \
    +    -o feature@livelist=enabled \
    +    -o feature@lz4_compress=enabled \
    +    -o feature@spacemap_histogram=enabled \
    +    -o feature@zpool_checkpoint=enabled \
    +    -O devices=off \
    +    -O acltype=posixacl -O xattr=sa \
    +    -O compression=lz4 \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O canmount=off -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

    +
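    To review which features actually ended up enabled on the new boot pool:

    +
    zpool get all bpool | grep feature@
    +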

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The device_rebuild feature should be safe to use (except on raidz, +which it is incompatible with), but the boot pool is small, so this does +not matter in practice.

    • +
    • The log_spacemap and spacemap_v2 features have been tested and +are safe to use. The boot pool is small, so these do not matter in +practice.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer should be safe but the boot pool is small enough +that it is unlikely to be necessary.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
  8. +
  9. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      apt install --yes cryptsetup
      +
      +cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you do not want this, remove that option, but later add -o acltype=posixacl (note: lowercase “o”) to the zfs create for /var/log, as journald requires ACLs.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
  10. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +

    On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality was implemented in Ubuntu with the +zsys tool, though its dataset layout is more complicated, and zsys +is on life support. Even +without such a tool, the rpool/ROOT and bpool/BOOT containers can still +be used for manually created clones. That said, this HOWTO assumes a single +filesystem for /boot for simplicity.

    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
    +zfs mount rpool/ROOT/debian
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/debian
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
  4. +
  5. Create datasets:

    +
    zfs create                     rpool/home
    +zfs create -o mountpoint=/root rpool/home/root
    +chmod 700 /mnt/root
    +zfs create -o canmount=off     rpool/var
    +zfs create -o canmount=off     rpool/var/lib
    +zfs create                     rpool/var/log
    +zfs create                     rpool/var/spool
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to separate these to exclude them from snapshots:

    +
    zfs create -o com.sun:auto-snapshot=false rpool/var/cache
    +zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
    +zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create rpool/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create -o canmount=off rpool/usr
    +zfs create                 rpool/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create rpool/var/games
    +
    +
    +

    If this system will have a GUI:

    +
    zfs create rpool/var/lib/AccountsService
    +zfs create rpool/var/lib/NetworkManager
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create rpool/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create rpool/var/snap
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create rpool/var/www
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
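
    For example, a hypothetical 1 GiB cap on rpool/tmp (only applicable if you created that dataset above; choose a limit that suits your system):

    +
    zfs set quota=1G rpool/tmp
    +
    +
    +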

    Note: If you separate a directory required for booting (e.g. /etc) +into its own dataset, you must add it to +ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs. Datasets +with canmount=off (like rpool/usr above) do not matter for this.

    +
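
    As a purely hypothetical sketch (this guide does not split out /etc), if you had created a dataset named rpool/etc for /etc, the corresponding line in the installed system’s /etc/default/zfs would look like:

    +
    ZFS_INITRD_ADDITIONAL_DATASETS="rpool/etc"
    +
    +
    +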
  6. +
  7. Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +
  8. +
  9. Install the minimal system:

    +
    debootstrap bullseye /mnt
    +
    +
    +

    The debootstrap command leaves the new system in an unconfigured state. +An alternative to using debootstrap is to copy the entirety of a +working system into the new ZFS root.

    +
  10. +
  11. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  12. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Configure the network interface:

    +

    Find the interface name:

    +
    ip addr show
    +
    +
    +

    Adjust NAME below to match your interface name:

    +
    vi /mnt/etc/network/interfaces.d/NAME
    +
    +
    +
    auto NAME
    +iface NAME inet dhcp
    +
    +
    +

    Customize this file if the system is not a DHCP client.

    +
  4. +
  5. Configure the package sources:

    +
    vi /mnt/etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian bullseye main contrib
    +deb-src http://deb.debian.org/debian bullseye main contrib
    +
    +deb http://deb.debian.org/debian-security bullseye-security main contrib
    +deb-src http://deb.debian.org/debian-security bullseye-security main contrib
    +
    +deb http://deb.debian.org/debian bullseye-updates main contrib
    +deb-src http://deb.debian.org/debian bullseye-updates main contrib
    +
    +
    +
  6. +
  7. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  8. +
  9. Configure a basic system environment:

    +
    ln -s /proc/self/mounts /etc/mtab
    +apt update
    +
    +apt install --yes console-setup locales
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales tzdata keyboard-configuration console-setup
    +
    +
    +
  10. +
  11. Install ZFS in the chroot environment for the new system:

    +
    apt install --yes dpkg-dev linux-headers-generic linux-image-generic
    +
    +apt install --yes zfs-initramfs
    +
    +echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup does +not support ZFS.

    +
  12. +
  13. For LUKS installs only, setup /etc/crypttab:

    +
    apt install --yes cryptsetup cryptsetup-initramfs
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \
    +    none luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for the fact that cryptsetup does not support ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
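
    For example, a sketch for a hypothetical second disk (DISK2 is an assumed variable; note the use of >> to append rather than overwrite):

    +
    DISK2=/dev/disk/by-id/scsi-SATA_disk2
    +echo luks2 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK2}-part4) \
    +    none luks,discard,initramfs >> /etc/crypttab
    +
    +
    +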
  14. +
  15. Install an NTP service to synchronize time. This step is specific to Bullseye, which does not install the package during bootstrap. Although this step is not necessary for ZFS, it is useful for internet browsing, where local clock drift can cause login failures:

    +
    apt install systemd-timesyncd
    +timedatectl
    +
    +
    +

    You should now see “NTP service: active” in the above timedatectl +output.

    +
  16. +
  17. Install GRUB

    +

    Choose one of the following options:

    +
      +
    • Install GRUB for legacy (BIOS) booting:

      +
      apt install --yes grub-pc
      +
      +
      +

      Select (using the space bar) all of the disks (not partitions) in your +pool.

      +
    • +
    • Install GRUB for UEFI booting:

      +
      apt install dosfstools
      +
      +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
      +mkdir /boot/efi
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \
      +   /boot/efi vfat defaults 0 0 >> /etc/fstab
      +mount /boot/efi
      +apt install --yes grub-efi-amd64 shim-signed
      +
      +
      +

      Notes:

      +
        +
      • The -s 1 for mkdosfs is only necessary for drives which present +4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size +(given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

      • +
      • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later.

      • +
      +
    • +
    +
  18. +
  19. Optional: Remove os-prober:

    +
    apt purge --yes os-prober
    +
    +
    +

    This avoids error messages from update-grub. os-prober is only +necessary in dual-boot configurations.

    +
  20. +
  21. Set a root password:

    +
    passwd
    +
    +
    +
  22. +
  23. Enable importing bpool

    +

    This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

    +
    vi /etc/systemd/system/zfs-import-bpool.service
    +
    +
    +
    [Unit]
    +DefaultDependencies=no
    +Before=zfs-import-scan.service
    +Before=zfs-import-cache.service
    +
    +[Service]
    +Type=oneshot
    +RemainAfterExit=yes
    +ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    +# Work-around to preserve zpool cache:
    +ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
    +ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
    +
    +[Install]
    +WantedBy=zfs-import.target
    +
    +
    +
    systemctl enable zfs-import-bpool.service
    +
    +
    +

    Note: For some disk configurations (NVMe?), this service may fail with an error +indicating that the bpool cannot be found. If this happens, add +-d DISK-part3 (replace DISK with the correct device path) to the +zpool import command.

    +
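
    For example, using this guide’s example disk name (adjust the path to your device), the modified line in the unit file would read:

    +
    ExecStart=/sbin/zpool import -N -o cachefile=none -d /dev/disk/by-id/scsi-SATA_disk1-part3 bpool
    +
    +
    +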
  24. +
  25. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  26. +
  27. Optional: Install SSH:

    +
    apt install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  28. +
  29. Optional: For ZFS native encryption or LUKS, configure Dropbear for remote +unlocking:

    +
    apt install --yes --no-install-recommends dropbear-initramfs
    +mkdir -p /etc/dropbear-initramfs
    +
    +# Optional: Convert OpenSSH server keys for Dropbear
    +for type in ecdsa ed25519 rsa ; do
    +    cp /etc/ssh/ssh_host_${type}_key /tmp/openssh.key
    +    ssh-keygen -p -N "" -m PEM -f /tmp/openssh.key
    +    dropbearconvert openssh dropbear \
    +        /tmp/openssh.key \
    +        /etc/dropbear-initramfs/dropbear_${type}_host_key
    +done
    +rm /tmp/openssh.key
    +
    +# Add user keys in the same format as ~/.ssh/authorized_keys
    +vi /etc/dropbear-initramfs/authorized_keys
    +
    +# If using a static IP, set it for the initramfs environment:
    +vi /etc/initramfs-tools/initramfs.conf
    +# The syntax is: IP=ADDRESS::GATEWAY:MASK:HOSTNAME:NIC
    +# For example:
    +# IP=192.168.1.100::192.168.1.1:255.255.255.0:myhostname:ens3
    +# HOSTNAME and NIC are optional.
    +
    +# Rebuild the initramfs (required when changing any of the above):
    +update-initramfs -u -k all
    +
    +
    +

    Notes:

    +
      +
    • Converting the server keys makes Dropbear use the same keys as OpenSSH, +avoiding host key mismatch warnings. Currently, dropbearconvert doesn’t +understand the new OpenSSH private key format, so the +keys need to be converted to the old PEM format first using +ssh-keygen. The downside of using the same keys for both OpenSSH and +Dropbear is that the OpenSSH keys are then available on-disk, unencrypted +in the initramfs.

    • +
    • Later, to use this functionality, SSH to the system (as root) while it is +prompting for the passphrase during the boot process. For ZFS native +encryption, run zfsunlock. For LUKS, run cryptroot-unlock.

    • +
    • You can optionally add command="/usr/bin/zfsunlock" or command="/bin/cryptroot-unlock" in front of the authorized_keys line to force the unlock command. This way, the unlock command runs automatically and is all that can be run. A hypothetical example is shown after these notes.

    • +
    +
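
    For example, a hypothetical /etc/dropbear-initramfs/authorized_keys entry for ZFS native encryption (the key material shown is a placeholder; use your own public key):

    +
    command="/usr/bin/zfsunlock" ssh-ed25519 AAAAC3Nza...your-key-here... user@workstation
    +
    +
    +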
  30. +
  31. Optional (but kindly requested): Install popcon

    +

    The popularity-contest package reports the list of packages installed on your system. Showing that ZFS is popular may be helpful in terms of long-term attention from the distro.

    +
    apt install --yes popularity-contest
    +
    +
    +

    Choose Yes at the prompt.

    +
  32. +
+
+
+

Step 5: GRUB Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub-probe /boot
    +
    +
    +
  2. +
  3. Refresh the initrd files:

    +
    update-initramfs -c -k all
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup +does not support ZFS.

    +
  4. +
  5. Workaround GRUB’s missing zpool-features support:

    +
    vi /etc/default/grub
    +# Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Install the boot loader:

    +
      +
    1. For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub-install $DISK
      +
      +
      +
    2. +
    +

    Note that you are installing GRUB to the whole disk, not a partition.

    +

    If you are creating a mirror or raidz topology, repeat the grub-install +command for each disk in the pool.

    +
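
    For example, for a hypothetical second disk:

    +
    grub-install /dev/disk/by-id/scsi-SATA_disk2
    +
    +
    +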
      +
    1. For UEFI booting, install GRUB to the ESP:

      +
      grub-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=debian --recheck --no-floppy
      +
      +
      +

      It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

      +
    2. +
    +
  12. +
  13. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on     bpool/BOOT/debian
    +zfs set canmount=noauto rpool/ROOT/debian
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Once the files have data, stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  14. +
+
+
+

Step 6: First Boot

+
    +
  1. Optional: Snapshot the initial installation:

    +
    zfs snapshot bpool/BOOT/debian@install
    +zfs snapshot rpool/ROOT/debian@install
    +
    +
    +

    In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

    +
  2. +
  3. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  4. +
  5. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  6. +
  7. If this fails for rpool, mounting it on boot will fail and you will need to zpool import -f rpool, then exit in the initramfs prompt.

  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  10. +
  11. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +zfs create rpool/home/$username
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username
    +
    +
    +
  12. +
  13. Mirror GRUB

    +

    If you installed to multiple disks, install GRUB on the additional +disks.

    +
      +
    • For legacy (BIOS) booting:

      +
      dpkg-reconfigure grub-pc
      +
      +
      +

      Hit enter until you get to the device selection screen. +Select (using the space bar) all of the disks (not partitions) in your pool.

      +
    • +
    • For UEFI booting:

      +
      umount /boot/efi
      +
      +
      +

      For the second and subsequent disks (increment debian-2 to -3, etc.):

      +
      dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
      +   of=/dev/disk/by-id/scsi-SATA_disk2-part2
      +efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
      +    -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
      +
      +mount /boot/efi
      +
      +
      +
    • +
    +
  14. +
+
+
+

Step 7: Optional: Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is a bug report upstream.

+
    +
  1. Create a volume dataset (zvol) for use as a swap device:

    +
    zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    +    -o logbias=throughput -o sync=always \
    +    -o primarycache=metadata -o secondarycache=none \
    +    -o com.sun:auto-snapshot=false rpool/swap
    +
    +
    +

    You can adjust the size (the 4G part) to your needs.

    +

    The compression algorithm is set to zle because it is the cheapest available algorithm. As this guide recommends ashift=12 (4 KiB blocks on disk), the common case of a 4 KiB page size means that no compression algorithm can reduce I/O. The exception is all-zero pages, which are dropped by ZFS; but some form of compression has to be enabled to get this behavior.

    +
  2. +
  3. Configure the swap device:

    +

    Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

    +
    mkswap -f /dev/zvol/rpool/swap
    +echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
    +echo RESUME=none > /etc/initramfs-tools/conf.d/resume
    +
    +
    +

    The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

    +
  4. +
  5. Enable the swap device:

    +
    swapon -av
    +
    +
    +
  6. +
+
+
+

Step 8: Full Software Installation

+
    +
  1. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  2. +
  3. Install a regular set of software:

    +
    tasksel --new-install
    +
    +
    +

    Note: This will check “Debian desktop environment” and “print server” +by default. If you want a server installation, unselect those.

    +
  4. +
  5. Optional: Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +
  8. +
+
+
+

Step 9: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Delete the snapshots of the initial installation:

    +
    sudo zfs destroy bpool/BOOT/debian@install
    +sudo zfs destroy rpool/ROOT/debian@install
    +
    +
    +
  4. +
  5. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    sudo vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +sudo systemctl restart ssh
    +
    +
    +
  8. +
  9. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
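
    For example, for a hypothetical second disk:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk2-part4 \
    +    --header-backup-file luks2-header.dat
    +
    +
    +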
  12. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
apt install --yes cryptsetup
+
+cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+zfs mount rpool/ROOT/debian
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+mount -t tmpfs tmpfs /mnt/run
+mkdir /mnt/run/lock
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+
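
One way to do this (the module name arcsas comes from the paragraph above):

+
echo arcsas >> /etc/initramfs-tools/modules
+update-initramfs -c -k all
+
+
+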

Upgrade or downgrade the Areca driver if something like RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 appears anywhere in the kernel log. ZoL is unstable on systems that emit this error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
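
For example, a hypothetical 15-second delay (tune the value to your hardware). The setting lives in /etc/default/zfs; regenerate the initramfs afterward so the change is picked up:

+
echo ZFS_INITRD_PRE_MOUNTROOT_SLEEP=15 >> /etc/default/zfs
+update-initramfs -u -k all
+
+
+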
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/Debian Buster Root on ZFS.html b/Getting Started/Debian/Debian Buster Root on ZFS.html new file mode 100644 index 000000000..477b1f23f --- /dev/null +++ b/Getting Started/Debian/Debian Buster Root on ZFS.html @@ -0,0 +1,1315 @@ + + + + + + + Debian Buster Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian Buster Root on ZFS

+ +
+

Overview

+
+

Newer release available

+
    +
  • See Debian Bullseye Root on ZFS for +new installs. This guide is no longer receiving most updates. It continues +to exist for reference for existing installs that followed it.

  • +
+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the Debian GNU/Linux Live CD. If prompted, login with the username +user and password live. Connect your system to the Internet as +appropriate (e.g. join your WiFi network). Open a terminal.

  2. +
  3. Setup and update the repositories:

    +
    sudo vi /etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian buster main contrib
    +deb http://deb.debian.org/debian buster-backports main contrib
    +
    +
    +
    sudo apt update
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    sudo apt install --yes openssh-server
    +
    +sudo systemctl restart ssh
    +
    +
    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh user@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-amd64
    +
    +apt install --yes -t buster-backports --no-install-recommends zfs-dkms
    +
    +modprobe zfs
    +apt install --yes -t buster-backports zfsutils-linux
    +
    +
    +
      +
    • The dkms dependency is installed manually just so it comes from buster +and not buster-backports. This is not critical.

    • +
    • We need to get the module built and loaded before installing +zfsutils-linux or zfs-mount.service will fail to start.

    • +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    • For a mirror or raidz topology, use DISK1, DISK2, etc. A short sketch follows these hints.

    • +
    • When choosing a boot pool size, consider how you will use the space. A +kernel and initrd may consume around 100M. If you have multiple kernels +and take snapshots, you may find yourself low on boot pool space, +especially if you need to regenerate your initramfs images, which may be +around 85M each. Size your boot pool appropriately for your needs.

    • +
    +
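
    For example (hypothetical device names) for a two-disk mirror:

    +
    DISK1=/dev/disk/by-id/scsi-SATA_disk1
    +DISK2=/dev/disk/by-id/scsi-SATA_disk2
    +
    +
    +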
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    Ensure swap partitions are not in use:

    +
    swapoff --all
    +
    +
    +

    If the disk was previously used in an MD array:

    +
    apt install --yes mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition:
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Partition your disk(s):

    +

    Run this if you need legacy (BIOS) booting:

    +
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
    +
    +
    +

    Run this for UEFI booting (for use now or in the future):

    +
    sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
    +
    +
    +

    Run this for the boot pool:

    +
    sgdisk     -n3:0:+1G      -t3:BF01 $DISK
    +
    +
    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

    +
  6. +
  7. Create the boot pool:

    +
    zpool create \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o ashift=12 -d \
    +    -o feature@async_destroy=enabled \
    +    -o feature@bookmarks=enabled \
    +    -o feature@embedded_data=enabled \
    +    -o feature@empty_bpobj=enabled \
    +    -o feature@enabled_txg=enabled \
    +    -o feature@extensible_dataset=enabled \
    +    -o feature@filesystem_limits=enabled \
    +    -o feature@hole_birth=enabled \
    +    -o feature@large_blocks=enabled \
    +    -o feature@lz4_compress=enabled \
    +    -o feature@spacemap_histogram=enabled \
    +    -o feature@zpool_checkpoint=enabled \
    +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    +    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    +    -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer should be safe but the boot pool is small enough +that it is unlikely to be necessary.

    • +
    • The spacemap_v2 feature has been tested and is safe to use. The boot +pool is small, so this does not matter in practice.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
  8. +
  9. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O encryption=on \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      apt install --yes cryptsetup
      +
      +cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you do not want this, remove that option, but later add -o acltype=posixacl (note: lowercase “o”) to the zfs create for /var/log, as journald requires ACLs.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
  10. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +

    On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality was implemented in Ubuntu with the +zsys tool, though its dataset layout is more complicated, and zsys +is on life support. Even +without such a tool, the rpool/ROOT and bpool/BOOT containers can still +be used for manually created clones. That said, this HOWTO assumes a single +filesystem for /boot for simplicity.

    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
    +zfs mount rpool/ROOT/debian
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/debian
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
  4. +
  5. Create datasets:

    +
    zfs create                                 rpool/home
    +zfs create -o mountpoint=/root             rpool/home/root
    +chmod 700 /mnt/root
    +zfs create -o canmount=off                 rpool/var
    +zfs create -o canmount=off                 rpool/var/lib
    +zfs create                                 rpool/var/log
    +zfs create                                 rpool/var/spool
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to exclude these from snapshots:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/cache
    +zfs create -o com.sun:auto-snapshot=false  rpool/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If you use /opt on this system:

    +
    zfs create                                 rpool/opt
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create                                 rpool/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create -o canmount=off                 rpool/usr
    +zfs create                                 rpool/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create                                 rpool/var/games
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create                                 rpool/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create                                 rpool/var/snap
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create                                 rpool/var/www
    +
    +
    +

    If this system will use GNOME:

    +
    zfs create                                 rpool/var/lib/AccountsService
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/docker
    +
    +
    +

    If this system will use NFS (locking):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
    +
    +
    +

    Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
  6. +
  7. Install the minimal system:

    +
    debootstrap buster /mnt
    +
    +
    +

    The debootstrap command leaves the new system in an unconfigured state. +An alternative to using debootstrap is to copy the entirety of a +working system into the new ZFS root.

    +
  8. +
  9. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  10. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Configure the network interface:

    +

    Find the interface name:

    +
    ip addr show
    +
    +
    +

    Adjust NAME below to match your interface name:

    +
    vi /mnt/etc/network/interfaces.d/NAME
    +
    +
    +
    auto NAME
    +iface NAME inet dhcp
    +
    +
    +

    Customize this file if the system is not a DHCP client.

    +
  4. +
  5. Configure the package sources:

    +
    vi /mnt/etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian buster main contrib
    +deb-src http://deb.debian.org/debian buster main contrib
    +
    +deb http://security.debian.org/debian-security buster/updates main contrib
    +deb-src http://security.debian.org/debian-security buster/updates main contrib
    +
    +deb http://deb.debian.org/debian buster-updates main contrib
    +deb-src http://deb.debian.org/debian buster-updates main contrib
    +
    +
    +
    vi /mnt/etc/apt/sources.list.d/buster-backports.list
    +
    +
    +
    deb http://deb.debian.org/debian buster-backports main contrib
    +deb-src http://deb.debian.org/debian buster-backports main contrib
    +
    +
    +
    vi /mnt/etc/apt/preferences.d/90_zfs
    +
    +
    +
    Package: src:zfs-linux
    +Pin: release n=buster-backports
    +Pin-Priority: 990
    +
    +
    +
  6. +
  7. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --rbind /dev  /mnt/dev
    +mount --rbind /proc /mnt/proc
    +mount --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  8. +
  9. Configure a basic system environment:

    +
    ln -s /proc/self/mounts /etc/mtab
    +apt update
    +
    +apt install --yes console-setup locales
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales tzdata keyboard-configuration console-setup
    +
    +
    +
  10. +
  11. Install ZFS in the chroot environment for the new system:

    +
    apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
    +
    +apt install --yes zfs-initramfs
    +
    +echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup does +not support ZFS.

    +
  12. +
  13. For LUKS installs only, setup /etc/crypttab:

    +
    apt install --yes cryptsetup
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \
    +    none luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for the fact that cryptsetup does not support ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
  14. +
  15. Install GRUB

    +

    Choose one of the following options:

    +
      +
    • Install GRUB for legacy (BIOS) booting:

      +
      apt install --yes grub-pc
      +
      +
      +

      Select (using the space bar) all of the disks (not partitions) in your +pool.

      +
    • +
    • Install GRUB for UEFI booting:

      +
      apt install dosfstools
      +
      +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
      +mkdir /boot/efi
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \
      +   /boot/efi vfat defaults 0 0 >> /etc/fstab
      +mount /boot/efi
      +apt install --yes grub-efi-amd64 shim-signed
      +
      +
      +

      Notes:

      +
        +
      • The -s 1 for mkdosfs is only necessary for drives which present +4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size +(given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

      • +
      • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later.

      • +
      +
    • +
    +
  16. +
  17. Optional: Remove os-prober:

    +
    apt purge --yes os-prober
    +
    +
    +

    This avoids error messages from update-grub. os-prober is only +necessary in dual-boot configurations.

    +
  18. +
  19. Set a root password:

    +
    passwd
    +
    +
    +
  20. +
  21. Enable importing bpool

    +

    This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

    +
    vi /etc/systemd/system/zfs-import-bpool.service
    +
    +
    +
    [Unit]
    +DefaultDependencies=no
    +Before=zfs-import-scan.service
    +Before=zfs-import-cache.service
    +
    +[Service]
    +Type=oneshot
    +RemainAfterExit=yes
    +ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    +# Work-around to preserve zpool cache:
    +ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
    +ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
    +
    +[Install]
    +WantedBy=zfs-import.target
    +
    +
    +
    systemctl enable zfs-import-bpool.service
    +
    +
    +
  22. +
  23. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  24. +
  25. Optional: Install SSH:

    +
    apt install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  26. +
  27. Optional (but kindly requested): Install popcon

    +

    The popularity-contest package reports the list of packages installed on your system. Showing that ZFS is popular may be helpful in terms of long-term attention from the distro.

    +
    apt install --yes popularity-contest
    +
    +
    +

    Choose Yes at the prompt.

    +
  28. +
+
+
+

Step 5: GRUB Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub-probe /boot
    +
    +
    +
  2. +
  3. Refresh the initrd files:

    +
    update-initramfs -c -k all
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup +does not support ZFS.

    +
  4. +
  5. Workaround GRUB’s missing zpool-features support:

    +
    vi /etc/default/grub
    +# Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Install the boot loader:

    +
      +
    1. For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub-install $DISK
      +
      +
      +
    2. +
    +

    Note that you are installing GRUB to the whole disk, not a partition.

    +

    If you are creating a mirror or raidz topology, repeat the grub-install +command for each disk in the pool.

    +
      +
    1. For UEFI booting, install GRUB to the ESP:

      +
      grub-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=debian --recheck --no-floppy
      +
      +
      +

      It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

      +
    2. +
    +
  12. +
  13. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on     bpool/BOOT/debian
    +zfs set canmount=noauto rpool/ROOT/debian
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Once the files have data, stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  14. +
+
+
+

Step 6: First Boot

+
    +
  1. Optional: Snapshot the initial installation:

    +
    zfs snapshot bpool/BOOT/debian@install
    +zfs snapshot rpool/ROOT/debian@install
    +
    +
    +

    In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

    +
  2. +
  3. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  4. +
  5. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  8. +
  9. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +zfs create rpool/home/$username
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username
    +
    +
    +
  10. +
  11. Mirror GRUB

    +

    If you installed to multiple disks, install GRUB on the additional +disks.

    +
      +
    • For legacy (BIOS) booting:

      +
      dpkg-reconfigure grub-pc
      +
      +
      +

      Hit enter until you get to the device selection screen. +Select (using the space bar) all of the disks (not partitions) in your pool.

      +
    • +
    • For UEFI booting:

      +
      umount /boot/efi
      +
      +
      +

      For the second and subsequent disks (increment debian-2 to -3, etc.):

      +
      dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
      +   of=/dev/disk/by-id/scsi-SATA_disk2-part2
      +efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
      +    -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
      +
      +mount /boot/efi
      +
      +
      +
    • +
    +
  12. +
+
+
+

Step 7: Optional: Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is a bug report upstream.

+
    +
  1. Create a volume dataset (zvol) for use as a swap device:

    +
    zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    +    -o logbias=throughput -o sync=always \
    +    -o primarycache=metadata -o secondarycache=none \
    +    -o com.sun:auto-snapshot=false rpool/swap
    +
    +
    +

    You can adjust the size (the 4G part) to your needs.

    +

    The compression algorithm is set to zle because it is the cheapest +available algorithm. As this guide recommends ashift=12 (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior.

    +
  2. +
  3. Configure the swap device:

    +

    Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

    +
    mkswap -f /dev/zvol/rpool/swap
    +echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
    +echo RESUME=none > /etc/initramfs-tools/conf.d/resume
    +
    +
    +

    The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

    +
  4. +
  5. Enable the swap device:

    +
    swapon -av
    +
    +
    +
  6. +
+
+
+

Step 8: Full Software Installation

+
    +
  1. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  2. +
  3. Install a regular set of software:

    +
    tasksel --new-install
    +
    +
    +

    Note: This will check “Debian desktop environment” and “print server” +by default. If you want a server installation, unselect those.

    +
  4. +
  5. Optional: Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +
  8. +
+
+
+

Step 9: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Delete the snapshots of the initial installation:

    +
    sudo zfs destroy bpool/BOOT/debian@install
    +sudo zfs destroy rpool/ROOT/debian@install
    +
    +
    +
  4. +
  5. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    sudo vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +sudo systemctl restart ssh
    +
    +
    +
  8. +
  9. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  12. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
apt install --yes cryptsetup
+
+cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+zfs mount rpool/ROOT/debian
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --rbind /dev  /mnt/dev
+mount --rbind /proc /mnt/proc
+mount --rbind /sys  /mnt/sys
+mount -t tmpfs tmpfs /mnt/run
+mkdir /mnt/run/lock
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+
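For reference, a minimal sketch of those two steps as shell commands (run as root; this assumes the module is indeed named arcsas, as stated above):

echo arcsas >> /etc/initramfs-tools/modules
update-initramfs -c -k all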

Upgrade or downgrade the Areca driver if something like RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 appears anywhere in the kernel log. ZoL is unstable on systems that emit this error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
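As a sketch, assuming a 15-second delay is long enough for your controller (the exact value must be tuned to your hardware) and assuming the initramfs needs to be regenerated for the setting to take effect:

echo 'ZFS_INITRD_PRE_MOUNTROOT_SLEEP=15' >> /etc/default/zfs
update-initramfs -u -k all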
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/Debian GNU Linux initrd documentation.html b/Getting Started/Debian/Debian GNU Linux initrd documentation.html new file mode 100644 index 000000000..c5ef91c37 --- /dev/null +++ b/Getting Started/Debian/Debian GNU Linux initrd documentation.html @@ -0,0 +1,250 @@ + + + + + + + Debian GNU Linux initrd documentation — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian GNU Linux initrd documentation

+
+

Supported boot parameters

+
    +
  • rollback=<on|yes|1> Do a rollback of the specified snapshot.

  • +
  • zfs_debug=<on|yes|1> Debug the initrd script

  • +
  • zfs_force=<on|yes|1> Force importing the pool. Should not be +necessary.

  • +
  • zfs=<off|no|0> Don’t try to import ANY pool, mount ANY filesystem or +even load the module.

  • +
  • rpool=<pool> Use this pool for root pool.

  • +
  • bootfs=<pool>/<dataset> Use this dataset for root filesystem.

  • +
  • root=<pool>/<dataset> Use this dataset for root filesystem.

  • +
  • root=ZFS=<pool>/<dataset> Use this dataset for root filesystem.

  • +
  • root=zfs:<pool>/<dataset> Use this dataset for root filesystem.

  • +
  • root=zfs:AUTO Try to detect both pool and rootfs

  • +
+

In all these cases, <dataset> could also be <dataset>@<snapshot>.

+

The reason there are so many supported boot options to get the root filesystem is that there are a lot of different ways to boot ZFS out there, and I wanted to make sure I supported them all.
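For illustration, a hypothetical GRUB linux line combining some of these parameters (the kernel path is copied from the snapshot example later on this page and is only a placeholder; adjust it to your installed kernel):

linux   /BOOT/debian@/boot/vmlinuz-5.10.0-9-amd64 root=zfs:AUTO zfs_debug=1 ro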

+
+
+

Pool imports

+
+

Import using /dev/disk/by-*

+

The initrd will, if the variable USE_DISK_BY_ID is set in the file /etc/default/zfs, try to import using the /dev/disk/by-* links (see the sketch after this list). It will try to import in this order:

+
    +
  1. /dev/disk/by-vdev

  2. +
  3. /dev/disk/by-*

  4. +
  5. /dev

  6. +
+
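A minimal sketch of that setting, assuming the Debian ZoL packaging's /etc/default/zfs defaults file and a simple yes-style value:

# /etc/default/zfs
USE_DISK_BY_ID='yes'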
+
+

Import using cache file

+

If all of these imports fail (or if USE_DISK_BY_ID is unset), it will +then try to import using the cache file.

+
+
+

Last ditch attempt at importing

+

If that ALSO fails, it will try one more time, without any -d or -c +options.

+
+
+
+

Booting

+
+

Booting from snapshot:

+

Enter the snapshot for the root= parameter like in this example:

+
linux   /BOOT/debian@/boot/vmlinuz-5.10.0-9-amd64 root=ZFS=rpool/ROOT/debian@some_snapshot ro
+
+
+

This will clone the snapshot rpool/ROOT/debian@some_snapshot into the filesystem rpool/ROOT/debian_some_snapshot and use that as the root filesystem. The original filesystem and snapshot are left alone in this case.

+

BEWARE that it will first blindly destroy the rpool/ROOT/debian_some_snapshot filesystem before trying to clone the snapshot into it again. So if you’ve booted from the same snapshot previously and made some changes in that root filesystem, they will be undone by the destruction of the filesystem.

+
+
+

Snapshot rollback

+

From version 0.6.4-1-3 it is now also possible to specify rollback=1 to do a rollback of the snapshot instead of cloning it. BEWARE that this will destroy all snapshots taken after the specified snapshot!

+
+
+

Select snapshot dynamically

+

From version 0.6.4-1-3 it is now also possible to specify a NULL snapshot name (such as root=rpool/ROOT/debian@). If so, the initrd script will discover all snapshots below that filesystem (without the @ part) and output a list of snapshots for the user to choose from.

+
+
+

Booting from native encrypted filesystem

+

Although there is currently no support for native encryption in ZFS On Linux, there is a patch floating around ‘out there’, and the initrd supports loading the key and unlocking such an encrypted filesystem.

+
+
+

Separated filesystems

+
+

Descended filesystems

+

If there are separate filesystems (for example a separate dataset for /usr), the snapshot boot code will try to find the snapshot under each filesystem and clone (or roll back) them.

+

Example:

+
rpool/ROOT/debian@some_snapshot
+rpool/ROOT/debian/usr@some_snapshot
+
+
+

These will create the following filesystems respectively (if not doing a +rollback):

+
rpool/ROOT/debian_some_snapshot
+rpool/ROOT/debian/usr_some_snapshot
+
+
+

The initrd code will use the mountpoint option (if any) of the original dataset (without the snapshot part) to find where it should mount the dataset. Otherwise, it will use the name of the dataset below the root filesystem (rpool/ROOT/debian in this example) as the mount point.

+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/Debian Stretch Root on ZFS.html b/Getting Started/Debian/Debian Stretch Root on ZFS.html new file mode 100644 index 000000000..a56f45811 --- /dev/null +++ b/Getting Started/Debian/Debian Stretch Root on ZFS.html @@ -0,0 +1,1077 @@ + + + + + + + Debian Stretch Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian Stretch Root on ZFS

+ +
+

Overview

+
+

Newer release available

+ +
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of +memory is recommended for normal performance in basic workloads. If you +wish to use deduplication, you will need massive amounts of +RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports two different encryption options: unencrypted and +LUKS (full-disk encryption). ZFS native encryption has not yet been +released. With either option, all ZFS features are fully available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

LUKS encrypts almost everything: the OS, swap, home directories, and +anything else. The only unencrypted data is the bootloader, kernel, and +initrd. The system cannot boot without the passphrase being entered at +the console. Performance is good, but LUKS sits underneath ZFS, so if +multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+

1.1 Boot the Debian GNU/Linux Live CD. If prompted, login with the +username user and password live. Connect your system to the +Internet as appropriate (e.g. join your WiFi network).

+

1.2 Optional: Install and start the OpenSSH server in the Live CD +environment:

+

If you have a second system, using SSH to access the target system can +be convenient.

+
$ sudo apt update
+$ sudo apt install --yes openssh-server
+$ sudo systemctl restart ssh
+
+
+

Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh user@IP.

+

1.3 Become root:

+
$ sudo -i
+
+
+

1.4 Setup and update the repositories:

+
# echo deb http://deb.debian.org/debian stretch contrib >> /etc/apt/sources.list
+# echo deb http://deb.debian.org/debian stretch-backports main contrib >> /etc/apt/sources.list
+# apt update
+
+
+

1.5 Install ZFS in the Live CD environment:

+
# apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-amd64
+# apt install --yes -t stretch-backports zfs-dkms
+# modprobe zfs
+
+
+
    +
  • The dkms dependency is installed manually just so it comes from +stretch and not stretch-backports. This is not critical.

  • +
+
+
+

Step 2: Disk Formatting

+

2.1 If you are re-using a disk, clear it as necessary:

+
If the disk was previously used in an MD array, zero the superblock:
+# apt install --yes mdadm
+# mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1
+
+Clear the partition table:
+# sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1
+
+
+

2.2 Partition your disk(s):

+
Run this if you need legacy (BIOS) booting:
+# sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/disk/by-id/scsi-SATA_disk1
+
+Run this for UEFI booting (for use now or in the future):
+# sgdisk     -n2:1M:+512M   -t2:EF00 /dev/disk/by-id/scsi-SATA_disk1
+
+Run this for the boot pool:
+# sgdisk     -n3:0:+1G      -t3:BF01 /dev/disk/by-id/scsi-SATA_disk1
+
+
+

Choose one of the following options:

+

2.2a Unencrypted:

+
# sgdisk     -n4:0:0        -t4:BF01 /dev/disk/by-id/scsi-SATA_disk1
+
+
+

2.2b LUKS:

+
# sgdisk     -n4:0:0        -t4:8300 /dev/disk/by-id/scsi-SATA_disk1
+
+
+

Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

+

Hints:

+
    +
  • ls -la /dev/disk/by-id will list the aliases.

  • +
  • Are you doing this in a virtual machine? If your virtual disk is +missing from /dev/disk/by-id, use /dev/vda if you are using +KVM with virtio; otherwise, read the +troubleshooting section.

  • +
  • If you are creating a mirror or raidz topology, repeat the +partitioning commands for all the disks which will be part of the +pool.

  • +
+

2.3 Create the boot pool:

+
# zpool create -o ashift=12 -d \
+      -o feature@async_destroy=enabled \
+      -o feature@bookmarks=enabled \
+      -o feature@embedded_data=enabled \
+      -o feature@empty_bpobj=enabled \
+      -o feature@enabled_txg=enabled \
+      -o feature@extensible_dataset=enabled \
+      -o feature@filesystem_limits=enabled \
+      -o feature@hole_birth=enabled \
+      -o feature@large_blocks=enabled \
+      -o feature@lz4_compress=enabled \
+      -o feature@spacemap_histogram=enabled \
+      -o feature@userobj_accounting=enabled \
+      -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
+      -O normalization=formD -O relatime=on -O xattr=sa \
+      -O mountpoint=/ -R /mnt \
+      bpool /dev/disk/by-id/scsi-SATA_disk1-part3
+
+
+

You should not need to customize any of the options for the boot pool.

+

GRUB does not support all of the zpool features. See +spa_feature_names in +grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

+

Hints:

+
    +
  • If you are creating a mirror or raidz topology, create the pool using +zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3 +(or replace mirror with raidz, raidz2, or raidz3 and +list the partitions from additional disks).

  • +
  • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

  • +
+

2.4 Create the root pool:

+

Choose one of the following options:

+

2.4a Unencrypted:

+
# zpool create -o ashift=12 \
+      -O acltype=posixacl -O canmount=off -O compression=lz4 \
+      -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
+      -O mountpoint=/ -R /mnt \
+      rpool /dev/disk/by-id/scsi-SATA_disk1-part4
+
+
+

2.4b LUKS:

+
# apt install --yes cryptsetup
+# cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 \
+      /dev/disk/by-id/scsi-SATA_disk1-part4
+# cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# zpool create -o ashift=12 \
+      -O acltype=posixacl -O canmount=off -O compression=lz4 \
+      -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
+      -O mountpoint=/ -R /mnt \
+      rpool /dev/mapper/luks1
+
+
+
    +
  • The use of ashift=12 is recommended here because many drives +today have 4KiB (or larger) physical sectors, even though they +present 512B logical sectors. Also, a future replacement drive may +have 4KiB physical sectors (in which case ashift=12 is desirable) +or 4KiB logical sectors (in which case ashift=12 is required).

  • +
  • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires +ACLs

  • +
  • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only +filenames.

  • +
  • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s +documentation +for further information.

  • +
  • Setting xattr=sa vastly improves the performance of extended attributes. Inside ZFS, extended attributes are used to implement POSIX ACLs. Extended attributes can also be used by user-space applications. They are used by some desktop GUI applications. They can be used by Samba to store Windows ACLs and DOS attributes; they are required for a Samba Active Directory domain controller. Note that xattr=sa is Linux-specific (https://openzfs.org/wiki/Platform_code_differences). If you move your xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, extended attributes will not be readable (though your data will be). If portability of extended attributes is important to you, omit the -O xattr=sa above. Even if you do not want xattr=sa for the whole pool, it is probably fine to use it for /var/log.

  • +
  • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

  • +
  • For LUKS, the key size chosen is 512 bits. However, XTS mode requires +two keys, so the LUKS key is split in half. Thus, -s 512 means +AES-256.

  • +
  • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup +FAQ +for guidance.

  • +
+

Hints:

+
    +
  • If you are creating a mirror or raidz topology, create the pool using +zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4 +(or replace mirror with raidz, raidz2, or raidz3 and +list the partitions from additional disks). For LUKS, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will +have to create using cryptsetup.

  • +
  • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the +root pool is named rpool by default.

  • +
+
+
+

Step 3: System Installation

+

3.1 Create filesystem datasets to act as containers:

+
# zfs create -o canmount=off -o mountpoint=none rpool/ROOT
+# zfs create -o canmount=off -o mountpoint=none bpool/BOOT
+
+
+

On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality for APT is possible but currently +unimplemented. Even without such a tool, it can still be used for +manually created clones.

+

3.2 Create filesystem datasets for the root and boot filesystems:

+
# zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
+# zfs mount rpool/ROOT/debian
+
+# zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/debian
+# zfs mount bpool/BOOT/debian
+
+
+

With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

+

3.3 Create datasets:

+
# zfs create                                 rpool/home
+# zfs create -o mountpoint=/root             rpool/home/root
+# zfs create -o canmount=off                 rpool/var
+# zfs create -o canmount=off                 rpool/var/lib
+# zfs create                                 rpool/var/log
+# zfs create                                 rpool/var/spool
+
+The datasets below are optional, depending on your preferences and/or
+software choices:
+
+If you wish to exclude these from snapshots:
+# zfs create -o com.sun:auto-snapshot=false  rpool/var/cache
+# zfs create -o com.sun:auto-snapshot=false  rpool/var/tmp
+# chmod 1777 /mnt/var/tmp
+
+If you use /opt on this system:
+# zfs create                                 rpool/opt
+
+If you use /srv on this system:
+# zfs create                                 rpool/srv
+
+If you use /usr/local on this system:
+# zfs create -o canmount=off                 rpool/usr
+# zfs create                                 rpool/usr/local
+
+If this system will have games installed:
+# zfs create                                 rpool/var/games
+
+If this system will store local email in /var/mail:
+# zfs create                                 rpool/var/mail
+
+If this system will use Snap packages:
+# zfs create                                 rpool/var/snap
+
+If you use /var/www on this system:
+# zfs create                                 rpool/var/www
+
+If this system will use GNOME:
+# zfs create                                 rpool/var/lib/AccountsService
+
+If this system will use Docker (which manages its own datasets & snapshots):
+# zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/docker
+
+If this system will use NFS (locking):
+# zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
+
+A tmpfs is recommended later, but if you want a separate dataset for /tmp:
+# zfs create -o com.sun:auto-snapshot=false  rpool/tmp
+# chmod 1777 /mnt/tmp
+
+
+

The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling back user data such as logs (in /var/log). This will be especially important if/when a beadm or similar utility is integrated. The com.sun:auto-snapshot setting is used by some ZFS snapshot utilities to exclude transient data.

+

If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for +/tmp, as shown above. This keeps the /tmp data out of snapshots +of your root filesystem. It also allows you to set a quota on +rpool/tmp, if you want to limit the maximum space used. Otherwise, +you can use a tmpfs (RAM filesystem) later.

+

3.4 Install the minimal system:

+
# debootstrap stretch /mnt
+# zfs set devices=off rpool
+
+
+

The debootstrap command leaves the new system in an unconfigured +state. An alternative to using debootstrap is to copy the entirety +of a working system into the new ZFS root.

+
+
+

Step 4: System Configuration

+

4.1 Configure the hostname (change HOSTNAME to the desired +hostname).

+
# echo HOSTNAME > /mnt/etc/hostname
+
+# vi /mnt/etc/hosts
+Add a line:
+127.0.1.1       HOSTNAME
+or if the system has a real name in DNS:
+127.0.1.1       FQDN HOSTNAME
+
+
+

Hint: Use nano if you find vi confusing.

+

4.2 Configure the network interface:

+
Find the interface name:
+# ip addr show
+
+# vi /mnt/etc/network/interfaces.d/NAME
+auto NAME
+iface NAME inet dhcp
+
+
+

Customize this file if the system is not a DHCP client.

+

4.3 Configure the package sources:

+
# vi /mnt/etc/apt/sources.list
+deb http://deb.debian.org/debian stretch main contrib
+deb-src http://deb.debian.org/debian stretch main contrib
+deb http://security.debian.org/debian-security stretch/updates main contrib
+deb-src http://security.debian.org/debian-security stretch/updates main contrib
+deb http://deb.debian.org/debian stretch-updates main contrib
+deb-src http://deb.debian.org/debian stretch-updates main contrib
+
+# vi /mnt/etc/apt/sources.list.d/stretch-backports.list
+deb http://deb.debian.org/debian stretch-backports main contrib
+deb-src http://deb.debian.org/debian stretch-backports main contrib
+
+# vi /mnt/etc/apt/preferences.d/90_zfs
+Package: src:zfs-linux
+Pin: release n=stretch-backports
+Pin-Priority: 990
+
+
+

4.4 Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

+
# mount --rbind /dev  /mnt/dev
+# mount --rbind /proc /mnt/proc
+# mount --rbind /sys  /mnt/sys
+# chroot /mnt /bin/bash --login
+
+
+

Note: This is using --rbind, not --bind.

+

4.5 Configure a basic system environment:

+
# ln -s /proc/self/mounts /etc/mtab
+# apt update
+
+# apt install --yes locales
+# dpkg-reconfigure locales
+
+
+

Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available.

+
# dpkg-reconfigure tzdata
+
+
+

4.6 Install ZFS in the chroot environment for the new system:

+
# apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
+# apt install --yes zfs-initramfs
+
+
+

4.7 For LUKS installs only, setup crypttab:

+
# apt install --yes cryptsetup
+
+# echo luks1 UUID=$(blkid -s UUID -o value \
+      /dev/disk/by-id/scsi-SATA_disk1-part4) none \
+      luks,discard,initramfs > /etc/crypttab
+
+
+ +

Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

+
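For example, a sketch of the corresponding luks2 entry for the second disk (note the >> append, which keeps the existing luks1 line):

# echo luks2 UUID=$(blkid -s UUID -o value \
      /dev/disk/by-id/scsi-SATA_disk2-part4) none \
      luks,discard,initramfs >> /etc/crypttab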

4.8 Install GRUB

+

Choose one of the following options:

+

4.8a Install GRUB for legacy (BIOS) booting

+
# apt install --yes grub-pc
+
+
+

Install GRUB to the disk(s), not the partition(s).

+

4.8b Install GRUB for UEFI booting

+
# apt install dosfstools
+# mkdosfs -F 32 -s 1 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part2
+# mkdir /boot/efi
+# echo PARTUUID=$(blkid -s PARTUUID -o value \
+      /dev/disk/by-id/scsi-SATA_disk1-part2) \
+      /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
+# mount /boot/efi
+# apt install --yes grub-efi-amd64 shim
+
+
+
    +
  • The -s 1 for mkdosfs is only necessary for drives which +present 4 KiB logical sectors (“4Kn” drives) to meet the minimum +cluster size (given the partition size of 512 MiB) for FAT32. It also +works fine on drives which present 512 B sectors.

  • +
+

Note: If you are creating a mirror or raidz topology, this step only +installs GRUB on the first disk. The other disk(s) will be handled +later.

+

4.9 Set a root password

+
# passwd
+
+
+

4.10 Enable importing bpool

+

This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

+
# vi /etc/systemd/system/zfs-import-bpool.service
+[Unit]
+DefaultDependencies=no
+Before=zfs-import-scan.service
+Before=zfs-import-cache.service
+
+[Service]
+Type=oneshot
+RemainAfterExit=yes
+ExecStart=/sbin/zpool import -N -o cachefile=none bpool
+
+[Install]
+WantedBy=zfs-import.target
+
+# systemctl enable zfs-import-bpool.service
+
+
+

4.11 Optional (but recommended): Mount a tmpfs to /tmp

+

If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

+
# cp /usr/share/systemd/tmp.mount /etc/systemd/system/
+# systemctl enable tmp.mount
+
+
+

4.12 Optional (but kindly requested): Install popcon

+

The popularity-contest package reports the list of packages installed on your system. Showing that ZFS is popular may be helpful in terms of long-term attention from the distro.

+
# apt install --yes popularity-contest
+
+
+

Choose Yes at the prompt.

+
+
+

Step 5: GRUB Installation

+

5.1 Verify that the ZFS boot filesystem is recognized:

+
# grub-probe /boot
+zfs
+
+
+

5.2 Refresh the initrd files:

+
# update-initramfs -u -k all
+update-initramfs: Generating /boot/initrd.img-4.9.0-8-amd64
+
+
+

Note: When using LUKS, this will print “WARNING could not determine +root device from /etc/fstab”. This is because cryptsetup does not +support +ZFS.

+

5.3 Workaround GRUB’s missing zpool-features support:

+
# vi /etc/default/grub
+Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
+
+
+

5.4 Optional (but highly recommended): Make debugging GRUB easier:

+
# vi /etc/default/grub
+Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
+Uncomment: GRUB_TERMINAL=console
+Save and quit.
+
+
+

Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

+

5.5 Update the boot configuration:

+
# update-grub
+Generating grub configuration file ...
+Found linux image: /boot/vmlinuz-4.9.0-8-amd64
+Found initrd image: /boot/initrd.img-4.9.0-8-amd64
+done
+
+
+

Note: Ignore errors from osprober, if present.

+

5.6 Install the boot loader

+

5.6a For legacy (BIOS) booting, install GRUB to the MBR:

+
# grub-install /dev/disk/by-id/scsi-SATA_disk1
+Installing for i386-pc platform.
+Installation finished. No error reported.
+
+
+

Do not reboot the computer until you get exactly that result message. +Note that you are installing GRUB to the whole disk, not a partition.

+

If you are creating a mirror or raidz topology, repeat the +grub-install command for each disk in the pool.

+

5.6b For UEFI booting, install GRUB:

+
# grub-install --target=x86_64-efi --efi-directory=/boot/efi \
+      --bootloader-id=debian --recheck --no-floppy
+
+
+

5.7 Verify that the ZFS module is installed:

+
# ls /boot/grub/*/zfs.mod
+
+
+

5.8 Fix filesystem mount ordering

+

Until ZFS gains a systemd mount +generator, there are +races between mounting filesystems and starting certain daemons. In +practice, the issues (e.g. +#5754) seem to be +with certain filesystems in /var, specifically /var/log and +/var/tmp. Setting these to use legacy mounting, and listing them +in /etc/fstab makes systemd aware that these are separate +mountpoints. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp +feature of systemd automatically use After=var-tmp.mount.

+

Until there is support for mounting /boot in the initramfs, we also +need to mount that, because it was marked canmount=noauto. Also, +with UEFI, we need to ensure it is mounted before its child filesystem +/boot/efi.

+

rpool is guaranteed to be imported by the initramfs, so there is no +point in adding x-systemd.requires=zfs-import.target to those +filesystems.

+
For UEFI booting, unmount /boot/efi first:
+# umount /boot/efi
+
+Everything else applies to both BIOS and UEFI booting:
+
+# zfs set mountpoint=legacy bpool/BOOT/debian
+# echo bpool/BOOT/debian /boot zfs \
+      nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
+
+# zfs set mountpoint=legacy rpool/var/log
+# echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab
+
+# zfs set mountpoint=legacy rpool/var/spool
+# echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab
+
+If you created a /var/tmp dataset:
+# zfs set mountpoint=legacy rpool/var/tmp
+# echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab
+
+If you created a /tmp dataset:
+# zfs set mountpoint=legacy rpool/tmp
+# echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab
+
+
+
+
+

Step 6: First Boot

+

6.1 Snapshot the initial installation:

+
# zfs snapshot bpool/BOOT/debian@install
+# zfs snapshot rpool/ROOT/debian@install
+
+
+

In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

+
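For example, hypothetical pre-upgrade snapshots (the snapshot name is arbitrary), taken before an upgrade and destroyed later exactly as in step 9.2 once you no longer need them:

# zfs snapshot bpool/BOOT/debian@pre-upgrade
# zfs snapshot rpool/ROOT/debian@pre-upgrade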

6.2 Exit from the chroot environment back to the LiveCD environment:

+
# exit
+
+
+

6.3 Run these commands in the LiveCD environment to unmount all +filesystems:

+
# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
+# zpool export -a
+
+
+

6.4 Reboot:

+
# reboot
+
+
+

6.5 Wait for the newly installed system to boot normally. Login as root.

+

6.6 Create a user account:

+
# zfs create rpool/home/YOURUSERNAME
+# adduser YOURUSERNAME
+# cp -a /etc/skel/.[!.]* /home/YOURUSERNAME
+# chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME
+
+
+

6.7 Add your user account to the default set of groups for an +administrator:

+
# usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME
+
+
+

6.8 Mirror GRUB

+

If you installed to multiple disks, install GRUB on the additional +disks:

+

6.8a For legacy (BIOS) booting:

+
# dpkg-reconfigure grub-pc
+Hit enter until you get to the device selection screen.
+Select (using the space bar) all of the disks (not partitions) in your pool.
+
+
+

6.8b UEFI

+
# umount /boot/efi
+
+For the second and subsequent disks (increment debian-2 to -3, etc.):
+# dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
+     of=/dev/disk/by-id/scsi-SATA_disk2-part2
+# efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
+      -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
+
+# mount /boot/efi
+
+
+
+
+

Step 7: (Optional) Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. This issue is currently being investigated in: +https://github.com/zfsonlinux/zfs/issues/7734

+

7.1 Create a volume dataset (zvol) for use as a swap device:

+
# zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
+      -o logbias=throughput -o sync=always \
+      -o primarycache=metadata -o secondarycache=none \
+      -o com.sun:auto-snapshot=false rpool/swap
+
+
+

You can adjust the size (the 4G part) to your needs.

+

The compression algorithm is set to zle because it is the cheapest +available algorithm. As this guide recommends ashift=12 (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior.

+

7.2 Configure the swap device:

+

Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

+
# mkswap -f /dev/zvol/rpool/swap
+# echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
+# echo RESUME=none > /etc/initramfs-tools/conf.d/resume
+
+
+

The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

+

7.3 Enable the swap device:

+
# swapon -av
+
+
+
+
+

Step 8: Full Software Installation

+

8.1 Upgrade the minimal system:

+
# apt dist-upgrade --yes
+
+
+

8.2 Install a regular set of software:

+
# tasksel
+
+
+

Note: This will check “Debian desktop environment” and “print server” +by default. If you want a server installation, unselect those.

+

8.3 Optional: Disable log compression:

+

As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. +Also, if you are making snapshots of /var/log, logrotate’s +compression will actually waste space, as the uncompressed data will +live on in the snapshot. You can edit the files in /etc/logrotate.d +by hand to comment out compress, or use this loop (copy-and-paste +highly recommended):

+
# for file in /etc/logrotate.d/* ; do
+    if grep -Eq "(^|[^#y])compress" "$file" ; then
+        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
+    fi
+done
+
+
+

8.4 Reboot:

+
# reboot
+
+
+
+

Step 9: Final Cleanup

+

9.1 Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

+

9.2 Optional: Delete the snapshots of the initial installation:

+
$ sudo zfs destroy bpool/BOOT/debian@install
+$ sudo zfs destroy rpool/ROOT/debian@install
+
+
+

9.3 Optional: Disable the root password

+
$ sudo usermod -p '*' root
+
+
+

9.4 Optional: Re-enable the graphical boot process:

+

If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

+
$ sudo vi /etc/default/grub
+Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
+Comment out GRUB_TERMINAL=console
+Save and quit.
+
+$ sudo update-grub
+
+
+

Note: Ignore errors from osprober, if present.

+

9.5 Optional: For LUKS installs only, backup the LUKS header:

+
$ sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
+    --header-backup-file luks1-header.dat
+
+
+

Store that backup somewhere safe (e.g. cloud storage). It is protected +by your LUKS passphrase, but you may wish to use additional encryption.

+

Hint: If you created a mirror or raidz topology, repeat this for +each LUKS volume (luks2, etc.).

+
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install +Environment.

+

This will automatically import your pool. Export it and re-import it to +get the mounts right:

+
For LUKS, first unlock the disk(s):
+# apt install --yes cryptsetup
+# cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+Repeat for additional disks, if this is a mirror or raidz topology.
+
+# zpool export -a
+# zpool import -N -R /mnt rpool
+# zpool import -N -R /mnt bpool
+# zfs mount rpool/ROOT/debian
+# zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
# mount --rbind /dev  /mnt/dev
+# mount --rbind /proc /mnt/proc
+# mount --rbind /sys  /mnt/sys
+# chroot /mnt /bin/bash --login
+# mount /boot/efi
+# mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
# exit
+# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
+# zpool export -a
+# reboot
+
+
+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that +does slow asynchronous drive initialization, like some IBM M1015 or +OEM-branded cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to +the Linux kernel until after the regular system is started, and ZoL does +not hotplug pool members. See +https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run +update-initramfs -u -k all.

+

Upgrade or downgrade the Areca driver if something like RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 appears anywhere in the kernel log. ZoL is unstable on systems that emit this error message.

+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere +configuration. Doing this ensures that /dev/disk aliases are +created in the guest.

  • +
+
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
$ sudo apt install ovmf
+$ sudo vi /etc/libvirt/qemu.conf
+Uncomment these lines:
+nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd"
+]
+$ sudo service libvirt-bin restart
+
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/index.html b/Getting Started/Debian/index.html new file mode 100644 index 000000000..ae3018776 --- /dev/null +++ b/Getting Started/Debian/index.html @@ -0,0 +1,209 @@ + + + + + + + Debian — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian

+ +
+

Installation

+

If you want to use ZFS as your root filesystem, see the Root on ZFS +links below instead.

+

ZFS packages are included in the contrib repository. The +backports repository +often provides newer releases of ZFS. You can use it as follows.

+

Add the backports repository:

+
vi /etc/apt/sources.list.d/bookworm-backports.list
+
+
+
deb http://deb.debian.org/debian bookworm-backports main contrib
+deb-src http://deb.debian.org/debian bookworm-backports main contrib
+
+
+
vi /etc/apt/preferences.d/90_zfs
+
+
+
Package: src:zfs-linux
+Pin: release n=bookworm-backports
+Pin-Priority: 990
+
+
+

Install the packages:

+
apt update
+apt install dpkg-dev linux-headers-generic linux-image-generic
+apt install zfs-dkms zfsutils-linux
+
+
+

Caution: In a poorly configured environment (e.g. certain VM or container consoles), apt may fail to notice that a real console is unavailable when it tries to pop up a message on first install, and instead appear to hang indefinitely. To circumvent this, you can prefix the apt install commands with DEBIAN_FRONTEND=noninteractive, like this:

+
DEBIAN_FRONTEND=noninteractive apt install zfs-dkms zfsutils-linux
+
+
+
+
+

Root on ZFS

+ +
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Fedora.html b/Getting Started/Fedora.html new file mode 100644 index 000000000..85c27d311 --- /dev/null +++ b/Getting Started/Fedora.html @@ -0,0 +1,116 @@ + + + + + + + Fedora — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Fedora

+

This page has been moved here.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Fedora/Root on ZFS.html b/Getting Started/Fedora/Root on ZFS.html new file mode 100644 index 000000000..1dc7f8eaa --- /dev/null +++ b/Getting Started/Fedora/Root on ZFS.html @@ -0,0 +1,709 @@ + + + + + + + Fedora Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Fedora Root on ZFS

+

ZFSBootMenu

+

This tutorial is based on the GRUB bootloader. Due to its independent implementation of a read-only ZFS driver, GRUB only supports a subset of ZFS features on the boot pool. [In general, bootloaders treat disks as read-only to minimize the risk of damaging on-disk data.]

+

ZFSBootMenu is an alternative bootloader that is free of such limitations and has support for boot environments. Do not follow the instructions on this page if you plan to use ZBM, as the layouts are not compatible. Refer to their site for installation details.

+

Customization

+

Unless stated otherwise, it is not recommended to customize system +configuration before reboot.

+

Only use well-tested pool features

+

You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, this comment.

+
+

Preparation

+
    +
  1. Disable Secure Boot. ZFS modules cannot be loaded if Secure Boot is enabled.

  2. +
  3. Because the kernel of the latest Live CD might be incompatible with ZFS, we will use Alpine Linux Extended, which ships with ZFS by default.

    +

    Download the latest extended variant of the Alpine Linux live image, verify the checksum, and boot from it.

    +
    gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc
    +
    +dd if=input-file of=output-file bs=1M
    +
    +
    +
  4. +
  5. Login as root user. There is no password.

  6. +
  7. Configure Internet

    +
    setup-interfaces -r
    +# You must use "-r" option to start networking services properly
    +# example:
    +network interface: wlan0
    +WiFi name:         <ssid>
    +ip address:        dhcp
    +<enter done to finish network config>
    +manual netconfig:  n
    +
    +
    +
  8. +
  9. If you are using a wireless network and it is not shown, see the Alpine Linux wiki for further details. wpa_supplicant can be installed with apk add wpa_supplicant without an internet connection.

  10. +
  11. Configure SSH server

    +
    setup-sshd
    +# example:
    +ssh server:        openssh
    +allow root:        "prohibit-password" or "yes"
    +ssh key:           "none" or "<public key>"
    +
    +
    +
  12. +
  13. Set root password or /root/.ssh/authorized_keys.

  14. +
  15. Connect from another computer

    +
    ssh root@192.168.1.91
    +
    +
    +
  16. +
  17. Configure NTP client for time synchronization

    +
    setup-ntp busybox
    +
    +
    +
  18. +
  19. Set up apk-repo. A list of available mirrors is shown. Press the space bar to continue.

    +
    setup-apkrepos
    +
    +
    +
  20. +
  21. Throughout this guide, we use predictable disk names generated by +udev

    +
    apk update
    +apk add eudev
    +setup-devd udev
    +
    +
    +
  22. +
  23. Target disk

    +

    List available disks with

    +
    find /dev/disk/by-id/
    +
    +
    +

    If virtio is used as the disk bus, power off the VM and set serial numbers for the disks. For QEMU, use -drive format=raw,file=disk2.img,serial=AaBb. For libvirt, edit the domain XML. See this page for examples.

    +

    Declare disk array

    +
    DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
    +
    +
    +

    For single disk installation, use

    +
    DISK='/dev/disk/by-id/disk1'
    +
    +
    +
  24. +
  25. Set a mount point

    +
    MNT=$(mktemp -d)
    +
    +
    +
  26. +
  27. Set partition size:

    +

    Set swap size in GB, set to 1 if you don’t want swap to +take up too much space

    +
    SWAPSIZE=4
    +
    +
    +

    Set how much space should be left at the end of the disk, minimum 1GB

    +
    RESERVE=1
    +
    +
    +
  28. +
  29. Install ZFS support from live media:

    +
    apk add zfs
    +
    +
    +
  30. +
  31. Install partition tool

    +
    apk add parted e2fsprogs cryptsetup util-linux
    +
    +
    +
  32. +
+
+
+

System Installation

+
    +
  1. Partition the disks.

    +

    Note: you must clear all existing partition tables and data structures from target disks.

    +

    For flash-based storage, this can be done by the blkdiscard command below:

    +
    partition_disk () {
    + local disk="${1}"
    + blkdiscard -f "${disk}" || true
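    + # Not part of the original guide: on media where blkdiscard does not
    + # apply (e.g. non-flash disks), wipefs --all "${disk}" from util-linux
    + # can be used to clear old filesystem and partition signatures instead.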
    +
    + parted --script --align=optimal  "${disk}" -- \
    + mklabel gpt \
    + mkpart EFI 2MiB 1GiB \
    + mkpart bpool 1GiB 5GiB \
    + mkpart rpool 5GiB -$((SWAPSIZE + RESERVE))GiB \
    + mkpart swap  -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \
    + mkpart BIOS 1MiB 2MiB \
    + set 1 esp on \
    + set 5 bios_grub on \
    + set 5 legacy_boot on
    +
    + partprobe "${disk}"
    +}
    +
    +for i in ${DISK}; do
    +   partition_disk "${i}"
    +done
    +
    +
    +
  2. +
  3. Setup encrypted swap. This is useful if the available memory is +small:

    +
    for i in ${DISK}; do
    +   cryptsetup open --type plain --key-file /dev/random "${i}"-part4 "${i##*/}"-part4
    +   mkswap /dev/mapper/"${i##*/}"-part4
    +   swapon /dev/mapper/"${i##*/}"-part4
    +done
    +
    +
    +
  4. +
  5. Load ZFS kernel module

    +
    modprobe zfs
    +
    +
    +
  6. +
  7. Create boot pool

    +
    # shellcheck disable=SC2046
    +zpool create -o compatibility=legacy  \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -O acltype=posixacl \
    +    -O canmount=off \
    +    -O devices=off \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O xattr=sa \
    +    -O mountpoint=/boot \
    +    -R "${MNT}" \
    +    bpool \
    +           mirror \
    +    $(for i in ${DISK}; do
    +       printf '%s ' "${i}-part2";
    +      done)
    +
    +
    +

    If not using a multi-disk setup, remove mirror.

    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features.

    +
  8. +
  9. Create root pool

    +
    # shellcheck disable=SC2046
    +zpool create \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -R "${MNT}" \
    +    -O acltype=posixacl \
    +    -O canmount=off \
    +    -O compression=zstd \
    +    -O dnodesize=auto \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O xattr=sa \
    +    -O mountpoint=/ \
    +    rpool \
    +    mirror \
    +   $(for i in ${DISK}; do
    +      printf '%s ' "${i}-part3";
    +     done)
    +
    +
    +

    If not using a multi-disk setup, remove mirror.

    +
  10. +
  11. Create root system container:

    +
      +
    • Unencrypted

      +
      zfs create \
      + -o canmount=off \
      + -o mountpoint=none \
      +rpool/fedora
      +
      +
      +
    • +
    • Encrypted:

      +

      Avoid ZFS send/recv when using native encryption; see a ZFS developer's comment on this issue and this spreadsheet of bugs. A LUKS-based guide has yet to be written. Once the encryption key is compromised, changing the passphrase will not keep your +data safe. See zfs-change-key(8) for more info.

      +
      zfs create \
      +  -o canmount=off \
      +         -o mountpoint=none \
      +         -o encryption=on \
      +         -o keylocation=prompt \
      +         -o keyformat=passphrase \
      +rpool/fedora
      +
      +
      +
    • +
    +

    You can automate this step (insecure) with: echo POOLPASS | zfs create ....

    +

    Create system datasets, +manage mountpoints with mountpoint=legacy

    +
    zfs create -o canmount=noauto -o mountpoint=/  rpool/fedora/root
    +zfs mount rpool/fedora/root
    +zfs create -o mountpoint=legacy rpool/fedora/home
    +mkdir "${MNT}"/home
    +mount -t zfs rpool/fedora/home "${MNT}"/home
    +zfs create -o mountpoint=legacy  rpool/fedora/var
    +zfs create -o mountpoint=legacy rpool/fedora/var/lib
    +zfs create -o mountpoint=legacy rpool/fedora/var/log
    +zfs create -o mountpoint=none bpool/fedora
    +zfs create -o mountpoint=legacy bpool/fedora/root
    +mkdir "${MNT}"/boot
    +mount -t zfs bpool/fedora/root "${MNT}"/boot
    +mkdir -p "${MNT}"/var/log
    +mkdir -p "${MNT}"/var/lib
    +mount -t zfs rpool/fedora/var/lib "${MNT}"/var/lib
    +mount -t zfs rpool/fedora/var/log "${MNT}"/var/log
    +
    +
    +
  12. +
  13. Format and mount ESP

    +
    for i in ${DISK}; do
    + mkfs.vfat -n EFI "${i}"-part1
    + mkdir -p "${MNT}"/boot/efis/"${i##*/}"-part1
    + mount -t vfat -o iocharset=iso8859-1 "${i}"-part1 "${MNT}"/boot/efis/"${i##*/}"-part1
    +done
    +
    +mkdir -p "${MNT}"/boot/efi
    +mount -t vfat -o iocharset=iso8859-1 "$(echo "${DISK}" | sed "s|^ *||"  | cut -f1 -d' '|| true)"-part1 "${MNT}"/boot/efi
    +
    +
    +
  14. +
+
+
+

System Configuration

+
    +
  1. Download and extract minimal Fedora root filesystem:

    +
    apk add curl
    +curl --fail-early --fail -L \
    +https://dl.fedoraproject.org/pub/fedora/linux/releases/38/Container/x86_64/images/Fedora-Container-Base-38-1.6.x86_64.tar.xz \
    +-o rootfs.tar.gz
    +curl --fail-early --fail -L \
    +https://dl.fedoraproject.org/pub/fedora/linux/releases/38/Container/x86_64/images/Fedora-Container-38-1.6-x86_64-CHECKSUM \
    +-o checksum
    +
    +# BusyBox sha256sum treats all lines in the checksum file
    +# as checksums and requires two spaces "  "
    +# between filename and checksum
    +
    +grep 'Container-Base' checksum \
    +| grep '^SHA256' \
    +| sed -E 's|.*= ([a-z0-9]*)$|\1  rootfs.tar.gz|' > ./sha256checksum
    +
    +sha256sum -c ./sha256checksum
    +
    +rootfs_tar=$(tar t -af rootfs.tar.gz | grep layer.tar)
    +rootfs_tar_dir=$(dirname "${rootfs_tar}")
    +tar x -af rootfs.tar.gz "${rootfs_tar}"
    +ln -s "${MNT}" "${MNT}"/"${rootfs_tar_dir}"
    +tar x  -C "${MNT}" -af "${rootfs_tar}"
    +unlink "${MNT}"/"${rootfs_tar_dir}"
    +
    +
    +
  2. +
  3. Enable community repo

    +
    sed -i '/edge/d' /etc/apk/repositories
    +sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories
    +
    +
    +
  4. +
  5. Generate fstab:

    +
    apk add arch-install-scripts
    +genfstab -t PARTUUID "${MNT}" \
    +| grep -v swap \
    +| sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \
    +> "${MNT}"/etc/fstab
    +
    +
    +
  6. +
  7. Chroot

    +
    cp /etc/resolv.conf "${MNT}"/etc/resolv.conf
    +for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done
    +chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash
    +
    +
    +
  8. +
  9. Unset all shell aliases, which can interfere with installation:

    +
    unalias -a
    +
    +
    +
  10. +
  11. Install base packages

    +
    dnf -y install @core grub2-efi-x64 \
    +grub2-pc grub2-pc-modules grub2-efi-x64-modules shim-x64  \
    +efibootmgr kernel kernel-devel
    +
    +
    +
  12. +
  13. Install ZFS packages

    +
    dnf -y install \
    +https://zfsonlinux.org/fedora/zfs-release-2-3"$(rpm --eval "%{dist}"||true)".noarch.rpm
    +
    +dnf -y install zfs zfs-dracut
    +
    +
    +
  14. +
  15. Check whether ZFS modules are successfully built

    +
    tail -n10 /var/lib/dkms/zfs/**/build/make.log
    +
    +# ERROR: modpost: GPL-incompatible module zfs.ko uses GPL-only symbol 'bio_start_io_acct'
    +# ERROR: modpost: GPL-incompatible module zfs.ko uses GPL-only symbol 'bio_end_io_acct_remapped'
    +# make[4]:  [scripts/Makefile.modpost:138: /var/lib/dkms/zfs/2.1.9/build/module/Module.symvers] Error 1
    +# make[3]:  [Makefile:1977: modpost] Error 2
    +# make[3]: Leaving directory '/usr/src/kernels/6.2.9-100.fc36.x86_64'
    +# make[2]:  [Makefile:55: modules-Linux] Error 2
    +# make[2]: Leaving directory '/var/lib/dkms/zfs/2.1.9/build/module'
    +# make[1]:  [Makefile:933: all-recursive] Error 1
    +# make[1]: Leaving directory '/var/lib/dkms/zfs/2.1.9/build'
    +# make:  [Makefile:794: all] Error 2
    +
    +
    +

    If the build failed, you need to install a Long Term Support +kernel and its headers, then rebuild the ZFS module

    +
    # this is a third-party repo!
    +# you have been warned.
    +#
    +# select a kernel from
    +# https://copr.fedorainfracloud.org/coprs/kwizart/
    +
    +dnf copr enable -y kwizart/kernel-longterm-VERSION
    +dnf install -y kernel-longterm kernel-longterm-devel
    +dnf remove -y kernel-core
    +
    +
    +

    ZFS modules will be built as part of the kernel installation. +Check build log again with tail command.

    +
  16. +
  17. Add zfs modules to dracut

    +
    echo 'add_dracutmodules+=" zfs "' >> /etc/dracut.conf.d/zfs.conf
    +echo 'force_drivers+=" zfs "' >> /etc/dracut.conf.d/zfs.conf
    +
    +
    +
  18. +
  19. Add other drivers to dracut:

    +
    if grep mpt3sas /proc/modules; then
    +  echo 'force_drivers+=" mpt3sas "'  >> /etc/dracut.conf.d/zfs.conf
    +fi
    +if grep virtio_blk /proc/modules; then
    +  echo 'filesystems+=" virtio_blk "' >> /etc/dracut.conf.d/fs.conf
    +fi
    +
    +
    +
  20. +
  21. Build initrd

    +
    find -D exec /lib/modules -maxdepth 1 \
    +-mindepth 1 -type d \
    +-exec sh -vxc \
    +'if test -e "$1"/modules.dep;
    +   then kernel=$(basename "$1");
    +   dracut --verbose --force --kver "${kernel}";
    + fi' sh {} \;
    +
    +
    +
  22. +
  23. For SELinux, relabel filesystem on reboot:

    +
    fixfiles -F onboot
    +
    +
    +
  24. +
  25. Enable internet time synchronisation:

    +
    systemctl enable systemd-timesyncd
    +
    +
    +
  26. +
  27. Generate host id

    +
    zgenhostid -f -o /etc/hostid
    +
    +
    +
  28. +
  29. Install locale package, example for English locale:

    +
    dnf install -y glibc-minimal-langpack glibc-langpack-en
    +
    +
    +
  30. +
  31. Set locale, keymap, timezone, hostname

    +
    rm -f /etc/localtime
    +rm -f /etc/hostname
    +systemd-firstboot \
    +--force \
    +--locale=en_US.UTF-8 \
    +--timezone=Etc/UTC \
    +--hostname=testhost \
    +--keymap=us || true
    +
    +
    +
  32. +
  33. Set root passwd

    +
    printf 'root:yourpassword' | chpasswd
    +
    +
    +
  34. +
+
+
+

Bootloader

+
    +
  1. Apply GRUB workaround

    +
    echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile.d/zpool_vdev_name_path.sh
    +# shellcheck disable=SC1091
    +. /etc/profile.d/zpool_vdev_name_path.sh
    +
    +# GRUB fails to detect rpool name, hard code as "rpool"
    +sed -i "s|rpool=.*|rpool=rpool|"  /etc/grub.d/10_linux
    +
    +
    +

    This workaround needs to be applied for every GRUB update, as the +update will overwrite the changes.

    +
  2. +
  3. Fedora and RHEL use the Boot Loader Specification (BLS) module for GRUB, +which does not support ZFS. Disable it:

    +
    echo 'GRUB_ENABLE_BLSCFG=false' >> /etc/default/grub
    +
    +
    +

    This means that you need to regenerate the GRUB menu and mirror it to the other boot locations +after every kernel update, otherwise the computer will still boot the old +kernel on reboot.

    +
  4. +
  5. Install GRUB:

    +
    mkdir -p /boot/efi/fedora/grub-bootdir/i386-pc/
    +for i in ${DISK}; do
    + grub2-install --target=i386-pc --boot-directory \
    +     /boot/efi/fedora/grub-bootdir/i386-pc/  "${i}"
    +done
    +dnf reinstall -y grub2-efi-x64 shim-x64
    +cp -r /usr/lib/grub/x86_64-efi/ /boot/efi/EFI/fedora/
    +
    +
    +
  6. +
  7. Generate GRUB menu

    +
    mkdir -p /boot/grub2
    +grub2-mkconfig -o /boot/grub2/grub.cfg
    +cp /boot/grub2/grub.cfg \
    + /boot/efi/efi/fedora/grub.cfg
    +cp /boot/grub2/grub.cfg \
    + /boot/efi/fedora/grub-bootdir/i386-pc/grub2/grub.cfg
    +
    +
    +
  8. +
  9. For both legacy and EFI booting: mirror ESP content:

    +
    espdir=$(mktemp -d)
    +find /boot/efi/ -maxdepth 1 -mindepth 1 -type d -print0 \
    +| xargs -t -0I '{}' cp -r '{}' "${espdir}"
    +find "${espdir}" -maxdepth 1 -mindepth 1 -type d -print0 \
    +| xargs -t -0I '{}' sh -vxc "find /boot/efis/ -maxdepth 1 -mindepth 1 -type d -print0 | xargs -t -0I '[]' cp -r '{}' '[]'"
    +
    +
    +
  10. +
  11. Exit chroot

    +
    exit
    +
    +
    +
  12. +
  13. Unmount filesystems and create an initial system snapshot. +You can later create a boot environment from this snapshot. +See Root on ZFS maintenance page.

    +
    umount -Rl "${MNT}"
    +zfs snapshot -r rpool@initial-installation
    +zfs snapshot -r bpool@initial-installation
    +
    +
    +
  14. +
  15. Export all pools

    +
    zpool export -a
    +
    +
    +
  16. +
  17. Reboot

    +
    reboot
    +
    +
    +
  18. +
  19. For BIOS-legacy boot users only: the GRUB bootloader installed +might be unusable. In this case, see Bootloader Recovery section +in Root on ZFS maintenance page.

    +

    This issue is not related to Alpine Linux chroot, as Arch Linux +installed with this method does not have this issue.

    +

    UEFI bootloader is not affected by this issue.

    +
  20. +
  21. On first reboot, SELinux policies will be applied, albeit +incompletely. The computer will then reboot with the incomplete +policies and fail to mount /run, resulting in a failed boot.

    +

    The workaround is to append enforcing=0 to the kernel command line in +the GRUB menu, as many times as necessary, until the system +completes one successful boot. The author of this guide has not +found a way to solve this issue during installation. Help is +appreciated.

    +
  22. +
+
+
+

Post installation

+
    +
  1. Install package groups

    +
    dnf group list --hidden -v       # query package groups
    +dnf group install gnome-desktop
    +
    +
    +
  2. +
  3. Add a new user and configure swap; a hedged example of adding a user is sketched after this list.

  4. +
+
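A hedged sketch of the user-creation part (the username alice and the use of the wheel group for sudo access are illustrative assumptions, not taken from this guide):

useradd -m -G wheel alice
+passwd alice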
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Fedora/index.html b/Getting Started/Fedora/index.html new file mode 100644 index 000000000..b3184d46e --- /dev/null +++ b/Getting Started/Fedora/index.html @@ -0,0 +1,244 @@ + + + + + + + Fedora — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Fedora

+
+

Contents

+ +
+
+

Installation

+

Note: this is for installing ZFS on an existing Fedora +installation. To use ZFS as root file system, +see below.

+
    +
  1. If zfs-fuse from the official Fedora repo is installed, +remove it first. It is not maintained and should not be used +under any circumstances:

    +
    rpm -e --nodeps zfs-fuse
    +
    +
    +
  2. +
  3. Add ZFS repo:

    +
    dnf install -y https://zfsonlinux.org/fedora/zfs-release-2-4$(rpm --eval "%{dist}").noarch.rpm
    +
    +
    +

    List of repos is available here.

    +
  4. +
  5. Install kernel headers:

    +
    dnf install -y kernel-devel
    +
    +
    +

    The kernel-devel package must be installed before the zfs package.

    +
  6. +
  7. Install ZFS packages:

    +
    dnf install -y zfs
    +
    +
    +
  8. +
  9. Load kernel module:

    +
    modprobe zfs
    +
    +
    +

    If the kernel module can not be loaded, your kernel version +might not yet be supported by OpenZFS.

    +

    An option is to install an LTS kernel from COPR, provided by a third party. +Use it at your own risk:

    +
    # this is a third-party repo!
    +# you have been warned.
    +#
    +# select a kernel from
    +# https://copr.fedorainfracloud.org/coprs/kwizart/
    +
    +dnf copr enable -y kwizart/kernel-longterm-VERSION
    +dnf install -y kernel-longterm kernel-longterm-devel
    +
    +
    +

    Reboot to new LTS kernel, then load kernel module:

    +
    modprobe zfs
    +
    +
    +
  10. +
  11. By default ZFS kernel modules are loaded upon detecting a pool. +To always load the modules at boot:

    +
    echo zfs > /etc/modules-load.d/zfs.conf
    +
    +
    +
  12. +
  13. By default, ZFS may be removed by kernel package updates. +To prevent this, lock the kernel version to only those supported by ZFS:

    +
    echo 'zfs' > /etc/dnf/protected.d/zfs.conf
    +
    +
    +
    +
    Pending non-kernel updates can still be applied:

    dnf update --exclude=kernel*

    +
    +
    +
  14. +
+
+
+

Testing Repo

+

The testing repository, which is disabled by default, contains +the latest version of OpenZFS, which is under active development. +These packages +should not be used on production systems.

+
dnf config-manager --enable zfs-testing
+dnf install zfs
+
+
+
+
+

Root on ZFS

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/FreeBSD.html b/Getting Started/FreeBSD.html new file mode 100644 index 000000000..3b68878d7 --- /dev/null +++ b/Getting Started/FreeBSD.html @@ -0,0 +1,253 @@ + + + + + + + FreeBSD — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

FreeBSD

+

ZoF-logo

+
+

Installation on FreeBSD

+

OpenZFS is available pre-packaged as:

+
    +
  • the zfs-2.0-release branch, in the FreeBSD base system from FreeBSD 13.0-CURRENT forward

  • +
  • the master branch, in the FreeBSD ports tree as sysutils/openzfs and sysutils/openzfs-kmod from FreeBSD 12.1 forward

  • +
+

The rest of this document describes the use of OpenZFS either from ports/pkg or built manually from sources for development.

+

The ZFS utilities will be installed in /usr/local/sbin/, so make sure +your PATH gets adjusted accordingly.

+

To load the module at boot, put openzfs_load="YES" in +/boot/loader.conf, and remove zfs_load="YES" if migrating a ZFS +install.

+

Beware that the FreeBSD boot loader does not allow booting from root +pools with encryption active (even if it is not in use), so do not try +encryption on a pool you boot from.

+
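As a quick check (not part of the original instructions; the pool name zroot is only an example), the state of the encryption feature on a pool can be inspected with:

zpool get feature@encryption zroot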
+
+

Development on FreeBSD

+

The following dependencies are required to build OpenZFS on FreeBSD:

+
    +
  • FreeBSD sources in /usr/src or elsewhere specified by SYSDIR in env. +If you don’t have the sources installed you can install them with +git.

    +

    Install source for FreeBSD 12:

    +
    git clone -b stable/12 https://git.FreeBSD.org/src.git /usr/src
    +
    +
    +

    Install source for FreeBSD Current:

    +
    git clone https://git.FreeBSD.org/src.git /usr/src
    +
    +
    +
  • +
  • Packages for build:

    +
    pkg install \
    +    autoconf \
    +    automake \
    +    autotools \
    +    git \
    +    gmake
    +
    +
    +
  • +
  • Optional packages for build:

    +
    pkg install python
    +pkg install devel/py-sysctl # needed for arcstat, arc_summary, dbufstat
    +
    +
    +
  • +
  • Packages for checks and tests:

    +
    pkg install \
    +    base64 \
    +    bash \
    +    checkbashisms \
    +    fio \
    +    hs-ShellCheck \
    +    ksh93 \
    +    pamtester \
    +    devel/py-flake8 \
    +    sudo
    +
    +
    +

    Your preferred python version may be substituted. The user for +running tests must have NOPASSWD sudo permission.

    +
  • +
+

To build and install:

+
# as user
+git clone https://github.com/openzfs/zfs
+cd zfs
+./autogen.sh
+env MAKE=gmake ./configure
+gmake -j`sysctl -n hw.ncpu`
+# as root
+gmake install
+
+
+

To use the OpenZFS kernel module when FreeBSD starts, edit /boot/loader.conf :

+

Replace the line:

+
zfs_load="YES"
+
+
+

with:

+
openzfs_load="YES"
+
+
+

The stock FreeBSD ZFS binaries are installed in /sbin. OpenZFS binaries are installed to /usr/local/sbin when installed from ports/pkg or manually from source. To use the OpenZFS binaries, adjust your PATH so /usr/local/sbin is listed before /sbin. Otherwise the native ZFS binaries will be used.

+

For example, change the PATH setting in ~/.profile, ~/.bashrc, or ~/.cshrc from this:

+
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:~/bin
+
+
+

To this:

+
PATH=/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:~/bin
+
+
+

For rapid development it can be convenient to do a UFS install instead +of ZFS when setting up the work environment. That way the module can be +unloaded and loaded without rebooting.

+
reboot
+
+
+
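With a UFS root, the OpenZFS module can then be cycled without rebooting, for example (assuming the ports/pkg kmod, which installs openzfs.ko):

kldunload openzfs
+kldload openzfs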

Though not required, WITHOUT_ZFS is a useful build option in FreeBSD +to avoid building and installing the legacy zfs tools and kmod - see +src.conf(5).

+
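For example, the option can be set in /etc/src.conf before rebuilding (a minimal sketch; see src.conf(5) for the authoritative syntax):

echo 'WITHOUT_ZFS=yes' >> /etc/src.conf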

Some tests require fdescfs to be mounted on /dev/fd. This can be done +temporarily with:

+
mount -t fdescfs fdescfs /dev/fd
+
+
+

or an entry can be added to /etc/fstab.

+
fdescfs /dev/fd fdescfs rw 0 0
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/NixOS/Root on ZFS.html b/Getting Started/NixOS/Root on ZFS.html new file mode 100644 index 000000000..a84b401d8 --- /dev/null +++ b/Getting Started/NixOS/Root on ZFS.html @@ -0,0 +1,520 @@ + + + + + + + NixOS Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

NixOS Root on ZFS

+

Note for arm64:

+

Currently there is a bug with the grub installation script. See here for details.

+

Note for Immutable Root:

+

Immutable root can be enabled or disabled by setting +zfs-root.boot.immutable option inside per-host configuration.

+
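A hypothetical way to toggle it, mirroring the sed commands used later in this guide (the exact option text in hosts/exampleHost/default.nix comes from the template flake and may differ, so adjust the pattern to match):

sed -i 's|immutable = false|immutable = true|' \
+  "${MNT}"/etc/nixos/hosts/exampleHost/default.nix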

Customization

+

Unless stated otherwise, it is not recommended to customize system +configuration before reboot.

+

Only use well-tested pool features

+

You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, this comment.

+
+

Preparation

+
    +
  1. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled.

  2. +
  3. Download NixOS Live Image and boot from it.

    +
    sha256sum -c ./nixos-*.sha256
    +
    +dd if=input-file of=output-file bs=1M
    +
    +
    +
  4. +
  5. Connect to the Internet.

  6. +
  7. Set root password or /root/.ssh/authorized_keys.

  8. +
  9. Start SSH server

    +
    systemctl restart sshd
    +
    +
    +
  10. +
  11. Connect from another computer

    +
    ssh root@192.168.1.91
    +
    +
    +
  12. +
  13. Target disk

    +

    List available disks with

    +
    find /dev/disk/by-id/
    +
    +
    +

    If virtio is used as disk bus, power off the VM and set serial numbers for disk. +For QEMU, use -drive format=raw,file=disk2.img,serial=AaBb. +For libvirt, edit domain XML. See this page for examples.

    +

    Declare disk array

    +
    DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
    +
    +
    +

    For single disk installation, use

    +
    DISK='/dev/disk/by-id/disk1'
    +
    +
    +
  14. +
  15. Set a mount point

    +
    MNT=$(mktemp -d)
    +
    +
    +
  16. +
  17. Set partition size:

    +

    Set swap size in GB, set to 1 if you don’t want swap to +take up too much space

    +
    SWAPSIZE=4
    +
    +
    +

    Set how much space should be left at the end of the disk, minimum 1GB

    +
    RESERVE=1
    +
    +
    +
  18. +
  19. Enable Nix Flakes functionality

    +
    mkdir -p ~/.config/nix
    +echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf
    +
    +
    +
  20. +
  21. Install programs needed for system installation

    +
    if ! command -v git; then nix-env -f '<nixpkgs>' -iA git; fi
    +if ! command -v partprobe;  then nix-env -f '<nixpkgs>' -iA parted; fi
    +
    +
    +
  22. +
+
+
+

System Installation

+
    +
  1. Partition the disks.

    +

    Note: you must clear all existing partition tables and data structures from target disks.

    +

    For flash-based storage, this can be done by the blkdiscard command below:

    +
    partition_disk () {
    + local disk="${1}"
    + blkdiscard -f "${disk}" || true
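    + # Not part of the original guide: on media where blkdiscard does not
    + # apply (e.g. non-flash disks), wipefs --all "${disk}" from util-linux
    + # can be used to clear old filesystem and partition signatures instead.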
    +
    + parted --script --align=optimal  "${disk}" -- \
    + mklabel gpt \
    + mkpart EFI 2MiB 1GiB \
    + mkpart bpool 1GiB 5GiB \
    + mkpart rpool 5GiB -$((SWAPSIZE + RESERVE))GiB \
    + mkpart swap  -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \
    + mkpart BIOS 1MiB 2MiB \
    + set 1 esp on \
    + set 5 bios_grub on \
    + set 5 legacy_boot on
    +
    + partprobe "${disk}"
    + udevadm settle
    +}
    +
    +for i in ${DISK}; do
    +   partition_disk "${i}"
    +done
    +
    +
    +
  2. +
  3. Setup encrypted swap. This is useful if the available memory is +small:

    +
    for i in ${DISK}; do
    +   cryptsetup open --type plain --key-file /dev/random "${i}"-part4 "${i##*/}"-part4
    +   mkswap /dev/mapper/"${i##*/}"-part4
    +   swapon /dev/mapper/"${i##*/}"-part4
    +done
    +
    +
    +
  4. +
  5. LUKS only: Setup encrypted LUKS container for root pool:

    +
    for i in ${DISK}; do
    +   # see PASSPHRASE PROCESSING section in cryptsetup(8)
    +   printf "YOUR_PASSWD" | cryptsetup luksFormat --type luks2 "${i}"-part3 -
    +   printf "YOUR_PASSWD" | cryptsetup luksOpen "${i}"-part3 luks-rpool-"${i##*/}"-part3 -
    +done
    +
    +
    +
  6. +
  7. Create boot pool

    +
    # shellcheck disable=SC2046
    +zpool create -o compatibility=legacy  \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -O acltype=posixacl \
    +    -O canmount=off \
    +    -O devices=off \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O xattr=sa \
    +    -O mountpoint=/boot \
    +    -R "${MNT}" \
    +    bpool \
    +  mirror \
    +    $(for i in ${DISK}; do
    +       printf '%s ' "${i}-part2";
    +      done)
    +
    +
    +

    If not using a multi-disk setup, remove mirror.

    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features.

    +

    Features enabled with -o compatibility=grub2 can be seen +here.

    +
  8. +
  9. Create root pool

    +
      +
    • Unencrypted

      +
      # shellcheck disable=SC2046
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -R "${MNT}" \
      +    -O acltype=posixacl \
      +    -O canmount=off \
      +    -O compression=zstd \
      +    -O dnodesize=auto \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O xattr=sa \
      +    -O mountpoint=/ \
      +    rpool \
      +    mirror \
      +   $(for i in ${DISK}; do
      +      printf '%s ' "${i}-part3";
      +     done)
      +
      +
      +
    • +
    • LUKS encrypted

      +
      # shellcheck disable=SC2046
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -R "${MNT}" \
      +    -O acltype=posixacl \
      +    -O canmount=off \
      +    -O compression=zstd \
      +    -O dnodesize=auto \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O xattr=sa \
      +    -O mountpoint=/ \
      +    rpool \
      +    mirror \
      +   $(for i in ${DISK}; do
      +      printf '/dev/mapper/luks-rpool-%s ' "${i##*/}-part3";
      +     done)
      +
      +
      +
    • +
    +

    If not using a multi-disk setup, remove mirror.

    +
  10. +
  11. Create root system container:

    +
      +
    • Unencrypted

      +
      zfs create \
      + -o canmount=off \
      + -o mountpoint=none \
      +rpool/nixos
      +
      +
      +
    • +
    • Encrypted:

      +

      Avoid ZFS send/recv when using native encryption; see a ZFS developer's comment on this issue and this spreadsheet of bugs. In short, if you +care about your data, don’t use native encryption. This section +has been removed; use LUKS encryption instead.

      +
    • +
    +

    Create system datasets, +manage mountpoints with mountpoint=legacy

    +
    zfs create -o mountpoint=legacy     rpool/nixos/root
    +mount -t zfs rpool/nixos/root "${MNT}"/
    +zfs create -o mountpoint=legacy rpool/nixos/home
    +mkdir "${MNT}"/home
    +mount -t zfs rpool/nixos/home "${MNT}"/home
    +zfs create -o mountpoint=none   rpool/nixos/var
    +zfs create -o mountpoint=legacy rpool/nixos/var/lib
    +zfs create -o mountpoint=legacy rpool/nixos/var/log
    +zfs create -o mountpoint=none bpool/nixos
    +zfs create -o mountpoint=legacy bpool/nixos/root
    +mkdir "${MNT}"/boot
    +mount -t zfs bpool/nixos/root "${MNT}"/boot
    +mkdir -p "${MNT}"/var/log
    +mkdir -p "${MNT}"/var/lib
    +mount -t zfs rpool/nixos/var/lib "${MNT}"/var/lib
    +mount -t zfs rpool/nixos/var/log "${MNT}"/var/log
    +zfs create -o mountpoint=legacy rpool/nixos/empty
    +zfs snapshot rpool/nixos/empty@start
    +
    +
    +
  12. +
  13. Format and mount ESP

    +
    for i in ${DISK}; do
    + mkfs.vfat -n EFI "${i}"-part1
    + mkdir -p "${MNT}"/boot/efis/"${i##*/}"-part1
    + mount -t vfat -o iocharset=iso8859-1 "${i}"-part1 "${MNT}"/boot/efis/"${i##*/}"-part1
    +done
    +
    +
    +
  14. +
+
+
+

System Configuration

+
    +
  1. Clone template flake configuration

    +
    mkdir -p "${MNT}"/etc
    +git clone --depth 1 --branch openzfs-guide \
    +  https://github.com/ne9z/dotfiles-flake.git "${MNT}"/etc/nixos
    +
    +
    +
  2. +
  3. From now on, the complete configuration of the system will be +tracked by git; set a user name and email address to continue

    +
    rm -rf "${MNT}"/etc/nixos/.git
    +git -C "${MNT}"/etc/nixos/ init -b main
    +git -C "${MNT}"/etc/nixos/ add "${MNT}"/etc/nixos/
    +git -C "${MNT}"/etc/nixos config user.email "you@example.com"
    +git -C "${MNT}"/etc/nixos config user.name "Alice Q. Nixer"
    +git -C "${MNT}"/etc/nixos commit -asm 'initial commit'
    +
    +
    +
  4. +
  5. Customize configuration to your hardware

    +
    for i in ${DISK}; do
    +  sed -i \
    +  "s|/dev/disk/by-id/|${i%/*}/|" \
    +  "${MNT}"/etc/nixos/hosts/exampleHost/default.nix
    +  break
    +done
    +
    +diskNames=""
    +for i in ${DISK}; do
    +  diskNames="${diskNames} \"${i##*/}\""
    +done
    +
    +sed -i "s|\"bootDevices_placeholder\"|${diskNames}|g" \
    +  "${MNT}"/etc/nixos/hosts/exampleHost/default.nix
    +
    +sed -i "s|\"abcd1234\"|\"$(head -c4 /dev/urandom | od -A none -t x4| sed 's| ||g' || true)\"|g" \
    +  "${MNT}"/etc/nixos/hosts/exampleHost/default.nix
    +
    +sed -i "s|\"x86_64-linux\"|\"$(uname -m || true)-linux\"|g" \
    +  "${MNT}"/etc/nixos/flake.nix
    +
    +
    +
  6. +
  7. LUKS only: Enable LUKS support:

    +
    sed -i 's|luks.enable = false|luks.enable = true|' "${MNT}"/etc/nixos/hosts/exampleHost/default.nix
    +
    +
    +
  8. +
  9. Detect kernel modules needed for boot

    +
    cp "$(command -v nixos-generate-config || true)" ./nixos-generate-config
    +
    +chmod a+rw ./nixos-generate-config
    +
    +# shellcheck disable=SC2016
    +echo 'print STDOUT $initrdAvailableKernelModules' >> ./nixos-generate-config
    +
    +kernelModules="$(./nixos-generate-config --show-hardware-config --no-filesystems | tail -n1 || true)"
    +
    +sed -i "s|\"kernelModules_placeholder\"|${kernelModules}|g" \
    +  "${MNT}"/etc/nixos/hosts/exampleHost/default.nix
    +
    +
    +
  10. +
  11. Set root password

    +
    rootPwd=$(mkpasswd -m SHA-512)
    +
    +
    +

    Declare password in configuration

    +
    sed -i \
    +"s|rootHash_placeholder|${rootPwd}|" \
    +"${MNT}"/etc/nixos/configuration.nix
    +
    +
    +
  12. +
  13. You can enable NetworkManager for wireless networks and GNOME +desktop environment in configuration.nix.

  14. +
  15. Commit changes to local repo

    +
    git -C "${MNT}"/etc/nixos commit -asm 'initial installation'
    +
    +
    +
  16. +
  17. Update flake lock file to track latest system version

    +
    nix flake update --commit-lock-file \
    +  "git+file://${MNT}/etc/nixos"
    +
    +
    +
  18. +
  19. Install system and apply configuration

    +
    nixos-install \
    +--root "${MNT}" \
    +--no-root-passwd \
    +--flake "git+file://${MNT}/etc/nixos#exampleHost"
    +
    +
    +
  20. +
  21. Unmount filesystems

    +
    umount -Rl "${MNT}"
    +zpool export -a
    +
    +
    +
  22. +
  23. Reboot

    +
    reboot
    +
    +
    +
  24. +
  25. For instructions on maintenance tasks, see Root on ZFS maintenance +page.

  26. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/NixOS/index.html b/Getting Started/NixOS/index.html new file mode 100644 index 000000000..02fb1d03f --- /dev/null +++ b/Getting Started/NixOS/index.html @@ -0,0 +1,231 @@ + + + + + + + NixOS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

NixOS

+
+

Contents

+ +
+
+

Support

+

Reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat.

+

If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @ne9z.

+
+
+

Installation

+

Note: this is for installing ZFS on an existing +NixOS installation. To use ZFS as root file system, +see below.

+

NixOS live image ships with ZFS support by default.

+

Note that you need to apply these settings even if you don’t need +to boot from ZFS. The kernel module ‘zfs.ko’ will not be available +to modprobe until you make these changes and reboot.

+
    +
  1. Edit /etc/nixos/configuration.nix and add the following +options:

    +
    boot.supportedFilesystems = [ "zfs" ];
    +boot.zfs.forceImportRoot = false;
    +networking.hostId = "yourHostId";
    +
    +
    +

    Where hostID can be generated with:

    +
    head -c4 /dev/urandom | od -A none -t x4
    +
    +
    +
  2. +
  3. Apply configuration changes:

    +
    nixos-rebuild boot
    +
    +
    +
  4. +
  5. Reboot:

    +
    reboot
    +
    +
    +
  6. +
+
+
+

Root on ZFS

+ +
+
+

Contribute

+

You can contribute to this documentation. Fork this repo, edit the +documentation, then open a pull request.

+
    +
  1. To test your changes locally, use the devShell in this repo:

    +
    git clone https://github.com/ne9z/nixos-live openzfs-docs-dev
    +cd openzfs-docs-dev
    +nix develop ./openzfs-docs-dev/#docs
    +
    +
    +
  2. +
  3. Inside the openzfs-docs repo, build pages:

    +
    make html
    +
    +
    +
  4. +
  5. Look for errors and warnings in the make output. If there are no +errors:

    +
    xdg-open _build/html/index.html
    +
    +
    +
  6. +
  7. git commit --signoff to a branch, git push, and create a +pull request. Mention @ne9z.

  8. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/RHEL and CentOS.html b/Getting Started/RHEL and CentOS.html new file mode 100644 index 000000000..2d6ed27f2 --- /dev/null +++ b/Getting Started/RHEL and CentOS.html @@ -0,0 +1,116 @@ + + + + + + + RHEL and CentOS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

RHEL and CentOS

+

This page has been moved to RHEL-based distro.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/RHEL-based distro/Root on ZFS.html b/Getting Started/RHEL-based distro/Root on ZFS.html new file mode 100644 index 000000000..8bceb1f0c --- /dev/null +++ b/Getting Started/RHEL-based distro/Root on ZFS.html @@ -0,0 +1,660 @@ + + + + + + + Rocky Linux Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Rocky Linux Root on ZFS

+

ZFSBootMenu

+

This tutorial is based on the GRUB bootloader. Due to its independent +implementation of a read-only ZFS driver, GRUB only supports a subset +of ZFS features on the boot pool. [In general, bootloaders treat disks +as read-only to minimize the risk of damaging on-disk data.]

+

ZFSBootMenu is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details.

+

Customization

+

Unless stated otherwise, it is not recommended to customize system +configuration before reboot.

+

Only use well-tested pool features

+

You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, this comment.

+
+

Preparation

+
    +
  1. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled.

  2. +
  3. Because the kernel of latest Live CD might be incompatible with +ZFS, we will use Alpine Linux Extended, which ships with ZFS by +default.

    +

    Download latest extended variant of Alpine Linux +live image, +verify checksum +and boot from it.

    +
    gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc
    +
    +dd if=input-file of=output-file bs=1M
    +
    +
    +
  4. +
  5. Login as root user. There is no password.

  6. +
  7. Configure Internet

    +
    setup-interfaces -r
    +# You must use "-r" option to start networking services properly
    +# example:
    +network interface: wlan0
    +WiFi name:         <ssid>
    +ip address:        dhcp
    +<enter done to finish network config>
    +manual netconfig:  n
    +
    +
    +
  8. +
  9. If you are using wireless network and it is not shown, see Alpine +Linux wiki for +further details. wpa_supplicant can be installed with apk +add wpa_supplicant without internet connection.

  10. +
  11. Configure SSH server

    +
    setup-sshd
    +# example:
    +ssh server:        openssh
    +allow root:        "prohibit-password" or "yes"
    +ssh key:           "none" or "<public key>"
    +
    +
    +
  12. +
  13. Set root password or /root/.ssh/authorized_keys.

  14. +
  15. Connect from another computer

    +
    ssh root@192.168.1.91
    +
    +
    +
  16. +
  17. Configure NTP client for time synchronization

    +
    setup-ntp busybox
    +
    +
    +
  18. +
  19. Set up apk-repo. A list of available mirrors is shown. +Press space bar to continue

    +
    setup-apkrepos
    +
    +
    +
  20. +
  21. Throughout this guide, we use predictable disk names generated by +udev

    +
    apk update
    +apk add eudev
    +setup-devd udev
    +
    +
    +
  22. +
  23. Target disk

    +

    List available disks with

    +
    find /dev/disk/by-id/
    +
    +
    +

    If virtio is used as disk bus, power off the VM and set serial numbers for disk. +For QEMU, use -drive format=raw,file=disk2.img,serial=AaBb. +For libvirt, edit domain XML. See this page for examples.

    +

    Declare disk array

    +
    DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
    +
    +
    +

    For single disk installation, use

    +
    DISK='/dev/disk/by-id/disk1'
    +
    +
    +
  24. +
  25. Set a mount point

    +
    MNT=$(mktemp -d)
    +
    +
    +
  26. +
  27. Set partition size:

    +

    Set swap size in GB, set to 1 if you don’t want swap to +take up too much space

    +
    SWAPSIZE=4
    +
    +
    +

    Set how much space should be left at the end of the disk, minimum 1GB

    +
    RESERVE=1
    +
    +
    +
  28. +
  29. Install ZFS support from live media:

    +
    apk add zfs
    +
    +
    +
  30. +
  31. Install partition tool

    +
    apk add parted e2fsprogs cryptsetup util-linux
    +
    +
    +
  32. +
+
+
+

System Installation

+
    +
  1. Partition the disks.

    +

    Note: you must clear all existing partition tables and data structures from target disks.

    +

    For flash-based storage, this can be done by the blkdiscard command below:

    +
    partition_disk () {
    + local disk="${1}"
    + blkdiscard -f "${disk}" || true
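    + # Not part of the original guide: on media where blkdiscard does not
    + # apply (e.g. non-flash disks), wipefs --all "${disk}" from util-linux
    + # can be used to clear old filesystem and partition signatures instead.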
    +
    + parted --script --align=optimal  "${disk}" -- \
    + mklabel gpt \
    + mkpart EFI 2MiB 1GiB \
    + mkpart bpool 1GiB 5GiB \
    + mkpart rpool 5GiB -$((SWAPSIZE + RESERVE))GiB \
    + mkpart swap  -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \
    + mkpart BIOS 1MiB 2MiB \
    + set 1 esp on \
    + set 5 bios_grub on \
    + set 5 legacy_boot on
    +
    + partprobe "${disk}"
    +}
    +
    +for i in ${DISK}; do
    +   partition_disk "${i}"
    +done
    +
    +
    +
  2. +
  3. Setup encrypted swap. This is useful if the available memory is +small:

    +
    for i in ${DISK}; do
    +   cryptsetup open --type plain --key-file /dev/random "${i}"-part4 "${i##*/}"-part4
    +   mkswap /dev/mapper/"${i##*/}"-part4
    +   swapon /dev/mapper/"${i##*/}"-part4
    +done
    +
    +
    +
  4. +
  5. Load ZFS kernel module

    +
    modprobe zfs
    +
    +
    +
  6. +
  7. Create boot pool

    +
    # shellcheck disable=SC2046
    +zpool create -o compatibility=legacy  \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -O acltype=posixacl \
    +    -O canmount=off \
    +    -O devices=off \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O xattr=sa \
    +    -O mountpoint=/boot \
    +    -R "${MNT}" \
    +    bpool \
    +           mirror \
    +    $(for i in ${DISK}; do
    +       printf '%s ' "${i}-part2";
    +      done)
    +
    +
    +

    If not using a multi-disk setup, remove mirror.

    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features.

    +
  8. +
  9. Create root pool

    +
    # shellcheck disable=SC2046
    +zpool create \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -R "${MNT}" \
    +    -O acltype=posixacl \
    +    -O canmount=off \
    +    -O compression=zstd \
    +    -O dnodesize=auto \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O xattr=sa \
    +    -O mountpoint=/ \
    +    rpool \
    +    mirror \
    +   $(for i in ${DISK}; do
    +      printf '%s ' "${i}-part3";
    +     done)
    +
    +
    +

    If not using a multi-disk setup, remove mirror.

    +
  10. +
  11. Create root system container:

    +
      +
    • Unencrypted

      +
      zfs create \
      + -o canmount=off \
      + -o mountpoint=none \
      +rpool/rhel
      +
      +
      +
    • +
    • Encrypted:

      +

      Avoid ZFS send/recv when using native encryption; see a ZFS developer's comment on this issue and this spreadsheet of bugs. A LUKS-based guide has yet to be written. Once the encryption key is compromised, changing the passphrase will not keep your +data safe. See zfs-change-key(8) for more info.

      +
      zfs create \
      +  -o canmount=off \
      +         -o mountpoint=none \
      +         -o encryption=on \
      +         -o keylocation=prompt \
      +         -o keyformat=passphrase \
      +rpool/rhel
      +
      +
      +
    • +
    +

    You can automate this step (insecure) with: echo POOLPASS | zfs create ....

    +

    Create system datasets, +manage mountpoints with mountpoint=legacy

    +
    zfs create -o canmount=noauto -o mountpoint=/      rpool/rhel/root
    +zfs mount rpool/rhel/root
    +zfs create -o mountpoint=legacy rpool/rhel/home
    +mkdir "${MNT}"/home
    +mount -t zfs rpool/rhel/home "${MNT}"/home
    +zfs create -o mountpoint=legacy  rpool/rhel/var
    +zfs create -o mountpoint=legacy rpool/rhel/var/lib
    +zfs create -o mountpoint=legacy rpool/rhel/var/log
    +zfs create -o mountpoint=none bpool/rhel
    +zfs create -o mountpoint=legacy bpool/rhel/root
    +mkdir "${MNT}"/boot
    +mount -t zfs bpool/rhel/root "${MNT}"/boot
    +mkdir -p "${MNT}"/var/log
    +mkdir -p "${MNT}"/var/lib
    +mount -t zfs rpool/rhel/var/lib "${MNT}"/var/lib
    +mount -t zfs rpool/rhel/var/log "${MNT}"/var/log
    +
    +
    +
  12. +
  13. Format and mount ESP

    +
    for i in ${DISK}; do
    + mkfs.vfat -n EFI "${i}"-part1
    + mkdir -p "${MNT}"/boot/efis/"${i##*/}"-part1
    + mount -t vfat -o iocharset=iso8859-1 "${i}"-part1 "${MNT}"/boot/efis/"${i##*/}"-part1
    +done
    +
    +mkdir -p "${MNT}"/boot/efi
    +mount -t vfat -o iocharset=iso8859-1 "$(echo "${DISK}" | sed "s|^ *||"  | cut -f1 -d' '|| true)"-part1 "${MNT}"/boot/efi
    +
    +
    +
  14. +
+
+
+

System Configuration

+
    +
  1. Download and extract minimal Rocky Linux root filesystem:

    +
    apk add curl
    +curl --fail-early --fail -L \
    +https://dl.rockylinux.org/pub/rocky/9.2/images/x86_64/Rocky-9-Container-Base-9.2-20230513.0.x86_64.tar.xz \
    +-o rootfs.tar.gz
    +curl --fail-early --fail -L \
    +https://dl.rockylinux.org/pub/rocky/9.2/images/x86_64/Rocky-9-Container-Base-9.2-20230513.0.x86_64.tar.xz.CHECKSUM \
    +-o checksum
    +
    +# BusyBox sha256sum treats all lines in the checksum file
    +# as checksums and requires two spaces "  "
    +# between filename and checksum
    +
    +grep 'Container-Base' checksum \
    +| grep '^SHA256' \
    +| sed -E 's|.*= ([a-z0-9]*)$|\1  rootfs.tar.gz|' > ./sha256checksum
    +
    +sha256sum -c ./sha256checksum
    +
    +tar x  -C "${MNT}" -af rootfs.tar.gz
    +
    +
    +
  2. +
  3. Enable community repo

    +
    sed -i '/edge/d' /etc/apk/repositories
    +sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories
    +
    +
    +
  4. +
  5. Generate fstab:

    +
    apk add arch-install-scripts
    +genfstab -t PARTUUID "${MNT}" \
    +| grep -v swap \
    +| sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \
    +> "${MNT}"/etc/fstab
    +
    +
    +
  6. +
  7. Chroot

    +
    cp /etc/resolv.conf "${MNT}"/etc/resolv.conf
    +for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done
    +chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash
    +
    +
    +
  8. +
  9. Unset all shell aliases, which can interfere with installation:

    +
    unalias -a
    +
    +
    +
  10. +
  11. Install base packages

    +
    dnf -y install --allowerasing @core grub2-efi-x64 \
    +grub2-pc grub2-pc-modules grub2-efi-x64-modules shim-x64  \
    +efibootmgr kernel-core
    +
    +
    +
  12. +
  13. Install ZFS packages:

    +
    dnf install -y https://zfsonlinux.org/epel/zfs-release-2-3"$(rpm --eval "%{dist}"|| true)".noarch.rpm
    +dnf config-manager --disable zfs
    +dnf config-manager --enable zfs-kmod
    +dnf install -y zfs zfs-dracut
    +
    +
    +
  14. +
  15. Add zfs modules to dracut:

    +
    echo 'add_dracutmodules+=" zfs "' >> /etc/dracut.conf.d/zfs.conf
    +echo 'force_drivers+=" zfs "' >> /etc/dracut.conf.d/zfs.conf
    +
    +
    +
  16. +
  17. Add other drivers to dracut:

    +
    if grep mpt3sas /proc/modules; then
    +  echo 'force_drivers+=" mpt3sas "'  >> /etc/dracut.conf.d/zfs.conf
    +fi
    +if grep virtio_blk /proc/modules; then
    +  echo 'filesystems+=" virtio_blk "' >> /etc/dracut.conf.d/fs.conf
    +fi
    +
    +
    +
  18. +
  19. Build initrd:

    +
    find -D exec /lib/modules -maxdepth 1 \
    +-mindepth 1 -type d \
    +-exec sh -vxc \
    +'if test -e "$1"/modules.dep;
    +   then kernel=$(basename "$1");
    +   dracut --verbose --force --kver "${kernel}";
    + fi' sh {} \;
    +
    +
    +
  20. +
  21. For SELinux, relabel filesystem on reboot:

    +
    fixfiles -F onboot
    +
    +
    +
  22. +
  23. Generate host id:

    +
    zgenhostid -f -o /etc/hostid
    +
    +
    +
  24. +
  25. Install locale package, example for English locale:

    +
    dnf install -y glibc-minimal-langpack glibc-langpack-en
    +
    +
    +
  26. +
  27. Set locale, keymap, timezone, hostname

    +
    rm -f /etc/localtime
    +systemd-firstboot \
    +--force \
    +--locale=en_US.UTF-8 \
    +--timezone=Etc/UTC \
    +--hostname=testhost \
    +--keymap=us
    +
    +
    +
  28. +
  29. Set root passwd

    +
    printf 'root:yourpassword' | chpasswd
    +
    +
    +
  30. +
+
+
+

Bootloader

+
    +
  1. Apply GRUB workaround

    +
    echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile.d/zpool_vdev_name_path.sh
    +# shellcheck disable=SC1091
    +. /etc/profile.d/zpool_vdev_name_path.sh
    +
    +# GRUB fails to detect rpool name, hard code as "rpool"
    +sed -i "s|rpool=.*|rpool=rpool|"  /etc/grub.d/10_linux
    +
    +
    +

    This workaround needs to be applied for every GRUB update, as the +update will overwrite the changes.

    +
  2. +
  3. RHEL uses the Boot Loader Specification (BLS) module for GRUB, +which does not support ZFS. Disable it:

    +
    echo 'GRUB_ENABLE_BLSCFG=false' >> /etc/default/grub
    +
    +
    +

    This means that you need to regenerate the GRUB menu and mirror it to the other boot locations +after every kernel update, otherwise the computer will still boot the old +kernel on reboot.

    +
  4. +
  5. Install GRUB:

    +
    mkdir -p /boot/efi/rocky/grub-bootdir/i386-pc/
    +for i in ${DISK}; do
    + grub2-install --target=i386-pc --boot-directory \
    +     /boot/efi/rocky/grub-bootdir/i386-pc/  "${i}"
    +done
    +dnf reinstall -y grub2-efi-x64 shim-x64
    +cp -r /usr/lib/grub/x86_64-efi/ /boot/efi/EFI/rocky/
    +
    +
    +
  6. +
  7. Generate GRUB menu:

    +
    mkdir -p /boot/grub2
    +grub2-mkconfig -o /boot/grub2/grub.cfg
    +cp /boot/grub2/grub.cfg \
    + /boot/efi/efi/rocky/grub.cfg
    +cp /boot/grub2/grub.cfg \
    + /boot/efi/rocky/grub-bootdir/i386-pc/grub2/grub.cfg
    +
    +
    +
  8. +
  9. For both legacy and EFI booting: mirror ESP content:

    +
    espdir=$(mktemp -d)
    +find /boot/efi/ -maxdepth 1 -mindepth 1 -type d -print0 \
    +| xargs -t -0I '{}' cp -r '{}' "${espdir}"
    +find "${espdir}" -maxdepth 1 -mindepth 1 -type d -print0 \
    +| xargs -t -0I '{}' sh -vxc "find /boot/efis/ -maxdepth 1 -mindepth 1 -type d -print0 | xargs -t -0I '[]' cp -r '{}' '[]'"
    +
    +
    +
  10. +
  11. Exit chroot

    +
    exit
    +
    +
    +
  12. +
  13. Unmount filesystems and create an initial system snapshot. +You can later create a boot environment from this snapshot. +See Root on ZFS maintenance page.

    +
    umount -Rl "${MNT}"
    +zfs snapshot -r rpool@initial-installation
    +zfs snapshot -r bpool@initial-installation
    +
    +
    +
  14. +
  15. Export all pools

    +
    zpool export -a
    +
    +
    +
  16. +
  17. Reboot

    +
    reboot
    +
    +
    +
  18. +
  19. For BIOS-legacy boot users only: the GRUB bootloader installed +might be unusable. In this case, see Bootloader Recovery section +in Root on ZFS maintenance page.

    +

    This issue is not related to Alpine Linux chroot, as Arch Linux +installed with this method does not have this issue.

    +

    UEFI bootloader is not affected by this issue.

    +
  20. +
+
+
+

Post installation

+
    +
  1. Install package groups

    +
    dnf group list --hidden -v       # query package groups
    +dnf group install gnome-desktop
    +
    +
    +
  2. +
  3. Add a new user and configure swap; a hedged example of adding a user is sketched after this list.

  4. +
+
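A hedged sketch of the user-creation part (the username alice and the use of the wheel group for sudo access are illustrative assumptions, not taken from this guide):

useradd -m -G wheel alice
+passwd alice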
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/RHEL-based distro/index.html b/Getting Started/RHEL-based distro/index.html new file mode 100644 index 000000000..e7ccdb9b3 --- /dev/null +++ b/Getting Started/RHEL-based distro/index.html @@ -0,0 +1,319 @@ + + + + + + + RHEL-based distro — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

RHEL-based distro

+
+

Contents

+ +

DKMS and kABI-tracking kmod style packages are provided for x86_64 RHEL- +and CentOS-based distributions from the OpenZFS repository. These packages +are updated as new versions are released. Only the repository for the current +minor version of each current major release is updated with new packages.

+

To simplify installation, a zfs-release package is provided which includes +a zfs.repo configuration file and public signing key. All official OpenZFS +packages are signed using this key, and by default yum or dnf will verify a +package’s signature before allowing it to be installed. Users are strongly +encouraged to verify the authenticity of the OpenZFS public key using +the fingerprint listed here; a sample check is shown after the key listing below.

+
+
Key location: /etc/pki/rpm-gpg/RPM-GPG-KEY-openzfs (previously -zfsonlinux)
+
Current release packages: EL7, EL8, EL9
+
Archived release packages: see repo page
+
+
+
Signing key1 (EL8 and older, Fedora 36 and older) +pgp.mit.edu / +direct link
+
Fingerprint: C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
+
+
+
Signing key2 (EL9+, Fedora 37+) +pgp.mit.edu / +direct link
+
Fingerprint: 7DC7 299D CF7C 7FD9 CD87 701B A599 FD5E 9DB8 4141
+
+
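After the zfs-release package is installed (commands below), one way to compare the bundled key against the fingerprints above is the following (a suggested check, not part of the official steps; assumes a reasonably recent gpg):

gpg --show-keys --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-openzfs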

For EL7 run:

+
yum install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
+
+
+

and for EL8 and 9:

+
dnf install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
+
+
+

After installing the zfs-release package and verifying the public key +users can opt to install either the DKMS or kABI-tracking kmod style packages. +DKMS packages are recommended for users running a non-distribution kernel or +for users who wish to apply local customizations to OpenZFS. For most users +the kABI-tracking kmod packages are recommended in order to avoid needing to +rebuild OpenZFS for every kernel update.

+
+
+

DKMS

+

To install DKMS style packages issue the following commands. First add the +EPEL repository which provides DKMS by installing the epel-release +package, then the kernel-devel and zfs packages. Note that it is +important to make sure that the matching kernel-devel package is installed +for the running kernel since DKMS requires it to build OpenZFS.

+

For EL6 and 7, separately run:

+
yum install -y epel-release
+yum install -y kernel-devel
+yum install -y zfs
+
+
+

And for EL8 and newer, separately run:

+
dnf install -y epel-release
+dnf install -y kernel-devel
+dnf install -y zfs
+
+
+
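As an optional sanity check (not part of the official steps), confirm that headers matching the running kernel are present before DKMS builds the module:

rpm -q "kernel-devel-$(uname -r)" || dnf install -y "kernel-devel-$(uname -r)"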
+

Note

+

When switching from DKMS to kABI-tracking kmods first uninstall the +existing DKMS packages. This should remove the kernel modules for all +installed kernels, then the kABI-tracking kmods can be installed as +described in the section below.

+
+
+
+

kABI-tracking kmod

+

By default the zfs-release package is configured to install DKMS style +packages so they will work with a wide range of kernels. In order to +install the kABI-tracking kmods the default repository must be switched +from zfs to zfs-kmod. Keep in mind that the kABI-tracking kmods are +only verified to work with the distribution-provided, non-Stream kernel.

+

For EL6 and 7 run:

+
yum-config-manager --disable zfs
+yum-config-manager --enable zfs-kmod
+yum install zfs
+
+
+

And for EL8 and newer:

+
dnf config-manager --disable zfs
+dnf config-manager --enable zfs-kmod
+dnf install zfs
+
+
+

By default the OpenZFS kernel modules are automatically loaded when a ZFS +pool is detected. If you would prefer to always load the modules at boot +time you can create such configuration in /etc/modules-load.d:

+
echo zfs >/etc/modules-load.d/zfs.conf
+
+
+
+

Note

+

When updating to a new EL minor release the existing kmod +packages may not work due to upstream kABI changes in the kernel. +The configuration of the current release package may have already made an +updated package available, but the package manager may not know to install +that package if the version number isn’t newer. When upgrading, users +should verify that the kmod-zfs package is providing suitable kernel +modules, reinstalling the kmod-zfs package if necessary.

+
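One possible way to perform that check (a suggestion, not from the official docs) is to confirm that a zfs kernel module exists for the running kernel, reinstalling the kmod package if it does not:

modinfo -n zfs || dnf reinstall -y kmod-zfs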
+
+
+

Previous minor EL releases

+

The current release package uses “${releasever}” rather than specifying a particular +minor release as previous release packages did. Typically “${releasever}” will +resolve to just the major version (e.g. 8), and the resulting repository URL +will be aliased to the current minor version (e.g. 8.7), but you can specify +--releasever to use previous repositories.

+
[vagrant@localhost ~]$ dnf list available --showduplicates kmod-zfs
+Last metadata expiration check: 0:00:08 ago on tor 31 jan 2023 17:50:05 UTC.
+Available Packages
+kmod-zfs.x86_64                          2.1.6-1.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.7-1.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.8-1.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.9-1.el8                          zfs-kmod
+[vagrant@localhost ~]$ dnf list available --showduplicates --releasever=8.6 kmod-zfs
+Last metadata expiration check: 0:16:13 ago on tor 31 jan 2023 17:34:10 UTC.
+Available Packages
+kmod-zfs.x86_64                          2.1.4-1.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.5-1.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.5-2.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.6-1.el8                          zfs-kmod
+[vagrant@localhost ~]$
+
+
+

In the above example, the former packages were built for EL8.7, and the latter for EL8.6.

+
+
+

Testing Repositories

+

In addition to the primary zfs repository a zfs-testing repository +is available. This repository, which is disabled by default, contains +the latest version of OpenZFS which is under active development. These +packages are made available in order to get feedback from users regarding +the functionality and stability of upcoming releases. These packages +should not be used on production systems. Packages from the testing +repository can be installed as follows.

+

For EL6 and 7 run:

+
yum-config-manager --enable zfs-testing
+yum install kernel-devel zfs
+
+
+

And for EL8 and newer:

+
dnf config-manager --enable zfs-testing
+dnf install kernel-devel zfs
+
+
+
+

Note

+

Use zfs-testing for DKMS packages and zfs-testing-kmod +for kABI-tracking kmod packages.

+
+
+
+

Root on ZFS

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.html b/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.html new file mode 100644 index 000000000..1a05069c6 --- /dev/null +++ b/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.html @@ -0,0 +1,1110 @@ + + + + + + + Ubuntu 18.04 Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Ubuntu 18.04 Root on ZFS

+ +
+

Overview

+
+

Newer release available

+
    +
  • See Ubuntu 20.04 Root on ZFS for new +installs. This guide is no longer receiving most updates. It continues +to exist for reference for existing installs that followed it.

  • +
+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of +memory is recommended for normal performance in basic workloads. If you +wish to use deduplication, you will need massive amounts of +RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports two different encryption options: unencrypted and +LUKS (full-disk encryption). With either option, all ZFS features are fully +available. ZFS native encryption is not available in Ubuntu 18.04.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+

1.1 Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to +the Internet as appropriate (e.g. join your WiFi network). Open a +terminal (press Ctrl-Alt-T).

+

1.2 Setup and update the repositories:

+
sudo apt-add-repository universe
+sudo apt update
+
+
+

1.3 Optional: Install and start the OpenSSH server in the Live CD +environment:

+

If you have a second system, using SSH to access the target system can +be convenient:

+
passwd
+# There is no current password; hit enter at that prompt.
+sudo apt install --yes openssh-server
+
+
+

Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh ubuntu@IP.

+

1.4 Become root:

+
sudo -i
+
+
+

1.5 Install ZFS in the Live CD environment:

+
apt install --yes debootstrap gdisk zfs-initramfs
+
+
+
+
+

Step 2: Disk Formatting

+

2.1 Set a variable with the disk name:

+
DISK=/dev/disk/by-id/scsi-SATA_disk1
+
+
+

Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

+

Hints:

+
    +
  • ls -la /dev/disk/by-id will list the aliases.

  • +
  • Are you doing this in a virtual machine? If your virtual disk is +missing from /dev/disk/by-id, use /dev/vda if you are using +KVM with virtio; otherwise, read the +troubleshooting section.

  • +
  • For a mirror or raidz topology, use DISK1, DISK2, etc.

  • +
  • When choosing a boot pool size, consider how you will use the space. A kernel +and initrd may consume around 100M. If you have multiple kernels and take +snapshots, you may find yourself low on boot pool space, especially if you +need to regenerate your initramfs images, which may be around 85M each. Size +your boot pool appropriately for your needs.

  • +
+

2.2 If you are re-using a disk, clear it as necessary:

+

If the disk was previously used in an MD array, zero the superblock:

+
apt install --yes mdadm
+mdadm --zero-superblock --force $DISK
+
+
+

Clear the partition table:

+
sgdisk --zap-all $DISK
+
+
+

2.3 Partition your disk(s):

+

Run this if you need legacy (BIOS) booting:

+
sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
+
+
+

Run this for UEFI booting (for use now or in the future):

+
sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
+
+
+

Run this for the boot pool:

+
sgdisk     -n3:0:+1G      -t3:BF01 $DISK
+
+
+

Choose one of the following options:

+

2.3a Unencrypted:

+
sgdisk     -n4:0:0        -t4:BF01 $DISK
+
+
+

2.3b LUKS:

+
sgdisk     -n4:0:0        -t4:8300 $DISK
+
+
+

If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

+

2.4 Create the boot pool:

+
zpool create -o ashift=12 -d \
+    -o feature@async_destroy=enabled \
+    -o feature@bookmarks=enabled \
+    -o feature@embedded_data=enabled \
+    -o feature@empty_bpobj=enabled \
+    -o feature@enabled_txg=enabled \
+    -o feature@extensible_dataset=enabled \
+    -o feature@filesystem_limits=enabled \
+    -o feature@hole_birth=enabled \
+    -o feature@large_blocks=enabled \
+    -o feature@lz4_compress=enabled \
+    -o feature@spacemap_histogram=enabled \
+    -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
+    -O normalization=formD -O relatime=on -O xattr=sa \
+    -O mountpoint=/ -R /mnt bpool ${DISK}-part3
+
+
+

You should not need to customize any of the options for the boot pool.

+

GRUB does not support all of the zpool features. See +spa_feature_names in +grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

+
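If you want to confirm which features ended up enabled on the boot pool, you can list them (informational only):

zpool get all bpool | grep feature@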

Hints:

+
    +
  • If you are creating a mirror or raidz topology, create the pool using +zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3 +(or replace mirror with raidz, raidz2, or raidz3 and +list the partitions from additional disks).

  • +
  • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

  • +
+

Feature Notes:

+
    +
  • As a read-only compatible feature, the userobj_accounting feature should +be compatible in theory, but in practice, GRUB can fail with an “invalid +dnode type” error. This feature does not matter for /boot anyway.

  • +
+

2.5 Create the root pool:

+

Choose one of the following options:

+

2.5a Unencrypted:

+
zpool create -o ashift=12 \
+    -O acltype=posixacl -O canmount=off -O compression=lz4 \
+    -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
+    -O mountpoint=/ -R /mnt rpool ${DISK}-part4
+
+
+

2.5b LUKS:

+
cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
+cryptsetup luksOpen ${DISK}-part4 luks1
+zpool create -o ashift=12 \
+    -O acltype=posixacl -O canmount=off -O compression=lz4 \
+    -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
+    -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1
+
+
+

Notes:

+
    +
  • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

  • +
  • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires +ACLs

  • +
  • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only +filenames.

  • +
  • recordsize is unset (leaving it at the default of 128 KiB). If you want to +tune it (e.g. -O recordsize=1M), see these various blog +posts.

  • +
  • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s +documentation +for further information.

  • +
  • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI +applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain +controller. +Note that xattr=sa is +Linux-specific. +If you move your xattr=sa pool to another OpenZFS implementation +besides ZFS-on-Linux, extended attributes will not be readable +(though your data will be). If portability of extended attributes is +important to you, omit the -O xattr=sa above. Even if you do not +want xattr=sa for the whole pool, it is probably fine to use it +for /var/log.

  • +
  • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

  • +
  • For LUKS, the key size chosen is 512 bits. However, XTS mode requires +two keys, so the LUKS key is split in half. Thus, -s 512 means +AES-256.

  • +
  • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup +FAQ +for guidance.

  • +
+

Hints:

+
    +
  • If you are creating a mirror or raidz topology, create the pool using +zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4 +(or replace mirror with raidz, raidz2, or raidz3 and +list the partitions from additional disks). For LUKS, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will +have to create using cryptsetup.

  • +
  • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the +root pool is named rpool by default.

  • +
+
+
+

Step 3: System Installation

+

3.1 Create filesystem datasets to act as containers:

+
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
+zfs create -o canmount=off -o mountpoint=none bpool/BOOT
+
+
+

On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality has been implemented in Ubuntu 20.04 with the +zsys tool, though its dataset layout is more complicated. Even without +such a tool, the rpool/ROOT and bpool/BOOT containers can still be used +for manually created clones.

+
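For example, a manually created clone might look like this (a sketch only; the @pre-change and ubuntu-alt names are arbitrary, and booting into the clone would still require updating the GRUB configuration):

zfs snapshot rpool/ROOT/ubuntu@pre-change
+zfs clone -o canmount=noauto -o mountpoint=/ \
+    rpool/ROOT/ubuntu@pre-change rpool/ROOT/ubuntu-alt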

3.2 Create filesystem datasets for the root and boot filesystems:

+
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
+zfs mount rpool/ROOT/ubuntu
+
+zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu
+zfs mount bpool/BOOT/ubuntu
+
+
+

With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

+

3.3 Create datasets:

+
zfs create                                 rpool/home
+zfs create -o mountpoint=/root             rpool/home/root
+zfs create -o canmount=off                 rpool/var
+zfs create -o canmount=off                 rpool/var/lib
+zfs create                                 rpool/var/log
+zfs create                                 rpool/var/spool
+
+
+

The datasets below are optional, depending on your preferences and/or +software choices.

+

If you wish to exclude these from snapshots:

+
zfs create -o com.sun:auto-snapshot=false  rpool/var/cache
+zfs create -o com.sun:auto-snapshot=false  rpool/var/tmp
+chmod 1777 /mnt/var/tmp
+
+
+

If you use /opt on this system:

+
zfs create                                 rpool/opt
+
+
+

If you use /srv on this system:

+
zfs create                                 rpool/srv
+
+
+

If you use /usr/local on this system:

+
zfs create -o canmount=off                 rpool/usr
+zfs create                                 rpool/usr/local
+
+
+

If this system will have games installed:

+
zfs create                                 rpool/var/games
+
+
+

If this system will store local email in /var/mail:

+
zfs create                                 rpool/var/mail
+
+
+

If this system will use Snap packages:

+
zfs create                                 rpool/var/snap
+
+
+

If you use /var/www on this system:

+
zfs create                                 rpool/var/www
+
+
+

If this system will use GNOME:

+
zfs create                                 rpool/var/lib/AccountsService
+
+
+

If this system will use Docker (which manages its own datasets & +snapshots):

+
zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/docker
+
+
+

If this system will use NFS (locking):

+
zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
+
+
+

A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

+
zfs create -o com.sun:auto-snapshot=false  rpool/tmp
+chmod 1777 /mnt/tmp
+
+
+

The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling back user data. The com.sun:auto-snapshot property is used by some ZFS snapshot utilities to exclude transient data.

+

If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for +/tmp, as shown above. This keeps the /tmp data out of snapshots +of your root filesystem. It also allows you to set a quota on +rpool/tmp, if you want to limit the maximum space used. Otherwise, +you can use a tmpfs (RAM filesystem) later.

+
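For example, if you created rpool/tmp and want to cap its size (the 2G value is arbitrary):

zfs set quota=2G rpool/tmp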

3.4 Install the minimal system:

+
debootstrap bionic /mnt
+zfs set devices=off rpool
+
+
+

The debootstrap command leaves the new system in an unconfigured +state. An alternative to using debootstrap is to copy the entirety +of a working system into the new ZFS root.

+
+
+
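A rough sketch of that alternative, assuming the existing system is mounted read-only at /source (a hypothetical path) and rsync is available:

rsync -aHAX --one-file-system /source/ /mnt/   # preserve hard links, ACLs, and xattrs; stay on one filesystem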

Step 4: System Configuration

+

4.1 Configure the hostname:

+

Replace HOSTNAME with the desired hostname:

+
echo HOSTNAME > /mnt/etc/hostname
+vi /mnt/etc/hosts
+
+
+
Add a line:
+127.0.1.1       HOSTNAME
+or if the system has a real name in DNS:
+127.0.1.1       FQDN HOSTNAME
+
+
+

Hint: Use nano if you find vi confusing.

+

4.2 Configure the network interface:

+

Find the interface name:

+
ip addr show
+
+
+

Adjust NAME below to match your interface name:

+
vi /mnt/etc/netplan/01-netcfg.yaml
+
+
+
network:
+  version: 2
+  ethernets:
+    NAME:
+      dhcp4: true
+
+
+

Customize this file if the system is not a DHCP client.

+

4.3 Configure the package sources:

+
vi /mnt/etc/apt/sources.list
+
+
+
deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
+deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
+deb http://archive.ubuntu.com/ubuntu bionic-backports main restricted universe multiverse
+deb http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse
+
+
+

4.4 Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

+
mount --rbind /dev  /mnt/dev
+mount --rbind /proc /mnt/proc
+mount --rbind /sys  /mnt/sys
+chroot /mnt /usr/bin/env DISK=$DISK bash --login
+
+
+

Note: This is using --rbind, not --bind.

+

4.5 Configure a basic system environment:

+
ln -s /proc/self/mounts /etc/mtab
+apt update
+
+
+

Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

+
dpkg-reconfigure locales
+dpkg-reconfigure tzdata
+
+
+

If you prefer nano over vi, install it:

+
apt install --yes nano
+
+
+

4.6 Install ZFS in the chroot environment for the new system:

+
apt install --yes --no-install-recommends linux-image-generic
+apt install --yes zfs-initramfs
+
+
+

Hint: For the HWE kernel, install linux-image-generic-hwe-18.04 +instead of linux-image-generic.

+

4.7 For LUKS installs only, setup /etc/crypttab:

+
apt install --yes cryptsetup
+
+echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
+    luks,discard,initramfs > /etc/crypttab
+
+
+

The use of initramfs is a work-around because cryptsetup does not support ZFS.

+

Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

+
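For example, for a second disk (DISK2 here is a hypothetical variable pointing at that disk's /dev/disk/by-id alias):

echo luks2 UUID=$(blkid -s UUID -o value ${DISK2}-part4) none \
+    luks,discard,initramfs >> /etc/crypttab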

4.8 Install GRUB

+

Choose one of the following options:

+

4.8a Install GRUB for legacy (BIOS) booting:

+
apt install --yes grub-pc
+
+
+

Select (using the space bar) all of the disks (not partitions) in your pool.

+

4.8b Install GRUB for UEFI booting:

+
apt install dosfstools
+mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
+mkdir /boot/efi
+echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \
+    /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
+mount /boot/efi
+apt install --yes grub-efi-amd64-signed shim-signed
+
+
+

Notes:

+
    +
  • The -s 1 for mkdosfs is only necessary for drives which present +4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size +(given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

  • +
  • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later.

  • +
+

4.9 (Optional): Remove os-prober:

+
apt purge --yes os-prober
+
+
+

This avoids error messages from update-grub. os-prober is only necessary +in dual-boot configurations.

+

4.10 Set a root password:

+
passwd
+
+
+

4.11 Enable importing bpool

+

This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

+
vi /etc/systemd/system/zfs-import-bpool.service
+
+
+
[Unit]
+DefaultDependencies=no
+Before=zfs-import-scan.service
+Before=zfs-import-cache.service
+
+[Service]
+Type=oneshot
+RemainAfterExit=yes
+ExecStart=/sbin/zpool import -N -o cachefile=none bpool
+
+[Install]
+WantedBy=zfs-import.target
+
+
+
systemctl enable zfs-import-bpool.service
+
+
+

4.12 Optional (but recommended): Mount a tmpfs to /tmp

+

If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

+
cp /usr/share/systemd/tmp.mount /etc/systemd/system/
+systemctl enable tmp.mount
+
+
+

4.13 Setup system groups:

+
addgroup --system lpadmin
+addgroup --system sambashare
+
+
+
+
+

Step 5: GRUB Installation

+

5.1 Verify that the ZFS boot filesystem is recognized:

+
grub-probe /boot
+
+
+

5.2 Refresh the initrd files:

+
update-initramfs -c -k all
+
+
+

Note: When using LUKS, this will print “WARNING could not determine +root device from /etc/fstab”. This is because cryptsetup does not +support ZFS.

+

5.3 Workaround GRUB’s missing zpool-features support:

+
vi /etc/default/grub
+# Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu"
+
+
+

5.4 Optional (but highly recommended): Make debugging GRUB easier:

+
vi /etc/default/grub
+# Comment out: GRUB_TIMEOUT_STYLE=hidden
+# Set: GRUB_TIMEOUT=5
+# Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5
+# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
+# Uncomment: GRUB_TERMINAL=console
+# Save and quit.
+
+
+

Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

+

5.5 Update the boot configuration:

+
update-grub
+
+
+

Note: Ignore errors from osprober, if present.

+

5.6 Install the boot loader:

+

5.6a For legacy (BIOS) booting, install GRUB to the MBR:

+
grub-install $DISK
+
+
+

Note that you are installing GRUB to the whole disk, not a partition.

+

If you are creating a mirror or raidz topology, repeat the +grub-install command for each disk in the pool.

+

5.6b For UEFI booting, install GRUB:

+
grub-install --target=x86_64-efi --efi-directory=/boot/efi \
+    --bootloader-id=ubuntu --recheck --no-floppy
+
+
+

It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

+

5.7 Fix filesystem mount ordering:

+

Until ZFS gains a systemd mount +generator, there are +races between mounting filesystems and starting certain daemons. In +practice, the issues (e.g. +#5754) seem to be +with certain filesystems in /var, specifically /var/log and +/var/tmp. Setting these to use legacy mounting, and listing them +in /etc/fstab makes systemd aware that these are separate +mountpoints. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp +feature of systemd automatically use After=var-tmp.mount.

+

Until there is support for mounting /boot in the initramfs, we also +need to mount that, because it was marked canmount=noauto. Also, +with UEFI, we need to ensure it is mounted before its child filesystem +/boot/efi.

+

rpool is guaranteed to be imported by the initramfs, so there is no +point in adding x-systemd.requires=zfs-import.target to those +filesystems.

+

For UEFI booting, unmount /boot/efi first:

+
umount /boot/efi
+
+
+

Everything else applies to both BIOS and UEFI booting:

+
zfs set mountpoint=legacy bpool/BOOT/ubuntu
+echo bpool/BOOT/ubuntu /boot zfs \
+    nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
+
+zfs set mountpoint=legacy rpool/var/log
+echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab
+
+zfs set mountpoint=legacy rpool/var/spool
+echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab
+
+
+

If you created a /var/tmp dataset:

+
zfs set mountpoint=legacy rpool/var/tmp
+echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab
+
+
+

If you created a /tmp dataset:

+
zfs set mountpoint=legacy rpool/tmp
+echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab
+
+
+
+
+

Step 6: First Boot

+

6.1 Snapshot the initial installation:

+
zfs snapshot bpool/BOOT/ubuntu@install
+zfs snapshot rpool/ROOT/ubuntu@install
+
+
+

In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

+
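For example, before a release upgrade you might take dated snapshots like these (the names are arbitrary):

zfs snapshot bpool/BOOT/ubuntu@pre-upgrade-$(date +%Y%m%d)
+zfs snapshot rpool/ROOT/ubuntu@pre-upgrade-$(date +%Y%m%d)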

6.2 Exit from the chroot environment back to the LiveCD environment:

+
exit
+
+
+

6.3 Run these commands in the LiveCD environment to unmount all +filesystems:

+
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
+zpool export -a
+
+
+

6.4 Reboot:

+
reboot
+
+
+

Wait for the newly installed system to boot normally. Login as root.

+

6.5 Create a user account:

+

Replace username with your desired username:

+
zfs create rpool/home/username
+adduser username
+
+cp -a /etc/skel/. /home/username
+chown -R username:username /home/username
+usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
+
+
+

6.6 Mirror GRUB

+

If you installed to multiple disks, install GRUB on the additional +disks:

+

6.6a For legacy (BIOS) booting:

+
dpkg-reconfigure grub-pc
+Hit enter until you get to the device selection screen.
+Select (using the space bar) all of the disks (not partitions) in your pool.
+
+
+

6.6b For UEFI booting:

+
umount /boot/efi
+
+
+

For the second and subsequent disks (increment ubuntu-2 to -3, etc.):

+
dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
+   of=/dev/disk/by-id/scsi-SATA_disk2-part2
+efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
+    -p 2 -L "ubuntu-2" -l '\EFI\ubuntu\shimx64.efi'
+
+mount /boot/efi
+
+
+
+
+

Step 7: (Optional) Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. This issue is currently being investigated in: +https://github.com/zfsonlinux/zfs/issues/7734

+

7.1 Create a volume dataset (zvol) for use as a swap device:

+
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
+    -o logbias=throughput -o sync=always \
+    -o primarycache=metadata -o secondarycache=none \
+    -o com.sun:auto-snapshot=false rpool/swap
+
+
+

You can adjust the size (the 4G part) to your needs.

+

The compression algorithm is set to zle because it is the cheapest +available algorithm. As this guide recommends ashift=12 (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior.

+

7.2 Configure the swap device:

+

Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

+
mkswap -f /dev/zvol/rpool/swap
+echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
+echo RESUME=none > /etc/initramfs-tools/conf.d/resume
+
+
+

The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

+

7.3 Enable the swap device:

+
swapon -av
+
+
+
+
+

Step 8: Full Software Installation

+

8.1 Upgrade the minimal system:

+
apt dist-upgrade --yes
+
+
+

8.2 Install a regular set of software:

+

Choose one of the following options:

+

8.2a Install a command-line environment only:

+
apt install --yes ubuntu-standard
+
+
+

8.2b Install a full GUI environment:

+
apt install --yes ubuntu-desktop
+vi /etc/gdm3/custom.conf
+# In the [daemon] section, add: InitialSetupEnable=false
+
+
+

Hint: If you are installing a full GUI environment, you will likely +want to manage your network with NetworkManager:

+
rm /mnt/etc/netplan/01-netcfg.yaml
+vi /etc/netplan/01-network-manager-all.yaml
+
+
+
network:
+  version: 2
+  renderer: NetworkManager
+
+
+

8.3 Optional: Disable log compression:

+

As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. +Also, if you are making snapshots of /var/log, logrotate’s +compression will actually waste space, as the uncompressed data will +live on in the snapshot. You can edit the files in /etc/logrotate.d +by hand to comment out compress, or use this loop (copy-and-paste +highly recommended):

+
for file in /etc/logrotate.d/* ; do
+    if grep -Eq "(^|[^#y])compress" "$file" ; then
+        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
+    fi
+done
+
+
+

8.4 Reboot:

+
reboot
+
+
+
+
+

Step 9: Final Cleanup

+

9.1 Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

+

9.2 Optional: Delete the snapshots of the initial installation:

+
sudo zfs destroy bpool/BOOT/ubuntu@install
+sudo zfs destroy rpool/ROOT/ubuntu@install
+
+
+

9.3 Optional: Disable the root password:

+
sudo usermod -p '*' root
+
+
+

9.4 Optional: Re-enable the graphical boot process:

+

If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

+
sudo vi /etc/default/grub
+# Uncomment: GRUB_TIMEOUT_STYLE=hidden
+# Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT
+# Comment out: GRUB_TERMINAL=console
+# Save and quit.
+
+sudo update-grub
+
+
+

Note: Ignore errors from osprober, if present.

+

9.5 Optional: For LUKS installs only, backup the LUKS header:

+
sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
+    --header-backup-file luks1-header.dat
+
+
+

Store that backup somewhere safe (e.g. cloud storage). It is protected +by your LUKS passphrase, but you may wish to use additional encryption.

+
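One option for that additional encryption, assuming GnuPG is installed (a sketch, not part of the original procedure):

sudo gpg --symmetric luks1-header.dat   # prompts for a passphrase; writes luks1-header.dat.gpg
+sudo shred -u luks1-header.dat          # optionally remove the plaintext copy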

Hint: If you created a mirror or raidz topology, repeat this for +each LUKS volume (luks2, etc.).

+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install +Environment.

+

For LUKS, first unlock the disk(s):

+
cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs mount rpool/ROOT/ubuntu
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --rbind /dev  /mnt/dev
+mount --rbind /proc /mnt/proc
+mount --rbind /sys  /mnt/sys
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that +does slow asynchronous drive initialization, like some IBM M1015 or +OEM-branded cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to +the Linux kernel until after the regular system is started, and ZoL does +not hotplug pool members. See +https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
+
+
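A sketch of that work-around (the 15-second value is only an example; pick a delay long enough for your controller), remembering to regenerate the initramfs afterwards:

echo 'ZFS_INITRD_PRE_MOUNTROOT_SLEEP=15' >> /etc/default/zfs
+update-initramfs -u -k all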

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run +update-initramfs -c -k all.

+
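In concrete terms, that looks like:

echo arcsas >> /etc/initramfs-tools/modules
+update-initramfs -c -k all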

Upgrade or downgrade the Areca driver if something like +RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 +appears anywhere in kernel log. ZoL is unstable on systems that emit +this error message.

+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere +configuration. Doing this ensures that /dev/disk aliases are +created in the guest.

  • +
+
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.html b/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.html new file mode 100644 index 000000000..7b75f2c79 --- /dev/null +++ b/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.html @@ -0,0 +1,1022 @@ + + + + + + + Ubuntu 20.04 Root on ZFS for Raspberry Pi — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Ubuntu 20.04 Root on ZFS for Raspberry Pi

+ +
+

Overview

+
+

Newer release available

+ +
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

4 GiB of memory is recommended. Do not use deduplication, as it needs massive +amounts of RAM. +Enabling deduplication is a permanent change that cannot be easily reverted.

+

A Raspberry Pi 3 B/B+ would probably work (as the Pi 3 is 64-bit, though it +has less RAM), but has not been tested. Please report your results (good or +bad) using the issue link below.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

WARNING: Encryption has not yet been tested on the Raspberry Pi.

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+

USB Disks

+

The Raspberry Pi 4 runs much faster using a USB Solid State Drive (SSD) than +a microSD card. These instructions can also be used to install Ubuntu on a +USB-connected SSD or other USB disk. USB disks have three requirements that +do not apply to microSD cards:

+
    +
  1. The Raspberry Pi’s Bootloader EEPROM must be dated 2020-09-03 or later.

    +

    To check the bootloader version, power up the Raspberry Pi without an SD +card inserted or a USB boot device attached; the date will be on the +bootloader line. (If you do not see the bootloader line, the +bootloader is too old.) Alternatively, run sudo rpi-eeprom-update +on an existing OS on the Raspberry Pi (which on Ubuntu requires +apt install rpi-eeprom).

    +

    If needed, the bootloader can be updated from an existing OS on the +Raspberry Pi using rpi-eeprom-update -a and rebooting. +For other options, see Updating the Bootloader.

    +
  2. +
  3. The Raspberry Pi must be configured for USB boot. The bootloader will show a boot line; if the order includes 4, USB boot is enabled.

    +

    If not already enabled, it can be enabled from an existing OS on the +Raspberry Pi using rpi-eeprom-config -e: set BOOT_ORDER=0xf41 +and reboot to apply the change. On subsequent reboots, USB boot will be +enabled.

    +

    Otherwise, it can be enabled without an existing OS as follows:

    +
      +
    • Download the Raspberry Pi Imager Utility.

    • +
    • Flash the USB Boot image to a microSD card. The USB Boot image is +listed under Bootload in the Misc utility images folder.

    • +
    • Boot the Raspberry Pi from the microSD card. USB Boot should be enabled +automatically.

    • +
    +
  4. +
  5. U-Boot on Ubuntu 20.04 does not seem to support the Raspberry Pi USB. +Ubuntu 20.10 may work. As a +work-around, the Raspberry Pi bootloader is configured to directly boot +Linux. For this to work, the Linux kernel must not be compressed. These +instructions decompress the kernel and add a script to +/etc/kernel/postinst.d to handle kernel upgrades.

  6. +
+
+
+
+

Step 1: Disk Formatting

+

The commands in this step are run on the system other than the Raspberry Pi.

+

This guide has you go to some extra work so that the stock ext4 partition can +be deleted.

+
    +
  1. Download and unpack the official image:

    +
    curl -O https://cdimage.ubuntu.com/releases/20.04.4/release/ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz
    +xz -d ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz
    +
    +# or combine them to decompress as you download:
    +curl https://cdimage.ubuntu.com/releases/20.04.4/release/ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz | \
    +    xz -d > ubuntu-20.04.4-preinstalled-server-arm64+raspi.img
    +
    +
    +
  2. +
  3. Dump the partition table for the image:

    +
    sfdisk -d ubuntu-20.04.4-preinstalled-server-arm64+raspi.img
    +
    +
    +

    That will output this:

    +
    label: dos
    +label-id: 0xddbefb06
    +device: ubuntu-20.04.4-preinstalled-server-arm64+raspi.img
    +unit: sectors
    +
    +<name>.img1 : start=        2048, size=      524288, type=c, bootable
    +<name>.img2 : start=      526336, size=     6285628, type=83
    +
    +
    +

    The important numbers are 524288 and 6285628. Store those in variables:

    +
    BOOT=524288
    +ROOT=6285628
    +
    +
    +
  4. +
  5. Create a partition script:

    +
    cat > partitions << EOF
    +label: dos
    +unit: sectors
    +
    +1 : start=  2048,  size=$BOOT,  type=c, bootable
    +2 : start=$((2048+BOOT)),  size=$ROOT, type=83
    +3 : start=$((2048+BOOT+ROOT)), size=$ROOT, type=83
    +EOF
    +
    +
    +
  6. +
  7. Connect the disk:

    +

    Connect the disk to a machine other than the target Raspberry Pi. If any +filesystems are automatically mounted (e.g. by GNOME) unmount them. +Determine the device name. For SD, the device name is almost certainly +/dev/mmcblk0. For USB SSDs, the device name is /dev/sdX, where +X is a lowercase letter. lsblk can help determine the device name. +Set the DISK environment variable to the device name:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISK=/dev/sdX        # USB disk
    +
    +
    +

    Because partitions are named differently for /dev/mmcblk0 and /dev/sdX +devices, set a second variable used when working with partitions:

    +
    export DISKP=${DISK}p # microSD card
    +export DISKP=${DISK}  # USB disk ($DISKP == $DISK for /dev/sdX devices)
    +
    +
    +

    Hint: microSD cards connected using a USB reader also have /dev/sdX +names.

    +

    WARNING: The following steps destroy the existing data on the disk. Ensure +DISK and DISKP are correct before proceeding.

    +
  8. +
  9. Ensure swap partitions are not in use:

    +
    swapon -v
    +# If a partition is in use from the disk, disable it:
    +sudo swapoff THAT_PARTITION
    +
    +
    +
  10. +
  11. Clear old ZFS labels:

    +
    sudo zpool labelclear -f ${DISK}
    +
    +
    +

    If a ZFS label still exists from a previous system/attempt, expanding the +pool will result in an unbootable system.

    +

    Hint: If you do not already have the ZFS utilities installed, you can +install them with: sudo apt install zfsutils-linux Alternatively, you +can zero the entire disk with: +sudo dd if=/dev/zero of=${DISK} bs=1M status=progress

    +
  12. +
  13. Delete existing partitions:

    +
    echo "label: dos" | sudo sfdisk ${DISK}
    +sudo partprobe
    +ls ${DISKP}*
    +
    +
    +

    Make sure there are no partitions, just the file for the disk itself. This +step is not strictly necessary; it exists to catch problems.

    +
  14. +
  15. Create the partitions:

    +
    sudo sfdisk $DISK < partitions
    +
    +
    +
  16. +
  17. Loopback mount the image:

    +
    IMG=$(sudo losetup -fP --show \
    +          ubuntu-20.04.4-preinstalled-server-arm64+raspi.img)
    +
    +
    +
  18. +
  19. Copy the bootloader data:

    +
    sudo dd if=${IMG}p1 of=${DISKP}1 bs=1M
    +
    +
    +
  20. +
  21. Clear old label(s) from partition 2:

    +
    sudo wipefs -a ${DISKP}2
    +
    +
    +

    If a filesystem with the writable label from the Ubuntu image is still +present in partition 2, the system will not boot initially.

    +
  22. +
  23. Copy the root filesystem data:

    +
    # NOTE: the destination is p3, not p2.
    +sudo dd if=${IMG}p2 of=${DISKP}3 bs=1M status=progress conv=fsync
    +
    +
    +
  24. +
  25. Unmount the image:

    +
    sudo losetup -d $IMG
    +
    +
    +
  26. +
  27. If setting up a USB disk:

    +

    Decompress the kernel:

    +
    sudo -sE
    +
    +MNT=$(mktemp -d /mnt/XXXXXXXX)
    +mkdir -p $MNT/boot $MNT/root
    +mount ${DISKP}1 $MNT/boot
    +mount ${DISKP}3 $MNT/root
    +
    +zcat -qf $MNT/boot/vmlinuz >$MNT/boot/vmlinux
    +
    +
    +

    Modify boot config:

    +
    cat >> $MNT/boot/usercfg.txt << EOF
    +kernel=vmlinux
    +initramfs initrd.img followkernel
    +boot_delay
    +EOF
    +
    +
    +

    Create a script to automatically decompress the kernel after an upgrade:

    +
    cat >$MNT/root/etc/kernel/postinst.d/zz-decompress-kernel << 'EOF'
    +#!/bin/sh
    +
    +set -eu
    +
    +echo "Updating decompressed kernel..."
    +[ -e /boot/firmware/vmlinux ] && \
    +    cp /boot/firmware/vmlinux /boot/firmware/vmlinux.bak
    +vmlinuxtmp=$(mktemp /boot/firmware/vmlinux.XXXXXXXX)
    +zcat -qf /boot/vmlinuz > "$vmlinuxtmp"
    +mv "$vmlinuxtmp" /boot/firmware/vmlinux
    +EOF
    +
    +chmod +x $MNT/root/etc/kernel/postinst.d/zz-decompress-kernel
    +
    +
    +

    Cleanup:

    +
    umount $MNT/*
    +rm -rf $MNT
    +exit
    +
    +
    +
  28. +
  29. Boot the Raspberry Pi.

    +

    Move the SD/USB disk to the Raspberry Pi. Boot it and login (e.g. via SSH) +with ubuntu as the username and password. If you are using SSH, note +that it takes a little bit for cloud-init to enable password logins on the +first boot. Set a new password when prompted and login again using that +password. If you have your local SSH configured to use ControlPersist, +you will have to kill the existing SSH process before logging in the second +time.

    +
  30. +
+
+
+

Step 2: Setup ZFS

+
    +
  1. Become root:

    +
    sudo -i
    +
    +
    +
  2. +
  3. Set the DISK and DISKP variables again:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISKP=${DISK}p       # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +DISKP=${DISK}        # USB disk
    +
    +
    +

    WARNING: Device names can change when moving a device to a different +computer or switching the microSD card from a USB reader to a built-in +slot. Double check the device name before continuing.

    +
  4. +
  5. Install ZFS:

    +
    apt update
    +
    +apt install pv zfs-initramfs
    +
    +
    +

    Note: Since this is the first boot, you may get Waiting for cache +lock because unattended-upgrades is running in the background. +Wait for it to finish.

    +
  6. +
  7. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISKP}2
      +
      +
      +
    • +
    +

    WARNING: Encryption has not yet been tested on the Raspberry Pi.

    +
      +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O encryption=aes-256-gcm \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISKP}2
      +
      +
      +
    • +
    • LUKS:

      +
      cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISKP}2
+cryptsetup luksOpen ${DISKP}2 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs +Also, disabling ACLs apparently breaks umask handling with NFSv4.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the partition number portion of the drive path (for example, the 2 in ${DISKP}2). If you forget that, you are specifying the whole disk, which ZFS will then re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption defaults to aes-256-ccm, but the default has +changed upstream +to aes-256-gcm. AES-GCM seems to be generally preferred over AES-CCM, +is faster now, +and will be even faster in the future.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +
  8. +
+
+
+

Step 3: System Installation

+
    +
  1. Create a filesystem dataset to act as a container:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +
    +
    +
  2. +
  3. Create a filesystem dataset for the root filesystem:

    +
    UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +
    +zfs create -o canmount=noauto -o mountpoint=/ \
    +    -o com.ubuntu.zsys:bootfs=yes \
    +    -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID
    +zfs mount rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
  4. +
  5. Create datasets:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/srv
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/usr
    +zfs create rpool/ROOT/ubuntu_$UUID/usr/local
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/var
    +zfs create rpool/ROOT/ubuntu_$UUID/var/games
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager
    +zfs create rpool/ROOT/ubuntu_$UUID/var/log
    +zfs create rpool/ROOT/ubuntu_$UUID/var/mail
    +zfs create rpool/ROOT/ubuntu_$UUID/var/snap
    +zfs create rpool/ROOT/ubuntu_$UUID/var/spool
    +zfs create rpool/ROOT/ubuntu_$UUID/var/www
    +
    +zfs create -o canmount=off -o mountpoint=/ \
    +    rpool/USERDATA
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \
    +    -o canmount=on -o mountpoint=/root \
    +    rpool/USERDATA/root_$UUID
    +
    +
    +

    If you want a separate dataset for /tmp:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
  6. +
  7. Optional: Ignore synchronous requests:

    +

    microSD cards are relatively slow. If you want to increase performance +(especially when installing packages) at the cost of some safety, you can +disable flushing of synchronous requests (e.g. fsync(), O_[D]SYNC):

    +

    Choose one of the following options:

    +
      +
    • For the root filesystem, but not user data:

      +
      zfs set sync=disabled rpool/ROOT
      +
      +
      +
    • +
    • For everything:

      +
      zfs set sync=disabled rpool
      +
      +
      +
    • +
    +

    ZFS is transactional, so it will still be crash consistent. However, you +should leave sync at its default of standard if this system needs +to guarantee persistence (e.g. if it is a database or NFS server).

    +
  8. +
  9. Copy the system into the ZFS filesystems:

    +
    (cd /; tar -cf - --one-file-system --warning=no-file-ignored .) | \
    +    pv -p -bs $(du -sxm --apparent-size / | cut -f1)m | \
    +    (cd /mnt ; tar -x)
    +
    +
    +
  10. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Stop zed:

    +
    systemctl stop zed
    +
    +
    +
  4. +
  5. Bind the virtual filesystems from the running environment to the new +ZFS environment and chroot into it:

    +
    mount --make-private --rbind /boot/firmware /mnt/boot/firmware
    +mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /run  /mnt/run
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login
    +
    +
    +
  6. +
  7. Configure a basic system environment:

    +
    apt update
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales
    +dpkg-reconfigure tzdata
    +
    +
    +
  8. +
  9. For LUKS installs only, setup /etc/crypttab:

    +
    # cryptsetup is already installed, but this marks it as manually
    +# installed so it is not automatically removed.
    +apt install --yes cryptsetup
    +
    +echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
    +    luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around because cryptsetup does not support ZFS.

    +
  10. +
  11. Optional: Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  12. +
  13. Setup system groups:

    +
    addgroup --system lpadmin
    +addgroup --system sambashare
    +
    +
    +
  14. +
  15. Patch a dependency loop:

    +

    For ZFS native encryption or LUKS:

    +
    apt install --yes curl patch
    +
    +curl https://launchpadlibrarian.net/478315221/2150-fix-systemd-dependency-loops.patch | \
    +    sed "s|/etc|/lib|;s|\.in$||" | (cd / ; patch -p1)
    +
    +
    +

    Ignore the failure in Hunk #2 (say n twice).

    +

    This patch is from Bug #1875577 Encrypted swap won’t load on 20.04 with +zfs root.

    +
  16. +
  17. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/rpool
    +ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
    +zed -F &
    +
    +
    +

    Force a cache update:

    +
    zfs set canmount=noauto rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    Verify that zed updated the cache by making sure this is not empty, +which will take a few seconds:

    +
    cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    Stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  18. +
  19. Remove old filesystem from /etc/fstab:

    +
    vi /etc/fstab
    +# Remove the old root filesystem line:
    +#   LABEL=writable / ext4 ...
    +
    +
    +
  20. +
  21. Configure kernel command line:

    +
    cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
    +sed -i "s|root=LABEL=writable rootfstype=ext4|root=ZFS=rpool/ROOT/ubuntu_$UUID|" \
    +    /boot/firmware/cmdline.txt
    +sed -i "s| fixrtc||" /boot/firmware/cmdline.txt
    +sed -i "s|$| init_on_alloc=0|" /boot/firmware/cmdline.txt
    +
    +
    +

    The fixrtc script is not compatible with ZFS and will cause the boot +to hang for 180 seconds.

    +

    The init_on_alloc=0 is to address performance regressions.

    +
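    To sanity-check these edits, you can compare the modified command line against the backup created above (an optional check):

    +
    diff /boot/firmware/cmdline.txt.bak /boot/firmware/cmdline.txt
    +cat /boot/firmware/cmdline.txt
    +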
  22. +
  23. Optional (but highly recommended): Make debugging booting easier:

    +
    sed -i "s|$| nosplash|" /boot/firmware/cmdline.txt
    +
    +
    +
  24. +
  25. Reboot:

    +
    exit
    +reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as ubuntu.

    +
  26. +
+
+
+

Step 5: First Boot

+
    +
  1. Become root:

    +
    sudo -i
    +
    +
    +
  2. +
  3. Set the DISK variable again:

    +
    DISK=/dev/mmcblk0    # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +
    +
    +
  4. +
  5. Delete the ext4 partition and expand the ZFS partition:

    +
    sfdisk $DISK --delete 3
    +echo ", +" | sfdisk --no-reread -N 2 $DISK
    +
    +
    +

    Note: This does not automatically expand the pool. That will happen +on reboot.

    +
  6. +
  7. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}')
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \
    +    -o canmount=on -o mountpoint=/home/$username \
    +    rpool/USERDATA/${username}_$UUID
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username
    +
    +
    +
  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the system to boot normally. Login using the account you +created.

    +
  10. +
  11. Become root:

    +
    sudo -i
    +
    +
    +
  12. +
  13. Expand the ZFS pool:

    +

    Verify the pool expanded:

    +
    zfs list rpool
    +
    +
    +

    If it did not automatically expand, try to expand it manually:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISKP=${DISK}p       # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +DISKP=${DISK}        # USB disk
    +
    +zpool online -e rpool ${DISKP}2
    +
    +
    +
  14. +
  15. Delete the ubuntu user:

    +
    deluser --remove-home ubuntu
    +
    +
    +
  16. +
+
+
+

Step 6: Full Software Installation

+
    +
  1. Optional: Remove cloud-init:

    +
    vi /etc/netplan/01-netcfg.yaml
    +
    +
    +
    network:
    +  version: 2
    +  ethernets:
    +    eth0:
    +      dhcp4: true
    +
    +
    +
    rm /etc/netplan/50-cloud-init.yaml
    +apt purge --autoremove ^cloud-init
    +rm -rf /etc/cloud
    +
    +
    +
  2. +
  3. Optional: Remove other storage packages:

    +
    apt purge --autoremove bcache-tools btrfs-progs cloud-guest-utils lvm2 \
    +    mdadm multipath-tools open-iscsi overlayroot xfsprogs
    +
    +
    +
  4. +
  5. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  6. +
  7. Optional: Install a full GUI environment:

    +
    apt install --yes ubuntu-desktop
    +echo dtoverlay=vc4-fkms-v3d >> /boot/firmware/usercfg.txt
    +
    +
    +

    Hint: If you are installing a full GUI environment, you will likely +want to remove cloud-init as discussed above and manage your network with +NetworkManager instead:

    +
    rm /etc/netplan/*.yaml
    +vi /etc/netplan/01-network-manager-all.yaml
    +
    +
    +
    network:
    +  version: 2
    +  renderer: NetworkManager
    +
    +
    +
  8. +
  9. Optional (but recommended): Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  10. +
  11. Reboot:

    +
    reboot
    +
    +
    +
  12. +
+
+
+

Step 7: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  4. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.html b/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.html new file mode 100644 index 000000000..30a39ac47 --- /dev/null +++ b/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.html @@ -0,0 +1,1453 @@ + + + + + + + Ubuntu 20.04 Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Ubuntu 20.04 Root on ZFS

+ +
+

Newer release available

+
    +
  • See Ubuntu 22.04 Root on ZFS for new +installs. This guide is no longer receiving most updates. It continues +to exist for reference for existing installs that followed it.

  • +
+
+
+

Errata

+

If you previously installed using this guide, please apply these fixes if +applicable:

+
+

/boot/grub Not Mounted

+
+
Severity: Normal (previously Grave)
+
Fixed: 2020-12-05 (previously 2020-05-30)
+
+

For a mirror or raidz topology, /boot/grub is on a separate dataset. This +was originally bpool/grub, then changed on 2020-05-30 to +bpool/BOOT/ubuntu_UUID/grub to work around zsys setting canmount=off, +which would result in /boot/grub not mounting. This work-around led to +issues with snapshot restores. The underlying zsys +issue was fixed and backported +to 20.04, so it is now back to being bpool/grub.

+
    +
  • If you never applied the 2020-05-30 errata fix, then /boot/grub is +probably not mounting. Check that:

    +
    mount | grep /boot/grub
    +
    +
    +

    If it is mounted, everything is fine. Stop. Otherwise:

    +
    zfs set canmount=on bpool/grub
    +update-initramfs -c -k all
    +update-grub
    +
    +grub-install --target=x86_64-efi --efi-directory=/boot/efi \
    +    --bootloader-id=ubuntu --recheck --no-floppy
    +
    +
    +

    Run this for the additional disk(s), incrementing the “2” to “3” and so on +for both /boot/efi2 and ubuntu-2:

    +
    cp -a /boot/efi/EFI /boot/efi2
    +grub-install --target=x86_64-efi --efi-directory=/boot/efi2 \
    +    --bootloader-id=ubuntu-2 --recheck --no-floppy
    +
    +
    +

    Check that these have set prefix=($root)'/grub@':

    +
    grep prefix= \
    +    /boot/efi/EFI/ubuntu/grub.cfg \
    +    /boot/efi2/EFI/ubuntu-2/grub.cfg
    +
    +
    +
  • +
  • If you applied the 2020-05-30 errata fix, then you should revert the dataset +rename:

    +
    umount /boot/grub
    +zfs rename bpool/BOOT/ubuntu_UUID/grub bpool/grub
    +zfs set com.ubuntu.zsys:bootfs=no bpool/grub
    +zfs mount bpool/grub
    +
    +
    +
  • +
+
+
+

AccountsService Not Mounted

+
+
Severity: Normal
+
Fixed: 2020-05-28
+
+

The HOWTO previously had a typo in AccountsService (where Accounts is plural) +as AccountServices (where Services is plural). This means that AccountsService +data will be written to the root filesystem. This is only harmful in the event +of a rollback of the root filesystem that does not include a rollback of the +user data. Check it:

+
zfs list | grep Account
+
+
+

If the “s” is on “Accounts”, you are good. If it is on “Services”, fix it:

+
mv /var/lib/AccountsService /var/lib/AccountsService-old
+zfs list -r rpool
+# Replace the UUID twice below:
+zfs rename rpool/ROOT/ubuntu_UUID/var/lib/AccountServices \
+           rpool/ROOT/ubuntu_UUID/var/lib/AccountsService
+mv /var/lib/AccountsService-old/* /var/lib/AccountsService
+rmdir /var/lib/AccountsService-old
+
+
+
+
+
+

Overview

+
+

Ubuntu Installer

+

The Ubuntu installer has support for root-on-ZFS. +This HOWTO produces nearly identical results to the Ubuntu installer because of +bidirectional collaboration.

+

If you want a single-disk, unencrypted, desktop install, use the installer. It +is far easier and faster than doing everything by hand.

+

If you want a ZFS native encrypted, desktop install, you can trivially edit +the installer. +The -O recordsize=1M there is unrelated to encryption; omit that unless +you understand it. Make sure to use a password that is at least 8 characters +or this hack will crash the installer. Additionally, once the system is +installed, you should switch to encrypted swap:

+
swapon -v
+# Note the device, including the partition.
+
+ls -l /dev/disk/by-id/
+# Find the by-id name of the disk.
+
+sudo swapoff -a
+sudo vi /etc/fstab
+# Remove the swap entry.
+
+sudo apt install --yes cryptsetup
+
+# Replace DISK-partN as appropriate from above:
+echo swap /dev/disk/by-id/DISK-partN /dev/urandom \
+    swap,cipher=aes-xts-plain64:sha256,size=512 | sudo tee -a /etc/crypttab
+echo /dev/mapper/swap none swap defaults 0 0 | sudo tee -a /etc/fstab
+
+
+

Hopefully the installer will gain encryption support in +the future.

+

If you want to setup a mirror or raidz topology, use LUKS encryption, and/or +install a server (no desktop GUI), use this HOWTO.

+
+
+

Raspberry Pi

+

If you are looking to install on a Raspberry Pi, see +Ubuntu 20.04 Root on ZFS for Raspberry Pi.

+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to the +Internet as appropriate (e.g. join your WiFi network). Open a terminal +(press Ctrl-Alt-T).

  2. +
  3. Setup and update the repositories:

    +
    sudo apt update
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    passwd
    +# There is no current password.
    +sudo apt install --yes openssh-server vim
    +
    +
    +

    Installing the full vim package fixes terminal problems that occur when +using the vim-tiny package (that ships in the Live CD environment) over +SSH.

    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh ubuntu@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    apt install --yes debootstrap gdisk zfsutils-linux
    +
    +systemctl stop zed
    +
    +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    • For a mirror or raidz topology, use DISK1, DISK2, etc.

    • +
    • When choosing a boot pool size, consider how you will use the space. A +kernel and initrd may consume around 100M. If you have multiple kernels +and take snapshots, you may find yourself low on boot pool space, +especially if you need to regenerate your initramfs images, which may be +around 85M each. Size your boot pool appropriately for your needs. A rough +sizing check is sketched after these hints.

    • +
    +
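    If it helps to gauge boot pool sizing, a rough check (paths per a standard Ubuntu layout; purely illustrative):

    +
    # On an existing Ubuntu system, see how much space kernels and initrds use:
    +du -csh /boot/vmlinuz-* /boot/initrd.img-*
    +
    +# On an existing ZFS install, check boot pool usage:
    +zpool list bpool
    +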
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    Ensure swap partitions are not in use:

    +
    swapoff --all
    +
    +
    +

    If the disk was previously used in an MD array:

    +
    apt install --yes mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition (e.g. a swap partition per this HOWTO):
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Create bootloader partition(s):

    +
    sgdisk     -n1:1M:+512M   -t1:EF00 $DISK
    +
    +# For legacy (BIOS) booting:
    +sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK
    +
    +
    +

    Note: While the Ubuntu installer uses an MBR label for legacy (BIOS) +booting, this HOWTO uses GPT partition labels for both UEFI and legacy +(BIOS) booting. This is simpler than having two options. It also +provides forward compatibility (future proofing). In other words, for +legacy (BIOS) booting, this will allow you to move the disk(s) to a new +system/motherboard in the future without having to rebuild the pool (and +restore your data from a backup). The ESP is created in both cases for +similar reasons. Additionally, the ESP is used for /boot/grub in +single-disk installs, as discussed below.

    +
  6. +
  7. Create a partition for swap:

    +

    Previous versions of this HOWTO put swap on a zvol. Ubuntu recommends +against this configuration due to deadlocks. There +is a bug report upstream.

    +

    Putting swap on a partition gives up the benefit of ZFS checksums (for your +swap). That is probably the right trade-off given the reports of ZFS +deadlocks with swap. If you are bothered by this, simply do not enable +swap.

    +

    Choose one of the following options if you want swap:

    +
      +
    • For a single-disk install:

      +
      sgdisk     -n2:0:+500M    -t2:8200 $DISK
      +
      +
      +
    • +
    • For a mirror or raidz topology:

      +
      sgdisk     -n2:0:+500M    -t2:FD00 $DISK
      +
      +
      +
    • +
    +

    Adjust the swap size to your needs. If you wish to enable hibernation +(which only works for unencrypted installs), the swap partition must be +at least as large as the system’s RAM.

    +
  8. +
  9. Create a boot pool partition:

    +
    sgdisk     -n3:0:+2G      -t3:BE00 $DISK
    +
    +
    +

    The Ubuntu installer uses 5% of the disk space constrained to a minimum of +500 MiB and a maximum of 2 GiB. Making this too small (and 500 MiB might +be too small) can result in an inability to upgrade the kernel.

    +
  10. +
  11. Create a root pool partition:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

    +
  12. +
  13. Create the boot pool:

    +
    zpool create \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o ashift=12 -o autotrim=on -d \
    +    -o feature@async_destroy=enabled \
    +    -o feature@bookmarks=enabled \
    +    -o feature@embedded_data=enabled \
    +    -o feature@empty_bpobj=enabled \
    +    -o feature@enabled_txg=enabled \
    +    -o feature@extensible_dataset=enabled \
    +    -o feature@filesystem_limits=enabled \
    +    -o feature@hole_birth=enabled \
    +    -o feature@large_blocks=enabled \
    +    -o feature@lz4_compress=enabled \
    +    -o feature@spacemap_histogram=enabled \
    +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    +    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    +    -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

    +
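    To confirm which features actually ended up enabled on the boot pool, an optional check:

    +
    zpool get all bpool | grep feature@
    +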

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The boot pool name is no longer arbitrary. It _must_ be bpool. +If you really want to rename it, edit /etc/grub.d/10_linux_zfs later, +after GRUB is installed (and run update-grub).

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer should be safe but the boot pool is small enough +that it is unlikely to be necessary.

    • +
    • The spacemap_v2 feature has been tested and is safe to use. The boot +pool is small, so this does not matter in practice.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
  14. +
  15. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 -o autotrim=on \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 -o autotrim=on \
      +    -O encryption=aes-256-gcm \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o ashift=12 -o autotrim=on \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required). A short +sector-size check is sketched after these notes.

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs. +Also, disabling ACLs apparently breaks umask handling with NFSv4.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption defaults to aes-256-ccm, but the default has +changed upstream +to aes-256-gcm. AES-GCM seems to be generally preferred over AES-CCM, +is faster now, +and will be even faster in the future.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +
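    If you are unsure which ashift fits your drive, a short check of the reported sector sizes (lsblk column names as shown; the second command assumes the pool has just been created):

    +
    lsblk -o NAME,PHY-SEC,LOG-SEC $DISK
    +
    +# After creating the pool, confirm the value that was used:
    +zpool get ashift rpool
    +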

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
  16. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +
    +zfs create -o mountpoint=/ \
    +    -o com.ubuntu.zsys:bootfs=yes \
    +    -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/ubuntu_$UUID
    +
    +
    +
  4. +
  5. Create datasets:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/srv
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/usr
    +zfs create rpool/ROOT/ubuntu_$UUID/usr/local
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/var
    +zfs create rpool/ROOT/ubuntu_$UUID/var/games
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager
    +zfs create rpool/ROOT/ubuntu_$UUID/var/log
    +zfs create rpool/ROOT/ubuntu_$UUID/var/mail
    +zfs create rpool/ROOT/ubuntu_$UUID/var/snap
    +zfs create rpool/ROOT/ubuntu_$UUID/var/spool
    +zfs create rpool/ROOT/ubuntu_$UUID/var/www
    +
    +zfs create -o canmount=off -o mountpoint=/ \
    +    rpool/USERDATA
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \
    +    -o canmount=on -o mountpoint=/root \
    +    rpool/USERDATA/root_$UUID
    +chmod 700 /mnt/root
    +
    +
    +

    For a mirror or raidz topology, create a dataset for /boot/grub:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub
    +
    +
    +

    Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on +rpool/ROOT/ubuntu_$UUID/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
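    For example, to cap the separate /tmp dataset (only if you created it; the 5G value is purely illustrative):

    +
    zfs set quota=5G rpool/ROOT/ubuntu_$UUID/tmp
    +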
  6. +
  7. Install the minimal system:

    +
    debootstrap focal /mnt
    +
    +
    +

    The debootstrap command leaves the new system in an unconfigured state. +An alternative to using debootstrap is to copy the entirety of a +working system into the new ZFS root.

    +
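    If you take the copy route instead of debootstrap, a minimal sketch (run from the system being copied, with the new pool mounted at /mnt as in this guide):

    +
    (cd / && tar -cf - --one-file-system --numeric-owner .) | \
    +    (cd /mnt && tar -xpf -)
    +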
  8. +
  9. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  10. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Configure the network interface:

    +

    Find the interface name:

    +
    ip addr show
    +
    +
    +

    Adjust NAME below to match your interface name:

    +
    vi /mnt/etc/netplan/01-netcfg.yaml
    +
    +
    +
    network:
    +  version: 2
    +  ethernets:
    +    NAME:
    +      dhcp4: true
    +
    +
    +

    Customize this file if the system is not a DHCP client.

    +
  4. +
  5. Configure the package sources:

    +
    vi /mnt/etc/apt/sources.list
    +
    +
    +
    deb http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse
    +deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse
    +deb http://archive.ubuntu.com/ubuntu focal-backports main restricted universe multiverse
    +deb http://security.ubuntu.com/ubuntu focal-security main restricted universe multiverse
    +
    +
    +
  6. +
  7. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  8. +
  9. Configure a basic system environment:

    +
    apt update
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales tzdata keyboard-configuration console-setup
    +
    +
    +

    Install your preferred text editor:

    +
    apt install --yes nano
    +
    +apt install --yes vim
    +
    +
    +

    Installing the full vim package fixes terminal problems that occur when +using the vim-tiny package (that is installed by debootstrap) over +SSH.

    +
  10. +
  11. For LUKS installs only, setup /etc/crypttab:

    +
    apt install --yes cryptsetup
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \
    +    none luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for the fact that cryptsetup does not support +ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
  12. +
  13. Create the EFI filesystem:

    +

    Perform these steps for both UEFI and legacy (BIOS) booting:

    +
    apt install --yes dosfstools
    +
    +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part1
    +mkdir /boot/efi
    +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part1) \
    +    /boot/efi vfat defaults 0 0 >> /etc/fstab
    +mount /boot/efi
    +
    +
    +

    For a mirror or raidz topology, repeat the mkdosfs for the additional +disks, but do not repeat the other commands.

    +

    Note: The -s 1 for mkdosfs is only necessary for drives which +present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster +size (given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

    +
  14. +
  15. Put /boot/grub on the EFI System Partition:

    +

    For a single-disk install only:

    +
    mkdir /boot/efi/grub /boot/grub
    +echo /boot/efi/grub /boot/grub none defaults,bind 0 0 >> /etc/fstab
    +mount /boot/grub
    +
    +
    +

    This allows GRUB to write to /boot/grub (since it is on a FAT-formatted +ESP instead of on ZFS), which means that /boot/grub/grubenv and the +recordfail feature work as expected: if the boot fails, the normally +hidden GRUB menu will be shown on the next boot. For a mirror or raidz +topology, we do not want GRUB writing to the EFI System Partition. This is +because we duplicate it at install without a mechanism to update the copies +when the GRUB configuration changes (e.g. as the kernel is upgraded). Thus, +we keep /boot/grub on the boot pool for the mirror or raidz topologies. +This preserves correct mirroring/raidz behavior, at the expense of being +able to write to /boot/grub/grubenv and thus the recordfail +behavior.

    +
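    If you are curious what GRUB records there, you can inspect the environment block (an optional check; grub-editenv operates on /boot/grub/grubenv by default):

    +
    grub-editenv list
    +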
  16. +
  17. Install GRUB/Linux/ZFS in the chroot environment for the new system:

    +

    Choose one of the following options:

    +
      +
    • Install GRUB/Linux/ZFS for legacy (BIOS) booting:

      +
      apt install --yes grub-pc linux-image-generic zfs-initramfs zsys
      +
      +
      +

      Select (using the space bar) all of the disks (not partitions) in your +pool.

      +
    • +
    • Install GRUB/Linux/ZFS for UEFI booting:

      +
      apt install --yes \
      +    grub-efi-amd64 grub-efi-amd64-signed linux-image-generic \
      +    shim-signed zfs-initramfs zsys
      +
      +
      +

      Notes:

      +
        +
      • Ignore any error messages saying ERROR: Couldn't resolve device and +WARNING: Couldn't determine root device. cryptsetup does not +support ZFS.

      • +
      • Ignore any error messages saying Module zfs not found and +couldn't connect to zsys daemon. The first seems to occur due to a +version mismatch between the Live CD kernel and the chroot environment, +but this is irrelevant since the module is already loaded. The second +may be caused by the first but either way is irrelevant since zed +is started manually later.

      • +
      • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later. For some reason, +grub-efi-amd64 does not prompt for install_devices here, but does +after a reboot.

      • +
      +
    • +
    +
  18. +
  19. Optional: Remove os-prober:

    +
    apt purge --yes os-prober
    +
    +
    +

    This avoids error messages from update-grub. os-prober is only +necessary in dual-boot configurations.

    +
  20. +
  21. Set a root password:

    +
    passwd
    +
    +
    +
  22. +
  23. Configure swap:

    +

    Choose one of the following options if you want swap:

    +
      +
    • For an unencrypted single-disk install:

      +
      mkswap -f ${DISK}-part2
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \
      +    none swap discard 0 0 >> /etc/fstab
      +swapon -a
      +
      +
      +
    • +
    • For an unencrypted mirror or raidz topology:

      +
      apt install --yes mdadm
      +
      +# Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and
      +# raid-devices if necessary and specify the actual devices.
      +mdadm --create /dev/md0 --metadata=1.2 --level=mirror \
      +    --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2
      +mkswap -f /dev/md0
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value /dev/md0) \
      +    none swap discard 0 0 >> /etc/fstab
      +
      +
      +
    • +
    • For an encrypted (LUKS or ZFS native encryption) single-disk install:

      +
      apt install --yes cryptsetup
      +
      +echo swap ${DISK}-part2 /dev/urandom \
      +      swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
      +echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
      +
      +
      +
    • +
    • For an encrypted (LUKS or ZFS native encryption) mirror or raidz +topology:

      +
      apt install --yes cryptsetup mdadm
      +
      +# Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and
      +# raid-devices if necessary and specify the actual devices.
      +mdadm --create /dev/md0 --metadata=1.2 --level=mirror \
      +    --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2
      +echo swap /dev/md0 /dev/urandom \
      +      swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
      +echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
      +
      +
      +
    • +
    +
  24. +
  25. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  26. +
  27. Setup system groups:

    +
    addgroup --system lpadmin
    +addgroup --system lxd
    +addgroup --system sambashare
    +
    +
    +
  28. +
  29. Patch a dependency loop:

    +

    For ZFS native encryption or LUKS:

    +
    apt install --yes curl patch
    +
    +curl https://launchpadlibrarian.net/478315221/2150-fix-systemd-dependency-loops.patch | \
    +    sed "s|/etc|/lib|;s|\.in$||" | (cd / ; patch -p1)
    +
    +
    +

    Ignore the failure in Hunk #2 (say n twice).

    +

    This patch is from Bug #1875577 Encrypted swap won’t load on 20.04 with +zfs root.

    +
  30. +
  31. Optional: Install SSH:

    +
    apt install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  32. +
+
+
+

Step 5: GRUB Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub-probe /boot
    +
    +
    +
  2. +
  3. Refresh the initrd files:

    +
    update-initramfs -c -k all
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup +does not support ZFS.

    +
  4. +
  5. Disable memory zeroing:

    +
    vi /etc/default/grub
    +# Add init_on_alloc=0 to: GRUB_CMDLINE_LINUX_DEFAULT
    +# Save and quit (or see the next step).
    +
    +
    +

    This is to address performance regressions.

    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Comment out: GRUB_TIMEOUT_STYLE=hidden
    +# Set: GRUB_TIMEOUT=5
    +# Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5
    +# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Install the boot loader:

    +

    Choose one of the following options:

    +
      +
    • For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub-install $DISK
      +
      +
      +

      Note that you are installing GRUB to the whole disk, not a partition.

      +

      If you are creating a mirror or raidz topology, repeat the +grub-install command for each disk in the pool.

      +
    • +
    • For UEFI booting, install GRUB to the ESP:

      +
      grub-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=ubuntu --recheck --no-floppy
      +
      +
      +
    • +
    +
  12. +
  13. Disable grub-initrd-fallback.service

    +

    For a mirror or raidz topology:

    +
    systemctl mask grub-initrd-fallback.service
    +
    +
    +

    This is the service for /boot/grub/grubenv which does not work on +mirrored or raidz topologies. Disabling this keeps it from blocking +subsequent mounts of /boot/grub if that mount ever fails.

    +

    Another option would be to set RequiresMountsFor=/boot/grub via a +drop-in unit, but that is more work to do here for no reason. Hopefully +this bug +will be fixed upstream.

    +
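    For reference, a sketch of that drop-in approach (standard systemd drop-in conventions; not required by this guide):

    +
    mkdir -p /etc/systemd/system/grub-initrd-fallback.service.d
    +cat > /etc/systemd/system/grub-initrd-fallback.service.d/override.conf << EOF
    +[Unit]
    +RequiresMountsFor=/boot/grub
    +EOF
    +systemctl daemon-reload
    +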
  14. +
  15. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on bpool/BOOT/ubuntu_$UUID
    +zfs set canmount=on rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Once the files have data, stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  16. +
  17. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  18. +
  19. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  20. +
  21. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  22. +
+
+
+

Step 6: First Boot

+
    +
  1. Install GRUB to additional disks:

    +

    For a UEFI mirror or raidz topology only:

    +
    dpkg-reconfigure grub-efi-amd64
    +
    +Select (using the space bar) all of the ESP partitions (partition 1 on
    +each of the pool disks).
    +
    +
    +
  2. +
  3. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}')
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \
    +    -o canmount=on -o mountpoint=/home/$username \
    +    rpool/USERDATA/${username}_$UUID
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username
    +
    +
    +
  4. +
+
+
+

Step 7: Full Software Installation

+
    +
  1. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  2. +
  3. Install a regular set of software:

    +

    Choose one of the following options:

    +
      +
    • Install a command-line environment only:

      +
      apt install --yes ubuntu-standard
      +
      +
      +
    • +
    • Install a full GUI environment:

      +
      apt install --yes ubuntu-desktop
      +
      +
      +

      Hint: If you are installing a full GUI environment, you will likely +want to manage your network with NetworkManager:

      +
      rm /etc/netplan/01-netcfg.yaml
      +vi /etc/netplan/01-network-manager-all.yaml
      +
      +
      +
      network:
      +  version: 2
      +  renderer: NetworkManager
      +
      +
      +
    • +
    +
  4. +
  5. Optional: Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +
  8. +
+
+
+

Step 8: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  4. +
  5. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    sudo vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +sudo systemctl restart ssh
    +
    +
    +
  6. +
  7. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Uncomment: GRUB_TIMEOUT_STYLE=hidden
    +# Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  8. +
  9. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  10. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+# Replace “UUID” as appropriate; use zfs list to find it:
+zfs mount rpool/ROOT/ubuntu_UUID
+zfs mount bpool/BOOT/ubuntu_UUID
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+mount -t tmpfs tmpfs /mnt/run
+mkdir /mnt/run/lock
+chroot /mnt /bin/bash --login
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+
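For example (module name as given above):

+
echo arcsas >> /etc/initramfs-tools/modules
+update-initramfs -c -k all
+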

Upgrade or downgrade the Areca driver if something like +RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 +appears anywhere in the kernel log. ZoL is unstable on systems that emit this +error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
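For example, to wait 15 seconds (the value is illustrative; tune it for your hardware) and rebuild the initramfs so the setting is picked up:

+
echo ZFS_INITRD_PRE_MOUNTROOT_SLEEP=15 >> /etc/default/zfs
+update-initramfs -c -k all
+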
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.ms.fd:/usr/share/OVMF/OVMF_VARS.ms.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.html b/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.html new file mode 100644 index 000000000..55ed1041d --- /dev/null +++ b/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.html @@ -0,0 +1,1051 @@ + + + + + + + Ubuntu 22.04 Root on ZFS for Raspberry Pi — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Ubuntu 22.04 Root on ZFS for Raspberry Pi

+ +
+

Overview

+
+

Note

+

These are beta instructions. The author still needs to test them. +Additionally, it may be possible to use U-Boot now, which would eliminate +some of the customizations.

+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

4 GiB of memory is recommended. Do not use deduplication, as it needs massive +amounts of RAM. +Enabling deduplication is a permanent change that cannot be easily reverted.

+

A Raspberry Pi 3 B/B+ would probably work (as the Pi 3 is 64-bit, though it +has less RAM), but has not been tested. Please report your results (good or +bad) using the issue link below.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

WARNING: Encryption has not yet been tested on the Raspberry Pi.

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+

USB Disks

+

The Raspberry Pi 4 runs much faster using a USB Solid State Drive (SSD) than +a microSD card. These instructions can also be used to install Ubuntu on a +USB-connected SSD or other USB disk. USB disks have three requirements that +do not apply to microSD cards:

+
    +
  1. The Raspberry Pi’s Bootloader EEPROM must be dated 2020-09-03 or later.

    +

    To check the bootloader version, power up the Raspberry Pi without an SD +card inserted or a USB boot device attached; the date will be on the +bootloader line. (If you do not see the bootloader line, the +bootloader is too old.) Alternatively, run sudo rpi-eeprom-update +on an existing OS on the Raspberry Pi (which on Ubuntu requires +apt install rpi-eeprom).

    +

    If needed, the bootloader can be updated from an existing OS on the +Raspberry Pi using rpi-eeprom-update -a and rebooting. +For other options, see Updating the Bootloader. A consolidated sketch of +these commands appears after this list.

    +
  2. +
  3. The Raspberry Pi must be configured for USB boot. The bootloader will show a +boot line; if the order includes 4, USB boot is enabled.

    +

    If not already enabled, it can be enabled from an existing OS on the +Raspberry Pi using rpi-eeprom-config -e: set BOOT_ORDER=0xf41 +and reboot to apply the change. On subsequent reboots, USB boot will be +enabled.

    +

    Otherwise, it can be enabled without an existing OS as follows:

    +
      +
    • Download the Raspberry Pi Imager Utility.

    • +
    • Flash the USB Boot image to a microSD card. The USB Boot image is +listed under Bootload in the Misc utility images folder.

    • +
    • Boot the Raspberry Pi from the microSD card. USB Boot should be enabled +automatically.

    • +
    +
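
    As a sketch of the existing-OS route mentioned above (the BOOT_ORDER value
    is the one given in this section):

    sudo rpi-eeprom-config -e
    # In the editor that opens, set:
    #   BOOT_ORDER=0xf41
    # Save and exit, then reboot so the new boot order takes effect.
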
  4. +
  5. U-Boot on Ubuntu 20.04 does not seem to support USB booting on the Raspberry Pi. +Ubuntu 20.10 may work. As a +work-around, the Raspberry Pi bootloader is configured to boot +Linux directly. For this to work, the Linux kernel must not be compressed. These +instructions decompress the kernel and add a script to +/etc/kernel/postinst.d to handle kernel upgrades.

  6. +
+
+
+
+

Step 1: Disk Formatting

+

The commands in this step are run on the system other than the Raspberry Pi.

+

This guide has you go to some extra work so that the stock ext4 partition can +be deleted.

+
    +
  1. Download and unpack the official image:

    +
    curl -O https://cdimage.ubuntu.com/releases/22.04/release/ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz
    +xz -d ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz
    +
    +# or combine them to decompress as you download:
    +curl https://cdimage.ubuntu.com/releases/22.04/release/ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz | \
    +    xz -d > ubuntu-22.04.1-preinstalled-server-arm64+raspi.img
    +
    +
    +
  2. +
  3. Dump the partition table for the image:

    +
    sfdisk -d ubuntu-22.04.1-preinstalled-server-arm64+raspi.img
    +
    +
    +

    That will output this:

    +
    label: dos
    +label-id: 0x638274e3
    +device: ubuntu-22.04.1-preinstalled-server-arm64+raspi.img
    +unit: sectors
    +
    +<name>.img1 : start=        2048, size=      524288, type=c, bootable
    +<name>.img2 : start=      526336, size=     7193932, type=83
    +
    +
    +

    The important numbers are 524288 and 7193932. Store those in variables:

    +
    BOOT=524288
    +ROOT=7193932
    +
    +
    +
  4. +
  5. Create a partition script:

    +
    cat > partitions << EOF
    +label: dos
    +unit: sectors
    +
    +1 : start=  2048,  size=$BOOT,  type=c, bootable
    +2 : start=$((2048+BOOT)),  size=$ROOT, type=83
    +3 : start=$((2048+BOOT+ROOT)), size=$ROOT, type=83
    +EOF
    +
    +
    +
  6. +
  7. Connect the disk:

    +

    Connect the disk to a machine other than the target Raspberry Pi. If any +filesystems are automatically mounted (e.g. by GNOME) unmount them. +Determine the device name. For SD, the device name is almost certainly +/dev/mmcblk0. For USB SSDs, the device name is /dev/sdX, where +X is a lowercase letter. lsblk can help determine the device name. +Set the DISK environment variable to the device name:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISK=/dev/sdX        # USB disk
    +
    +
    +

    Because partitions are named differently for /dev/mmcblk0 and /dev/sdX +devices, set a second variable used when working with partitions:

    +
    export DISKP=${DISK}p # microSD card
    +export DISKP=${DISK}  # USB disk ($DISKP == $DISK for /dev/sdX devices)
    +
    +
    +

    Hint: microSD cards connected using a USB reader also have /dev/sdX +names.

    +

    WARNING: The following steps destroy the existing data on the disk. Ensure +DISK and DISKP are correct before proceeding.

    +
  8. +
  9. Ensure swap partitions are not in use:

    +
    swapon -v
    +# If a partition is in use from the disk, disable it:
    +sudo swapoff THAT_PARTITION
    +
    +
    +
  10. +
  11. Clear old ZFS labels:

    +
    sudo zpool labelclear -f ${DISK}
    +
    +
    +

    If a ZFS label still exists from a previous system/attempt, expanding the +pool will result in an unbootable system.

    +

    Hint: If you do not already have the ZFS utilities installed, you can +install them with: sudo apt install zfsutils-linux. Alternatively, you +can zero the entire disk with: +sudo dd if=/dev/zero of=${DISK} bs=1M status=progress

    +
  12. +
  13. Delete existing partitions:

    +
    echo "label: dos" | sudo sfdisk ${DISK}
    +sudo partprobe
    +ls ${DISKP}*
    +
    +
    +

    Make sure there are no partitions, just the file for the disk itself. This +step is not strictly necessary; it exists to catch problems.

    +
  14. +
  15. Create the partitions:

    +
    sudo sfdisk $DISK < partitions
    +
    +
    +
  16. +
  17. Loopback mount the image:

    +
    IMG=$(sudo losetup -fP --show \
    +          ubuntu-22.04.1-preinstalled-server-arm64+raspi.img)
    +
    +
    +
  18. +
  19. Copy the bootloader data:

    +
    sudo dd if=${IMG}p1 of=${DISKP}1 bs=1M
    +
    +
    +
  20. +
  21. Clear old label(s) from partition 2:

    +
    sudo wipefs -a ${DISKP}2
    +
    +
    +

    If a filesystem with the writable label from the Ubuntu image is still +present in partition 2, the system will not boot initially.

    +
  22. +
  23. Copy the root filesystem data:

    +
    # NOTE: the destination is p3, not p2.
    +sudo dd if=${IMG}p2 of=${DISKP}3 bs=1M status=progress conv=fsync
    +
    +
    +
  24. +
  25. Unmount the image:

    +
    sudo losetup -d $IMG
    +
    +
    +
  26. +
  27. If setting up a USB disk:

    +

    Decompress the kernel:

    +
    sudo -sE
    +
    +MNT=$(mktemp -d /mnt/XXXXXXXX)
    +mkdir -p $MNT/boot $MNT/root
    +mount ${DISKP}1 $MNT/boot
    +mount ${DISKP}3 $MNT/root
    +
    +zcat -qf $MNT/boot/vmlinuz >$MNT/boot/vmlinux
    +
    +
    +

    Modify boot config:

    +
    cat >> $MNT/boot/usercfg.txt << EOF
    +kernel=vmlinux
    +initramfs initrd.img followkernel
    +boot_delay
    +EOF
    +
    +
    +

    Create a script to automatically decompress the kernel after an upgrade:

    +
    cat >$MNT/root/etc/kernel/postinst.d/zz-decompress-kernel << 'EOF'
    +#!/bin/sh
    +
    +set -eu
    +
    +echo "Updating decompressed kernel..."
    +[ -e /boot/firmware/vmlinux ] && \
    +    cp /boot/firmware/vmlinux /boot/firmware/vmlinux.bak
    +vmlinuxtmp=$(mktemp /boot/firmware/vmlinux.XXXXXXXX)
    +zcat -qf /boot/vmlinuz > "$vmlinuxtmp"
    +mv "$vmlinuxtmp" /boot/firmware/vmlinux
    +EOF
    +
    +chmod +x $MNT/root/etc/kernel/postinst.d/zz-decompress-kernel
    +
    +
    +

    Cleanup:

    +
    umount $MNT/*
    +rm -rf $MNT
    +exit
    +
    +
    +
  28. +
  29. Boot the Raspberry Pi.

    +

    Move the SD/USB disk to the Raspberry Pi. Boot it and login (e.g. via SSH) +with ubuntu as the username and password. If you are using SSH, note +that it takes a little bit for cloud-init to enable password logins on the +first boot. Set a new password when prompted and login again using that +password. If you have your local SSH configured to use ControlPersist, +you will have to kill the existing SSH process before logging in the second +time.

    +
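
    For example, if your SSH client uses ControlPersist, the first-boot login
    sequence might look like this (a sketch; replace IP with the Pi's address):

    ssh ubuntu@IP          # initial login; you are forced to set a new password
    ssh -O exit ubuntu@IP  # stop the persistent control connection, if any
    ssh ubuntu@IP          # log in again with the new password
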
  30. +
+
+
+

Step 2: Setup ZFS

+
    +
  1. Become root:

    +
    sudo -i
    +
    +
    +
  2. +
  3. Set the DISK and DISKP variables again:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISKP=${DISK}p       # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +DISKP=${DISK}        # USB disk
    +
    +
    +

    WARNING: Device names can change when moving a device to a different +computer or switching the microSD card from a USB reader to a built-in +slot. Double check the device name before continuing.

    +
  4. +
  5. Install ZFS:

    +
    apt update
    +
    +apt install pv zfs-initramfs
    +
    +
    +

    Note: Since this is the first boot, you may get Waiting for cache +lock because unattended-upgrades is running in the background. +Wait for it to finish.

    +
  6. +
  7. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISKP}2
      +
      +
      +
    • +
    +

    WARNING: Encryption has not yet been tested on the Raspberry Pi.

    +
      +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O encryption=on \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISKP}2
      +
      +
      +
    • +
    • LUKS:

      +
      cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISKP}2
      +cryptsetup luksOpen ${DISKP}2 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs. +Also, disabling ACLs apparently breaks umask handling with NFSv4.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the partition portion of the drive path (here, +${DISKP}2). If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +
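
    After creating the pool, you can confirm that the properties discussed in
    these notes took effect (a quick sanity check, not a required step):

    zpool get ashift rpool
    zfs get acltype,compression,normalization,relatime,xattr rpool
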
  8. +
+
+
+

Step 3: System Installation

+
    +
  1. Create a filesystem dataset to act as a container:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +
    +
    +
  2. +
  3. Create a filesystem dataset for the root filesystem:

    +
    UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +
    +zfs create -o canmount=noauto -o mountpoint=/ \
    +    -o com.ubuntu.zsys:bootfs=yes \
    +    -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID
    +zfs mount rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
  4. +
  5. Create datasets:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/usr
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/var
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib
    +zfs create rpool/ROOT/ubuntu_$UUID/var/log
    +zfs create rpool/ROOT/ubuntu_$UUID/var/spool
    +
    +zfs create -o canmount=off -o mountpoint=/ \
    +    rpool/USERDATA
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \
    +    -o canmount=on -o mountpoint=/root \
    +    rpool/USERDATA/root_$UUID
    +chmod 700 /mnt/root
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to separate these to exclude them from snapshots:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/cache
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/nfs
    +zfs create rpool/ROOT/ubuntu_$UUID/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If desired (the Ubuntu installer creates these):

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/games
    +
    +
    +

    If this system will have a GUI:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/docker
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/snap
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/www
    +
    +
    +

    For a mirror or raidz topology, create a dataset for /boot/grub:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
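
    For example, if you created the separate /tmp dataset above, a quota can be
    set on it like this (the 1G value is just an illustration):

    zfs set quota=1G rpool/ROOT/ubuntu_$UUID/tmp
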

    Note: If you separate a directory required for booting (e.g. /etc) +into its own dataset, you must add it to +ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs. Datasets +with canmount=off (like rpool/usr above) do not matter for this.

    +
  6. +
  7. Optional: Ignore synchronous requests:

    +

    microSD cards are relatively slow. If you want to increase performance +(especially when installing packages) at the cost of some safety, you can +disable flushing of synchronous requests (e.g. fsync(), O_[D]SYNC):

    +

    Choose one of the following options:

    +
      +
    • For the root filesystem, but not user data:

      +
      zfs set sync=disabled rpool/ROOT
      +
      +
      +
    • +
    • For everything:

      +
      zfs set sync=disabled rpool
      +
      +
      +
    • +
    +

    ZFS is transactional, so it will still be crash consistent. However, you +should leave sync at its default of standard if this system needs +to guarantee persistence (e.g. if it is a database or NFS server).

    +
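
    If you later want to return to the default behavior, the property can be
    cleared again (a sketch matching whichever option you chose above):

    zfs inherit sync rpool/ROOT
    # or, if you disabled it pool-wide:
    zfs inherit sync rpool
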
  8. +
  9. Copy the system into the ZFS filesystems:

    +
    (cd /; tar -cf - --one-file-system --warning=no-file-ignored .) | \
    +    pv -p -bs $(du -sxm --apparent-size / | cut -f1)m | \
    +    (cd /mnt ; tar -x)
    +
    +
    +
  10. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Stop zed:

    +
    systemctl stop zed
    +
    +
    +
  4. +
  5. Bind the virtual filesystems from the running environment to the new +ZFS environment and chroot into it:

    +
    mount --make-private --rbind /boot/firmware /mnt/boot/firmware
    +mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /run  /mnt/run
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login
    +
    +
    +
  6. +
  7. Configure a basic system environment:

    +
    apt update
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales
    +dpkg-reconfigure tzdata
    +
    +
    +
  8. +
  9. For LUKS installs only, setup /etc/crypttab:

    +
    # cryptsetup is already installed, but this marks it as manually
    +# installed so it is not automatically removed.
    +apt install --yes cryptsetup
    +
    +echo luks1 UUID=$(blkid -s UUID -o value ${DISKP}2) none \
    +    luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around because cryptsetup does not support +ZFS.

    +
  10. +
  11. Optional: Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  12. +
  13. Setup system groups:

    +
    addgroup --system lpadmin
    +addgroup --system sambashare
    +
    +
    +
  14. +
  15. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/rpool
    +zed -F &
    +
    +
    +

    Force a cache update:

    +
    zfs set canmount=noauto rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    Verify that zed updated the cache by making sure this is not empty, +which will take a few seconds:

    +
    cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    Stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  16. +
  17. Remove old filesystem from /etc/fstab:

    +
    vi /etc/fstab
    +# Remove the old root filesystem line:
    +#   LABEL=writable / ext4 ...
    +
    +
    +
  18. +
  19. Configure kernel command line:

    +
    cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
    +sed -i "s|root=LABEL=writable rootfstype=ext4|root=ZFS=rpool/ROOT/ubuntu_$UUID|" \
    +    /boot/firmware/cmdline.txt
    +sed -i "s| fixrtc||" /boot/firmware/cmdline.txt
    +sed -i "s|$| init_on_alloc=0|" /boot/firmware/cmdline.txt
    +
    +
    +

    The fixrtc script is not compatible with ZFS and will cause the boot +to hang for 180 seconds.

    +

    The init_on_alloc=0 is to address performance regressions.

    +
  20. +
  21. Optional (but highly recommended): Make debugging booting easier:

    +
    sed -i "s|$| nosplash|" /boot/firmware/cmdline.txt
    +
    +
    +
  22. +
  23. Reboot:

    +
    exit
    +reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as ubuntu.

    +
  24. +
+
+
+

Step 5: First Boot

+
    +
  1. Become root:

    +
    sudo -i
    +
    +
    +
  2. +
  3. Set the DISK variable again:

    +
    DISK=/dev/mmcblk0    # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +
    +
    +
  4. +
  5. Delete the ext4 partition and expand the ZFS partition:

    +
    sfdisk $DISK --delete 3
    +echo ", +" | sfdisk --no-reread -N 2 $DISK
    +
    +
    +

    Note: This does not automatically expand the pool. That will happen +on reboot.

    +
  6. +
  7. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}')
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \
    +    -o canmount=on -o mountpoint=/home/$username \
    +    rpool/USERDATA/${username}_$UUID
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username
    +
    +
    +
  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the system to boot normally. Login using the account you +created.

    +
  10. +
  11. Become root:

    +
    sudo -i
    +
    +
    +
  12. +
  13. Expand the ZFS pool:

    +

    Verify the pool expanded:

    +
    zfs list rpool
    +
    +
    +

    If it did not automatically expand, try to expand it manually:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISKP=${DISK}p       # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +DISKP=${DISK}        # USB disk
    +
    +zpool online -e rpool ${DISKP}2
    +
    +
    +
  14. +
  15. Delete the ubuntu user:

    +
    deluser --remove-home ubuntu
    +
    +
    +
  16. +
+
+
+

Step 6: Full Software Installation

+
    +
  1. Optional: Remove cloud-init:

    +
    vi /etc/netplan/01-netcfg.yaml
    +
    +
    +
    network:
    +  version: 2
    +  ethernets:
    +    eth0:
    +      dhcp4: true
    +
    +
    +
    rm /etc/netplan/50-cloud-init.yaml
    +apt purge --autoremove ^cloud-init
    +rm -rf /etc/cloud
    +
    +
    +
  2. +
  3. Optional: Remove other storage packages:

    +
    apt purge --autoremove bcache-tools btrfs-progs cloud-guest-utils lvm2 \
    +    mdadm multipath-tools open-iscsi overlayroot xfsprogs
    +
    +
    +
  4. +
  5. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  6. +
  7. Optional: Install a full GUI environment:

    +
    apt install --yes ubuntu-desktop
    +echo dtoverlay=vc4-fkms-v3d >> /boot/firmware/usercfg.txt
    +
    +
    +

    Hint: If you are installing a full GUI environment, you will likely +want to remove cloud-init as discussed above but manage your network with +NetworkManager:

    +
    rm /etc/netplan/*.yaml
    +vi /etc/netplan/01-network-manager-all.yaml
    +
    +
    +
    network:
    +  version: 2
    +  renderer: NetworkManager
    +
    +
    +
  8. +
  9. Optional (but recommended): Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  10. +
  11. Reboot:

    +
    reboot
    +
    +
    +
  12. +
+
+
+

Step 7: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: For LUKS installs only, backup the LUKS header:

    +
    # DISKP is the partitioned device prefix used earlier in this guide:
    sudo cryptsetup luksHeaderBackup ${DISKP}2 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
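
    One way to add that additional encryption before uploading the backup (a
    sketch using gpg's symmetric mode; any tool you trust works):

    gpg --symmetric --cipher-algo AES256 luks1-header.dat
    # Produces luks1-header.dat.gpg, protected by a passphrase you choose.
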
  4. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.html b/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.html new file mode 100644 index 000000000..416541287 --- /dev/null +++ b/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.html @@ -0,0 +1,1374 @@ + + + + + + + Ubuntu 22.04 Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Ubuntu 22.04 Root on ZFS

+ +
+

Overview

+
+

Ubuntu Installer

+

The Ubuntu installer still has ZFS support, but it was almost removed for +22.04 +and it no longer installs zsys. At +the moment, this HOWTO still uses zsys, but that will probably be removed +in the near future.

+
+
+

Raspberry Pi

+

If you are looking to install on a Raspberry Pi, see +Ubuntu 20.04 Root on ZFS for Raspberry Pi.

+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the Ubuntu Live CD. From the GRUB boot menu, select Try or Install Ubuntu. +On the Welcome page, select your preferred language and Try Ubuntu. +Connect your system to the Internet as appropriate (e.g. join your WiFi network). +Open a terminal (press Ctrl-Alt-T).

  2. +
  3. Setup and update the repositories:

    +
    sudo apt update
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    passwd
    +# There is no current password.
    +sudo apt install --yes openssh-server vim
    +
    +
    +

    Installing the full vim package fixes terminal problems that occur when +using the vim-tiny package (that ships in the Live CD environment) over +SSH.

    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh ubuntu@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    apt install --yes debootstrap gdisk zfsutils-linux
    +
    +systemctl stop zed
    +
    +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    • For a mirror or raidz topology, use DISK1, DISK2, etc.

    • +
    • When choosing a boot pool size, consider how you will use the space. A +kernel and initrd may consume around 100M. If you have multiple kernels +and take snapshots, you may find yourself low on boot pool space, +especially if you need to regenerate your initramfs images, which may be +around 85M each. Size your boot pool appropriately for your needs.

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    Ensure swap partitions are not in use:

    +
    swapoff --all
    +
    +
    +

    If the disk was previously used in an MD array:

    +
    apt install --yes mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition (e.g. a swap partition per this HOWTO):
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    If the disk was previously used with zfs:

    +
    wipefs -a $DISK
    +
    +
    +

    For flash-based storage, if the disk was previously used, you may wish to +do a full-disk discard (TRIM/UNMAP), which can improve performance:

    +
    blkdiscard -f $DISK
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Create bootloader partition(s):

    +
    sgdisk     -n1:1M:+512M   -t1:EF00 $DISK
    +
    +# For legacy (BIOS) booting:
    +sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK
    +
    +
    +

    Note: While the Ubuntu installer uses an MBR label for legacy (BIOS) +booting, this HOWTO uses GPT partition labels for both UEFI and legacy +(BIOS) booting. This is simpler than having two options. It also +provides forward compatibility (future proofing). In other words, for +legacy (BIOS) booting, this will allow you to move the disk(s) to a new +system/motherboard in the future without having to rebuild the pool (and +restore your data from a backup). The ESP is created in both cases for +similar reasons. Additionally, the ESP is used for /boot/grub in +single-disk installs, as discussed below.

    +
  6. +
  7. Create a partition for swap:

    +

    Previous versions of this HOWTO put swap on a zvol. Ubuntu recommends +against this configuration due to deadlocks. There +is a bug report upstream.

    +

    Putting swap on a partition gives up the benefit of ZFS checksums (for your +swap). That is probably the right trade-off given the reports of ZFS +deadlocks with swap. If you are bothered by this, simply do not enable +swap.

    +

    Choose one of the following options if you want swap:

    +
      +
    • For a single-disk install:

      +
      sgdisk     -n2:0:+500M    -t2:8200 $DISK
      +
      +
      +
    • +
    • For a mirror or raidz topology:

      +
      sgdisk     -n2:0:+500M    -t2:FD00 $DISK
      +
      +
      +
    • +
    +

    Adjust the swap size to your needs. If you wish to enable hibernation +(which only works for unencrypted installs), the swap partition must be +at least as large as the system’s RAM.

    +
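
    To check how much RAM the system has when sizing swap for hibernation (a
    quick sketch, not part of the install itself):

    free -h
    # or
    grep MemTotal /proc/meminfo
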
  8. +
  9. Create a boot pool partition:

    +
    sgdisk     -n3:0:+2G      -t3:BE00 $DISK
    +
    +
    +

    The Ubuntu installer uses 5% of the disk space constrained to a minimum of +500 MiB and a maximum of 2 GiB. Making this too small (and 500 MiB might +be too small) can result in an inability to upgrade the kernel.

    +
  10. +
  11. Create a root pool partition:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

    +
  12. +
  13. Create the boot pool:

    +
    zpool create \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o compatibility=grub2 \
    +    -o feature@livelist=enabled \
    +    -o feature@zpool_checkpoint=enabled \
    +    -O devices=off \
    +    -O acltype=posixacl -O xattr=sa \
    +    -O compression=lz4 \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O canmount=off -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    Ignore the warnings about the features “not in specified ‘compatibility’ +feature set.”

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The boot pool name is no longer arbitrary. It _must_ be bpool. +If you really want to rename it, edit /etc/grub.d/10_linux_zfs later, +after GRUB is installed (and run update-grub).

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The device_rebuild feature should be safe to use (except on raidz, +which it is incompatible with), but the boot pool is small, so this does +not matter in practice.

    • +
    • The log_spacemap and spacemap_v2 features have been tested and +are safe to use. The boot pool is small, so these do not matter in +practice.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer should be safe but the boot pool is small enough +that it is unlikely to be necessary.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
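
    After creating the boot pool, you can sanity-check the GRUB-compatible
    feature set described above (an optional check; bpool as created here):

    zpool get compatibility bpool
    zpool get all bpool | grep feature@
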
  14. +
  15. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs. +Also, disabling ACLs apparently breaks umask handling with NFSv4.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
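
    Expanding on the LUKS hint above, the extra mappings for a mirror or raidz
    topology would be created along these lines (a sketch; DISK2 is a
    placeholder for the second disk's /dev/disk/by-id path):

    cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK2}-part4
    cryptsetup luksOpen ${DISK2}-part4 luks2
    # Then list /dev/mapper/luks1 /dev/mapper/luks2 ... in the zpool create.
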
  16. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +
    +zfs create -o mountpoint=/ \
    +    -o com.ubuntu.zsys:bootfs=yes \
    +    -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/ubuntu_$UUID
    +
    +
    +
  4. +
  5. Create datasets:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/usr
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/var
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib
    +zfs create rpool/ROOT/ubuntu_$UUID/var/log
    +zfs create rpool/ROOT/ubuntu_$UUID/var/spool
    +
    +zfs create -o canmount=off -o mountpoint=/ \
    +    rpool/USERDATA
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \
    +    -o canmount=on -o mountpoint=/root \
    +    rpool/USERDATA/root_$UUID
    +chmod 700 /mnt/root
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to separate these to exclude them from snapshots:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/cache
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/nfs
    +zfs create rpool/ROOT/ubuntu_$UUID/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If desired (the Ubuntu installer creates these):

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/games
    +
    +
    +

    If this system will have a GUI:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/docker
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/snap
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/www
    +
    +
    +

    For a mirror or raidz topology, create a dataset for /boot/grub:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +

    Note: If you separate a directory required for booting (e.g. /etc) +into its own dataset, you must add it to +ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs. Datasets +with canmount=off (like rpool/usr above) do not matter for this.

    +
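
    For instance, if you did split out a directory needed at boot, the entry in
    /etc/default/zfs would look roughly like this (hypothetical example for a
    separate /etc dataset, which this guide does not create):

    # In /etc/default/zfs, with UUID replaced by the actual suffix:
    ZFS_INITRD_ADDITIONAL_DATASETS="rpool/ROOT/ubuntu_UUID/etc"
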
  6. +
  7. Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +
  8. +
  9. Install the minimal system:

    +
    debootstrap jammy /mnt
    +
    +
    +

    The debootstrap command leaves the new system in an unconfigured state. +An alternative to using debootstrap is to copy the entirety of a +working system into the new ZFS root.

    +
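
    If you go the copy-a-working-system route instead, the transfer could look
    something like this (a sketch modeled on the copy step in the Raspberry Pi
    variant of this HOWTO; run on the donor system with the new pool mounted
    at /mnt):

    (cd /; tar -cf - --one-file-system --warning=no-file-ignored .) | \
        (cd /mnt; tar -x)
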
  10. +
  11. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  12. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Configure the network interface:

    +

    Find the interface name:

    +
    ip addr show
    +
    +
    +

    Adjust NAME below to match your interface name:

    +
    vi /mnt/etc/netplan/01-netcfg.yaml
    +
    +
    +
    network:
    +  version: 2
    +  ethernets:
    +    NAME:
    +      dhcp4: true
    +
    +
    +

    Customize this file if the system is not a DHCP client.

    +
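
    A static-address variant might look like this (illustrative values only;
    adjust the interface name, addresses, and gateway for your network):

    network:
      version: 2
      ethernets:
        NAME:
          addresses: [192.0.2.10/24]
          routes:
            - to: default
              via: 192.0.2.1
          nameservers:
            addresses: [192.0.2.1]
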
  4. +
  5. Configure the package sources:

    +
    vi /mnt/etc/apt/sources.list
    +
    +
    +
    deb http://archive.ubuntu.com/ubuntu jammy main restricted universe multiverse
    +deb http://archive.ubuntu.com/ubuntu jammy-updates main restricted universe multiverse
    +deb http://archive.ubuntu.com/ubuntu jammy-backports main restricted universe multiverse
    +deb http://security.ubuntu.com/ubuntu jammy-security main restricted universe multiverse
    +
    +
    +
  6. +
  7. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  8. +
  9. Configure a basic system environment:

    +
    apt update
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales tzdata keyboard-configuration console-setup
    +
    +
    +

    Install your preferred text editor:

    +
    apt install --yes nano
    +
    +apt install --yes vim
    +
    +
    +

    Installing the full vim package fixes terminal problems that occur when +using the vim-tiny package (that is installed by debootstrap) over +SSH.

    +
  10. +
  11. For LUKS installs only, setup /etc/crypttab:

    +
    apt install --yes cryptsetup
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \
    +    none luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around because cryptsetup does not support +ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
  12. +
  13. Create the EFI filesystem:

    +

    Perform these steps for both UEFI and legacy (BIOS) booting:

    +
    apt install --yes dosfstools
    +
    +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part1
    +mkdir /boot/efi
    +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part1) \
    +    /boot/efi vfat defaults 0 0 >> /etc/fstab
    +mount /boot/efi
    +
    +
    +

    For a mirror or raidz topology, repeat the mkdosfs for the additional +disks, but do not repeat the other commands.

    +

    Note: The -s 1 for mkdosfs is only necessary for drives which +present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster +size (given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

    +
  14. +
  15. Put /boot/grub on the EFI System Partition:

    +

    For a single-disk install only:

    +
    mkdir /boot/efi/grub /boot/grub
    +echo /boot/efi/grub /boot/grub none defaults,bind 0 0 >> /etc/fstab
    +mount /boot/grub
    +
    +
    +

    This allows GRUB to write to /boot/grub (since it is on a FAT-formatted +ESP instead of on ZFS), which means that /boot/grub/grubenv and the +recordfail feature work as expected: if the boot fails, the normally +hidden GRUB menu will be shown on the next boot. For a mirror or raidz +topology, we do not want GRUB writing to the EFI System Partition. This is +because we duplicate it at install time without a mechanism to update the copies +when the GRUB configuration changes (e.g. as the kernel is upgraded). Thus, +we keep /boot/grub on the boot pool for the mirror or raidz topologies. +This preserves correct mirroring/raidz behavior, at the expense of being +able to write to /boot/grub/grubenv and thus the recordfail +behavior.

    +
  16. +
  17. Install GRUB/Linux/ZFS in the chroot environment for the new system:

    +

    Choose one of the following options:

    +
      +
    • Install GRUB/Linux/ZFS for legacy (BIOS) booting:

      +
      apt install --yes grub-pc linux-image-generic zfs-initramfs zsys
      +
      +
      +

      Select (using the space bar) all of the disks (not partitions) in your +pool.

      +
    • +
    • Install GRUB/Linux/ZFS for UEFI booting:

      +
      apt install --yes \
      +    grub-efi-amd64 grub-efi-amd64-signed linux-image-generic \
      +    shim-signed zfs-initramfs zsys
      +
      +
      +

      Notes:

      +
        +
      • Ignore any error messages saying ERROR: Couldn't resolve device and +WARNING: Couldn't determine root device. cryptsetup does not +support ZFS.

      • +
      • Ignore any error messages saying Module zfs not found and +couldn't connect to zsys daemon. The first seems to occur due to a +version mismatch between the Live CD kernel and the chroot environment, +but this is irrelevant since the module is already loaded. The second +may be caused by the first but either way is irrelevant since zed +is started manually later.

      • +
      • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later. For some reason, +grub-efi-amd64 does not prompt for install_devices here, but does +after a reboot.

      • +
      +
    • +
    +
  18. +
  19. Optional: Remove os-prober:

    +
    apt purge --yes os-prober
    +
    +
    +

    This avoids error messages from update-grub. os-prober is only +necessary in dual-boot configurations.

    +
  20. +
  21. Set a root password:

    +
    passwd
    +
    +
    +
  22. +
  23. Configure swap:

    +

    Choose one of the following options if you want swap:

    +
      +
    • For an unencrypted single-disk install:

      +
      mkswap -f ${DISK}-part2
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \
      +    none swap discard 0 0 >> /etc/fstab
      +swapon -a
      +
      +
      +
    • +
    • For an unencrypted mirror or raidz topology:

      +
      apt install --yes mdadm
      +
      +# Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and
      +# raid-devices if necessary and specify the actual devices.
      +mdadm --create /dev/md0 --metadata=1.2 --level=mirror \
      +    --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2
      +mkswap -f /dev/md0
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value /dev/md0) \
      +    none swap discard 0 0 >> /etc/fstab
      +
      +
      +
    • +
    • For an encrypted (LUKS or ZFS native encryption) single-disk install:

      +
      apt install --yes cryptsetup
      +
      +echo swap ${DISK}-part2 /dev/urandom \
      +      swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
      +echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
      +
      +
      +
    • +
    • For an encrypted (LUKS or ZFS native encryption) mirror or raidz +topology:

      +
      apt install --yes cryptsetup mdadm
      +
      +# Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and
      +# raid-devices if necessary and specify the actual devices.
      +mdadm --create /dev/md0 --metadata=1.2 --level=mirror \
      +    --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2
      +echo swap /dev/md0 /dev/urandom \
      +      swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
      +echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
      +
      +
      +
    • +
    +
  24. +
  25. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  26. +
  27. Setup system groups:

    +
    addgroup --system lpadmin
    +addgroup --system lxd
    +addgroup --system sambashare
    +
    +
    +
  28. +
  29. Optional: Install SSH:

    +
    apt install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  30. +
+
+
+

Step 5: GRUB Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub-probe /boot
    +
    +
    +
  2. +
  3. Refresh the initrd files:

    +
    update-initramfs -c -k all
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup +does not support ZFS.

    +
  4. +
  5. Disable memory zeroing:

    +
    vi /etc/default/grub
    +# Add init_on_alloc=0 to: GRUB_CMDLINE_LINUX_DEFAULT
    +# Save and quit (or see the next step).
    +
    +
    +

    This is to address performance regressions.

    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Comment out: GRUB_TIMEOUT_STYLE=hidden
    +# Set: GRUB_TIMEOUT=5
    +# Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5
    +# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Install the boot loader:

    +

    Choose one of the following options:

    +
      +
    • For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub-install $DISK
      +
      +
      +

      Note that you are installing GRUB to the whole disk, not a partition.

      +

      If you are creating a mirror or raidz topology, repeat the +grub-install command for each disk in the pool.

      +
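      For example, a sketch for a two-disk pool (assuming DISK1 and DISK2 hold the +/dev/disk/by-id paths of your pool disks, as in the swap step earlier):

      +
      # A sketch, not an upstream step; adjust the variables to your disks.
      +for d in $DISK1 $DISK2 ; do
      +    grub-install $d
      +done
      +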
    • +
    • For UEFI booting, install GRUB to the ESP:

      +
      grub-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=ubuntu --recheck --no-floppy
      +
      +
      +
    • +
    +
  12. +
  13. Disable grub-initrd-fallback.service

    +

    For a mirror or raidz topology:

    +
    systemctl mask grub-initrd-fallback.service
    +
    +
    +

    This is the service for /boot/grub/grubenv which does not work on +mirrored or raidz topologies. Disabling this keeps it from blocking +subsequent mounts of /boot/grub if that mount ever fails.

    +

    Another option would be to set RequiresMountsFor=/boot/grub via a +drop-in unit, but that is more work to do here for no reason. Hopefully +this bug +will be fixed upstream.

    +
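    If you do want the drop-in alternative mentioned above, a minimal sketch (this override +file is an assumption of this note, not an upstream-provided unit) could look like:

    +
    # Sketch: require /boot/grub to be mounted before the fallback service runs.
    +mkdir -p /etc/systemd/system/grub-initrd-fallback.service.d
    +cat > /etc/systemd/system/grub-initrd-fallback.service.d/override.conf << EOF
    +[Unit]
    +RequiresMountsFor=/boot/grub
    +EOF
    +systemctl daemon-reload
    +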
  14. +
  15. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on bpool/BOOT/ubuntu_$UUID
    +zfs set canmount=on rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Once the files have data, stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
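    As a quick sanity check (a sketch, not an upstream step), confirm that no /mnt paths remain:

    +
    grep /mnt /etc/zfs/zfs-list.cache/* || echo "OK: no /mnt paths remain"
    +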
  16. +
  17. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  18. +
  19. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  20. +
  21. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  22. +
+
+
+

Step 6: First Boot

+
    +
  1. Install GRUB to additional disks:

    +

    For a UEFI mirror or raidz topology only:

    +
    dpkg-reconfigure grub-efi-amd64
    +
    +Select (using the space bar) all of the ESP partitions (partition 1 on
    +each of the pool disks).
    +
    +
    +
  2. +
  3. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}')
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \
    +    -o canmount=on -o mountpoint=/home/$username \
    +    rpool/USERDATA/${username}_$UUID
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username
    +
    +
    +
  4. +
+
+
+

Step 7: Full Software Installation

+
    +
  1. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  2. +
  3. Install a regular set of software:

    +

    Choose one of the following options:

    +
      +
    • Install a command-line environment only:

      +
      apt install --yes ubuntu-standard
      +
      +
      +
    • +
    • Install a full GUI environment:

      +
      apt install --yes ubuntu-desktop
      +
      +
      +

      Hint: If you are installing a full GUI environment, you will likely +want to manage your network with NetworkManager:

      +
      rm /etc/netplan/01-netcfg.yaml
      +vi /etc/netplan/01-network-manager-all.yaml
      +
      +
      +
      network:
      +  version: 2
      +  renderer: NetworkManager
      +
      +
      +
    • +
    +
  4. +
  5. Optional: Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +
  8. +
+
+
+

Step 8: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  4. +
  5. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    sudo vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +sudo systemctl restart ssh
    +
    +
    +
  6. +
  7. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Uncomment: GRUB_TIMEOUT_STYLE=hidden
    +# Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  8. +
  9. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
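    For example, a two-disk mirror could be handled with a loop like this (a sketch; the disk +names and the sequential luksN file names are illustrative):

    +
    i=1
    +for disk in /dev/disk/by-id/scsi-SATA_disk1 /dev/disk/by-id/scsi-SATA_disk2 ; do
    +    sudo cryptsetup luksHeaderBackup ${disk}-part4 \
    +        --header-backup-file luks${i}-header.dat
    +    i=$((i+1))
    +done
    +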
  10. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+# Replace “UUID” as appropriate; use zfs list to find it:
+zfs mount rpool/ROOT/ubuntu_UUID
+zfs mount bpool/BOOT/ubuntu_UUID
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+mount -t tmpfs tmpfs /mnt/run
+mkdir /mnt/run/lock
+chroot /mnt /bin/bash --login
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+
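For example (a sketch, assuming the module name is arcsas):

+
echo arcsas >> /etc/initramfs-tools/modules
+update-initramfs -c -k all
+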

Upgrade or downgrade the Areca driver if something like +RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
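For example, to wait 10 seconds (a sketch; pick a value long enough for your controller, then +regenerate the initrd so the setting takes effect):

+
echo 'ZFS_INITRD_PRE_MOUNTROOT_SLEEP=10' >> /etc/default/zfs
+update-initramfs -u -k all
+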
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.ms.fd:/usr/share/OVMF/OVMF_VARS.ms.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Ubuntu/index.html b/Getting Started/Ubuntu/index.html new file mode 100644 index 000000000..2f2fcd997 --- /dev/null +++ b/Getting Started/Ubuntu/index.html @@ -0,0 +1,183 @@ + + + + + + + Ubuntu — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Ubuntu

+ +
+

Installation

+
+

Note

+

If you want to use ZFS as your root filesystem, see the +Root on ZFS links below instead.

+
+

On Ubuntu, ZFS is included in the default Linux kernel packages. +To install the ZFS utilities, first make sure universe is enabled in +/etc/apt/sources.list:

+
deb http://archive.ubuntu.com/ubuntu <CODENAME> main universe
+
+
+
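Alternatively, if software-properties-common is installed, universe can be enabled with a single +command (a sketch):

+
sudo add-apt-repository universe
+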

Then install zfsutils-linux:

+
apt update
+apt install zfsutils-linux
+
+
+
+
+

Root on ZFS

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/index.html b/Getting Started/index.html new file mode 100644 index 000000000..376d4906c --- /dev/null +++ b/Getting Started/index.html @@ -0,0 +1,260 @@ + + + + + + + Getting Started — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Getting Started

+

To get started with OpenZFS, refer to the provided documentation for your +distribution. It will cover the recommended installation method and any +distribution-specific information. First-time OpenZFS users are +encouraged to check out Aaron Toponce’s excellent +documentation.

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/openSUSE/index.html b/Getting Started/openSUSE/index.html new file mode 100644 index 000000000..89df1eccd --- /dev/null +++ b/Getting Started/openSUSE/index.html @@ -0,0 +1,174 @@ + + + + + + + openSUSE — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

openSUSE

+ +
+

Installation

+

If you want to use ZFS as your root filesystem, see the Root on ZFS +links below instead.

+

ZFS packages are not included in the official openSUSE repositories, but the openSUSE +filesystems project repository provides packages for several filesystems, including OpenZFS.

+

openSUSE has three main distribution branches: Tumbleweed, Leap, and SLE. ZFS packages are available for all three.

+
+ +
+

Root on ZFS

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/openSUSE/openSUSE Leap Root on ZFS.html b/Getting Started/openSUSE/openSUSE Leap Root on ZFS.html new file mode 100644 index 000000000..1c47406ac --- /dev/null +++ b/Getting Started/openSUSE/openSUSE Leap Root on ZFS.html @@ -0,0 +1,1442 @@ + + + + + + + openSUSE Leap Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

openSUSE Leap Root on ZFS

+ +
+

Overview

+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
  • This is not an official openSUSE HOWTO page. This document will be updated if Root on ZFS support is +added to openSUSE in the future. +Also, openSUSE’s default system installer, Yast2, does not support ZFS. The zypper-based setup method +used in this page (without Yast2) is based on installation methods developed from the experience of +people in the community. +For more information, please see the external links.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @Zaryob.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo zypper install python3-pip
    +pip3 install -r docs/requirements.txt
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+

Notes

+
    +
  • You can use the unofficial script LroZ (Linux Root On Zfs), which is based on this manual and automates most steps.

  • +
+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the openSUSE Live CD. If prompted, login with the username +linux without password. Connect your system to the Internet as +appropriate (e.g. join your WiFi network). Open a terminal.

  2. +
  3. Check your openSUSE Leap release:

    +
    lsb_release -d
    +Description:    openSUSE Leap {$release}
    +
    +
    +
  4. +
  5. Setup and update the repositories:

    +
    sudo zypper addrepo https://download.opensuse.org/repositories/filesystems/$(lsb_release -rs)/filesystems.repo
    +sudo zypper refresh   # Refresh all repositories
    +
    +
    +
  6. +
  7. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    sudo zypper install openssh-server
    +sudo systemctl restart sshd.service
    +
    +
    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh user@IP. Do not forget to set a password for the user with passwd.

    +
  8. +
  9. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  10. +
  11. Become root:

    +
    sudo -i
    +
    +
    +
  12. +
  13. Install ZFS in the Live CD environment:

    +
    zypper install zfs zfs-kmp-default
    +zypper install gdisk dkms
    +modprobe zfs
    +
    +
    +
  14. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    If the disk was previously used in an MD array:

    +
    zypper install mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition:
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Partition your disk(s):

    +

    Run this if you need legacy (BIOS) booting:

    +
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
    +
    +
    +

    Run this for UEFI booting (for use now or in the future):

    +
    sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
    +
    +
    +

    Run this for the boot pool:

    +
    sgdisk     -n3:0:+1G      -t3:BF01 $DISK
    +
    +
    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    Hints:

    +
      +
    • If you are creating a mirror or raidz topology, repeat the partitioning commands for all the disks which will be part of the pool.

    • +
    +
  6. +
  7. Create the boot pool:

    +
    zpool create \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o ashift=12 -d \
    +    -o feature@async_destroy=enabled \
    +    -o feature@bookmarks=enabled \
    +    -o feature@embedded_data=enabled \
    +    -o feature@empty_bpobj=enabled \
    +    -o feature@enabled_txg=enabled \
    +    -o feature@extensible_dataset=enabled \
    +    -o feature@filesystem_limits=enabled \
    +    -o feature@hole_birth=enabled \
    +    -o feature@large_blocks=enabled \
    +    -o feature@lz4_compress=enabled \
    +    -o feature@spacemap_histogram=enabled \
    +    -o feature@zpool_checkpoint=enabled \
    +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    +    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    +    -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

    +
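    If you want to double-check which features ended up enabled on the boot pool, you can list +them (a sketch, not an upstream step):

    +
    zpool get all bpool | grep feature@
    +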

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer should be safe but the boot pool is small enough +that it is unlikely to be necessary.

    • +
    • The spacemap_v2 feature has been tested and is safe to use. The boot +pool is small, so this does not matter in practice.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
  8. +
  9. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O encryption=on \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      zypper install cryptsetup
      +cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +
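    Regarding the ashift note above: if you are unsure about your drive's sector sizes, you can +inspect what the kernel reports (a sketch; note that some drives misreport 512 B physical sectors):

    +
    lsblk -o NAME,PHY-SEC,LOG-SEC $DISK
    +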

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    • If you want to use grub bootloader, you must set:

      +
      -o feature@async_destroy=enabled \
      +-o feature@bookmarks=enabled \
      +-o feature@embedded_data=enabled \
      +-o feature@empty_bpobj=enabled \
      +-o feature@enabled_txg=enabled \
      +-o feature@extensible_dataset=enabled \
      +-o feature@filesystem_limits=enabled \
      +-o feature@hole_birth=enabled \
      +-o feature@large_blocks=enabled \
      +-o feature@lz4_compress=enabled \
      +-o feature@spacemap_histogram=enabled \
      +-o feature@zpool_checkpoint=enabled \
      +
      +
      +

      for your root pool. This is relevant for GRUB 2.04 and Leap 15.3. Do not run zpool +upgrade on this pool, or you will lose the ability to use the grub2-install command.

      +
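      If you later need an additional feature on this pool, you can enable it individually rather +than running zpool upgrade, after confirming GRUB can handle it (a sketch using an illustrative, +read-only-compatible feature name):

      +
      zpool set feature@spacemap_v2=enabled rpool
      +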
    • +
    +
  10. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +

    On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality has been implemented in Ubuntu 20.04 with +the zsys tool, though its dataset layout is more complicated. Even +without such a tool, the rpool/ROOT and bpool/BOOT containers can still +be used for manually created clones. That said, this HOWTO assumes a single +filesystem for /boot for simplicity.

    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/suse
    +zfs mount rpool/ROOT/suse
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/suse
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
  4. +
  5. Create datasets:

    +
    zfs create                                 rpool/home
    +zfs create -o mountpoint=/root             rpool/home/root
    +chmod 700 /mnt/root
    +zfs create -o canmount=off                 rpool/var
    +zfs create -o canmount=off                 rpool/var/lib
    +zfs create                                 rpool/var/log
    +zfs create                                 rpool/var/spool
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to exclude these from snapshots:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/cache
    +zfs create -o com.sun:auto-snapshot=false  rpool/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If you use /opt on this system:

    +
    zfs create                                 rpool/opt
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create                                 rpool/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create -o canmount=off                 rpool/usr
    +zfs create                                 rpool/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create                                 rpool/var/games
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create                                 rpool/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create                                 rpool/var/snap
    +
    +
    +

    If this system will use Flatpak packages:

    +
    zfs create                                 rpool/var/lib/flatpak
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create                                 rpool/var/www
    +
    +
    +

    If this system will use GNOME:

    +
    zfs create                                 rpool/var/lib/AccountsService
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/docker
    +
    +
    +

    If this system will use NFS (locking):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
    +
    +
    +

    Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
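    For example, to cap the optional /tmp dataset created above (a sketch; adjust the size to taste):

    +
    zfs set quota=4G rpool/tmp
    +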
  6. +
  7. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs -p
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  8. +
+
+
+

Step 4. Install System

+
    +
  1. Add repositories into chrooting directory:

    +
    zypper --root /mnt ar http://download.opensuse.org/distribution/leap/$(lsb_release -rs)/repo/non-oss  non-oss
    +zypper --root /mnt ar http://download.opensuse.org/distribution/leap/$(lsb_release -rs)/repo/oss oss
    +zypper --root /mnt ar http://download.opensuse.org/update/leap/$(lsb_release -rs)/oss  update-oss
    +zypper --root /mnt ar http://download.opensuse.org/update/leap/$(lsb_release -rs)/non-oss update-nonoss
    +
    +
    +
  2. +
  3. Generate repository indexes:

    +
    zypper --root /mnt refresh
    +
    +
    +

    You will get a key fingerprint prompt; press a to always trust the key and continue:

    +
    New repository or package signing key received:
    +
    +Repository:       oss
    +Key Name:         openSUSE Project Signing Key <opensuse@opensuse.org>
    +Key Fingerprint:  22C07BA5 34178CD0 2EFE22AA B88B2FD4 3DBDC284
    +Key Created:      Mon May  5 11:37:40 2014
    +Key Expires:      Thu May  2 11:37:40 2024
    +Rpm Name:         gpg-pubkey-3dbdc284-53674dd4
    +
    +Do you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r):
    +
    +
    +
  4. +
  5. Install openSUSE Leap with zypper:

    +

    If you install the base pattern, zypper will install busybox-grep, which masks the default kernel package. +That is why I recommend installing the enhanced_base pattern if you are new to openSUSE. However, enhanced_base +pulls in extra packages that can be unwanted on a server, so choose one of the following:

    +
      +
    1. Install base packages of openSUSE Leap with zypper (Recommended for server):

      +
      zypper --root /mnt install -t pattern base
      +
      +
      +
    2. +
    3. Install enhanced base of openSUSE Leap with zypper (Recommended for desktop):

      +
      zypper --root /mnt install -t pattern enhanced_base
      +
      +
      +
    4. +
    +
  6. +
  7. Install openSUSE zypper package system into chroot:

    +
    zypper --root /mnt install zypper
    +
    +
    +
  8. +
  9. Recommended: Install openSUSE yast2 system into chroot:

    +
    zypper --root /mnt install yast2
    +zypper --root /mnt install -t pattern yast2_basis
    +
    +
    +

    This makes it easier for beginners to configure the network and other settings.

    +
  10. +
+

To install a desktop environment, see the openSUSE wiki

+
+
+

Step 5: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    echo HOSTNAME > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +

    Add a line:

    +
    127.0.1.1       HOSTNAME
    +
    +
    +

    or if the system has a real name in DNS:

    +
    127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Copy network information:

    +
    rm /mnt/etc/resolv.conf
    +cp /etc/resolv.conf /mnt/etc/
    +
    +
    +

    You will reconfigure the network with yast2 later.

    +
  4. +
  5. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +chroot /mnt /usr/bin/env DISK=$DISK bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  6. +
  7. Configure a basic system environment:

    +
    ln -s /proc/self/mounts /etc/mtab
    +zypper refresh
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    locale -a
    +
    +
    +

    The output must include these locales:

    +
      +
    • C

    • +
    • C.utf8

    • +
    • en_US.utf8

    • +
    • POSIX

    • +
    +

    Find your locale in the output of locale -a, then set it with the following command:

    +
    localectl set-locale LANG=en_US.UTF-8
    +
    +
    +
  8. +
  9. Optional: Reinstallation for stability:

    +

    Some packages may have minor configuration errors after installation; if you wish, you can address +this now. openSUSE has no direct equivalent of dpkg-reconfigure; zypper install -f is the closest +alternative, but note that it reinstalls the packages.

    +
    zypper install -f permissions-config iputils ca-certificates  ca-certificates-mozilla pam shadow dbus libutempter0 suse-module-tools util-linux
    +
    +
    +
  10. +
  11. Install kernel:

    +
    zypper install kernel-default kernel-firmware
    +
    +
    +

    Note: If you installed the base pattern, you need to remove busybox-grep before you can install the kernel-default package.

    +
  12. +
  13. Install ZFS in the chroot environment for the new system:

    +
    zypper install lsb-release
    +zypper addrepo https://download.opensuse.org/repositories/filesystems/`lsb_release -rs`/filesystems.repo
    +zypper refresh   # Refresh all repositories
    +zypper install zfs zfs-kmp-default
    +
    +
    +
  14. +
  15. For LUKS installs only, setup /etc/crypttab:

    +
    zypper install cryptsetup
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) none \
    +    luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for cryptsetup not supporting +ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
  16. +
  17. For LUKS installs only, fix cryptsetup naming for ZFS:

    +
    echo 'ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}"
    +ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}"' >> /etc/udev/rules.d/99-local-crypt.rules
    +
    +
    +
  18. +
  19. Recommended: Generate and setup hostid:

    +
    cd /root
    +zypper install wget
    +wget https://github.com/openzfs/zfs/files/4537537/genhostid.sh.gz
    +gzip -d genhostid.sh.gz
    +chmod +x genhostid.sh
    +zgenhostid `/root/genhostid.sh`
    +
    +
    +

    Check that the generated and system hostids match:

    +
    /root/genhostid.sh
    +hostid
    +
    +
    +
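    Alternatively (a sketch, not part of the original steps), zgenhostid can generate a random +hostid by itself:

    +
    rm -f /etc/hostid
    +zgenhostid
    +hostid
    +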
  20. +
  21. Install GRUB

    +

    Choose one of the following options:

    +
      +
    • Install GRUB for legacy (BIOS) booting:

      +
      zypper install grub2-x86_64-pc
      +
      +
      +

      If your processor is 32-bit, use grub2-i386-pc instead of the x86_64 package.

      +
    • +
    • Install GRUB for UEFI booting:

      +
      zypper install grub2-x86_64-efi dosfstools os-prober
      +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
      +mkdir /boot/efi
      +echo /dev/disk/by-uuid/$(blkid -s PARTUUID -o value ${DISK}-part2) \
      +    /boot/efi vfat defaults 0 0 >> /etc/fstab
      +mount /boot/efi
      +
      +
      +

      Notes:

      +
        +
      • +
        The -s 1 for mkdosfs is only necessary for drives which present +4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size +(given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

        +
        +
        +
      • +
      • +
        For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later.

        +
        +
        +
      • +
      +
    • +
    +
  22. +
  23. Optional: Remove os-prober:

    +
    zypper remove os-prober
    +
    +
    +

    This avoids error messages from update-bootloader. os-prober is only +necessary in dual-boot configurations.

    +
  24. +
  25. Set a root password:

    +
    passwd
    +
    +
    +
  26. +
  27. Enable importing bpool

    +

    This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

    +
    vi /etc/systemd/system/zfs-import-bpool.service
    +
    +
    +
    [Unit]
    +DefaultDependencies=no
    +Before=zfs-import-scan.service
    +Before=zfs-import-cache.service
    +
    +[Service]
    +Type=oneshot
    +RemainAfterExit=yes
    +ExecStart=/usr/sbin/zpool import -N -o cachefile=none bpool
    +# Work-around to preserve zpool cache:
    +ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
    +ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
    +
    +[Install]
    +WantedBy=zfs-import.target
    +
    +
    +
    systemctl enable zfs-import-bpool.service
    +
    +
    +
  28. +
  29. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  30. +
+
+
+

Step 6: Kernel Installation

+
    +
  1. Add zfs module into dracut:

    +
    echo 'zfs'>> /etc/modules-load.d/zfs.conf
    +
    +
    +
  2. +
  3. Kernel version of livecd can differ from currently installed version. Get kernel version of your new OS:

    +
    kernel_version=$(find /boot/vmlinuz-* | grep -Eo '[[:digit:]]*\.[[:digit:]]*\.[[:digit:]]*\-.*-default')
    +
    +
    +
  4. +
  5. Refresh kernel files:

    +
    kernel-install add "$kernel_version" /boot/vmlinuz-"$kernel_version"
    +
    +
    +
  6. +
  7. Refresh the initrd files:

    +
    mkinitrd
    +
    +
    +

    Note: After some installations, the LUKS partition cannot be seen by dracut, +and the initrd build will print “Failure occurred during following action: +configuring encrypted DM device X VOLUME_CRYPTSETUP_FAILED“. To fix this +issue, check your cryptsetup installation. +Note: Although we configure the zfs module in /etc/modules-load.d, if dracut still does not pick it up, +add it to the initrd by force: +dracut --kver $(uname -r) --force --add-drivers "zfs"

    +
  8. +
+
+
+

Step 7: Grub2 Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub2-probe /boot
    +
    +
    +

    The output must be zfs.

    +
  2. +
  3. If you are having trouble with the grub2-probe command, do this:

    +
    echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile
    +export ZPOOL_VDEV_NAME_PATH=YES
    +
    +
    +

    then go back to the grub2-probe step.

    +
  4. +
  5. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  6. +
  7. Update the boot configuration:

    +
    update-bootloader
    +
    +
    +

    Note: Ignore errors from osprober, if present. +Note: If you have had trouble with the grub2 installation, I suggest using systemd-boot. +Note: If this command does not give any output, generate a classic grub.cfg with the following command: +grub2-mkconfig -o /boot/grub2/grub.cfg

    +
  8. +
  9. Check that /boot/grub2/grub.cfg has a menuentry containing root=ZFS=rpool/ROOT/suse, like this:

    +
    linux   /boot@/vmlinuz-5.3.18-150300.59.60-default root=ZFS=rpool/ROOT/suse
    +
    +
    +

    If not, change /etc/default/grub:

    +
    GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/suse"
    +
    +
    +

    and repeat the previous step.

    +
  10. +
  11. Install the boot loader:

    +
      +
    1. For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub2-install $DISK
      +
      +
      +
    2. +
    +

    Note that you are installing GRUB to the whole disk, not a partition.

    +

    If you are creating a mirror or raidz topology, repeat the grub-install +command for each disk in the pool.

    +
      +
    1. For UEFI booting, install GRUB to the ESP:

      +
      grub2-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=opensuse --recheck --no-floppy
      +
      +
      +

      It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

      +
    2. +
    +
  12. +
+
+
+

Step 8: Systemd-Boot Installation

+

Warning: This will break your Yast2 bootloader configuration. Make sure you +cannot fix the problem you are having with grub2 before proceeding. I decided to write this +part because grub2 sometimes does not see the rpool pool.

+
    +
  1. Install systemd-boot:

    +
    bootctl install
    +
    +
    +

    Note: Only if the previous command replied “Failed to get machine id: No medium found”, run:

    +
    +

    systemd-machine-id-setup

    +
    +

    and repeat the systemd-boot installation.

    +
  2. +
  3. Configure bootloader configuration:

    +
    tee -a /boot/efi/loader/loader.conf << EOF
    +default openSUSE_Leap.conf
    +timeout 5
    +console-mode auto
    +EOF
    +
    +
    +
  4. +
  5. Write Entries:

    +
    tee -a /boot/efi/loader/entries/openSUSE_Leap.conf << EOF
    +title   openSUSE Leap
    +linux   /EFI/openSUSE/vmlinuz
    +initrd  /EFI/openSUSE/initrd
    +options root=zfs:rpool/ROOT/suse boot=zfs
    +EOF
    +
    +
    +
  6. +
  7. Copy files into EFI:

    +
    mkdir /boot/efi/EFI/openSUSE
    +cp /boot/{vmlinuz,initrd} /boot/efi/EFI/openSUSE
    +
    +
    +
  8. +
  9. Update systemd-boot variables:

    +
    bootctl update
    +
    +
    +
  10. +
+
+
+

Step 9: Filesystem Configuration

+
    +
  1. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on     bpool/BOOT/suse
    +zfs set canmount=noauto rpool/ROOT/suse
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  2. +
+
+
+

Step 10: First Boot

+
    +
  1. Optional: Install SSH:

    +
    zypper install -y openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  2. +
  3. Optional: Snapshot the initial installation:

    +
    zfs snapshot -r bpool/BOOT/suse@install
    +zfs snapshot -r rpool/ROOT/suse@install
    +
    +
    +

    In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

    +
  4. +
  5. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  6. +
  7. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  10. +
  11. Create a user account:

    +

    Replace username with your desired username:

    +
    zfs create rpool/home/username
    +adduser username
    +
    +cp -a /etc/skel/. /home/username
    +chown -R username:username /home/username
    +usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
    +
    +
    +
  12. +
  13. Mirror GRUB

    +

    If you installed to multiple disks, install GRUB on the additional +disks.

    +
      +
    • For legacy (BIOS) booting: +Check whether the firmware is in EFI mode:

      +
      efibootmgr -v
      +
      +
      +

      This must return a message containing legacy_boot.

      +

      Then reconfigure grub:

      +
      grub2-install $DISK
      +
      +
      +

      Repeat the grub2-install command for each disk (not partition) in your pool.

      +
    • +
    • For UEFI booting:

      +
      umount /boot/efi
      +
      +
      +

      For the second and subsequent disks (increment opensuse-2 to -3, etc.):

      +
      dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
      +   of=/dev/disk/by-id/scsi-SATA_disk2-part2
      +efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
      +    -p 2 -L "opensuse-2" -l '\EFI\opensuse\grubx64.efi'
      +
      +mount /boot/efi
      +
      +
      +
    • +
    +
  14. +
+
+
+

Step 11: Optional: Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is a bug report upstream.

+
    +
  1. Create a volume dataset (zvol) for use as a swap device:

    +
    zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    +    -o logbias=throughput -o sync=always \
    +    -o primarycache=metadata -o secondarycache=none \
    +    -o com.sun:auto-snapshot=false rpool/swap
    +
    +
    +

    You can adjust the size (the 4G part) to your needs.

    +

    The compression algorithm is set to zle because it is the cheapest +available algorithm. As this guide recommends ashift=12 (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior.

    +
  2. +
  3. Configure the swap device:

    +

    Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

    +
    mkswap -f /dev/zvol/rpool/swap
    +echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
    +echo RESUME=none > /etc/initramfs-tools/conf.d/resume
    +
    +
    +

    The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

    +
  4. +
  5. Enable the swap device:

    +
    swapon -av
    +
    +
    +
  6. +
+
+
+

Step 12: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Delete the snapshots of the initial installation:

    +
    sudo zfs destroy bpool/BOOT/suse@install
    +sudo zfs destroy rpool/ROOT/suse@install
    +
    +
    +
  4. +
  5. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +systemctl restart sshd
    +
    +
    +
  8. +
  9. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-bootloader
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  12. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
zypper install cryptsetup
+cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+zfs mount rpool/ROOT/suse
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+

Upgrade or downgrade the Areca driver if something like +RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.
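
+

For example, a minimal sketch (the 5-second delay below is an assumption; pick a value long enough for your drives, and regenerate the initrd afterwards so the setting takes effect):

+
# Give slow controllers extra time before the pool is imported at boot.
+echo 'ZFS_INITRD_PRE_MOUNTROOT_SLEEP=5' >> /etc/default/zfs
+mkinitrd
+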

+
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo zypper install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.html b/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.html new file mode 100644 index 000000000..eb70ccf68 --- /dev/null +++ b/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.html @@ -0,0 +1,1389 @@ + + + + + + + openSUSE Tumbleweed Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

openSUSE Tumbleweed Root on ZFS

+ +
+

Overview

+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
  • This is not an official openSUSE HOWTO page. This document will be updated if Root on ZFS support is added to openSUSE in the future. Also, openSUSE’s default system installer Yast2 does not support ZFS. The method used in this page, setting up the system with zypper and without Yast2, is based on installation methods developed from the experience of people in the community. For more information about this, please look at the external links.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @Zaryob.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo zypper install python3-pip
    +pip3 install -r docs/requirements.txt
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.
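
+

Whichever option you choose, you can sanity-check the result once the root pool has been created later in this guide. A minimal check, assuming the rpool and luks1 names used below:

+
# ZFS native encryption: confirm datasets are encrypted and keys are loaded.
+zfs get -r encryption,keystatus rpool
+# LUKS: confirm the mapping is active.
+cryptsetup status luks1
+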

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the openSUSE Live CD. If prompted, login with the username +live and password live. Connect your system to the Internet as +appropriate (e.g. join your WiFi network). Open a terminal.

  2. +
  3. Setup and update the repositories:

    +
    sudo zypper addrepo https://download.opensuse.org/repositories/filesystems/openSUSE_Tumbleweed/filesystems.repo
    +sudo zypper refresh  # Refresh all repositories
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    sudo zypper install openssh-server
    +sudo systemctl restart sshd.service
    +
    +
    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh user@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    zypper install zfs zfs-kmp-default
    +zypper install gdisk
    +modprobe zfs
    +
    +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    If the disk was previously used in an MD array:

    +
    zypper install mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition:
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Partition your disk(s):

    +

    Run this if you need legacy (BIOS) booting:

    +
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
    +
    +
    +

    Run this for UEFI booting (for use now or in the future):

    +
    sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
    +
    +
    +

    Run this for the boot pool:

    +
    sgdisk     -n3:0:+1G      -t3:BF01 $DISK
    +
    +
    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

    +
  6. +
  7. Create the boot pool:

    +
    zpool create \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o ashift=12 -d \
    +    -o feature@async_destroy=enabled \
    +    -o feature@bookmarks=enabled \
    +    -o feature@embedded_data=enabled \
    +    -o feature@empty_bpobj=enabled \
    +    -o feature@enabled_txg=enabled \
    +    -o feature@extensible_dataset=enabled \
    +    -o feature@filesystem_limits=enabled \
    +    -o feature@hole_birth=enabled \
    +    -o feature@large_blocks=enabled \
    +    -o feature@lz4_compress=enabled \
    +    -o feature@spacemap_histogram=enabled \
    +    -o feature@zpool_checkpoint=enabled \
    +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    +    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    +    -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer should be safe but the boot pool is small enough +that it is unlikely to be necessary.

    • +
    • The spacemap_v2 feature has been tested and is safe to use. The boot +pool is small, so this does not matter in practice.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
  8. +
  9. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O encryption=on \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      zypper install cryptsetup
      +cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
  10. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +

    On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality has been implemented in Ubuntu 20.04 with +the zsys tool, though its dataset layout is more complicated. Even +without such a tool, the rpool/ROOT and bpool/BOOT containers can still +be used for manually created clones. That said, this HOWTO assumes a single +filesystem for /boot for simplicity.

    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/suse
    +zfs mount rpool/ROOT/suse
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/suse
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
  4. +
  5. Create datasets:

    +
    zfs create                                 rpool/home
    +zfs create -o mountpoint=/root             rpool/home/root
    +chmod 700 /mnt/root
    +zfs create -o canmount=off                 rpool/var
    +zfs create -o canmount=off                 rpool/var/lib
    +zfs create                                 rpool/var/log
    +zfs create                                 rpool/var/spool
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to exclude these from snapshots:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/cache
    +zfs create -o com.sun:auto-snapshot=false  rpool/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If you use /opt on this system:

    +
    zfs create                                 rpool/opt
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create                                 rpool/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create -o canmount=off                 rpool/usr
    +zfs create                                 rpool/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create                                 rpool/var/games
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create                                 rpool/var/spool/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create                                 rpool/var/snap
    +
    +
    +

    If this system will use Flatpak packages:

    +
    zfs create                                 rpool/var/lib/flatpak
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create                                 rpool/var/www
    +
    +
    +

    If this system will use GNOME:

    +
    zfs create                                 rpool/var/lib/AccountsService
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/docker
    +
    +
    +

    If this system will use NFS (locking):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
    +
    +
    +

    Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
  6. +
  7. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs -p
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  8. +
+
+
+

Step 4. Install System

+
    +
  1. Add repositories into chrooting directory:

    +
    zypper --root /mnt ar http://download.opensuse.org/tumbleweed/repo/non-oss/ non-oss
    +zypper --root /mnt ar http://download.opensuse.org/tumbleweed/repo/oss/ oss
    +
    +
    +
  2. +
  3. Generate repository indexes:

    +
    zypper --root /mnt refresh
    +
    +
    +

    You will get a fingerprint exception; press a to always trust the key and continue:

    +
    New repository or package signing key received:
    +
    +Repository:       oss
    +Key Name:         openSUSE Project Signing Key <opensuse@opensuse.org>
    +Key Fingerprint:  22C07BA5 34178CD0 2EFE22AA B88B2FD4 3DBDC284
    +Key Created:      Mon May  5 11:37:40 2014
    +Key Expires:      Thu May  2 11:37:40 2024
    +Rpm Name:         gpg-pubkey-3dbdc284-53674dd4
    +
    +Do you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r):
    +
    +
    +
  4. +
  5. Install openSUSE Tumbleweed with zypper:

    +

    If you install the base pattern, zypper will install busybox-grep, which masks the default kernel package. For that reason, the enhanced_base pattern is recommended if you are new to openSUSE. However, enhanced_base pulls in extra packages that may be unwanted on a server. Choose one of the following options:

    +
      +
    1. Install base packages of openSUSE Tumbleweed with zypper (Recommended for server):

      +
      zypper --root /mnt install -t pattern base
      +
      +
      +
    2. +
    3. Install enhanced base of openSUSE Tumbleweed with zypper (Recommended for desktop):

      +
      zypper --root /mnt install -t pattern enhanced_base
      +
      +
      +
    4. +
    +
  6. +
  7. Install openSUSE zypper package system into chroot:

    +
    zypper --root /mnt install zypper
    +
    +
    +
  8. +
  9. Recommended: Install openSUSE yast2 system into chroot:

    +
    zypper --root /mnt install yast2
    +
    +
    +
  10. +
+
+
+

Note

+

If your /etc/resolv.conf file is empty, run this command:

+

echo "nameserver 8.8.4.4" | tee -a /mnt/etc/resolv.conf

+
+

This makes it easier for beginners to configure the network and other settings.

+
+

To install a desktop environment, see the openSUSE wiki

+
+
+

Step 5: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    echo HOSTNAME > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +

    Add a line:

    +
    127.0.1.1       HOSTNAME
    +
    +
    +

    or if the system has a real name in DNS:

    +
    127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Copy network information:

    +
    cp /etc/resolv.conf /mnt/etc
    +
    +
    +

    You will reconfigure the network with yast2 later.

    +
    +

    Note

    +

    If your /etc/resolv.conf file is empty, run this command:

    +

    echo "nameserver 8.8.4.4" | tee -a /mnt/etc/resolv.conf

    +
    +
  4. +
  5. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +chroot /mnt /usr/bin/env DISK=$DISK bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  6. +
  7. Configure a basic system environment:

    +
    ln -s /proc/self/mounts /etc/mtab
    +zypper refresh
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    locale -a
    +
    +
    +

    The output must include these locales:

    +
      +
    • C

    • +
    • C.UTF-8

    • +
    • en_US.utf8

    • +
    • POSIX

    • +
    +

    Find your locale in the output of locale -a, then set it with the following command:

    +
    localectl set-locale LANG=en_US.UTF-8
    +
    +
    +
  8. +
  9. Optional: Reinstallation for stability:

    +

    After installation, some packages may have minor configuration errors; if you wish, you can reinstall them. Since there is no command like dpkg-reconfigure in openSUSE, zypper install -f is the closest alternative, but note that it reinstalls the packages.

    +
    zypper install -f permissions-config iputils ca-certificates  ca-certificates-mozilla pam shadow dbus-1 libutempter0 suse-module-tools util-linux
    +
    +
    +
  10. +
  11. Install kernel:

    +
    zypper install kernel-default kernel-firmware
    +
    +
    +
    +

    Note

    +

    If you installed the base pattern, you need to remove busybox-grep before installing the kernel-default package.

    +
    +
  12. +
  13. Install ZFS in the chroot environment for the new system:

    +
    zypper addrepo https://download.opensuse.org/repositories/filesystems/openSUSE_Tumbleweed/filesystems.repo
    +zypper refresh   # Refresh all repositories
    +zypper install zfs
    +
    +
    +
  14. +
  15. For LUKS installs only, setup /etc/crypttab:

    +
    zypper install cryptsetup
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) none \
    +    luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for the fact that cryptsetup does not support ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
  16. +
  17. For LUKS installs only, fix cryptsetup naming for ZFS:

    +
    echo 'ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}"
    +ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}"' >> /etc/udev/rules.d/99-local-crypt.rules
    +
    +
    +
  18. +
  19. Install GRUB

    +

    Choose one of the following options:

    +
      +
    • Install GRUB for legacy (BIOS) booting:

      +
      zypper install grub2-i386-pc
      +
      +
      +
    • +
    • Install GRUB for UEFI booting:

      +
      zypper install grub2-x86_64-efi dosfstools os-prober
      +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
      +mkdir /boot/efi
      +echo /dev/disk/by-uuid/$(blkid -s PARTUUID -o value ${DISK}-part2) \
      +   /boot/efi vfat defaults 0 0 >> /etc/fstab
      +mount /boot/efi
      +
      +
      +

      Notes:

      +
        +
      • +
        The -s 1 for mkdosfs is only necessary for drives which present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size (given the partition size of 512 MiB) for FAT32. It also works fine on drives which present 512 B sectors.

        +
        +
        +
      • +
      • +
        For a mirror or raidz topology, this step only installs GRUB on the first disk. The other disk(s) will be handled later.

        +
        +
        +
      • +
      +
    • +
    +
  20. +
  21. Optional: Remove os-prober:

    +
    zypper remove os-prober
    +
    +
    +

    This avoids error messages from update-bootloader. os-prober is only +necessary in dual-boot configurations.

    +
  22. +
  23. Set a root password:

    +
    passwd
    +
    +
    +
  24. +
  25. Enable importing bpool

    +

    This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

    +
    vi /etc/systemd/system/zfs-import-bpool.service
    +
    +
    +
    [Unit]
    +DefaultDependencies=no
    +Before=zfs-import-scan.service
    +Before=zfs-import-cache.service
    +
    +[Service]
    +Type=oneshot
    +RemainAfterExit=yes
    +ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    +# Work-around to preserve zpool cache:
    +ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
    +ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
    +
    +[Install]
    +WantedBy=zfs-import.target
    +
    +
    +
    systemctl enable zfs-import-bpool.service
    +
    +
    +
  26. +
  27. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  28. +
+
+
+

Step 6: Kernel Installation

+
    +
  1. Add zfs module into dracut:

    +
    echo 'zfs'>> /etc/modules-load.d/zfs.conf
    +
    +
    +
  2. +
  3. Refresh kernel files:

    +
    kernel-install add $(uname -r) /boot/vmlinuz-$(uname -r)
    +
    +
    +
  4. +
  5. Refresh the initrd files:

    +
    mkinitrd
    +
    +
    +

    Note: After some installations, the LUKS partition is not seen by dracut, and the boot fails with "Failure occured during following action: configuring encrypted DM device X VOLUME_CRYPTSETUP_FAILED". To fix this issue, check your cryptsetup installation. Note: Although the zfs module is added via /etc/modules-load.d, if dracut still does not pick it up, add it to the initrd by force: dracut --kver $(uname -r) --force --add-drivers "zfs"

    +
  6. +
+
+
+

Step 7: Grub2 Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub2-probe /boot
    +
    +
    +

    Output must be zfs

    +
  2. +
  3. If you are having trouble with the grub2-probe command, do this:

    +
    echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile
    +export ZPOOL_VDEV_NAME_PATH=YES
    +
    +
    +

    then go back to the grub2-probe step.

    +
  4. +
  5. Workaround GRUB’s missing zpool-features support:

    +
    vi /etc/default/grub
    +# Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/suse"
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-bootloader
    +
    +
    +

    Note: Ignore errors from osprober, if present. Note: If you have had trouble with the grub2 installation, consider using systemd-boot instead (see Step 8). Note: If this command does not give any output, generate a classic grub.cfg with the following command: grub2-mkconfig -o /boot/grub2/grub.cfg

    +
  10. +
  11. Install the boot loader:

    +
      +
    1. For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub2-install $DISK
      +
      +
      +
    2. +
    +

    Note that you are installing GRUB to the whole disk, not a partition.

    +

    If you are creating a mirror or raidz topology, repeat the grub2-install command for each disk in the pool.

    +
      +
    1. For UEFI booting, install GRUB to the ESP:

      +
      grub2-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=opensuse --recheck --no-floppy
      +
      +
      +

      It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

      +
    2. +
    +
  12. +
+
+
+

Step 8: Systemd-Boot Installation

+

Warning: This will break your Yast2 bootloader configuration. Only use this section if you were unable to fix the problem you are having with grub2. It exists because, in some cases, grub2 does not see the rpool pool.

+
    +
  1. Install systemd-boot:

    +
    bootctl install
    +
    +
    +
  2. +
  3. Configure bootloader configuration:

    +
    tee -a /boot/efi/loader/loader.conf << EOF
    +default openSUSE_Tumbleweed.conf
    +timeout 5
    +console-mode auto
    +EOF
    +
    +
    +
  4. +
  5. Write Entries:

    +
    tee -a /boot/efi/loader/entries/openSUSE_Tumbleweed.conf << EOF
    +title   openSUSE Tumbleweed
    +linux   /EFI/openSUSE/vmlinuz
    +initrd  /EFI/openSUSE/initrd
    +options root=zfs=rpool/ROOT/suse boot=zfs
    +EOF
    +
    +
    +
  6. +
  7. Copy files into EFI:

    +
    mkdir /boot/efi/EFI/openSUSE
    +cp /boot/{vmlinuz,initrd} /boot/efi/EFI/openSUSE
    +
    +
    +
  8. +
  9. Update systemd-boot variables:

    +
    bootctl update
    +
    +
    +
  10. +
+
+
+

Step 9: Filesystem Configuration

+
    +
  1. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on     bpool/BOOT/suse
    +zfs set canmount=noauto rpool/ROOT/suse
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  2. +
+
+
+

Step 10: First Boot

+
    +
  1. Optional: Install SSH:

    +
    zypper install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  2. +
  3. Optional: Snapshot the initial installation:

    +
    zfs snapshot bpool/BOOT/suse@install
    +zfs snapshot rpool/ROOT/suse@install
    +
    +
    +

    In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

    +
  4. +
  5. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  6. +
  7. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  10. +
  11. Create a user account:

    +

    Replace username with your desired username:

    +
    zfs create rpool/home/username
    +adduser username
    +
    +cp -a /etc/skel/. /home/username
    +chown -R username:username /home/username
    +usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
    +
    +
    +
  12. +
  13. Mirror GRUB

    +

    If you installed to multiple disks, install GRUB on the additional +disks.

    +
      +
    • For legacy (BIOS) booting: First check the boot mode:

      +
      efibootmgr -v
      +
      +
      +

      The output must contain legacy_boot.

      +

      Then reconfigure grub:

      +
      grub2-install $DISK
      +
      +
      +

      Repeat this command for each additional disk in the pool.

      +
    • +
    • For UEFI booting:

      +
      umount /boot/efi
      +
      +
      +

      For the second and subsequent disks (increment opensuse-2 to -3, etc.):

      +
      dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
      +   of=/dev/disk/by-id/scsi-SATA_disk2-part2
      +efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
      +    -p 2 -L "opensuse-2" -l '\EFI\opensuse\grubx64.efi'
      +
      +mount /boot/efi
      +
      +
      +
    • +
    +
  14. +
+
+
+

Step 11: Optional: Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is a bug report upstream.

+
    +
  1. Create a volume dataset (zvol) for use as a swap device:

    +
    zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    +    -o logbias=throughput -o sync=always \
    +    -o primarycache=metadata -o secondarycache=none \
    +    -o com.sun:auto-snapshot=false rpool/swap
    +
    +
    +

    You can adjust the size (the 4G part) to your needs.

    +

    The compression algorithm is set to zle because it is the cheapest +available algorithm. As this guide recommends ashift=12 (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior.

    +
  2. +
  3. Configure the swap device:

    +

    Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

    +
    mkswap -f /dev/zvol/rpool/swap
    +echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
    +echo RESUME=none > /etc/initramfs-tools/conf.d/resume
    +
    +
    +

    The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

    +
  4. +
  5. Enable the swap device:

    +
    swapon -av
    +
    +
    +
  6. +
+
+
+

Step 12: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Delete the snapshots of the initial installation:

    +
    sudo zfs destroy bpool/BOOT/suse@install
    +sudo zfs destroy rpool/ROOT/suse@install
    +
    +
    +
  4. +
  5. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +systemctl restart sshd
    +
    +
    +
  8. +
  9. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-bootloader
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  12. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
zypper install cryptsetup
+cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+zfs mount rpool/ROOT/suse
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+

Upgrade or downgrade the Areca driver if something like +RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo zypper install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/zfs_root_maintenance.html b/Getting Started/zfs_root_maintenance.html new file mode 100644 index 000000000..d67804422 --- /dev/null +++ b/Getting Started/zfs_root_maintenance.html @@ -0,0 +1,401 @@ + + + + + + + Root on ZFS maintenance — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Root on ZFS maintenance

+
+

Boot Environment

+

This section is compatible with Alpine, Arch, Fedora and RHEL guides. +Not necessary for NixOS. Incompatible with Ubuntu and Debian guides.

+

Note: boot environments as described below are intended only for +system recovery purposes, that is, you boot into the alternate boot +environment once to perform system recovery on the default datasets:

+
rpool/distro/root
+bpool/distro/root
+
+
+

then reboot to those datasets once you have successfully recovered the +system.

+

Switching the default boot environment complicates bootloader recovery +and other maintenance operations and is thus currently not supported.

+
    +
  1. If you want to use the @initial-installation snapshot created +during installation, set my_boot_env=initial-installation and +skip Step 3 and 4.

  2. +
  3. Identify which dataset is currently mounted as root +/ and boot /boot

    +
    set -x
    +boot_dataset=$(df -P /boot | tail -n1 | cut -f1 -d' ' || true )
    +root_dataset=$(df -P / | tail -n1 | cut -f1 -d' ' || true )
    +
    +
    +
  4. +
  5. Choose a name for the new boot environment

    +
    my_boot_env=backup
    +
    +
    +
  6. +
  7. Take snapshots of the / and /boot datasets

    +
    zfs snapshot "${boot_dataset}"@"${my_boot_env}"
    +zfs snapshot "${root_dataset}"@"${my_boot_env}"
    +
    +
    +
  8. +
  9. Create clones from read-only snapshots

    +
    new_root_dataset="${root_dataset%/*}"/"${my_boot_env}"
    +new_boot_dataset="${boot_dataset%/*}"/"${my_boot_env}"
    +
    +zfs clone -o canmount=noauto \
    +  -o mountpoint=/ \
    +  "${root_dataset}"@"${my_boot_env}" \
    +  "${new_root_dataset}"
    +
    +zfs clone -o canmount=noauto \
    +  -o mountpoint=legacy \
    +  "${boot_dataset}"@"${my_boot_env}" \
    +  "${new_boot_dataset}"
    +
    +
    +
  10. +
  11. Mount clone and update file system table (fstab)

    +
    MNT=$(mktemp -d)
    +mount -t zfs -o zfsutil "${new_root_dataset}" "${MNT}"
    +mount -t zfs  "${new_boot_dataset}" "${MNT}"/boot
    +
    +sed -i s,"${root_dataset}","${new_root_dataset}",g "${MNT}"/etc/fstab
    +sed -i s,"${boot_dataset}","${new_boot_dataset}",g "${MNT}"/etc/fstab
    +
    +if test -f "${MNT}"/boot/grub/grub.cfg; then
    +  is_grub2=n
    +  sed -i s,"${boot_dataset#bpool/}","${new_boot_dataset#bpool/}",g "${MNT}"/boot/grub/grub.cfg
    +elif test -f "${MNT}"/boot/grub2/grub.cfg; then
    +  is_grub2=y
    +  sed -i s,"${boot_dataset#bpool/}","${new_boot_dataset#bpool/}",g "${MNT}"/boot/grub2/grub.cfg
    +else
    +  echo "ERROR: no grub menu found!"
    +  exit 1
    +fi
    +
    +
    +

    Do not proceed if no grub menu was found!

    +
  12. +
  13. Unmount clone

    +
    umount -Rl "${MNT}"
    +
    +
    +
  14. +
  15. Add new boot environment as GRUB menu entry

    +
    echo "# ${new_boot_dataset}" > new_boot_env_entry_"${new_boot_dataset##*/}"
    +printf '\n%s' "menuentry 'Boot environment ${new_boot_dataset#bpool/} from ${boot_dataset#bpool/}' "  \
    +  >> new_boot_env_entry_"${new_boot_dataset##*/}"
    +if [ "${is_grub2}" = y ]; then
    +   # shellcheck disable=SC2016
    +   printf '{ search --set=drive1 --label bpool; configfile ($drive1)/%s@/grub2/grub.cfg; }' \
    +   "${new_boot_dataset#bpool/}" >> new_boot_env_entry_"${new_boot_dataset##*/}"
    +else
    +   # shellcheck disable=SC2016
    +   printf '{ search --set=drive1 --label bpool; configfile ($drive1)/%s@/grub/grub.cfg; }' \
    +   "${new_boot_dataset#bpool/}" >> new_boot_env_entry_"${new_boot_dataset##*/}"
    +fi
    +
    +find /boot/efis/ -name "grub.cfg" -print0 \
    +| xargs -t -0I '{}' sh -vxc "tail -n1 new_boot_env_entry_${new_boot_dataset##*/}  >> '{}'"
    +
    +
    +
  16. +
  17. Do not delete new_boot_env_entry_"${new_boot_dataset##*/}" file. It +is needed when you want to remove the new boot environment from +GRUB menu later.

  18. +
  19. After reboot, select boot environment entry from GRUB +menu to boot from the clone. Press ESC inside +submenu to return to the previous menu.

  20. +
  21. Steps above can also be used to create a new clone +from an existing snapshot.

  22. +
  23. To delete the boot environment, first store its name in a +variable:

    +
    my_boot_env=backup
    +
    +
    +
  24. +
  25. Ensure that the boot environment is not +currently used

    +
    set -x
    +boot_dataset=$(df -P /boot | tail -n1 | cut -f1 -d' ' || true )
    +root_dataset=$(df -P / | tail -n1 | cut -f1 -d' ' || true )
    +new_boot_dataset="${boot_dataset%/*}"/"${my_boot_env}"
    +rm_boot_dataset=$(head -n1 new_boot_env_entry_"${new_boot_dataset##*/}" | sed 's|^# *||' || true )
    +
    +if [ "${boot_dataset}" = "${rm_boot_dataset}" ]; then
    +  echo "ERROR: the dataset you want to delete is the current root! abort!"
    +  exit 1
    +fi
    +
    +
    +
  26. +
  27. Then check the origin snapshot

    +
    rm_root_dataset=rpool/"${rm_boot_dataset#bpool/}"
    +
    +rm_boot_dataset_origin=$(zfs get -H origin "${rm_boot_dataset}"|cut -f3 || true )
    +rm_root_dataset_origin=$(zfs get -H origin "${rm_root_dataset}"|cut -f3 || true )
    +
    +
    +
  28. +
  29. Finally, destroy clone (boot environment) and its +origin snapshot

    +
    zfs destroy "${rm_root_dataset}"
    +zfs destroy "${rm_root_dataset_origin}"
    +zfs destroy "${rm_boot_dataset}"
    +zfs destroy "${rm_boot_dataset_origin}"
    +
    +
    +
  30. +
  31. Remove GRUB entry

    +
    new_entry_escaped=$(tail -n1 new_boot_env_entry_"${new_boot_dataset##*/}" | sed -e 's/[\/&]/\\&/g' || true )
    +find /boot/efis/ -name "grub.cfg" -print0 | xargs -t -0I '{}' sed -i "/${new_entry_escaped}/d" '{}'
    +
    +
    +
  32. +
+
+
+

Disk replacement

+

When a disk fails in a mirrored setup, the disk can be replaced with +the following procedure.

+
    +
  1. Shutdown the computer.

  2. +
  3. Replace the failed disk with another disk. The replacement should +be at least the same size or larger than the failed disk.

  4. +
  5. Boot the computer.

    +

    When a disk fails, the system will boot, albeit several minutes +slower than normal.

    +

    For NixOS, this is because the initrd and systemd are designed to import a pool in a degraded state only after a 90-second timeout.

    +

    The swap partition on that disk will also fail.

    +
  6. +
  7. Install GNU parted with your distribution package manager.

  8. +
  9. Identify the bad disk and a working old disk

    +
    ZPOOL_VDEV_NAME_PATH=1 zpool status
    +
    +pool:   bpool
    +status: DEGRADED
    +action: Replace the device using 'zpool replace'.
    +...
    +config: bpool
    +    mirror-0
    +    2387489723748                    UNAVAIL    0  0  0   was /dev/disk/by-id/ata-BAD-part2
    +    /dev/disk/by-id/ata-disk_known_good-part2    ONLINE     0  0  0
    +
    +
    +
  10. +
  11. Store the bad disk and a working old disk in a variable, omit the partition number -partN

    +
    disk_to_replace=/dev/disk/by-id/ata-disk_to_replace
    +disk_known_good=/dev/disk/by-id/ata-disk_known_good
    +
    +
    +
  12. +
  13. Identify the new disk

    +
    find /dev/disk/by-id/
    +
    +/dev/disk/by-id/ata-disk_known_good-part1
    +/dev/disk/by-id/ata-disk_known_good-part2
    +...
    +/dev/disk/by-id/ata-disk_known_good-part5
    +/dev/disk/by-id/ata-disk_new       <-- new disk w/o partition table
    +
    +
    +
  14. +
  15. Store the new disk in a variable

    +
    disk_new=/dev/disk/by-id/ata-disk_new
    +
    +
    +
  16. +
  17. Create a partition table on "${disk_new}"; refer to the respective installation pages for details (one possible shortcut is sketched after this list).

  18. +
  19. Format and mount EFI system partition, refer to respective +installation pages for details.

  20. +
  21. Replace failed disk in ZFS pool

    +
    zpool offline bpool "${disk_to_replace}"-part2
    +zpool offline rpool "${disk_to_replace}"-part3
    +zpool replace bpool "${disk_to_replace}"-part2 "${disk_new}"-part2
    +zpool replace rpool "${disk_to_replace}"-part3 "${disk_new}"-part3
    +zpool online  bpool "${disk_new}"-part2
    +zpool online  rpool "${disk_new}"-part3
    +
    +
    +

    Let the new disk resilver. Check status with zpool status.

    +
  22. +
  23. Reinstall and mirror bootloader, refer to respective installation +pages for details.

    +

    If you are using NixOS, see below.

    +
  24. +
  25. For NixOS, replace bad disk with new disk inside per-host +configuration file.

    +
    sed -i "s|"${disk_to_replace##*/}"|"${disk_new##*/}"|" /etc/nixos/hosts/exampleHost/default.nix
    +
    +
    +
  26. +
  27. Commit and apply the changed configuration, reinstall bootloader, then reboot

    +
    git -C /etc/nixos commit -asm "replace "${disk_to_replace##*/}" with "${disk_new##*/}"."
    +
    +nixos-rebuild boot --install-bootloader
    +
    +reboot
    +
    +
    +
  28. +
+
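
+

Hint: for the partition-table step above, one possible shortcut (a sketch, assuming the new disk is at least as large as the known-good disk) is to copy the partition table from the known-good disk and then randomize the GUIDs on the copy:

+
# Copy the partition layout from the surviving disk, then give the copy new GUIDs.
+sgdisk --replicate="${disk_new}" "${disk_known_good}"
+sgdisk --randomize-guids "${disk_new}"
+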
+
+

Bootloader Recovery

+

This section is compatible with Alpine, Arch, Fedora, RHEL and NixOS +root on ZFS guides.

+

Sometimes the GRUB bootloader might be accidentally overwritten, rendering the system inaccessible. However, as long as the disk partitions where the boot pool and root pool reside remain untouched, the system can still be booted easily.

+
    +
  1. Download GRUB rescue image from this repo.

    +

    You can also build the image yourself if you are familiar with Nix +package manager.

    +
  2. +
  3. Extract either x86_64-efi or i386-pc image from the archive.

  4. +
  5. Write the image to a disk (see the example after this list).

  6. +
  7. Boot the computer from the GRUB rescue disk. Select your distro in +GRUB menu.

  8. +
  9. Reinstall bootloader. See respective installation pages for details.

  10. +
+
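
+

For the image-writing step above, a hypothetical example using dd; the image file name and the target device below are placeholders, and dd will overwrite the target, so double-check the device path:

+
dd if=grub-rescue-x86_64-efi.img of=/dev/disk/by-id/usb-MY_RESCUE_STICK bs=4M status=progress
+sync
+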
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/License.html b/License.html new file mode 100644 index 000000000..02b086d91 --- /dev/null +++ b/License.html @@ -0,0 +1,152 @@ + + + + + + + License — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

License

+
    +
  • The OpenZFS software is licensed under the Common Development and Distribution License +(CDDL) unless otherwise noted.

  • +
  • The OpenZFS documentation content is licensed under a Creative Commons Attribution-ShareAlike +license (CC BY-SA 3.0) +unless otherwise noted.

  • +
  • OpenZFS is an associated project of SPI (Software in the Public Interest). SPI is a 501(c)(3) nonprofit +organization which handles the donations, finances, and legal holdings of the project.

  • +
+
+

Note

+

The Linux Kernel is licensed under the GNU General Public License Version 2 (GPLv2). While both licenses (the CDDL and the GPLv2) are free open source licenses, they are restrictive licenses. The combination of them causes problems because it prevents using pieces of code exclusively available under one license with pieces of code exclusively available under the other in the same binary. In the case of the Linux Kernel, this prevents us from distributing OpenZFS as part of the Linux Kernel binary. However, there is nothing in either license that prevents distributing it in the form of a binary module or in the form of source code.

+

Additional reading and opinions:

+ +
+

CC BY-SA 3.0: Creative Commons License

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/Async Write.html b/Performance and Tuning/Async Write.html new file mode 100644 index 000000000..9e35ebd29 --- /dev/null +++ b/Performance and Tuning/Async Write.html @@ -0,0 +1,159 @@ + + + + + + + Async Writes — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Async Writes

+

The number of concurrent operations issued for the async write I/O class +follows a piece-wise linear function defined by a few adjustable points.

+
       |              o---------| <-- zfs_vdev_async_write_max_active
+  ^    |             /^         |
+  |    |            / |         |
+active |           /  |         |
+ I/O   |          /   |         |
+count  |         /    |         |
+       |        /     |         |
+       |-------o      |         | <-- zfs_vdev_async_write_min_active
+      0|_______^______|_________|
+       0%      |      |       100% of zfs_dirty_data_max
+               |      |
+               |      `-- zfs_vdev_async_write_active_max_dirty_percent
+               `--------- zfs_vdev_async_write_active_min_dirty_percent
+
+
+

Until the amount of dirty data exceeds a minimum percentage of the dirty +data allowed in the pool, the I/O scheduler will limit the number of +concurrent operations to the minimum. As that threshold is crossed, the +number of concurrent operations issued increases linearly to the maximum +at the specified maximum percentage of the dirty data allowed in the +pool.

+

Ideally, the amount of dirty data on a busy pool will stay in the sloped +part of the function between +zfs_vdev_async_write_active_min_dirty_percent and +zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the maximum +percentage, this indicates that the rate of incoming data is greater +than the rate that the backend storage can handle. In this case, we must +further throttle incoming writes, as described in the next section.
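The points of this function are exposed as module parameters. A minimal sketch for inspecting them at runtime on Linux, assuming the zfs module is loaded (values shown will be whatever your system reports):

# Print each tunable name alongside its current value.
grep . /sys/module/zfs/parameters/zfs_vdev_async_write_min_active \
       /sys/module/zfs/parameters/zfs_vdev_async_write_max_active \
       /sys/module/zfs/parameters/zfs_vdev_async_write_active_min_dirty_percent \
       /sys/module/zfs/parameters/zfs_vdev_async_write_active_max_dirty_percent \
       /sys/module/zfs/parameters/zfs_dirty_data_max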

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/Hardware.html b/Performance and Tuning/Hardware.html new file mode 100644 index 000000000..6079f054c --- /dev/null +++ b/Performance and Tuning/Hardware.html @@ -0,0 +1,970 @@ + + + + + + + Hardware — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Hardware

+ +
+

Introduction

+

Storage before ZFS involved rather expensive hardware that was unable to +protect against silent corruption and did not scale very well. The +introduction of ZFS has enabled people to use far less expensive +hardware than previously used in the industry with superior scaling. +This page attempts to provide some basic guidance to people buying +hardware for use in ZFS-based servers and workstations.

+

Hardware that adheres to this guidance will enable ZFS to reach its full +potential for performance and reliability. Hardware that does not adhere +to it will serve as a handicap. Unless otherwise stated, such handicaps +apply to all storage stacks and are by no means specific to ZFS. Systems +built using competing storage stacks will also benefit from these +suggestions.

+
+
+

BIOS / CPU microcode updates

+

Running the latest BIOS and CPU microcode is highly recommended.

+
+

Background

+

Computer microprocessors are very complex designs that often have bugs, +which are called errata. Modern microprocessors are designed to utilize +microcode. This puts part of the hardware design into quasi-software +that can be patched without replacing the entire chip. Errata are often +resolved through CPU microcode updates. These are often bundled in BIOS +updates. In some cases, the BIOS interactions with the CPU through +machine registers can be modified to fix things with the same microcode. +If a newer microcode is not bundled as part of a BIOS update, it can +often be loaded by the operating system bootloader or the operating +system itself.

+
+
+
+

ECC Memory

+

Bit flips can have fairly dramatic consequences for all computer +filesystems and ZFS is no exception. No technique used in ZFS (or any +other filesystem) is capable of protecting against bit flips. +Consequently, ECC Memory is highly recommended.

+
+

Background

+

Ordinary background radiation will randomly flip bits in computer +memory, which causes undefined behavior. These are known as “bit flips”. +Each bit flip can have any of four possible consequences depending on +which bit is flipped:

+
    +
  • Bit flips can have no effect.

    +
      +
    • Bit flips that have no effect occur in unused memory.

    • +
    +
  • +
  • Bit flips can cause runtime failures.

    +
      +
    • This is the case when a bit flip occurs in something read from +disk.

    • +
    • Failures are typically observed when program code is altered.

    • +
    • If the bit flip is in a routine within the system’s kernel or +/sbin/init, the system will likely crash. Otherwise, reloading the +affected data can clear it. This is typically achieved by a +reboot.

    • +
    +
  • +
  • It can cause data corruption.

    +
      +
    • This is the case when the bit is in use by data being written to +disk.

    • +
    • If the bit flip occurs before ZFS’ checksum calculation, ZFS will +not realize that the data is corrupt.

    • +
    • If the bit flip occurs after ZFS’ checksum calculation, but before +write-out, ZFS will detect it, but it might not be able to correct +it.

    • +
    +
  • +
  • It can cause metadata corruption.

    +
      +
    • This is the case when a bit flips in an on-disk structure being +written to disk.

    • +
    • If the bit flip occurs before ZFS’ checksum calculation, ZFS will +not realize that the metadata is corrupt.

    • +
    • If the bit flip occurs after ZFS’ checksum calculation, but before +write-out, ZFS will detect it, but it might not be able to correct +it.

    • +
    • Recovery from such an event will depend on what was corrupted. In the worst case, a pool could be rendered unimportable.

      +
        +
      • All filesystems have poor reliability in their absolute worst +case bit-flip failure scenarios. Such scenarios should be +considered extraordinarily rare.

      • +
      +
    • +
    +
  • +
+
+
+
+

Drive Interfaces

+
+

SAS versus SATA

+

ZFS depends on the block device layer for storage. Consequently, ZFS is affected by the same things that affect other filesystems, such as driver support and non-working hardware. As a result, there are a few things to note:

+
    +
  • Never place SATA disks into a SAS expander without a SAS interposer.

    +
      +
    • If you do this and it does work, it is the exception, rather than +the rule.

    • +
    +
  • +
  • Do not expect SAS controllers to be compatible with SATA port +multipliers.

    +
      +
    • This configuration is typically not tested.

    • +
    • The disks could be unrecognized.

    • +
    +
  • +
  • Support for SATA port multipliers is inconsistent across OpenZFS +platforms

    +
      +
    • Linux drivers generally support them.

    • +
    • Illumos drivers generally do not support them.

    • +
    • FreeBSD drivers are somewhere between Linux and Illumos in terms +of support.

    • +
    +
  • +
+
+
+

USB Hard Drives and/or Adapters

+

These have problems involving sector size reporting, SMART passthrough, +the ability to set ERC and other areas. ZFS will perform as well on such +devices as they are capable of allowing, but try to avoid them. They +should not be expected to have the same up-time as SAS and SATA drives +and should be considered unreliable.

+
+
+
+

Controllers

+

The ideal storage controller for ZFS has the following attributes:

+
    +
  • Driver support on major OpenZFS platforms

    +
      +
    • Stability is important.

    • +
    +
  • +
  • High per-port bandwidth

    +
      +
    • PCI Express interface bandwidth divided by the number of ports

    • +
    +
  • +
  • Low cost

    +
      +
    • Support for RAID, Battery Backup Units and hardware write caches +is unnecessary.

    • +
    +
  • +
+

Marc Bevand’s blog post From 32 to 2 ports: Ideal SATA/SAS Controllers +for ZFS & Linux MD RAID contains an +excellent list of storage controllers that meet these criteria. He +regularly updates it as newer controllers become available.

+
+

Hardware RAID controllers

+

Hardware RAID controllers should not be used with ZFS. While ZFS will +likely be more reliable than other filesystems on Hardware RAID, it will +not be as reliable as it would be on its own.

+
    +
  • Hardware RAID will limit opportunities for ZFS to perform self healing on checksum failures. When ZFS does RAID-Z or mirroring, a checksum failure on one disk can be corrected by treating the disk containing the sector as bad for the purpose of reconstructing the original information. This cannot be done when a RAID controller handles the redundancy unless a duplicate copy is stored by ZFS, which is the case when the corruption involves metadata, when the copies property is set, or when the RAID array is part of a mirror/raid-z vdev within ZFS.

  • +
  • Sector size information is not necessarily passed correctly by hardware RAID on RAID 1. Sector size information cannot be passed correctly on RAID 5/6. Hardware RAID 1 is more likely to experience read-modify-write overhead from partial sector writes, while Hardware RAID 5/6 will almost certainly suffer from partial stripe writes (i.e. the RAID write hole). ZFS using the disks natively allows it to obtain the sector size information reported by the disks to avoid read-modify-write on sectors, while ZFS avoids partial stripe writes on RAID-Z by design from using copy-on-write.

    +
      +
    • There can be sector alignment problems on ZFS when a drive +misreports its sector size. Such drives are typically NAND-flash +based solid state drives and older SATA drives from the advanced +format (4K sector size) transition before Windows XP EoL occurred. +This can be manually corrected at +vdev creation.

    • +
    • It is possible for the RAID header to cause misalignment of sector +writes on RAID 1 by starting the array within a sector on an +actual drive, such that manual correction of sector alignment at +vdev creation does not solve the problem.

    • +
    +
  • +
  • RAID controller failures can require that the controller be replaced with +the same model, or in less extreme cases, a model from the same +manufacturer. Using ZFS by itself allows any controller to be used.

  • +
  • If a hardware RAID controller’s write cache is used, an additional +failure point is introduced that can only be partially mitigated by +additional complexity from adding flash to save data in power loss +events. The data can still be lost if the battery fails when it is +required to survive a power loss event or there is no flash and power +is not restored in a timely manner. The loss of the data in the write +cache can severely damage anything stored on a RAID array when many +outstanding writes are cached. In addition, all writes are stored in +the cache rather than just synchronous writes that require a write +cache, which is inefficient, and the write cache is relatively small. +ZFS allows synchronous writes to be written directly to flash, which +should provide similar acceleration to hardware RAID and the ability +to accelerate many more in-flight operations.

  • +
  • Behavior during RAID reconstruction when silent corruption damages +data is undefined. There are reports of RAID 5 and 6 arrays being +lost during reconstruction when the controller encounters silent +corruption. ZFS’ checksums allow it to avoid this situation by +determining whether enough information exists to reconstruct data. If +not, the file is listed as damaged in zpool status and the +system administrator has the opportunity to restore it from a backup.

  • +
  • IO response times will be reduced whenever the OS blocks on IO +operations because the system CPU blocks on a much weaker embedded +CPU used in the RAID controller. This lowers IOPS relative to what +ZFS could have achieved.

  • +
  • The controller’s firmware is an additional layer of complexity that +cannot be inspected by arbitrary third parties. The ZFS source code +is open source and can be inspected by anyone.

  • +
  • If multiple RAID arrays are formed by the same controller and one +fails, the identifiers provided by the arrays exposed to the OS might +become inconsistent. Giving the drives directly to the OS allows this +to be avoided via naming that maps to a unique port or unique drive +identifier.

    +
      +
    • e.g. If you have arrays A, B, C and D; array B dies, the +interaction between the hardware RAID controller and the OS might +rename arrays C and D to look like arrays B and C respectively. +This can fault pools verbatim imported from the cachefile.

    • +
    • Not all RAID controllers behave this way. This issue has +been observed on both Linux and FreeBSD when system administrators +used single drive RAID 0 arrays, however. It has also been observed +with controllers from different vendors.

    • +
    +
  • +
+

One might be inclined to use single-drive RAID 0 arrays to make a RAID controller behave like an HBA, but this is not recommended for many of the reasons listed for other hardware RAID types. It is best to use an HBA instead of a RAID controller, for both performance and reliability.

+
+
+
+

Hard drives

+
+

Sector Size

+

Historically, all hard drives had 512-byte sectors, with the exception +of some SCSI drives that could be modified to support slightly larger +sectors. In 2009, the industry migrated from 512-byte sectors to +4096-byte “Advanced Format” sectors. Since Windows XP is not compatible +with 4096-byte sectors or drives larger than 2TB, some of the first +advanced format drives implemented hacks to maintain Windows XP +compatibility.

+
    +
  • The first advanced format drives on the market misreported their +sector size as 512-bytes for Windows XP compatibility. As of 2013, it +is believed that such hard drives are no longer in production. +Advanced format hard drives made during or after this time should +report their true physical sector size.

  • +
  • Drives storing 2TB and smaller might have a jumper that can be set to map all sectors off by 1. This is to provide proper alignment for Windows XP, which started its first partition at sector 63. This jumper setting should be off when using such drives with ZFS.

  • +
+

As of 2014, there are still 512-byte and 4096-byte drives on the market, but they are known to properly identify themselves unless behind a USB to SATA controller. Replacing a 512-byte sector drive with a 4096-byte sector drive in a vdev created with 512-byte sector drives will adversely affect performance. Replacing a 4096-byte sector drive with a 512-byte sector drive will have no negative effect on performance.
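When a drive is known to misreport its sector size, the alignment can be forced at vdev creation time through the ashift property. A hedged example with hypothetical pool and device names, forcing 4096-byte alignment:

# ashift=12 means 2^12 = 4096-byte sectors; the pool name and device paths are placeholders.
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B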

+
+
+

Error recovery control

+

ZFS is said to be able to use cheap drives. This was true when it was +introduced and hard drives supported Error recovery control. Since ZFS’ +introduction, error recovery control has been removed from low-end +drives from certain manufacturers, most notably Western Digital. +Consistent performance requires hard drives that support error recovery +control.

+
+

Background

+

Hard drives store data using small polarized regions on a magnetic surface. Reading from and/or writing to this surface poses a few reliability problems. One is that imperfections in the surface can corrupt bits. Another is that vibrations can cause drive heads to miss their targets. Consequently, hard drive sectors are composed of three regions:

+
    +
  • A sector number

  • +
  • The actual data

  • +
  • ECC

  • +
+

The sector number and ECC enable hard drives to detect and respond to such events. When either event occurs during a read, hard drives will retry the read many times until they either succeed or conclude that the data cannot be read. The latter case can take a substantial amount of time and, consequently, IO to the drive will stall.

+

Enterprise hard drives and some consumer hard drives implement a feature +called Time-Limited Error Recovery (TLER) by Western Digital, Error +Recovery Control (ERC) by Seagate and Command Completion Time Limit by +Hitachi and Samsung, which permits the time drives are willing to spend +on such events to be limited by the system administrator.

+

Drives that lack such functionality can be expected to have arbitrarily high limits. Several minutes is not impossible. Drives with this functionality typically default to 7 seconds. ZFS does not currently adjust this setting on drives. However, it is advisable to write a script to set the error recovery time to a low value, such as 0.1 seconds, until ZFS is modified to control it. This must be done on every boot.
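A minimal sketch of such a script, assuming a drive that supports SCT Error Recovery Control and a placeholder device name; smartctl expresses the limits in units of 0.1 seconds:

# Set the read and write error recovery time limits to 0.1 seconds on /dev/sdX.
# Run this at every boot (e.g. from a systemd unit or rc.local) for each data drive.
smartctl -l scterc,1,1 /dev/sdX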

+
+
+
+

RPM Speeds

+

High RPM drives have lower seek times, which is historically regarded as +being desirable. They increase cost and sacrifice storage density in +order to achieve what is typically no more than a factor of 6 +improvement over their lower RPM counterparts.

+

To provide some numbers, a 15k RPM drive from a major manufacturer is +rated for 3.4 millisecond average read and 3.9 millisecond average +write. Presumably, this number assumes that the target sector is at most +half the number of drive tracks away from the head and half the disk +away. Being even further away is worst-case 2 times slower. Manufacturer +numbers for 7200 RPM drives are not available, but they average 13 to 16 +milliseconds in empirical measurements. 5400 RPM drives can be expected +to be slower.

+

ARC and ZIL are able to mitigate much of the need for lower seek times. Far larger increases in IOPS performance can be obtained by adding additional RAM for ARC, L2ARC devices and SLOG devices. Even higher increases in performance can be obtained by replacing hard drives with solid state storage entirely. Such things are typically more cost effective than high RPM drives when considering IOPS.

+
+
+

Command Queuing

+

Drives with command queues are able to reorder IO operations to increase IOPS. This is called Native Command Queuing on SATA and Tagged Command Queuing on PATA/SCSI/SAS. ZFS stores objects in metaslabs and it can use several metaslabs at any given time. Consequently, ZFS is not only designed to take advantage of command queuing, but good ZFS performance requires command queuing. Almost all drives manufactured within the past 10 years can be expected to support command queuing. The exceptions are:

+
    +
  • Consumer PATA/IDE drives

  • +
  • First generation SATA drives, which used IDE to SATA translation +chips, from 2003 to 2004.

  • +
  • SATA drives operating under IDE emulation that was configured in the +system BIOS.

  • +
+

Each OpenZFS system has different methods for checking whether command queuing is supported. On Linux, hdparm -I /path/to/device | grep Queue is used. On FreeBSD, camcontrol identify $DEVICE is used.

+
+
+
+

NAND Flash SSDs

+

As of 2014, solid state storage is dominated by NAND-flash and most articles on solid state storage focus on it exclusively. As of 2014, the most popular form of flash storage used with ZFS involves drives with SATA interfaces. Enterprise models with SAS interfaces are beginning to become available.

+

As of 2017, solid state storage using NAND-flash with PCI-E interfaces is widely available on the market. These are predominantly enterprise drives that utilize an NVMe interface, which has lower overhead than the ATA used in SATA or the SCSI used in SAS. There is also an interface known as M.2 that is primarily used by consumer SSDs, although not necessarily limited to them. It can provide electrical connectivity for multiple buses, such as SATA, PCI-E and USB. M.2 SSDs appear to use either SATA or NVMe.

+
+

NVMe low level formatting

+

Many NVMe SSDs support both 512-byte sectors and 4096-byte sectors. They +often ship with 512-byte sectors, which are less performant than +4096-byte sectors. Some also support metadata for T10/DIF CRC to try to +improve reliability, although this is unnecessary with ZFS.

+

NVMe drives should be formatted to use 4096-byte sectors without metadata prior to being given to ZFS for best performance, unless they indicate that 512-byte sectors are as performant as 4096-byte sectors, although this is unlikely. Lower numbers in the Rel_Perf column of Supported LBA Sizes from smartctl -a /dev/$device_namespace (for example smartctl -a /dev/nvme1n1) indicate higher performance low level formats, with 0 being the best. The currently active format is marked by a plus sign in the Fmt column.

+

You may format a drive using nvme format /dev/nvme1n1 -l $ID. The $ID +corresponds to the Id field value from the Supported LBA Sizes SMART +information.
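Putting the two commands together, a hedged example with a hypothetical namespace /dev/nvme1n1; note that formatting destroys all data on the namespace:

# Inspect the supported LBA formats and their relative performance (Rel_Perf, 0 is best).
smartctl -a /dev/nvme1n1 | grep -A8 "Supported LBA Sizes"
# Reformat to the chosen LBA format Id; the value 1 is only an example taken from that output.
nvme format /dev/nvme1n1 -l 1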

+
+
+

Power Failure Protection

+
+

Background

+

On-flash data structures are highly complex and traditionally have been highly vulnerable to corruption. In the past, such corruption would result in the loss of all drive data and an event such as a PSU failure could result in multiple drives simultaneously failing. Since the drive firmware is not available for review, the traditional conclusion was that all drives that lack hardware features to avoid power failure events cannot be trusted, which was found to be the case multiple times in the past [1] [2] [3]. Discussion of power failures bricking NAND flash SSDs appears to have vanished from literature following the year 2015. SSD manufacturers now claim that firmware power loss protection is robust enough to provide equivalent protection to hardware power loss protection. Kingston is one example. Firmware power loss protection is used to guarantee the protection of flushed data and the drives’ own metadata, which is all that filesystems such as ZFS need.

+

However, those that either need or want strong guarantees that firmware +bugs are unlikely to be able to brick drives following power loss events +should continue to use drives that provide hardware power loss +protection. The basic concept behind how hardware power failure +protection works has been documented by +Intel +for those who wish to read about the details. As of 2020, use of +hardware power loss protection is now a feature solely of enterprise +SSDs that attempt to protect unflushed data in addition to drive +metadata and flushed data. This additional protection beyond protecting +flushed data and the drive metadata provides no additional benefit to +ZFS, but it does not hurt it.

+

It should also be noted that drives in data centers and laptops are +unlikely to experience power loss events, reducing the usefulness of +hardware power loss protection. This is especially the case in +datacenters where redundant power, UPS power and the use of IPMI to do +forced reboots should prevent most drives from experiencing power loss +events.

+

Lists of drives that provide hardware power loss protection are +maintained below for those who need/want it. Since ZFS, like other +filesystems, only requires power failure protection for flushed data and +drive metadata, older drives that only protect these things are included +on the lists.

+
+
+

NVMe drives with power failure protection

+

A non-exhaustive list of NVMe drives with power failure protection is as +follows:

+
    +
  • Intel 750

  • +
  • Intel DC P3500/P3600/P3608/P3700

  • +
  • Micron 7300/7400/7450 PRO/MAX

  • +
  • Samsung PM963 (M.2 form factor)

  • +
  • Samsung PM1725/PM1725a

  • +
  • Samsung XS1715

  • +
  • Toshiba ZD6300

  • +
  • Seagate Nytro 5000 M.2 (XP1920LE30002 tested; read notes below +before buying)

    +
      +
    • Inexpensive 22110 M.2 enterprise drive using consumer MLC that is +optimized for read mostly workloads. It is not a good choice for a +SLOG device, which is a write mostly workload.

    • +
    • The manual for this drive specifies airflow requirements. If the drive does not receive sufficient airflow from case fans, it will overheat at idle. Its thermal throttling will severely degrade performance such that write throughput will be limited to 1/10 of the specification and read latencies will reach several hundred milliseconds. Under continuous load, the device will continue to become hotter until it suffers a “degraded reliability” event where all data on at least one NVMe namespace is lost. The NVMe namespace is then unusable until a secure erase is done. Even with sufficient airflow under normal circumstances, data loss is possible under load following the failure of fans in an enterprise environment. Anyone deploying this into production in an enterprise environment should be mindful of this failure mode.

    • +
    • Those who wish to use this drive in a low airflow situation can work around this failure mode by placing a passive heatsink such as this on the NAND flash controller. It is the chip under the sticker closest to the capacitors. This was tested by placing the heatsink over the sticker (as removing it was considered undesirable). The heatsink will prevent the drive from overheating to the point of data loss, but it will not fully alleviate the overheating situation under load without active airflow. A scrub will cause it to overheat after a few hundred gigabytes are read. However, the thermal throttling will quickly cool the drive from 76 degrees Celsius to 74 degrees Celsius, restoring performance.

      +
        +
      • It might be possible to use the heatsink in an enterprise +environment to provide protection against data loss following +fan failures. However, this was not evaluated. Furthermore, +operating temperatures for consumer NAND flash should be at or +above 40 degrees Celsius for long term data integrity. +Therefore, the use of a heatsink to provide protection against +data loss following fan failures in an enterprise environment +should be evaluated before deploying drives into production to +ensure that the drive is not overcooled.

      • +
      +
    • +
    +
  • +
+
+
+

SAS drives with power failure protection

+

A non-exhaustive list of SAS drives with power failure protection is as +follows:

+
    +
  • Samsung PM1633/PM1633a

  • +
  • Samsung SM1625

  • +
  • Samsung PM853T

  • +
  • Toshiba PX05SHB***/PX04SHB***/PX04SHQ***

  • +
  • Toshiba PX05SLB***/PX04SLB***/PX04SLQ***

  • +
  • Toshiba PX05SMB***/PX04SMB***/PX04SMQ***

  • +
  • Toshiba PX05SRB***/PX04SRB***/PX04SRQ***

  • +
  • Toshiba PX05SVB***/PX04SVB***/PX04SVQ***

  • +
+
+
+

SATA drives with power failure protection

+

A non-exhaustive list of SATA drives with power failure protection is as +follows:

+
    +
  • Crucial MX100/MX200/MX300

  • +
  • Crucial M500/M550/M600

  • +
  • Intel 320

    +
      +
    • Early reports claimed that the 330 and 335 had power failure +protection too, but they do +not.

    • +
    +
  • +
  • Intel 710

  • +
  • Intel 730

  • +
  • Intel DC S3500/S3510/S3610/S3700/S3710

  • +
  • Kingston DC500R/DC500M

  • +
  • Micron 5210 Ion

    +
      +
    • First QLC drive on the list. High capacity with a low price per +gigabyte.

    • +
    +
  • +
  • Samsung PM863/PM863a

  • +
  • Samsung SM843T (do not confuse with SM843)

  • +
  • Samsung SM863/SM863a

  • +
  • Samsung 845DC Evo

  • +
  • Samsung 845DC Pro

    + +
  • +
  • Toshiba HK4E/HK3E2

  • +
  • Toshiba HK4R/HK3R2/HK3R

  • +
+
+
+

Criteria/process for inclusion into these lists

+

These lists have been compiled on a volunteer basis by OpenZFS contributors (mainly Richard Yao) from trustworthy sources of information. The lists are intended to be vendor neutral and are not intended to benefit any particular manufacturer. Any perceived bias toward any manufacturer is caused by a lack of awareness and a lack of time to research additional options. Confirmation of the presence of adequate power loss protection by a reliable source is the only requirement for inclusion into this list. Adequate power loss protection means that the drive must protect both its own internal metadata and all flushed data. Protection of unflushed data is irrelevant and therefore not a requirement. ZFS only expects storage to protect flushed data. Consequently, solid state drives whose power loss protection only protects flushed data are sufficient for ZFS to ensure that data remains safe.

+

Anyone who believes an unlisted drive to provide adequate power failure +protection may contact the Mailing Lists with +a request for inclusion and substantiation for the claim that power +failure protection is provided. Examples of substantiation include +pictures of drive internals showing the presence of capacitors, +statements by well regarded independent review sites such as Anandtech +and manufacturer specification sheets. The latter are accepted on the +honor system until a manufacturer is found to misstate reality on the +protection of the drives’ own internal metadata structures and/or the +protection of flushed data. Thus far, all manufacturers have been +honest.

+
+
+
+

Flash pages

+

The smallest unit on a NAND chip that can be written is a flash page. The first NAND-flash SSDs on the market had 4096-byte pages. Further complicating matters is that the page size has been doubled twice since then. NAND flash SSDs should report these pages as being sectors, but so far, all of them incorrectly report 512-byte sectors for Windows XP compatibility. The consequence is that we have a similar situation to what we had with early advanced format hard drives.

+

As of 2014, most NAND-flash SSDs on the market have 8192-byte page +sizes. However, models using 128-Gbit NAND from certain manufacturers +have a 16384-byte page size. Maximum performance requires that vdevs be +created with correct ashift values (13 for 8192-byte and 14 for +16384-byte). However, not all OpenZFS platforms support this. The Linux +port supports ashift=13, while others are limited to ashift=12 +(4096-byte).

+

As of 2017, NAND-flash SSDs are tuned for 4096-byte IOs. Matching the +flash page size is unnecessary and ashift=12 is usually the correct +choice. Public documentation on flash page size is also nearly +non-existent.

+
+
+

ATA TRIM / SCSI UNMAP

+

It should be noted that this is a separate case from +discard on zvols or hole punching on filesystems. Those work regardless +of whether ATA TRIM / SCSI UNMAP is sent to the actual block devices.

+
+

ATA TRIM Performance Issues

+

The ATA TRIM command in SATA 3.0 and earlier is a non-queued command. +Issuing a TRIM command on a SATA drive conforming to SATA 3.0 or earlier +will cause the drive to drain its IO queue and stop servicing requests +until it finishes, which hurts performance. SATA 3.1 removed this +limitation, but very few SATA drives on the market are conformant to +SATA 3.1 and it is difficult to distinguish them from SATA 3.0 drives. +At the same time, SCSI UNMAP has no such problems.
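Where these stalls are a concern on SATA devices, a scheduled manual trim may be preferable to continuous automatic trimming. A hedged sketch with a placeholder pool name:

# One-off (or cron-scheduled) trim of the whole pool.
zpool trim tank
# Alternatively, enable continuous automatic trimming if the devices handle TRIM well.
zpool set autotrim=on tank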

+
+
+
+
+

Optane / 3D XPoint SSDs

+

These are SSDs with far better latencies and write endurance than NAND +flash SSDs. They are byte addressable, such that ashift=9 is fine for +use on them. Unlike NAND flash SSDs, they do not require any special +power failure protection circuitry for reliability. There is also no +need to run TRIM on them. However, they cost more per GB than NAND flash +(as of 2020). The enterprise models make excellent SLOG devices. Here is +a list of models that are known to perform well:

+ +

Note that SLOG devices rarely have more than 4GB in use at any given +time, so the smaller sized devices are generally the best choice in +terms of cost, with larger sizes giving no benefit. Larger sizes could +be a good choice for other vdev types, depending on performance needs +and cost considerations.

+
+
+

Power

+

Ensuring that computers are properly grounded is highly recommended. +There have been cases in user homes where machines experienced random +failures when plugged into power receptacles that had open grounds (i.e. +no ground wire at all). This can cause random failures on any computer +system, whether it uses ZFS or not.

+

Power should also be relatively stable. Large dips in voltages from +brownouts are preferably avoided through the use of UPS units or line +conditioners. Systems subject to unstable power that do not outright +shutdown can exhibit undefined behavior. PSUs with longer hold-up times +should be able to provide partial protection against this, but hold up +times are often undocumented and are not a substitute for a UPS or line +conditioner.

+
+

PWR_OK signal

+

PSUs are supposed to deassert a PWR_OK signal to indicate that provided voltages are no longer within the rated specification. This should force an immediate shutdown. However, the system clock of a developer workstation was observed to deviate significantly from the expected value during a series of ~1 second brownouts. This machine did not use a UPS at the time. However, the PWR_OK mechanism should have protected against this. The observation of the PWR_OK signal failing to force a shutdown with adverse consequences (to the system clock in this case) suggests that the PWR_OK mechanism is not a strict guarantee.

+
+
+

PSU Hold-up Times

+

A PSU hold-up time is the amount of time that a PSU can continue to output power at maximum output within standard voltage tolerances following the loss of input power. This is important for supporting UPS units because the transfer time taken by a standard UPS to supply power from its battery can leave machines without power for “5-12 ms”. Intel’s ATX Power Supply design guide specifies a hold-up time of 17 milliseconds at maximum continuous output. The hold-up time is an inverse function of how much power is being output by the PSU, with lower power output increasing hold-up times.

+

Capacitor aging in PSUs will lower the hold-up time below what it was +when new, which could cause reliability issues as equipment ages. +Machines using substandard PSUs with hold-up times below the +specification therefore require higher end UPS units for protection to +ensure that the transfer time does not exceed the hold-up time. A +hold-up time below the transfer time during a transfer to battery power +can cause undefined behavior should the PWR_OK signal not become +deasserted to force the machine to power off.

+

If in doubt, use a double conversion UPS unit. Double conversion UPS units always run off the battery, such that the transfer time is 0. This is unless they are high efficiency models that are hybrids between standard UPS units and double conversion UPS units, although these are reported to have much lower transfer times than standard UPS units. You could also contact your PSU manufacturer for the hold-up time specification, but if reliability for years is a requirement, you should use a higher end UPS with a low transfer time.

+

Note that double conversion units are at most 94% efficient unless they +support a high efficiency mode, which adds latency to the time to +transition to battery power.

+
+
+

UPS batteries

+

The lead acid batteries in UPS units generally need to be replaced +regularly to ensure that they provide power during power outages. For +home systems, this is every 3 to 5 years, although this varies with +temperature [4]. For +enterprise systems, contact your vendor.

+

Footnotes

+ +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/Module Parameters.html b/Performance and Tuning/Module Parameters.html new file mode 100644 index 000000000..ac5f2d77e --- /dev/null +++ b/Performance and Tuning/Module Parameters.html @@ -0,0 +1,13854 @@ + + + + + + + Module Parameters — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Module Parameters

+

Most of the ZFS kernel module parameters are accessible in the SysFS +/sys/module/zfs/parameters directory. Current values can be observed +by

+
cat /sys/module/zfs/parameters/PARAMETER
+
+
+

Many of these can be changed by writing new values. These are denoted by +Change|Dynamic in the PARAMETER details below.

+
echo NEWVALUE >> /sys/module/zfs/parameters/PARAMETER
+
+
+

If the parameter is not dynamically adjustable, an error can occur and +the value will not be set. It can be helpful to check the permissions +for the PARAMETER file in SysFS.
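For example (PARAMETER is a placeholder), a writable mode such as 0644 indicates a dynamically adjustable parameter, while 0444 indicates a read-only one:

ls -l /sys/module/zfs/parameters/PARAMETER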

+

In some cases, the parameter must be set prior to loading the kernel +modules or it is desired to have the parameters set automatically at +boot time. For many distros, this can be accomplished by creating a file +named /etc/modprobe.d/zfs.conf containing a text line for each +module parameter using the format:

+
# change PARAMETER for workload XZY to solve problem PROBLEM_DESCRIPTION
+# changed by YOUR_NAME on DATE
+options zfs PARAMETER=VALUE
+
+
+

Some parameters related to ZFS operations are located in module +parameters other than in the zfs kernel module. These are documented +in the individual parameter description. Unless otherwise noted, the +tunable applies to the zfs kernel module. For example, the icp +kernel module parameters are visible in the +/sys/module/icp/parameters directory and can be set by default at +boot time by changing the /etc/modprobe.d/icp.conf file.
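A hedged illustration of the same pattern applied to the icp module; the parameter name and value are placeholders, so list the directory to see what your version actually provides:

# List the icp module's tunables.
ls /sys/module/icp/parameters/
# Persist a setting across reboots.
echo "options icp PARAMETER=VALUE" >> /etc/modprobe.d/icp.conf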

+

See the man page for modprobe.d for more information.

+
+

Manual Pages

+

The zfs(4) and spl(4) man +pages (previously zfs- and spl-module-parameters(5), respectively, +prior to OpenZFS 2.1) contain brief descriptions of +the module parameters. Alas, man pages are not as suitable for quick +reference as documentation pages. This page is intended to be a better +cross-reference and capture some of the wisdom of ZFS developers and +practitioners.

+
+
+

ZFS Module Parameters

+

The ZFS kernel module, zfs.ko, parameters are detailed below.

+

To observe the list of parameters along with a short synopsis of each +parameter, use the modinfo command:

+
modinfo zfs
+
+
+
+
+

Tags

+

The list of parameters is quite large and resists hierarchical +representation. To assist in finding relevant information +quickly, each module parameter has a “Tags” row with keywords for +frequent searches.

+
+

ABD

+ +
+
+

allocation

+ +
+
+

ARC

+ +
+
+

channel_programs

+ +
+
+

checkpoint

+ +
+
+

checksum

+ +
+
+

compression

+ +
+
+

CPU

+ +
+
+

dataset

+ +
+
+

dbuf_cache

+ +
+
+

debug

+ +
+
+

dedup

+ +
+
+

delay

+ +
+
+

delete

+ +
+
+

discard

+ +
+
+

disks

+ +
+
+

DMU

+ +
+
+

encryption

+ +
+
+

filesystem

+ +
+
+

fragmentation

+ +
+
+

HDD

+ +
+
+

hostid

+ +
+
+

import

+ +
+
+

L2ARC

+ +
+
+

memory

+ +
+
+

metadata

+ +
+
+

metaslab

+ +
+
+

mirror

+ +
+
+

MMP

+ +
+
+

panic

+ +
+
+

prefetch

+ +
+
+

QAT

+ +
+
+

raidz

+ +
+
+

receive

+ +
+
+

remove

+ +
+
+

resilver

+ +
+
+

scrub

+ +
+
+

send

+ +
+
+

snapshot

+ +
+
+

SPA

+ +
+
+

special_vdev

+ +
+
+

SSD

+ +
+
+

taskq

+ +
+
+

trim

+ +
+
+

vdev

+ +
+
+

vdev_cache

+ +
+
+

vdev_initialize

+ +
+
+

vdev_removal

+ +
+
+

volume

+ +
+
+

write_throttle

+ +
+
+

zed

+ +
+
+

ZIL

+ +
+
+

ZIO_scheduler

+ +
+
+
+

Index

+ +
+
+

Module Parameters

+
+

ignore_hole_birth

+

When set, the hole_birth optimization will not be used and all holes will always be sent by zfs send. In the source code, ignore_hole_birth is an alias for, and the SysFS PARAMETER for, send_holes_without_birth_time.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

ignore_hole_birth

Notes

Tags

send

When to change

Enable if you suspect your datasets are +affected by a bug in hole_birth during +zfs send operations

Data Type

boolean

Range

0=disabled, 1=enabled

Default

1 (hole birth optimization is ignored)

Change

Dynamic

Versions Affected

TBD

+
+
+

l2arc_exclude_special

+

Controls whether buffers present on special vdevs are eligible for +caching into L2ARC.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_exclude_special

Notes

Tags

ARC, +L2ARC, +special_vdev,

When to change

If cache and special devices exist and caching +data on special devices in L2ARC is not desired

Data Type

boolean

Range

0=disabled, 1=enabled

Default

0

Change

Dynamic

Versions Affected

TBD

+
+
+

l2arc_feed_again

+

Turbo L2ARC cache warm-up. When the L2ARC is cold the fill interval will +be set to aggressively fill as fast as possible.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_feed_again

Notes

Tags

ARC, L2ARC

When to change

If cache devices exist and it is desired to +fill them as fast as possible

Data Type

boolean

Range

0=disabled, 1=enabled

Default

1

Change

Dynamic

Versions Affected

TBD

+
+
+

l2arc_feed_min_ms

+

Minimum time period for aggressively feeding the L2ARC. The L2ARC feed +thread wakes up once per second (see +l2arc_feed_secs) to look for data to feed into +the L2ARC. l2arc_feed_min_ms only affects the turbo L2ARC cache +warm-up and allows the aggressiveness to be adjusted.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_feed_min_ms

Notes

Tags

ARC, L2ARC

When to change

If cache devices exist and +l2arc_feed_again and +the feed is too aggressive, then this tunable +can be adjusted to reduce the impact of the +fill

Data Type

uint64

Units

milliseconds

Range

0 to (1000 * l2arc_feed_secs)

Default

200

Change

Dynamic

Versions Affected

0.6 and later

+
+
+

l2arc_feed_secs

+

Seconds between waking the L2ARC feed thread. One feed thread works for +all cache devices in turn.

+

If the pool that owns a cache device is imported readonly, then the feed +thread is delayed 5 * l2arc_feed_secs before +moving onto the next cache device. If multiple pools are imported with +cache devices and one pool with cache is imported readonly, the L2ARC +feed rate to all caches can be slowed.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_feed_secs

Notes

Tags

ARC, L2ARC

When to change

Do not change

Data Type

uint64

Units

seconds

Range

1 to UINT64_MAX

Default

1

Change

Dynamic

Versions Affected

0.6 and later

+
+
+

l2arc_headroom

+

How far through the ARC lists to search for L2ARC cacheable content, +expressed as a multiplier of l2arc_write_max

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_headroom

Notes

Tags

ARC, L2ARC

When to change

If the rate of change in the ARC is faster than +the overall L2ARC feed rate, then increasing +l2arc_headroom can increase L2ARC efficiency. +Setting the value too large can cause the L2ARC +feed thread to consume more CPU time looking +for data to feed.

Data Type

uint64

Units

unit

Range

0 to UINT64_MAX

Default

2

Change

Dynamic

Versions Affected

0.6 and later

+
+
+

l2arc_headroom_boost

+

Percentage scale for l2arc_headroom when L2ARC +contents are being successfully compressed before writing.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_headroom_boost

Notes

Tags

ARC, L2ARC

When to change

If average compression efficiency is greater than 2:1, then increasing l2arc_headroom_boost can increase the L2ARC feed rate

Data Type

uint64

Units

percent

Range

100 to UINT64_MAX, when set to 100, the +L2ARC headroom boost feature is effectively +disabled

Default

200

Change

Dynamic

Versions Affected

all

+
+
+

l2arc_nocompress

+

Disable writing compressed data to cache devices. Disabling allows the +legacy behavior of writing decompressed data to cache devices.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_nocompress

Notes

Tags

ARC, L2ARC

When to change

When testing compressed L2ARC feature

Data Type

boolean

Range

0=store compressed blocks in cache device, +1=store uncompressed blocks in cache device

Default

0

Change

Dynamic

Versions Affected

deprecated in v0.7.0 by new compressed ARC +design

+
+
+

l2arc_meta_percent

+

Percent of ARC size allowed for L2ARC-only headers. Since L2ARC buffers are not evicted on memory pressure, too large an amount of headers on a system with an irrationally large L2ARC can render it slow or unusable. This parameter limits L2ARC writes and rebuilds to achieve the target.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_meta_percent

Notes

Tags

ARC, L2ARC

When to change

When the workload really requires an enormous L2ARC.

Data Type

int

Range

0 to 100

Default

33

Change

Dynamic

Versions Affected

v2.0 and later

+
+
+

l2arc_mfuonly

+

Controls whether only MFU metadata and data are cached from ARC into L2ARC. +This may be desirable to avoid wasting space on L2ARC when reading/writing +large amounts of data that are not expected to be accessed more than once. +By default both MRU and MFU data and metadata are cached in the L2ARC.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_mfuonly

Notes

Tags

ARC, L2ARC

When to change

When accessing a large amount of data only +once.

Data Type

boolean

Range

0=store MRU and MFU blocks in cache device, +1=store MFU blocks in cache device

Default

0

Change

Dynamic

Versions Affected

v2.0 and later

+
+
+

l2arc_noprefetch

+

Disables writing prefetched, but unused, buffers to cache devices.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_noprefetch

Notes

Tags

ARC, L2ARC, +prefetch

When to change

Setting to 0 can increase L2ARC hit rates for +workloads where the ARC is too small for a read +workload that benefits from prefetching. Also, +if the main pool devices are very slow, setting +to 0 can improve some workloads such as +backups.

Data Type

boolean

Range

0=write prefetched but unused buffers to cache +devices, 1=do not write prefetched but unused +buffers to cache devices

Default

1

Change

Dynamic

Versions Affected

v0.6.0 and later

+
+
+

l2arc_norw

+

Disables writing to cache devices while they are being read.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_norw

Notes

Tags

ARC, L2ARC

When to change

In the early days of SSDs, some devices did not +perform well when reading and writing +simultaneously. Modern SSDs do not have these +issues.

Data Type

boolean

Range

0=read and write simultaneously, 1=avoid writes +when reading for antique SSDs

Default

0

Change

Dynamic

Versions Affected

all

+
+
+

l2arc_rebuild_blocks_min_l2size

+

The minimum required size (in bytes) of an L2ARC device in order to write log blocks in it. The log blocks are used upon importing the pool to rebuild the persistent L2ARC. For L2ARC devices less than 1GB, the overhead involved offsets most of the benefit, so log blocks are not written for cache devices smaller than this.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_rebuild_blocks_min_l2size

Notes

Tags

ARC, +L2ARC

When to change

The cache device is small and +the pool is frequently imported.

Data Type

bytes

Range

0 to UINT64_MAX

Default

1,073,741,824

Change

Dynamic

Versions Affected

v2.0 and later

+
+
+

l2arc_rebuild_enabled

+

Rebuild the persistent L2ARC when importing a pool.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_rebuild_enabled

Notes

Tags

ARC, L2ARC

When to change

If there are problems importing a pool or +attaching an L2ARC device.

Data Type

boolean

Range

0=disable persistent L2ARC rebuild, +1=enable persistent L2ARC rebuild

Default

1

Change

Dynamic

Versions Affected

v2.0 and later

+
+
+

l2arc_trim_ahead

+

Once the cache device has been filled, TRIM ahead of the current write size (l2arc_write_max) on L2ARC devices by this percentage. This can speed up future writes depending on the performance characteristics of the cache device.

+

When set to 100%, TRIM twice the space required to accommodate upcoming writes. A minimum of 64MB will be trimmed. If set, it enables TRIM of the whole L2ARC device when it is added to a pool. By default, this option is disabled since it can put significant stress on the underlying storage devices.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_trim_ahead

Notes

Tags

ARC, L2ARC

When to change

Consider setting this for cache devices which efficiently handle TRIM commands.

Data Type

ulong

Units

percent of l2arc_write_max

Range

0 to 100

Default

0

Change

Dynamic

Versions Affected

v2.0 and later

+
+
+

l2arc_write_boost

+

Until the ARC fills, increases the L2ARC fill rate +l2arc_write_max by l2arc_write_boost.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_write_boost

Notes

Tags

ARC, L2ARC

When to change

To fill the cache devices more aggressively +after pool import.

Data Type

uint64

Units

bytes

Range

0 to UINT64_MAX

Default

8,388,608

Change

Dynamic

Versions Affected

all

+
+
+

l2arc_write_max

+

Maximum number of bytes to be written to each cache device for each +L2ARC feed thread interval (see l2arc_feed_secs). +The actual limit can be adjusted by +l2arc_write_boost. By default +l2arc_feed_secs is 1 second, delivering a maximum +write workload to cache devices of 8 MiB/sec.
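As a hedged illustration (the value is an example, not a recommendation), the per-interval limit can be raised at runtime, increasing the maximum feed rate accordingly:

# With the default l2arc_feed_secs of 1 second, 67108864 bytes per interval is roughly 64 MiB/sec.
echo 67108864 >> /sys/module/zfs/parameters/l2arc_write_max
cat /sys/module/zfs/parameters/l2arc_write_max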

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_write_max

Notes

Tags

ARC, L2ARC

When to change

If the cache devices can sustain the write +workload, increasing the rate of cache device +fill when workloads generate new data at a rate +higher than l2arc_write_max can increase L2ARC +hit rate

Data Type

uint64

Units

bytes

Range

1 to UINT64_MAX

Default

8,388,608

Change

Dynamic

Versions Affected

all

+
+
+

metaslab_aliquot

+

Sets the metaslab granularity. Nominally, ZFS will try to allocate this +amount of data to a top-level vdev before moving on to the next +top-level vdev. This is roughly similar to what would be referred to as +the “stripe size” in traditional RAID arrays.

+

When tuning for HDDs, it can be more efficient to have a few larger, +sequential writes to a device rather than switching to the next device. +Monitoring the size of contiguous writes to the disks relative to the +write throughput can be used to determine if increasing +metaslab_aliquot can help. For modern devices, it is unlikely that +decreasing metaslab_aliquot from the default will help.

+

If there is only one top-level vdev, this tunable is not used.
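One way to observe the size of the writes actually issued to the disks is the request size histogram in zpool iostat. A minimal sketch with a placeholder pool name:

# -r prints request size histograms per vdev; 5 is the sampling interval in seconds.
zpool iostat -r tank 5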

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_aliquot

Notes

Tags

allocation, +metaslab, vdev

When to change

If write performance increases as devices more +efficiently write larger, contiguous blocks

Data Type

uint64

Units

bytes

Range

0 to UINT64_MAX

Default

524,288

Change

Dynamic

Versions Affected

all

+
+
+

metaslab_bias_enabled

+

Enables metaslab group biasing based on a top-level vdev’s utilization relative to the pool. Nominally, all top-level vdevs are the same size and the allocation is spread evenly. When the top-level vdevs are not of the same size, for example if a new (empty) top-level vdev is added to the pool, this allows the new top-level vdev to get a larger portion of new allocations.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_bias_enabled

Notes

Tags

allocation, +metaslab, vdev

When to change

If a new top-level vdev is added and you do +not want to bias new allocations to the new +top-level vdev

Data Type

boolean

Range

0=spread evenly across top-level vdevs, +1=bias spread to favor less full top-level +vdevs

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_metaslab_segment_weight_enabled

+

Enables metaslab allocation based on largest free segment rather than total amount of free space. The goal is to avoid metaslabs that exhibit free space fragmentation: when there are many small free spaces but few larger free spaces.

+

If zfs_metaslab_segment_weight_enabled is enabled, then +metaslab_fragmentation_factor_enabled +is ignored.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_metaslab_segment_weight_enabled

Notes

Tags

allocation, +metaslab

When to change

When testing allocation and +fragmentation

Data Type

boolean

Range

0=do not consider metaslab +fragmentation, 1=avoid metaslabs +where free space is highly +fragmented

Default

1

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_metaslab_switch_threshold

+

When using segment-based metaslab selection (see +zfs_metaslab_segment_weight_enabled), +continue allocating from the active metaslab until +zfs_metaslab_switch_threshold worth of free space buckets have been +exhausted.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_metaslab_switch_threshold

Notes

Tags

allocation, +metaslab

When to change

When testing allocation and +fragmentation

Data Type

uint64

Units

free spaces

Range

0 to UINT64_MAX

Default

2

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

metaslab_debug_load

+

When enabled, all metaslabs are loaded into memory during pool import. +Nominally, metaslab space map information is loaded and unloaded as +needed (see metaslab_debug_unload)

+

It is difficult to predict how much RAM is required to store a space +map. An empty or completely full metaslab has a small space map. +However, a highly fragmented space map can consume significantly more +memory.

+

Enabling metaslab_debug_load can increase pool import time.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_debug_load

Notes

Tags

allocation, +memory, +metaslab

When to change

When RAM is plentiful and pool import time is +not a consideration

Data Type

boolean

Range

0=dynamically load metaslab info as needed, 1=load all metaslab info at pool import

Default

0

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

metaslab_debug_unload

+

When enabled, prevents metaslab information from being dynamically +unloaded from RAM. Nominally, metaslab space map information is loaded +and unloaded as needed (see +metaslab_debug_load)

+

It is difficult to predict how much RAM is required to store a space +map. An empty or completely full metaslab has a small space map. +However, a highly fragmented space map can consume significantly more +memory.

+

Enabling metaslab_debug_unload consumes RAM that would otherwise be +freed.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_debug_unload

Notes

Tags

allocation, +memory, +metaslab

When to change

When RAM is plentiful and the penalty for +dynamically reloading metaslab info from +the pool is high

Data Type

boolean

Range

0=dynamically unload metaslab info, +1=unload metaslab info only upon pool +export

Default

0

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

metaslab_fragmentation_factor_enabled

+

Enable use of the fragmentation metric in computing metaslab weights.

+

In version v0.7.0, if +zfs_metaslab_segment_weight_enabled +is enabled, then metaslab_fragmentation_factor_enabled is ignored.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_fragmentation_factor_enabled

Notes

Tags

allocation, +metaslab

When to change

To test metaslab fragmentation

Data Type

boolean

Range

0=do not consider metaslab free +space fragmentation, 1=try to +avoid fragmented metaslabs

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

metaslabs_per_vdev

+

When a vdev is added, it will be divided into approximately, but no more +than, this number of metaslabs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslabs_per_vdev

Notes

Tags

allocation, +metaslab, vdev

When to change

When testing metaslab allocation

Data Type

uint64

Units

metaslabs

Range

16 to UINT64_MAX

Default

200

Change

Prior to pool creation or adding new top-level +vdevs

Versions Affected

all

+
+
+

metaslab_preload_enabled

+

Enable metaslab group preloading. Each top-level vdev has a metaslab +group. By default, up to 3 copies of metadata can exist and are +distributed across multiple top-level vdevs. +metaslab_preload_enabled allows the corresponding metaslabs to be +preloaded, thus improving allocation efficiency.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_preload_enabled

Notes

Tags

allocation, +metaslab

When to change

When testing metaslab allocation

Data Type

boolean

Range

0=do not preload metaslab info, +1=preload up to 3 metaslabs

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

metaslab_lba_weighting_enabled

+

Modern HDDs have uniform bit density and constant angular velocity. +Therefore, the outer recording zones are faster (higher bandwidth) than +the inner zones by the ratio of outer to inner track diameter. The +difference in bandwidth can be 2:1, and is often available in the HDD +detailed specifications or drive manual. For HDDs when +metaslab_lba_weighting_enabled is true, write allocation preference +is given to the metaslabs representing the outer recording zones. Thus +the allocation to metaslabs prefers faster bandwidth over free space.

+

If the devices are not rotational, yet misrepresent themselves to the OS +as rotational, then disabling metaslab_lba_weighting_enabled can +result in more even, free-space-based allocation.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_lba_weighting_enabled

Notes

Tags

allocation, +metaslab, +HDD, SSD

When to change

disable if using only SSDs and +version v0.6.4 or earlier

Data Type

boolean

Range

0=do not use LBA weighting, 1=use +LBA weighting

Default

1

Change

Dynamic

Verification

The rotational setting reported by a block device can be observed in sysfs at /sys/block/DISK_NAME/queue/rotational

Versions Affected

prior to v0.6.5, the check for +non-rotation media did not exist

+
+
+

spa_config_path

+

By default, the zpool import command searches for pool information +in the zpool.cache file. If the pool to be imported has an entry in +zpool.cache then the devices do not have to be scanned to determine +if they are pool members. The path to the cache file is spa_config_path.

+

For more information on zpool import and the -o cachefile and +-d options, see the man page for zpool(8)

+

See also zfs_autoimport_disable
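As an illustration (not part of the original text), an alternate cache file or a device scan can be requested at import time; the pool name tank and the cache file path below are examples only:
# import using an explicit cache file (pool name and path are examples)
zpool import -c /etc/zfs/zpool.cache tank
# or skip the cache file entirely and scan a device directory
zpool import -d /dev/disk/by-id tank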

+ + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_config_path

Notes

Tags

import

When to change

If creating a non-standard distribution and the +cachefile property is inconvenient

Data Type

string

Default

/etc/zfs/zpool.cache

Change

Dynamic, applies only to the next invocation of +zpool import

Versions Affected

all

+
+
+

spa_asize_inflation

+

Multiplication factor used to estimate actual disk consumption from the +size of data being written. The default value is a worst case estimate, +but lower values may be valid for a given pool depending on its +configuration. Pool administrators who understand the factors involved +may wish to specify a more realistic inflation factor, particularly if +they operate close to quota or capacity limits.

+

The worst case space requirement for allocation is single-sector max-parity RAIDZ blocks, in which case the space requirement is exactly 4 times the size, accounting for a maximum of 3 parity blocks. This is multiplied by the maximum number of ZFS copies (the copies property, maximum 3) and doubled to account for blocks that could impact deduplication tables. Altogether, the worst case multiplier is 4 × 3 × 2 = 24.

+

If the estimation is not correct, then quotas or out-of-space conditions +can lead to optimistic expectations of the ability to allocate. +Applications are typically not prepared to deal with such failures and +can misbehave.
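A short worked example of the multipliers described above, using shell arithmetic:
# worst case inflation: 4 (single-sector max-parity RAIDZ) x 3 (copies) x 2 (dedup) = 24
echo $((4 * 3 * 2))
# estimated worst-case on-disk consumption for a 1 MiB write at the default factor, in bytes
echo $((1024 * 1024 * 24))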

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_asize_inflation

Notes

Tags

allocation, SPA

When to change

If the allocation requirements for the +workload are well known and quotas are used

Data Type

uint64

Units

unit

Range

1 to 24

Default

24

Change

Dynamic

Versions Affected

v0.6.3 and later

+
+
+

spa_load_verify_data

+

An extreme rewind import (see zpool import -X) normally performs a +full traversal of all blocks in the pool for verification. If this +parameter is set to 0, the traversal skips non-metadata blocks. It can +be toggled once the import has started to stop or start the traversal of +non-metadata blocks. See also +spa_load_verify_metadata.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_load_verify_data

Notes

Tags

allocation, SPA

When to change

At the risk of data integrity, to speed +extreme import of large pool

Data Type

boolean

Range

0=do not verify data upon pool import, +1=verify pool data upon import

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

spa_load_verify_metadata

+

An extreme rewind import (see zpool import -X) normally performs a +full traversal of all blocks in the pool for verification. If this +parameter is set to 0, the traversal is not performed. It can be toggled +once the import has started to stop or start the traversal. See +spa_load_verify_data

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_load_verify_metadata

Notes

Tags

import

When to change

At the risk of data integrity, to speed +extreme import of large pool

Data Type

boolean

Range

0=do not verify metadata upon pool +import, 1=verify pool metadata upon +import

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

spa_load_verify_maxinflight

+

Maximum number of concurrent I/Os during the data verification performed +during an extreme rewind import (see zpool import -X)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_load_verify_maxinflight

Notes

Tags

import

When to change

During an extreme rewind import, to +match the concurrent I/O capabilities +of the pool devices

Data Type

int

Units

I/Os

Range

1 to MAX_INT

Default

10,000

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

spa_slop_shift

+

Normally, the last 3.2% (1/(2^spa_slop_shift)) of pool space is reserved to ensure the pool doesn’t run completely out of space, due to unaccounted changes (e.g. to the MOS). This also limits the worst-case time to allocate space. When less than this amount of free space exists, most ZPL operations (e.g. write, create) return ENOSPC (no space).

+

Changing spa_slop_shift affects the currently loaded ZFS module and all imported pools. spa_slop_shift is not stored on disk. Beware: importing a nearly full pool on a system with a larger spa_slop_shift value can lead to over-full conditions.

+

The minimum SPA slop space is limited to 128 MiB. +The maximum SPA slop space is limited to 128 GiB.
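A minimal sketch of the slop arithmetic and of changing the parameter at runtime; the 100 TiB pool size is purely illustrative:
# slop = pool_size / 2^spa_slop_shift, clamped between 128 MiB and 128 GiB
echo $((100 * 1024 * 1024 * 1024 * 1024 / (1 << 5)))   # 3.125 TiB for a 100 TiB pool, so the 128 GiB cap applies
# view and change the current value (affects all imported pools)
cat /sys/module/zfs/parameters/spa_slop_shift
echo 6 > /sys/module/zfs/parameters/spa_slop_shift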

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_slop_shift

Notes

Tags

allocation, SPA

When to change

For large pools, when 3.2% may be too +conservative and more usable space is desired, +consider increasing spa_slop_shift

Data Type

int

Units

shift

Range

1 to MAX_INT, however the practical upper limit +is 15 for a system with 4TB of RAM

Default

5

Change

Dynamic

Versions Affected

v0.6.5 and later (max. slop space since v2.1.0)

+
+
+

zfetch_array_rd_sz

+

If prefetching is enabled, do not prefetch blocks larger than +zfetch_array_rd_sz size.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfetch_array_rd_sz

Notes

Tags

prefetch

When to change

To allow prefetching when using large block sizes

Data Type

unsigned long

Units

bytes

Range

0 to MAX_ULONG

Default

1,048,576 (1 MiB)

Change

Dynamic

Versions Affected

all

+
+
+

zfetch_max_distance

+

Limits the maximum number of bytes to prefetch per stream.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfetch_max_distance

Notes

Tags

prefetch

When to change

Consider increasing for read workloads that use large blocks and exhibit high prefetch hit ratios

Data Type

uint

Units

bytes

Range

0 to UINT_MAX

Default

8,388,608

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

zfetch_max_streams

+

Maximum number of prefetch streams per file.

+

For version v0.7.0 and later, when prefetching small files the number of prefetch streams is automatically reduced to prevent the streams from overlapping.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfetch_max_streams

Notes

Tags

prefetch

When to change

If the workload benefits from prefetching and +has more than zfetch_max_streams +concurrent reader threads

Data Type

uint

Units

streams

Range

1 to MAX_UINT

Default

8

Change

Dynamic

Versions Affected

all

+
+
+

zfetch_min_sec_reap

+

Prefetch streams that have not been accessed within zfetch_min_sec_reap seconds are eligible to be reclaimed (stopped).

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfetch_min_sec_reap

Notes

Tags

prefetch

When to change

To test prefetch efficiency

Data Type

uint

Units

seconds

Range

0 to MAX_UINT

Default

2

Change

Dynamic

Versions Affected

all

+
+
+

zfs_arc_dnode_limit_percent

+

Percentage of ARC metadata space that can be used for dnodes.

+

The value calculated for zfs_arc_dnode_limit_percent can be +overridden by zfs_arc_dnode_limit.
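To check whether dnodes are pressing against their limit, the arcstats entries mentioned above can be inspected; a rough sketch (the 25% value is only an example):
# compare dnode usage with its limit as reported by arcstats
grep dnode /proc/spl/kstat/zfs/arcstats
# raise the percentage at runtime if needed
echo 25 > /sys/module/zfs/parameters/zfs_arc_dnode_limit_percent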

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_dnode_limit_percent

Notes

Tags

ARC

When to change

Consider increasing if arc_prune +is using excessive system time and +/proc/spl/kstat/zfs/arcstats +shows arc_dnode_size is near or +over arc_dnode_limit

Data Type

int

Units

percent of arc_meta_limit

Range

0 to 100

Default

10

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_dnode_limit

+

When the number of bytes consumed by dnodes in the ARC exceeds +zfs_arc_dnode_limit bytes, demand for new metadata can take from the +space consumed by dnodes.

+

The default value of 0 indicates that the limit is derived from zfs_arc_dnode_limit_percent of the ARC metadata buffers.

+

zfs_arc_dnode_limit is similar in intent to zfs_arc_meta_prune, which serves an analogous purpose for metadata.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_dnode_limit

Notes

Tags

ARC

When to change

Consider increasing if arc_prune is using +excessive system time and +/proc/spl/kstat/zfs/arcstats shows +arc_dnode_size is near or over +arc_dnode_limit

Data Type

uint64

Units

bytes

Range

0 to MAX_UINT64

Default

0 (uses zfs_arc_dnode_limit_percent)

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_dnode_reduce_percent

+

Percentage of ARC dnodes to try to evict in response to demand for +non-metadata when the number of bytes consumed by dnodes exceeds +zfs_arc_dnode_limit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_dnode_reduce_percent

Notes

Tags

ARC

When to change

Testing dnode cache efficiency

Data Type

uint64

Units

percent of dnode space used above zfs_arc_dnode_limit

Range

0 to 100

Default

10

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_average_blocksize

+

The ARC’s buffer hash table is sized based on the assumption of an +average block size of zfs_arc_average_blocksize. The default of 8 +KiB uses approximately 1 MiB of hash table per 1 GiB of physical memory +with 8-byte pointers.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_average_blocksize

Notes

Tags

ARC, memory

When to change

For workloads where the known average +blocksize is larger, increasing +zfs_arc_average_blocksize can +reduce memory usage

Data Type

int

Units

bytes

Range

512 to 16,777,216

Default

8,192

Change

Prior to zfs module load

Versions Affected

all

+
+
+

zfs_arc_evict_batch_limit

+

Number of ARC headers to evict per sublist before proceeding to another sublist. This batch-style operation prevents entire sublists from being evicted at once but comes at a cost of additional unlocking and locking.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_evict_batch_limit

Notes

Tags

ARC

When to change

Testing ARC multilist features

Data Type

int

Units

count of ARC headers

Range

1 to INT_MAX

Default

10

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_grow_retry

+

When the ARC is shrunk due to memory demand, do not retry growing the +ARC for zfs_arc_grow_retry seconds. This operates as a damper to +prevent oscillating grow/shrink cycles when there is memory pressure.

+

If zfs_arc_grow_retry = 0, the internal default of 5 seconds is +used.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_grow_retry

Notes

Tags

ARC, memory

When to change

TBD

Data Type

int

Units

seconds

Range

1 to MAX_INT

Default

0

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_lotsfree_percent

+

Throttle ARC memory consumption, effectively throttling I/O, when free +system memory drops below this percentage of total system memory. +Setting zfs_arc_lotsfree_percent to 0 disables the throttle.

+

The arcstat_memory_throttle_count counter in /proc/spl/kstat/zfs/arcstats can indicate throttle activity.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_lotsfree_percent

Notes

Tags

ARC, memory

When to change

TBD

Data Type

int

Units

percent

Range

0 to 100

Default

10

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_max

+

Maximum size of ARC in bytes.

+

If set to 0 then the maximum size of ARC +is determined by the amount of system memory installed:

+
  • Linux: 1/2 of system memory
  • FreeBSD: the larger of all_system_memory - 1 GiB and 5/8 × all_system_memory
+

zfs_arc_max can be changed dynamically with some caveats. It cannot +be set back to 0 while running and reducing it below the current ARC +size will not cause the ARC to shrink without memory pressure to induce +shrinking.
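A short sketch of capping and verifying the ARC maximum at runtime; the 8 GiB figure is only an example, not a recommendation:
# cap the ARC at 8 GiB
echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
# confirm the new target maximum (c_max) in arcstats
awk '$1 == "c_max" {print $3}' /proc/spl/kstat/zfs/arcstats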

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_max

Notes

Tags

ARC, memory

When to change

Reduce if ARC competes too much with other +applications, increase if ZFS is the primary +application and can use more RAM

Data Type

uint64

Units

bytes

Range

67,108,864 to RAM size in bytes

Default

0 (see description above, OS-dependent)

Change

Dynamic (see description above)

Verification

c column in arcstats.py or +/proc/spl/kstat/zfs/arcstats entry +c_max

Versions Affected

all

+
+
+

zfs_arc_meta_adjust_restarts

+

The number of restart passes to make while scanning the ARC attempting to free buffers in order to stay below zfs_arc_meta_limit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_adjust_restarts

Notes

Tags

ARC

When to change

Testing ARC metadata adjustment feature

Data Type

int

Units

restarts

Range

0 to INT_MAX

Default

4,096

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_meta_limit

+

Sets the maximum allowed size of metadata buffers in the ARC. When zfs_arc_meta_limit is reached, metadata buffers are reclaimed, even if the overall c_max has not been reached.

+

In version v0.7.0 and later, with the default value of 0, zfs_arc_meta_limit_percent is used to set arc_meta_limit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_limit

Notes

Tags

ARC

When to change

For workloads where the metadata to data ratio +in the ARC can be changed to improve ARC hit +rates

Data Type

uint64

Units

bytes

Range

0 to c_max

Default

0

Change

Dynamic, except that it cannot be set back to +0 for a specific percent of the ARC; it must +be set to an explicit value

Verification

/proc/spl/kstat/zfs/arcstats entry +arc_meta_limit

Versions Affected

all

+
+
+

zfs_arc_meta_limit_percent

+

Sets the limit to ARC metadata, arc_meta_limit, as a percentage of +the maximum size target of the ARC, c_max

+

Prior to version v0.7.0, the +zfs_arc_meta_limit was used to set the limit +as a fixed size. zfs_arc_meta_limit_percent provides a more +convenient interface for setting the limit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_limit_percent

Notes

Tags

ARC

When to change

For workloads where the metadata to +data ratio in the ARC can be changed +to improve ARC hit rates

Data Type

uint64

Units

percent of c_max

Range

0 to 100

Default

75

Change

Dynamic

Verification

/proc/spl/kstat/zfs/arcstats entry +arc_meta_limit

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_meta_min

+

The minimum allowed size in bytes that metadata buffers may consume in the ARC. This value defaults to 0, which disables a floor on the amount of the ARC devoted to metadata.

+

When evicting data from the ARC, if the metadata_size is less than +arc_meta_min then data is evicted instead of metadata.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_min

Notes

Tags

ARC

When to change

Data Type

uint64

Units

bytes

Range

16,777,216 to c_max

Default

0 (use internal default 16 MiB)

Change

Dynamic

Verification

/proc/spl/kstat/zfs/arcstats entry arc_meta_min

Versions Affected

all

+
+
+

zfs_arc_meta_prune

+

zfs_arc_meta_prune sets the number of dentries and znodes to be scanned looking for entries which can be dropped. This provides a mechanism to ensure the ARC can honor the arc_meta_limit and reclaim otherwise pinned ARC buffers. Pruning may be required when the ARC size drops to arc_meta_limit because dentries and znodes can pin buffers in the ARC. Increasing this value causes the dentry and znode caches to be pruned more aggressively and the arc_prune thread becomes more active. Setting zfs_arc_meta_prune to 0 disables pruning.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_prune

Notes

Tags

ARC

When to change

TBD

Data Type

uint64

Units

entries

Range

0 to INT_MAX

Default

10,000

Change

Dynamic

Verification

Prune activity is counted by the +/proc/spl/kstat/zfs/arcstats entry +arc_prune

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_meta_strategy

+

Defines the strategy for ARC metadata eviction (meta reclaim strategy). +A value of 0 (META_ONLY) will evict only the ARC metadata. A value of 1 +(BALANCED) indicates that additional data may be evicted if required in +order to evict the requested amount of metadata.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_strategy

Notes

Tags

ARC

When to change

Testing ARC metadata eviction

Data Type

int

Units

enum

Range

0=evict metadata only, 1=also evict data +buffers if they can free metadata buffers +for eviction

Default

1 (BALANCED)

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_min

+

Minimum ARC size limit. When the ARC is asked to shrink, it will stop +shrinking at c_min as tuned by zfs_arc_min.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_min

Notes

Tags

ARC

When to change

If the primary focus of the system is ZFS, then +increasing can ensure the ARC gets a minimum +amount of RAM

Data Type

uint64

Units

bytes

Range

33,554,432 to c_max

Default

For kernel: greater of 33,554,432 (32 MiB) and +memory size / 32. For user-land: greater of +33,554,432 (32 MiB) and c_max / 2.

Change

Dynamic

Verification

/proc/spl/kstat/zfs/arcstats entry +c_min

Versions Affected

all

+
+
+

zfs_arc_min_prefetch_ms

+

Minimum time prefetched blocks are locked in the ARC.

+

A value of 0 represents the default of 1 second. However, once changed, +dynamically setting to 0 will not return to the default.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_min_prefetch_ms

Notes

Tags

ARC, prefetch

When to change

TBD

Data Type

int

Units

milliseconds

Range

1 to INT_MAX

Default

0 (use internal default of 1000 ms)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_arc_min_prescient_prefetch_ms

+

Minimum time “prescient prefetched” blocks are locked in the ARC. These +blocks are meant to be prefetched fairly aggressively ahead of the code +that may use them.

+

A value of 0 represents the default of 6 seconds. However, once changed, +dynamically setting to 0 will not return to the default.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_min_prescient_prefetch_ms

Notes

Tags

ARC, +prefetch

When to change

TBD

Data Type

int

Units

milliseconds

Range

1 to INT_MAX

Default

0 (use internal default of 6000 +ms)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_multilist_num_sublists

+

To allow more fine-grained locking, each ARC state contains a series of lists (sublists) for both data and metadata objects. Locking is performed at the sublist level. This parameter controls the number of sublists per ARC state, and also applies to other uses of the multilist data structure.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_multilist_num_sublists

Notes

Tags

ARC

When to change

TBD

Data Type

int

Units

lists

Range

1 to INT_MAX

Default

0 (internal value is greater of number +of online CPUs or 4)

Change

Prior to zfs module load

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_overflow_shift

+

The ARC size is considered to be overflowing if it exceeds the current ARC target size (/proc/spl/kstat/zfs/arcstats entry c) by a threshold determined by zfs_arc_overflow_shift. The threshold is calculated as a fraction of the target size: c >> zfs_arc_overflow_shift.

+

The default value of 8 causes the ARC to be considered to be overflowing if it exceeds the target size by 1/256th (about 0.4%) of the target size.

+

When the ARC is overflowing, new buffer allocations are stalled until +the reclaim thread catches up and the overflow condition no longer +exists.
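A sketch of computing the overflow threshold from the current target size; this assumes the usual arcstats name/type/data column format:
# overflow threshold = c >> zfs_arc_overflow_shift (default shift of 8 is c/256)
awk '$1 == "c" {print $3, $3 / 256}' /proc/spl/kstat/zfs/arcstats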

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_overflow_shift

Notes

Tags

ARC

When to change

TBD

Data Type

int

Units

shift

Range

1 to INT_MAX

Default

8

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_p_min_shift

+

arc_p_min_shift is used as a shift of the ARC target size (/proc/spl/kstat/zfs/arcstats entry c) when calculating both the minimum and maximum most recently used (MRU) target size (/proc/spl/kstat/zfs/arcstats entry p)

+

A value of 0 represents the default setting of arc_p_min_shift = 4. +However, once changed, dynamically setting zfs_arc_p_min_shift to 0 +will not return to the default.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_p_min_shift

Notes

Tags

ARC

When to change

TBD

Data Type

int

Units

shift

Range

1 to INT_MAX

Default

0 (internal default = 4)

Change

Dynamic

Verification

Observe changes to +/proc/spl/kstat/zfs/arcstats entry p

Versions Affected

all

+
+
+

zfs_arc_p_dampener_disable

+

When data is being added to the ghost lists, the MRU target size is +adjusted. The amount of adjustment is based on the ratio of the MRU/MFU +sizes. When enabled, the ratio is capped to 10, avoiding large +adjustments.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_p_dampener_disable

Notes

Tags

ARC

When to change

Testing ARC ghost list behaviour

Data Type

boolean

Range

0=avoid large adjustments, 1=permit +large adjustments

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_arc_shrink_shift

+

arc_shrink_shift is used to adjust the ARC target sizes when a large reduction is required. The current ARC target size, c, and MRU size p can be reduced by the current size >> arc_shrink_shift. For the default value of 7, this reduces the target by approximately 0.8%.

+

A value of 0 represents the default setting of arc_shrink_shift = 7. +However, once changed, dynamically setting arc_shrink_shift to 0 will +not return to the default.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_shrink_shift

Notes

Tags

ARC, memory

When to change

During memory shortfall, reducing +zfs_arc_shrink_shift increases the rate +of ARC shrinkage

Data Type

int

Units

shift

Range

1 to INT_MAX

Default

0 (arc_shrink_shift = 7)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_arc_pc_percent

+

zfs_arc_pc_percent allows ZFS arc to play more nicely with the +kernel’s LRU pagecache. It can guarantee that the arc size won’t +collapse under scanning pressure on the pagecache, yet still allows arc +to be reclaimed down to zfs_arc_min if necessary. This value is +specified as percent of pagecache size (as measured by +NR_FILE_PAGES) where that percent may exceed 100. This only operates +during memory pressure/reclaim.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_pc_percent

Notes

Tags

ARC, memory

When to change

When using file systems under memory +shortfall, if the page scanner causes the ARC +to shrink too fast, then adjusting +zfs_arc_pc_percent can reduce the shrink +rate

Data Type

int

Units

percent

Range

0 to 100

Default

0 (disabled)

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_sys_free

+

zfs_arc_sys_free is the target number of bytes the ARC should leave +as free memory on the system. Defaults to the larger of 1/64 of physical +memory or 512K. Setting this option to a non-zero value will override +the default.

+

A value of 0 represents the default setting of larger of 1/64 of +physical memory or 512 KiB. However, once changed, dynamically setting +zfs_arc_sys_free to 0 will not return to the default.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_sys_free

Notes

Tags

ARC, memory

When to change

Change if more free memory is desired as a +margin against memory demand by applications

Data Type

ulong

Units

bytes

Range

0 to ULONG_MAX

Default

0 (default to larger of 1/64 of physical memory +or 512 KiB)

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_autoimport_disable

+

Disable reading zpool.cache file (see +spa_config_path) when loading the zfs module.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_autoimport_disable

Notes

Tags

import

When to change

Leave as default so that zfs behaves as +other Linux kernel modules

Data Type

boolean

Range

0=read zpool.cache at module load, +1=do not read zpool.cache at module +load

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_commit_timeout_pct

+

zfs_commit_timeout_pct controls the amount of time that a log (ZIL) +write block (lwb) remains “open” when it isn’t “full” and it has a +thread waiting to commit to stable storage. The timeout is scaled based +on a percentage of the last lwb latency to avoid significantly impacting +the latency of each individual intent log transaction (itx).

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_commit_timeout_pct

Notes

Tags

ZIL

When to change

TBD

Data Type

int

Units

percent

Range

1 to 100

Default

5

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_dbgmsg_enable

+
+
Internally ZFS keeps a small log to facilitate debugging. The contents +of the log are in the /proc/spl/kstat/zfs/dbgmsg file.
+
Writing 0 to the /proc/spl/kstat/zfs/dbgmsg file clears the log.
+
+

See also zfs_dbgmsg_maxsize
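For example, the internal debug log can be enabled, read, and cleared as follows:
# enable the internal debug log
echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
# read the accumulated messages
cat /proc/spl/kstat/zfs/dbgmsg
# clear the log
echo 0 > /proc/spl/kstat/zfs/dbgmsg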

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dbgmsg_enable

Notes

Tags

debug

When to change

To view ZFS internal debug log

Data Type

boolean

Range

0=do not log debug messages, 1=log debug messages

Default

0 (1 for debug builds)

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_dbgmsg_maxsize

+

The /proc/spl/kstat/zfs/dbgmsg file size limit is set by +zfs_dbgmsg_maxsize.

+

See also zfs_dbgmsg_enable

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dbgmsg_maxsize

Notes

Tags

debug

When to change

TBD

Data Type

int

Units

bytes

Range

0 to INT_MAX

Default

4 MiB

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_dbuf_state_index

+

The zfs_dbuf_state_index feature is currently unused. It is normally +used for controlling values in the /proc/spl/kstat/zfs/dbufs file.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dbuf_state_index

Notes

Tags

debug

When to change

Do not change

Data Type

int

Units

TBD

Range

TBD

Default

0

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_deadman_enabled

+

When a pool sync operation takes longer than zfs_deadman_synctime_ms +milliseconds, a “slow spa_sync” message is logged to the debug log (see +zfs_dbgmsg_enable). If zfs_deadman_enabled +is set to 1, then all pending IO operations are also checked and if any +haven’t completed within zfs_deadman_synctime_ms milliseconds, a “SLOW +IO” message is logged to the debug log and a “deadman” system event (see +zpool events command) with the details of the hung IO is posted.
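The deadman events described above can be observed with the zpool events command, for example:
# show all events, including any "deadman" events, with full details
zpool events -v
# follow new events as they are posted
zpool events -f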

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_deadman_enabled

Notes

Tags

debug

When to change

To disable logging of slow I/O

Data Type

boolean

Range

0=do not log slow I/O, 1=log slow I/O

Default

1

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_deadman_checktime_ms

+

Once a pool sync operation has taken longer than +zfs_deadman_synctime_ms milliseconds, +continue to check for slow operations every +zfs_deadman_checktime_ms milliseconds.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_deadman_checktime_ms

Notes

Tags

debug

When to change

When debugging slow I/O

Data Type

ulong

Units

milliseconds

Range

1 to ULONG_MAX

Default

60,000 (1 minute)

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_deadman_ziotime_ms

+

When an individual I/O takes longer than zfs_deadman_ziotime_ms +milliseconds, then the operation is considered to be “hung”. If +zfs_deadman_enabled is set then the deadman +behaviour is invoked as described by the +zfs_deadman_failmode option.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_deadman_ziotime_ms

Notes

Tags

debug

When to change

Testing ABD features

Data Type

ulong

Units

milliseconds

Range

1 to ULONG_MAX

Default

300,000 (5 minutes)

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_deadman_synctime_ms

+

The I/O deadman timer expiration time has two meanings

+
  1. determines when the spa_deadman() logic should fire, indicating the txg sync has not completed in a timely manner
  2. determines if an I/O is considered “hung”
+

In version v0.8.0, any I/O that has not completed in +zfs_deadman_synctime_ms is considered “hung” resulting in one of +three behaviors controlled by the +zfs_deadman_failmode parameter.

+

zfs_deadman_synctime_ms takes effect if +zfs_deadman_enabled = 1.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_deadman_synctime_ms

Notes

Tags

debug

When to change

When debugging slow I/O

Data Type

ulong

Units

milliseconds

Range

1 to ULONG_MAX

Default

600,000 (10 minutes)

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_deadman_failmode

+

zfs_deadman_failmode controls the behavior of the I/O deadman timer when +it detects a “hung” I/O. Valid values are:

+
  • wait - Wait for the “hung” I/O (default)
  • continue - Attempt to recover from a “hung” I/O
  • panic - Panic the system
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_deadman_failmode

Notes

Tags

debug

When to change

In some cluster cases, panic can be appropriate

Data Type

string

Range

wait, continue, or panic

Default

wait

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_dedup_prefetch

+

ZFS can prefetch deduplication table (DDT) entries. +zfs_dedup_prefetch allows DDT prefetches to be enabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dedup_prefetch

Notes

Tags

prefetch, memory

When to change

For systems with limited RAM using the dedup +feature, disabling deduplication table +prefetch can reduce memory pressure

Data Type

boolean

Range

0=do not prefetch, 1=prefetch dedup table +entries

Default

0

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_delete_blocks

+

zfs_delete_blocks defines a large file for the purposes of delete. Files containing more than zfs_delete_blocks blocks are deleted asynchronously while smaller files are deleted synchronously. Decreasing this value reduces the time spent in an unlink(2) system call at the expense of a longer delay before the freed space is available.

+

The zfs_delete_blocks value is specified in blocks, not bytes. The +size of blocks can vary and is ultimately limited by the filesystem’s +recordsize property.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_delete_blocks

Notes

Tags

filesystem, +delete

When to change

If applications delete large files and blocking +on unlink(2) is not desired

Data Type

ulong

Units

blocks

Range

1 to ULONG_MAX

Default

20,480

Change

Dynamic

Versions Affected

all

+
+
+

zfs_delay_min_dirty_percent

+

The ZFS write throttle begins to delay each transaction when the amount +of dirty data reaches the threshold zfs_delay_min_dirty_percent of +zfs_dirty_data_max. This value should be >= +zfs_vdev_async_write_active_max_dirty_percent.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_delay_min_dirty_percent

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

int

Units

percent

Range

0 to 100

Default

60

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_delay_scale

+

zfs_delay_scale controls how quickly the ZFS write throttle +transaction delay approaches infinity. Larger values cause longer delays +for a given amount of dirty data.

+

For the smoothest delay, this value should be about 1 billion divided by +the maximum number of write operations per second the pool can sustain. +The throttle will smoothly handle between 10x and 1/10th +zfs_delay_scale.

+

Note: zfs_delay_scale * +zfs_dirty_data_max must be < 2^64.
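A hedged sizing sketch based on the guidance above; the 20,000 sustained write IOPS figure is purely illustrative:
# suggested zfs_delay_scale is roughly 1,000,000,000 / max sustainable write IOPS
echo $((1000000000 / 20000))   # => 50000
echo 50000 > /sys/module/zfs/parameters/zfs_delay_scale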

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_delay_scale

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

ulong

Units

scalar (nanoseconds)

Range

0 to ULONG_MAX

Default

500,000

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_dirty_data_max

+

zfs_dirty_data_max is the ZFS write throttle dirty space limit. Once +this limit is exceeded, new writes are delayed until space is freed by +writes being committed to the pool.

+

zfs_dirty_data_max takes precedence over +zfs_dirty_data_max_percent.
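For example, the dirty data limit can be inspected and raised at runtime; the 4 GiB value is an example, not a recommendation:
# view the current limit, then raise it to 4 GiB
cat /sys/module/zfs/parameters/zfs_dirty_data_max
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_dirty_data_max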

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_max

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

ulong

Units

bytes

Range

1 to zfs_dirty_data_max_max

Default

10% of physical RAM

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_dirty_data_max_percent

+

zfs_dirty_data_max_percent is an alternative method of specifying +zfs_dirty_data_max, the ZFS write throttle +dirty space limit. Once this limit is exceeded, new writes are delayed +until space is freed by writes being committed to the pool.

+

zfs_dirty_data_max takes precedence over +zfs_dirty_data_max_percent.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_max_percent

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

int

Units

percent

Range

1 to 100

Default

10% of physical RAM

Change

Prior to zfs module load or a memory +hot plug event

Versions Affected

v0.6.4 and later

+
+
+

zfs_dirty_data_max_max

+

zfs_dirty_data_max_max is the maximum allowable value of +zfs_dirty_data_max.

+

zfs_dirty_data_max_max takes precedence over +zfs_dirty_data_max_max_percent.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_max_max

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

ulong

Units

bytes

Range

1 to physical RAM size

Default

physical_ram/4

+

since v0.7: min(physical_ram/4, 4GiB)

+

since v2.0 for 32-bit systems: min(physical_ram/4, 1GiB)

+

Change

Prior to zfs module load

Versions Affected

v0.6.4 and later

+
+
+

zfs_dirty_data_max_max_percent

+

zfs_dirty_data_max_max_percent an alternative to +zfs_dirty_data_max_max for setting the +maximum allowable value of zfs_dirty_data_max

+

zfs_dirty_data_max_max takes precedence +over zfs_dirty_data_max_max_percent

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_max_max_percent

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

int

Units

percent

Range

1 to 100

Default

25% of physical RAM

Change

Prior to zfs module load

Versions Affected

v0.6.4 and later

+
+
+

zfs_dirty_data_sync

+

When there is at least zfs_dirty_data_sync dirty data, a transaction +group sync is started. This allows a transaction group sync to occur +more frequently than the transaction group timeout interval (see +zfs_txg_timeout) when there is dirty data to be +written.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_sync

Notes

Tags

write_throttle, +ZIO_scheduler

When to change

TBD

Data Type

ulong

Units

bytes

Range

1 to ULONG_MAX

Default

67,108,864 (64 MiB)

Change

Dynamic

Versions Affected

v0.6.4 through v0.8.x, deprecation planned +for v2

+
+
+

zfs_dirty_data_sync_percent

+

When there is at least zfs_dirty_data_sync_percent of +zfs_dirty_data_max dirty data, a transaction +group sync is started. This allows a transaction group sync to occur +more frequently than the transaction group timeout interval (see +zfs_txg_timeout) when there is dirty data to be +written.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_sync_percent

Notes

Tags

write_throttle, +ZIO_scheduler

When to change

TBD

Data Type

int

Units

percent

Range

1 to zfs_vdev_async_write_active_min_dirty_percent

Default

20

Change

Dynamic

Versions Affected

planned for v2, deprecates zfs_dirty_data_sync

+
+
+

zfs_fletcher_4_impl

+

Fletcher-4 is the default checksum algorithm for metadata and data. When +the zfs kernel module is loaded, a set of microbenchmarks are run to +determine the fastest algorithm for the current hardware. The +zfs_fletcher_4_impl parameter allows a specific implementation to be +specified other than the default (fastest). Selectors other than +fastest and scalar require instruction set extensions to be +available and will only appear if ZFS detects their presence. The +scalar implementation works on all processors.

+

The results of the microbenchmark are visible in the +/proc/spl/kstat/zfs/fletcher_4_bench file. Larger numbers indicate +better performance. Since ZFS is processor endian-independent, the +microbenchmark is run against both big and little-endian transformation.
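The benchmark results and the active implementation can be inspected, and a specific implementation selected, as sketched below; sse2 is only an example and must be supported by the CPU:
# view microbenchmark results; the currently selected implementation is marked
cat /proc/spl/kstat/zfs/fletcher_4_bench
# force a specific implementation
echo sse2 > /sys/module/zfs/parameters/zfs_fletcher_4_impl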

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_fletcher_4_impl

Notes

Tags

CPU, checksum

When to change

Testing Fletcher-4 algorithms

Data Type

string

Range

fastest, scalar, superscalar, +superscalar4, sse2, ssse3, avx2, +avx512f, or aarch64_neon depending on +hardware support

Default

fastest

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_free_bpobj_enabled

+

The processing of the free_bpobj object can be enabled by +zfs_free_bpobj_enabled

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_free_bpobj_enabled

Notes

Tags

delete

When to change

If there’s a problem with processing +free_bpobj (e.g. i/o error or bug)

Data Type

boolean

Range

0=do not process free_bpobj objects, +1=process free_bpobj objects

Default

1

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_free_max_blocks

+

zfs_free_max_blocks sets the maximum number of blocks to be freed in +a single transaction group (txg). For workloads that delete (free) large +numbers of blocks in a short period of time, the processing of the frees +can negatively impact other operations, including txg commits. +zfs_free_max_blocks acts as a limit to reduce the impact.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_free_max_blocks

Notes

Tags

filesystem, +delete

When to change

For workloads that delete large files, +zfs_free_max_blocks can be adjusted to +meet performance requirements while reducing +the impacts of deletion

Data Type

ulong

Units

blocks

Range

1 to ULONG_MAX

Default

100,000

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_async_read_max_active

+

Maximum asynchronous read I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_read_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

3

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_async_read_min_active

+

Minimum asynchronous read I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_read_min_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to (zfs_vdev_async_read_max_active - 1)

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_async_write_active_max_dirty_percent

+

When the amount of dirty data exceeds the threshold +zfs_vdev_async_write_active_max_dirty_percent of +zfs_dirty_data_max dirty data, then +zfs_vdev_async_write_max_active +is used to limit active async writes. If the dirty data is between +zfs_vdev_async_write_active_min_dirty_percent +and zfs_vdev_async_write_active_max_dirty_percent, the active I/O +limit is linearly interpolated between +zfs_vdev_async_write_min_active +and +zfs_vdev_async_write_max_active

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_write_active_max_dirty_percent

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

int

Units

percent of zfs_dirty_data_max

Range

0 to 100

Default

60

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_async_write_active_min_dirty_percent

+

If the amount of dirty data is between +zfs_vdev_async_write_active_min_dirty_percent and +zfs_vdev_async_write_active_max_dirty_percent +of zfs_dirty_data_max, the active I/O limit is +linearly interpolated between +zfs_vdev_async_write_min_active +and +zfs_vdev_async_write_max_active

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_write_active_min_dirty_percent

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

int

Units

percent of zfs_dirty_data_max

Range

0 to (zfs_vdev_async_write_active_max_dirty_percent - 1)

Default

30

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_async_write_max_active

+

zfs_vdev_async_write_max_active sets the maximum asynchronous write +I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_write_max_active

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

10

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_async_write_min_active

+

zfs_vdev_async_write_min_active sets the minimum asynchronous write +I/Os active to each device.

+

Lower values are associated with better latency on rotational media but +poorer resilver performance. The default value of 2 was chosen as a +compromise. A value of 3 has been shown to improve resilver performance +further at a cost of further increasing latency.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_write_min_active

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_async_write_max_active

Default

1 for v0.6.x, 2 for v0.7.0 and +later

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_max_active

+

The maximum number of I/Os active to each device. Ideally, +zfs_vdev_max_active >= the sum of each queue’s max_active.

+

Once queued to the device, the ZFS I/O scheduler is no longer able to prioritize I/O operations. The underlying device drivers have their own scheduler and queue depth limits. Values larger than the device’s maximum queue depth can have the effect of increased latency as the I/Os are queued in the intervening device driver layers.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

sum of each queue’s min_active to UINT32_MAX

Default

1,000

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_scrub_max_active

+

zfs_vdev_scrub_max_active sets the maximum scrub or scan read I/Os +active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_scrub_max_active

Notes

Tags

vdev, +ZIO_scheduler, +scrub, +resilver

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

2

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_scrub_min_active

+

zfs_vdev_scrub_min_active sets the minimum scrub or scan read I/Os +active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_scrub_min_active

Notes

Tags

vdev, +ZIO_scheduler, +scrub, +resilver

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_scrub_max_active

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_sync_read_max_active

+

Maximum synchronous read I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_sync_read_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

10

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_sync_read_min_active

+

zfs_vdev_sync_read_min_active sets the minimum synchronous read I/Os +active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_sync_read_min_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to +zfs_vdev_sync_read_max_active

Default

10

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_sync_write_max_active

+

zfs_vdev_sync_write_max_active sets the maximum synchronous write +I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_sync_write_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

10

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_sync_write_min_active

+

zfs_vdev_sync_write_min_active sets the minimum synchronous write +I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_sync_write_min_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to +zfs_vdev_sync_write_max_active

Default

10

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_queue_depth_pct

+

Maximum number of queued allocations per top-level vdev expressed as a percentage of zfs_vdev_async_write_max_active. This allows the system to detect devices that are more capable of handling allocations and to allocate more blocks to those devices. It also allows for dynamic allocation distribution when devices are imbalanced, as fuller devices will tend to be slower than empty devices. Once the queue depth reaches (zfs_vdev_queue_depth_pct * zfs_vdev_async_write_max_active / 100) the allocator will stop allocating blocks on that top-level device and switch to the next.

+

See also zio_dva_throttle_enabled
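A worked example of the queue-depth formula above using the default values:
# queued allocation limit per top-level vdev =
#   zfs_vdev_queue_depth_pct * zfs_vdev_async_write_max_active / 100
echo $((1000 * 10 / 100))   # => 100 queued allocations with the defaults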

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_queue_depth_pct

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

percent

Range

1 to UINT32_MAX

Default

1,000

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_disable_dup_eviction

+

Disable duplicate buffer eviction from ARC.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_disable_dup_eviction

Notes

Tags

ARC, dedup

When to change

TBD

Data Type

boolean

Range

0=duplicate buffers can be evicted, 1=do +not evict duplicate buffers

Default

0

Change

Dynamic

Versions Affected

v0.6.5, deprecated in v0.7.0

+
+
+

zfs_expire_snapshot

+

Snapshots of filesystems are normally automounted under the filesystem’s +.zfs/snapshot subdirectory. When not in use, snapshots are unmounted +after zfs_expire_snapshot seconds.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_expire_snapshot

Notes

Tags

filesystem, +snapshot

When to change

TBD

Data Type

int

Units

seconds

Range

0 disables automatic unmounting, maximum time +is INT_MAX

Default

300

Change

Dynamic

Versions Affected

v0.6.1 and later

+
+
+

zfs_admin_snapshot

+

Allow the creation, removal, or renaming of entries in the +.zfs/snapshot subdirectory to cause the creation, destruction, or +renaming of snapshots. When enabled this functionality works both +locally and over NFS exports which have the “no_root_squash” option set.
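When zfs_admin_snapshot is enabled, snapshots can be manipulated through the filesystem as sketched below; the pool, dataset, and snapshot names are examples only:
# create, rename, and destroy a snapshot via the .zfs/snapshot directory
mkdir /tank/home/.zfs/snapshot/before_upgrade
mv /tank/home/.zfs/snapshot/before_upgrade /tank/home/.zfs/snapshot/before_upgrade_old
rmdir /tank/home/.zfs/snapshot/before_upgrade_old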

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_admin_snapshot

Notes

Tags

filesystem, +snapshot

When to change

TBD

Data Type

boolean

Range

0=do not allow snapshot manipulation via the +filesystem, 1=allow snapshot manipulation via +the filesystem

Default

1

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_flags

+

Set additional debugging flags (see +zfs_dbgmsg_enable)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

flag value

symbolic name

description

0x1

ZFS_DEBUG_DPRINTF

Enable dprintf entries in +the debug log

0x2

ZFS_DEBUG_DBUF_VERIFY

Enable extra dbuf verifications

0x4

ZFS_DEBUG_DNODE_VERIFY

Enable extra dnode +verifications

0x8

ZFS_DEBUG_SNAPNAMES

Enable snapshot name +verification

0x10

ZFS_DEBUG_MODIFY

Check for illegally +modified ARC buffers

0x20

ZFS_DEBUG_SPA

Enable spa_dbgmsg entries +in the debug log

0x40

ZFS_DEBUG_ZIO_FREE

Enable verification of +block frees

0x80

ZFS_DEBUG_HISTOGRAM_VERIFY

Enable extra spacemap +histogram verifications

0x100

ZFS_DEBUG_METASLAB_VERIFY

Verify space accounting +on disk matches in-core +range_trees

0x200

ZFS_DEBUG_SET_ERROR

Enable SET_ERROR and +dprintf entries in the +debug log
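Flags are bitwise OR-ed together; for example, to enable SET_ERROR logging together with metaslab verification (values from the table above):
# combine ZFS_DEBUG_METASLAB_VERIFY (0x100) and ZFS_DEBUG_SET_ERROR (0x200)
echo $((0x100 | 0x200)) > /sys/module/zfs/parameters/zfs_flags
cat /sys/module/zfs/parameters/zfs_flags   # => 768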

+ + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_flags

Notes

Tags

debug

When to change

When debugging ZFS

Data Type

int

Default

0 no debug flags set, for debug builds: all +except ZFS_DEBUG_DPRINTF and ZFS_DEBUG_SPA

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_free_leak_on_eio

+

If destroy encounters an I/O error (EIO) while reading metadata (eg +indirect blocks), space referenced by the missing metadata cannot be +freed. Normally, this causes the background destroy to become “stalled”, +as the destroy is unable to make forward progress. While in this stalled +state, all remaining space to free from the error-encountering +filesystem is temporarily leaked. Set zfs_free_leak_on_eio = 1 to +ignore the EIO, permanently leak the space from indirect blocks that can +not be read, and continue to free everything else that it can.

+

The default, stalling behavior is useful if the storage partially fails +(eg some but not all I/Os fail), and then later recovers. In this case, +we will be able to continue pool operations while it is partially +failed, and when it recovers, we can continue to free the space, with no +leaks. However, note that this case is rare.

+

Typically pools either:

+
  1. fail completely, but perhaps temporarily (eg a top-level vdev going offline), or
  2. have localized, permanent errors (eg a disk returns the wrong data due to a bit flip or firmware bug)
+

In case (1), the zfs_free_leak_on_eio setting does not matter because the pool will be suspended and the sync thread will not be able to make forward progress. In case (2), because the error is permanent, the best we can do is leak the minimum amount of space. Therefore, it is reasonable for zfs_free_leak_on_eio to be set, but by default the more conservative approach is taken, so that there is no possibility of leaking space in the “partial temporary” failure case.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_free_leak_on_eio

Notes

Tags

debug

When to change

When debugging I/O errors during destroy

Data Type

boolean

Range

0=normal behavior, 1=ignore error and +permanently leak space

Default

0

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_free_min_time_ms

+

During a zfs destroy operation using feature@async_destroy a +minimum of zfs_free_min_time_ms time will be spent working on +freeing blocks per txg commit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_free_min_time_ms

Notes

Tags

delete

When to change

TBD

Data Type

int

Units

milliseconds

Range

1 to (zfs_txg_timeout * 1000)

Default

1,000

Change

Dynamic

Versions Affected

v0.6.0 and later

+
+
+

zfs_immediate_write_sz

+

If a pool does not have a log device, data blocks equal to or larger +than zfs_immediate_write_sz are treated as if the dataset being +written to had the property setting logbias=throughput

+

Terminology note: logbias=throughput writes the blocks in “indirect +mode” to the ZIL where the data is written to the pool and a pointer to +the data is written to the ZIL.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_immediate_write_sz

Notes

Tags

ZIL

When to change

TBD

Data Type

long

Units

bytes

Range

512 to 16,777,216 (valid block sizes)

Default

32,768 (32 KiB)

Change

Dynamic

Verification

Data blocks that exceed +zfs_immediate_write_sz or are written +as logbias=throughput increment the +zil_itx_indirect_count entry in +/proc/spl/kstat/zfs/zil

Versions Affected

all

+
+
+

zfs_max_recordsize

+

ZFS supports logical record (block) sizes from 512 bytes to 16 MiB. The +benefits of larger blocks, and thus larger average I/O sizes, can be +weighed against the cost of copy-on-write of large block to modify one +byte. Additionally, very large blocks can have a negative impact on both +I/O latency at the device level and the memory allocator. The +zfs_max_recordsize parameter limits the upper bound of the dataset +volblocksize and recordsize properties.

+

Larger blocks can be created by enabling the zpool large_blocks feature and increasing zfs_max_recordsize. Pools with larger blocks can always be imported and used, regardless of the value of zfs_max_recordsize.

+

For 32-bit systems, zfs_max_recordsize also limits the size of +kernel virtual memory caches used in the ZFS I/O pipeline (zio_buf_* +and zio_data_buf_*).

+

See also the zpool large_blocks feature.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_max_recordsize

Notes

Tags

filesystem, +memory, volume

When to change

To create datasets with larger volblocksize or +recordsize

Data Type

int

Units

bytes

Range

512 to 16,777,216 (valid block sizes)

Default

1,048,576

Change

Dynamic, set prior to creating volumes or +changing filesystem recordsize

Versions Affected

v0.6.5 and later

+
+
+

zfs_mdcomp_disable

+

zfs_mdcomp_disable allows metadata compression to be disabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_mdcomp_disable

Notes

Tags

CPU, metadata

When to change

When CPU cycles cost less than I/O

Data Type

boolean

Range

0=compress metadata, 1=do not compress metadata

Default

0

Change

Dynamic

Versions Affected

from v0.6.0 to v0.8.0

+
+
+

zfs_metaslab_fragmentation_threshold

+

Allow metaslabs to keep their active state as long as their +fragmentation percentage is less than or equal to this value. When +writing, an active metaslab whose fragmentation percentage exceeds +zfs_metaslab_fragmentation_threshold is avoided allowing metaslabs +with less fragmentation to be preferred.

+

Metaslab fragmentation is used to calculate the overall pool fragmentation property value. However, individual metaslab fragmentation levels are observable using zdb with the -mm option.

+

zfs_metaslab_fragmentation_threshold works at the metaslab level and +each top-level vdev has approximately +metaslabs_per_vdev metaslabs. See also +zfs_mg_fragmentation_threshold
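Per-metaslab fragmentation can be examined with zdb as noted above; the pool name tank is an example:
# print metaslab maps and per-metaslab fragmentation for each top-level vdev
zdb -mm tank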

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_metaslab_fragmentation_threshold

Notes

Tags

allocation, fragmentation, vdev

When to change

Testing metaslab allocation

Data Type

int

Units

percent

Range

1 to 100

Default

70

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_mg_fragmentation_threshold

+

Metaslab groups (top-level vdevs) are considered eligible for +allocations if their fragmentation percentage metric is less than or +equal to zfs_mg_fragmentation_threshold. If a metaslab group exceeds +this threshold then it will be skipped unless all metaslab groups within +the metaslab class have also crossed the +zfs_mg_fragmentation_threshold threshold.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_mg_fragmentation_threshold

Notes

Tags

allocation, fragmentation, vdev

When to change

Testing metaslab allocation

Data Type

int

Units

percent

Range

1 to 100

Default

85

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_mg_noalloc_threshold

+

Metaslab groups (top-level vdevs) with free space percentage greater +than zfs_mg_noalloc_threshold are eligible for new allocations. If a +metaslab group’s free space is less than or equal to the threshold, the +allocator avoids allocating to that group unless all groups in the pool +have reached the threshold. Once all metaslab groups have reached the +threshold, all metaslab groups are allowed to accept allocations. The +default value of 0 disables the feature and causes all metaslab groups +to be eligible for allocations.

+

This parameter allows one to deal with pools having heavily imbalanced +vdevs such as would be the case when a new vdev has been added. Setting +the threshold to a non-zero percentage will stop allocations from being +made to vdevs that aren’t filled to the specified percentage and allow +lesser filled vdevs to acquire more allocations than they otherwise +would under the older zfs_mg_alloc_failures facility.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_mg_noalloc_threshold

Notes

Tags

allocation, +fragmentation, +vdev

When to change

To force rebalancing as top-level vdevs +are added or expanded

Data Type

int

Units

percent

Range

0 to 100

Default

0 (disabled)

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_multihost_history

+

The pool multihost multimodifier protection (MMP) subsystem can +record historical updates in the +/proc/spl/kstat/zfs/POOL_NAME/multihost file for debugging purposes. +The number of lines of history is determined by zfs_multihost_history.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_multihost_history

Notes

Tags

MMP, import

When to change

When testing multihost feature

Data Type

int

Units

lines

Range

0 to INT_MAX

Default

0

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_multihost_interval

+

zfs_multihost_interval controls the frequency of multihost writes +performed by the pool multihost multimodifier protection (MMP) +subsystem. The multihost write period is (zfs_multihost_interval / +number of leaf-vdevs) milliseconds. Thus on average a multihost write +will be issued for each leaf vdev every zfs_multihost_interval +milliseconds. In practice, the observed period can vary with the I/O +load and this observed value is the delay which is stored in the +uberblock.

+

On import the multihost activity check waits a minimum amount of time +determined by (zfs_multihost_interval * +zfs_multihost_import_intervals) +with a lower bound of 1 second. The activity check time may be further +extended if the value of mmp delay found in the best uberblock indicates +actual multihost updates happened at longer intervals than +zfs_multihost_interval

+

Note: the multihost protection feature applies to storage devices that +can be shared between multiple systems.

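For example, with the defaults of zfs_multihost_interval = 1000 ms and zfs_multihost_import_intervals = 20, an importing host waits roughly 1000 ms × 20 = 20 seconds (plus a random factor of up to 25%) before concluding the pool is inactive. A sketch of inspecting the parameters and recent MMP writes (tank is a placeholder pool name):

cat /sys/module/zfs/parameters/zfs_multihost_interval
cat /sys/module/zfs/parameters/zfs_multihost_import_intervals
# recent multihost (MMP) write history for pool tank
head /proc/spl/kstat/zfs/tank/multihost
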
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_multihost_interval

Notes

Tags

MMP, import, +vdev

When to change

To optimize pool import time against +possibility of simultaneous import by +another system

Data Type

ulong

Units

milliseconds

Range

100 to ULONG_MAX

Default

1000

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_multihost_import_intervals

+

zfs_multihost_import_intervals controls the duration of the activity test on pool import for the multihost multimodifier protection (MMP) subsystem. The activity test can be expected to take a minimum time of (zfs_multihost_import_intervals * zfs_multihost_interval * random(25%)) milliseconds. The random period of up to 25% improves simultaneous import detection. For example, if two hosts are rebooted at the same time and automatically attempt to import the pool, then it is highly probable that one host will win.

+

Smaller values of zfs_multihost_import_intervals reduce the import time but increase the risk of failing to detect an active pool. The total activity check time is never allowed to drop below one second.

+

Note: the multihost protection feature applies to storage devices that +can be shared between multiple systems.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_multihost_import_intervals

Notes

Tags

MMP, import

When to change

TBD

Data Type

uint

Units

intervals

Range

1 to UINT_MAX

Default

20 since v0.8, previously 10

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_multihost_fail_intervals

+

zfs_multihost_fail_intervals controls the behavior of the pool when +write failures are detected in the multihost multimodifier protection +(MMP) subsystem.

+

If zfs_multihost_fail_intervals = 0 then multihost write failures +are ignored. The write failures are reported to the ZFS event daemon +(zed) which can take action such as suspending the pool or offlining +a device.

+
+
If zfs_multihost_fail_intervals > 0 then sequential multihost +write failures will cause the pool to be suspended. This occurs when +(zfs_multihost_fail_intervals * +zfs_multihost_interval) milliseconds +have passed since the last successful multihost write.
+
This guarantees the activity test will see multihost writes if the +pool is attempted to be imported by another system.
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_multihost_fail_intervals

Notes

Tags

MMP, import

When to change

TBD

Data Type

uint

Units

intervals

Range

0 to UINT_MAX

Default

10 since v0.8, previously 5

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_delays_per_second

+

The ZFS Event Daemon (zed) processes events from ZFS. However, it can be +overwhelmed by high rates of error reports which can be generated by +failing, high-performance devices. zfs_delays_per_second limits the +rate of delay events reported to zed.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_delays_per_second

Notes

Tags

zed, delay

When to change

If processing delay events at a higher rate +is desired

Data Type

uint

Units

events per second

Range

0 to UINT_MAX

Default

20

Change

Dynamic

Versions Affected

v0.7.7 and later

+
+
+

zfs_checksums_per_second

+

The ZFS Event Daemon (zed) processes events from ZFS. However, it can be +overwhelmed by high rates of error reports which can be generated by +failing, high-performance devices. zfs_checksums_per_second limits +the rate of checksum events reported to zed.

+

Note: do not set this value lower than the SERD limit for checksum +in zed. By default, checksum_N = 10 and checksum_T = 10 minutes, +resulting in a practical lower limit of 1.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_checksums_per_second

Notes

Tags

zed, checksum

When to change

If processing checksum error events at a +higher rate is desired

Data Type

uint

Units

events per second

Range

0 to UINT_MAX

Default

20

Change

Dynamic

Versions Affected

v0.7.7 and later

+
+
+

zfs_no_scrub_io

+

When zfs_no_scrub_io = 1, scrubs do not actually scrub data; they simply perform a metadata crawl of the pool instead.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_no_scrub_io

Notes

Tags

scrub

When to change

Testing scrub feature

Data Type

boolean

Range

0=perform scrub I/O, 1=do not perform scrub I/O

Default

0

Change

Dynamic

Versions Affected

v0.6.0 and later

+
+
+

zfs_no_scrub_prefetch

+

When zfs_no_scrub_prefetch = 1, prefetch is disabled for scrub I/Os.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_no_scrub_prefetch

Notes

Tags

prefetch, scrub

When to change

Testing scrub feature

Data Type

boolean

Range

0=prefetch scrub I/Os, 1=do not prefetch scrub I/Os

Default

0

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_nocacheflush

+

ZFS uses barriers (volatile cache flush commands) to ensure data is +committed to permanent media by devices. This ensures consistent +on-media state for devices where caches are volatile (eg HDDs).

+

For devices with nonvolatile caches, the cache flush operation can be a +no-op. However, in some RAID arrays, cache flushes can cause the entire +cache to be flushed to the backing devices.

+

To ensure on-media consistency, keep cache flush enabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_nocacheflush

Notes

Tags

disks

When to change

If the storage device has nonvolatile cache, +then disabling cache flush can save the cost of +occasional cache flush commands

Data Type

boolean

Range

0=send cache flush commands, 1=do not send +cache flush commands

Default

0

Change

Dynamic

Versions Affected

all

+
+
+

zfs_nopwrite_enabled

+

The NOP-write feature is enabled by default when a cryptographically secure checksum algorithm is in use by the dataset. zfs_nopwrite_enabled allows the NOP-write feature to be completely disabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_nopwrite_enabled

Notes

Tags

checksum, debug

When to change

TBD

Data Type

boolean

Range

0=disable NOP-write feature, 1=enable +NOP-write feature

Default

1

Change

Dynamic

Versions Affected

v0.6.0 and later

+
+
+

zfs_dmu_offset_next_sync

+

zfs_dmu_offset_next_sync enables forcing a txg sync to find holes. This makes ZFS behave like older versions when the SEEK_HOLE or SEEK_DATA flags are used: if a dnode is dirty, a txg sync is forced so that the previous data can be found and holes are reported accurately.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dmu_offset_next_sync

Notes

Tags

DMU

When to change

to exchange strict hole reporting for +performance

Data Type

boolean

Range

0=do not force txg sync to find holes, +1=force txg sync to find holes

Default

1 since v2.1.5, previously 0

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_pd_bytes_max

+

zfs_pd_bytes_max limits the number of bytes prefetched during a pool +traversal (eg zfs send or other data crawling operations). These +prefetches are referred to as “prescient prefetches” and are always 100% +hit rate. The traversal operations do not use the default data or +metadata prefetcher.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_pd_bytes_max

Notes

Tags

prefetch, send

When to change

TBD

Data Type

int32

Units

bytes

Range

0 to INT32_MAX

Default

52,428,800 (50 MiB)

Change

Dynamic

Versions Affected

TBD

+
+
+

zfs_per_txg_dirty_frees_percent

+

zfs_per_txg_dirty_frees_percent, expressed as a percentage of zfs_dirty_data_max, limits the amount of dirty blocks generated by frees in a single txg. After the threshold is crossed, additional dirty blocks from frees wait until the next txg. Thus, deleting large files fills consecutive txgs with deletes/frees without throttling other, perhaps more important, writes.

+

A side effect of this throttle can impact zfs receive workloads that +contain a large number of frees and the +ignore_hole_birth optimization is disabled. The +symptom is that the receive workload causes an increase in the frequency +of txg commits. The frequency of txg commits is observable via the +otime column of /proc/spl/kstat/zfs/POOLNAME/txgs. Since txg +commits also flush data from volatile caches in HDDs to media, HDD +performance can be negatively impacted. Also, since the frees do not +consume much bandwidth over the pipe, the pipe can appear to stall. Thus +the overall progress of receives is slower than expected.

+

A value of zero will disable this throttle.

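To judge whether a delete or receive workload is forcing frequent txg commits, the otime column mentioned above can be watched and the throttle adjusted; tank and the value 50 are placeholders:

# txg history for pool tank; the otime column is the commit interval in nanoseconds
cat /proc/spl/kstat/zfs/tank/txgs
# raise the throttle to 50%, or use 0 to disable it
echo 50 > /sys/module/zfs/parameters/zfs_per_txg_dirty_frees_percent
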
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_per_txg_dirty_frees_percent

Notes

Tags

delete

When to change

For zfs receive workloads, consider increasing or disabling. See section ZFS I/O Scheduler

Data Type

ulong

Units

percent

Range

0 to 100

Default

30

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_prefetch_disable

+

zfs_prefetch_disable controls the predictive prefetcher.

+

Note that it leaves “prescient” prefetch (eg prefetch for zfs send) +intact (see zfs_pd_bytes_max)

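A sketch of checking prefetch efficacy before deciding to disable the prefetcher; the prefetch_* counters in arcstats are the relevant entries:

# prefetch hit/miss counters
grep prefetch /proc/spl/kstat/zfs/arcstats
# disable the predictive prefetcher (prescient prefetch is unaffected)
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable
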
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_prefetch_disable

Notes

Tags

prefetch

When to change

In some case where the workload is +completely random reads, overall performance +can be better if prefetch is disabled

Data Type

boolean

Range

0=prefetch enabled, 1=prefetch disabled

Default

0

Change

Dynamic

Verification

prefetch efficacy is observed by +arcstat, arc_summary, and the +relevant entries in +/proc/spl/kstat/zfs/arcstats

Versions Affected

all

+
+
+

zfs_read_chunk_size

+

zfs_read_chunk_size is the limit for ZFS filesystem reads. If an +application issues a read() larger than zfs_read_chunk_size, +then the read() is divided into multiple operations no larger than +zfs_read_chunk_size

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_read_chunk_size

Notes

Tags

filesystem

When to change

TBD

Data Type

ulong

Units

bytes

Range

512 to ULONG_MAX

Default

1,048,576

Change

Dynamic

Versions Affected

all

+
+
+

zfs_read_history

+

Historical statistics for the last zfs_read_history reads are +available in /proc/spl/kstat/zfs/POOL_NAME/reads

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_read_history

Notes

Tags

debug

When to change

To observe read operation details

Data Type

int

Units

lines

Range

0 to INT_MAX

Default

0

Change

Dynamic

Versions Affected

all

+
+
+

zfs_read_history_hits

+

When zfs_read_history > 0, zfs_read_history_hits controls whether ARC hits are displayed in the read history file, /proc/spl/kstat/zfs/POOL_NAME/reads

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_read_history_hits

Notes

Tags

debug

When to change

To observe read operation details with ARC +hits

Data Type

boolean

Range

0=do not include data for ARC hits, +1=include ARC hit data

Default

0

Change

Dynamic

Versions Affected

all

+
+
+

zfs_recover

+

zfs_recover can be set to true (1) to attempt to recover from +otherwise-fatal errors, typically caused by on-disk corruption. When +set, calls to zfs_panic_recover() will turn into warning messages +rather than calling panic()

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_recover

Notes

Tags

import

When to change

zfs_recover should only be used as a last +resort, as it typically results in leaked +space, or worse

Data Type

boolean

Range

0=normal operation, 1=attempt recovery zpool +import

Default

0

Change

Dynamic

Verification

check output of dmesg and other logs for +details

Versions Affected

v0.6.4 or later

+
+
+

zfs_resilver_min_time_ms

+

Resilvers are processed by the sync thread in syncing context. While +resilvering, ZFS spends at least zfs_resilver_min_time_ms time +working on a resilver between txg commits.

+

The zfs_txg_timeout tunable sets a nominal +timeout value for the txg commits. By default, this timeout is 5 seconds +and the zfs_resilver_min_time_ms is 3 seconds. However, many +variables contribute to changing the actual txg times. The measured txg +interval is observed as the otime column (in nanoseconds) in the +/proc/spl/kstat/zfs/POOL_NAME/txgs file.

+

See also zfs_txg_timeout and +zfs_scan_min_time_ms

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_resilver_min_time_ms

Notes

Tags

resilver

When to change

In some resilvering cases, increasing +zfs_resilver_min_time_ms can result +in faster completion

Data Type

int

Units

milliseconds

Range

1 to +zfs_txg_timeout +converted to milliseconds

Default

3,000

Change

Dynamic

Versions Affected

all

+
+
+

zfs_scan_min_time_ms

+

Scrubs are processed by the sync thread in syncing context. While +scrubbing, ZFS spends at least zfs_scan_min_time_ms time working on +a scrub between txg commits.

+

See also zfs_txg_timeout and +zfs_resilver_min_time_ms

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_min_time_ms

Notes

Tags

scrub

When to change

In some scrub cases, increasing +zfs_scan_min_time_ms can result in +faster completion

Data Type

int

Units

milliseconds

Range

1 to zfs_txg_timeout +converted to milliseconds

Default

1,000

Change

Dynamic

Versions Affected

all

+
+
+

zfs_scan_checkpoint_intval

+

To preserve progress across reboots, the sequential scan algorithm periodically stops metadata scanning and issues all the verification I/Os to disk every zfs_scan_checkpoint_intval seconds.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_checkpoint_intval

Notes

Tags

resilver, scrub

When to change

TBD

Data Type

int

Units

seconds

Range

1 to INT_MAX

Default

7,200 (2 hours)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_fill_weight

+

This tunable affects how scrub and resilver I/O segments are ordered. A +higher number indicates that we care more about how filled in a segment +is, while a lower number indicates we care more about the size of the +extent without considering the gaps within a segment.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_fill_weight

Notes

Tags

resilver, scrub

When to change

Testing sequential scrub and resilver

Data Type

int

Units

scalar

Range

0 to INT_MAX

Default

3

Change

Prior to zfs module load

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_issue_strategy

+

zfs_scan_issue_strategy controls the order of data verification +while scrubbing or resilvering.

+ + + + + + + + + + + + + + + + + +

value

description

0

ZFS will use strategy 1 during normal verification and strategy 2 while taking a checkpoint

1

data is verified as sequentially as possible, given the +amount of memory reserved for scrubbing (see +zfs_scan_mem_lim_fact). This +can improve scrub performance if the pool’s data is heavily +fragmented.

2

the largest mostly-contiguous chunk of found data is +verified first. By deferring scrubbing of small segments, +we may later find adjacent data to coalesce and increase +the segment size.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_issue_strategy

Notes

Tags

resilver, scrub

When to change

TBD

Data Type

enum

Range

0 to 2

Default

0

Change

Dynamic

Versions Affected

TBD

+
+
+

zfs_scan_legacy

+

Setting zfs_scan_legacy = 1 enables the legacy scan and scrub +behavior instead of the newer sequential behavior.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_legacy

Notes

Tags

resilver, scrub

When to change

In some cases, the new scan mode can consume more memory as it collects and sorts I/Os; using the legacy algorithm can be more memory efficient at the expense of HDD read efficiency

Data Type

boolean

Range

0=use the new method: scrubs and resilvers gather metadata in memory before issuing sequential I/O, 1=use the legacy algorithm, where I/O is initiated as soon as it is discovered

Default

0

Change

Dynamic, however changing to 0 does not affect +in-progress scrubs or resilvers

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_max_ext_gap

+

zfs_scan_max_ext_gap limits the largest gap in bytes between scrub +and resilver I/Os that will still be considered sequential for sorting +purposes.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_max_ext_gap

Notes

Tags

resilver, scrub

When to change

TBD

Data Type

ulong

Units

bytes

Range

512 to ULONG_MAX

Default

2,097,152 (2 MiB)

Change

Dynamic, however changing to 0 does not +affect in-progress scrubs or resilvers

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_mem_lim_fact

+

zfs_scan_mem_lim_fact limits the maximum fraction of RAM used for +I/O sorting by sequential scan algorithm. When the limit is reached +scanning metadata is stopped and data verification I/O is started. Data +verification I/O continues until the memory used by the sorting +algorithm drops by +zfs_scan_mem_lim_soft_fact

+

Memory used by the sequential scan algorithm can be observed as the kmem +sio_cache. This is visible from procfs as +grep sio_cache /proc/slabinfo and can be monitored using +slab-monitoring tools such as slabtop

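For example, on a host with 128 GiB of RAM the default hard limit is 128 GiB / 20 ≈ 6.4 GiB. A sketch of the monitoring described above:

# current sio_cache slab usage
grep sio_cache /proc/slabinfo
# or watch it interactively, sorted by cache size
slabtop -s c
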
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_mem_lim_fact

Notes

Tags

memory, +resilver, +scrub

When to change

TBD

Data Type

int

Units

divisor of physical RAM

Range

TBD

Default

20 (physical RAM / 20 or 5%)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_mem_lim_soft_fact

+

zfs_scan_mem_lim_soft_fact sets the fraction of the hard limit, +zfs_scan_mem_lim_fact, used to determined +the RAM soft limit for I/O sorting by the sequential scan algorithm. +After zfs_scan_mem_lim_fact has been +reached, metadata scanning is stopped until the RAM usage drops by +zfs_scan_mem_lim_soft_fact

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_mem_lim_soft_fact

Notes

Tags

resilver, +scrub

When to change

TBD

Data Type

int

Units

divisor of (physical RAM / zfs_scan_mem_lim_fact)

Range

1 to INT_MAX

Default

20 (for the default zfs_scan_mem_lim_fact, 0.25% of physical RAM)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_vdev_limit

+

zfs_scan_vdev_limit is the maximum amount of data that can be +concurrently issued at once for scrubs and resilvers per leaf vdev. +zfs_scan_vdev_limit attempts to strike a balance between keeping the +leaf vdev queues full of I/Os while not overflowing the queues causing +high latency resulting in long txg sync times. While +zfs_scan_vdev_limit represents a bandwidth limit, the existing I/O +limit of zfs_vdev_scrub_max_active +remains in effect, too.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_vdev_limit

Notes

Tags

resilver, scrub, +vdev

When to change

TBD

Data Type

ulong

Units

bytes

Range

512 to ULONG_MAX

Default

4,194,304 (4 MiB)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_send_corrupt_data

+

zfs_send_corrupt_data enables zfs send to send corrupt data by ignoring read and checksum errors. The corrupted or unreadable blocks are replaced with the value 0x2f5baddb10c (ZFS bad block)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_send_corrupt_data

Notes

Tags

send

When to change

When data corruption exists and an attempt +to recover at least some data via +zfs send is needed

Data Type

boolean

Range

0=do not send corrupt data, 1=replace +corrupt data with cookie

Default

0

Change

Dynamic

Versions Affected

v0.6.0 and later

+
+
+

zfs_sync_pass_deferred_free

+

The SPA sync process is performed in multiple passes. Once the pass number reaches zfs_sync_pass_deferred_free, frees are no longer processed and must wait for the next SPA sync.

+

The zfs_sync_pass_deferred_free value is expected to be removed as a +tunable once the optimal value is determined during field testing.

+

The zfs_sync_pass_deferred_free pass must be greater than 1 to +ensure that regular blocks are not deferred.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_sync_pass_deferred_free

Notes

Tags

SPA

When to change

Testing SPA sync process

Data Type

int

Units

SPA sync passes

Range

1 to INT_MAX

Default

2

Change

Dynamic

Versions Affected

all

+
+
+

zfs_sync_pass_dont_compress

+

The SPA sync process is performed in multiple passes. Once the pass +number reaches zfs_sync_pass_dont_compress, data block compression +is no longer processed and must wait for the next SPA sync.

+

The zfs_sync_pass_dont_compress value is expected to be removed as a +tunable once the optimal value is determined during field testing.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_sync_pass_dont_compress

Notes

Tags

SPA

When to change

Testing SPA sync process

Data Type

int

Units

SPA sync passes

Range

1 to INT_MAX

Default

5

Change

Dynamic

Versions Affected

all

+
+
+

zfs_sync_pass_rewrite

+

The SPA sync process is performed in multiple passes. Once the pass +number reaches zfs_sync_pass_rewrite, blocks can be split into gang +blocks.

+

The zfs_sync_pass_rewrite value is expected to be removed as a +tunable once the optimal value is determined during field testing.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_sync_pass_rewrite

Notes

Tags

SPA

When to change

Testing SPA sync process

Data Type

int

Units

SPA sync passes

Range

1 to INT_MAX

Default

2

Change

Dynamic

Versions Affected

all

+
+
+

zfs_sync_taskq_batch_pct

+

zfs_sync_taskq_batch_pct controls the number of threads used by the +DSL pool sync taskq, dp_sync_taskq

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_sync_taskq_batch_pct

Notes

Tags

SPA

When to change

to adjust the number of +dp_sync_taskq threads

Data Type

int

Units

percent of number of online CPUs

Range

1 to 100

Default

75

Change

Prior to zfs module load

Versions Affected

v0.7.0 and later

+
+
+

zfs_txg_history

+

Historical statistics for the last zfs_txg_history txg commits are +available in /proc/spl/kstat/zfs/POOL_NAME/txgs

+

The work required to measure the txg commit (SPA statistics) is low. +However, for debugging purposes, it can be useful to observe the SPA +statistics.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_txg_history

Notes

Tags

debug

When to change

To observe details of SPA sync behavior.

Data Type

int

Units

lines

Range

0 to INT_MAX

Default

0 for version v0.6.0 to v0.7.6, 100 for version v0.8.0

Change

Dynamic

Versions Affected

all

+
+
+

zfs_txg_timeout

+

The open txg is committed to the pool periodically (SPA sync) and +zfs_txg_timeout represents the default target upper limit.

+

txg commits can occur more frequently and a rapid rate of txg commits +often indicates a busy write workload, quota limits reached, or the free +space is critically low.

+

Many variables contribute to changing the actual txg times. txg commits +can also take longer than zfs_txg_timeout if the ZFS write throttle +is not properly tuned or the time to sync is otherwise delayed (eg slow +device). Shorter txg commit intervals can occur due to +zfs_dirty_data_sync for write-intensive +workloads. The measured txg interval is observed as the otime column +(in nanoseconds) in the /proc/spl/kstat/zfs/POOL_NAME/txgs file.

+

See also zfs_dirty_data_sync and +zfs_txg_history

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_txg_timeout

Notes

Tags

SPA, +ZIO_scheduler

When to change

To optimize the work done by txg commit +relative to the pool requirements. See also +section ZFS I/O +Scheduler

Data Type

int

Units

seconds

Range

1 to INT_MAX

Default

5

Change

Dynamic

Versions Affected

all

+
+
+

zfs_vdev_aggregation_limit

+

To reduce IOPs, small, adjacent I/Os can be aggregated (coalesced) into +a large I/O. For reads, aggregations occur across small adjacency gaps. +For writes, aggregation can occur at the ZFS or disk level. +zfs_vdev_aggregation_limit is the upper bound on the size of the +larger, aggregated I/O.

+

Setting zfs_vdev_aggregation_limit = 0 effectively disables +aggregation by ZFS. However, the block device scheduler can still merge +(aggregate) I/Os. Also, many devices, such as modern HDDs, contain +schedulers that can aggregate I/Os.

+

In general, I/O aggregation can improve performance for devices, such as +HDDs, where ordering I/O operations for contiguous LBAs is a benefit. +For random access devices, such as SSDs, aggregation might not improve +performance relative to the CPU cycles needed to aggregate. For devices +that represent themselves as having no rotation, the +zfs_vdev_aggregation_limit_non_rotating +parameter is used instead of zfs_vdev_aggregation_limit

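A sketch of the verification described in the table below: ZFS-level aggregation shows up in the per-vdev request-size histogram, and block-layer merging in iostat. The pool name tank and device sda are placeholders:

# request size histogram, including aggregated I/Os, every 5 seconds
zpool iostat -r tank 5
# block scheduler merges appear in the rrqm/s and wrqm/s columns
iostat -x 5 /dev/sda
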
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_aggregation_limit

Notes

Tags

vdev, +ZIO_scheduler

When to change

If the workload does not benefit from +aggregation, the +zfs_vdev_aggregation_limit can be +reduced to avoid aggregation attempts

Data Type

int

Units

bytes

Range

0 to 1,048,576 (default) or 16,777,216 +(if zpool large_blocks feature +is enabled)

Default

1,048,576, or 131,072 for <v0.8

Change

Dynamic

Verification

ZFS aggregation is observed with +zpool iostat -r and the block +scheduler merging is observed with +iostat -x

Versions Affected

all

+
+
+

zfs_vdev_cache_size

+

Note: with the current ZFS code, the vdev cache is not helpful and in some cases actually harmful. Thus it is disabled by setting zfs_vdev_cache_size = 0

+

zfs_vdev_cache_size is the size of the vdev cache.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_cache_size

Notes

Tags

vdev, +vdev_cache

When to change

Do not change

Data Type

int

Units

bytes

Range

0 to MAX_INT

Default

0 (vdev cache is disabled)

Change

Dynamic

Verification

vdev cache statistics are available in the +/proc/spl/kstat/zfs/vdev_cache_stats file

Versions Affected

all

+
+
+

zfs_vdev_cache_bshift

+

Note: with the current ZFS code, the vdev cache is not helpful and in +some cases actually harmful. Thus it is disabled by setting the +zfs_vdev_cache_size to zero. This related +tunable is, by default, inoperative.

+

All read I/Os smaller than zfs_vdev_cache_max +are turned into (1 << zfs_vdev_cache_bshift) byte reads by the vdev +cache. At most zfs_vdev_cache_size bytes will +be kept in each vdev’s cache.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_cache_bshift

Notes

Tags

vdev, vdev_cache

When to change

Do not change

Data Type

int

Units

shift

Range

1 to INT_MAX

Default

16 (65,536 bytes)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_vdev_cache_max

+

Note: with the current ZFS code, the vdev cache is not helpful and in +some cases actually harmful. Thus it is disabled by setting the +zfs_vdev_cache_size to zero. This related +tunable is, by default, inoperative.

+

All read I/Os smaller than zfs_vdev_cache_max will be turned into (1 << zfs_vdev_cache_bshift) byte reads by the vdev cache. At most zfs_vdev_cache_size bytes will be kept in each vdev’s cache.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_cache_max

Notes

Tags

vdev, vdev_cache

When to change

Do not change

Data Type

int

Units

bytes

Range

512 to INT_MAX

Default

16,384 (16 KiB)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_vdev_mirror_rotating_inc

+

The mirror read algorithm uses current load and an incremental weighting +value to determine the vdev to service a read operation. Lower values +determine the preferred vdev. The weighting value is +zfs_vdev_mirror_rotating_inc for rotating media and +zfs_vdev_mirror_non_rotating_inc +for nonrotating media.

+

Verify the rotational setting described by a block device in sysfs by +observing /sys/block/DISK_NAME/queue/rotational

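A minimal check of how the kernel classifies each mirror member; sda and nvme0n1 are placeholder device names:

# 1 = rotating (HDD), 0 = nonrotating (SSD/NVMe)
cat /sys/block/sda/queue/rotational
cat /sys/block/nvme0n1/queue/rotational
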
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_mirror_rotating_inc

Notes

Tags

vdev, +mirror, HDD

When to change

Increasing for mirrors with both +rotating and nonrotating media more +strongly favors the nonrotating +media

Data Type

int

Units

scalar

Range

0 to MAX_INT

Default

0

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_mirror_non_rotating_inc

+

The mirror read algorithm uses current load and an incremental weighting +value to determine the vdev to service a read operation. Lower values +determine the preferred vdev. The weighting value is +zfs_vdev_mirror_rotating_inc for +rotating media and zfs_vdev_mirror_non_rotating_inc for nonrotating +media.

+

Verify the rotational setting described by a block device in sysfs by +observing /sys/block/DISK_NAME/queue/rotational

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_mirror_non_rotating_inc

Notes

Tags

vdev, +mirror, +SSD

When to change

TBD

Data Type

int

Units

scalar

Range

0 to INT_MAX

Default

0

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_mirror_rotating_seek_inc

+

For rotating media in a mirror, if the next I/O offset is within +zfs_vdev_mirror_rotating_seek_offset +then the weighting factor is incremented by +(zfs_vdev_mirror_rotating_seek_inc / 2). Otherwise the weighting +factor is increased by zfs_vdev_mirror_rotating_seek_inc. This +algorithm prefers rotating media with lower seek distance.

+

Verify the rotational setting described by a block device in sysfs by +observing /sys/block/DISK_NAME/queue/rotational

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_mirror_rotating_seek_inc

Notes

Tags

vdev, +mirror, +HDD

When to change

TBD

Data Type

int

Units

scalar

Range

0 to INT_MAX

Default

5

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_mirror_rotating_seek_offset

+

For rotating media in a mirror, if the next I/O offset is within +zfs_vdev_mirror_rotating_seek_offset then the weighting factor is +incremented by +(zfs_vdev_mirror_rotating_seek_inc/ 2). +Otherwise the weighting factor is increased by +zfs_vdev_mirror_rotating_seek_inc. This algorithm prefers rotating +media with lower seek distance.

+

Verify the rotational setting described by a block device in sysfs by +observing /sys/block/DISK_NAME/queue/rotational

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_mirror_rotating_seek_offset

Notes

Tags

vdev, +mirror, +HDD

When to change

TBD

Data Type

int

Units

bytes

Range

0 to INT_MAX

Default

1,048,576 (1 MiB)

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_mirror_non_rotating_seek_inc

+

For nonrotating media in a mirror, a seek penalty is applied because sequential I/Os can be aggregated into fewer operations, avoiding unnecessary per-command overhead and often boosting performance.

+

Verify the rotational setting described by a block device in SysFS by +observing /sys/block/DISK_NAME/queue/rotational

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_mirror_non_rotating_seek_inc

Notes

Tags

vdev, +mirror, +SSD

When to change

TBD

Data Type

int

Units

scalar

Range

0 to INT_MAX

Default

1

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_read_gap_limit

+

To reduce IOPs, small, adjacent I/Os are aggregated (coalesced) into a large I/O. For reads, aggregations occur across small adjacency gaps where the gap is less than zfs_vdev_read_gap_limit

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_read_gap_limit

Notes

Tags

vdev, +ZIO_scheduler

When to change

TBD

Data Type

int

Units

bytes

Range

0 to INT_MAX

Default

32,768 (32 KiB)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_vdev_write_gap_limit

+

To reduce IOPs, small, adjacent I/Os are aggregated (coalesced) into a large I/O. For writes, aggregations occur across small adjacency gaps where the gap is less than zfs_vdev_write_gap_limit

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_write_gap_limit

Notes

Tags

vdev, +ZIO_scheduler

When to change

TBD

Data Type

int

Units

bytes

Range

0 to INT_MAX

Default

4,096 (4 KiB)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_vdev_scheduler

+

Prior to version 0.8.3, when the pool is imported, for whole disk vdevs, +the block device I/O scheduler is set to zfs_vdev_scheduler. +The most common schedulers are: noop, cfq, bfq, and deadline. +In some cases, the scheduler is not changeable using this method. +Known schedulers that cannot be changed are: scsi_mq and none. +In these cases, the scheduler is unchanged and an error message can be +reported to logs.

+

The parameter was disabled in v0.8.3 but left in place to avoid breaking +loading of the zfs module if the parameter is specified in modprobe +configuration on existing installations. It is recommended that users +leave the default scheduler “unless you’re encountering a specific +problem, or have clearly measured a performance improvement for your +workload,” +and if so, to change it via the /sys/block/<device>/queue/scheduler +interface and/or udev rule.

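A sketch of the recommended approach on current versions: set the scheduler outside ZFS, either ad hoc through sysfs or persistently with a udev rule (sda and the rule file name are placeholders):

# one-off change for a single disk
echo none > /sys/block/sda/queue/scheduler
# persistent alternative: put a line like the following in
# /etc/udev/rules.d/66-io-scheduler.rules (hypothetical file name)
#   ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"
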
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_scheduler

Notes

Tags

vdev, +ZIO_scheduler

When to change

since ZFS has its own I/O scheduler, using a +simple scheduler can result in more consistent +performance

Data Type

string

Range

expected: noop, cfq, bfq, and deadline

Default

noop

Change

Dynamic, but takes effect upon pool creation +or import

Versions Affected

all, but no effect since v0.8.3

+
+
+

zfs_vdev_raidz_impl

+

zfs_vdev_raidz_impl overrides the raidz parity algorithm. By +default, the algorithm is selected at zfs module load time by the +results of a microbenchmark of algorithms based on the current hardware.

+

Once the module is loaded, the content of +/sys/module/zfs/parameters/zfs_vdev_raidz_impl shows available +options with the currently selected enclosed in []. Details of the +results of the microbenchmark are observable in the +/proc/spl/kstat/zfs/vdev_raidz_bench file.

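A quick way to see what was selected and why, using only the files named above:

# available implementations; the active one is shown in [brackets]
cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
# per-implementation microbenchmark results
cat /proc/spl/kstat/zfs/vdev_raidz_bench
# force a specific implementation, e.g. the scalar one
echo scalar > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
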
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

algorithm

architecture

description

fastest

all

fastest implementation +selected by +microbenchmark

original

all

original raidz +implementation

scalar

all

scalar raidz +implementation

sse2

64-bit x86

uses SSE2 instruction +set

ssse3

64-bit x86

uses SSSE3 instruction +set

avx2

64-bit x86

uses AVX2 instruction +set

avx512f

64-bit x86

uses AVX512F +instruction set

avx512bw

64-bit x86

uses AVX512F & AVX512BW +instruction sets

aarch64_neon

aarch64/64 bit ARMv8

uses NEON

aarch64_neonx2

aarch64/64 bit ARMv8

uses NEON with more +unrolling

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_raidz_impl

Notes

Tags

CPU, raidz, vdev

When to change

testing raidz algorithms

Data Type

string

Range

see table above

Default

fastest

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_zevent_cols

+

zfs_zevent_cols is a soft wrap limit in columns (characters) for ZFS +events logged to the console.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zevent_cols

Notes

Tags

debug

When to change

if 80 columns isn’t enough

Data Type

int

Units

characters

Range

1 to INT_MAX

Default

80

Change

Dynamic

Versions Affected

all

+
+
+

zfs_zevent_console

+

If zfs_zevent_console is true (1), then ZFS events are logged to the +console.

+

More logging and log filtering capabilities are provided by zed

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zevent_console

Notes

Tags

debug

When to change

to log ZFS events to the console

Data Type

boolean

Range

0=do not log to console, 1=log to console

Default

0

Change

Dynamic

Versions Affected

all

+
+
+

zfs_zevent_len_max

+

zfs_zevent_len_max is the maximum ZFS event queue length. A value of +0 results in a calculated value (16 * number of CPUs) with a minimum of +64. Events in the queue can be viewed with the zpool events command.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zevent_len_max

Notes

Tags

debug

When to change

increase to see more ZFS events

Data Type

int

Units

events

Range

0 to INT_MAX

Default

0 (calculate as described above)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_zil_clean_taskq_maxalloc

+

During a SPA sync, intent log transaction groups (itxg) are cleaned. The +cleaning work is dispatched to the DSL pool ZIL clean taskq +(dp_zil_clean_taskq). +zfs_zil_clean_taskq_minalloc is the +minimum and zfs_zil_clean_taskq_maxalloc is the maximum number of +cached taskq entries for dp_zil_clean_taskq. The actual number of +taskq entries dynamically varies between these values.

+

When zfs_zil_clean_taskq_maxalloc is exceeded transaction records +(itxs) are cleaned synchronously with possible negative impact to the +performance of SPA sync.

+

Ideally taskq entries are pre-allocated prior to being needed by +zil_clean(), thus avoiding dynamic allocation of new taskq entries.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zil_clean_taskq_maxalloc

Notes

Tags

ZIL

When to change

If more dp_zil_clean_taskq +entries are needed to prevent the +itxs from being synchronously +cleaned

Data Type

int

Units

dp_zil_clean_taskq taskq entries

Range

zfs_zil_clean_taskq_minalloc to INT_MAX

Default

1,048,576

Change

Dynamic, takes effect per-pool when +the pool is imported

Versions Affected

v0.8.0

+
+
+

zfs_zil_clean_taskq_minalloc

+

During a SPA sync, intent log transaction groups (itxg) are cleaned. The +cleaning work is dispatched to the DSL pool ZIL clean taskq +(dp_zil_clean_taskq). zfs_zil_clean_taskq_minalloc is the +minimum and +zfs_zil_clean_taskq_maxalloc is the +maximum number of cached taskq entries for dp_zil_clean_taskq. The +actual number of taskq entries dynamically varies between these values.

+

zfs_zil_clean_taskq_minalloc is the minimum number of ZIL +transaction records (itxs).

+

Ideally taskq entries are pre-allocated prior to being needed by +zil_clean(), thus avoiding dynamic allocation of new taskq entries.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zil_clean_taskq_minalloc

Notes

Tags

ZIL

When to change

TBD

Data Type

int

Units

dp_zil_clean_taskq taskq entries

Range

1 to zfs_zil_clean_taskq_maxalloc

Default

1,024

Change

Dynamic, takes effect per-pool when +the pool is imported

Versions Affected

v0.8.0

+
+
+

zfs_zil_clean_taskq_nthr_pct

+

zfs_zil_clean_taskq_nthr_pct controls the number of threads used by +the DSL pool ZIL clean taskq (dp_zil_clean_taskq). The default value +of 100% will create a maximum of one thread per cpu.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zil_clean_taskq_nthr_pct

Notes

Tags

taskq, ZIL

When to change

Testing ZIL clean and SPA sync +performance

Data Type

int

Units

percent of number of CPUs

Range

1 to 100

Default

100

Change

Dynamic, takes effect per-pool when +the pool is imported

Versions Affected

v0.8.0

+
+
+

zil_replay_disable

+

If zil_replay_disable = 1, then when a volume or filesystem is +brought online, no attempt to replay the ZIL is made and any existing +ZIL is destroyed. This can result in loss of data without notice.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zil_replay_disable

Notes

Tags

debug, ZIL

When to change

Do not change

Data Type

boolean

Range

0=replay ZIL, 1=destroy ZIL

Default

0

Change

Dynamic

Versions Affected

v0.6.5

+
+
+

zil_slog_bulk

+

zil_slog_bulk is the log device write size limit per commit executed with synchronous priority. Writes below zil_slog_bulk are executed with synchronous priority. Writes above zil_slog_bulk are executed with lower (asynchronous) priority to reduce potential log device abuse by a single active ZIL writer.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zil_slog_bulk

Notes

Tags

ZIL

When to change

See ZFS I/O +Scheduler

Data Type

ulong

Units

bytes

Range

0 to ULONG_MAX

Default

786,432

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zio_delay_max

+

If a ZFS I/O operation takes more than zio_delay_max milliseconds to +complete, then an event is logged. Note that this is only a logging +facility, not a timeout on operations. See also zpool events

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_delay_max

Notes

Tags

debug

When to change

when debugging slow I/O

Data Type

int

Units

milliseconds

Range

1 to INT_MAX

Default

30,000 (30 seconds)

Change

Dynamic

Versions Affected

all

+
+
+

zio_dva_throttle_enabled

+

zio_dva_throttle_enabled controls throttling of block allocations in +the ZFS I/O (ZIO) pipeline. When enabled, the maximum number of pending +allocations per top-level vdev is limited by +zfs_vdev_queue_depth_pct

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_dva_throttle_enabled

Notes

Tags

vdev, +ZIO_scheduler

When to change

Testing ZIO block allocation algorithms

Data Type

boolean

Range

0=do not throttle ZIO block allocations, +1=throttle ZIO block allocations

Default

1

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zio_requeue_io_start_cut_in_line

+

zio_requeue_io_start_cut_in_line controls prioritization of a +re-queued ZFS I/O (ZIO) in the ZIO pipeline by the ZIO taskq.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_requeue_io_start_cut_in_line

Notes

Tags

ZIO_scheduler

When to change

Do not change

Data Type

boolean

Range

0=don’t prioritize re-queued +I/Os, 1=prioritize re-queued +I/Os

Default

1

Change

Dynamic

Versions Affected

all

+
+
+

zio_taskq_batch_pct

+

zio_taskq_batch_pct sets the number of I/O worker threads as a percentage of online CPUs. These worker threads are responsible for I/O work such as compression and checksum calculations.

+

Each block is handled by one worker thread, so maximum overall worker thread throughput is a function of the number of concurrent blocks being processed, the number of worker threads, and the algorithms used. The default value of 75% is chosen to avoid using all CPUs, which can result in latency issues and inconsistent application performance, especially when high compression is enabled.

+

The taskq batch processes are:

+ + + + + + + + + + + + + +

taskq

process name

Notes

Write issue

z_wr_iss[_#]

Can be CPU intensive, runs at lower +priority than other taskqs

+

Other taskqs exist, but most have fixed numbers of instances and +therefore require recompiling the kernel module to adjust.

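The Verification row in the table below suggests counting the threads with ps; one hedged way to do that (thread naming can vary by version):

# count the z_wr_iss write-issue worker threads
ps -eLo comm | grep -c '^z_wr_iss'
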
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_taskq_batch_pct

Notes

Tags

taskq, +ZIO_scheduler

When to change

To tune parallelism in multiprocessor systems

Data Type

int

Units

percent of number of CPUs

Range

1 to 100, fractional number of CPUs are +rounded down

Default

75

Change

Prior to zfs module load

Verification

The number of taskqs for each batch group can +be observed using ps and counting the +threads

Versions Affected

TBD

+
+
+

zvol_inhibit_dev

+

zvol_inhibit_dev controls the creation of volume device nodes upon +pool import.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_inhibit_dev

Notes

Tags

import, volume

When to change

Inhibiting can slightly improve startup time on +systems with a very large number of volumes

Data Type

boolean

Range

0=create volume device nodes, 1=do not create +volume device nodes

Default

0

Change

Dynamic, takes effect per-pool when the pool is +imported

Versions Affected

v0.6.0 and later

+
+
+

zvol_major

+

zvol_major is the default major number for volume devices.

+ + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_major

Notes

Tags

volume

When to change

Do not change

Data Type

uint

Default

230

Change

Dynamic, takes effect per-pool when the pool is +imported or volumes are created

Versions Affected

all

+
+
+

zvol_max_discard_blocks

+

Discard (aka ATA TRIM or SCSI UNMAP) operations done on volumes are processed in batches of zvol_max_discard_blocks blocks. The block size is determined by the volblocksize property of a volume.

+

Some applications, such as mkfs, discard the whole volume at once +using the maximum possible discard size. As a result, many gigabytes of +discard requests are not uncommon. Unfortunately, if a large amount of +data is already allocated in the volume, ZFS can be quite slow to +process discard requests. This is especially true if the volblocksize is +small (eg default=8KB). As a result, very large discard requests can +take a very long time (perhaps minutes under heavy load) to complete. +This can cause a number of problems, most notably if the volume is +accessed remotely (eg via iSCSI), in which case the client has a high +probability of timing out on the request.

+

Limiting zvol_max_discard_blocks can decrease the size of individual discard requests, because it determines the discard_max_bytes and discard_max_hw_bytes reported for the volume’s block device in sysfs. These values are readable by volume device consumers.

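For example, with the default zvol_max_discard_blocks of 16,384 and a volblocksize of 8 KiB, discards are split into chunks of at most 16,384 × 8 KiB = 128 MiB, which is the value exported as discard_max_bytes. A hypothetical zd0 volume device can be checked with:

cat /sys/block/zd0/queue/discard_max_bytes
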
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_max_discard_blocks

Notes

Tags

discard, +volume

When to change

if volume discard activity severely +impacts other workloads

Data Type

ulong

Units

number of blocks of size volblocksize

Range

0 to ULONG_MAX

Default

16,384

Change

Dynamic, takes effect per-pool when the +pool is imported or volumes are created

Verification

Observe the value of /sys/block/VOLUME_INSTANCE/queue/discard_max_bytes

Versions Affected

v0.6.0 and later

+
+
+

zvol_prefetch_bytes

+

When importing a pool with volumes or adding a volume to a pool, zvol_prefetch_bytes are prefetched from the start and end of the volume. Prefetching these regions of the volume is desirable because they are likely to be accessed immediately by blkid(8) or by the kernel scanning for a partition table.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_prefetch_bytes

Notes

Tags

prefetch, volume

When to change

TBD

Data Type

uint

Units

bytes

Range

0 to UINT_MAX

Default

131,072

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zvol_request_sync

+

When zvol_request_sync = 1, I/O requests for a volume are submitted synchronously. This effectively limits the queue depth to 1 for each I/O submitter. When set to 0, requests are handled asynchronously by the “zvol” thread pool.

+

See also zvol_threads

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_request_sync

Notes

Tags

volume

When to change

Testing concurrent volume requests

Data Type

boolean

Range

0=do concurrent (async) volume requests, 1=do +sync volume requests

Default

0

Change

Dynamic

Versions Affected

v0.7.2 and later

+
+
+

zvol_threads

+

zvol_threads controls the maximum number of threads handling concurrent +volume I/O requests.

+

The default of 32 threads behaves similarly to a disk with a 32-entry +command queue. The actual number of threads required can vary widely by +workload and available CPUs. If lock analysis shows high contention in +the zvol taskq threads, then reducing the number of zvol_threads or +workload queue depth can improve overall throughput.

+

See also zvol_request_sync

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_threads

Notes

Tags

volume

When to change

Matching the number of concurrent volume +requests with workload requirements can improve +concurrency

Data Type

uint

Units

threads

Range

1 to UINT_MAX

Default

32

Change

Dynamic, takes effect per-volume when the pool +is imported or volumes are created

Verification

iostat using avgqu-sz or aqu-sz +results

Versions Affected

v0.7.0 and later

+
+
+

zvol_volmode

+

zvol_volmode defines the behaviour of volume block devices when the volmode property is set to default

+

Note: to maintain compatibility with ZFS on BSD, “geom” is synonymous +with “full”

+ + + + + + + + + + + + + + + + + + + + + +

value

volmode

Description

1

full

legacy fully functional behaviour (default)

2

dev

hide partitions on volume block devices

3

none

not exposing volumes outside ZFS

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_volmode

Notes

Tags

volume

When to change

TBD

Data Type

enum

Range

1, 2, or 3

Default

1

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_qat_disable

+

zfs_qat_disable controls the Intel QuickAssist Technology (QAT) +driver providing hardware acceleration for gzip compression. When the +QAT hardware is present and qat driver available, the default behaviour +is to enable QAT.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_qat_disable

Notes

Tags

compression, QAT

When to change

Testing QAT functionality

Data Type

boolean

Range

0=use QAT acceleration if available, 1=do not +use QAT acceleration

Default

0

Change

Dynamic

Versions Affected

v0.7, renamed to zfs_qat_compress_disable in v0.8

+
+
+

zfs_qat_checksum_disable

+

zfs_qat_checksum_disable controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for checksums. When the QAT +hardware is present and qat driver available, the default behaviour is +to enable QAT.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_qat_checksum_disable

Notes

Tags

checksum, QAT

When to change

Testing QAT functionality

Data Type

boolean

Range

0=use QAT acceleration if available, +1=do not use QAT acceleration

Default

0

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_qat_compress_disable

+

zfs_qat_compress_disable controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for gzip compression. When +the QAT hardware is present and qat driver available, the default +behaviour is to enable QAT.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_qat_compress_disable

Notes

Tags

compression, +QAT

When to change

Testing QAT functionality

Data Type

boolean

Range

0=use QAT acceleration if available, +1=do not use QAT acceleration

Default

0

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_qat_encrypt_disable

+

zfs_qat_encrypt_disable controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for encryption. When the +QAT hardware is present and qat driver available, the default behaviour +is to enable QAT.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_qat_encrypt_disable

Notes

Tags

encryption, +QAT

When to change

Testing QAT functionality

Data Type

boolean

Range

0=use QAT acceleration if available, 1=do +not use QAT acceleration

Default

0

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

dbuf_cache_hiwater_pct

+

The dbuf_cache_hiwater_pct and dbuf_cache_lowater_pct define the operating range for the dbuf cache evict thread. The hiwater and lowater are percentages of the dbuf_cache_max_bytes value. When the dbuf cache grows above ((100% + dbuf_cache_hiwater_pct) * dbuf_cache_max_bytes) the dbuf cache thread begins evicting. When the dbuf cache falls below ((100% - dbuf_cache_lowater_pct) * dbuf_cache_max_bytes) the dbuf cache thread stops evicting.

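For example, with the default dbuf_cache_max_bytes of 100 MiB and hiwater and lowater both at 10%, eviction starts once the dbuf cache exceeds 110 MiB and stops once it falls below 90 MiB. The three values can be read back directly:

cat /sys/module/zfs/parameters/dbuf_cache_max_bytes
cat /sys/module/zfs/parameters/dbuf_cache_hiwater_pct
cat /sys/module/zfs/parameters/dbuf_cache_lowater_pct
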
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_hiwater_pct

Notes

Tags

dbuf_cache

When to change

Testing dbuf cache algorithms

Data Type

uint

Units

percent

Range

0 to UINT_MAX

Default

10

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

dbuf_cache_lowater_pct

+

The dbuf_cache_hiwater_pct and dbuf_cache_lowater_pct define the operating range for the dbuf cache evict thread. The hiwater and lowater are percentages of the dbuf_cache_max_bytes value. When the dbuf cache grows above ((100% + dbuf_cache_hiwater_pct) * dbuf_cache_max_bytes) the dbuf cache thread begins evicting. When the dbuf cache falls below ((100% - dbuf_cache_lowater_pct) * dbuf_cache_max_bytes) the dbuf cache thread stops evicting.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_lowater_pct

Notes

Tags

dbuf_cache

When to change

Testing dbuf cache algorithms

Data Type

uint

Units

percent

Range

0 to UINT_MAX

Default

10

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

dbuf_cache_max_bytes

+

The dbuf cache maintains a list of dbufs that are not currently held but +have been recently released. These dbufs are not eligible for ARC +eviction until they are aged out of the dbuf cache. Dbufs are added to +the dbuf cache once the last hold is released. If a dbuf is later +accessed and still exists in the dbuf cache, then it will be removed +from the cache and later re-added to the head of the cache. Dbufs that +are aged out of the cache will be immediately destroyed and become +eligible for ARC eviction.

+

The size of the dbuf cache is set by dbuf_cache_max_bytes. The actual size is dynamically adjusted to the minimum of the current ARC target size (c) >> dbuf_cache_max_shift and the default dbuf_cache_max_bytes.
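For example, with the default dbuf_cache_max_shift of 5, a system whose ARC target size (c) is 2 GiB computes 2 GiB >> 5 = 64 MiB, so the dbuf cache is limited to min(64 MiB, 100 MiB) = 64 MiB; on a larger system where c >> 5 exceeds 100 MiB, the 100 MiB default applies.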

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_max_bytes

Notes

Tags

dbuf_cache

When to change

Testing dbuf cache algorithms

Data Type

ulong

Units

bytes

Range

16,777,216 to ULONG_MAX

Default

104,857,600 (100 MiB)

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

dbuf_cache_max_shift

+

The dbuf cache size is limited to the lesser of dbuf_cache_max_bytes and the current ARC target size (c) >> dbuf_cache_max_shift.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_max_shift

Notes

Tags

dbuf_cache

When to change

Testing dbuf cache algorithms

Data Type

int

Units

shift

Range

1 to 63

Default

5

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

dmu_object_alloc_chunk_shift

+

Each of the concurrent object allocators grabs +2^dmu_object_alloc_chunk_shift dnode slots at a time. The default is +to grab 128 slots, or 4 blocks worth. This default value was +experimentally determined to be the lowest value that eliminates the +measurable effect of lock contention in the DMU object allocation code +path.
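For example, the default shift of 7 yields 2^7 = 128 dnode slots per grab (the 4 blocks worth noted above), while raising it to the maximum of 9 yields 2^9 = 512 slots per grab.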

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dmu_object_alloc_chunk_shift

Notes

Tags

allocation, +DMU

When to change

If the workload creates many files +concurrently on a system with many +CPUs, then increasing +dmu_object_alloc_chunk_shift can +decrease lock contention

Data Type

int

Units

shift

Range

7 to 9

Default

7

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

send_holes_without_birth_time

+

Alias for ignore_hole_birth

+
+
+

zfs_abd_scatter_enabled

+

zfs_abd_scatter_enabled controls the ARC Buffer Data (ABD) +scatter/gather feature.

+

When disabled, the legacy behaviour is selected using linear buffers. +For linear buffers, all the data in the ABD is stored in one contiguous +buffer in memory (from a zio_[data_]buf_* kmem cache).

+

When enabled (default), the data in the ABD is split into equal-sized +chunks (from the abd_chunk_cache kmem_cache), with pointers to the +chunks recorded in an array at the end of the ABD structure. This allows +more efficient memory allocation for buffers, especially when large +recordsizes are used.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_abd_scatter_enabled

Notes

Tags

ABD, memory

When to change

Testing ABD

Data Type

boolean

Range

0=use linear allocation only, 1=allow +scatter/gather

Default

1

Change

Dynamic

Verification

ABD statistics are observable in +/proc/spl/kstat/zfs/abdstats. Slab +allocations are observable in +/proc/slabinfo

Versions Affected

v0.7.0 and later

+
+
+

zfs_abd_scatter_max_order

+

zfs_abd_scatter_max_order sets the maximum order for physical page +allocation when ABD is enabled (see +zfs_abd_scatter_enabled)

+

See also Buddy Memory Allocation in the Linux kernel documentation.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_abd_scatter_max_order

Notes

Tags

ABD, memory

When to change

Testing ABD features

Data Type

int

Units

orders

Range

1 to 10 (upper limit is +hardware-dependent)

Default

10

Change

Dynamic

Verification

ABD statistics are observable in +/proc/spl/kstat/zfs/abdstats

Versions Affected

v0.7.0 and later

+
+
+

zfs_compressed_arc_enabled

+

When compression is enabled for a dataset, later reads of the data can store the blocks in ARC in their on-disk, compressed state. This can increase the effective size of the ARC, as counted in blocks, and thus improve the ARC hit ratio.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_compressed_arc_enabled

Notes

Tags

ABD, +compression

When to change

Testing ARC compression feature

Data Type

boolean

Range

0=compressed ARC disabled (legacy +behaviour), 1=compress ARC data

Default

1

Change

Dynamic

Verification

raw ARC statistics are observable in +/proc/spl/kstat/zfs/arcstats and +ARC hit ratios can be observed using +arcstat

Versions Affected

v0.7.0 and later

+
+
+

zfs_key_max_salt_uses

+

For encrypted datasets, the salt is regenerated every +zfs_key_max_salt_uses blocks. This automatic regeneration reduces +the probability of collisions due to the Birthday problem. When set to +the default (400,000,000) the probability of collision is approximately +1 in 1 trillion.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_key_max_salt_uses

Notes

Tags

encryption

When to change

Testing encryption features

Data Type

ulong

Units

blocks encrypted

Range

1 to ULONG_MAX

Default

400,000,000

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_object_mutex_size

+

zfs_object_mutex_size facilitates resizing the per-dataset znode mutex array for testing deadlocks therein.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_object_mutex_size

Notes

Tags

debug

When to change

Testing znode mutex array deadlocks

Data Type

uint

Units

orders

Range

1 to UINT_MAX

Default

64

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_scan_strict_mem_lim

+

When scrubbing or resilvering, by default, ZFS checks to ensure it is not over the hard memory limit before each txg commit. If finer-grained control is needed, zfs_scan_strict_mem_lim can be set to 1 to enable checking before scanning each block.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_strict_mem_lim

Notes

Tags

memory, +resilver, +scrub

When to change

Do not change

Data Type

boolean

Range

0=normal scan behaviour, 1=check hard +memory limit strictly during scan

Default

0

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_send_queue_length

+

zfs_send_queue_length is the maximum number of bytes allowed in the +zfs send queue.
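For example, if a dataset uses the largest 16 MiB recordsize, the queue should be at least 2 * 16 MiB = 32 MiB. Assuming the parameter is exposed under /sys/module/zfs/parameters, this could be set with:

echo 33554432 > /sys/module/zfs/parameters/zfs_send_queue_length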

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_send_queue_length

Notes

Tags

send

When to change

When using the largest recordsize or +volblocksize (16 MiB), increasing can +improve send efficiency

Data Type

int

Units

bytes

Range

Must be at least twice the maximum +recordsize or volblocksize in use

Default

16,777,216 bytes (16 MiB)

Change

Dynamic

Versions Affected

v0.8.1

+
+
+

zfs_recv_queue_length

+

zfs_recv_queue_length is the maximum number of bytes allowed in the +zfs receive queue.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_recv_queue_length

Notes

Tags

receive

When to change

When using the largest recordsize or +volblocksize (16 MiB), increasing can +improve receive efficiency

Data Type

int

Units

bytes

Range

Must be at least twice the maximum +recordsize or volblocksize in use

Default

16,777,216 bytes (16 MiB)

Change

Dynamic

Versions Affected

v0.8.1

+
+
+

zfs_arc_min_prefetch_lifespan

+

arc_min_prefetch_lifespan is the minimum time for a prefetched block +to remain in ARC before it is eligible for eviction.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_min_prefetch_lifespan

Notes

Tags

ARC

When to change

TBD

Data Type

int

Units

clock ticks

Range

0 = use default value

Default

1 second (as expressed in clock ticks)

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

zfs_scan_ignore_errors

+

zfs_scan_ignore_errors allows errors discovered during scrub or +resilver to be ignored. This can be tuned as a workaround to remove the +dirty time list (DTL) when completing a pool scan. It is intended to be +used during pool repair or recovery to prevent resilvering when the pool +is imported.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_ignore_errors

Notes

Tags

resilver

When to change

See description above

Data Type

boolean

Range

0 = do not ignore errors, 1 = ignore +errors during pool scrub or resilver

Default

0

Change

Dynamic

Versions Affected

v0.8.1

+
+
+

zfs_top_maxinflight

+

zfs_top_maxinflight is used to limit the maximum number of I/Os queued to top-level vdevs during scrub or resilver operations. The actual top-level vdev limit is calculated by multiplying the number of child vdevs by zfs_top_maxinflight. This limit is an additional cap over and above the scan limits.
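As a worked example of the calculation above, a top-level raidz2 vdev built from 8 child disks would be limited to 8 * 32 = 256 queued scrub or resilver I/Os with the default value of 32.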

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_top_maxinflight

Notes

Tags

resilver, scrub, +ZIO_scheduler

When to change

for modern ZFS versions, the ZIO scheduler +limits usually take precedence

Data Type

int

Units

I/O operations

Range

1 to MAX_INT

Default

32

Change

Dynamic

Versions Affected

v0.6.0

+
+
+

zfs_resilver_delay

+

zfs_resilver_delay sets a time-based delay for resilver I/Os. This +delay is in addition to the ZIO scheduler’s treatment of scrub +workloads. See also zfs_scan_idle

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_resilver_delay

Notes

Tags

resilver, +ZIO_scheduler

When to change

increasing can reduce impact of resilver +workload on dynamic workloads

Data Type

int

Units

clock ticks

Range

0 to MAX_INT

Default

2

Change

Dynamic

Versions Affected

v0.6.0

+
+
+

zfs_scrub_delay

+

zfs_scrub_delay sets a time-based delay for scrub I/Os. This delay +is in addition to the ZIO scheduler’s treatment of scrub workloads. See +also zfs_scan_idle

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scrub_delay

Notes

Tags

scrub, +ZIO_scheduler

When to change

increasing can reduce impact of scrub workload +on dynamic workloads

Data Type

int

Units

clock ticks

Range

0 to MAX_INT

Default

4

Change

Dynamic

Versions Affected

v0.6.0

+
+
+

zfs_scan_idle

+

When a non-scan I/O has occurred in the past zfs_scan_idle clock +ticks, then zfs_resilver_delay or +zfs_scrub_delay are enabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_idle

Notes

Tags

resilver, scrub, +ZIO_scheduler

When to change

as part of a resilver/scrub tuning effort

Data Type

int

Units

clock ticks

Range

0 to MAX_INT

Default

50

Change

Dynamic

Versions Affected

v0.6.0

+
+
+

icp_aes_impl

+

By default, ZFS will choose the highest performance, hardware-optimized +implementation of the AES encryption algorithm. The icp_aes_impl +tunable overrides this automatic choice.

+

Note: icp_aes_impl is set in the icp kernel module, not the +zfs kernel module.

+

To observe the available options, run cat /sys/module/icp/parameters/icp_aes_impl. The default option is shown in brackets '[]'.
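A hypothetical session might look like the following; the exact list of implementations varies by hardware and ZFS build, so treat the output line as illustrative only:

cat /sys/module/icp/parameters/icp_aes_impl
cycle [fastest] generic x86_64 aesni
echo aesni > /sys/module/icp/parameters/icp_aes_impl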

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

icp_aes_impl

Notes

Tags

encryption

Kernel module

icp

When to change

debugging ZFS encryption on hardware

Data Type

string

Range

varies by hardware

Default

automatic, depends on the hardware

Change

dynamic

Versions Affected

planned for v2

+
+
+

icp_gcm_impl

+

By default, ZFS will choose the highest performance, hardware-optimized +implementation of the GCM encryption algorithm. The icp_gcm_impl +tunable overrides this automatic choice.

+

Note: icp_gcm_impl is set in the icp kernel module, not the +zfs kernel module.

+

To observe the available options, run cat /sys/module/icp/parameters/icp_gcm_impl. The default option is shown in brackets '[]'.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

icp_gcm_impl

Notes

Tags

encryption

Kernel module

icp

When to change

debugging ZFS encryption on hardware

Data Type

string

Range

varies by hardware

Default

automatic, depends on the hardware

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_abd_scatter_min_size

+

zfs_abd_scatter_min_size changes the ARC buffer data (ABD) +allocator’s threshold for using linear or page-based scatter buffers. +Allocations smaller than zfs_abd_scatter_min_size use linear ABDs.

+

Scatter ABDs use at least one page each, so sub-page allocations waste some space when allocated as scatter allocations. For example, a 2 KiB scatter allocation wastes half of each page. Using linear ABDs for small allocations results in slabs containing many allocations. This can improve memory efficiency, at the expense of more work for ARC evictions attempting to free pages, because all the buffers on one slab need to be freed in order to free the slab and its underlying pages.

+

Typically, 512B and 1KB kmem caches have 16 buffers per slab, so it’s +possible for them to actually waste more memory than scatter +allocations:

+
  • one page per buf = wasting 3/4 or 7/8
  • one buf per slab = wasting 15/16
+

Spill blocks are typically 512B and are heavily used on systems running +selinux with the default dnode size and the xattr=sa property set.

+

By default, linear allocations are used for 512B and 1 KiB requests, and scatter allocations are used for larger (>= 1.5 KiB) allocation requests.
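As an illustration (assuming the parameter is exposed under /sys/module/zfs/parameters), raising the threshold so that all allocations up to 4 KiB use linear ABDs could be done with:

echo 4096 > /sys/module/zfs/parameters/zfs_abd_scatter_min_size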

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_abd_scatter_min_size

Notes

Tags

ARC

When to change

debugging memory allocation, especially +for large pages

Data Type

int

Units

bytes

Range

0 to MAX_INT

Default

1536 (512B and 1KB allocations will be +linear)

Change

Dynamic

Versions Affected

planned for v2

+
+ +
+

spa_load_verify_shift

+

spa_load_verify_shift sets the fraction of ARC that can be used by +inflight I/Os when verifying the pool during import. This value is a +“shift” representing the fraction of ARC target size +(grep -w c /proc/spl/kstat/zfs/arcstats). The ARC target size is +shifted to the right. Thus a value of ‘2’ results in the fraction = 1/4, +while a value of ‘4’ results in the fraction = 1/8.

+

For large memory machines, pool import can consume large amounts of ARC: much larger than the value of maxinflight. This can result in spa_load_verify_maxinflight having a value of 0, causing the system to hang. Setting spa_load_verify_shift can reduce this limit and allow importing without hanging.
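As a worked example of the shift arithmetic, on a machine whose ARC target size is 16 GiB (grep -w c /proc/spl/kstat/zfs/arcstats), the default shift of 4 allows 16 GiB >> 4 = 1 GiB of in-flight verification I/O; raising the shift to 7 before import reduces that to 128 MiB.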

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_load_verify_shift

Notes

Tags

import, ARC, +SPA

When to change

troubleshooting pool import on large memory +machines

Data Type

int

Units

shift

Range

1 to MAX_INT

Default

4

Change

prior to importing a pool

Versions Affected

planned for v2

+
+
+

spa_load_print_vdev_tree

+

spa_load_print_vdev_tree enables printing of the attempted pool import's vdev tree to the ZFS debug message log /proc/spl/kstat/zfs/dbgmsg. Both the provided vdev tree and the MOS vdev tree are printed, which can be useful for debugging problems with the zpool cachefile.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_load_print_vdev_tree

Notes

Tags

import, SPA

When to change

troubleshooting pool import failures

Data Type

boolean

Range

0 = do not print pool configuration in +logs, 1 = print pool configuration in +logs

Default

0

Change

prior to pool import

Versions Affected

planned for v2

+
+
+

zfs_max_missing_tvds

+

When importing a pool in readonly mode (zpool import -o readonly=on ...), up to zfs_max_missing_tvds top-level vdevs can be missing and the import is still allowed to proceed.

+

Note: This is strictly intended for advanced pool recovery cases since +missing data is almost inevitable. Pools with missing devices can only +be imported read-only for safety reasons, and the pool’s failmode +property is automatically set to continue

+

The expected use case is to recover pool data immediately after +accidentally adding a non-protected vdev to a protected pool.

+
  • With 1 missing top-level vdev, ZFS should be able to import the pool and mount all datasets. User data that was not modified after the missing device was added should be recoverable. Thus snapshots created prior to the addition of that device should be completely intact.
  • With 2 missing top-level vdevs, some datasets may fail to mount since there are dataset statistics that are stored as regular metadata. Some data might be recoverable if those vdevs were added recently.
  • With 3 or more missing top-level vdevs, the pool is severely damaged and MOS entries may be missing entirely. Chances of data recovery are very low. Note that there are also risks of performing an inadvertent rewind because we might be missing all the vdevs with the latest uberblocks.
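A hypothetical recovery sequence, assuming the parameter is exposed under /sys/module/zfs/parameters and the damaged pool is named tank, might be:

echo 1 > /sys/module/zfs/parameters/zfs_max_missing_tvds
zpool import -o readonly=on tank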
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_max_missing_tvds

Notes

Tags

import

When to change

troubleshooting pools with missing devices

Data Type

int

Units

missing top-level vdevs

Range

0 to MAX_INT

Default

0

Change

prior to pool import

Versions Affected

planned for v2

+
+
+

dbuf_metadata_cache_shift

+

dbuf_metadata_cache_shift sets the size of the dbuf metadata cache +as a fraction of ARC target size. This is an alternate method for +setting dbuf metadata cache size than +dbuf_metadata_cache_max_bytes.

+

dbuf_metadata_cache_max_bytes +overrides dbuf_metadata_cache_shift

+

This value is a “shift” representing the fraction of ARC target size +(grep -w c /proc/spl/kstat/zfs/arcstats). The ARC target size is +shifted to the right. Thus a value of ‘2’ results in the fraction = 1/4, +while a value of ‘6’ results in the fraction = 1/64.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_metadata_cache_shift

Notes

Tags

ARC, +dbuf_cache

When to change

Data Type

int

Units

shift

Range

practical range is (dbuf_cache_shift + 1) to MAX_INT

Default

6

Change

Dynamic

Versions Affected

planned for v2

+
+
+

dbuf_metadata_cache_max_bytes

+

dbuf_metadata_cache_max_bytes sets the size of the dbuf metadata +cache as a number of bytes. This is an alternate method for setting dbuf +metadata cache size than +dbuf_metadata_cache_shift

+

dbuf_metadata_cache_max_bytes +overrides dbuf_metadata_cache_shift

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_metadata_cache_max_bytes

Notes

Tags

dbuf_cache

When to change

Data Type

int

Units

bytes

Range

0 = use dbuf_metadata_cache_shift to ARC c_max

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

dbuf_cache_shift

+

dbuf_cache_shift sets the size of the dbuf cache as a fraction of +ARC target size. This is an alternate method for setting dbuf cache size +than dbuf_cache_max_bytes.

+

dbuf_cache_max_bytes overrides +dbuf_cache_shift

+

This value is a “shift” representing the fraction of ARC target size +(grep -w c /proc/spl/kstat/zfs/arcstats). The ARC target size is +shifted to the right. Thus a value of ‘2’ results in the fraction = 1/4, +while a value of ‘5’ results in the fraction = 1/32.

+

Performance tuning of dbuf cache can be monitored using:

+
  • dbufstat command
  • node_exporter ZFS module for prometheus environments
  • telegraf ZFS plugin for general-purpose metric collection
  • /proc/spl/kstat/zfs/dbufstats kstat
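For example, with the default shift of 5 a system whose ARC target size is 8 GiB gets an 8 GiB >> 5 = 256 MiB dbuf cache; statistics for that cache can then be read from the kstat listed above:

cat /proc/spl/kstat/zfs/dbufstats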
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_shift

Notes

Tags

ARC, dbuf_cache

When to change

to improve performance of read-intensive +channel programs

Data Type

int

Units

shift

Range

5 to MAX_INT

Default

5

Change

Dynamic

Versions Affected

planned for v2

+
+
+

dbuf_cache_max_bytes

+

dbuf_cache_max_bytes sets the size of the dbuf cache in bytes. This +is an alternate method for setting dbuf cache size than +dbuf_cache_shift

+

Performance tuning of dbuf cache can be monitored using:

+
  • dbufstat command
  • node_exporter ZFS module for prometheus environments
  • telegraf ZFS plugin for general-purpose metric collection
  • /proc/spl/kstat/zfs/dbufstats kstat
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_max_bytes

Notes

Tags

ARC, dbuf_cache

When to change

Data Type

int

Units

bytes

Range

0 = use +dbuf_cache_shift to +ARC c_max

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

metaslab_force_ganging

+

When testing allocation code, metaslab_force_ganging forces blocks +above the specified size to be ganged.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_force_ganging

Notes

Tags

allocation

When to change

for development testing purposes only

Data Type

ulong

Units

bytes

Range

SPA_MINBLOCKSIZE to (SPA_MAXBLOCKSIZE + 1)

Default

SPA_MAXBLOCKSIZE + 1 (16,777,217 bytes)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_default_ms_count

+

When adding a top-level vdev, zfs_vdev_default_ms_count is the +target number of metaslabs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_default_ms_count

Notes

Tags

allocation

When to change

for development testing purposes only

Data Type

int

Range

16 to MAX_INT

Default

200

Change

prior to creating a pool or adding a +top-level vdev

Versions Affected

planned for v2

+
+
+

vdev_removal_max_span

+

During top-level vdev removal, chunks of data are copied from the vdev +which may include free space in order to trade bandwidth for IOPS. +vdev_removal_max_span sets the maximum span of free space included +as unnecessary data in a chunk of copied data.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

vdev_removal_max_span

Notes

Tags

vdev_removal

When to change

TBD

Data Type

int

Units

bytes

Range

0 to MAX_INT

Default

32,768 (32 KiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_removal_ignore_errors

+

When removing a device, zfs_removal_ignore_errors controls the process for handling hard I/O errors. When set, if a device encounters a hard I/O error during the removal process, the removal will not be cancelled. This can result in a normally recoverable block becoming permanently damaged and is not recommended. This should only be used as a last resort when the pool cannot be returned to a healthy state prior to removing the device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_removal_ignore_errors

Notes

Tags

vdev_removal

When to change

See description for caveat

Data Type

boolean

Range

during device removal: 0 = hard errors +are not ignored, 1 = hard errors are +ignored

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_removal_suspend_progress

+

zfs_removal_suspend_progress is used during automated testing of the ZFS code to increase test coverage.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_removal_suspend_progress

Notes

Tags

vdev_removal

When to change

do not change

Data Type

boolean

Range

0 = do not suspend during vdev removal

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_condense_indirect_commit_entry_delay_ms

+

During vdev removal, the vdev indirection layer sleeps for +zfs_condense_indirect_commit_entry_delay_ms milliseconds during +mapping generation. This parameter is used during automated testing of +the ZFS code to improve test coverage.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_condense_indirect_commit_entry_delay_ms

Notes

Tags

vdev_removal

When to change

do not change

Data Type

int

Units

milliseconds

Range

0 to MAX_INT

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_condense_indirect_vdevs_enable

+

During vdev removal, condensing process is an attempt to save memory by +removing obsolete mappings. zfs_condense_indirect_vdevs_enable +enables condensing indirect vdev mappings. When set, ZFS attempts to +condense indirect vdev mappings if the mapping uses more than +zfs_condense_min_mapping_bytes +bytes of memory and if the obsolete space map object uses more than +zfs_condense_max_obsolete_bytes +bytes on disk.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_condense_indirect_vdevs_enable

Notes

Tags

vdev_removal

When to change

TBD

Data Type

boolean

Range

0 = do not save memory, 1 = save +memory by condensing obsolete +mapping after vdev removal

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_condense_max_obsolete_bytes

+

After vdev removal, zfs_condense_max_obsolete_bytes sets the limit +for beginning the condensing process. Condensing begins if the obsolete +space map takes up more than zfs_condense_max_obsolete_bytes of +space on disk (logically). The default of 1 GiB is small enough relative +to a typical pool that the space consumed by the obsolete space map is +minimal.

+

See also +zfs_condense_indirect_vdevs_enable

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_condense_max_obsolete_bytes

Notes

Tags

vdev_removal

When to change

do not change

Data Type

ulong

Units

bytes

Range

0 to MAX_ULONG

Default

1,073,741,824 (1 GiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_condense_min_mapping_bytes

+

After vdev removal, zfs_condense_min_mapping_bytes is the lower +limit for determining when to condense the in-memory obsolete space map. +The condensing process will not continue unless a minimum of +zfs_condense_min_mapping_bytes of memory can be freed.

+

See also +zfs_condense_indirect_vdevs_enable

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_condense_min_mapping_bytes

Notes

Tags

vdev_removal

When to change

do not change

Data Type

ulong

Units

bytes

Range

0 to MAX_ULONG

Default

128 KiB

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_initializing_max_active

+

zfs_vdev_initializing_max_active sets the maximum initializing I/Os +active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_initializing_max_active

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_initializing_min_active

+

zfs_vdev_initializing_min_active sets the minimum initializing I/Os +active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_initializing_min_active

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_initializing_max_active

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_removal_max_active

+

zfs_vdev_removal_max_active sets the maximum top-level vdev removal +I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_removal_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

2

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_removal_min_active

+

zfs_vdev_removal_min_active sets the minimum top-level vdev removal +I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_removal_min_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_removal_max_active

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_trim_max_active

+

zfs_vdev_trim_max_active sets the maximum trim I/Os active to each +device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_trim_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

2

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_trim_min_active

+

zfs_vdev_trim_min_active sets the minimum trim I/Os active to each +device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_trim_min_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_trim_max_active

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_initialize_value

+

When initializing a vdev, ZFS writes patterns of +zfs_initialize_value bytes to the device.
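A hypothetical example, assuming the parameter is exposed under /sys/module/zfs/parameters and the pool is named tank, writes zeros instead of the default pattern and then starts initialization:

echo 0 > /sys/module/zfs/parameters/zfs_initialize_value
zpool initialize tank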

+ + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_initialize_value

Notes

Tags

vdev_initialize

When to change

when debugging initialization code

Data Type

uint32 or uint64

Default

0xdeadbeef for 32-bit systems, +0xdeadbeefdeadbeee for 64-bit systems

Change

prior to running zpool initialize

Versions Affected

planned for v2

+
+
+

zfs_lua_max_instrlimit

+

zfs_lua_max_instrlimit limits the maximum number of Lua instructions a ZFS channel program can execute, which bounds how long it can run.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_lua_max_instrlimit

Notes

Tags

channel_programs

When to change

to enforce a CPU usage limit on ZFS +channel programs

Data Type

ulong

Units

LUA instructions

Range

0 to MAX_ULONG

Default

100,000,000

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_lua_max_memlimit

+

zfs_lua_max_memlimit is the maximum memory limit for a ZFS channel program.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_lua_max_memlimit

Notes

Tags

channel_programs

When to change

Data Type

ulong

Units

bytes

Range

0 to MAX_ULONG

Default

104,857,600 (100 MiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_max_dataset_nesting

+

zfs_max_dataset_nesting limits the depth of nested datasets. Deeply +nested datasets can overflow the stack. The maximum stack depth depends +on kernel compilation options, so it is impractical to predict the +possible limits. For kernels compiled with small stack sizes, +zfs_max_dataset_nesting may require changes.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_max_dataset_nesting

Notes

Tags

dataset

When to change

can be tuned temporarily to fix existing +datasets that exceed the predefined limit

Data Type

int

Units

datasets

Range

0 to MAX_INT

Default

50

Change

Dynamic, though once on-disk the value +for the pool is set

Versions Affected

planned for v2

+
+
+

zfs_ddt_data_is_special

+

zfs_ddt_data_is_special enables the deduplication table (DDT) to +reside on a special top-level vdev.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_ddt_data_is_special

Notes

Tags

dedup, +special_vdev

When to change

when using a special top-level vdev and +no dedup top-level vdev and it is desired +to store the DDT in the main pool +top-level vdevs

Data Type

boolean

Range

0=do not use special vdevs to store DDT, +1=store DDT in special vdevs

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_user_indirect_is_special

+

If special vdevs are in use, zfs_user_indirect_is_special enables +user data indirect blocks (a form of metadata) to be written to the +special vdevs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_user_indirect_is_special

Notes

Tags

special_vdev

When to change

to force user data indirect blocks +to remain in the main pool top-level +vdevs

Data Type

boolean

Range

0=do not write user indirect blocks +to a special vdev, 1=write user +indirect blocks to a special vdev

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_reconstruct_indirect_combinations_max

+

After device removal, if an indirect split block contains more than +zfs_reconstruct_indirect_combinations_max many possible unique +combinations when being reconstructed, it can be considered too +computationally expensive to check them all. Instead, at most +zfs_reconstruct_indirect_combinations_max randomly-selected +combinations are attempted each time the block is accessed. This allows +all segment copies to participate fairly in the reconstruction when all +combinations cannot be checked and prevents repeated use of one bad +copy.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_reconstruct_indirect_combinations_max

Notes

Tags

vdev_removal

When to change

TBD

Data Type

int

Units

attempts

Range

0=do not limit attempts, 1 to +MAX_INT = limit for attempts

Default

4096

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_send_unmodified_spill_blocks

+

zfs_send_unmodified_spill_blocks enables sending of unmodified spill +blocks in the send stream. Under certain circumstances, previous +versions of ZFS could incorrectly remove the spill block from an +existing object. Including unmodified copies of the spill blocks creates +a backwards compatible stream which will recreate a spill block if it +was incorrectly removed.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_send_unmodified_spill_blocks

Notes

Tags

send

When to change

TBD

Data Type

boolean

Range

0=do not send unmodified spill +blocks, 1=send unmodified spill +blocks

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_spa_discard_memory_limit

+

zfs_spa_discard_memory_limit sets the limit for maximum memory used +for prefetching a pool’s checkpoint space map on each vdev while +discarding a pool checkpoint.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_spa_discard_memory_limit

Notes

Tags

checkpoint

When to change

TBD

Data Type

int

Units

bytes

Range

0 to MAX_INT

Default

16,777,216 (16 MiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_special_class_metadata_reserve_pct

+

zfs_special_class_metadata_reserve_pct sets a threshold for space in special vdevs to be reserved exclusively for metadata. This prevents small blocks or the dedup table from completely consuming a special vdev.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_special_class_metadata_reserve_pct

Notes

Tags

special_vdev

When to change

TBD

Data Type

int

Units

percent

Range

0 to 100

Default

25

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_trim_extent_bytes_max

+

zfs_trim_extent_bytes_max sets the maximum size of a trim (aka +discard, scsi unmap) command. Ranges larger than +zfs_trim_extent_bytes_max are split in to chunks no larger than +zfs_trim_extent_bytes_max bytes prior to being issued to the device. +Use zpool iostat -w to observe the latency of trim commands.
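For example, the per-vdev trim latency histograms mentioned above can be inspected with (tank being a hypothetical pool name):

zpool iostat -w tank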

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_trim_extent_bytes_max

Notes

Tags

trim

When to change

if the device can efficiently handle +larger trim requests

Data Type

uint

Units

bytes

Range

zfs_trim_extent_bytes_min to MAX_UINT

Default

134,217,728 (128 MiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_trim_extent_bytes_min

+

zfs_trim_extent_bytes_min sets the minimum size of trim (aka +discard, scsi unmap) commands. Trim ranges smaller than +zfs_trim_extent_bytes_min are skipped unless they’re part of a +larger range which was broken in to chunks. Some devices have +performance degradation during trim operations, so using a larger +zfs_trim_extent_bytes_min can reduce the total amount of space +trimmed. Use zpool iostat -w to observe the latency of trim +commands.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_trim_extent_bytes_min

Notes

Tags

trim

When to change

when trim is in use and device +performance suffers from trimming small +allocations

Data Type

uint

Units

bytes

Range

0=trim all unallocated space, otherwise minimum physical block size to MAX_UINT

Default

32,768 (32 KiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_trim_metaslab_skip

+
+
zfs_trim_metaslab_skip enables uninitialized metaslabs to be +skipped during the trim (aka discard, scsi unmap) process. +zfs_trim_metaslab_skip can be useful for pools constructed from +large thinly-provisioned devices where trim operations perform slowly.
+
As a pool ages an increasing fraction of the pool’s metaslabs are +initialized, progressively degrading the usefulness of this option. +This setting is stored when starting a manual trim and persists for +the duration of the requested trim. Use zpool iostat -w to observe +the latency of trim commands.
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_trim_metaslab_skip

Notes

Tags

trim

When to change

Data Type

boolean

Range

0=do not skip uninitialized metaslabs +during trim, 1=skip uninitialized +metaslabs during trim

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_trim_queue_limit

+

zfs_trim_queue_limit sets the maximum queue depth for leaf vdevs. See also zfs_vdev_trim_max_active and zfs_trim_extent_bytes_max. Use zpool iostat -q to observe trim queue depth.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_trim_queue_limit

Notes

Tags

trim

When to change

to restrict the number of trim commands in the queue

Data Type

uint

Units

I/O operations

Range

1 to MAX_UINT

Default

10

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_trim_txg_batch

+

zfs_trim_txg_batch sets the number of transaction groups worth of +frees which should be aggregated before trim (aka discard, scsi unmap) +commands are issued to a device. This setting represents a trade-off +between issuing larger, more efficient trim commands and the delay +before the recently trimmed space is available for use by the device.

+

Increasing this value will allow frees to be aggregated for a longer time. This will result in larger trim operations and potentially increased memory usage. Decreasing this value will have the opposite effect. The default value of 32 was empirically determined to be a reasonable compromise.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_trim_txg_batch

Notes

Tags

trim

When to change

TBD

Data Type

uint

Units

txgs of frees

Range

1 to MAX_UINT

Default

32

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_aggregate_trim

+

zfs_vdev_aggregate_trim allows trim I/Os to be aggregated. This is normally not helpful because the extents to be trimmed will have already been aggregated by the metaslab.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_aggregate_trim

Notes

Tags

trim, vdev, +ZIO_scheduler

When to change

when debugging trim code or trim +performance issues

Data Type

boolean

Range

0=do not attempt to aggregate trim +commands, 1=attempt to aggregate trim +commands

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_aggregation_limit_non_rotating

+

zfs_vdev_aggregation_limit_non_rotating is the equivalent of +zfs_vdev_aggregation_limit for devices +which represent themselves as non-rotating to the Linux blkdev +interfaces. Such devices have a value of 0 in +/sys/block/DEVICE/queue/rotational and are expected to be SSDs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_aggregation_limit_non_rotating

Notes

Tags

vdev, ZIO_scheduler

When to change

see +zfs_vdev_aggregation_limit

Data Type

int

Units

bytes

Range

0 to MAX_INT

Default

131,072 bytes (128 KiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zil_nocacheflush

+

ZFS uses barriers (volatile cache flush commands) to ensure data is committed to permanent media by devices. This ensures consistent on-media state for devices where caches are volatile (e.g. HDDs).

+

zil_nocacheflush disables the cache flush commands that are normally +sent to devices by the ZIL after a log write has completed.

+

The difference between zil_nocacheflush and +zfs_nocacheflush is zil_nocacheflush applies +to ZIL writes while zfs_nocacheflush disables +barrier writes to the pool devices at the end of transaction group syncs.

+

WARNING: setting this can cause ZIL corruption on power loss if the +device has a volatile write cache.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zil_nocacheflush

Notes

Tags

disks, ZIL

When to change

If the storage device has nonvolatile cache, +then disabling cache flush can save the cost of +occasional cache flush commands

Data Type

boolean

Range

0=send cache flush commands, 1=do not send +cache flush commands

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zio_deadman_log_all

+

zio_deadman_log_all enables debugging messages for all ZFS I/Os, +rather than only for leaf ZFS I/Os for a vdev. This is meant to be used +by developers to gain diagnostic information for hang conditions which +don’t involve a mutex or other locking primitive. Typically these are +conditions where a thread in the zio pipeline is looping indefinitely.

+

See also zfs_dbgmsg_enable

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_deadman_log_all

Notes

Tags

debug

When to change

when debugging ZFS I/O pipeline

Data Type

boolean

Range

0=do not log all deadman events, 1=log all +deadman events

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zio_decompress_fail_fraction

+

If non-zero, zio_decompress_fail_fraction represents the denominator +of the probability that ZFS should induce a decompression failure. For +instance, for a 5% decompression failure rate, this value should be set +to 20.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_decompress_fail_fraction

Notes

Tags

debug

When to change

when debugging ZFS internal +compressed buffer code

Data Type

ulong

Units

probability of induced decompression +failure is +1/zio_decompress_fail_fraction

Range

0 = do not induce failures, or 1 to +MAX_ULONG

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zio_slow_io_ms

+

An I/O operation taking more than zio_slow_io_ms milliseconds to +complete is marked as a slow I/O. Slow I/O counters can be observed with +zpool status -s. Each slow I/O causes a delay zevent, observable +using zpool events. See also zfs-events(5).
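For example, slow I/O counters and the corresponding delay zevents for a hypothetical pool named tank can be inspected with:

zpool status -s tank
zpool events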

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_slow_io_ms

Notes

Tags

vdev, zed

When to change

when debugging slow devices and the default +value is inappropriate

Data Type

int

Units

milliseconds

Range

0 to MAX_INT

Default

30,000 (30 seconds)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

vdev_validate_skip

+

vdev_validate_skip disables label validation steps during pool +import. Changing is not recommended unless you know what you are doing +and are recovering a damaged label.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

vdev_validate_skip

Notes

Tags

vdev

When to change

do not change

Data Type

boolean

Range

0=validate labels during pool import, 1=do not +validate vdev labels during pool import

Default

0

Change

prior to pool import

Versions Affected

planned for v2

+
+
+

zfs_async_block_max_blocks

+

zfs_async_block_max_blocks limits the number of blocks freed in a +single transaction group commit. During deletes of large objects, such +as snapshots, the number of freed blocks can cause the DMU to extend txg +sync times well beyond zfs_txg_timeout. +zfs_async_block_max_blocks is used to limit these effects.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_async_block_max_blocks

Notes

Tags

delete, DMU

When to change

TBD

Data Type

ulong

Units

blocks

Range

1 to MAX_ULONG

Default

MAX_ULONG (do not limit)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_checksum_events_per_second

+

zfs_checksum_events_per_second is a rate limit for checksum events. +Note that this should not be set below the zed thresholds (currently +10 checksums over 10 sec) or else zed may not trigger any action.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_checksum_events_per_second

Notes

Tags

vdev

When to change

TBD

Data Type

uint

Units

checksum events

Range

zed threshold to MAX_UINT

Default

20

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_disable_ivset_guid_check

+

zfs_disable_ivset_guid_check disables requirement for IVset guids to +be present and match when doing a raw receive of encrypted datasets. +Intended for users whose pools were created with ZFS on Linux +pre-release versions and now have compatibility issues.

+

For a ZFS raw receive, from a send stream created by zfs send --raw, +the crypt_keydata nvlist includes a to_ivset_guid to be set on the new +snapshot. This value will override the value generated by the snapshot +code. However, this value may not be present, because older +implementations of the raw send code did not include this value. When +zfs_disable_ivset_guid_check is enabled, the receive proceeds and a +newly-generated value is used.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_disable_ivset_guid_check

Notes

Tags

receive

When to change

debugging pre-release ZFS raw sends

Data Type

boolean

Range

0=check IVset guid, 1=do not check +IVset guid

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_obsolete_min_time_ms

+

zfs_obsolete_min_time_ms is similar to +zfs_free_min_time_ms and used for cleanup of +old indirection records for vdevs removed using the zpool remove +command.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_obsolete_min_time_ms

Notes

Tags

delete, remove

When to change

TBD

Data Type

int

Units

milliseconds

Range

0 to MAX_INT

Default

500

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_override_estimate_recordsize

+

zfs_override_estimate_recordsize overrides the default logic for +estimating block sizes when doing a zfs send. The default heuristic is +that the average block size will be the current recordsize.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_override_estimate_recordsize

Notes

Tags

send

When to change

if most data in your dataset is +not of the current recordsize +and you require accurate zfs +send size estimates

Data Type

ulong

Units

bytes

Range

0=do not override, 1 to +MAX_ULONG

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_remove_max_segment

+

zfs_remove_max_segment sets the largest contiguous segment that ZFS +attempts to allocate when removing a vdev. This can be no larger than +16MB. If there is a performance problem with attempting to allocate +large blocks, consider decreasing this. The value is rounded up to a +power-of-2.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_remove_max_segment

Notes

Tags

remove

When to change

after removing a top-level vdev, consider +decreasing if there is a performance +degradation when attempting to allocate +large blocks

Data Type

int

Units

bytes

Range

maximum of the physical block size of all +vdevs in the pool to 16,777,216 bytes (16 +MiB)

Default

16,777,216 bytes (16 MiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_resilver_disable_defer

+

zfs_resilver_disable_defer disables the resilver_defer pool +feature. The resilver_defer feature allows ZFS to postpone new +resilvers if an existing resilver is in progress.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_resilver_disable_defer

Notes

Tags

resilver

When to change

if resilver postponement is not +desired due to overall resilver time +constraints

Data Type

boolean

Range

0=allow resilver_defer to postpone +new resilver operations, 1=immediately +restart resilver when needed

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_scan_suspend_progress

+

zfs_scan_suspend_progress causes a scrub or resilver scan to freeze +without actually pausing.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_suspend_progress

Notes

Tags

resilver, scrub

When to change

testing or debugging scan code

Data Type

boolean

Range

0=do not freeze scans, 1=freeze scans

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_scrub_min_time_ms

+

Scrubs are processed by the sync thread. While scrubbing, at least zfs_scrub_min_time_ms is spent working on a scrub between txg syncs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scrub_min_time_ms

Notes

Tags

scrub

When to change

Data Type

int

Units

milliseconds

Range

1 to (zfs_txg_timeout - 1)

Default

1,000

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_slow_io_events_per_second

+

zfs_slow_io_events_per_second is a rate limit for slow I/O events. Note that this should not be set below the zed thresholds (currently 10 events over 10 sec) or else zed may not trigger any action.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_slow_io_events_per_second

Notes

Tags

vdev

When to change

TBD

Data Type

uint

Units

slow I/O events

Range

zed threshold to MAX_UINT

Default

20

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_min_ms_count

+

zfs_vdev_min_ms_count is the minimum number of metaslabs to create +in a top-level vdev.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_min_ms_count

Notes

Tags

metaslab, vdev

When to change

TBD

Data Type

int

Units

metaslabs

Range

16 to zfs_vdev_ms_count_limit

Default

16

Change

prior to creating a pool or adding a +top-level vdev

Versions Affected

planned for v2

+
+
+

zfs_vdev_ms_count_limit

+

zfs_vdev_ms_count_limit is the practical upper limit for the number +of metaslabs per top-level vdev.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_ms_count_limit

Notes

Tags

metaslab, +vdev

When to change

TBD

Data Type

int

Units

metaslabs

Range

zfs_vdev_min_ms_count to 131,072

Default

131,072

Change

prior to creating a pool or adding a +top-level vdev

Versions Affected

planned for v2

+
+
+

spl_hostid

+
+
spl_hostid is a unique system id number. It originated in Sun’s +products where most systems had a unique id assigned at the factory. +This assignment does not exist in modern hardware.
+
In ZFS, the hostid is stored in the vdev label and can be used to determine if another system had imported the pool. When set, spl_hostid can be used to uniquely identify a system. By default this value is set to zero, which indicates the hostid is disabled. It can be explicitly enabled by placing a unique non-zero value in the file shown in spl_hostid_path
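One common approach (a sketch; the zgenhostid helper ships with recent ZFS releases) is to generate a persistent hostid file rather than setting the module parameter directly, then verify the value in use:

zgenhostid
hostid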
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_hostid

Notes

Tags

hostid, MMP

Kernel module

spl

When to change

to uniquely identify a system when vdevs can be +shared across multiple systems

Data Type

ulong

Range

0=ignore hostid, 1 to 4,294,967,295 (32-bits or +0xffffffff)

Default

0

Change

prior to importing pool

Versions Affected

v0.6.1

+
+
+

spl_hostid_path

+

spl_hostid_path is the path name for a file that can contain a +unique hostid. For testing purposes, spl_hostid_path can be +overridden by the ZFS_HOSTID environment variable.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_hostid_path

Notes

Tags

hostid, MMP

Kernel module

spl

When to change

when creating a new ZFS distribution where the +default value is inappropriate

Data Type

string

Default

“/etc/hostid”

Change

read-only, can only be changed prior to spl +module load

Versions Affected

v0.6.1

+
+
+

spl_kmem_alloc_max

+

Large kmem_alloc() allocations fail if they exceed KMALLOC_MAX_SIZE, as determined by the kernel source. Allocations which are marginally smaller than this limit may succeed but should still be avoided due to the expense of locating a contiguous range of free pages. Therefore, a maximum kmem size with a reasonable safety margin of 4x is set. kmem_alloc() allocations larger than this maximum will quickly fail. vmem_alloc() allocations less than or equal to this value will use kmalloc(), but shift to vmalloc() when exceeding this value.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_alloc_max

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

uint

Units

bytes

Range

TBD

Default

KMALLOC_MAX_SIZE / 4

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_alloc_warn

+

As a general rule kmem_alloc() allocations should be small, preferably just a few pages, since they must be physically contiguous. Therefore, a rate-limited warning is printed to the console for any kmem_alloc() which exceeds the threshold spl_kmem_alloc_warn.

+

The default warning threshold is set to eight pages but capped at 32K to accommodate systems using large pages. This value was selected to be small enough to ensure the largest allocations are quickly noticed and fixed, but large enough to avoid logging warnings when an allocation is larger than optimal but not a serious concern. Since this value is tunable, developers are encouraged to set it lower when testing so any new largish allocations are quickly caught. These warnings may be disabled by setting the threshold to zero.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_alloc_warn

Notes

Tags

memory

Kernel module

spl

When to change

developers are encouraged to lower this value when testing so any new, large allocations are quickly caught

Data Type

uint

Units

bytes

Range

0=disable the warnings,

Default

32,768 (32 KiB)

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_expire

+

Cache expiration is part of the default illumos cache behavior. The idea is that objects in magazines which have not been recently accessed should be returned to the slabs periodically. This is known as cache aging and, when enabled, objects will typically be returned after 15 seconds.

+

On the other hand Linux slabs are designed to never move objects back to +the slabs unless there is memory pressure. This is possible because +under Linux the cache will be notified when memory is low and objects +can be released.

+

By default only the Linux method is enabled. It has been shown to improve responsiveness on low memory systems and not negatively impact the performance of systems with more memory. This policy may be changed by setting the spl_kmem_cache_expire bit mask as follows; both policies may be enabled concurrently.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_expire

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

bitmask

Range

0x01 - Aging (illumos), 0x02 - Low memory (Linux)

Default

0x02

Change

Dynamic

Versions Affected

v0.6.1 to v0.8.x

+
+
+

spl_kmem_cache_kmem_limit

+

Depending on the size of a memory cache object it may be backed by +kmalloc() or vmalloc() memory. This is because the size of the +required allocation greatly impacts the best way to allocate the memory.

+

When objects are small and only a small number of memory pages need to +be allocated, ideally just one, then kmalloc() is very efficient. +However, allocating multiple pages with kmalloc() gets increasingly +expensive because the pages must be physically contiguous.

+

For this reason we shift to vmalloc() for slabs of large objects, which removes the need for contiguous pages. vmalloc() cannot be used in all cases because there is significant locking overhead involved. This function takes a single global lock over the entire virtual address range which serializes all allocations. Using slightly different allocation functions for small and large objects allows us to handle a wide range of object sizes.

+

The spl_kmem_cache_kmem_limit value is used to determine this cutoff +size. One quarter of the kernel’s compiled PAGE_SIZE is used as the +default value because +spl_kmem_cache_obj_per_slab defaults +to 16. With these default values, at most four contiguous pages are +allocated.
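As a worked example with the values stated above: on a kernel with a 4 KiB PAGE_SIZE the cutoff is 4096 / 4 = 1024 bytes per object, and with 16 objects per slab a kmalloc()-backed slab is at most 16 * 1024 bytes = 16 KiB, i.e. four contiguous pages.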

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_kmem_limit

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

uint

Units

pages

Range

TBD

Default

PAGE_SIZE / 4

Change

Dynamic

Versions Affected

v0.7.0 to v0.8.x

+
+
+

spl_kmem_cache_max_size

+

spl_kmem_cache_max_size is the maximum size of a kmem cache slab in MiB. This effectively limits the maximum cache object size to spl_kmem_cache_max_size / spl_kmem_cache_obj_per_slab. Kmem caches may not be created with objects sized larger than this limit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_max_size

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

uint

Units

MiB

Range

TBD

Default

4 for 32-bit kernel, 32 for 64-bit kernel

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_obj_per_slab

+

spl_kmem_cache_obj_per_slab is the preferred number of objects per slab in the kmem cache. In general, a larger value will increase the cache's memory footprint while decreasing the time required to perform an allocation. Conversely, a smaller value will minimize the footprint and improve cache reclaim time, but individual allocations may take longer.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_obj_per_slab

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

uint

Units

kmem cache objects

Range

TBD

Default

8

Change

Dynamic

Versions Affected

v0.7.0 to v0.8.x

+
+
+

spl_kmem_cache_obj_per_slab_min

+

spl_kmem_cache_obj_per_slab_min is the minimum number of objects +allowed per slab. Normally slabs will contain +spl_kmem_cache_obj_per_slab objects +but for caches that contain very large objects it’s desirable to only +have a few, or even just one, object per slab.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_obj_per_slab_min

Notes

Tags

memory

Kernel module

spl

When to change

debugging kmem cache operations

Data Type

uint

Units

kmem cache objects

Range

TBD

Default

1

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_reclaim

+

spl_kmem_cache_reclaim prevents Linux from being able to rapidly reclaim all the memory held by the kmem caches. This may be useful in circumstances where it's preferable that Linux reclaim memory from some other subsystem first. Setting spl_kmem_cache_reclaim increases the likelihood of out-of-memory events on a memory-constrained system.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_reclaim

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

boolean

Range

0=enable rapid memory reclaim from kmem +caches, 1=disable rapid memory reclaim +from kmem caches

Default

0

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_slab_limit

+

For small objects the Linux slab allocator should be used to make the +most efficient use of the memory. However, large objects are not +supported by the Linux slab allocator and therefore the SPL +implementation is preferred. spl_kmem_cache_slab_limit is used to +determine the cutoff between a small and large object.

+

Objects of spl_kmem_cache_slab_limit or smaller will be allocated +using the Linux slab allocator, large objects use the SPL allocator. A +cutoff of 16 KiB was determined to be optimal for architectures using 4 +KiB pages.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_slab_limit

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

uint

Units

bytes

Range

TBD

Default

16,384 (16 KiB) when kernel PAGE_SIZE = +4KiB, 0 for other PAGE_SIZE values

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_max_show_tasks

+

spl_max_show_tasks is the limit of tasks per pending list in each taskq shown in /proc/spl/taskq and /proc/spl/taskq-all. Reading the ProcFS files walks the lists with the lock held and could cause a lock up if a list grows too large. If a list is larger than the limit, the string "(truncated)" is printed.
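For example, the pending lists can be inspected and the per-list limit raised at runtime; the paths below are the standard ProcFS and sysfs locations and are assumed to exist on your system:

cat /proc/spl/taskq
# report up to 1024 tasks per pending list instead of the default 512
echo 1024 > /sys/module/spl/parameters/spl_max_show_tasks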

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_max_show_tasks

Notes

Tags

taskq

Kernel module

spl

When to change

TBD

Data Type

uint

Units

tasks reported

Range

0 disables the limit, 1 to MAX_UINT

Default

512

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_panic_halt

+

spl_panic_halt enables kernel panic upon assertion failures. When +not enabled, the asserting thread is halted to facilitate further +debugging.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_panic_halt

Notes

Tags

debug, panic

Kernel module

spl

When to change

when debugging assertions and kernel core dumps +are desired

Data Type

boolean

Range

0=halt thread upon assertion, 1=panic kernel +upon assertion

Default

0

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_taskq_kick

+

Upon writing a non-zero value to spl_taskq_kick, all taskqs are +scanned. If any taskq has a pending task more than 5 seconds old, the +taskq spawns more threads. This can be useful in rare deadlock +situations caused by one or more taskqs not spawning a thread when it +should.
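A minimal example of kicking the taskqs, assuming the parameter is exposed under /sys/module/spl/parameters as dynamic SPL parameters usually are:

# scan all taskqs and spawn extra threads for any with tasks pending for more than 5 seconds
echo 1 > /sys/module/spl/parameters/spl_taskq_kick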

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_taskq_kick

Notes

Tags

taskq

Kernel module

spl

When to change

See description above

Data Type

uint

Units

N/A

Default

0

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_taskq_thread_bind

+

spl_taskq_thread_bind enables binding taskq threads to specific +CPUs, distributed evenly over the available CPUs. By default, this +behavior is disabled to allow the Linux scheduler the maximum +flexibility to determine where a thread should run.
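Because this parameter must be set before the spl module is loaded, it is typically supplied as a module option. A hypothetical /etc/modprobe.d entry might look like:

# /etc/modprobe.d/spl.conf
options spl spl_taskq_thread_bind=1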

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_taskq_thread_bind

Notes

Tags

CPU, taskq

Kernel module

spl

When to change

when debugging CPU scheduling options

Data Type

boolean

Range

0=taskqs are not bound to specific CPUs, +1=taskqs are bound to CPUs

Default

0

Change

prior to loading spl kernel module

Versions Affected

v0.7.0

+
+
+

spl_taskq_thread_dynamic

+

spl_taskq_thread_dynamic enables dynamic taskqs. Taskqs created with the TASKQ_DYNAMIC flag will by default create only a single thread. New threads will be created on demand up to a maximum allowed number to facilitate the completion of outstanding tasks. Threads which are no longer needed are promptly destroyed. By default this behavior is enabled but it can be disabled.

+

See also +zfs_zil_clean_taskq_nthr_pct, +zio_taskq_batch_pct

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_taskq_thread_dynamic

Notes

Tags

taskq

Kernel module

spl

When to change

disable for performance analysis or +troubleshooting

Data Type

boolean

Range

0=taskq threads are not dynamic, 1=taskq +threads are dynamically created and +destroyed

Default

1

Change

prior to loading spl kernel module

Versions Affected

v0.7.0

+
+
+

spl_taskq_thread_priority

+
+
spl_taskq_thread_priority allows newly created taskq threads to +set a non-default scheduler priority. When enabled the priority +specified when a taskq is created will be applied to all threads +created by that taskq.
+
When disabled all threads will use the default Linux kernel thread +priority.
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_taskq_thread_priority

Notes

Tags

CPU, taskq

Kernel module

spl

When to change

when troubleshooting CPU +scheduling-related performance issues

Data Type

boolean

Range

0=taskq threads use the default Linux kernel thread priority, 1=taskq threads use the priority specified when the taskq was created

Default

1

Change

prior to loading spl kernel module

Versions Affected

v0.7.0

+
+
+

spl_taskq_thread_sequential

+

spl_taskq_thread_sequential is the number of items a taskq worker +thread must handle without interruption before requesting a new worker +thread be spawned. spl_taskq_thread_sequential controls how quickly +taskqs ramp up the number of threads processing the queue. Because Linux +thread creation and destruction are relatively inexpensive a small +default value has been selected. Thus threads are created aggressively, +which is typically desirable. Increasing this value results in a slower +thread creation rate which may be preferable for some configurations.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_taskq_thread_sequential

Notes

Tags

CPU, taskq

Kernel module

spl

When to change

TBD

Data Type

int

Units

taskq items

Range

1 to MAX_INT

Default

4

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_kmem_threads

+

spl_kmem_cache_kmem_threads shows the current number of +spl_kmem_cache threads. This task queue is responsible for +allocating new slabs for use by the kmem caches. For the majority of +systems and workloads only a small number of threads are required.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_kmem_threads

Notes

Tags

CPU, memory

Kernel module

spl

When to change

read-only

Data Type

int

Range

1 to MAX_INT

Units

threads

Default

4

Change

read-only, can only be changed prior +to spl module load

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_magazine_size

+

spl_kmem_cache_magazine_size controls the per-CPU magazine size used by the kmem caches. Cache magazines are an optimization designed to minimize the cost of allocating memory. They do this by keeping a per-cpu cache of recently freed objects, which can then be reallocated without taking a lock. This can improve performance on highly contended caches. However, because objects in magazines will prevent otherwise empty slabs from being immediately released this may not be ideal for low memory machines.

+

For this reason spl_kmem_cache_magazine_size can be used to set a maximum magazine size. When this value is set to 0 the magazine size will be automatically determined based on the object size. Otherwise magazines will be limited to 2-256 objects each (magazines are per CPU). Magazines cannot be disabled entirely in this implementation.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_magazine_size

Notes

Tags

CPU, memory

Kernel module

spl

When to change

TBD

Data Type

int

Units

kmem cache objects

Range

0=automatically scale magazine size, +otherwise 2 to 256

Default

0

Change

read-only, can only be changed prior +to spl module load

Versions Affected

v0.7.0

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/Workload Tuning.html b/Performance and Tuning/Workload Tuning.html new file mode 100644 index 000000000..62a7dfaff --- /dev/null +++ b/Performance and Tuning/Workload Tuning.html @@ -0,0 +1,937 @@ + + + + + + + Workload Tuning — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Workload Tuning

+

Below are tips for various workloads.

+ +
+

Basic concepts

+

Descriptions of ZFS internals that have an effect on application +performance follow.

+
+

Adaptive Replacement Cache

+

For decades, operating systems have used RAM as a cache to avoid the +necessity of waiting on disk IO, which is extremely slow. This concept +is called page replacement. Until ZFS, virtually all filesystems used +the Least Recently Used (LRU) page replacement algorithm in which the +least recently used pages are the first to be replaced. Unfortunately, +the LRU algorithm is vulnerable to cache flushes, where a brief change +in workload that occurs occasionally removes all frequently used data +from cache. The Adaptive Replacement Cache (ARC) algorithm was +implemented in ZFS to replace LRU. It solves this problem by maintaining +four lists:

+
    +
  1. A list for recently cached entries.

  2. A list for recently cached entries that have been accessed more than
once.

  3. A list for entries evicted from #1.

  4. A list of entries evicted from #2.
+

Data is evicted from the first list while an effort is made to keep data +in the second list. In this way, ARC is able to outperform LRU by +providing a superior hit rate.

+

In addition, a dedicated cache device (typically a SSD) can be added to +the pool, with +zpool add POOLNAME cache DEVICENAME. The cache +device is managed by the L2ARC, which scans entries that are next to be +evicted and writes them to the cache device. The data stored in ARC and +L2ARC can be controlled via the primarycache and secondarycache +zfs properties respectively, which can be set on both zvols and +datasets. Possible settings are all, none and metadata. It +is possible to improve performance when a zvol or dataset hosts an +application that does its own caching by caching only metadata. One +example would be a virtual machine using ZFS. Another would be a +database system which manages its own cache (Oracle for instance). +PostgreSQL, by contrast, depends on the OS-level file cache for the +majority of cache.
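For example, a dataset backing a self-caching database could be limited to metadata caching; pool, dataset, and device names below are placeholders:

zpool add tank cache nvme0n1                 # add an L2ARC device to the pool
zfs set primarycache=metadata tank/oracle    # cache only metadata in ARC
zfs set secondarycache=metadata tank/oracle  # and in L2ARC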

+
+
+

Alignment Shift (ashift)

+

Top-level vdevs contain an internal property called ashift, which stands +for alignment shift. It is set at vdev creation and it is immutable. It +can be read using the zdb command. It is calculated as the maximum +base 2 logarithm of the physical sector size of any child vdev and it +alters the disk format such that writes are always done according to it. +This makes 2^ashift the smallest possible IO on a vdev. Configuring +ashift correctly is important because partial sector writes incur a +penalty where the sector must be read into a buffer before it can be +written. ZFS makes the implicit assumption that the sector size reported +by drives is correct and calculates ashift based on that.

+

In an ideal world, physical sector size is always reported correctly and +therefore, this requires no attention. Unfortunately, this is not the +case. The sector size on all storage devices was 512-bytes prior to the +creation of flash-based solid state drives. Some operating systems, such +as Windows XP, were written under this assumption and will not function +when drives report a different sector size.

+

Flash-based solid state drives came to market around 2007. These devices +report 512-byte sectors, but the actual flash pages, which roughly +correspond to sectors, are never 512-bytes. The early models used +4096-byte pages while the newer models have moved to an 8192-byte page. +In addition, “Advanced Format” hard drives have been created which also +use a 4096-byte sector size. Partial page writes suffer from similar +performance degradation as partial sector writes. In some cases, the +design of NAND-flash makes the performance degradation even worse, but +that is beyond the scope of this description.

+

Reporting the correct sector sizes is the responsibility of the block device layer. Unfortunately, this has made proper handling of devices that misreport their sector sizes differ across platforms. The respective methods are as follows:

+ +

-o ashift= is convenient, but it is flawed in that the creation of pools containing top level vdevs that have multiple optimal sector sizes requires the use of multiple commands. A newer syntax that will rely on the actual sector sizes has been discussed as a cross platform replacement and will likely be implemented in the future.
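As a simple illustration of the current syntax (pool and device names are placeholders), a pool of 4K-sector drives can be created with:

zpool create -o ashift=12 tank mirror sda sdb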

+

In addition, there is a database of +drives known to misreport sector +sizes +to the ZFS on Linux project. It is used to automatically adjust ashift +without the assistance of the system administrator. This approach is +unable to fully compensate for misreported sector sizes whenever drive +identifiers are used ambiguously (e.g. virtual machines, iSCSI LUNs, +some rare SSDs), but it does a great amount of good. The format is +roughly compatible with illumos’ sd.conf and it is expected that other +implementations will integrate the database in future releases. Strictly +speaking, this database does not belong in ZFS, but the difficulty of +patching the Linux kernel (especially older ones) necessitated that this +be implemented in ZFS itself for Linux. The same is true for MacZFS. +However, FreeBSD and illumos are both able to implement this in the +correct layer.

+
+
+

Compression

+

Internally, ZFS allocates data using multiples of the device’s sector +size, typically either 512 bytes or 4KB (see above). When compression is +enabled, a smaller number of sectors can be allocated for each block. +The uncompressed block size is set by the recordsize (defaults to +128KB) or volblocksize (defaults to 8KB) property (for filesystems +vs volumes).

+

The following compression algorithms are available:

+
    +
  • LZ4

    New algorithm added after feature flags were created. It is
significantly superior to LZJB in all metrics tested. It is the new
default compression algorithm (compression=on) in OpenZFS. It is
available on all platforms as of 2020.

  • LZJB

    Original default compression algorithm (compression=on) for ZFS.
It was created to satisfy the desire for a compression algorithm
suitable for use in filesystems. Specifically, it provides fair
compression, has a high compression speed, has a high decompression
speed and detects incompressible data quickly.

  • GZIP (1 through 9)

    Classic Lempel-Ziv implementation. It provides high compression,
but it often makes IO CPU-bound.

  • ZLE (Zero Length Encoding)

    A very simple algorithm that only compresses zeroes.

  • ZSTD (Zstandard)

    Zstandard is a modern, high performance, general compression
algorithm which provides similar or better compression levels to
GZIP, but with much better performance. Zstandard offers a very wide
range of performance/compression trade-offs and is backed by an
extremely fast decoder. It is available from OpenZFS 2.0 onward.
+

If you want to use compression and are uncertain which to use, use LZ4. +It averages a 2.1:1 compression ratio while gzip-1 averages 2.7:1, but +gzip is much slower. Both figures are obtained from testing by the LZ4 +project on the Silesia corpus. The +greater compression ratio of gzip is usually only worthwhile for rarely +accessed data.

+
+
+

RAID-Z stripe width

+

Choose a RAID-Z stripe width based on your IOPS needs and the amount of +space you are willing to devote to parity information. If you need more +IOPS, use fewer disks per stripe. If you need more usable space, use +more disks per stripe. Trying to optimize your RAID-Z stripe width based +on exact numbers is irrelevant in nearly all cases. See this blog +post +for more details.

+
+
+

Dataset recordsize

+

ZFS datasets use an internal recordsize of 128KB by default. The dataset +recordsize is the basic unit of data used for internal copy-on-write on +files. Partial record writes require that data be read from either ARC +(cheap) or disk (expensive). recordsize can be set to any power of 2 +from 512 bytes to 1 megabyte. Software that writes in fixed record +sizes (e.g. databases) will benefit from the use of a matching +recordsize.

+

Changing the recordsize on a dataset will only take effect for new +files. If you change the recordsize because your application should +perform better with a different one, you will need to recreate its +files. A cp followed by a mv on each file is sufficient. Alternatively, +send/recv should recreate the files with the correct recordsize when a +full receive is done.
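A hedged sketch of that rewrite with placeholder names; the copy followed by a rename forces each file to be rewritten with the new recordsize:

zfs set recordsize=16K tank/db
cp /tank/db/data.file /tank/db/data.file.new
mv /tank/db/data.file.new /tank/db/data.file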

+
+

Larger record sizes

+

Record sizes of up to 16M are supported with the large_blocks pool +feature, which is enabled by default on new pools on systems that +support it.

+

Record sizes larger than 1M were disabled by default before OpenZFS v2.2, unless the zfs_max_recordsize kernel module parameter was set to allow sizes higher than 1M.

+

`zfs send` operations must specify -L +to ensure that larger than 128KB blocks are sent and the receiving pools +must support the large_blocks feature.
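For example (pool and snapshot names are placeholders), an incremental replication that preserves large blocks might look like:

zfs send -L -i tank/data@snap1 tank/data@snap2 | zfs receive backup/data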

+
+
+
+

zvol volblocksize

+

Zvols have a volblocksize property that is analogous to recordsize. +Current default (16KB since v2.2) balances the metadata overhead, compression +opportunities and decent space efficiency on majority of pool configurations +due to 4KB disk physical block rounding (especially on RAIDZ and DRAID), +while incurring some write amplification on guest FSes that run with smaller +block sizes [7].

+

Users are advised to test their scenarios and see whether the volblocksize +needs to be changed to favor one or the other:

+
    +
  • sector alignment of guest FS is crucial

  • most guest filesystems use a default block size of 4-8KB, so:

    • A larger volblocksize can help with mostly sequential workloads
and will improve compression efficiency.

    • A smaller volblocksize can help with random workloads and
minimize IO amplification, but will use more metadata (e.g. more
small IOs will be generated by ZFS) and may have worse space
efficiency (especially on RAIDZ and DRAID).

    • It is meaningless to set volblocksize smaller than the guest
FS's block size or ashift.

    • See Dataset recordsize for additional information.
+
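As an illustrative example (names and sizes are placeholders), a zvol for a guest filesystem that uses 64K blocks could be created with a matching volblocksize; note that volblocksize can only be set at creation time:

zfs create -V 100G -o volblocksize=64K tank/guest1-disk0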
+
+

Deduplication

+

Deduplication uses an on-disk hash table, using extensible +hashing as +implemented in the ZAP (ZFS Attribute Processor). Each cached entry uses +slightly more than 320 bytes of memory. The DDT code relies on ARC for +caching the DDT entries, such that there is no double caching or +internal fragmentation from the kernel memory allocator. Each pool has a +global deduplication table shared across all datasets and zvols on which +deduplication is enabled. Each entry in the hash table is a record of a +unique block in the pool. (Where the block size is set by the +recordsize or volblocksize properties.)

+

The hash table (also known as the DDT or DeDup Table) must be accessed +for every dedup-able block that is written or freed (regardless of +whether it has multiple references). If there is insufficient memory for +the DDT to be cached in memory, each cache miss will require reading a +random block from disk, resulting in poor performance. For example, if +operating on a single 7200RPM drive that can do 100 io/s, uncached DDT +reads would limit overall write throughput to 100 blocks per second, or +400KB/s with 4KB blocks.

+

The consequence is that sufficient memory to store deduplication data is +required for good performance. The deduplication data is considered +metadata and therefore can be cached if the primarycache or +secondarycache properties are set to metadata. In addition, the +deduplication table will compete with other metadata for metadata +storage, which can have a negative effect on performance. Simulation of +the number of deduplication table entries needed for a given pool can be +done using the -D option to zdb. Then a simple multiplication by +320-bytes can be done to get the approximate memory requirements. +Alternatively, you can estimate an upper bound on the number of unique +blocks by dividing the amount of storage you plan to use on each dataset +(taking into account that partial records each count as a full +recordsize for the purposes of deduplication) by the recordsize and each +zvol by the volblocksize, summing and then multiplying by 320-bytes.
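As a rough worked example of that upper-bound estimate: a dataset expected to hold 1 TiB of unique data at the default 128 KiB recordsize yields about 8.4 million blocks, so:

1 TiB / 128 KiB              = 8,388,608 blocks
8,388,608 blocks x 320 bytes = ~2.5 GiB of memory for the DDT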

+
+
+

Metaslab Allocator

+

ZFS top level vdevs are divided into metaslabs from which blocks can be independently allocated, allowing concurrent IOs to perform allocations without blocking one another. At present, there is a regression on the Linux and Mac OS X ports that causes serialization to occur.

+

By default, the selection of a metaslab is biased toward lower LBAs to improve performance of spinning disks, but this does not make sense on solid state media. This behavior can be adjusted globally by setting the ZFS module's global metaslab_lba_weighting_enabled tunable to 0. Setting this tunable to 0 is only advisable on systems that use solid state media exclusively for pools.

+

The metaslab allocator will allocate blocks on a first-fit basis when a +metaslab has more than or equal to 4 percent free space and a best-fit +basis when a metaslab has less than 4 percent free space. The former is +much faster than the latter, but it is not possible to tell when this +behavior occurs from the pool’s free space. However, the command zdb +-mmm $POOLNAME will provide this information.

+
+
+

Pool Geometry

+

If small random IOPS are of primary importance, mirrored vdevs will +outperform raidz vdevs. Read IOPS on mirrors will scale with the number +of drives in each mirror while raidz vdevs will each be limited to the +IOPS of the slowest drive.

+

If sequential writes are of primary importance, raidz will outperform +mirrored vdevs. Sequential write throughput increases linearly with the +number of data disks in raidz while writes are limited to the slowest +drive in mirrored vdevs. Sequential read performance should be roughly +the same on each.

+

Both IOPS and throughput will increase by the respective sums of the +IOPS and throughput of each top level vdev, regardless of whether they +are raidz or mirrors.

+
+
+

Whole Disks versus Partitions

+

ZFS will behave differently on different platforms when given a whole +disk.

+

On illumos, ZFS attempts to enable the write cache on a whole disk. The +illumos UFS driver cannot ensure integrity with the write cache enabled, +so by default Sun/Solaris systems using UFS file system for boot were +shipped with drive write cache disabled (long ago, when Sun was still an +independent company). For safety on illumos, if ZFS is not given the +whole disk, it could be shared with UFS and thus it is not appropriate +for ZFS to enable write cache. In this case, the write cache setting is +not changed and will remain as-is. Today, most vendors ship drives with +write cache enabled by default.

+

On Linux, the Linux IO elevator is largely redundant given that ZFS has +its own IO elevator.

+

ZFS will also create a GPT partition table and place its own partitions on the disk when given a whole disk under illumos on x86/amd64 and on Linux. This is mainly to make booting through UEFI possible because UEFI requires a small FAT partition to be able to boot the system. The ZFS driver will be able to tell the difference between whether the pool had been given the entire disk or not via the whole_disk field in the label.

+

This is not done on FreeBSD. Pools created by FreeBSD will always have the whole_disk field set to true, such that a pool created on FreeBSD and imported on another platform will always be treated as if the whole disks were given to ZFS.

+
+
+
+

OS/distro-specific recommendations

+
+

Linux

+
+

init_on_alloc

+

Some Linux distributions (at least Debian, Ubuntu) enable the init_on_alloc option as a security precaution by default. This option can help to [6]:

+
+

prevent possible information leaks and +make control-flow bugs that depend on uninitialized values more +deterministic.

+
+

Unfortunately, it can lower ARC throughput considerably +(see bug).

+

If you’re ready to cope with these security risks [6], +you may disable it +by setting init_on_alloc=0 in the GRUB kernel boot parameters.

+
+
+
+
+

General recommendations

+
+

Alignment shift

+

Make sure that you create your pools such that the vdevs have the correct alignment shift for your storage device's sector size. If dealing with flash media, this is going to be either 12 (4K sectors) or 13 (8K sectors). For SSD ephemeral storage on Amazon EC2, the proper setting is 12.

+
+
+

Atime Updates

+

Set either relatime=on or atime=off to minimize IOs used to update +access time stamps. For backward compatibility with a small percentage +of software that supports it, relatime is preferred when available and +should be set on your entire pool. atime=off should be used more +selectively.
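For example, on a pool named tank (placeholder names):

zfs set relatime=on tank        # inherited by all child datasets
zfs set atime=off tank/scratch  # disable atime entirely where it is safe to do so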

+
+
+

Free Space

+

Keep pool free space above 10% to keep many metaslabs from reaching the 4% free space threshold at which they switch from first-fit to best-fit allocation strategies. When the threshold is hit, the Metaslab Allocator becomes very CPU intensive in an attempt to protect itself from fragmentation. This reduces IOPS, especially as more metaslabs reach the 4% threshold.

+

The recommendation is 10% rather than 5% because metaslab selection considers both location and free space unless the global metaslab_lba_weighting_enabled tunable is set to 0. When that tunable is 0, ZFS will consider only free space, so the expense of the best-fit allocator can be avoided by keeping free space above 5%. That setting should only be used on systems with pools that consist of solid state drives because it will reduce sequential IO performance on mechanical disks.

+
+
+

LZ4 compression

+

Set compression=lz4 on your pools' root datasets so that all datasets inherit it unless you have a reason not to enable it. Userland tests of LZ4 compression of incompressible data in a single thread have shown that it can process 10GB/sec, so it is unlikely to be a bottleneck even on incompressible data. Furthermore, incompressible data will be stored without compression, so reads of incompressible data with compression enabled will not be subject to decompression. Writes are so fast that incompressible data is unlikely to see a performance penalty from the use of LZ4 compression. The reduction in IO from LZ4 will typically be a performance win.
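For example, setting it once on the pool's root dataset (placeholder name) lets every child dataset inherit it:

zfs set compression=lz4 tank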

+

Note that larger record sizes will increase compression ratios on +compressible data by allowing compression algorithms to process more +data at a time.

+
+
+

NVMe low level formatting

+

See NVMe low level formatting.

+
+
+

Pool Geometry

+

Do not put more than ~16 disks in raidz. The rebuild times on mechanical +disks will be excessive when the pool is full.

+
+
+

Synchronous I/O

+

If your workload involves fsync or O_SYNC and your pool is backed by +mechanical storage, consider adding one or more SLOG devices. Pools that +have multiple SLOG devices will distribute ZIL operations across them. +The best choice for SLOG device(s) are likely Optane / 3D XPoint SSDs. +See Optane / 3D XPoint SSDs +for a description of them. If an Optane / 3D XPoint SSD is an option, +the rest of this section on synchronous I/O need not be read. If Optane +/ 3D XPoint SSDs is not an option, see +NAND Flash SSDs for suggestions +for NAND flash SSDs and also read the information below.
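For example (device names are placeholders), a mirrored SLOG can be attached to an existing pool with:

zpool add tank log mirror nvme0n1p1 nvme1n1p1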

+

To ensure maximum ZIL performance on NAND flash SSD-based SLOG devices, you should also overprovision spare area to increase IOPS [1]. Only about 4GB is needed, so the rest can be left as overprovisioned storage. The choice of 4GB is somewhat arbitrary. Most systems do not write anything close to 4GB to ZIL between transaction group commits, so overprovisioning all storage beyond the 4GB partition should be alright. If a workload needs more, then make it no more than the maximum ARC size. Even under extreme workloads, ZFS will not benefit from more SLOG storage than the maximum ARC size. That is half of system memory on Linux and 3/4 of system memory on illumos.

+
+

Overprovisioning by secure erase and partition table trick

+

You can do this with a mix of a secure erase and a partition table +trick, such as the following:

+
    +
  1. Run a secure erase on the NAND-flash SSD.

  2. Create a partition table on the NAND-flash SSD.

  3. Create a 4GB partition.

  4. Give the partition to ZFS to use as a log device.
+
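A minimal sketch of those steps on Linux, using placeholder device and pool names; the exact secure-erase method varies by drive (hdparm --security-erase and blkdiscard --secure are common options), so treat this as an outline rather than exact commands for your hardware:

blkdiscard --secure /dev/sdX               # step 1: secure erase (if the device supports it)
parted -s /dev/sdX mklabel gpt             # step 2: new partition table
parted -s /dev/sdX mkpart slog 1MiB 4GiB   # step 3: 4GB partition
zpool add tank log /dev/sdX1               # step 4: hand the partition to ZFS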

If using the secure erase and partition table trick, do not use the +unpartitioned space for other things, even temporarily. That will reduce +or eliminate the overprovisioning by marking pages as dirty.

+

Alternatively, some devices allow you to change the sizes that they report. This would also work, although a secure erase should be done prior to changing the reported size to ensure that the SSD recognizes the additional spare area. Changing the reported size can be done on drives that support it with hdparm -N on systems that have laptop-mode-tools.

+
+
+

NVMe overprovisioning

+

On NVMe, you can use namespaces to achieve overprovisioning:

+
    +
  1. Do a sanitize command as a precaution to ensure the device is
completely clean.

  2. Delete the default namespace.

  3. Create a new namespace of size 4GB.

  4. Give the namespace to ZFS to use as a log device, e.g. zpool add
tank log /dev/nvme1n1.
+
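A rough sketch of those steps with nvme-cli, using placeholder device and namespace IDs; flag spellings differ between nvme-cli versions and controllers, so treat this as an outline and consult the nvme-cli documentation for your system (8,388,608 is 4 GiB in 512-byte blocks):

nvme sanitize /dev/nvme1 --sanact=2                                # block-erase sanitize
nvme delete-ns /dev/nvme1 -n 1                                     # remove the default namespace
nvme create-ns /dev/nvme1 --nsze=8388608 --ncap=8388608 --flbas=0  # 4GB namespace
nvme attach-ns /dev/nvme1 -n 1 -c 0                                # attach it to the controller
zpool add tank log /dev/nvme1n1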
+
+
+

Whole disks

+

Whole disks should be given to ZFS rather than partitions. If you must +use a partition, make certain that the partition is properly aligned to +avoid read-modify-write overhead. See the section on +Alignment Shift (ashift) +for a description of proper alignment. Also, see the section on +Whole Disks versus Partitions +for a description of changes in ZFS behavior when operating on a +partition.

+

Single disk RAID 0 arrays from RAID controllers are not equivalent to +whole disks. The Hardware RAID controllers page +explains in detail.

+
+
+
+

Bit Torrent

+

Bit torrent performs 16KB random reads/writes. The 16KB writes cause +read-modify-write overhead. The read-modify-write overhead can reduce +performance by a factor of 16 with 128KB record sizes when the amount of +data written exceeds system memory. This can be avoided by using a +dedicated dataset for bit torrent downloads with recordsize=16KB.
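For example (dataset name is a placeholder):

zfs create -o recordsize=16K tank/torrents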

+

When the files are read sequentially through an HTTP server, the random order in which the pieces were written creates fragmentation that has been observed to reduce sequential read performance by a factor of two on 7200RPM hard disks. If performance is a problem, fragmentation can be eliminated by rewriting the files sequentially in either of two ways:

+

The first method is to configure your client to download the files to a +temporary directory and then copy them into their final location when +the downloads are finished, provided that your client supports this.

+

The second method is to use send/recv to recreate a dataset +sequentially.

+

In practice, defragmenting files obtained through bit torrent should +only improve performance when the files are stored on magnetic storage +and are subject to significant sequential read workloads after creation.

+
+
+

Database workloads

+

Setting redundant_metadata=most can increase IOPS by at least a few +percentage points by eliminating redundant metadata at the lowest level +of the indirect block tree. This comes with the caveat that data loss +will occur if a metadata block pointing to data blocks is corrupted and +there are no duplicate copies, but this is generally not a problem in +production on mirrored or raidz vdevs.

+
+

MySQL

+
+

InnoDB

+

Make separate datasets for InnoDB’s data files and log files. Set +recordsize=16K on InnoDB’s data files to avoid expensive partial record +writes and leave recordsize=128K on the log files. Set +primarycache=metadata on both to prefer InnoDB’s +caching [2]. +Set logbias=throughput on the data to stop ZIL from writing twice.
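A sketch of that dataset layout with placeholder names, based on the properties described above:

zfs create -o recordsize=16K -o primarycache=metadata -o logbias=throughput tank/mysql/data
zfs create -o recordsize=128K -o primarycache=metadata tank/mysql/log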

+

Set skip-innodb_doublewrite in my.cnf to prevent innodb from writing twice. The double writes are a data integrity feature meant to protect against corruption from partially-written records, but those are not possible on ZFS. It should be noted that Percona's blog had advocated using an ext4 configuration where double writes were turned off for a performance gain, but later recanted it because it caused data corruption. Following a well timed power failure, an in-place filesystem such as ext4 can have half of an 8KB record be old while the other half would be new. This would be the corruption that caused Percona to recant its advice. However, ZFS' copy on write design would cause it to return the old correct data following a power failure (no matter what the timing is). That prevents the corruption that the double write feature is intended to prevent from ever happening. The double write feature is therefore unnecessary on ZFS and can be safely turned off for better performance.

+

On Linux, the driver’s AIO implementation is a compatibility shim that +just barely passes the POSIX standard. InnoDB performance suffers when +using its default AIO codepath. Set innodb_use_native_aio=0 and +innodb_use_atomic_writes=0 in my.cnf to disable AIO. Both of these +settings must be disabled to disable AIO.

+
+
+
+

PostgreSQL

+

Make separate datasets for PostgreSQL's data and WAL. Set compression=lz4 and recordsize=32K (64K also works well, as does the 128K default) on both. Configure full_page_writes = off for PostgreSQL, as ZFS will never commit a partial write. For a database with large updates, experiment with logbias=throughput on PostgreSQL's data to avoid writing twice, but be aware that with this setting smaller updates can cause severe fragmentation.
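For example, with placeholder names:

zfs create -o recordsize=32K -o compression=lz4 tank/pg/data
zfs create -o compression=lz4 tank/pg/wal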

+
+
+

SQLite

+

Make a separate dataset for the database. Set the recordsize to 64K. Set +the SQLite page size to 65536 +bytes [3].

+

Note that SQLite databases typically are not exercised enough to merit +special tuning, but this will provide it. Note the side effect on cache +size mentioned at +SQLite.org [4].

+
+
+
+

File servers

+

Create a dedicated dataset for files being served.

+

See +Sequential workloads +for configuration recommendations.

+
+

Samba

+

Windows/DOS clients don't support case-sensitive file names. If your main workload won't need case sensitivity for other supported clients, create the dataset with zfs create -o casesensitivity=insensitive so Samba may search filenames faster in the future [5].

+

See case sensitive option in +smb.conf(5).

+
+
+
+

Sequential workloads

+

Set recordsize=1M on datasets that are subject to sequential workloads. +Read +Larger record sizes +for documentation on things that should be known before setting 1M +record sizes.

+

Set compression=lz4 as per the general recommendation for LZ4 +compression.

+
+
+

Video games directories

+

Create a dedicated dataset, use chown to make it user accessible (or +create a directory under it and use chown on that) and then configure +the game download application to place games there. Specific information +on how to configure various ones is below.

+

See +Sequential workloads +for configuration recommendations before installing games.

+

Note that the performance gains from this tuning are likely to be small +and limited to load times. However, the combination of 1M records and +LZ4 will allow more games to be stored, which is why this tuning is +documented despite the performance gains being limited. A steam library +of 300 games (mostly from humble bundle) that had these tweaks applied +to it saw 20% space savings. Both faster load times and significant +space savings are possible on compressible games when this tuning has +been done. Games whose assets are already compressed will see little to +no benefit.

+
+

Lutris

+

Open the context menu by left clicking on the triple bar icon in the +upper right. Go to “Preferences” and then the “System options” tab. +Change the default installation directory and click save.

+
+
+

Steam

+

Go to “Settings” -> “Downloads” -> “Steam Library Folders” and use “Add +Library Folder” to set the directory for steam to use to store games. +Make sure to set it to the default by right clicking on it and clicking +“Make Default Folder” before closing the dialogue.

+

If you’ll use Proton to run non-native games, +create dataset with zfs create -o casesensitivity=insensitive +so Wine may search filenames faster in future [5].

+
+
+
+

Wine

+

Windows file systems’ standard behavior is to be case-insensitive. +Create dataset with zfs create -o casesensitivity=insensitive +so Wine may search filenames faster in future [5].

+
+
+

Virtual machines

+

Virtual machine images on ZFS should be stored using either zvols or raw +files to avoid unnecessary overhead. The recordsize/volblocksize and +guest filesystem may be configured to match to avoid overhead from +partial record modification, see zvol volblocksize. +If raw files are used, a separate dataset should be used to make it easy to configure +recordsize independently of other things stored on ZFS.

+
+

QEMU / KVM / Xen

+

AIO should be used to maximize IOPS when using files for guest storage.

+

Footnotes

+ +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/ZFS Transaction Delay.html b/Performance and Tuning/ZFS Transaction Delay.html new file mode 100644 index 000000000..561043a68 --- /dev/null +++ b/Performance and Tuning/ZFS Transaction Delay.html @@ -0,0 +1,222 @@ + + + + + + + ZFS Transaction Delay — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ZFS Transaction Delay

+

ZFS write operations are delayed when the backend storage isn’t able to +accommodate the rate of incoming writes. This delay process is known as +the ZFS write throttle.

+

If there is already a write transaction waiting, the delay is relative +to when that transaction will finish waiting. Thus the calculated delay +time is independent of the number of threads concurrently executing +transactions.

+

If there is only one waiter, the delay is relative to when the +transaction started, rather than the current time. This credits the +transaction for “time already served.” For example, if a write +transaction requires reading indirect blocks first, then the delay is +counted at the start of the transaction, just prior to the indirect +block reads.

+

The minimum time for a transaction to take is calculated as:

+
min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
+min_time is then capped at 100 milliseconds
+
+
+

The delay has two degrees of freedom that can be adjusted via tunables:

+
    +
  1. The percentage of dirty data at which we start to delay is defined
by zfs_delay_min_dirty_percent. This should typically be at or above
zfs_vdev_async_write_active_max_dirty_percent, so delays occur after
writing at full speed has failed to keep up with the incoming write
rate.

  2. The scale of the curve is defined by zfs_delay_scale. Roughly
speaking, this variable determines the amount of delay at the
midpoint of the curve.
+
delay
+ 10ms +-------------------------------------------------------------*+
+      |                                                             *|
+  9ms +                                                             *+
+      |                                                             *|
+  8ms +                                                             *+
+      |                                                            * |
+  7ms +                                                            * +
+      |                                                            * |
+  6ms +                                                            * +
+      |                                                            * |
+  5ms +                                                           *  +
+      |                                                           *  |
+  4ms +                                                           *  +
+      |                                                           *  |
+  3ms +                                                          *   +
+      |                                                          *   |
+  2ms +                                              (midpoint) *    +
+      |                                                  |    **     |
+  1ms +                                                  v ***       +
+      |             zfs_delay_scale ---------->     ********         |
+    0 +-------------------------------------*********----------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+
+

Note that since the delay is added to the outstanding time remaining on +the most recent transaction, the delay is effectively the inverse of +IOPS. Here the midpoint of 500 microseconds translates to 2000 IOPS. The +shape of the curve was chosen such that small changes in the amount of +accumulated dirty data in the first 3/4 of the curve yield relatively +small differences in the amount of delay.
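To make the midpoint figure concrete: at the midpoint of the curve, dirty - min equals max - dirty, so min_time reduces to zfs_delay_scale itself. With the 500 microsecond value used in these graphs:

min_time = zfs_delay_scale * (dirty - min) / (max - dirty) = zfs_delay_scale = 500 us
IOPS     = 1 / 500 us = 2000 IOPS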

+

The effects can be easier to understand when the amount of delay is +represented on a log scale:

+
delay
+100ms +-------------------------------------------------------------++
+      +                                                              +
+      |                                                              |
+      +                                                             *+
+ 10ms +                                                             *+
+      +                                                           ** +
+      |                                              (midpoint)  **  |
+      +                                                  |     **    +
+  1ms +                                                  v ****      +
+      +             zfs_delay_scale ---------->        *****         +
+      |                                             ****             |
+      +                                          ****                +
+100us +                                        **                    +
+      +                                       *                      +
+      |                                      *                       |
+      +                                     *                        +
+ 10us +                                     *                        +
+      +                                                              +
+      |                                                              |
+      +                                                              +
+      +--------------------------------------------------------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+
+

Note here that only as the amount of dirty data approaches its limit +does the delay start to increase rapidly. The goal of a properly tuned +system should be to keep the amount of dirty data out of that range by +first ensuring that the appropriate limits are set for the I/O scheduler +to reach optimal throughput on the backend storage, and then by changing +the value of zfs_delay_scale to increase the steepness of the curve.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/ZIO Scheduler.html b/Performance and Tuning/ZIO Scheduler.html new file mode 100644 index 000000000..d978f401b --- /dev/null +++ b/Performance and Tuning/ZIO Scheduler.html @@ -0,0 +1,244 @@ + + + + + + + ZFS I/O (ZIO) Scheduler — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ZFS I/O (ZIO) Scheduler

+

ZFS issues I/O operations to leaf vdevs (usually devices) to satisfy and +complete I/Os. The ZIO scheduler determines when and in what order those +operations are issued. Operations are divided into five I/O classes +prioritized in the following order:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Priority

I/O Class

Description

highest

sync read

most reads

sync write

as defined by application or via ‘zfs’ +‘sync’ property

async read

prefetch reads

async write

most writes

lowest

scrub read

scan read: includes both scrub and +resilver

+

Each queue defines the minimum and maximum number of concurrent +operations issued to the device. In addition, the device has an +aggregate maximum, zfs_vdev_max_active. Note that the sum of the +per-queue minimums must not exceed the aggregate maximum. If the sum of +the per-queue maximums exceeds the aggregate maximum, then the number of +active I/Os may reach zfs_vdev_max_active, in which case no further I/Os +are issued regardless of whether all per-queue minimums have been met.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

I/O Class

Min Active Parameter

Max Active Parameter

sync read

zfs_vdev_sync_read_min_active

zfs_vdev_sync_read_max_active

sync write

zfs_vdev_sync_write_min_active

zfs_vdev_sync_write_max_active

async read

zfs_vdev_async_read_min_active

zfs_vdev_async_read_max_active

async write

zfs_vdev_async_write_min_active

zfs_vdev_async_write_max_active

scrub read

zfs_vdev_scrub_min_active

zfs_vdev_scrub_max_active

+

For many physical devices, throughput increases with the number of +concurrent operations, but latency typically suffers. Further, physical +devices typically have a limit at which more concurrent operations have +no effect on throughput or can cause the disk performance to +decrease.

+

The ZIO scheduler selects the next operation to issue by first looking +for an I/O class whose minimum has not been satisfied. Once all are +satisfied and the aggregate maximum has not been hit, the scheduler +looks for classes whose maximum has not been satisfied. Iteration +through the I/O classes is done in the order specified above. No further +operations are issued if the aggregate maximum number of concurrent +operations has been hit or if there are no operations queued for an I/O +class that has not hit its maximum. Every time an I/O is queued or an +operation completes, the I/O scheduler looks for new operations to +issue.

+

In general, smaller max_active’s will lead to lower latency of +synchronous operations. Larger max_active’s may lead to higher overall +throughput, depending on underlying storage and the I/O mix.

+

The ratio of the queues’ max_actives determines the balance of +performance between reads, writes, and scrubs. For example, when there +is contention, increasing zfs_vdev_scrub_max_active will cause the scrub +or resilver to complete more quickly, but reads and writes to have +higher latency and lower throughput.
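For example, to favor a resilver at the cost of foreground latency, the scrub queue's maximum could be raised at runtime; the sysfs path below is the usual location for ZFS module parameters and is assumed here:

echo 3 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active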

+

All I/O classes have a fixed maximum number of outstanding operations except for the async write class. Asynchronous writes represent the data that is committed to stable storage during the syncing stage for transaction groups (txgs). Transaction groups enter the syncing state periodically, so the number of queued async writes quickly bursts up and then drops back to zero. The zfs_txg_timeout tunable (default=5 seconds) sets the target interval for txg sync. Thus a burst of async writes every 5 seconds is a normal ZFS I/O pattern.

+

Rather than servicing I/Os as quickly as possible, the ZIO scheduler changes the maximum number of active async write I/Os according to the amount of dirty data in the pool. Since both throughput and latency typically increase with the number of concurrent operations issued to physical devices, reducing the burstiness in the number of concurrent operations also stabilizes the response time of operations from other queues. This is particularly important for the sync read and write queues, where the periodic async write bursts of the txg sync can lead to device-level contention. In broad strokes, the ZIO scheduler issues more concurrent operations from the async write queue as there's more dirty data in the pool.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/index.html b/Performance and Tuning/index.html new file mode 100644 index 000000000..523969c10 --- /dev/null +++ b/Performance and Tuning/index.html @@ -0,0 +1,169 @@ + + + + + + + Performance and Tuning — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/Project and Community/Admin Documentation.html b/Project and Community/Admin Documentation.html new file mode 100644 index 000000000..b09b72386 --- /dev/null +++ b/Project and Community/Admin Documentation.html @@ -0,0 +1,138 @@ + + + + + + + Admin Documentation — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/Project and Community/FAQ hole birth.html b/Project and Community/FAQ hole birth.html new file mode 100644 index 000000000..023cda446 --- /dev/null +++ b/Project and Community/FAQ hole birth.html @@ -0,0 +1,168 @@ + + + + + + + FAQ Hole birth — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

FAQ Hole birth

+
+

Short explanation

+

The hole_birth feature has/had bugs, the result of which is that, if you +do a zfs send -i (or -R, since it uses -i) from an affected +dataset, the receiver will not see any checksum or other errors, but the +resulting destination snapshot will not match the source.

+

ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the +faulty metadata which causes this issue on the sender side.

+
+
+

FAQ

+
+

I have a pool with hole_birth enabled, how do I know if I am affected?

+

It is technically possible to calculate whether you have any affected files, but it requires scraping zdb output for each file in each snapshot in each dataset, which is a combinatoric nightmare. (If you really want it, there is a proof of concept here.)

+
+
+

Is there any less painful way to fix this if we have already received an affected snapshot?

+

No, the data you need was simply not present in the send stream, +unfortunately, and cannot feasibly be rewritten in place.

+
+
+
+

Long explanation

+

hole_birth is a feature to speed up ZFS send -i - in particular, ZFS +used to not store metadata on when “holes” (sparse regions) in files +were created, so every zfs send -i needed to include every hole.

+

hole_birth, as the name implies, added tracking for the txg (transaction +group) when a hole was created, so that zfs send -i could only send +holes that had a birth_time between (starting snapshot txg) and (ending +snapshot txg), and life was wonderful.

+

Unfortunately, hole_birth had a number of edge cases where it could +“forget” to set the birth_time of holes in some cases, causing it to +record the birth_time as 0 (the value used prior to hole_birth, and +essentially equivalent to “since file creation”).

+

This meant that, when you did a zfs send -i, since zfs send does not +have any knowledge of the surrounding snapshots when sending a given +snapshot, it would see the creation txg as 0, conclude “oh, it is 0, I +must have already sent this before”, and not include it.

+

This means that, on the receiving side, it does not know those holes +should exist, and does not create them. This leads to differences +between the source and the destination.

+

ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring this +metadata and always sending holes with birth_time 0, configurable using +the tunable known as ignore_hole_birth or +send_holes_without_birth_time. The latter is what OpenZFS +standardized on. ZoL version 0.6.5.8 only has the former, but for any +ZoL version with send_holes_without_birth_time, they point to the +same value, so changing either will work.
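To check or force this behavior on a running system, the tunable can be read and written through sysfs; the path below is the standard location for ZFS module parameters and is assumed to be present:

cat /sys/module/zfs/parameters/send_holes_without_birth_time
echo 1 > /sys/module/zfs/parameters/send_holes_without_birth_time   # ignore hole_birth metadata when sending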

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Project and Community/FAQ.html b/Project and Community/FAQ.html new file mode 100644 index 000000000..2144aaaed --- /dev/null +++ b/Project and Community/FAQ.html @@ -0,0 +1,845 @@ + + + + + + + FAQ — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

FAQ

+ +
+

What is OpenZFS

+

OpenZFS is an outstanding storage platform that +encompasses the functionality of traditional filesystems, volume +managers, and more, with consistent reliability, functionality and +performance across all distributions. Additional information about +OpenZFS can be found in the OpenZFS wikipedia +article.

+
+
+

Hardware Requirements

+

Because ZFS was originally designed for Sun Solaris it was long +considered a filesystem for large servers and for companies that could +afford the best and most powerful hardware available. But since the +porting of ZFS to numerous OpenSource platforms (The BSDs, Illumos and +Linux - under the umbrella organization “OpenZFS”), these requirements +have been lowered.

+

The suggested hardware requirements are:

+
    +
  • ECC memory. This isn’t really a requirement, but it’s highly +recommended.

  • +
  • 8GB+ of memory for the best performance. It’s perfectly possible to +run with 2GB or less (and people do), but you’ll need more if using +deduplication.

  • +
+
+
+

Do I have to use ECC memory for ZFS?

+

Using ECC memory for OpenZFS is strongly recommended for enterprise +environments where the strongest data integrity guarantees are required. +Without ECC memory rare random bit flips caused by cosmic rays or by +faulty memory can go undetected. If this were to occur OpenZFS (or any +other filesystem) will write the damaged data to disk and be unable to +automatically detect the corruption.

+

Unfortunately, ECC memory is not always supported by consumer grade +hardware. And even when it is, ECC memory will be more expensive. For +home users the additional safety brought by ECC memory might not justify +the cost. It’s up to you to determine what level of protection your data +requires.

+
+
+

Installation

+

OpenZFS is available for FreeBSD and all major Linux distributions. Refer to +the getting started section of the wiki for +links to installations instructions. If your distribution/OS isn’t +listed you can always build OpenZFS from the latest official +tarball.

+
+
+

Supported Architectures

+

OpenZFS is regularly compiled for the following architectures: +aarch64, arm, ppc, ppc64, x86, x86_64.

+
+
+

Supported Linux Kernels

+

The notes for a given +OpenZFS release will include a range of supported kernels. Point +releases will be tagged as needed in order to support the stable +kernel available from kernel.org. The +oldest supported kernel is 2.6.32 due to its prominence in Enterprise +Linux distributions.

+
+
+

32-bit vs 64-bit Systems

+

You are strongly encouraged to use a 64-bit kernel. OpenZFS +will build for 32-bit systems but you may encounter stability problems.

+

ZFS was originally developed for the Solaris kernel, which differs from some OpenZFS platforms in several significant ways. Perhaps most importantly for ZFS, it is common practice in the Solaris kernel to make heavy use of the virtual address space. However, use of the virtual address space is strongly discouraged in the Linux kernel. This is particularly true on 32-bit architectures, where the virtual address space is limited to 100M by default. Using the virtual address space on 64-bit Linux kernels is also discouraged, but the address space is so much larger than physical memory that it is less of an issue.

+

If you are bumping up against the virtual memory limit on a 32-bit +system you will see the following message in your system logs. You can +increase the virtual address size with the boot option vmalloc=512M.

+
vmap allocation for size 4198400 failed: use vmalloc=<size> to increase size.
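As a sketch, on a GRUB 2 based distribution the option could be appended to the kernel command line in /etc/default/grub (keep any options you already have) and the boot configuration regenerated; the exact file and regeneration command vary by distribution:

GRUB_CMDLINE_LINUX="vmalloc=512M"

$ sudo update-grub

On RHEL-style systems the equivalent regeneration command is grub2-mkconfig -o /boot/grub2/grub.cfg.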
+
+
+

However, even after making this change your system will likely not be entirely stable. Proper support for 32-bit systems is contingent upon the OpenZFS code being weaned off its dependence on virtual memory. This will take some time to do correctly, but it is planned for OpenZFS. This change is also expected to improve how efficiently OpenZFS manages the ARC and allow for tighter integration with the standard Linux page cache.

+
+
+

Booting from ZFS

+

Booting from ZFS on Linux is possible and many people do it. There are excellent walkthroughs available for Debian, Ubuntu, and Gentoo.

+

On FreeBSD 13+ booting from ZFS is supported out of the box.

+
+
+

Selecting /dev/ names when creating a pool (Linux)

+

There are different /dev/ names that can be used when creating a ZFS pool. Each option has advantages and drawbacks, and the right choice for your ZFS pool really depends on your requirements. For development and testing, using /dev/sdX naming is quick and easy. A typical home server might prefer /dev/disk/by-id/ naming for simplicity and readability, while very large configurations with multiple controllers, enclosures, and switches will likely prefer /dev/disk/by-vdev naming for maximum control. But in the end, how you choose to identify your disks is up to you.

+
    +
  • /dev/sdX, /dev/hdX: Best for development/test pools

    +
      +
    • Summary: The top level /dev/ names are the default for consistency +with other ZFS implementations. They are available under all Linux +distributions and are commonly used. However, because they are not +persistent they should only be used with ZFS for development/test +pools.

    • +
    • Benefits: This method is easy for a quick test, the names are +short, and they will be available on all Linux distributions.

    • +
    • Drawbacks: The names are not persistent and will change depending +on what order the disks are detected in. Adding or removing +hardware for your system can easily cause the names to change. You +would then need to remove the zpool.cache file and re-import the +pool using the new names.

    • +
    • Example: zpool create tank sda sdb

    • +
    +
  • +
  • /dev/disk/by-id/: Best for small pools (less than 10 disks)

    +
      +
    • Summary: This directory contains disk identifiers with more human +readable names. The disk identifier usually consists of the +interface type, vendor name, model number, device serial number, +and partition number. This approach is more user friendly because +it simplifies identifying a specific disk.

    • +
    • Benefits: Nice for small systems with a single disk controller. +Because the names are persistent and guaranteed not to change, it +doesn’t matter how the disks are attached to the system. You can +take them all out, randomly mix them up on the desk, put them +back anywhere in the system and your pool will still be +automatically imported correctly.

    • +
    • Drawbacks: Configuring redundancy groups based on physical +location becomes difficult and error prone. Unreliable on many +personal virtual machine setups because the software does not +generate persistent unique names by default.

    • +
    • Example: +zpool create tank scsi-SATA_Hitachi_HTS7220071201DP1D10DGG6HMRP

    • +
    +
  • +
  • /dev/disk/by-path/: Good for large pools (greater than 10 disks)

    +
      +
    • Summary: This approach is to use device names which include the +physical cable layout in the system, which means that a particular +disk is tied to a specific location. The name describes the PCI +bus number, as well as enclosure names and port numbers. This +allows the most control when configuring a large pool.

    • +
    • Benefits: Encoding the storage topology in the name is not only helpful for locating a disk in large installations, but it also allows you to explicitly lay out your redundancy groups over multiple adapters or enclosures.

    • +
    • Drawbacks: These names are long, cumbersome, and difficult for a +human to manage.

    • +
    • Example: +zpool create tank pci-0000:00:1f.2-scsi-0:0:0:0 pci-0000:00:1f.2-scsi-1:0:0:0

    • +
    +
  • +
  • /dev/disk/by-vdev/: Best for large pools (greater than 10 disks)

    +
      +
    • Summary: This approach provides administrative control over device naming using the configuration file /etc/zfs/vdev_id.conf. Names for disks in JBODs can be generated automatically to reflect their physical location by enclosure IDs and slot numbers. The names can also be manually assigned based on existing udev device links, including those in /dev/disk/by-path or /dev/disk/by-id. This allows you to pick your own unique meaningful names for the disks. These names will be displayed by all the zfs utilities, so they can be used to clarify the administration of a large, complex pool. See the vdev_id and vdev_id.conf man pages for further details.

    • +
    • Benefits: The main benefit of this approach is that it allows you +to choose meaningful human-readable names. Beyond that, the +benefits depend on the naming method employed. If the names are +derived from the physical path the benefits of /dev/disk/by-path +are realized. On the other hand, aliasing the names based on drive +identifiers or WWNs has the same benefits as using +/dev/disk/by-id.

    • +
    • Drawbacks: This method relies on having a /etc/zfs/vdev_id.conf +file properly configured for your system. To configure this file +please refer to section Setting up the /etc/zfs/vdev_id.conf +file. As with +benefits, the drawbacks of /dev/disk/by-id or /dev/disk/by-path +may apply depending on the naming method employed.

    • +
    • Example: zpool create tank mirror A1 B1 mirror A2 B2

    • +
    +
  • +
  • /dev/disk/by-uuid/: Not a great option

  • +
+
+
    +
  • Summary: One might think from the use of “UUID” that this would +be an ideal option - however, in practice, this ends up listing +one device per pool ID, which is not very useful for importing +pools with multiple disks.

  • +
+
+
    +
  • /dev/disk/by-partuuid/, /dev/disk/by-partlabel/: Works only for existing partitions

  • +
+
+
    +
  • Summary: a partition UUID is generated when the partition is created, so its usage is limited

  • +
  • Drawbacks: you can’t refer to a partition’s unique ID on an unpartitioned disk for zpool replace/add/attach, and you can’t easily find a failed disk without a mapping written down ahead of time.

  • +
+
+
+
+

Setting up the /etc/zfs/vdev_id.conf file

+

In order to use /dev/disk/by-vdev/ naming, the /etc/zfs/vdev_id.conf file must be configured. The format of this file is described in the vdev_id.conf man page. Several examples follow.

+

A non-multipath configuration with direct-attached SAS enclosures and an +arbitrary slot re-mapping.

+
multipath     no
+topology      sas_direct
+phys_per_port 4
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+
+#    Linux      Mapped
+#    Slot       Slot
+slot 0          2
+slot 1          6
+slot 2          0
+slot 3          3
+slot 4          5
+slot 5          7
+slot 6          4
+slot 7          1
+
+
+

A SAS-switch topology. Note that the channel keyword takes only two +arguments in this example.

+
topology      sas_switch
+
+#       SWITCH PORT  CHANNEL NAME
+channel 1            A
+channel 2            B
+channel 3            C
+channel 4            D
+
+
+

A multipath configuration. Note that channel names have multiple +definitions - one per physical path.

+
multipath yes
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         A
+channel 86:00.0  0         B
+
+
+

A configuration using device link aliases.

+
#     by-vdev
+#     name     fully qualified or base name of device link
+alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+alias d2       wwn-0x5000c5002def789e
+
+
+

After defining the new disk names run udevadm trigger to prompt udev +to parse the configuration file. This will result in a new +/dev/disk/by-vdev directory which is populated with symlinks to /dev/sdX +names. Following the first example above, you could then create the new +pool of mirrors with the following command:

+
$ zpool create tank \
+    mirror A0 B0 mirror A1 B1 mirror A2 B2 mirror A3 B3 \
+    mirror A4 B4 mirror A5 B5 mirror A6 B6 mirror A7 B7
+
+$ zpool status
+  pool: tank
+ state: ONLINE
+ scan: none requested
+config:
+
+    NAME        STATE     READ WRITE CKSUM
+    tank        ONLINE       0     0     0
+      mirror-0  ONLINE       0     0     0
+        A0      ONLINE       0     0     0
+        B0      ONLINE       0     0     0
+      mirror-1  ONLINE       0     0     0
+        A1      ONLINE       0     0     0
+        B1      ONLINE       0     0     0
+      mirror-2  ONLINE       0     0     0
+        A2      ONLINE       0     0     0
+        B2      ONLINE       0     0     0
+      mirror-3  ONLINE       0     0     0
+        A3      ONLINE       0     0     0
+        B3      ONLINE       0     0     0
+      mirror-4  ONLINE       0     0     0
+        A4      ONLINE       0     0     0
+        B4      ONLINE       0     0     0
+      mirror-5  ONLINE       0     0     0
+        A5      ONLINE       0     0     0
+        B5      ONLINE       0     0     0
+      mirror-6  ONLINE       0     0     0
+        A6      ONLINE       0     0     0
+        B6      ONLINE       0     0     0
+      mirror-7  ONLINE       0     0     0
+        A7      ONLINE       0     0     0
+        B7      ONLINE       0     0     0
+
+errors: No known data errors
+
+
+
+
+

Changing /dev/ names on an existing pool

+

Changing the /dev/ names on an existing pool can be done by simply +exporting the pool and re-importing it with the -d option to specify +which new names should be used. For example, to use the custom names in +/dev/disk/by-vdev:

+
$ zpool export tank
+$ zpool import -d /dev/disk/by-vdev tank
+
+
+
+
+

The /etc/zfs/zpool.cache file

+

Whenever a pool is imported on the system it will be added to the +/etc/zfs/zpool.cache file. This file stores pool configuration +information, such as the device names and pool state. If this file +exists when running the zpool import command then it will be used to +determine the list of pools available for import. When a pool is not +listed in the cache file it will need to be detected and imported using +the zpool import -d /dev/disk/by-id command.
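For example, to first list the pools visible on devices under /dev/disk/by-id and then import one of them (the pool name tank here is purely illustrative):

$ zpool import -d /dev/disk/by-id
$ zpool import -d /dev/disk/by-id tank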

+
+
+

Generating a new /etc/zfs/zpool.cache file

+

The /etc/zfs/zpool.cache file will be automatically updated when +your pool configuration is changed. However, if for some reason it +becomes stale you can force the generation of a new +/etc/zfs/zpool.cache file by setting the cachefile property on the +pool.

+
$ zpool set cachefile=/etc/zfs/zpool.cache tank
+
+
+

Conversely the cache file can be disabled by setting cachefile=none. +This is useful for failover configurations where the pool should always +be explicitly imported by the failover software.

+
$ zpool set cachefile=none tank
+
+
+
+
+

Sending and Receiving Streams

+
+

hole_birth Bugs

+

The hole_birth feature has had bugs, the result of which is that, if you do a zfs send -i (or -R, since it uses -i) from an affected dataset, the receiver will not see any checksum or other errors, but its contents will not match the source.

+

ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the +faulty metadata which causes this issue on the sender side.

+

For more details, see the hole_birth FAQ.

+
+
+

Sending Large Blocks

+

When sending incremental streams which contain large blocks (>128K), the --large-block flag must be specified. Inconsistent use of the flag between incremental sends can result in files being incorrectly zeroed when they are received. Raw encrypted send/recvs automatically imply the --large-block flag and are therefore unaffected.
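For example, an incremental send of a hypothetical dataset tank/data that uses large blocks could look like the following, where -L is the short form of --large-block and backup/data is an illustrative destination that already received the earlier snapshot:

$ zfs send -L -i tank/data@snap1 tank/data@snap2 | zfs receive backup/data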

+

For more details, see issue +6224.

+
+
+
+

CEPH/ZFS

+

There is a lot of tuning that can be done depending on the workload being put on CEPH/ZFS, as well as some general guidelines. Some are as follows:

+
+

ZFS Configuration

+

The CEPH filestore back-end relies heavily on xattrs; for optimal performance, all CEPH workloads will benefit from the following ZFS dataset parameters:

+
    +
  • xattr=sa

  • +
  • dnodesize=auto

  • +
+

Beyond that, rbd/cephfs focused workloads typically benefit from a small recordsize (16K-128K), while objectstore/s3/rados focused workloads benefit from a large recordsize (128K-1M).
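For instance, a dataset intended to back an OSD could be created with these properties set up front; the pool and dataset names below are purely illustrative, and the recordsize should be chosen to match your workload as described above:

$ zfs create -o xattr=sa -o dnodesize=auto -o recordsize=16K tank/ceph-osd0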

+
+
+

CEPH Configuration (ceph.conf)

+

Additionally, CEPH sets various values internally for handling xattrs based on the underlying filesystem. As CEPH only officially supports/detects XFS and BTRFS, for all other filesystems it falls back to rather limited “safe” values. On newer releases, the need for larger xattrs will prevent OSDs from even starting.

+

The officially recommended workaround (see +here) +has some severe downsides, and more specifically is geared toward +filesystems with “limited” xattr support such as ext4.

+

ZFS does not have an internal limit on xattr length, so we can treat it similarly to how CEPH treats XFS. We can override three internal values, setting them to the same values used with XFS (see here and here), and allow ZFS to be used without the severe limitations of the “official” workaround.

+
[osd]
+filestore_max_inline_xattrs = 10
+filestore_max_inline_xattr_size = 65536
+filestore_max_xattr_value_size = 65536
+
+
+
+
+

Other General Guidelines

+
    +
  • Use a separate journal device. Do not colocate the CEPH journal on a ZFS dataset if at all possible; this will quickly lead to terrible fragmentation, not to mention terrible performance up front even before fragmentation (the CEPH journal does a dsync for every write).

  • +
  • Use a SLOG device, even with a separate CEPH journal device. For some +workloads, skipping SLOG and setting logbias=throughput may be +acceptable.

  • +
  • Use a high-quality SLOG/CEPH journal device. A consumer-grade SSD, or even NVMe device, WILL NOT DO (Samsung 830, 840, 850, etc.) for a variety of reasons. CEPH will kill them quickly, on top of the performance being quite low in this use. Generally recommended devices are [Intel DC S3610, S3700, S3710, P3600, P3700], or [Samsung SM853, SM863], or better.

  • +
  • If using a high-quality SSD or NVMe device (as mentioned above), you CAN share the SLOG and CEPH journal on a single device with good results. A ratio of 4 HDDs to 1 SSD (Intel DC S3710 200GB), with each SSD partitioned (remember to align!) into 4x10GB (for ZIL/SLOG) + 4x20GB (for CEPH journal), has been reported to work well.

  • +
+

Again - CEPH + ZFS will KILL a consumer-grade SSD VERY quickly. Even ignoring the lack of power-loss protection and endurance ratings, you will be very disappointed with the performance of a consumer-grade SSD under such a workload.

+
+
+
+

Performance Considerations

+

To achieve good performance with your pool there are some easy best +practices you should follow.

+
    +
  • Evenly balance your disks across controllers: Often the limiting +factor for performance is not the disks but the controller. By +balancing your disks evenly across controllers you can often improve +throughput.

  • +
  • Create your pool using whole disks: When running zpool create use +whole disk names. This will allow ZFS to automatically partition the +disk to ensure correct alignment. It will also improve +interoperability with other OpenZFS implementations which honor the +wholedisk property.

  • +
  • Have enough memory: A minimum of 2GB of memory is recommended for +ZFS. Additional memory is strongly recommended when the compression +and deduplication features are enabled.

  • +
  • Improve performance by setting ashift=12: You may be able to +improve performance for some workloads by setting ashift=12. This +tuning can only be set when block devices are first added to a pool, +such as when the pool is first created or when a new vdev is added to +the pool. This tuning parameter can result in a decrease of capacity +for RAIDZ configurations.

  • +
+
+
+

Advanced Format Disks

+

Advanced Format (AF) is a disk format which natively uses a 4,096 byte sector size instead of 512 bytes. To maintain compatibility with legacy systems many AF disks emulate a sector size of 512 bytes. By default, ZFS will automatically detect the sector size of the drive; when a drive only reports its emulated 512 byte sector size, this can result in poorly aligned disk accesses which will greatly degrade the pool performance.

+

Therefore, the ability to set the ashift property has been added to the +zpool command. This allows users to explicitly assign the sector size +when devices are first added to a pool (typically at pool creation time +or adding a vdev to the pool). The ashift values range from 9 to 16 with +the default value 0 meaning that zfs should auto-detect the sector size. +This value is actually a bit shift value, so an ashift value for 512 +bytes is 9 (2^9 = 512) while the ashift value for 4,096 bytes is 12 +(2^12 = 4,096).

+

To force the pool to use 4,096 byte sectors at pool creation time, you +may run:

+
$ zpool create -o ashift=12 tank mirror sda sdb
+
+
+

To force the pool to use 4,096 byte sectors when adding a vdev to a +pool, you may run:

+
$ zpool add -o ashift=12 tank mirror sdc sdd
+
+
+
+
+

ZVOL used space larger than expected

+
+
Depending on the filesystem used on the zvol (e.g. ext4) and the usage +(e.g. deletion and creation of many files) the used and +referenced properties reported by the zvol may be larger than the +“actual” space that is being used as reported by the consumer.
+
This can happen due to the way some filesystems work: they prefer to allocate files in new, untouched blocks rather than in fragmented, previously used blocks marked as free. This forces ZFS to keep referencing every block that the underlying filesystem has ever touched.
+
This is in itself not much of a problem, since once the used property reaches the configured volsize the underlying filesystem will start reusing blocks. But a problem arises if you want to snapshot the zvol, as the space referenced by the snapshots will include those unused blocks.
+
+
+
This issue can be prevented by issuing a so-called trim (for example, with the fstrim command on Linux) to allow the kernel to tell ZFS which blocks are unused.
+
Issuing a trim before a snapshot is taken will ensure +a minimum snapshot size.
+
On Linux, adding the discard option for the mounted zvol in /etc/fstab effectively enables the kernel to issue the trim commands continuously, without the need to execute fstrim on demand.
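As a sketch, assuming a hypothetical zvol tank/vol1 formatted with ext4 and mounted at /data, an on-demand trim and a persistent discard mount could look like this (the last line is an /etc/fstab entry):

$ sudo fstrim -v /data

/dev/zvol/tank/vol1  /data  ext4  defaults,discard  0  2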
+
+
+
+

Using a zvol for a swap device on Linux

+

You may use a zvol as a swap device but you’ll need to configure it +appropriately.

+

CAUTION: for now, swap on a zvol may lead to deadlock; if you hit this, please send your logs here.

+
    +
  • Set the volume block size to match your system’s page size. This tuning prevents ZFS from having to perform read-modify-write operations on a larger block while the system is already low on memory.

  • +
  • Set the logbias=throughput and sync=always properties. Data written to the volume will be flushed immediately to disk, freeing up memory as quickly as possible.

  • +
  • Set primarycache=metadata to avoid keeping swap data in RAM via +the ARC.

  • +
  • Disable automatic snapshots of the swap device.

  • +
+
$ zfs create -V 4G -b $(getconf PAGESIZE) \
+    -o logbias=throughput -o sync=always \
+    -o primarycache=metadata \
+    -o com.sun:auto-snapshot=false rpool/swap
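Assuming your distribution creates device links under /dev/zvol/ (as udev does on most Linux systems), the volume from the example above can then be formatted and enabled as swap; treat this as a sketch rather than a complete guide:

$ sudo mkswap -f /dev/zvol/rpool/swap
$ sudo swapon /dev/zvol/rpool/swap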
+
+
+
+
+

Using ZFS on Xen Hypervisor or Xen Dom0 (Linux)

+

It is usually recommended to keep virtual machine storage and hypervisor pools quite separate, although a few people have managed to successfully deploy and run OpenZFS on the same machine configured as Dom0. There are a few caveats:

+
    +
  • Dedicate a fair amount of memory to Dom0 in grub.conf, for example:

    +
      +
    • dom0_mem=16384M,max:16384M

    • +
    +
  • +
  • Allocate no more than 30-40% of Dom0’s memory to ZFS in /etc/modprobe.d/zfs.conf, for example:

    +
      +
    • options zfs zfs_arc_max=6442450944

    • +
    +
  • +
  • Disable Xen’s auto-ballooning in /etc/xen/xl.conf

  • +
  • Watch out for any Xen bugs, such as this +one related to +ballooning

  • +
+
+
+

udisks2 creating /dev/mapper/ entries for zvol (Linux)

+

To prevent udisks2 from creating /dev/mapper entries that must be +manually removed or maintained during zvol remove / rename, create a +udev rule such as /etc/udev/rules.d/80-udisks2-ignore-zfs.rules with +the following contents:

+
ENV{ID_PART_ENTRY_SCHEME}=="gpt", ENV{ID_FS_TYPE}=="zfs_member", ENV{ID_PART_ENTRY_TYPE}=="6a898cc3-1dd2-11b2-99a6-080020736631", ENV{UDISKS_IGNORE}="1"
+
+
+
+
+

Licensing

+

License information can be found here.

+
+
+

Reporting a problem

+

You can open a new issue and search existing issues using the public issue tracker. The issue tracker is used to organize outstanding bug reports, feature requests, and other development tasks. Anyone may post comments after signing up for a GitHub account.

+

Please make sure that what you’re actually seeing is a bug and not a +support issue. If in doubt, please ask on the mailing list first, and if +you’re then asked to file an issue, do so.

+

When opening a new issue include this information at the top of the +issue:

+
    +
  • What distribution you’re using and the version.

  • +
  • What spl/zfs packages you’re using and the version.

  • +
  • Describe the problem you’re observing.

  • +
  • Describe how to reproduce the problem.

  • +
  • Include any warnings/errors/backtraces from the system logs.

  • +
+

When a new issue is opened it’s not uncommon for a developer to request additional information about the problem. In general, the more detail you share about a problem the quicker a developer can resolve it. For example, providing a simple test case is always exceptionally helpful. Be prepared to work with the developer looking into your bug in order to get it resolved. They may ask for information like:

+
    +
  • Your pool configuration as reported by zdb or zpool status.

  • +
  • Your hardware configuration, such as

    +
      +
    • Number of CPUs.

    • +
    • Amount of memory.

    • +
    • Whether your system has ECC memory.

    • +
    • Whether it is running under a VMM/Hypervisor.

    • +
    • Kernel version.

    • +
    • Values of the spl/zfs module parameters (one way to capture these is sketched after this list).

    • +
    +
  • +
  • Stack traces which may be logged to dmesg.

  • +
+
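For the module parameters mentioned above, one quick way to capture the current values on Linux is sketched below; the paths assume the zfs (and, on older releases, spl) kernel modules are loaded:

$ grep . /sys/module/zfs/parameters/* /sys/module/spl/parameters/* 2>/dev/null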
+
+

Does OpenZFS have a Code of Conduct?

+

Yes, the OpenZFS community has a code of conduct. See the Code of +Conduct for details.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Project and Community/Mailing Lists.html b/Project and Community/Mailing Lists.html new file mode 100644 index 000000000..2b35948ca --- /dev/null +++ b/Project and Community/Mailing Lists.html @@ -0,0 +1,171 @@ + + + + + + + Mailing Lists — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Mailing Lists

+ + + + + + + + + + + + + + + + + + + + + + + + + +

List

Description

List Archive

zfs-announce@list.zfsonlinux.org

A low-traffic list +for announcements +such as new releases

archive

zfs-discuss@list.zfsonlinux.org

A user discussion +list for issues +related to +functionality and +usability

archive

zfs-devel@list.zfsonlinux.org

A development list +for developers to +discuss technical +issues

archive

developer@open-zfs.org

A +platform-independent +mailing list for ZFS +developers to review +ZFS code and +architecture changes +from all platforms

archive

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Project and Community/Signing Keys.html b/Project and Community/Signing Keys.html new file mode 100644 index 000000000..ec52c76af --- /dev/null +++ b/Project and Community/Signing Keys.html @@ -0,0 +1,198 @@ + + + + + + + Signing Keys — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Signing Keys

+

All tagged ZFS on Linux releases are signed by the official maintainer for that branch. These signatures are automatically verified by GitHub and can be checked locally by downloading the maintainer’s public key.

+
+

Maintainers

+
+

Release branch (spl/zfs-*-release)

+
+
Maintainer: Ned Bass
+
Download: +pgp.mit.edu
+
Key ID: C77B9667
+
Fingerprint: 29D5 610E AE29 41E3 55A2 FE8A B974 67AA C77B 9667
+
+
+
Maintainer: Tony Hutter
+
Download: +pgp.mit.edu
+
Key ID: D4598027
+
Fingerprint: 4F3B A9AB 6D1F 8D68 3DC2 DFB5 6AD8 60EE D459 8027
+
+
+
+

Master branch (master)

+
+
Maintainer: Brian Behlendorf
+
Download: +pgp.mit.edu
+
Key ID: C6AF658B
+
Fingerprint: C33D F142 657E D1F7 C328 A296 0AB9 E991 C6AF 658B
+
+
+
+
+

Checking the Signature of a Git Tag

+

First import the public key listed above into your keyring.

+
$ gpg --keyserver pgp.mit.edu --recv C6AF658B
+gpg: requesting key C6AF658B from hkp server pgp.mit.edu
+gpg: key C6AF658B: "Brian Behlendorf <behlendorf1@llnl.gov>" not changed
+gpg: Total number processed: 1
+gpg:              unchanged: 1
+
+
+

After the public key is imported the signature of a git tag can be +verified as shown.

+
$ git tag --verify zfs-0.6.5
+object 7a27ad00ae142b38d4aef8cc0af7a72b4c0e44fe
+type commit
+tag zfs-0.6.5
+tagger Brian Behlendorf <behlendorf1@llnl.gov> 1441996302 -0700
+
+ZFS Version 0.6.5
+gpg: Signature made Fri 11 Sep 2015 11:31:42 AM PDT using DSA key ID C6AF658B
+gpg: Good signature from "Brian Behlendorf <behlendorf1@llnl.gov>"
+gpg:                 aka "Brian Behlendorf (LLNL) <behlendorf1@llnl.gov>"
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Project and Community/index.html b/Project and Community/index.html new file mode 100644 index 000000000..a10c73794 --- /dev/null +++ b/Project and Community/index.html @@ -0,0 +1,185 @@ + + + + + + + Project and Community — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Project and Community

+

OpenZFS is storage software which combines the functionality of traditional filesystems, volume managers, and more. OpenZFS includes protection against data corruption, support for high storage capacities, efficient data compression, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, remote replication with ZFS send and receive, and RAID-Z.

+

OpenZFS brings together developers from the illumos, Linux, FreeBSD and +OS X platforms, and a wide range of companies – both online and at the +annual OpenZFS Developer Summit. High-level goals of the project include +raising awareness of the quality, utility and availability of +open-source implementations of ZFS, encouraging open communication about +ongoing efforts toward improving open-source variants of ZFS, and +ensuring consistent reliability, functionality and performance of all +distributions of ZFS.

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_TableOfContents.html b/_TableOfContents.html new file mode 100644 index 000000000..2b6b5a390 --- /dev/null +++ b/_TableOfContents.html @@ -0,0 +1,199 @@ + + + + + + + <no title> — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/_images/draid-resilver-hours.png b/_images/draid-resilver-hours.png new file mode 100644 index 000000000..41899d28f Binary files /dev/null and b/_images/draid-resilver-hours.png differ diff --git a/_images/raidz_draid.png b/_images/raidz_draid.png new file mode 100644 index 000000000..b5617cd14 Binary files /dev/null and b/_images/raidz_draid.png differ diff --git a/_images/zof-logo.png b/_images/zof-logo.png new file mode 100644 index 000000000..0612f6056 Binary files /dev/null and b/_images/zof-logo.png differ diff --git a/_rediraffe_redirected.json b/_rediraffe_redirected.json new file mode 100644 index 000000000..a8b37081c --- /dev/null +++ b/_rediraffe_redirected.json @@ -0,0 +1 @@ +{"man/1/test-runner.1.rst": "man/master/1/test-runner.1.rst", "man/1/cstyle.1.rst": "man/master/1/cstyle.1.rst", "man/1/zvol_wait.1.rst": "man/master/1/zvol_wait.1.rst", "man/1/ztest.1.rst": "man/master/1/ztest.1.rst", "man/1/arcstat.1.rst": "man/master/1/arcstat.1.rst", "man/1/index.rst": "man/master/1/index.rst", "man/1/zhack.1.rst": "man/master/1/zhack.1.rst", "man/1/raidz_test.1.rst": "man/master/1/raidz_test.1.rst", "man/4/zfs.4.rst": "man/master/4/zfs.4.rst", "man/4/index.rst": "man/master/4/index.rst", "man/4/spl.4.rst": "man/master/4/spl.4.rst", "man/5/vdev_id.conf.5.rst": "man/master/5/vdev_id.conf.5.rst", "man/5/index.rst": "man/master/5/index.rst", "man/7/dracut.zfs.7.rst": "man/master/7/dracut.zfs.7.rst", "man/7/zfsconcepts.7.rst": "man/master/7/zfsconcepts.7.rst", "man/7/zpool-features.7.rst": "man/master/7/zpool-features.7.rst", "man/7/index.rst": "man/master/7/index.rst", "man/7/zpoolprops.7.rst": "man/master/7/zpoolprops.7.rst", "man/7/zfsprops.7.rst": "man/master/7/zfsprops.7.rst", "man/7/vdevprops.7.rst": "man/master/7/vdevprops.7.rst", "man/7/zpoolconcepts.7.rst": "man/master/7/zpoolconcepts.7.rst", "man/8/zfs_prepare_disk.8.rst": "man/master/8/zfs_prepare_disk.8.rst", "man/8/zfs-create.8.rst": "man/master/8/zfs-create.8.rst", "man/8/zpool-iostat.8.rst": "man/master/8/zpool-iostat.8.rst", "man/8/zfs-promote.8.rst": "man/master/8/zfs-promote.8.rst", "man/8/zfs-rollback.8.rst": "man/master/8/zfs-rollback.8.rst", "man/8/zfs-upgrade.8.rst": "man/master/8/zfs-upgrade.8.rst", "man/8/zfs-rename.8.rst": "man/master/8/zfs-rename.8.rst", "man/8/zpool-import.8.rst": "man/master/8/zpool-import.8.rst", "man/8/zpool-scrub.8.rst": "man/master/8/zpool-scrub.8.rst", "man/8/zstream.8.rst": "man/master/8/zstream.8.rst", "man/8/zfs-release.8.rst": "man/master/8/zfs-release.8.rst", "man/8/zfs-groupspace.8.rst": "man/master/8/zfs-groupspace.8.rst", "man/8/zfs-hold.8.rst": "man/master/8/zfs-hold.8.rst", "man/8/zfs-unzone.8.rst": "man/master/8/zfs-unzone.8.rst", "man/8/zpool-list.8.rst": "man/master/8/zpool-list.8.rst", "man/8/zpool-labelclear.8.rst": "man/master/8/zpool-labelclear.8.rst", "man/8/zfs-load-key.8.rst": "man/master/8/zfs-load-key.8.rst", "man/8/zfs-list.8.rst": "man/master/8/zfs-list.8.rst", "man/8/zpool-export.8.rst": "man/master/8/zpool-export.8.rst", "man/8/zpool-status.8.rst": "man/master/8/zpool-status.8.rst", "man/8/zfs-mount.8.rst": "man/master/8/zfs-mount.8.rst", "man/8/zfs-unjail.8.rst": "man/master/8/zfs-unjail.8.rst", "man/8/zpool-sync.8.rst": "man/master/8/zpool-sync.8.rst", "man/8/zpool-reopen.8.rst": "man/master/8/zpool-reopen.8.rst", "man/8/zpool-destroy.8.rst": "man/master/8/zpool-destroy.8.rst", "man/8/zpool-create.8.rst": "man/master/8/zpool-create.8.rst", "man/8/zfs-zone.8.rst": "man/master/8/zfs-zone.8.rst", 
"man/8/zfs-snapshot.8.rst": "man/master/8/zfs-snapshot.8.rst", "man/8/zfs-clone.8.rst": "man/master/8/zfs-clone.8.rst", "man/8/zfs-inherit.8.rst": "man/master/8/zfs-inherit.8.rst", "man/8/zpool-resilver.8.rst": "man/master/8/zpool-resilver.8.rst", "man/8/zpool-set.8.rst": "man/master/8/zpool-set.8.rst", "man/8/zfs-project.8.rst": "man/master/8/zfs-project.8.rst", "man/8/zfs-program.8.rst": "man/master/8/zfs-program.8.rst", "man/8/zpool-events.8.rst": "man/master/8/zpool-events.8.rst", "man/8/index.rst": "man/master/8/index.rst", "man/8/zpool-history.8.rst": "man/master/8/zpool-history.8.rst", "man/8/zfs-recv.8.rst": "man/master/8/zfs-recv.8.rst", "man/8/zpool-replace.8.rst": "man/master/8/zpool-replace.8.rst", "man/8/zpool-clear.8.rst": "man/master/8/zpool-clear.8.rst", "man/8/zpool-upgrade.8.rst": "man/master/8/zpool-upgrade.8.rst", "man/8/zpool-attach.8.rst": "man/master/8/zpool-attach.8.rst", "man/8/zpool-split.8.rst": "man/master/8/zpool-split.8.rst", "man/8/zpool-detach.8.rst": "man/master/8/zpool-detach.8.rst", "man/8/zfs-projectspace.8.rst": "man/master/8/zfs-projectspace.8.rst", "man/8/zfs-diff.8.rst": "man/master/8/zfs-diff.8.rst", "man/8/zfs-set.8.rst": "man/master/8/zfs-set.8.rst", "man/8/zfs-redact.8.rst": "man/master/8/zfs-redact.8.rst", "man/8/zfs-mount-generator.8.rst": "man/master/8/zfs-mount-generator.8.rst", "man/8/zed.8.rst": "man/master/8/zed.8.rst", "man/8/zfs-unmount.8.rst": "man/master/8/zfs-unmount.8.rst", "man/8/zfs-wait.8.rst": "man/master/8/zfs-wait.8.rst", "man/8/zpool-online.8.rst": "man/master/8/zpool-online.8.rst", "man/8/zfs-bookmark.8.rst": "man/master/8/zfs-bookmark.8.rst", "man/8/zdb.8.rst": "man/master/8/zdb.8.rst", "man/8/zpool-initialize.8.rst": "man/master/8/zpool-initialize.8.rst", "man/8/zpool-checkpoint.8.rst": "man/master/8/zpool-checkpoint.8.rst", "man/8/zinject.8.rst": "man/master/8/zinject.8.rst", "man/8/zpool-trim.8.rst": "man/master/8/zpool-trim.8.rst", "man/8/zfs-receive.8.rst": "man/master/8/zfs-receive.8.rst", "man/8/zpool-add.8.rst": "man/master/8/zpool-add.8.rst", "man/8/mount.zfs.8.rst": "man/master/8/mount.zfs.8.rst", "man/8/zfs-jail.8.rst": "man/master/8/zfs-jail.8.rst", "man/8/zpool-reguid.8.rst": "man/master/8/zpool-reguid.8.rst", "man/8/zgenhostid.8.rst": "man/master/8/zgenhostid.8.rst", "man/8/zfs-unallow.8.rst": "man/master/8/zfs-unallow.8.rst", "man/8/zfs-send.8.rst": "man/master/8/zfs-send.8.rst", "man/8/vdev_id.8.rst": "man/master/8/vdev_id.8.rst", "man/8/zstreamdump.8.rst": "man/master/8/zstreamdump.8.rst", "man/8/zfs-share.8.rst": "man/master/8/zfs-share.8.rst", "man/8/zpool-get.8.rst": "man/master/8/zpool-get.8.rst", "man/8/zfs-change-key.8.rst": "man/master/8/zfs-change-key.8.rst", "man/8/fsck.zfs.8.rst": "man/master/8/fsck.zfs.8.rst", "man/8/zpool-wait.8.rst": "man/master/8/zpool-wait.8.rst", "man/8/zfs.8.rst": "man/master/8/zfs.8.rst", "man/8/zpool_influxdb.8.rst": "man/master/8/zpool_influxdb.8.rst", "man/8/zfs_ids_to_path.8.rst": "man/master/8/zfs_ids_to_path.8.rst", "man/8/zpool-remove.8.rst": "man/master/8/zpool-remove.8.rst", "man/8/zfs-allow.8.rst": "man/master/8/zfs-allow.8.rst", "man/8/zpool-offline.8.rst": "man/master/8/zpool-offline.8.rst", "man/8/zfs-unload-key.8.rst": "man/master/8/zfs-unload-key.8.rst", "man/8/zfs-get.8.rst": "man/master/8/zfs-get.8.rst", "man/8/zpool.8.rst": "man/master/8/zpool.8.rst", "man/8/zfs-userspace.8.rst": "man/master/8/zfs-userspace.8.rst", "man/8/zfs-destroy.8.rst": "man/master/8/zfs-destroy.8.rst"} \ No newline at end of file diff --git a/_sources/404.rst.txt 
b/_sources/404.rst.txt new file mode 100644 index 000000000..7379594cf --- /dev/null +++ b/_sources/404.rst.txt @@ -0,0 +1,6 @@ +:orphan: + +404 Page not found. +=================== + +Please use left menu or search to find interested page. diff --git a/_sources/Basic Concepts/Checksums.rst.txt b/_sources/Basic Concepts/Checksums.rst.txt new file mode 100644 index 000000000..76ebfde4c --- /dev/null +++ b/_sources/Basic Concepts/Checksums.rst.txt @@ -0,0 +1,142 @@ +Checksums and Their Use in ZFS +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +End-to-end checksums are a key feature of ZFS and an important +differentiator for ZFS over other RAID implementations and filesystems. +Advantages of end-to-end checksums include: + +- detects data corruption upon reading from media +- blocks that are detected as corrupt are automatically repaired if + possible, by using the RAID protection in suitably configured pools, + or redundant copies (see the zfs ``copies`` property) +- periodic scrubs can check data to detect and repair latent media + degradation (bit rot) and corruption from other sources +- checksums on ZFS replication streams, ``zfs send`` and + ``zfs receive``, ensure the data received is not corrupted by + intervening storage or transport mechanisms + +Checksum Algorithms +^^^^^^^^^^^^^^^^^^^ + +The checksum algorithms in ZFS can be changed for datasets (filesystems +or volumes). The checksum algorithm used for each block is stored in the +block pointer (metadata). The block checksum is calculated when the +block is written, so changing the algorithm only affects writes +occurring after the change. + +The checksum algorithm for a dataset can be changed by setting the +``checksum`` property: + +.. code:: bash + + zfs set checksum=sha256 pool_name/dataset_name + ++-----------+--------------+------------------------+-------------------------+ +| Checksum | Ok for dedup | Compatible with | Notes | +| | and nopwrite?| other ZFS | | +| | | implementations? 
| | ++===========+==============+========================+=========================+ +| on | see notes | yes | ``on`` is a | +| | | | short hand for | +| | | | ``fletcher4`` | +| | | | for non-deduped | +| | | | datasets and | +| | | | ``sha256`` for | +| | | | deduped | +| | | | datasets | ++-----------+--------------+------------------------+-------------------------+ +| off | no | yes | Do not do use | +| | | | ``off`` | ++-----------+--------------+------------------------+-------------------------+ +| fletcher2 | no | yes | Deprecated | +| | | | implementation | +| | | | of Fletcher | +| | | | checksum, use | +| | | | ``fletcher4`` | +| | | | instead | ++-----------+--------------+------------------------+-------------------------+ +| fletcher4 | no | yes | Fletcher | +| | | | algorithm, also | +| | | | used for | +| | | | ``zfs send`` | +| | | | streams | ++-----------+--------------+------------------------+-------------------------+ +| sha256 | yes | yes | Default for | +| | | | deduped | +| | | | datasets | ++-----------+--------------+------------------------+-------------------------+ +| noparity | no | yes | Do not use | +| | | | ``noparity`` | ++-----------+--------------+------------------------+-------------------------+ +| sha512 | yes | requires pool | salted | +| | | feature | ``sha512`` | +| | | ``org.illumos:sha512`` | currently not | +| | | | supported for | +| | | | any filesystem | +| | | | on the boot | +| | | | pools | ++-----------+--------------+------------------------+-------------------------+ +| skein | yes | requires pool | salted | +| | | feature | ``skein`` | +| | | ``org.illumos:skein`` | currently not | +| | | | supported for | +| | | | any filesystem | +| | | | on the boot | +| | | | pools | ++-----------+--------------+------------------------+-------------------------+ +| edonr | see notes | requires pool | salted | +| | | feature | ``edonr`` | +| | | ``org.illumos:edonr`` | currently not | +| | | | supported for | +| | | | any filesystem | +| | | | on the boot | +| | | | pools | +| | | | | +| | | | In an abundance of | +| | | | caution, Edon-R requires| +| | | | verification when used | +| | | | with dedup, so it will | +| | | | automatically use | +| | | | ``verify``. | +| | | | | ++-----------+--------------+------------------------+-------------------------+ +| blake3 | yes | requires pool | salted | +| | | feature | ``blake3`` | +| | | ``org.openzfs:blake3`` | currently not | +| | | | supported for | +| | | | any filesystem | +| | | | on the boot | +| | | | pools | ++-----------+--------------+------------------------+-------------------------+ + +Checksum Accelerators +^^^^^^^^^^^^^^^^^^^^^ + +ZFS has the ability to offload checksum operations to the Intel +QuickAssist Technology (QAT) adapters. + +Checksum Microbenchmarks +^^^^^^^^^^^^^^^^^^^^^^^^ + +Some ZFS features use microbenchmarks when the ``zfs.ko`` kernel module +is loaded to determine the optimal algorithm for checksums. The results +of the microbenchmarks are observable in the ``/proc/spl/kstat/zfs`` +directory. The winning algorithm is reported as the "fastest" and +becomes the default. The default can be overridden by setting zfs module +parameters. 
+ +========= ==================================== ======================== +Checksum Results Filename ``zfs`` module parameter +========= ==================================== ======================== +Fletcher4 /proc/spl/kstat/zfs/fletcher_4_bench zfs_fletcher_4_impl +all-other /proc/spl/kstat/zfs/chksum_bench zfs_blake3_impl, + zfs_sha256_impl, + zfs_sha512_impl +========= ==================================== ======================== + +Disabling Checksums +^^^^^^^^^^^^^^^^^^^ + +While it may be tempting to disable checksums to improve CPU +performance, it is widely considered by the ZFS community to be an +extrodinarily bad idea. Don't disable checksums. diff --git a/_sources/Basic Concepts/Feature Flags.rst.txt b/_sources/Basic Concepts/Feature Flags.rst.txt new file mode 100644 index 000000000..e9b3a2835 --- /dev/null +++ b/_sources/Basic Concepts/Feature Flags.rst.txt @@ -0,0 +1,53 @@ +Feature Flags +============= + +ZFS on-disk formats were originally versioned with a single number, +which increased whenever the format changed. The numbered approach was +suitable when development of ZFS was driven by a single organisation. + +For distributed development of OpenZFS, version numbering was +unsuitable. Any change to the number would have required agreement, +across all implementations, of each change to the on-disk format. + +OpenZFS feature flags – an alternative to traditional version numbering +– allow **a uniquely named pool property for each change to the on-disk +format**. This approach supports: + +- format changes that are independent +- format changes that depend on each other. + +Compatibility +------------- + +Where all *features* that are used by a pool are supported by multiple +implementations of OpenZFS, the on-disk format is portable across those +implementations. + +Features that are exclusive when enabled should be periodically ported +to all distributions. + +Reference materials +------------------- + +`ZFS Feature Flags `_ +(Christopher Siden, 2012-01, in the Internet +Archive Wayback Machine) in particular: "… Legacy version numbers still +exist for pool versions 1-28 …". + +`zpool-features(7) man page <../man/7/zpool-features.7.html>`_ - OpenZFS + +`zpool-features `__ (5) – illumos + +Feature flags implementation per OS +----------------------------------- + +.. raw:: html + +
+ +.. raw:: html + :file: ../_build/zfs_feature_matrix.html + +.. raw:: html + +
diff --git a/_sources/Basic Concepts/RAIDZ.rst.txt b/_sources/Basic Concepts/RAIDZ.rst.txt new file mode 100644 index 000000000..4675690e2 --- /dev/null +++ b/_sources/Basic Concepts/RAIDZ.rst.txt @@ -0,0 +1,91 @@ +RAIDZ +===== + +tl;dr: RAIDZ is effective for large block sizes and sequential workloads. + +Introduction +~~~~~~~~~~~~ + +RAIDZ is a variation on RAID-5 that allows for better distribution of parity +and eliminates the RAID-5 “write hole” (in which data and parity become +inconsistent after a power loss). +Data and parity is striped across all disks within a raidz group. + +A raidz group can have single, double, or triple parity, meaning that the raidz +group can sustain one, two, or three failures, respectively, without losing any +data. The ``raidz1`` vdev type specifies a single-parity raidz group; the ``raidz2`` +vdev type specifies a double-parity raidz group; and the ``raidz3`` vdev type +specifies a triple-parity raidz group. The ``raidz`` vdev type is an alias for +raidz1. + +A raidz group of N disks of size X with P parity disks can hold +approximately (N-P)*X bytes and can withstand P devices failing without +losing data. The minimum number of devices in a raidz group is one more +than the number of parity disks. The recommended number is between 3 and 9 +to help increase performance. + + +Space efficiency +~~~~~~~~~~~~~~~~ + +Actual used space for a block in RAIDZ is based on several points: + +- minimal write size is disk sector size (can be set via `ashift` vdev parameter) + +- stripe width in RAIDZ is dynamic, and starts with at least one data block part, or up to + ``disks count`` minus ``parity number`` parts of data block + +- one block of data with size of ``recordsize`` is + splitted equally via ``sector size`` parts + and written on each stripe on RAIDZ vdev +- each stripe of data will have a part of block + +- in addition to data one, two or three blocks of parity should be written, + one per disk; so, for raidz2 of 5 disks there will be 3 blocks of data and + 2 blocks of parity + +Due to these inputs, if ``recordsize`` is less or equal to sector size, +then RAIDZ's parity size will be effictively equal to mirror with same redundancy. +For example, for raidz1 of 3 disks with ``ashift=12`` and ``recordsize=4K`` +we will allocate on disk: + +- one 4K block of data + +- one 4K parity block + +and usable space ratio will be 50%, same as with double mirror. + + +Another example for ``ashift=12`` and ``recordsize=128K`` for raidz1 of 3 disks: + +- total stripe width is 3 + +- one stripe can have up to 2 data parts of 4K size because of 1 parity blocks + +- we will have 128K/8k = 16 stripes with 8K of data and 4K of parity each + +- 16 stripes each with 12k, means we write 192k to store 128k + +so usable space ratio in this case will be 66%. + + +The more disks RAIDZ has, the wider the stripe, the greater the space +efficiency. + +You can find actual parity cost per RAIDZ size here: + +.. raw:: html + + + +(`source `__) + + +Performance considerations +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Write +^^^^^ + +Because of full stripe width, one block write will write stripe part on each disk. +One RAIDZ vdev has a write IOPS of one slowest disk because of that in worst case. diff --git a/_sources/Basic Concepts/Troubleshooting.rst.txt b/_sources/Basic Concepts/Troubleshooting.rst.txt new file mode 100644 index 000000000..65d906008 --- /dev/null +++ b/_sources/Basic Concepts/Troubleshooting.rst.txt @@ -0,0 +1,105 @@ +Troubleshooting +=============== + +.. 
todo:: + This page is a draft. + +This page contains tips for troubleshooting ZFS on Linux and what info +developers might want for bug triage. + +- `About Log Files <#about-log-files>`__ + + - `Generic Kernel Log <#generic-kernel-log>`__ + - `ZFS Kernel Module Debug + Messages <#zfs-kernel-module-debug-messages>`__ + +- `Unkillable Process <#unkillable-process>`__ +- `ZFS Events <#zfs-events>`__ + +-------------- + +About Log Files +--------------- + +Log files can be very useful for troubleshooting. In some cases, +interesting information is stored in multiple log files that are +correlated to system events. + +Pro tip: logging infrastructure tools like *elasticsearch*, *fluentd*, +*influxdb*, or *splunk* can simplify log analysis and event correlation. + +Generic Kernel Log +~~~~~~~~~~~~~~~~~~ + +Typically, Linux kernel log messages are available from ``dmesg -T``, +``/var/log/syslog``, or where kernel log messages are sent (eg by +``rsyslogd``). + +ZFS Kernel Module Debug Messages +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ZFS kernel modules use an internal log buffer for detailed logging +information. This log information is available in the pseudo file +``/proc/spl/kstat/zfs/dbgmsg`` for ZFS builds where ZFS module parameter +`zfs_dbgmsg_enable = +1 `__ + +-------------- + +Unkillable Process +------------------ + +Symptom: ``zfs`` or ``zpool`` command appear hung, does not return, and +is not killable + +Likely cause: kernel thread hung or panic + +Log files of interest: `Generic Kernel Log <#generic-kernel-log>`__, +`ZFS Kernel Module Debug Messages <#zfs-kernel-module-debug-messages>`__ + +Important information: if a kernel thread is stuck, then a backtrace of +the stuck thread can be in the logs. In some cases, the stuck thread is +not logged until the deadman timer expires. See also `debug +tunables `__ + +-------------- + +ZFS Events +---------- + +ZFS uses an event-based messaging interface for communication of +important events to other consumers running on the system. The ZFS Event +Daemon (zed) is a userland daemon that listens for these events and +processes them. zed is extensible so you can write shell scripts or +other programs that subscribe to events and take action. For example, +the script usually installed at ``/etc/zfs/zed.d/all-syslog.sh`` writes +a formatted event message to ``syslog``. See the man page for ``zed(8)`` +for more information. + +A history of events is also available via the ``zpool events`` command. +This history begins at ZFS kernel module load and includes events from +any pool. These events are stored in RAM and limited in count to a value +determined by the kernel tunable +`zfs_event_len_max `__. +``zed`` has an internal throttling mechanism to prevent overconsumption +of system resources processing ZFS events. + +More detailed information about events is observable using +``zpool events -v`` The contents of the verbose events is subject to +change, based on the event and information available at the time of the +event. + +Each event has a class identifier used for filtering event types. +Commonly seen events are those related to pool management with class +``sysevent.fs.zfs.*`` including import, export, configuration updates, +and ``zpool history`` updates. + +Events related to errors are reported as class ``ereport.*`` These can +be invaluable for troubleshooting. Some faults can cause multiple +ereports as various layers of the software deal with the fault. 
For +example, on a simple pool without parity protection, a faulty disk could +cause an ``ereport.io`` during a read from the disk that results in an +``erport.fs.zfs.checksum`` at the pool level. These events are also +reflected by the error counters observed in ``zpool status`` If you see +checksum or read/write errors in ``zpool status`` then there should be +one or more corresponding ereports in the ``zpool events`` output. diff --git a/_sources/Basic Concepts/dRAID Howto.rst.txt b/_sources/Basic Concepts/dRAID Howto.rst.txt new file mode 100644 index 000000000..79d16d294 --- /dev/null +++ b/_sources/Basic Concepts/dRAID Howto.rst.txt @@ -0,0 +1,248 @@ +dRAID +===== + +.. note:: + This page describes functionality which has been added for the + OpenZFS 2.1.0 release, it is not in the OpenZFS 2.0.0 release. + +Introduction +~~~~~~~~~~~~ + +`dRAID`_ is a variant of raidz that provides integrated distributed hot +spares which allows for faster resilvering while retaining the benefits +of raidz. A dRAID vdev is constructed from multiple internal raidz +groups, each with D data devices and P parity devices. These groups +are distributed over all of the children in order to fully utilize the +available disk performance. This is known as parity declustering and +it has been an active area of research. The image below is simplified, +but it helps illustrate this key difference between dRAID and raidz. + +|draid1| + +Additionally, a dRAID vdev must shuffle its child vdevs in such a way +that regardless of which drive has failed, the rebuild IO (both read +and write) will distribute evenly among all surviving drives. This +is accomplished by using carefully chosen precomputed permutation +maps. This has the advantage of both keeping pool creation fast and +making it impossible for the mapping to be damaged or lost. + +Another way dRAID differs from raidz is that it uses a fixed stripe +width (padding as necessary with zeros). This allows a dRAID vdev to +be sequentially resilvered, however the fixed stripe width significantly +effects both usable capacity and IOPS. For example, with the default +D=8 and 4k disk sectors the minimum allocation size is 32k. If using +compression, this relatively large allocation size can reduce the +effective compression ratio. When using ZFS volumes and dRAID the +default volblocksize property is increased to account for the allocation +size. If a dRAID pool will hold a significant amount of small blocks, +it is recommended to also add a mirrored special vdev to store those +blocks. + +In regards to IO/s, performance is similar to raidz since for any +read all D data disks must be accessed. Delivered random IOPS can be +reasonably approximated as floor((N-S)/(D+P))*. + +In summary dRAID can provide the same level of redundancy and +performance as raidz, while also providing a fast integrated distributed +spare. + +Create a dRAID vdev +~~~~~~~~~~~~~~~~~~~ + +A dRAID vdev is created like any other by using the ``zpool create`` +command and enumerating the disks which should be used. + +:: + + # zpool create draid[1,2,3] + +Like raidz, the parity level is specified immediately after the ``draid`` +vdev type. However, unlike raidz additional colon separated options can be +specified. The most important of which is the ``:s`` option which +controls the number of distributed hot spares to create. By default, no +spares are created. The ``:d`` option can be specified to set the +number of data devices to use in each RAID stripe (D+P). 
When unspecified +reasonable defaults are chosen. + +:: + + # zpool create draid[][:d][:c][:s] + +- **parity** - The parity level (1-3). Defaults to one. + +- **data** - The number of data devices per redundancy group. In general + a smaller value of D will increase IOPS, improve the compression ratio, + and speed up resilvering at the expense of total usable capacity. + Defaults to 8, unless N-P-S is less than 8. + +- **children** - The expected number of children. Useful as a cross-check + when listing a large number of devices. An error is returned when the + provided number of children differs. + +- **spares** - The number of distributed hot spares. Defaults to zero. + +For example, to create an 11 disk dRAID pool with 4+1 redundancy and a +single distributed spare the command would be: + +:: + + # zpool create tank draid:4d:1s:11c /dev/sd[a-k] + # zpool status tank + + pool: tank + state: ONLINE + config: + + NAME STATE READ WRITE CKSUM + tank ONLINE 0 0 0 + draid1:4d:11c:1s-0 ONLINE 0 0 0 + sda ONLINE 0 0 0 + sdb ONLINE 0 0 0 + sdc ONLINE 0 0 0 + sdd ONLINE 0 0 0 + sde ONLINE 0 0 0 + sdf ONLINE 0 0 0 + sdg ONLINE 0 0 0 + sdh ONLINE 0 0 0 + sdi ONLINE 0 0 0 + sdj ONLINE 0 0 0 + sdk ONLINE 0 0 0 + spares + draid1-0-0 AVAIL + +Note that the dRAID vdev name, ``draid1:4d:11c:1s``, fully describes the +configuration and all of disks which are part of the dRAID are listed. +Furthermore, the logical distributed hot spare is shown as an available +spare disk. + +Rebuilding to a Distributed Spare +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +One of the major advantages of dRAID is that it supports both sequential +and traditional healing resilvers. When performing a sequential resilver +to a distributed hot spare the performance scales with the number of disks +divided by the stripe width (D+P). This can greatly reduce resilver times +and restore full redundancy in a fraction of the usual time. For example, +the following graph shows the observed sequential resilver time in hours +for a 90 HDD based dRAID filled to 90% capacity. + +|draid-resilver| + +When using dRAID and a distributed spare, the process for handling a +failed disk is almost identical to raidz with a traditional hot spare. +When a disk failure is detected the ZFS Event Daemon (ZED) will start +rebuilding to a spare if one is available. The only difference is that +for dRAID a sequential resilver is started, while a healing resilver must +be used for raidz. + +:: + + # echo offline >/sys/block/sdg/device/state + # zpool replace -s tank sdg draid1-0-0 + # zpool status + + pool: tank + state: DEGRADED + status: One or more devices is currently being resilvered. The pool will + continue to function, possibly in a degraded state. + action: Wait for the resilver to complete. 
+ scan: resilver (draid1:4d:11c:1s-0) in progress since Tue Nov 24 14:34:25 2020 + 3.51T scanned at 13.4G/s, 1.59T issued 6.07G/s, 6.13T total + 326G resilvered, 57.17% done, 00:03:21 to go + config: + + NAME STATE READ WRITE CKSUM + tank DEGRADED 0 0 0 + draid1:4d:11c:1s-0 DEGRADED 0 0 0 + sda ONLINE 0 0 0 (resilvering) + sdb ONLINE 0 0 0 (resilvering) + sdc ONLINE 0 0 0 (resilvering) + sdd ONLINE 0 0 0 (resilvering) + sde ONLINE 0 0 0 (resilvering) + sdf ONLINE 0 0 0 (resilvering) + spare-6 DEGRADED 0 0 0 + sdg UNAVAIL 0 0 0 + draid1-0-0 ONLINE 0 0 0 (resilvering) + sdh ONLINE 0 0 0 (resilvering) + sdi ONLINE 0 0 0 (resilvering) + sdj ONLINE 0 0 0 (resilvering) + sdk ONLINE 0 0 0 (resilvering) + spares + draid1-0-0 INUSE currently in use + +While both types of resilvering achieve the same goal it's worth taking +a moment to summarize the key differences. + +- A traditional healing resilver scans the entire block tree. This + means the checksum for each block is available while it's being + repaired and can be immediately verified. The downside is this + creates a random read workload which is not ideal for performance. + +- A sequential resilver instead scans the space maps in order to + determine what space is allocated and what must be repaired. + This rebuild process is not limited to block boundaries and can + sequentially reads from the disks and make repairs using larger + I/Os. The price to pay for this performance improvement is that + the block checksums cannot be verified while resilvering. Therefore, + a scrub is started to verify the checksums after the sequential + resilver completes. + +For a more in depth explanation of the differences between sequential +and healing resilvering check out these `sequential resilver`_ slides +which were presented at the OpenZFS Developer Summit. + +Rebalancing +~~~~~~~~~~~ + +Distributed spare space can be made available again by simply replacing +any failed drive with a new drive. This process is called rebalancing +and is essentially a resilver. When performing rebalancing a healing +resilver is recommended since the pool is no longer degraded. This +ensures all checksums are verified when rebuilding to the new disk +and eliminates the need to perform a subsequent scrub of the pool. + +:: + + # zpool replace tank sdg sdl + # zpool status + + pool: tank + state: DEGRADED + status: One or more devices is currently being resilvered. The pool will + continue to function, possibly in a degraded state. + action: Wait for the resilver to complete. + scan: resilver in progress since Tue Nov 24 14:45:16 2020 + 6.13T scanned at 7.82G/s, 6.10T issued at 7.78G/s, 6.13T total + 565G resilvered, 99.44% done, 00:00:04 to go + config: + + NAME STATE READ WRITE CKSUM + tank DEGRADED 0 0 0 + draid1:4d:11c:1s-0 DEGRADED 0 0 0 + sda ONLINE 0 0 0 (resilvering) + sdb ONLINE 0 0 0 (resilvering) + sdc ONLINE 0 0 0 (resilvering) + sdd ONLINE 0 0 0 (resilvering) + sde ONLINE 0 0 0 (resilvering) + sdf ONLINE 0 0 0 (resilvering) + spare-6 DEGRADED 0 0 0 + replacing-0 DEGRADED 0 0 0 + sdg UNAVAIL 0 0 0 + sdl ONLINE 0 0 0 (resilvering) + draid1-0-0 ONLINE 0 0 0 (resilvering) + sdh ONLINE 0 0 0 (resilvering) + sdi ONLINE 0 0 0 (resilvering) + sdj ONLINE 0 0 0 (resilvering) + sdk ONLINE 0 0 0 (resilvering) + spares + draid1-0-0 INUSE currently in use + +After the resilvering completes the distributed hot spare is once again +available for use and the pool has been restored to its normal healthy +state. + +.. |draid1| image:: /_static/img/raidz_draid.png +.. 
|draid-resilver| image:: /_static/img/draid-resilver-hours.png +.. _dRAID: https://docs.google.com/presentation/d/1uo0nBfY84HIhEqGWEx-Tbm8fPbJKtIP3ICo4toOPcJo/edit +.. _sequential resilver: https://docs.google.com/presentation/d/1vLsgQ1MaHlifw40C9R2sPsSiHiQpxglxMbK2SMthu0Q/edit#slide=id.g995720a6cf_1_39 +.. _custom packages: https://openzfs.github.io/openzfs-docs/Developer%20Resources/Custom%20Packages.html# diff --git a/_sources/Basic Concepts/index.rst.txt b/_sources/Basic Concepts/index.rst.txt new file mode 100644 index 000000000..e7329870a --- /dev/null +++ b/_sources/Basic Concepts/index.rst.txt @@ -0,0 +1,9 @@ +Basic Concepts +============== + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + :glob: + + * diff --git a/_sources/Developer Resources/Buildbot Options.rst.txt b/_sources/Developer Resources/Buildbot Options.rst.txt new file mode 100644 index 000000000..9727bd54d --- /dev/null +++ b/_sources/Developer Resources/Buildbot Options.rst.txt @@ -0,0 +1,248 @@ +Buildbot Options +================ + +There are a number of ways to control the ZFS Buildbot at a commit +level. This page provides a summary of various options that the ZFS +Buildbot supports and how it impacts testing. More detailed information +regarding its implementation can be found at the `ZFS Buildbot Github +page `__. + +Choosing Builders +----------------- + +By default, all commits in your ZFS pull request are compiled by the +BUILD builders. Additionally, the top commit of your ZFS pull request is +tested by TEST builders. However, there is the option to override which +types of builder should be used on a per commit basis. In this case, you +can add +``Requires-builders: `` +to your commit message. A comma separated list of options can be +provided. Supported options are: + +- ``all``: This commit should be built by all available builders +- ``none``: This commit should not be built by any builders +- ``style``: This commit should be built by STYLE builders +- ``build``: This commit should be built by all BUILD builders +- ``arch``: This commit should be built by BUILD builders tagged as + 'Architectures' +- ``distro``: This commit should be built by BUILD builders tagged as + 'Distributions' +- ``test``: This commit should be built and tested by the TEST builders + (excluding the Coverage TEST builders) +- ``perf``: This commit should be built and tested by the PERF builders +- ``coverage`` : This commit should be built and tested by the Coverage + TEST builders +- ``unstable`` : This commit should be built and tested by the Unstable + TEST builders (currently only the Fedora Rawhide TEST builder) + +A couple of examples on how to use ``Requires-builders:`` in commit +messages can be found below. + +.. _preventing-a-commit-from-being-built-and-tested: + +Preventing a commit from being built and tested. +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Requires-builders: none + +.. _submitting-a-commit-to-style-and-test-builders-only: + +Submitting a commit to STYLE and TEST builders only. +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Requires-builders: style test + +Requiring SPL Versions +---------------------- + +Currently, the ZFS Buildbot attempts to choose the correct SPL branch to +build based on a pull request's base branch. 
In the cases where a +specific SPL version needs to be built, the ZFS buildbot supports +specifying an SPL version for pull request testing. By opening a pull +request against ZFS and adding ``Requires-spl:`` in a commit message, +you can instruct the buildbot to use a specific SPL version. Below are +examples of a commit messages that specify the SPL version. + +Build SPL from a specific pull request +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Requires-spl: refs/pull/123/head + +Build SPL branch ``spl-branch-name`` from ``zfsonlinux/spl`` repository +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Requires-spl: spl-branch-name + +Requiring Kernel Version +------------------------ + +Currently, Kernel.org builders will clone and build the master branch of +Linux. In cases where a specific version of the Linux kernel needs to be +built, the ZFS buildbot supports specifying the Linux kernel to be built +via commit message. By opening a pull request against ZFS and adding +``Requires-kernel:`` in a commit message, you can instruct the buildbot +to use a specific Linux kernel. Below is an example commit message that +specifies a specific Linux kernel tag. + +.. _build-linux-kernel-version-414: + +Build Linux Kernel Version 4.14 +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Requires-kernel: v4.14 + +Build Steps Overrides +--------------------- + +Each builder will execute or skip build steps based on its default +preferences. In some scenarios, it might be possible to skip various +build steps. The ZFS buildbot supports overriding the defaults of all +builders in a commit message. The list of available overrides are: + +- ``Build-linux: ``: All builders should build Linux for this + commit +- ``Build-lustre: ``: All builders should build Lustre for this + commit +- ``Build-spl: ``: All builders should build the SPL for this + commit +- ``Build-zfs: ``: All builders should build ZFS for this + commit +- ``Built-in: ``: All Linux builds should build in SPL and ZFS +- ``Check-lint: ``: All builders should perform lint checks for + this commit +- ``Configure-lustre: ``: Provide ```` as configure + flags when building Lustre +- ``Configure-spl: ``: Provide ```` as configure + flags when building the SPL +- ``Configure-zfs: ``: Provide ```` as configure + flags when building ZFS + +A couple of examples on how to use overrides in commit messages can be +found below. + +Skip building the SPL and build Lustre without ldiskfs +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Build-lustre: Yes + Configure-lustre: --disable-ldiskfs + Build-spl: No + +Build ZFS Only +~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Build-lustre: No + Build-spl: No + +Configuring Tests with the TEST File +------------------------------------ + +At the top level of the ZFS source tree, there is the `TEST +file `__ which +contains variables that control if and how a specific test should run. 
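+
+As a hypothetical sketch (the variable values shown here are illustrative
+only), the TEST file consists of shell-style variable assignments, so
+overrides might look like:
+
+::
+
+   TEST_ZFSTESTS_ITERS="2"
+   TEST_ZFSTESTS_TAGS="functional"
+   TEST_ZTEST_SKIP="yes"
+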
+Below is a list of each variable and a brief description of what each +variable controls. + +- ``TEST_PREPARE_WATCHDOG`` - Enables the Linux kernel watchdog +- ``TEST_PREPARE_SHARES`` - Start NFS and Samba servers +- ``TEST_SPLAT_SKIP`` - Determines if ``splat`` testing is skipped +- ``TEST_SPLAT_OPTIONS`` - Command line options to provide to ``splat`` +- ``TEST_ZTEST_SKIP`` - Determines if ``ztest`` testing is skipped +- ``TEST_ZTEST_TIMEOUT`` - The length of time ``ztest`` should run +- ``TEST_ZTEST_DIR`` - Directory where ``ztest`` will create vdevs +- ``TEST_ZTEST_OPTIONS`` - Options to pass to ``ztest`` +- ``TEST_ZTEST_CORE_DIR`` - Directory for ``ztest`` to store core dumps +- ``TEST_ZIMPORT_SKIP`` - Determines if ``zimport`` testing is skipped +- ``TEST_ZIMPORT_DIR`` - Directory used during ``zimport`` +- ``TEST_ZIMPORT_VERSIONS`` - Source versions to test +- ``TEST_ZIMPORT_POOLS`` - Names of the pools for ``zimport`` to use + for testing +- ``TEST_ZIMPORT_OPTIONS`` - Command line options to provide to + ``zimport`` +- ``TEST_XFSTESTS_SKIP`` - Determines if ``xfstest`` testing is skipped +- ``TEST_XFSTESTS_URL`` - URL to download ``xfstest`` from +- ``TEST_XFSTESTS_VER`` - Name of the tarball to download from + ``TEST_XFSTESTS_URL`` +- ``TEST_XFSTESTS_POOL`` - Name of pool to create and used by + ``xfstest`` +- ``TEST_XFSTESTS_FS`` - Name of dataset for use by ``xfstest`` +- ``TEST_XFSTESTS_VDEV`` - Name of the vdev used by ``xfstest`` +- ``TEST_XFSTESTS_OPTIONS`` - Command line options to provide to + ``xfstest`` +- ``TEST_ZFSTESTS_SKIP`` - Determines if ``zfs-tests`` testing is + skipped +- ``TEST_ZFSTESTS_DIR`` - Directory to store files and loopback devices +- ``TEST_ZFSTESTS_DISKS`` - Space delimited list of disks that + ``zfs-tests`` is allowed to use +- ``TEST_ZFSTESTS_DISKSIZE`` - File size of file based vdevs used by + ``zfs-tests`` +- ``TEST_ZFSTESTS_ITERS`` - Number of times ``test-runner`` should + execute its set of tests +- ``TEST_ZFSTESTS_OPTIONS`` - Options to provide ``zfs-tests`` +- ``TEST_ZFSTESTS_RUNFILE`` - The runfile to use when running + ``zfs-tests`` +- ``TEST_ZFSTESTS_TAGS`` - List of tags to provide to ``test-runner`` +- ``TEST_ZFSSTRESS_SKIP`` - Determines if ``zfsstress`` testing is + skipped +- ``TEST_ZFSSTRESS_URL`` - URL to download ``zfsstress`` from +- ``TEST_ZFSSTRESS_VER`` - Name of the tarball to download from + ``TEST_ZFSSTRESS_URL`` +- ``TEST_ZFSSTRESS_RUNTIME`` - Duration to run ``runstress.sh`` +- ``TEST_ZFSSTRESS_POOL`` - Name of pool to create and use for + ``zfsstress`` testing +- ``TEST_ZFSSTRESS_FS`` - Name of dataset for use during ``zfsstress`` + tests +- ``TEST_ZFSSTRESS_FSOPT`` - File system options to provide to + ``zfsstress`` +- ``TEST_ZFSSTRESS_VDEV`` - Directory to store vdevs for use during + ``zfsstress`` tests +- ``TEST_ZFSSTRESS_OPTIONS`` - Command line options to provide to + ``runstress.sh`` diff --git a/_sources/Developer Resources/Building ZFS.rst.txt b/_sources/Developer Resources/Building ZFS.rst.txt new file mode 100644 index 000000000..d60c39c4d --- /dev/null +++ b/_sources/Developer Resources/Building ZFS.rst.txt @@ -0,0 +1,255 @@ +Building ZFS +============ + +GitHub Repositories +~~~~~~~~~~~~~~~~~~~ + +The official source for OpenZFS is maintained at GitHub by the +`openzfs `__ organization. The primary +git repository for the project is the `zfs +`__ repository. 
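+
+For orientation, the repository can be cloned in the usual way (this same
+step appears again in the in-tree build walkthrough later on this page):
+
+::
+
+   git clone https://github.com/openzfs/zfs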
+ +There are two main components in this repository: + +- **ZFS**: The ZFS repository contains a copy of the upstream OpenZFS + code which has been adapted and extended for Linux and FreeBSD. The + vast majority of the core OpenZFS code is self-contained and can be + used without modification. + +- **SPL**: The SPL is a thin shim layer which is responsible for + implementing the fundamental interfaces required by OpenZFS. It's + this layer which allows OpenZFS to be used across multiple + platforms. SPL used to be maintained in a separate repository, but + was merged into the `zfs `__ + repository in the ``0.8`` major release. + +Installing Dependencies +~~~~~~~~~~~~~~~~~~~~~~~ + +The first thing you'll need to do is prepare your environment by +installing a full development tool chain. In addition, development +headers for both the kernel and the following packages must be +available. It is important to note that if the development kernel +headers for the currently running kernel aren't installed, the modules +won't compile properly. + +The following dependencies should be installed to build the latest ZFS +2.1 release. + +- **RHEL/CentOS 7**: + +.. code:: sh + + sudo yum install epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel git ncompress libcurl-devel + sudo yum install --enablerepo=epel python-packaging dkms + +- **RHEL/CentOS 8, Fedora**: + +.. code:: sh + + sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python3 python3-devel python3-setuptools python3-cffi libffi-devel git ncompress libcurl-devel + sudo dnf install --skip-broken --enablerepo=epel --enablerepo=powertools python3-packaging dkms + +- **Debian, Ubuntu**: + +.. code:: sh + + sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-generic python3 python3-dev python3-setuptools python3-cffi libffi-dev python3-packaging git libcurl4-openssl-dev debhelper-compat dh-python po-debconf python3-all-dev python3-sphinx + +- **FreeBSD**: + +.. code:: sh + + pkg install autoconf automake autotools git gmake python devel/py-sysctl sudo + +Build Options +~~~~~~~~~~~~~ + +There are two options for building OpenZFS; the correct one largely +depends on your requirements. + +- **Packages**: Often it can be useful to build custom packages from + git which can be installed on a system. This is the best way to + perform integration testing with systemd, dracut, and udev. The + downside to using packages it is greatly increases the time required + to build, install, and test a change. + +- **In-tree**: Development can be done entirely in the SPL/ZFS source + tree. This speeds up development by allowing developers to rapidly + iterate on a patch. When working in-tree developers can leverage + incremental builds, load/unload kernel modules, execute utilities, + and verify all their changes with the ZFS Test Suite. + +The remainder of this page focuses on the **in-tree** option which is +the recommended method of development for the majority of changes. 
See +the :doc:`custom packages <./Custom Packages>` page for additional +information on building custom packages. + +Developing In-Tree +~~~~~~~~~~~~~~~~~~ + +Clone from GitHub +^^^^^^^^^^^^^^^^^ + +Start by cloning the ZFS repository from GitHub. The repository has a +**master** branch for development and a series of **\*-release** +branches for tagged releases. After checking out the repository your +clone will default to the master branch. Tagged releases may be built +by checking out zfs-x.y.z tags with matching version numbers or +matching release branches. + +:: + + git clone https://github.com/openzfs/zfs + +Configure and Build +^^^^^^^^^^^^^^^^^^^ + +For developers working on a change always create a new topic branch +based off of master. This will make it easy to open a pull request with +your change latter. The master branch is kept stable with extensive +`regression testing `__ of every pull +request before and after it's merged. Every effort is made to catch +defects as early as possible and to keep them out of the tree. +Developers should be comfortable frequently rebasing their work against +the latest master branch. + +In this example we'll use the master branch and walk through a stock +**in-tree** build. Start by checking out the desired branch then build +the ZFS and SPL source in the traditional autotools fashion. + +:: + + cd ./zfs + git checkout master + sh autogen.sh + ./configure + make -s -j$(nproc) + +| **tip:** ``--with-linux=PATH`` and ``--with-linux-obj=PATH`` can be + passed to configure to specify a kernel installed in a non-default + location. +| **tip:** ``--enable-debug`` can be passed to configure to enable all ASSERTs and + additional correctness tests. + +**Optional** Build packages + +:: + + make rpm #Builds RPM packages for CentOS/Fedora + make deb #Builds RPM converted DEB packages for Debian/Ubuntu + make native-deb #Builds native DEB packages for Debian/Ubuntu + +| **tip:** Native Debian packages build with pre-configured paths for + Debian and Ubuntu. It's best not to override the paths during + configure. +| **tip:** For native Debain packages, ``KVERS``, ``KSRC`` and ``KOBJ`` + environment variables can be exported to specify the kernel installed + in non-default location. + +.. note:: + Support for native Debian packaging will be available starting from + openzfs-2.2 release. + +Install +^^^^^^^ + +You can run ``zfs-tests.sh`` without installing ZFS, see below. If you +have reason to install ZFS after building it, pay attention to how your +distribution handles kernel modules. On Ubuntu, for example, the modules +from this repository install in the ``extra`` kernel module path, which +is not in the standard ``depmod`` search path. Therefore, for the +duration of your testing, edit ``/etc/depmod.d/ubuntu.conf`` and add +``extra`` to the beginning of the search path. + +You may then install using +``sudo make install; sudo ldconfig; sudo depmod``. You'd uninstall with +``sudo make uninstall; sudo ldconfig; sudo depmod``. + +.. _running-zloopsh-and-zfs-testssh: + +Running zloop.sh and zfs-tests.sh +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +If you wish to run the ZFS Test Suite (ZTS), then ``ksh`` and a few +additional utilities must be installed. + +- **RHEL/CentOS 7:** + +.. code:: sh + + sudo yum install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr nfs-utils samba rng-tools pax perf + sudo yum install --enablerepo=epel dbench + +- **RHEL/CentOS 8, Fedora:** + +.. 
code:: sh + + sudo dnf install --skip-broken ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr nfs-utils samba rng-tools pax perf + sudo dnf install --skip-broken --enablerepo=epel dbench + +- **Debian:** + +.. code:: sh + + sudo apt install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-perf selinux-utils quota + +- **Ubuntu:** + +.. code:: sh + + sudo apt install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-tools-common selinux-utils quota + +- **FreeBSD**: + +.. code:: sh + + pkg install base64 bash checkbashisms fio hs-ShellCheck ksh93 pamtester devel/py-flake8 sudo + + +There are a few helper scripts provided in the top-level scripts +directory designed to aid developers working with in-tree builds. + +- **zfs-helper.sh:** Certain functionality (i.e. /dev/zvol/) depends on + the ZFS provided udev helper scripts being installed on the system. + This script can be used to create symlinks on the system from the + installation location to the in-tree helper. These links must be in + place to successfully run the ZFS Test Suite. The **-i** and **-r** + options can be used to install and remove the symlinks. + +:: + + sudo ./scripts/zfs-helpers.sh -i + +- **zfs.sh:** The freshly built kernel modules can be loaded using + ``zfs.sh``. This script can later be used to unload the kernel + modules with the **-u** option. + +:: + + sudo ./scripts/zfs.sh + +- **zloop.sh:** A wrapper to run ztest repeatedly with randomized + arguments. The ztest command is a user space stress test designed to + detect correctness issues by concurrently running a random set of + test cases. If a crash is encountered, the ztest logs, any associated + vdev files, and core file (if one exists) are collected and moved to + the output directory for analysis. + +:: + + sudo ./scripts/zloop.sh + +- **zfs-tests.sh:** A wrapper which can be used to launch the ZFS Test + Suite. Three loopback devices are created on top of sparse files + located in ``/var/tmp/`` and used for the regression test. Detailed + directions for the ZFS Test Suite can be found in the + `README `__ + located in the top-level tests directory. + +:: + + ./scripts/zfs-tests.sh -vx + +**tip:** The **delegate** tests will be skipped unless group read +permission is set on the zfs directory and its parents. diff --git a/_sources/Developer Resources/Custom Packages.rst.txt b/_sources/Developer Resources/Custom Packages.rst.txt new file mode 100644 index 000000000..6c6514666 --- /dev/null +++ b/_sources/Developer Resources/Custom Packages.rst.txt @@ -0,0 +1,248 @@ +Custom Packages +=============== + +The following instructions assume you are building from an official +`release tarball `__ +(version 0.8.0 or newer) or directly from the `git +repository `__. Most users should not +need to do this and should preferentially use the distribution packages. +As a general rule the distribution packages will be more tightly +integrated, widely tested, and better supported. However, if your +distribution of choice doesn't provide packages, or you're a developer +and want to roll your own, here's how to do it. + +The first thing to be aware of is that the build system is capable of +generating several different types of packages. Which type of package +you choose depends on what's supported on your platform and exactly what +your needs are. + +- **DKMS** packages contain only the source code and scripts for + rebuilding the kernel modules. 
When the DKMS package is installed + kernel modules will be built for all available kernels. Additionally, + when the kernel is upgraded new kernel modules will be automatically + built for that kernel. This is particularly convenient for desktop + systems which receive frequent kernel updates. The downside is that + because the DKMS packages build the kernel modules from source a full + development environment is required which may not be appropriate for + large deployments. + +- **kmods** packages are binary kernel modules which are compiled + against a specific version of the kernel. This means that if you + update the kernel you must compile and install a new kmod package. If + you don't frequently update your kernel, or if you're managing a + large number of systems, then kmod packages are a good choice. + +- **kABI-tracking kmod** Packages are similar to standard binary kmods + and may be used with Enterprise Linux distributions like Red Hat and + CentOS. These distributions provide a stable kABI (Kernel Application + Binary Interface) which allows the same binary modules to be used + with new versions of the distribution provided kernel. + +By default the build system will generate user packages and both DKMS +and kmod style kernel packages if possible. The user packages can be +used with either set of kernel packages and do not need to be rebuilt +when the kernel is updated. You can also streamline the build process by +building only the DKMS or kmod packages as shown below. + +Be aware that when building directly from a git repository you must +first run the *autogen.sh* script to create the *configure* script. This +will require installing the GNU autotools packages for your +distribution. To perform any of the builds, you must install all the +necessary development tools and headers for your distribution. + +It is important to note that if the development kernel headers for the +currently running kernel aren't installed, the modules won't compile +properly. + +- `Red Hat, CentOS and Fedora <#red-hat-centos-and-fedora>`__ +- `Debian and Ubuntu <#debian-and-ubuntu>`__ + +RHEL, CentOS and Fedora +----------------------- + +Make sure that the required packages are installed to build the latest +ZFS 2.1 release: + +- **RHEL/CentOS 7**: + +.. code:: sh + + sudo yum install epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel ncompress + sudo yum install --enablerepo=epel dkms python-packaging + +- **RHEL/CentOS 8, Fedora**: + +.. code:: sh + + sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build kernel-rpm-macros libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) kernel-abi-stablelists-$(uname -r | sed 's/\.[^.]\+$//') python3 python3-devel python3-setuptools python3-cffi libffi-devel ncompress + sudo dnf install --skip-broken --enablerepo=epel --enablerepo=powertools python3-packaging dkms + +- **RHEL/CentOS 9**: + +.. 
code:: sh + + sudo dnf config-manager --set-enabled crb + sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build kernel-rpm-macros libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) kernel-abi-stablelists-$(uname -r | sed 's/\.[^.]\+$//') python3 python3-devel python3-setuptools python3-cffi libffi-devel + sudo dnf install --skip-broken --enablerepo=epel python3-packaging dkms + + + +`Get the source code <#get-the-source-code>`__. + +DKMS +~~~~ + +Building rpm-based DKMS and user packages can be done as follows: + +.. code:: sh + + $ cd zfs + $ ./configure + $ make -j1 rpm-utils rpm-dkms + $ sudo yum localinstall *.$(uname -p).rpm *.noarch.rpm + +kmod +~~~~ + +The key thing to know when building a kmod package is that a specific +Linux kernel must be specified. At configure time the build system will +make an educated guess as to which kernel you want to build against. +However, if configure is unable to locate your kernel development +headers, or you want to build against a different kernel, you must +specify the exact path with the *--with-linux* and *--with-linux-obj* +options. + +.. code:: sh + + $ cd zfs + $ ./configure + $ make -j1 rpm-utils rpm-kmod + $ sudo yum localinstall *.$(uname -p).rpm + +kABI-tracking kmod +~~~~~~~~~~~~~~~~~~ + +The process for building kABI-tracking kmods is almost identical to for +building normal kmods. However, it will only produce binaries which can +be used by multiple kernels if the distribution supports a stable kABI. +In order to request kABI-tracking package the *--with-spec=redhat* +option must be passed to configure. + +**NOTE:** This type of package is not available for Fedora. + +.. code:: sh + + $ cd zfs + $ ./configure --with-spec=redhat + $ make -j1 rpm-utils rpm-kmod + $ sudo yum localinstall *.$(uname -p).rpm + +Debian and Ubuntu +----------------- + +Make sure that the required packages are installed: + +.. code:: sh + + sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-generic python3 python3-dev python3-setuptools python3-cffi libffi-dev python3-packaging debhelper-compat dh-python po-debconf python3-all-dev python3-sphinx + +`Get the source code <#get-the-source-code>`__. + +.. _kmod-1: + +kmod +~~~~ + +The key thing to know when building a kmod package is that a specific +Linux kernel must be specified. At configure time the build system will +make an educated guess as to which kernel you want to build against. +However, if configure is unable to locate your kernel development +headers, or you want to build against a different kernel, you must +specify the exact path with the *--with-linux* and *--with-linux-obj* +options. + +To build RPM converted Debian packages: + +.. code:: sh + + $ cd zfs + $ ./configure --enable-systemd + $ make -j1 deb-utils deb-kmod + $ sudo apt-get install --fix-missing ./*.deb + +Starting from openzfs-2.2 release, native Debian packages can be built +as follows: + +.. code:: sh + + $ cd zfs + $ ./configure + $ make native-deb-utils native-deb-kmod + $ rm ../openzfs-zfs-dkms_*.deb + $ sudo apt-get install --fix-missing ../*.deb + +Native Debian packages build with pre-configured paths for Debian and +Ubuntu. It's best not to override the paths during configure. 
+``KVERS``, ``KSRC`` and ``KOBJ`` environment variables can be exported +to specify the kernel installed in non-default location. + +.. _dkms-1: + +DKMS +~~~~ + +Building RPM converted deb-based DKMS and user packages can be done as +follows: + +.. code:: sh + + $ cd zfs + $ ./configure --enable-systemd + $ make -j1 deb-utils deb-dkms + $ sudo apt-get install --fix-missing ./*.deb + +Starting from openzfs-2.2 release, native deb-based DKMS and user +packages can be built as follows: + +.. code:: sh + + $ sudo apt-get install dh-dkms + $ cd zfs + $ ./configure + $ make native-deb-utils + $ sudo apt-get install --fix-missing ../*.deb + +Get the Source Code +------------------- + +Released Tarball +~~~~~~~~~~~~~~~~ + +The released tarball contains the latest fully tested and released +version of ZFS. This is the preferred source code location for use in +production systems. If you want to use the official released tarballs, +then use the following commands to fetch and prepare the source. + +.. code:: sh + + $ wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-x.y.z.tar.gz + $ tar -xzf zfs-x.y.z.tar.gz + +Git Master Branch +~~~~~~~~~~~~~~~~~ + +The Git *master* branch contains the latest version of the software, and +will probably contain fixes that, for some reason, weren't included in +the released tarball. This is the preferred source code location for +developers who intend to modify ZFS. If you would like to use the git +version, you can clone it from Github and prepare the source like this. + +.. code:: sh + + $ git clone https://github.com/zfsonlinux/zfs.git + $ cd zfs + $ ./autogen.sh + +Once the source has been prepared you'll need to decide what kind of +packages you're building and jump the to appropriate section above. Note +that not all package types are supported for all platforms. diff --git a/_sources/Developer Resources/Git and GitHub for beginners.rst.txt b/_sources/Developer Resources/Git and GitHub for beginners.rst.txt new file mode 100644 index 000000000..76e6eb33d --- /dev/null +++ b/_sources/Developer Resources/Git and GitHub for beginners.rst.txt @@ -0,0 +1,210 @@ +Git and GitHub for beginners (ZoL edition) +========================================== + +This is a very basic rundown of how to use Git and GitHub to make +changes. + +Recommended reading: `ZFS on Linux +CONTRIBUTING.md `__ + +First time setup +---------------- + +If you've never used Git before, you'll need a little setup to start +things off. + +:: + + git config --global user.name "My Name" + git config --global user.email myemail@noreply.non + +Cloning the initial repository +------------------------------ + +The easiest way to get started is to click the fork icon at the top of +the main repository page. From there you need to download a copy of the +forked repository to your computer: + +:: + + git clone https://github.com//zfs.git + +This sets the "origin" repository to your fork. This will come in handy +when creating pull requests. To make pulling from the "upstream" +repository as changes are made, it is very useful to establish the +upstream repository as another remote (man git-remote): + +:: + + cd zfs + git remote add upstream https://github.com/zfsonlinux/zfs.git + +Preparing and making changes +---------------------------- + +In order to make changes it is recommended to make a branch, this lets +you work on several unrelated changes at once. It is also not +recommended to make changes to the master branch unless you own the +repository. 
+ +:: + + git checkout -b my-new-branch + +From here you can make your changes and move on to the next step. + +Recommended reading: `C Style and Coding Standards for +SunOS `__, +`ZFS on Linux Developer +Resources `__, +`OpenZFS Developer +Resources `__ + +Testing your patches before pushing +----------------------------------- + +Before committing and pushing, you may want to test your patches. There +are several tests you can run against your branch such as style +checking, and functional tests. All pull requests go through these tests +before being pushed to the main repository, however testing locally +takes the load off the build/test servers. This step is optional but +highly recommended, however the test suite should be run on a virtual +machine or a host that currently does not use ZFS. You may need to +install ``shellcheck`` and ``flake8`` to run the ``checkstyle`` +correctly. + +:: + + sh autogen.sh + ./configure + make checkstyle + +Recommended reading: `Building +ZFS `__, `ZFS Test +Suite +README `__ + +Committing your changes to be pushed +------------------------------------ + +When you are done making changes to your branch there are a few more +steps before you can make a pull request. + +:: + + git commit --all --signoff + +This command opens an editor and adds all unstaged files from your +branch. Here you need to describe your change and add a few things: + +:: + + + # Please enter the commit message for your changes. Lines starting + # with '#' will be ignored, and an empty message aborts the commit. + # On branch my-new-branch + # Changes to be committed: + # (use "git reset HEAD ..." to unstage) + # + # modified: hello.c + # + +The first thing we need to add is the commit message. This is what is +displayed on the git log, and should be a short description of the +change. By style guidelines, this has to be less than 72 characters in +length. + +Underneath the commit message you can add a more descriptive text to +your commit. The lines in this section have to be less than 72 +characters. + +When you are done, the commit should look like this: + +:: + + Add hello command + + This is a test commit with a descriptive commit message. + This message can be more than one line as shown here. + + Signed-off-by: My Name + Closes #9998 + Issue #9999 + # Please enter the commit message for your changes. Lines starting + # with '#' will be ignored, and an empty message aborts the commit. + # On branch my-new-branch + # Changes to be committed: + # (use "git reset HEAD ..." to unstage) + # + # modified: hello.c + # + +You can also reference issues and pull requests if you are filing a pull +request for an existing issue as shown above. Save and exit the editor +when you are done. + +Pushing and creating the pull request +------------------------------------- + +Home stretch. You've made your change and made the commit. Now it's time +to push it. + +:: + + git push --set-upstream origin my-new-branch + +This should ask you for your github credentials and upload your changes +to your repository. + +The last step is to either go to your repository or the upstream +repository on GitHub and you should see a button for making a new pull +request for your recently committed branch. + +Correcting issues with your pull request +---------------------------------------- + +Sometimes things don't always go as planned and you may need to update +your pull request with a correction to either your commit message, or +your changes. This can be accomplished by re-pushing your branch. 
If you +need to make code changes or ``git add`` a file, you can do those now, +along with the following: + +:: + + git commit --amend + git push --force + +This will return you to the commit editor screen, and push your changes +over top of the old ones. Do note that this will restart the process of +any build/test servers currently running and excessively pushing can +cause delays in processing of all pull requests. + +Maintaining your repository +--------------------------- + +When you wish to make changes in the future you will want to have an +up-to-date copy of the upstream repository to make your changes on. Here +is how you keep updated: + +:: + + git checkout master + git pull upstream master + git push origin master + +This will make sure you are on the master branch of the repository, grab +the changes from upstream, then push them back to your repository. + +Final words +----------- + +This is a very basic introduction to Git and GitHub, but should get you +on your way to contributing to many open source projects. Not all +projects have style requirements and some may have different processes +to getting changes committed so please refer to their documentation to +see if you need to do anything different. One topic we have not touched +on is the ``git rebase`` command which is a little more advanced for +this wiki article. + +Additional resources: `Github Help `__, +`Atlassian Git Tutorials `__ diff --git a/_sources/Developer Resources/OpenZFS Exceptions.rst.txt b/_sources/Developer Resources/OpenZFS Exceptions.rst.txt new file mode 100644 index 000000000..32c97352d --- /dev/null +++ b/_sources/Developer Resources/OpenZFS Exceptions.rst.txt @@ -0,0 +1,652 @@ +OpenZFS Exceptions +================== + +Commit exceptions used to explicitly reference a given Linux commit. +These exceptions are useful for a variety of reasons. + +**This page is used to generate** +`OpenZFS Tracking `__ +**page.** + +Format: +^^^^^^^ + +- ``|-|`` - The OpenZFS commit isn't applicable + to Linux, or the OpenZFS -> ZFS on Linux commit matching is unable to + associate the related commits due to lack of information (denoted by + a -). +- ``||`` - The fix was merged to Linux + prior to their being an OpenZFS issue. +- ``|!|`` - The commit is applicable but not + applied for the reason described in the comment. + ++------------------+-------------------+-----------------------------+ +| OpenZFS issue id | status/ZFS commit | comment | ++==================+===================+=============================+ +| 11453 | ! 
| check_disk() on illumos | +| | | isn't available on ZoL / | +| | | OpenZFS 2.0 | ++------------------+-------------------+-----------------------------+ +| 11276 | da68988 | | ++------------------+-------------------+-----------------------------+ +| 11052 | 2efea7c | | ++------------------+-------------------+-----------------------------+ +| 11051 | 3b61ca3 | | ++------------------+-------------------+-----------------------------+ +| 10853 | 8dc2197 | | ++------------------+-------------------+-----------------------------+ +| 10844 | 61c3391 | | ++------------------+-------------------+-----------------------------+ +| 10842 | d10b2f1 | | ++------------------+-------------------+-----------------------------+ +| 10841 | 944a372 | | ++------------------+-------------------+-----------------------------+ +| 10809 | ee36c70 | | ++------------------+-------------------+-----------------------------+ +| 10808 | 2ef0f8c | | ++------------------+-------------------+-----------------------------+ +| 10701 | 0091d66 | | ++------------------+-------------------+-----------------------------+ +| 10601 | cc99f27 | | ++------------------+-------------------+-----------------------------+ +| 10573 | 48d3eb4 | | ++------------------+-------------------+-----------------------------+ +| 10572 | edc1e71 | | ++------------------+-------------------+-----------------------------+ +| 10566 | ab7615d | | ++------------------+-------------------+-----------------------------+ +| 10554 | bec1067 | | ++------------------+-------------------+-----------------------------+ +| 10500 | 03916905 | | ++------------------+-------------------+-----------------------------+ +| 10449 | 379ca9c | | ++------------------+-------------------+-----------------------------+ +| 10406 | da2feb4 | | ++------------------+-------------------+-----------------------------+ +| 10154 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 10067 | - | The only ZFS change was to | +| | | zfs remap, which was | +| | | removed on Linux. | ++------------------+-------------------+-----------------------------+ +| 9884 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 9851 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 9691 | d9b4bf0 | | ++------------------+-------------------+-----------------------------+ +| 9683 | - | Not applicable to Linux due | +| | | to devids not being used | ++------------------+-------------------+-----------------------------+ +| 9680 | - | Applied and rolled back in | +| | | OpenZFS, additional changes | +| | | needed. 
| ++------------------+-------------------+-----------------------------+ +| 9672 | 29445fe3 | | ++------------------+-------------------+-----------------------------+ +| 9647 | a448a25 | | ++------------------+-------------------+-----------------------------+ +| 9626 | 59e6e7ca | | ++------------------+-------------------+-----------------------------+ +| 9635 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 9623 | 22448f08 | | ++------------------+-------------------+-----------------------------+ +| 9621 | 305bc4b3 | | ++------------------+-------------------+-----------------------------+ +| 9539 | 5228cf01 | | ++------------------+-------------------+-----------------------------+ +| 9512 | b4555c77 | | ++------------------+-------------------+-----------------------------+ +| 9487 | 48fbb9dd | | ++------------------+-------------------+-----------------------------+ +| 9466 | 272b5d73 | | ++------------------+-------------------+-----------------------------+ +| 9440 | f664f1e | Illumos ticket 9440 never | +| | | landed in openzfs/openzfs, | +| | | but in ZoL / OpenZFS 2.0 | ++------------------+-------------------+-----------------------------+ +| 9433 | 0873bb63 | | ++------------------+-------------------+-----------------------------+ +| 9421 | 64c1dcef | | ++------------------+-------------------+-----------------------------+ +| 9237 | - | Introduced by 8567 which | +| | | was never applied to Linux | ++------------------+-------------------+-----------------------------+ +| 9194 | - | Not applicable the '-o | +| | | ashift=value' option is | +| | | provided on Linux | ++------------------+-------------------+-----------------------------+ +| 9077 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 9027 | 4a5d7f82 | | ++------------------+-------------------+-----------------------------+ +| 9018 | 3ec34e55 | | ++------------------+-------------------+-----------------------------+ +| 8984 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 8969 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 8942 | 650258d7 | | ++------------------+-------------------+-----------------------------+ +| 8941 | 390d679a | | ++------------------+-------------------+-----------------------------+ +| 8862 | 3b9edd7 | | ++------------------+-------------------+-----------------------------+ +| 8858 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 8856 | - | Not applicable to Linux due | +| | | to Encryption (b525630) | ++------------------+-------------------+-----------------------------+ +| 8809 | ! | Adding libfakekernel needs | +| | | to be done by refactoring | +| | | existing code. 
| ++------------------+-------------------+-----------------------------+ +| 8727 | b525630 | | ++------------------+-------------------+-----------------------------+ +| 8713 | 871e0732 | | ++------------------+-------------------+-----------------------------+ +| 8661 | 1ce23dca | | ++------------------+-------------------+-----------------------------+ +| 8648 | f763c3d1 | | ++------------------+-------------------+-----------------------------+ +| 8602 | a032ac4 | | ++------------------+-------------------+-----------------------------+ +| 8601 | d99a015 | Equivalent fix included in | +| | | initial commit | ++------------------+-------------------+-----------------------------+ +| 8590 | 935e2c2 | | ++------------------+-------------------+-----------------------------+ +| 8569 | - | This change isn't relevant | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 8567 | - | An alternate fix was | +| | | applied for Linux. | ++------------------+-------------------+-----------------------------+ +| 8552 | 935e2c2 | | ++------------------+-------------------+-----------------------------+ +| 8521 | ee6370a7 | | ++------------------+-------------------+-----------------------------+ +| 8502 | ! | Apply when porting OpenZFS | +| | | 7955 | ++------------------+-------------------+-----------------------------+ +| 9485 | 1258bd7 | | ++------------------+-------------------+-----------------------------+ +| 8477 | 92e43c1 | | ++------------------+-------------------+-----------------------------+ +| 8454 | - | An alternate fix was | +| | | applied for Linux. | ++------------------+-------------------+-----------------------------+ +| 8423 | 50c957f | | ++------------------+-------------------+-----------------------------+ +| 8408 | 5f1346c | | ++------------------+-------------------+-----------------------------+ +| 8379 | - | This change isn't relevant | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 8376 | - | This change isn't relevant | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 8311 | ! | Need to assess | +| | | applicability to Linux. | ++------------------+-------------------+-----------------------------+ +| 8304 | - | This change isn't relevant | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 8300 | 44f09cd | | ++------------------+-------------------+-----------------------------+ +| 8265 | - | The large_dnode feature has | +| | | been implemented for Linux. | ++------------------+-------------------+-----------------------------+ +| 8168 | 78d95ea | | ++------------------+-------------------+-----------------------------+ +| 8138 | 44f09cd | The spelling fix to the zfs | +| | | man page came in with the | +| | | mdoc conversion. | ++------------------+-------------------+-----------------------------+ +| 8108 | - | An equivalent Linux | +| | | specific fix was made. | ++------------------+-------------------+-----------------------------+ +| 8068 | a1d477c24c | merged with zfs device | +| | | evacuation/removal | ++------------------+-------------------+-----------------------------+ +| 8064 | - | This change isn't relevant | +| | | for Linux. 
| ++------------------+-------------------+-----------------------------+ +| 8022 | e55ebf6 | | ++------------------+-------------------+-----------------------------+ +| 8021 | 7657def | | ++------------------+-------------------+-----------------------------+ +| 8013 | - | The change is illumos | +| | | specific and not applicable | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 7982 | - | The change is illumos | +| | | specific and not applicable | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 7970 | c30e58c | | ++------------------+-------------------+-----------------------------+ +| 7956 | cda0317 | | ++------------------+-------------------+-----------------------------+ +| 7955 | ! | Need to assess | +| | | applicability to Linux. If | +| | | porting, apply 8502. | ++------------------+-------------------+-----------------------------+ +| 7869 | df7eecc | | ++------------------+-------------------+-----------------------------+ +| 7816 | - | The change is illumos | +| | | specific and not applicable | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 7803 | - | This functionality is | +| | | provided by | +| | | ``upda | +| | | te_vdev_config_dev_strs()`` | +| | | on Linux. | ++------------------+-------------------+-----------------------------+ +| 7801 | 0eef1bd | Commit f25efb3 in | +| | | openzfs/master has a small | +| | | change for linting which is | +| | | being ported. | ++------------------+-------------------+-----------------------------+ +| 7779 | - | The change isn't relevant, | +| | | ``zfs_ctldir.c`` was | +| | | rewritten for Linux. | ++------------------+-------------------+-----------------------------+ +| 7740 | 32d41fb | | ++------------------+-------------------+-----------------------------+ +| 7739 | 582cc014 | | ++------------------+-------------------+-----------------------------+ +| 7730 | e24e62a | | ++------------------+-------------------+-----------------------------+ +| 7710 | - | None of the illumos build | +| | | system is used under Linux. | ++------------------+-------------------+-----------------------------+ +| 7602 | 44f09cd | | ++------------------+-------------------+-----------------------------+ +| 7591 | 541a090 | | ++------------------+-------------------+-----------------------------+ +| 7586 | c443487 | | ++------------------+-------------------+-----------------------------+ +| 7570 | - | Due to differences in the | +| | | block layer all discards | +| | | are handled asynchronously | +| | | under Linux. This | +| | | functionality could be | +| | | ported but it's unclear to | +| | | what purpose. | ++------------------+-------------------+-----------------------------+ +| 7542 | - | The Linux libshare code | +| | | differs significantly from | +| | | the upstream OpenZFS code. | +| | | Since this change doesn't | +| | | address a Linux specific | +| | | issue it doesn't need to be | +| | | ported. The eventual plan | +| | | is to retire all of the | +| | | existing libshare code and | +| | | use the ZED to more | +| | | flexibly control filesystem | +| | | sharing. | ++------------------+-------------------+-----------------------------+ +| 7512 | - | None of the illumos build | +| | | system is used under Linux. | ++------------------+-------------------+-----------------------------+ +| 7497 | - | DTrace is isn't readily | +| | | available under Linux. 
| ++------------------+-------------------+-----------------------------+ +| 7446 | ! | Need to assess | +| | | applicability to Linux. | ++------------------+-------------------+-----------------------------+ +| 7430 | 68cbd56 | | ++------------------+-------------------+-----------------------------+ +| 7402 | 690fe64 | | ++------------------+-------------------+-----------------------------+ +| 7345 | 058ac9b | | ++------------------+-------------------+-----------------------------+ +| 7278 | - | Dynamic ARC tuning is | +| | | handled slightly | +| | | differently under Linux and | +| | | this case is covered by | +| | | arc_tuning_update() | ++------------------+-------------------+-----------------------------+ +| 7238 | - | zvol_swap test already | +| | | disabled in ZoL | ++------------------+-------------------+-----------------------------+ +| 7194 | d7958b4 | | ++------------------+-------------------+-----------------------------+ +| 7164 | b1b85c87 | | ++------------------+-------------------+-----------------------------+ +| 7041 | 33c0819 | | ++------------------+-------------------+-----------------------------+ +| 7016 | d3c2ae1 | | ++------------------+-------------------+-----------------------------+ +| 6914 | - | Under Linux the | +| | | arc_meta_limit can be tuned | +| | | with the | +| | | zfs_arc_meta_limit_percent | +| | | module option. | ++------------------+-------------------+-----------------------------+ +| 6875 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 6843 | f5f087e | | ++------------------+-------------------+-----------------------------+ +| 6841 | 4254acb | | ++------------------+-------------------+-----------------------------+ +| 6781 | 15313c5 | | ++------------------+-------------------+-----------------------------+ +| 6765 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 6764 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 6763 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 6762 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 6648 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6578 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6577 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6575 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6568 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6528 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6494 | - | The ``vdev_disk.c`` and | +| | | ``vdev_file.c`` files have | +| | | been reworked extensively | +| | | for Linux. The proposed | +| | | changes are not needed. 
| ++------------------+-------------------+-----------------------------+ +| 6468 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6465 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6434 | 472e7c6 | | ++------------------+-------------------+-----------------------------+ +| 6421 | ca0bf58 | | ++------------------+-------------------+-----------------------------+ +| 6418 | 131cc95 | | ++------------------+-------------------+-----------------------------+ +| 6391 | ee06391 | | ++------------------+-------------------+-----------------------------+ +| 6390 | 85802aa | | ++------------------+-------------------+-----------------------------+ +| 6388 | 0de7c55 | | ++------------------+-------------------+-----------------------------+ +| 6386 | 485c581 | | ++------------------+-------------------+-----------------------------+ +| 6385 | f3ad9cd | | ++------------------+-------------------+-----------------------------+ +| 6369 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6368 | 2024041 | | ++------------------+-------------------+-----------------------------+ +| 6346 | 058ac9b | | ++------------------+-------------------+-----------------------------+ +| 6334 | 1a04bab | | ++------------------+-------------------+-----------------------------+ +| 6290 | 017da6 | | ++------------------+-------------------+-----------------------------+ +| 6250 | - | Linux handles crash dumps | +| | | in a fundamentally | +| | | different way than Illumos. | +| | | The proposed changes are | +| | | not needed. | ++------------------+-------------------+-----------------------------+ +| 6249 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6248 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6220 | - | The b_thawed debug code was | +| | | unused under Linux and | +| | | removed. | ++------------------+-------------------+-----------------------------+ +| 6209 | - | The Linux user space mutex | +| | | implementation is based on | +| | | phtread primitives. | ++------------------+-------------------+-----------------------------+ +| 6095 | f866a4ea | | ++------------------+-------------------+-----------------------------+ +| 6091 | c11f100 | | ++------------------+-------------------+-----------------------------+ +| 6037 | a8bd6dc | | ++------------------+-------------------+-----------------------------+ +| 5984 | 480f626 | | ++------------------+-------------------+-----------------------------+ +| 5966 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 5961 | 22872ff | | ++------------------+-------------------+-----------------------------+ +| 5882 | 83e9986 | | ++------------------+-------------------+-----------------------------+ +| 5815 | - | This patch could be adapted | +| | | if needed use equivalent | +| | | Linux functionality. | ++------------------+-------------------+-----------------------------+ +| 5770 | c3275b5 | | ++------------------+-------------------+-----------------------------+ +| 5769 | dd26aa5 | | ++------------------+-------------------+-----------------------------+ +| 5768 | - | The change isn't relevant, | +| | | ``zfs_ctldir.c`` was | +| | | rewritten for Linux. 
| ++------------------+-------------------+-----------------------------+ +| 5766 | 4dd1893 | | ++------------------+-------------------+-----------------------------+ +| 5693 | 0f7d2a4 | | ++------------------+-------------------+-----------------------------+ +| 5692 | ! | This functionality should | +| | | be ported in such a way | +| | | that it can be integrated | +| | | with ``filefrag(8)``. | ++------------------+-------------------+-----------------------------+ +| 5684 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 5503 | 0f676dc | Proposed patch in 5503 | +| | | never upstreamed, | +| | | alternative fix deployed | +| | | with OpenZFS 7072 | ++------------------+-------------------+-----------------------------+ +| 5502 | f0ed6c7 | Proposed patch in 5502 | +| | | never upstreamed, | +| | | alternative fix deployed | +| | | in ZoL with commit f0ed6c7 | ++------------------+-------------------+-----------------------------+ +| 5410 | 0bf8501 | | ++------------------+-------------------+-----------------------------+ +| 5409 | b23d543 | | ++------------------+-------------------+-----------------------------+ +| 5379 | - | This particular issue never | +| | | impacted Linux due to the | +| | | need for a modified | +| | | zfs_putpage() | +| | | implementation. | ++------------------+-------------------+-----------------------------+ +| 5316 | - | The illumos idmap facility | +| | | isn't available under | +| | | Linux. This patch could | +| | | still be applied to | +| | | minimize code delta or all | +| | | HAVE_IDMAP chunks could be | +| | | removed on Linux for better | +| | | readability. | ++------------------+-------------------+-----------------------------+ +| 5313 | ec8501e | | ++------------------+-------------------+-----------------------------+ +| 5312 | ! | This change should be made | +| | | but the ideal time to do it | +| | | is when the spl repository | +| | | is folded in to the zfs | +| | | repository (planned for | +| | | 0.8). At this time we'll | +| | | want to cleanup many of the | +| | | includes. | ++------------------+-------------------+-----------------------------+ +| 5219 | ef56b07 | | ++------------------+-------------------+-----------------------------+ +| 5179 | 3f4058c | | ++------------------+-------------------+-----------------------------+ +| 5154 | 9a49d3f | Illumos ticket 5154 never | +| | | landed in openzfs/openzfs, | +| | | alternative fix deployed | +| | | in ZoL with commit 9a49d3f | ++------------------+-------------------+-----------------------------+ +| 5149 | - | Equivalent Linux | +| | | functionality is provided | +| | | by the | +| | | ``zvol_max_discard_blocks`` | +| | | module option. | ++------------------+-------------------+-----------------------------+ +| 5148 | - | Discards are handled | +| | | differently under Linux, | +| | | there is no DKIOCFREE | +| | | ioctl. 
| ++------------------+-------------------+-----------------------------+ +| 5136 | e8b96c6 | | ++------------------+-------------------+-----------------------------+ +| 4752 | aa9af22 | | ++------------------+-------------------+-----------------------------+ +| 4745 | 411bf20 | | ++------------------+-------------------+-----------------------------+ +| 4698 | 4fcc437 | | ++------------------+-------------------+-----------------------------+ +| 4620 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 4573 | 10b7549 | | ++------------------+-------------------+-----------------------------+ +| 4571 | 6e1b9d0 | | ++------------------+-------------------+-----------------------------+ +| 4570 | b1d13a6 | | ++------------------+-------------------+-----------------------------+ +| 4391 | 78e2739 | | ++------------------+-------------------+-----------------------------+ +| 4465 | cda0317 | | ++------------------+-------------------+-----------------------------+ +| 4263 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 4242 | - | Neither vnodes or their | +| | | associated events exist | +| | | under Linux. | ++------------------+-------------------+-----------------------------+ +| 4206 | 2820bc4 | | ++------------------+-------------------+-----------------------------+ +| 4188 | 2e7b765 | | ++------------------+-------------------+-----------------------------+ +| 4181 | 44f09cd | | ++------------------+-------------------+-----------------------------+ +| 4161 | - | The Linux user space | +| | | reader/writer | +| | | implementation is based on | +| | | phtread primitives. | ++------------------+-------------------+-----------------------------+ +| 4128 | ! | The | +| | | ldi_ev_register_callbacks() | +| | | interface doesn't exist | +| | | under Linux. It may be | +| | | possible to receive similar | +| | | notifications via the scsi | +| | | error handlers or possibly | +| | | a different interface. | ++------------------+-------------------+-----------------------------+ +| 4072 | - | None of the illumos build | +| | | system is used under Linux. | ++------------------+-------------------+-----------------------------+ +| 3998 | 417104bd | Illumos ticket 3998 never | +| | | landed in openzfs/openzfs, | +| | | alternative fix deployed | +| | | in ZoL. | ++------------------+-------------------+-----------------------------+ +| 3947 | 7f9d994 | | ++------------------+-------------------+-----------------------------+ +| 3928 | - | Neither vnodes or their | +| | | associated events exist | +| | | under Linux. | ++------------------+-------------------+-----------------------------+ +| 3871 | d1d7e268 | | ++------------------+-------------------+-----------------------------+ +| 3747 | 090ff09 | | ++------------------+-------------------+-----------------------------+ +| 3705 | - | The Linux implementation | +| | | uses the lz4 workspace kmem | +| | | cache to resolve the stack | +| | | issue. | ++------------------+-------------------+-----------------------------+ +| 3606 | c5b247f | | ++------------------+-------------------+-----------------------------+ +| 3580 | - | Linux provides generic | +| | | ioctl handlers get/set | +| | | block device information. 
| ++------------------+-------------------+-----------------------------+ +| 3543 | 8dca0a9 | | ++------------------+-------------------+-----------------------------+ +| 3512 | 67629d0 | | ++------------------+-------------------+-----------------------------+ +| 3507 | 43a696e | | ++------------------+-------------------+-----------------------------+ +| 3444 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 3371 | 44f09cd | | ++------------------+-------------------+-----------------------------+ +| 3311 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 3301 | - | The Linux implementation of | +| | | ``vdev_disk.c`` does not | +| | | include this comment. | ++------------------+-------------------+-----------------------------+ +| 3258 | 9d81146 | | ++------------------+-------------------+-----------------------------+ +| 3254 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 3246 | cc92e9d | | ++------------------+-------------------+-----------------------------+ +| 2933 | - | None of the illumos build | +| | | system is used under Linux. | ++------------------+-------------------+-----------------------------+ +| 2897 | fb82700 | | ++------------------+-------------------+-----------------------------+ +| 2665 | 32a9872 | | ++------------------+-------------------+-----------------------------+ +| 2130 | 460a021 | | ++------------------+-------------------+-----------------------------+ +| 1974 | - | This change was entirely | +| | | replaced in the ARC | +| | | restructuring. | ++------------------+-------------------+-----------------------------+ +| 1898 | - | The zfs_putpage() function | +| | | was rewritten to properly | +| | | integrate with the Linux | +| | | VM. | ++------------------+-------------------+-----------------------------+ +| 1700 | - | Not applicable to Linux, | +| | | the discard implementation | +| | | is entirely different. | ++------------------+-------------------+-----------------------------+ +| 1618 | ca67b33 | | ++------------------+-------------------+-----------------------------+ +| 1337 | 2402458 | | ++------------------+-------------------+-----------------------------+ +| 1126 | e43b290 | | ++------------------+-------------------+-----------------------------+ +| 763 | 3cee226 | | ++------------------+-------------------+-----------------------------+ +| 742 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 701 | 460a021 | | ++------------------+-------------------+-----------------------------+ +| 348 | - | The Linux implementation of | +| | | ``vdev_disk.c`` must have | +| | | this differently. | ++------------------+-------------------+-----------------------------+ +| 243 | - | Manual updates have been | +| | | made separately for Linux. | ++------------------+-------------------+-----------------------------+ +| 184 | - | The zfs_putpage() function | +| | | was rewritten to properly | +| | | integrate with the Linux | +| | | VM. 
| ++------------------+-------------------+-----------------------------+ diff --git a/_sources/Developer Resources/OpenZFS Patches.rst.txt b/_sources/Developer Resources/OpenZFS Patches.rst.txt new file mode 100644 index 000000000..fa622bd7c --- /dev/null +++ b/_sources/Developer Resources/OpenZFS Patches.rst.txt @@ -0,0 +1,318 @@ +OpenZFS Patches +=============== + +The ZFS on Linux project is an adaptation of the upstream `OpenZFS +repository `__ designed to work in +a Linux environment. This upstream repository acts as a location where +new features, bug fixes, and performance improvements from all the +OpenZFS platforms can be integrated. Each platform is responsible for +tracking the OpenZFS repository and merging the relevant improvements +back in to their release. + +For the ZFS on Linux project this tracking is managed through an +`OpenZFS tracking `__ +page. The page is updated regularly and shows a list of OpenZFS commits +and their status in regard to the ZFS on Linux master branch. + +This page describes the process of applying outstanding OpenZFS commits +to ZFS on Linux and submitting those changes for inclusion. As a +developer this is a great way to familiarize yourself with ZFS on Linux +and to begin quickly making a valuable contribution to the project. The +following guide assumes you have a `github +account `__, +are familiar with git, and are used to developing in a Linux +environment. + +Porting OpenZFS changes to ZFS on Linux +--------------------------------------- + +Setup the Environment +~~~~~~~~~~~~~~~~~~~~~ + +**Clone the source.** Start by making a local clone of the +`spl `__ and +`zfs `__ repositories. + +:: + + $ git clone -o zfsonlinux https://github.com/zfsonlinux/spl.git + $ git clone -o zfsonlinux https://github.com/zfsonlinux/zfs.git + +**Add remote repositories.** Using the GitHub web interface +`fork `__ the +`zfs `__ repository in to your +personal GitHub account. Add your new zfs fork and the +`openzfs `__ repository as remotes +and then fetch both repositories. The OpenZFS repository is large and +the initial fetch may take some time over a slow connection. + +:: + + $ cd zfs + $ git remote add git@github.com:/zfs.git + $ git remote add openzfs https://github.com/openzfs/openzfs.git + $ git fetch --all + +**Build the source.** Compile the spl and zfs master branches. These +branches are always kept stable and this is a useful verification that +you have a full build environment installed and all the required +dependencies are available. This may also speed up the compile time +latter for small patches where incremental builds are an option. + +:: + + $ cd ../spl + $ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc) + $ + $ cd ../zfs + $ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc) + +Pick a patch +~~~~~~~~~~~~ + +Consult the `OpenZFS +tracking `__ page and +select a patch which has not yet been applied. For your first patch you +will want to select a small patch to familiarize yourself with the +process. + +Porting a Patch +~~~~~~~~~~~~~~~ + +There are 2 methods: + +- `cherry-pick (easier) <#cherry-pick>`__ +- `manual merge <#manual-merge>`__ + +Please read about `manual merge <#manual-merge>`__ first to learn the +whole process. + +Cherry-pick +^^^^^^^^^^^ + +You can start to +`cherry-pick `__ by your own, +but we have made a special +`script `__, +which tries to +`cherry-pick `__ the patch +automatically and generates the description. 
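+
+For comparison, a bare manual cherry-pick of an upstream commit looks
+roughly like the sketch below. The issue number and commit hash are
+placeholders, and the ``openzfs`` remote is the one added during the
+environment setup above; the numbered steps that follow use the helper
+script instead, which automates most of this.
+
+::
+
+    $ git fetch openzfs
+    # Work on a dedicated branch based on master, named after the issue.
+    $ git checkout -b openzfs-XXXX master
+    # -x records the original upstream hash in the commit message.
+    $ git cherry-pick -x <openzfs_commit_hash>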
+ +0) Prepare environment: + +Mandatory git settings (add to ``~/.gitconfig``): + +:: + + [merge] + renameLimit = 999999 + [user] + email = mail@yourmail.com + name = Your Name + +Download the script: + +:: + + wget https://raw.githubusercontent.com/zfsonlinux/zfs-buildbot/master/scripts/openzfs-merge.sh + +1) Run: + +:: + + ./openzfs-merge.sh -d path_to_zfs_folder -c openzfs_commit_hash + +This command will fetch all repositories, create a new branch +``autoport-ozXXXX`` (XXXX - OpenZFS issue number), try to cherry-pick, +compile and check cstyle on success. + +If it succeeds without any merge conflicts - go to ``autoport-ozXXXX`` +branch, it will have ready to pull commit. Congratulations, you can go +to step 7! + +Otherwise you should go to step 2. + +2) Resolve all merge conflicts manually. Easy method - install + `Meld `__ or any other diff tool and run + ``git mergetool``. + +3) Check all compile and cstyle errors (See `Testing a + patch <#testing-a-patch>`__). + +4) Commit your changes with any description. + +5) Update commit description (last commit will be changed): + +:: + + ./openzfs-merge.sh -d path_to_zfs_folder -g openzfs_commit_hash + +6) Add any porting notes (if you have modified something): + ``git commit --amend`` + +7) Push your commit to github: + ``git push autoport-ozXXXX`` + +8) Create a pull request to ZoL master branch. + +9) Go to `Testing a patch <#testing-a-patch>`__ section. + +Manual merge +^^^^^^^^^^^^ + +**Create a new branch.** It is important to create a new branch for +every commit you port to ZFS on Linux. This will allow you to easily +submit your work as a GitHub pull request and it makes it possible to +work on multiple OpenZFS changes concurrently. All development branches +need to be based off of the ZFS master branch and it's helpful to name +the branches after the issue number you're working on. + +:: + + $ git checkout -b openzfs- master + +**Generate a patch.** One of the first things you'll notice about the +ZFS on Linux repository is that it is laid out differently than the +OpenZFS repository. Organizationally it is much flatter, this is +possible because it only contains the code for OpenZFS not an entire OS. +That means that in order to apply a patch from OpenZFS the path names in +the patch must be changed. A script called zfs2zol-patch.sed has been +provided to perform this translation. Use the ``git format-patch`` +command and this script to generate a patch. + +:: + + $ git format-patch --stdout ^.. | \ + ./scripts/zfs2zol-patch.sed >openzfs-.diff + +**Apply the patch.** In many cases the generated patch will apply +cleanly to the repository. However, it's important to keep in mind the +zfs2zol-patch.sed script only translates the paths. There are often +additional reasons why a patch might not apply. In some cases hunks of +the patch may not be applicable to Linux and should be dropped. In other +cases a patch may depend on other changes which must be applied first. +The changes may also conflict with Linux specific modifications. In all +of these cases the patch will need to be manually modified to apply +cleanly while preserving the its original intent. + +:: + + $ git am ./openzfs-.diff + +**Update the commit message.** By using ``git format-patch`` to generate +the patch and then ``git am`` to apply it the original comment and +authorship will be preserved. However, due to the formatting of the +OpenZFS commit you will likely find that the entire commit comment has +been squashed in to the subject line. 
Use ``git commit --amend`` to +cleanup the comment and be careful to follow `these standard +guidelines `__. + +The summary line of an OpenZFS commit is often very long and you should +truncate it to 50 characters. This is useful because it preserves the +correct formatting of ``git log --pretty=oneline`` command. Make sure to +leave a blank line between the summary and body of the commit. Then +include the full OpenZFS commit message wrapping any lines which exceed +72 characters. Finally, add a ``Ported-by`` tag with your contact +information and both a ``OpenZFS-issue`` and ``OpenZFS-commit`` tag with +appropriate links. You'll want to verify your commit contains all of the +following information: + +- The subject line from the original OpenZFS patch in the form: + "OpenZFS - short description". +- The original patch authorship should be preserved. +- The OpenZFS commit message. +- The following tags: + + - **Authored by:** Original patch author + - **Reviewed by:** All OpenZFS reviewers from the original patch. + - **Approved by:** All OpenZFS reviewers from the original patch. + - **Ported-by:** Your name and email address. + - **OpenZFS-issue:** https ://www.illumos.org/issues/issue + - **OpenZFS-commit:** https + ://github.com/openzfs/openzfs/commit/hash + +- **Porting Notes:** An optional section describing any changes + required when porting. + +For example, OpenZFS issue 6873 was `applied to +Linux `__ from this +upstream `OpenZFS +commit `__. + +:: + + OpenZFS 6873 - zfs_destroy_snaps_nvl leaks errlist + + Authored by: Chris Williamson + Reviewed by: Matthew Ahrens + Reviewed by: Paul Dagnelie + Ported-by: Denys Rtveliashvili + + lzc_destroy_snaps() returns an nvlist in errlist. + zfs_destroy_snaps_nvl() should nvlist_free() it before returning. + + OpenZFS-issue: https://www.illumos.org/issues/6873 + OpenZFS-commit: https://github.com/openzfs/openzfs/commit/ee06391 + +Testing a Patch +~~~~~~~~~~~~~~~ + +**Build the source.** Verify the patched source compiles without errors +and all warnings are resolved. + +:: + + $ make -s -j$(nproc) + +**Run the style checker.** Verify the patched source passes the style +checker, the command should return without printing any output. + +:: + + $ make cstyle + +**Open a Pull Request.** When your patch builds cleanly and passes the +style checks `open a new pull +request `__. +The pull request will be queued for `automated +testing `__. As part of the +testing the change is built for a wide range of Linux distributions and +a battery of functional and stress tests are run to detect regressions. + +:: + + $ git push openzfs- + +**Fix any issues.** Testing takes approximately 2 hours to fully +complete and the results are posted in the GitHub `pull +request `__. All the tests +are expected to pass and you should investigate and resolve any test +failures. The `test +scripts `__ +are all available and designed to run locally in order reproduce an +issue. Once you've resolved the issue force update the pull request to +trigger a new round of testing. Iterate until all the tests are passing. + +:: + + # Fix issue, amend commit, force update branch. + $ git commit --amend + $ git push --force openzfs- + +Merging the Patch +~~~~~~~~~~~~~~~~~ + +**Review.** Lastly one of the ZFS on Linux maintainers will make a final +review of the patch and may request additional changes. 
Once the +maintainer is happy with the final version of the patch they will add +their signed-off-by, merge it to the master branch, mark it complete on +the tracking page, and thank you for your contribution to the project! + +Porting ZFS on Linux changes to OpenZFS +--------------------------------------- + +Often an issue will be first fixed in ZFS on Linux or a new feature +developed. Changes which are not Linux specific should be submitted +upstream to the OpenZFS GitHub repository for review. The process for +this is described in the `OpenZFS +README `__. diff --git a/_sources/Developer Resources/index.rst.txt b/_sources/Developer Resources/index.rst.txt new file mode 100644 index 000000000..3b5d62b74 --- /dev/null +++ b/_sources/Developer Resources/index.rst.txt @@ -0,0 +1,18 @@ +Developer Resources +=================== + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + :glob: + + Custom Packages + Building ZFS + Buildbot Status + Buildbot Issue Tracking + Buildbot Options + OpenZFS Tracking + OpenZFS Patches + OpenZFS Exceptions + OpenZFS Documentation + Git and GitHub for beginners diff --git a/_sources/Getting Started/Alpine Linux/Root on ZFS.rst.txt b/_sources/Getting Started/Alpine Linux/Root on ZFS.rst.txt new file mode 100644 index 000000000..e6d7fba2b --- /dev/null +++ b/_sources/Getting Started/Alpine Linux/Root on ZFS.rst.txt @@ -0,0 +1,561 @@ +.. highlight:: sh + +Alpine Linux Root on ZFS +======================== + +.. ifconfig:: zfs_root_test + + :: + + # For the CI/CD test run of this guide, + # Enable verbose logging of bash shell and fail immediately when + # a commmand fails. + set -vxeuf + distro=${1} + + cp /etc/resolv.conf ./"rootfs-${distro}"/etc/resolv.conf + arch-chroot ./"rootfs-${distro}" sh <<-'ZFS_ROOT_GUIDE_TEST' + + set -vxeuf + + # install alpine setup scripts + apk update + apk add alpine-conf curl + +**ZFSBootMenu** + +This tutorial is based on the GRUB bootloader. Due to its independent +implementation of a read-only ZFS driver, GRUB only supports a subset +of ZFS features on the boot pool. [In general, bootloader treat disks +as read-only to minimize the risk of damaging on-disk data.] + +`ZFSBootMenu `__ is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details. + +**Customization** + +Unless stated otherwise, it is not recommended to customize system +configuration before reboot. + +**Only use well-tested pool features** + +You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, `this comment `__. + +Preparation +--------------------------- + +#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled. +#. Download latest extended variant of `Alpine Linux + live image + `__, + verify `checksum `__ + and boot from it. + + .. code-block:: sh + + gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc + + dd if=input-file of=output-file bs=1M + + .. ifconfig:: zfs_root_test + + # check whether the download page exists + # alpine version must be in sync with ci/cd test chroot tarball + curl --head --fail https://dl-cdn.alpinelinux.org/alpine/v3.18/releases/x86_64/alpine-extended-3.18.4-x86_64.iso + curl --head --fail https://dl-cdn.alpinelinux.org/alpine/v3.18/releases/x86_64/alpine-extended-3.18.4-x86_64.iso.asc + +#. Login as root user. 
There is no password. +#. Configure Internet + + .. code-block:: sh + + setup-interfaces -r + # You must use "-r" option to start networking services properly + # example: + network interface: wlan0 + WiFi name: + ip address: dhcp + + manual netconfig: n + +#. If you are using wireless network and it is not shown, see `Alpine + Linux wiki + `__ for + further details. ``wpa_supplicant`` can be installed with ``apk + add wpa_supplicant`` without internet connection. + +#. Configure SSH server + + .. code-block:: sh + + setup-sshd + # example: + ssh server: openssh + allow root: "prohibit-password" or "yes" + ssh key: "none" or "" + + Configurations set here will be copied verbatim to the installed system. + +#. Set root password or ``/root/.ssh/authorized_keys``. + + Choose a strong root password, as it will be copied to the + installed system. However, ``authorized_keys`` is not copied. + +#. Connect from another computer + + .. code-block:: sh + + ssh root@192.168.1.91 + +#. Configure NTP client for time synchronization + + .. code-block:: sh + + setup-ntp busybox + + .. ifconfig:: zfs_root_test + + # this step is unnecessary for chroot and returns 1 when executed + +#. Set up apk-repo. A list of available mirrors is shown. + Press space bar to continue + + .. code-block:: sh + + setup-apkrepos + +#. Throughout this guide, we use predictable disk names generated by + udev + + .. code-block:: sh + + apk update + apk add eudev + setup-devd udev + + It can be removed after reboot with ``setup-devd mdev && apk del eudev``. + + .. ifconfig:: zfs_root_test + + # for some reason, udev is extremely slow in chroot + # it is not needed for chroot anyway. so, skip this step + +#. Target disk + + List available disks with + + .. code-block:: sh + + find /dev/disk/by-id/ + + If virtio is used as disk bus, power off the VM and set serial numbers for disk. + For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``. + For libvirt, edit domain XML. See `this page + `__ for examples. + + Declare disk array + + .. code-block:: sh + + DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR' + + For single disk installation, use + + .. code-block:: sh + + DISK='/dev/disk/by-id/disk1' + + .. ifconfig:: zfs_root_test + + # for github test run, use chroot and loop devices + DISK="$(losetup -a| grep alpine | cut -f1 -d: | xargs -t -I '{}' printf '{} ')" + # for maintenance guide test + DISK="$(losetup -a| grep maintenance | cut -f1 -d: | xargs -t -I '{}' printf '{} ') ${DISK}" + +#. Set a mount point + :: + + MNT=$(mktemp -d) + +#. Set partition size: + + Set swap size in GB, set to 1 if you don't want swap to + take up too much space + + .. code-block:: sh + + SWAPSIZE=4 + + .. ifconfig:: zfs_root_test + + # For the test run, use 1GB swap space to avoid hitting CI/CD + # quota + SWAPSIZE=1 + + Set how much space should be left at the end of the disk, minimum 1GB + + :: + + RESERVE=1 + +#. Install ZFS support from live media:: + + apk add zfs + +#. Install bootloader programs and partition tool + :: + + apk add grub-bios grub-efi parted e2fsprogs cryptsetup util-linux + +System Installation +--------------------------- + +#. Partition the disks. + + Note: you must clear all existing partition tables and data structures from target disks. 
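+
+   If a disk is not flash-based, or the device does not support
+   ``blkdiscard``, old filesystem and RAID signatures can instead be
+   wiped with ``wipefs`` from the ``util-linux`` package installed
+   earlier (a minimal sketch, run against every disk in the array)::
+
+      for i in ${DISK}; do
+         wipefs -a "${i}"
+      done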
+ + For flash-based storage, this can be done by the blkdiscard command below: + :: + + partition_disk () { + local disk="${1}" + blkdiscard -f "${disk}" || true + + parted --script --align=optimal "${disk}" -- \ + mklabel gpt \ + mkpart EFI 2MiB 1GiB \ + mkpart bpool 1GiB 5GiB \ + mkpart rpool 5GiB -$((SWAPSIZE + RESERVE))GiB \ + mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \ + mkpart BIOS 1MiB 2MiB \ + set 1 esp on \ + set 5 bios_grub on \ + set 5 legacy_boot on + + partprobe "${disk}" + } + + for i in ${DISK}; do + partition_disk "${i}" + done + + .. ifconfig:: zfs_root_test + + :: + + # When working with GitHub chroot runners, we are using loop + # devices as installation target. However, the alias support for + # loop device was just introduced in March 2023. See + # https://github.com/systemd/systemd/pull/26693 + # For now, we will create the aliases maunally as a workaround + looppart="1 2 3 4 5" + for i in ${DISK}; do + for j in ${looppart}; do + if test -e "${i}p${j}"; then + ln -s "${i}p${j}" "${i}-part${j}" + fi + done + done + +#. Setup encrypted swap. This is useful if the available memory is + small:: + + for i in ${DISK}; do + cryptsetup open --type plain --key-file /dev/random "${i}"-part4 "${i##*/}"-part4 + mkswap /dev/mapper/"${i##*/}"-part4 + swapon /dev/mapper/"${i##*/}"-part4 + done + +#. Load ZFS kernel module + + .. code-block:: sh + + modprobe zfs + +#. Create boot pool + :: + + # shellcheck disable=SC2046 + zpool create -o compatibility=legacy \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl \ + -O canmount=off \ + -O devices=off \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=/boot \ + -R "${MNT}" \ + bpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part2"; + done) + + If not using a multi-disk setup, remove ``mirror``. + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. + +#. Create root pool + :: + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O compression=zstd \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=/ \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part3"; + done) + + If not using a multi-disk setup, remove ``mirror``. + +#. Create root system container: + + - Unencrypted + + :: + + zfs create \ + -o canmount=off \ + -o mountpoint=none \ + rpool/alpinelinux + + - Encrypted: + + Avoid ZFS send/recv when using native encryption, see `a ZFS developer's comment on this issue`__ and `this spreadsheet of bugs`__. A LUKS-based guide has yet to be written. Once compromised, changing password will not keep your + data safe. See ``zfs-change-key(8)`` for more info + + .. code-block:: sh + + zfs create \ + -o canmount=off \ + -o mountpoint=none \ + -o encryption=on \ + -o keylocation=prompt \ + -o keyformat=passphrase \ + rpool/alpinelinux + + You can automate this step (insecure) with: ``echo POOLPASS | zfs create ...``. 
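+
+   Whichever variant you chose, the resulting dataset properties can be
+   verified afterwards with a quick sanity check (optional, not part of
+   the installation procedure itself)::
+
+      zfs get encryption,keyformat,keylocation,canmount rpool/alpinelinux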
+ + Create system datasets, + manage mountpoints with ``mountpoint=legacy`` + :: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/alpinelinux/root + zfs mount rpool/alpinelinux/root + zfs create -o mountpoint=legacy rpool/alpinelinux/home + mkdir "${MNT}"/home + mount -t zfs rpool/alpinelinux/home "${MNT}"/home + zfs create -o mountpoint=legacy rpool/alpinelinux/var + zfs create -o mountpoint=legacy rpool/alpinelinux/var/lib + zfs create -o mountpoint=legacy rpool/alpinelinux/var/log + zfs create -o mountpoint=none bpool/alpinelinux + zfs create -o mountpoint=legacy bpool/alpinelinux/root + mkdir "${MNT}"/boot + mount -t zfs bpool/alpinelinux/root "${MNT}"/boot + mkdir -p "${MNT}"/var/log + mkdir -p "${MNT}"/var/lib + mount -t zfs rpool/alpinelinux/var/lib "${MNT}"/var/lib + mount -t zfs rpool/alpinelinux/var/log "${MNT}"/var/log + +#. Format and mount ESP + :: + + for i in ${DISK}; do + mkfs.vfat -n EFI "${i}"-part1 + mkdir -p "${MNT}"/boot/efis/"${i##*/}"-part1 + mount -t vfat -o iocharset=iso8859-1 "${i}"-part1 "${MNT}"/boot/efis/"${i##*/}"-part1 + done + + mkdir -p "${MNT}"/boot/efi + mount -t vfat -o iocharset=iso8859-1 "$(echo "${DISK}" | sed "s|^ *||" | cut -f1 -d' '|| true)"-part1 "${MNT}"/boot/efi + + +System Configuration +--------------------------- + +#. Workaround for GRUB to recognize predictable disk names:: + + export ZPOOL_VDEV_NAME_PATH=YES + +#. Install system to disk + + .. code-block:: sh + + BOOTLOADER=grub setup-disk -k lts -v "${MNT}" + + GRUB installation will fail and will be reinstalled later. + The error message about ZFS kernel module can be ignored. + + .. ifconfig:: zfs_root_test + + # lts kernel will pull in tons of firmware + BOOTLOADER=grub setup-disk -k virt -v "${MNT}" + +#. Allow EFI system partition to fail at boot:: + + sed -i "s|vfat.*rw|vfat rw,nofail|" "${MNT}"/etc/fstab + +#. Chroot + + .. code-block:: sh + + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" sh + + .. ifconfig:: zfs_root_test + + :: + + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" sh <<-'ZFS_ROOT_NESTED_CHROOT' + + set -vxeuf + +#. Apply GRUB workaround + + :: + + echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile.d/zpool_vdev_name_path.sh + # shellcheck disable=SC1091 + . /etc/profile.d/zpool_vdev_name_path.sh + + # GRUB fails to detect rpool name, hard code as "rpool" + sed -i "s|rpool=.*|rpool=rpool|" /etc/grub.d/10_linux + + # BusyBox stat does not recognize zfs, replace fs detection with ZFS + sed -i 's|stat -f -c %T /|echo zfs|' /usr/sbin/grub-mkconfig + + # grub-probe fails to identify fs mounted at /boot + BOOT_DEVICE=$(zpool status -P bpool | grep -- -part2 | head -n1 | sed "s|.*/dev*|/dev|" | sed "s|part2.*|part2|") + sed -i "s|GRUB_DEVICE_BOOT=.*|GRUB_DEVICE_BOOT=${BOOT_DEVICE}|" /usr/sbin/grub-mkconfig + + The ``sed`` workaround for ``grub-mkconfig`` needs to be applied + for every GRUB update, as the update will overwrite the changes. + +#. 
Install GRUB:: + + mkdir -p /boot/efi/alpine/grub-bootdir/i386-pc/ + mkdir -p /boot/efi/alpine/grub-bootdir/x86_64-efi/ + for i in ${DISK}; do + grub-install --target=i386-pc --boot-directory \ + /boot/efi/alpine/grub-bootdir/i386-pc/ "${i}" + done + grub-install --target x86_64-efi --boot-directory \ + /boot/efi/alpine/grub-bootdir/x86_64-efi/ --efi-directory \ + /boot/efi --bootloader-id alpine --removable + if test -d /sys/firmware/efi/efivars/; then + apk add efibootmgr + grub-install --target x86_64-efi --boot-directory \ + /boot/efi/alpine/grub-bootdir/x86_64-efi/ --efi-directory \ + /boot/efi --bootloader-id alpine + fi + +#. Generate GRUB menu:: + + mkdir -p /boot/grub + grub-mkconfig -o /boot/grub/grub.cfg + cp /boot/grub/grub.cfg \ + /boot/efi/alpine/grub-bootdir/x86_64-efi/grub/grub.cfg + cp /boot/grub/grub.cfg \ + /boot/efi/alpine/grub-bootdir/i386-pc/grub/grub.cfg + + .. ifconfig:: zfs_root_test + + :: + + find /boot/efis/ -name "grub.cfg" -print0 \ + | xargs -t -0I '{}' grub-script-check -v '{}' + +#. For both legacy and EFI booting: mirror ESP content:: + + espdir=$(mktemp -d) + find /boot/efi/ -maxdepth 1 -mindepth 1 -type d -print0 \ + | xargs -t -0I '{}' cp -r '{}' "${espdir}" + find "${espdir}" -maxdepth 1 -mindepth 1 -type d -print0 \ + | xargs -t -0I '{}' sh -vxc "find /boot/efis/ -maxdepth 1 -mindepth 1 -type d -print0 | xargs -t -0I '[]' cp -r '{}' '[]'" + + .. ifconfig:: zfs_root_test + + :: + + ################################################## + # + # + # MAINTENANCE SCRIPT ENTRY POINT + # DO NOT TOUCH + # + # + ################################################# + +#. Exit chroot + + .. code-block:: sh + + exit + + .. ifconfig:: zfs_root_test + + # nested chroot ends here + ZFS_ROOT_NESTED_CHROOT + + .. ifconfig:: zfs_root_test + + :: + + # list contents of boot dir to confirm + # that the mirroring succeeded + find "${MNT}"/boot/efis/ -type d > list_of_efi_dirs + for i in ${DISK}; do + if ! grep "${i##*/}-part1/efi\|${i##*/}-part1/EFI" list_of_efi_dirs; then + echo "disk ${i} not found in efi system partition, installation error"; + cat list_of_efi_dirs + exit 1 + fi + done + +#. Unmount filesystems and create initial system snapshot + You can later create a boot environment from this snapshot. + See `Root on ZFS maintenance page <../zfs_root_maintenance.html>`__. + :: + + umount -Rl "${MNT}" + zfs snapshot -r rpool@initial-installation + zfs snapshot -r bpool@initial-installation + zpool export -a + +#. Reboot + + .. code-block:: sh + + reboot + + .. ifconfig:: zfs_root_test + + # chroot ends here + ZFS_ROOT_GUIDE_TEST + +.. _a ZFS developer's comment on this issue: https://ol.reddit.com/r/zfs/comments/10n8fsn/does_openzfs_have_a_new_developer_for_the_native/j6b8k1m/ +.. _this spreadsheet of bugs: https://docs.google.com/spreadsheets/d/1OfRSXibZ2nIE9DGK6swwBZXgXwdCPKgp4SbPZwTexCg/htmlview diff --git a/_sources/Getting Started/Alpine Linux/index.rst.txt b/_sources/Getting Started/Alpine Linux/index.rst.txt new file mode 100644 index 000000000..c9bb60eba --- /dev/null +++ b/_sources/Getting Started/Alpine Linux/index.rst.txt @@ -0,0 +1,33 @@ +Alpine Linux +============ + +Contents +-------- +.. toctree:: + :maxdepth: 1 + :glob: + + * + +Installation +------------ + +Note: this is for installing ZFS on an existing Alpine +installation. To use ZFS as root file system, +see below. + +#. Install ZFS package:: + + apk add zfs zfs-lts + +#. Load kernel module:: + + modprobe zfs + +Root on ZFS +----------- +.. 
toctree:: + :maxdepth: 1 + :glob: + + * diff --git a/_sources/Getting Started/Arch Linux/Root on ZFS.rst.txt b/_sources/Getting Started/Arch Linux/Root on ZFS.rst.txt new file mode 100644 index 000000000..f879ea605 --- /dev/null +++ b/_sources/Getting Started/Arch Linux/Root on ZFS.rst.txt @@ -0,0 +1,672 @@ +.. highlight:: sh + +.. ifconfig:: zfs_root_test + + :: + + # For the CI/CD test run of this guide, + # Enable verbose logging of bash shell and fail immediately when + # a commmand fails. + set -vxeuf + distro=${1} + + cp /etc/resolv.conf ./"rootfs-${distro}"/etc/resolv.conf + arch-chroot ./"rootfs-${distro}" sh <<-'ZFS_ROOT_GUIDE_TEST' + + set -vxeuf + + # install alpine setup scripts + apk update + apk add alpine-conf curl + +.. In this document, there are three types of code-block markups: + ``::`` are commands intended for both the vm test and the users + ``.. ifconfig:: zfs_root_test`` are commands intended only for vm test + ``.. code-block:: sh`` are commands intended only for users + +Arch Linux Root on ZFS +======================================= + +**ZFSBootMenu** + +This tutorial is based on the GRUB bootloader. Due to its independent +implementation of a read-only ZFS driver, GRUB only supports a subset +of ZFS features on the boot pool. [In general, bootloader treat disks +as read-only to minimize the risk of damaging on-disk data.] + +`ZFSBootMenu `__ is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details. + +**Customization** + +Unless stated otherwise, it is not recommended to customize system +configuration before reboot. + +**Only use well-tested pool features** + +You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, `this comment `__. + +Preparation +--------------------------- + +#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled. +#. Because the kernel of latest Live CD might be incompatible with + ZFS, we will use Alpine Linux Extended, which ships with ZFS by + default. + + Download latest extended variant of `Alpine Linux + live image + `__, + verify `checksum `__ + and boot from it. + + .. code-block:: sh + + gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc + + dd if=input-file of=output-file bs=1M + + .. ifconfig:: zfs_root_test + + # check whether the download page exists + # alpine version must be in sync with ci/cd test chroot tarball + +#. Login as root user. There is no password. +#. Configure Internet + + .. code-block:: sh + + setup-interfaces -r + # You must use "-r" option to start networking services properly + # example: + network interface: wlan0 + WiFi name: + ip address: dhcp + + manual netconfig: n + +#. If you are using wireless network and it is not shown, see `Alpine + Linux wiki + `__ for + further details. ``wpa_supplicant`` can be installed with ``apk + add wpa_supplicant`` without internet connection. + +#. Configure SSH server + + .. code-block:: sh + + setup-sshd + # example: + ssh server: openssh + allow root: "prohibit-password" or "yes" + ssh key: "none" or "" + +#. Set root password or ``/root/.ssh/authorized_keys``. + +#. Connect from another computer + + .. code-block:: sh + + ssh root@192.168.1.91 + +#. Configure NTP client for time synchronization + + .. code-block:: sh + + setup-ntp busybox + + .. 
ifconfig:: zfs_root_test + + # this step is unnecessary for chroot and returns 1 when executed + +#. Set up apk-repo. A list of available mirrors is shown. + Press space bar to continue + + .. code-block:: sh + + setup-apkrepos + +#. Throughout this guide, we use predictable disk names generated by + udev + + .. code-block:: sh + + apk update + apk add eudev + setup-devd udev + + .. ifconfig:: zfs_root_test + + # for some reason, udev is extremely slow in chroot + # it is not needed for chroot anyway. so, skip this step + +#. Target disk + + List available disks with + + .. code-block:: sh + + find /dev/disk/by-id/ + + If virtio is used as disk bus, power off the VM and set serial numbers for disk. + For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``. + For libvirt, edit domain XML. See `this page + `__ for examples. + + Declare disk array + + .. code-block:: sh + + DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR' + + For single disk installation, use + + .. code-block:: sh + + DISK='/dev/disk/by-id/disk1' + + .. ifconfig:: zfs_root_test + + # for github test run, use chroot and loop devices + DISK="$(losetup -a| grep archlinux | cut -f1 -d: | xargs -t -I '{}' printf '{} ')" + +#. Set a mount point + :: + + MNT=$(mktemp -d) + +#. Set partition size: + + Set swap size in GB, set to 1 if you don't want swap to + take up too much space + + .. code-block:: sh + + SWAPSIZE=4 + + .. ifconfig:: zfs_root_test + + # For the test run, use 1GB swap space to avoid hitting CI/CD + # quota + SWAPSIZE=1 + + Set how much space should be left at the end of the disk, minimum 1GB + + :: + + RESERVE=1 + +#. Install ZFS support from live media:: + + apk add zfs + +#. Install partition tool + :: + + apk add parted e2fsprogs cryptsetup util-linux + +System Installation +--------------------------- + +#. Partition the disks. + + Note: you must clear all existing partition tables and data structures from target disks. + + For flash-based storage, this can be done by the blkdiscard command below: + :: + + partition_disk () { + local disk="${1}" + blkdiscard -f "${disk}" || true + + parted --script --align=optimal "${disk}" -- \ + mklabel gpt \ + mkpart EFI 2MiB 1GiB \ + mkpart bpool 1GiB 5GiB \ + mkpart rpool 5GiB -$((SWAPSIZE + RESERVE))GiB \ + mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \ + mkpart BIOS 1MiB 2MiB \ + set 1 esp on \ + set 5 bios_grub on \ + set 5 legacy_boot on + + partprobe "${disk}" + } + + for i in ${DISK}; do + partition_disk "${i}" + done + + .. ifconfig:: zfs_root_test + + :: + + # When working with GitHub chroot runners, we are using loop + # devices as installation target. However, the alias support for + # loop device was just introduced in March 2023. See + # https://github.com/systemd/systemd/pull/26693 + # For now, we will create the aliases maunally as a workaround + looppart="1 2 3 4 5" + for i in ${DISK}; do + for j in ${looppart}; do + if test -e "${i}p${j}"; then + ln -s "${i}p${j}" "${i}-part${j}" + fi + done + done + +#. Setup encrypted swap. This is useful if the available memory is + small:: + + for i in ${DISK}; do + cryptsetup open --type plain --key-file /dev/random "${i}"-part4 "${i##*/}"-part4 + mkswap /dev/mapper/"${i##*/}"-part4 + swapon /dev/mapper/"${i##*/}"-part4 + done + +#. Load ZFS kernel module + + .. code-block:: sh + + modprobe zfs + +#. 
Create boot pool + :: + + # shellcheck disable=SC2046 + zpool create -o compatibility=legacy \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl \ + -O canmount=off \ + -O devices=off \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=/boot \ + -R "${MNT}" \ + bpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part2"; + done) + + If not using a multi-disk setup, remove ``mirror``. + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. + +#. Create root pool + :: + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O compression=zstd \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=/ \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part3"; + done) + + If not using a multi-disk setup, remove ``mirror``. + +#. Create root system container: + + - Unencrypted + + :: + + zfs create \ + -o canmount=off \ + -o mountpoint=none \ + rpool/archlinux + + - Encrypted: + + Avoid ZFS send/recv when using native encryption, see `a ZFS developer's comment on this issue`__ and `this spreadsheet of bugs`__. A LUKS-based guide has yet to be written. Once compromised, changing password will not keep your + data safe. See ``zfs-change-key(8)`` for more info + + .. code-block:: sh + + zfs create \ + -o canmount=off \ + -o mountpoint=none \ + -o encryption=on \ + -o keylocation=prompt \ + -o keyformat=passphrase \ + rpool/archlinux + + You can automate this step (insecure) with: ``echo POOLPASS | zfs create ...``. + + Create system datasets, + manage mountpoints with ``mountpoint=legacy`` + :: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/archlinux/root + zfs mount rpool/archlinux/root + zfs create -o mountpoint=legacy rpool/archlinux/home + mkdir "${MNT}"/home + mount -t zfs rpool/archlinux/home "${MNT}"/home + zfs create -o mountpoint=legacy rpool/archlinux/var + zfs create -o mountpoint=legacy rpool/archlinux/var/lib + zfs create -o mountpoint=legacy rpool/archlinux/var/log + zfs create -o mountpoint=none bpool/archlinux + zfs create -o mountpoint=legacy bpool/archlinux/root + mkdir "${MNT}"/boot + mount -t zfs bpool/archlinux/root "${MNT}"/boot + mkdir -p "${MNT}"/var/log + mkdir -p "${MNT}"/var/lib + mount -t zfs rpool/archlinux/var/lib "${MNT}"/var/lib + mount -t zfs rpool/archlinux/var/log "${MNT}"/var/log + +#. Format and mount ESP + :: + + for i in ${DISK}; do + mkfs.vfat -n EFI "${i}"-part1 + mkdir -p "${MNT}"/boot/efis/"${i##*/}"-part1 + mount -t vfat -o iocharset=iso8859-1 "${i}"-part1 "${MNT}"/boot/efis/"${i##*/}"-part1 + done + + mkdir -p "${MNT}"/boot/efi + mount -t vfat -o iocharset=iso8859-1 "$(echo "${DISK}" | sed "s|^ *||" | cut -f1 -d' '|| true)"-part1 "${MNT}"/boot/efi + +System Configuration +--------------------------- + +#. 
Download and extract minimal Arch Linux root filesystem:: + + apk add curl + + curl --fail-early --fail -L \ + https://america.archive.pkgbuild.com/iso/2023.09.01/archlinux-bootstrap-x86_64.tar.gz \ + -o rootfs.tar.gz + curl --fail-early --fail -L \ + https://america.archive.pkgbuild.com/iso/2023.09.01/archlinux-bootstrap-x86_64.tar.gz.sig \ + -o rootfs.tar.gz.sig + + apk add gnupg + gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify rootfs.tar.gz.sig + + ln -s "${MNT}" "${MNT}"/root.x86_64 + tar x -C "${MNT}" -af rootfs.tar.gz root.x86_64 + +#. Enable community repo + + .. code-block:: sh + + sed -i '/edge/d' /etc/apk/repositories + sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories + +#. Generate fstab:: + + apk add arch-install-scripts + genfstab -t PARTUUID "${MNT}" \ + | grep -v swap \ + | sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \ + > "${MNT}"/etc/fstab + +#. Chroot + + .. code-block:: sh + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash + + .. ifconfig:: zfs_root_test + + :: + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash <<-'ZFS_ROOT_NESTED_CHROOT' + + set -vxeuf + +#. Add archzfs repo to pacman config + + :: + + pacman-key --init + pacman-key --refresh-keys + pacman-key --populate + + curl --fail-early --fail -L https://archzfs.com/archzfs.gpg \ + | pacman-key -a - --gpgdir /etc/pacman.d/gnupg + + pacman-key \ + --lsign-key \ + --gpgdir /etc/pacman.d/gnupg \ + DDF7DB817396A49B2A2723F7403BD972F75D9D76 + + tee -a /etc/pacman.d/mirrorlist-archzfs <<- 'EOF' + ## See https://github.com/archzfs/archzfs/wiki + ## France + #,Server = https://archzfs.com/$repo/$arch + + ## Germany + #,Server = https://mirror.sum7.eu/archlinux/archzfs/$repo/$arch + #,Server = https://mirror.biocrafting.net/archlinux/archzfs/$repo/$arch + + ## India + #,Server = https://mirror.in.themindsmaze.com/archzfs/$repo/$arch + + ## United States + #,Server = https://zxcvfdsa.com/archzfs/$repo/$arch + EOF + + tee -a /etc/pacman.conf <<- 'EOF' + + #[archzfs-testing] + #Include = /etc/pacman.d/mirrorlist-archzfs + + #,[archzfs] + #,Include = /etc/pacman.d/mirrorlist-archzfs + EOF + + # this #, prefix is a workaround for ci/cd tests + # remove them + sed -i 's|#,||' /etc/pacman.d/mirrorlist-archzfs + sed -i 's|#,||' /etc/pacman.conf + sed -i 's|^#||' /etc/pacman.d/mirrorlist + +#. Install base packages:: + + pacman -Sy + pacman -S --noconfirm mg mandoc grub efibootmgr mkinitcpio + + kernel_compatible_with_zfs="$(pacman -Si zfs-linux \ + | grep 'Depends On' \ + | sed "s|.*linux=||" \ + | awk '{ print $1 }')" + pacman -U --noconfirm https://america.archive.pkgbuild.com/packages/l/linux/linux-"${kernel_compatible_with_zfs}"-x86_64.pkg.tar.zst + +#. Install zfs packages:: + + pacman -S --noconfirm zfs-linux zfs-utils + + +#. Configure mkinitcpio:: + + sed -i 's|filesystems|zfs filesystems|' /etc/mkinitcpio.conf + mkinitcpio -P + +#. For physical machine, install firmware + + .. code-block:: sh + + pacman -S linux-firmware intel-ucode amd-ucode + +#. Enable internet time synchronisation:: + + systemctl enable systemd-timesyncd + +#. Generate host id:: + + zgenhostid -f -o /etc/hostid + +#. 
Generate locales:: + + echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen + locale-gen + +#. Set locale, keymap, timezone, hostname + + :: + + rm -f /etc/localtime + systemd-firstboot \ + --force \ + --locale=en_US.UTF-8 \ + --timezone=Etc/UTC \ + --hostname=testhost \ + --keymap=us + +#. Set root passwd + :: + + printf 'root:yourpassword' | chpasswd + +Bootloader +--------------------------- + + +#. Apply GRUB workaround + + :: + + echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile.d/zpool_vdev_name_path.sh + # shellcheck disable=SC1091 + . /etc/profile.d/zpool_vdev_name_path.sh + + # GRUB fails to detect rpool name, hard code as "rpool" + sed -i "s|rpool=.*|rpool=rpool|" /etc/grub.d/10_linux + + This workaround needs to be applied for every GRUB update, as the + update will overwrite the changes. + +#. Install GRUB:: + + mkdir -p /boot/efi/archlinux/grub-bootdir/i386-pc/ + mkdir -p /boot/efi/archlinux/grub-bootdir/x86_64-efi/ + for i in ${DISK}; do + grub-install --target=i386-pc --boot-directory \ + /boot/efi/archlinux/grub-bootdir/i386-pc/ "${i}" + done + grub-install --target x86_64-efi --boot-directory \ + /boot/efi/archlinux/grub-bootdir/x86_64-efi/ --efi-directory \ + /boot/efi --bootloader-id archlinux --removable + if test -d /sys/firmware/efi/efivars/; then + grub-install --target x86_64-efi --boot-directory \ + /boot/efi/archlinux/grub-bootdir/x86_64-efi/ --efi-directory \ + /boot/efi --bootloader-id archlinux + fi + + +#. Import both bpool and rpool at boot:: + + echo 'GRUB_CMDLINE_LINUX="zfs_import_dir=/dev/"' >> /etc/default/grub + +#. Generate GRUB menu:: + + mkdir -p /boot/grub + grub-mkconfig -o /boot/grub/grub.cfg + cp /boot/grub/grub.cfg \ + /boot/efi/archlinux/grub-bootdir/x86_64-efi/grub/grub.cfg + cp /boot/grub/grub.cfg \ + /boot/efi/archlinux/grub-bootdir/i386-pc/grub/grub.cfg + + .. ifconfig:: zfs_root_test + + :: + + find /boot/efis/ -name "grub.cfg" -print0 \ + | xargs -t -0I '{}' grub-script-check -v '{}' + +#. For both legacy and EFI booting: mirror ESP content:: + + espdir=$(mktemp -d) + find /boot/efi/ -maxdepth 1 -mindepth 1 -type d -print0 \ + | xargs -t -0I '{}' cp -r '{}' "${espdir}" + find "${espdir}" -maxdepth 1 -mindepth 1 -type d -print0 \ + | xargs -t -0I '{}' sh -vxc "find /boot/efis/ -maxdepth 1 -mindepth 1 -type d -print0 | xargs -t -0I '[]' cp -r '{}' '[]'" + +#. Exit chroot + + .. code-block:: sh + + exit + + .. ifconfig:: zfs_root_test + + # nested chroot ends here + ZFS_ROOT_NESTED_CHROOT + + .. ifconfig:: zfs_root_test + + :: + + # list contents of boot dir to confirm + # that the mirroring succeeded + find "${MNT}"/boot/efis/ -type d > list_of_efi_dirs + for i in ${DISK}; do + if ! grep "${i##*/}-part1/efi\|${i##*/}-part1/EFI" list_of_efi_dirs; then + echo "disk ${i} not found in efi system partition, installation error"; + cat list_of_efi_dirs + exit 1 + fi + done + +#. Unmount filesystems and create initial system snapshot + You can later create a boot environment from this snapshot. + See `Root on ZFS maintenance page <../zfs_root_maintenance.html>`__. + :: + + umount -Rl "${MNT}" + zfs snapshot -r rpool@initial-installation + zfs snapshot -r bpool@initial-installation + +#. Export all pools + + .. code-block:: sh + + zpool export -a + + .. ifconfig:: zfs_root_test + + # we are now inside a chroot, where the export will fail + # export pools when we are outside chroot + +#. Reboot + + .. code-block:: sh + + reboot + + .. ifconfig:: zfs_root_test + + # chroot ends here + ZFS_ROOT_GUIDE_TEST + +.. 
_a ZFS developer's comment on this issue: https://ol.reddit.com/r/zfs/comments/10n8fsn/does_openzfs_have_a_new_developer_for_the_native/j6b8k1m/ +.. _this spreadsheet of bugs: https://docs.google.com/spreadsheets/d/1OfRSXibZ2nIE9DGK6swwBZXgXwdCPKgp4SbPZwTexCg/htmlview diff --git a/_sources/Getting Started/Arch Linux/index.rst.txt b/_sources/Getting Started/Arch Linux/index.rst.txt new file mode 100644 index 000000000..da70b659a --- /dev/null +++ b/_sources/Getting Started/Arch Linux/index.rst.txt @@ -0,0 +1,69 @@ +.. highlight:: sh + +Arch Linux +============ + +Contents +-------- +.. toctree:: + :maxdepth: 1 + :glob: + + * + +Support +------- +Reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. + +If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @ne9z +`__. + +Overview +-------- +Due to license incompatibility, +ZFS is not available in Arch Linux official repo. + +ZFS support is provided by third-party `archzfs repo `__. + +Installation +------------ + +See `Archlinux Wiki `__. + +Root on ZFS +----------- +ZFS can be used as root file system for Arch Linux. +An installation guide is available. + +.. toctree:: + :maxdepth: 1 + :glob: + + * + +Contribute +---------- +#. Fork and clone `this repo `__. + +#. Install the tools:: + + sudo pacman -S --needed python-pip make + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your "${PATH}", e.g. by adding this to ~/.bashrc: + [ -d "${HOME}"/.local/bin ] && export PATH="${HOME}"/.local/bin:"${PATH}" + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @ne9z. diff --git a/_sources/Getting Started/Debian/Debian Bookworm Root on ZFS.rst.txt b/_sources/Getting Started/Debian/Debian Bookworm Root on ZFS.rst.txt new file mode 100644 index 000000000..70f0ceae0 --- /dev/null +++ b/_sources/Getting Started/Debian/Debian Bookworm Root on ZFS.rst.txt @@ -0,0 +1,1185 @@ +.. highlight:: sh + +Debian Bookworm Root on ZFS +=========================== + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit Debian GNU/Linux Bookworm Live CD w/ GUI (e.g. gnome iso) + `__ +- `A 64-bit kernel is strongly encouraged. + `__ +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. 
Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the Debian GNU/Linux Live CD. If prompted, login with the username + ``user`` and password ``live``. Connect your system to the Internet as + appropriate (e.g. join your WiFi network). Open a terminal. + +#. Setup and update the repositories:: + + sudo vi /etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://deb.debian.org/debian bookworm main contrib non-free-firmware + + :: + + sudo apt update + +#. Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + sudo apt install --yes openssh-server + + sudo systemctl restart ssh + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh user@IP``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk zfsutils-linux + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio. 
Also when using /dev/vda, the partitions used later will be named + differently. Otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + - For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. + - When choosing a boot pool size, consider how you will use the space. A + kernel and initrd may consume around 100M. If you have multiple kernels + and take snapshots, you may find yourself low on boot pool space, + especially if you need to regenerate your initramfs images, which may be + around 85M each. Size your boot pool appropriately for your needs. + +#. If you are re-using a disk, clear it as necessary: + + Ensure swap partitions are not in use:: + + swapoff --all + + If the disk was previously used in an MD array:: + + apt install --yes mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition: + mdadm --zero-superblock --force ${DISK}-part2 + + If the disk was previously used with zfs:: + + wipefs -a $DISK + + For flash-based storage, if the disk was previously used, you may wish to + do a full-disk discard (TRIM/UNMAP), which can improve performance:: + + blkdiscard -f $DISK + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + +#. Partition your disk(s): + + Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + + Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + + Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + If you are creating a mirror or raidz topology, repeat the partitioning + commands for all the disks which will be part of the pool. + +#. Create the boot pool:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -o compatibility=grub2 \ + -o cachefile=/etc/zfs/zpool.cache \ + -O devices=off \ + -O acltype=posixacl -O xattr=sa \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + *Note:* GRUB does not support all zpool features (see + ``spa_feature_names`` in + `grub-core/fs/zfs/zfs.c `_). + We create a separate zpool for ``/boot`` here, specifying the + ``-o compatibility=grub2`` property which restricts the pool to only those + features that GRUB supports, allowing the root pool to use any/all features. + + See the section on ``Compatibility feature sets`` in the ``zpool-features`` + man page for more information. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + +#. 
Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O encryption=on -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + apt install --yes cryptsetup + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Make sure to include the ``-part4`` portion of the drive path. 
If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + + On Solaris systems, the root filesystem is cloned and the suffix is + incremented for major system changes through ``pkg image-update`` or + ``beadm``. Similar functionality was implemented in Ubuntu with the + ``zsys`` tool, though its dataset layout is more complicated, and ``zsys`` + `is on life support + `__. Even + without such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still + be used for manually created clones. That said, this HOWTO assumes a single + filesystem for ``/boot`` for simplicity. + +#. Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian + zfs mount rpool/ROOT/debian + + zfs create -o mountpoint=/boot bpool/BOOT/debian + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + chmod 700 /mnt/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices. 
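+
+   Before adding any of the optional datasets below, the layout created so
+   far can be reviewed with, for example::
+
+      zfs list -o name,canmount,mountpoint -r bpool rpool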
+ + If you wish to separate these to exclude them from snapshots:: + + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp + + If you use /srv on this system:: + + zfs create rpool/srv + + If you use /usr/local on this system:: + + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local + + If this system will have games installed:: + + zfs create rpool/var/games + + If this system will have a GUI:: + + zfs create rpool/var/lib/AccountsService + zfs create rpool/var/lib/NetworkManager + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + + If this system will store local email in /var/mail:: + + zfs create rpool/var/mail + + If this system will use Snap packages:: + + zfs create rpool/var/snap + + If you use /var/www on this system:: + + zfs create rpool/var/www + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + + **Note:** If you separate a directory required for booting (e.g. ``/etc``) + into its own dataset, you must add it to + ``ZFS_INITRD_ADDITIONAL_DATASETS`` in ``/etc/default/zfs``. Datasets + with ``canmount=off`` (like ``rpool/usr`` above) do not matter for this. + +#. Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + +#. Install the minimal system:: + + debootstrap bookworm /mnt + + The ``debootstrap`` command leaves the new system in an unconfigured state. + An alternative to using ``debootstrap`` is to copy the entirety of a + working system into the new ZFS root. + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Configure the network interface: + + Find the interface name:: + + ip addr show + + Adjust ``NAME`` below to match your interface name:: + + vi /mnt/etc/network/interfaces.d/NAME + + .. code-block:: text + + auto NAME + iface NAME inet dhcp + + Customize this file if the system is not a DHCP client. + +#. Configure the package sources:: + + vi /mnt/etc/apt/sources.list + + .. 
code-block:: sourceslist + + deb http://deb.debian.org/debian bookworm main contrib non-free-firmware + deb-src http://deb.debian.org/debian bookworm main contrib non-free-firmware + + deb http://deb.debian.org/debian-security bookworm-security main contrib non-free-firmware + deb-src http://deb.debian.org/debian-security bookworm-security main contrib non-free-firmware + + deb http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware + deb-src http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + apt update + + apt install --yes console-setup locales + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales tzdata keyboard-configuration console-setup + +#. Install ZFS in the chroot environment for the new system:: + + apt install --yes dpkg-dev linux-headers-generic linux-image-generic + + apt install --yes zfs-initramfs + + echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup does + not support ZFS + `__. + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup cryptsetup-initramfs + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \ + none luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. Install an NTP service to synchronize time. + This step is specific to Bookworm which does not install the package during + bootstrap. + Although this step is not necessary for ZFS, it is useful for internet + browsing where local clock drift can cause login failures:: + + apt install systemd-timesyncd + +#. Install GRUB + + Choose one of the following options: + + - Install GRUB for legacy (BIOS) booting:: + + apt install --yes grub-pc + + + - Install GRUB for UEFI booting:: + + apt install dosfstools + + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + apt install --yes grub-efi-amd64 shim-signed + + **Notes:** + + - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +#. Optional: Remove os-prober:: + + apt purge --yes os-prober + + This avoids error messages from `update-grub`. `os-prober` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. 
Enable importing bpool + + This ensures that ``bpool`` is always imported, regardless of whether + ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, + or whether ``zfs-import-scan.service`` is enabled. + + :: + + vi /etc/systemd/system/zfs-import-bpool.service + + .. code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + # Work-around to preserve zpool cache: + ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache + ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache + + [Install] + WantedBy=zfs-import.target + + :: + + systemctl enable zfs-import-bpool.service + + **Note:** For some disk configurations (NVMe?), this service `may fail + `__ with an error + indicating that the ``bpool`` cannot be found. If this happens, add + ``-d DISK-part3`` (replace ``DISK`` with the correct device path) to the + ``zpool import`` command. + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Optional: Install SSH:: + + apt install --yes openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +#. Optional: For ZFS native encryption or LUKS, configure Dropbear for remote + unlocking:: + + apt install --yes --no-install-recommends dropbear-initramfs + mkdir -p /etc/dropbear/initramfs + + # Optional: Convert OpenSSH server keys for Dropbear + for type in ecdsa ed25519 rsa ; do + cp /etc/ssh/ssh_host_${type}_key /tmp/openssh.key + ssh-keygen -p -N "" -m PEM -f /tmp/openssh.key + dropbearconvert openssh dropbear \ + /tmp/openssh.key \ + /etc/dropbear/initramfs/dropbear_${type}_host_key + done + rm /tmp/openssh.key + + # Add user keys in the same format as ~/.ssh/authorized_keys + vi /etc/dropbear/initramfs/authorized_keys + + # If using a static IP, set it for the initramfs environment: + vi /etc/initramfs-tools/initramfs.conf + # The syntax is: IP=ADDRESS::GATEWAY:MASK:HOSTNAME:NIC + # For example: + # IP=192.168.1.100::192.168.1.1:255.255.255.0:myhostname:ens3 + # HOSTNAME and NIC are optional. + + # Rebuild the initramfs (required when changing any of the above): + update-initramfs -u -k all + + **Notes:** + + - Converting the server keys makes Dropbear use the same keys as OpenSSH, + avoiding host key mismatch warnings. Currently, `dropbearconvert doesn't + understand the new OpenSSH private key format + `__, so the + keys need to be converted to the old PEM format first using + ``ssh-keygen``. The downside of using the same keys for both OpenSSH and + Dropbear is that the OpenSSH keys are then available on-disk, unencrypted + in the initramfs. + - Later, to use this functionality, SSH to the system (as root) while it is + prompting for the passphrase during the boot process. For ZFS native + encryption, run ``zfsunlock``. For LUKS, run ``cryptroot-unlock``. + - You can optionally add ``command="/usr/bin/zfsunlock"`` or + ``command="/bin/cryptroot-unlock"`` in front of the ``authorized_keys`` + line to force the unlock command. This way, the unlock command runs + automatically and is all that can be run. + +#. 
Optional (but kindly requested): Install popcon + + The ``popularity-contest`` package reports the list of packages install + on your system. Showing that ZFS is popular may be helpful in terms of + long-term attention from the distro. + + :: + + apt install --yes popularity-contest + + Choose Yes at the prompt. + +Step 5: GRUB Installation +------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +#. Refresh the initrd files:: + + update-initramfs -c -k all + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup + does not support ZFS + `__. + +#. Workaround GRUB's missing zpool-features support:: + + vi /etc/default/grub + # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Install the boot loader: + + #. For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the ``grub-install`` + command for each disk in the pool. + + #. For UEFI booting, install GRUB to the ESP:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=debian --recheck --no-floppy + + It is not necessary to specify the disk here. If you are creating a + mirror or raidz topology, the additional disks will be handled later. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. + + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/debian + zfs set canmount=noauto rpool/ROOT/debian + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Once the files have data, stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +Step 6: First Boot +------------------ + +#. Optional: Snapshot the initial installation:: + + zfs snapshot bpool/BOOT/debian@install + zfs snapshot rpool/ROOT/debian@install + + In the future, you will likely want to take snapshots before each + upgrade, and remove old snapshots (including this one) at some point to + save space. + +#. Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. 
Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. If this fails for rpool, mounting it on boot will fail and you will need to + ``zpool import -f rpool``, then ``exit`` in the initamfs prompt. + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + zfs create rpool/home/$username + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username + +#. Mirror GRUB + + If you installed to multiple disks, install GRUB on the additional + disks. + + - For legacy (BIOS) booting:: + + dpkg-reconfigure grub-pc + + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + + - For UEFI booting:: + + umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' + + mount /boot/efi + +Step 7: Optional: Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is `a bug report upstream +`__. + +#. Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + + You can adjust the size (the ``4G`` part) to your needs. + + The compression algorithm is set to ``zle`` because it is the cheapest + available algorithm. As this guide recommends ``ashift=12`` (4 kiB + blocks on disk), the common case of a 4 kiB page size means that no + compression algorithm can reduce I/O. The exception is all-zero pages, + which are dropped by ZFS; but some form of compression has to be enabled + to get this behavior. + +#. Configure the swap device: + + **Caution**: Always use long ``/dev/zvol`` aliases in configuration + files. Never use a short ``/dev/zdX`` device name. + + :: + + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume + + The ``RESUME=none`` is necessary to disable resuming from hibernation. + This does not work, as the zvol is not present (because the pool has not + yet been imported) at the time the resume script runs. If it is not + disabled, the boot process hangs for 30 seconds waiting for the swap + zvol to appear. + +#. Enable the swap device:: + + swapon -av + +Step 8: Full Software Installation +---------------------------------- + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Install a regular set of software:: + + tasksel --new-install + + **Note:** This will check "Debian desktop environment" and "print server" + by default. If you want a server installation, unselect those. + +#. 
Optional: Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 9: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Delete the snapshots of the initial installation:: + + sudo zfs destroy bpool/BOOT/debian@install + sudo zfs destroy rpool/ROOT/debian@install + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + sudo vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + sudo systemctl restart ssh + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + # Comment out GRUB_TERMINAL=console + # Save and quit. + + sudo update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + apt install --yes cryptsetup + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + zfs mount rpool/ROOT/debian + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. 
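+
+For example, something like the following accomplishes both steps::
+
+    echo arcsas >> /etc/initramfs-tools/modules
+    update-initramfs -c -k all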
+ +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. diff --git a/_sources/Getting Started/Debian/Debian Bullseye Root on ZFS.rst.txt b/_sources/Getting Started/Debian/Debian Bullseye Root on ZFS.rst.txt new file mode 100644 index 000000000..86be93fc7 --- /dev/null +++ b/_sources/Getting Started/Debian/Debian Bullseye Root on ZFS.rst.txt @@ -0,0 +1,1234 @@ +.. highlight:: sh + +Debian Bullseye Root on ZFS +=========================== + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Newer release available +~~~~~~~~~~~~~~~~~~~~~~~ + +- See :doc:`Debian Bookworm Root on ZFS <./Debian Bookworm Root on ZFS>` for + new installs. This guide is no longer receiving most updates. It continues + to exist for reference for existing installs that followed it. + + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit Debian GNU/Linux Bullseye Live CD w/ GUI (e.g. gnome iso) + `__ +- `A 64-bit kernel is strongly encouraged. + `__ +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. 
If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the Debian GNU/Linux Live CD. If prompted, login with the username + ``user`` and password ``live``. Connect your system to the Internet as + appropriate (e.g. join your WiFi network). Open a terminal. + +#. Setup and update the repositories:: + + sudo vi /etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://deb.debian.org/debian bullseye main contrib + + :: + + sudo apt update + +#. Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + sudo apt install --yes openssh-server + + sudo systemctl restart ssh + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh user@IP``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk zfsutils-linux + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. 
+ - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + - For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. + - When choosing a boot pool size, consider how you will use the space. A + kernel and initrd may consume around 100M. If you have multiple kernels + and take snapshots, you may find yourself low on boot pool space, + especially if you need to regenerate your initramfs images, which may be + around 85M each. Size your boot pool appropriately for your needs. + +#. If you are re-using a disk, clear it as necessary: + + Ensure swap partitions are not in use:: + + swapoff --all + + If the disk was previously used in an MD array:: + + apt install --yes mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition: + mdadm --zero-superblock --force ${DISK}-part2 + + If the disk was previously used with zfs:: + + wipefs -a $DISK + + For flash-based storage, if the disk was previously used, you may wish to + do a full-disk discard (TRIM/UNMAP), which can improve performance:: + + blkdiscard -f $DISK + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + +#. Partition your disk(s): + + Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + + Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + + Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + If you are creating a mirror or raidz topology, repeat the partitioning + commands for all the disks which will be part of the pool. + +#. Create the boot pool:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on -d \ + -o cachefile=/etc/zfs/zpool.cache \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@livelist=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@zpool_checkpoint=enabled \ + -O devices=off \ + -O acltype=posixacl -O xattr=sa \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. 
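+
+   As an optional sanity check, the feature flags actually enabled on the
+   new boot pool can be listed and compared against the set above with,
+   for example::
+
+      zpool get all bpool | grep feature@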
+ + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``device_rebuild`` feature should be safe to use (except on raidz, + which it is incompatible with), but the boot pool is small, so this does + not matter in practice. + - The ``log_spacemap`` and ``spacemap_v2`` features have been tested and + are safe to use. The boot pool is small, so these do not matter in + practice. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O encryption=on -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + apt install --yes cryptsetup + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. 
+ Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + + On Solaris systems, the root filesystem is cloned and the suffix is + incremented for major system changes through ``pkg image-update`` or + ``beadm``. Similar functionality was implemented in Ubuntu with the + ``zsys`` tool, though its dataset layout is more complicated, and ``zsys`` + `is on life support + `__. 
Even + without such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still + be used for manually created clones. That said, this HOWTO assumes a single + filesystem for ``/boot`` for simplicity. + +#. Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian + zfs mount rpool/ROOT/debian + + zfs create -o mountpoint=/boot bpool/BOOT/debian + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + chmod 700 /mnt/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices. + + If you wish to separate these to exclude them from snapshots:: + + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp + + If you use /srv on this system:: + + zfs create rpool/srv + + If you use /usr/local on this system:: + + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local + + If this system will have games installed:: + + zfs create rpool/var/games + + If this system will have a GUI:: + + zfs create rpool/var/lib/AccountsService + zfs create rpool/var/lib/NetworkManager + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + + If this system will store local email in /var/mail:: + + zfs create rpool/var/mail + + If this system will use Snap packages:: + + zfs create rpool/var/snap + + If you use /var/www on this system:: + + zfs create rpool/var/www + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + + **Note:** If you separate a directory required for booting (e.g. ``/etc``) + into its own dataset, you must add it to + ``ZFS_INITRD_ADDITIONAL_DATASETS`` in ``/etc/default/zfs``. Datasets + with ``canmount=off`` (like ``rpool/usr`` above) do not matter for this. + +#. Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + +#. Install the minimal system:: + + debootstrap bullseye /mnt + + The ``debootstrap`` command leaves the new system in an unconfigured state. + An alternative to using ``debootstrap`` is to copy the entirety of a + working system into the new ZFS root. + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4: System Configuration +---------------------------- + +#. 
Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Configure the network interface: + + Find the interface name:: + + ip addr show + + Adjust ``NAME`` below to match your interface name:: + + vi /mnt/etc/network/interfaces.d/NAME + + .. code-block:: text + + auto NAME + iface NAME inet dhcp + + Customize this file if the system is not a DHCP client. + +#. Configure the package sources:: + + vi /mnt/etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://deb.debian.org/debian bullseye main contrib + deb-src http://deb.debian.org/debian bullseye main contrib + + deb http://deb.debian.org/debian-security bullseye-security main contrib + deb-src http://deb.debian.org/debian-security bullseye-security main contrib + + deb http://deb.debian.org/debian bullseye-updates main contrib + deb-src http://deb.debian.org/debian bullseye-updates main contrib + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + ln -s /proc/self/mounts /etc/mtab + apt update + + apt install --yes console-setup locales + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales tzdata keyboard-configuration console-setup + +#. Install ZFS in the chroot environment for the new system:: + + apt install --yes dpkg-dev linux-headers-generic linux-image-generic + + apt install --yes zfs-initramfs + + echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup does + not support ZFS + `__. + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup cryptsetup-initramfs + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \ + none luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. Install an NTP service to synchronize time. + This step is specific to Bullseye which does not install the package during + bootstrap. + Although this step is not necessary for ZFS, it is useful for internet + browsing where local clock drift can cause login failures:: + + apt install systemd-timesyncd + timedatectl + + You should now see "NTP service: active" in the above ``timedatectl`` + output. + +#. Install GRUB + + Choose one of the following options: + + - Install GRUB for legacy (BIOS) booting:: + + apt install --yes grub-pc + + Select (using the space bar) all of the disks (not partitions) in your + pool. 
+ + - Install GRUB for UEFI booting:: + + apt install dosfstools + + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + apt install --yes grub-efi-amd64 shim-signed + + **Notes:** + + - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +#. Optional: Remove os-prober:: + + apt purge --yes os-prober + + This avoids error messages from `update-grub`. `os-prober` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Enable importing bpool + + This ensures that ``bpool`` is always imported, regardless of whether + ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, + or whether ``zfs-import-scan.service`` is enabled. + + :: + + vi /etc/systemd/system/zfs-import-bpool.service + + .. code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + # Work-around to preserve zpool cache: + ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache + ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache + + [Install] + WantedBy=zfs-import.target + + :: + + systemctl enable zfs-import-bpool.service + + **Note:** For some disk configurations (NVMe?), this service `may fail + `__ with an error + indicating that the ``bpool`` cannot be found. If this happens, add + ``-d DISK-part3`` (replace ``DISK`` with the correct device path) to the + ``zpool import`` command. + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Optional: Install SSH:: + + apt install --yes openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +#. Optional: For ZFS native encryption or LUKS, configure Dropbear for remote + unlocking:: + + apt install --yes --no-install-recommends dropbear-initramfs + mkdir -p /etc/dropbear-initramfs + + # Optional: Convert OpenSSH server keys for Dropbear + for type in ecdsa ed25519 rsa ; do + cp /etc/ssh/ssh_host_${type}_key /tmp/openssh.key + ssh-keygen -p -N "" -m PEM -f /tmp/openssh.key + dropbearconvert openssh dropbear \ + /tmp/openssh.key \ + /etc/dropbear-initramfs/dropbear_${type}_host_key + done + rm /tmp/openssh.key + + # Add user keys in the same format as ~/.ssh/authorized_keys + vi /etc/dropbear-initramfs/authorized_keys + + # If using a static IP, set it for the initramfs environment: + vi /etc/initramfs-tools/initramfs.conf + # The syntax is: IP=ADDRESS::GATEWAY:MASK:HOSTNAME:NIC + # For example: + # IP=192.168.1.100::192.168.1.1:255.255.255.0:myhostname:ens3 + # HOSTNAME and NIC are optional. 
+ + # Rebuild the initramfs (required when changing any of the above): + update-initramfs -u -k all + + **Notes:** + + - Converting the server keys makes Dropbear use the same keys as OpenSSH, + avoiding host key mismatch warnings. Currently, `dropbearconvert doesn't + understand the new OpenSSH private key format + `__, so the + keys need to be converted to the old PEM format first using + ``ssh-keygen``. The downside of using the same keys for both OpenSSH and + Dropbear is that the OpenSSH keys are then available on-disk, unencrypted + in the initramfs. + - Later, to use this functionality, SSH to the system (as root) while it is + prompting for the passphrase during the boot process. For ZFS native + encryption, run ``zfsunlock``. For LUKS, run ``cryptroot-unlock``. + - You can optionally add ``command="/usr/bin/zfsunlock"`` or + ``command="/bin/cryptroot-unlock"`` in front of the ``authorized_keys`` + line to force the unlock command. This way, the unlock command runs + automatically and is all that can be run. + +#. Optional (but kindly requested): Install popcon + + The ``popularity-contest`` package reports the list of packages install + on your system. Showing that ZFS is popular may be helpful in terms of + long-term attention from the distro. + + :: + + apt install --yes popularity-contest + + Choose Yes at the prompt. + +Step 5: GRUB Installation +------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +#. Refresh the initrd files:: + + update-initramfs -c -k all + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup + does not support ZFS + `__. + +#. Workaround GRUB's missing zpool-features support:: + + vi /etc/default/grub + # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Install the boot loader: + + #. For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the ``grub-install`` + command for each disk in the pool. + + #. For UEFI booting, install GRUB to the ESP:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=debian --recheck --no-floppy + + It is not necessary to specify the disk here. If you are creating a + mirror or raidz topology, the additional disks will be handled later. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. 
+ + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/debian + zfs set canmount=noauto rpool/ROOT/debian + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Once the files have data, stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +Step 6: First Boot +------------------ + +#. Optional: Snapshot the initial installation:: + + zfs snapshot bpool/BOOT/debian@install + zfs snapshot rpool/ROOT/debian@install + + In the future, you will likely want to take snapshots before each + upgrade, and remove old snapshots (including this one) at some point to + save space. + +#. Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. If this fails for rpool, mounting it on boot will fail and you will need to + ``zpool import -f rpool``, then ``exit`` in the initamfs prompt. + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + zfs create rpool/home/$username + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username + +#. Mirror GRUB + + If you installed to multiple disks, install GRUB on the additional + disks. + + - For legacy (BIOS) booting:: + + dpkg-reconfigure grub-pc + + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + + - For UEFI booting:: + + umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' + + mount /boot/efi + +Step 7: Optional: Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is `a bug report upstream +`__. + +#. Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + + You can adjust the size (the ``4G`` part) to your needs. + + The compression algorithm is set to ``zle`` because it is the cheapest + available algorithm. As this guide recommends ``ashift=12`` (4 kiB + blocks on disk), the common case of a 4 kiB page size means that no + compression algorithm can reduce I/O. The exception is all-zero pages, + which are dropped by ZFS; but some form of compression has to be enabled + to get this behavior. + +#. 
Configure the swap device: + + **Caution**: Always use long ``/dev/zvol`` aliases in configuration + files. Never use a short ``/dev/zdX`` device name. + + :: + + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume + + The ``RESUME=none`` is necessary to disable resuming from hibernation. + This does not work, as the zvol is not present (because the pool has not + yet been imported) at the time the resume script runs. If it is not + disabled, the boot process hangs for 30 seconds waiting for the swap + zvol to appear. + +#. Enable the swap device:: + + swapon -av + +Step 8: Full Software Installation +---------------------------------- + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Install a regular set of software:: + + tasksel --new-install + + **Note:** This will check "Debian desktop environment" and "print server" + by default. If you want a server installation, unselect those. + +#. Optional: Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 9: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Delete the snapshots of the initial installation:: + + sudo zfs destroy bpool/BOOT/debian@install + sudo zfs destroy rpool/ROOT/debian@install + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + sudo vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + sudo systemctl restart ssh + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + # Comment out GRUB_TERMINAL=console + # Save and quit. + + sudo update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. 
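As a quick sanity check (optional), you can first list which pools the Live CD
environment can currently see; running ``zpool import`` with no pool name only
scans and lists, it does not import anything. With LUKS, ``rpool`` will not
appear until the disks are unlocked in the next step::

    zpool import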
+ +For LUKS, first unlock the disk(s):: + + apt install --yes cryptsetup + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + zfs mount rpool/ROOT/debian + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. diff --git a/_sources/Getting Started/Debian/Debian Buster Root on ZFS.rst.txt b/_sources/Getting Started/Debian/Debian Buster Root on ZFS.rst.txt new file mode 100644 index 000000000..56a95e839 --- /dev/null +++ b/_sources/Getting Started/Debian/Debian Buster Root on ZFS.rst.txt @@ -0,0 +1,1171 @@ +.. highlight:: sh + +Debian Buster Root on ZFS +========================= + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Newer release available +~~~~~~~~~~~~~~~~~~~~~~~ + +- See :doc:`Debian Bullseye Root on ZFS <./Debian Bullseye Root on ZFS>` for + new installs. This guide is no longer receiving most updates. 
It continues + to exist for reference for existing installs that followed it. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit Debian GNU/Linux Buster Live CD w/ GUI (e.g. gnome iso) + `__ +- `A 64-bit kernel is strongly encouraged. + `__ +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the Debian GNU/Linux Live CD. If prompted, login with the username + ``user`` and password ``live``. Connect your system to the Internet as + appropriate (e.g. join your WiFi network). Open a terminal. + +#. Setup and update the repositories:: + + sudo vi /etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://deb.debian.org/debian buster main contrib + deb http://deb.debian.org/debian buster-backports main contrib + + :: + + sudo apt update + +#. 
Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + sudo apt install --yes openssh-server + + sudo systemctl restart ssh + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh user@IP``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-amd64 + + apt install --yes -t buster-backports --no-install-recommends zfs-dkms + + modprobe zfs + apt install --yes -t buster-backports zfsutils-linux + + - The dkms dependency is installed manually just so it comes from buster + and not buster-backports. This is not critical. + - We need to get the module built and loaded before installing + zfsutils-linux or `zfs-mount.service will fail to start + `__. + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + - For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. + - When choosing a boot pool size, consider how you will use the space. A + kernel and initrd may consume around 100M. If you have multiple kernels + and take snapshots, you may find yourself low on boot pool space, + especially if you need to regenerate your initramfs images, which may be + around 85M each. Size your boot pool appropriately for your needs. + +#. If you are re-using a disk, clear it as necessary: + + Ensure swap partitions are not in use:: + + swapoff --all + + If the disk was previously used in an MD array:: + + apt install --yes mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition: + mdadm --zero-superblock --force ${DISK}-part2 + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + +#. 
Partition your disk(s): + + Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + + Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + + Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + If you are creating a mirror or raidz topology, repeat the partitioning + commands for all the disks which will be part of the pool. + +#. Create the boot pool:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@zpool_checkpoint=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - The ``spacemap_v2`` feature has been tested and is safe to use. The boot + pool is small, so this does not matter in practice. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. 
Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -O encryption=on \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + apt install --yes cryptsetup + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Make sure to include the ``-part4`` portion of the drive path. 
If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + + On Solaris systems, the root filesystem is cloned and the suffix is + incremented for major system changes through ``pkg image-update`` or + ``beadm``. Similar functionality was implemented in Ubuntu with the + ``zsys`` tool, though its dataset layout is more complicated, and ``zsys`` + `is on life support + `__. Even + without such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still + be used for manually created clones. That said, this HOWTO assumes a single + filesystem for ``/boot`` for simplicity. + +#. Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian + zfs mount rpool/ROOT/debian + + zfs create -o mountpoint=/boot bpool/BOOT/debian + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + chmod 700 /mnt/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices. 
+ + If you wish to exclude these from snapshots:: + + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp + + If you use /opt on this system:: + + zfs create rpool/opt + + If you use /srv on this system:: + + zfs create rpool/srv + + If you use /usr/local on this system:: + + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local + + If this system will have games installed:: + + zfs create rpool/var/games + + If this system will store local email in /var/mail:: + + zfs create rpool/var/mail + + If this system will use Snap packages:: + + zfs create rpool/var/snap + + If you use /var/www on this system:: + + zfs create rpool/var/www + + If this system will use GNOME:: + + zfs create rpool/var/lib/AccountsService + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + + If this system will use NFS (locking):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + + Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + +#. Install the minimal system:: + + debootstrap buster /mnt + + The ``debootstrap`` command leaves the new system in an unconfigured state. + An alternative to using ``debootstrap`` is to copy the entirety of a + working system into the new ZFS root. + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Configure the network interface: + + Find the interface name:: + + ip addr show + + Adjust ``NAME`` below to match your interface name:: + + vi /mnt/etc/network/interfaces.d/NAME + + .. code-block:: text + + auto NAME + iface NAME inet dhcp + + Customize this file if the system is not a DHCP client. + +#. Configure the package sources:: + + vi /mnt/etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://deb.debian.org/debian buster main contrib + deb-src http://deb.debian.org/debian buster main contrib + + deb http://security.debian.org/debian-security buster/updates main contrib + deb-src http://security.debian.org/debian-security buster/updates main contrib + + deb http://deb.debian.org/debian buster-updates main contrib + deb-src http://deb.debian.org/debian buster-updates main contrib + + :: + + vi /mnt/etc/apt/sources.list.d/buster-backports.list + + .. 
code-block:: sourceslist + + deb http://deb.debian.org/debian buster-backports main contrib + deb-src http://deb.debian.org/debian buster-backports main contrib + + :: + + vi /mnt/etc/apt/preferences.d/90_zfs + + .. code-block:: control + + Package: src:zfs-linux + Pin: release n=buster-backports + Pin-Priority: 990 + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + ln -s /proc/self/mounts /etc/mtab + apt update + + apt install --yes console-setup locales + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales tzdata keyboard-configuration console-setup + +#. Install ZFS in the chroot environment for the new system:: + + apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64 + + apt install --yes zfs-initramfs + + echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup does + not support ZFS + `__. + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \ + none luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. Install GRUB + + Choose one of the following options: + + - Install GRUB for legacy (BIOS) booting:: + + apt install --yes grub-pc + + Select (using the space bar) all of the disks (not partitions) in your + pool. + + - Install GRUB for UEFI booting:: + + apt install dosfstools + + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + apt install --yes grub-efi-amd64 shim-signed + + **Notes:** + + - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +#. Optional: Remove os-prober:: + + apt purge --yes os-prober + + This avoids error messages from `update-grub`. `os-prober` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Enable importing bpool + + This ensures that ``bpool`` is always imported, regardless of whether + ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, + or whether ``zfs-import-scan.service`` is enabled. + + :: + + vi /etc/systemd/system/zfs-import-bpool.service + + .. 
code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + # Work-around to preserve zpool cache: + ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache + ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache + + [Install] + WantedBy=zfs-import.target + + :: + + systemctl enable zfs-import-bpool.service + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Optional: Install SSH:: + + apt install --yes openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +#. Optional (but kindly requested): Install popcon + + The ``popularity-contest`` package reports the list of packages install + on your system. Showing that ZFS is popular may be helpful in terms of + long-term attention from the distro. + + :: + + apt install --yes popularity-contest + + Choose Yes at the prompt. + +Step 5: GRUB Installation +------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +#. Refresh the initrd files:: + + update-initramfs -c -k all + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup + does not support ZFS + `__. + +#. Workaround GRUB's missing zpool-features support:: + + vi /etc/default/grub + # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Install the boot loader: + + #. For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the ``grub-install`` + command for each disk in the pool. + + #. For UEFI booting, install GRUB to the ESP:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=debian --recheck --no-floppy + + It is not necessary to specify the disk here. If you are creating a + mirror or raidz topology, the additional disks will be handled later. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. 
+ + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/debian + zfs set canmount=noauto rpool/ROOT/debian + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Once the files have data, stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +Step 6: First Boot +------------------ + +#. Optional: Snapshot the initial installation:: + + zfs snapshot bpool/BOOT/debian@install + zfs snapshot rpool/ROOT/debian@install + + In the future, you will likely want to take snapshots before each + upgrade, and remove old snapshots (including this one) at some point to + save space. + +#. Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + zfs create rpool/home/$username + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username + +#. Mirror GRUB + + If you installed to multiple disks, install GRUB on the additional + disks. + + - For legacy (BIOS) booting:: + + dpkg-reconfigure grub-pc + + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + + - For UEFI booting:: + + umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' + + mount /boot/efi + +Step 7: Optional: Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is `a bug report upstream +`__. + +#. Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + + You can adjust the size (the ``4G`` part) to your needs. + + The compression algorithm is set to ``zle`` because it is the cheapest + available algorithm. As this guide recommends ``ashift=12`` (4 kiB + blocks on disk), the common case of a 4 kiB page size means that no + compression algorithm can reduce I/O. The exception is all-zero pages, + which are dropped by ZFS; but some form of compression has to be enabled + to get this behavior. + +#. Configure the swap device: + + **Caution**: Always use long ``/dev/zvol`` aliases in configuration + files. Never use a short ``/dev/zdX`` device name. 
+ + :: + + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume + + The ``RESUME=none`` is necessary to disable resuming from hibernation. + This does not work, as the zvol is not present (because the pool has not + yet been imported) at the time the resume script runs. If it is not + disabled, the boot process hangs for 30 seconds waiting for the swap + zvol to appear. + +#. Enable the swap device:: + + swapon -av + +Step 8: Full Software Installation +---------------------------------- + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Install a regular set of software:: + + tasksel --new-install + + **Note:** This will check "Debian desktop environment" and "print server" + by default. If you want a server installation, unselect those. + +#. Optional: Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 9: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Delete the snapshots of the initial installation:: + + sudo zfs destroy bpool/BOOT/debian@install + sudo zfs destroy rpool/ROOT/debian@install + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + sudo vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + sudo systemctl restart ssh + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + # Comment out GRUB_TERMINAL=console + # Save and quit. + + sudo update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + apt install --yes cryptsetup + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. 
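Optionally, confirm that the unlocked device mappings exist before importing
the pools; the names here follow the ``luks1`` convention used above::

    ls -l /dev/mapper/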
+ +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + zfs mount rpool/ROOT/debian + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. diff --git a/_sources/Getting Started/Debian/Debian GNU Linux initrd documentation.rst.txt b/_sources/Getting Started/Debian/Debian GNU Linux initrd documentation.rst.txt new file mode 100644 index 000000000..b0dd3871d --- /dev/null +++ b/_sources/Getting Started/Debian/Debian GNU Linux initrd documentation.rst.txt @@ -0,0 +1,125 @@ +Debian GNU Linux initrd documentation +===================================== + +Supported boot parameters +************************* + +- rollback= Do a rollback of specified snapshot. +- zfs_debug= Debug the initrd script +- zfs_force= Force importing the pool. Should not be + necessary. +- zfs= Don't try to import ANY pool, mount ANY filesystem or + even load the module. +- rpool= Use this pool for root pool. +- bootfs=/ Use this dataset for root filesystem. +- root=/ Use this dataset for root filesystem. +- root=ZFS=/ Use this dataset for root filesystem. 
+- root=zfs:/ Use this dataset for root filesystem.
+- root=zfs:AUTO Try to detect both pool and rootfs
+
+In all these cases, the dataset could also be given as dataset@snapshot.
+
+The reason there are so many supported boot options for finding the root
+filesystem is that there are a lot of different ways to boot ZFS out
+there, and I wanted to make sure I supported them all.
+
+Pool imports
+************
+
+Import using /dev/disk/by-\*
+----------------------------
+
+If the variable USE_DISK_BY_ID is set in the file /etc/default/zfs, the
+initrd will try to import using the /dev/disk/by-\* links. It will try
+to import in this order:
+
+1. /dev/disk/by-vdev
+2. /dev/disk/by-\*
+3. /dev
+
+Import using cache file
+-----------------------
+
+If all of these imports fail (or if USE_DISK_BY_ID is unset), it will
+then try to import using the cache file.
+
+Last ditch attempt at importing
+-------------------------------
+
+If that ALSO fails, it will try one more time, without any -d or -c
+options.
+
+Booting
+*******
+
+Booting from snapshot:
+----------------------
+
+Enter the snapshot for the root= parameter like in this example:
+
+::
+
+   linux /BOOT/debian@/boot/vmlinuz-5.10.0-9-amd64 root=ZFS=rpool/ROOT/debian@some_snapshot ro
+
+This will clone the snapshot rpool/ROOT/debian@some_snapshot into the
+filesystem rpool/ROOT/debian_some_snapshot and use that as the root
+filesystem. The original filesystem and snapshot are left alone in this
+case.
+
+**BEWARE** that it will first blindly destroy the
+rpool/ROOT/debian_some_snapshot filesystem before trying to clone the
+snapshot into it again. So if you have booted from the same snapshot
+previously and made some changes in that root filesystem, they will be
+undone by the destruction of the filesystem.
+
+Snapshot rollback
+-----------------
+
+From version 0.6.4-1-3 it is now also possible to specify rollback=1 to
+do a rollback of the snapshot instead of cloning it. **BEWARE** that
+this will destroy *all* snapshots taken after the specified snapshot!
+
+Select snapshot dynamically
+---------------------------
+
+From version 0.6.4-1-3 it is now also possible to specify an empty
+snapshot name (such as root=rpool/ROOT/debian@). If so, the initrd
+script will discover all snapshots below that filesystem (sans the at
+sign) and output a list of snapshots for the user to choose from.
+
+Booting from native encrypted filesystem
+----------------------------------------
+
+Although there is currently no support for native encryption in ZFS On
+Linux, there is a patch floating around 'out there' and the initrd
+supports loading the key and unlocking such an encrypted filesystem.
+
+Separated filesystems
+---------------------
+
+Descended filesystems
+~~~~~~~~~~~~~~~~~~~~~
+
+If there are separate filesystems (for example, a separate dataset for
+/usr), the snapshot boot code will try to find the snapshot under each
+filesystem and clone (or roll back) it.
+
+Example:
+
+::
+
+   rpool/ROOT/debian@some_snapshot
+   rpool/ROOT/debian/usr@some_snapshot
+
+These will create the following filesystems, respectively (if not doing
+a rollback):
+
+::
+
+   rpool/ROOT/debian_some_snapshot
+   rpool/ROOT/debian/usr_some_snapshot
+
+The initrd code will use the mountpoint property (if any) of the original
+dataset (without the snapshot part) to find *where* it should mount the
+dataset. Otherwise, it will use the name of the dataset below the root
+filesystem (rpool/ROOT/debian in this example) as the mount point.
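For example, after booting from ``rpool/ROOT/debian@some_snapshot`` with a
separate ``/usr`` dataset as above, you can inspect which clones were created
and where they are mounted (dataset names follow the example above)::

   zfs list -r -o name,origin,mountpoint rpool/ROOT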
diff --git a/_sources/Getting Started/Debian/Debian Stretch Root on ZFS.rst.txt b/_sources/Getting Started/Debian/Debian Stretch Root on ZFS.rst.txt new file mode 100644 index 000000000..0c56a8075 --- /dev/null +++ b/_sources/Getting Started/Debian/Debian Stretch Root on ZFS.rst.txt @@ -0,0 +1,1079 @@ +Debian Stretch Root on ZFS +========================== + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Newer release available +~~~~~~~~~~~~~~~~~~~~~~~ + +- See :doc:`Debian Buster Root on ZFS <./Debian Buster Root on ZFS>` for new + installs. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit Debian GNU/Linux Stretch Live + CD `__ +- `A 64-bit kernel is strongly + encouraged. `__ +- Installing on a drive which presents 4KiB logical sectors (a “4Kn” + drive) only works with UEFI booting. This not unique to ZFS. `GRUB + does not and will not work on 4Kn with legacy (BIOS) + booting. `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of +memory is recommended for normal performance in basic workloads. If you +wish to use deduplication, you will need `massive amounts of +RAM `__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports two different encryption options: unencrypted and +LUKS (full-disk encryption). ZFS native encryption has not yet been +released. With either option, all ZFS features are fully available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +LUKS encrypts almost everything: the OS, swap, home directories, and +anything else. The only unencrypted data is the bootloader, kernel, and +initrd. The system cannot boot without the passphrase being entered at +the console. Performance is good, but LUKS sits underneath ZFS, so if +multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +1.1 Boot the Debian GNU/Linux Live CD. If prompted, login with the +username ``user`` and password ``live``. Connect your system to the +Internet as appropriate (e.g. join your WiFi network). + +1.2 Optional: Install and start the OpenSSH server in the Live CD +environment: + +If you have a second system, using SSH to access the target system can +be convenient. 
+ +:: + + $ sudo apt update + $ sudo apt install --yes openssh-server + $ sudo systemctl restart ssh + +**Hint:** You can find your IP address with +``ip addr show scope global | grep inet``. Then, from your main machine, +connect with ``ssh user@IP``. + +1.3 Become root: + +:: + + $ sudo -i + +1.4 Setup and update the repositories: + +:: + + # echo deb http://deb.debian.org/debian stretch contrib >> /etc/apt/sources.list + # echo deb http://deb.debian.org/debian stretch-backports main contrib >> /etc/apt/sources.list + # apt update + +1.5 Install ZFS in the Live CD environment: + +:: + + # apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-amd64 + # apt install --yes -t stretch-backports zfs-dkms + # modprobe zfs + +- The dkms dependency is installed manually just so it comes from + stretch and not stretch-backports. This is not critical. + +Step 2: Disk Formatting +----------------------- + +2.1 If you are re-using a disk, clear it as necessary: + +:: + + If the disk was previously used in an MD array, zero the superblock: + # apt install --yes mdadm + # mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1 + + Clear the partition table: + # sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1 + +2.2 Partition your disk(s): + +:: + + Run this if you need legacy (BIOS) booting: + # sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/disk/by-id/scsi-SATA_disk1 + + Run this for UEFI booting (for use now or in the future): + # sgdisk -n2:1M:+512M -t2:EF00 /dev/disk/by-id/scsi-SATA_disk1 + + Run this for the boot pool: + # sgdisk -n3:0:+1G -t3:BF01 /dev/disk/by-id/scsi-SATA_disk1 + +Choose one of the following options: + +2.2a Unencrypted: + +:: + + # sgdisk -n4:0:0 -t4:BF01 /dev/disk/by-id/scsi-SATA_disk1 + +2.2b LUKS: + +:: + + # sgdisk -n4:0:0 -t4:8300 /dev/disk/by-id/scsi-SATA_disk1 + +Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the +``/dev/sd*`` device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool. + +**Hints:** + +- ``ls -la /dev/disk/by-id`` will list the aliases. +- Are you doing this in a virtual machine? If your virtual disk is + missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using + KVM with virtio; otherwise, read the + `troubleshooting <#troubleshooting>`__ section. +- If you are creating a mirror or raidz topology, repeat the + partitioning commands for all the disks which will be part of the + pool. + +2.3 Create the boot pool: + +:: + + # zpool create -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@userobj_accounting=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ + -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt \ + bpool /dev/disk/by-id/scsi-SATA_disk1-part3 + +You should not need to customize any of the options for the boot pool. + +GRUB does not support all of the zpool features. See +``spa_feature_names`` in +`grub-core/fs/zfs/zfs.c `__. +This step creates a separate boot pool for ``/boot`` with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. 
Note that GRUB opens the pool read-only, so all +read-only compatible features are "supported" by GRUB. + +**Hints:** + +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). +- The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + +2.4 Create the root pool: + +Choose one of the following options: + +2.4a Unencrypted: + +:: + + # zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt \ + rpool /dev/disk/by-id/scsi-SATA_disk1-part4 + +2.4b LUKS: + +:: + + # apt install --yes cryptsetup + # cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 \ + /dev/disk/by-id/scsi-SATA_disk1-part4 + # cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + +- The use of ``ashift=12`` is recommended here because many drives + today have 4KiB (or larger) physical sectors, even though they + present 512B logical sectors. Also, a future replacement drive may + have 4KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4KiB logical sectors (in which case ``ashift=12`` is required). +- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase "o") to the ``zfs create`` + for ``/var/log``, as `journald requires + ACLs `__ +- Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only + filenames `__. +- Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat's + documentation `__ + for further information. +- Setting ``xattr=sa`` `vastly improves the performance of extended + attributes `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI + applications. `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain + controller. `__ + Note that ```xattr=sa`` is + Linux-specific. `__ + If you move your ``xattr=sa`` pool to another OpenZFS implementation + besides ZFS-on-Linux, extended attributes will not be readable + (though your data will be). If portability of extended attributes is + important to you, omit the ``-O xattr=sa`` above. Even if you do not + want ``xattr=sa`` for the whole pool, it is probably fine to use it + for ``/var/log``. 
+- Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). +- For LUKS, the key size chosen is 512 bits. However, XTS mode requires + two keys, so the LUKS key is split in half. Thus, ``-s 512`` means + AES-256. +- Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup + FAQ `__ + for guidance. + +**Hints:** + +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). For LUKS, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will + have to create using ``cryptsetup``. +- The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the + root pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +3.1 Create filesystem datasets to act as containers: + +:: + + # zfs create -o canmount=off -o mountpoint=none rpool/ROOT + # zfs create -o canmount=off -o mountpoint=none bpool/BOOT + +On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through ``pkg image-update`` or +``beadm``. Similar functionality for APT is possible but currently +unimplemented. Even without such a tool, it can still be used for +manually created clones. + +3.2 Create filesystem datasets for the root and boot filesystems: + +:: + + # zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian + # zfs mount rpool/ROOT/debian + + # zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/debian + # zfs mount bpool/BOOT/debian + +With ZFS, it is not normally necessary to use a mount command (either +``mount`` or ``zfs mount``). This situation is an exception because of +``canmount=noauto``. 
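If you want to double-check the result before continuing (optional), ``zfs mount``
with no arguments lists the mounted ZFS filesystems; the output should look
roughly like this (exact formatting may differ):

::

   # zfs mount
   rpool/ROOT/debian               /mnt
   bpool/BOOT/debian               /mnt/boot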
+ +3.3 Create datasets: + +:: + + # zfs create rpool/home + # zfs create -o mountpoint=/root rpool/home/root + # zfs create -o canmount=off rpool/var + # zfs create -o canmount=off rpool/var/lib + # zfs create rpool/var/log + # zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices: + + If you wish to exclude these from snapshots: + # zfs create -o com.sun:auto-snapshot=false rpool/var/cache + # zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + # chmod 1777 /mnt/var/tmp + + If you use /opt on this system: + # zfs create rpool/opt + + If you use /srv on this system: + # zfs create rpool/srv + + If you use /usr/local on this system: + # zfs create -o canmount=off rpool/usr + # zfs create rpool/usr/local + + If this system will have games installed: + # zfs create rpool/var/games + + If this system will store local email in /var/mail: + # zfs create rpool/var/mail + + If this system will use Snap packages: + # zfs create rpool/var/snap + + If you use /var/www on this system: + # zfs create rpool/var/www + + If this system will use GNOME: + # zfs create rpool/var/lib/AccountsService + + If this system will use Docker (which manages its own datasets & snapshots): + # zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + + If this system will use NFS (locking): + # zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + + A tmpfs is recommended later, but if you want a separate dataset for /tmp: + # zfs create -o com.sun:auto-snapshot=false rpool/tmp + # chmod 1777 /mnt/tmp + +The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data such as logs (in ``/var/log``). This will be especially +important if/when a ``beadm`` or similar utility is integrated. The +``com.sun.auto-snapshot`` setting is used by some ZFS snapshot utilities +to exclude transient data. + +If you do nothing extra, ``/tmp`` will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for +``/tmp``, as shown above. This keeps the ``/tmp`` data out of snapshots +of your root filesystem. It also allows you to set a quota on +``rpool/tmp``, if you want to limit the maximum space used. Otherwise, +you can use a tmpfs (RAM filesystem) later. + +3.4 Install the minimal system: + +:: + + # debootstrap stretch /mnt + # zfs set devices=off rpool + +The ``debootstrap`` command leaves the new system in an unconfigured +state. An alternative to using ``debootstrap`` is to copy the entirety +of a working system into the new ZFS root. + +Step 4: System Configuration +---------------------------- + +4.1 Configure the hostname (change ``HOSTNAME`` to the desired +hostname). + +:: + + # echo HOSTNAME > /mnt/etc/hostname + + # vi /mnt/etc/hosts + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + +**Hint:** Use ``nano`` if you find ``vi`` confusing. + +4.2 Configure the network interface: + +:: + + Find the interface name: + # ip addr show + + # vi /mnt/etc/network/interfaces.d/NAME + auto NAME + iface NAME inet dhcp + +Customize this file if the system is not a DHCP client. 
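For example, a static configuration with ``ifupdown`` might look roughly like
this (the interface name and addresses below are placeholders; substitute your
own):

::

   # vi /mnt/etc/network/interfaces.d/NAME
   auto NAME
   iface NAME inet static
       address 192.168.1.50
       netmask 255.255.255.0
       gateway 192.168.1.1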
+ +4.3 Configure the package sources: + +:: + + # vi /mnt/etc/apt/sources.list + deb http://deb.debian.org/debian stretch main contrib + deb-src http://deb.debian.org/debian stretch main contrib + deb http://security.debian.org/debian-security stretch/updates main contrib + deb-src http://security.debian.org/debian-security stretch/updates main contrib + deb http://deb.debian.org/debian stretch-updates main contrib + deb-src http://deb.debian.org/debian stretch-updates main contrib + + # vi /mnt/etc/apt/sources.list.d/stretch-backports.list + deb http://deb.debian.org/debian stretch-backports main contrib + deb-src http://deb.debian.org/debian stretch-backports main contrib + + # vi /mnt/etc/apt/preferences.d/90_zfs + Package: src:zfs-linux + Pin: release n=stretch-backports + Pin-Priority: 990 + +4.4 Bind the virtual filesystems from the LiveCD environment to the new +system and ``chroot`` into it: + +:: + + # mount --rbind /dev /mnt/dev + # mount --rbind /proc /mnt/proc + # mount --rbind /sys /mnt/sys + # chroot /mnt /bin/bash --login + +**Note:** This is using ``--rbind``, not ``--bind``. + +4.5 Configure a basic system environment: + +:: + + # ln -s /proc/self/mounts /etc/mtab + # apt update + + # apt install --yes locales + # dpkg-reconfigure locales + +Even if you prefer a non-English system language, always ensure that +``en_US.UTF-8`` is available. + +:: + + # dpkg-reconfigure tzdata + +4.6 Install ZFS in the chroot environment for the new system: + +:: + + # apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64 + # apt install --yes zfs-initramfs + +4.7 For LUKS installs only, setup crypttab: + +:: + + # apt install --yes cryptsetup + + # echo luks1 UUID=$(blkid -s UUID -o value \ + /dev/disk/by-id/scsi-SATA_disk1-part4) none \ + luks,discard,initramfs > /etc/crypttab + +- The use of ``initramfs`` is a work-around for `cryptsetup does not + support + ZFS `__. + +**Hint:** If you are creating a mirror or raidz topology, repeat the +``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +4.8 Install GRUB + +Choose one of the following options: + +4.8a Install GRUB for legacy (BIOS) booting + +:: + + # apt install --yes grub-pc + +Install GRUB to the disk(s), not the partition(s). + +4.8b Install GRUB for UEFI booting + +:: + + # apt install dosfstools + # mkdosfs -F 32 -s 1 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part2 + # mkdir /boot/efi + # echo PARTUUID=$(blkid -s PARTUUID -o value \ + /dev/disk/by-id/scsi-SATA_disk1-part2) \ + /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab + # mount /boot/efi + # apt install --yes grub-efi-amd64 shim + +- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which + present 4 KiB logical sectors (“4Kn” drives) to meet the minimum + cluster size (given the partition size of 512 MiB) for FAT32. It also + works fine on drives which present 512 B sectors. + +**Note:** If you are creating a mirror or raidz topology, this step only +installs GRUB on the first disk. The other disk(s) will be handled +later. + +4.9 Set a root password + +:: + + # passwd + +4.10 Enable importing bpool + +This ensures that ``bpool`` is always imported, regardless of whether +``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, +or whether ``zfs-import-scan.service`` is enabled. 
+ +:: + + # vi /etc/systemd/system/zfs-import-bpool.service + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + + [Install] + WantedBy=zfs-import.target + + # systemctl enable zfs-import-bpool.service + +4.11 Optional (but recommended): Mount a tmpfs to /tmp + +If you chose to create a ``/tmp`` dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a +tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + +:: + + # cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + # systemctl enable tmp.mount + +4.12 Optional (but kindly requested): Install popcon + +The ``popularity-contest`` package reports the list of packages install +on your system. Showing that ZFS is popular may be helpful in terms of +long-term attention from the distro. + +:: + + # apt install --yes popularity-contest + +Choose Yes at the prompt. + +Step 5: GRUB Installation +------------------------- + +5.1 Verify that the ZFS boot filesystem is recognized: + +:: + + # grub-probe /boot + zfs + +5.2 Refresh the initrd files: + +:: + + # update-initramfs -u -k all + update-initramfs: Generating /boot/initrd.img-4.9.0-8-amd64 + +**Note:** When using LUKS, this will print "WARNING could not determine +root device from /etc/fstab". This is because `cryptsetup does not +support +ZFS `__. + +5.3 Workaround GRUB's missing zpool-features support: + +:: + + # vi /etc/default/grub + Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" + +5.4 Optional (but highly recommended): Make debugging GRUB easier: + +:: + + # vi /etc/default/grub + Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + Uncomment: GRUB_TERMINAL=console + Save and quit. + +Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired. + +5.5 Update the boot configuration: + +:: + + # update-grub + Generating grub configuration file ... + Found linux image: /boot/vmlinuz-4.9.0-8-amd64 + Found initrd image: /boot/initrd.img-4.9.0-8-amd64 + done + +**Note:** Ignore errors from ``osprober``, if present. + +5.6 Install the boot loader + +5.6a For legacy (BIOS) booting, install GRUB to the MBR: + +:: + + # grub-install /dev/disk/by-id/scsi-SATA_disk1 + Installing for i386-pc platform. + Installation finished. No error reported. + +Do not reboot the computer until you get exactly that result message. +Note that you are installing GRUB to the whole disk, not a partition. + +If you are creating a mirror or raidz topology, repeat the +``grub-install`` command for each disk in the pool. + +5.6b For UEFI booting, install GRUB: + +:: + + # grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=debian --recheck --no-floppy + +5.7 Verify that the ZFS module is installed: + +:: + + # ls /boot/grub/*/zfs.mod + +5.8 Fix filesystem mount ordering + +`Until ZFS gains a systemd mount +generator `__, there are +races between mounting filesystems and starting certain daemons. In +practice, the issues (e.g. +`#5754 `__) seem to be +with certain filesystems in ``/var``, specifically ``/var/log`` and +``/var/tmp``. Setting these to use ``legacy`` mounting, and listing them +in ``/etc/fstab`` makes systemd aware that these are separate +mountpoints. 
In turn, ``rsyslog.service`` depends on ``var-log.mount`` +by way of ``local-fs.target`` and services using the ``PrivateTmp`` +feature of systemd automatically use ``After=var-tmp.mount``. + +Until there is support for mounting ``/boot`` in the initramfs, we also +need to mount that, because it was marked ``canmount=noauto``. Also, +with UEFI, we need to ensure it is mounted before its child filesystem +``/boot/efi``. + +``rpool`` is guaranteed to be imported by the initramfs, so there is no +point in adding ``x-systemd.requires=zfs-import.target`` to those +filesystems. + +:: + + For UEFI booting, unmount /boot/efi first: + # umount /boot/efi + + Everything else applies to both BIOS and UEFI booting: + + # zfs set mountpoint=legacy bpool/BOOT/debian + # echo bpool/BOOT/debian /boot zfs \ + nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab + + # zfs set mountpoint=legacy rpool/var/log + # echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab + + # zfs set mountpoint=legacy rpool/var/spool + # echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab + + If you created a /var/tmp dataset: + # zfs set mountpoint=legacy rpool/var/tmp + # echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab + + If you created a /tmp dataset: + # zfs set mountpoint=legacy rpool/tmp + # echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab + +Step 6: First Boot +------------------ + +6.1 Snapshot the initial installation: + +:: + + # zfs snapshot bpool/BOOT/debian@install + # zfs snapshot rpool/ROOT/debian@install + +In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space. + +6.2 Exit from the ``chroot`` environment back to the LiveCD environment: + +:: + + # exit + +6.3 Run these commands in the LiveCD environment to unmount all +filesystems: + +:: + + # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + # zpool export -a + +6.4 Reboot: + +:: + + # reboot + +6.5 Wait for the newly installed system to boot normally. Login as root. + +6.6 Create a user account: + +:: + + # zfs create rpool/home/YOURUSERNAME + # adduser YOURUSERNAME + # cp -a /etc/skel/.[!.]* /home/YOURUSERNAME + # chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME + +6.7 Add your user account to the default set of groups for an +administrator: + +:: + + # usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME + +6.8 Mirror GRUB + +If you installed to multiple disks, install GRUB on the additional +disks: + +6.8a For legacy (BIOS) booting: + +:: + + # dpkg-reconfigure grub-pc + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + +6.8b UEFI + +:: + + # umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.): + # dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + # efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' + + # mount /boot/efi + +Step 7: (Optional) Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. 
This issue is currently being investigated in: +`https://github.com/zfsonlinux/zfs/issues/7734 `__ + +7.1 Create a volume dataset (zvol) for use as a swap device: + +:: + + # zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + +You can adjust the size (the ``4G`` part) to your needs. + +The compression algorithm is set to ``zle`` because it is the cheapest +available algorithm. As this guide recommends ``ashift=12`` (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior. + +7.2 Configure the swap device: + +**Caution**: Always use long ``/dev/zvol`` aliases in configuration +files. Never use a short ``/dev/zdX`` device name. + +:: + + # mkswap -f /dev/zvol/rpool/swap + # echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + # echo RESUME=none > /etc/initramfs-tools/conf.d/resume + +The ``RESUME=none`` is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear. + +7.3 Enable the swap device: + +:: + + # swapon -av + +Step 8: Full Software Installation +---------------------------------- + +8.1 Upgrade the minimal system: + +:: + + # apt dist-upgrade --yes + +8.2 Install a regular set of software: + +:: + + # tasksel + +**Note:** This will check "Debian desktop environment" and "print server" +by default. If you want a server installation, unselect those. + +8.3 Optional: Disable log compression: + +As ``/var/log`` is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. +Also, if you are making snapshots of ``/var/log``, logrotate’s +compression will actually waste space, as the uncompressed data will +live on in the snapshot. You can edit the files in ``/etc/logrotate.d`` +by hand to comment out ``compress``, or use this loop (copy-and-paste +highly recommended): + +:: + + # for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +8.4 Reboot: + +:: + + # reboot + +Step 9: Final Cleanup +~~~~~~~~~~~~~~~~~~~~~ + +9.1 Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally. + +9.2 Optional: Delete the snapshots of the initial installation: + +:: + + $ sudo zfs destroy bpool/BOOT/debian@install + $ sudo zfs destroy rpool/ROOT/debian@install + +9.3 Optional: Disable the root password + +:: + + $ sudo usermod -p '*' root + +9.4 Optional: Re-enable the graphical boot process: + +If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer. + +:: + + $ sudo vi /etc/default/grub + Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + Comment out GRUB_TERMINAL=console + Save and quit. + + $ sudo update-grub + +**Note:** Ignore errors from ``osprober``, if present. 
+ +9.5 Optional: For LUKS installs only, backup the LUKS header: + +:: + + $ sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + +Store that backup somewhere safe (e.g. cloud storage). It is protected +by your LUKS passphrase, but you may wish to use additional encryption. + +**Hint:** If you created a mirror or raidz topology, repeat this for +each LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install +Environment <#step-1-prepare-the-install-environment>`__. + +This will automatically import your pool. Export it and re-import it to +get the mounts right: + +:: + + For LUKS, first unlock the disk(s): + # apt install --yes cryptsetup + # cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + Repeat for additional disks, if this is a mirror or raidz topology. + + # zpool export -a + # zpool import -N -R /mnt rpool + # zpool import -N -R /mnt bpool + # zfs mount rpool/ROOT/debian + # zfs mount -a + +If needed, you can chroot into your installed environment: + +:: + + # mount --rbind /dev /mnt/dev + # mount --rbind /proc /mnt/proc + # mount --rbind /sys /mnt/sys + # chroot /mnt /bin/bash --login + # mount /boot/efi + # mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup: + +:: + + # exit + # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + # zpool export -a + # reboot + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that +does slow asynchronous drive initialization, like some IBM M1015 or +OEM-branded cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to +the Linux kernel until after the regular system is started, and ZoL does +not hotplug pool members. See +`https://github.com/zfsonlinux/zfs/issues/330 `__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool. + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run +``update-initramfs -u -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit +this error message. + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere + configuration. Doing this ensures that ``/dev/disk`` aliases are + created in the guest. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). 
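After the guest has booted, you can optionally confirm that the serial number
resulted in a usable alias (the serial below is just the example value from
above):

::

   $ ls -la /dev/disk/by-id | grep 1234567890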
+ +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host: + +:: + + $ sudo apt install ovmf + $ sudo vi /etc/libvirt/qemu.conf + Uncomment these lines: + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" + ] + $ sudo service libvirt-bin restart diff --git a/_sources/Getting Started/Debian/index.rst.txt b/_sources/Getting Started/Debian/index.rst.txt new file mode 100644 index 000000000..d62053af0 --- /dev/null +++ b/_sources/Getting Started/Debian/index.rst.txt @@ -0,0 +1,62 @@ +.. highlight:: sh + +Debian +====== + +.. contents:: Table of Contents + :local: + +Installation +------------ + +If you want to use ZFS as your root filesystem, see the `Root on ZFS`_ +links below instead. + +ZFS packages are included in the `contrib repository +`__. The +`backports repository `__ +often provides newer releases of ZFS. You can use it as follows. + +Add the backports repository:: + + vi /etc/apt/sources.list.d/bookworm-backports.list + +.. code-block:: sourceslist + + deb http://deb.debian.org/debian bookworm-backports main contrib + deb-src http://deb.debian.org/debian bookworm-backports main contrib + +:: + + vi /etc/apt/preferences.d/90_zfs + +.. code-block:: control + + Package: src:zfs-linux + Pin: release n=bookworm-backports + Pin-Priority: 990 + +Install the packages:: + + apt update + apt install dpkg-dev linux-headers-generic linux-image-generic + apt install zfs-dkms zfsutils-linux + +**Caution**: If you are in a poorly configured environment (e.g. certain VM or container consoles), when apt attempts to pop up a message on first install, it may fail to notice a real console is unavailable, and instead appear to hang indefinitely. To circumvent this, you can prefix the `apt install` commands with ``DEBIAN_FRONTEND=noninteractive``, like this:: + + DEBIAN_FRONTEND=noninteractive apt install zfs-dkms zfsutils-linux + +Root on ZFS +----------- +.. toctree:: + :maxdepth: 1 + :glob: + + *Root on ZFS + +Related topics +-------------- +.. toctree:: + :maxdepth: 1 + + Debian GNU Linux initrd documentation diff --git a/_sources/Getting Started/Fedora.rst.txt b/_sources/Getting Started/Fedora.rst.txt new file mode 100644 index 000000000..1c341d6ee --- /dev/null +++ b/_sources/Getting Started/Fedora.rst.txt @@ -0,0 +1,7 @@ +:orphan: + +Fedora +======================= + +This page has been moved to `here `__. + diff --git a/_sources/Getting Started/Fedora/Root on ZFS.rst.txt b/_sources/Getting Started/Fedora/Root on ZFS.rst.txt new file mode 100644 index 000000000..6b00e77f6 --- /dev/null +++ b/_sources/Getting Started/Fedora/Root on ZFS.rst.txt @@ -0,0 +1,749 @@ +.. highlight:: sh + +.. ifconfig:: zfs_root_test + + :: + + # For the CI/CD test run of this guide, + # Enable verbose logging of bash shell and fail immediately when + # a commmand fails. + set -vxeuf + + distro=${1} + + cp /etc/resolv.conf ./"rootfs-${distro}"/etc/resolv.conf + arch-chroot ./"rootfs-${distro}" sh <<-'ZFS_ROOT_GUIDE_TEST' + + set -vxeuf + + # install alpine setup scripts + apk update + apk add alpine-conf curl + +.. In this document, there are three types of code-block markups: + ``::`` are commands intended for both the vm test and the users + ``.. ifconfig:: zfs_root_test`` are commands intended only for vm test + ``.. 
code-block:: sh`` are commands intended only for users + +Fedora Root on ZFS +======================================= + +**ZFSBootMenu** + +This tutorial is based on the GRUB bootloader. Due to its independent +implementation of a read-only ZFS driver, GRUB only supports a subset +of ZFS features on the boot pool. [In general, bootloader treat disks +as read-only to minimize the risk of damaging on-disk data.] + +`ZFSBootMenu `__ is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details. + +**Customization** + +Unless stated otherwise, it is not recommended to customize system +configuration before reboot. + +**Only use well-tested pool features** + +You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, `this comment `__. + +Preparation +--------------------------- + +#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled. +#. Because the kernel of latest Live CD might be incompatible with + ZFS, we will use Alpine Linux Extended, which ships with ZFS by + default. + + Download latest extended variant of `Alpine Linux + live image + `__, + verify `checksum `__ + and boot from it. + + .. code-block:: sh + + gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc + + dd if=input-file of=output-file bs=1M + + .. ifconfig:: zfs_root_test + + # check whether the download page exists + # alpine version must be in sync with ci/cd test chroot tarball + +#. Login as root user. There is no password. +#. Configure Internet + + .. code-block:: sh + + setup-interfaces -r + # You must use "-r" option to start networking services properly + # example: + network interface: wlan0 + WiFi name: + ip address: dhcp + + manual netconfig: n + +#. If you are using wireless network and it is not shown, see `Alpine + Linux wiki + `__ for + further details. ``wpa_supplicant`` can be installed with ``apk + add wpa_supplicant`` without internet connection. + +#. Configure SSH server + + .. code-block:: sh + + setup-sshd + # example: + ssh server: openssh + allow root: "prohibit-password" or "yes" + ssh key: "none" or "" + + + +#. Set root password or ``/root/.ssh/authorized_keys``. + +#. Connect from another computer + + .. code-block:: sh + + ssh root@192.168.1.91 + +#. Configure NTP client for time synchronization + + .. code-block:: sh + + setup-ntp busybox + + .. ifconfig:: zfs_root_test + + # this step is unnecessary for chroot and returns 1 when executed + +#. Set up apk-repo. A list of available mirrors is shown. + Press space bar to continue + + .. code-block:: sh + + setup-apkrepos + + +#. Throughout this guide, we use predictable disk names generated by + udev + + .. code-block:: sh + + apk update + apk add eudev + setup-devd udev + + .. ifconfig:: zfs_root_test + + # for some reason, udev is extremely slow in chroot + # it is not needed for chroot anyway. so, skip this step + +#. Target disk + + List available disks with + + .. code-block:: sh + + find /dev/disk/by-id/ + + If virtio is used as disk bus, power off the VM and set serial numbers for disk. + For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``. + For libvirt, edit domain XML. See `this page + `__ for examples. + + Declare disk array + + .. 
code-block:: sh + + DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR' + + For single disk installation, use + + .. code-block:: sh + + DISK='/dev/disk/by-id/disk1' + + .. ifconfig:: zfs_root_test + + # for github test run, use chroot and loop devices + DISK="$(losetup -a| grep fedora | cut -f1 -d: | xargs -t -I '{}' printf '{} ')" + +#. Set a mount point + :: + + MNT=$(mktemp -d) + +#. Set partition size: + + Set swap size in GB, set to 1 if you don't want swap to + take up too much space + + .. code-block:: sh + + SWAPSIZE=4 + + .. ifconfig:: zfs_root_test + + # For the test run, use 1GB swap space to avoid hitting CI/CD + # quota + SWAPSIZE=1 + + Set how much space should be left at the end of the disk, minimum 1GB + + :: + + RESERVE=1 + +#. Install ZFS support from live media:: + + apk add zfs + +#. Install partition tool + :: + + apk add parted e2fsprogs cryptsetup util-linux + +System Installation +--------------------------- + +#. Partition the disks. + + Note: you must clear all existing partition tables and data structures from target disks. + + For flash-based storage, this can be done by the blkdiscard command below: + :: + + partition_disk () { + local disk="${1}" + blkdiscard -f "${disk}" || true + + parted --script --align=optimal "${disk}" -- \ + mklabel gpt \ + mkpart EFI 2MiB 1GiB \ + mkpart bpool 1GiB 5GiB \ + mkpart rpool 5GiB -$((SWAPSIZE + RESERVE))GiB \ + mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \ + mkpart BIOS 1MiB 2MiB \ + set 1 esp on \ + set 5 bios_grub on \ + set 5 legacy_boot on + + partprobe "${disk}" + } + + for i in ${DISK}; do + partition_disk "${i}" + done + + .. ifconfig:: zfs_root_test + + :: + + # When working with GitHub chroot runners, we are using loop + # devices as installation target. However, the alias support for + # loop device was just introduced in March 2023. See + # https://github.com/systemd/systemd/pull/26693 + # For now, we will create the aliases maunally as a workaround + looppart="1 2 3 4 5" + for i in ${DISK}; do + for j in ${looppart}; do + if test -e "${i}p${j}"; then + ln -s "${i}p${j}" "${i}-part${j}" + fi + done + done + +#. Setup encrypted swap. This is useful if the available memory is + small:: + + for i in ${DISK}; do + cryptsetup open --type plain --key-file /dev/random "${i}"-part4 "${i##*/}"-part4 + mkswap /dev/mapper/"${i##*/}"-part4 + swapon /dev/mapper/"${i##*/}"-part4 + done + + +#. Load ZFS kernel module + + .. code-block:: sh + + modprobe zfs + +#. Create boot pool + :: + + # shellcheck disable=SC2046 + zpool create -o compatibility=legacy \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl \ + -O canmount=off \ + -O devices=off \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=/boot \ + -R "${MNT}" \ + bpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part2"; + done) + + If not using a multi-disk setup, remove ``mirror``. + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. + +#. 
Create root pool + :: + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O compression=zstd \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=/ \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part3"; + done) + + If not using a multi-disk setup, remove ``mirror``. + +#. Create root system container: + + - Unencrypted + + :: + + zfs create \ + -o canmount=off \ + -o mountpoint=none \ + rpool/fedora + + - Encrypted: + + Avoid ZFS send/recv when using native encryption, see `a ZFS developer's comment on this issue`__ and `this spreadsheet of bugs`__. A LUKS-based guide has yet to be written. Once compromised, changing password will not keep your + data safe. See ``zfs-change-key(8)`` for more info + + .. code-block:: sh + + zfs create \ + -o canmount=off \ + -o mountpoint=none \ + -o encryption=on \ + -o keylocation=prompt \ + -o keyformat=passphrase \ + rpool/fedora + + You can automate this step (insecure) with: ``echo POOLPASS | zfs create ...``. + + Create system datasets, + manage mountpoints with ``mountpoint=legacy`` + :: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/fedora/root + zfs mount rpool/fedora/root + zfs create -o mountpoint=legacy rpool/fedora/home + mkdir "${MNT}"/home + mount -t zfs rpool/fedora/home "${MNT}"/home + zfs create -o mountpoint=legacy rpool/fedora/var + zfs create -o mountpoint=legacy rpool/fedora/var/lib + zfs create -o mountpoint=legacy rpool/fedora/var/log + zfs create -o mountpoint=none bpool/fedora + zfs create -o mountpoint=legacy bpool/fedora/root + mkdir "${MNT}"/boot + mount -t zfs bpool/fedora/root "${MNT}"/boot + mkdir -p "${MNT}"/var/log + mkdir -p "${MNT}"/var/lib + mount -t zfs rpool/fedora/var/lib "${MNT}"/var/lib + mount -t zfs rpool/fedora/var/log "${MNT}"/var/log + +#. Format and mount ESP + :: + + for i in ${DISK}; do + mkfs.vfat -n EFI "${i}"-part1 + mkdir -p "${MNT}"/boot/efis/"${i##*/}"-part1 + mount -t vfat -o iocharset=iso8859-1 "${i}"-part1 "${MNT}"/boot/efis/"${i##*/}"-part1 + done + + mkdir -p "${MNT}"/boot/efi + mount -t vfat -o iocharset=iso8859-1 "$(echo "${DISK}" | sed "s|^ *||" | cut -f1 -d' '|| true)"-part1 "${MNT}"/boot/efi + +System Configuration +--------------------------- + +#. Download and extract minimal Fedora root filesystem:: + + apk add curl + curl --fail-early --fail -L \ + https://dl.fedoraproject.org/pub/fedora/linux/releases/38/Container/x86_64/images/Fedora-Container-Base-38-1.6.x86_64.tar.xz \ + -o rootfs.tar.gz + curl --fail-early --fail -L \ + https://dl.fedoraproject.org/pub/fedora/linux/releases/38/Container/x86_64/images/Fedora-Container-38-1.6-x86_64-CHECKSUM \ + -o checksum + + # BusyBox sha256sum treats all lines in the checksum file + # as checksums and requires two spaces " " + # between filename and checksum + + grep 'Container-Base' checksum \ + | grep '^SHA256' \ + | sed -E 's|.*= ([a-z0-9]*)$|\1 rootfs.tar.gz|' > ./sha256checksum + + sha256sum -c ./sha256checksum + + rootfs_tar=$(tar t -af rootfs.tar.gz | grep layer.tar) + rootfs_tar_dir=$(dirname "${rootfs_tar}") + tar x -af rootfs.tar.gz "${rootfs_tar}" + ln -s "${MNT}" "${MNT}"/"${rootfs_tar_dir}" + tar x -C "${MNT}" -af "${rootfs_tar}" + unlink "${MNT}"/"${rootfs_tar_dir}" + +#. Enable community repo + + .. code-block:: sh + + sed -i '/edge/d' /etc/apk/repositories + sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories + +#. 
Generate fstab:: + + apk add arch-install-scripts + genfstab -t PARTUUID "${MNT}" \ + | grep -v swap \ + | sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \ + > "${MNT}"/etc/fstab + +#. Chroot + + .. code-block:: sh + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash + + .. ifconfig:: zfs_root_test + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash <<-'ZFS_ROOT_NESTED_CHROOT' + + set -vxeuf + +#. Unset all shell aliases, which can interfere with installation:: + + unalias -a + +#. Install base packages + + .. code-block:: sh + + dnf -y install @core grub2-efi-x64 \ + grub2-pc grub2-pc-modules grub2-efi-x64-modules shim-x64 \ + efibootmgr kernel kernel-devel + + .. ifconfig:: zfs_root_test + + # no firmware for test + dnf -y install --setopt=install_weak_deps=False @core grub2-efi-x64 \ + grub2-pc grub2-pc-modules grub2-efi-x64-modules shim-x64 \ + efibootmgr + # kernel-core + +#. Install ZFS packages + + .. code-block:: sh + + dnf -y install \ + https://zfsonlinux.org/fedora/zfs-release-2-3"$(rpm --eval "%{dist}"||true)".noarch.rpm + + dnf -y install zfs zfs-dracut + + .. ifconfig:: zfs_root_test + + # this step will build zfs modules and fail + # no need to test building in chroot + + dnf -y install \ + https://zfsonlinux.org/fedora/zfs-release-2-3"$(rpm --eval "%{dist}"||true)".noarch.rpm + +#. Check whether ZFS modules are successfully built + + .. code-block:: sh + + tail -n10 /var/lib/dkms/zfs/**/build/make.log + + # ERROR: modpost: GPL-incompatible module zfs.ko uses GPL-only symbol 'bio_start_io_acct' + # ERROR: modpost: GPL-incompatible module zfs.ko uses GPL-only symbol 'bio_end_io_acct_remapped' + # make[4]: [scripts/Makefile.modpost:138: /var/lib/dkms/zfs/2.1.9/build/module/Module.symvers] Error 1 + # make[3]: [Makefile:1977: modpost] Error 2 + # make[3]: Leaving directory '/usr/src/kernels/6.2.9-100.fc36.x86_64' + # make[2]: [Makefile:55: modules-Linux] Error 2 + # make[2]: Leaving directory '/var/lib/dkms/zfs/2.1.9/build/module' + # make[1]: [Makefile:933: all-recursive] Error 1 + # make[1]: Leaving directory '/var/lib/dkms/zfs/2.1.9/build' + # make: [Makefile:794: all] Error 2 + + If the build failed, you need to install an Long Term Support + kernel and its headers, then rebuild ZFS module + + .. code-block:: sh + + # this is a third-party repo! + # you have been warned. + # + # select a kernel from + # https://copr.fedorainfracloud.org/coprs/kwizart/ + + dnf copr enable -y kwizart/kernel-longterm-VERSION + dnf install -y kernel-longterm kernel-longterm-devel + dnf remove -y kernel-core + + ZFS modules will be built as part of the kernel installation. + Check build log again with ``tail`` command. + +#. Add zfs modules to dracut + + .. code-block:: sh + + echo 'add_dracutmodules+=" zfs "' >> /etc/dracut.conf.d/zfs.conf + echo 'force_drivers+=" zfs "' >> /etc/dracut.conf.d/zfs.conf + + .. ifconfig:: zfs_root_test + + # skip this in chroot, because we did not build zfs module + +#. Add other drivers to dracut:: + + if grep mpt3sas /proc/modules; then + echo 'force_drivers+=" mpt3sas "' >> /etc/dracut.conf.d/zfs.conf + fi + if grep virtio_blk /proc/modules; then + echo 'filesystems+=" virtio_blk "' >> /etc/dracut.conf.d/fs.conf + fi + +#. 
Build initrd + :: + + find -D exec /lib/modules -maxdepth 1 \ + -mindepth 1 -type d \ + -exec sh -vxc \ + 'if test -e "$1"/modules.dep; + then kernel=$(basename "$1"); + dracut --verbose --force --kver "${kernel}"; + fi' sh {} \; + +#. For SELinux, relabel filesystem on reboot:: + + fixfiles -F onboot + +#. Enable internet time synchronisation:: + + systemctl enable systemd-timesyncd + +#. Generate host id + + .. code-block:: sh + + zgenhostid -f -o /etc/hostid + + .. ifconfig:: zfs_root_test + + # because zfs is not installed, skip this step + +#. Install locale package, example for English locale:: + + dnf install -y glibc-minimal-langpack glibc-langpack-en + +#. Set locale, keymap, timezone, hostname + :: + + rm -f /etc/localtime + rm -f /etc/hostname + systemd-firstboot \ + --force \ + --locale=en_US.UTF-8 \ + --timezone=Etc/UTC \ + --hostname=testhost \ + --keymap=us || true + +#. Set root passwd + :: + + printf 'root:yourpassword' | chpasswd + +Bootloader +--------------------------- + +#. Apply GRUB workaround + + :: + + echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile.d/zpool_vdev_name_path.sh + # shellcheck disable=SC1091 + . /etc/profile.d/zpool_vdev_name_path.sh + + # GRUB fails to detect rpool name, hard code as "rpool" + sed -i "s|rpool=.*|rpool=rpool|" /etc/grub.d/10_linux + + This workaround needs to be applied for every GRUB update, as the + update will overwrite the changes. + +#. Fedora and RHEL uses Boot Loader Specification module for GRUB, + which does not support ZFS. Disable it:: + + echo 'GRUB_ENABLE_BLSCFG=false' >> /etc/default/grub + + This means that you need to regenerate GRUB menu and mirror them + after every kernel update, otherwise computer will still boot old + kernel on reboot. + +#. Install GRUB:: + + mkdir -p /boot/efi/fedora/grub-bootdir/i386-pc/ + for i in ${DISK}; do + grub2-install --target=i386-pc --boot-directory \ + /boot/efi/fedora/grub-bootdir/i386-pc/ "${i}" + done + dnf reinstall -y grub2-efi-x64 shim-x64 + cp -r /usr/lib/grub/x86_64-efi/ /boot/efi/EFI/fedora/ + +#. Generate GRUB menu + + .. code-block:: sh + + mkdir -p /boot/grub2 + grub2-mkconfig -o /boot/grub2/grub.cfg + cp /boot/grub2/grub.cfg \ + /boot/efi/efi/fedora/grub.cfg + cp /boot/grub2/grub.cfg \ + /boot/efi/fedora/grub-bootdir/i386-pc/grub2/grub.cfg + + .. ifconfig:: zfs_root_test + + # GRUB menu can not be generated in test due to missing zfs programs + +#. For both legacy and EFI booting: mirror ESP content:: + + espdir=$(mktemp -d) + find /boot/efi/ -maxdepth 1 -mindepth 1 -type d -print0 \ + | xargs -t -0I '{}' cp -r '{}' "${espdir}" + find "${espdir}" -maxdepth 1 -mindepth 1 -type d -print0 \ + | xargs -t -0I '{}' sh -vxc "find /boot/efis/ -maxdepth 1 -mindepth 1 -type d -print0 | xargs -t -0I '[]' cp -r '{}' '[]'" + +#. Exit chroot + + .. code-block:: sh + + exit + + .. ifconfig:: zfs_root_test + + # nested chroot ends here + ZFS_ROOT_NESTED_CHROOT + + .. ifconfig:: zfs_root_test + + :: + + # list contents of boot dir to confirm + # that the mirroring succeeded + find "${MNT}"/boot/efis/ -type d > list_of_efi_dirs + for i in ${DISK}; do + if ! grep "${i##*/}-part1/efi\|${i##*/}-part1/EFI" list_of_efi_dirs; then + echo "disk ${i} not found in efi system partition, installation error"; + cat list_of_efi_dirs + exit 1 + fi + done + +#. Unmount filesystems and create initial system snapshot + You can later create a boot environment from this snapshot. + See `Root on ZFS maintenance page <../zfs_root_maintenance.html>`__. 
+ :: + + umount -Rl "${MNT}" + zfs snapshot -r rpool@initial-installation + zfs snapshot -r bpool@initial-installation + +#. Export all pools + + .. code-block:: sh + + zpool export -a + + .. ifconfig:: zfs_root_test + + # we are now inside a chroot, where the export will fail + # export pools when we are outside chroot + +#. Reboot + + .. code-block:: sh + + reboot + +#. For BIOS-legacy boot users only: the GRUB bootloader installed + might be unusable. In this case, see Bootloader Recovery section + in `Root on ZFS maintenance page <../zfs_root_maintenance.html>`__. + + This issue is not related to Alpine Linux chroot, as Arch Linux + installed with this method does not have this issue. + + UEFI bootloader is not affected by this issue. + + .. ifconfig:: zfs_root_test + + # chroot ends here + ZFS_ROOT_GUIDE_TEST + +#. On first reboot, SELinux policies will be applied, albeit + incompletely. The computer will then reboot with incomplete + policies and fail to mount ``/run``, resulting in a failure. + + Workaround is to append ``enforcing=0`` to kernel command line in + the GRUB menu, as many times as necessary, until the system + complete one successful boot. The author of this guide has not + found out a way to solve this issue during installation. Help is + appreciated. + +Post installaion +--------------------------- + +#. Install package groups + + .. code-block:: sh + + dnf group list --hidden -v # query package groups + dnf group install gnome-desktop + +#. Add new user, configure swap. + +.. _a ZFS developer's comment on this issue: https://ol.reddit.com/r/zfs/comments/10n8fsn/does_openzfs_have_a_new_developer_for_the_native/j6b8k1m/ +.. _this spreadsheet of bugs: https://docs.google.com/spreadsheets/d/1OfRSXibZ2nIE9DGK6swwBZXgXwdCPKgp4SbPZwTexCg/htmlview diff --git a/_sources/Getting Started/Fedora/index.rst.txt b/_sources/Getting Started/Fedora/index.rst.txt new file mode 100644 index 000000000..bfdf599e7 --- /dev/null +++ b/_sources/Getting Started/Fedora/index.rst.txt @@ -0,0 +1,95 @@ +Fedora +====== + +Contents +-------- +.. toctree:: + :maxdepth: 1 + :glob: + + * + +Installation +------------ + +Note: this is for installing ZFS on an existing Fedora +installation. To use ZFS as root file system, +see below. + +#. If ``zfs-fuse`` from official Fedora repo is installed, + remove it first. It is not maintained and should not be used + under any circumstance:: + + rpm -e --nodeps zfs-fuse + +#. Add ZFS repo:: + + dnf install -y https://zfsonlinux.org/fedora/zfs-release-2-4$(rpm --eval "%{dist}").noarch.rpm + + List of repos is available `here `__. + +#. Install kernel headers:: + + dnf install -y kernel-devel + + ``kernel-devel`` package must be installed before ``zfs`` package. + +#. Install ZFS packages:: + + dnf install -y zfs + +#. Load kernel module:: + + modprobe zfs + + If kernel module can not be loaded, your kernel version + might be not yet supported by OpenZFS. + + An option is to an LTS kernel from COPR, provided by a third-party. + Use it at your own risk:: + + # this is a third-party repo! + # you have been warned. + # + # select a kernel from + # https://copr.fedorainfracloud.org/coprs/kwizart/ + + dnf copr enable -y kwizart/kernel-longterm-VERSION + dnf install -y kernel-longterm kernel-longterm-devel + + Reboot to new LTS kernel, then load kernel module:: + + modprobe zfs + +#. By default ZFS kernel modules are loaded upon detecting a pool. + To always load the modules at boot:: + + echo zfs > /etc/modules-load.d/zfs.conf + +#. 
By default ZFS may be removed by kernel package updates. + To lock the kernel version to only ones supported by ZFS to prevent this:: + echo 'zfs' > /etc/dnf/protected.d/zfs.conf + + Pending non-kernel updates can still be applied:: + dnf update --exclude=kernel* + +Testing Repo +-------------------- + +Testing repository, which is disabled by default, contains +the latest version of OpenZFS which is under active development. +These packages +**should not** be used on production systems. + +:: + + dnf config-manager --enable zfs-testing + dnf install zfs + +Root on ZFS +----------- +.. toctree:: + :maxdepth: 1 + :glob: + + * diff --git a/_sources/Getting Started/FreeBSD.rst.txt b/_sources/Getting Started/FreeBSD.rst.txt new file mode 100644 index 000000000..6e118b734 --- /dev/null +++ b/_sources/Getting Started/FreeBSD.rst.txt @@ -0,0 +1,142 @@ +FreeBSD +======= + +|ZoF-logo| + +Installation on FreeBSD +----------------------- + +OpenZFS is available pre-packaged as: + +- the zfs-2.0-release branch, in the FreeBSD base system from FreeBSD 13.0-CURRENT forward +- the master branch, in the FreeBSD ports tree as sysutils/openzfs and sysutils/openzfs-kmod from FreeBSD 12.1 forward + +The rest of this document describes the use of OpenZFS either from ports/pkg or built manually from sources for development. + +The ZFS utilities will be installed in /usr/local/sbin/, so make sure +your PATH gets adjusted accordingly. + +To load the module at boot, put ``openzfs_load="YES"`` in +/boot/loader.conf, and remove ``zfs_load="YES"`` if migrating a ZFS +install. + +Beware that the FreeBSD boot loader does not allow booting from root +pools with encryption active (even if it is not in use), so do not try +encryption on a pool you boot from. + +Development on FreeBSD +---------------------- + +The following dependencies are required to build OpenZFS on FreeBSD: + +- FreeBSD sources in /usr/src or elsewhere specified by SYSDIR in env. + If you don't have the sources installed you can install them with + git. + + Install source For FreeBSD 12: + :: + + git clone -b stable/12 https://git.FreeBSD.org/src.git /usr/src + + Install source for FreeBSD Current: + :: + + git clone https://git.FreeBSD.org/src.git /usr/src + +- Packages for build: + :: + + pkg install \ + autoconf \ + automake \ + autotools \ + git \ + gmake + +- Optional packages for build: + :: + + pkg install python + pkg install devel/py-sysctl # needed for arcstat, arc_summary, dbufstat + +- Packages for checks and tests: + :: + + pkg install \ + base64 \ + bash \ + checkbashisms \ + fio \ + hs-ShellCheck \ + ksh93 \ + pamtester \ + devel/py-flake8 \ + sudo + + Your preferred python version may be substituted. The user for + running tests must have NOPASSWD sudo permission. + +To build and install: + +:: + + # as user + git clone https://github.com/openzfs/zfs + cd zfs + ./autogen.sh + env MAKE=gmake ./configure + gmake -j`sysctl -n hw.ncpu` + # as root + gmake install + +To use the OpenZFS kernel module when FreeBSD starts, edit ``/boot/loader.conf`` : + +Replace the line: + +:: + + zfs_load="YES" + +with: + +:: + + openzfs_load="YES" + +The stock FreeBSD ZFS binaries are installed in /sbin. OpenZFS binaries are installed to /usr/local/sbin when installed form ports/pkg or manually from the source. To use OpenZFS binaries, adjust your path so /usr/local/sbin is listed before /sbin. Otherwise the native ZFS binaries will be used. 
+ +For example, make changes to ~/.profile ~/.bashrc ~/.cshrc from this: + +:: + + PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:~/bin + +To this: + +:: + + PATH=/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:~/bin + +For rapid development it can be convenient to do a UFS install instead +of ZFS when setting up the work environment. That way the module can be +unloaded and loaded without rebooting. +:: + + reboot + +Though not required, ``WITHOUT_ZFS`` is a useful build option in FreeBSD +to avoid building and installing the legacy zfs tools and kmod - see +``src.conf(5)``. + +Some tests require fdescfs to be mount on /dev/fd. This can be done +temporarily with: +:: + + mount -t fdescfs fdescfs /dev/fd + +or an entry can be added to /etc/fstab. +:: + + fdescfs /dev/fd fdescfs rw 0 0 + +.. |ZoF-logo| image:: /_static/img/logo/zof-logo.png diff --git a/_sources/Getting Started/NixOS/Root on ZFS.rst.txt b/_sources/Getting Started/NixOS/Root on ZFS.rst.txt new file mode 100644 index 000000000..fbdf8efb6 --- /dev/null +++ b/_sources/Getting Started/NixOS/Root on ZFS.rst.txt @@ -0,0 +1,537 @@ +.. highlight:: sh + +.. ifconfig:: zfs_root_test + + # For the CI/CD test run of this guide, + # Enable verbose logging of bash shell and fail immediately when + # a commmand fails. + set -vxeuf + +.. In this document, there are three types of code-block markups: + ``::`` are commands intended for both the vm test and the users + ``.. ifconfig:: zfs_root_test`` are commands intended only for vm test + ``.. code-block:: sh`` are commands intended only for users + +NixOS Root on ZFS +======================================= +**Note for arm64**: + +Currently there is a bug with the grub installation script. See `here +`__ for details. + +**Note for Immutable Root**: + +Immutable root can be enabled or disabled by setting +``zfs-root.boot.immutable`` option inside per-host configuration. + +**Customization** + +Unless stated otherwise, it is not recommended to customize system +configuration before reboot. + +**Only use well-tested pool features** + +You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, `this comment `__. + +Preparation +--------------------------- + +#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled. +#. Download `NixOS Live Image + `__ and boot from it. + + .. code-block:: sh + + sha256sum -c ./nixos-*.sha256 + + dd if=input-file of=output-file bs=1M + +#. Connect to the Internet. +#. Set root password or ``/root/.ssh/authorized_keys``. +#. Start SSH server + + .. code-block:: sh + + systemctl restart sshd + +#. Connect from another computer + + .. code-block:: sh + + ssh root@192.168.1.91 + +#. Target disk + + List available disks with + + .. code-block:: sh + + find /dev/disk/by-id/ + + If virtio is used as disk bus, power off the VM and set serial numbers for disk. + For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``. + For libvirt, edit domain XML. See `this page + `__ for examples. + + Declare disk array + + .. code-block:: sh + + DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR' + + For single disk installation, use + + .. code-block:: sh + + DISK='/dev/disk/by-id/disk1' + + .. 
ifconfig:: zfs_root_test + + :: + + # for github test run, use chroot and loop devices + DISK="$(losetup --all| grep nixos | cut -f1 -d: | xargs -t -I '{}' printf '{} ')" + + # if there is no loopdev, then we are using qemu virtualized test + # run, use sata disks instead + if test -z "${DISK}"; then + DISK=$(find /dev/disk/by-id -type l | grep -v DVD-ROM | grep -v -- -part | xargs -t -I '{}' printf '{} ') + fi + +#. Set a mount point + :: + + MNT=$(mktemp -d) + +#. Set partition size: + + Set swap size in GB, set to 1 if you don't want swap to + take up too much space + + .. code-block:: sh + + SWAPSIZE=4 + + .. ifconfig:: zfs_root_test + + # For the test run, use 1GB swap space to avoid hitting CI/CD + # quota + SWAPSIZE=1 + + Set how much space should be left at the end of the disk, minimum 1GB + + :: + + RESERVE=1 + +#. Enable Nix Flakes functionality + :: + + mkdir -p ~/.config/nix + echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf + +#. Install programs needed for system installation + :: + + if ! command -v git; then nix-env -f '' -iA git; fi + if ! command -v partprobe; then nix-env -f '' -iA parted; fi + + .. ifconfig:: zfs_root_test + + :: + + # install missing packages in chroot + if (echo "${DISK}" | grep "/dev/loop"); then + nix-env -f '' -iA nixos-install-tools + fi + +System Installation +--------------------------- + +#. Partition the disks. + + Note: you must clear all existing partition tables and data structures from target disks. + + For flash-based storage, this can be done by the blkdiscard command below: + :: + + partition_disk () { + local disk="${1}" + blkdiscard -f "${disk}" || true + + parted --script --align=optimal "${disk}" -- \ + mklabel gpt \ + mkpart EFI 2MiB 1GiB \ + mkpart bpool 1GiB 5GiB \ + mkpart rpool 5GiB -$((SWAPSIZE + RESERVE))GiB \ + mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \ + mkpart BIOS 1MiB 2MiB \ + set 1 esp on \ + set 5 bios_grub on \ + set 5 legacy_boot on + + partprobe "${disk}" + udevadm settle + } + + for i in ${DISK}; do + partition_disk "${i}" + done + + .. ifconfig:: zfs_root_test + + :: + + # When working with GitHub chroot runners, we are using loop + # devices as installation target. However, the alias support for + # loop device was just introduced in March 2023. See + # https://github.com/systemd/systemd/pull/26693 + # For now, we will create the aliases maunally as a workaround + looppart="1 2 3 4 5" + for i in ${DISK}; do + for j in ${looppart}; do + if test -e "${i}p${j}"; then + ln -s "${i}p${j}" "${i}-part${j}" + fi + done + done + +#. Setup encrypted swap. This is useful if the available memory is + small:: + + for i in ${DISK}; do + cryptsetup open --type plain --key-file /dev/random "${i}"-part4 "${i##*/}"-part4 + mkswap /dev/mapper/"${i##*/}"-part4 + swapon /dev/mapper/"${i##*/}"-part4 + done + +#. **LUKS only**: Setup encrypted LUKS container for root pool:: + + for i in ${DISK}; do + # see PASSPHRASE PROCESSING section in cryptsetup(8) + printf "YOUR_PASSWD" | cryptsetup luksFormat --type luks2 "${i}"-part3 - + printf "YOUR_PASSWD" | cryptsetup luksOpen "${i}"-part3 luks-rpool-"${i##*/}"-part3 - + done + +#. 
Create boot pool + :: + + # shellcheck disable=SC2046 + zpool create -o compatibility=legacy \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl \ + -O canmount=off \ + -O devices=off \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=/boot \ + -R "${MNT}" \ + bpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part2"; + done) + + If not using a multi-disk setup, remove ``mirror``. + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. + + Features enabled with ``-o compatibility=grub2`` can be seen + `here `__. + +#. Create root pool + + - Unencrypted + + .. code-block:: sh + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O compression=zstd \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=/ \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part3"; + done) + + - LUKS encrypted + + :: + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O compression=zstd \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=/ \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '/dev/mapper/luks-rpool-%s ' "${i##*/}-part3"; + done) + + If not using a multi-disk setup, remove ``mirror``. + +#. Create root system container: + + - Unencrypted + + :: + + zfs create \ + -o canmount=off \ + -o mountpoint=none \ + rpool/nixos + + - Encrypted: + + Avoid ZFS send/recv when using native encryption, see `a ZFS developer's comment on + this issue`__ and `this spreadsheet of bugs`__. In short, if you + care about your data, don't use native encryption. This section + has been removed, use LUKS encryption instead. + + Create system datasets, + manage mountpoints with ``mountpoint=legacy`` + :: + + zfs create -o mountpoint=legacy rpool/nixos/root + mount -t zfs rpool/nixos/root "${MNT}"/ + zfs create -o mountpoint=legacy rpool/nixos/home + mkdir "${MNT}"/home + mount -t zfs rpool/nixos/home "${MNT}"/home + zfs create -o mountpoint=none rpool/nixos/var + zfs create -o mountpoint=legacy rpool/nixos/var/lib + zfs create -o mountpoint=legacy rpool/nixos/var/log + zfs create -o mountpoint=none bpool/nixos + zfs create -o mountpoint=legacy bpool/nixos/root + mkdir "${MNT}"/boot + mount -t zfs bpool/nixos/root "${MNT}"/boot + mkdir -p "${MNT}"/var/log + mkdir -p "${MNT}"/var/lib + mount -t zfs rpool/nixos/var/lib "${MNT}"/var/lib + mount -t zfs rpool/nixos/var/log "${MNT}"/var/log + zfs create -o mountpoint=legacy rpool/nixos/empty + zfs snapshot rpool/nixos/empty@start + +#. Format and mount ESP + :: + + for i in ${DISK}; do + mkfs.vfat -n EFI "${i}"-part1 + mkdir -p "${MNT}"/boot/efis/"${i##*/}"-part1 + mount -t vfat -o iocharset=iso8859-1 "${i}"-part1 "${MNT}"/boot/efis/"${i##*/}"-part1 + done + + +System Configuration +--------------------------- + +#. Clone template flake configuration + + .. code-block:: sh + + mkdir -p "${MNT}"/etc + git clone --depth 1 --branch openzfs-guide \ + https://github.com/ne9z/dotfiles-flake.git "${MNT}"/etc/nixos + + .. 
ifconfig:: zfs_root_test + + :: + + # Use vm branch of the template config for test run + mkdir -p "${MNT}"/etc + git clone --depth 1 --branch openzfs-guide-testvm \ + https://github.com/ne9z/dotfiles-flake.git "${MNT}"/etc/nixos + # for debugging: show template revision + git -C "${MNT}"/etc/nixos log -n1 + +#. From now on, the complete configuration of the system will be + tracked by git, set a user name and email address to continue + :: + + rm -rf "${MNT}"/etc/nixos/.git + git -C "${MNT}"/etc/nixos/ init -b main + git -C "${MNT}"/etc/nixos/ add "${MNT}"/etc/nixos/ + git -C "${MNT}"/etc/nixos config user.email "you@example.com" + git -C "${MNT}"/etc/nixos config user.name "Alice Q. Nixer" + git -C "${MNT}"/etc/nixos commit -asm 'initial commit' + +#. Customize configuration to your hardware + + :: + + for i in ${DISK}; do + sed -i \ + "s|/dev/disk/by-id/|${i%/*}/|" \ + "${MNT}"/etc/nixos/hosts/exampleHost/default.nix + break + done + + diskNames="" + for i in ${DISK}; do + diskNames="${diskNames} \"${i##*/}\"" + done + + sed -i "s|\"bootDevices_placeholder\"|${diskNames}|g" \ + "${MNT}"/etc/nixos/hosts/exampleHost/default.nix + + sed -i "s|\"abcd1234\"|\"$(head -c4 /dev/urandom | od -A none -t x4| sed 's| ||g' || true)\"|g" \ + "${MNT}"/etc/nixos/hosts/exampleHost/default.nix + + sed -i "s|\"x86_64-linux\"|\"$(uname -m || true)-linux\"|g" \ + "${MNT}"/etc/nixos/flake.nix + +#. **LUKS only**: Enable LUKS support:: + + sed -i 's|luks.enable = false|luks.enable = true|' "${MNT}"/etc/nixos/hosts/exampleHost/default.nix + +#. Detect kernel modules needed for boot + + .. code-block:: sh + + cp "$(command -v nixos-generate-config || true)" ./nixos-generate-config + + chmod a+rw ./nixos-generate-config + + # shellcheck disable=SC2016 + echo 'print STDOUT $initrdAvailableKernelModules' >> ./nixos-generate-config + + kernelModules="$(./nixos-generate-config --show-hardware-config --no-filesystems | tail -n1 || true)" + + sed -i "s|\"kernelModules_placeholder\"|${kernelModules}|g" \ + "${MNT}"/etc/nixos/hosts/exampleHost/default.nix + + .. ifconfig:: zfs_root_test + + :: + + sed -i "s|\"kernelModules_placeholder\"|\"nvme\"|g" \ + "${MNT}"/etc/nixos/hosts/exampleHost/default.nix + + # show generated config + cat "${MNT}"/etc/nixos/hosts/exampleHost/default.nix + +#. Set root password + + .. code-block:: sh + + rootPwd=$(mkpasswd -m SHA-512) + + .. ifconfig:: zfs_root_test + + :: + + # Use "test" for root password in test run + rootPwd=$(echo yourpassword | mkpasswd -m SHA-512 -) + + Declare password in configuration + :: + + sed -i \ + "s|rootHash_placeholder|${rootPwd}|" \ + "${MNT}"/etc/nixos/configuration.nix + +#. You can enable NetworkManager for wireless networks and GNOME + desktop environment in ``configuration.nix``. + +#. Commit changes to local repo + :: + + git -C "${MNT}"/etc/nixos commit -asm 'initial installation' + +#. Update flake lock file to track latest system version + :: + + nix flake update --commit-lock-file \ + "git+file://${MNT}/etc/nixos" + +#. Install system and apply configuration + + .. code-block:: sh + + nixos-install \ + --root "${MNT}" \ + --no-root-passwd \ + --flake "git+file://${MNT}/etc/nixos#exampleHost" + + .. 
ifconfig:: zfs_root_test + + :: + + if (echo "${DISK}" | grep "/dev/loop"); then + # nixos-install command might fail in a chroot environment + # due to + # https://github.com/NixOS/nixpkgs/issues/220211 + # it should be sufficient to test if the configuration builds + nix build "git+file://${MNT}/etc/nixos/#nixosConfigurations.exampleHost.config.system.build.toplevel" + + nixos-install \ + --root "${MNT}" \ + --no-root-passwd \ + --flake "git+file://${MNT}/etc/nixos#exampleHost" || true + else + # but with qemu test installation must be fully working + nixos-install \ + --root "${MNT}" \ + --no-root-passwd \ + --flake "git+file://${MNT}/etc/nixos#exampleHost" + fi + + .. ifconfig:: zfs_root_test + + :: + + # list contents of boot dir to confirm + # that the mirroring succeeded + find "${MNT}"/boot/efis/ -type d + +#. Unmount filesystems + :: + + umount -Rl "${MNT}" + zpool export -a + +#. Reboot + + .. code-block:: sh + + reboot + + .. ifconfig:: zfs_root_test + + :: + + # For qemu test run, power off instead. + # Test run is successful if the vm powers off + if ! (echo "${DISK}" | grep "/dev/loop"); then + poweroff + fi + +#. For instructions on maintenance tasks, see `Root on ZFS maintenance + page <../zfs_root_maintenance.html>`__. diff --git a/_sources/Getting Started/NixOS/index.rst.txt b/_sources/Getting Started/NixOS/index.rst.txt new file mode 100644 index 000000000..10465e8ba --- /dev/null +++ b/_sources/Getting Started/NixOS/index.rst.txt @@ -0,0 +1,86 @@ +.. highlight:: sh + +NixOS +===== + +Contents +-------- +.. toctree:: + :maxdepth: 1 + :glob: + + * + +Support +------- +Reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. + +If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @ne9z +`__. + +Installation +------------ + +Note: this is for installing ZFS on an existing +NixOS installation. To use ZFS as root file system, +see below. + +NixOS live image ships with ZFS support by default. + +Note that you need to apply these settings even if you don't need +to boot from ZFS. The kernel module 'zfs.ko' will not be available +to modprobe until you make these changes and reboot. + +#. Edit ``/etc/nixos/configuration.nix`` and add the following + options:: + + boot.supportedFilesystems = [ "zfs" ]; + boot.zfs.forceImportRoot = false; + networking.hostId = "yourHostId"; + + Where hostID can be generated with:: + + head -c4 /dev/urandom | od -A none -t x4 + +#. Apply configuration changes:: + + nixos-rebuild boot + +#. Reboot:: + + reboot + +Root on ZFS +----------- +.. toctree:: + :maxdepth: 1 + :glob: + + * + +Contribute +---------- + +You can contribute to this documentation. Fork this repo, edit the +documentation, then opening a pull request. + +#. To test your changes locally, use the devShell in this repo:: + + git clone https://github.com/ne9z/nixos-live openzfs-docs-dev + cd openzfs-docs-dev + nix develop ./openzfs-docs-dev/#docs + +#. Inside the openzfs-docs repo, build pages:: + + make html + +#. Look for errors and warnings in the make output. If there is no + errors:: + + xdg-open _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a + pull request. Mention @ne9z. 
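+
+   For example, a sketch of that final step (the branch name and commit
+   message below are only illustrative)::
+
+      git checkout -b docs-nixos-tweaks
+      git commit --signoff -m 'NixOS: clarify hostId generation'
+      git push origin docs-nixos-tweaks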
diff --git a/_sources/Getting Started/RHEL and CentOS.rst.txt b/_sources/Getting Started/RHEL and CentOS.rst.txt new file mode 100644 index 000000000..afe0ef95b --- /dev/null +++ b/_sources/Getting Started/RHEL and CentOS.rst.txt @@ -0,0 +1,6 @@ +:orphan: + +RHEL and CentOS +======================= + +This page has been moved to `RHEL-based distro `__. diff --git a/_sources/Getting Started/RHEL-based distro/Root on ZFS.rst.txt b/_sources/Getting Started/RHEL-based distro/Root on ZFS.rst.txt new file mode 100644 index 000000000..a26064d55 --- /dev/null +++ b/_sources/Getting Started/RHEL-based distro/Root on ZFS.rst.txt @@ -0,0 +1,668 @@ +.. highlight:: sh + +.. ifconfig:: zfs_root_test + + # For the CI/CD test run of this guide, + # Enable verbose logging of bash shell and fail immediately when + # a commmand fails. + set -vxeuf + distro=${1} + + cp /etc/resolv.conf ./"rootfs-${distro}"/etc/resolv.conf + arch-chroot ./"rootfs-${distro}" sh <<-'ZFS_ROOT_GUIDE_TEST' + + set -vxeuf + + # install alpine setup scripts + apk update + apk add alpine-conf curl + +.. In this document, there are three types of code-block markups: + ``::`` are commands intended for both the vm test and the users + ``.. ifconfig:: zfs_root_test`` are commands intended only for vm test + ``.. code-block:: sh`` are commands intended only for users + +Rocky Linux Root on ZFS +======================================= + +**ZFSBootMenu** + +This tutorial is based on the GRUB bootloader. Due to its independent +implementation of a read-only ZFS driver, GRUB only supports a subset +of ZFS features on the boot pool. [In general, bootloader treat disks +as read-only to minimize the risk of damaging on-disk data.] + +`ZFSBootMenu `__ is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details. + +**Customization** + +Unless stated otherwise, it is not recommended to customize system +configuration before reboot. + +**Only use well-tested pool features** + +You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, `this comment `__. + +Preparation +--------------------------- + +#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled. +#. Because the kernel of latest Live CD might be incompatible with + ZFS, we will use Alpine Linux Extended, which ships with ZFS by + default. + + Download latest extended variant of `Alpine Linux + live image + `__, + verify `checksum `__ + and boot from it. + + .. code-block:: sh + + gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc + + dd if=input-file of=output-file bs=1M + + .. ifconfig:: zfs_root_test + + # check whether the download page exists + # alpine version must be in sync with ci/cd test chroot tarball + +#. Login as root user. There is no password. +#. Configure Internet + + .. code-block:: sh + + setup-interfaces -r + # You must use "-r" option to start networking services properly + # example: + network interface: wlan0 + WiFi name: + ip address: dhcp + + manual netconfig: n + +#. If you are using wireless network and it is not shown, see `Alpine + Linux wiki + `__ for + further details. ``wpa_supplicant`` can be installed with ``apk + add wpa_supplicant`` without internet connection. + +#. Configure SSH server + + .. 
code-block:: sh + + setup-sshd + # example: + ssh server: openssh + allow root: "prohibit-password" or "yes" + ssh key: "none" or "" + +#. Set root password or ``/root/.ssh/authorized_keys``. + +#. Connect from another computer + + .. code-block:: sh + + ssh root@192.168.1.91 + +#. Configure NTP client for time synchronization + + .. code-block:: sh + + setup-ntp busybox + + .. ifconfig:: zfs_root_test + + # this step is unnecessary for chroot and returns 1 when executed + +#. Set up apk-repo. A list of available mirrors is shown. + Press space bar to continue + + .. code-block:: sh + + setup-apkrepos + + +#. Throughout this guide, we use predictable disk names generated by + udev + + .. code-block:: sh + + apk update + apk add eudev + setup-devd udev + + .. ifconfig:: zfs_root_test + + # for some reason, udev is extremely slow in chroot + # it is not needed for chroot anyway. so, skip this step + +#. Target disk + + List available disks with + + .. code-block:: sh + + find /dev/disk/by-id/ + + If virtio is used as disk bus, power off the VM and set serial numbers for disk. + For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``. + For libvirt, edit domain XML. See `this page + `__ for examples. + + Declare disk array + + .. code-block:: sh + + DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR' + + For single disk installation, use + + .. code-block:: sh + + DISK='/dev/disk/by-id/disk1' + + .. ifconfig:: zfs_root_test + + # for github test run, use chroot and loop devices + DISK="$(losetup -a| grep rhel | cut -f1 -d: | xargs -t -I '{}' printf '{} ')" + +#. Set a mount point + :: + + MNT=$(mktemp -d) + +#. Set partition size: + + Set swap size in GB, set to 1 if you don't want swap to + take up too much space + + .. code-block:: sh + + SWAPSIZE=4 + + .. ifconfig:: zfs_root_test + + # For the test run, use 1GB swap space to avoid hitting CI/CD + # quota + SWAPSIZE=1 + + Set how much space should be left at the end of the disk, minimum 1GB + + :: + + RESERVE=1 + +#. Install ZFS support from live media:: + + apk add zfs + +#. Install partition tool + :: + + apk add parted e2fsprogs cryptsetup util-linux + +System Installation +--------------------------- + +#. Partition the disks. + + Note: you must clear all existing partition tables and data structures from target disks. + + For flash-based storage, this can be done by the blkdiscard command below: + :: + + partition_disk () { + local disk="${1}" + blkdiscard -f "${disk}" || true + + parted --script --align=optimal "${disk}" -- \ + mklabel gpt \ + mkpart EFI 2MiB 1GiB \ + mkpart bpool 1GiB 5GiB \ + mkpart rpool 5GiB -$((SWAPSIZE + RESERVE))GiB \ + mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \ + mkpart BIOS 1MiB 2MiB \ + set 1 esp on \ + set 5 bios_grub on \ + set 5 legacy_boot on + + partprobe "${disk}" + } + + for i in ${DISK}; do + partition_disk "${i}" + done + + .. ifconfig:: zfs_root_test + + :: + + # When working with GitHub chroot runners, we are using loop + # devices as installation target. However, the alias support for + # loop device was just introduced in March 2023. See + # https://github.com/systemd/systemd/pull/26693 + # For now, we will create the aliases maunally as a workaround + looppart="1 2 3 4 5" + for i in ${DISK}; do + for j in ${looppart}; do + if test -e "${i}p${j}"; then + ln -s "${i}p${j}" "${i}-part${j}" + fi + done + done + +#. Setup encrypted swap. 
This is useful if the available memory is + small:: + + for i in ${DISK}; do + cryptsetup open --type plain --key-file /dev/random "${i}"-part4 "${i##*/}"-part4 + mkswap /dev/mapper/"${i##*/}"-part4 + swapon /dev/mapper/"${i##*/}"-part4 + done + +#. Load ZFS kernel module + + .. code-block:: sh + + modprobe zfs + +#. Create boot pool + :: + + # shellcheck disable=SC2046 + zpool create -o compatibility=legacy \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl \ + -O canmount=off \ + -O devices=off \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=/boot \ + -R "${MNT}" \ + bpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part2"; + done) + + If not using a multi-disk setup, remove ``mirror``. + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. + +#. Create root pool + :: + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O compression=zstd \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=/ \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part3"; + done) + + If not using a multi-disk setup, remove ``mirror``. + +#. Create root system container: + + - Unencrypted + + :: + + zfs create \ + -o canmount=off \ + -o mountpoint=none \ + rpool/rhel + + - Encrypted: + + Avoid ZFS send/recv when using native encryption, see `a ZFS developer's comment on this issue`__ and `this spreadsheet of bugs`__. A LUKS-based guide has yet to be written. Once compromised, changing password will not keep your + data safe. See ``zfs-change-key(8)`` for more info + + .. code-block:: sh + + zfs create \ + -o canmount=off \ + -o mountpoint=none \ + -o encryption=on \ + -o keylocation=prompt \ + -o keyformat=passphrase \ + rpool/rhel + + You can automate this step (insecure) with: ``echo POOLPASS | zfs create ...``. + + Create system datasets, + manage mountpoints with ``mountpoint=legacy`` + :: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/rhel/root + zfs mount rpool/rhel/root + zfs create -o mountpoint=legacy rpool/rhel/home + mkdir "${MNT}"/home + mount -t zfs rpool/rhel/home "${MNT}"/home + zfs create -o mountpoint=legacy rpool/rhel/var + zfs create -o mountpoint=legacy rpool/rhel/var/lib + zfs create -o mountpoint=legacy rpool/rhel/var/log + zfs create -o mountpoint=none bpool/rhel + zfs create -o mountpoint=legacy bpool/rhel/root + mkdir "${MNT}"/boot + mount -t zfs bpool/rhel/root "${MNT}"/boot + mkdir -p "${MNT}"/var/log + mkdir -p "${MNT}"/var/lib + mount -t zfs rpool/rhel/var/lib "${MNT}"/var/lib + mount -t zfs rpool/rhel/var/log "${MNT}"/var/log + +#. Format and mount ESP + :: + + for i in ${DISK}; do + mkfs.vfat -n EFI "${i}"-part1 + mkdir -p "${MNT}"/boot/efis/"${i##*/}"-part1 + mount -t vfat -o iocharset=iso8859-1 "${i}"-part1 "${MNT}"/boot/efis/"${i##*/}"-part1 + done + + mkdir -p "${MNT}"/boot/efi + mount -t vfat -o iocharset=iso8859-1 "$(echo "${DISK}" | sed "s|^ *||" | cut -f1 -d' '|| true)"-part1 "${MNT}"/boot/efi + +System Configuration +--------------------------- + +#. 
Download and extract minimal Rhel root filesystem:: + + apk add curl + curl --fail-early --fail -L \ + https://dl.rockylinux.org/pub/rocky/9.2/images/x86_64/Rocky-9-Container-Base-9.2-20230513.0.x86_64.tar.xz \ + -o rootfs.tar.gz + curl --fail-early --fail -L \ + https://dl.rockylinux.org/pub/rocky/9.2/images/x86_64/Rocky-9-Container-Base-9.2-20230513.0.x86_64.tar.xz.CHECKSUM \ + -o checksum + + # BusyBox sha256sum treats all lines in the checksum file + # as checksums and requires two spaces " " + # between filename and checksum + + grep 'Container-Base' checksum \ + | grep '^SHA256' \ + | sed -E 's|.*= ([a-z0-9]*)$|\1 rootfs.tar.gz|' > ./sha256checksum + + sha256sum -c ./sha256checksum + + tar x -C "${MNT}" -af rootfs.tar.gz + +#. Enable community repo + + .. code-block:: sh + + sed -i '/edge/d' /etc/apk/repositories + sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories + +#. Generate fstab:: + + apk add arch-install-scripts + genfstab -t PARTUUID "${MNT}" \ + | grep -v swap \ + | sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \ + > "${MNT}"/etc/fstab + +#. Chroot + + .. code-block:: sh + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash + + .. ifconfig:: zfs_root_test + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash <<-'ZFS_ROOT_NESTED_CHROOT' + + set -vxeuf + +#. Unset all shell aliases, which can interfere with installation:: + + unalias -a + +#. Install base packages + + .. code-block:: sh + + dnf -y install --allowerasing @core grub2-efi-x64 \ + grub2-pc grub2-pc-modules grub2-efi-x64-modules shim-x64 \ + efibootmgr kernel-core + + .. ifconfig:: zfs_root_test + + # skip installing firmware in test + dnf -y install --allowerasing --setopt=install_weak_deps=False \ + @core grub2-efi-x64 \ + grub2-pc grub2-pc-modules grub2-efi-x64-modules shim-x64 \ + efibootmgr kernel-core + +#. Install ZFS packages:: + + dnf install -y https://zfsonlinux.org/epel/zfs-release-2-3"$(rpm --eval "%{dist}"|| true)".noarch.rpm + dnf config-manager --disable zfs + dnf config-manager --enable zfs-kmod + dnf install -y zfs zfs-dracut + +#. Add zfs modules to dracut:: + + echo 'add_dracutmodules+=" zfs "' >> /etc/dracut.conf.d/zfs.conf + echo 'force_drivers+=" zfs "' >> /etc/dracut.conf.d/zfs.conf + +#. Add other drivers to dracut:: + + if grep mpt3sas /proc/modules; then + echo 'force_drivers+=" mpt3sas "' >> /etc/dracut.conf.d/zfs.conf + fi + if grep virtio_blk /proc/modules; then + echo 'filesystems+=" virtio_blk "' >> /etc/dracut.conf.d/fs.conf + fi + +#. Build initrd:: + + find -D exec /lib/modules -maxdepth 1 \ + -mindepth 1 -type d \ + -exec sh -vxc \ + 'if test -e "$1"/modules.dep; + then kernel=$(basename "$1"); + dracut --verbose --force --kver "${kernel}"; + fi' sh {} \; + +#. For SELinux, relabel filesystem on reboot:: + + fixfiles -F onboot + +#. Generate host id:: + + zgenhostid -f -o /etc/hostid + +#. Install locale package, example for English locale:: + + dnf install -y glibc-minimal-langpack glibc-langpack-en + +#. Set locale, keymap, timezone, hostname + + :: + + rm -f /etc/localtime + systemd-firstboot \ + --force \ + --locale=en_US.UTF-8 \ + --timezone=Etc/UTC \ + --hostname=testhost \ + --keymap=us + +#. 
Set root passwd + :: + + printf 'root:yourpassword' | chpasswd + +Bootloader +--------------------------- + + +#. Apply GRUB workaround + + :: + + echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile.d/zpool_vdev_name_path.sh + # shellcheck disable=SC1091 + . /etc/profile.d/zpool_vdev_name_path.sh + + # GRUB fails to detect rpool name, hard code as "rpool" + sed -i "s|rpool=.*|rpool=rpool|" /etc/grub.d/10_linux + + This workaround needs to be applied for every GRUB update, as the + update will overwrite the changes. + +#. RHEL uses Boot Loader Specification module for GRUB, + which does not support ZFS. Disable it:: + + echo 'GRUB_ENABLE_BLSCFG=false' >> /etc/default/grub + + This means that you need to regenerate GRUB menu and mirror them + after every kernel update, otherwise computer will still boot old + kernel on reboot. + +#. Install GRUB:: + + mkdir -p /boot/efi/rocky/grub-bootdir/i386-pc/ + for i in ${DISK}; do + grub2-install --target=i386-pc --boot-directory \ + /boot/efi/rocky/grub-bootdir/i386-pc/ "${i}" + done + dnf reinstall -y grub2-efi-x64 shim-x64 + cp -r /usr/lib/grub/x86_64-efi/ /boot/efi/EFI/rocky/ + +#. Generate GRUB menu:: + + mkdir -p /boot/grub2 + grub2-mkconfig -o /boot/grub2/grub.cfg + cp /boot/grub2/grub.cfg \ + /boot/efi/efi/rocky/grub.cfg + cp /boot/grub2/grub.cfg \ + /boot/efi/rocky/grub-bootdir/i386-pc/grub2/grub.cfg + + .. ifconfig:: zfs_root_test + + :: + + find /boot/efis/ -name "grub.cfg" -print0 \ + | xargs -t -0I '{}' grub2-script-check -v '{}' + +#. For both legacy and EFI booting: mirror ESP content:: + + espdir=$(mktemp -d) + find /boot/efi/ -maxdepth 1 -mindepth 1 -type d -print0 \ + | xargs -t -0I '{}' cp -r '{}' "${espdir}" + find "${espdir}" -maxdepth 1 -mindepth 1 -type d -print0 \ + | xargs -t -0I '{}' sh -vxc "find /boot/efis/ -maxdepth 1 -mindepth 1 -type d -print0 | xargs -t -0I '[]' cp -r '{}' '[]'" + +#. Exit chroot + + .. code-block:: sh + + exit + + .. ifconfig:: zfs_root_test + + # nested chroot ends here + ZFS_ROOT_NESTED_CHROOT + + .. ifconfig:: zfs_root_test + + :: + + # list contents of boot dir to confirm + # that the mirroring succeeded + find "${MNT}"/boot/efis/ -type d > list_of_efi_dirs + for i in ${DISK}; do + if ! grep "${i##*/}-part1/efi\|${i##*/}-part1/EFI" list_of_efi_dirs; then + echo "disk ${i} not found in efi system partition, installation error"; + cat list_of_efi_dirs + exit 1 + fi + done + +#. Unmount filesystems and create initial system snapshot + You can later create a boot environment from this snapshot. + See `Root on ZFS maintenance page <../zfs_root_maintenance.html>`__. + :: + + umount -Rl "${MNT}" + zfs snapshot -r rpool@initial-installation + zfs snapshot -r bpool@initial-installation + +#. Export all pools + + .. code-block:: sh + + zpool export -a + + .. ifconfig:: zfs_root_test + + # we are now inside a chroot, where the export will fail + # export pools when we are outside chroot + +#. Reboot + + .. code-block:: sh + + reboot + +#. For BIOS-legacy boot users only: the GRUB bootloader installed + might be unusable. In this case, see Bootloader Recovery section + in `Root on ZFS maintenance page <../zfs_root_maintenance.html>`__. + + This issue is not related to Alpine Linux chroot, as Arch Linux + installed with this method does not have this issue. + + UEFI bootloader is not affected by this issue. + + .. ifconfig:: zfs_root_test + + # chroot ends here + ZFS_ROOT_GUIDE_TEST + +Post installaion +--------------------------- + +#. Install package groups + + .. 
code-block:: sh + + dnf group list --hidden -v # query package groups + dnf group install gnome-desktop + +#. Add new user, configure swap. + +.. _a ZFS developer's comment on this issue: https://ol.reddit.com/r/zfs/comments/10n8fsn/does_openzfs_have_a_new_developer_for_the_native/j6b8k1m/ +.. _this spreadsheet of bugs: https://docs.google.com/spreadsheets/d/1OfRSXibZ2nIE9DGK6swwBZXgXwdCPKgp4SbPZwTexCg/htmlview diff --git a/_sources/Getting Started/RHEL-based distro/index.rst.txt b/_sources/Getting Started/RHEL-based distro/index.rst.txt new file mode 100644 index 000000000..edf553070 --- /dev/null +++ b/_sources/Getting Started/RHEL-based distro/index.rst.txt @@ -0,0 +1,181 @@ +RHEL-based distro +======================= + +Contents +-------- +.. toctree:: + :maxdepth: 1 + :glob: + + * + +`DKMS`_ and `kABI-tracking kmod`_ style packages are provided for x86_64 RHEL- +and CentOS-based distributions from the OpenZFS repository. These packages +are updated as new versions are released. Only the repository for the current +minor version of each current major release is updated with new packages. + +To simplify installation, a *zfs-release* package is provided which includes +a zfs.repo configuration file and public signing key. All official OpenZFS +packages are signed using this key, and by default yum or dnf will verify a +package's signature before allowing it be to installed. Users are strongly +encouraged to verify the authenticity of the OpenZFS public key using +the fingerprint listed here. + +| **Key location:** /etc/pki/rpm-gpg/RPM-GPG-KEY-openzfs (previously -zfsonlinux) +| **Current release packages:** `EL7`_, `EL8`_, `EL9`_ +| **Archived release packages:** `see repo page `__ + +| **Signing key1 (EL8 and older, Fedora 36 and older)** + `pgp.mit.edu `__ / + `direct link `__ +| **Fingerprint:** C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620 + +| **Signing key2 (EL9+, Fedora 37+)** + `pgp.mit.edu `__ / + `direct link `__ +| **Fingerprint:** 7DC7 299D CF7C 7FD9 CD87 701B A599 FD5E 9DB8 4141 + +For EL7 run:: + + yum install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm + +and for EL8 and 9:: + + dnf install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm + +After installing the *zfs-release* package and verifying the public key +users can opt to install either the DKMS or kABI-tracking kmod style packages. +DKMS packages are recommended for users running a non-distribution kernel or +for users who wish to apply local customizations to OpenZFS. For most users +the kABI-tracking kmod packages are recommended in order to avoid needing to +rebuild OpenZFS for every kernel update. + +DKMS +---- + +To install DKMS style packages issue the following commands. First add the +`EPEL repository`_ which provides DKMS by installing the *epel-release* +package, then the *kernel-devel* and *zfs* packages. Note that it is +important to make sure that the matching *kernel-devel* package is installed +for the running kernel since DKMS requires it to build OpenZFS. + +For EL6 and 7, separately run:: + + yum install -y epel-release + yum install -y kernel-devel + yum install -y zfs + +And for EL8 and newer, separately run:: + + dnf install -y epel-release + dnf install -y kernel-devel + dnf install -y zfs + +.. note:: + When switching from DKMS to kABI-tracking kmods first uninstall the + existing DKMS packages. 
This should remove the kernel modules for all + installed kernels, then the kABI-tracking kmods can be installed as + described in the section below. + +kABI-tracking kmod +------------------ + +By default the *zfs-release* package is configured to install DKMS style +packages so they will work with a wide range of kernels. In order to +install the kABI-tracking kmods the default repository must be switched +from *zfs* to *zfs-kmod*. Keep in mind that the kABI-tracking kmods are +only verified to work with the distribution-provided, non-Stream kernel. + +For EL6 and 7 run:: + + yum-config-manager --disable zfs + yum-config-manager --enable zfs-kmod + yum install zfs + +And for EL8 and newer:: + + dnf config-manager --disable zfs + dnf config-manager --enable zfs-kmod + dnf install zfs + +By default the OpenZFS kernel modules are automatically loaded when a ZFS +pool is detected. If you would prefer to always load the modules at boot +time you can create such configuration in ``/etc/modules-load.d``:: + + echo zfs >/etc/modules-load.d/zfs.conf + +.. note:: + When updating to a new EL minor release the existing kmod + packages may not work due to upstream kABI changes in the kernel. + The configuration of the current release package may have already made an + updated package available, but the package manager may not know to install + that package if the version number isn't newer. When upgrading, users + should verify that the *kmod-zfs* package is providing suitable kernel + modules, reinstalling the *kmod-zfs* package if necessary. + +Previous minor EL releases +-------------------------- + +The current release package uses `"${releasever}"` rather than specify a particular +minor release as previous release packages did. Typically `"${releasever}"` will +resolve to just the major version (e.g. `8`), and the resulting repository URL +will be aliased to the current minor version (e.g. `8.7`), but you can specify +`--releasever` to use previous repositories. :: + + [vagrant@localhost ~]$ dnf list available --showduplicates kmod-zfs + Last metadata expiration check: 0:00:08 ago on tor 31 jan 2023 17:50:05 UTC. + Available Packages + kmod-zfs.x86_64 2.1.6-1.el8 zfs-kmod + kmod-zfs.x86_64 2.1.7-1.el8 zfs-kmod + kmod-zfs.x86_64 2.1.8-1.el8 zfs-kmod + kmod-zfs.x86_64 2.1.9-1.el8 zfs-kmod + [vagrant@localhost ~]$ dnf list available --showduplicates --releasever=8.6 kmod-zfs + Last metadata expiration check: 0:16:13 ago on tor 31 jan 2023 17:34:10 UTC. + Available Packages + kmod-zfs.x86_64 2.1.4-1.el8 zfs-kmod + kmod-zfs.x86_64 2.1.5-1.el8 zfs-kmod + kmod-zfs.x86_64 2.1.5-2.el8 zfs-kmod + kmod-zfs.x86_64 2.1.6-1.el8 zfs-kmod + [vagrant@localhost ~]$ + +In the above example, the former packages were built for EL8.7, and the latter for EL8.6. + +Testing Repositories +-------------------- + +In addition to the primary *zfs* repository a *zfs-testing* repository +is available. This repository, which is disabled by default, contains +the latest version of OpenZFS which is under active development. These +packages are made available in order to get feedback from users regarding +the functionality and stability of upcoming releases. These packages +**should not** be used on production systems. Packages from the testing +repository can be installed as follows. + +For EL6 and 7 run:: + + yum-config-manager --enable zfs-testing + yum install kernel-devel zfs + +And for EL8 and newer:: + + dnf config-manager --enable zfs-testing + dnf install kernel-devel zfs + +.. 
note:: + Use *zfs-testing* for DKMS packages and *zfs-testing-kmod* + for kABI-tracking kmod packages. + +Root on ZFS +----------- +.. toctree:: + :maxdepth: 1 + :glob: + + * + +.. _kABI-tracking kmod: https://elrepoproject.blogspot.com/2016/02/kabi-tracking-kmod-packages.html +.. _DKMS: https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support +.. _EL7: https://zfsonlinux.org/epel/zfs-release-2-3.el7.noarch.rpm +.. _EL8: https://zfsonlinux.org/epel/zfs-release-2-3.el8.noarch.rpm +.. _EL9: https://zfsonlinux.org/epel/zfs-release-2-3.el9.noarch.rpm +.. _EPEL repository: https://fedoraproject.org/wiki/EPEL diff --git a/_sources/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst.txt b/_sources/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst.txt new file mode 100644 index 000000000..8fc3be062 --- /dev/null +++ b/_sources/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst.txt @@ -0,0 +1,1032 @@ +.. highlight:: sh + +Ubuntu 18.04 Root on ZFS +======================== + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Newer release available +~~~~~~~~~~~~~~~~~~~~~~~ + +- See :doc:`Ubuntu 20.04 Root on ZFS <./Ubuntu 20.04 Root on ZFS>` for new + installs. This guide is no longer receiving most updates. It continues + to exist for reference for existing installs that followed it. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `Ubuntu 18.04.3 ("Bionic") Desktop + CD `__ + (*not* any server images) +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” + drive) only works with UEFI booting. This not unique to ZFS. `GRUB + does not and will not work on 4Kn with legacy (BIOS) + booting. `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of +memory is recommended for normal performance in basic workloads. If you +wish to use deduplication, you will need `massive amounts of +RAM `__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports two different encryption options: unencrypted and +LUKS (full-disk encryption). With either option, all ZFS features are fully +available. ZFS native encryption is not available in Ubuntu 18.04. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. 
Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +1.1 Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to +the Internet as appropriate (e.g. join your WiFi network). Open a +terminal (press Ctrl-Alt-T). + +1.2 Setup and update the repositories:: + + sudo apt-add-repository universe + sudo apt update + +1.3 Optional: Install and start the OpenSSH server in the Live CD +environment: + +If you have a second system, using SSH to access the target system can +be convenient:: + + passwd + # There is no current password; hit enter at that prompt. + sudo apt install --yes openssh-server + +**Hint:** You can find your IP address with +``ip addr show scope global | grep inet``. Then, from your main machine, +connect with ``ssh ubuntu@IP``. + +1.4 Become root:: + + sudo -i + +1.5 Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk zfs-initramfs + +Step 2: Disk Formatting +----------------------- + +2.1 Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + +Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the +``/dev/sd*`` device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool. + +**Hints:** + +- ``ls -la /dev/disk/by-id`` will list the aliases. +- Are you doing this in a virtual machine? If your virtual disk is + missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using + KVM with virtio; otherwise, read the + `troubleshooting <#troubleshooting>`__ section. +- For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. +- When choosing a boot pool size, consider how you will use the space. A kernel + and initrd may consume around 100M. If you have multiple kernels and take + snapshots, you may find yourself low on boot pool space, especially if you + need to regenerate your initramfs images, which may be around 85M each. Size + your boot pool appropriately for your needs. + +2.2 If you are re-using a disk, clear it as necessary: + +If the disk was previously used in an MD array, zero the superblock:: + + apt install --yes mdadm + mdadm --zero-superblock --force $DISK + +Clear the partition table:: + + sgdisk --zap-all $DISK + +2.3 Partition your disk(s): + +Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + +Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + +Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + +Choose one of the following options: + +2.3a Unencrypted:: + + sgdisk -n4:0:0 -t4:BF01 $DISK + +2.3b LUKS:: + + sgdisk -n4:0:0 -t4:8300 $DISK + +If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool. 
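+
+For example, for a two-disk mirror (a sketch assuming the ``DISK1`` and
+``DISK2`` variables suggested in the hints above, both the legacy and UEFI
+boot partitions, and the unencrypted option 2.3a), the commands above are
+simply repeated for each disk::
+
+  for DISK in $DISK1 $DISK2; do
+    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK  # legacy (BIOS) boot
+    sgdisk -n2:1M:+512M -t2:EF00 $DISK        # UEFI system partition
+    sgdisk -n3:0:+1G -t3:BF01 $DISK           # boot pool
+    sgdisk -n4:0:0 -t4:BF01 $DISK             # root pool (unencrypted)
+  done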
+ +2.4 Create the boot pool:: + + zpool create -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ + -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt bpool ${DISK}-part3 + +You should not need to customize any of the options for the boot pool. + +GRUB does not support all of the zpool features. See +``spa_feature_names`` in +`grub-core/fs/zfs/zfs.c `__. +This step creates a separate boot pool for ``/boot`` with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB. + +**Hints:** + +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). +- The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + +**Feature Notes:** + +- As a read-only compatible feature, the ``userobj_accounting`` feature should + be compatible in theory, but in practice, GRUB can fail with an “invalid + dnode type” error. This feature does not matter for ``/boot`` anyway. + +2.5 Create the root pool: + +Choose one of the following options: + +2.5a Unencrypted:: + + zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt rpool ${DISK}-part4 + +2.5b LUKS:: + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1 + +**Notes:** + +- The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). +- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires + ACLs `__ +- Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only + filenames `__. +- ``recordsize`` is unset (leaving it at the default of 128 KiB). 
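+
+As a quick optional sanity check, ``zfs mount`` with no arguments lists the
+currently mounted ZFS filesystems, and ``zfs get`` confirms the property::
+
+  zfs get canmount rpool/ROOT/ubuntu bpool/BOOT/ubuntu
+  zfs mount
+
+Both datasets should report ``canmount=noauto`` and appear in the
+``zfs mount`` output, mounted at ``/mnt`` and ``/mnt/boot`` respectively.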
If you want to + tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. +- Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s + documentation `__ + for further information. +- Setting ``xattr=sa`` `vastly improves the performance of extended + attributes `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI + applications. `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain + controller. `__ + Note that ``xattr=sa`` is + `Linux-specific `__. + If you move your ``xattr=sa`` pool to another OpenZFS implementation + besides ZFS-on-Linux, extended attributes will not be readable + (though your data will be). If portability of extended attributes is + important to you, omit the ``-O xattr=sa`` above. Even if you do not + want ``xattr=sa`` for the whole pool, it is probably fine to use it + for ``/var/log``. +- Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). +- For LUKS, the key size chosen is 512 bits. However, XTS mode requires + two keys, so the LUKS key is split in half. Thus, ``-s 512`` means + AES-256. +- Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup + FAQ `__ + for guidance. + +**Hints:** + +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). For LUKS, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will + have to create using ``cryptsetup``. +- The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the + root pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +3.1 Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + +On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through ``pkg image-update`` or +``beadm``. Similar functionality has been implemented in Ubuntu 20.04 with the +``zsys`` tool, though its dataset layout is more complicated. Even without +such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still be used +for manually created clones. + +3.2 Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu + zfs mount rpool/ROOT/ubuntu + + zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu + zfs mount bpool/BOOT/ubuntu + +With ZFS, it is not normally necessary to use a mount command (either +``mount`` or ``zfs mount``). This situation is an exception because of +``canmount=noauto``. 
+ +3.3 Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + +The datasets below are optional, depending on your preferences and/or +software choices. + +If you wish to exclude these from snapshots:: + + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp + +If you use /opt on this system:: + + zfs create rpool/opt + +If you use /srv on this system:: + + zfs create rpool/srv + +If you use /usr/local on this system:: + + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local + +If this system will have games installed:: + + zfs create rpool/var/games + +If this system will store local email in /var/mail:: + + zfs create rpool/var/mail + +If this system will use Snap packages:: + + zfs create rpool/var/snap + +If you use /var/www on this system:: + + zfs create rpool/var/www + +If this system will use GNOME:: + + zfs create rpool/var/lib/AccountsService + +If this system will use Docker (which manages its own datasets & +snapshots):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + +If this system will use NFS (locking):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + +A tmpfs is recommended later, but if you want a separate dataset for +``/tmp``:: + + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp + +The primary goal of this dataset layout is to separate the OS from user data. +This allows the root filesystem to be rolled back without rolling back user +data. The ``com.sun.auto-snapshot`` setting is used by some ZFS +snapshot utilities to exclude transient data. + +If you do nothing extra, ``/tmp`` will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for +``/tmp``, as shown above. This keeps the ``/tmp`` data out of snapshots +of your root filesystem. It also allows you to set a quota on +``rpool/tmp``, if you want to limit the maximum space used. Otherwise, +you can use a tmpfs (RAM filesystem) later. + +3.4 Install the minimal system:: + + debootstrap bionic /mnt + zfs set devices=off rpool + +The ``debootstrap`` command leaves the new system in an unconfigured +state. An alternative to using ``debootstrap`` is to copy the entirety +of a working system into the new ZFS root. + +Step 4: System Configuration +---------------------------- + +4.1 Configure the hostname: + +Replace ``HOSTNAME`` with the desired hostname:: + + echo HOSTNAME > /mnt/etc/hostname + vi /mnt/etc/hosts + +.. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + +**Hint:** Use ``nano`` if you find ``vi`` confusing. + +4.2 Configure the network interface: + +Find the interface name:: + + ip addr show + +Adjust NAME below to match your interface name:: + + vi /mnt/etc/netplan/01-netcfg.yaml + +.. code-block:: yaml + + network: + version: 2 + ethernets: + NAME: + dhcp4: true + +Customize this file if the system is not a DHCP client. + +4.3 Configure the package sources:: + + vi /mnt/etc/apt/sources.list + +.. 
code-block:: sourceslist + + deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu bionic-backports main restricted universe multiverse + deb http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse + +4.4 Bind the virtual filesystems from the LiveCD environment to the new +system and ``chroot`` into it:: + + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK bash --login + +**Note:** This is using ``--rbind``, not ``--bind``. + +4.5 Configure a basic system environment:: + + ln -s /proc/self/mounts /etc/mtab + apt update + +Even if you prefer a non-English system language, always ensure that +``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales + dpkg-reconfigure tzdata + +If you prefer ``nano`` over ``vi``, install it:: + + apt install --yes nano + +4.6 Install ZFS in the chroot environment for the new system:: + + apt install --yes --no-install-recommends linux-image-generic + apt install --yes zfs-initramfs + +**Hint:** For the HWE kernel, install ``linux-image-generic-hwe-18.04`` +instead of ``linux-image-generic``. + +4.7 For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup + + echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \ + luks,discard,initramfs > /etc/crypttab + +The use of ``initramfs`` is a work-around for `cryptsetup does not support ZFS +`__. + +**Hint:** If you are creating a mirror or raidz topology, repeat the +``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +4.8 Install GRUB + +Choose one of the following options: + +4.8a Install GRUB for legacy (BIOS) booting:: + + apt install --yes grub-pc + +Select (using the space bar) all of the disks (not partitions) in your pool. + +4.8b Install GRUB for UEFI booting:: + + apt install dosfstools + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \ + /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab + mount /boot/efi + apt install --yes grub-efi-amd64-signed shim-signed + +**Notes:** + +- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. +- For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +4.9 (Optional): Remove os-prober:: + + apt purge --yes os-prober + +This avoids error messages from `update-grub`. `os-prober` is only necessary +in dual-boot configurations. + +4.10 Set a root password:: + + passwd + +4.11 Enable importing bpool + +This ensures that ``bpool`` is always imported, regardless of whether +``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, +or whether ``zfs-import-scan.service`` is enabled. + +:: + + vi /etc/systemd/system/zfs-import-bpool.service + +.. 
code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + + [Install] + WantedBy=zfs-import.target + +:: + + systemctl enable zfs-import-bpool.service + +4.12 Optional (but recommended): Mount a tmpfs to ``/tmp`` + +If you chose to create a ``/tmp`` dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a +tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + +:: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +4.13 Setup system groups:: + + addgroup --system lpadmin + addgroup --system sambashare + +Step 5: GRUB Installation +------------------------- + +5.1 Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +5.2 Refresh the initrd files:: + + update-initramfs -c -k all + +**Note:** When using LUKS, this will print “WARNING could not determine +root device from /etc/fstab”. This is because `cryptsetup does not +support ZFS +`__. + +5.3 Workaround GRUB's missing zpool-features support:: + + vi /etc/default/grub + # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu" + +5.4 Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Comment out: GRUB_TIMEOUT_STYLE=hidden + # Set: GRUB_TIMEOUT=5 + # Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5 + # Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + +Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired. + +5.5 Update the boot configuration:: + + update-grub + +**Note:** Ignore errors from ``osprober``, if present. + +5.6 Install the boot loader: + +5.6a For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + +Note that you are installing GRUB to the whole disk, not a partition. + +If you are creating a mirror or raidz topology, repeat the +``grub-install`` command for each disk in the pool. + +5.6b For UEFI booting, install GRUB:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=ubuntu --recheck --no-floppy + +It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later. + +5.7 Fix filesystem mount ordering: + +`Until ZFS gains a systemd mount +generator `__, there are +races between mounting filesystems and starting certain daemons. In +practice, the issues (e.g. +`#5754 `__) seem to be +with certain filesystems in ``/var``, specifically ``/var/log`` and +``/var/tmp``. Setting these to use ``legacy`` mounting, and listing them +in ``/etc/fstab`` makes systemd aware that these are separate +mountpoints. In turn, ``rsyslog.service`` depends on ``var-log.mount`` +by way of ``local-fs.target`` and services using the ``PrivateTmp`` +feature of systemd automatically use ``After=var-tmp.mount``. + +Until there is support for mounting ``/boot`` in the initramfs, we also +need to mount that, because it was marked ``canmount=noauto``. Also, +with UEFI, we need to ensure it is mounted before its child filesystem +``/boot/efi``. + +``rpool`` is guaranteed to be imported by the initramfs, so there is no +point in adding ``x-systemd.requires=zfs-import.target`` to those +filesystems. 
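As an optional check (not part of the original procedure), once the ``fstab``
entries below are in place and the system has rebooted, you can confirm that
systemd picked up the legacy mounts and see which services depend on them::

   systemctl status var-log.mount
   systemctl list-dependencies --reverse var-log.mount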
+ +For UEFI booting, unmount /boot/efi first:: + + umount /boot/efi + +Everything else applies to both BIOS and UEFI booting:: + + zfs set mountpoint=legacy bpool/BOOT/ubuntu + echo bpool/BOOT/ubuntu /boot zfs \ + nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab + + zfs set mountpoint=legacy rpool/var/log + echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab + + zfs set mountpoint=legacy rpool/var/spool + echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab + +If you created a /var/tmp dataset:: + + zfs set mountpoint=legacy rpool/var/tmp + echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab + +If you created a /tmp dataset:: + + zfs set mountpoint=legacy rpool/tmp + echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab + +Step 6: First Boot +------------------ + +6.1 Snapshot the initial installation:: + + zfs snapshot bpool/BOOT/ubuntu@install + zfs snapshot rpool/ROOT/ubuntu@install + +In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space. + +6.2 Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +6.3 Run these commands in the LiveCD environment to unmount all +filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + zpool export -a + +6.4 Reboot:: + + reboot + +Wait for the newly installed system to boot normally. Login as root. + +6.5 Create a user account: + +Replace ``username`` with your desired username:: + + zfs create rpool/home/username + adduser username + + cp -a /etc/skel/. /home/username + chown -R username:username /home/username + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username + +6.6 Mirror GRUB + +If you installed to multiple disks, install GRUB on the additional +disks: + +6.6a For legacy (BIOS) booting:: + + dpkg-reconfigure grub-pc + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + +6.6b For UEFI booting:: + + umount /boot/efi + +For the second and subsequent disks (increment ubuntu-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "ubuntu-2" -l '\EFI\ubuntu\shimx64.efi' + + mount /boot/efi + +Step 7: (Optional) Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. This issue is currently being investigated in: +`https://github.com/zfsonlinux/zfs/issues/7734 `__ + +7.1 Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + +You can adjust the size (the ``4G`` part) to your needs. + +The compression algorithm is set to ``zle`` because it is the cheapest +available algorithm. As this guide recommends ``ashift=12`` (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior. 
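Before formatting the zvol in the next step, you can optionally confirm that
its device node has appeared and that the properties were applied as intended
(this check is not part of the original procedure)::

   ls -l /dev/zvol/rpool/swap
   zfs get volblocksize,compression,logbias,sync,primarycache rpool/swap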
7.2 Configure the swap device:

**Caution**: Always use long ``/dev/zvol`` aliases in configuration
files. Never use a short ``/dev/zdX`` device name.

::

   mkswap -f /dev/zvol/rpool/swap
   echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
   echo RESUME=none > /etc/initramfs-tools/conf.d/resume

The ``RESUME=none`` is necessary to disable resuming from hibernation.
Resuming does not work, as the zvol is not present (because the pool
has not yet been imported) at the time the resume script runs. If it is
not disabled, the boot process hangs for 30 seconds waiting for the swap
zvol to appear.

7.3 Enable the swap device::

   swapon -av

Step 8: Full Software Installation
----------------------------------

8.1 Upgrade the minimal system::

   apt dist-upgrade --yes

8.2 Install a regular set of software:

Choose one of the following options:

8.2a Install a command-line environment only::

   apt install --yes ubuntu-standard

8.2b Install a full GUI environment::

   apt install --yes ubuntu-desktop
   vi /etc/gdm3/custom.conf
   # In the [daemon] section, add: InitialSetupEnable=false

**Hint**: If you are installing a full GUI environment, you will likely
want to manage your network with NetworkManager::

   rm /etc/netplan/01-netcfg.yaml
   vi /etc/netplan/01-network-manager-all.yaml

.. code-block:: yaml

   network:
     version: 2
     renderer: NetworkManager

8.3 Optional: Disable log compression:

As ``/var/log`` is already compressed by ZFS, logrotate’s compression is
going to burn CPU and disk I/O for (in most cases) very little gain.
Also, if you are making snapshots of ``/var/log``, logrotate’s
compression will actually waste space, as the uncompressed data will
live on in the snapshot. You can edit the files in ``/etc/logrotate.d``
by hand to comment out ``compress``, or use this loop (copy-and-paste
highly recommended)::

   for file in /etc/logrotate.d/* ; do
       if grep -Eq "(^|[^#y])compress" "$file" ; then
           sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
       fi
   done

8.4 Reboot::

   reboot

Step 9: Final Cleanup
---------------------

9.1 Wait for the system to boot normally. Login using the account you
created. Ensure the system (including networking) works normally.

9.2 Optional: Delete the snapshots of the initial installation::

   sudo zfs destroy bpool/BOOT/ubuntu@install
   sudo zfs destroy rpool/ROOT/ubuntu@install

9.3 Optional: Disable the root password::

   sudo usermod -p '*' root

9.4 Optional: Re-enable the graphical boot process:

If you prefer the graphical boot process, you can re-enable it now. If
you are using LUKS, it makes the prompt look nicer.

::

   sudo vi /etc/default/grub
   # Uncomment: GRUB_TIMEOUT_STYLE=hidden
   # Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT
   # Comment out: GRUB_TERMINAL=console
   # Save and quit.

   sudo update-grub

**Note:** Ignore errors from ``osprober``, if present.

9.5 Optional: For LUKS installs only, backup the LUKS header::

   sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
       --header-backup-file luks1-header.dat

Store that backup somewhere safe (e.g. cloud storage). It is protected
by your LUKS passphrase, but you may wish to use additional encryption.

**Hint:** If you created a mirror or raidz topology, repeat this for
each LUKS volume (``luks2``, etc.).
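For example, a short loop can back up both headers in one pass. This is only a
sketch that assumes two disks named as in the examples above; adjust the
device paths and the count to match your actual topology::

   for i in 1 2; do
       sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk${i}-part4 \
           --header-backup-file luks${i}-header.dat
   done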
+ +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install +Environment <#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs mount rpool/ROOT/ubuntu + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + zpool export -a + reboot + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that +does slow asynchronous drive initialization, like some IBM M1015 or +OEM-branded cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to +the Linux kernel until after the regular system is started, and ZoL does +not hotplug pool members. See +`https://github.com/zfsonlinux/zfs/issues/330 `__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run +``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit +this error message. + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere + configuration. Doing this ensures that ``/dev/disk`` aliases are + created in the guest. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service diff --git a/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.rst.txt b/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.rst.txt new file mode 100644 index 000000000..076eee0dd --- /dev/null +++ b/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.rst.txt @@ -0,0 +1,869 @@ +.. highlight:: sh + +Ubuntu 20.04 Root on ZFS for Raspberry Pi +========================================= + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Newer release available +~~~~~~~~~~~~~~~~~~~~~~~ + +- See :doc:`Ubuntu 22.04 Root on ZFS for Raspberry Pi + <./Ubuntu 22.04 Root on ZFS for Raspberry Pi>` for new installs. This guide + is no longer receiving most updates. 
It continues to exist for reference + for existing installs that followed it. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- A Raspberry Pi 4 B. (If you are looking to install on a regular PC, see + :doc:`Ubuntu 20.04 Root on ZFS`.) +- `Ubuntu Server 20.04.4 (“Focal”) for Raspberry Pi 4 + `__ +- A microSD card or USB disk. For microSD card recommendations, see Jeff + Geerling's `performance comparison + `__. + When using a USB enclosure, `ensure it supports UASP + `__. +- An Ubuntu system (with the ability to write to the microSD card or USB disk) + other than the target Raspberry Pi. + +4 GiB of memory is recommended. Do not use deduplication, as it needs `massive +amounts of RAM `__. +Enabling deduplication is a permanent change that cannot be easily reverted. + +A Raspberry Pi 3 B/B+ would probably work (as the Pi 3 is 64-bit, though it +has less RAM), but has not been tested. Please report your results (good or +bad) using the issue link below. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +**WARNING:** Encryption has not yet been tested on the Raspberry Pi. + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +USB Disks +~~~~~~~~~ + +The Raspberry Pi 4 runs much faster using a USB Solid State Drive (SSD) than +a microSD card. These instructions can also be used to install Ubuntu on a +USB-connected SSD or other USB disk. USB disks have three requirements that +do not apply to microSD cards: + +#. The Raspberry Pi's Bootloader EEPROM must be dated 2020-09-03 or later. 
+ + To check the bootloader version, power up the Raspberry Pi without an SD + card inserted or a USB boot device attached; the date will be on the + ``bootloader`` line. (If you do not see the ``bootloader`` line, the + bootloader is too old.) Alternatively, run ``sudo rpi-eeprom-update`` + on an existing OS on the Raspberry Pi (which on Ubuntu requires + ``apt install rpi-eeprom``). + + If needed, the bootloader can be updated from an existing OS on the + Raspberry Pi using ``rpi-eeprom-update -a`` and rebooting. + For other options, see `Updating the Bootloader + `_. + +#. The Raspberry Pi must configured for USB boot. The bootloader will show a + ``boot`` line; if ``order`` includes ``4``, USB boot is enabled. + + If not already enabled, it can be enabled from an existing OS on the + Raspberry Pi using ``rpi-eeprom-config -e``: set ``BOOT_ORDER=0xf41`` + and reboot to apply the change. On subsequent reboots, USB boot will be + enabled. + + Otherwise, it can be enabled without an existing OS as follows: + + - Download the `Raspberry Pi Imager Utility + `_. + - Flash the ``USB Boot`` image to a microSD card. The ``USB Boot`` image is + listed under ``Bootload`` in the ``Misc utility images`` folder. + - Boot the Raspberry Pi from the microSD card. USB Boot should be enabled + automatically. + +#. U-Boot on Ubuntu 20.04 does not seem to support the Raspberry Pi USB. + `Ubuntu 20.10 may work + `_. As a + work-around, the Raspberry Pi bootloader is configured to directly boot + Linux. For this to work, the Linux kernel must not be compressed. These + instructions decompress the kernel and add a script to + ``/etc/kernel/postinst.d`` to handle kernel upgrades. + +Step 1: Disk Formatting +----------------------- + +The commands in this step are run on the system other than the Raspberry Pi. + +This guide has you go to some extra work so that the stock ext4 partition can +be deleted. + +#. Download and unpack the official image:: + + curl -O https://cdimage.ubuntu.com/releases/20.04.4/release/ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz + xz -d ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz + + # or combine them to decompress as you download: + curl https://cdimage.ubuntu.com/releases/20.04.4/release/ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz | \ + xz -d > ubuntu-20.04.4-preinstalled-server-arm64+raspi.img + +#. Dump the partition table for the image:: + + sfdisk -d ubuntu-20.04.4-preinstalled-server-arm64+raspi.img + + That will output this:: + + label: dos + label-id: 0xddbefb06 + device: ubuntu-20.04.4-preinstalled-server-arm64+raspi.img + unit: sectors + + .img1 : start= 2048, size= 524288, type=c, bootable + .img2 : start= 526336, size= 6285628, type=83 + + The important numbers are 524288 and 6285628. Store those in variables:: + + BOOT=524288 + ROOT=6285628 + +#. Create a partition script:: + + cat > partitions << EOF + label: dos + unit: sectors + + 1 : start= 2048, size=$BOOT, type=c, bootable + 2 : start=$((2048+BOOT)), size=$ROOT, type=83 + 3 : start=$((2048+BOOT+ROOT)), size=$ROOT, type=83 + EOF + +#. Connect the disk: + + Connect the disk to a machine other than the target Raspberry Pi. If any + filesystems are automatically mounted (e.g. by GNOME) unmount them. + Determine the device name. For SD, the device name is almost certainly + ``/dev/mmcblk0``. For USB SSDs, the device name is ``/dev/sdX``, where + ``X`` is a lowercase letter. ``lsblk`` can help determine the device name. 
+ Set the ``DISK`` environment variable to the device name:: + + DISK=/dev/mmcblk0 # microSD card + DISK=/dev/sdX # USB disk + + Because partitions are named differently for ``/dev/mmcblk0`` and ``/dev/sdX`` + devices, set a second variable used when working with partitions:: + + export DISKP=${DISK}p # microSD card + export DISKP=${DISK} # USB disk ($DISKP == $DISK for /dev/sdX devices) + + **Hint**: microSD cards connected using a USB reader also have ``/dev/sdX`` + names. + + **WARNING**: The following steps destroy the existing data on the disk. Ensure + ``DISK`` and ``DISKP`` are correct before proceeding. + +#. Ensure swap partitions are not in use:: + + swapon -v + # If a partition is in use from the disk, disable it: + sudo swapoff THAT_PARTITION + +#. Clear old ZFS labels:: + + sudo zpool labelclear -f ${DISK} + + If a ZFS label still exists from a previous system/attempt, expanding the + pool will result in an unbootable system. + + **Hint:** If you do not already have the ZFS utilities installed, you can + install them with: ``sudo apt install zfsutils-linux`` Alternatively, you + can zero the entire disk with: + ``sudo dd if=/dev/zero of=${DISK} bs=1M status=progress`` + +#. Delete existing partitions:: + + echo "label: dos" | sudo sfdisk ${DISK} + sudo partprobe + ls ${DISKP}* + + Make sure there are no partitions, just the file for the disk itself. This + step is not strictly necessary; it exists to catch problems. + +#. Create the partitions:: + + sudo sfdisk $DISK < partitions + +#. Loopback mount the image:: + + IMG=$(sudo losetup -fP --show \ + ubuntu-20.04.4-preinstalled-server-arm64+raspi.img) + +#. Copy the bootloader data:: + + sudo dd if=${IMG}p1 of=${DISKP}1 bs=1M + +#. Clear old label(s) from partition 2:: + + sudo wipefs -a ${DISKP}2 + + If a filesystem with the ``writable`` label from the Ubuntu image is still + present in partition 2, the system will not boot initially. + +#. Copy the root filesystem data:: + + # NOTE: the destination is p3, not p2. + sudo dd if=${IMG}p2 of=${DISKP}3 bs=1M status=progress conv=fsync + +#. Unmount the image:: + + sudo losetup -d $IMG + +#. If setting up a USB disk: + + Decompress the kernel:: + + sudo -sE + + MNT=$(mktemp -d /mnt/XXXXXXXX) + mkdir -p $MNT/boot $MNT/root + mount ${DISKP}1 $MNT/boot + mount ${DISKP}3 $MNT/root + + zcat -qf $MNT/boot/vmlinuz >$MNT/boot/vmlinux + + Modify boot config:: + + cat >> $MNT/boot/usercfg.txt << EOF + kernel=vmlinux + initramfs initrd.img followkernel + boot_delay + EOF + + Create a script to automatically decompress the kernel after an upgrade:: + + cat >$MNT/root/etc/kernel/postinst.d/zz-decompress-kernel << 'EOF' + #!/bin/sh + + set -eu + + echo "Updating decompressed kernel..." + [ -e /boot/firmware/vmlinux ] && \ + cp /boot/firmware/vmlinux /boot/firmware/vmlinux.bak + vmlinuxtmp=$(mktemp /boot/firmware/vmlinux.XXXXXXXX) + zcat -qf /boot/vmlinuz > "$vmlinuxtmp" + mv "$vmlinuxtmp" /boot/firmware/vmlinux + EOF + + chmod +x $MNT/root/etc/kernel/postinst.d/zz-decompress-kernel + + Cleanup:: + + umount $MNT/* + rm -rf $MNT + exit + +#. Boot the Raspberry Pi. + + Move the SD/USB disk to the Raspberry Pi. Boot it and login (e.g. via SSH) + with ``ubuntu`` as the username and password. If you are using SSH, note + that it takes a little bit for cloud-init to enable password logins on the + first boot. Set a new password when prompted and login again using that + password. 
If you have your local SSH configured to use ``ControlPersist``, + you will have to kill the existing SSH process before logging in the second + time. + +Step 2: Setup ZFS +----------------- + +#. Become root:: + + sudo -i + +#. Set the DISK and DISKP variables again:: + + DISK=/dev/mmcblk0 # microSD card + DISKP=${DISK}p # microSD card + + DISK=/dev/sdX # USB disk + DISKP=${DISK} # USB disk + + **WARNING:** Device names can change when moving a device to a different + computer or switching the microSD card from a USB reader to a built-in + slot. Double check the device name before continuing. + +#. Install ZFS:: + + apt update + + apt install pv zfs-initramfs + + **Note:** Since this is the first boot, you may get ``Waiting for cache + lock`` because ``unattended-upgrades`` is running in the background. + Wait for it to finish. + +#. Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISKP}2 + + **WARNING:** Encryption has not yet been tested on the Raspberry Pi. + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -O encryption=aes-256-gcm \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISKP}2 + + - LUKS:: + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISKP}2 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + Also, `disabling ACLs apparently breaks umask handling with NFSv4 + `__. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. 
+ - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption defaults to ``aes-256-ccm``, but `the default has + changed upstream + `__ + to ``aes-256-gcm``. `AES-GCM seems to be generally preferred over AES-CCM + `__, + `is faster now + `__, + and `will be even faster in the future + `__. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + +Step 3: System Installation +--------------------------- + +#. Create a filesystem dataset to act as a container:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + +#. Create a filesystem dataset for the root filesystem:: + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + + zfs create -o canmount=noauto -o mountpoint=/ \ + -o com.ubuntu.zsys:bootfs=yes \ + -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID + zfs mount rpool/ROOT/ubuntu_$UUID + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. 
Create datasets:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/srv + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/usr + zfs create rpool/ROOT/ubuntu_$UUID/usr/local + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/var + zfs create rpool/ROOT/ubuntu_$UUID/var/games + zfs create rpool/ROOT/ubuntu_$UUID/var/lib + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager + zfs create rpool/ROOT/ubuntu_$UUID/var/log + zfs create rpool/ROOT/ubuntu_$UUID/var/mail + zfs create rpool/ROOT/ubuntu_$UUID/var/snap + zfs create rpool/ROOT/ubuntu_$UUID/var/spool + zfs create rpool/ROOT/ubuntu_$UUID/var/www + + zfs create -o canmount=off -o mountpoint=/ \ + rpool/USERDATA + zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \ + -o canmount=on -o mountpoint=/root \ + rpool/USERDATA/root_$UUID + + If you want a separate dataset for ``/tmp``:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + +#. Optional: Ignore synchronous requests: + + microSD cards are relatively slow. If you want to increase performance + (especially when installing packages) at the cost of some safety, you can + disable flushing of synchronous requests (e.g. ``fsync()``, ``O_[D]SYNC``): + + Choose one of the following options: + + - For the root filesystem, but not user data:: + + zfs set sync=disabled rpool/ROOT + + - For everything:: + + zfs set sync=disabled rpool + + ZFS is transactional, so it will still be crash consistent. However, you + should leave ``sync`` at its default of ``standard`` if this system needs + to guarantee persistence (e.g. if it is a database or NFS server). + +#. Copy the system into the ZFS filesystems:: + + (cd /; tar -cf - --one-file-system --warning=no-file-ignored .) | \ + pv -p -bs $(du -sxm --apparent-size / | cut -f1)m | \ + (cd /mnt ; tar -x) + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Stop ``zed``:: + + systemctl stop zed + +#. Bind the virtual filesystems from the running environment to the new + ZFS environment and ``chroot`` into it:: + + mount --make-private --rbind /boot/firmware /mnt/boot/firmware + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /run /mnt/run + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login + +#. 
Configure a basic system environment:: + + apt update + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales + dpkg-reconfigure tzdata + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + # cryptsetup is already installed, but this marks it as manually + # installed so it is not automatically removed. + apt install --yes cryptsetup + + echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \ + luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + +#. Optional: Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Setup system groups:: + + addgroup --system lpadmin + addgroup --system sambashare + +#. Patch a dependency loop: + + For ZFS native encryption or LUKS:: + + apt install --yes curl patch + + curl https://launchpadlibrarian.net/478315221/2150-fix-systemd-dependency-loops.patch | \ + sed "s|/etc|/lib|;s|\.in$||" | (cd / ; patch -p1) + + Ignore the failure in Hunk #2 (say ``n`` twice). + + This patch is from `Bug #1875577 Encrypted swap won't load on 20.04 with + zfs root + `__. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. + + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/rpool + ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d + zed -F & + + Force a cache update:: + + zfs set canmount=noauto rpool/ROOT/ubuntu_$UUID + + Verify that ``zed`` updated the cache by making sure this is not empty, + which will take a few seconds:: + + cat /etc/zfs/zfs-list.cache/rpool + + Stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +#. Remove old filesystem from ``/etc/fstab``:: + + vi /etc/fstab + # Remove the old root filesystem line: + # LABEL=writable / ext4 ... + +#. Configure kernel command line:: + + cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak + sed -i "s|root=LABEL=writable rootfstype=ext4|root=ZFS=rpool/ROOT/ubuntu_$UUID|" \ + /boot/firmware/cmdline.txt + sed -i "s| fixrtc||" /boot/firmware/cmdline.txt + sed -i "s|$| init_on_alloc=0|" /boot/firmware/cmdline.txt + + The ``fixrtc`` script is not compatible with ZFS and will cause the boot + to hang for 180 seconds. + + The ``init_on_alloc=0`` is to address `performance regressions + `__. + +#. Optional (but highly recommended): Make debugging booting easier:: + + sed -i "s|$| nosplash|" /boot/firmware/cmdline.txt + +#. Reboot:: + + exit + reboot + + Wait for the newly installed system to boot normally. Login as ``ubuntu``. + +Step 5: First Boot +------------------ + +#. Become root:: + + sudo -i + +#. Set the DISK variable again:: + + DISK=/dev/mmcblk0 # microSD card + + DISK=/dev/sdX # USB disk + +#. 
Delete the ext4 partition and expand the ZFS partition:: + + sfdisk $DISK --delete 3 + echo ", +" | sfdisk --no-reread -N 2 $DISK + + **Note:** This does not automatically expand the pool. That will be happen + on reboot. + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}') + zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \ + -o canmount=on -o mountpoint=/home/$username \ + rpool/USERDATA/${username}_$UUID + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username + +#. Reboot:: + + reboot + + Wait for the system to boot normally. Login using the account you + created. + +#. Become root:: + + sudo -i + +#. Expand the ZFS pool: + + Verify the pool expanded:: + + zfs list rpool + + If it did not automatically expand, try to expand it manually:: + + DISK=/dev/mmcblk0 # microSD card + DISKP=${DISK}p # microSD card + + DISK=/dev/sdX # USB disk + DISKP=${DISK} # USB disk + + zpool online -e rpool ${DISKP}2 + +#. Delete the ``ubuntu`` user:: + + deluser --remove-home ubuntu + +Step 6: Full Software Installation +---------------------------------- + +#. Optional: Remove cloud-init:: + + vi /etc/netplan/01-netcfg.yaml + + .. code-block:: yaml + + network: + version: 2 + ethernets: + eth0: + dhcp4: true + + :: + + rm /etc/netplan/50-cloud-init.yaml + apt purge --autoremove ^cloud-init + rm -rf /etc/cloud + +#. Optional: Remove other storage packages:: + + apt purge --autoremove bcache-tools btrfs-progs cloud-guest-utils lvm2 \ + mdadm multipath-tools open-iscsi overlayroot xfsprogs + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Optional: Install a full GUI environment:: + + apt install --yes ubuntu-desktop + echo dtoverlay=vc4-fkms-v3d >> /boot/firmware/usercfg.txt + + **Hint**: If you are installing a full GUI environment, you will likely + want to remove cloud-init as discussed above but manage your network with + NetworkManager:: + + rm /etc/netplan/*.yaml + vi /etc/netplan/01-network-manager-all.yaml + + .. code-block:: yaml + + network: + version: 2 + renderer: NetworkManager + +#. Optional (but recommended): Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 7: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). 
It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). diff --git a/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst.txt b/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst.txt new file mode 100644 index 000000000..0eab863d2 --- /dev/null +++ b/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst.txt @@ -0,0 +1,1307 @@ +.. highlight:: sh + +Ubuntu 20.04 Root on ZFS +======================== + +.. contents:: Table of Contents + :local: + +Newer release available +----------------------- + +- See :doc:`Ubuntu 22.04 Root on ZFS <./Ubuntu 22.04 Root on ZFS>` for new + installs. This guide is no longer receiving most updates. It continues + to exist for reference for existing installs that followed it. + +Errata +------ + +If you previously installed using this guide, please apply these fixes if +applicable: + +/boot/grub Not Mounted +~~~~~~~~~~~~~~~~~~~~~~ + +| **Severity:** Normal (previously Grave) +| **Fixed:** 2020-12-05 (previously 2020-05-30) + +For a mirror or raidz topology, ``/boot/grub`` is on a separate dataset. This +was originally ``bpool/grub``, then changed on 2020-05-30 to +``bpool/BOOT/ubuntu_UUID/grub`` to work-around zsys setting ``canmount=off`` +which would result in ``/boot/grub`` not mounting. This work-around lead to +`issues with snapshot restores +`__. The underlying `zsys +issue `__ was fixed and backported +to 20.04, so it is now back to being ``bpool/grub``. + +* If you never applied the 2020-05-30 errata fix, then ``/boot/grub`` is + probably not mounting. Check that:: + + mount | grep /boot/grub + + If it is mounted, everything is fine. Stop. Otherwise:: + + zfs set canmount=on bpool/grub + update-initramfs -c -k all + update-grub + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=ubuntu --recheck --no-floppy + + Run this for the additional disk(s), incrementing the “2” to “3” and so on + for both ``/boot/efi2`` and ``ubuntu-2``:: + + cp -a /boot/efi/EFI /boot/efi2 + grub-install --target=x86_64-efi --efi-directory=/boot/efi2 \ + --bootloader-id=ubuntu-2 --recheck --no-floppy + + Check that these have ``set prefix=($root)'/grub@'``:: + + grep prefix= \ + /boot/efi/EFI/ubuntu/grub.cfg \ + /boot/efi2/EFI/ubuntu-2/grub.cfg + +* If you applied the 2020-05-30 errata fix, then you should revert the dataset + rename:: + + umount /boot/grub + zfs rename bpool/BOOT/ubuntu_UUID/grub bpool/grub + zfs set com.ubuntu.zsys:bootfs=no bpool/grub + zfs mount bpool/grub + +AccountsService Not Mounted +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +| **Severity:** Normal +| **Fixed:** 2020-05-28 + +The HOWTO previously had a typo in AccountsService (where Accounts is plural) +as AccountServices (where Services is plural). This means that AccountsService +data will be written to the root filesystem. This is only harmful in the event +of a rollback of the root filesystem that does not include a rollback of the +user data. Check it:: + + zfs list | grep Account + +If the “s” is on “Accounts”, you are good. 
If it is on “Services”, fix it:: + + mv /var/lib/AccountsService /var/lib/AccountsService-old + zfs list -r rpool + # Replace the UUID twice below: + zfs rename rpool/ROOT/ubuntu_UUID/var/lib/AccountServices \ + rpool/ROOT/ubuntu_UUID/var/lib/AccountsService + mv /var/lib/AccountsService-old/* /var/lib/AccountsService + rmdir /var/lib/AccountsService-old + +Overview +-------- + +Ubuntu Installer +~~~~~~~~~~~~~~~~ + +The Ubuntu installer has `support for root-on-ZFS +`__. +This HOWTO produces nearly identical results as the Ubuntu installer because of +`bidirectional collaboration +`__. + +If you want a single-disk, unencrypted, desktop install, use the installer. It +is far easier and faster than doing everything by hand. + +If you want a ZFS native encrypted, desktop install, you can `trivially edit +the installer +`__. +The ``-O recordsize=1M`` there is unrelated to encryption; omit that unless +you understand it. Make sure to use a password that is at least 8 characters +or this hack will crash the installer. Additionally, once the system is +installed, you should switch to encrypted swap:: + + swapon -v + # Note the device, including the partition. + + ls -l /dev/disk/by-id/ + # Find the by-id name of the disk. + + sudo swapoff -a + sudo vi /etc/fstab + # Remove the swap entry. + + sudo apt install --yes cryptsetup + + # Replace DISK-partN as appropriate from above: + echo swap /dev/disk/by-id/DISK-partN /dev/urandom \ + swap,cipher=aes-xts-plain64:sha256,size=512 | sudo tee -a /etc/crypttab + echo /dev/mapper/swap none swap defaults 0 0 | sudo tee -a /etc/fstab + +`Hopefully the installer will gain encryption support in +the future +`__. + +If you want to setup a mirror or raidz topology, use LUKS encryption, and/or +install a server (no desktop GUI), use this HOWTO. + +Raspberry Pi +~~~~~~~~~~~~ + +If you are looking to install on a Raspberry Pi, see +:doc:`Ubuntu 20.04 Root on ZFS for Raspberry Pi`. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `Ubuntu 20.04.4 (“Focal”) Desktop CD + `__ + (*not* any server images) +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. 
+ +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to the + Internet as appropriate (e.g. join your WiFi network). Open a terminal + (press Ctrl-Alt-T). + +#. Setup and update the repositories:: + + sudo apt update + +#. Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + passwd + # There is no current password. + sudo apt install --yes openssh-server vim + + Installing the full ``vim`` package fixes terminal problems that occur when + using the ``vim-tiny`` package (that ships in the Live CD environment) over + SSH. + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh ubuntu@IP``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk zfsutils-linux + + systemctl stop zed + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + - For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. + - When choosing a boot pool size, consider how you will use the space. A + kernel and initrd may consume around 100M. If you have multiple kernels + and take snapshots, you may find yourself low on boot pool space, + especially if you need to regenerate your initramfs images, which may be + around 85M each. 
     Size your boot pool appropriately for your needs.

#. If you are re-using a disk, clear it as necessary:

   Ensure swap partitions are not in use::

     swapoff --all

   If the disk was previously used in an MD array::

     apt install --yes mdadm

     # See if one or more MD arrays are active:
     cat /proc/mdstat
     # If so, stop them (replace ``md0`` as required):
     mdadm --stop /dev/md0

     # For an array using the whole disk:
     mdadm --zero-superblock --force $DISK
     # For an array using a partition (e.g. a swap partition per this HOWTO):
     mdadm --zero-superblock --force ${DISK}-part2

   Clear the partition table::

     sgdisk --zap-all $DISK

   If you get a message about the kernel still using the old partition table,
   reboot and start over (except that you can skip this step).

#. Create bootloader partition(s)::

     sgdisk -n1:1M:+512M -t1:EF00 $DISK

     # For legacy (BIOS) booting:
     sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK

   **Note:** While the Ubuntu installer uses an MBR label for legacy (BIOS)
   booting, this HOWTO uses GPT partition labels for both UEFI and legacy
   (BIOS) booting. This is simpler than having two options. It also provides
   forward compatibility (future proofing). In other words, for legacy (BIOS)
   booting, this will allow you to move the disk(s) to a new
   system/motherboard in the future without having to rebuild the pool (and
   restore your data from a backup). The ESP is created in both cases for
   similar reasons. Additionally, the ESP is used for ``/boot/grub`` in
   single-disk installs, as :ref:`discussed below `.

#. Create a partition for swap:

   Previous versions of this HOWTO put swap on a zvol. `Ubuntu recommends
   against this configuration due to deadlocks. `__ There is `a bug report
   upstream `__.

   Putting swap on a partition gives up the benefit of ZFS checksums (for your
   swap). That is probably the right trade-off given the reports of ZFS
   deadlocks with swap. If you are bothered by this, simply do not enable
   swap.

   Choose one of the following options if you want swap:

   - For a single-disk install::

       sgdisk -n2:0:+500M -t2:8200 $DISK

   - For a mirror or raidz topology::

       sgdisk -n2:0:+500M -t2:FD00 $DISK

   Adjust the swap size to your needs. If you wish to enable hibernation
   (which only works for unencrypted installs), the swap partition must be
   at least as large as the system's RAM.

#. Create a boot pool partition::

     sgdisk -n3:0:+2G -t3:BE00 $DISK

   The Ubuntu installer uses 5% of the disk space constrained to a minimum of
   500 MiB and a maximum of 2 GiB. `Making this too small (and 500 MiB might
   be too small) can result in an inability to upgrade the kernel. `__

#. Create a root pool partition:

   Choose one of the following options:

   - Unencrypted or ZFS native encryption::

       sgdisk -n4:0:0 -t4:BF00 $DISK

   - LUKS::

       sgdisk -n4:0:0 -t4:8309 $DISK

   If you are creating a mirror or raidz topology, repeat the partitioning
   commands for all the disks which will be part of the pool.

#.
Create the boot pool:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 -o autotrim=on -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The boot pool name is no longer arbitrary. It _must_ be ``bpool``. + If you really want to rename it, edit ``/etc/grub.d/10_linux_zfs`` later, + after GRUB is installed (and run ``update-grub``). + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - The ``spacemap_v2`` feature has been tested and is safe to use. The boot + pool is small, so this does not matter in practice. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. 
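Optional: Review the enabled boot pool features:
+
+   This is purely informational and can be skipped. If you would like to
+   confirm which features the command above enabled, one way to list them
+   is::
+
+      zpool get all bpool | grep feature@
+
+#. 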
Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 -o autotrim=on \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 -o autotrim=on \ + -O encryption=aes-256-gcm \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 -o autotrim=on \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + Also, `disabling ACLs apparently breaks umask handling with NFSv4 + `__. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. 
+ - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption defaults to ``aes-256-ccm``, but `the default has + changed upstream + `__ + to ``aes-256-gcm``. `AES-GCM seems to be generally preferred over AES-CCM + `__, + `is faster now + `__, + and `will be even faster in the future + `__. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + +#. Create filesystem datasets for the root and boot filesystems:: + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + + zfs create -o mountpoint=/ \ + -o com.ubuntu.zsys:bootfs=yes \ + -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID + + zfs create -o mountpoint=/boot bpool/BOOT/ubuntu_$UUID + +#. 
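Optional: Verify the new datasets:
+
+   At this point, the root dataset should be mounted at ``/mnt`` and the
+   boot dataset at ``/mnt/boot``. One way to check (informational only)::
+
+      zfs list -o name,mountpoint,canmount,mounted -r bpool rpool
+
+#. 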
Create datasets:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/srv + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/usr + zfs create rpool/ROOT/ubuntu_$UUID/usr/local + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/var + zfs create rpool/ROOT/ubuntu_$UUID/var/games + zfs create rpool/ROOT/ubuntu_$UUID/var/lib + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager + zfs create rpool/ROOT/ubuntu_$UUID/var/log + zfs create rpool/ROOT/ubuntu_$UUID/var/mail + zfs create rpool/ROOT/ubuntu_$UUID/var/snap + zfs create rpool/ROOT/ubuntu_$UUID/var/spool + zfs create rpool/ROOT/ubuntu_$UUID/var/www + + zfs create -o canmount=off -o mountpoint=/ \ + rpool/USERDATA + zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \ + -o canmount=on -o mountpoint=/root \ + rpool/USERDATA/root_$UUID + chmod 700 /mnt/root + + For a mirror or raidz topology, create a dataset for ``/boot/grub``:: + + zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub + + Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + +#. Install the minimal system:: + + debootstrap focal /mnt + + The ``debootstrap`` command leaves the new system in an unconfigured state. + An alternative to using ``debootstrap`` is to copy the entirety of a + working system into the new ZFS root. + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Configure the network interface: + + Find the interface name:: + + ip addr show + + Adjust ``NAME`` below to match your interface name:: + + vi /mnt/etc/netplan/01-netcfg.yaml + + .. code-block:: yaml + + network: + version: 2 + ethernets: + NAME: + dhcp4: true + + Customize this file if the system is not a DHCP client. + +#. Configure the package sources:: + + vi /mnt/etc/apt/sources.list + + .. 
code-block:: sourceslist + + deb http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu focal-backports main restricted universe multiverse + deb http://security.ubuntu.com/ubuntu focal-security main restricted universe multiverse + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + apt update + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales tzdata keyboard-configuration console-setup + + Install your preferred text editor:: + + apt install --yes nano + + apt install --yes vim + + Installing the full ``vim`` package fixes terminal problems that occur when + using the ``vim-tiny`` package (that is installed by ``debootstrap``) over + SSH. + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \ + none luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. Create the EFI filesystem: + + Perform these steps for both UEFI and legacy (BIOS) booting:: + + apt install --yes dosfstools + + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part1 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part1) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + + For a mirror or raidz topology, repeat the `mkdosfs` for the additional + disks, but do not repeat the other commands. + + **Note:** The ``-s 1`` for ``mkdosfs`` is only necessary for drives which + present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster + size (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + +#. Put ``/boot/grub`` on the EFI System Partition: + + .. _boot-grub-esp: + + For a single-disk install only:: + + mkdir /boot/efi/grub /boot/grub + echo /boot/efi/grub /boot/grub none defaults,bind 0 0 >> /etc/fstab + mount /boot/grub + + This allows GRUB to write to ``/boot/grub`` (since it is on a FAT-formatted + ESP instead of on ZFS), which means that ``/boot/grub/grubenv`` and the + ``recordfail`` feature works as expected: if the boot fails, the normally + hidden GRUB menu will be shown on the next boot. For a mirror or raidz + topology, we do not want GRUB writing to the EFI System Partition. This is + because we duplicate it at install without a mechanism to update the copies + when the GRUB configuration changes (e.g. as the kernel is upgraded). Thus, + we keep ``/boot/grub`` on the boot pool for the mirror or raidz topologies. + This preserves correct mirroring/raidz behavior, at the expense of being + able to write to ``/boot/grub/grubenv`` and thus the ``recordfail`` + behavior. + +#. 
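Optional: Verify the ``/boot/efi`` and ``/boot/grub`` mounts:
+
+   Before installing packages in the next step, you can confirm that the
+   ESP is mounted at ``/boot/efi`` and that ``/boot/grub`` is in place
+   (informational only)::
+
+      findmnt /boot/efi
+      findmnt /boot/grub
+
+#. 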
Install GRUB/Linux/ZFS in the chroot environment for the new system: + + Choose one of the following options: + + - Install GRUB/Linux/ZFS for legacy (BIOS) booting:: + + apt install --yes grub-pc linux-image-generic zfs-initramfs zsys + + Select (using the space bar) all of the disks (not partitions) in your + pool. + + - Install GRUB/Linux/ZFS for UEFI booting:: + + apt install --yes \ + grub-efi-amd64 grub-efi-amd64-signed linux-image-generic \ + shim-signed zfs-initramfs zsys + + **Notes:** + + - Ignore any error messages saying ``ERROR: Couldn't resolve device`` and + ``WARNING: Couldn't determine root device``. `cryptsetup does not + support ZFS + `__. + + - Ignore any error messages saying ``Module zfs not found`` and + ``couldn't connect to zsys daemon``. The first seems to occur due to a + version mismatch between the Live CD kernel and the chroot environment, + but this is irrelevant since the module is already loaded. The second + may be caused by the first but either way is irrelevant since ``zed`` + is started manually later. + + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. For some reason, + grub-efi-amd64 does not prompt for ``install_devices`` here, but does + after a reboot. + +#. Optional: Remove os-prober:: + + apt purge --yes os-prober + + This avoids error messages from ``update-grub``. ``os-prober`` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Configure swap: + + Choose one of the following options if you want swap: + + - For an unencrypted single-disk install:: + + mkswap -f ${DISK}-part2 + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \ + none swap discard 0 0 >> /etc/fstab + swapon -a + + - For an unencrypted mirror or raidz topology:: + + apt install --yes mdadm + + # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and + # raid-devices if necessary and specify the actual devices. + mdadm --create /dev/md0 --metadata=1.2 --level=mirror \ + --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2 + mkswap -f /dev/md0 + echo /dev/disk/by-uuid/$(blkid -s UUID -o value /dev/md0) \ + none swap discard 0 0 >> /etc/fstab + + - For an encrypted (LUKS or ZFS native encryption) single-disk install:: + + apt install --yes cryptsetup + + echo swap ${DISK}-part2 /dev/urandom \ + swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab + echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab + + - For an encrypted (LUKS or ZFS native encryption) mirror or raidz + topology:: + + apt install --yes cryptsetup mdadm + + # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and + # raid-devices if necessary and specify the actual devices. + mdadm --create /dev/md0 --metadata=1.2 --level=mirror \ + --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2 + echo swap /dev/md0 /dev/urandom \ + swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab + echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Setup system groups:: + + addgroup --system lpadmin + addgroup --system lxd + addgroup --system sambashare + +#. 
Patch a dependency loop: + + For ZFS native encryption or LUKS:: + + apt install --yes curl patch + + curl https://launchpadlibrarian.net/478315221/2150-fix-systemd-dependency-loops.patch | \ + sed "s|/etc|/lib|;s|\.in$||" | (cd / ; patch -p1) + + Ignore the failure in Hunk #2 (say ``n`` twice). + + This patch is from `Bug #1875577 Encrypted swap won't load on 20.04 with + zfs root + `__. + +#. Optional: Install SSH:: + + apt install --yes openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +Step 5: GRUB Installation +------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +#. Refresh the initrd files:: + + update-initramfs -c -k all + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup + does not support ZFS + `__. + +#. Disable memory zeroing:: + + vi /etc/default/grub + # Add init_on_alloc=0 to: GRUB_CMDLINE_LINUX_DEFAULT + # Save and quit (or see the next step). + + This is to address `performance regressions + `__. + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Comment out: GRUB_TIMEOUT_STYLE=hidden + # Set: GRUB_TIMEOUT=5 + # Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5 + # Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Install the boot loader: + + Choose one of the following options: + + - For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the + ``grub-install`` command for each disk in the pool. + + - For UEFI booting, install GRUB to the ESP:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=ubuntu --recheck --no-floppy + +#. Disable grub-initrd-fallback.service + + For a mirror or raidz topology:: + + systemctl mask grub-initrd-fallback.service + + This is the service for ``/boot/grub/grubenv`` which does not work on + mirrored or raidz topologies. Disabling this keeps it from blocking + subsequent mounts of ``/boot/grub`` if that mount ever fails. + + Another option would be to set ``RequiresMountsFor=/boot/grub`` via a + drop-in unit, but that is more work to do here for no reason. Hopefully + `this bug `__ + will be fixed upstream. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. 
+ + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/ubuntu_$UUID + zfs set canmount=on rpool/ROOT/ubuntu_$UUID + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Once the files have data, stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +#. Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +Step 6: First Boot +------------------ + +#. Install GRUB to additional disks: + + For a UEFI mirror or raidz topology only:: + + dpkg-reconfigure grub-efi-amd64 + + Select (using the space bar) all of the ESP partitions (partition 1 on + each of the pool disks). + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}') + zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \ + -o canmount=on -o mountpoint=/home/$username \ + rpool/USERDATA/${username}_$UUID + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username + +Step 7: Full Software Installation +---------------------------------- + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Install a regular set of software: + + Choose one of the following options: + + - Install a command-line environment only:: + + apt install --yes ubuntu-standard + + - Install a full GUI environment:: + + apt install --yes ubuntu-desktop + + **Hint**: If you are installing a full GUI environment, you will likely + want to manage your network with NetworkManager:: + + rm /etc/netplan/01-netcfg.yaml + vi /etc/netplan/01-network-manager-all.yaml + + .. code-block:: yaml + + network: + version: 2 + renderer: NetworkManager + +#. Optional: Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 8: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. 
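Optional: Snapshot the initial installation:
+
+   If you would like a baseline to roll back to or compare against later,
+   you can take recursive snapshots of both pools (the snapshot name is
+   arbitrary)::
+
+      sudo zfs snapshot -r bpool@install
+      sudo zfs snapshot -r rpool@install
+
+#. 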
Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + sudo vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + sudo systemctl restart ssh + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Uncomment: GRUB_TIMEOUT_STYLE=hidden + # Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT + # Comment out: GRUB_TERMINAL=console + # Save and quit. + + sudo update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + # Replace “UUID” as appropriate; use zfs list to find it: + zfs mount rpool/ROOT/ubuntu_UUID + zfs mount bpool/BOOT/ubuntu_UUID + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + chroot /mnt /bin/bash --login + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. 
``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.ms.fd:/usr/share/OVMF/OVMF_VARS.ms.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. diff --git a/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.rst.txt b/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.rst.txt new file mode 100644 index 000000000..3d4d523ca --- /dev/null +++ b/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.rst.txt @@ -0,0 +1,894 @@ +.. highlight:: sh + +Ubuntu 22.04 Root on ZFS for Raspberry Pi +========================================= + +.. contents:: Table of Contents + :local: + +Overview +-------- + +.. note:: + These are beta instructions. The author still needs to test them. + Additionally, it may be possible to use U-Boot now, which would eliminate + some of the customizations. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- A Raspberry Pi 4 B. (If you are looking to install on a regular PC, see + :doc:`Ubuntu 22.04 Root on ZFS`.) +- `Ubuntu Server 22.04 (“Jammy”) for Raspberry Pi 4 + `__ +- A microSD card or USB disk. For microSD card recommendations, see Jeff + Geerling's `performance comparison + `__. + When using a USB enclosure, `ensure it supports UASP + `__. +- An Ubuntu system (with the ability to write to the microSD card or USB disk) + other than the target Raspberry Pi. + +4 GiB of memory is recommended. Do not use deduplication, as it needs `massive +amounts of RAM `__. +Enabling deduplication is a permanent change that cannot be easily reverted. + +A Raspberry Pi 3 B/B+ would probably work (as the Pi 3 is 64-bit, though it +has less RAM), but has not been tested. Please report your results (good or +bad) using the issue link below. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +**WARNING:** Encryption has not yet been tested on the Raspberry Pi. + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. 
With any option, all ZFS features are fully
+available.
+
+Unencrypted does not encrypt anything, of course. With no encryption
+happening, this option naturally has the best performance.
+
+ZFS native encryption encrypts the data and most metadata in the root
+pool. It does not encrypt dataset or snapshot names or properties. The
+boot pool is not encrypted at all, but it only contains the bootloader,
+kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the
+initrd is unlikely to contain sensitive data.) The system cannot boot
+without the passphrase being entered at the console. Performance is
+good. As the encryption happens in ZFS, even if multiple disks (mirror
+or raidz topologies) are used, the data only has to be encrypted once.
+
+LUKS encrypts almost everything. The only unencrypted data is the bootloader,
+kernel, and initrd. The system cannot boot without the passphrase being
+entered at the console. Performance is good, but LUKS sits underneath ZFS, so
+if multiple disks (mirror or raidz topologies) are used, the data has to be
+encrypted once per disk.
+
+USB Disks
+~~~~~~~~~
+
+The Raspberry Pi 4 runs much faster using a USB Solid State Drive (SSD) than
+a microSD card. These instructions can also be used to install Ubuntu on a
+USB-connected SSD or other USB disk. USB disks have three requirements that
+do not apply to microSD cards:
+
+#. The Raspberry Pi's Bootloader EEPROM must be dated 2020-09-03 or later.
+
+   To check the bootloader version, power up the Raspberry Pi without an SD
+   card inserted or a USB boot device attached; the date will be on the
+   ``bootloader`` line. (If you do not see the ``bootloader`` line, the
+   bootloader is too old.) Alternatively, run ``sudo rpi-eeprom-update``
+   on an existing OS on the Raspberry Pi (which on Ubuntu requires
+   ``apt install rpi-eeprom``).
+
+   If needed, the bootloader can be updated from an existing OS on the
+   Raspberry Pi using ``rpi-eeprom-update -a`` and rebooting.
+   For other options, see `Updating the Bootloader
+   `_.
+
+#. The Raspberry Pi must be configured for USB boot. The bootloader will show a
+   ``boot`` line; if ``order`` includes ``4``, USB boot is enabled.
+
+   If not already enabled, it can be enabled from an existing OS on the
+   Raspberry Pi using ``rpi-eeprom-config -e``: set ``BOOT_ORDER=0xf41``
+   and reboot to apply the change. On subsequent reboots, USB boot will be
+   enabled.
+
+   Otherwise, it can be enabled without an existing OS as follows:
+
+   - Download the `Raspberry Pi Imager Utility
+     `_.
+   - Flash the ``USB Boot`` image to a microSD card. The ``USB Boot`` image is
+     listed under ``Bootloader`` in the ``Misc utility images`` folder.
+   - Boot the Raspberry Pi from the microSD card. USB Boot should be enabled
+     automatically.
+
+#. U-Boot on Ubuntu 20.04 does not seem to support the Raspberry Pi USB.
+   `Ubuntu 20.10 may work
+   `_. As a
+   work-around, the Raspberry Pi bootloader is configured to directly boot
+   Linux. For this to work, the Linux kernel must not be compressed. These
+   instructions decompress the kernel and add a script to
+   ``/etc/kernel/postinst.d`` to handle kernel upgrades.
+
+Step 1: Disk Formatting
+-----------------------
+
+The commands in this step are run on the system other than the Raspberry Pi.
+
+This guide has you go to some extra work so that the stock ext4 partition can
+be deleted.
+
+#. 
Download and unpack the official image:: + + curl -O https://cdimage.ubuntu.com/releases/22.04/release/ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz + xz -d ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz + + # or combine them to decompress as you download: + curl https://cdimage.ubuntu.com/releases/22.04/release/ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz | \ + xz -d > ubuntu-22.04.1-preinstalled-server-arm64+raspi.img + +#. Dump the partition table for the image:: + + sfdisk -d ubuntu-22.04.1-preinstalled-server-arm64+raspi.img + + That will output this:: + + label: dos + label-id: 0x638274e3 + device: ubuntu-22.04.1-preinstalled-server-arm64+raspi.img + unit: sectors + + .img1 : start= 2048, size= 524288, type=c, bootable + .img2 : start= 526336, size= 7193932, type=83 + + The important numbers are 524288 and 7193932. Store those in variables:: + + BOOT=524288 + ROOT=7193932 + +#. Create a partition script:: + + cat > partitions << EOF + label: dos + unit: sectors + + 1 : start= 2048, size=$BOOT, type=c, bootable + 2 : start=$((2048+BOOT)), size=$ROOT, type=83 + 3 : start=$((2048+BOOT+ROOT)), size=$ROOT, type=83 + EOF + +#. Connect the disk: + + Connect the disk to a machine other than the target Raspberry Pi. If any + filesystems are automatically mounted (e.g. by GNOME) unmount them. + Determine the device name. For SD, the device name is almost certainly + ``/dev/mmcblk0``. For USB SSDs, the device name is ``/dev/sdX``, where + ``X`` is a lowercase letter. ``lsblk`` can help determine the device name. + Set the ``DISK`` environment variable to the device name:: + + DISK=/dev/mmcblk0 # microSD card + DISK=/dev/sdX # USB disk + + Because partitions are named differently for ``/dev/mmcblk0`` and ``/dev/sdX`` + devices, set a second variable used when working with partitions:: + + export DISKP=${DISK}p # microSD card + export DISKP=${DISK} # USB disk ($DISKP == $DISK for /dev/sdX devices) + + **Hint**: microSD cards connected using a USB reader also have ``/dev/sdX`` + names. + + **WARNING**: The following steps destroy the existing data on the disk. Ensure + ``DISK`` and ``DISKP`` are correct before proceeding. + +#. Ensure swap partitions are not in use:: + + swapon -v + # If a partition is in use from the disk, disable it: + sudo swapoff THAT_PARTITION + +#. Clear old ZFS labels:: + + sudo zpool labelclear -f ${DISK} + + If a ZFS label still exists from a previous system/attempt, expanding the + pool will result in an unbootable system. + + **Hint:** If you do not already have the ZFS utilities installed, you can + install them with: ``sudo apt install zfsutils-linux`` Alternatively, you + can zero the entire disk with: + ``sudo dd if=/dev/zero of=${DISK} bs=1M status=progress`` + +#. Delete existing partitions:: + + echo "label: dos" | sudo sfdisk ${DISK} + sudo partprobe + ls ${DISKP}* + + Make sure there are no partitions, just the file for the disk itself. This + step is not strictly necessary; it exists to catch problems. + +#. Create the partitions:: + + sudo sfdisk $DISK < partitions + +#. Loopback mount the image:: + + IMG=$(sudo losetup -fP --show \ + ubuntu-22.04.1-preinstalled-server-arm64+raspi.img) + +#. Copy the bootloader data:: + + sudo dd if=${IMG}p1 of=${DISKP}1 bs=1M + +#. Clear old label(s) from partition 2:: + + sudo wipefs -a ${DISKP}2 + + If a filesystem with the ``writable`` label from the Ubuntu image is still + present in partition 2, the system will not boot initially. + +#. 
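Optional: Confirm partition 2 is clean:
+
+   After the ``wipefs`` above, this should print nothing for partition 2
+   (informational only)::
+
+      sudo blkid ${DISKP}2
+
+#. 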
Copy the root filesystem data:: + + # NOTE: the destination is p3, not p2. + sudo dd if=${IMG}p2 of=${DISKP}3 bs=1M status=progress conv=fsync + +#. Unmount the image:: + + sudo losetup -d $IMG + +#. If setting up a USB disk: + + Decompress the kernel:: + + sudo -sE + + MNT=$(mktemp -d /mnt/XXXXXXXX) + mkdir -p $MNT/boot $MNT/root + mount ${DISKP}1 $MNT/boot + mount ${DISKP}3 $MNT/root + + zcat -qf $MNT/boot/vmlinuz >$MNT/boot/vmlinux + + Modify boot config:: + + cat >> $MNT/boot/usercfg.txt << EOF + kernel=vmlinux + initramfs initrd.img followkernel + boot_delay + EOF + + Create a script to automatically decompress the kernel after an upgrade:: + + cat >$MNT/root/etc/kernel/postinst.d/zz-decompress-kernel << 'EOF' + #!/bin/sh + + set -eu + + echo "Updating decompressed kernel..." + [ -e /boot/firmware/vmlinux ] && \ + cp /boot/firmware/vmlinux /boot/firmware/vmlinux.bak + vmlinuxtmp=$(mktemp /boot/firmware/vmlinux.XXXXXXXX) + zcat -qf /boot/vmlinuz > "$vmlinuxtmp" + mv "$vmlinuxtmp" /boot/firmware/vmlinux + EOF + + chmod +x $MNT/root/etc/kernel/postinst.d/zz-decompress-kernel + + Cleanup:: + + umount $MNT/* + rm -rf $MNT + exit + +#. Boot the Raspberry Pi. + + Move the SD/USB disk to the Raspberry Pi. Boot it and login (e.g. via SSH) + with ``ubuntu`` as the username and password. If you are using SSH, note + that it takes a little bit for cloud-init to enable password logins on the + first boot. Set a new password when prompted and login again using that + password. If you have your local SSH configured to use ``ControlPersist``, + you will have to kill the existing SSH process before logging in the second + time. + +Step 2: Setup ZFS +----------------- + +#. Become root:: + + sudo -i + +#. Set the DISK and DISKP variables again:: + + DISK=/dev/mmcblk0 # microSD card + DISKP=${DISK}p # microSD card + + DISK=/dev/sdX # USB disk + DISKP=${DISK} # USB disk + + **WARNING:** Device names can change when moving a device to a different + computer or switching the microSD card from a USB reader to a built-in + slot. Double check the device name before continuing. + +#. Install ZFS:: + + apt update + + apt install pv zfs-initramfs + + **Note:** Since this is the first boot, you may get ``Waiting for cache + lock`` because ``unattended-upgrades`` is running in the background. + Wait for it to finish. + +#. Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISKP}2 + + **WARNING:** Encryption has not yet been tested on the Raspberry Pi. + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -O encryption=on \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISKP}2 + + - LUKS:: + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISKP}2 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. 
Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + Also, `disabling ACLs apparently breaks umask handling with NFSv4 + `__. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + +Step 3: System Installation +--------------------------- + +#. Create a filesystem dataset to act as a container:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + +#. Create a filesystem dataset for the root filesystem:: + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + + zfs create -o canmount=noauto -o mountpoint=/ \ + -o com.ubuntu.zsys:bootfs=yes \ + -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID + zfs mount rpool/ROOT/ubuntu_$UUID + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. 
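Optional: Confirm the root dataset is mounted:
+
+   Because of ``canmount=noauto``, it is worth double-checking that the
+   explicit ``zfs mount`` above worked; ``mounted`` should report ``yes``
+   (informational only)::
+
+      zfs get canmount,mountpoint,mounted rpool/ROOT/ubuntu_$UUID
+
+#. 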
Create datasets:: + + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/usr + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/var + zfs create rpool/ROOT/ubuntu_$UUID/var/lib + zfs create rpool/ROOT/ubuntu_$UUID/var/log + zfs create rpool/ROOT/ubuntu_$UUID/var/spool + + zfs create -o canmount=off -o mountpoint=/ \ + rpool/USERDATA + zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \ + -o canmount=on -o mountpoint=/root \ + rpool/USERDATA/root_$UUID + chmod 700 /mnt/root + + The datasets below are optional, depending on your preferences and/or + software choices. + + If you wish to separate these to exclude them from snapshots:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/cache + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/nfs + zfs create rpool/ROOT/ubuntu_$UUID/var/tmp + chmod 1777 /mnt/var/tmp + + If desired (the Ubuntu installer creates these):: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg + + If you use /srv on this system:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/srv + + If you use /usr/local on this system:: + + zfs create rpool/ROOT/ubuntu_$UUID/usr/local + + If this system will have games installed:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/games + + If this system will have a GUI:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/docker + + If this system will store local email in /var/mail:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/mail + + If this system will use Snap packages:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/snap + + If you use /var/www on this system:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/www + + For a mirror or raidz topology, create a dataset for ``/boot/grub``:: + + zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + + **Note:** If you separate a directory required for booting (e.g. ``/etc``) + into its own dataset, you must add it to + ``ZFS_INITRD_ADDITIONAL_DATASETS`` in ``/etc/default/zfs``. Datasets + with ``canmount=off`` (like ``rpool/usr`` above) do not matter for this. + +#. Optional: Ignore synchronous requests: + + microSD cards are relatively slow. If you want to increase performance + (especially when installing packages) at the cost of some safety, you can + disable flushing of synchronous requests (e.g. 
``fsync()``, ``O_[D]SYNC``): + + Choose one of the following options: + + - For the root filesystem, but not user data:: + + zfs set sync=disabled rpool/ROOT + + - For everything:: + + zfs set sync=disabled rpool + + ZFS is transactional, so it will still be crash consistent. However, you + should leave ``sync`` at its default of ``standard`` if this system needs + to guarantee persistence (e.g. if it is a database or NFS server). + +#. Copy the system into the ZFS filesystems:: + + (cd /; tar -cf - --one-file-system --warning=no-file-ignored .) | \ + pv -p -bs $(du -sxm --apparent-size / | cut -f1)m | \ + (cd /mnt ; tar -x) + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Stop ``zed``:: + + systemctl stop zed + +#. Bind the virtual filesystems from the running environment to the new + ZFS environment and ``chroot`` into it:: + + mount --make-private --rbind /boot/firmware /mnt/boot/firmware + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /run /mnt/run + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login + +#. Configure a basic system environment:: + + apt update + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales + dpkg-reconfigure tzdata + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + # cryptsetup is already installed, but this marks it as manually + # installed so it is not automatically removed. + apt install --yes cryptsetup + + echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \ + luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + +#. Optional: Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Setup system groups:: + + addgroup --system lpadmin + addgroup --system sambashare + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. + + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/rpool + zed -F & + + Force a cache update:: + + zfs set canmount=noauto rpool/ROOT/ubuntu_$UUID + + Verify that ``zed`` updated the cache by making sure this is not empty, + which will take a few seconds:: + + cat /etc/zfs/zfs-list.cache/rpool + + Stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +#. 
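Optional: Verify the cache file:
+
+   After the ``sed`` above, the mountpoints recorded in the cache should
+   begin with ``/`` rather than ``/mnt`` (informational only)::
+
+      cat /etc/zfs/zfs-list.cache/rpool
+
+#. 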
Remove old filesystem from ``/etc/fstab``:: + + vi /etc/fstab + # Remove the old root filesystem line: + # LABEL=writable / ext4 ... + +#. Configure kernel command line:: + + cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak + sed -i "s|root=LABEL=writable rootfstype=ext4|root=ZFS=rpool/ROOT/ubuntu_$UUID|" \ + /boot/firmware/cmdline.txt + sed -i "s| fixrtc||" /boot/firmware/cmdline.txt + sed -i "s|$| init_on_alloc=0|" /boot/firmware/cmdline.txt + + The ``fixrtc`` script is not compatible with ZFS and will cause the boot + to hang for 180 seconds. + + The ``init_on_alloc=0`` is to address `performance regressions + `__. + +#. Optional (but highly recommended): Make debugging booting easier:: + + sed -i "s|$| nosplash|" /boot/firmware/cmdline.txt + +#. Reboot:: + + exit + reboot + + Wait for the newly installed system to boot normally. Login as ``ubuntu``. + +Step 5: First Boot +------------------ + +#. Become root:: + + sudo -i + +#. Set the DISK variable again:: + + DISK=/dev/mmcblk0 # microSD card + + DISK=/dev/sdX # USB disk + +#. Delete the ext4 partition and expand the ZFS partition:: + + sfdisk $DISK --delete 3 + echo ", +" | sfdisk --no-reread -N 2 $DISK + + **Note:** This does not automatically expand the pool. That will be happen + on reboot. + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}') + zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \ + -o canmount=on -o mountpoint=/home/$username \ + rpool/USERDATA/${username}_$UUID + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username + +#. Reboot:: + + reboot + + Wait for the system to boot normally. Login using the account you + created. + +#. Become root:: + + sudo -i + +#. Expand the ZFS pool: + + Verify the pool expanded:: + + zfs list rpool + + If it did not automatically expand, try to expand it manually:: + + DISK=/dev/mmcblk0 # microSD card + DISKP=${DISK}p # microSD card + + DISK=/dev/sdX # USB disk + DISKP=${DISK} # USB disk + + zpool online -e rpool ${DISKP}2 + +#. Delete the ``ubuntu`` user:: + + deluser --remove-home ubuntu + +Step 6: Full Software Installation +---------------------------------- + +#. Optional: Remove cloud-init:: + + vi /etc/netplan/01-netcfg.yaml + + .. code-block:: yaml + + network: + version: 2 + ethernets: + eth0: + dhcp4: true + + :: + + rm /etc/netplan/50-cloud-init.yaml + apt purge --autoremove ^cloud-init + rm -rf /etc/cloud + +#. Optional: Remove other storage packages:: + + apt purge --autoremove bcache-tools btrfs-progs cloud-guest-utils lvm2 \ + mdadm multipath-tools open-iscsi overlayroot xfsprogs + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Optional: Install a full GUI environment:: + + apt install --yes ubuntu-desktop + echo dtoverlay=vc4-fkms-v3d >> /boot/firmware/usercfg.txt + + **Hint**: If you are installing a full GUI environment, you will likely + want to remove cloud-init as discussed above but manage your network with + NetworkManager:: + + rm /etc/netplan/*.yaml + vi /etc/netplan/01-network-manager-all.yaml + + .. code-block:: yaml + + network: + version: 2 + renderer: NetworkManager + +#. 
Optional (but recommended): Disable log compression:
+
+   As ``/var/log`` is already compressed by ZFS, logrotate’s compression is
+   going to burn CPU and disk I/O for (in most cases) very little gain. Also,
+   if you are making snapshots of ``/var/log``, logrotate’s compression will
+   actually waste space, as the uncompressed data will live on in the
+   snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment
+   out ``compress``, or use this loop (copy-and-paste highly recommended)::
+
+      for file in /etc/logrotate.d/* ; do
+        if grep -Eq "(^|[^#y])compress" "$file" ; then
+          sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
+        fi
+      done
+
+#. Reboot::
+
+      reboot
+
+Step 7: Final Cleanup
+---------------------
+
+#. Wait for the system to boot normally. Login using the account you
+   created. Ensure the system (including networking) works normally.
+
+#. Optional: For LUKS installs only, backup the LUKS header::
+
+      sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
+          --header-backup-file luks1-header.dat
+
+   Store that backup somewhere safe (e.g. cloud storage). It is protected by
+   your LUKS passphrase, but you may wish to use additional encryption.
+
+   **Hint:** If you created a mirror or raidz topology, repeat this for each
+   LUKS volume (``luks2``, etc.).
diff --git a/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.rst.txt b/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.rst.txt
new file mode 100644
index 000000000..acd6f9593
--- /dev/null
+++ b/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.rst.txt
@@ -0,0 +1,1229 @@
+.. highlight:: sh
+
+Ubuntu 22.04 Root on ZFS
+========================
+
+.. contents:: Table of Contents
+   :local:
+
+Overview
+--------
+
+Ubuntu Installer
+~~~~~~~~~~~~~~~~
+
+The Ubuntu installer still has ZFS support, but `it was almost removed for
+22.04 `__
+and `it no longer installs zsys
+`__. At
+the moment, this HOWTO still uses zsys, but that will probably be removed
+in the near future.
+
+Raspberry Pi
+~~~~~~~~~~~~
+
+If you are looking to install on a Raspberry Pi, see
+:doc:`Ubuntu 22.04 Root on ZFS for Raspberry Pi`.
+
+Caution
+~~~~~~~
+
+- This HOWTO uses a whole physical disk.
+- Do not use these instructions for dual-booting.
+- Backup your data. Any existing data will be lost.
+
+System Requirements
+~~~~~~~~~~~~~~~~~~~
+
+- `Ubuntu 22.04.1 (“jammy”) Desktop CD
+  `__
+  (*not* any server images)
+- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive)
+  only works with UEFI booting. This is not unique to ZFS. `GRUB does not and
+  will not work on 4Kn with legacy (BIOS) booting.
+  `__
+
+Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory
+is recommended for normal performance in basic workloads. If you wish to use
+deduplication, you will need `massive amounts of RAM
+`__. Enabling
+deduplication is a permanent change that cannot be easily reverted.
+
+Support
+~~~~~~~
+
+If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at
+`#zfsonlinux `__ on `Libera Chat
+`__. If you have a bug report or feature request
+related to this HOWTO, please `file a new issue and mention @rlaager
+`__.
+
+Contributing
+~~~~~~~~~~~~
+
+#. Fork and clone: https://github.com/openzfs/openzfs-docs
+
+#. Install the tools::
+
+      sudo apt install python3-pip
+
+      pip3 install -r docs/requirements.txt
+
+      # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
+      PATH=$HOME/.local/bin:$PATH
+
+#. Make your changes.
+
+#. 
Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the Ubuntu Live CD. From the GRUB boot menu, select *Try or Install Ubuntu*. + On the *Welcome* page, select your preferred language and *Try Ubuntu*. + Connect your system to the Internet as appropriate (e.g. join your WiFi network). + Open a terminal (press Ctrl-Alt-T). + +#. Setup and update the repositories:: + + sudo apt update + +#. Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + passwd + # There is no current password. + sudo apt install --yes openssh-server vim + + Installing the full ``vim`` package fixes terminal problems that occur when + using the ``vim-tiny`` package (that ships in the Live CD environment) over + SSH. + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh ubuntu@IP``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk zfsutils-linux + + systemctl stop zed + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + - For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. 
+ - When choosing a boot pool size, consider how you will use the space. A + kernel and initrd may consume around 100M. If you have multiple kernels + and take snapshots, you may find yourself low on boot pool space, + especially if you need to regenerate your initramfs images, which may be + around 85M each. Size your boot pool appropriately for your needs. + +#. If you are re-using a disk, clear it as necessary: + + Ensure swap partitions are not in use:: + + swapoff --all + + If the disk was previously used in an MD array:: + + apt install --yes mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition (e.g. a swap partition per this HOWTO): + mdadm --zero-superblock --force ${DISK}-part2 + + If the disk was previously used with zfs:: + + wipefs -a $DISK + + For flash-based storage, if the disk was previously used, you may wish to + do a full-disk discard (TRIM/UNMAP), which can improve performance:: + + blkdiscard -f $DISK + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + +#. Create bootloader partition(s):: + + sgdisk -n1:1M:+512M -t1:EF00 $DISK + + # For legacy (BIOS) booting: + sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK + + **Note:** While the Ubuntu installer uses an MBR label for legacy (BIOS) + booting, this HOWTO uses GPT partition labels for both UEFI and legacy + (BIOS) booting. This is simpler than having two options. It is also + provides forward compatibility (future proofing). In other words, for + legacy (BIOS) booting, this will allow you to move the disk(s) to a new + system/motherboard in the future without having to rebuild the pool (and + restore your data from a backup). The ESP is created in both cases for + similar reasons. Additionally, the ESP is used for ``/boot/grub`` in + single-disk installs, as :ref:`discussed below `. + +#. Create a partition for swap: + + Previous versions of this HOWTO put swap on a zvol. `Ubuntu recommends + against this configuration due to deadlocks. + `__ There + is `a bug report upstream + `__. + + Putting swap on a partition gives up the benefit of ZFS checksums (for your + swap). That is probably the right trade-off given the reports of ZFS + deadlocks with swap. If you are bothered by this, simply do not enable + swap. + + Choose one of the following options if you want swap: + + - For a single-disk install:: + + sgdisk -n2:0:+500M -t2:8200 $DISK + + - For a mirror or raidz topology:: + + sgdisk -n2:0:+500M -t2:FD00 $DISK + + Adjust the swap swize to your needs. If you wish to enable hiberation + (which only works for unencrypted installs), the swap partition must be + at least as large as the system's RAM. + +#. Create a boot pool partition:: + + sgdisk -n3:0:+2G -t3:BE00 $DISK + + The Ubuntu installer uses 5% of the disk space constrained to a minimum of + 500 MiB and a maximum of 2 GiB. `Making this too small (and 500 MiB might + be too small) can result in an inability to upgrade the kernel. + `__ + +#. 
Create a root pool partition: + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + If you are creating a mirror or raidz topology, repeat the partitioning + commands for all the disks which will be part of the pool. + +#. Create the boot pool:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -o cachefile=/etc/zfs/zpool.cache \ + -o compatibility=grub2 \ + -o feature@livelist=enabled \ + -o feature@zpool_checkpoint=enabled \ + -O devices=off \ + -O acltype=posixacl -O xattr=sa \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + Ignore the warnings about the features “not in specified 'compatibility' + feature set.” + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The boot pool name is no longer arbitrary. It _must_ be ``bpool``. + If you really want to rename it, edit ``/etc/grub.d/10_linux_zfs`` later, + after GRUB is installed (and run ``update-grub``). + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``device_rebuild`` feature should be safe to use (except on raidz, + which it is incompatible with), but the boot pool is small, so this does + not matter in practice. + - The ``log_spacemap`` and ``spacemap_v2`` features have been tested and + are safe to use. The boot pool is small, so these do not matter in + practice. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. 
Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O encryption=on -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + Also, `disabling ACLs apparently breaks umask handling with NFSv4 + `__. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. 
+ - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + +#. Create filesystem datasets for the root and boot filesystems:: + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + + zfs create -o mountpoint=/ \ + -o com.ubuntu.zsys:bootfs=yes \ + -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID + + zfs create -o mountpoint=/boot bpool/BOOT/ubuntu_$UUID + +#. Create datasets:: + + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/usr + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/var + zfs create rpool/ROOT/ubuntu_$UUID/var/lib + zfs create rpool/ROOT/ubuntu_$UUID/var/log + zfs create rpool/ROOT/ubuntu_$UUID/var/spool + + zfs create -o canmount=off -o mountpoint=/ \ + rpool/USERDATA + zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \ + -o canmount=on -o mountpoint=/root \ + rpool/USERDATA/root_$UUID + chmod 700 /mnt/root + + The datasets below are optional, depending on your preferences and/or + software choices. 
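+
+   Before creating the optional datasets below, you may want to double-check
+   the layout created so far. This check is not part of the original
+   procedure; it only uses read-only ``zfs`` commands::
+
+     # Show every dataset created so far and where it will mount:
+     zfs list -r -o name,canmount,mountpoint bpool rpool
+
+     # Show the zsys marker properties set on the root datasets:
+     zfs get -r -s local com.ubuntu.zsys:bootfs rpool/ROOT
+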
+ + If you wish to separate these to exclude them from snapshots:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/cache + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/nfs + zfs create rpool/ROOT/ubuntu_$UUID/var/tmp + chmod 1777 /mnt/var/tmp + + If desired (the Ubuntu installer creates these):: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg + + If you use /srv on this system:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/srv + + If you use /usr/local on this system:: + + zfs create rpool/ROOT/ubuntu_$UUID/usr/local + + If this system will have games installed:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/games + + If this system will have a GUI:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/docker + + If this system will store local email in /var/mail:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/mail + + If this system will use Snap packages:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/snap + + If you use /var/www on this system:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/www + + For a mirror or raidz topology, create a dataset for ``/boot/grub``:: + + zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + + **Note:** If you separate a directory required for booting (e.g. ``/etc``) + into its own dataset, you must add it to + ``ZFS_INITRD_ADDITIONAL_DATASETS`` in ``/etc/default/zfs``. Datasets + with ``canmount=off`` (like ``rpool/usr`` above) do not matter for this. + +#. Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + +#. Install the minimal system:: + + debootstrap jammy /mnt + + The ``debootstrap`` command leaves the new system in an unconfigured state. + An alternative to using ``debootstrap`` is to copy the entirety of a + working system into the new ZFS root. + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Configure the network interface: + + Find the interface name:: + + ip addr show + + Adjust ``NAME`` below to match your interface name:: + + vi /mnt/etc/netplan/01-netcfg.yaml + + .. 
code-block:: yaml + + network: + version: 2 + ethernets: + NAME: + dhcp4: true + + Customize this file if the system is not a DHCP client. + +#. Configure the package sources:: + + vi /mnt/etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://archive.ubuntu.com/ubuntu jammy main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu jammy-updates main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu jammy-backports main restricted universe multiverse + deb http://security.ubuntu.com/ubuntu jammy-security main restricted universe multiverse + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + apt update + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales tzdata keyboard-configuration console-setup + + Install your preferred text editor:: + + apt install --yes nano + + apt install --yes vim + + Installing the full ``vim`` package fixes terminal problems that occur when + using the ``vim-tiny`` package (that is installed by ``debootstrap``) over + SSH. + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \ + none luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. Create the EFI filesystem: + + Perform these steps for both UEFI and legacy (BIOS) booting:: + + apt install --yes dosfstools + + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part1 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part1) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + + For a mirror or raidz topology, repeat the `mkdosfs` for the additional + disks, but do not repeat the other commands. + + **Note:** The ``-s 1`` for ``mkdosfs`` is only necessary for drives which + present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster + size (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + +#. Put ``/boot/grub`` on the EFI System Partition: + + .. _boot-grub-esp: + + For a single-disk install only:: + + mkdir /boot/efi/grub /boot/grub + echo /boot/efi/grub /boot/grub none defaults,bind 0 0 >> /etc/fstab + mount /boot/grub + + This allows GRUB to write to ``/boot/grub`` (since it is on a FAT-formatted + ESP instead of on ZFS), which means that ``/boot/grub/grubenv`` and the + ``recordfail`` feature works as expected: if the boot fails, the normally + hidden GRUB menu will be shown on the next boot. For a mirror or raidz + topology, we do not want GRUB writing to the EFI System Partition. This is + because we duplicate it at install without a mechanism to update the copies + when the GRUB configuration changes (e.g. as the kernel is upgraded). Thus, + we keep ``/boot/grub`` on the boot pool for the mirror or raidz topologies. 
+ This preserves correct mirroring/raidz behavior, at the expense of being + able to write to ``/boot/grub/grubenv`` and thus the ``recordfail`` + behavior. + +#. Install GRUB/Linux/ZFS in the chroot environment for the new system: + + Choose one of the following options: + + - Install GRUB/Linux/ZFS for legacy (BIOS) booting:: + + apt install --yes grub-pc linux-image-generic zfs-initramfs zsys + + Select (using the space bar) all of the disks (not partitions) in your + pool. + + - Install GRUB/Linux/ZFS for UEFI booting:: + + apt install --yes \ + grub-efi-amd64 grub-efi-amd64-signed linux-image-generic \ + shim-signed zfs-initramfs zsys + + **Notes:** + + - Ignore any error messages saying ``ERROR: Couldn't resolve device`` and + ``WARNING: Couldn't determine root device``. `cryptsetup does not + support ZFS + `__. + + - Ignore any error messages saying ``Module zfs not found`` and + ``couldn't connect to zsys daemon``. The first seems to occur due to a + version mismatch between the Live CD kernel and the chroot environment, + but this is irrelevant since the module is already loaded. The second + may be caused by the first but either way is irrelevant since ``zed`` + is started manually later. + + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. For some reason, + grub-efi-amd64 does not prompt for ``install_devices`` here, but does + after a reboot. + +#. Optional: Remove os-prober:: + + apt purge --yes os-prober + + This avoids error messages from ``update-grub``. ``os-prober`` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Configure swap: + + Choose one of the following options if you want swap: + + - For an unencrypted single-disk install:: + + mkswap -f ${DISK}-part2 + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \ + none swap discard 0 0 >> /etc/fstab + swapon -a + + - For an unencrypted mirror or raidz topology:: + + apt install --yes mdadm + + # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and + # raid-devices if necessary and specify the actual devices. + mdadm --create /dev/md0 --metadata=1.2 --level=mirror \ + --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2 + mkswap -f /dev/md0 + echo /dev/disk/by-uuid/$(blkid -s UUID -o value /dev/md0) \ + none swap discard 0 0 >> /etc/fstab + + - For an encrypted (LUKS or ZFS native encryption) single-disk install:: + + apt install --yes cryptsetup + + echo swap ${DISK}-part2 /dev/urandom \ + swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab + echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab + + - For an encrypted (LUKS or ZFS native encryption) mirror or raidz + topology:: + + apt install --yes cryptsetup mdadm + + # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and + # raid-devices if necessary and specify the actual devices. + mdadm --create /dev/md0 --metadata=1.2 --level=mirror \ + --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2 + echo swap /dev/md0 /dev/urandom \ + swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab + echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. 
Setup system groups:: + + addgroup --system lpadmin + addgroup --system lxd + addgroup --system sambashare + +#. Optional: Install SSH:: + + apt install --yes openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +Step 5: GRUB Installation +------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +#. Refresh the initrd files:: + + update-initramfs -c -k all + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup + does not support ZFS + `__. + +#. Disable memory zeroing:: + + vi /etc/default/grub + # Add init_on_alloc=0 to: GRUB_CMDLINE_LINUX_DEFAULT + # Save and quit (or see the next step). + + This is to address `performance regressions + `__. + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Comment out: GRUB_TIMEOUT_STYLE=hidden + # Set: GRUB_TIMEOUT=5 + # Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5 + # Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Install the boot loader: + + Choose one of the following options: + + - For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the + ``grub-install`` command for each disk in the pool. + + - For UEFI booting, install GRUB to the ESP:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=ubuntu --recheck --no-floppy + +#. Disable grub-initrd-fallback.service + + For a mirror or raidz topology:: + + systemctl mask grub-initrd-fallback.service + + This is the service for ``/boot/grub/grubenv`` which does not work on + mirrored or raidz topologies. Disabling this keeps it from blocking + subsequent mounts of ``/boot/grub`` if that mount ever fails. + + Another option would be to set ``RequiresMountsFor=/boot/grub`` via a + drop-in unit, but that is more work to do here for no reason. Hopefully + `this bug `__ + will be fixed upstream. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. + + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/ubuntu_$UUID + zfs set canmount=on rpool/ROOT/ubuntu_$UUID + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Once the files have data, stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +#. 
Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +Step 6: First Boot +------------------ + +#. Install GRUB to additional disks: + + For a UEFI mirror or raidz topology only:: + + dpkg-reconfigure grub-efi-amd64 + + Select (using the space bar) all of the ESP partitions (partition 1 on + each of the pool disks). + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}') + zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \ + -o canmount=on -o mountpoint=/home/$username \ + rpool/USERDATA/${username}_$UUID + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username + +Step 7: Full Software Installation +---------------------------------- + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Install a regular set of software: + + Choose one of the following options: + + - Install a command-line environment only:: + + apt install --yes ubuntu-standard + + - Install a full GUI environment:: + + apt install --yes ubuntu-desktop + + **Hint**: If you are installing a full GUI environment, you will likely + want to manage your network with NetworkManager:: + + rm /etc/netplan/01-netcfg.yaml + vi /etc/netplan/01-network-manager-all.yaml + + .. code-block:: yaml + + network: + version: 2 + renderer: NetworkManager + +#. Optional: Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 8: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + sudo vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + sudo systemctl restart ssh + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Uncomment: GRUB_TIMEOUT_STYLE=hidden + # Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT + # Comment out: GRUB_TERMINAL=console + # Save and quit. + + sudo update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. 
Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + # Replace “UUID” as appropriate; use zfs list to find it: + zfs mount rpool/ROOT/ubuntu_UUID + zfs mount bpool/BOOT/ubuntu_UUID + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + chroot /mnt /bin/bash --login + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.ms.fd:/usr/share/OVMF/OVMF_VARS.ms.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. 
+ Doing this ensures that ``/dev/disk`` aliases are created in the guest. diff --git a/_sources/Getting Started/Ubuntu/index.rst.txt b/_sources/Getting Started/Ubuntu/index.rst.txt new file mode 100644 index 000000000..4d52da52a --- /dev/null +++ b/_sources/Getting Started/Ubuntu/index.rst.txt @@ -0,0 +1,31 @@ +Ubuntu +====== + +.. contents:: Table of Contents + :local: + +Installation +------------ + +.. note:: + If you want to use ZFS as your root filesystem, see the + `Root on ZFS`_ links below instead. + +On Ubuntu, ZFS is included in the default Linux kernel packages. +To install the ZFS utilities, first make sure ``universe`` is enabled in +``/etc/apt/sources.list``:: + + deb http://archive.ubuntu.com/ubuntu main universe + +Then install ``zfsutils-linux``:: + + apt update + apt install zfsutils-linux + +Root on ZFS +----------- +.. toctree:: + :maxdepth: 1 + :glob: + + * diff --git a/_sources/Getting Started/index.rst.txt b/_sources/Getting Started/index.rst.txt new file mode 100644 index 000000000..176750bd3 --- /dev/null +++ b/_sources/Getting Started/index.rst.txt @@ -0,0 +1,24 @@ +Getting Started +=============== + +To get started with OpenZFS refer to the provided documentation for your +distribution. It will cover the recommended installation method and any +distribution specific information. First time OpenZFS users are +encouraged to check out Aaron Toponce's `excellent +documentation `__. + +.. toctree:: + :maxdepth: 3 + :glob: + + Alpine Linux/index + Arch Linux/index + Debian/index + Fedora/index + FreeBSD + Gentoo + NixOS/index + openSUSE/index + RHEL-based distro/index + Ubuntu/index + zfs_root_maintenance diff --git a/_sources/Getting Started/openSUSE/index.rst.txt b/_sources/Getting Started/openSUSE/index.rst.txt new file mode 100644 index 000000000..c1e097884 --- /dev/null +++ b/_sources/Getting Started/openSUSE/index.rst.txt @@ -0,0 +1,35 @@ +.. highlight:: sh + +openSUSE +======== + +.. contents:: Table of Contents + :local: + +Installation +------------ + +If you want to use ZFS as your root filesystem, see the `Root on ZFS`_ +links below instead. + +ZFS packages are not included in official openSUSE repositories, but repository of `filesystems projects of openSUSE +`__ +includes such packages of filesystems including OpenZFS. + +openSUSE progresses through 3 main distribution branches, these are called Tumbleweed, Leap and SLE. There are ZFS packages available for all three. + + +External Links +-------------- + +* `openSUSE OpenZFS page `__ + +Root on ZFS +----------- +.. toctree:: + :maxdepth: 1 + :glob: + + *Root on ZFS + + diff --git a/_sources/Getting Started/openSUSE/openSUSE Leap Root on ZFS.rst.txt b/_sources/Getting Started/openSUSE/openSUSE Leap Root on ZFS.rst.txt new file mode 100644 index 000000000..e0f1488fe --- /dev/null +++ b/_sources/Getting Started/openSUSE/openSUSE Leap Root on ZFS.rst.txt @@ -0,0 +1,1280 @@ +.. highlight:: sh + +openSUSE Leap Root on ZFS +========================= + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. +- This is not an openSUSE official HOWTO page. This document will be updated if Root on ZFS support of + openSUSE is added in the future. + Also, `openSUSE's default system installer Yast2 does not support zfs `__. 
The method of setting up system + with zypper without Yast2 used in this page is based on openSUSE installation methods written by the + experience of the people in the community. + For more information about this, please look at the external links. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit openSUSE Leap Live CD w/ GUI (e.g. gnome iso) + `__ +- `A 64-bit kernel is strongly encouraged. + `__ +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention `@Zaryob `__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo zypper install python3-pip + pip3 install -r docs/requirements.txt + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Notes +~~~~~~~ + +- You can use unofficial script `LroZ `__ (Linux Root On Zfs), which is based on this manual and automates most steps. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the openSUSE Live CD. If prompted, login with the username + ``linux`` without password. Connect your system to the Internet as + appropriate (e.g. join your WiFi network). Open a terminal. + +#. Check your openSUSE Leap release:: + + lsb_release -d + Description: openSUSE Leap {$release} + +#. 
Setup and update the repositories:: + + sudo zypper addrepo https://download.opensuse.org/repositories/filesystems/$(lsb_release -rs)/filesystems.repo + sudo zypper refresh # Refresh all repositories + +#. Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + sudo zypper install openssh-server + sudo systemctl restart sshd.service + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh user@IP``. Do not forget to set the password for user by ``passwd``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + zypper install zfs zfs-kmp-default + zypper install gdisk dkms + modprobe zfs + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + +#. If you are re-using a disk, clear it as necessary: + + If the disk was previously used in an MD array:: + + zypper install mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition: + mdadm --zero-superblock --force ${DISK}-part2 + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + +#. Partition your disk(s): + + Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + + Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + + Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + **Hints:** + + - If you are creating a mirror or raidz topology, repeat the partitioning commands for all the disks which will be part of the pool. + +#. 
Create the boot pool:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@zpool_checkpoint=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - The ``spacemap_v2`` feature has been tested and is safe to use. The boot + pool is small, so this does not matter in practice. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. 
Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O encryption=on \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + zypper install cryptsetup + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. 
+ - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + - If you want to use grub bootloader, you must set:: + + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@zpool_checkpoint=enabled \ + + for your root pool. Relevant for grub 2.04 and Leap 15.3. Don't use zpool + upgrade for this pool or you will lost the possibility to use grub2-install command. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + + On Solaris systems, the root filesystem is cloned and the suffix is + incremented for major system changes through ``pkg image-update`` or + ``beadm``. Similar functionality has been implemented in Ubuntu 20.04 with + the ``zsys`` tool, though its dataset layout is more complicated. Even + without such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still + be used for manually created clones. That said, this HOWTO assumes a single + filesystem for ``/boot`` for simplicity. + +#. Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/suse + zfs mount rpool/ROOT/suse + + zfs create -o mountpoint=/boot bpool/BOOT/suse + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + chmod 700 /mnt/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices. 
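+
+   If you would like to confirm that the root and boot filesystems ended up
+   mounted under the altroot before adding the optional datasets below, the
+   following read-only commands can be used (an optional check, not part of
+   the original steps)::
+
+     # Lists all ZFS filesystems that are currently mounted:
+     zfs mount
+
+     # The root dataset should show canmount=noauto (it was mounted manually):
+     zfs get canmount,mountpoint rpool/ROOT/suse bpool/BOOT/suse
+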
+
+   If you wish to exclude these from snapshots::
+
+     zfs create -o com.sun:auto-snapshot=false rpool/var/cache
+     zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
+     chmod 1777 /mnt/var/tmp
+
+   If you use /opt on this system::
+
+     zfs create rpool/opt
+
+   If you use /srv on this system::
+
+     zfs create rpool/srv
+
+   If you use /usr/local on this system::
+
+     zfs create -o canmount=off rpool/usr
+     zfs create rpool/usr/local
+
+   If this system will have games installed::
+
+     zfs create rpool/var/games
+
+   If this system will store local email in /var/mail::
+
+     zfs create rpool/var/mail
+
+   If this system will use Snap packages::
+
+     zfs create rpool/var/snap
+
+   If this system will use Flatpak packages::
+
+     zfs create rpool/var/lib/flatpak
+
+   If you use /var/www on this system::
+
+     zfs create rpool/var/www
+
+   If this system will use GNOME::
+
+     zfs create rpool/var/lib/AccountsService
+
+   If this system will use Docker (which manages its own datasets &
+   snapshots)::
+
+     zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
+
+   If this system will use NFS (locking)::
+
+     zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
+
+   Mount a tmpfs at /run::
+
+     mkdir /mnt/run
+     mount -t tmpfs tmpfs /mnt/run
+     mkdir /mnt/run/lock
+
+   A tmpfs is recommended later, but if you want a separate dataset for
+   ``/tmp``::
+
+     zfs create -o com.sun:auto-snapshot=false rpool/tmp
+     chmod 1777 /mnt/tmp
+
+   The primary goal of this dataset layout is to separate the OS from user
+   data. This allows the root filesystem to be rolled back without rolling
+   back user data.
+
+   If you do nothing extra, ``/tmp`` will be stored as part of the root
+   filesystem. Alternatively, you can create a separate dataset for ``/tmp``,
+   as shown above. This keeps the ``/tmp`` data out of snapshots of your root
+   filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want
+   to limit the maximum space used. Otherwise, you can use a tmpfs (RAM
+   filesystem) later.
+
+#. Copy in zpool.cache::
+
+     mkdir /mnt/etc/zfs -p
+     cp /etc/zfs/zpool.cache /mnt/etc/zfs/
+
+Step 4. Install System
+----------------------
+
+#. Add repositories to the chroot directory::
+
+     zypper --root /mnt ar http://download.opensuse.org/distribution/leap/$(lsb_release -rs)/repo/non-oss non-oss
+     zypper --root /mnt ar http://download.opensuse.org/distribution/leap/$(lsb_release -rs)/repo/oss oss
+     zypper --root /mnt ar http://download.opensuse.org/update/leap/$(lsb_release -rs)/oss update-oss
+     zypper --root /mnt ar http://download.opensuse.org/update/leap/$(lsb_release -rs)/non-oss update-nonoss
+
+#. Generate repository indexes::
+
+     zypper --root /mnt refresh
+
+   You will be asked to accept new repository signing keys. Press ``a`` to
+   always trust each key and continue::
+
+     New repository or package signing key received:
+
+       Repository:       oss
+       Key Name:         openSUSE Project Signing Key
+       Key Fingerprint:  22C07BA5 34178CD0 2EFE22AA B88B2FD4 3DBDC284
+       Key Created:      Mon May 5 11:37:40 2014
+       Key Expires:      Thu May 2 11:37:40 2024
+       Rpm Name:         gpg-pubkey-3dbdc284-53674dd4
+
+     Do you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r):
+
+#. Install openSUSE Leap with zypper:
+
+   If you install the `base` pattern, zypper will install `busybox-grep`,
+   which masks the default kernel package. For that reason, the
+   `enhanced_base` pattern is recommended if you are new to openSUSE.
+   However, `enhanced_base` pulls in extra packages that you may not want
+   on a server. Choose one of the following options:
+
+   a. 
Install base packages of openSUSE Leap with zypper (Recommended for server):: + + zypper --root /mnt install -t pattern base + + + b. Install enhanced base of openSUSE Leap with zypper (Recommended for desktop):: + + zypper --root /mnt install -t pattern enhanced_base + + + +#. Install openSUSE zypper package system into chroot:: + + zypper --root /mnt install zypper + +#. Recommended: Install openSUSE yast2 system into chroot:: + + zypper --root /mnt install yast2 + zypper --root /mnt install -t pattern yast2_basis + + It will make easier to configure network and other configurations for beginners. + +To install a desktop environment, see the `openSUSE wiki +`__ + +Step 5: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + echo HOSTNAME > /mnt/etc/hostname + vi /mnt/etc/hosts + + Add a line: + + .. code-block:: text + + 127.0.1.1 HOSTNAME + + or if the system has a real name in DNS: + + .. code-block:: text + + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Copy network information:: + + rm /mnt/etc/resolv.conf + cp /etc/resolv.conf /mnt/etc/ + + You will reconfigure network with yast2 later. + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + + chroot /mnt /usr/bin/env DISK=$DISK bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + ln -s /proc/self/mounts /etc/mtab + zypper refresh + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + locale -a + + Output must include that languages: + + * C + * C.utf8 + * en_US.utf8 + * POSIX + + Find yout locale from `locale -a` commands output then set it with following command. + + .. code-block:: text + + localectl set-locale LANG=en_US.UTF-8 + +#. Optional: Reinstallation for stability: + + After installation it may need. Some packages may have minor errors. + For that, do this if you wish. Since there is no command like + dpkg-reconfigure in openSUSE, `zypper install -f stated as a alternative for + it `__ + but it will reinstall packages. + + .. code-block:: text + + zypper install -f permissions-config iputils ca-certificates ca-certificates-mozilla pam shadow dbus libutempter0 suse-module-tools util-linux + + +#. Install kernel:: + + zypper install kernel-default kernel-firmware + + **Note:** If you installed `base` pattern, you need to deinstall busybox-grep to install `kernel-default` package. + +#. Install ZFS in the chroot environment for the new system:: + + zypper install lsb-release + zypper addrepo https://download.opensuse.org/repositories/filesystems/`lsb_release -rs`/filesystems.repo + zypper refresh # Refresh all repositories + zypper install zfs zfs-kmp-default + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + zypper install cryptsetup + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) none \ + luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. 
For LUKS installs only, fix cryptsetup naming for ZFS:: + + echo 'ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}" + ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}"' >> /etc/udev/rules.d/99-local-crypt.rules + +#. Recommended: Generate and setup hostid:: + + cd /root + zypper install wget + wget https://github.com/openzfs/zfs/files/4537537/genhostid.sh.gz + gzip -d genhostid.sh.gz + chmod +x genhostid.sh + zgenhostid `/root/genhostid.sh` + + Check, that generated and system hostid matches:: + + /root/genhostid.sh + hostid + +#. Install GRUB + + Choose one of the following options: + + - Install GRUB for legacy (BIOS) booting:: + + zypper install grub2-x86_64-pc + + If your processor is 32bit use `grub2-i386-pc` instead of x86_64 one. + + - Install GRUB for UEFI booting:: + + zypper install grub2-x86_64-efi dosfstools os-prober + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s PARTUUID -o value ${DISK}-part2) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + + **Notes:** + + - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +#. Optional: Remove os-prober:: + + zypper remove os-prober + + This avoids error messages from `update-bootloader`. `os-prober` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Enable importing bpool + + This ensures that ``bpool`` is always imported, regardless of whether + ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, + or whether ``zfs-import-scan.service`` is enabled. + + :: + + vi /etc/systemd/system/zfs-import-bpool.service + + .. code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/usr/sbin/zpool import -N -o cachefile=none bpool + # Work-around to preserve zpool cache: + ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache + ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache + + [Install] + WantedBy=zfs-import.target + + :: + + systemctl enable zfs-import-bpool.service + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + + +Step 6: Kernel Installation +--------------------------- + +#. Add zfs module into dracut:: + + echo 'zfs'>> /etc/modules-load.d/zfs.conf + +#. Kernel version of livecd can differ from currently installed version. Get kernel version of your new OS:: + + kernel_version=$(find /boot/vmlinuz-* | grep -Eo '[[:digit:]]*\.[[:digit:]]*\.[[:digit:]]*\-.*-default') + +#. Refresh kernel files:: + + kernel-install add "$kernel_version" /boot/vmlinuz-"$kernel_version" + +#. Refresh the initrd files:: + + mkinitrd + + **Note:** After some installations, LUKS partition cannot seen by dracut, + this will print “Failure occured during following action: + configuring encrypted DM device X VOLUME_CRYPTSETUP_FAILED“. 
To fix this
+   issue, verify your cryptsetup installation. `See for more information `__
+
+   **Note:** Although the zfs module is listed under ``/etc/modules-load.d``,
+   dracut may still fail to pick it up. In that case, add it to the initrd
+   explicitly::
+
+     dracut --kver $(uname -r) --force --add-drivers "zfs"
+
+
+Step 7: Grub2 Installation
+--------------------------
+
+#. Verify that the ZFS boot filesystem is recognized::
+
+     grub2-probe /boot
+
+   The output must be `zfs`.
+
+#. If the `grub2-probe` command gives you trouble, set the following::
+
+     echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile
+     export ZPOOL_VDEV_NAME_PATH=YES
+
+   then go back to the `grub2-probe` step.
+
+#. Optional (but highly recommended): Make debugging GRUB easier::
+
+      vi /etc/default/grub
+      # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
+      # Uncomment: GRUB_TERMINAL=console
+      # Save and quit.
+
+   Later, once the system has rebooted twice and you are sure everything is
+   working, you can undo these changes, if desired.
+
+#. Update the boot configuration::
+
+      update-bootloader
+
+   **Note:** Ignore errors from ``osprober``, if present.
+
+   **Note:** If you run into trouble with the grub2 installation, consider
+   using systemd-boot instead (see Step 8).
+
+   **Note:** If this command does not produce any output, fall back to classic
+   grub.cfg generation with the following command:
+   ``grub2-mkconfig -o /boot/grub2/grub.cfg``
+
+#. Check that ``/boot/grub2/grub.cfg`` has a menuentry containing
+   ``root=ZFS=rpool/ROOT/suse``, like this::
+
+     linux   /boot@/vmlinuz-5.3.18-150300.59.60-default root=ZFS=rpool/ROOT/suse
+
+   If not, change ``/etc/default/grub``::
+
+     GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/suse"
+
+   and repeat the previous step.
+
+#. Install the boot loader:
+
+   #. For legacy (BIOS) booting, install GRUB to the MBR::
+
+        grub2-install $DISK
+
+      Note that you are installing GRUB to the whole disk, not a partition.
+
+      If you are creating a mirror or raidz topology, repeat the ``grub-install``
+      command for each disk in the pool.
+
+   #. For UEFI booting, install GRUB to the ESP::
+
+        grub2-install --target=x86_64-efi --efi-directory=/boot/efi \
+            --bootloader-id=opensuse --recheck --no-floppy
+
+      It is not necessary to specify the disk here. If you are creating a
+      mirror or raidz topology, the additional disks will be handled later.
+
+Step 8: Systemd-Boot Installation
+---------------------------------
+
+**Warning:** This will break your Yast2 Bootloader Configuration. Use it only
+if you are unable to fix the problems you are having with grub2; in some
+cases, grub2 does not see the rpool pool.
+
+#. Install systemd-boot::
+
+     bootctl install
+
+   **Note:** Only if the previous command replies "Failed to get machine id:
+   No medium found", run::
+
+     systemd-machine-id-setup
+
+   and then repeat the systemd-boot installation.
+
+#. Configure the bootloader::
+
+     tee -a /boot/efi/loader/loader.conf << EOF
+     default openSUSE_Leap.conf
+     timeout 5
+     console-mode auto
+     EOF
+
+#. Write the boot entry::
+
+     tee -a /boot/efi/loader/entries/openSUSE_Leap.conf << EOF
+     title   openSUSE Leap
+     linux   /EFI/openSUSE/vmlinuz
+     initrd  /EFI/openSUSE/initrd
+     options root=zfs:rpool/ROOT/suse boot=zfs
+     EOF
+
+#. Copy files into the EFI system partition::
+
+     mkdir /boot/efi/EFI/openSUSE
+     cp /boot/{vmlinuz,initrd} /boot/efi/EFI/openSUSE
+
+#. Update systemd-boot variables::
+
+     bootctl update
+
+Step 9: Filesystem Configuration
+--------------------------------
+
+#. Fix filesystem mount ordering:
+
+   We need to activate ``zfs-mount-generator``. 
This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. + + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/suse + zfs set canmount=noauto rpool/ROOT/suse + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +Step 10: First Boot +------------------- + +#. Optional: Install SSH:: + + zypper install -y openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +#. Optional: Snapshot the initial installation:: + + zfs snapshot -r bpool/BOOT/suse@install + zfs snapshot -r rpool/ROOT/suse@install + + In the future, you will likely want to take snapshots before each + upgrade, and remove old snapshots (including this one) at some point to + save space. + +#. Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +#. Create a user account: + + Replace ``username`` with your desired username:: + + zfs create rpool/home/username + adduser username + + cp -a /etc/skel/. /home/username + chown -R username:username /home/username + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username + +#. Mirror GRUB + + If you installed to multiple disks, install GRUB on the additional + disks. + + - For legacy (BIOS) booting:: + Check to be sure we using efi mode: + + .. code-block:: text + + efibootmgr -v + + This must return a message contains `legacy_boot` + + Then reconfigure grub: + + .. code-block:: text + + grub-install $DISK + + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + + - For UEFI booting:: + + umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "opensuse-2" -l '\EFI\opensuse\grubx64.efi' + + mount /boot/efi + +Step 11: Optional: Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is `a bug report upstream +`__. + +#. 
Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + + You can adjust the size (the ``4G`` part) to your needs. + + The compression algorithm is set to ``zle`` because it is the cheapest + available algorithm. As this guide recommends ``ashift=12`` (4 kiB + blocks on disk), the common case of a 4 kiB page size means that no + compression algorithm can reduce I/O. The exception is all-zero pages, + which are dropped by ZFS; but some form of compression has to be enabled + to get this behavior. + +#. Configure the swap device: + + **Caution**: Always use long ``/dev/zvol`` aliases in configuration + files. Never use a short ``/dev/zdX`` device name. + + :: + + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume + + The ``RESUME=none`` is necessary to disable resuming from hibernation. + This does not work, as the zvol is not present (because the pool has not + yet been imported) at the time the resume script runs. If it is not + disabled, the boot process hangs for 30 seconds waiting for the swap + zvol to appear. + +#. Enable the swap device:: + + swapon -av + +Step 12: Final Cleanup +---------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Delete the snapshots of the initial installation:: + + sudo zfs destroy bpool/BOOT/suse@install + sudo zfs destroy rpool/ROOT/suse@install + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + systemctl restart sshd + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + # Comment out GRUB_TERMINAL=console + # Save and quit. + + sudo update-bootloader + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + zypper install cryptsetup + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. 
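
+
+**Hint:** Before importing the pools, you can check that the unlocked
+volume(s) are present (a quick sanity check; the ``luks1`` name follows the
+convention used earlier in this guide)::
+
+    ls /dev/mapper
+
+The output should include ``luks1`` (and ``luks2``, etc., if you created a
+mirror or raidz topology).
+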
+ +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + zfs mount rpool/ROOT/suse + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo zypper install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. + + +External Links +~~~~~~~~~~~~~~ +* `OpenZFS on openSUSE `__ +* `ZenLinux Blog - How to Setup an openSUSE chroot + `__ diff --git a/_sources/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.rst.txt b/_sources/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.rst.txt new file mode 100644 index 000000000..2719502e9 --- /dev/null +++ b/_sources/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.rst.txt @@ -0,0 +1,1237 @@ +.. highlight:: sh + +openSUSE Tumbleweed Root on ZFS +=============================== + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. +- This is not an openSUSE official HOWTO page. This document will be updated if Root on ZFS support of + openSUSE is added in the future. 
+ Also, `openSUSE's default system installer Yast2 does not support zfs `__. The method of setting up system + with zypper without Yast2 used in this page is based on openSUSE installation methods written by the + experience of the people in the community. + For more information about this, please look at the external links. + + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit openSUSE Tumbleweed Live CD w/ GUI (e.g. gnome iso) + `__ +- `A 64-bit kernel is strongly encouraged. + `__ +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention `@Zaryob `__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo zypper install python3-pip + pip3 install -r docs/requirements.txt + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the openSUSE Live CD. If prompted, login with the username + ``live`` and password ``live``. Connect your system to the Internet as + appropriate (e.g. join your WiFi network). Open a terminal. + +#. Setup and update the repositories:: + + sudo zypper addrepo https://download.opensuse.org/repositories/filesystems/openSUSE_Tumbleweed/filesystems.repo + sudo zypper refresh # Refresh all repositories + +#. 
Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + sudo zypper install openssh-server + sudo systemctl restart sshd.service + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh user@IP``. + + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + zypper install zfs zfs-kmp-default + zypper install gdisk + modprobe zfs + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + +#. If you are re-using a disk, clear it as necessary: + + If the disk was previously used in an MD array:: + + zypper install mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition: + mdadm --zero-superblock --force ${DISK}-part2 + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + + +#. Partition your disk(s): + + Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + + Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + + Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + If you are creating a mirror or raidz topology, repeat the partitioning + commands for all the disks which will be part of the pool. + +#. Create the boot pool:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@zpool_checkpoint=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. 
See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - The ``spacemap_v2`` feature has been tested and is safe to use. The boot + pool is small, so this does not matter in practice. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O encryption=on \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + zypper install cryptsetup + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. 
If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. 
Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + + On Solaris systems, the root filesystem is cloned and the suffix is + incremented for major system changes through ``pkg image-update`` or + ``beadm``. Similar functionality has been implemented in Ubuntu 20.04 with + the ``zsys`` tool, though its dataset layout is more complicated. Even + without such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still + be used for manually created clones. That said, this HOWTO assumes a single + filesystem for ``/boot`` for simplicity. + +#. Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/suse + zfs mount rpool/ROOT/suse + + zfs create -o mountpoint=/boot bpool/BOOT/suse + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + chmod 700 /mnt/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices. + + If you wish to exclude these from snapshots:: + + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp + + If you use /opt on this system:: + + zfs create rpool/opt + + If you use /srv on this system:: + + zfs create rpool/srv + + If you use /usr/local on this system:: + + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local + + If this system will have games installed:: + + zfs create rpool/var/games + + If this system will store local email in /var/mail:: + + zfs create rpool/var/spool/mail + + If this system will use Snap packages:: + + zfs create rpool/var/snap + + If this system will use Flatpak packages:: + + zfs create rpool/var/lib/flatpak + + If you use /var/www on this system:: + + zfs create rpool/var/www + + If this system will use GNOME:: + + zfs create rpool/var/lib/AccountsService + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + + If this system will use NFS (locking):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + + + Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs -p + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4. 
Install System
+----------------------
+
+#. Add repositories into the chroot directory::
+
+     zypper --root /mnt ar http://download.opensuse.org/tumbleweed/repo/non-oss/ non-oss
+     zypper --root /mnt ar http://download.opensuse.org/tumbleweed/repo/oss/ oss
+
+#. Generate repository indexes::
+
+     zypper --root /mnt refresh
+
+   You will be asked to trust the repository signing key. Answer ``a`` to
+   always trust it and continue::
+
+     New repository or package signing key received:
+
+       Repository:       oss
+       Key Name:         openSUSE Project Signing Key
+       Key Fingerprint:  22C07BA5 34178CD0 2EFE22AA B88B2FD4 3DBDC284
+       Key Created:      Mon May  5 11:37:40 2014
+       Key Expires:      Thu May  2 11:37:40 2024
+       Rpm Name:         gpg-pubkey-3dbdc284-53674dd4
+
+     Do you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r):
+
+#. Install openSUSE Tumbleweed with zypper:
+
+   If you install the `base` pattern, zypper will install `busybox-grep`,
+   which masks the default kernel package. For that reason, the
+   `enhanced_base` pattern is recommended if you are new to openSUSE.
+   However, `enhanced_base` pulls in extra packages that you may not want on
+   a server, so choose one of the following options:
+
+   a. Install the base packages of openSUSE Tumbleweed with zypper
+      (recommended for servers)::
+
+        zypper --root /mnt install -t pattern base
+
+   b. Install the enhanced base of openSUSE Tumbleweed with zypper
+      (recommended for desktops)::
+
+        zypper --root /mnt install -t pattern enhanced_base
+
+#. Install the openSUSE zypper package system into the chroot::
+
+     zypper --root /mnt install zypper
+
+#. Recommended: Install the openSUSE yast2 system into the chroot::
+
+     zypper --root /mnt install yast2
+
+   .. note:: If your `/etc/resolv.conf` file is empty, run this command::
+
+        echo "nameserver 8.8.4.4" | tee -a /mnt/etc/resolv.conf
+
+   Installing yast2 makes it easier for beginners to configure the network
+   and other settings.
+
+To install a desktop environment, see the `openSUSE wiki
+`__
+
+Step 5: System Configuration
+----------------------------
+
+#. Configure the hostname:
+
+   Replace ``HOSTNAME`` with the desired hostname::
+
+     echo HOSTNAME > /mnt/etc/hostname
+     vi /mnt/etc/hosts
+
+   Add a line:
+
+   .. code-block:: text
+
+     127.0.1.1       HOSTNAME
+
+   or if the system has a real name in DNS:
+
+   .. code-block:: text
+
+     127.0.1.1       FQDN HOSTNAME
+
+   **Hint:** Use ``nano`` if you find ``vi`` confusing.
+
+#. Copy network information::
+
+     cp /etc/resolv.conf /mnt/etc
+
+   You will reconfigure the network with yast2 later.
+
+   .. note:: If your `/etc/resolv.conf` file is empty, run this command::
+
+        echo "nameserver 8.8.4.4" | tee -a /mnt/etc/resolv.conf
+
+#. Bind the virtual filesystems from the LiveCD environment to the new
+   system and ``chroot`` into it::
+
+     mount --make-private --rbind /dev  /mnt/dev
+     mount --make-private --rbind /proc /mnt/proc
+     mount --make-private --rbind /sys  /mnt/sys
+     mount -t tmpfs tmpfs /mnt/run
+     mkdir /mnt/run/lock
+
+     chroot /mnt /usr/bin/env DISK=$DISK bash --login
+
+   **Note:** This is using ``--rbind``, not ``--bind``.
+
+#. Configure a basic system environment::
+
+     ln -s /proc/self/mounts /etc/mtab
+     zypper refresh
+
+   Even if you prefer a non-English system language, always ensure that
+   ``en_US.UTF-8`` is available::
+
+     locale -a
+
+   The output must include these locales:
+
+   * C
+   * C.UTF-8
+   * en_US.utf8
+   * POSIX
+
+   Find your locale in the output of `locale -a`, then set it with the
+   following command:
+
+   .. code-block:: text
+
+     localectl set-locale LANG=en_US.UTF-8
+
+
+#. 
Optional: Reinstallation for stability: + + After installation it may need. Some packages may have minor errors. + For that, do this if you wish. Since there is no command like + dpkg-reconfigure in openSUSE, `zypper install -f stated as a alternative for + it `__ + but it will reinstall packages. + + .. code-block:: text + + zypper install -f permissions-config iputils ca-certificates ca-certificates-mozilla pam shadow dbus-1 libutempter0 suse-module-tools util-linux + + +#. Install kernel:: + + zypper install kernel-default kernel-firmware + + .. note:: If you installed `base` pattern, you need to deinstall busybox-grep to install `kernel-default` package. + +#. Install ZFS in the chroot environment for the new system:: + + zypper addrepo https://download.opensuse.org/repositories/filesystems/openSUSE_Tumbleweed/filesystems.repo + zypper refresh # Refresh all repositories + zypper install zfs + + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + zypper install cryptsetup + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) none \ + luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. For LUKS installs only, fix cryptsetup naming for ZFS:: + + echo 'ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}" + ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}"' >> /etc/udev/rules.d/99-local-crypt.rules + + +#. Install GRUB + + Choose one of the following options: + + - Install GRUB for legacy (BIOS) booting:: + + zypper install grub2-i386-pc + + - Install GRUB for UEFI booting:: + + zypper install grub2-x86_64-efi dosfstools os-prober + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s PARTUUID -o value ${DISK}-part2) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + + **Notes:** + + - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +#. Optional: Remove os-prober:: + + zypper remove os-prober + + This avoids error messages from `update-bootloader`. `os-prober` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Enable importing bpool + + This ensures that ``bpool`` is always imported, regardless of whether + ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, + or whether ``zfs-import-scan.service`` is enabled. + + :: + + vi /etc/systemd/system/zfs-import-bpool.service + + .. code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + # Work-around to preserve zpool cache: + ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache + ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache + + [Install] + WantedBy=zfs-import.target + + :: + + systemctl enable zfs-import-bpool.service + +#. 
Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + + +Step 6: Kernel Installation +--------------------------- + +#. Add zfs module into dracut:: + + echo 'zfs'>> /etc/modules-load.d/zfs.conf + + +#. Refresh kernel files:: + + kernel-install add $(uname -r) /boot/vmlinuz-$(uname -r) + +#. Refresh the initrd files:: + + mkinitrd + + **Note:** After some installations, LUKS partition cannot seen by dracut, + this will print “Failure occured during following action: + configuring encrypted DM device X VOLUME_CRYPTSETUP_FAILED“. For fix this + issue you need to check cryptsetup installation. `See for more information `__ + **Note:** Although we add the zfs config to the system module into `/etc/modules.d`, if it is not seen by dracut, we have to add it to dracut by force. + `dracut --kver $(uname -r) --force --add-drivers "zfs"` + + +Step 7: Grub2 Installation +-------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub2-probe /boot + + Output must be `zfs` + +#. If you having trouble with `grub2-probe` command make this:: + + echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile + export ZPOOL_VDEV_NAME_PATH=YES + + then go back to `grub2-probe` step. + + +#. Workaround GRUB's missing zpool-features support:: + + vi /etc/default/grub + # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/suse" + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-bootloader + + **Note:** Ignore errors from ``osprober``, if present. + **Note:** If you have had trouble with the grub2 installation, I suggest you use systemd-boot. + **Note:** If this command don't gives any output, use classic grub.cfg generation with following command: + ``grub2-mkconfig -o /boot/grub2/grub.cfg`` + +#. Install the boot loader: + + #. For legacy (BIOS) booting, install GRUB to the MBR:: + + grub2-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the ``grub-install`` + command for each disk in the pool. + + #. For UEFI booting, install GRUB to the ESP:: + + grub2-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=opensuse --recheck --no-floppy + + It is not necessary to specify the disk here. If you are creating a + mirror or raidz topology, the additional disks will be handled later. + +Step 8: Systemd-Boot Installation +--------------------------------- + +**Warning:** This will break your Yast2 Bootloader Configuration. Make sure that you +are not able to fix the problem you are having with grub2. I decided to write this +part because sometimes grub2 doesn't see the rpool pool in some cases. + +#. Install systemd-boot:: + + bootctl install + +#. Configure bootloader configuration:: + + tee -a /boot/efi/loader/loader.conf << EOF + default openSUSE_Tumbleweed.conf + timeout 5 + console-mode auto + EOF + +#. 
Write Entries:: + + tee -a /boot/efi/loader/entries/openSUSE_Tumbleweed.conf << EOF + title openSUSE Tumbleweed + linux /EFI/openSUSE/vmlinuz + initrd /EFI/openSUSE/initrd + options root=zfs=rpool/ROOT/suse boot=zfs + EOF + +#. Copy files into EFI:: + + mkdir /boot/efi/EFI/openSUSE + cp /boot/{vmlinuz,initrd} /boot/efi/EFI/openSUSE + +#. Update systemd-boot variables:: + + bootctl update + +Step 9: Filesystem Configuration +-------------------------------- + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. + + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/suse + zfs set canmount=noauto rpool/ROOT/suse + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +Step 10: First Boot +------------------- + +#. Optional: Install SSH:: + + zypper install --yes openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +#. Optional: Snapshot the initial installation:: + + zfs snapshot bpool/BOOT/suse@install + zfs snapshot rpool/ROOT/suse@install + + In the future, you will likely want to take snapshots before each + upgrade, and remove old snapshots (including this one) at some point to + save space. + +#. Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +#. Create a user account: + + Replace ``username`` with your desired username:: + + zfs create rpool/home/username + adduser username + + cp -a /etc/skel/. /home/username + chown -R username:username /home/username + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username + +#. Mirror GRUB + + If you installed to multiple disks, install GRUB on the additional + disks. + + - For legacy (BIOS) booting:: + Check to be sure we using efi mode: + + .. code-block:: text + + efibootmgr -v + + This must return a message contains `legacy_boot` + + Then reconfigure grub: + + .. code-block:: text + + grub-install $DISK + + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. 
+ + - For UEFI booting:: + + umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "opensuse-2" -l '\EFI\opensuse\grubx64.efi' + + mount /boot/efi + +Step 11: Optional: Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is `a bug report upstream +`__. + +#. Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + + You can adjust the size (the ``4G`` part) to your needs. + + The compression algorithm is set to ``zle`` because it is the cheapest + available algorithm. As this guide recommends ``ashift=12`` (4 kiB + blocks on disk), the common case of a 4 kiB page size means that no + compression algorithm can reduce I/O. The exception is all-zero pages, + which are dropped by ZFS; but some form of compression has to be enabled + to get this behavior. + +#. Configure the swap device: + + **Caution**: Always use long ``/dev/zvol`` aliases in configuration + files. Never use a short ``/dev/zdX`` device name. + + :: + + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume + + The ``RESUME=none`` is necessary to disable resuming from hibernation. + This does not work, as the zvol is not present (because the pool has not + yet been imported) at the time the resume script runs. If it is not + disabled, the boot process hangs for 30 seconds waiting for the swap + zvol to appear. + +#. Enable the swap device:: + + swapon -av + +Step 12: Final Cleanup +---------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Delete the snapshots of the initial installation:: + + sudo zfs destroy bpool/BOOT/suse@install + sudo zfs destroy rpool/ROOT/suse@install + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + systemctl restart sshd + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + # Comment out GRUB_TERMINAL=console + # Save and quit. + + sudo update-bootloader + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). 
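
+
+   If the header on the disk is ever damaged, the backup can be restored from
+   a live environment. A minimal sketch, assuming the device path and backup
+   file name from the example above::
+
+     sudo cryptsetup luksHeaderRestore /dev/disk/by-id/scsi-SATA_disk1-part4 \
+         --header-backup-file luks1-header.dat
+
+   Restoring the header overwrites the keyslots currently on the device, so
+   only do this as part of a deliberate recovery.
+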
+ +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + zypper install cryptsetup + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + zfs mount rpool/ROOT/suse + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo zypper install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. + + +External Links +~~~~~~~~~~~~~~ +* `OpenZFS on openSUSE `__ +* `ZenLinux Blog - How to Setup an openSUSE chroot + `__ diff --git a/_sources/Getting Started/zfs_root_maintenance.rst.txt b/_sources/Getting Started/zfs_root_maintenance.rst.txt new file mode 100644 index 000000000..341aad692 --- /dev/null +++ b/_sources/Getting Started/zfs_root_maintenance.rst.txt @@ -0,0 +1,311 @@ +.. 
highlight:: sh + +Root on ZFS maintenance +======================== + +Boot Environment +---------------- + +This section is compatible with Alpine, Arch, Fedora and RHEL guides. +Not necessary for NixOS. Incompatible with Ubuntu and Debian guides. + +Note: boot environments as described below are intended only for +system recovery purposes, that is, you boot into the alternate boot +environment once to perform system recovery on the default datasets: + +.. code-block:: sh + + rpool/distro/root + bpool/distro/root + +then reboot to those datasets once you have successfully recovered the +system. + +Switching the default boot environment complicates bootloader recovery +and other maintenance operations and is thus currently not supported. + +#. If you want to use the ``@initial-installation`` snapshot created + during installation, set ``my_boot_env=initial-installation`` and + skip Step 3 and 4. + +#. Identify which dataset is currently mounted as root + ``/`` and boot ``/boot`` + :: + + set -x + boot_dataset=$(df -P /boot | tail -n1 | cut -f1 -d' ' || true ) + root_dataset=$(df -P / | tail -n1 | cut -f1 -d' ' || true ) + +#. Choose a name for the new boot environment + :: + + my_boot_env=backup + +#. Take snapshots of the ``/`` and ``/boot`` datasets + + :: + + zfs snapshot "${boot_dataset}"@"${my_boot_env}" + zfs snapshot "${root_dataset}"@"${my_boot_env}" + +#. Create clones from read-only snapshots + + :: + + new_root_dataset="${root_dataset%/*}"/"${my_boot_env}" + new_boot_dataset="${boot_dataset%/*}"/"${my_boot_env}" + + zfs clone -o canmount=noauto \ + -o mountpoint=/ \ + "${root_dataset}"@"${my_boot_env}" \ + "${new_root_dataset}" + + zfs clone -o canmount=noauto \ + -o mountpoint=legacy \ + "${boot_dataset}"@"${my_boot_env}" \ + "${new_boot_dataset}" + +#. Mount clone and update file system table (fstab) + :: + + MNT=$(mktemp -d) + mount -t zfs -o zfsutil "${new_root_dataset}" "${MNT}" + mount -t zfs "${new_boot_dataset}" "${MNT}"/boot + + sed -i s,"${root_dataset}","${new_root_dataset}",g "${MNT}"/etc/fstab + sed -i s,"${boot_dataset}","${new_boot_dataset}",g "${MNT}"/etc/fstab + + if test -f "${MNT}"/boot/grub/grub.cfg; then + is_grub2=n + sed -i s,"${boot_dataset#bpool/}","${new_boot_dataset#bpool/}",g "${MNT}"/boot/grub/grub.cfg + elif test -f "${MNT}"/boot/grub2/grub.cfg; then + is_grub2=y + sed -i s,"${boot_dataset#bpool/}","${new_boot_dataset#bpool/}",g "${MNT}"/boot/grub2/grub.cfg + else + echo "ERROR: no grub menu found!" + exit 1 + fi + + Do not proceed if no grub menu was found! + +#. Unmount clone + :: + + umount -Rl "${MNT}" + +#. Add new boot environment as GRUB menu entry + :: + + echo "# ${new_boot_dataset}" > new_boot_env_entry_"${new_boot_dataset##*/}" + printf '\n%s' "menuentry 'Boot environment ${new_boot_dataset#bpool/} from ${boot_dataset#bpool/}' " \ + >> new_boot_env_entry_"${new_boot_dataset##*/}" + if [ "${is_grub2}" = y ]; then + # shellcheck disable=SC2016 + printf '{ search --set=drive1 --label bpool; configfile ($drive1)/%s@/grub2/grub.cfg; }' \ + "${new_boot_dataset#bpool/}" >> new_boot_env_entry_"${new_boot_dataset##*/}" + else + # shellcheck disable=SC2016 + printf '{ search --set=drive1 --label bpool; configfile ($drive1)/%s@/grub/grub.cfg; }' \ + "${new_boot_dataset#bpool/}" >> new_boot_env_entry_"${new_boot_dataset##*/}" + fi + + find /boot/efis/ -name "grub.cfg" -print0 \ + | xargs -t -0I '{}' sh -vxc "tail -n1 new_boot_env_entry_${new_boot_dataset##*/} >> '{}'" + + .. 
ifconfig:: zfs_root_test + + :: + + find /boot/efis/ -name "grub.cfg" -print0 \ + | xargs -t -0I '{}' grub-script-check -v '{}' + +#. Do not delete ``new_boot_env_entry_"${new_boot_dataset##*/}"`` file. It + is needed when you want to remove the new boot environment from + GRUB menu later. + +#. After reboot, select boot environment entry from GRUB + menu to boot from the clone. Press ESC inside + submenu to return to the previous menu. + +#. Steps above can also be used to create a new clone + from an existing snapshot. + +#. To delete the boot environment, first store its name in a + variable:: + + my_boot_env=backup + +#. Ensure that the boot environment is not + currently used + :: + + set -x + boot_dataset=$(df -P /boot | tail -n1 | cut -f1 -d' ' || true ) + root_dataset=$(df -P / | tail -n1 | cut -f1 -d' ' || true ) + new_boot_dataset="${boot_dataset%/*}"/"${my_boot_env}" + rm_boot_dataset=$(head -n1 new_boot_env_entry_"${new_boot_dataset##*/}" | sed 's|^# *||' || true ) + + if [ "${boot_dataset}" = "${rm_boot_dataset}" ]; then + echo "ERROR: the dataset you want to delete is the current root! abort!" + exit 1 + fi + +#. Then check the origin snapshot + :: + + rm_root_dataset=rpool/"${rm_boot_dataset#bpool/}" + + rm_boot_dataset_origin=$(zfs get -H origin "${rm_boot_dataset}"|cut -f3 || true ) + rm_root_dataset_origin=$(zfs get -H origin "${rm_root_dataset}"|cut -f3 || true ) + +#. Finally, destroy clone (boot environment) and its + origin snapshot + :: + + zfs destroy "${rm_root_dataset}" + zfs destroy "${rm_root_dataset_origin}" + zfs destroy "${rm_boot_dataset}" + zfs destroy "${rm_boot_dataset_origin}" + +#. Remove GRUB entry + :: + + new_entry_escaped=$(tail -n1 new_boot_env_entry_"${new_boot_dataset##*/}" | sed -e 's/[\/&]/\\&/g' || true ) + find /boot/efis/ -name "grub.cfg" -print0 | xargs -t -0I '{}' sed -i "/${new_entry_escaped}/d" '{}' + + .. ifconfig:: zfs_root_test + + :: + + find /boot/efis/ -name "grub.cfg" -print0 \ + | xargs -t -0I '{}' grub-script-check -v '{}' + +Disk replacement +---------------- + +When a disk fails in a mirrored setup, the disk can be replaced with +the following procedure. + +#. Shutdown the computer. + +#. Replace the failed disk with another disk. The replacement should + be at least the same size or larger than the failed disk. + +#. Boot the computer. + + When a disk fails, the system will boot, albeit several minutes + slower than normal. + + For NixOS, this is due to the initrd and systemd designed to only + import a pool in degraded state after a 90s timeout. + + Swap partition on that disk will also fail. + +#. Install GNU ``parted`` with your distribution package manager. + +#. Identify the bad disk and a working old disk + + .. code-block:: sh + + ZPOOL_VDEV_NAME_PATH=1 zpool status + + pool: bpool + status: DEGRADED + action: Replace the device using 'zpool replace'. + ... + config: bpool + mirror-0 + 2387489723748 UNAVAIL 0 0 0 was /dev/disk/by-id/ata-BAD-part2 + /dev/disk/by-id/ata-disk_known_good-part2 ONLINE 0 0 0 + +#. Store the bad disk and a working old disk in a variable, omit the partition number ``-partN`` + + .. code-block:: sh + + disk_to_replace=/dev/disk/by-id/ata-disk_to_replace + disk_known_good=/dev/disk/by-id/ata-disk_known_good + +#. Identify the new disk + + .. code-block:: sh + + find /dev/disk/by-id/ + + /dev/disk/by-id/ata-disk_known_good-part1 + /dev/disk/by-id/ata-disk_known_good-part2 + ... + /dev/disk/by-id/ata-disk_known_good-part5 + /dev/disk/by-id/ata-disk_new <-- new disk w/o partition table + +#. 
Store the new disk in a variable + + .. code-block:: sh + + disk_new=/dev/disk/by-id/ata-disk_new + +#. Create partition table on ``"${disk_new}"``, refer to respective + installation pages for details. + +#. Format and mount EFI system partition, refer to respective + installation pages for details. + +#. Replace failed disk in ZFS pool + + .. code-block:: sh + + zpool offline bpool "${disk_to_replace}"-part2 + zpool offline rpool "${disk_to_replace}"-part3 + zpool replace bpool "${disk_to_replace}"-part2 "${disk_new}"-part2 + zpool replace rpool "${disk_to_replace}"-part3 "${disk_new}"-part3 + zpool online bpool "${disk_new}"-part2 + zpool online rpool "${disk_new}"-part3 + + Let the new disk resilver. Check status with ``zpool status``. + +#. Reinstall and mirror bootloader, refer to respective installation + pages for details. + + If you are using NixOS, see below. + +#. For NixOS, replace bad disk with new disk inside per-host + configuration file. + + .. code-block:: sh + + sed -i "s|"${disk_to_replace##*/}"|"${disk_new##*/}"|" /etc/nixos/hosts/exampleHost/default.nix + +#. Commit and apply the changed configuration, reinstall bootloader, then reboot + + .. code-block:: sh + + git -C /etc/nixos commit -asm "replace "${disk_to_replace##*/}" with "${disk_new##*/}"." + + nixos-rebuild boot --install-bootloader + + reboot + +Bootloader Recovery +------------------- + +This section is compatible with Alpine, Arch, Fedora, RHEL and NixOS +root on ZFS guides. + +Sometimes the GRUB bootloader might be accidentally overwritten, +rendering the system inaccessible. However, as long as the disk +partitions where boot pool and root pool resides remain untouched, the +system can still be booted easily. + +#. Download GRUB rescue image from `this repo + `__. + + You can also build the image yourself if you are familiar with Nix + package manager. + +#. Extract either x86_64-efi or i386-pc image from the archive. + +#. Write the image to a disk. + +#. Boot the computer from the GRUB rescue disk. Select your distro in + GRUB menu. + +#. Reinstall bootloader. See respective installation pages for details. diff --git a/_sources/License.rst.txt b/_sources/License.rst.txt new file mode 100644 index 000000000..d2adea9a2 --- /dev/null +++ b/_sources/License.rst.txt @@ -0,0 +1,41 @@ +License +======= + +- The OpenZFS software is licensed under the Common Development and Distribution License + (`CDDL `__) unless otherwise noted. + +- The OpenZFS documentation content is licensed under a Creative Commons Attribution-ShareAlike + license (`CC BY-SA 3.0 `__) + unless otherwise noted. + +- OpenZFS is an associated project of SPI (`Software in the Public Interest + `__). SPI is a 501(c)(3) nonprofit + organization which handles the donations, finances, and legal holdings of the project. + +.. note:: + The Linux Kernel is licensed under the GNU General Public License + Version 2 (`GPLv2 `__). While + both (OpenZFS and Linux Kernel) are free open source licenses they are + restrictive licenses. The combination of them causes problems because it + prevents using pieces of code exclusively available under one license + with pieces of code exclusively available under the other in the same binary. + In the case of the Linux Kernel, this prevents us from distributing OpenZFS + as part of the Linux Kernel binary. However, there is nothing in either license + that prevents distributing it in the form of a binary module or in the form + of source code. 
+ + Additional reading and opinions: + + - `Software Freedom Law + Center `__ + - `Software Freedom + Conservancy `__ + - `Free Software + Foundation `__ + - `Encouraging closed source + modules `__ + +CC BY-SA 3.0: |Creative Commons License| + +.. |Creative Commons License| image:: https://i.creativecommons.org/l/by-sa/3.0/88x31.png + :target: http://creativecommons.org/licenses/by-sa/3.0/ diff --git a/_sources/Performance and Tuning/Async Write.rst.txt b/_sources/Performance and Tuning/Async Write.rst.txt new file mode 100644 index 000000000..692b72d3c --- /dev/null +++ b/_sources/Performance and Tuning/Async Write.rst.txt @@ -0,0 +1,36 @@ +Async Writes +============ + +The number of concurrent operations issued for the async write I/O class +follows a piece-wise linear function defined by a few adjustable points. + +:: + + | o---------| <-- zfs_vdev_async_write_max_active + ^ | /^ | + | | / | | + active | / | | + I/O | / | | + count | / | | + | / | | + |-------o | | <-- zfs_vdev_async_write_min_active + 0|_______^______|_________| + 0% | | 100% of zfs_dirty_data_max + | | + | `-- zfs_vdev_async_write_active_max_dirty_percent + `--------- zfs_vdev_async_write_active_min_dirty_percent + +Until the amount of dirty data exceeds a minimum percentage of the dirty +data allowed in the pool, the I/O scheduler will limit the number of +concurrent operations to the minimum. As that threshold is crossed, the +number of concurrent operations issued increases linearly to the maximum +at the specified maximum percentage of the dirty data allowed in the +pool. + +Ideally, the amount of dirty data on a busy pool will stay in the sloped +part of the function between +zfs_vdev_async_write_active_min_dirty_percent and +zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the maximum +percentage, this indicates that the rate of incoming data is greater +than the rate that the backend storage can handle. In this case, we must +further throttle incoming writes, as described in the next section. diff --git a/_sources/Performance and Tuning/Hardware.rst.txt b/_sources/Performance and Tuning/Hardware.rst.txt new file mode 100644 index 000000000..0ca9f0280 --- /dev/null +++ b/_sources/Performance and Tuning/Hardware.rst.txt @@ -0,0 +1,808 @@ +Hardware +******** + +.. contents:: Table of Contents + :local: + +Introduction +============ + +Storage before ZFS involved rather expensive hardware that was unable to +protect against silent corruption and did not scale very well. The +introduction of ZFS has enabled people to use far less expensive +hardware than previously used in the industry with superior scaling. +This page attempts to provide some basic guidance to people buying +hardware for use in ZFS-based servers and workstations. + +Hardware that adheres to this guidance will enable ZFS to reach its full +potential for performance and reliability. Hardware that does not adhere +to it will serve as a handicap. Unless otherwise stated, such handicaps +apply to all storage stacks and are by no means specific to ZFS. Systems +built using competing storage stacks will also benefit from these +suggestions. + +.. _bios_cpu_microcode_updates: + +BIOS / CPU microcode updates +============================ + +Running the latest BIOS and CPU microcode is highly recommended. + +Background +---------- + +Computer microprocessors are very complex designs that often have bugs, +which are called errata. Modern microprocessors are designed to utilize +microcode. 
This puts part of the hardware design into quasi-software +that can be patched without replacing the entire chip. Errata are often +resolved through CPU microcode updates. These are often bundled in BIOS +updates. In some cases, the BIOS interactions with the CPU through +machine registers can be modified to fix things with the same microcode. +If a newer microcode is not bundled as part of a BIOS update, it can +often be loaded by the operating system bootloader or the operating +system itself. + +.. _ecc_memory: + +ECC Memory +========== + +Bit flips can have fairly dramatic consequences for all computer +filesystems and ZFS is no exception. No technique used in ZFS (or any +other filesystem) is capable of protecting against bit flips. +Consequently, ECC Memory is highly recommended. + +.. _background_1: + +Background +---------- + +Ordinary background radiation will randomly flip bits in computer +memory, which causes undefined behavior. These are known as "bit flips". +Each bit flip can have any of four possible consequences depending on +which bit is flipped: + +- Bit flips can have no effect. + + - Bit flips that have no effect occur in unused memory. + +- Bit flips can cause runtime failures. + + - This is the case when a bit flip occurs in something read from + disk. + - Failures are typically observed when program code is altered. + - If the bit flip is in a routine within the system's kernel or + /sbin/init, the system will likely crash. Otherwise, reloading the + affected data can clear it. This is typically achieved by a + reboot. + +- It can cause data corruption. + + - This is the case when the bit is in use by data being written to + disk. + - If the bit flip occurs before ZFS' checksum calculation, ZFS will + not realize that the data is corrupt. + - If the bit flip occurs after ZFS' checksum calculation, but before + write-out, ZFS will detect it, but it might not be able to correct + it. + +- It can cause metadata corruption. + + - This is the case when a bit flips in an on-disk structure being + written to disk. + - If the bit flip occurs before ZFS' checksum calculation, ZFS will + not realize that the metadata is corrupt. + - If the bit flip occurs after ZFS' checksum calculation, but before + write-out, ZFS will detect it, but it might not be able to correct + it. + - Recovery from such an event will depend on what was corrupted. In + the worst, case, a pool could be rendered unimportable. + + - All filesystems have poor reliability in their absolute worst + case bit-flip failure scenarios. Such scenarios should be + considered extraordinarily rare. + +.. _drive_interfaces: + +Drive Interfaces +================ + +.. _sas_versus_sata: + +SAS versus SATA +--------------- + +ZFS depends on the block device layer for storage. Consequently, ZFS is +affected by the same things that affect other filesystems, such as +driver support and non-working hardware. Consequently, there are a few +things to note: + +- Never place SATA disks into a SAS expander without a SAS interposer. + + - If you do this and it does work, it is the exception, rather than + the rule. + +- Do not expect SAS controllers to be compatible with SATA port + multipliers. + + - This configuration is typically not tested. + - The disks could be unrecognized. + +- Support for SATA port multipliers is inconsistent across OpenZFS + platforms + + - Linux drivers generally support them. + - Illumos drivers generally do not support them. 
+  - FreeBSD drivers are somewhere between Linux and Illumos in terms
+    of support.
+
+.. _usb_hard_drives_andor_adapters:
+
+USB Hard Drives and/or Adapters
+-------------------------------
+
+These have problems involving sector size reporting, SMART passthrough,
+the ability to set ERC and other areas. ZFS will perform as well on such
+devices as they allow, but try to avoid them. They
+should not be expected to have the same up-time as SAS and SATA drives
+and should be considered unreliable.
+
+Controllers
+===========
+
+The ideal storage controller for ZFS has the following attributes:
+
+- Driver support on major OpenZFS platforms
+
+  - Stability is important.
+
+- High per-port bandwidth
+
+  - PCI Express interface bandwidth divided by the number of ports
+
+- Low cost
+
+  - Support for RAID, Battery Backup Units and hardware write caches
+    is unnecessary.
+
+Marc Bevand's blog post `From 32 to 2 ports: Ideal SATA/SAS Controllers
+for ZFS & Linux MD RAID `__ contains an
+excellent list of storage controllers that meet these criteria. He
+regularly updates it as newer controllers become available.
+
+.. _hardware_raid_controllers:
+
+Hardware RAID controllers
+-------------------------
+
+Hardware RAID controllers should not be used with ZFS. While ZFS will
+likely be more reliable than other filesystems on Hardware RAID, it will
+not be as reliable as it would be on its own.
+
+- Hardware RAID will limit opportunities for ZFS to perform self
+  healing on checksum failures. When ZFS does RAID-Z or mirroring, a
+  checksum failure on one disk can be corrected by treating the disk
+  containing the sector as bad for the purpose of reconstructing the
+  original information. This cannot be done when a RAID controller
+  handles the redundancy, unless ZFS itself stores a duplicate copy,
+  which is the case when the corruption involves metadata, the
+  ``copies`` property is set, or the RAID array is part of a
+  mirror/raid-z vdev within ZFS.
+
+- Sector size information is not necessarily passed correctly by
+  hardware RAID on RAID 1. Sector size information cannot be passed
+  correctly on RAID 5/6.
+  Hardware RAID 1 is more likely to experience read-modify-write
+  overhead from partial sector writes, while Hardware RAID 5/6 will almost
+  certainly suffer from partial stripe writes (i.e. the RAID write
+  hole). ZFS using the disks natively allows it to obtain the
+  sector size information reported by the disks to avoid
+  read-modify-write on sectors, while ZFS avoids partial stripe writes
+  on RAID-Z by design from using copy-on-write.
+
+  - There can be sector alignment problems on ZFS when a drive
+    misreports its sector size. Such drives are typically NAND-flash
+    based solid state drives and older SATA drives from the advanced
+    format (4K sector size) transition before Windows XP EoL occurred.
+    This can be :ref:`manually corrected ` at
+    vdev creation.
+  - It is possible for the RAID header to cause misalignment of sector
+    writes on RAID 1 by starting the array within a sector on an
+    actual drive, such that manual correction of sector alignment at
+    vdev creation does not solve the problem.
+
+- RAID controller failures can require that the controller be replaced with
+  the same model, or in less extreme cases, a model from the same
+  manufacturer. Using ZFS by itself allows any controller to be used.
+ +- If a hardware RAID controller's write cache is used, an additional + failure point is introduced that can only be partially mitigated by + additional complexity from adding flash to save data in power loss + events. The data can still be lost if the battery fails when it is + required to survive a power loss event or there is no flash and power + is not restored in a timely manner. The loss of the data in the write + cache can severely damage anything stored on a RAID array when many + outstanding writes are cached. In addition, all writes are stored in + the cache rather than just synchronous writes that require a write + cache, which is inefficient, and the write cache is relatively small. + ZFS allows synchronous writes to be written directly to flash, which + should provide similar acceleration to hardware RAID and the ability + to accelerate many more in-flight operations. + +- Behavior during RAID reconstruction when silent corruption damages + data is undefined. There are reports of RAID 5 and 6 arrays being + lost during reconstruction when the controller encounters silent + corruption. ZFS' checksums allow it to avoid this situation by + determining whether enough information exists to reconstruct data. If + not, the file is listed as damaged in zpool status and the + system administrator has the opportunity to restore it from a backup. + +- IO response times will be reduced whenever the OS blocks on IO + operations because the system CPU blocks on a much weaker embedded + CPU used in the RAID controller. This lowers IOPS relative to what + ZFS could have achieved. + +- The controller's firmware is an additional layer of complexity that + cannot be inspected by arbitrary third parties. The ZFS source code + is open source and can be inspected by anyone. + +- If multiple RAID arrays are formed by the same controller and one + fails, the identifiers provided by the arrays exposed to the OS might + become inconsistent. Giving the drives directly to the OS allows this + to be avoided via naming that maps to a unique port or unique drive + identifier. + + - e.g. If you have arrays A, B, C and D; array B dies, the + interaction between the hardware RAID controller and the OS might + rename arrays C and D to look like arrays B and C respectively. + This can fault pools verbatim imported from the cachefile. + - Not all RAID controllers behave this way. This issue has + been observed on both Linux and FreeBSD when system administrators + used single drive RAID 0 arrays, however. It has also been observed + with controllers from different vendors. + +One might be inclined to try using single-drive RAID 0 arrays to try to +use a RAID controller like a HBA, but this is not recommended for many +of the reasons listed for other hardware RAID types. It is best to use a +HBA instead of a RAID controller, for both performance and reliability. + +.. _hard_drives: + +Hard drives +=========== + +.. _sector_size: + +Sector Size +----------- + +Historically, all hard drives had 512-byte sectors, with the exception +of some SCSI drives that could be modified to support slightly larger +sectors. In 2009, the industry migrated from 512-byte sectors to +4096-byte "Advanced Format" sectors. Since Windows XP is not compatible +with 4096-byte sectors or drives larger than 2TB, some of the first +advanced format drives implemented hacks to maintain Windows XP +compatibility. + +- The first advanced format drives on the market misreported their + sector size as 512-bytes for Windows XP compatibility. 
As of 2013, it
+  is believed that such hard drives are no longer in production.
+  Advanced format hard drives made during or after this time should
+  report their true physical sector size.
+- Drives storing 2TB and smaller might have a jumper that can be set to
+  map all sectors off by 1. This is to provide proper alignment for
+  Windows XP, which started its first partition at sector 63. This
+  jumper setting should be off when using such drives with ZFS.
+
+As of 2014, there are still 512-byte and 4096-byte drives on the market,
+but they are known to properly identify themselves unless behind a USB
+to SATA controller. Replacing a 512-byte sector drive with a 4096-byte
+sector drive in a vdev created with 512-byte sector drives will
+adversely affect performance. Replacing a 4096-byte sector drive with a
+512-byte sector drive will have no negative effect on performance.
+
+.. _error_recovery_control:
+
+Error recovery control
+----------------------
+
+ZFS is said to be able to use cheap drives. This was true when it was
+introduced and hard drives supported error recovery control. Since ZFS'
+introduction, error recovery control has been removed from low-end
+drives from certain manufacturers, most notably Western Digital.
+Consistent performance requires hard drives that support error recovery
+control.
+
+.. _background_2:
+
+Background
+~~~~~~~~~~
+
+Hard drives store data using small polarized regions on a magnetic
+surface. Reading from and/or writing to this surface poses a few
+reliability problems. One is that imperfections in the surface can
+corrupt bits. Another is that vibrations can cause drive heads to miss
+their targets. Consequently, hard drive sectors are composed of three
+regions:
+
+- A sector number
+- The actual data
+- ECC
+
+The sector number and ECC enable hard drives to detect and respond to
+such events. When either event occurs during a read, hard drives will
+retry the read many times until they either succeed or conclude that the
+data cannot be read. The latter case can take a substantial amount of
+time and consequently, IO to the drive will stall.
+
+Enterprise hard drives and some consumer hard drives implement a feature
+called Time-Limited Error Recovery (TLER) by Western Digital, Error
+Recovery Control (ERC) by Seagate and Command Completion Time Limit by
+Hitachi and Samsung, which permits the time drives are willing to spend
+on such events to be limited by the system administrator.
+
+Drives that lack such functionality can be expected to have arbitrarily
+high limits. Several minutes is not impossible. Drives with this
+functionality typically default to 7 seconds. ZFS does not currently
+adjust this setting on drives. However, it is advisable to write a
+script to set the error recovery time to a low value, such as 0.1
+seconds, until ZFS is modified to control it. This must be done on every
+boot.
+
+.. _rpm_speeds:
+
+RPM Speeds
+----------
+
+High RPM drives have lower seek times, which is historically regarded as
+being desirable. They increase cost and sacrifice storage density in
+order to achieve what is typically no more than a factor of 6
+improvement over their lower RPM counterparts.
+
+To provide some numbers, a 15k RPM drive from a major manufacturer is
+rated for 3.4 millisecond average read and 3.9 millisecond average
+write. Presumably, this number assumes that the target sector is at most
+half the number of drive tracks away from the head and half the disk
+away. Being even further away is worst-case 2 times slower.
Manufacturer +numbers for 7200 RPM drives are not available, but they average 13 to 16 +milliseconds in empirical measurements. 5400 RPM drives can be expected +to be slower. + +ARC and ZIL are able to mitigate much of the benefit of lower seek +times. Far larger increases in IOPS performance can be obtained by +adding additional RAM for ARC, L2ARC devices and SLOG devices. Even +higher increases in performance can be obtained by replacing hard drives +with solid state storage entirely. Such things are typically more cost +effective than high RPM drives when considering IOPS. + +.. _command_queuing: + +Command Queuing +--------------- + +Drives with command queues are able to reorder IO operations to increase +IOPS. This is called Native Command Queuing on SATA and Tagged Command +Queuing on PATA/SCSI/SAS. ZFS stores objects in metaslabs and it can use +several metastabs at any given time. Consequently, ZFS is not only +designed to take advantage of command queuing, but good ZFS performance +requires command queuing. Almost all drives manufactured within the past +10 years can be expected to support command queuing. The exceptions are: + +- Consumer PATA/IDE drives +- First generation SATA drives, which used IDE to SATA translation + chips, from 2003 to 2004. +- SATA drives operating under IDE emulation that was configured in the + system BIOS. + +Each OpenZFS system has different methods for checking whether command +queuing is supported. On Linux, ``hdparm -I /path/to/device \| grep +Queue`` is used. On FreeBSD, ``camcontrol identify $DEVICE`` is used. + +.. _nand_flash_ssds: + +NAND Flash SSDs +=============== + +As of 2014, Solid state storage is dominated by NAND-flash and most +articles on solid state storage focus on it exclusively. As of 2014, the +most popular form of flash storage used with ZFS involve drives with +SATA interfaces. Enterprise models with SAS interfaces are beginning to +become available. + +As of 2017, Solid state storage using NAND-flash with PCI-E interfaces +are widely available on the market. They are predominantly enterprise +drives that utilize a NVMe interface that has lower overhead than the +ATA used in SATA or SCSI used in SAS. There is also an interface known +as M.2 that is primarily used by consumer SSDs, although not necessarily +limited to them. It can provide electrical connectivity for multiple +buses, such as SATA, PCI-E and USB. M.2 SSDs appear to use either SATA +or NVME. + +.. _nvme_low_level_formatting: + +NVMe low level formatting +------------------------- + +Many NVMe SSDs support both 512-byte sectors and 4096-byte sectors. They +often ship with 512-byte sectors, which are less performant than +4096-byte sectors. Some also support metadata for T10/DIF CRC to try to +improve reliability, although this is unnecessary with ZFS. + +NVMe drives should be +`formatted `__ +to use 4096-byte sectors without metadata prior to being given to ZFS +for best performance unless they indicate that 512-byte sectors are as +performant as 4096-byte sectors, although this is unlikely. Lower +numbers in the Rel_Perf of Supported LBA Sizes from ``smartctl -a +/dev/$device_namespace`` (for example ``smartctl -a /dev/nvme1n1``) +indicate higher performance low level formats, with 0 being the best. +The current formatting will be marked by a plus sign under the format +Fmt. + +You may format a drive using ``nvme format /dev/nvme1n1 -l $ID``. The $ID +corresponds to the Id field value from the Supported LBA Sizes SMART +information. + +.. 
_power_failure_protection: + +Power Failure Protection +------------------------ + +.. _background_3: + +Background +~~~~~~~~~~ + +On-flash data structures are highly complex and traditionally have been +highly vulnerable to corruption. In the past, such corruption would +result in the loss of \*all\* drive data and an event such as a PSU +failure could result in multiple drives simultaneously failing. Since +the drive firmware is not available for review, the traditional +conclusion was that all drives that lack hardware features to avoid +power failure events cannot be trusted, which was found to be the case +multiple times in the +past [#ssd_analysis]_ [#ssd_analysis2]_ [#ssd_analysis3]_. +Discussion of power failures bricking NAND flash SSDs appears to have +vanished from literature following the year 2015. SSD manufacturers now +claim that firmware power loss protection is robust enough to provide +equivalent protection to hardware power loss protection. `Kingston is one +example `__. +Firmware power loss protection is used to guarantee the protection of +flushed data and the drives’ own metadata, which is all that filesystems +such as ZFS need. + +However, those that either need or want strong guarantees that firmware +bugs are unlikely to be able to brick drives following power loss events +should continue to use drives that provide hardware power loss +protection. The basic concept behind how hardware power failure +protection works has been `documented by +Intel `__ +for those who wish to read about the details. As of 2020, use of +hardware power loss protection is now a feature solely of enterprise +SSDs that attempt to protect unflushed data in addition to drive +metadata and flushed data. This additional protection beyond protecting +flushed data and the drive metadata provides no additional benefit to +ZFS, but it does not hurt it. + +It should also be noted that drives in data centers and laptops are +unlikely to experience power loss events, reducing the usefulness of +hardware power loss protection. This is especially the case in +datacenters where redundant power, UPS power and the use of IPMI to do +forced reboots should prevent most drives from experiencing power loss +events. + +Lists of drives that provide hardware power loss protection are +maintained below for those who need/want it. Since ZFS, like other +filesystems, only requires power failure protection for flushed data and +drive metadata, older drives that only protect these things are included +on the lists. + +.. _nvme_drives_with_power_failure_protection: + +NVMe drives with power failure protection +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +A non-exhaustive list of NVMe drives with power failure protection is as +follows: + +- Intel 750 +- Intel DC P3500/P3600/P3608/P3700 +- Micron 7300/7400/7450 PRO/MAX +- Samsung PM963 (M.2 form factor) +- Samsung PM1725/PM1725a +- Samsung XS1715 +- Toshiba ZD6300 +- Seagate Nytro 5000 M.2 (XP1920LE30002 tested; **read notes below + before buying**) + + - Inexpensive 22110 M.2 enterprise drive using consumer MLC that is + optimized for read mostly workloads. It is not a good choice for a + SLOG device, which is a write mostly workload. + - The + `manual `__ + for this drive specifies airflow requirements. If the drive does + not receive sufficient airflow from case fans, it will overheat at + idle. 
It's thermal throttling will severely degrade performance + such that write throughput performance will be limited to 1/10 of + the specification and read latencies will reach several hundred + milliseconds. Under continuous load, the device will continue to + become hotter until it suffers a "degraded reliability" event + where all data on at least one NVMe namespace is lost. The NVMe + namespace is then unusable until a secure erase is done. Even with + sufficient airflow under normal circumstances, data loss is + possible under load following the failure of fans in an enterprise + environment. Anyone deploying this into production in an + enterprise environment should be mindful of this failure mode. + - Those who wish to use this drive in a low airflow situation can + workaround this failure mode by placing a passive heatsink such as + `this `__ on the + NAND flash controller. It is the chip under the sticker closest to + the capacitors. This was tested by placing the heatsink over the + sticker (as removing it was considered undesirable). The heatsink + will prevent the drive from overheating to the point of data loss, + but it will not fully alleviate the overheating situation under + load without active airflow. A scrub will cause it to overheat + after a few hundred gigabytes are read. However, the thermal + throttling will quickly cool the drive from 76 degrees Celsius to + 74 degrees Celsius, restoring performance. + + - It might be possible to use the heatsink in an enterprise + environment to provide protection against data loss following + fan failures. However, this was not evaluated. Furthermore, + operating temperatures for consumer NAND flash should be at or + above 40 degrees Celsius for long term data integrity. + Therefore, the use of a heatsink to provide protection against + data loss following fan failures in an enterprise environment + should be evaluated before deploying drives into production to + ensure that the drive is not overcooled. + +.. _sas_drives_with_power_failure_protection: + +SAS drives with power failure protection +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +A non-exhaustive list of SAS drives with power failure protection is as +follows: + +- Samsung PM1633/PM1633a +- Samsung SM1625 +- Samsung PM853T +- Toshiba PX05SHB***/PX04SHB***/PX04SHQ**\* +- Toshiba PX05SLB***/PX04SLB***/PX04SLQ**\* +- Toshiba PX05SMB***/PX04SMB***/PX04SMQ**\* +- Toshiba PX05SRB***/PX04SRB***/PX04SRQ**\* +- Toshiba PX05SVB***/PX04SVB***/PX04SVQ**\* + +.. _sata_drives_with_power_failure_protection: + +SATA drives with power failure protection +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +A non-exhaustive list of SATA drives with power failure protection is as +follows: + +- Crucial MX100/MX200/MX300 +- Crucial M500/M550/M600 +- Intel 320 + + - Early reports claimed that the 330 and 335 had power failure + protection too, `but they do + not `__. + +- Intel 710 +- Intel 730 +- Intel DC S3500/S3510/S3610/S3700/S3710 +- Kingston DC500R/DC500M +- Micron 5210 Ion + + - First QLC drive on the list. High capacity with a low price per + gigabyte. + +- Samsung PM863/PM863a +- Samsung SM843T (do not confuse with SM843) +- Samsung SM863/SM863a +- Samsung 845DC Evo +- Samsung 845DC Pro + + - `High sustained write + IOPS `__ + +- Toshiba HK4E/HK3E2 +- Toshiba HK4R/HK3R2/HK3R + +.. 
_criteriaprocess_for_inclusion_into_these_lists: + +Criteria/process for inclusion into these lists +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +These lists have been compiled on a volunteer basis by OpenZFS +contributors (mainly Richard Yao) from trustworthy sources of +information. The lists are intended to be vendor neutral and are not +intended to benefit any particular manufacturer. Any perceived bias +toward any manufacturer is caused by a lack of awareness and a lack of +time to research additional options. Confirmation of the presence of +adequate power loss protection by a reliable source is the only +requirement for inclusion into this list. Adequate power loss protection +means that the drive must protect both its own internal metadata and all +flushed data. Protection of unflushed data is irrelevant and therefore +not a requirement. ZFS only expects storage to protect flushed data. +Consequently, solid state drives whose power loss protection only +protects flushed data is sufficient for ZFS to ensure that data remains +safe. + +Anyone who believes an unlisted drive to provide adequate power failure +protection may contact the :ref:`mailing_lists` with +a request for inclusion and substantiation for the claim that power +failure protection is provided. Examples of substantiation include +pictures of drive internals showing the presence of capacitors, +statements by well regarded independent review sites such as Anandtech +and manufacturer specification sheets. The latter are accepted on the +honor system until a manufacturer is found to misstate reality on the +protection of the drives' own internal metadata structures and/or the +protection of flushed data. Thus far, all manufacturers have been +honest. + +.. _flash_pages: + +Flash pages +----------- + +The smallest unit on a NAND chip that can be written is a flash page. +The first NAND-flash SSDs on the market had 4096-byte pages. Further +complicating matters is that the the page size has been doubled twice +since then. NAND flash SSDs **should** report these pages as being +sectors, but so far, all of them incorrectly report 512-byte sectors for +Windows XP compatibility. The consequence is that we have a similar +situation to what we had with early advanced format hard drives. + +As of 2014, most NAND-flash SSDs on the market have 8192-byte page +sizes. However, models using 128-Gbit NAND from certain manufacturers +have a 16384-byte page size. Maximum performance requires that vdevs be +created with correct ashift values (13 for 8192-byte and 14 for +16384-byte). However, not all OpenZFS platforms support this. The Linux +port supports ashift=13, while others are limited to ashift=12 +(4096-byte). + +As of 2017, NAND-flash SSDs are tuned for 4096-byte IOs. Matching the +flash page size is unnecessary and ashift=12 is usually the correct +choice. Public documentation on flash page size is also nearly +non-existent. + +.. _ata_trim_scsi_unmap: + +ATA TRIM / SCSI UNMAP +--------------------- + +It should be noted that this is a separate case from +discard on zvols or hole punching on filesystems. Those work regardless +of whether ATA TRIM / SCSI UNMAP is sent to the actual block devices. + +.. _ata_trim_performance_issues: + +ATA TRIM Performance Issues +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ATA TRIM command in SATA 3.0 and earlier is a non-queued command. 
+Issuing a TRIM command on a SATA drive conforming to SATA 3.0 or earlier
+will cause the drive to drain its IO queue and stop servicing requests
+until it finishes, which hurts performance. SATA 3.1 removed this
+limitation, but very few SATA drives on the market are conformant to
+SATA 3.1 and it is difficult to distinguish them from SATA 3.0 drives.
+At the same time, SCSI UNMAP has no such problems.
+
+.. _optane_3d_xpoint_ssds:
+
+Optane / 3D XPoint SSDs
+=======================
+
+These are SSDs with far better latencies and write endurance than NAND
+flash SSDs. They are byte addressable, such that ashift=9 is fine for
+use on them. Unlike NAND flash SSDs, they do not require any special
+power failure protection circuitry for reliability. There is also no
+need to run TRIM on them. However, they cost more per GB than NAND flash
+(as of 2020). The enterprise models make excellent SLOG devices. Here is
+a list of models that are known to perform well:
+
+- `Intel DC
+  P4800X `__
+
+- `Intel DC
+  P4801X `__
+
+- `Intel DC
+  P1600X `__
+
+Note that SLOG devices rarely have more than 4GB in use at any given
+time, so the smaller sized devices are generally the best choice in
+terms of cost, with larger sizes giving no benefit. Larger sizes could
+be a good choice for other vdev types, depending on performance needs
+and cost considerations.
+
+Power
+=====
+
+Ensuring that computers are properly grounded is highly recommended.
+There have been cases in user homes where machines experienced random
+failures when plugged into power receptacles that had open grounds (i.e.
+no ground wire at all). This can cause random failures on any computer
+system, whether it uses ZFS or not.
+
+Power should also be relatively stable. Large dips in voltages from
+brownouts are preferably avoided through the use of UPS units or line
+conditioners. Systems subject to unstable power that do not outright
+shutdown can exhibit undefined behavior. PSUs with longer hold-up times
+should be able to provide partial protection against this, but hold-up
+times are often undocumented and are not a substitute for a UPS or line
+conditioner.
+
+.. _pwr_ok_signal:
+
+PWR_OK signal
+-------------
+
+PSUs are supposed to deassert a PWR_OK signal to indicate that provided
+voltages are no longer within the rated specification. This should force
+an immediate shutdown. However, the system clock of a developer
+workstation was observed to significantly deviate from the expected
+value during a series of ~1 second brownouts. This machine
+did not use a UPS at the time. However, the PWR_OK mechanism should have
+protected against this. The observation of the PWR_OK signal failing to
+force a shutdown with adverse consequences (to the system clock in this
+case) suggests that the PWR_OK mechanism is not a strict guarantee.
+
+.. _psu_hold_up_times:
+
+PSU Hold-up Times
+-----------------
+
+A PSU hold-up time is the amount of time that a PSU can continue to
+output power at maximum output within standard voltage tolerances
+following the loss of input power. This is important for supporting UPS
+units because `the transfer
+time `__
+taken by a standard UPS to supply power from its battery can leave
+machines without power for "5-12 ms". `Intel's ATX Power Supply design
+guide `__
+specifies a hold-up time of 17 milliseconds at maximum continuous
+output. The hold-up time is an inverse function of how much power is
+being output by the PSU, with lower power output increasing hold-up
+times.
+ +Capacitor aging in PSUs will lower the hold-up time below what it was +when new, which could cause reliability issues as equipment ages. +Machines using substandard PSUs with hold-up times below the +specification therefore require higher end UPS units for protection to +ensure that the transfer time does not exceed the hold-up time. A +hold-up time below the transfer time during a transfer to battery power +can cause undefined behavior should the PWR_OK signal not become +deasserted to force the machine to power off. + +If in doubt, use a double conversion UPS unit. Double conversion UPS +units always run off the battery, such that the transfer time is 0. This +is unless they are high efficiency models that are hybrids between +standard UPS units and double conversion UPS units, although these are +reported to have much lower transfer times than standard PSUs. You could +also contact your PSU manufacturer for the hold up time specification, +but if reliability for years is a requirement, you should use a higher +end UPS with a low transfer time. + +Note that double conversion units are at most 94% efficient unless they +support a high efficiency mode, which adds latency to the time to +transition to battery power. + +.. _ups_batteries: + +UPS batteries +------------- + +The lead acid batteries in UPS units generally need to be replaced +regularly to ensure that they provide power during power outages. For +home systems, this is every 3 to 5 years, although this varies with +temperature [#ups_temp]_. For +enterprise systems, contact your vendor. + + +.. rubric:: Footnotes + +.. [#ssd_analysis] +.. [#ssd_analysis2] +.. [#ssd_analysis3] +.. [#ups_temp] diff --git a/_sources/Performance and Tuning/Module Parameters.rst.txt b/_sources/Performance and Tuning/Module Parameters.rst.txt new file mode 100644 index 000000000..0467e12fe --- /dev/null +++ b/_sources/Performance and Tuning/Module Parameters.rst.txt @@ -0,0 +1,9557 @@ +Module Parameters +================= + +Most of the ZFS kernel module parameters are accessible in the SysFS +``/sys/module/zfs/parameters`` directory. Current values can be observed +by + +.. code:: shell + + cat /sys/module/zfs/parameters/PARAMETER + +Many of these can be changed by writing new values. These are denoted by +Change|Dynamic in the PARAMETER details below. + +.. code:: shell + + echo NEWVALUE >> /sys/module/zfs/parameters/PARAMETER + +If the parameter is not dynamically adjustable, an error can occur and +the value will not be set. It can be helpful to check the permissions +for the PARAMETER file in SysFS. + +In some cases, the parameter must be set prior to loading the kernel +modules or it is desired to have the parameters set automatically at +boot time. For many distros, this can be accomplished by creating a file +named ``/etc/modprobe.d/zfs.conf`` containing a text line for each +module parameter using the format: + +:: + + # change PARAMETER for workload XZY to solve problem PROBLEM_DESCRIPTION + # changed by YOUR_NAME on DATE + options zfs PARAMETER=VALUE + +Some parameters related to ZFS operations are located in module +parameters other than in the ``zfs`` kernel module. These are documented +in the individual parameter description. Unless otherwise noted, the +tunable applies to the ``zfs`` kernel module. For example, the ``icp`` +kernel module parameters are visible in the +``/sys/module/icp/parameters`` directory and can be set by default at +boot time by changing the ``/etc/modprobe.d/icp.conf`` file. 
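+
+As a concrete, illustrative sketch (the parameter is real, but the value
+shown is only an example and should be sized for your own system), capping
+the ARC at 8 GiB with the ``zfs_arc_max`` parameter might look like this:
+
+.. code:: shell
+
+   # check the current value (0 means ZFS computes its own default)
+   cat /sys/module/zfs/parameters/zfs_arc_max
+
+   # change it at runtime; zfs_arc_max is dynamically adjustable
+   echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max
+
+   # make the change persistent across reboots
+   echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf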
+ +See the man page for *modprobe.d* for more information. + +Manual Pages +------------ + +The `zfs(4) <../man/4/zfs.4.html>`_ and `spl(4) <../man/4/spl.4.html>`_ man +pages (previously ``zfs-`` and ``spl-module-parameters(5)``, respectively, +prior to OpenZFS 2.1) contain brief descriptions of +the module parameters. Alas, man pages are not as suitable for quick +reference as documentation pages. This page is intended to be a better +cross-reference and capture some of the wisdom of ZFS developers and +practitioners. + +ZFS Module Parameters +--------------------- + +The ZFS kernel module, ``zfs.ko``, parameters are detailed below. + +To observe the list of parameters along with a short synopsis of each +parameter, use the ``modinfo`` command: + +.. code:: bash + + modinfo zfs + +Tags +---- + +The list of parameters is quite large and resists hierarchical +representation. To assist in finding relevant information +quickly, each module parameter has a "Tags" row with keywords for +frequent searches. + +ABD +~~~ + +- `zfs_abd_scatter_enabled <#zfs-abd-scatter-enabled>`__ +- `zfs_abd_scatter_max_order <#zfs-abd-scatter-max-order>`__ +- `zfs_compressed_arc_enabled <#zfs-compressed-arc-enabled>`__ + +allocation +~~~~~~~~~~ + +- `dmu_object_alloc_chunk_shift <#dmu-object-alloc-chunk-shift>`__ +- `metaslab_aliquot <#metaslab-aliquot>`__ +- `metaslab_bias_enabled <#metaslab-bias-enabled>`__ +- `metaslab_debug_load <#metaslab-debug-load>`__ +- `metaslab_debug_unload <#metaslab-debug-unload>`__ +- `metaslab_force_ganging <#metaslab-force-ganging>`__ +- `metaslab_fragmentation_factor_enabled <#metaslab-fragmentation-factor-enabled>`__ +- `zfs_metaslab_fragmentation_threshold <#zfs-metaslab-fragmentation-threshold>`__ +- `metaslab_lba_weighting_enabled <#metaslab-lba-weighting-enabled>`__ +- `metaslab_preload_enabled <#metaslab-preload-enabled>`__ +- `zfs_metaslab_segment_weight_enabled <#zfs-metaslab-segment-weight-enabled>`__ +- `zfs_metaslab_switch_threshold <#zfs-metaslab-switch-threshold>`__ +- `metaslabs_per_vdev <#metaslabs-per-vdev>`__ +- `zfs_mg_fragmentation_threshold <#zfs-mg-fragmentation-threshold>`__ +- `zfs_mg_noalloc_threshold <#zfs-mg-noalloc-threshold>`__ +- `spa_asize_inflation <#spa-asize-inflation>`__ +- `spa_load_verify_data <#spa-load-verify-data>`__ +- `spa_slop_shift <#spa-slop-shift>`__ +- `zfs_vdev_default_ms_count <#zfs-vdev-default-ms-count>`__ + +ARC +~~~ + +- `zfs_abd_scatter_min_size <#zfs-abd-scatter-min-size>`__ +- `zfs_arc_average_blocksize <#zfs-arc-average-blocksize>`__ +- `zfs_arc_dnode_limit <#zfs-arc-dnode-limit>`__ +- `zfs_arc_dnode_limit_percent <#zfs-arc-dnode-limit-percent>`__ +- `zfs_arc_dnode_reduce_percent <#zfs-arc-dnode-reduce-percent>`__ +- `zfs_arc_evict_batch_limit <#zfs-arc-evict-batch-limit>`__ +- `zfs_arc_grow_retry <#zfs-arc-grow-retry>`__ +- `zfs_arc_lotsfree_percent <#zfs-arc-lotsfree-percent>`__ +- `zfs_arc_max <#zfs-arc-max>`__ +- `zfs_arc_meta_adjust_restarts <#zfs-arc-meta-adjust-restarts>`__ +- `zfs_arc_meta_limit <#zfs-arc-meta-limit>`__ +- `zfs_arc_meta_limit_percent <#zfs-arc-meta-limit-percent>`__ +- `zfs_arc_meta_min <#zfs-arc-meta-min>`__ +- `zfs_arc_meta_prune <#zfs-arc-meta-prune>`__ +- `zfs_arc_meta_strategy <#zfs-arc-meta-strategy>`__ +- `zfs_arc_min <#zfs-arc-min>`__ +- `zfs_arc_min_prefetch_lifespan <#zfs-arc-min-prefetch-lifespan>`__ +- `zfs_arc_min_prefetch_ms <#zfs-arc-min-prefetch-ms>`__ +- `zfs_arc_min_prescient_prefetch_ms <#zfs-arc-min-prescient-prefetch-ms>`__ +- `zfs_arc_overflow_shift <#zfs-arc-overflow-shift>`__ +- 
`zfs_arc_p_dampener_disable <#zfs-arc-p-dampener-disable>`__ +- `zfs_arc_p_min_shift <#zfs-arc-p-min-shift>`__ +- `zfs_arc_pc_percent <#zfs-arc-pc-percent>`__ +- `zfs_arc_shrink_shift <#zfs-arc-shrink-shift>`__ +- `zfs_arc_sys_free <#zfs-arc-sys-free>`__ +- `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +- `dbuf_cache_shift <#dbuf-cache-shift>`__ +- `dbuf_metadata_cache_shift <#dbuf-metadata-cache-shift>`__ +- `zfs_disable_dup_eviction <#zfs-disable-dup-eviction>`__ +- `l2arc_exclude_special <#l2arc-exclude-special>`__ +- `l2arc_feed_again <#l2arc-feed-again>`__ +- `l2arc_feed_min_ms <#l2arc-feed-min-ms>`__ +- `l2arc_feed_secs <#l2arc-feed-secs>`__ +- `l2arc_headroom <#l2arc-headroom>`__ +- `l2arc_headroom_boost <#l2arc-headroom-boost>`__ +- `l2arc_meta_percent <#l2arc-meta-percent>`__ +- `l2arc_mfuonly <#l2arc-mfuonly>`__ +- `l2arc_nocompress <#l2arc-nocompress>`__ +- `l2arc_noprefetch <#l2arc-noprefetch>`__ +- `l2arc_norw <#l2arc-norw>`__ +- `l2arc_rebuild_blocks_min_l2size <#l2arc-rebuild-blocks-min-l2size>`__ +- `l2arc_rebuild_enabled <#l2arc-rebuild-enabled>`__ +- `l2arc_trim_ahead <#l2arc-trim-ahead>`__ +- `l2arc_write_boost <#l2arc-write-boost>`__ +- `l2arc_write_max <#l2arc-write-max>`__ +- `zfs_multilist_num_sublists <#zfs-multilist-num-sublists>`__ +- `spa_load_verify_shift <#spa-load-verify-shift>`__ + +channel_programs +~~~~~~~~~~~~~~~~ + +- `zfs_lua_max_instrlimit <#zfs-lua-max-instrlimit>`__ +- `zfs_lua_max_memlimit <#zfs-lua-max-memlimit>`__ + +checkpoint +~~~~~~~~~~ + +- `zfs_spa_discard_memory_limit <#zfs-spa-discard-memory-limit>`__ + +checksum +~~~~~~~~ + +- `zfs_checksums_per_second <#zfs-checksums-per-second>`__ +- `zfs_fletcher_4_impl <#zfs-fletcher-4-impl>`__ +- `zfs_nopwrite_enabled <#zfs-nopwrite-enabled>`__ +- `zfs_qat_checksum_disable <#zfs-qat-checksum-disable>`__ + +compression +~~~~~~~~~~~ + +- `zfs_compressed_arc_enabled <#zfs-compressed-arc-enabled>`__ +- `zfs_qat_compress_disable <#zfs-qat-compress-disable>`__ +- `zfs_qat_disable <#zfs-qat-disable>`__ + +CPU +~~~ + +- `zfs_fletcher_4_impl <#zfs-fletcher-4-impl>`__ +- `zfs_mdcomp_disable <#zfs-mdcomp-disable>`__ +- `spl_kmem_cache_kmem_threads <#spl-kmem-cache-kmem-threads>`__ +- `spl_kmem_cache_magazine_size <#spl-kmem-cache-magazine-size>`__ +- `spl_taskq_thread_bind <#spl-taskq-thread-bind>`__ +- `spl_taskq_thread_priority <#spl-taskq-thread-priority>`__ +- `spl_taskq_thread_sequential <#spl-taskq-thread-sequential>`__ +- `zfs_vdev_raidz_impl <#zfs-vdev-raidz-impl>`__ + +dataset +~~~~~~~ + +- `zfs_max_dataset_nesting <#zfs-max-dataset-nesting>`__ + +dbuf_cache +~~~~~~~~~~ + +- `dbuf_cache_hiwater_pct <#dbuf-cache-hiwater-pct>`__ +- `dbuf_cache_lowater_pct <#dbuf-cache-lowater-pct>`__ +- `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +- `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +- `dbuf_cache_max_shift <#dbuf-cache-max-shift>`__ +- `dbuf_cache_shift <#dbuf-cache-shift>`__ +- `dbuf_metadata_cache_max_bytes <#dbuf-metadata-cache-max-bytes>`__ +- `dbuf_metadata_cache_shift <#dbuf-metadata-cache-shift>`__ + +debug +~~~~~ + +- `zfs_dbgmsg_enable <#zfs-dbgmsg-enable>`__ +- `zfs_dbgmsg_maxsize <#zfs-dbgmsg-maxsize>`__ +- `zfs_dbuf_state_index <#zfs-dbuf-state-index>`__ +- `zfs_deadman_checktime_ms <#zfs-deadman-checktime-ms>`__ +- `zfs_deadman_enabled <#zfs-deadman-enabled>`__ +- `zfs_deadman_failmode <#zfs-deadman-failmode>`__ +- `zfs_deadman_synctime_ms <#zfs-deadman-synctime-ms>`__ +- `zfs_deadman_ziotime_ms <#zfs-deadman-ziotime-ms>`__ +- `zfs_flags <#zfs-flags>`__ +- `zfs_free_leak_on_eio 
<#zfs-free-leak-on-eio>`__ +- `zfs_nopwrite_enabled <#zfs-nopwrite-enabled>`__ +- `zfs_object_mutex_size <#zfs-object-mutex-size>`__ +- `zfs_read_history <#zfs-read-history>`__ +- `zfs_read_history_hits <#zfs-read-history-hits>`__ +- `spl_panic_halt <#spl-panic-halt>`__ +- `zfs_txg_history <#zfs-txg-history>`__ +- `zfs_zevent_cols <#zfs-zevent-cols>`__ +- `zfs_zevent_console <#zfs-zevent-console>`__ +- `zfs_zevent_len_max <#zfs-zevent-len-max>`__ +- `zil_replay_disable <#zil-replay-disable>`__ +- `zio_deadman_log_all <#zio-deadman-log-all>`__ +- `zio_decompress_fail_fraction <#zio-decompress-fail-fraction>`__ +- `zio_delay_max <#zio-delay-max>`__ + +dedup +~~~~~ + +- `zfs_ddt_data_is_special <#zfs-ddt-data-is-special>`__ +- `zfs_disable_dup_eviction <#zfs-disable-dup-eviction>`__ + +delay +~~~~~ + +- `zfs_delays_per_second <#zfs-delays-per-second>`__ + +delete +~~~~~~ + +- `zfs_async_block_max_blocks <#zfs-async-block-max-blocks>`__ +- `zfs_delete_blocks <#zfs-delete-blocks>`__ +- `zfs_free_bpobj_enabled <#zfs-free-bpobj-enabled>`__ +- `zfs_free_max_blocks <#zfs-free-max-blocks>`__ +- `zfs_free_min_time_ms <#zfs-free-min-time-ms>`__ +- `zfs_obsolete_min_time_ms <#zfs-obsolete-min-time-ms>`__ +- `zfs_per_txg_dirty_frees_percent <#zfs-per-txg-dirty-frees-percent>`__ + +discard +~~~~~~~ + +- `zvol_max_discard_blocks <#zvol-max-discard-blocks>`__ + +disks +~~~~~ + +- `zfs_nocacheflush <#zfs-nocacheflush>`__ +- `zil_nocacheflush <#zil-nocacheflush>`__ + +DMU +~~~ + +- `zfs_async_block_max_blocks <#zfs-async-block-max-blocks>`__ +- `dmu_object_alloc_chunk_shift <#dmu-object-alloc-chunk-shift>`__ +- `zfs_dmu_offset_next_sync <#zfs-dmu-offset-next-sync>`__ + +encryption +~~~~~~~~~~ + +- `icp_aes_impl <#icp-aes-impl>`__ +- `icp_gcm_impl <#icp-gcm-impl>`__ +- `zfs_key_max_salt_uses <#zfs-key-max-salt-uses>`__ +- `zfs_qat_encrypt_disable <#zfs-qat-encrypt-disable>`__ + +filesystem +~~~~~~~~~~ + +- `zfs_admin_snapshot <#zfs-admin-snapshot>`__ +- `zfs_delete_blocks <#zfs-delete-blocks>`__ +- `zfs_expire_snapshot <#zfs-expire-snapshot>`__ +- `zfs_free_max_blocks <#zfs-free-max-blocks>`__ +- `zfs_max_recordsize <#zfs-max-recordsize>`__ +- `zfs_read_chunk_size <#zfs-read-chunk-size>`__ + +fragmentation +~~~~~~~~~~~~~ + +- `zfs_metaslab_fragmentation_threshold <#zfs-metaslab-fragmentation-threshold>`__ +- `zfs_mg_fragmentation_threshold <#zfs-mg-fragmentation-threshold>`__ +- `zfs_mg_noalloc_threshold <#zfs-mg-noalloc-threshold>`__ + +HDD +~~~ + +- `metaslab_lba_weighting_enabled <#metaslab-lba-weighting-enabled>`__ +- `zfs_vdev_mirror_rotating_inc <#zfs-vdev-mirror-rotating-inc>`__ +- `zfs_vdev_mirror_rotating_seek_inc <#zfs-vdev-mirror-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_seek_offset <#zfs-vdev-mirror-rotating-seek-offset>`__ + +hostid +~~~~~~ + +- `spl_hostid <#spl-hostid>`__ +- `spl_hostid_path <#spl-hostid-path>`__ + +import +~~~~~~ + +- `zfs_autoimport_disable <#zfs-autoimport-disable>`__ +- `zfs_max_missing_tvds <#zfs-max-missing-tvds>`__ +- `zfs_multihost_fail_intervals <#zfs-multihost-fail-intervals>`__ +- `zfs_multihost_history <#zfs-multihost-history>`__ +- `zfs_multihost_import_intervals <#zfs-multihost-import-intervals>`__ +- `zfs_multihost_interval <#zfs-multihost-interval>`__ +- `zfs_recover <#zfs-recover>`__ +- `spa_config_path <#spa-config-path>`__ +- `spa_load_print_vdev_tree <#spa-load-print-vdev-tree>`__ +- `spa_load_verify_maxinflight <#spa-load-verify-maxinflight>`__ +- `spa_load_verify_metadata <#spa-load-verify-metadata>`__ +- `spa_load_verify_shift 
<#spa-load-verify-shift>`__ +- `zvol_inhibit_dev <#zvol-inhibit-dev>`__ + +L2ARC +~~~~~ + +- `l2arc_exclude_special <#l2arc-exclude-special>`__ +- `l2arc_feed_again <#l2arc-feed-again>`__ +- `l2arc_feed_min_ms <#l2arc-feed-min-ms>`__ +- `l2arc_feed_secs <#l2arc-feed-secs>`__ +- `l2arc_headroom <#l2arc-headroom>`__ +- `l2arc_headroom_boost <#l2arc-headroom-boost>`__ +- `l2arc_meta_percent <#l2arc-meta-percent>`__ +- `l2arc_mfuonly <#l2arc-mfuonly>`__ +- `l2arc_nocompress <#l2arc-nocompress>`__ +- `l2arc_noprefetch <#l2arc-noprefetch>`__ +- `l2arc_norw <#l2arc-norw>`__ +- `l2arc_rebuild_blocks_min_l2size <#l2arc-rebuild-blocks-min-l2size>`__ +- `l2arc_rebuild_enabled <#l2arc-rebuild-enabled>`__ +- `l2arc_trim_ahead <#l2arc-trim-ahead>`__ +- `l2arc_write_boost <#l2arc-write-boost>`__ +- `l2arc_write_max <#l2arc-write-max>`__ + +memory +~~~~~~ + +- `zfs_abd_scatter_enabled <#zfs-abd-scatter-enabled>`__ +- `zfs_abd_scatter_max_order <#zfs-abd-scatter-max-order>`__ +- `zfs_arc_average_blocksize <#zfs-arc-average-blocksize>`__ +- `zfs_arc_grow_retry <#zfs-arc-grow-retry>`__ +- `zfs_arc_lotsfree_percent <#zfs-arc-lotsfree-percent>`__ +- `zfs_arc_max <#zfs-arc-max>`__ +- `zfs_arc_pc_percent <#zfs-arc-pc-percent>`__ +- `zfs_arc_shrink_shift <#zfs-arc-shrink-shift>`__ +- `zfs_arc_sys_free <#zfs-arc-sys-free>`__ +- `zfs_dedup_prefetch <#zfs-dedup-prefetch>`__ +- `zfs_max_recordsize <#zfs-max-recordsize>`__ +- `metaslab_debug_load <#metaslab-debug-load>`__ +- `metaslab_debug_unload <#metaslab-debug-unload>`__ +- `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__ +- `zfs_scan_strict_mem_lim <#zfs-scan-strict-mem-lim>`__ +- `spl_kmem_alloc_max <#spl-kmem-alloc-max>`__ +- `spl_kmem_alloc_warn <#spl-kmem-alloc-warn>`__ +- `spl_kmem_cache_expire <#spl-kmem-cache-expire>`__ +- `spl_kmem_cache_kmem_limit <#spl-kmem-cache-kmem-limit>`__ +- `spl_kmem_cache_kmem_threads <#spl-kmem-cache-kmem-threads>`__ +- `spl_kmem_cache_magazine_size <#spl-kmem-cache-magazine-size>`__ +- `spl_kmem_cache_max_size <#spl-kmem-cache-max-size>`__ +- `spl_kmem_cache_obj_per_slab <#spl-kmem-cache-obj-per-slab>`__ +- `spl_kmem_cache_obj_per_slab_min <#spl-kmem-cache-obj-per-slab-min>`__ +- `spl_kmem_cache_reclaim <#spl-kmem-cache-reclaim>`__ +- `spl_kmem_cache_slab_limit <#spl-kmem-cache-slab-limit>`__ + +metadata +~~~~~~~~ + +- `zfs_mdcomp_disable <#zfs-mdcomp-disable>`__ + +metaslab +~~~~~~~~ + +- `metaslab_aliquot <#metaslab-aliquot>`__ +- `metaslab_bias_enabled <#metaslab-bias-enabled>`__ +- `metaslab_debug_load <#metaslab-debug-load>`__ +- `metaslab_debug_unload <#metaslab-debug-unload>`__ +- `metaslab_fragmentation_factor_enabled <#metaslab-fragmentation-factor-enabled>`__ +- `metaslab_lba_weighting_enabled <#metaslab-lba-weighting-enabled>`__ +- `metaslab_preload_enabled <#metaslab-preload-enabled>`__ +- `zfs_metaslab_segment_weight_enabled <#zfs-metaslab-segment-weight-enabled>`__ +- `zfs_metaslab_switch_threshold <#zfs-metaslab-switch-threshold>`__ +- `metaslabs_per_vdev <#metaslabs-per-vdev>`__ +- `zfs_vdev_min_ms_count <#zfs-vdev-min-ms-count>`__ +- `zfs_vdev_ms_count_limit <#zfs-vdev-ms-count-limit>`__ + +mirror +~~~~~~ + +- `zfs_vdev_mirror_non_rotating_inc <#zfs-vdev-mirror-non-rotating-inc>`__ +- `zfs_vdev_mirror_non_rotating_seek_inc <#zfs-vdev-mirror-non-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_inc <#zfs-vdev-mirror-rotating-inc>`__ +- `zfs_vdev_mirror_rotating_seek_inc <#zfs-vdev-mirror-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_seek_offset <#zfs-vdev-mirror-rotating-seek-offset>`__ + +MMP +~~~ + +- 
`zfs_multihost_fail_intervals <#zfs-multihost-fail-intervals>`__ +- `zfs_multihost_history <#zfs-multihost-history>`__ +- `zfs_multihost_import_intervals <#zfs-multihost-import-intervals>`__ +- `zfs_multihost_interval <#zfs-multihost-interval>`__ +- `spl_hostid <#spl-hostid>`__ +- `spl_hostid_path <#spl-hostid-path>`__ + +panic +~~~~~ + +- `spl_panic_halt <#spl-panic-halt>`__ + +prefetch +~~~~~~~~ + +- `zfs_arc_min_prefetch_ms <#zfs-arc-min-prefetch-ms>`__ +- `zfs_arc_min_prescient_prefetch_ms <#zfs-arc-min-prescient-prefetch-ms>`__ +- `zfs_dedup_prefetch <#zfs-dedup-prefetch>`__ +- `l2arc_noprefetch <#l2arc-noprefetch>`__ +- `zfs_no_scrub_prefetch <#zfs-no-scrub-prefetch>`__ +- `zfs_pd_bytes_max <#zfs-pd-bytes-max>`__ +- `zfs_prefetch_disable <#zfs-prefetch-disable>`__ +- `zfetch_array_rd_sz <#zfetch-array-rd-sz>`__ +- `zfetch_max_distance <#zfetch-max-distance>`__ +- `zfetch_max_streams <#zfetch-max-streams>`__ +- `zfetch_min_sec_reap <#zfetch-min-sec-reap>`__ +- `zvol_prefetch_bytes <#zvol-prefetch-bytes>`__ + +QAT +~~~ + +- `zfs_qat_checksum_disable <#zfs-qat-checksum-disable>`__ +- `zfs_qat_compress_disable <#zfs-qat-compress-disable>`__ +- `zfs_qat_disable <#zfs-qat-disable>`__ +- `zfs_qat_encrypt_disable <#zfs-qat-encrypt-disable>`__ + +raidz +~~~~~ + +- `zfs_vdev_raidz_impl <#zfs-vdev-raidz-impl>`__ + +receive +~~~~~~~ + +- `zfs_disable_ivset_guid_check <#zfs-disable-ivset-guid-check>`__ +- `zfs_recv_queue_length <#zfs-recv-queue-length>`__ + +remove +~~~~~~ + +- `zfs_obsolete_min_time_ms <#zfs-obsolete-min-time-ms>`__ +- `zfs_remove_max_segment <#zfs-remove-max-segment>`__ + +resilver +~~~~~~~~ + +- `zfs_resilver_delay <#zfs-resilver-delay>`__ +- `zfs_resilver_disable_defer <#zfs-resilver-disable-defer>`__ +- `zfs_resilver_min_time_ms <#zfs-resilver-min-time-ms>`__ +- `zfs_scan_checkpoint_intval <#zfs-scan-checkpoint-intval>`__ +- `zfs_scan_fill_weight <#zfs-scan-fill-weight>`__ +- `zfs_scan_idle <#zfs-scan-idle>`__ +- `zfs_scan_ignore_errors <#zfs-scan-ignore-errors>`__ +- `zfs_scan_issue_strategy <#zfs-scan-issue-strategy>`__ +- `zfs_scan_legacy <#zfs-scan-legacy>`__ +- `zfs_scan_max_ext_gap <#zfs-scan-max-ext-gap>`__ +- `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__ +- `zfs_scan_mem_lim_soft_fact <#zfs-scan-mem-lim-soft-fact>`__ +- `zfs_scan_strict_mem_lim <#zfs-scan-strict-mem-lim>`__ +- `zfs_scan_suspend_progress <#zfs-scan-suspend-progress>`__ +- `zfs_scan_vdev_limit <#zfs-scan-vdev-limit>`__ +- `zfs_top_maxinflight <#zfs-top-maxinflight>`__ +- `zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__ +- `zfs_vdev_scrub_min_active <#zfs-vdev-scrub-min-active>`__ + +scrub +~~~~~ + +- `zfs_no_scrub_io <#zfs-no-scrub-io>`__ +- `zfs_no_scrub_prefetch <#zfs-no-scrub-prefetch>`__ +- `zfs_scan_checkpoint_intval <#zfs-scan-checkpoint-intval>`__ +- `zfs_scan_fill_weight <#zfs-scan-fill-weight>`__ +- `zfs_scan_idle <#zfs-scan-idle>`__ +- `zfs_scan_issue_strategy <#zfs-scan-issue-strategy>`__ +- `zfs_scan_legacy <#zfs-scan-legacy>`__ +- `zfs_scan_max_ext_gap <#zfs-scan-max-ext-gap>`__ +- `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__ +- `zfs_scan_mem_lim_soft_fact <#zfs-scan-mem-lim-soft-fact>`__ +- `zfs_scan_min_time_ms <#zfs-scan-min-time-ms>`__ +- `zfs_scan_strict_mem_lim <#zfs-scan-strict-mem-lim>`__ +- `zfs_scan_suspend_progress <#zfs-scan-suspend-progress>`__ +- `zfs_scan_vdev_limit <#zfs-scan-vdev-limit>`__ +- `zfs_scrub_delay <#zfs-scrub-delay>`__ +- `zfs_scrub_min_time_ms <#zfs-scrub-min-time-ms>`__ +- `zfs_top_maxinflight <#zfs-top-maxinflight>`__ +- 
`zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__ +- `zfs_vdev_scrub_min_active <#zfs-vdev-scrub-min-active>`__ + +send +~~~~ + +- `ignore_hole_birth <#ignore-hole-birth>`__ +- `zfs_override_estimate_recordsize <#zfs-override-estimate-recordsize>`__ +- `zfs_pd_bytes_max <#zfs-pd-bytes-max>`__ +- `zfs_send_corrupt_data <#zfs-send-corrupt-data>`__ +- `zfs_send_queue_length <#zfs-send-queue-length>`__ +- `zfs_send_unmodified_spill_blocks <#zfs-send-unmodified-spill-blocks>`__ + +snapshot +~~~~~~~~ + +- `zfs_admin_snapshot <#zfs-admin-snapshot>`__ +- `zfs_expire_snapshot <#zfs-expire-snapshot>`__ + +SPA +~~~ + +- `spa_asize_inflation <#spa-asize-inflation>`__ +- `spa_load_print_vdev_tree <#spa-load-print-vdev-tree>`__ +- `spa_load_verify_data <#spa-load-verify-data>`__ +- `spa_load_verify_shift <#spa-load-verify-shift>`__ +- `spa_slop_shift <#spa-slop-shift>`__ +- `zfs_sync_pass_deferred_free <#zfs-sync-pass-deferred-free>`__ +- `zfs_sync_pass_dont_compress <#zfs-sync-pass-dont-compress>`__ +- `zfs_sync_pass_rewrite <#zfs-sync-pass-rewrite>`__ +- `zfs_sync_taskq_batch_pct <#zfs-sync-taskq-batch-pct>`__ +- `zfs_txg_timeout <#zfs-txg-timeout>`__ + +special_vdev +~~~~~~~~~~~~ + +- `l2arc_exclude_special <#l2arc-exclude-special>`__ +- `zfs_ddt_data_is_special <#zfs-ddt-data-is-special>`__ +- `zfs_special_class_metadata_reserve_pct <#zfs-special-class-metadata-reserve-pct>`__ +- `zfs_user_indirect_is_special <#zfs-user-indirect-is-special>`__ + +SSD +~~~ + +- `metaslab_lba_weighting_enabled <#metaslab-lba-weighting-enabled>`__ +- `zfs_vdev_mirror_non_rotating_inc <#zfs-vdev-mirror-non-rotating-inc>`__ +- `zfs_vdev_mirror_non_rotating_seek_inc <#zfs-vdev-mirror-non-rotating-seek-inc>`__ + +taskq +~~~~~ + +- `spl_max_show_tasks <#spl-max-show-tasks>`__ +- `spl_taskq_kick <#spl-taskq-kick>`__ +- `spl_taskq_thread_bind <#spl-taskq-thread-bind>`__ +- `spl_taskq_thread_dynamic <#spl-taskq-thread-dynamic>`__ +- `spl_taskq_thread_priority <#spl-taskq-thread-priority>`__ +- `spl_taskq_thread_sequential <#spl-taskq-thread-sequential>`__ +- `zfs_zil_clean_taskq_nthr_pct <#zfs-zil-clean-taskq-nthr-pct>`__ +- `zio_taskq_batch_pct <#zio-taskq-batch-pct>`__ + +trim +~~~~ + +- `zfs_trim_extent_bytes_max <#zfs-trim-extent-bytes-max>`__ +- `zfs_trim_extent_bytes_min <#zfs-trim-extent-bytes-min>`__ +- `zfs_trim_metaslab_skip <#zfs-trim-metaslab-skip>`__ +- `zfs_trim_queue_limit <#zfs-trim-queue-limit>`__ +- `zfs_trim_txg_batch <#zfs-trim-txg-batch>`__ +- `zfs_vdev_aggregate_trim <#zfs-vdev-aggregate-trim>`__ + +vdev +~~~~ + +- `zfs_checksum_events_per_second <#zfs-checksum-events-per-second>`__ +- `metaslab_aliquot <#metaslab-aliquot>`__ +- `metaslab_bias_enabled <#metaslab-bias-enabled>`__ +- `zfs_metaslab_fragmentation_threshold <#zfs-metaslab-fragmentation-threshold>`__ +- `metaslabs_per_vdev <#metaslabs-per-vdev>`__ +- `zfs_mg_fragmentation_threshold <#zfs-mg-fragmentation-threshold>`__ +- `zfs_mg_noalloc_threshold <#zfs-mg-noalloc-threshold>`__ +- `zfs_multihost_interval <#zfs-multihost-interval>`__ +- `zfs_scan_vdev_limit <#zfs-scan-vdev-limit>`__ +- `zfs_slow_io_events_per_second <#zfs-slow-io-events-per-second>`__ +- `zfs_vdev_aggregate_trim <#zfs-vdev-aggregate-trim>`__ +- `zfs_vdev_aggregation_limit <#zfs-vdev-aggregation-limit>`__ +- `zfs_vdev_aggregation_limit_non_rotating <#zfs-vdev-aggregation-limit-non-rotating>`__ +- `zfs_vdev_async_read_max_active <#zfs-vdev-async-read-max-active>`__ +- `zfs_vdev_async_read_min_active <#zfs-vdev-async-read-min-active>`__ +- 
`zfs_vdev_async_write_active_max_dirty_percent <#zfs-vdev-async-write-active-max-dirty-percent>`__ +- `zfs_vdev_async_write_active_min_dirty_percent <#zfs-vdev-async-write-active-min-dirty-percent>`__ +- `zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ +- `zfs_vdev_async_write_min_active <#zfs-vdev-async-write-min-active>`__ +- `zfs_vdev_cache_bshift <#zfs-vdev-cache-bshift>`__ +- `zfs_vdev_cache_max <#zfs-vdev-cache-max>`__ +- `zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ +- `zfs_vdev_initializing_max_active <#zfs-vdev-initializing-max-active>`__ +- `zfs_vdev_initializing_min_active <#zfs-vdev-initializing-min-active>`__ +- `zfs_vdev_max_active <#zfs-vdev-max-active>`__ +- `zfs_vdev_min_ms_count <#zfs-vdev-min-ms-count>`__ +- `zfs_vdev_mirror_non_rotating_inc <#zfs-vdev-mirror-non-rotating-inc>`__ +- `zfs_vdev_mirror_non_rotating_seek_inc <#zfs-vdev-mirror-non-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_inc <#zfs-vdev-mirror-rotating-inc>`__ +- `zfs_vdev_mirror_rotating_seek_inc <#zfs-vdev-mirror-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_seek_offset <#zfs-vdev-mirror-rotating-seek-offset>`__ +- `zfs_vdev_ms_count_limit <#zfs-vdev-ms-count-limit>`__ +- `zfs_vdev_queue_depth_pct <#zfs-vdev-queue-depth-pct>`__ +- `zfs_vdev_raidz_impl <#zfs-vdev-raidz-impl>`__ +- `zfs_vdev_read_gap_limit <#zfs-vdev-read-gap-limit>`__ +- `zfs_vdev_removal_max_active <#zfs-vdev-removal-max-active>`__ +- `zfs_vdev_removal_min_active <#zfs-vdev-removal-min-active>`__ +- `zfs_vdev_scheduler <#zfs-vdev-scheduler>`__ +- `zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__ +- `zfs_vdev_scrub_min_active <#zfs-vdev-scrub-min-active>`__ +- `zfs_vdev_sync_read_max_active <#zfs-vdev-sync-read-max-active>`__ +- `zfs_vdev_sync_read_min_active <#zfs-vdev-sync-read-min-active>`__ +- `zfs_vdev_sync_write_max_active <#zfs-vdev-sync-write-max-active>`__ +- `zfs_vdev_sync_write_min_active <#zfs-vdev-sync-write-min-active>`__ +- `zfs_vdev_trim_max_active <#zfs-vdev-trim-max-active>`__ +- `zfs_vdev_trim_min_active <#zfs-vdev-trim-min-active>`__ +- `vdev_validate_skip <#vdev-validate-skip>`__ +- `zfs_vdev_write_gap_limit <#zfs-vdev-write-gap-limit>`__ +- `zio_dva_throttle_enabled <#zio-dva-throttle-enabled>`__ +- `zio_slow_io_ms <#zio-slow-io-ms>`__ + +vdev_cache +~~~~~~~~~~ + +- `zfs_vdev_cache_bshift <#zfs-vdev-cache-bshift>`__ +- `zfs_vdev_cache_max <#zfs-vdev-cache-max>`__ +- `zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ + +vdev_initialize +~~~~~~~~~~~~~~~ + +- `zfs_initialize_value <#zfs-initialize-value>`__ + +vdev_removal +~~~~~~~~~~~~ + +- `zfs_condense_indirect_commit_entry_delay_ms <#zfs-condense-indirect-commit-entry-delay-ms>`__ +- `zfs_condense_indirect_vdevs_enable <#zfs-condense-indirect-vdevs-enable>`__ +- `zfs_condense_max_obsolete_bytes <#zfs-condense-max-obsolete-bytes>`__ +- `zfs_condense_min_mapping_bytes <#zfs-condense-min-mapping-bytes>`__ +- `zfs_reconstruct_indirect_combinations_max <#zfs-reconstruct-indirect-combinations-max>`__ +- `zfs_removal_ignore_errors <#zfs-removal-ignore-errors>`__ +- `zfs_removal_suspend_progress <#zfs-removal-suspend-progress>`__ +- `vdev_removal_max_span <#vdev-removal-max-span>`__ + +volume +~~~~~~ + +- `zfs_max_recordsize <#zfs-max-recordsize>`__ +- `zvol_inhibit_dev <#zvol-inhibit-dev>`__ +- `zvol_major <#zvol-major>`__ +- `zvol_max_discard_blocks <#zvol-max-discard-blocks>`__ +- `zvol_prefetch_bytes <#zvol-prefetch-bytes>`__ +- `zvol_request_sync <#zvol-request-sync>`__ +- `zvol_threads <#zvol-threads>`__ +- `zvol_volmode 
<#zvol-volmode>`__ + +write_throttle +~~~~~~~~~~~~~~ + +- `zfs_delay_min_dirty_percent <#zfs-delay-min-dirty-percent>`__ +- `zfs_delay_scale <#zfs-delay-scale>`__ +- `zfs_dirty_data_max <#zfs-dirty-data-max>`__ +- `zfs_dirty_data_max_max <#zfs-dirty-data-max-max>`__ +- `zfs_dirty_data_max_max_percent <#zfs-dirty-data-max-max-percent>`__ +- `zfs_dirty_data_max_percent <#zfs-dirty-data-max-percent>`__ +- `zfs_dirty_data_sync <#zfs-dirty-data-sync>`__ +- `zfs_dirty_data_sync_percent <#zfs-dirty-data-sync-percent>`__ + +zed +~~~ + +- `zfs_checksums_per_second <#zfs-checksums-per-second>`__ +- `zfs_delays_per_second <#zfs-delays-per-second>`__ +- `zio_slow_io_ms <#zio-slow-io-ms>`__ + +ZIL +~~~ + +- `zfs_commit_timeout_pct <#zfs-commit-timeout-pct>`__ +- `zfs_immediate_write_sz <#zfs-immediate-write-sz>`__ +- `zfs_zil_clean_taskq_maxalloc <#zfs-zil-clean-taskq-maxalloc>`__ +- `zfs_zil_clean_taskq_minalloc <#zfs-zil-clean-taskq-minalloc>`__ +- `zfs_zil_clean_taskq_nthr_pct <#zfs-zil-clean-taskq-nthr-pct>`__ +- `zil_nocacheflush <#zil-nocacheflush>`__ +- `zil_replay_disable <#zil-replay-disable>`__ +- `zil_slog_bulk <#zil-slog-bulk>`__ + +ZIO_scheduler +~~~~~~~~~~~~~ + +- `zfs_dirty_data_sync <#zfs-dirty-data-sync>`__ +- `zfs_dirty_data_sync_percent <#zfs-dirty-data-sync-percent>`__ +- `zfs_resilver_delay <#zfs-resilver-delay>`__ +- `zfs_scan_idle <#zfs-scan-idle>`__ +- `zfs_scrub_delay <#zfs-scrub-delay>`__ +- `zfs_top_maxinflight <#zfs-top-maxinflight>`__ +- `zfs_txg_timeout <#zfs-txg-timeout>`__ +- `zfs_vdev_aggregate_trim <#zfs-vdev-aggregate-trim>`__ +- `zfs_vdev_aggregation_limit <#zfs-vdev-aggregation-limit>`__ +- `zfs_vdev_aggregation_limit_non_rotating <#zfs-vdev-aggregation-limit-non-rotating>`__ +- `zfs_vdev_async_read_max_active <#zfs-vdev-async-read-max-active>`__ +- `zfs_vdev_async_read_min_active <#zfs-vdev-async-read-min-active>`__ +- `zfs_vdev_async_write_active_max_dirty_percent <#zfs-vdev-async-write-active-max-dirty-percent>`__ +- `zfs_vdev_async_write_active_min_dirty_percent <#zfs-vdev-async-write-active-min-dirty-percent>`__ +- `zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ +- `zfs_vdev_async_write_min_active <#zfs-vdev-async-write-min-active>`__ +- `zfs_vdev_initializing_max_active <#zfs-vdev-initializing-max-active>`__ +- `zfs_vdev_initializing_min_active <#zfs-vdev-initializing-min-active>`__ +- `zfs_vdev_max_active <#zfs-vdev-max-active>`__ +- `zfs_vdev_queue_depth_pct <#zfs-vdev-queue-depth-pct>`__ +- `zfs_vdev_read_gap_limit <#zfs-vdev-read-gap-limit>`__ +- `zfs_vdev_removal_max_active <#zfs-vdev-removal-max-active>`__ +- `zfs_vdev_removal_min_active <#zfs-vdev-removal-min-active>`__ +- `zfs_vdev_scheduler <#zfs-vdev-scheduler>`__ +- `zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__ +- `zfs_vdev_scrub_min_active <#zfs-vdev-scrub-min-active>`__ +- `zfs_vdev_sync_read_max_active <#zfs-vdev-sync-read-max-active>`__ +- `zfs_vdev_sync_read_min_active <#zfs-vdev-sync-read-min-active>`__ +- `zfs_vdev_sync_write_max_active <#zfs-vdev-sync-write-max-active>`__ +- `zfs_vdev_sync_write_min_active <#zfs-vdev-sync-write-min-active>`__ +- `zfs_vdev_trim_max_active <#zfs-vdev-trim-max-active>`__ +- `zfs_vdev_trim_min_active <#zfs-vdev-trim-min-active>`__ +- `zfs_vdev_write_gap_limit <#zfs-vdev-write-gap-limit>`__ +- `zio_dva_throttle_enabled <#zio-dva-throttle-enabled>`__ +- `zio_requeue_io_start_cut_in_line <#zio-requeue-io-start-cut-in-line>`__ +- `zio_taskq_batch_pct <#zio-taskq-batch-pct>`__ + +Index +----- + +- `zfs_abd_scatter_enabled 
<#zfs-abd-scatter-enabled>`__ +- `zfs_abd_scatter_max_order <#zfs-abd-scatter-max-order>`__ +- `zfs_abd_scatter_min_size <#zfs-abd-scatter-min-size>`__ +- `zfs_admin_snapshot <#zfs-admin-snapshot>`__ +- `zfs_arc_average_blocksize <#zfs-arc-average-blocksize>`__ +- `zfs_arc_dnode_limit <#zfs-arc-dnode-limit>`__ +- `zfs_arc_dnode_limit_percent <#zfs-arc-dnode-limit-percent>`__ +- `zfs_arc_dnode_reduce_percent <#zfs-arc-dnode-reduce-percent>`__ +- `zfs_arc_evict_batch_limit <#zfs-arc-evict-batch-limit>`__ +- `zfs_arc_grow_retry <#zfs-arc-grow-retry>`__ +- `zfs_arc_lotsfree_percent <#zfs-arc-lotsfree-percent>`__ +- `zfs_arc_max <#zfs-arc-max>`__ +- `zfs_arc_meta_adjust_restarts <#zfs-arc-meta-adjust-restarts>`__ +- `zfs_arc_meta_limit <#zfs-arc-meta-limit>`__ +- `zfs_arc_meta_limit_percent <#zfs-arc-meta-limit-percent>`__ +- `zfs_arc_meta_min <#zfs-arc-meta-min>`__ +- `zfs_arc_meta_prune <#zfs-arc-meta-prune>`__ +- `zfs_arc_meta_strategy <#zfs-arc-meta-strategy>`__ +- `zfs_arc_min <#zfs-arc-min>`__ +- `zfs_arc_min_prefetch_lifespan <#zfs-arc-min-prefetch-lifespan>`__ +- `zfs_arc_min_prefetch_ms <#zfs-arc-min-prefetch-ms>`__ +- `zfs_arc_min_prescient_prefetch_ms <#zfs-arc-min-prescient-prefetch-ms>`__ +- `zfs_arc_overflow_shift <#zfs-arc-overflow-shift>`__ +- `zfs_arc_p_dampener_disable <#zfs-arc-p-dampener-disable>`__ +- `zfs_arc_p_min_shift <#zfs-arc-p-min-shift>`__ +- `zfs_arc_pc_percent <#zfs-arc-pc-percent>`__ +- `zfs_arc_shrink_shift <#zfs-arc-shrink-shift>`__ +- `zfs_arc_sys_free <#zfs-arc-sys-free>`__ +- `zfs_async_block_max_blocks <#zfs-async-block-max-blocks>`__ +- `zfs_autoimport_disable <#zfs-autoimport-disable>`__ +- `zfs_checksum_events_per_second <#zfs-checksum-events-per-second>`__ +- `zfs_checksums_per_second <#zfs-checksums-per-second>`__ +- `zfs_commit_timeout_pct <#zfs-commit-timeout-pct>`__ +- `zfs_compressed_arc_enabled <#zfs-compressed-arc-enabled>`__ +- `zfs_condense_indirect_commit_entry_delay_ms <#zfs-condense-indirect-commit-entry-delay-ms>`__ +- `zfs_condense_indirect_vdevs_enable <#zfs-condense-indirect-vdevs-enable>`__ +- `zfs_condense_max_obsolete_bytes <#zfs-condense-max-obsolete-bytes>`__ +- `zfs_condense_min_mapping_bytes <#zfs-condense-min-mapping-bytes>`__ +- `zfs_dbgmsg_enable <#zfs-dbgmsg-enable>`__ +- `zfs_dbgmsg_maxsize <#zfs-dbgmsg-maxsize>`__ +- `dbuf_cache_hiwater_pct <#dbuf-cache-hiwater-pct>`__ +- `dbuf_cache_lowater_pct <#dbuf-cache-lowater-pct>`__ +- `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +- `dbuf_cache_max_shift <#dbuf-cache-max-shift>`__ +- `dbuf_cache_shift <#dbuf-cache-shift>`__ +- `dbuf_metadata_cache_max_bytes <#dbuf-metadata-cache-max-bytes>`__ +- `dbuf_metadata_cache_shift <#dbuf-metadata-cache-shift>`__ +- `zfs_dbuf_state_index <#zfs-dbuf-state-index>`__ +- `zfs_ddt_data_is_special <#zfs-ddt-data-is-special>`__ +- `zfs_deadman_checktime_ms <#zfs-deadman-checktime-ms>`__ +- `zfs_deadman_enabled <#zfs-deadman-enabled>`__ +- `zfs_deadman_failmode <#zfs-deadman-failmode>`__ +- `zfs_deadman_synctime_ms <#zfs-deadman-synctime-ms>`__ +- `zfs_deadman_ziotime_ms <#zfs-deadman-ziotime-ms>`__ +- `zfs_dedup_prefetch <#zfs-dedup-prefetch>`__ +- `zfs_delay_min_dirty_percent <#zfs-delay-min-dirty-percent>`__ +- `zfs_delay_scale <#zfs-delay-scale>`__ +- `zfs_delays_per_second <#zfs-delays-per-second>`__ +- `zfs_delete_blocks <#zfs-delete-blocks>`__ +- `zfs_dirty_data_max <#zfs-dirty-data-max>`__ +- `zfs_dirty_data_max_max <#zfs-dirty-data-max-max>`__ +- `zfs_dirty_data_max_max_percent <#zfs-dirty-data-max-max-percent>`__ +- 
`zfs_dirty_data_max_percent <#zfs-dirty-data-max-percent>`__ +- `zfs_dirty_data_sync <#zfs-dirty-data-sync>`__ +- `zfs_dirty_data_sync_percent <#zfs-dirty-data-sync-percent>`__ +- `zfs_disable_dup_eviction <#zfs-disable-dup-eviction>`__ +- `zfs_disable_ivset_guid_check <#zfs-disable-ivset-guid-check>`__ +- `dmu_object_alloc_chunk_shift <#dmu-object-alloc-chunk-shift>`__ +- `zfs_dmu_offset_next_sync <#zfs-dmu-offset-next-sync>`__ +- `zfs_expire_snapshot <#zfs-expire-snapshot>`__ +- `zfs_flags <#zfs-flags>`__ +- `zfs_fletcher_4_impl <#zfs-fletcher-4-impl>`__ +- `zfs_free_bpobj_enabled <#zfs-free-bpobj-enabled>`__ +- `zfs_free_leak_on_eio <#zfs-free-leak-on-eio>`__ +- `zfs_free_max_blocks <#zfs-free-max-blocks>`__ +- `zfs_free_min_time_ms <#zfs-free-min-time-ms>`__ +- `icp_aes_impl <#icp-aes-impl>`__ +- `icp_gcm_impl <#icp-gcm-impl>`__ +- `ignore_hole_birth <#ignore-hole-birth>`__ +- `zfs_immediate_write_sz <#zfs-immediate-write-sz>`__ +- `zfs_initialize_value <#zfs-initialize-value>`__ +- `zfs_key_max_salt_uses <#zfs-key-max-salt-uses>`__ +- `l2arc_exclude_special <#l2arc-exclude-special>`__ +- `l2arc_feed_again <#l2arc-feed-again>`__ +- `l2arc_feed_min_ms <#l2arc-feed-min-ms>`__ +- `l2arc_feed_secs <#l2arc-feed-secs>`__ +- `l2arc_headroom <#l2arc-headroom>`__ +- `l2arc_headroom_boost <#l2arc-headroom-boost>`__ +- `l2arc_meta_percent <#l2arc-meta-percent>`__ +- `l2arc_mfuonly <#l2arc-mfuonly>`__ +- `l2arc_nocompress <#l2arc-nocompress>`__ +- `l2arc_noprefetch <#l2arc-noprefetch>`__ +- `l2arc_norw <#l2arc-norw>`__ +- `l2arc_rebuild_blocks_min_l2size <#l2arc-rebuild-blocks-min-l2size>`__ +- `l2arc_rebuild_enabled <#l2arc-rebuild-enabled>`__ +- `l2arc_trim_ahead <#l2arc-trim-ahead>`__ +- `l2arc_write_boost <#l2arc-write-boost>`__ +- `l2arc_write_max <#l2arc-write-max>`__ +- `zfs_lua_max_instrlimit <#zfs-lua-max-instrlimit>`__ +- `zfs_lua_max_memlimit <#zfs-lua-max-memlimit>`__ +- `zfs_max_dataset_nesting <#zfs-max-dataset-nesting>`__ +- `zfs_max_missing_tvds <#zfs-max-missing-tvds>`__ +- `zfs_max_recordsize <#zfs-max-recordsize>`__ +- `zfs_mdcomp_disable <#zfs-mdcomp-disable>`__ +- `metaslab_aliquot <#metaslab-aliquot>`__ +- `metaslab_bias_enabled <#metaslab-bias-enabled>`__ +- `metaslab_debug_load <#metaslab-debug-load>`__ +- `metaslab_debug_unload <#metaslab-debug-unload>`__ +- `metaslab_force_ganging <#metaslab-force-ganging>`__ +- `metaslab_fragmentation_factor_enabled <#metaslab-fragmentation-factor-enabled>`__ +- `zfs_metaslab_fragmentation_threshold <#zfs-metaslab-fragmentation-threshold>`__ +- `metaslab_lba_weighting_enabled <#metaslab-lba-weighting-enabled>`__ +- `metaslab_preload_enabled <#metaslab-preload-enabled>`__ +- `zfs_metaslab_segment_weight_enabled <#zfs-metaslab-segment-weight-enabled>`__ +- `zfs_metaslab_switch_threshold <#zfs-metaslab-switch-threshold>`__ +- `metaslabs_per_vdev <#metaslabs-per-vdev>`__ +- `zfs_mg_fragmentation_threshold <#zfs-mg-fragmentation-threshold>`__ +- `zfs_mg_noalloc_threshold <#zfs-mg-noalloc-threshold>`__ +- `zfs_multihost_fail_intervals <#zfs-multihost-fail-intervals>`__ +- `zfs_multihost_history <#zfs-multihost-history>`__ +- `zfs_multihost_import_intervals <#zfs-multihost-import-intervals>`__ +- `zfs_multihost_interval <#zfs-multihost-interval>`__ +- `zfs_multilist_num_sublists <#zfs-multilist-num-sublists>`__ +- `zfs_no_scrub_io <#zfs-no-scrub-io>`__ +- `zfs_no_scrub_prefetch <#zfs-no-scrub-prefetch>`__ +- `zfs_nocacheflush <#zfs-nocacheflush>`__ +- `zfs_nopwrite_enabled <#zfs-nopwrite-enabled>`__ +- `zfs_object_mutex_size 
<#zfs-object-mutex-size>`__ +- `zfs_obsolete_min_time_ms <#zfs-obsolete-min-time-ms>`__ +- `zfs_override_estimate_recordsize <#zfs-override-estimate-recordsize>`__ +- `zfs_pd_bytes_max <#zfs-pd-bytes-max>`__ +- `zfs_per_txg_dirty_frees_percent <#zfs-per-txg-dirty-frees-percent>`__ +- `zfs_prefetch_disable <#zfs-prefetch-disable>`__ +- `zfs_qat_checksum_disable <#zfs-qat-checksum-disable>`__ +- `zfs_qat_compress_disable <#zfs-qat-compress-disable>`__ +- `zfs_qat_disable <#zfs-qat-disable>`__ +- `zfs_qat_encrypt_disable <#zfs-qat-encrypt-disable>`__ +- `zfs_read_chunk_size <#zfs-read-chunk-size>`__ +- `zfs_read_history <#zfs-read-history>`__ +- `zfs_read_history_hits <#zfs-read-history-hits>`__ +- `zfs_reconstruct_indirect_combinations_max <#zfs-reconstruct-indirect-combinations-max>`__ +- `zfs_recover <#zfs-recover>`__ +- `zfs_recv_queue_length <#zfs-recv-queue-length>`__ +- `zfs_removal_ignore_errors <#zfs-removal-ignore-errors>`__ +- `zfs_removal_suspend_progress <#zfs-removal-suspend-progress>`__ +- `zfs_remove_max_segment <#zfs-remove-max-segment>`__ +- `zfs_resilver_delay <#zfs-resilver-delay>`__ +- `zfs_resilver_disable_defer <#zfs-resilver-disable-defer>`__ +- `zfs_resilver_min_time_ms <#zfs-resilver-min-time-ms>`__ +- `zfs_scan_checkpoint_intval <#zfs-scan-checkpoint-intval>`__ +- `zfs_scan_fill_weight <#zfs-scan-fill-weight>`__ +- `zfs_scan_idle <#zfs-scan-idle>`__ +- `zfs_scan_ignore_errors <#zfs-scan-ignore-errors>`__ +- `zfs_scan_issue_strategy <#zfs-scan-issue-strategy>`__ +- `zfs_scan_legacy <#zfs-scan-legacy>`__ +- `zfs_scan_max_ext_gap <#zfs-scan-max-ext-gap>`__ +- `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__ +- `zfs_scan_mem_lim_soft_fact <#zfs-scan-mem-lim-soft-fact>`__ +- `zfs_scan_min_time_ms <#zfs-scan-min-time-ms>`__ +- `zfs_scan_strict_mem_lim <#zfs-scan-strict-mem-lim>`__ +- `zfs_scan_suspend_progress <#zfs-scan-suspend-progress>`__ +- `zfs_scan_vdev_limit <#zfs-scan-vdev-limit>`__ +- `zfs_scrub_delay <#zfs-scrub-delay>`__ +- `zfs_scrub_min_time_ms <#zfs-scrub-min-time-ms>`__ +- `zfs_send_corrupt_data <#zfs-send-corrupt-data>`__ +- `send_holes_without_birth_time <#send-holes-without-birth-time>`__ +- `zfs_send_queue_length <#zfs-send-queue-length>`__ +- `zfs_send_unmodified_spill_blocks <#zfs-send-unmodified-spill-blocks>`__ +- `zfs_slow_io_events_per_second <#zfs-slow-io-events-per-second>`__ +- `spa_asize_inflation <#spa-asize-inflation>`__ +- `spa_config_path <#spa-config-path>`__ +- `zfs_spa_discard_memory_limit <#zfs-spa-discard-memory-limit>`__ +- `spa_load_print_vdev_tree <#spa-load-print-vdev-tree>`__ +- `spa_load_verify_data <#spa-load-verify-data>`__ +- `spa_load_verify_maxinflight <#spa-load-verify-maxinflight>`__ +- `spa_load_verify_metadata <#spa-load-verify-metadata>`__ +- `spa_load_verify_shift <#spa-load-verify-shift>`__ +- `spa_slop_shift <#spa-slop-shift>`__ +- `zfs_special_class_metadata_reserve_pct <#zfs-special-class-metadata-reserve-pct>`__ +- `spl_hostid <#spl-hostid>`__ +- `spl_hostid_path <#spl-hostid-path>`__ +- `spl_kmem_alloc_max <#spl-kmem-alloc-max>`__ +- `spl_kmem_alloc_warn <#spl-kmem-alloc-warn>`__ +- `spl_kmem_cache_expire <#spl-kmem-cache-expire>`__ +- `spl_kmem_cache_kmem_limit <#spl-kmem-cache-kmem-limit>`__ +- `spl_kmem_cache_kmem_threads <#spl-kmem-cache-kmem-threads>`__ +- `spl_kmem_cache_magazine_size <#spl-kmem-cache-magazine-size>`__ +- `spl_kmem_cache_max_size <#spl-kmem-cache-max-size>`__ +- `spl_kmem_cache_obj_per_slab <#spl-kmem-cache-obj-per-slab>`__ +- `spl_kmem_cache_obj_per_slab_min 
<#spl-kmem-cache-obj-per-slab-min>`__ +- `spl_kmem_cache_reclaim <#spl-kmem-cache-reclaim>`__ +- `spl_kmem_cache_slab_limit <#spl-kmem-cache-slab-limit>`__ +- `spl_max_show_tasks <#spl-max-show-tasks>`__ +- `spl_panic_halt <#spl-panic-halt>`__ +- `spl_taskq_kick <#spl-taskq-kick>`__ +- `spl_taskq_thread_bind <#spl-taskq-thread-bind>`__ +- `spl_taskq_thread_dynamic <#spl-taskq-thread-dynamic>`__ +- `spl_taskq_thread_priority <#spl-taskq-thread-priority>`__ +- `spl_taskq_thread_sequential <#spl-taskq-thread-sequential>`__ +- `zfs_sync_pass_deferred_free <#zfs-sync-pass-deferred-free>`__ +- `zfs_sync_pass_dont_compress <#zfs-sync-pass-dont-compress>`__ +- `zfs_sync_pass_rewrite <#zfs-sync-pass-rewrite>`__ +- `zfs_sync_taskq_batch_pct <#zfs-sync-taskq-batch-pct>`__ +- `zfs_top_maxinflight <#zfs-top-maxinflight>`__ +- `zfs_trim_extent_bytes_max <#zfs-trim-extent-bytes-max>`__ +- `zfs_trim_extent_bytes_min <#zfs-trim-extent-bytes-min>`__ +- `zfs_trim_metaslab_skip <#zfs-trim-metaslab-skip>`__ +- `zfs_trim_queue_limit <#zfs-trim-queue-limit>`__ +- `zfs_trim_txg_batch <#zfs-trim-txg-batch>`__ +- `zfs_txg_history <#zfs-txg-history>`__ +- `zfs_txg_timeout <#zfs-txg-timeout>`__ +- `zfs_unlink_suspend_progress <#zfs-unlink-suspend-progress>`__ +- `zfs_user_indirect_is_special <#zfs-user-indirect-is-special>`__ +- `zfs_vdev_aggregate_trim <#zfs-vdev-aggregate-trim>`__ +- `zfs_vdev_aggregation_limit <#zfs-vdev-aggregation-limit>`__ +- `zfs_vdev_aggregation_limit_non_rotating <#zfs-vdev-aggregation-limit-non-rotating>`__ +- `zfs_vdev_async_read_max_active <#zfs-vdev-async-read-max-active>`__ +- `zfs_vdev_async_read_min_active <#zfs-vdev-async-read-min-active>`__ +- `zfs_vdev_async_write_active_max_dirty_percent <#zfs-vdev-async-write-active-max-dirty-percent>`__ +- `zfs_vdev_async_write_active_min_dirty_percent <#zfs-vdev-async-write-active-min-dirty-percent>`__ +- `zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ +- `zfs_vdev_async_write_min_active <#zfs-vdev-async-write-min-active>`__ +- `zfs_vdev_cache_bshift <#zfs-vdev-cache-bshift>`__ +- `zfs_vdev_cache_max <#zfs-vdev-cache-max>`__ +- `zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ +- `zfs_vdev_default_ms_count <#zfs-vdev-default-ms-count>`__ +- `zfs_vdev_initializing_max_active <#zfs-vdev-initializing-max-active>`__ +- `zfs_vdev_initializing_min_active <#zfs-vdev-initializing-min-active>`__ +- `zfs_vdev_max_active <#zfs-vdev-max-active>`__ +- `zfs_vdev_min_ms_count <#zfs-vdev-min-ms-count>`__ +- `zfs_vdev_mirror_non_rotating_inc <#zfs-vdev-mirror-non-rotating-inc>`__ +- `zfs_vdev_mirror_non_rotating_seek_inc <#zfs-vdev-mirror-non-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_inc <#zfs-vdev-mirror-rotating-inc>`__ +- `zfs_vdev_mirror_rotating_seek_inc <#zfs-vdev-mirror-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_seek_offset <#zfs-vdev-mirror-rotating-seek-offset>`__ +- `zfs_vdev_ms_count_limit <#zfs-vdev-ms-count-limit>`__ +- `zfs_vdev_queue_depth_pct <#zfs-vdev-queue-depth-pct>`__ +- `zfs_vdev_raidz_impl <#zfs-vdev-raidz-impl>`__ +- `zfs_vdev_read_gap_limit <#zfs-vdev-read-gap-limit>`__ +- `zfs_vdev_removal_max_active <#zfs-vdev-removal-max-active>`__ +- `vdev_removal_max_span <#vdev-removal-max-span>`__ +- `zfs_vdev_removal_min_active <#zfs-vdev-removal-min-active>`__ +- `zfs_vdev_scheduler <#zfs-vdev-scheduler>`__ +- `zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__ +- `zfs_vdev_scrub_min_active <#zfs-vdev-scrub-min-active>`__ +- `zfs_vdev_sync_read_max_active <#zfs-vdev-sync-read-max-active>`__ +- 
`zfs_vdev_sync_read_min_active <#zfs-vdev-sync-read-min-active>`__ +- `zfs_vdev_sync_write_max_active <#zfs-vdev-sync-write-max-active>`__ +- `zfs_vdev_sync_write_min_active <#zfs-vdev-sync-write-min-active>`__ +- `zfs_vdev_trim_max_active <#zfs-vdev-trim-max-active>`__ +- `zfs_vdev_trim_min_active <#zfs-vdev-trim-min-active>`__ +- `vdev_validate_skip <#vdev-validate-skip>`__ +- `zfs_vdev_write_gap_limit <#zfs-vdev-write-gap-limit>`__ +- `zfs_zevent_cols <#zfs-zevent-cols>`__ +- `zfs_zevent_console <#zfs-zevent-console>`__ +- `zfs_zevent_len_max <#zfs-zevent-len-max>`__ +- `zfetch_array_rd_sz <#zfetch-array-rd-sz>`__ +- `zfetch_max_distance <#zfetch-max-distance>`__ +- `zfetch_max_streams <#zfetch-max-streams>`__ +- `zfetch_min_sec_reap <#zfetch-min-sec-reap>`__ +- `zfs_zil_clean_taskq_maxalloc <#zfs-zil-clean-taskq-maxalloc>`__ +- `zfs_zil_clean_taskq_minalloc <#zfs-zil-clean-taskq-minalloc>`__ +- `zfs_zil_clean_taskq_nthr_pct <#zfs-zil-clean-taskq-nthr-pct>`__ +- `zil_nocacheflush <#zil-nocacheflush>`__ +- `zil_replay_disable <#zil-replay-disable>`__ +- `zil_slog_bulk <#zil-slog-bulk>`__ +- `zio_deadman_log_all <#zio-deadman-log-all>`__ +- `zio_decompress_fail_fraction <#zio-decompress-fail-fraction>`__ +- `zio_delay_max <#zio-delay-max>`__ +- `zio_dva_throttle_enabled <#zio-dva-throttle-enabled>`__ +- `zio_requeue_io_start_cut_in_line <#zio-requeue-io-start-cut-in-line>`__ +- `zio_slow_io_ms <#zio-slow-io-ms>`__ +- `zio_taskq_batch_pct <#zio-taskq-batch-pct>`__ +- `zvol_inhibit_dev <#zvol-inhibit-dev>`__ +- `zvol_major <#zvol-major>`__ +- `zvol_max_discard_blocks <#zvol-max-discard-blocks>`__ +- `zvol_prefetch_bytes <#zvol-prefetch-bytes>`__ +- `zvol_request_sync <#zvol-request-sync>`__ +- `zvol_threads <#zvol-threads>`__ +- `zvol_volmode <#zvol-volmode>`__ + +.. _zfs-module-parameters-1: + +Module Parameters +----------------- + +ignore_hole_birth +~~~~~~~~~~~~~~~~~ + +When set, the hole_birth optimization will not be used and all holes +will always be sent by ``zfs send`` In the source code, +ignore_hole_birth is an alias for and SysFS PARAMETER for +`send_holes_without_birth_time <#send-holes-without-birth-time>`__. + ++-------------------+-------------------------------------------------+ +| ignore_hole_birth | Notes | ++===================+=================================================+ +| Tags | `send <#send>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Enable if you suspect your datasets are | +| | affected by a bug in hole_birth during | +| | ``zfs send`` operations | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=disabled, 1=enabled | ++-------------------+-------------------------------------------------+ +| Default | 1 (hole birth optimization is ignored) | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | TBD | ++-------------------+-------------------------------------------------+ + +l2arc_exclude_special +~~~~~~~~~~~~~~~~~~~~~ + +Controls whether buffers present on special vdevs are eligible for +caching into L2ARC. 
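+
+As a minimal sketch (assuming a Linux system that exposes ZFS module
+parameters under ``/sys/module/zfs/parameters``), the setting can be
+inspected and changed at runtime as root::
+
+   # 0 = buffers from special vdevs may be cached in L2ARC (default)
+   cat /sys/module/zfs/parameters/l2arc_exclude_special
+
+   # keep data stored on special vdevs out of the L2ARC
+   echo 1 > /sys/module/zfs/parameters/l2arc_exclude_special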
+ ++-----------------------+-------------------------------------------------+ +| l2arc_exclude_special | Notes | ++=======================+=================================================+ +| Tags | `ARC <#arc>`__, | +| | `L2ARC <#l2arc>`__, | +| | `special_vdev <#special-vdev>`__, | ++-----------------------+-------------------------------------------------+ +| When to change | If cache and special devices exist and caching | +| | data on special devices in L2ARC is not desired | ++-----------------------+-------------------------------------------------+ +| Data Type | boolean | ++-----------------------+-------------------------------------------------+ +| Range | 0=disabled, 1=enabled | ++-----------------------+-------------------------------------------------+ +| Default | 0 | ++-----------------------+-------------------------------------------------+ +| Change | Dynamic | ++-----------------------+-------------------------------------------------+ +| Versions Affected | TBD | ++-----------------------+-------------------------------------------------+ + +l2arc_feed_again +~~~~~~~~~~~~~~~~ + +Turbo L2ARC cache warm-up. When the L2ARC is cold the fill interval will +be set to aggressively fill as fast as possible. + ++-------------------+-------------------------------------------------+ +| l2arc_feed_again | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If cache devices exist and it is desired to | +| | fill them as fast as possible | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=disabled, 1=enabled | ++-------------------+-------------------------------------------------+ +| Default | 1 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | TBD | ++-------------------+-------------------------------------------------+ + +l2arc_feed_min_ms +~~~~~~~~~~~~~~~~~ + +Minimum time period for aggressively feeding the L2ARC. The L2ARC feed +thread wakes up once per second (see +`l2arc_feed_secs <#l2arc-feed-secs>`__) to look for data to feed into +the L2ARC. ``l2arc_feed_min_ms`` only affects the turbo L2ARC cache +warm-up and allows the aggressiveness to be adjusted. 
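+
+For example, to make a gentler warm-up persistent across reboots, the
+parameter can be set in a modprobe configuration file (a sketch; the
+file name and the 400 ms value are only illustrative)::
+
+   # /etc/modprobe.d/zfs.conf
+   options zfs l2arc_feed_min_ms=400
+
+The same value takes effect immediately when written to
+``/sys/module/zfs/parameters/l2arc_feed_min_ms``.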
+ ++-------------------+-------------------------------------------------+ +| l2arc_feed_min_ms | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If cache devices exist and | +| | `l2arc_feed_again <#l2arc-feed-again>`__ and | +| | the feed is too aggressive, then this tunable | +| | can be adjusted to reduce the impact of the | +| | fill | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | milliseconds | ++-------------------+-------------------------------------------------+ +| Range | 0 to (1000 \* l2arc_feed_secs) | ++-------------------+-------------------------------------------------+ +| Default | 200 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | 0.6 and later | ++-------------------+-------------------------------------------------+ + +l2arc_feed_secs +~~~~~~~~~~~~~~~ + +Seconds between waking the L2ARC feed thread. One feed thread works for +all cache devices in turn. + +If the pool that owns a cache device is imported readonly, then the feed +thread is delayed 5 \* `l2arc_feed_secs <#l2arc-feed-secs>`__ before +moving onto the next cache device. If multiple pools are imported with +cache devices and one pool with cache is imported readonly, the L2ARC +feed rate to all caches can be slowed. + +================= ================================== +l2arc_feed_secs Notes +================= ================================== +Tags `ARC <#arc>`__, `L2ARC <#l2arc>`__ +When to change Do not change +Data Type uint64 +Units seconds +Range 1 to UINT64_MAX +Default 1 +Change Dynamic +Versions Affected 0.6 and later +================= ================================== + +l2arc_headroom +~~~~~~~~~~~~~~ + +How far through the ARC lists to search for L2ARC cacheable content, +expressed as a multiplier of `l2arc_write_max <#l2arc-write-max>`__ + ++-------------------+-------------------------------------------------+ +| l2arc_headroom | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If the rate of change in the ARC is faster than | +| | the overall L2ARC feed rate, then increasing | +| | l2arc_headroom can increase L2ARC efficiency. | +| | Setting the value too large can cause the L2ARC | +| | feed thread to consume more CPU time looking | +| | for data to feed. 
| ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | unit | ++-------------------+-------------------------------------------------+ +| Range | 0 to UINT64_MAX | ++-------------------+-------------------------------------------------+ +| Default | 2 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | 0.6 and later | ++-------------------+-------------------------------------------------+ + +l2arc_headroom_boost +~~~~~~~~~~~~~~~~~~~~ + +Percentage scale for `l2arc_headroom <#l2arc-headroom>`__ when L2ARC +contents are being successfully compressed before writing. + ++----------------------+----------------------------------------------+ +| l2arc_headroom_boost | Notes | ++======================+==============================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++----------------------+----------------------------------------------+ +| When to change | If average compression efficiency is greater | +| | than 2:1, then increasing | +| | `l2a | +| | rc_headroom_boost <#l2arc-headroom-boost>`__ | +| | can increase the L2ARC feed rate | ++----------------------+----------------------------------------------+ +| Data Type | uint64 | ++----------------------+----------------------------------------------+ +| Units | percent | ++----------------------+----------------------------------------------+ +| Range | 100 to UINT64_MAX, when set to 100, the | +| | L2ARC headroom boost feature is effectively | +| | disabled | ++----------------------+----------------------------------------------+ +| Default | 200 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | all | ++----------------------+----------------------------------------------+ + +l2arc_nocompress +~~~~~~~~~~~~~~~~ + +Disable writing compressed data to cache devices. Disabling allows the +legacy behavior of writing decompressed data to cache devices. + ++-------------------+-------------------------------------------------+ +| l2arc_nocompress | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | When testing compressed L2ARC feature | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=store compressed blocks in cache device, | +| | 1=store uncompressed blocks in cache device | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | deprecated in v0.7.0 by new compressed ARC | +| | design | ++-------------------+-------------------------------------------------+ + +l2arc_meta_percent +~~~~~~~~~~~~~~~~~~ + +Percent of ARC size allowed for L2ARC-only headers. 
+Since L2ARC buffers are not evicted on memory pressure, an excessive
+amount of headers on a system with an irrationally large L2ARC can
+render it slow or unusable. This parameter limits L2ARC writes and
+rebuilds so that the header overhead stays within this percentage.
+
++--------------------+-------------------------------------------------+
+| l2arc_meta_percent | Notes                                           |
++====================+=================================================+
+| Tags               | `ARC <#arc>`__, `L2ARC <#l2arc>`__              |
++--------------------+-------------------------------------------------+
+| When to change     | When the workload requires an enormous L2ARC.   |
++--------------------+-------------------------------------------------+
+| Data Type          | int                                             |
++--------------------+-------------------------------------------------+
+| Range              | 0 to 100                                        |
++--------------------+-------------------------------------------------+
+| Default            | 33                                              |
++--------------------+-------------------------------------------------+
+| Change             | Dynamic                                         |
++--------------------+-------------------------------------------------+
+| Versions Affected  | v2.0 and later                                  |
++--------------------+-------------------------------------------------+
+
+l2arc_mfuonly
+~~~~~~~~~~~~~
+
+Controls whether only MFU metadata and data are cached from ARC into L2ARC.
+This may be desirable to avoid wasting space on L2ARC when reading/writing
+large amounts of data that are not expected to be accessed more than once.
+By default both MRU and MFU data and metadata are cached in the L2ARC.
+
++-------------------+-------------------------------------------------+
+| l2arc_mfuonly     | Notes                                           |
++===================+=================================================+
+| Tags              | `ARC <#arc>`__, `L2ARC <#l2arc>`__              |
++-------------------+-------------------------------------------------+
+| When to change    | When accessing a large amount of data only      |
+|                   | once.                                           |
++-------------------+-------------------------------------------------+
+| Data Type         | boolean                                         |
++-------------------+-------------------------------------------------+
+| Range             | 0=store MRU and MFU blocks in cache device,     |
+|                   | 1=store MFU blocks in cache device              |
++-------------------+-------------------------------------------------+
+| Default           | 0                                               |
++-------------------+-------------------------------------------------+
+| Change            | Dynamic                                         |
++-------------------+-------------------------------------------------+
+| Versions Affected | v2.0 and later                                  |
++-------------------+-------------------------------------------------+
+
+l2arc_noprefetch
+~~~~~~~~~~~~~~~~
+
+Disables writing prefetched, but unused, buffers to cache devices.
+
++-------------------+-------------------------------------------------+
+| l2arc_noprefetch  | Notes                                           |
++===================+=================================================+
+| Tags              | `ARC <#arc>`__, `L2ARC <#l2arc>`__,             |
+|                   | `prefetch <#prefetch>`__                        |
++-------------------+-------------------------------------------------+
+| When to change    | Setting to 0 can increase L2ARC hit rates for   |
+|                   | workloads where the ARC is too small for a read |
+|                   | workload that benefits from prefetching. Also,  |
+|                   | if the main pool devices are very slow, setting |
+|                   | to 0 can improve some workloads such as         |
+|                   | backups.
| ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=write prefetched but unused buffers to cache | +| | devices, 1=do not write prefetched but unused | +| | buffers to cache devices | ++-------------------+-------------------------------------------------+ +| Default | 1 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.0 and later | ++-------------------+-------------------------------------------------+ + +l2arc_norw +~~~~~~~~~~ + +Disables writing to cache devices while they are being read. + ++-------------------+-------------------------------------------------+ +| l2arc_norw | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | In the early days of SSDs, some devices did not | +| | perform well when reading and writing | +| | simultaneously. Modern SSDs do not have these | +| | issues. | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=read and write simultaneously, 1=avoid writes | +| | when reading for antique SSDs | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +l2arc_rebuild_blocks_min_l2size +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The minimum required size (in bytes) of an L2ARC device in order to +write log blocks in it. The log blocks are used upon importing the pool +to rebuild the persistent L2ARC. For L2ARC devices less than 1GB the +overhead involved offsets most of benefit so log blocks are not written +for cache devices smaller than this. + ++---------------------------------+-----------------------------------+ +| l2arc_rebuild_blocks_min_l2size | Notes | ++=================================+===================================+ +| Tags | `ARC <#arc>`__, | +| | `L2ARC <#l2arc>`__ | ++---------------------------------+-----------------------------------+ +| When to change | The cache device is small and | +| | the pool is frequently imported. | ++---------------------------------+-----------------------------------+ +| Data Type | bytes | ++---------------------------------+-----------------------------------+ +| Range | 0 to UINT64_MAX | ++---------------------------------+-----------------------------------+ +| Default | 1,073,741,824 | ++---------------------------------+-----------------------------------+ +| Change | Dynamic | ++---------------------------------+-----------------------------------+ +| Versions Affected | v2.0 and later | ++---------------------------------+-----------------------------------+ + +l2arc_rebuild_enabled +~~~~~~~~~~~~~~~~~~~~~ + +Rebuild the persistent L2ARC when importing a pool. 
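+
+If a rebuild is suspected of causing import problems, a minimal
+troubleshooting sketch (``tank`` is a placeholder pool name) is to
+disable the rebuild before importing and re-enable it afterwards::
+
+   echo 0 > /sys/module/zfs/parameters/l2arc_rebuild_enabled
+   zpool import tank
+   echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled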
+
++-----------------------+---------------------------------------------+
+| l2arc_rebuild_enabled | Notes                                       |
++=======================+=============================================+
+| Tags                  | `ARC <#arc>`__, `L2ARC <#l2arc>`__          |
++-----------------------+---------------------------------------------+
+| When to change        | If there are problems importing a pool or   |
+|                       | attaching an L2ARC device.                  |
++-----------------------+---------------------------------------------+
+| Data Type             | boolean                                     |
++-----------------------+---------------------------------------------+
+| Range                 | 0=disable persistent L2ARC rebuild,         |
+|                       | 1=enable persistent L2ARC rebuild           |
++-----------------------+---------------------------------------------+
+| Default               | 1                                           |
++-----------------------+---------------------------------------------+
+| Change                | Dynamic                                     |
++-----------------------+---------------------------------------------+
+| Versions Affected     | v2.0 and later                              |
++-----------------------+---------------------------------------------+
+
+l2arc_trim_ahead
+~~~~~~~~~~~~~~~~
+
+Once the cache device has filled, TRIM ahead of the current write size
+(``l2arc_write_max``) on L2ARC devices by this percentage. This can
+speed up future writes depending on the performance characteristics of
+the cache device.
+
+When set to 100%, TRIM twice the space required to accommodate upcoming
+writes. A minimum of 64 MB will be trimmed. Setting this parameter also
+enables TRIM of the whole L2ARC device when it is added to a pool. By
+default, this option is disabled since it can put significant stress on
+the underlying storage devices.
+
++-------------------+-------------------------------------------------+
+| l2arc_trim_ahead  | Notes                                           |
++===================+=================================================+
+| Tags              | `ARC <#arc>`__, `L2ARC <#l2arc>`__              |
++-------------------+-------------------------------------------------+
+| When to change    | Consider setting for cache devices which        |
+|                   | efficiently handle TRIM commands.               |
++-------------------+-------------------------------------------------+
+| Data Type         | ulong                                           |
++-------------------+-------------------------------------------------+
+| Units             | percent of l2arc_write_max                      |
++-------------------+-------------------------------------------------+
+| Range             | 0 to 100                                        |
++-------------------+-------------------------------------------------+
+| Default           | 0                                               |
++-------------------+-------------------------------------------------+
+| Change            | Dynamic                                         |
++-------------------+-------------------------------------------------+
+| Versions Affected | v2.0 and later                                  |
++-------------------+-------------------------------------------------+
+
+l2arc_write_boost
+~~~~~~~~~~~~~~~~~
+
+Until the ARC fills, increases the L2ARC fill rate
+`l2arc_write_max <#l2arc-write-max>`__ by ``l2arc_write_boost``.
+
++-------------------+-------------------------------------------------+
+| l2arc_write_boost | Notes                                           |
++===================+=================================================+
+| Tags              | `ARC <#arc>`__, `L2ARC <#l2arc>`__              |
++-------------------+-------------------------------------------------+
+| When to change    | To fill the cache devices more aggressively     |
+|                   | after pool import.
| ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 0 to UINT64_MAX | ++-------------------+-------------------------------------------------+ +| Default | 8,388,608 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +l2arc_write_max +~~~~~~~~~~~~~~~ + +Maximum number of bytes to be written to each cache device for each +L2ARC feed thread interval (see `l2arc_feed_secs <#l2arc-feed-secs>`__). +The actual limit can be adjusted by +`l2arc_write_boost <#l2arc-write-boost>`__. By default +`l2arc_feed_secs <#l2arc-feed-secs>`__ is 1 second, delivering a maximum +write workload to cache devices of 8 MiB/sec. + ++-------------------+-------------------------------------------------+ +| l2arc_write_max | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If the cache devices can sustain the write | +| | workload, increasing the rate of cache device | +| | fill when workloads generate new data at a rate | +| | higher than l2arc_write_max can increase L2ARC | +| | hit rate | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 1 to UINT64_MAX | ++-------------------+-------------------------------------------------+ +| Default | 8,388,608 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +metaslab_aliquot +~~~~~~~~~~~~~~~~ + +Sets the metaslab granularity. Nominally, ZFS will try to allocate this +amount of data to a top-level vdev before moving on to the next +top-level vdev. This is roughly similar to what would be referred to as +the "stripe size" in traditional RAID arrays. + +When tuning for HDDs, it can be more efficient to have a few larger, +sequential writes to a device rather than switching to the next device. +Monitoring the size of contiguous writes to the disks relative to the +write throughput can be used to determine if increasing +``metaslab_aliquot`` can help. For modern devices, it is unlikely that +decreasing ``metaslab_aliquot`` from the default will help. + +If there is only one top-level vdev, this tunable is not used. 
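+
+One way to evaluate a change (a sketch; ``tank`` is a placeholder pool
+name and 1 MiB is only an example value) is to watch the per-vdev
+request size histograms while testing a larger allocation granularity::
+
+   # histogram of individual and aggregated I/O sizes, refreshed every 5 s
+   zpool iostat -r tank 5
+
+   # try a 1 MiB aliquot; revert to the default 524288 if it does not help
+   echo 1048576 > /sys/module/zfs/parameters/metaslab_aliquot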
+ ++-------------------+-------------------------------------------------+ +| metaslab_aliquot | Notes | ++===================+=================================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__, `vdev <#vdev>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If write performance increases as devices more | +| | efficiently write larger, contiguous blocks | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 0 to UINT64_MAX | ++-------------------+-------------------------------------------------+ +| Default | 524,288 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +metaslab_bias_enabled +~~~~~~~~~~~~~~~~~~~~~ + +Enables metaslab group biasing based on a top-level vdev's utilization +relative to the pool. Nominally, all top-level devs are the same size +and the allocation is spread evenly. When the top-level vdevs are not of +the same size, for example if a new (empty) top-level is added to the +pool, this allows the new top-level vdev to get a larger portion of new +allocations. + ++-----------------------+---------------------------------------------+ +| metaslab_bias_enabled | Notes | ++=======================+=============================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__, `vdev <#vdev>`__ | ++-----------------------+---------------------------------------------+ +| When to change | If a new top-level vdev is added and you do | +| | not want to bias new allocations to the new | +| | top-level vdev | ++-----------------------+---------------------------------------------+ +| Data Type | boolean | ++-----------------------+---------------------------------------------+ +| Range | 0=spread evenly across top-level vdevs, | +| | 1=bias spread to favor less full top-level | +| | vdevs | ++-----------------------+---------------------------------------------+ +| Default | 1 | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++-----------------------+---------------------------------------------+ + +zfs_metaslab_segment_weight_enabled +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Enables metaslab allocation based on largest free segment rather than +total amount of free space. The goal is to avoid metaslabs that exhibit +free space fragmentation: when there is a lot of small free spaces, but +few larger free spaces. + +If ``zfs_metaslab_segment_weight_enabled`` is enabled, then +`metaslab_fragmentation_factor_enabled <#metaslab-fragmentation-factor-enabled>`__ +is ignored. 
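+
+When testing allocation behavior, it can help to check how fragmented
+the metaslabs already are and to toggle the tunable at runtime (a
+sketch; ``tank`` is a placeholder pool name)::
+
+   # the FRAG column reports metaslab free-space fragmentation
+   zpool list -v tank
+
+   # temporarily fall back to space-based metaslab weighting
+   echo 0 > /sys/module/zfs/parameters/zfs_metaslab_segment_weight_enabled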
+ ++----------------------------------+----------------------------------+ +| zfs | Notes | +| _metaslab_segment_weight_enabled | | ++==================================+==================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__ | ++----------------------------------+----------------------------------+ +| When to change | When testing allocation and | +| | fragmentation | ++----------------------------------+----------------------------------+ +| Data Type | boolean | ++----------------------------------+----------------------------------+ +| Range | 0=do not consider metaslab | +| | fragmentation, 1=avoid metaslabs | +| | where free space is highly | +| | fragmented | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------------+----------------------------------+ + +zfs_metaslab_switch_threshold +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When using segment-based metaslab selection (see +`zfs_metaslab_segment_weight_enabled <#zfs-metaslab-segment-weight-enabled>`__), +continue allocating from the active metaslab until +``zfs_metaslab_switch_threshold`` worth of free space buckets have been +exhausted. + ++-------------------------------+-------------------------------------+ +| zfs_metaslab_switch_threshold | Notes | ++===============================+=====================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__ | ++-------------------------------+-------------------------------------+ +| When to change | When testing allocation and | +| | fragmentation | ++-------------------------------+-------------------------------------+ +| Data Type | uint64 | ++-------------------------------+-------------------------------------+ +| Units | free spaces | ++-------------------------------+-------------------------------------+ +| Range | 0 to UINT64_MAX | ++-------------------------------+-------------------------------------+ +| Default | 2 | ++-------------------------------+-------------------------------------+ +| Change | Dynamic | ++-------------------------------+-------------------------------------+ +| Versions Affected | v0.7.0 and later | ++-------------------------------+-------------------------------------+ + +metaslab_debug_load +~~~~~~~~~~~~~~~~~~~ + +When enabled, all metaslabs are loaded into memory during pool import. +Nominally, metaslab space map information is loaded and unloaded as +needed (see `metaslab_debug_unload <#metaslab-debug-unload>`__) + +It is difficult to predict how much RAM is required to store a space +map. An empty or completely full metaslab has a small space map. +However, a highly fragmented space map can consume significantly more +memory. + +Enabling ``metaslab_debug_load`` can increase pool import time. 
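+
+A hedged example of using this for troubleshooting (``tank`` is a
+placeholder pool name): enable the parameter before importing so that
+all metaslab space maps are loaded during the import::
+
+   echo 1 > /sys/module/zfs/parameters/metaslab_debug_load
+   zpool import tank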
+
++---------------------+-----------------------------------------------+
+| metaslab_debug_load | Notes                                         |
++=====================+===============================================+
+| Tags                | `allocation <#allocation>`__,                 |
+|                     | `memory <#memory>`__,                         |
+|                     | `metaslab <#metaslab>`__                      |
++---------------------+-----------------------------------------------+
+| When to change      | When RAM is plentiful and pool import time is |
+|                     | not a consideration                           |
++---------------------+-----------------------------------------------+
+| Data Type           | boolean                                       |
++---------------------+-----------------------------------------------+
+| Range               | 0=dynamically load metaslab info as needed,   |
+|                     | 1=load all metaslab info at pool import       |
++---------------------+-----------------------------------------------+
+| Default             | 0                                             |
++---------------------+-----------------------------------------------+
+| Change              | Dynamic                                       |
++---------------------+-----------------------------------------------+
+| Versions Affected   | v0.6.4 and later                              |
++---------------------+-----------------------------------------------+
+
+metaslab_debug_unload
+~~~~~~~~~~~~~~~~~~~~~
+
+When enabled, prevents metaslab information from being dynamically
+unloaded from RAM. Nominally, metaslab space map information is loaded
+and unloaded as needed (see
+`metaslab_debug_load <#metaslab-debug-load>`__).
+
+It is difficult to predict how much RAM is required to store a space
+map. An empty or completely full metaslab has a small space map.
+However, a highly fragmented space map can consume significantly more
+memory.
+
+Enabling ``metaslab_debug_unload`` consumes RAM that would otherwise be
+freed.
+
++-----------------------+---------------------------------------------+
+| metaslab_debug_unload | Notes                                       |
++=======================+=============================================+
+| Tags                  | `allocation <#allocation>`__,               |
+|                       | `memory <#memory>`__,                       |
+|                       | `metaslab <#metaslab>`__                    |
++-----------------------+---------------------------------------------+
+| When to change        | When RAM is plentiful and the penalty for   |
+|                       | dynamically reloading metaslab info from    |
+|                       | the pool is high                            |
++-----------------------+---------------------------------------------+
+| Data Type             | boolean                                     |
++-----------------------+---------------------------------------------+
+| Range                 | 0=dynamically unload metaslab info,         |
+|                       | 1=unload metaslab info only upon pool       |
+|                       | export                                      |
++-----------------------+---------------------------------------------+
+| Default               | 0                                           |
++-----------------------+---------------------------------------------+
+| Change                | Dynamic                                     |
++-----------------------+---------------------------------------------+
+| Versions Affected     | v0.6.4 and later                            |
++-----------------------+---------------------------------------------+
+
+metaslab_fragmentation_factor_enabled
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enable use of the fragmentation metric in computing metaslab weights.
+
+In version v0.7.0, if
+`zfs_metaslab_segment_weight_enabled <#zfs-metaslab-segment-weight-enabled>`__
+is enabled, then ``metaslab_fragmentation_factor_enabled`` is ignored.
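+
+To see whether fragmentation is a concern before experimenting with
+this parameter, a hedged sketch (``tank`` is a placeholder pool name)::
+
+   # pool-wide fragmentation of free space
+   zpool list -o name,capacity,fragmentation tank
+   # per-metaslab detail, including fragmentation, via zdb
+   zdb -mm tank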
+ ++----------------------------------+----------------------------------+ +| metas | Notes | +| lab_fragmentation_factor_enabled | | ++==================================+==================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__ | ++----------------------------------+----------------------------------+ +| When to change | To test metaslab fragmentation | ++----------------------------------+----------------------------------+ +| Data Type | boolean | ++----------------------------------+----------------------------------+ +| Range | 0=do not consider metaslab free | +| | space fragmentation, 1=try to | +| | avoid fragmented metaslabs | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------------------+----------------------------------+ + +metaslabs_per_vdev +~~~~~~~~~~~~~~~~~~ + +When a vdev is added, it will be divided into approximately, but no more +than, this number of metaslabs. + ++--------------------+------------------------------------------------+ +| metaslabs_per_vdev | Notes | ++====================+================================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__, `vdev <#vdev>`__ | ++--------------------+------------------------------------------------+ +| When to change | When testing metaslab allocation | ++--------------------+------------------------------------------------+ +| Data Type | uint64 | ++--------------------+------------------------------------------------+ +| Units | metaslabs | ++--------------------+------------------------------------------------+ +| Range | 16 to UINT64_MAX | ++--------------------+------------------------------------------------+ +| Default | 200 | ++--------------------+------------------------------------------------+ +| Change | Prior to pool creation or adding new top-level | +| | vdevs | ++--------------------+------------------------------------------------+ +| Versions Affected | all | ++--------------------+------------------------------------------------+ + +metaslab_preload_enabled +~~~~~~~~~~~~~~~~~~~~~~~~ + +Enable metaslab group preloading. Each top-level vdev has a metaslab +group. By default, up to 3 copies of metadata can exist and are +distributed across multiple top-level vdevs. +``metaslab_preload_enabled`` allows the corresponding metaslabs to be +preloaded, thus improving allocation efficiency. 
+ ++--------------------------+------------------------------------------+ +| metaslab_preload_enabled | Notes | ++==========================+==========================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__ | ++--------------------------+------------------------------------------+ +| When to change | When testing metaslab allocation | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=do not preload metaslab info, | +| | 1=preload up to 3 metaslabs | ++--------------------------+------------------------------------------+ +| Default | 1 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------+------------------------------------------+ + +metaslab_lba_weighting_enabled +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Modern HDDs have uniform bit density and constant angular velocity. +Therefore, the outer recording zones are faster (higher bandwidth) than +the inner zones by the ratio of outer to inner track diameter. The +difference in bandwidth can be 2:1, and is often available in the HDD +detailed specifications or drive manual. For HDDs when +``metaslab_lba_weighting_enabled`` is true, write allocation preference +is given to the metaslabs representing the outer recording zones. Thus +the allocation to metaslabs prefers faster bandwidth over free space. + +If the devices are not rotational, yet misrepresent themselves to the OS +as rotational, then disabling ``metaslab_lba_weighting_enabled`` can +result in more even, free-space-based allocation. + ++--------------------------------+------------------------------------+ +| metaslab_lba_weighting_enabled | Notes | ++================================+====================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__, | +| | `HDD <#hdd>`__, `SSD <#ssd>`__ | ++--------------------------------+------------------------------------+ +| When to change | disable if using only SSDs and | +| | version v0.6.4 or earlier | ++--------------------------------+------------------------------------+ +| Data Type | boolean | ++--------------------------------+------------------------------------+ +| Range | 0=do not use LBA weighting, 1=use | +| | LBA weighting | ++--------------------------------+------------------------------------+ +| Default | 1 | ++--------------------------------+------------------------------------+ +| Change | Dynamic | ++--------------------------------+------------------------------------+ +| Verification | The rotational setting described | +| | by a block device in sysfs by | +| | observing | +| | ``/sys/ | +| | block/DISK_NAME/queue/rotational`` | ++--------------------------------+------------------------------------+ +| Versions Affected | prior to v0.6.5, the check for | +| | non-rotation media did not exist | ++--------------------------------+------------------------------------+ + +spa_config_path +~~~~~~~~~~~~~~~ + +By default, the ``zpool import`` command searches for pool information +in the ``zpool.cache`` file. If the pool to be imported has an entry in +``zpool.cache`` then the devices do not have to be scanned to determine +if they are pool members. The path to the cache file is spa_config_path. 
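+
+A hedged illustration (``tank`` and the paths are placeholders): when
+the cache file is missing or intentionally bypassed, the devices can be
+scanned directly, or an alternate cache file can be specified::
+
+   # scan a device directory instead of relying on zpool.cache
+   zpool import -d /dev/disk/by-id tank
+   # import and record the configuration in an alternate cache file
+   zpool import -o cachefile=/etc/zfs/alternate-zpool.cache tank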
+ +For more information on ``zpool import`` and the ``-o cachefile`` and +``-d`` options, see the man page for zpool(8) + +See also `zfs_autoimport_disable <#zfs-autoimport-disable>`__ + ++-------------------+-------------------------------------------------+ +| spa_config_path | Notes | ++===================+=================================================+ +| Tags | `import <#import>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If creating a non-standard distribution and the | +| | cachefile property is inconvenient | ++-------------------+-------------------------------------------------+ +| Data Type | string | ++-------------------+-------------------------------------------------+ +| Default | ``/etc/zfs/zpool.cache`` | ++-------------------+-------------------------------------------------+ +| Change | Dynamic, applies only to the next invocation of | +| | ``zpool import`` | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +spa_asize_inflation +~~~~~~~~~~~~~~~~~~~ + +Multiplication factor used to estimate actual disk consumption from the +size of data being written. The default value is a worst case estimate, +but lower values may be valid for a given pool depending on its +configuration. Pool administrators who understand the factors involved +may wish to specify a more realistic inflation factor, particularly if +they operate close to quota or capacity limits. + +The worst case space requirement for allocation is single-sector +max-parity RAIDZ blocks, in which case the space requirement is exactly +4 times the size, accounting for a maximum of 3 parity blocks. This is +added to the maximum number of ZFS ``copies`` parameter (copies max=3). +Additional space is required if the block could impact deduplication +tables. Altogether, the worst case is 24. + +If the estimation is not correct, then quotas or out-of-space conditions +can lead to optimistic expectations of the ability to allocate. +Applications are typically not prepared to deal with such failures and +can misbehave. + ++---------------------+-----------------------------------------------+ +| spa_asize_inflation | Notes | ++=====================+===============================================+ +| Tags | `allocation <#allocation>`__, `SPA <#spa>`__ | ++---------------------+-----------------------------------------------+ +| When to change | If the allocation requirements for the | +| | workload are well known and quotas are used | ++---------------------+-----------------------------------------------+ +| Data Type | uint64 | ++---------------------+-----------------------------------------------+ +| Units | unit | ++---------------------+-----------------------------------------------+ +| Range | 1 to 24 | ++---------------------+-----------------------------------------------+ +| Default | 24 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.6.3 and later | ++---------------------+-----------------------------------------------+ + +spa_load_verify_data +~~~~~~~~~~~~~~~~~~~~ + +An extreme rewind import (see ``zpool import -X``) normally performs a +full traversal of all blocks in the pool for verification. If this +parameter is set to 0, the traversal skips non-metadata blocks. 
It can +be toggled once the import has started to stop or start the traversal of +non-metadata blocks. See also +`spa_load_verify_metadata <#spa-load-verify-metadata>`__. + ++----------------------+----------------------------------------------+ +| spa_load_verify_data | Notes | ++======================+==============================================+ +| Tags | `allocation <#allocation>`__, `SPA <#spa>`__ | ++----------------------+----------------------------------------------+ +| When to change | At the risk of data integrity, to speed | +| | extreme import of large pool | ++----------------------+----------------------------------------------+ +| Data Type | boolean | ++----------------------+----------------------------------------------+ +| Range | 0=do not verify data upon pool import, | +| | 1=verify pool data upon import | ++----------------------+----------------------------------------------+ +| Default | 1 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------+----------------------------------------------+ + +spa_load_verify_metadata +~~~~~~~~~~~~~~~~~~~~~~~~ + +An extreme rewind import (see ``zpool import -X``) normally performs a +full traversal of all blocks in the pool for verification. If this +parameter is set to 0, the traversal is not performed. It can be toggled +once the import has started to stop or start the traversal. See +`spa_load_verify_data <#spa-load-verify-data>`__ + ++--------------------------+------------------------------------------+ +| spa_load_verify_metadata | Notes | ++==========================+==========================================+ +| Tags | `import <#import>`__ | ++--------------------------+------------------------------------------+ +| When to change | At the risk of data integrity, to speed | +| | extreme import of large pool | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=do not verify metadata upon pool | +| | import, 1=verify pool metadata upon | +| | import | ++--------------------------+------------------------------------------+ +| Default | 1 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------+------------------------------------------+ + +spa_load_verify_maxinflight +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Maximum number of concurrent I/Os during the data verification performed +during an extreme rewind import (see ``zpool import -X``) + ++-----------------------------+---------------------------------------+ +| spa_load_verify_maxinflight | Notes | ++=============================+=======================================+ +| Tags | `import <#import>`__ | ++-----------------------------+---------------------------------------+ +| When to change | During an extreme rewind import, to | +| | match the concurrent I/O capabilities | +| | of the pool devices | ++-----------------------------+---------------------------------------+ +| Data Type | int | ++-----------------------------+---------------------------------------+ +| Units | I/Os | ++-----------------------------+---------------------------------------+ +| Range | 1 to MAX_INT | 
++-----------------------------+---------------------------------------+
+| Default                     | 10,000                                |
++-----------------------------+---------------------------------------+
+| Change                      | Dynamic                               |
++-----------------------------+---------------------------------------+
+| Versions Affected           | v0.6.4 and later                      |
++-----------------------------+---------------------------------------+
+
+spa_slop_shift
+~~~~~~~~~~~~~~
+
+Normally, the last 3.2% (1/(2^\ ``spa_slop_shift``)) of pool space is
+reserved to ensure the pool doesn't run completely out of space, due to
+unaccounted changes (e.g. to the MOS). This also limits the worst-case
+time to allocate space. When less than this amount of free space exists,
+most ZPL operations (e.g. write, create) return ENOSPC (no space).
+
+Changing spa_slop_shift affects the currently loaded ZFS module and all
+imported pools. spa_slop_shift is not stored on disk. Beware that
+importing full pools on systems with a larger spa_slop_shift can lead
+to over-full conditions.
+
+The minimum SPA slop space is limited to 128 MiB.
+The maximum SPA slop space is limited to 128 GiB.
+
++-------------------+-------------------------------------------------+
+| spa_slop_shift    | Notes                                           |
++===================+=================================================+
+| Tags              | `allocation <#allocation>`__, `SPA <#spa>`__    |
++-------------------+-------------------------------------------------+
+| When to change    | For large pools, when 3.2% may be too           |
+|                   | conservative and more usable space is desired,  |
+|                   | consider increasing ``spa_slop_shift``          |
++-------------------+-------------------------------------------------+
+| Data Type         | int                                             |
++-------------------+-------------------------------------------------+
+| Units             | shift                                           |
++-------------------+-------------------------------------------------+
+| Range             | 1 to MAX_INT, however the practical upper limit |
+|                   | is 15 for a system with 4TB of RAM              |
++-------------------+-------------------------------------------------+
+| Default           | 5                                               |
++-------------------+-------------------------------------------------+
+| Change            | Dynamic                                         |
++-------------------+-------------------------------------------------+
+| Versions Affected | v0.6.5 and later (max. slop space since v2.1.0) |
++-------------------+-------------------------------------------------+
+
+zfetch_array_rd_sz
+~~~~~~~~~~~~~~~~~~
+
+If prefetching is enabled, do not prefetch blocks larger than
+``zfetch_array_rd_sz`` size.
+
+================== =================================================
+zfetch_array_rd_sz Notes
+================== =================================================
+Tags               `prefetch <#prefetch>`__
+When to change     To allow prefetching when using large block sizes
+Data Type          unsigned long
+Units              bytes
+Range              0 to MAX_ULONG
+Default            1,048,576 (1 MiB)
+Change             Dynamic
+Versions Affected  all
+================== =================================================
+
+zfetch_max_distance
+~~~~~~~~~~~~~~~~~~~
+
+Limits the maximum number of bytes to prefetch per stream.
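+
+Before raising this limit, it can help to confirm that prefetch is
+actually being used. A hedged sketch (the kstat and parameter names are
+real; the value is an example only)::
+
+   # prefetch hit/miss counters
+   cat /proc/spl/kstat/zfs/zfetchstats
+   # example only: allow up to 16 MiB of prefetch per stream
+   echo 16777216 > /sys/module/zfs/parameters/zfetch_max_distance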
+ ++---------------------+-----------------------------------------------+ +| zfetch_max_distance | Notes | ++=====================+===============================================+ +| Tags | `prefetch <#prefetch>`__ | ++---------------------+-----------------------------------------------+ +| When to change | Consider increasing read workloads that use | +| | large blocks and exhibit high prefetch hit | +| | ratios | ++---------------------+-----------------------------------------------+ +| Data Type | uint | ++---------------------+-----------------------------------------------+ +| Units | bytes | ++---------------------+-----------------------------------------------+ +| Range | 0 to UINT_MAX | ++---------------------+-----------------------------------------------+ +| Default | 8,388,608 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.7.0 | ++---------------------+-----------------------------------------------+ + +zfetch_max_streams +~~~~~~~~~~~~~~~~~~ + +Maximum number of prefetch streams per file. + +For version v0.7.0 and later, when prefetching small files the number of +prefetch streams is automatically reduced below to prevent the streams +from overlapping. + ++--------------------+------------------------------------------------+ +| zfetch_max_streams | Notes | ++====================+================================================+ +| Tags | `prefetch <#prefetch>`__ | ++--------------------+------------------------------------------------+ +| When to change | If the workload benefits from prefetching and | +| | has more than ``zfetch_max_streams`` | +| | concurrent reader threads | ++--------------------+------------------------------------------------+ +| Data Type | uint | ++--------------------+------------------------------------------------+ +| Units | streams | ++--------------------+------------------------------------------------+ +| Range | 1 to MAX_UINT | ++--------------------+------------------------------------------------+ +| Default | 8 | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| Versions Affected | all | ++--------------------+------------------------------------------------+ + +zfetch_min_sec_reap +~~~~~~~~~~~~~~~~~~~ + +Prefetch streams that have been accessed in ``zfetch_min_sec_reap`` +seconds are automatically stopped. + +=================== =========================== +zfetch_min_sec_reap Notes +=================== =========================== +Tags `prefetch <#prefetch>`__ +When to change To test prefetch efficiency +Data Type uint +Units seconds +Range 0 to MAX_UINT +Default 2 +Change Dynamic +Versions Affected all +=================== =========================== + +zfs_arc_dnode_limit_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Percentage of ARC metadata space that can be used for dnodes. + +The value calculated for ``zfs_arc_dnode_limit_percent`` can be +overridden by `zfs_arc_dnode_limit <#zfs-arc-dnode-limit>`__. 
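+
+A hedged way to check whether dnodes are pressuring the ARC before
+changing this parameter (field names may vary slightly by version)::
+
+   # compare dnode_size against arc_dnode_limit
+   grep dnode /proc/spl/kstat/zfs/arcstats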
+
++-----------------------------+---------------------------------------+
+| zfs_arc_dnode_limit_percent | Notes                                 |
++=============================+=======================================+
+| Tags                        | `ARC <#arc>`__                        |
++-----------------------------+---------------------------------------+
+| When to change              | Consider increasing if ``arc_prune``  |
+|                             | is using excessive system time and    |
+|                             | ``/proc/spl/kstat/zfs/arcstats``      |
+|                             | shows ``arc_dnode_size`` is near or   |
+|                             | over ``arc_dnode_limit``              |
++-----------------------------+---------------------------------------+
+| Data Type                   | int                                   |
++-----------------------------+---------------------------------------+
+| Units                       | percent of arc_meta_limit             |
++-----------------------------+---------------------------------------+
+| Range                       | 0 to 100                              |
++-----------------------------+---------------------------------------+
+| Default                     | 10                                    |
++-----------------------------+---------------------------------------+
+| Change                      | Dynamic                               |
++-----------------------------+---------------------------------------+
+| Versions Affected           | v0.7.0 and later                      |
++-----------------------------+---------------------------------------+
+
+zfs_arc_dnode_limit
+~~~~~~~~~~~~~~~~~~~
+
+When the number of bytes consumed by dnodes in the ARC exceeds
+``zfs_arc_dnode_limit`` bytes, demand for non-metadata can reclaim
+space consumed by dnodes.
+
+The default value of 0 indicates that the limit is instead calculated
+as `zfs_arc_dnode_limit_percent <#zfs-arc-dnode-limit-percent>`__
+percent of the ARC metadata space.
+
+``zfs_arc_dnode_limit`` plays a role for dnodes similar to the role
+`zfs_arc_meta_prune <#zfs-arc-meta-prune>`__ plays for metadata.
+
++---------------------+-----------------------------------------------+
+| zfs_arc_dnode_limit | Notes                                         |
++=====================+===============================================+
+| Tags                | `ARC <#arc>`__                                |
++---------------------+-----------------------------------------------+
+| When to change      | Consider increasing if ``arc_prune`` is using |
+|                     | excessive system time and                     |
+|                     | ``/proc/spl/kstat/zfs/arcstats`` shows        |
+|                     | ``arc_dnode_size`` is near or over            |
+|                     | ``arc_dnode_limit``                           |
++---------------------+-----------------------------------------------+
+| Data Type           | uint64                                        |
++---------------------+-----------------------------------------------+
+| Units               | bytes                                         |
++---------------------+-----------------------------------------------+
+| Range               | 0 to MAX_UINT64                               |
++---------------------+-----------------------------------------------+
+| Default             | 0 (uses                                       |
+|                     | `zfs_arc_dnode_lim                            |
+|                     | it_percent <#zfs-arc-dnode-limit-percent>`__) |
++---------------------+-----------------------------------------------+
+| Change              | Dynamic                                       |
++---------------------+-----------------------------------------------+
+| Versions Affected   | v0.7.0 and later                              |
++---------------------+-----------------------------------------------+
+
+zfs_arc_dnode_reduce_percent
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Percentage of ARC dnodes to try to evict in response to demand for
+non-metadata when the number of bytes consumed by dnodes exceeds
+`zfs_arc_dnode_limit <#zfs-arc-dnode-limit>`__.
+ ++------------------------------+--------------------------------------+ +| zfs_arc_dnode_reduce_percent | Notes | ++==============================+======================================+ +| Tags | `ARC <#arc>`__ | ++------------------------------+--------------------------------------+ +| When to change | Testing dnode cache efficiency | ++------------------------------+--------------------------------------+ +| Data Type | uint64 | ++------------------------------+--------------------------------------+ +| Units | percent of size of dnode space used | +| | above | +| | `zfs_arc_d | +| | node_limit <#zfs-arc-dnode-limit>`__ | ++------------------------------+--------------------------------------+ +| Range | 0 to 100 | ++------------------------------+--------------------------------------+ +| Default | 10 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.7.0 and later | ++------------------------------+--------------------------------------+ + +zfs_arc_average_blocksize +~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ARC's buffer hash table is sized based on the assumption of an +average block size of ``zfs_arc_average_blocksize``. The default of 8 +KiB uses approximately 1 MiB of hash table per 1 GiB of physical memory +with 8-byte pointers. + ++---------------------------+-----------------------------------------+ +| zfs_arc_average_blocksize | Notes | ++===========================+=========================================+ +| Tags | `ARC <#arc>`__, `memory <#memory>`__ | ++---------------------------+-----------------------------------------+ +| When to change | For workloads where the known average | +| | blocksize is larger, increasing | +| | ``zfs_arc_average_blocksize`` can | +| | reduce memory usage | ++---------------------------+-----------------------------------------+ +| Data Type | int | ++---------------------------+-----------------------------------------+ +| Units | bytes | ++---------------------------+-----------------------------------------+ +| Range | 512 to 16,777,216 | ++---------------------------+-----------------------------------------+ +| Default | 8,192 | ++---------------------------+-----------------------------------------+ +| Change | Prior to zfs module load | ++---------------------------+-----------------------------------------+ +| Versions Affected | all | ++---------------------------+-----------------------------------------+ + +zfs_arc_evict_batch_limit +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Number ARC headers to evict per sublist before proceeding to another +sublist. This batch-style operation prevents entire sublists from being +evicted at once but comes at a cost of additional unlocking and locking. + +========================= ============================== +zfs_arc_evict_batch_limit Notes +========================= ============================== +Tags `ARC <#arc>`__ +When to change Testing ARC multilist features +Data Type int +Units count of ARC headers +Range 1 to INT_MAX +Default 10 +Change Dynamic +Versions Affected v0.6.5 and later +========================= ============================== + +zfs_arc_grow_retry +~~~~~~~~~~~~~~~~~~ + +When the ARC is shrunk due to memory demand, do not retry growing the +ARC for ``zfs_arc_grow_retry`` seconds. This operates as a damper to +prevent oscillating grow/shrink cycles when there is memory pressure. 
+ +If ``zfs_arc_grow_retry`` = 0, the internal default of 5 seconds is +used. + +================== ==================================== +zfs_arc_grow_retry Notes +================== ==================================== +Tags `ARC <#arc>`__, `memory <#memory>`__ +When to change TBD +Data Type int +Units seconds +Range 1 to MAX_INT +Default 0 +Change Dynamic +Versions Affected v0.6.5 and later +================== ==================================== + +zfs_arc_lotsfree_percent +~~~~~~~~~~~~~~~~~~~~~~~~ + +Throttle ARC memory consumption, effectively throttling I/O, when free +system memory drops below this percentage of total system memory. +Setting ``zfs_arc_lotsfree_percent`` to 0 disables the throttle. + +The arcstat_memory_throttle_count counter in +``/proc/spl/kstat/arcstats`` can indicate throttle activity. + +======================== ==================================== +zfs_arc_lotsfree_percent Notes +======================== ==================================== +Tags `ARC <#arc>`__, `memory <#memory>`__ +When to change TBD +Data Type int +Units percent +Range 0 to 100 +Default 10 +Change Dynamic +Versions Affected v0.6.5 and later +======================== ==================================== + +zfs_arc_max +~~~~~~~~~~~ + +Maximum size of ARC in bytes. + +If set to 0 then the maximum size of ARC +is determined by the amount of system memory installed: + +* **Linux**: 1/2 of system memory +* **FreeBSD**: the larger of ``all_system_memory - 1GB`` and ``5/8 × all_system_memory`` + +``zfs_arc_max`` can be changed dynamically with some caveats. It cannot +be set back to 0 while running and reducing it below the current ARC +size will not cause the ARC to shrink without memory pressure to induce +shrinking. + ++-------------------+-------------------------------------------------+ +| zfs_arc_max | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `memory <#memory>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Reduce if ARC competes too much with other | +| | applications, increase if ZFS is the primary | +| | application and can use more RAM | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 67,108,864 to RAM size in bytes | ++-------------------+-------------------------------------------------+ +| Default | 0 (see description above, OS-dependent) | ++-------------------+-------------------------------------------------+ +| Change | Dynamic (see description above) | ++-------------------+-------------------------------------------------+ +| Verification | ``c`` column in ``arcstats.py`` or | +| | ``/proc/spl/kstat/zfs/arcstats`` entry | +| | ``c_max`` | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +zfs_arc_meta_adjust_restarts +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The number of restart passes to make while scanning the ARC attempting +the free buffers in order to stay below the +`zfs_arc_meta_limit <#zfs-arc-meta-limit>`__. 
+ +============================ ======================================= +zfs_arc_meta_adjust_restarts Notes +============================ ======================================= +Tags `ARC <#arc>`__ +When to change Testing ARC metadata adjustment feature +Data Type int +Units restarts +Range 0 to INT_MAX +Default 4,096 +Change Dynamic +Versions Affected v0.6.5 and later +============================ ======================================= + +zfs_arc_meta_limit +~~~~~~~~~~~~~~~~~~ + +Sets the maximum allowed size metadata buffers in the ARC. When +`zfs_arc_meta_limit <#zfs-arc-meta-limit>`__ is reached metadata buffers +are reclaimed, even if the overall ``c_max`` has not been reached. + +In version v0.7.0, with a default value = 0, +``zfs_arc_meta_limit_percent`` is used to set ``arc_meta_limit`` + ++--------------------+------------------------------------------------+ +| zfs_arc_meta_limit | Notes | ++====================+================================================+ +| Tags | `ARC <#arc>`__ | ++--------------------+------------------------------------------------+ +| When to change | For workloads where the metadata to data ratio | +| | in the ARC can be changed to improve ARC hit | +| | rates | ++--------------------+------------------------------------------------+ +| Data Type | uint64 | ++--------------------+------------------------------------------------+ +| Units | bytes | ++--------------------+------------------------------------------------+ +| Range | 0 to ``c_max`` | ++--------------------+------------------------------------------------+ +| Default | 0 | ++--------------------+------------------------------------------------+ +| Change | Dynamic, except that it cannot be set back to | +| | 0 for a specific percent of the ARC; it must | +| | be set to an explicit value | ++--------------------+------------------------------------------------+ +| Verification | ``/proc/spl/kstat/zfs/arcstats`` entry | +| | ``arc_meta_limit`` | ++--------------------+------------------------------------------------+ +| Versions Affected | all | ++--------------------+------------------------------------------------+ + +zfs_arc_meta_limit_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Sets the limit to ARC metadata, ``arc_meta_limit``, as a percentage of +the maximum size target of the ARC, ``c_max`` + +Prior to version v0.7.0, the +`zfs_arc_meta_limit <#zfs-arc-meta-limit>`__ was used to set the limit +as a fixed size. ``zfs_arc_meta_limit_percent`` provides a more +convenient interface for setting the limit. 
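+
+A minimal, hedged sketch of using the percentage interface and
+verifying the result (the value is an example, not a recommendation)::
+
+   # example only: allow metadata to use up to half of the ARC target
+   echo 50 > /sys/module/zfs/parameters/zfs_arc_meta_limit_percent
+   # confirm the computed limit
+   grep arc_meta_limit /proc/spl/kstat/zfs/arcstats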
+
++----------------------------+----------------------------------------+
+| zfs_arc_meta_limit_percent | Notes                                  |
++============================+========================================+
+| Tags                       | `ARC <#arc>`__                         |
++----------------------------+----------------------------------------+
+| When to change             | For workloads where the metadata to    |
+|                            | data ratio in the ARC can be changed   |
+|                            | to improve ARC hit rates               |
++----------------------------+----------------------------------------+
+| Data Type                  | uint64                                 |
++----------------------------+----------------------------------------+
+| Units                      | percent of ``c_max``                   |
++----------------------------+----------------------------------------+
+| Range                      | 0 to 100                               |
++----------------------------+----------------------------------------+
+| Default                    | 75                                     |
++----------------------------+----------------------------------------+
+| Change                     | Dynamic                                |
++----------------------------+----------------------------------------+
+| Verification               | ``/proc/spl/kstat/zfs/arcstats`` entry |
+|                            | ``arc_meta_limit``                     |
++----------------------------+----------------------------------------+
+| Versions Affected          | v0.7.0 and later                       |
++----------------------------+----------------------------------------+
+
+zfs_arc_meta_min
+~~~~~~~~~~~~~~~~
+
+The minimum allowed size in bytes that metadata buffers may consume in
+the ARC. This value defaults to 0, which disables a floor on the amount
+of the ARC devoted to metadata.
+
+When evicting data from the ARC, if the ``metadata_size`` is less than
+``arc_meta_min`` then data is evicted instead of metadata.
+
++-------------------+---------------------------------------------------------+
+| zfs_arc_meta_min  | Notes                                                   |
++===================+=========================================================+
+| Tags              | `ARC <#arc>`__                                          |
++-------------------+---------------------------------------------------------+
+| When to change    |                                                         |
++-------------------+---------------------------------------------------------+
+| Data Type         | uint64                                                  |
++-------------------+---------------------------------------------------------+
+| Units             | bytes                                                   |
++-------------------+---------------------------------------------------------+
+| Range             | 16,777,216 to ``c_max``                                 |
++-------------------+---------------------------------------------------------+
+| Default           | 0 (use internal default 16 MiB)                         |
++-------------------+---------------------------------------------------------+
+| Change            | Dynamic                                                 |
++-------------------+---------------------------------------------------------+
+| Verification      | ``/proc/spl/kstat/zfs/arcstats`` entry ``arc_meta_min`` |
++-------------------+---------------------------------------------------------+
+| Versions Affected | all                                                     |
++-------------------+---------------------------------------------------------+
+
+zfs_arc_meta_prune
+~~~~~~~~~~~~~~~~~~
+
+``zfs_arc_meta_prune`` sets the number of dentries and znodes to be
+scanned looking for entries which can be dropped. This provides a
+mechanism to ensure the ARC can honor the ``arc_meta_limit`` and reclaim
+otherwise pinned ARC buffers. Pruning may be required when the ARC size
+drops to ``arc_meta_limit`` because dentries and znodes can pin buffers
+in the ARC. Increasing this value will cause the dentry and znode caches
+to be pruned more aggressively and the arc_prune thread to become more
+active. Setting ``zfs_arc_meta_prune`` to 0 will disable pruning.
+ ++--------------------+------------------------------------------------+ +| zfs_arc_meta_prune | Notes | ++====================+================================================+ +| Tags | `ARC <#arc>`__ | ++--------------------+------------------------------------------------+ +| When to change | TBD | ++--------------------+------------------------------------------------+ +| Data Type | uint64 | ++--------------------+------------------------------------------------+ +| Units | entries | ++--------------------+------------------------------------------------+ +| Range | 0 to INT_MAX | ++--------------------+------------------------------------------------+ +| Default | 10,000 | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| ! Verification | Prune activity is counted by the | +| | ``/proc/spl/kstat/zfs/arcstats`` entry | +| | ``arc_prune`` | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++--------------------+------------------------------------------------+ + +zfs_arc_meta_strategy +~~~~~~~~~~~~~~~~~~~~~ + +Defines the strategy for ARC metadata eviction (meta reclaim strategy). +A value of 0 (META_ONLY) will evict only the ARC metadata. A value of 1 +(BALANCED) indicates that additional data may be evicted if required in +order to evict the requested amount of metadata. + ++-----------------------+---------------------------------------------+ +| zfs_arc_meta_strategy | Notes | ++=======================+=============================================+ +| Tags | `ARC <#arc>`__ | ++-----------------------+---------------------------------------------+ +| When to change | Testing ARC metadata eviction | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | enum | ++-----------------------+---------------------------------------------+ +| Range | 0=evict metadata only, 1=also evict data | +| | buffers if they can free metadata buffers | +| | for eviction | ++-----------------------+---------------------------------------------+ +| Default | 1 (BALANCED) | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++-----------------------+---------------------------------------------+ + +zfs_arc_min +~~~~~~~~~~~ + +Minimum ARC size limit. When the ARC is asked to shrink, it will stop +shrinking at ``c_min`` as tuned by ``zfs_arc_min``. 
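+
+A hedged sketch of pinning the ARC between a floor and a ceiling and
+verifying the result (the sizes are examples only; see ``zfs_arc_max``
+above for the companion ceiling)::
+
+   # example only: 1 GiB floor, 8 GiB ceiling
+   echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_min
+   echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
+   # confirm c_min and c_max
+   awk '$1 == "c_min" || $1 == "c_max"' /proc/spl/kstat/zfs/arcstats
+
+To make such settings persistent across reboots, they are typically
+placed in a modprobe configuration file, for example
+``options zfs zfs_arc_min=1073741824 zfs_arc_max=8589934592`` in
+``/etc/modprobe.d/zfs.conf``.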
+ ++-------------------+-------------------------------------------------+ +| zfs_arc_min | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If the primary focus of the system is ZFS, then | +| | increasing can ensure the ARC gets a minimum | +| | amount of RAM | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 33,554,432 to ``c_max`` | ++-------------------+-------------------------------------------------+ +| Default | For kernel: greater of 33,554,432 (32 MiB) and | +| | memory size / 32. For user-land: greater of | +| | 33,554,432 (32 MiB) and ``c_max`` / 2. | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Verification | ``/proc/spl/kstat/zfs/arcstats`` entry | +| | ``c_min`` | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +zfs_arc_min_prefetch_ms +~~~~~~~~~~~~~~~~~~~~~~~ + +Minimum time prefetched blocks are locked in the ARC. + +A value of 0 represents the default of 1 second. However, once changed, +dynamically setting to 0 will not return to the default. + +======================= ======================================== +zfs_arc_min_prefetch_ms Notes +======================= ======================================== +Tags `ARC <#arc>`__, `prefetch <#prefetch>`__ +When to change TBD +Data Type int +Units milliseconds +Range 1 to INT_MAX +Default 0 (use internal default of 1000 ms) +Change Dynamic +Versions Affected v0.8.0 and later +======================= ======================================== + +zfs_arc_min_prescient_prefetch_ms +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Minimum time "prescient prefetched" blocks are locked in the ARC. These +blocks are meant to be prefetched fairly aggressively ahead of the code +that may use them. + +A value of 0 represents the default of 6 seconds. However, once changed, +dynamically setting to 0 will not return to the default. 
+ ++----------------------------------+----------------------------------+ +| z | Notes | +| fs_arc_min_prescient_prefetch_ms | | ++==================================+==================================+ +| Tags | `ARC <#arc>`__, | +| | `prefetch <#prefetch>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | milliseconds | ++----------------------------------+----------------------------------+ +| Range | 1 to INT_MAX | ++----------------------------------+----------------------------------+ +| Default | 0 (use internal default of 6000 | +| | ms) | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.8.0 and later | ++----------------------------------+----------------------------------+ + +zfs_multilist_num_sublists +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To allow more fine-grained locking, each ARC state contains a series of +lists (sublists) for both data and metadata objects. Locking is +performed at the sublist level. This parameters controls the number of +sublists per ARC state, and also applies to other uses of the multilist +data structure. + ++----------------------------+----------------------------------------+ +| zfs_multilist_num_sublists | Notes | ++============================+========================================+ +| Tags | `ARC <#arc>`__ | ++----------------------------+----------------------------------------+ +| When to change | TBD | ++----------------------------+----------------------------------------+ +| Data Type | int | ++----------------------------+----------------------------------------+ +| Units | lists | ++----------------------------+----------------------------------------+ +| Range | 1 to INT_MAX | ++----------------------------+----------------------------------------+ +| Default | 0 (internal value is greater of number | +| | of online CPUs or 4) | ++----------------------------+----------------------------------------+ +| Change | Prior to zfs module load | ++----------------------------+----------------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------+----------------------------------------+ + +zfs_arc_overflow_shift +~~~~~~~~~~~~~~~~~~~~~~ + +The ARC size is considered to be overflowing if it exceeds the current +ARC target size (``/proc/spl/kstat/zfs/arcstats`` entry ``c``) by a +threshold determined by ``zfs_arc_overflow_shift``. The threshold is +calculated as a fraction of c using the formula: (ARC target size) +``c >> zfs_arc_overflow_shift`` + +The default value of 8 causes the ARC to be considered to be overflowing +if it exceeds the target size by 1/256th (0.3%) of the target size. + +When the ARC is overflowing, new buffer allocations are stalled until +the reclaim thread catches up and the overflow condition no longer +exists. 
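+
+As a hedged, worked example (assuming the default
+``zfs_arc_overflow_shift`` of 8), the current overflow threshold can be
+derived from the ARC target size ``c``::
+
+   # threshold = c >> 8, i.e. 1/256th of the ARC target size
+   awk '$1 == "c" { printf "overflow threshold: %d bytes\n", $3 / 256 }' \
+       /proc/spl/kstat/zfs/arcstats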
+ +====================== ================ +zfs_arc_overflow_shift Notes +====================== ================ +Tags `ARC <#arc>`__ +When to change TBD +Data Type int +Units shift +Range 1 to INT_MAX +Default 8 +Change Dynamic +Versions Affected v0.6.5 and later +====================== ================ + +zfs_arc_p_min_shift +~~~~~~~~~~~~~~~~~~~ + +arc_p_min_shift is used to shift of ARC target size +(``/proc/spl/kstat/zfs/arcstats`` entry ``c``) for calculating both +minimum and maximum most recently used (MRU) target size +(``/proc/spl/kstat/zfs/arcstats`` entry ``p``) + +A value of 0 represents the default setting of ``arc_p_min_shift`` = 4. +However, once changed, dynamically setting ``zfs_arc_p_min_shift`` to 0 +will not return to the default. + ++---------------------+-----------------------------------------------+ +| zfs_arc_p_min_shift | Notes | ++=====================+===============================================+ +| Tags | `ARC <#arc>`__ | ++---------------------+-----------------------------------------------+ +| When to change | TBD | ++---------------------+-----------------------------------------------+ +| Data Type | int | ++---------------------+-----------------------------------------------+ +| Units | shift | ++---------------------+-----------------------------------------------+ +| Range | 1 to INT_MAX | ++---------------------+-----------------------------------------------+ +| Default | 0 (internal default = 4) | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Verification | Observe changes to | +| | ``/proc/spl/kstat/zfs/arcstats`` entry ``p`` | ++---------------------+-----------------------------------------------+ +| Versions Affected | all | ++---------------------+-----------------------------------------------+ + +zfs_arc_p_dampener_disable +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When data is being added to the ghost lists, the MRU target size is +adjusted. The amount of adjustment is based on the ratio of the MRU/MFU +sizes. When enabled, the ratio is capped to 10, avoiding large +adjustments. + ++----------------------------+----------------------------------------+ +| zfs_arc_p_dampener_disable | Notes | ++============================+========================================+ +| Tags | `ARC <#arc>`__ | ++----------------------------+----------------------------------------+ +| When to change | Testing ARC ghost list behaviour | ++----------------------------+----------------------------------------+ +| Data Type | boolean | ++----------------------------+----------------------------------------+ +| Range | 0=avoid large adjustments, 1=permit | +| | large adjustments | ++----------------------------+----------------------------------------+ +| Default | 1 | ++----------------------------+----------------------------------------+ +| Change | Dynamic | ++----------------------------+----------------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------------+----------------------------------------+ + +zfs_arc_shrink_shift +~~~~~~~~~~~~~~~~~~~~ + +``arc_shrink_shift`` is used to adjust the ARC target sizes when large +reduction is required. The current ARC target size, ``c``, and MRU size +``p`` can be reduced by by the current ``size >> arc_shrink_shift``. For +the default value of 7, this reduces the target by approximately 0.8%. + +A value of 0 represents the default setting of arc_shrink_shift = 7. 
+However, once changed, dynamically setting arc_shrink_shift to 0 will +not return to the default. + ++----------------------+----------------------------------------------+ +| zfs_arc_shrink_shift | Notes | ++======================+==============================================+ +| Tags | `ARC <#arc>`__, `memory <#memory>`__ | ++----------------------+----------------------------------------------+ +| When to change | During memory shortfall, reducing | +| | ``zfs_arc_shrink_shift`` increases the rate | +| | of ARC shrinkage | ++----------------------+----------------------------------------------+ +| Data Type | int | ++----------------------+----------------------------------------------+ +| Units | shift | ++----------------------+----------------------------------------------+ +| Range | 1 to INT_MAX | ++----------------------+----------------------------------------------+ +| Default | 0 (``arc_shrink_shift`` = 7) | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | all | ++----------------------+----------------------------------------------+ + +zfs_arc_pc_percent +~~~~~~~~~~~~~~~~~~ + +``zfs_arc_pc_percent`` allows ZFS arc to play more nicely with the +kernel's LRU pagecache. It can guarantee that the arc size won't +collapse under scanning pressure on the pagecache, yet still allows arc +to be reclaimed down to zfs_arc_min if necessary. This value is +specified as percent of pagecache size (as measured by +``NR_FILE_PAGES``) where that percent may exceed 100. This only operates +during memory pressure/reclaim. + ++--------------------+------------------------------------------------+ +| zfs_arc_pc_percent | Notes | ++====================+================================================+ +| Tags | `ARC <#arc>`__, `memory <#memory>`__ | ++--------------------+------------------------------------------------+ +| When to change | When using file systems under memory | +| | shortfall, if the page scanner causes the ARC | +| | to shrink too fast, then adjusting | +| | ``zfs_arc_pc_percent`` can reduce the shrink | +| | rate | ++--------------------+------------------------------------------------+ +| Data Type | int | ++--------------------+------------------------------------------------+ +| Units | percent | ++--------------------+------------------------------------------------+ +| Range | 0 to 100 | ++--------------------+------------------------------------------------+ +| Default | 0 (disabled) | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------+------------------------------------------------+ + +zfs_arc_sys_free +~~~~~~~~~~~~~~~~ + +``zfs_arc_sys_free`` is the target number of bytes the ARC should leave +as free memory on the system. Defaults to the larger of 1/64 of physical +memory or 512K. Setting this option to a non-zero value will override +the default. + +A value of 0 represents the default setting of larger of 1/64 of +physical memory or 512 KiB. However, once changed, dynamically setting +zfs_arc_sys_free to 0 will not return to the default. 
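+
+A hedged sketch for estimating the default reservation and overriding
+it (the override value is an example only)::
+
+   # default target: the larger of 1/64 of physical memory or 512 KiB
+   awk '/MemTotal/ { printf "%d bytes\n", ($2 * 1024) / 64 }' /proc/meminfo
+   # example only: keep roughly 2 GiB free for the rest of the system
+   echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_sys_free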
+ ++-------------------+-------------------------------------------------+ +| zfs_arc_sys_free | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `memory <#memory>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Change if more free memory is desired as a | +| | margin against memory demand by applications | ++-------------------+-------------------------------------------------+ +| Data Type | ulong | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 0 to ULONG_MAX | ++-------------------+-------------------------------------------------+ +| Default | 0 (default to larger of 1/64 of physical memory | +| | or 512 KiB) | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++-------------------+-------------------------------------------------+ + +zfs_autoimport_disable +~~~~~~~~~~~~~~~~~~~~~~ + +Disable reading zpool.cache file (see +`spa_config_path <#spa-config-path>`__) when loading the zfs module. + ++------------------------+--------------------------------------------+ +| zfs_autoimport_disable | Notes | ++========================+============================================+ +| Tags | `import <#import>`__ | ++------------------------+--------------------------------------------+ +| When to change | Leave as default so that zfs behaves as | +| | other Linux kernel modules | ++------------------------+--------------------------------------------+ +| Data Type | boolean | ++------------------------+--------------------------------------------+ +| Range | 0=read ``zpool.cache`` at module load, | +| | 1=do not read ``zpool.cache`` at module | +| | load | ++------------------------+--------------------------------------------+ +| Default | 1 | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++------------------------+--------------------------------------------+ + +zfs_commit_timeout_pct +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_commit_timeout_pct`` controls the amount of time that a log (ZIL) +write block (lwb) remains "open" when it isn't "full" and it has a +thread waiting to commit to stable storage. The timeout is scaled based +on a percentage of the last lwb latency to avoid significantly impacting +the latency of each individual intent log transaction (itx). + +====================== ============== +zfs_commit_timeout_pct Notes +====================== ============== +Tags `ZIL <#zil>`__ +When to change TBD +Data Type int +Units percent +Range 1 to 100 +Default 5 +Change Dynamic +Versions Affected v0.8.0 +====================== ============== + +zfs_dbgmsg_enable +~~~~~~~~~~~~~~~~~ + +| Internally ZFS keeps a small log to facilitate debugging. The contents + of the log are in the ``/proc/spl/kstat/zfs/dbgmsg`` file. +| Writing 0 to ``/proc/spl/kstat/zfs/dbgmsg`` file clears the log. 
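+
+For example (a sketch; the parameter path assumes a typical ZFS on Linux
+system), the log can be enabled, read, and then cleared with::
+
+   echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
+   cat /proc/spl/kstat/zfs/dbgmsg
+   echo 0 > /proc/spl/kstat/zfs/dbgmsg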
+ +See also `zfs_dbgmsg_maxsize <#zfs-dbgmsg-maxsize>`__ + +================= ================================================= +zfs_dbgmsg_enable Notes +================= ================================================= +Tags `debug <#debug>`__ +When to change To view ZFS internal debug log +Data Type boolean +Range 0=do not log debug messages, 1=log debug messages +Default 0 (1 for debug builds) +Change Dynamic +Versions Affected v0.6.5 and later +================= ================================================= + +zfs_dbgmsg_maxsize +~~~~~~~~~~~~~~~~~~ + +The ``/proc/spl/kstat/zfs/dbgmsg`` file size limit is set by +zfs_dbgmsg_maxsize. + +See also zfs_dbgmsg_enable + +================== ================== +zfs_dbgmsg_maxsize Notes +================== ================== +Tags `debug <#debug>`__ +When to change TBD +Data Type int +Units bytes +Range 0 to INT_MAX +Default 4 MiB +Change Dynamic +Versions Affected v0.6.5 and later +================== ================== + +zfs_dbuf_state_index +~~~~~~~~~~~~~~~~~~~~ + +The ``zfs_dbuf_state_index`` feature is currently unused. It is normally +used for controlling values in the ``/proc/spl/kstat/zfs/dbufs`` file. + +==================== ================== +zfs_dbuf_state_index Notes +==================== ================== +Tags `debug <#debug>`__ +When to change Do not change +Data Type int +Units TBD +Range TBD +Default 0 +Change Dynamic +Versions Affected v0.6.5 and later +==================== ================== + +zfs_deadman_enabled +~~~~~~~~~~~~~~~~~~~ + +When a pool sync operation takes longer than zfs_deadman_synctime_ms +milliseconds, a "slow spa_sync" message is logged to the debug log (see +`zfs_dbgmsg_enable <#zfs-dbgmsg-enable>`__). If ``zfs_deadman_enabled`` +is set to 1, then all pending IO operations are also checked and if any +haven't completed within zfs_deadman_synctime_ms milliseconds, a "SLOW +IO" message is logged to the debug log and a "deadman" system event (see +zpool events command) with the details of the hung IO is posted. + +=================== ===================================== +zfs_deadman_enabled Notes +=================== ===================================== +Tags `debug <#debug>`__ +When to change To disable logging of slow I/O +Data Type boolean +Range 0=do not log slow I/O, 1=log slow I/O +Default 1 +Change Dynamic +Versions Affected v0.8.0 +=================== ===================================== + +zfs_deadman_checktime_ms +~~~~~~~~~~~~~~~~~~~~~~~~ + +Once a pool sync operation has taken longer than +`zfs_deadman_synctime_ms <#zfs-deadman-synctime-ms>`__ milliseconds, +continue to check for slow operations every +`zfs_deadman_checktime_ms <#zfs-deadman-synctime-ms>`__ milliseconds. + +======================== ======================= +zfs_deadman_checktime_ms Notes +======================== ======================= +Tags `debug <#debug>`__ +When to change When debugging slow I/O +Data Type ulong +Units milliseconds +Range 1 to ULONG_MAX +Default 60,000 (1 minute) +Change Dynamic +Versions Affected v0.8.0 +======================== ======================= + +zfs_deadman_ziotime_ms +~~~~~~~~~~~~~~~~~~~~~~ + +When an individual I/O takes longer than ``zfs_deadman_ziotime_ms`` +milliseconds, then the operation is considered to be "hung". If +`zfs_deadman_enabled <#zfs-deadman-enabled>`__ is set then the deadman +behaviour is invoked as described by the +`zfs_deadman_failmode <#zfs-deadman-failmode>`__ option. 
+ +====================== ==================== +zfs_deadman_ziotime_ms Notes +====================== ==================== +Tags `debug <#debug>`__ +When to change Testing ABD features +Data Type ulong +Units milliseconds +Range 1 to ULONG_MAX +Default 300,000 (5 minutes) +Change Dynamic +Versions Affected v0.8.0 +====================== ==================== + +zfs_deadman_synctime_ms +~~~~~~~~~~~~~~~~~~~~~~~ + +The I/O deadman timer expiration time has two meanings + +1. determines when the ``spa_deadman()`` logic should fire, indicating + the txg sync has not completed in a timely manner +2. determines if an I/O is considered "hung" + +In version v0.8.0, any I/O that has not completed in +``zfs_deadman_synctime_ms`` is considered "hung" resulting in one of +three behaviors controlled by the +`zfs_deadman_failmode <#zfs-deadman-failmode>`__ parameter. + +``zfs_deadman_synctime_ms`` takes effect if +`zfs_deadman_enabled <#zfs-deadman-enabled>`__ = 1. + +======================= ======================= +zfs_deadman_synctime_ms Notes +======================= ======================= +Tags `debug <#debug>`__ +When to change When debugging slow I/O +Data Type ulong +Units milliseconds +Range 1 to ULONG_MAX +Default 600,000 (10 minutes) +Change Dynamic +Versions Affected v0.6.5 and later +======================= ======================= + +zfs_deadman_failmode +~~~~~~~~~~~~~~~~~~~~ + +zfs_deadman_failmode controls the behavior of the I/O deadman timer when +it detects a "hung" I/O. Valid values are: + +- wait - Wait for the "hung" I/O (default) +- continue - Attempt to recover from a "hung" I/O +- panic - Panic the system + +==================== =============================================== +zfs_deadman_failmode Notes +==================== =============================================== +Tags `debug <#debug>`__ +When to change In some cluster cases, panic can be appropriate +Data Type string +Range *wait*, *continue*, or *panic* +Default wait +Change Dynamic +Versions Affected v0.8.0 +==================== =============================================== + +zfs_dedup_prefetch +~~~~~~~~~~~~~~~~~~ + +ZFS can prefetch deduplication table (DDT) entries. +``zfs_dedup_prefetch`` allows DDT prefetches to be enabled. + ++--------------------+------------------------------------------------+ +| zfs_dedup_prefetch | Notes | ++====================+================================================+ +| Tags | `prefetch <#prefetch>`__, `memory <#memory>`__ | ++--------------------+------------------------------------------------+ +| When to change | For systems with limited RAM using the dedup | +| | feature, disabling deduplication table | +| | prefetch can reduce memory pressure | ++--------------------+------------------------------------------------+ +| Data Type | boolean | ++--------------------+------------------------------------------------+ +| Range | 0=do not prefetch, 1=prefetch dedup table | +| | entries | ++--------------------+------------------------------------------------+ +| Default | 0 | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++--------------------+------------------------------------------------+ + +zfs_delete_blocks +~~~~~~~~~~~~~~~~~ + +``zfs_delete_blocks`` defines a large file for the purposes of delete. 
+Files containing more than ``zfs_delete_blocks`` will be deleted +asynchronously while smaller files are deleted synchronously. Decreasing +this value reduces the time spent in an ``unlink(2)`` system call at the +expense of a longer delay before the freed space is available. + +The ``zfs_delete_blocks`` value is specified in blocks, not bytes. The +size of blocks can vary and is ultimately limited by the filesystem's +recordsize property. + ++-------------------+-------------------------------------------------+ +| zfs_delete_blocks | Notes | ++===================+=================================================+ +| Tags | `filesystem <#filesystem>`__, | +| | `delete <#delete>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If applications delete large files and blocking | +| | on ``unlink(2)`` is not desired | ++-------------------+-------------------------------------------------+ +| Data Type | ulong | ++-------------------+-------------------------------------------------+ +| Units | blocks | ++-------------------+-------------------------------------------------+ +| Range | 1 to ULONG_MAX | ++-------------------+-------------------------------------------------+ +| Default | 20,480 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +zfs_delay_min_dirty_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ZFS write throttle begins to delay each transaction when the amount +of dirty data reaches the threshold ``zfs_delay_min_dirty_percent`` of +`zfs_dirty_data_max <#zfs-dirty-data-max>`__. This value should be >= +`zfs_vdev_async_write_active_max_dirty_percent <#zfs-vdev-async-write-active-max-dirty-percent>`__. + +=========================== ==================================== +zfs_delay_min_dirty_percent Notes +=========================== ==================================== +Tags `write_throttle <#write-throttle>`__ +When to change See section "ZFS TRANSACTION DELAY" +Data Type int +Units percent +Range 0 to 100 +Default 60 +Change Dynamic +Versions Affected v0.6.4 and later +=========================== ==================================== + +zfs_delay_scale +~~~~~~~~~~~~~~~ + +``zfs_delay_scale`` controls how quickly the ZFS write throttle +transaction delay approaches infinity. Larger values cause longer delays +for a given amount of dirty data. + +For the smoothest delay, this value should be about 1 billion divided by +the maximum number of write operations per second the pool can sustain. +The throttle will smoothly handle between 10x and 1/10th +``zfs_delay_scale``. + +Note: ``zfs_delay_scale`` \* +`zfs_dirty_data_max <#zfs-dirty-data-max>`__ must be < 2^64. + +================= ==================================== +zfs_delay_scale Notes +================= ==================================== +Tags `write_throttle <#write-throttle>`__ +When to change See section "ZFS TRANSACTION DELAY" +Data Type ulong +Units scalar (nanoseconds) +Range 0 to ULONG_MAX +Default 500,000 +Change Dynamic +Versions Affected v0.6.4 and later +================= ==================================== + +zfs_dirty_data_max +~~~~~~~~~~~~~~~~~~ + +``zfs_dirty_data_max`` is the ZFS write throttle dirty space limit. Once +this limit is exceeded, new writes are delayed until space is freed by +writes being committed to the pool. 
+ +zfs_dirty_data_max takes precedence over +`zfs_dirty_data_max_percent <#zfs-dirty-data-max-percent>`__. + ++--------------------+------------------------------------------------+ +| zfs_dirty_data_max | Notes | ++====================+================================================+ +| Tags | `write_throttle <#write-throttle>`__ | ++--------------------+------------------------------------------------+ +| When to change | See section "ZFS TRANSACTION DELAY" | ++--------------------+------------------------------------------------+ +| Data Type | ulong | ++--------------------+------------------------------------------------+ +| Units | bytes | ++--------------------+------------------------------------------------+ +| Range | 1 to | +| | `zfs_d | +| | irty_data_max_max <#zfs-dirty-data-max-max>`__ | ++--------------------+------------------------------------------------+ +| Default | 10% of physical RAM | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------+------------------------------------------------+ + +zfs_dirty_data_max_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_dirty_data_max_percent`` is an alternative method of specifying +`zfs_dirty_data_max <#zfs-dirty-data-max>`__, the ZFS write throttle +dirty space limit. Once this limit is exceeded, new writes are delayed +until space is freed by writes being committed to the pool. + +`zfs_dirty_data_max <#zfs-dirty-data-max>`__ takes precedence over +``zfs_dirty_data_max_percent``. + ++----------------------------+----------------------------------------+ +| zfs_dirty_data_max_percent | Notes | ++============================+========================================+ +| Tags | `write_throttle <#write-throttle>`__ | ++----------------------------+----------------------------------------+ +| When to change | See section "ZFS TRANSACTION DELAY" | ++----------------------------+----------------------------------------+ +| Data Type | int | ++----------------------------+----------------------------------------+ +| Units | percent | ++----------------------------+----------------------------------------+ +| Range | 1 to 100 | ++----------------------------+----------------------------------------+ +| Default | 10% of physical RAM | ++----------------------------+----------------------------------------+ +| Change | Prior to zfs module load or a memory | +| | hot plug event | ++----------------------------+----------------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------------+----------------------------------------+ + +zfs_dirty_data_max_max +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_dirty_data_max_max`` is the maximum allowable value of +`zfs_dirty_data_max <#zfs-dirty-data-max>`__. + +``zfs_dirty_data_max_max`` takes precedence over +`zfs_dirty_data_max_max_percent <#zfs-dirty-data-max-max-percent>`__. 
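+
+Because this parameter must be set before the zfs module loads, it is
+normally set as a module option rather than at runtime. A minimal sketch,
+assuming a hypothetical ``/etc/modprobe.d/zfs.conf``, capping
+`zfs_dirty_data_max <#zfs-dirty-data-max>`__ at 4 GiB::
+
+   # /etc/modprobe.d/zfs.conf
+   options zfs zfs_dirty_data_max_max=4294967296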
+
+====================== ====================================
+zfs_dirty_data_max_max Notes
+====================== ====================================
+Tags `write_throttle <#write-throttle>`__
+When to change See section "ZFS TRANSACTION DELAY"
+Data Type ulong
+Units bytes
+Range 1 to physical RAM size
+Default physical_ram/4
+
+ **since v0.7:** min(physical_ram/4, 4GiB)
+
+ **since v2.0 for 32-bit systems:** min(physical_ram/4, 1GiB)
+Change Prior to zfs module load
+Versions Affected v0.6.4 and later
+====================== ====================================
+
+zfs_dirty_data_max_max_percent
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``zfs_dirty_data_max_max_percent`` is an alternative to
+`zfs_dirty_data_max_max <#zfs-dirty-data-max-max>`__ for setting the
+maximum allowable value of `zfs_dirty_data_max <#zfs-dirty-data-max>`__.
+
+`zfs_dirty_data_max_max <#zfs-dirty-data-max-max>`__ takes precedence
+over ``zfs_dirty_data_max_max_percent``.
+
+============================== ====================================
+zfs_dirty_data_max_max_percent Notes
+============================== ====================================
+Tags `write_throttle <#write-throttle>`__
+When to change See section "ZFS TRANSACTION DELAY"
+Data Type int
+Units percent
+Range 1 to 100
+Default 25% of physical RAM
+Change Prior to zfs module load
+Versions Affected v0.6.4 and later
+============================== ====================================
+
+zfs_dirty_data_sync
+~~~~~~~~~~~~~~~~~~~
+
+When there is at least ``zfs_dirty_data_sync`` dirty data, a transaction
+group sync is started. This allows a transaction group sync to occur
+more frequently than the transaction group timeout interval (see
+`zfs_txg_timeout <#zfs-txg-timeout>`__) when there is dirty data to be
+written.
+
++---------------------+-----------------------------------------------+
+| zfs_dirty_data_sync | Notes |
++=====================+===============================================+
+| Tags | `write_throttle <#write-throttle>`__, |
+| | `ZIO_scheduler <#ZIO-scheduler>`__ |
++---------------------+-----------------------------------------------+
+| When to change | TBD |
++---------------------+-----------------------------------------------+
+| Data Type | ulong |
++---------------------+-----------------------------------------------+
+| Units | bytes |
++---------------------+-----------------------------------------------+
+| Range | 1 to ULONG_MAX |
++---------------------+-----------------------------------------------+
+| Default | 67,108,864 (64 MiB) |
++---------------------+-----------------------------------------------+
+| Change | Dynamic |
++---------------------+-----------------------------------------------+
+| Versions Affected | v0.6.4 through v0.8.x, deprecation planned |
+| | for v2 |
++---------------------+-----------------------------------------------+
+
+zfs_dirty_data_sync_percent
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When there is at least ``zfs_dirty_data_sync_percent`` of
+`zfs_dirty_data_max <#zfs-dirty-data-max>`__ dirty data, a transaction
+group sync is started. This allows a transaction group sync to occur
+more frequently than the transaction group timeout interval (see
+`zfs_txg_timeout <#zfs-txg-timeout>`__) when there is dirty data to be
+written. 
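+
+For example, with `zfs_dirty_data_max <#zfs-dirty-data-max>`__ set to,
+say, 4 GiB and the default ``zfs_dirty_data_sync_percent`` of 20, a
+transaction group sync is started once roughly 0.8 GiB (20% of 4 GiB) of
+dirty data has accumulated, even if the transaction group timeout has not
+yet expired.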
+
++-----------------------------+---------------------------------------+
+| zfs_dirty_data_sync_percent | Notes |
++=============================+=======================================+
+| Tags | `write_throttle <#write-throttle>`__, |
+| | `ZIO_scheduler <#ZIO-scheduler>`__ |
++-----------------------------+---------------------------------------+
+| When to change | TBD |
++-----------------------------+---------------------------------------+
+| Data Type | int |
++-----------------------------+---------------------------------------+
+| Units | percent |
++-----------------------------+---------------------------------------+
+| Range | 1 to |
+| | `zfs_vdev_async_write_ac |
+| | tive_min_dirty_percent <#zfs_vdev_asy |
+| | nc_write_active_min_dirty_percent>`__ |
++-----------------------------+---------------------------------------+
+| Default | 20 |
++-----------------------------+---------------------------------------+
+| Change | Dynamic |
++-----------------------------+---------------------------------------+
+| Versions Affected | planned for v2, deprecates |
+| | `zfs_dirt |
+| | y_data_sync <#zfs-dirty-data-sync>`__ |
++-----------------------------+---------------------------------------+
+
+zfs_fletcher_4_impl
+~~~~~~~~~~~~~~~~~~~
+
+Fletcher-4 is the default checksum algorithm for metadata and data. When
+the zfs kernel module is loaded, a set of microbenchmarks is run to
+determine the fastest algorithm for the current hardware. The
+``zfs_fletcher_4_impl`` parameter allows an implementation other than
+the default (*fastest*) to be selected. Selectors other than
+*fastest* and *scalar* require instruction set extensions to be
+available and will only appear if ZFS detects their presence. The
+*scalar* implementation works on all processors.
+
+The results of the microbenchmark are visible in the
+``/proc/spl/kstat/zfs/fletcher_4_bench`` file. Larger numbers indicate
+better performance. Since ZFS is processor endian-independent, the
+microbenchmark is run against both big-endian and little-endian
+transformations.
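+
+As a sketch (paths assume a typical ZFS on Linux system), the benchmark
+results can be inspected and a specific implementation selected::
+
+   cat /proc/spl/kstat/zfs/fletcher_4_bench
+   echo scalar > /sys/module/zfs/parameters/zfs_fletcher_4_impl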
+ ++---------------------+-----------------------------------------------+ +| zfs_fletcher_4_impl | Notes | ++=====================+===============================================+ +| Tags | `CPU <#cpu>`__, `checksum <#checksum>`__ | ++---------------------+-----------------------------------------------+ +| When to change | Testing Fletcher-4 algorithms | ++---------------------+-----------------------------------------------+ +| Data Type | string | ++---------------------+-----------------------------------------------+ +| Range | *fastest*, *scalar*, *superscalar*, | +| | *superscalar4*, *sse2*, *ssse3*, *avx2*, | +| | *avx512f*, or *aarch64_neon* depending on | +| | hardware support | ++---------------------+-----------------------------------------------+ +| Default | fastest | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++---------------------+-----------------------------------------------+ + +zfs_free_bpobj_enabled +~~~~~~~~~~~~~~~~~~~~~~ + +The processing of the free_bpobj object can be enabled by +``zfs_free_bpobj_enabled`` + ++------------------------+--------------------------------------------+ +| zfs_free_bpobj_enabled | Notes | ++========================+============================================+ +| Tags | `delete <#delete>`__ | ++------------------------+--------------------------------------------+ +| When to change | If there's a problem with processing | +| | free_bpobj (e.g. i/o error or bug) | ++------------------------+--------------------------------------------+ +| Data Type | boolean | ++------------------------+--------------------------------------------+ +| Range | 0=do not process free_bpobj objects, | +| | 1=process free_bpobj objects | ++------------------------+--------------------------------------------+ +| Default | 1 | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++------------------------+--------------------------------------------+ + +zfs_free_max_blocks +~~~~~~~~~~~~~~~~~~~ + +``zfs_free_max_blocks`` sets the maximum number of blocks to be freed in +a single transaction group (txg). For workloads that delete (free) large +numbers of blocks in a short period of time, the processing of the frees +can negatively impact other operations, including txg commits. +``zfs_free_max_blocks`` acts as a limit to reduce the impact. 
+ ++---------------------+-----------------------------------------------+ +| zfs_free_max_blocks | Notes | ++=====================+===============================================+ +| Tags | `filesystem <#filesystem>`__, | +| | `delete <#delete>`__ | ++---------------------+-----------------------------------------------+ +| When to change | For workloads that delete large files, | +| | ``zfs_free_max_blocks`` can be adjusted to | +| | meet performance requirements while reducing | +| | the impacts of deletion | ++---------------------+-----------------------------------------------+ +| Data Type | ulong | ++---------------------+-----------------------------------------------+ +| Units | blocks | ++---------------------+-----------------------------------------------+ +| Range | 1 to ULONG_MAX | ++---------------------+-----------------------------------------------+ +| Default | 100,000 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++---------------------+-----------------------------------------------+ + +zfs_vdev_async_read_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Maximum asynchronous read I/Os active to each device. + ++--------------------------------+------------------------------------+ +| zfs_vdev_async_read_max_active | Notes | ++================================+====================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------------+------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------------+------------------------------------+ +| Data Type | uint32 | ++--------------------------------+------------------------------------+ +| Units | I/O operations | ++--------------------------------+------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_ma | +| | x_active <#zfs-vdev-max-active>`__ | ++--------------------------------+------------------------------------+ +| Default | 3 | ++--------------------------------+------------------------------------+ +| Change | Dynamic | ++--------------------------------+------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------------+------------------------------------+ + +zfs_vdev_async_read_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Minimum asynchronous read I/Os active to each device. 
+ ++--------------------------------+------------------------------------+ +| zfs_vdev_async_read_min_active | Notes | ++================================+====================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------------+------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------------+------------------------------------+ +| Data Type | uint32 | ++--------------------------------+------------------------------------+ +| Units | I/O operations | ++--------------------------------+------------------------------------+ +| Range | 1 to | +| | ( | +| | `zfs_vdev_async_read_max_active <# | +| | zfs_vdev_async_read_max_active>`__ | +| | - 1) | ++--------------------------------+------------------------------------+ +| Default | 1 | ++--------------------------------+------------------------------------+ +| Change | Dynamic | ++--------------------------------+------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------------+------------------------------------+ + +zfs_vdev_async_write_active_max_dirty_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When the amount of dirty data exceeds the threshold +``zfs_vdev_async_write_active_max_dirty_percent`` of +`zfs_dirty_data_max <#zfs-dirty-data-max>`__ dirty data, then +`zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ +is used to limit active async writes. If the dirty data is between +`zfs_vdev_async_write_active_min_dirty_percent <#zfs-vdev-async-write-active-min-dirty-percent>`__ +and ``zfs_vdev_async_write_active_max_dirty_percent``, the active I/O +limit is linearly interpolated between +`zfs_vdev_async_write_min_active <#zfs-vdev-async-write-min-active>`__ +and +`zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ + ++----------------------------------+----------------------------------+ +| zfs_vdev_asyn | Notes | +| c_write_active_max_dirty_percent | | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `Z | +| | IO_scheduler <#zio-scheduler>`__ | ++----------------------------------+----------------------------------+ +| When to change | See `ZFS I/O | +| | Sch | +| | eduler `__ | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | percent of | +| | `zfs_dirty_d | +| | ata_max <#zfs-dirty-data-max>`__ | ++----------------------------------+----------------------------------+ +| Range | 0 to 100 | ++----------------------------------+----------------------------------+ +| Default | 60 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_async_write_active_min_dirty_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If the amount of dirty data is between +``zfs_vdev_async_write_active_min_dirty_percent`` and +`zfs_vdev_async_write_active_max_dirty_percent <#zfs-vdev-async-write-active-max-dirty-percent>`__ +of `zfs_dirty_data_max <#zfs-dirty-data-max>`__, the active I/O limit is +linearly interpolated between +`zfs_vdev_async_write_min_active <#zfs-vdev-async-write-min-active>`__ +and 
+`zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ + ++----------------------------------+----------------------------------+ +| zfs_vdev_asyn | Notes | +| c_write_active_min_dirty_percent | | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `Z | +| | IO_scheduler <#zio-scheduler>`__ | ++----------------------------------+----------------------------------+ +| When to change | See `ZFS I/O | +| | Sch | +| | eduler `__ | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | percent of zfs_dirty_data_max | ++----------------------------------+----------------------------------+ +| Range | 0 to | +| | (`z | +| | fs_vdev_async_write_active_max_d | +| | irty_percent <#zfs_vdev_async_wr | +| | ite_active_max_dirty_percent>`__ | +| | - 1) | ++----------------------------------+----------------------------------+ +| Default | 30 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_async_write_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_async_write_max_active`` sets the maximum asynchronous write +I/Os active to each device. + ++---------------------------------+-----------------------------------+ +| zfs_vdev_async_write_max_active | Notes | ++=================================+===================================+ +| Tags | `vdev <#vdev>`__, | +| | ` | +| | ZIO_scheduler <#zio-scheduler>`__ | ++---------------------------------+-----------------------------------+ +| When to change | See `ZFS I/O | +| | S | +| | cheduler `__ | ++---------------------------------+-----------------------------------+ +| Data Type | uint32 | ++---------------------------------+-----------------------------------+ +| Units | I/O operations | ++---------------------------------+-----------------------------------+ +| Range | 1 to | +| | `zfs_vdev_max | +| | _active <#zfs-vdev-max-active>`__ | ++---------------------------------+-----------------------------------+ +| Default | 10 | ++---------------------------------+-----------------------------------+ +| Change | Dynamic | ++---------------------------------+-----------------------------------+ +| Versions Affected | v0.6.4 and later | ++---------------------------------+-----------------------------------+ + +zfs_vdev_async_write_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_async_write_min_active`` sets the minimum asynchronous write +I/Os active to each device. + +Lower values are associated with better latency on rotational media but +poorer resilver performance. The default value of 2 was chosen as a +compromise. A value of 3 has been shown to improve resilver performance +further at a cost of further increasing latency. 
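+
+As a sketch (the parameter path assumes a typical ZFS on Linux system),
+the minimum could be raised while a resilver is running and returned to
+the default afterwards::
+
+   echo 3 > /sys/module/zfs/parameters/zfs_vdev_async_write_min_active
+   # after the resilver completes
+   echo 2 > /sys/module/zfs/parameters/zfs_vdev_async_write_min_active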
+
++---------------------------------+-----------------------------------+
+| zfs_vdev_async_write_min_active | Notes |
++=================================+===================================+
+| Tags | `vdev <#vdev>`__, |
+| | ` |
+| | ZIO_scheduler <#zio-scheduler>`__ |
++---------------------------------+-----------------------------------+
+| When to change | See `ZFS I/O |
+| | S |
+| | cheduler `__ |
++---------------------------------+-----------------------------------+
+| Data Type | uint32 |
++---------------------------------+-----------------------------------+
+| Units | I/O operations |
++---------------------------------+-----------------------------------+
+| Range | 1 to |
+| | `zfs |
+| | _vdev_async_write_max_active <#zf |
+| | s_vdev_async_write_max_active>`__ |
++---------------------------------+-----------------------------------+
+| Default | 1 for v0.6.x, 2 for v0.7.0 and |
+| | later |
++---------------------------------+-----------------------------------+
+| Change | Dynamic |
++---------------------------------+-----------------------------------+
+| Versions Affected | v0.6.4 and later |
++---------------------------------+-----------------------------------+
+
+zfs_vdev_max_active
+~~~~~~~~~~~~~~~~~~~
+
+The maximum number of I/Os active to each device. Ideally,
+``zfs_vdev_max_active`` >= the sum of each queue's max_active.
+
+Once queued to the device, the ZFS I/O scheduler is no longer able to
+prioritize I/O operations. The underlying device drivers have their own
+scheduler and queue depth limits. Values larger than the device's
+maximum queue depth can have the effect of increased latency as the I/Os
+are queued in the intervening device driver layers.
+
++---------------------+-----------------------------------------------+
+| zfs_vdev_max_active | Notes |
++=====================+===============================================+
+| Tags | `vdev <#vdev>`__, |
+| | `ZIO_scheduler <#zio-scheduler>`__ |
++---------------------+-----------------------------------------------+
+| When to change | See `ZFS I/O |
+| | Scheduler `__ |
++---------------------+-----------------------------------------------+
+| Data Type | uint32 |
++---------------------+-----------------------------------------------+
+| Units | I/O operations |
++---------------------+-----------------------------------------------+
+| Range | sum of each queue's min_active to UINT32_MAX |
++---------------------+-----------------------------------------------+
+| Default | 1,000 |
++---------------------+-----------------------------------------------+
+| Change | Dynamic |
++---------------------+-----------------------------------------------+
+| Versions Affected | v0.6.4 and later |
++---------------------+-----------------------------------------------+
+
+zfs_vdev_scrub_max_active
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``zfs_vdev_scrub_max_active`` sets the maximum scrub or scan read I/Os
+active to each device. 
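+
+For illustration (assuming a pool named ``tank`` and a typical ZFS on
+Linux parameter path), scrub concurrency could be raised temporarily and
+the scrub progress observed::
+
+   echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
+   zpool status tank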
+ ++---------------------------+-----------------------------------------+ +| zfs_vdev_scrub_max_active | Notes | ++===========================+=========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__, | +| | `scrub <#scrub>`__, | +| | `resilver <#resilver>`__ | ++---------------------------+-----------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++---------------------------+-----------------------------------------+ +| Data Type | uint32 | ++---------------------------+-----------------------------------------+ +| Units | I/O operations | ++---------------------------+-----------------------------------------+ +| Range | 1 to | +| | `zfs_vd | +| | ev_max_active <#zfs-vdev-max-active>`__ | ++---------------------------+-----------------------------------------+ +| Default | 2 | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Versions Affected | v0.6.4 and later | ++---------------------------+-----------------------------------------+ + +zfs_vdev_scrub_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_scrub_min_active`` sets the minimum scrub or scan read I/Os +active to each device. + ++---------------------------+-----------------------------------------+ +| zfs_vdev_scrub_min_active | Notes | ++===========================+=========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__, | +| | `scrub <#scrub>`__, | +| | `resilver <#resilver>`__ | ++---------------------------+-----------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++---------------------------+-----------------------------------------+ +| Data Type | uint32 | ++---------------------------+-----------------------------------------+ +| Units | I/O operations | ++---------------------------+-----------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_scrub_max | +| | _active <#zfs-vdev-scrub-max-active>`__ | ++---------------------------+-----------------------------------------+ +| Default | 1 | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Versions Affected | v0.6.4 and later | ++---------------------------+-----------------------------------------+ + +zfs_vdev_sync_read_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Maximum synchronous read I/Os active to each device. 
+ ++-------------------------------+-------------------------------------+ +| zfs_vdev_sync_read_max_active | Notes | ++===============================+=====================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------------------+-------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++-------------------------------+-------------------------------------+ +| Data Type | uint32 | ++-------------------------------+-------------------------------------+ +| Units | I/O operations | ++-------------------------------+-------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_m | +| | ax_active <#zfs-vdev-max-active>`__ | ++-------------------------------+-------------------------------------+ +| Default | 10 | ++-------------------------------+-------------------------------------+ +| Change | Dynamic | ++-------------------------------+-------------------------------------+ +| Versions Affected | v0.6.4 and later | ++-------------------------------+-------------------------------------+ + +zfs_vdev_sync_read_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_sync_read_min_active`` sets the minimum synchronous read I/Os +active to each device. + ++-------------------------------+-------------------------------------+ +| zfs_vdev_sync_read_min_active | Notes | ++===============================+=====================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------------------+-------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++-------------------------------+-------------------------------------+ +| Data Type | uint32 | ++-------------------------------+-------------------------------------+ +| Units | I/O operations | ++-------------------------------+-------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_sync_read_max_active | +| | <#zfs-vdev-sync-read-max-active>`__ | ++-------------------------------+-------------------------------------+ +| Default | 10 | ++-------------------------------+-------------------------------------+ +| Change | Dynamic | ++-------------------------------+-------------------------------------+ +| Versions Affected | v0.6.4 and later | ++-------------------------------+-------------------------------------+ + +zfs_vdev_sync_write_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_sync_write_max_active`` sets the maximum synchronous write +I/Os active to each device. 
+
++--------------------------------+------------------------------------+
+| zfs_vdev_sync_write_max_active | Notes |
++================================+====================================+
+| Tags | `vdev <#vdev>`__, |
+| | `ZIO_scheduler <#zio-scheduler>`__ |
++--------------------------------+------------------------------------+
+| When to change | See `ZFS I/O |
+| | Scheduler `__ |
++--------------------------------+------------------------------------+
+| Data Type | uint32 |
++--------------------------------+------------------------------------+
+| Units | I/O operations |
++--------------------------------+------------------------------------+
+| Range | 1 to |
+| | `zfs_vdev_ma |
+| | x_active <#zfs-vdev-max-active>`__ |
++--------------------------------+------------------------------------+
+| Default | 10 |
++--------------------------------+------------------------------------+
+| Change | Dynamic |
++--------------------------------+------------------------------------+
+| Versions Affected | v0.6.4 and later |
++--------------------------------+------------------------------------+
+
+zfs_vdev_sync_write_min_active
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``zfs_vdev_sync_write_min_active`` sets the minimum synchronous write
+I/Os active to each device.
+
++--------------------------------+------------------------------------+
+| zfs_vdev_sync_write_min_active | Notes |
++================================+====================================+
+| Tags | `vdev <#vdev>`__, |
+| | `ZIO_scheduler <#zio-scheduler>`__ |
++--------------------------------+------------------------------------+
+| When to change | See `ZFS I/O |
+| | Scheduler `__ |
++--------------------------------+------------------------------------+
+| Data Type | uint32 |
++--------------------------------+------------------------------------+
+| Units | I/O operations |
++--------------------------------+------------------------------------+
+| Range | 1 to |
+| | `zfs_vdev_sync_write_max_active <# |
+| | zfs_vdev_sync_write_max_active>`__ |
++--------------------------------+------------------------------------+
+| Default | 10 |
++--------------------------------+------------------------------------+
+| Change | Dynamic |
++--------------------------------+------------------------------------+
+| Versions Affected | v0.6.4 and later |
++--------------------------------+------------------------------------+
+
+zfs_vdev_queue_depth_pct
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Maximum number of queued allocations per top-level vdev expressed as a
+percentage of
+`zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__.
+This allows the system to detect devices that are more capable of
+handling allocations and to allocate more blocks to those devices. It
+also allows for dynamic allocation distribution when devices are
+imbalanced, as fuller devices will tend to be slower than empty devices.
+Once the queue depth reaches (``zfs_vdev_queue_depth_pct`` \*
+`zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ /
+100), the allocator stops allocating blocks on that top-level vdev
+and switches to the next.
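+
+For example, with the default ``zfs_vdev_queue_depth_pct`` of 1,000 and
+the default
+`zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ of
+10, the allocator moves on from a top-level vdev once its queue depth
+reaches 1,000 \* 10 / 100 = 100 queued allocations.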
+ +See also `zio_dva_throttle_enabled <#zio-dva-throttle-enabled>`__ + ++--------------------------+------------------------------------------+ +| zfs_vdev_queue_depth_pct | Notes | ++==========================+==========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------+------------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------+------------------------------------------+ +| Data Type | uint32 | ++--------------------------+------------------------------------------+ +| Units | I/O operations | ++--------------------------+------------------------------------------+ +| Range | 1 to UINT32_MAX | ++--------------------------+------------------------------------------+ +| Default | 1,000 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------------+------------------------------------------+ + +zfs_disable_dup_eviction +~~~~~~~~~~~~~~~~~~~~~~~~ + +Disable duplicate buffer eviction from ARC. + ++--------------------------+------------------------------------------+ +| zfs_disable_dup_eviction | Notes | ++==========================+==========================================+ +| Tags | `ARC <#arc>`__, `dedup <#dedup>`__ | ++--------------------------+------------------------------------------+ +| When to change | TBD | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=duplicate buffers can be evicted, 1=do | +| | not evict duplicate buffers | ++--------------------------+------------------------------------------+ +| Default | 0 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.6.5, deprecated in v0.7.0 | ++--------------------------+------------------------------------------+ + +zfs_expire_snapshot +~~~~~~~~~~~~~~~~~~~ + +Snapshots of filesystems are normally automounted under the filesystem's +``.zfs/snapshot`` subdirectory. When not in use, snapshots are unmounted +after zfs_expire_snapshot seconds. 
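+
+For example (a sketch, assuming a filesystem mounted at ``/tank/fs`` with
+a snapshot named ``mysnap``), listing the snapshot directory triggers the
+automount, and setting the parameter to 0 keeps such mounts from being
+unmounted automatically::
+
+   ls /tank/fs/.zfs/snapshot/mysnap
+   echo 0 > /sys/module/zfs/parameters/zfs_expire_snapshot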
+
++---------------------+-----------------------------------------------+
+| zfs_expire_snapshot | Notes |
++=====================+===============================================+
+| Tags | `filesystem <#filesystem>`__, |
+| | `snapshot <#snapshot>`__ |
++---------------------+-----------------------------------------------+
+| When to change | TBD |
++---------------------+-----------------------------------------------+
+| Data Type | int |
++---------------------+-----------------------------------------------+
+| Units | seconds |
++---------------------+-----------------------------------------------+
+| Range | 0 disables automatic unmounting, maximum time |
+| | is INT_MAX |
++---------------------+-----------------------------------------------+
+| Default | 300 |
++---------------------+-----------------------------------------------+
+| Change | Dynamic |
++---------------------+-----------------------------------------------+
+| Versions Affected | v0.6.1 and later |
++---------------------+-----------------------------------------------+
+
+zfs_admin_snapshot
+~~~~~~~~~~~~~~~~~~
+
+Allow the creation, removal, or renaming of entries in the
+``.zfs/snapshot`` subdirectory to cause the creation, destruction, or
+renaming of snapshots. When enabled, this functionality works both
+locally and over NFS exports which have the "no_root_squash" option set.
+
++--------------------+------------------------------------------------+
+| zfs_admin_snapshot | Notes |
++====================+================================================+
+| Tags | `filesystem <#filesystem>`__, |
+| | `snapshot <#snapshot>`__ |
++--------------------+------------------------------------------------+
+| When to change | TBD |
++--------------------+------------------------------------------------+
+| Data Type | boolean |
++--------------------+------------------------------------------------+
+| Range | 0=do not allow snapshot manipulation via the |
+| | filesystem, 1=allow snapshot manipulation via |
+| | the filesystem |
++--------------------+------------------------------------------------+
+| Default | 1 |
++--------------------+------------------------------------------------+
+| Change | Dynamic |
++--------------------+------------------------------------------------+
+| Versions Affected | v0.6.5 and later |
++--------------------+------------------------------------------------+
+
+zfs_flags
+~~~~~~~~~
+
+Set additional debugging flags (see
+`zfs_dbgmsg_enable <#zfs-dbgmsg-enable>`__)
+
++------------+---------------------------+---------------------------+
+| flag value | symbolic name | description |
++============+===========================+===========================+
+| 0x1 | ZFS_DEBUG_DPRINTF | Enable dprintf entries in |
+| | | the debug log |
++------------+---------------------------+---------------------------+
+| 0x2 | ZFS_DEBUG_DBUF_VERIFY | Enable extra dbuf |
+| | | verifications |
++------------+---------------------------+---------------------------+
+| 0x4 | ZFS_DEBUG_DNODE_VERIFY | Enable extra dnode |
+| | | verifications |
++------------+---------------------------+---------------------------+
+| 0x8 | ZFS_DEBUG_SNAPNAMES | Enable snapshot name |
+| | | verification |
++------------+---------------------------+---------------------------+
+| 0x10 | ZFS_DEBUG_MODIFY | Check for illegally |
+| | | modified ARC buffers |
++------------+---------------------------+---------------------------+
+| 0x20 | ZFS_DEBUG_SPA | Enable spa_dbgmsg entries |
+| | | in the debug log |
++------------+---------------------------+---------------------------+
+| 0x40 | ZFS_DEBUG_ZIO_FREE | Enable verification of |
+| | | block frees |
++------------+---------------------------+---------------------------+
+| 0x80 | Z | Enable extra spacemap |
+| | FS_DEBUG_HISTOGRAM_VERIFY | histogram verifications |
++------------+---------------------------+---------------------------+
+| 0x100 | ZFS_DEBUG_METASLAB_VERIFY | Verify space accounting |
+| | | on disk matches in-core |
+| | | range_trees |
++------------+---------------------------+---------------------------+
+| 0x200 | ZFS_DEBUG_SET_ERROR | Enable SET_ERROR and |
+| | | dprintf entries in the |
+| | | debug log |
++------------+---------------------------+---------------------------+
+
++-------------------+-------------------------------------------------+
+| zfs_flags | Notes |
++===================+=================================================+
+| Tags | `debug <#debug>`__ |
++-------------------+-------------------------------------------------+
+| When to change | When debugging ZFS |
++-------------------+-------------------------------------------------+
+| Data Type | int |
++-------------------+-------------------------------------------------+
+| Default | 0 no debug flags set, for debug builds: all |
+| | except ZFS_DEBUG_DPRINTF and ZFS_DEBUG_SPA |
++-------------------+-------------------------------------------------+
+| Change | Dynamic |
++-------------------+-------------------------------------------------+
+| Versions Affected | v0.6.4 and later |
++-------------------+-------------------------------------------------+
+
+zfs_free_leak_on_eio
+~~~~~~~~~~~~~~~~~~~~
+
+If destroy encounters an I/O error (EIO) while reading metadata (eg
+indirect blocks), space referenced by the missing metadata cannot be
+freed. Normally, this causes the background destroy to become "stalled",
+as the destroy is unable to make forward progress. While in this stalled
+state, all remaining space to free from the error-encountering
+filesystem is temporarily leaked. Set ``zfs_free_leak_on_eio = 1`` to
+ignore the EIO, permanently leak the space from indirect blocks that can
+not be read, and continue to free everything else that it can.
+
+The default, stalling behavior is useful if the storage partially fails
+(eg some but not all I/Os fail), and then later recovers. In this case,
+we will be able to continue pool operations while it is partially
+failed, and when it recovers, we can continue to free the space, with no
+leaks. However, note that this case is rare.
+
+Typically pools either:
+
+1. fail completely (but perhaps temporarily, eg a top-level vdev going
+   offline)
+
+2. have localized, permanent errors (eg disk returns the wrong data due
+   to bit flip or firmware bug)
+
+In case (1), the ``zfs_free_leak_on_eio`` setting does not matter
+because the pool will be suspended and the sync thread will not be able
+to make forward progress. In case (2), because the error is permanent,
+the best we can do is leak the minimum amount of space. Therefore, it is
+reasonable for ``zfs_free_leak_on_eio`` to be set, but by default the
+more conservative approach is taken, so that there is no possibility of
+leaking space in the "partial temporary" failure case.
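+
+If a background destroy is confirmed to be stalled on unreadable metadata
+and leaking that space is acceptable, the setting can be applied at
+runtime (the path assumes a typical ZFS on Linux system)::
+
+   echo 1 > /sys/module/zfs/parameters/zfs_free_leak_on_eio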
+ ++----------------------+----------------------------------------------+ +| zfs_free_leak_on_eio | Notes | ++======================+==============================================+ +| Tags | `debug <#debug>`__ | ++----------------------+----------------------------------------------+ +| When to change | When debugging I/O errors during destroy | ++----------------------+----------------------------------------------+ +| Data Type | boolean | ++----------------------+----------------------------------------------+ +| Range | 0=normal behavior, 1=ignore error and | +| | permanently leak space | ++----------------------+----------------------------------------------+ +| Default | 0 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++----------------------+----------------------------------------------+ + +zfs_free_min_time_ms +~~~~~~~~~~~~~~~~~~~~ + +During a ``zfs destroy`` operation using ``feature@async_destroy`` a +minimum of ``zfs_free_min_time_ms`` time will be spent working on +freeing blocks per txg commit. + +==================== ============================== +zfs_free_min_time_ms Notes +==================== ============================== +Tags `delete <#delete>`__ +When to change TBD +Data Type int +Units milliseconds +Range 1 to (zfs_txg_timeout \* 1000) +Default 1,000 +Change Dynamic +Versions Affected v0.6.0 and later +==================== ============================== + +zfs_immediate_write_sz +~~~~~~~~~~~~~~~~~~~~~~ + +If a pool does not have a log device, data blocks equal to or larger +than ``zfs_immediate_write_sz`` are treated as if the dataset being +written to had the property setting ``logbias=throughput`` + +Terminology note: ``logbias=throughput`` writes the blocks in "indirect +mode" to the ZIL where the data is written to the pool and a pointer to +the data is written to the ZIL. + ++------------------------+--------------------------------------------+ +| zfs_immediate_write_sz | Notes | ++========================+============================================+ +| Tags | `ZIL <#zil>`__ | ++------------------------+--------------------------------------------+ +| When to change | TBD | ++------------------------+--------------------------------------------+ +| Data Type | long | ++------------------------+--------------------------------------------+ +| Units | bytes | ++------------------------+--------------------------------------------+ +| Range | 512 to 16,777,216 (valid block sizes) | ++------------------------+--------------------------------------------+ +| Default | 32,768 (32 KiB) | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Verification | Data blocks that exceed | +| | ``zfs_immediate_write_sz`` or are written | +| | as ``logbias=throughput`` increment the | +| | ``zil_itx_indirect_count`` entry in | +| | ``/proc/spl/kstat/zfs/zil`` | ++------------------------+--------------------------------------------+ +| Versions Affected | all | ++------------------------+--------------------------------------------+ + +zfs_max_recordsize +~~~~~~~~~~~~~~~~~~ + +ZFS supports logical record (block) sizes from 512 bytes to 16 MiB. The +benefits of larger blocks, and thus larger average I/O sizes, can be +weighed against the cost of copy-on-write of large block to modify one +byte. 
Additionally, very large blocks can have a negative impact on both +I/O latency at the device level and the memory allocator. The +``zfs_max_recordsize`` parameter limits the upper bound of the dataset +volblocksize and recordsize properties. + +Larger blocks can be created by enabling ``zpool`` ``large_blocks`` +feature and changing this ``zfs_max_recordsize``. Pools with larger +blocks can always be imported and used, regardless of the value of +``zfs_max_recordsize``. + +For 32-bit systems, ``zfs_max_recordsize`` also limits the size of +kernel virtual memory caches used in the ZFS I/O pipeline (``zio_buf_*`` +and ``zio_data_buf_*``). + +See also the ``zpool`` ``large_blocks`` feature. + ++--------------------+------------------------------------------------+ +| zfs_max_recordsize | Notes | ++====================+================================================+ +| Tags | `filesystem <#filesystem>`__, | +| | `memory <#memory>`__, `volume <#volume>`__ | ++--------------------+------------------------------------------------+ +| When to change | To create datasets with larger volblocksize or | +| | recordsize | ++--------------------+------------------------------------------------+ +| Data Type | int | ++--------------------+------------------------------------------------+ +| Units | bytes | ++--------------------+------------------------------------------------+ +| Range | 512 to 16,777,216 (valid block sizes) | ++--------------------+------------------------------------------------+ +| Default | 1,048,576 | ++--------------------+------------------------------------------------+ +| Change | Dynamic, set prior to creating volumes or | +| | changing filesystem recordsize | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++--------------------+------------------------------------------------+ + +zfs_mdcomp_disable +~~~~~~~~~~~~~~~~~~ + +``zfs_mdcomp_disable`` allows metadata compression to be disabled. + +================== =============================================== +zfs_mdcomp_disable Notes +================== =============================================== +Tags `CPU <#cpu>`__, `metadata <#metadata>`__ +When to change When CPU cycles cost less than I/O +Data Type boolean +Range 0=compress metadata, 1=do not compress metadata +Default 0 +Change Dynamic +Versions Affected from v0.6.0 to v0.8.0 +================== =============================================== + +zfs_metaslab_fragmentation_threshold +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Allow metaslabs to keep their active state as long as their +fragmentation percentage is less than or equal to this value. When +writing, an active metaslab whose fragmentation percentage exceeds +``zfs_metaslab_fragmentation_threshold`` is avoided allowing metaslabs +with less fragmentation to be preferred. + +Metaslab fragmentation is used to calculate the overall pool +``fragmentation`` property value. However, individual metaslab +fragmentation levels are observable using the ``zdb`` with the ``-mm`` +option. + +``zfs_metaslab_fragmentation_threshold`` works at the metaslab level and +each top-level vdev has approximately +`metaslabs_per_vdev <#metaslabs-per-vdev>`__ metaslabs. 
See also +`zfs_mg_fragmentation_threshold <#zfs-mg-fragmentation-threshold>`__ + ++----------------------------------+----------------------------------+ +| zfs_metaslab_fragmentation_thresh| Notes | +| old | | ++==================================+==================================+ +| Tags | `allocation <#allocation>`__, | +| | `fr | +| | agmentation <#fragmentation>`__, | +| | `vdev <#vdev>`__ | ++----------------------------------+----------------------------------+ +| When to change | Testing metaslab allocation | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | percent | ++----------------------------------+----------------------------------+ +| Range | 1 to 100 | ++----------------------------------+----------------------------------+ +| Default | 70 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------------------+----------------------------------+ + +zfs_mg_fragmentation_threshold +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Metaslab groups (top-level vdevs) are considered eligible for +allocations if their fragmentation percentage metric is less than or +equal to ``zfs_mg_fragmentation_threshold``. If a metaslab group exceeds +this threshold then it will be skipped unless all metaslab groups within +the metaslab class have also crossed the +``zfs_mg_fragmentation_threshold`` threshold. + ++--------------------------------+------------------------------------+ +| zfs_mg_fragmentation_threshold | Notes | ++================================+====================================+ +| Tags | `allocation <#allocation>`__, | +| | ` | +| | fragmentation <#fragmentation>`__, | +| | `vdev <#vdev>`__ | ++--------------------------------+------------------------------------+ +| When to change | Testing metaslab allocation | ++--------------------------------+------------------------------------+ +| Data Type | int | ++--------------------------------+------------------------------------+ +| Units | percent | ++--------------------------------+------------------------------------+ +| Range | 1 to 100 | ++--------------------------------+------------------------------------+ +| Default | 85 | ++--------------------------------+------------------------------------+ +| Change | Dynamic | ++--------------------------------+------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------------+------------------------------------+ + +zfs_mg_noalloc_threshold +~~~~~~~~~~~~~~~~~~~~~~~~ + +Metaslab groups (top-level vdevs) with free space percentage greater +than ``zfs_mg_noalloc_threshold`` are eligible for new allocations. If a +metaslab group's free space is less than or equal to the threshold, the +allocator avoids allocating to that group unless all groups in the pool +have reached the threshold. Once all metaslab groups have reached the +threshold, all metaslab groups are allowed to accept allocations. The +default value of 0 disables the feature and causes all metaslab groups +to be eligible for allocations. + +This parameter allows one to deal with pools having heavily imbalanced +vdevs such as would be the case when a new vdev has been added. 
Setting +the threshold to a non-zero percentage will stop allocations from being +made to vdevs that aren't filled to the specified percentage and allow +lesser filled vdevs to acquire more allocations than they otherwise +would under the older ``zfs_mg_alloc_failures`` facility. + ++--------------------------+------------------------------------------+ +| zfs_mg_noalloc_threshold | Notes | ++==========================+==========================================+ +| Tags | `allocation <#allocation>`__, | +| | `fragmentation <#fragmentation>`__, | +| | `vdev <#vdev>`__ | ++--------------------------+------------------------------------------+ +| When to change | To force rebalancing as top-level vdevs | +| | are added or expanded | ++--------------------------+------------------------------------------+ +| Data Type | int | ++--------------------------+------------------------------------------+ +| Units | percent | ++--------------------------+------------------------------------------+ +| Range | 0 to 100 | ++--------------------------+------------------------------------------+ +| Default | 0 (disabled) | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------------+------------------------------------------+ + +zfs_multihost_history +~~~~~~~~~~~~~~~~~~~~~ + +The pool ``multihost`` multimodifier protection (MMP) subsystem can +record historical updates in the +``/proc/spl/kstat/zfs/POOL_NAME/multihost`` file for debugging purposes. +The number of lines of history is determined by zfs_multihost_history. + +===================== ==================================== +zfs_multihost_history Notes +===================== ==================================== +Tags `MMP <#mmp>`__, `import <#import>`__ +When to change When testing multihost feature +Data Type int +Units lines +Range 0 to INT_MAX +Default 0 +Change Dynamic +Versions Affected v0.7.0 and later +===================== ==================================== + +zfs_multihost_interval +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_multihost_interval`` controls the frequency of multihost writes +performed by the pool multihost multimodifier protection (MMP) +subsystem. The multihost write period is (``zfs_multihost_interval`` / +number of leaf-vdevs) milliseconds. Thus on average a multihost write +will be issued for each leaf vdev every ``zfs_multihost_interval`` +milliseconds. In practice, the observed period can vary with the I/O +load and this observed value is the delay which is stored in the +uberblock. + +On import the multihost activity check waits a minimum amount of time +determined by (``zfs_multihost_interval`` \* +`zfs_multihost_import_intervals <#zfs-multihost-import-intervals>`__) +with a lower bound of 1 second. The activity check time may be further +extended if the value of mmp delay found in the best uberblock indicates +actual multihost updates happened at longer intervals than +``zfs_multihost_interval`` + +Note: the multihost protection feature applies to storage devices that +can be shared between multiple systems. 
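+
+As a rough illustration (the pool name ``tank`` and the vdev count are
+hypothetical), the expected multihost write period follows from the
+formula above, and recent MMP writes can be observed once
+``zfs_multihost_history`` is non-zero::
+
+   # with zfs_multihost_interval = 1000 and 8 leaf vdevs, a multihost write
+   # is issued roughly every 1000 / 8 = 125 ms to some leaf vdev
+   cat /sys/module/zfs/parameters/zfs_multihost_interval
+
+   echo 100 > /sys/module/zfs/parameters/zfs_multihost_history
+   cat /proc/spl/kstat/zfs/tank/multihost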
+
++------------------------+--------------------------------------------+
+| zfs_multihost_interval | Notes                                      |
++========================+============================================+
+| Tags                   | `MMP <#mmp>`__, `import <#import>`__,      |
+|                        | `vdev <#vdev>`__                           |
++------------------------+--------------------------------------------+
+| When to change         | To optimize pool import time against the   |
+|                        | possibility of simultaneous import by      |
+|                        | another system                             |
++------------------------+--------------------------------------------+
+| Data Type              | ulong                                      |
++------------------------+--------------------------------------------+
+| Units                  | milliseconds                               |
++------------------------+--------------------------------------------+
+| Range                  | 100 to ULONG_MAX                           |
++------------------------+--------------------------------------------+
+| Default                | 1000                                       |
++------------------------+--------------------------------------------+
+| Change                 | Dynamic                                    |
++------------------------+--------------------------------------------+
+| Versions Affected      | v0.7.0 and later                           |
++------------------------+--------------------------------------------+
+
+zfs_multihost_import_intervals
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``zfs_multihost_import_intervals`` controls the duration of the activity
+test on pool import for the multihost multimodifier protection (MMP)
+subsystem. The activity test can be expected to take a minimum time of
+(``zfs_multihost_import_intervals`` \*
+`zfs_multihost_interval <#zfs-multihost-interval>`__ \* ``random(25%)``)
+milliseconds. The random period of up to 25% improves simultaneous
+import detection. For example, if two hosts are rebooted at the same
+time and automatically attempt to import the pool, then it is highly
+probable that one host will win.
+
+Smaller values of ``zfs_multihost_import_intervals`` reduce the import
+time but increase the risk of failing to detect an active pool. The
+total activity check time is never allowed to drop below one second.
+
+Note: the multihost protection feature applies to storage devices that
+can be shared between multiple systems.
+
+============================== ====================================
+zfs_multihost_import_intervals Notes
+============================== ====================================
+Tags                           `MMP <#mmp>`__, `import <#import>`__
+When to change                 TBD
+Data Type                      uint
+Units                          intervals
+Range                          1 to UINT_MAX
+Default                        20 since v0.8, previously 10
+Change                         Dynamic
+Versions Affected              v0.7.0 and later
+============================== ====================================
+
+zfs_multihost_fail_intervals
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``zfs_multihost_fail_intervals`` controls the behavior of the pool when
+write failures are detected in the multihost multimodifier protection
+(MMP) subsystem.
+
+If ``zfs_multihost_fail_intervals = 0`` then multihost write failures
+are ignored. The write failures are reported to the ZFS event daemon
+(``zed``) which can take action such as suspending the pool or offlining
+a device.
+
+| If ``zfs_multihost_fail_intervals > 0`` then sequential multihost
+  write failures will cause the pool to be suspended. This occurs when
+  (``zfs_multihost_fail_intervals`` \*
+  `zfs_multihost_interval <#zfs-multihost-interval>`__) milliseconds
+  have passed since the last successful multihost write.
+| This guarantees the activity test will see multihost writes if the
+  pool is attempted to be imported by another system.
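+
+As a worked example using the defaults shown in the tables here: with
+``zfs_multihost_interval = 1000`` ms and
+``zfs_multihost_fail_intervals = 10``, the pool is suspended after
+10 \* 1000 ms = 10 seconds without a successful multihost write, and the
+import-time activity check waits at least
+``zfs_multihost_import_intervals`` \* 1000 ms = 20 seconds.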
+ +============================ ==================================== +zfs_multihost_fail_intervals Notes +============================ ==================================== +Tags `MMP <#mmp>`__, `import <#import>`__ +When to change TBD +Data Type uint +Units intervals +Range 0 to UINT_MAX +Default 10 since v0.8, previously 5 +Change Dynamic +Versions Affected v0.7.0 and later +============================ ==================================== + +zfs_delays_per_second +~~~~~~~~~~~~~~~~~~~~~ + +The ZFS Event Daemon (zed) processes events from ZFS. However, it can be +overwhelmed by high rates of error reports which can be generated by +failing, high-performance devices. ``zfs_delays_per_second`` limits the +rate of delay events reported to zed. + ++-----------------------+---------------------------------------------+ +| zfs_delays_per_second | Notes | ++=======================+=============================================+ +| Tags | `zed <#zed>`__, `delay <#delay>`__ | ++-----------------------+---------------------------------------------+ +| When to change | If processing delay events at a higher rate | +| | is desired | ++-----------------------+---------------------------------------------+ +| Data Type | uint | ++-----------------------+---------------------------------------------+ +| Units | events per second | ++-----------------------+---------------------------------------------+ +| Range | 0 to UINT_MAX | ++-----------------------+---------------------------------------------+ +| Default | 20 | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.7.7 and later | ++-----------------------+---------------------------------------------+ + +zfs_checksums_per_second +~~~~~~~~~~~~~~~~~~~~~~~~ + +The ZFS Event Daemon (zed) processes events from ZFS. However, it can be +overwhelmed by high rates of error reports which can be generated by +failing, high-performance devices. ``zfs_checksums_per_second`` limits +the rate of checksum events reported to zed. + +Note: do not set this value lower than the SERD limit for ``checksum`` +in zed. By default, ``checksum_N`` = 10 and ``checksum_T`` = 10 minutes, +resulting in a practical lower limit of 1. 
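+
+For example (a sketch; the value written is only illustrative), the two
+zed rate limits can be read and raised at runtime::
+
+   cat /sys/module/zfs/parameters/zfs_delays_per_second
+   cat /sys/module/zfs/parameters/zfs_checksums_per_second
+
+   # allow more checksum events through to zed while troubleshooting
+   echo 50 > /sys/module/zfs/parameters/zfs_checksums_per_second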
+ ++--------------------------+------------------------------------------+ +| zfs_checksums_per_second | Notes | ++==========================+==========================================+ +| Tags | `zed <#zed>`__, `checksum <#checksum>`__ | ++--------------------------+------------------------------------------+ +| When to change | If processing checksum error events at a | +| | higher rate is desired | ++--------------------------+------------------------------------------+ +| Data Type | uint | ++--------------------------+------------------------------------------+ +| Units | events per second | ++--------------------------+------------------------------------------+ +| Range | 0 to UINT_MAX | ++--------------------------+------------------------------------------+ +| Default | 20 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.7 and later | ++--------------------------+------------------------------------------+ + +zfs_no_scrub_io +~~~~~~~~~~~~~~~ + +When ``zfs_no_scrub_io = 1`` scrubs do not actually scrub data and +simply doing a metadata crawl of the pool instead. + +================= =============================================== +zfs_no_scrub_io Notes +================= =============================================== +Tags `scrub <#scrub>`__ +When to change Testing scrub feature +Data Type boolean +Range 0=perform scrub I/O, 1=do not perform scrub I/O +Default 0 +Change Dynamic +Versions Affected v0.6.0 and later +================= =============================================== + +zfs_no_scrub_prefetch +~~~~~~~~~~~~~~~~~~~~~ + +When ``zfs_no_scrub_prefetch = 1``, prefetch is disabled for scrub I/Os. + ++-----------------------+-----------------------------------------------------+ +| zfs_no_scrub_prefetch | Notes | ++=======================+=====================================================+ +| Tags | `prefetch <#prefetch>`__, `scrub <#scrub>`__ | ++-----------------------+-----------------------------------------------------+ +| When to change | Testing scrub feature | ++-----------------------+-----------------------------------------------------+ +| Data Type | boolean | ++-----------------------+-----------------------------------------------------+ +| Range | 0=prefetch scrub I/Os, 1=do not prefetch scrub I/Os | ++-----------------------+-----------------------------------------------------+ +| Default | 0 | ++-----------------------+-----------------------------------------------------+ +| Change | Dynamic | ++-----------------------+-----------------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++-----------------------+-----------------------------------------------------+ + +zfs_nocacheflush +~~~~~~~~~~~~~~~~ + +ZFS uses barriers (volatile cache flush commands) to ensure data is +committed to permanent media by devices. This ensures consistent +on-media state for devices where caches are volatile (eg HDDs). + +For devices with nonvolatile caches, the cache flush operation can be a +no-op. However, in some RAID arrays, cache flushes can cause the entire +cache to be flushed to the backing devices. + +To ensure on-media consistency, keep cache flush enabled. 
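+
+If disabling is appropriate at all (only when every device has a
+nonvolatile, power-protected write cache), a sketch of the two forms is
+shown below; the configuration file name is an example::
+
+   # /etc/modprobe.d/zfs.conf (applied when the zfs module loads)
+   options zfs zfs_nocacheflush=1
+
+   # or change it on a running system
+   echo 1 > /sys/module/zfs/parameters/zfs_nocacheflush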
+
++-------------------+-------------------------------------------------+
+| zfs_nocacheflush  | Notes                                           |
++===================+=================================================+
+| Tags              | `disks <#disks>`__                              |
++-------------------+-------------------------------------------------+
+| When to change    | If the storage device has nonvolatile cache,    |
+|                   | then disabling cache flush can save the cost of |
+|                   | occasional cache flush commands                 |
++-------------------+-------------------------------------------------+
+| Data Type         | boolean                                         |
++-------------------+-------------------------------------------------+
+| Range             | 0=send cache flush commands, 1=do not send      |
+|                   | cache flush commands                            |
++-------------------+-------------------------------------------------+
+| Default           | 0                                               |
++-------------------+-------------------------------------------------+
+| Change            | Dynamic                                         |
++-------------------+-------------------------------------------------+
+| Versions Affected | all                                             |
++-------------------+-------------------------------------------------+
+
+zfs_nopwrite_enabled
+~~~~~~~~~~~~~~~~~~~~
+
+The NOP-write feature is enabled by default when a
+cryptographically-secure checksum algorithm is in use by the dataset.
+``zfs_nopwrite_enabled`` allows the NOP-write feature to be completely
+disabled.
+
++----------------------+----------------------------------------------+
+| zfs_nopwrite_enabled | Notes                                        |
++======================+==============================================+
+| Tags                 | `checksum <#checksum>`__, `debug <#debug>`__ |
++----------------------+----------------------------------------------+
+| When to change       | TBD                                          |
++----------------------+----------------------------------------------+
+| Data Type            | boolean                                      |
++----------------------+----------------------------------------------+
+| Range                | 0=disable NOP-write feature, 1=enable        |
+|                      | NOP-write feature                            |
++----------------------+----------------------------------------------+
+| Default              | 1                                            |
++----------------------+----------------------------------------------+
+| Change               | Dynamic                                      |
++----------------------+----------------------------------------------+
+| Versions Affected    | v0.6.0 and later                             |
++----------------------+----------------------------------------------+
+
+zfs_dmu_offset_next_sync
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+``zfs_dmu_offset_next_sync`` enables forcing a txg sync in order to find
+holes. When enabled, and the ``SEEK_HOLE`` or ``SEEK_DATA`` flags are
+used on a file with a dirty dnode, ZFS acts like older versions: the txg
+is synced so that the data and holes can be found.
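+
+A minimal sketch of toggling this at runtime (the value written is the
+only assumption)::
+
+   # 0 = dirty files may be reported as having no holes,
+   # 1 = force a txg sync so holes are reported accurately
+   cat /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
+   echo 1 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync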
+ ++--------------------------+------------------------------------------+ +| zfs_dmu_offset_next_sync | Notes | ++==========================+==========================================+ +| Tags | `DMU <#dmu>`__ | ++--------------------------+------------------------------------------+ +| When to change | to exchange strict hole reporting for | +| | performance | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=do not force txg sync to find holes, | +| | 1=force txg sync to find holes | ++--------------------------+------------------------------------------+ +| Default | 1 since v2.1.5, previously 0 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------------+------------------------------------------+ + +zfs_pd_bytes_max +~~~~~~~~~~~~~~~~ + +``zfs_pd_bytes_max`` limits the number of bytes prefetched during a pool +traversal (eg ``zfs send`` or other data crawling operations). These +prefetches are referred to as "prescient prefetches" and are always 100% +hit rate. The traversal operations do not use the default data or +metadata prefetcher. + +================= ========================================== +zfs_pd_bytes_max Notes +================= ========================================== +Tags `prefetch <#prefetch>`__, `send <#send>`__ +When to change TBD +Data Type int32 +Units bytes +Range 0 to INT32_MAX +Default 52,428,800 (50 MiB) +Change Dynamic +Versions Affected TBD +================= ========================================== + +zfs_per_txg_dirty_frees_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_per_txg_dirty_frees_percent`` as a percentage of +`zfs_dirty_data_max <#zfs-dirty-data-max>`__ controls the percentage of +dirtied blocks from frees in one txg. After the threshold is crossed, +additional dirty blocks from frees wait until the next txg. Thus, when +deleting large files, filling consecutive txgs with deletes/frees, does +not throttle other, perhaps more important, writes. + +A side effect of this throttle can impact ``zfs receive`` workloads that +contain a large number of frees and the +`ignore_hole_birth <#ignore-hole-birth>`__ optimization is disabled. The +symptom is that the receive workload causes an increase in the frequency +of txg commits. The frequency of txg commits is observable via the +``otime`` column of ``/proc/spl/kstat/zfs/POOLNAME/txgs``. Since txg +commits also flush data from volatile caches in HDDs to media, HDD +performance can be negatively impacted. Also, since the frees do not +consume much bandwidth over the pipe, the pipe can appear to stall. Thus +the overall progress of receives is slower than expected. + +A value of zero will disable this throttle. + ++---------------------------------+-----------------------------------+ +| zfs_per_txg_dirty_frees_percent | Notes | ++=================================+===================================+ +| Tags | `delete <#delete>`__ | ++---------------------------------+-----------------------------------+ +| When to change | For ``zfs receive`` workloads, | +| | consider increasing or disabling. 
| +| | See section `ZFS I/O | +| | S | +| | cheduler `__ | ++---------------------------------+-----------------------------------+ +| Data Type | ulong | ++---------------------------------+-----------------------------------+ +| Units | percent | ++---------------------------------+-----------------------------------+ +| Range | 0 to 100 | ++---------------------------------+-----------------------------------+ +| Default | 30 | ++---------------------------------+-----------------------------------+ +| Change | Dynamic | ++---------------------------------+-----------------------------------+ +| Versions Affected | v0.7.0 and later | ++---------------------------------+-----------------------------------+ + +zfs_prefetch_disable +~~~~~~~~~~~~~~~~~~~~ + +``zfs_prefetch_disable`` controls the predictive prefetcher. + +Note that it leaves "prescient" prefetch (eg prefetch for ``zfs send``) +intact (see `zfs_pd_bytes_max <#zfs-pd-bytes-max>`__) + ++----------------------+----------------------------------------------+ +| zfs_prefetch_disable | Notes | ++======================+==============================================+ +| Tags | `prefetch <#prefetch>`__ | ++----------------------+----------------------------------------------+ +| When to change | In some case where the workload is | +| | completely random reads, overall performance | +| | can be better if prefetch is disabled | ++----------------------+----------------------------------------------+ +| Data Type | boolean | ++----------------------+----------------------------------------------+ +| Range | 0=prefetch enabled, 1=prefetch disabled | ++----------------------+----------------------------------------------+ +| Default | 0 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Verification | prefetch efficacy is observed by | +| | ``arcstat``, ``arc_summary``, and the | +| | relevant entries in | +| | ``/proc/spl/kstat/zfs/arcstats`` | ++----------------------+----------------------------------------------+ +| Versions Affected | all | ++----------------------+----------------------------------------------+ + +zfs_read_chunk_size +~~~~~~~~~~~~~~~~~~~ + +``zfs_read_chunk_size`` is the limit for ZFS filesystem reads. 
If an +application issues a ``read()`` larger than ``zfs_read_chunk_size``, +then the ``read()`` is divided into multiple operations no larger than +``zfs_read_chunk_size`` + +=================== ============================ +zfs_read_chunk_size Notes +=================== ============================ +Tags `filesystem <#filesystem>`__ +When to change TBD +Data Type ulong +Units bytes +Range 512 to ULONG_MAX +Default 1,048,576 +Change Dynamic +Versions Affected all +=================== ============================ + +zfs_read_history +~~~~~~~~~~~~~~~~ + +Historical statistics for the last ``zfs_read_history`` reads are +available in ``/proc/spl/kstat/zfs/POOL_NAME/reads`` + +================= ================================= +zfs_read_history Notes +================= ================================= +Tags `debug <#debug>`__ +When to change To observe read operation details +Data Type int +Units lines +Range 0 to INT_MAX +Default 0 +Change Dynamic +Versions Affected all +================= ================================= + +zfs_read_history_hits +~~~~~~~~~~~~~~~~~~~~~ + +When `zfs_read_history <#zfs-read-history>`__\ ``> 0``, +zfs_read_history_hits controls whether ARC hits are displayed in the +read history file, ``/proc/spl/kstat/zfs/POOL_NAME/reads`` + ++-----------------------+---------------------------------------------+ +| zfs_read_history_hits | Notes | ++=======================+=============================================+ +| Tags | `debug <#debug>`__ | ++-----------------------+---------------------------------------------+ +| When to change | To observe read operation details with ARC | +| | hits | ++-----------------------+---------------------------------------------+ +| Data Type | boolean | ++-----------------------+---------------------------------------------+ +| Range | 0=do not include data for ARC hits, | +| | 1=include ARC hit data | ++-----------------------+---------------------------------------------+ +| Default | 0 | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | all | ++-----------------------+---------------------------------------------+ + +zfs_recover +~~~~~~~~~~~ + +``zfs_recover`` can be set to true (1) to attempt to recover from +otherwise-fatal errors, typically caused by on-disk corruption. 
When +set, calls to ``zfs_panic_recover()`` will turn into warning messages +rather than calling ``panic()`` + ++-------------------+-------------------------------------------------+ +| zfs_recover | Notes | ++===================+=================================================+ +| Tags | `import <#import>`__ | ++-------------------+-------------------------------------------------+ +| When to change | zfs_recover should only be used as a last | +| | resort, as it typically results in leaked | +| | space, or worse | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=normal operation, 1=attempt recovery zpool | +| | import | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Verification | check output of ``dmesg`` and other logs for | +| | details | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.4 or later | ++-------------------+-------------------------------------------------+ + +zfs_resilver_min_time_ms +~~~~~~~~~~~~~~~~~~~~~~~~ + +Resilvers are processed by the sync thread in syncing context. While +resilvering, ZFS spends at least ``zfs_resilver_min_time_ms`` time +working on a resilver between txg commits. + +The `zfs_txg_timeout <#zfs-txg-timeout>`__ tunable sets a nominal +timeout value for the txg commits. By default, this timeout is 5 seconds +and the ``zfs_resilver_min_time_ms`` is 3 seconds. However, many +variables contribute to changing the actual txg times. The measured txg +interval is observed as the ``otime`` column (in nanoseconds) in the +``/proc/spl/kstat/zfs/POOL_NAME/txgs`` file. + +See also `zfs_txg_timeout <#zfs-txg-timeout>`__ and +`zfs_scan_min_time_ms <#zfs-scan-min-time-ms>`__ + ++--------------------------+------------------------------------------+ +| zfs_resilver_min_time_ms | Notes | ++==========================+==========================================+ +| Tags | `resilver <#resilver>`__ | ++--------------------------+------------------------------------------+ +| When to change | In some resilvering cases, increasing | +| | ``zfs_resilver_min_time_ms`` can result | +| | in faster completion | ++--------------------------+------------------------------------------+ +| Data Type | int | ++--------------------------+------------------------------------------+ +| Units | milliseconds | ++--------------------------+------------------------------------------+ +| Range | 1 to | +| | `zfs_txg_timeout <#zfs-txg-timeout>`__ | +| | converted to milliseconds | ++--------------------------+------------------------------------------+ +| Default | 3,000 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | all | ++--------------------------+------------------------------------------+ + +zfs_scan_min_time_ms +~~~~~~~~~~~~~~~~~~~~ + +Scrubs are processed by the sync thread in syncing context. While +scrubbing, ZFS spends at least ``zfs_scan_min_time_ms`` time working on +a scrub between txg commits. 
+
+See also `zfs_txg_timeout <#zfs-txg-timeout>`__ and
+`zfs_resilver_min_time_ms <#zfs-resilver-min-time-ms>`__
+
++----------------------+----------------------------------------------+
+| zfs_scan_min_time_ms | Notes                                        |
++======================+==============================================+
+| Tags                 | `scrub <#scrub>`__                           |
++----------------------+----------------------------------------------+
+| When to change       | In some scrub cases, increasing              |
+|                      | ``zfs_scan_min_time_ms`` can result in       |
+|                      | faster completion                            |
++----------------------+----------------------------------------------+
+| Data Type            | int                                          |
++----------------------+----------------------------------------------+
+| Units                | milliseconds                                 |
++----------------------+----------------------------------------------+
+| Range                | 1 to `zfs_txg_timeout <#zfs-txg-timeout>`__  |
+|                      | converted to milliseconds                    |
++----------------------+----------------------------------------------+
+| Default              | 1,000                                        |
++----------------------+----------------------------------------------+
+| Change               | Dynamic                                      |
++----------------------+----------------------------------------------+
+| Versions Affected    | all                                          |
++----------------------+----------------------------------------------+
+
+zfs_scan_checkpoint_intval
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To preserve progress across reboots, the sequential scan algorithm
+periodically needs to stop metadata scanning and issue all the
+verification I/Os to disk every ``zfs_scan_checkpoint_intval`` seconds.
+
+========================== ============================================
+zfs_scan_checkpoint_intval Notes
+========================== ============================================
+Tags                       `resilver <#resilver>`__, `scrub <#scrub>`__
+When to change             TBD
+Data Type                  int
+Units                      seconds
+Range                      1 to INT_MAX
+Default                    7,200 (2 hours)
+Change                     Dynamic
+Versions Affected          v0.8.0 and later
+========================== ============================================
+
+zfs_scan_fill_weight
+~~~~~~~~~~~~~~~~~~~~
+
+This tunable affects how scrub and resilver I/O segments are ordered. A
+higher number indicates that we care more about how filled in a segment
+is, while a lower number indicates we care more about the size of the
+extent without considering the gaps within a segment.
+
+==================== ============================================
+zfs_scan_fill_weight Notes
+==================== ============================================
+Tags                 `resilver <#resilver>`__, `scrub <#scrub>`__
+When to change       Testing sequential scrub and resilver
+Data Type            int
+Units                scalar
+Range                0 to INT_MAX
+Default              3
+Change               Prior to zfs module load
+Versions Affected    v0.8.0 and later
+==================== ============================================
+
+zfs_scan_issue_strategy
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``zfs_scan_issue_strategy`` controls the order of data verification
+while scrubbing or resilvering.
+
++-------+-------------------------------------------------------------+
+| value | description                                                 |
++=======+=============================================================+
+| 0     | zfs will use strategy 1 during normal verification and      |
+|       | strategy 2 while taking a checkpoint                        |
++-------+-------------------------------------------------------------+
+| 1     | data is verified as sequentially as possible, given the     |
+|       | amount of memory reserved for scrubbing (see                |
+|       | `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__). This   |
+|       | can improve scrub performance if the pool's data is heavily |
+|       | fragmented.                                                 |
++-------+-------------------------------------------------------------+
+| 2     | the largest mostly-contiguous chunk of found data is        |
+|       | verified first. By deferring scrubbing of small segments,   |
+|       | we may later find adjacent data to coalesce and increase    |
+|       | the segment size.                                           |
++-------+-------------------------------------------------------------+
+
+======================= ============================================
+zfs_scan_issue_strategy Notes
+======================= ============================================
+Tags                    `resilver <#resilver>`__, `scrub <#scrub>`__
+When to change          TBD
+Data Type               enum
+Range                   0 to 2
+Default                 0
+Change                  Dynamic
+Versions Affected       TBD
+======================= ============================================
+
+zfs_scan_legacy
+~~~~~~~~~~~~~~~
+
+Setting ``zfs_scan_legacy = 1`` enables the legacy scan and scrub
+behavior instead of the newer sequential behavior.
+
++-------------------+-------------------------------------------------+
+| zfs_scan_legacy   | Notes                                           |
++===================+=================================================+
+| Tags              | `resilver <#resilver>`__, `scrub <#scrub>`__    |
++-------------------+-------------------------------------------------+
+| When to change    | In some cases, the new scan mode can consume    |
+|                   | more memory as it collects and sorts I/Os;      |
+|                   | using the legacy algorithm can be more memory   |
+|                   | efficient at the expense of HDD read efficiency |
++-------------------+-------------------------------------------------+
+| Data Type         | boolean                                         |
++-------------------+-------------------------------------------------+
+| Range             | 0=use the new method: scrubs and resilvers      |
+|                   | gather metadata in memory before issuing        |
+|                   | sequential I/O, 1=use the legacy algorithm      |
+|                   | where I/O is initiated as soon as it is         |
+|                   | discovered                                      |
++-------------------+-------------------------------------------------+
+| Default           | 0                                               |
++-------------------+-------------------------------------------------+
+| Change            | Dynamic, however changing to 0 does not affect  |
+|                   | in-progress scrubs or resilvers                 |
++-------------------+-------------------------------------------------+
+| Versions Affected | v0.8.0 and later                                |
++-------------------+-------------------------------------------------+
+
+zfs_scan_max_ext_gap
+~~~~~~~~~~~~~~~~~~~~
+
+``zfs_scan_max_ext_gap`` limits the largest gap in bytes between scrub
+and resilver I/Os that will still be considered sequential for sorting
+purposes.
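+
+For example (the value is illustrative only), the gap threshold can be
+raised so that more nearby extents are sorted and issued as one
+sequential run::
+
+   # treat gaps of up to 4 MiB as sequential for sorting purposes
+   echo $((4 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_scan_max_ext_gap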
+ ++----------------------+----------------------------------------------+ +| zfs_scan_max_ext_gap | Notes | ++======================+==============================================+ +| Tags | `resilver <#resilver>`__, `scrub <#scrub>`__ | ++----------------------+----------------------------------------------+ +| When to change | TBD | ++----------------------+----------------------------------------------+ +| Data Type | ulong | ++----------------------+----------------------------------------------+ +| Units | bytes | ++----------------------+----------------------------------------------+ +| Range | 512 to ULONG_MAX | ++----------------------+----------------------------------------------+ +| Default | 2,097,152 (2 MiB) | ++----------------------+----------------------------------------------+ +| Change | Dynamic, however changing to 0 does not | +| | affect in-progress scrubs or resilvers | ++----------------------+----------------------------------------------+ +| Versions Affected | v0.8.0 and later | ++----------------------+----------------------------------------------+ + +zfs_scan_mem_lim_fact +~~~~~~~~~~~~~~~~~~~~~ + +``zfs_scan_mem_lim_fact`` limits the maximum fraction of RAM used for +I/O sorting by sequential scan algorithm. When the limit is reached +scanning metadata is stopped and data verification I/O is started. Data +verification I/O continues until the memory used by the sorting +algorithm drops by +`zfs_scan_mem_lim_soft_fact <#zfs-scan-mem-lim-soft-fact>`__ + +Memory used by the sequential scan algorithm can be observed as the kmem +sio_cache. This is visible from procfs as +``grep sio_cache /proc/slabinfo`` and can be monitored using +slab-monitoring tools such as ``slabtop`` + ++-----------------------+---------------------------------------------+ +| zfs_scan_mem_lim_fact | Notes | ++=======================+=============================================+ +| Tags | `memory <#memory>`__, | +| | `resilver <#resilver>`__, | +| | `scrub <#scrub>`__ | ++-----------------------+---------------------------------------------+ +| When to change | TBD | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | divisor of physical RAM | ++-----------------------+---------------------------------------------+ +| Range | TBD | ++-----------------------+---------------------------------------------+ +| Default | 20 (physical RAM / 20 or 5%) | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.8.0 and later | ++-----------------------+---------------------------------------------+ + +zfs_scan_mem_lim_soft_fact +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_scan_mem_lim_soft_fact`` sets the fraction of the hard limit, +`zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__, used to determined +the RAM soft limit for I/O sorting by the sequential scan algorithm. 
+After `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__ has been
+reached, metadata scanning is stopped until the RAM usage drops by
+``zfs_scan_mem_lim_soft_fact``
+
++----------------------------+----------------------------------------+
+| zfs_scan_mem_lim_soft_fact | Notes                                  |
++============================+========================================+
+| Tags                       | `resilver <#resilver>`__,              |
+|                            | `scrub <#scrub>`__                     |
++----------------------------+----------------------------------------+
+| When to change             | TBD                                    |
++----------------------------+----------------------------------------+
+| Data Type                  | int                                    |
++----------------------------+----------------------------------------+
+| Units                      | divisor of (physical RAM /             |
+|                            | ``zfs_scan_mem_lim_fact``)             |
++----------------------------+----------------------------------------+
+| Range                      | 1 to INT_MAX                           |
++----------------------------+----------------------------------------+
+| Default                    | 20 (for the default                    |
+|                            | ``zfs_scan_mem_lim_fact``, 0.25% of    |
+|                            | physical RAM)                          |
++----------------------------+----------------------------------------+
+| Change                     | Dynamic                                |
++----------------------------+----------------------------------------+
+| Versions Affected          | v0.8.0 and later                       |
++----------------------------+----------------------------------------+
+
+zfs_scan_vdev_limit
+~~~~~~~~~~~~~~~~~~~
+
+``zfs_scan_vdev_limit`` is the maximum amount of data that can be
+concurrently issued for scrubs and resilvers per leaf vdev.
+``zfs_scan_vdev_limit`` attempts to strike a balance between keeping the
+leaf vdev queues full of I/Os and not overflowing the queues, which
+causes high latency and long txg sync times. While
+``zfs_scan_vdev_limit`` represents a bandwidth limit, the existing I/O
+limit of `zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__
+remains in effect, too.
+
++---------------------+-----------------------------------------------+
+| zfs_scan_vdev_limit | Notes                                         |
++=====================+===============================================+
+| Tags                | `resilver <#resilver>`__, `scrub <#scrub>`__, |
+|                     | `vdev <#vdev>`__                              |
++---------------------+-----------------------------------------------+
+| When to change      | TBD                                           |
++---------------------+-----------------------------------------------+
+| Data Type           | ulong                                         |
++---------------------+-----------------------------------------------+
+| Units               | bytes                                         |
++---------------------+-----------------------------------------------+
+| Range               | 512 to ULONG_MAX                              |
++---------------------+-----------------------------------------------+
+| Default             | 4,194,304 (4 MiB)                             |
++---------------------+-----------------------------------------------+
+| Change              | Dynamic                                       |
++---------------------+-----------------------------------------------+
+| Versions Affected   | v0.8.0 and later                              |
++---------------------+-----------------------------------------------+
+
+zfs_send_corrupt_data
+~~~~~~~~~~~~~~~~~~~~~
+
+``zfs_send_corrupt_data`` enables ``zfs send`` to send corrupt data by
+ignoring read and checksum errors. The corrupted or unreadable blocks
+are replaced with the value ``0x2f5baddb10c`` (ZFS bad block).
+
++-----------------------+---------------------------------------------+
+| zfs_send_corrupt_data | Notes                                       |
++=======================+=============================================+
+| Tags                  | `send <#send>`__                            |
++-----------------------+---------------------------------------------+
+| When to change        | When data corruption exists and an attempt  |
+|                       | to recover at least some data via           |
+|                       | ``zfs send`` is needed                      |
++-----------------------+---------------------------------------------+
+| Data Type             | boolean                                     |
++-----------------------+---------------------------------------------+
+| Range                 | 0=do not send corrupt data, 1=replace       |
+|                       | corrupt data with cookie                    |
++-----------------------+---------------------------------------------+
+| Default               | 0                                           |
++-----------------------+---------------------------------------------+
+| Change                | Dynamic                                     |
++-----------------------+---------------------------------------------+
+| Versions Affected     | v0.6.0 and later                            |
++-----------------------+---------------------------------------------+
+
+zfs_sync_pass_deferred_free
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The SPA sync process is performed in multiple passes. Once the pass
+number reaches ``zfs_sync_pass_deferred_free``, frees are no longer
+processed and must wait for the next SPA sync.
+
+The ``zfs_sync_pass_deferred_free`` value is expected to be removed as a
+tunable once the optimal value is determined during field testing.
+
+The ``zfs_sync_pass_deferred_free`` pass must be greater than 1 to
+ensure that regular blocks are not deferred.
+
+=========================== ========================
+zfs_sync_pass_deferred_free Notes
+=========================== ========================
+Tags                        `SPA <#spa>`__
+When to change              Testing SPA sync process
+Data Type                   int
+Units                       SPA sync passes
+Range                       1 to INT_MAX
+Default                     2
+Change                      Dynamic
+Versions Affected           all
+=========================== ========================
+
+zfs_sync_pass_dont_compress
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The SPA sync process is performed in multiple passes. Once the pass
+number reaches ``zfs_sync_pass_dont_compress``, data block compression
+is no longer processed and must wait for the next SPA sync.
+
+The ``zfs_sync_pass_dont_compress`` value is expected to be removed as a
+tunable once the optimal value is determined during field testing.
+
+=========================== ========================
+zfs_sync_pass_dont_compress Notes
+=========================== ========================
+Tags                        `SPA <#spa>`__
+When to change              Testing SPA sync process
+Data Type                   int
+Units                       SPA sync passes
+Range                       1 to INT_MAX
+Default                     5
+Change                      Dynamic
+Versions Affected           all
+=========================== ========================
+
+zfs_sync_pass_rewrite
+~~~~~~~~~~~~~~~~~~~~~
+
+The SPA sync process is performed in multiple passes. Once the pass
+number reaches ``zfs_sync_pass_rewrite``, blocks can be split into gang
+blocks.
+
+The ``zfs_sync_pass_rewrite`` value is expected to be removed as a
+tunable once the optimal value is determined during field testing.
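+
+A minimal sketch for inspecting and adjusting the sync-pass tunables at
+runtime (the value written is only an example)::
+
+   grep . /sys/module/zfs/parameters/zfs_sync_pass_*
+
+   # defer compression to a later pass while testing SPA sync behavior
+   echo 8 > /sys/module/zfs/parameters/zfs_sync_pass_dont_compress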
+ +===================== ======================== +zfs_sync_pass_rewrite Notes +===================== ======================== +Tags `SPA <#spa>`__ +When to change Testing SPA sync process +Data Type int +Units SPA sync passes +Range 1 to INT_MAX +Default 2 +Change Dynamic +Versions Affected all +===================== ======================== + +zfs_sync_taskq_batch_pct +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_sync_taskq_batch_pct`` controls the number of threads used by the +DSL pool sync taskq, ``dp_sync_taskq`` + ++--------------------------+------------------------------------------+ +| zfs_sync_taskq_batch_pct | Notes | ++==========================+==========================================+ +| Tags | `SPA <#spa>`__ | ++--------------------------+------------------------------------------+ +| When to change | to adjust the number of | +| | ``dp_sync_taskq`` threads | ++--------------------------+------------------------------------------+ +| Data Type | int | ++--------------------------+------------------------------------------+ +| Units | percent of number of online CPUs | ++--------------------------+------------------------------------------+ +| Range | 1 to 100 | ++--------------------------+------------------------------------------+ +| Default | 75 | ++--------------------------+------------------------------------------+ +| Change | Prior to zfs module load | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------------+------------------------------------------+ + +zfs_txg_history +~~~~~~~~~~~~~~~ + +Historical statistics for the last ``zfs_txg_history`` txg commits are +available in ``/proc/spl/kstat/zfs/POOL_NAME/txgs`` + +The work required to measure the txg commit (SPA statistics) is low. +However, for debugging purposes, it can be useful to observe the SPA +statistics. + +================= ====================================================== +zfs_txg_history Notes +================= ====================================================== +Tags `debug <#debug>`__ +When to change To observe details of SPA sync behavior. +Data Type int +Units lines +Range 0 to INT_MAX +Default 0 for version v0.6.0 to v0.7.6, 100 for version v0.8.0 +Change Dynamic +Versions Affected all +================= ====================================================== + +zfs_txg_timeout +~~~~~~~~~~~~~~~ + +The open txg is committed to the pool periodically (SPA sync) and +``zfs_txg_timeout`` represents the default target upper limit. + +txg commits can occur more frequently and a rapid rate of txg commits +often indicates a busy write workload, quota limits reached, or the free +space is critically low. + +Many variables contribute to changing the actual txg times. txg commits +can also take longer than ``zfs_txg_timeout`` if the ZFS write throttle +is not properly tuned or the time to sync is otherwise delayed (eg slow +device). Shorter txg commit intervals can occur due to +`zfs_dirty_data_sync <#zfs-dirty-data-sync>`__ for write-intensive +workloads. The measured txg interval is observed as the ``otime`` column +(in nanoseconds) in the ``/proc/spl/kstat/zfs/POOL_NAME/txgs`` file. 
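+
+For example (``tank`` is a placeholder pool name), recent txg commit
+intervals can be read from the ``otime`` column once
+``zfs_txg_history`` is non-zero::
+
+   cat /sys/module/zfs/parameters/zfs_txg_timeout
+   echo 100 > /sys/module/zfs/parameters/zfs_txg_history
+   cat /proc/spl/kstat/zfs/tank/txgs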
+ +See also `zfs_dirty_data_sync <#zfs-dirty-data-sync>`__ and +`zfs_txg_history <#zfs-txg-history>`__ + ++-------------------+-------------------------------------------------+ +| zfs_txg_timeout | Notes | ++===================+=================================================+ +| Tags | `SPA <#spa>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------+-------------------------------------------------+ +| When to change | To optimize the work done by txg commit | +| | relative to the pool requirements. See also | +| | section `ZFS I/O | +| | Scheduler `__ | ++-------------------+-------------------------------------------------+ +| Data Type | int | ++-------------------+-------------------------------------------------+ +| Units | seconds | ++-------------------+-------------------------------------------------+ +| Range | 1 to INT_MAX | ++-------------------+-------------------------------------------------+ +| Default | 5 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +zfs_vdev_aggregation_limit +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To reduce IOPs, small, adjacent I/Os can be aggregated (coalesced) into +a large I/O. For reads, aggregations occur across small adjacency gaps. +For writes, aggregation can occur at the ZFS or disk level. +``zfs_vdev_aggregation_limit`` is the upper bound on the size of the +larger, aggregated I/O. + +Setting ``zfs_vdev_aggregation_limit = 0`` effectively disables +aggregation by ZFS. However, the block device scheduler can still merge +(aggregate) I/Os. Also, many devices, such as modern HDDs, contain +schedulers that can aggregate I/Os. + +In general, I/O aggregation can improve performance for devices, such as +HDDs, where ordering I/O operations for contiguous LBAs is a benefit. +For random access devices, such as SSDs, aggregation might not improve +performance relative to the CPU cycles needed to aggregate. 
For devices
+that represent themselves as having no rotation, the
+`zfs_vdev_aggregation_limit_non_rotating <#zfs-vdev-aggregation-limit-non-rotating>`__
+parameter is used instead of ``zfs_vdev_aggregation_limit``.
+
++----------------------------+----------------------------------------+
+| zfs_vdev_aggregation_limit | Notes                                  |
++============================+========================================+
+| Tags                       | `vdev <#vdev>`__,                      |
+|                            | `ZIO_scheduler <#zio-scheduler>`__     |
++----------------------------+----------------------------------------+
+| When to change             | If the workload does not benefit from  |
+|                            | aggregation, the                       |
+|                            | ``zfs_vdev_aggregation_limit`` can be  |
+|                            | reduced to avoid aggregation attempts  |
++----------------------------+----------------------------------------+
+| Data Type                  | int                                    |
++----------------------------+----------------------------------------+
+| Units                      | bytes                                  |
++----------------------------+----------------------------------------+
+| Range                      | 0 to 1,048,576 (default) or 16,777,216 |
+|                            | (if ``zpool`` ``large_blocks`` feature |
+|                            | is enabled)                            |
++----------------------------+----------------------------------------+
+| Default                    | 1,048,576, or 131,072 for              |
+|                            | nonrotating media                      |
++----------------------------+----------------------------------------+
+
+zfs_vdev_cache_size
+~~~~~~~~~~~~~~~~~~~
+
+Note: with the current ZFS code, the vdev cache is not helpful and in
+some cases actually harmful. Thus it is disabled by default by setting
+``zfs_vdev_cache_size = 0``
+
++---------------------+-----------------------------------------------+
+| zfs_vdev_cache_size | Notes                                         |
++=====================+===============================================+
+| Tags                | `vdev <#vdev>`__,                             |
+|                     | `vdev_cache <#vdev-cache>`__                  |
++---------------------+-----------------------------------------------+
+| When to change      | Do not change                                 |
++---------------------+-----------------------------------------------+
+| Data Type           | int                                           |
++---------------------+-----------------------------------------------+
+| Units               | bytes                                         |
++---------------------+-----------------------------------------------+
+| Range               | 0 to MAX_INT                                  |
++---------------------+-----------------------------------------------+
+| Default             | 0 (vdev cache is disabled)                    |
++---------------------+-----------------------------------------------+
+| Change              | Dynamic                                       |
++---------------------+-----------------------------------------------+
+| Verification        | vdev cache statistics are available in the    |
+|                     | ``/proc/spl/kstat/zfs/vdev_cache_stats`` file |
++---------------------+-----------------------------------------------+
+| Versions Affected   | all                                           |
++---------------------+-----------------------------------------------+
+
+zfs_vdev_cache_bshift
+~~~~~~~~~~~~~~~~~~~~~
+
+Note: with the current ZFS code, the vdev cache is not helpful and in
+some cases actually harmful. Thus it is disabled by setting the
+`zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ to zero. This related
+tunable is, by default, inoperative.
+
+All read I/Os smaller than `zfs_vdev_cache_max <#zfs-vdev-cache-max>`__
+are turned into (``1 << zfs_vdev_cache_bshift``) byte reads by the vdev
+cache. At most `zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ bytes will
+be kept in each vdev's cache.
+
+===================== ==============================================
+zfs_vdev_cache_bshift Notes
+===================== ==============================================
+Tags                  `vdev <#vdev>`__, `vdev_cache <#vdev-cache>`__
+When to change        Do not change
+Data Type             int
+Units                 shift
+Range                 1 to INT_MAX
+Default               16 (65,536 bytes)
+Change                Dynamic
+Versions Affected     all
+===================== ==============================================
+
+zfs_vdev_cache_max
+~~~~~~~~~~~~~~~~~~
+
+Note: with the current ZFS code, the vdev cache is not helpful and in
+some cases actually harmful. Thus it is disabled by setting the
+`zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ to zero. This related
+tunable is, by default, inoperative.
+ +All read I/Os smaller than zfs_vdev_cache_max will be turned into +(``1 <<``\ `zfs_vdev_cache_bshift <#zfs-vdev-cache-bshift>`__ byte reads +by the vdev cache. At most ``zfs_vdev_cache_size`` bytes will be kept in +each vdev's cache. + +================== ============================================== +zfs_vdev_cache_max Notes +================== ============================================== +Tags `vdev <#vdev>`__, `vdev_cache <#vdev-cache>`__ +When to change Do not change +Data Type int +Units bytes +Range 512 to INT_MAX +Default 16,384 (16 KiB) +Change Dynamic +Versions Affected all +================== ============================================== + +zfs_vdev_mirror_rotating_inc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The mirror read algorithm uses current load and an incremental weighting +value to determine the vdev to service a read operation. Lower values +determine the preferred vdev. The weighting value is +``zfs_vdev_mirror_rotating_inc`` for rotating media and +`zfs_vdev_mirror_non_rotating_inc <#zfs-vdev-mirror-non-rotating-inc>`__ +for nonrotating media. + +Verify the rotational setting described by a block device in sysfs by +observing ``/sys/block/DISK_NAME/queue/rotational`` + ++------------------------------+--------------------------------------+ +| zfs_vdev_mirror_rotating_inc | Notes | ++==============================+======================================+ +| Tags | `vdev <#vdev>`__, | +| | `mirror <#mirror>`__, `HDD <#hdd>`__ | ++------------------------------+--------------------------------------+ +| When to change | Increasing for mirrors with both | +| | rotating and nonrotating media more | +| | strongly favors the nonrotating | +| | media | ++------------------------------+--------------------------------------+ +| Data Type | int | ++------------------------------+--------------------------------------+ +| Units | scalar | ++------------------------------+--------------------------------------+ +| Range | 0 to MAX_INT | ++------------------------------+--------------------------------------+ +| Default | 0 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.7.0 and later | ++------------------------------+--------------------------------------+ + +zfs_vdev_mirror_non_rotating_inc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The mirror read algorithm uses current load and an incremental weighting +value to determine the vdev to service a read operation. Lower values +determine the preferred vdev. The weighting value is +`zfs_vdev_mirror_rotating_inc <#zfs-vdev-mirror-rotating-inc>`__ for +rotating media and ``zfs_vdev_mirror_non_rotating_inc`` for nonrotating +media. 
+ +Verify the rotational setting described by a block device in sysfs by +observing ``/sys/block/DISK_NAME/queue/rotational`` + ++----------------------------------+----------------------------------+ +| zfs_vdev_mirror_non_rotating_inc | Notes | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `mirror <#mirror>`__, | +| | `SSD <#ssd>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | scalar | ++----------------------------------+----------------------------------+ +| Range | 0 to INT_MAX | ++----------------------------------+----------------------------------+ +| Default | 0 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_mirror_rotating_seek_inc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +For rotating media in a mirror, if the next I/O offset is within +`zfs_vdev_mirror_rotating_seek_offset <#zfs-vdev-mirror-rotating-seek-offset>`__ +then the weighting factor is incremented by +(``zfs_vdev_mirror_rotating_seek_inc / 2``). Otherwise the weighting +factor is increased by ``zfs_vdev_mirror_rotating_seek_inc``. This +algorithm prefers rotating media with lower seek distance. + +Verify the rotational setting described by a block device in sysfs by +observing ``/sys/block/DISK_NAME/queue/rotational`` + ++----------------------------------+----------------------------------+ +| z | Notes | +| fs_vdev_mirror_rotating_seek_inc | | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `mirror <#mirror>`__, | +| | `HDD <#hdd>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | scalar | ++----------------------------------+----------------------------------+ +| Range | 0 to INT_MAX | ++----------------------------------+----------------------------------+ +| Default | 5 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_mirror_rotating_seek_offset +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +For rotating media in a mirror, if the next I/O offset is within +``zfs_vdev_mirror_rotating_seek_offset`` then the weighting factor is +incremented by +(`zfs_vdev_mirror_rotating_seek_inc <#zfs-vdev-mirror-rotating-seek-inc>`__\ ``/ 2``). +Otherwise the weighting factor is increased by +``zfs_vdev_mirror_rotating_seek_inc``. This algorithm prefers rotating +media with lower seek distance. 
+ +Verify the rotational setting described by a block device in sysfs by +observing ``/sys/block/DISK_NAME/queue/rotational`` + ++----------------------------------+----------------------------------+ +| zfs_vdev_mirror_rotating_seek_off| Notes | +| set | | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `mirror <#mirror>`__, | +| | `HDD <#hdd>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | bytes | ++----------------------------------+----------------------------------+ +| Range | 0 to INT_MAX | ++----------------------------------+----------------------------------+ +| Default | 1,048,576 (1 MiB) | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_mirror_non_rotating_seek_inc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +For nonrotating media in a mirror, a seek penalty is applied as +sequential I/O's can be aggregated into fewer operations, avoiding +unnecessary per-command overhead, often boosting performance. + +Verify the rotational setting described by a block device in SysFS by +observing ``/sys/block/DISK_NAME/queue/rotational`` + ++----------------------------------+----------------------------------+ +| zfs_v | Notes | +| dev_mirror_non_rotating_seek_inc | | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `mirror <#mirror>`__, | +| | `SSD <#ssd>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | scalar | ++----------------------------------+----------------------------------+ +| Range | 0 to INT_MAX | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_read_gap_limit +~~~~~~~~~~~~~~~~~~~~~~~ + +To reduce IOPs, small, adjacent I/Os are aggregated (coalesced) into +into a large I/O. 
For reads, aggregations occur across small adjacency
+gaps where the gap is less than ``zfs_vdev_read_gap_limit``
+
++-------------------------+-------------------------------------------+
+| zfs_vdev_read_gap_limit | Notes |
++=========================+===========================================+
+| Tags | `vdev <#vdev>`__, |
+| | `ZIO_scheduler <#zio-scheduler>`__ |
++-------------------------+-------------------------------------------+
+| When to change | TBD |
++-------------------------+-------------------------------------------+
+| Data Type | int |
++-------------------------+-------------------------------------------+
+| Units | bytes |
++-------------------------+-------------------------------------------+
+| Range | 0 to INT_MAX |
++-------------------------+-------------------------------------------+
+| Default | 32,768 (32 KiB) |
++-------------------------+-------------------------------------------+
+| Change | Dynamic |
++-------------------------+-------------------------------------------+
+| Versions Affected | all |
++-------------------------+-------------------------------------------+
+
+zfs_vdev_write_gap_limit
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To reduce IOPs, small, adjacent I/Os are aggregated (coalesced) into a
+large I/O. For writes, aggregations occur across small adjacency gaps
+where the gap is less than ``zfs_vdev_write_gap_limit``
+
++--------------------------+------------------------------------------+
+| zfs_vdev_write_gap_limit | Notes |
++==========================+==========================================+
+| Tags | `vdev <#vdev>`__, |
+| | `ZIO_scheduler <#zio-scheduler>`__ |
++--------------------------+------------------------------------------+
+| When to change | TBD |
++--------------------------+------------------------------------------+
+| Data Type | int |
++--------------------------+------------------------------------------+
+| Units | bytes |
++--------------------------+------------------------------------------+
+| Range | 0 to INT_MAX |
++--------------------------+------------------------------------------+
+| Default | 4,096 (4 KiB) |
++--------------------------+------------------------------------------+
+| Change | Dynamic |
++--------------------------+------------------------------------------+
+| Versions Affected | all |
++--------------------------+------------------------------------------+
+
+zfs_vdev_scheduler
+~~~~~~~~~~~~~~~~~~
+
+Prior to version 0.8.3, when the pool is imported, for whole disk vdevs,
+the block device I/O scheduler is set to ``zfs_vdev_scheduler``.
+The most common schedulers are: *noop*, *cfq*, *bfq*, and *deadline*.
+In some cases, the scheduler is not changeable using this method.
+Known schedulers that cannot be changed are: *scsi_mq* and *none*.
+In these cases, the scheduler is unchanged and an error message can be
+reported to the logs.
+
+The parameter was disabled in v0.8.3 but left in place to avoid breaking
+loading of the ``zfs`` module if the parameter is specified in modprobe
+configuration on existing installations. It is recommended that users
+leave the default scheduler "`unless you're encountering a specific
+problem, or have clearly measured a performance improvement for your
+workload
+`__,"
+and if so, to change it via the
+``/sys/block/DISK_NAME/queue/scheduler`` interface and/or a udev rule.
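+
+For example, the scheduler can be set directly through sysfs for a
+one-time change, or persistently with a udev rule (a minimal sketch;
+``sda`` and the *none* scheduler are placeholders for your device and
+preferred scheduler)::
+
+   # one-time change for a single disk
+   echo none > /sys/block/sda/queue/scheduler
+
+   # persistent change, eg in /etc/udev/rules.d/66-io-scheduler.rules
+   ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"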
+ ++--------------------+------------------------------------------------+ +| zfs_vdev_scheduler | Notes | ++====================+================================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------+------------------------------------------------+ +| When to change | since ZFS has its own I/O scheduler, using a | +| | simple scheduler can result in more consistent | +| | performance | ++--------------------+------------------------------------------------+ +| Data Type | string | ++--------------------+------------------------------------------------+ +| Range | expected: *noop*, *cfq*, *bfq*, and *deadline* | ++--------------------+------------------------------------------------+ +| Default | *noop* | ++--------------------+------------------------------------------------+ +| Change | Dynamic, but takes effect upon pool creation | +| | or import | ++--------------------+------------------------------------------------+ +| Versions Affected | all, but no effect since v0.8.3 | ++--------------------+------------------------------------------------+ + +zfs_vdev_raidz_impl +~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_raidz_impl`` overrides the raidz parity algorithm. By +default, the algorithm is selected at zfs module load time by the +results of a microbenchmark of algorithms based on the current hardware. + +Once the module is loaded, the content of +``/sys/module/zfs/parameters/zfs_vdev_raidz_impl`` shows available +options with the currently selected enclosed in ``[]``. Details of the +results of the microbenchmark are observable in the +``/proc/spl/kstat/zfs/vdev_raidz_bench`` file. + ++----------------+----------------------+-------------------------+ +| algorithm | architecture | description | ++================+======================+=========================+ +| fastest | all | fastest implementation | +| | | selected by | +| | | microbenchmark | ++----------------+----------------------+-------------------------+ +| original | all | original raidz | +| | | implementation | ++----------------+----------------------+-------------------------+ +| scalar | all | scalar raidz | +| | | implementation | ++----------------+----------------------+-------------------------+ +| sse2 | 64-bit x86 | uses SSE2 instruction | +| | | set | ++----------------+----------------------+-------------------------+ +| ssse3 | 64-bit x86 | uses SSSE3 instruction | +| | | set | ++----------------+----------------------+-------------------------+ +| avx2 | 64-bit x86 | uses AVX2 instruction | +| | | set | ++----------------+----------------------+-------------------------+ +| avx512f | 64-bit x86 | uses AVX512F | +| | | instruction set | ++----------------+----------------------+-------------------------+ +| avx512bw | 64-bit x86 | uses AVX512F & AVX512BW | +| | | instruction sets | ++----------------+----------------------+-------------------------+ +| aarch64_neon | aarch64/64 bit ARMv8 | uses NEON | ++----------------+----------------------+-------------------------+ +| aarch64_neonx2 | aarch64/64 bit ARMv8 | uses NEON with more | +| | | unrolling | ++----------------+----------------------+-------------------------+ + +=================== ==================================================== +zfs_vdev_raidz_impl Notes +=================== ==================================================== +Tags `CPU <#cpu>`__, `raidz <#raidz>`__, `vdev <#vdev>`__ +When to change testing raidz algorithms +Data Type string +Range see table above +Default 
*fastest* +Change Dynamic +Versions Affected v0.7.0 and later +=================== ==================================================== + +zfs_zevent_cols +~~~~~~~~~~~~~~~ + +``zfs_zevent_cols`` is a soft wrap limit in columns (characters) for ZFS +events logged to the console. + +================= ========================== +zfs_zevent_cols Notes +================= ========================== +Tags `debug <#debug>`__ +When to change if 80 columns isn't enough +Data Type int +Units characters +Range 1 to INT_MAX +Default 80 +Change Dynamic +Versions Affected all +================= ========================== + +zfs_zevent_console +~~~~~~~~~~~~~~~~~~ + +If ``zfs_zevent_console`` is true (1), then ZFS events are logged to the +console. + +More logging and log filtering capabilities are provided by ``zed`` + +================== ========================================= +zfs_zevent_console Notes +================== ========================================= +Tags `debug <#debug>`__ +When to change to log ZFS events to the console +Data Type boolean +Range 0=do not log to console, 1=log to console +Default 0 +Change Dynamic +Versions Affected all +================== ========================================= + +zfs_zevent_len_max +~~~~~~~~~~~~~~~~~~ + +``zfs_zevent_len_max`` is the maximum ZFS event queue length. A value of +0 results in a calculated value (16 \* number of CPUs) with a minimum of +64. Events in the queue can be viewed with the ``zpool events`` command. + +================== ================================ +zfs_zevent_len_max Notes +================== ================================ +Tags `debug <#debug>`__ +When to change increase to see more ZFS events +Data Type int +Units events +Range 0 to INT_MAX +Default 0 (calculate as described above) +Change Dynamic +Versions Affected all +================== ================================ + +zfs_zil_clean_taskq_maxalloc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +During a SPA sync, intent log transaction groups (itxg) are cleaned. The +cleaning work is dispatched to the DSL pool ZIL clean taskq +(``dp_zil_clean_taskq``). +`zfs_zil_clean_taskq_minalloc <#zfs-zil-clean-taskq-minalloc>`__ is the +minimum and ``zfs_zil_clean_taskq_maxalloc`` is the maximum number of +cached taskq entries for ``dp_zil_clean_taskq``. The actual number of +taskq entries dynamically varies between these values. + +When ``zfs_zil_clean_taskq_maxalloc`` is exceeded transaction records +(itxs) are cleaned synchronously with possible negative impact to the +performance of SPA sync. + +Ideally taskq entries are pre-allocated prior to being needed by +``zil_clean()``, thus avoiding dynamic allocation of new taskq entries. 
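+
+Because the value takes effect per-pool at import time, it is usually
+set in the module configuration before pools are imported. A minimal
+sketch (the value shown is only an example)::
+
+   # /etc/modprobe.d/zfs.conf
+   options zfs zfs_zil_clean_taskq_maxalloc=2097152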
+ ++------------------------------+--------------------------------------+ +| zfs_zil_clean_taskq_maxalloc | Notes | ++==============================+======================================+ +| Tags | `ZIL <#zil>`__ | ++------------------------------+--------------------------------------+ +| When to change | If more ``dp_zil_clean_taskq`` | +| | entries are needed to prevent the | +| | itxs from being synchronously | +| | cleaned | ++------------------------------+--------------------------------------+ +| Data Type | int | ++------------------------------+--------------------------------------+ +| Units | ``dp_zil_clean_taskq`` taskq entries | ++------------------------------+--------------------------------------+ +| Range | `zfs_zil_clean_taskq_minallo | +| | c <#zfs-zil-clean-taskq-minalloc>`__ | +| | to ``INT_MAX`` | ++------------------------------+--------------------------------------+ +| Default | 1,048,576 | ++------------------------------+--------------------------------------+ +| Change | Dynamic, takes effect per-pool when | +| | the pool is imported | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.8.0 | ++------------------------------+--------------------------------------+ + +zfs_zil_clean_taskq_minalloc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +During a SPA sync, intent log transaction groups (itxg) are cleaned. The +cleaning work is dispatched to the DSL pool ZIL clean taskq +(``dp_zil_clean_taskq``). ``zfs_zil_clean_taskq_minalloc`` is the +minimum and +`zfs_zil_clean_taskq_maxalloc <#zfs-zil-clean-taskq-maxalloc>`__ is the +maximum number of cached taskq entries for ``dp_zil_clean_taskq``. The +actual number of taskq entries dynamically varies between these values. + +``zfs_zil_clean_taskq_minalloc`` is the minimum number of ZIL +transaction records (itxs). + +Ideally taskq entries are pre-allocated prior to being needed by +``zil_clean()``, thus avoiding dynamic allocation of new taskq entries. + ++------------------------------+--------------------------------------+ +| zfs_zil_clean_taskq_minalloc | Notes | ++==============================+======================================+ +| Tags | `ZIL <#zil>`__ | ++------------------------------+--------------------------------------+ +| When to change | TBD | ++------------------------------+--------------------------------------+ +| Data Type | int | ++------------------------------+--------------------------------------+ +| Units | dp_zil_clean_taskq taskq entries | ++------------------------------+--------------------------------------+ +| Range | 1 to | +| | `zfs_zil_clean_taskq_maxallo | +| | c <#zfs-zil-clean-taskq-maxalloc>`__ | ++------------------------------+--------------------------------------+ +| Default | 1,024 | ++------------------------------+--------------------------------------+ +| Change | Dynamic, takes effect per-pool when | +| | the pool is imported | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.8.0 | ++------------------------------+--------------------------------------+ + +zfs_zil_clean_taskq_nthr_pct +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_zil_clean_taskq_nthr_pct`` controls the number of threads used by +the DSL pool ZIL clean taskq (``dp_zil_clean_taskq``). The default value +of 100% will create a maximum of one thread per cpu. 
+ ++------------------------------+--------------------------------------+ +| zfs_zil_clean_taskq_nthr_pct | Notes | ++==============================+======================================+ +| Tags | `taskq <#taskq>`__, `ZIL <#zil>`__ | ++------------------------------+--------------------------------------+ +| When to change | Testing ZIL clean and SPA sync | +| | performance | ++------------------------------+--------------------------------------+ +| Data Type | int | ++------------------------------+--------------------------------------+ +| Units | percent of number of CPUs | ++------------------------------+--------------------------------------+ +| Range | 1 to 100 | ++------------------------------+--------------------------------------+ +| Default | 100 | ++------------------------------+--------------------------------------+ +| Change | Dynamic, takes effect per-pool when | +| | the pool is imported | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.8.0 | ++------------------------------+--------------------------------------+ + +zil_replay_disable +~~~~~~~~~~~~~~~~~~ + +If ``zil_replay_disable = 1``, then when a volume or filesystem is +brought online, no attempt to replay the ZIL is made and any existing +ZIL is destroyed. This can result in loss of data without notice. + +================== ================================== +zil_replay_disable Notes +================== ================================== +Tags `debug <#debug>`__, `ZIL <#zil>`__ +When to change Do not change +Data Type boolean +Range 0=replay ZIL, 1=destroy ZIL +Default 0 +Change Dynamic +Versions Affected v0.6.5 +================== ================================== + +zil_slog_bulk +~~~~~~~~~~~~~ + +``zil_slog_bulk`` is the log device write size limit per commit executed +with synchronous priority. Writes below ``zil_slog_bulk`` are executed +with synchronous priority. Writes above ``zil_slog_bulk`` are executed +with lower (asynchronous) priority to reduct potential log device abuse +by a single active ZIL writer. + ++-------------------+-------------------------------------------------+ +| zil_slog_bulk | Notes | ++===================+=================================================+ +| Tags | `ZIL <#zil>`__ | ++-------------------+-------------------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++-------------------+-------------------------------------------------+ +| Data Type | ulong | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 0 to ULONG_MAX | ++-------------------+-------------------------------------------------+ +| Default | 786,432 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.8.0 | ++-------------------+-------------------------------------------------+ + +zio_delay_max +~~~~~~~~~~~~~ + +If a ZFS I/O operation takes more than ``zio_delay_max`` milliseconds to +complete, then an event is logged. Note that this is only a logging +facility, not a timeout on operations. 
See also ``zpool events`` + +================= ======================= +zio_delay_max Notes +================= ======================= +Tags `debug <#debug>`__ +When to change when debugging slow I/O +Data Type int +Units milliseconds +Range 1 to INT_MAX +Default 30,000 (30 seconds) +Change Dynamic +Versions Affected all +================= ======================= + +zio_dva_throttle_enabled +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zio_dva_throttle_enabled`` controls throttling of block allocations in +the ZFS I/O (ZIO) pipeline. When enabled, the maximum number of pending +allocations per top-level vdev is limited by +`zfs_vdev_queue_depth_pct <#zfs-vdev-queue-depth-pct>`__ + ++--------------------------+------------------------------------------+ +| zio_dva_throttle_enabled | Notes | ++==========================+==========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------+------------------------------------------+ +| When to change | Testing ZIO block allocation algorithms | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=do not throttle ZIO block allocations, | +| | 1=throttle ZIO block allocations | ++--------------------------+------------------------------------------+ +| Default | 1 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------------+------------------------------------------+ + +zio_requeue_io_start_cut_in_line +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zio_requeue_io_start_cut_in_line`` controls prioritization of a +re-queued ZFS I/O (ZIO) in the ZIO pipeline by the ZIO taskq. + ++----------------------------------+----------------------------------+ +| zio_requeue_io_start_cut_in_line | Notes | ++==================================+==================================+ +| Tags | `Z | +| | IO_scheduler <#zio-scheduler>`__ | ++----------------------------------+----------------------------------+ +| When to change | Do not change | ++----------------------------------+----------------------------------+ +| Data Type | boolean | ++----------------------------------+----------------------------------+ +| Range | 0=don't prioritize re-queued | +| | I/Os, 1=prioritize re-queued | +| | I/Os | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | all | ++----------------------------------+----------------------------------+ + +zio_taskq_batch_pct +~~~~~~~~~~~~~~~~~~~ + +``zio_taskq_batch_pct`` sets the number of I/O worker threads as a +percentage of online CPUs. These workers threads are responsible for IO +work such as compression and checksum calculations. + +Each block is handled by one worker thread, so maximum overall worker +thread throughput is function of the number of concurrent blocks being +processed, the number of worker threads, and the algorithms used. The +default value of 75% is chosen to avoid using all CPUs which can result +in latency issues and inconsistent application performance, especially +when high compression is enabled. 
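+
+The resulting worker thread count can be checked after the module
+loads; as a rough sketch, counting the write issue threads described
+below (thread names can vary slightly between releases)::
+
+   ps -eLo comm | grep -c '^z_wr_iss'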
+ +The taskq batch processes are: + ++-------------+--------------+---------------------------------------+ +| taskq | process name | Notes | ++=============+==============+=======================================+ +| Write issue | z_wr_iss[_#] | Can be CPU intensive, runs at lower | +| | | priority than other taskqs | ++-------------+--------------+---------------------------------------+ + +Other taskqs exist, but most have fixed numbers of instances and +therefore require recompiling the kernel module to adjust. + ++---------------------+-----------------------------------------------+ +| zio_taskq_batch_pct | Notes | ++=====================+===============================================+ +| Tags | `taskq <#taskq>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++---------------------+-----------------------------------------------+ +| When to change | To tune parallelism in multiprocessor systems | ++---------------------+-----------------------------------------------+ +| Data Type | int | ++---------------------+-----------------------------------------------+ +| Units | percent of number of CPUs | ++---------------------+-----------------------------------------------+ +| Range | 1 to 100, fractional number of CPUs are | +| | rounded down | ++---------------------+-----------------------------------------------+ +| Default | 75 | ++---------------------+-----------------------------------------------+ +| Change | Prior to zfs module load | ++---------------------+-----------------------------------------------+ +| Verification | The number of taskqs for each batch group can | +| | be observed using ``ps`` and counting the | +| | threads | ++---------------------+-----------------------------------------------+ +| Versions Affected | TBD | ++---------------------+-----------------------------------------------+ + +zvol_inhibit_dev +~~~~~~~~~~~~~~~~ + +``zvol_inhibit_dev`` controls the creation of volume device nodes upon +pool import. + ++-------------------+-------------------------------------------------+ +| zvol_inhibit_dev | Notes | ++===================+=================================================+ +| Tags | `import <#import>`__, `volume <#volume>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Inhibiting can slightly improve startup time on | +| | systems with a very large number of volumes | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=create volume device nodes, 1=do not create | +| | volume device nodes | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic, takes effect per-pool when the pool is | +| | imported | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.0 and later | ++-------------------+-------------------------------------------------+ + +zvol_major +~~~~~~~~~~ + +``zvol_major`` is the default major number for volume devices. 
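+
+As a quick check (the pool and volume names here are hypothetical), the
+major number, 230 by default, appears in the device node listing::
+
+   # follow the /dev/zvol symlink to the underlying zd device
+   ls -lH /dev/zvol/tank/vol1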
+
++-------------------+-------------------------------------------------+
+| zvol_major | Notes |
++===================+=================================================+
+| Tags | `volume <#volume>`__ |
++-------------------+-------------------------------------------------+
+| When to change | Do not change |
++-------------------+-------------------------------------------------+
+| Data Type | uint |
++-------------------+-------------------------------------------------+
+| Default | 230 |
++-------------------+-------------------------------------------------+
+| Change | Dynamic, takes effect per-pool when the pool is |
+| | imported or volumes are created |
++-------------------+-------------------------------------------------+
+| Versions Affected | all |
++-------------------+-------------------------------------------------+
+
+zvol_max_discard_blocks
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Discard (aka ATA TRIM or SCSI UNMAP) operations on volumes are processed
+in batches of ``zvol_max_discard_blocks`` blocks. The block size is
+determined by the ``volblocksize`` property of a volume.
+
+Some applications, such as ``mkfs``, discard the whole volume at once
+using the maximum possible discard size. As a result, many gigabytes of
+discard requests are not uncommon. Unfortunately, if a large amount of
+data is already allocated in the volume, ZFS can be quite slow to
+process discard requests. This is especially true if the volblocksize is
+small (eg the default 8 KiB). As a result, very large discard requests
+can take a very long time (perhaps minutes under heavy load) to
+complete. This can cause a number of problems, most notably if the
+volume is accessed remotely (eg via iSCSI), in which case the client has
+a high probability of timing out on the request.
+
+``zvol_max_discard_blocks`` limits the size of each discard request.
+The resulting limit (``zvol_max_discard_blocks`` multiplied by the
+volume's ``volblocksize``) is advertised as ``discard_max_bytes`` and
+``discard_max_hw_bytes`` for the volume's block device in SysFS, where
+it is readable by volume device consumers.
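+
+For example, with the default ``zvol_max_discard_blocks`` of 16,384 and
+the default ``volblocksize`` of 8 KiB, each discard request is limited
+to 16,384 * 8,192 = 134,217,728 bytes (128 MiB). The advertised limit
+can be read back from sysfs (the ``zd0`` device name is an example)::
+
+   cat /sys/block/zd0/queue/discard_max_bytes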
+ ++-------------------------+-------------------------------------------+ +| zvol_max_discard_blocks | Notes | ++=========================+===========================================+ +| Tags | `discard <#discard>`__, | +| | `volume <#volume>`__ | ++-------------------------+-------------------------------------------+ +| When to change | if volume discard activity severely | +| | impacts other workloads | ++-------------------------+-------------------------------------------+ +| Data Type | ulong | ++-------------------------+-------------------------------------------+ +| Units | number of blocks of size volblocksize | ++-------------------------+-------------------------------------------+ +| Range | 0 to ULONG_MAX | ++-------------------------+-------------------------------------------+ +| Default | 16,384 | ++-------------------------+-------------------------------------------+ +| Change | Dynamic, takes effect per-pool when the | +| | pool is imported or volumes are created | ++-------------------------+-------------------------------------------+ +| Verification | Observe value of | +| | ``/sys/block/ | +| | VOLUME_INSTANCE/queue/discard_max_bytes`` | ++-------------------------+-------------------------------------------+ +| Versions Affected | v0.6.0 and later | ++-------------------------+-------------------------------------------+ + +zvol_prefetch_bytes +~~~~~~~~~~~~~~~~~~~ + +When importing a pool with volumes or adding a volume to a pool, +``zvol_prefetch_bytes`` are prefetch from the start and end of the +volume. Prefetching these regions of the volume is desirable because +they are likely to be accessed immediately by ``blkid(8)`` or by the +kernel scanning for a partition table. + +=================== ============================================== +zvol_prefetch_bytes Notes +=================== ============================================== +Tags `prefetch <#prefetch>`__, `volume <#volume>`__ +When to change TBD +Data Type uint +Units bytes +Range 0 to UINT_MAX +Default 131,072 +Change Dynamic +Versions Affected v0.6.5 and later +=================== ============================================== + +zvol_request_sync +~~~~~~~~~~~~~~~~~ + +When processing I/O requests for a volume submit them synchronously. +This effectively limits the queue depth to 1 for each I/O submitter. +When set to 0 requests are handled asynchronously by the "zvol" thread +pool. 
+ +See also `zvol_threads <#zvol-threads>`__ + ++-------------------+-------------------------------------------------+ +| zvol_request_sync | Notes | ++===================+=================================================+ +| Tags | `volume <#volume>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Testing concurrent volume requests | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=do concurrent (async) volume requests, 1=do | +| | sync volume requests | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.7.2 and later | ++-------------------+-------------------------------------------------+ + +zvol_threads +~~~~~~~~~~~~ + +zvol_threads controls the maximum number of threads handling concurrent +volume I/O requests. + +The default of 32 threads behaves similarly to a disk with a 32-entry +command queue. The actual number of threads required can vary widely by +workload and available CPUs. If lock analysis shows high contention in +the zvol taskq threads, then reducing the number of zvol_threads or +workload queue depth can improve overall throughput. + +See also `zvol_request_sync <#zvol-request-sync>`__ + ++-------------------+-------------------------------------------------+ +| zvol_threads | Notes | ++===================+=================================================+ +| Tags | `volume <#volume>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Matching the number of concurrent volume | +| | requests with workload requirements can improve | +| | concurrency | ++-------------------+-------------------------------------------------+ +| Data Type | uint | ++-------------------+-------------------------------------------------+ +| Units | threads | ++-------------------+-------------------------------------------------+ +| Range | 1 to UINT_MAX | ++-------------------+-------------------------------------------------+ +| Default | 32 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic, takes effect per-volume when the pool | +| | is imported or volumes are created | ++-------------------+-------------------------------------------------+ +| Verification | ``iostat`` using ``avgqu-sz`` or ``aqu-sz`` | +| | results | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++-------------------+-------------------------------------------------+ + +zvol_volmode +~~~~~~~~~~~~ + +``zvol_volmode`` defines volume block devices behaviour when the +``volmode`` property is set to ``default`` + +Note: to maintain compatibility with ZFS on BSD, "geom" is synonymous +with "full" + +===== ======= =========================================== +value volmode Description +===== ======= =========================================== +1 full legacy fully functional behaviour (default) +2 dev hide partitions on volume block devices +3 none not exposing volumes outside ZFS +===== ======= =========================================== + +================= ==================== +zvol_volmode Notes +================= ==================== +Tags `volume 
<#volume>`__ +When to change TBD +Data Type enum +Range 1, 2, or 3 +Default 1 +Change Dynamic +Versions Affected v0.7.0 and later +================= ==================== + +zfs_qat_disable +~~~~~~~~~~~~~~~ + +``zfs_qat_disable`` controls the Intel QuickAssist Technology (QAT) +driver providing hardware acceleration for gzip compression. When the +QAT hardware is present and qat driver available, the default behaviour +is to enable QAT. + ++-------------------+-------------------------------------------------+ +| zfs_qat_disable | Notes | ++===================+=================================================+ +| Tags | `compression <#compression>`__, `QAT <#qat>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Testing QAT functionality | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=use QAT acceleration if available, 1=do not | +| | use QAT acceleration | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.7, renamed to | +| | `zfs_qat_ | +| | compress_disable <#zfs-qat-compress-disable>`__ | +| | in v0.8 | ++-------------------+-------------------------------------------------+ + +zfs_qat_checksum_disable +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_qat_checksum_disable`` controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for checksums. When the QAT +hardware is present and qat driver available, the default behaviour is +to enable QAT. + ++--------------------------+------------------------------------------+ +| zfs_qat_checksum_disable | Notes | ++==========================+==========================================+ +| Tags | `checksum <#checksum>`__, `QAT <#qat>`__ | ++--------------------------+------------------------------------------+ +| When to change | Testing QAT functionality | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=use QAT acceleration if available, | +| | 1=do not use QAT acceleration | ++--------------------------+------------------------------------------+ +| Default | 0 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.8.0 | ++--------------------------+------------------------------------------+ + +zfs_qat_compress_disable +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_qat_compress_disable`` controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for gzip compression. When +the QAT hardware is present and qat driver available, the default +behaviour is to enable QAT. 
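+
+For example, QAT-accelerated gzip compression can be switched off at
+runtime for testing (a sketch; the parameter is dynamic)::
+
+   echo 1 > /sys/module/zfs/parameters/zfs_qat_compress_disable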
+ ++--------------------------+------------------------------------------+ +| zfs_qat_compress_disable | Notes | ++==========================+==========================================+ +| Tags | `compression <#compression>`__, | +| | `QAT <#qat>`__ | ++--------------------------+------------------------------------------+ +| When to change | Testing QAT functionality | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=use QAT acceleration if available, | +| | 1=do not use QAT acceleration | ++--------------------------+------------------------------------------+ +| Default | 0 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.8.0 | ++--------------------------+------------------------------------------+ + +zfs_qat_encrypt_disable +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_qat_encrypt_disable`` controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for encryption. When the +QAT hardware is present and qat driver available, the default behaviour +is to enable QAT. + ++-------------------------+-------------------------------------------+ +| zfs_qat_encrypt_disable | Notes | ++=========================+===========================================+ +| Tags | `encryption <#encryption>`__, | +| | `QAT <#qat>`__ | ++-------------------------+-------------------------------------------+ +| When to change | Testing QAT functionality | ++-------------------------+-------------------------------------------+ +| Data Type | boolean | ++-------------------------+-------------------------------------------+ +| Range | 0=use QAT acceleration if available, 1=do | +| | not use QAT acceleration | ++-------------------------+-------------------------------------------+ +| Default | 0 | ++-------------------------+-------------------------------------------+ +| Change | Dynamic | ++-------------------------+-------------------------------------------+ +| Versions Affected | v0.8.0 | ++-------------------------+-------------------------------------------+ + +dbuf_cache_hiwater_pct +~~~~~~~~~~~~~~~~~~~~~~ + +The ``dbuf_cache_hiwater_pct`` and +`dbuf_cache_lowater_pct <#dbuf-cache-lowater-pct>`__ define the +operating range for dbuf cache evict thread. The hiwater and lowater are +percentages of the `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +value. When the dbuf cache grows above ((100% + +``dbuf_cache_hiwater_pct``) \* +`dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__) then the dbuf cache +thread begins evicting. When the dbug cache falls below ((100% - +`dbuf_cache_lowater_pct <#dbuf-cache-lowater-pct>`__) \* +`dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__) then the dbuf cache +thread stops evicting. + +====================== ============================= +dbuf_cache_hiwater_pct Notes +====================== ============================= +Tags `dbuf_cache <#dbuf-cache>`__ +When to change Testing dbuf cache algorithms +Data Type uint +Units percent +Range 0 to UINT_MAX +Default 10 +Change Dynamic +Versions Affected v0.7.0 and later +====================== ============================= + +dbuf_cache_lowater_pct +~~~~~~~~~~~~~~~~~~~~~~ + +The dbuf_cache_hiwater_pct and dbuf_cache_lowater_pct define the +operating range for dbuf cache evict thread. 
The hiwater and lowater are +percentages of the `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +value. When the dbuf cache grows above ((100% + +`dbuf_cache_hiwater_pct <#dbuf-cache-hiwater-pct>`__) \* +`dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__) then the dbuf cache +thread begins evicting. When the dbug cache falls below ((100% - +``dbuf_cache_lowater_pct``) \* +`dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__) then the dbuf cache +thread stops evicting. + +====================== ============================= +dbuf_cache_lowater_pct Notes +====================== ============================= +Tags `dbuf_cache <#dbuf-cache>`__ +When to change Testing dbuf cache algorithms +Data Type uint +Units percent +Range 0 to UINT_MAX +Default 10 +Change Dynamic +Versions Affected v0.7.0 and later +====================== ============================= + +dbuf_cache_max_bytes +~~~~~~~~~~~~~~~~~~~~ + +The dbuf cache maintains a list of dbufs that are not currently held but +have been recently released. These dbufs are not eligible for ARC +eviction until they are aged out of the dbuf cache. Dbufs are added to +the dbuf cache once the last hold is released. If a dbuf is later +accessed and still exists in the dbuf cache, then it will be removed +from the cache and later re-added to the head of the cache. Dbufs that +are aged out of the cache will be immediately destroyed and become +eligible for ARC eviction. + +The size of the dbuf cache is set by ``dbuf_cache_max_bytes``. The +actual size is dynamically adjusted to the minimum of current ARC target +size (``c``) >> `dbuf_cache_max_shift <#dbuf-cache-max-shift>`__ and the +default ``dbuf_cache_max_bytes`` + +==================== ============================= +dbuf_cache_max_bytes Notes +==================== ============================= +Tags `dbuf_cache <#dbuf-cache>`__ +When to change Testing dbuf cache algorithms +Data Type ulong +Units bytes +Range 16,777,216 to ULONG_MAX +Default 104,857,600 (100 MiB) +Change Dynamic +Versions Affected v0.7.0 and later +==================== ============================= + +dbuf_cache_max_shift +~~~~~~~~~~~~~~~~~~~~ + +The `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ minimum is the +lesser of `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ and the +current ARC target size (``c``) >> ``dbuf_cache_max_shift`` + +==================== ============================= +dbuf_cache_max_shift Notes +==================== ============================= +Tags `dbuf_cache <#dbuf-cache>`__ +When to change Testing dbuf cache algorithms +Data Type int +Units shift +Range 1 to 63 +Default 5 +Change Dynamic +Versions Affected v0.7.0 and later +==================== ============================= + +dmu_object_alloc_chunk_shift +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Each of the concurrent object allocators grabs +``2^dmu_object_alloc_chunk_shift`` dnode slots at a time. The default is +to grab 128 slots, or 4 blocks worth. This default value was +experimentally determined to be the lowest value that eliminates the +measurable effect of lock contention in the DMU object allocation code +path. 
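+
+For example, the default shift of 7 grabs 2^7 = 128 dnode slots per
+allocator; raising it to the maximum of 9 grabs 2^9 = 512 slots. As a
+sketch, the change can be applied at runtime::
+
+   echo 9 > /sys/module/zfs/parameters/dmu_object_alloc_chunk_shift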
+ ++------------------------------+--------------------------------------+ +| dmu_object_alloc_chunk_shift | Notes | ++==============================+======================================+ +| Tags | `allocation <#allocation>`__, | +| | `DMU <#dmu>`__ | ++------------------------------+--------------------------------------+ +| When to change | If the workload creates many files | +| | concurrently on a system with many | +| | CPUs, then increasing | +| | ``dmu_object_alloc_chunk_shift`` can | +| | decrease lock contention | ++------------------------------+--------------------------------------+ +| Data Type | int | ++------------------------------+--------------------------------------+ +| Units | shift | ++------------------------------+--------------------------------------+ +| Range | 7 to 9 | ++------------------------------+--------------------------------------+ +| Default | 7 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.7.0 and later | ++------------------------------+--------------------------------------+ + +send_holes_without_birth_time +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Alias for `ignore_hole_birth <#ignore-hole-birth>`__ + +zfs_abd_scatter_enabled +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_abd_scatter_enabled`` controls the ARC Buffer Data (ABD) +scatter/gather feature. + +When disabled, the legacy behaviour is selected using linear buffers. +For linear buffers, all the data in the ABD is stored in one contiguous +buffer in memory (from a ``zio_[data_]buf_*`` kmem cache). + +When enabled (default), the data in the ABD is split into equal-sized +chunks (from the ``abd_chunk_cache`` kmem_cache), with pointers to the +chunks recorded in an array at the end of the ABD structure. This allows +more efficient memory allocation for buffers, especially when large +recordsizes are used. + ++-------------------------+-------------------------------------------+ +| zfs_abd_scatter_enabled | Notes | ++=========================+===========================================+ +| Tags | `ABD <#abd>`__, `memory <#memory>`__ | ++-------------------------+-------------------------------------------+ +| When to change | Testing ABD | ++-------------------------+-------------------------------------------+ +| Data Type | boolean | ++-------------------------+-------------------------------------------+ +| Range | 0=use linear allocation only, 1=allow | +| | scatter/gather | ++-------------------------+-------------------------------------------+ +| Default | 1 | ++-------------------------+-------------------------------------------+ +| Change | Dynamic | ++-------------------------+-------------------------------------------+ +| Verification | ABD statistics are observable in | +| | ``/proc/spl/kstat/zfs/abdstats``. Slab | +| | allocations are observable in | +| | ``/proc/slabinfo`` | ++-------------------------+-------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++-------------------------+-------------------------------------------+ + +zfs_abd_scatter_max_order +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_abd_scatter_max_order`` sets the maximum order for physical page +allocation when ABD is enabled (see +`zfs_abd_scatter_enabled <#zfs-abd-scatter-enabled>`__) + +See also Buddy Memory Allocation in the Linux kernel documentation. 
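+
+As a quick check of how scatter ABDs are being allocated, the per-order
+counters can be read from the statistics file mentioned above::
+
+   grep scatter /proc/spl/kstat/zfs/abdstats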
+ ++---------------------------+-----------------------------------------+ +| zfs_abd_scatter_max_order | Notes | ++===========================+=========================================+ +| Tags | `ABD <#abd>`__, `memory <#memory>`__ | ++---------------------------+-----------------------------------------+ +| When to change | Testing ABD features | ++---------------------------+-----------------------------------------+ +| Data Type | int | ++---------------------------+-----------------------------------------+ +| Units | orders | ++---------------------------+-----------------------------------------+ +| Range | 1 to 10 (upper limit is | +| | hardware-dependent) | ++---------------------------+-----------------------------------------+ +| Default | 10 | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Verification | ABD statistics are observable in | +| | ``/proc/spl/kstat/zfs/abdstats`` | ++---------------------------+-----------------------------------------+ +| Versions Affected | v0.7.0 and later | ++---------------------------+-----------------------------------------+ + +zfs_compressed_arc_enabled +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When compression is enabled for a dataset, later reads of the data can +store the blocks in ARC in their on-disk, compressed state. This can +increse the effective size of the ARC, as counted in blocks, and thus +improve the ARC hit ratio. + ++----------------------------+----------------------------------------+ +| zfs_compressed_arc_enabled | Notes | ++============================+========================================+ +| Tags | `ABD <#abd>`__, | +| | `compression <#compression>`__ | ++----------------------------+----------------------------------------+ +| When to change | Testing ARC compression feature | ++----------------------------+----------------------------------------+ +| Data Type | boolean | ++----------------------------+----------------------------------------+ +| Range | 0=compressed ARC disabled (legacy | +| | behaviour), 1=compress ARC data | ++----------------------------+----------------------------------------+ +| Default | 1 | ++----------------------------+----------------------------------------+ +| Change | Dynamic | ++----------------------------+----------------------------------------+ +| Verification | raw ARC statistics are observable in | +| | ``/proc/spl/kstat/zfs/arcstats`` and | +| | ARC hit ratios can be observed using | +| | ``arcstat`` | ++----------------------------+----------------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------+----------------------------------------+ + +zfs_key_max_salt_uses +~~~~~~~~~~~~~~~~~~~~~ + +For encrypted datasets, the salt is regenerated every +``zfs_key_max_salt_uses`` blocks. This automatic regeneration reduces +the probability of collisions due to the Birthday problem. When set to +the default (400,000,000) the probability of collision is approximately +1 in 1 trillion. 
+ +===================== ============================ +zfs_key_max_salt_uses Notes +===================== ============================ +Tags `encryption <#encryption>`__ +When to change Testing encryption features +Data Type ulong +Units blocks encrypted +Range 1 to ULONG_MAX +Default 400,000,000 +Change Dynamic +Versions Affected v0.8.0 and later +===================== ============================ + +zfs_object_mutex_size +~~~~~~~~~~~~~~~~~~~~~ + +``zfs_object_mutex_size`` facilitates resizing the the per-dataset znode +mutex array for testing deadlocks therein. + +===================== =================================== +zfs_object_mutex_size Notes +===================== =================================== +Tags `debug <#debug>`__ +When to change Testing znode mutex array deadlocks +Data Type uint +Units orders +Range 1 to UINT_MAX +Default 64 +Change Dynamic +Versions Affected v0.7.0 and later +===================== =================================== + +zfs_scan_strict_mem_lim +~~~~~~~~~~~~~~~~~~~~~~~ + +When scrubbing or resilvering, by default, ZFS checks to ensure it is +not over the hard memory limit before each txg commit. If finer-grained +control of this is needed ``zfs_scan_strict_mem_lim`` can be set to 1 to +enable checking before scanning each block. + ++-------------------------+-------------------------------------------+ +| zfs_scan_strict_mem_lim | Notes | ++=========================+===========================================+ +| Tags | `memory <#memory>`__, | +| | `resilver <#resilver>`__, | +| | `scrub <#scrub>`__ | ++-------------------------+-------------------------------------------+ +| When to change | Do not change | ++-------------------------+-------------------------------------------+ +| Data Type | boolean | ++-------------------------+-------------------------------------------+ +| Range | 0=normal scan behaviour, 1=check hard | +| | memory limit strictly during scan | ++-------------------------+-------------------------------------------+ +| Default | 0 | ++-------------------------+-------------------------------------------+ +| Change | Dynamic | ++-------------------------+-------------------------------------------+ +| Versions Affected | v0.8.0 | ++-------------------------+-------------------------------------------+ + +zfs_send_queue_length +~~~~~~~~~~~~~~~~~~~~~ + +``zfs_send_queue_length`` is the maximum number of bytes allowed in the +zfs send queue. 
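+
+For example, when sending datasets that use the largest 16 MiB
+recordsize, the queue should be at least twice that size; as a sketch::
+
+   # 2 x 16 MiB = 33,554,432 bytes
+   echo 33554432 > /sys/module/zfs/parameters/zfs_send_queue_length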
+ ++-----------------------+---------------------------------------------+ +| zfs_send_queue_length | Notes | ++=======================+=============================================+ +| Tags | `send <#send>`__ | ++-----------------------+---------------------------------------------+ +| When to change | When using the largest recordsize or | +| | volblocksize (16 MiB), increasing can | +| | improve send efficiency | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | bytes | ++-----------------------+---------------------------------------------+ +| Range | Must be at least twice the maximum | +| | recordsize or volblocksize in use | ++-----------------------+---------------------------------------------+ +| Default | 16,777,216 bytes (16 MiB) | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.8.1 | ++-----------------------+---------------------------------------------+ + +zfs_recv_queue_length +~~~~~~~~~~~~~~~~~~~~~ + +``zfs_recv_queue_length`` is the maximum number of bytes allowed in the +zfs receive queue. + ++-----------------------+---------------------------------------------+ +| zfs_recv_queue_length | Notes | ++=======================+=============================================+ +| Tags | `receive <#receive>`__ | ++-----------------------+---------------------------------------------+ +| When to change | When using the largest recordsize or | +| | volblocksize (16 MiB), increasing can | +| | improve receive efficiency | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | bytes | ++-----------------------+---------------------------------------------+ +| Range | Must be at least twice the maximum | +| | recordsize or volblocksize in use | ++-----------------------+---------------------------------------------+ +| Default | 16,777,216 bytes (16 MiB) | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.8.1 | ++-----------------------+---------------------------------------------+ + +zfs_arc_min_prefetch_lifespan +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``arc_min_prefetch_lifespan`` is the minimum time for a prefetched block +to remain in ARC before it is eligible for eviction. + +============================= ====================================== +zfs_arc_min_prefetch_lifespan Notes +============================= ====================================== +Tags `ARC <#ARC>`__ +When to change TBD +Data Type int +Units clock ticks +Range 0 = use default value +Default 1 second (as expressed in clock ticks) +Change Dynamic +Versions Affected v0.7.0 +============================= ====================================== + +zfs_scan_ignore_errors +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_scan_ignore_errors`` allows errors discovered during scrub or +resilver to be ignored. This can be tuned as a workaround to remove the +dirty time list (DTL) when completing a pool scan. It is intended to be +used during pool repair or recovery to prevent resilvering when the pool +is imported. 
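+
+A sketch of the intended recovery workflow: enable the override before
+importing the damaged pool, then restore the default once repair is complete.
+The pool name is hypothetical and the sysfs path assumes the ``zfs`` module is
+loaded::
+
+   echo 1 > /sys/module/zfs/parameters/zfs_scan_ignore_errors
+   zpool import tank
+   echo 0 > /sys/module/zfs/parameters/zfs_scan_ignore_errors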
+ ++------------------------+--------------------------------------------+ +| zfs_scan_ignore_errors | Notes | ++========================+============================================+ +| Tags | `resilver <#resilver>`__ | ++------------------------+--------------------------------------------+ +| When to change | See description above | ++------------------------+--------------------------------------------+ +| Data Type | boolean | ++------------------------+--------------------------------------------+ +| Range | 0 = do not ignore errors, 1 = ignore | +| | errors during pool scrub or resilver | ++------------------------+--------------------------------------------+ +| Default | 0 | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | v0.8.1 | ++------------------------+--------------------------------------------+ + +zfs_top_maxinflight +~~~~~~~~~~~~~~~~~~~ + +``zfs_top_maxinflight`` is used to limit the maximum number of I/Os +queued to top-level vdevs during scrub or resilver operations. The +actual top-level vdev limit is calculated by multiplying the number of +child vdevs by ``zfs_top_maxinflight`` This limit is an additional cap +over and above the scan limits + ++---------------------+-----------------------------------------------+ +| zfs_top_maxinflight | Notes | ++=====================+===============================================+ +| Tags | `resilver <#resilver>`__, `scrub <#scrub>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++---------------------+-----------------------------------------------+ +| When to change | for modern ZFS versions, the ZIO scheduler | +| | limits usually take precedence | ++---------------------+-----------------------------------------------+ +| Data Type | int | ++---------------------+-----------------------------------------------+ +| Units | I/O operations | ++---------------------+-----------------------------------------------+ +| Range | 1 to MAX_INT | ++---------------------+-----------------------------------------------+ +| Default | 32 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.6.0 | ++---------------------+-----------------------------------------------+ + +zfs_resilver_delay +~~~~~~~~~~~~~~~~~~ + +``zfs_resilver_delay`` sets a time-based delay for resilver I/Os. This +delay is in addition to the ZIO scheduler's treatment of scrub +workloads. 
See also `zfs_scan_idle <#zfs-scan-idle>`__ + ++--------------------+------------------------------------------------+ +| zfs_resilver_delay | Notes | ++====================+================================================+ +| Tags | `resilver <#resilver>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------+------------------------------------------------+ +| When to change | increasing can reduce impact of resilver | +| | workload on dynamic workloads | ++--------------------+------------------------------------------------+ +| Data Type | int | ++--------------------+------------------------------------------------+ +| Units | clock ticks | ++--------------------+------------------------------------------------+ +| Range | 0 to MAX_INT | ++--------------------+------------------------------------------------+ +| Default | 2 | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.6.0 | ++--------------------+------------------------------------------------+ + +zfs_scrub_delay +~~~~~~~~~~~~~~~ + +``zfs_scrub_delay`` sets a time-based delay for scrub I/Os. This delay +is in addition to the ZIO scheduler's treatment of scrub workloads. See +also `zfs_scan_idle <#zfs-scan-idle>`__ + ++-------------------+-------------------------------------------------+ +| zfs_scrub_delay | Notes | ++===================+=================================================+ +| Tags | `scrub <#scrub>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------+-------------------------------------------------+ +| When to change | increasing can reduce impact of scrub workload | +| | on dynamic workloads | ++-------------------+-------------------------------------------------+ +| Data Type | int | ++-------------------+-------------------------------------------------+ +| Units | clock ticks | ++-------------------+-------------------------------------------------+ +| Range | 0 to MAX_INT | ++-------------------+-------------------------------------------------+ +| Default | 4 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.0 | ++-------------------+-------------------------------------------------+ + +zfs_scan_idle +~~~~~~~~~~~~~ + +When a non-scan I/O has occurred in the past ``zfs_scan_idle`` clock +ticks, then `zfs_resilver_delay <#zfs-resilver-delay>`__ or +`zfs_scrub_delay <#zfs-scrub-delay>`__ are enabled. 
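+
+Because these three tunables interact, it can help to review them as a group
+when tuning scrub or resilver impact. A sketch, assuming these legacy
+parameters are present on the running version::
+
+   grep . /sys/module/zfs/parameters/zfs_scan_idle \
+          /sys/module/zfs/parameters/zfs_resilver_delay \
+          /sys/module/zfs/parameters/zfs_scrub_delay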
+ ++-------------------+-------------------------------------------------+ +| zfs_scan_idle | Notes | ++===================+=================================================+ +| Tags | `resilver <#resilver>`__, `scrub <#scrub>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------+-------------------------------------------------+ +| When to change | as part of a resilver/scrub tuning effort | ++-------------------+-------------------------------------------------+ +| Data Type | int | ++-------------------+-------------------------------------------------+ +| Units | clock ticks | ++-------------------+-------------------------------------------------+ +| Range | 0 to MAX_INT | ++-------------------+-------------------------------------------------+ +| Default | 50 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.0 | ++-------------------+-------------------------------------------------+ + +icp_aes_impl +~~~~~~~~~~~~ + +By default, ZFS will choose the highest performance, hardware-optimized +implementation of the AES encryption algorithm. The ``icp_aes_impl`` +tunable overrides this automatic choice. + +Note: ``icp_aes_impl`` is set in the ``icp`` kernel module, not the +``zfs`` kernel module. + +To observe the available options +``cat /sys/module/icp/parameters/icp_aes_impl`` The default option is +shown in brackets '[]' + +================= ==================================== +icp_aes_impl Notes +================= ==================================== +Tags `encryption <#encryption>`__ +Kernel module icp +When to change debugging ZFS encryption on hardware +Data Type string +Range varies by hardware +Default automatic, depends on the hardware +Change dynamic +Versions Affected planned for v2 +================= ==================================== + +icp_gcm_impl +~~~~~~~~~~~~ + +By default, ZFS will choose the highest performance, hardware-optimized +implementation of the GCM encryption algorithm. The ``icp_gcm_impl`` +tunable overrides this automatic choice. + +Note: ``icp_gcm_impl`` is set in the ``icp`` kernel module, not the +``zfs`` kernel module. + +To observe the available options +``cat /sys/module/icp/parameters/icp_gcm_impl`` The default option is +shown in brackets '[]' + +================= ==================================== +icp_gcm_impl Notes +================= ==================================== +Tags `encryption <#encryption>`__ +Kernel module icp +When to change debugging ZFS encryption on hardware +Data Type string +Range varies by hardware +Default automatic, depends on the hardware +Change Dynamic +Versions Affected planned for v2 +================= ==================================== + +zfs_abd_scatter_min_size +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_abd_scatter_min_size`` changes the ARC buffer data (ABD) +allocator's threshold for using linear or page-based scatter buffers. +Allocations smaller than ``zfs_abd_scatter_min_size`` use linear ABDs. + +Scatter ABD's use at least one page each, so sub-page allocations waste +some space when allocated as scatter allocations. For example, 2KB +scatter allocation wastes half of each page. Using linear ABD's for +small allocations results in slabs containing many allocations. 
This can +improve memory efficiency, at the expense of more work for ARC evictions +attempting to free pages, because all the buffers on one slab need to be +freed in order to free the slab and its underlying pages. + +Typically, 512B and 1KB kmem caches have 16 buffers per slab, so it's +possible for them to actually waste more memory than scatter +allocations: + +- one page per buf = wasting 3/4 or 7/8 +- one buf per slab = wasting 15/16 + +Spill blocks are typically 512B and are heavily used on systems running +*selinux* with the default dnode size and the ``xattr=sa`` property set. + +By default, linear allocations for 512B and 1KB, and scatter allocations +for larger (>= 1.5KB) allocation requests. + ++--------------------------+------------------------------------------+ +| zfs_abd_scatter_min_size | Notes | ++==========================+==========================================+ +| Tags | `ARC <#ARC>`__ | ++--------------------------+------------------------------------------+ +| When to change | debugging memory allocation, especially | +| | for large pages | ++--------------------------+------------------------------------------+ +| Data Type | int | ++--------------------------+------------------------------------------+ +| Units | bytes | ++--------------------------+------------------------------------------+ +| Range | 0 to MAX_INT | ++--------------------------+------------------------------------------+ +| Default | 1536 (512B and 1KB allocations will be | +| | linear) | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | planned for v2 | ++--------------------------+------------------------------------------+ + +zfs_unlink_suspend_progress +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_unlink_suspend_progress`` changes the policy for removing pending +unlinks. When enabled, files will not be asynchronously removed from the +list of pending unlinks and the space they consume will be leaked. Once +this option has been disabled and the dataset is remounted, the pending +unlinks will be processed and the freed space returned to the pool. + ++-----------------------------+---------------------------------------+ +| zfs_unlink_suspend_progress | Notes | ++=============================+=======================================+ +| Tags | | ++-----------------------------+---------------------------------------+ +| When to change | used by the ZFS test suite (ZTS) to | +| | facilitate testing | ++-----------------------------+---------------------------------------+ +| Data Type | boolean | ++-----------------------------+---------------------------------------+ +| Range | 0 = use async unlink removal, 1 = do | +| | not async unlink thus leaking space | ++-----------------------------+---------------------------------------+ +| Default | 0 | ++-----------------------------+---------------------------------------+ +| Change | prior to dataset mount | ++-----------------------------+---------------------------------------+ +| Versions Affected | planned for v2 | ++-----------------------------+---------------------------------------+ + +spa_load_verify_shift +~~~~~~~~~~~~~~~~~~~~~ + +``spa_load_verify_shift`` sets the fraction of ARC that can be used by +inflight I/Os when verifying the pool during import. This value is a +"shift" representing the fraction of ARC target size +(``grep -w c /proc/spl/kstat/zfs/arcstats``). 
The ARC target size is +shifted to the right. Thus a value of '2' results in the fraction = 1/4, +while a value of '4' results in the fraction = 1/8. + +For large memory machines, pool import can consume large amounts of ARC: +much larger than the value of maxinflight. This can result in +`spa_load_verify_maxinflight <#spa-load-verify-maxinflight>`__ having a +value of 0 causing the system to hang. Setting ``spa_load_verify_shift`` +can reduce this limit and allow importing without hanging. + ++-----------------------+---------------------------------------------+ +| spa_load_verify_shift | Notes | ++=======================+=============================================+ +| Tags | `import <#import>`__, `ARC <#ARC>`__, | +| | `SPA <#SPA>`__ | ++-----------------------+---------------------------------------------+ +| When to change | troubleshooting pool import on large memory | +| | machines | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | shift | ++-----------------------+---------------------------------------------+ +| Range | 1 to MAX_INT | ++-----------------------+---------------------------------------------+ +| Default | 4 | ++-----------------------+---------------------------------------------+ +| Change | prior to importing a pool | ++-----------------------+---------------------------------------------+ +| Versions Affected | planned for v2 | ++-----------------------+---------------------------------------------+ + +spa_load_print_vdev_tree +~~~~~~~~~~~~~~~~~~~~~~~~ + +``spa_load_print_vdev_tree`` enables printing of the attempted pool +import's vdev tree to kernel message to the ZFS debug message log +``/proc/spl/kstat/zfs/dbgmsg`` Both the provided vdev tree and MOS vdev +tree are printed, which can be useful for debugging problems with the +zpool ``cachefile`` + ++--------------------------+------------------------------------------+ +| spa_load_print_vdev_tree | Notes | ++==========================+==========================================+ +| Tags | `import <#import>`__, `SPA <#SPA>`__ | ++--------------------------+------------------------------------------+ +| When to change | troubleshooting pool import failures | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0 = do not print pool configuration in | +| | logs, 1 = print pool configuration in | +| | logs | ++--------------------------+------------------------------------------+ +| Default | 0 | ++--------------------------+------------------------------------------+ +| Change | prior to pool import | ++--------------------------+------------------------------------------+ +| Versions Affected | planned for v2 | ++--------------------------+------------------------------------------+ + +zfs_max_missing_tvds +~~~~~~~~~~~~~~~~~~~~ + +When importing a pool in readonly mode +(``zpool import -o readonly=on ...``) then up to +``zfs_max_missing_tvds`` top-level vdevs can be missing, but the import +can attempt to progress. + +Note: This is strictly intended for advanced pool recovery cases since +missing data is almost inevitable. 
Pools with missing devices can only +be imported read-only for safety reasons, and the pool's ``failmode`` +property is automatically set to ``continue`` + +The expected use case is to recover pool data immediately after +accidentally adding a non-protected vdev to a protected pool. + +- With 1 missing top-level vdev, ZFS should be able to import the pool + and mount all datasets. User data that was not modified after the + missing device has been added should be recoverable. Thus snapshots + created prior to the addition of that device should be completely + intact. + +- With 2 missing top-level vdevs, some datasets may fail to mount since + there are dataset statistics that are stored as regular metadata. + Some data might be recoverable if those vdevs were added recently. + +- With 3 or more top-level missing vdevs, the pool is severely damaged + and MOS entries may be missing entirely. Chances of data recovery are + very low. Note that there are also risks of performing an inadvertent + rewind as we might be missing all the vdevs with the latest + uberblocks. + +==================== ========================================== +zfs_max_missing_tvds Notes +==================== ========================================== +Tags `import <#import>`__ +When to change troubleshooting pools with missing devices +Data Type int +Units missing top-level vdevs +Range 0 to MAX_INT +Default 0 +Change prior to pool import +Versions Affected planned for v2 +==================== ========================================== + +dbuf_metadata_cache_shift +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``dbuf_metadata_cache_shift`` sets the size of the dbuf metadata cache +as a fraction of ARC target size. This is an alternate method for +setting dbuf metadata cache size than +`dbuf_metadata_cache_max_bytes <#dbuf-metadata-cache-max-bytes>`__. + +`dbuf_metadata_cache_max_bytes <#dbuf-metadata-cache-max-bytes>`__ +overrides ``dbuf_metadata_cache_shift`` + +This value is a "shift" representing the fraction of ARC target size +(``grep -w c /proc/spl/kstat/zfs/arcstats``). The ARC target size is +shifted to the right. Thus a value of '2' results in the fraction = 1/4, +while a value of '6' results in the fraction = 1/64. + ++---------------------------+-----------------------------------------+ +| dbuf_metadata_cache_shift | Notes | ++===========================+=========================================+ +| Tags | `ARC <#ARC>`__, | +| | `dbuf_cache <#dbuf-cache>`__ | ++---------------------------+-----------------------------------------+ +| When to change | | ++---------------------------+-----------------------------------------+ +| Data Type | int | ++---------------------------+-----------------------------------------+ +| Units | shift | ++---------------------------+-----------------------------------------+ +| Range | practical range is | +| | (` | +| | dbuf_cache_shift <#dbuf-cache-shift>`__ | +| | + 1) to MAX_INT | ++---------------------------+-----------------------------------------+ +| Default | 6 | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Versions Affected | planned for v2 | ++---------------------------+-----------------------------------------+ + +dbuf_metadata_cache_max_bytes +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``dbuf_metadata_cache_max_bytes`` sets the size of the dbuf metadata +cache as a number of bytes. 
This is an alternate method for setting dbuf +metadata cache size than +`dbuf_metadata_cache_shift <#dbuf-metadata-cache-shift>`__ + +`dbuf_metadata_cache_max_bytes <#dbuf-metadata-cache-max-bytes>`__ +overrides ``dbuf_metadata_cache_shift`` + ++-------------------------------+-------------------------------------+ +| dbuf_metadata_cache_max_bytes | Notes | ++===============================+=====================================+ +| Tags | `dbuf_cache <#dbuf-cache>`__ | ++-------------------------------+-------------------------------------+ +| When to change | | ++-------------------------------+-------------------------------------+ +| Data Type | int | ++-------------------------------+-------------------------------------+ +| Units | bytes | ++-------------------------------+-------------------------------------+ +| Range | 0 = use | +| | `dbuf_metadata_cache_sh | +| | ift <#dbuf-metadata-cache-shift>`__ | +| | to ARC ``c_max`` | ++-------------------------------+-------------------------------------+ +| Default | 0 | ++-------------------------------+-------------------------------------+ +| Change | Dynamic | ++-------------------------------+-------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------------------+-------------------------------------+ + +dbuf_cache_shift +~~~~~~~~~~~~~~~~ + +``dbuf_cache_shift`` sets the size of the dbuf cache as a fraction of +ARC target size. This is an alternate method for setting dbuf cache size +than `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__. + +`dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ overrides +``dbuf_cache_shift`` + +This value is a "shift" representing the fraction of ARC target size +(``grep -w c /proc/spl/kstat/zfs/arcstats``). The ARC target size is +shifted to the right. Thus a value of '2' results in the fraction = 1/4, +while a value of '5' results in the fraction = 1/32. + +Performance tuning of dbuf cache can be monitored using: + +- ``dbufstat`` command +- `node_exporter `__ ZFS + module for prometheus environments +- `telegraf `__ ZFS plugin for + general-purpose metric collection +- ``/proc/spl/kstat/zfs/dbufstats`` kstat + ++-------------------+-------------------------------------------------+ +| dbuf_cache_shift | Notes | ++===================+=================================================+ +| Tags | `ARC <#ARC>`__, `dbuf_cache <#dbuf-cache>`__ | ++-------------------+-------------------------------------------------+ +| When to change | to improve performance of read-intensive | +| | channel programs | ++-------------------+-------------------------------------------------+ +| Data Type | int | ++-------------------+-------------------------------------------------+ +| Units | shift | ++-------------------+-------------------------------------------------+ +| Range | 5 to MAX_INT | ++-------------------+-------------------------------------------------+ +| Default | 5 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------+-------------------------------------------------+ + +.. _dbuf_cache_max_bytes-1: + +dbuf_cache_max_bytes +~~~~~~~~~~~~~~~~~~~~ + +``dbuf_cache_max_bytes`` sets the size of the dbuf cache in bytes. 
This +is an alternate method for setting dbuf cache size than +`dbuf_cache_shift <#dbuf-cache-shift>`__ + +Performance tuning of dbuf cache can be monitored using: + +- ``dbufstat`` command +- `node_exporter `__ ZFS + module for prometheus environments +- `telegraf `__ ZFS plugin for + general-purpose metric collection +- ``/proc/spl/kstat/zfs/dbufstats`` kstat + ++----------------------+----------------------------------------------+ +| dbuf_cache_max_bytes | Notes | ++======================+==============================================+ +| Tags | `ARC <#ARC>`__, `dbuf_cache <#dbuf-cache>`__ | ++----------------------+----------------------------------------------+ +| When to change | | ++----------------------+----------------------------------------------+ +| Data Type | int | ++----------------------+----------------------------------------------+ +| Units | bytes | ++----------------------+----------------------------------------------+ +| Range | 0 = use | +| | `dbuf_cache_shift <#dbuf-cache-shift>`__ to | +| | ARC ``c_max`` | ++----------------------+----------------------------------------------+ +| Default | 0 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | planned for v2 | ++----------------------+----------------------------------------------+ + +metaslab_force_ganging +~~~~~~~~~~~~~~~~~~~~~~ + +When testing allocation code, ``metaslab_force_ganging`` forces blocks +above the specified size to be ganged. + +====================== ========================================== +metaslab_force_ganging Notes +====================== ========================================== +Tags `allocation <#allocation>`__ +When to change for development testing purposes only +Data Type ulong +Units bytes +Range SPA_MINBLOCKSIZE to (SPA_MAXBLOCKSIZE + 1) +Default SPA_MAXBLOCKSIZE + 1 (16,777,217 bytes) +Change Dynamic +Versions Affected planned for v2 +====================== ========================================== + +zfs_vdev_default_ms_count +~~~~~~~~~~~~~~~~~~~~~~~~~ + +When adding a top-level vdev, ``zfs_vdev_default_ms_count`` is the +target number of metaslabs. + ++---------------------------+-----------------------------------------+ +| zfs_vdev_default_ms_count | Notes | ++===========================+=========================================+ +| Tags | `allocation <#allocation>`__ | ++---------------------------+-----------------------------------------+ +| When to change | for development testing purposes only | ++---------------------------+-----------------------------------------+ +| Data Type | int | ++---------------------------+-----------------------------------------+ +| Range | 16 to MAX_INT | ++---------------------------+-----------------------------------------+ +| Default | 200 | ++---------------------------+-----------------------------------------+ +| Change | prior to creating a pool or adding a | +| | top-level vdev | ++---------------------------+-----------------------------------------+ +| Versions Affected | planned for v2 | ++---------------------------+-----------------------------------------+ + +vdev_removal_max_span +~~~~~~~~~~~~~~~~~~~~~ + +During top-level vdev removal, chunks of data are copied from the vdev +which may include free space in order to trade bandwidth for IOPS. +``vdev_removal_max_span`` sets the maximum span of free space included +as unnecessary data in a chunk of copied data. 
+ +===================== ================================ +vdev_removal_max_span Notes +===================== ================================ +Tags `vdev_removal <#vdev-removal>`__ +When to change TBD +Data Type int +Units bytes +Range 0 to MAX_INT +Default 32,768 (32 KiB) +Change Dynamic +Versions Affected planned for v2 +===================== ================================ + +zfs_removal_ignore_errors +~~~~~~~~~~~~~~~~~~~~~~~~~ + +When removing a device, ``zfs_removal_ignore_errors`` controls the +process for handling hard I/O errors. When set, if a device encounters a +hard IO error during the removal process the removal will not be +cancelled. This can result in a normally recoverable block becoming +permanently damaged and is not recommended. This should only be used as +a last resort when the pool cannot be returned to a healthy state prior +to removing the device. + ++---------------------------+-----------------------------------------+ +| zfs_removal_ignore_errors | Notes | ++===========================+=========================================+ +| Tags | `vdev_removal <#vdev-removal>`__ | ++---------------------------+-----------------------------------------+ +| When to change | See description for caveat | ++---------------------------+-----------------------------------------+ +| Data Type | boolean | ++---------------------------+-----------------------------------------+ +| Range | during device removal: 0 = hard errors | +| | are not ignored, 1 = hard errors are | +| | ignored | ++---------------------------+-----------------------------------------+ +| Default | 0 | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Versions Affected | planned for v2 | ++---------------------------+-----------------------------------------+ + +zfs_removal_suspend_progress +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_removal_suspend_progress`` is used during automated testing of the +ZFS code to increase test coverage. + +============================ ====================================== +zfs_removal_suspend_progress Notes +============================ ====================================== +Tags `vdev_removal <#vdev-removal>`__ +When to change do not change +Data Type boolean +Range 0 = do not suspend during vdev removal +Default 0 +Change Dynamic +Versions Affected planned for v2 +============================ ====================================== + +zfs_condense_indirect_commit_entry_delay_ms +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +During vdev removal, the vdev indirection layer sleeps for +``zfs_condense_indirect_commit_entry_delay_ms`` milliseconds during +mapping generation. This parameter is used during automated testing of +the ZFS code to improve test coverage.
+ ++----------------------------------+----------------------------------+ +| zfs_condens | Notes | +| e_indirect_commit_entry_delay_ms | | ++==================================+==================================+ +| Tags | `vdev_removal <#vdev-removal>`__ | ++----------------------------------+----------------------------------+ +| When to change | do not change | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | milliseconds | ++----------------------------------+----------------------------------+ +| Range | 0 to MAX_INT | ++----------------------------------+----------------------------------+ +| Default | 0 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_condense_indirect_vdevs_enable +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +During vdev removal, condensing process is an attempt to save memory by +removing obsolete mappings. ``zfs_condense_indirect_vdevs_enable`` +enables condensing indirect vdev mappings. When set, ZFS attempts to +condense indirect vdev mappings if the mapping uses more than +`zfs_condense_min_mapping_bytes <#zfs-condense-min-mapping-bytes>`__ +bytes of memory and if the obsolete space map object uses more than +`zfs_condense_max_obsolete_bytes <#zfs-condense-max-obsolete-bytes>`__ +bytes on disk. + ++----------------------------------+----------------------------------+ +| zf | Notes | +| s_condense_indirect_vdevs_enable | | ++==================================+==================================+ +| Tags | `vdev_removal <#vdev-removal>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | boolean | ++----------------------------------+----------------------------------+ +| Range | 0 = do not save memory, 1 = save | +| | memory by condensing obsolete | +| | mapping after vdev removal | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_condense_max_obsolete_bytes +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +After vdev removal, ``zfs_condense_max_obsolete_bytes`` sets the limit +for beginning the condensing process. Condensing begins if the obsolete +space map takes up more than ``zfs_condense_max_obsolete_bytes`` of +space on disk (logically). The default of 1 GiB is small enough relative +to a typical pool that the space consumed by the obsolete space map is +minimal. 
+ +See also +`zfs_condense_indirect_vdevs_enable <#zfs-condense-indirect-vdevs-enable>`__ + +=============================== ================================ +zfs_condense_max_obsolete_bytes Notes +=============================== ================================ +Tags `vdev_removal <#vdev-removal>`__ +When to change do not change +Data Type ulong +Units bytes +Range 0 to MAX_ULONG +Default 1,073,741,824 (1 GiB) +Change Dynamic +Versions Affected planned for v2 +=============================== ================================ + +zfs_condense_min_mapping_bytes +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +After vdev removal, ``zfs_condense_min_mapping_bytes`` is the lower +limit for determining when to condense the in-memory obsolete space map. +The condensing process will not continue unless a minimum of +``zfs_condense_min_mapping_bytes`` of memory can be freed. + +See also +`zfs_condense_indirect_vdevs_enable <#zfs-condense-indirect-vdevs-enable>`__ + +============================== ================================ +zfs_condense_min_mapping_bytes Notes +============================== ================================ +Tags `vdev_removal <#vdev-removal>`__ +When to change do not change +Data Type ulong +Units bytes +Range 0 to MAX_ULONG +Default 128 KiB +Change Dynamic +Versions Affected planned for v2 +============================== ================================ + +zfs_vdev_initializing_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_initializing_max_active`` sets the maximum initializing I/Os +active to each device. + ++----------------------------------+----------------------------------+ +| zfs_vdev_initializing_max_active | Notes | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `Z | +| | IO_scheduler <#zio-scheduler>`__ | ++----------------------------------+----------------------------------+ +| When to change | See `ZFS I/O | +| | Sch | +| | eduler `__ | ++----------------------------------+----------------------------------+ +| Data Type | uint32 | ++----------------------------------+----------------------------------+ +| Units | I/O operations | ++----------------------------------+----------------------------------+ +| Range | 1 to | +| | `zfs_vdev_max_ | +| | active <#zfs-vdev-max-active>`__ | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_vdev_initializing_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_initializing_min_active`` sets the minimum initializing I/Os +active to each device.
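+
+The minimum and maximum are usually reviewed as a pair; a quick sketch for
+inspecting both current values (sysfs path assumed)::
+
+   grep . /sys/module/zfs/parameters/zfs_vdev_initializing_*_active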
+ ++----------------------------------+----------------------------------+ +| zfs_vdev_initializing_min_active | Notes | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `Z | +| | IO_scheduler <#zio-scheduler>`__ | ++----------------------------------+----------------------------------+ +| When to change | See `ZFS I/O | +| | Sch | +| | eduler `__ | ++----------------------------------+----------------------------------+ +| Data Type | uint32 | ++----------------------------------+----------------------------------+ +| Units | I/O operations | ++----------------------------------+----------------------------------+ +| Range | 1 to | +| | `zfs_vde | +| | v_initializing_max_active <#zfs_ | +| | vdev_initializing_max_active>`__ | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_vdev_removal_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_removal_max_active`` sets the maximum top-level vdev removal +I/Os active to each device. + ++-----------------------------+---------------------------------------+ +| zfs_vdev_removal_max_active | Notes | ++=============================+=======================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-----------------------------+---------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++-----------------------------+---------------------------------------+ +| Data Type | uint32 | ++-----------------------------+---------------------------------------+ +| Units | I/O operations | ++-----------------------------+---------------------------------------+ +| Range | 1 to | +| | `zfs_vdev | +| | _max_active <#zfs-vdev-max-active>`__ | ++-----------------------------+---------------------------------------+ +| Default | 2 | ++-----------------------------+---------------------------------------+ +| Change | Dynamic | ++-----------------------------+---------------------------------------+ +| Versions Affected | planned for v2 | ++-----------------------------+---------------------------------------+ + +zfs_vdev_removal_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_removal_min_active`` sets the minimum top-level vdev removal +I/Os active to each device. 
+ ++-----------------------------+---------------------------------------+ +| zfs_vdev_removal_min_active | Notes | ++=============================+=======================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-----------------------------+---------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++-----------------------------+---------------------------------------+ +| Data Type | uint32 | ++-----------------------------+---------------------------------------+ +| Units | I/O operations | ++-----------------------------+---------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_removal_max_act | +| | ive <#zfs-vdev-removal-max-active>`__ | ++-----------------------------+---------------------------------------+ +| Default | 1 | ++-----------------------------+---------------------------------------+ +| Change | Dynamic | ++-----------------------------+---------------------------------------+ +| Versions Affected | planned for v2 | ++-----------------------------+---------------------------------------+ + +zfs_vdev_trim_max_active +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_trim_max_active`` sets the maximum trim I/Os active to each +device. + ++--------------------------+------------------------------------------+ +| zfs_vdev_trim_max_active | Notes | ++==========================+==========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------+------------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------+------------------------------------------+ +| Data Type | uint32 | ++--------------------------+------------------------------------------+ +| Units | I/O operations | ++--------------------------+------------------------------------------+ +| Range | 1 to | +| | `zfs_v | +| | dev_max_active <#zfs-vdev-max-active>`__ | ++--------------------------+------------------------------------------+ +| Default | 2 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | planned for v2 | ++--------------------------+------------------------------------------+ + +zfs_vdev_trim_min_active +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_trim_min_active`` sets the minimum trim I/Os active to each +device. 
+ ++--------------------------+------------------------------------------+ +| zfs_vdev_trim_min_active | Notes | ++==========================+==========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------+------------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------+------------------------------------------+ +| Data Type | uint32 | ++--------------------------+------------------------------------------+ +| Units | I/O operations | ++--------------------------+------------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_trim_m | +| | ax_active <#zfs-vdev-trim-max-active>`__ | ++--------------------------+------------------------------------------+ +| Default | 1 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | planned for v2 | ++--------------------------+------------------------------------------+ + +zfs_initialize_value +~~~~~~~~~~~~~~~~~~~~ + +When initializing a vdev, ZFS writes patterns of +``zfs_initialize_value`` bytes to the device. + ++----------------------+----------------------------------------------+ +| zfs_initialize_value | Notes | ++======================+==============================================+ +| Tags | `vdev_initialize <#vdev-initialize>`__ | ++----------------------+----------------------------------------------+ +| When to change | when debugging initialization code | ++----------------------+----------------------------------------------+ +| Data Type | uint32 or uint64 | ++----------------------+----------------------------------------------+ +| Default | 0xdeadbeef for 32-bit systems, | +| | 0xdeadbeefdeadbeee for 64-bit systems | ++----------------------+----------------------------------------------+ +| Change | prior to running ``zpool initialize`` | ++----------------------+----------------------------------------------+ +| Versions Affected | planned for v2 | ++----------------------+----------------------------------------------+ + +zfs_lua_max_instrlimit +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_lua_max_instrlimit`` limits the maximum time for a ZFS channel +program to run. + ++------------------------+--------------------------------------------+ +| zfs_lua_max_instrlimit | Notes | ++========================+============================================+ +| Tags | `channel_programs <#channel-programs>`__ | ++------------------------+--------------------------------------------+ +| When to change | to enforce a CPU usage limit on ZFS | +| | channel programs | ++------------------------+--------------------------------------------+ +| Data Type | ulong | ++------------------------+--------------------------------------------+ +| Units | LUA instructions | ++------------------------+--------------------------------------------+ +| Range | 0 to MAX_ULONG | ++------------------------+--------------------------------------------+ +| Default | 100,000,000 | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------+--------------------------------------------+ + +zfs_lua_max_memlimit +~~~~~~~~~~~~~~~~~~~~ + +'zfs_lua_max_memlimit' is the maximum memory limit for a ZFS channel +program. 
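+
+Both channel program limits can be tightened before running experimental
+channel programs. A sketch, assuming runtime changes via sysfs; the chosen
+values are illustrative::
+
+   # cap channel programs at 10 million Lua instructions and 10 MiB of memory
+   echo 10000000 > /sys/module/zfs/parameters/zfs_lua_max_instrlimit
+   echo 10485760 > /sys/module/zfs/parameters/zfs_lua_max_memlimit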
+ +==================== ======================================== +zfs_lua_max_memlimit Notes +==================== ======================================== +Tags `channel_programs <#channel-programs>`__ +When to change +Data Type ulong +Units bytes +Range 0 to MAX_ULONG +Default 104,857,600 (100 MiB) +Change Dynamic +Versions Affected planned for v2 +==================== ======================================== + +zfs_max_dataset_nesting +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_max_dataset_nesting`` limits the depth of nested datasets. Deeply +nested datasets can overflow the stack. The maximum stack depth depends +on kernel compilation options, so it is impractical to predict the +possible limits. For kernels compiled with small stack sizes, +``zfs_max_dataset_nesting`` may require changes. + ++-------------------------+-------------------------------------------+ +| zfs_max_dataset_nesting | Notes | ++=========================+===========================================+ +| Tags | `dataset <#dataset>`__ | ++-------------------------+-------------------------------------------+ +| When to change | can be tuned temporarily to fix existing | +| | datasets that exceed the predefined limit | ++-------------------------+-------------------------------------------+ +| Data Type | int | ++-------------------------+-------------------------------------------+ +| Units | datasets | ++-------------------------+-------------------------------------------+ +| Range | 0 to MAX_INT | ++-------------------------+-------------------------------------------+ +| Default | 50 | ++-------------------------+-------------------------------------------+ +| Change | Dynamic, though once on-disk the value | +| | for the pool is set | ++-------------------------+-------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------------+-------------------------------------------+ + +zfs_ddt_data_is_special +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_ddt_data_is_special`` enables the deduplication table (DDT) to +reside on a special top-level vdev. + ++-------------------------+-------------------------------------------+ +| zfs_ddt_data_is_special | Notes | ++=========================+===========================================+ +| Tags | `dedup <#dedup>`__, | +| | `special_vdev <#special-vdev>`__ | ++-------------------------+-------------------------------------------+ +| When to change | when using a special top-level vdev and | +| | no dedup top-level vdev and it is desired | +| | to store the DDT in the main pool | +| | top-level vdevs | ++-------------------------+-------------------------------------------+ +| Data Type | boolean | ++-------------------------+-------------------------------------------+ +| Range | 0=do not use special vdevs to store DDT, | +| | 1=store DDT in special vdevs | ++-------------------------+-------------------------------------------+ +| Default | 1 | ++-------------------------+-------------------------------------------+ +| Change | Dynamic | ++-------------------------+-------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------------+-------------------------------------------+ + +zfs_user_indirect_is_special +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If special vdevs are in use, ``zfs_user_indirect_is_special`` enables +user data indirect blocks (a form of metadata) to be written to the +special vdevs. 
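+
+For example, to keep user data indirect blocks on the main pool top-level
+vdevs and reserve special vdev space for other metadata, the parameter can be
+cleared at runtime (sysfs path assumed)::
+
+   echo 0 > /sys/module/zfs/parameters/zfs_user_indirect_is_special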
+ ++------------------------------+--------------------------------------+ +| zfs_user_indirect_is_special | Notes | ++==============================+======================================+ +| Tags | `special_vdev <#special-vdev>`__ | ++------------------------------+--------------------------------------+ +| When to change | to force user data indirect blocks | +| | to remain in the main pool top-level | +| | vdevs | ++------------------------------+--------------------------------------+ +| Data Type | boolean | ++------------------------------+--------------------------------------+ +| Range | 0=do not write user indirect blocks | +| | to a special vdev, 1=write user | +| | indirect blocks to a special vdev | ++------------------------------+--------------------------------------+ +| Default | 1 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------------+--------------------------------------+ + +zfs_reconstruct_indirect_combinations_max +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +After device removal, if an indirect split block contains more than +``zfs_reconstruct_indirect_combinations_max`` many possible unique +combinations when being reconstructed, it can be considered too +computationally expensive to check them all. Instead, at most +``zfs_reconstruct_indirect_combinations_max`` randomly-selected +combinations are attempted each time the block is accessed. This allows +all segment copies to participate fairly in the reconstruction when all +combinations cannot be checked and prevents repeated use of one bad +copy. + ++----------------------------------+----------------------------------+ +| zfs_recon | Notes | +| struct_indirect_combinations_max | | ++==================================+==================================+ +| Tags | `vdev_removal <#vdev-removal>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | attempts | ++----------------------------------+----------------------------------+ +| Range | 0=do not limit attempts, 1 to | +| | MAX_INT = limit for attempts | ++----------------------------------+----------------------------------+ +| Default | 4096 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_send_unmodified_spill_blocks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_send_unmodified_spill_blocks`` enables sending of unmodified spill +blocks in the send stream. Under certain circumstances, previous +versions of ZFS could incorrectly remove the spill block from an +existing object. Including unmodified copies of the spill blocks creates +a backwards compatible stream which will recreate a spill block if it +was incorrectly removed. 
+ ++----------------------------------+----------------------------------+ +| zfs_send_unmodified_spill_blocks | Notes | ++==================================+==================================+ +| Tags | `send <#send>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | boolean | ++----------------------------------+----------------------------------+ +| Range | 0=do not send unmodified spill | +| | blocks, 1=send unmodified spill | +| | blocks | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_spa_discard_memory_limit +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_spa_discard_memory_limit`` sets the limit for maximum memory used +for prefetching a pool's checkpoint space map on each vdev while +discarding a pool checkpoint. + +============================ ============================ +zfs_spa_discard_memory_limit Notes +============================ ============================ +Tags `checkpoint <#checkpoint>`__ +When to change TBD +Data Type int +Units bytes +Range 0 to MAX_INT +Default 16,777,216 (16 MiB) +Change Dynamic +Versions Affected planned for v2 +============================ ============================ + +zfs_special_class_metadata_reserve_pct +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_special_class_metadata_reserve_pct`` sets a threshold for space in +special vdevs to be reserved exclusively for metadata. This prevents +small blocks or dedup table from completely consuming a special vdev. + +====================================== ================================ +zfs_special_class_metadata_reserve_pct Notes +====================================== ================================ +Tags `special_vdev <#special-vdev>`__ +When to change TBD +Data Type int +Units percent +Range 0 to 100 +Default 25 +Change Dynamic +Versions Affected planned for v2 +====================================== ================================ + +zfs_trim_extent_bytes_max +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_trim_extent_bytes_max`` sets the maximum size of a trim (aka +discard, scsi unmap) command. Ranges larger than +``zfs_trim_extent_bytes_max`` are split in to chunks no larger than +``zfs_trim_extent_bytes_max`` bytes prior to being issued to the device. +Use ``zpool iostat -w`` to observe the latency of trim commands. 
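+
+A sketch of the suggested workflow: observe trim latency first, then raise the
+limit only if the device handles large discards efficiently. The pool name and
+the 256 MiB value are illustrative::
+
+   zpool iostat -w tank 5
+   echo 268435456 > /sys/module/zfs/parameters/zfs_trim_extent_bytes_max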
+ ++---------------------------+-----------------------------------------+ +| zfs_trim_extent_bytes_max | Notes | ++===========================+=========================================+ +| Tags | `trim <#trim>`__ | ++---------------------------+-----------------------------------------+ +| When to change | if the device can efficiently handle | +| | larger trim requests | ++---------------------------+-----------------------------------------+ +| Data Type | uint | ++---------------------------+-----------------------------------------+ +| Units | bytes | ++---------------------------+-----------------------------------------+ +| Range | `zfs_trim_extent_by | +| | tes_min <#zfs-trim-extent-bytes-min>`__ | +| | to MAX_UINT | ++---------------------------+-----------------------------------------+ +| Default | 134,217,728 (128 MiB) | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Versions Affected | planned for v2 | ++---------------------------+-----------------------------------------+ + +zfs_trim_extent_bytes_min +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_trim_extent_bytes_min`` sets the minimum size of trim (aka +discard, scsi unmap) commands. Trim ranges smaller than +``zfs_trim_extent_bytes_min`` are skipped unless they're part of a +larger range which was broken in to chunks. Some devices have +performance degradation during trim operations, so using a larger +``zfs_trim_extent_bytes_min`` can reduce the total amount of space +trimmed. Use ``zpool iostat -w`` to observe the latency of trim +commands. + ++---------------------------+-----------------------------------------+ +| zfs_trim_extent_bytes_min | Notes | ++===========================+=========================================+ +| Tags | `trim <#trim>`__ | ++---------------------------+-----------------------------------------+ +| When to change | when trim is in use and device | +| | performance suffers from trimming small | +| | allocations | ++---------------------------+-----------------------------------------+ +| Data Type | uint | ++---------------------------+-----------------------------------------+ +| Units | bytes | ++---------------------------+-----------------------------------------+ +| Range | 0=trim all unallocated space, otherwise | +| | minimum physical block size to MAX\_ | ++---------------------------+-----------------------------------------+ +| Default | 32,768 (32 KiB) | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Versions Affected | planned for v2 | ++---------------------------+-----------------------------------------+ + +zfs_trim_metaslab_skip +~~~~~~~~~~~~~~~~~~~~~~ + +| ``zfs_trim_metaslab_skip`` enables uninitialized metaslabs to be + skipped during the trim (aka discard, scsi unmap) process. + ``zfs_trim_metaslab_skip`` can be useful for pools constructed from + large thinly-provisioned devices where trim operations perform slowly. +| As a pool ages an increasing fraction of the pool's metaslabs are + initialized, progressively degrading the usefulness of this option. + This setting is stored when starting a manual trim and persists for + the duration of the requested trim. Use ``zpool iostat -w`` to observe + the latency of trim commands. 
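+
+Because the setting is captured when a manual trim starts, it is typically set
+immediately before issuing the trim. A sketch with a hypothetical pool name::
+
+   echo 1 > /sys/module/zfs/parameters/zfs_trim_metaslab_skip
+   zpool trim tank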
+ ++------------------------+--------------------------------------------+ +| zfs_trim_metaslab_skip | Notes | ++========================+============================================+ +| Tags | `trim <#trim>`__ | ++------------------------+--------------------------------------------+ +| When to change | | ++------------------------+--------------------------------------------+ +| Data Type | boolean | ++------------------------+--------------------------------------------+ +| Range | 0=do not skip uninitialized metaslabs | +| | during trim, 1=skip uninitialized | +| | metaslabs during trim | ++------------------------+--------------------------------------------+ +| Default | 0 | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------+--------------------------------------------+ + +zfs_trim_queue_limit +~~~~~~~~~~~~~~~~~~~~ + +``zfs_trim_queue_limit`` sets the maximum queue depth for leaf vdevs. +See also `zfs_vdev_trim_max_active <#zfs-vdev-trim-max-active>`__ and +`zfs_trim_extent_bytes_max <#zfs-trim-extent-bytes-max>`__ Use +``zpool iostat -q`` to observe trim queue depth. + ++----------------------+------------------------------------------------------+ +| zfs_trim_queue_limit | Notes | ++======================+======================================================+ +| Tags | `trim <#trim>`__ | ++----------------------+------------------------------------------------------+ +| When to change | to restrict the number of trim commands in the queue | ++----------------------+------------------------------------------------------+ +| Data Type | uint | ++----------------------+------------------------------------------------------+ +| Units | I/O operations | ++----------------------+------------------------------------------------------+ +| Range | 1 to MAX_UINT | ++----------------------+------------------------------------------------------+ +| Default | 10 | ++----------------------+------------------------------------------------------+ +| Change | Dynamic | ++----------------------+------------------------------------------------------+ +| Versions Affected | planned for v2 | ++----------------------+------------------------------------------------------+ + +zfs_trim_txg_batch +~~~~~~~~~~~~~~~~~~ + +``zfs_trim_txg_batch`` sets the number of transaction groups worth of +frees which should be aggregated before trim (aka discard, scsi unmap) +commands are issued to a device. This setting represents a trade-off +between issuing larger, more efficient trim commands and the delay +before the recently trimmed space is available for use by the device. + +Increasing this value will allow frees to be aggregated for a longer +time. This will result is larger trim operations and potentially +increased memory usage. Decreasing this value will have the opposite +effect. The default value of 32 was empirically determined to be a +reasonable compromise. + +================== =================== +zfs_trim_txg_batch Notes +================== =================== +Tags `trim <#trim>`__ +When to change TBD +Data Type uint +Units metaslabs to stride +Range 1 to MAX_UINT +Default 32 +Change Dynamic +Versions Affected planned for v2 +================== =================== + +zfs_vdev_aggregate_trim +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_aggregate_trim`` allows trim I/Os to be aggregated. 
This is +normally not helpful because the extents to be trimmed will have been +already been aggregated by the metaslab. + ++-------------------------+-------------------------------------------+ +| zfs_vdev_aggregate_trim | Notes | ++=========================+===========================================+ +| Tags | `trim <#trim>`__, `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------------+-------------------------------------------+ +| When to change | when debugging trim code or trim | +| | performance issues | ++-------------------------+-------------------------------------------+ +| Data Type | boolean | ++-------------------------+-------------------------------------------+ +| Range | 0=do not attempt to aggregate trim | +| | commands, 1=attempt to aggregate trim | +| | commands | ++-------------------------+-------------------------------------------+ +| Default | 0 | ++-------------------------+-------------------------------------------+ +| Change | Dynamic | ++-------------------------+-------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------------+-------------------------------------------+ + +zfs_vdev_aggregation_limit_non_rotating +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_aggregation_limit_non_rotating`` is the equivalent of +`zfs_vdev_aggregation_limit <#zfs-vdev-aggregation-limit>`__ for devices +which represent themselves as non-rotating to the Linux blkdev +interfaces. Such devices have a value of 0 in +``/sys/block/DEVICE/queue/rotational`` and are expected to be SSDs. + ++----------------------------------+----------------------------------+ +| zfs_vde | Notes | +| v_aggregation_limit_non_rotating | | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `Z | +| | IO_scheduler <#zio-scheduler>`__ | ++----------------------------------+----------------------------------+ +| When to change | see | +| | `zfs_vdev_aggregation_limit | +| | <#zfs-vdev-aggregation-limit>`__ | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | bytes | ++----------------------------------+----------------------------------+ +| Range | 0 to MAX_INT | ++----------------------------------+----------------------------------+ +| Default | 131,072 bytes (128 KiB) | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zil_nocacheflush +~~~~~~~~~~~~~~~~ + +ZFS uses barriers (volatile cache flush commands) to ensure data is +committed to permanent media by devices. This ensures consistent +on-media state for devices where caches are volatile (eg HDDs). + +``zil_nocacheflush`` disables the cache flush commands that are normally +sent to devices by the ZIL after a log write has completed. + +The difference between ``zil_nocacheflush`` and +`zfs_nocacheflush <#zfs-nocacheflush>`__ is ``zil_nocacheflush`` applies +to ZIL writes while `zfs_nocacheflush <#zfs-nocacheflush>`__ disables +barrier writes to the pool devices at the end of transaction group syncs. + +WARNING: setting this can cause ZIL corruption on power loss if the +device has a volatile write cache. 
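+
+For example, on a system whose log devices have a nonvolatile
+(power-protected) write cache, the setting could be made persistent
+across module loads with a modprobe option (a sketch, not a
+recommendation)::
+
+   # /etc/modprobe.d/zfs.conf
+   options zfs zil_nocacheflush=1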
+ ++-------------------+-------------------------------------------------+ +| zil_nocacheflush | Notes | ++===================+=================================================+ +| Tags | `disks <#disks>`__, `ZIL <#ZIL>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If the storage device has nonvolatile cache, | +| | then disabling cache flush can save the cost of | +| | occasional cache flush commands | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=send cache flush commands, 1=do not send | +| | cache flush commands | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------+-------------------------------------------------+ + +zio_deadman_log_all +~~~~~~~~~~~~~~~~~~~ + +``zio_deadman_log_all`` enables debugging messages for all ZFS I/Os, +rather than only for leaf ZFS I/Os for a vdev. This is meant to be used +by developers to gain diagnostic information for hang conditions which +don't involve a mutex or other locking primitive. Typically these are +conditions where a thread in the zio pipeline is looping indefinitely. + +See also `zfs_dbgmsg_enable <#zfs-dbgmsg-enable>`__ + ++---------------------+-----------------------------------------------+ +| zio_deadman_log_all | Notes | ++=====================+===============================================+ +| Tags | `debug <#debug>`__ | ++---------------------+-----------------------------------------------+ +| When to change | when debugging ZFS I/O pipeline | ++---------------------+-----------------------------------------------+ +| Data Type | boolean | ++---------------------+-----------------------------------------------+ +| Range | 0=do not log all deadman events, 1=log all | +| | deadman events | ++---------------------+-----------------------------------------------+ +| Default | 0 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | planned for v2 | ++---------------------+-----------------------------------------------+ + +zio_decompress_fail_fraction +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If non-zero, ``zio_decompress_fail_fraction`` represents the denominator +of the probability that ZFS should induce a decompression failure. For +instance, for a 5% decompression failure rate, this value should be set +to 20. 
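+
+As a sketch, inducing roughly a 1% decompression failure rate on a test
+system, then disabling the fault injection again::
+
+   echo 100 > /sys/module/zfs/parameters/zio_decompress_fail_fraction
+   echo 0 > /sys/module/zfs/parameters/zio_decompress_fail_fraction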
+ ++------------------------------+--------------------------------------+ +| zio_decompress_fail_fraction | Notes | ++==============================+======================================+ +| Tags | `debug <#debug>`__ | ++------------------------------+--------------------------------------+ +| When to change | when debugging ZFS internal | +| | compressed buffer code | ++------------------------------+--------------------------------------+ +| Data Type | ulong | ++------------------------------+--------------------------------------+ +| Units | probability of induced decompression | +| | failure is | +| | 1/``zio_decompress_fail_fraction`` | ++------------------------------+--------------------------------------+ +| Range | 0 = do not induce failures, or 1 to | +| | MAX_ULONG | ++------------------------------+--------------------------------------+ +| Default | 0 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------------+--------------------------------------+ + +zio_slow_io_ms +~~~~~~~~~~~~~~ + +An I/O operation taking more than ``zio_slow_io_ms`` milliseconds to +complete is marked as a slow I/O. Slow I/O counters can be observed with +``zpool status -s``. Each slow I/O causes a delay zevent, observable +using ``zpool events``. See also ``zfs-events(5)``. + ++-------------------+-------------------------------------------------+ +| zio_slow_io_ms | Notes | ++===================+=================================================+ +| Tags | `vdev <#vdev>`__, `zed <#zed>`__ | ++-------------------+-------------------------------------------------+ +| When to change | when debugging slow devices and the default | +| | value is inappropriate | ++-------------------+-------------------------------------------------+ +| Data Type | int | ++-------------------+-------------------------------------------------+ +| Units | milliseconds | ++-------------------+-------------------------------------------------+ +| Range | 0 to MAX_INT | ++-------------------+-------------------------------------------------+ +| Default | 30,000 (30 seconds) | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------+-------------------------------------------------+ + +vdev_validate_skip +~~~~~~~~~~~~~~~~~~ + +``vdev_validate_skip`` disables label validation steps during pool +import. Changing is not recommended unless you know what you are doing +and are recovering a damaged label. 
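+
+If label validation must be skipped for a recovery attempt, the change
+has to be made before the import (hypothetical pool name ``tank``)::
+
+   echo 1 > /sys/module/zfs/parameters/vdev_validate_skip
+   zpool import tank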
+ ++--------------------+------------------------------------------------+ +| vdev_validate_skip | Notes | ++====================+================================================+ +| Tags | `vdev <#vdev>`__ | ++--------------------+------------------------------------------------+ +| When to change | do not change | ++--------------------+------------------------------------------------+ +| Data Type | boolean | ++--------------------+------------------------------------------------+ +| Range | 0=validate labels during pool import, 1=do not | +| | validate vdev labels during pool import | ++--------------------+------------------------------------------------+ +| Default | 0 | ++--------------------+------------------------------------------------+ +| Change | prior to pool import | ++--------------------+------------------------------------------------+ +| Versions Affected | planned for v2 | ++--------------------+------------------------------------------------+ + +zfs_async_block_max_blocks +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_async_block_max_blocks`` limits the number of blocks freed in a +single transaction group commit. During deletes of large objects, such +as snapshots, the number of freed blocks can cause the DMU to extend txg +sync times well beyond `zfs_txg_timeout <#zfs-txg-timeout>`__. +``zfs_async_block_max_blocks`` is used to limit these effects. + +========================== ==================================== +zfs_async_block_max_blocks Notes +========================== ==================================== +Tags `delete <#delete>`__, `DMU <#DMU>`__ +When to change TBD +Data Type ulong +Units blocks +Range 1 to MAX_ULONG +Default MAX_ULONG (do not limit) +Change Dynamic +Versions Affected planned for v2 +========================== ==================================== + +zfs_checksum_events_per_second +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_checksum_events_per_second`` is a rate limit for checksum events. +Note that this should not be set below the ``zed`` thresholds (currently +10 checksums over 10 sec) or else ``zed`` may not trigger any action. + +============================== ============================= +zfs_checksum_events_per_second Notes +============================== ============================= +Tags `vdev <#vdev>`__ +When to change TBD +Data Type uint +Units checksum events +Range ``zed`` threshold to MAX_UINT +Default 20 +Change Dynamic +Versions Affected planned for v2 +============================== ============================= + +zfs_disable_ivset_guid_check +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_disable_ivset_guid_check`` disables requirement for IVset guids to +be present and match when doing a raw receive of encrypted datasets. +Intended for users whose pools were created with ZFS on Linux +pre-release versions and now have compatibility issues. + +For a ZFS raw receive, from a send stream created by ``zfs send --raw``, +the crypt_keydata nvlist includes a to_ivset_guid to be set on the new +snapshot. This value will override the value generated by the snapshot +code. However, this value may not be present, because older +implementations of the raw send code did not include this value. When +``zfs_disable_ivset_guid_check`` is enabled, the receive proceeds and a +newly-generated value is used. 
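+
+A sketch of a raw receive with the check disabled (dataset names are
+hypothetical)::
+
+   echo 1 > /sys/module/zfs/parameters/zfs_disable_ivset_guid_check
+   zfs send --raw pool/encrypted@snap | zfs receive -u newpool/encrypted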
+ ++------------------------------+--------------------------------------+ +| zfs_disable_ivset_guid_check | Notes | ++==============================+======================================+ +| Tags | `receive <#receive>`__ | ++------------------------------+--------------------------------------+ +| When to change | debugging pre-release ZFS raw sends | ++------------------------------+--------------------------------------+ +| Data Type | boolean | ++------------------------------+--------------------------------------+ +| Range | 0=check IVset guid, 1=do not check | +| | IVset guid | ++------------------------------+--------------------------------------+ +| Default | 0 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------------+--------------------------------------+ + +zfs_obsolete_min_time_ms +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_obsolete_min_time_ms`` is similar to +`zfs_free_min_time_ms <#zfs-free-min-time-ms>`__ and used for cleanup of +old indirection records for vdevs removed using the ``zpool remove`` +command. + +======================== ========================================== +zfs_obsolete_min_time_ms Notes +======================== ========================================== +Tags `delete <#delete>`__, `remove <#remove>`__ +When to change TBD +Data Type int +Units milliseconds +Range 0 to MAX_INT +Default 500 +Change Dynamic +Versions Affected planned for v2 +======================== ========================================== + +zfs_override_estimate_recordsize +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_override_estimate_recordsize`` overrides the default logic for +estimating block sizes when doing a zfs send. The default heuristic is +that the average block size will be the current recordsize. + ++----------------------------------+----------------------------------+ +| zfs_override_estimate_recordsize | Notes | ++==================================+==================================+ +| Tags | `send <#send>`__ | ++----------------------------------+----------------------------------+ +| When to change | if most data in your dataset is | +| | not of the current recordsize | +| | and you require accurate zfs | +| | send size estimates | ++----------------------------------+----------------------------------+ +| Data Type | ulong | ++----------------------------------+----------------------------------+ +| Units | bytes | ++----------------------------------+----------------------------------+ +| Range | 0=do not override, 1 to | +| | MAX_ULONG | ++----------------------------------+----------------------------------+ +| Default | 0 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_remove_max_segment +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_remove_max_segment`` sets the largest contiguous segment that ZFS +attempts to allocate when removing a vdev. This can be no larger than +16MB. If there is a performance problem with attempting to allocate +large blocks, consider decreasing this. The value is rounded up to a +power-of-2. 
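+
+For example, to limit allocations to 1 MiB segments before starting a
+device removal (hypothetical pool and vdev names)::
+
+   echo 1048576 > /sys/module/zfs/parameters/zfs_remove_max_segment
+   zpool remove tank mirror-1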
+ ++------------------------+--------------------------------------------+ +| zfs_remove_max_segment | Notes | ++========================+============================================+ +| Tags | `remove <#remove>`__ | ++------------------------+--------------------------------------------+ +| When to change | after removing a top-level vdev, consider | +| | decreasing if there is a performance | +| | degradation when attempting to allocate | +| | large blocks | ++------------------------+--------------------------------------------+ +| Data Type | int | ++------------------------+--------------------------------------------+ +| Units | bytes | ++------------------------+--------------------------------------------+ +| Range | maximum of the physical block size of all | +| | vdevs in the pool to 16,777,216 bytes (16 | +| | MiB) | ++------------------------+--------------------------------------------+ +| Default | 16,777,216 bytes (16 MiB) | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------+--------------------------------------------+ + +zfs_resilver_disable_defer +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_resilver_disable_defer`` disables the ``resilver_defer`` pool +feature. The ``resilver_defer`` feature allows ZFS to postpone new +resilvers if an existing resilver is in progress. + ++----------------------------+----------------------------------------+ +| zfs_resilver_disable_defer | Notes | ++============================+========================================+ +| Tags | `resilver <#resilver>`__ | ++----------------------------+----------------------------------------+ +| When to change | if resilver postponement is not | +| | desired due to overall resilver time | +| | constraints | ++----------------------------+----------------------------------------+ +| Data Type | boolean | ++----------------------------+----------------------------------------+ +| Range | 0=allow ``resilver_defer`` to postpone | +| | new resilver operations, 1=immediately | +| | restart resilver when needed | ++----------------------------+----------------------------------------+ +| Default | 0 | ++----------------------------+----------------------------------------+ +| Change | Dynamic | ++----------------------------+----------------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------+----------------------------------------+ + +zfs_scan_suspend_progress +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_scan_suspend_progress`` causes a scrub or resilver scan to freeze +without actually pausing. + +========================= ============================================ +zfs_scan_suspend_progress Notes +========================= ============================================ +Tags `resilver <#resilver>`__, `scrub <#scrub>`__ +When to change testing or debugging scan code +Data Type boolean +Range 0=do not freeze scans, 1=freeze scans +Default 0 +Change Dynamic +Versions Affected planned for v2 +========================= ============================================ + +zfs_scrub_min_time_ms +~~~~~~~~~~~~~~~~~~~~~ + +Scrubs are processed by the sync thread. While scrubbing at least +``zfs_scrub_min_time_ms`` time is spent working on a scrub between txg +syncs. 
+ +===================== ================================================= +zfs_scrub_min_time_ms Notes +===================== ================================================= +Tags `scrub <#scrub>`__ +When to change +Data Type int +Units milliseconds +Range 1 to (`zfs_txg_timeout <#zfs-txg-timeout>`__ - 1) +Default 1,000 +Change Dynamic +Versions Affected planned for v2 +===================== ================================================= + +zfs_slow_io_events_per_second +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_slow_io_events_per_second`` is a rate limit for slow I/O events. +Note that this should not be set below the ``zed`` thresholds (currently +10 checksums over 10 sec) or else ``zed`` may not trigger any action. + +============================= ============================= +zfs_slow_io_events_per_second Notes +============================= ============================= +Tags `vdev <#vdev>`__ +When to change TBD +Data Type uint +Units slow I/O events +Range ``zed`` threshold to MAX_UINT +Default 20 +Change Dynamic +Versions Affected planned for v2 +============================= ============================= + +zfs_vdev_min_ms_count +~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_min_ms_count`` is the minimum number of metaslabs to create +in a top-level vdev. + ++-----------------------+---------------------------------------------+ +| zfs_vdev_min_ms_count | Notes | ++=======================+=============================================+ +| Tags | `metaslab <#metaslab>`__, `vdev <#vdev>`__ | ++-----------------------+---------------------------------------------+ +| When to change | TBD | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | metaslabs | ++-----------------------+---------------------------------------------+ +| Range | 16 to | +| | `zfs_vdev_m | +| | s_count_limit <#zfs-vdev-ms-count-limit>`__ | ++-----------------------+---------------------------------------------+ +| Default | 16 | ++-----------------------+---------------------------------------------+ +| Change | prior to creating a pool or adding a | +| | top-level vdev | ++-----------------------+---------------------------------------------+ +| Versions Affected | planned for v2 | ++-----------------------+---------------------------------------------+ + +zfs_vdev_ms_count_limit +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_ms_count_limit`` is the practical upper limit for the number +of metaslabs per top-level vdev. 
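+
+The metaslab layout of an existing pool can be inspected with ``zdb``
+(hypothetical pool name ``tank``)::
+
+   # print the metaslabs of each top-level vdev
+   zdb -m tank | less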
+ ++-------------------------+-------------------------------------------+ +| zfs_vdev_ms_count_limit | Notes | ++=========================+===========================================+ +| Tags | `metaslab <#metaslab>`__, | +| | `vdev <#vdev>`__ | ++-------------------------+-------------------------------------------+ +| When to change | TBD | ++-------------------------+-------------------------------------------+ +| Data Type | int | ++-------------------------+-------------------------------------------+ +| Units | metaslabs | ++-------------------------+-------------------------------------------+ +| Range | `zfs_vdev | +| | _min_ms_count <#zfs-vdev-min-ms-count>`__ | +| | to 131,072 | ++-------------------------+-------------------------------------------+ +| Default | 131,072 | ++-------------------------+-------------------------------------------+ +| Change | prior to creating a pool or adding a | +| | top-level vdev | ++-------------------------+-------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------------+-------------------------------------------+ + +spl_hostid +~~~~~~~~~~ + +| ``spl_hostid`` is a unique system id number. It originated in Sun's + products where most systems had a unique id assigned at the factory. + This assignment does not exist in modern hardware. +| In ZFS, the hostid is stored in the vdev label and can be used to + determine if another system had imported the pool. When set + ``spl_hostid`` can be used to uniquely identify a system. By default + this value is set to zero which indicates the hostid is disabled. It + can be explicitly enabled by placing a unique non-zero value in the + file shown in `spl_hostid_path <#spl-hostid-path>`__ + ++-------------------+-------------------------------------------------+ +| spl_hostid | Notes | ++===================+=================================================+ +| Tags | `hostid <#hostid>`__, `MMP <#MMP>`__ | ++-------------------+-------------------------------------------------+ +| Kernel module | spl | ++-------------------+-------------------------------------------------+ +| When to change | to uniquely identify a system when vdevs can be | +| | shared across multiple systems | ++-------------------+-------------------------------------------------+ +| Data Type | ulong | ++-------------------+-------------------------------------------------+ +| Range | 0=ignore hostid, 1 to 4,294,967,295 (32-bits or | +| | 0xffffffff) | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | prior to importing pool | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.1 | ++-------------------+-------------------------------------------------+ + +spl_hostid_path +~~~~~~~~~~~~~~~ + +``spl_hostid_path`` is the path name for a file that can contain a +unique hostid. For testing purposes, ``spl_hostid_path`` can be +overridden by the ZFS_HOSTID environment variable. 
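+
+On most distributions the hostid file can be created with the
+``zgenhostid`` utility shipped with OpenZFS; a sketch::
+
+   # write a random, non-zero hostid to /etc/hostid (see spl_hostid_path)
+   zgenhostid
+   # confirm the value now reported by the system
+   hostid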
+
++-------------------+-------------------------------------------------+
+| spl_hostid_path   | Notes                                           |
++===================+=================================================+
+| Tags              | `hostid <#hostid>`__, `MMP <#MMP>`__            |
++-------------------+-------------------------------------------------+
+| Kernel module     | spl                                             |
++-------------------+-------------------------------------------------+
+| When to change    | when creating a new ZFS distribution where the  |
+|                   | default value is inappropriate                  |
++-------------------+-------------------------------------------------+
+| Data Type         | string                                          |
++-------------------+-------------------------------------------------+
+| Default           | "/etc/hostid"                                   |
++-------------------+-------------------------------------------------+
+| Change            | read-only, can only be changed prior to spl     |
+|                   | module load                                     |
++-------------------+-------------------------------------------------+
+| Versions Affected | v0.6.1                                          |
++-------------------+-------------------------------------------------+
+
+spl_kmem_alloc_max
+~~~~~~~~~~~~~~~~~~
+
+Large ``kmem_alloc()`` allocations fail if they exceed KMALLOC_MAX_SIZE,
+as determined by the kernel source. Allocations which are marginally
+smaller than this limit may succeed but should still be avoided due to
+the expense of locating a contiguous range of free pages. Therefore, a
+maximum kmem size with a reasonable safety margin of 4x is set.
+``kmem_alloc()`` allocations larger than this maximum will quickly fail.
+``vmem_alloc()`` allocations less than or equal to this value will use
+``kmalloc()``, but shift to ``vmalloc()`` when exceeding this value.
+
+================== ====================
+spl_kmem_alloc_max Notes
+================== ====================
+Tags               `memory <#memory>`__
+Kernel module      spl
+When to change     TBD
+Data Type          uint
+Units              bytes
+Range              TBD
+Default            KMALLOC_MAX_SIZE / 4
+Change             Dynamic
+Versions Affected  v0.7.0
+================== ====================
+
+spl_kmem_alloc_warn
+~~~~~~~~~~~~~~~~~~~
+
+As a general rule ``kmem_alloc()`` allocations should be small,
+preferably just a few pages, since they must be physically contiguous.
+Therefore, a rate limited warning is printed to the console for any
+``kmem_alloc()`` which exceeds the threshold ``spl_kmem_alloc_warn``.
+
+The default warning threshold is set to eight pages but capped at 32K to
+accommodate systems using large pages. This value was selected to be
+small enough to ensure the largest allocations are quickly noticed and
+fixed, but large enough to avoid logging any warnings when an allocation
+size is larger than optimal but not a serious concern. Since this value
+is tunable, developers are encouraged to set it lower when testing so
+any new large allocations are quickly caught. These warnings may be
+disabled by setting the threshold to zero.
+
++---------------------+-----------------------------------------------+
+| spl_kmem_alloc_warn | Notes                                         |
++=====================+===============================================+
+| Tags                | `memory <#memory>`__                          |
++---------------------+-----------------------------------------------+
+| Kernel module       | spl                                           |
++---------------------+-----------------------------------------------+
+| When to change      | developers are encouraged to lower it when    |
+|                     | testing so any new, large allocations are     |
+|                     | quickly caught                                |
++---------------------+-----------------------------------------------+
+| Data Type           | uint                                          |
++---------------------+-----------------------------------------------+
+| Units               | bytes                                         |
++---------------------+-----------------------------------------------+
+| Range               | 0=disable the warnings                        |
++---------------------+-----------------------------------------------+
+| Default             | 32,768 (32 KiB)                               |
++---------------------+-----------------------------------------------+
+| Change              | Dynamic                                       |
++---------------------+-----------------------------------------------+
+| Versions Affected   | v0.7.0                                        |
++---------------------+-----------------------------------------------+
+
+spl_kmem_cache_expire
+~~~~~~~~~~~~~~~~~~~~~
+
+Cache expiration is part of default illumos cache behavior. The idea is
+that objects in magazines which have not been recently accessed should
+be returned to the slabs periodically. This is known as cache aging and
+when enabled objects will be typically returned after 15 seconds.
+
+On the other hand, Linux slabs are designed to never move objects back
+to the slabs unless there is memory pressure. This is possible because
+under Linux the cache will be notified when memory is low and objects
+can be released.
+
+By default only the Linux method is enabled. It has been shown to
+improve responsiveness on low memory systems and not negatively impact
+the performance of systems with more memory. This policy may be changed
+by setting the ``spl_kmem_cache_expire`` bit mask as follows; both
+policies may be enabled concurrently.
+
+===================== =================================================
+spl_kmem_cache_expire Notes
+===================== =================================================
+Tags                  `memory <#memory>`__
+Kernel module         spl
+When to change        TBD
+Data Type             bitmask
+Range                 0x01 - Aging (illumos), 0x02 - Low memory (Linux)
+Default               0x02
+Change                Dynamic
+Versions Affected     v0.6.1 to v0.8.x
+===================== =================================================
+
+spl_kmem_cache_kmem_limit
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Depending on the size of a memory cache object it may be backed by
+``kmalloc()`` or ``vmalloc()`` memory. This is because the size of the
+required allocation greatly impacts the best way to allocate the memory.
+
+When objects are small and only a small number of memory pages need to
+be allocated, ideally just one, then ``kmalloc()`` is very efficient.
+However, allocating multiple pages with ``kmalloc()`` gets increasingly
+expensive because the pages must be physically contiguous.
+
+For this reason we shift to ``vmalloc()`` for slabs of large objects,
+which removes the need for contiguous pages. ``vmalloc()`` cannot be
+used in all cases because there is significant locking overhead
+involved: ``vmalloc()`` takes a single global lock over the entire
+virtual address range, which serializes all allocations. Using slightly
+different allocation functions for small and large objects allows us to
+handle a wide range of object sizes.
+
+The ``spl_kmem_cache_kmem_limit`` value is used to determine this cutoff
+size. One quarter of the kernel's compiled PAGE_SIZE is used as the
+default value because
+`spl_kmem_cache_obj_per_slab <#spl-kmem-cache-obj-per-slab>`__ defaults
+to 16. With these default values, at most four contiguous pages are
+allocated.
+
+========================= ====================
+spl_kmem_cache_kmem_limit Notes
+========================= ====================
+Tags                      `memory <#memory>`__
+Kernel module             spl
+When to change            TBD
+Data Type                 uint
+Units                     pages
+Range                     TBD
+Default                   PAGE_SIZE / 4
+Change                    Dynamic
+Versions Affected         v0.7.0 to v0.8.x
+========================= ====================
+
+spl_kmem_cache_max_size
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_max_size`` is the maximum size of a kmem cache slab in
+MiB. This effectively limits the maximum cache object size to
+``spl_kmem_cache_max_size`` /
+`spl_kmem_cache_obj_per_slab <#spl-kmem-cache-obj-per-slab>`__. Kmem
+caches may not be created with objects larger than this limit.
+
+======================= =========================================
+spl_kmem_cache_max_size Notes
+======================= =========================================
+Tags                    `memory <#memory>`__
+Kernel module           spl
+When to change          TBD
+Data Type               uint
+Units                   MiB
+Range                   TBD
+Default                 4 for 32-bit kernel, 32 for 64-bit kernel
+Change                  Dynamic
+Versions Affected       v0.7.0
+======================= =========================================
+
+spl_kmem_cache_obj_per_slab
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_obj_per_slab`` is the preferred number of objects per
+slab in the kmem cache. In general, a larger value will increase the
+cache's memory footprint while decreasing the time required to perform
+an allocation. Conversely, a smaller value will minimize the footprint
+and improve cache reclaim time but individual allocations may take
+longer.
+
+=========================== ====================
+spl_kmem_cache_obj_per_slab Notes
+=========================== ====================
+Tags                        `memory <#memory>`__
+Kernel module               spl
+When to change              TBD
+Data Type                   uint
+Units                       kmem cache objects
+Range                       TBD
+Default                     8
+Change                      Dynamic
+Versions Affected           v0.7.0 to v0.8.x
+=========================== ====================
+
+spl_kmem_cache_obj_per_slab_min
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_obj_per_slab_min`` is the minimum number of objects
+allowed per slab. Normally slabs will contain
+`spl_kmem_cache_obj_per_slab <#spl-kmem-cache-obj-per-slab>`__ objects
+but for caches that contain very large objects it's desirable to only
+have a few, or even just one, object per slab.
+
+=============================== ===============================
+spl_kmem_cache_obj_per_slab_min Notes
+=============================== ===============================
+Tags                            `memory <#memory>`__
+Kernel module                   spl
+When to change                  debugging kmem cache operations
+Data Type                       uint
+Units                           kmem cache objects
+Range                           TBD
+Default                         1
+Change                          Dynamic
+Versions Affected               v0.7.0
+=============================== ===============================
+
+spl_kmem_cache_reclaim
+~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_reclaim`` prevents Linux from being able to rapidly
+reclaim all the memory held by the kmem caches. This may be useful in
+circumstances where it's preferable that Linux reclaim memory from some
+other subsystem first. Setting ``spl_kmem_cache_reclaim`` increases the
+likelihood of out-of-memory events on a memory-constrained system.
+
++------------------------+--------------------------------------------+
+| spl_kmem_cache_reclaim | Notes                                      |
++========================+============================================+
+| Tags                   | `memory <#memory>`__                       |
++------------------------+--------------------------------------------+
+| Kernel module          | spl                                        |
++------------------------+--------------------------------------------+
+| When to change         | TBD                                        |
++------------------------+--------------------------------------------+
+| Data Type              | boolean                                    |
++------------------------+--------------------------------------------+
+| Range                  | 0=enable rapid memory reclaim from kmem    |
+|                        | caches, 1=disable rapid memory reclaim     |
+|                        | from kmem caches                           |
++------------------------+--------------------------------------------+
+| Default                | 0                                          |
++------------------------+--------------------------------------------+
+| Change                 | Dynamic                                    |
++------------------------+--------------------------------------------+
+| Versions Affected      | v0.7.0                                     |
++------------------------+--------------------------------------------+
+
+spl_kmem_cache_slab_limit
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For small objects the Linux slab allocator should be used to make the
+most efficient use of the memory. However, large objects are not
+supported by the Linux slab allocator and therefore the SPL
+implementation is preferred. ``spl_kmem_cache_slab_limit`` is used to
+determine the cutoff between a small and large object.
+
+Objects of ``spl_kmem_cache_slab_limit`` or smaller will be allocated
+using the Linux slab allocator, large objects use the SPL allocator. A
+cutoff of 16 KiB was determined to be optimal for architectures using 4
+KiB pages.
+
++---------------------------+-----------------------------------------+
+| spl_kmem_cache_slab_limit | Notes                                   |
++===========================+=========================================+
+| Tags                      | `memory <#memory>`__                    |
++---------------------------+-----------------------------------------+
+| Kernel module             | spl                                     |
++---------------------------+-----------------------------------------+
+| When to change            | TBD                                     |
++---------------------------+-----------------------------------------+
+| Data Type                 | uint                                    |
++---------------------------+-----------------------------------------+
+| Units                     | bytes                                   |
++---------------------------+-----------------------------------------+
+| Range                     | TBD                                     |
++---------------------------+-----------------------------------------+
+| Default                   | 16,384 (16 KiB) when kernel PAGE_SIZE = |
+|                           | 4KiB, 0 for other PAGE_SIZE values      |
++---------------------------+-----------------------------------------+
+| Change                    | Dynamic                                 |
++---------------------------+-----------------------------------------+
+| Versions Affected         | v0.7.0                                  |
++---------------------------+-----------------------------------------+
+
+spl_max_show_tasks
+~~~~~~~~~~~~~~~~~~
+
+``spl_max_show_tasks`` is the limit of tasks per pending list in each
+taskq shown in ``/proc/spl/taskq`` and ``/proc/spl/taskq-all``. Reading
+the ProcFS files walks the lists with the lock held, and it could cause
+a lock-up if a list grows too large. If the list is larger than the
+limit, the string "(truncated)" is printed.
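+
+The taskq listings this limit applies to can be inspected directly::
+
+   cat /proc/spl/taskq
+   cat /proc/spl/taskq-all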
+ +================== =================================== +spl_max_show_tasks Notes +================== =================================== +Tags `taskq <#taskq>`__ +Kernel module spl +When to change TBD +Data Type uint +Units tasks reported +Range 0 disables the limit, 1 to MAX_UINT +Default 512 +Change Dynamic +Versions Affected v0.7.0 +================== =================================== + +spl_panic_halt +~~~~~~~~~~~~~~ + +``spl_panic_halt`` enables kernel panic upon assertion failures. When +not enabled, the asserting thread is halted to facilitate further +debugging. + ++-------------------+-------------------------------------------------+ +| spl_panic_halt | Notes | ++===================+=================================================+ +| Tags | `debug <#debug>`__, `panic <#panic>`__ | ++-------------------+-------------------------------------------------+ +| Kernel module | spl | ++-------------------+-------------------------------------------------+ +| When to change | when debugging assertions and kernel core dumps | +| | are desired | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=halt thread upon assertion, 1=panic kernel | +| | upon assertion | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.7.0 | ++-------------------+-------------------------------------------------+ + +spl_taskq_kick +~~~~~~~~~~~~~~ + +Upon writing a non-zero value to ``spl_taskq_kick``, all taskqs are +scanned. If any taskq has a pending task more than 5 seconds old, the +taskq spawns more threads. This can be useful in rare deadlock +situations caused by one or more taskqs not spawning a thread when it +should. + +================= ===================== +spl_taskq_kick Notes +================= ===================== +Tags `taskq <#taskq>`__ +Kernel module spl +When to change See description above +Data Type uint +Units N/A +Default 0 +Change Dynamic +Versions Affected v0.7.0 +================= ===================== + +spl_taskq_thread_bind +~~~~~~~~~~~~~~~~~~~~~ + +``spl_taskq_thread_bind`` enables binding taskq threads to specific +CPUs, distributed evenly over the available CPUs. By default, this +behavior is disabled to allow the Linux scheduler the maximum +flexibility to determine where a thread should run. 
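+
+Because this setting is read at spl module load time, it is normally set
+with a modprobe option rather than at runtime (a sketch)::
+
+   # /etc/modprobe.d/spl.conf
+   options spl spl_taskq_thread_bind=1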
+
++-----------------------+---------------------------------------------+
+| spl_taskq_thread_bind | Notes                                       |
++=======================+=============================================+
+| Tags                  | `CPU <#CPU>`__, `taskq <#taskq>`__          |
++-----------------------+---------------------------------------------+
+| Kernel module         | spl                                         |
++-----------------------+---------------------------------------------+
+| When to change        | when debugging CPU scheduling options       |
++-----------------------+---------------------------------------------+
+| Data Type             | boolean                                     |
++-----------------------+---------------------------------------------+
+| Range                 | 0=taskqs are not bound to specific CPUs,    |
+|                       | 1=taskqs are bound to CPUs                  |
++-----------------------+---------------------------------------------+
+| Default               | 0                                           |
++-----------------------+---------------------------------------------+
+| Change                | prior to loading spl kernel module          |
++-----------------------+---------------------------------------------+
+| Versions Affected     | v0.7.0                                      |
++-----------------------+---------------------------------------------+
+
+spl_taskq_thread_dynamic
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_taskq_thread_dynamic`` enables dynamic taskqs. When enabled,
+taskqs which set the TASKQ_DYNAMIC flag will by default create only a
+single thread. New threads will be created on demand up to a maximum
+allowed number to facilitate the completion of outstanding tasks.
+Threads which are no longer needed are promptly destroyed. By default
+this behavior is enabled but it can be disabled.
+
+See also
+`zfs_zil_clean_taskq_nthr_pct <#zfs-zil-clean-taskq-nthr-pct>`__,
+`zio_taskq_batch_pct <#zio-taskq-batch-pct>`__
+
++--------------------------+------------------------------------------+
+| spl_taskq_thread_dynamic | Notes                                    |
++==========================+==========================================+
+| Tags                     | `taskq <#taskq>`__                       |
++--------------------------+------------------------------------------+
+| Kernel module            | spl                                      |
++--------------------------+------------------------------------------+
+| When to change           | disable for performance analysis or      |
+|                          | troubleshooting                          |
++--------------------------+------------------------------------------+
+| Data Type                | boolean                                  |
++--------------------------+------------------------------------------+
+| Range                    | 0=taskq threads are not dynamic, 1=taskq |
+|                          | threads are dynamically created and      |
+|                          | destroyed                                |
++--------------------------+------------------------------------------+
+| Default                  | 1                                        |
++--------------------------+------------------------------------------+
+| Change                   | prior to loading spl kernel module       |
++--------------------------+------------------------------------------+
+| Versions Affected        | v0.7.0                                   |
++--------------------------+------------------------------------------+
+
+spl_taskq_thread_priority
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+| ``spl_taskq_thread_priority`` allows newly created taskq threads to
+  set a non-default scheduler priority. When enabled, the priority
+  specified when a taskq is created will be applied to all threads
+  created by that taskq.
+| When disabled, all threads will use the default Linux kernel thread
+  priority.
+
++---------------------------+-----------------------------------------+
+| spl_taskq_thread_priority | Notes                                   |
++===========================+=========================================+
+| Tags                      | `CPU <#CPU>`__, `taskq <#taskq>`__      |
++---------------------------+-----------------------------------------+
+| Kernel module             | spl                                     |
++---------------------------+-----------------------------------------+
+| When to change            | when troubleshooting CPU                |
+|                           | scheduling-related performance issues   |
++---------------------------+-----------------------------------------+
+| Data Type                 | boolean                                 |
++---------------------------+-----------------------------------------+
+| Range                     | 0=taskq threads use the default Linux   |
+|                           | kernel thread priority, 1=taskq threads |
+|                           | use the priority specified when the     |
+|                           | taskq was created                       |
++---------------------------+-----------------------------------------+
+| Default                   | 1                                       |
++---------------------------+-----------------------------------------+
+| Change                    | prior to loading spl kernel module      |
++---------------------------+-----------------------------------------+
+| Versions Affected         | v0.7.0                                  |
++---------------------------+-----------------------------------------+
+
+spl_taskq_thread_sequential
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_taskq_thread_sequential`` is the number of items a taskq worker
+thread must handle without interruption before requesting a new worker
+thread be spawned. ``spl_taskq_thread_sequential`` controls how quickly
+taskqs ramp up the number of threads processing the queue. Because Linux
+thread creation and destruction are relatively inexpensive, a small
+default value has been selected. Thus threads are created aggressively,
+which is typically desirable. Increasing this value results in a slower
+thread creation rate which may be preferable for some configurations.
+
+=========================== ==================================
+spl_taskq_thread_sequential Notes
+=========================== ==================================
+Tags                        `CPU <#CPU>`__, `taskq <#taskq>`__
+Kernel module               spl
+When to change              TBD
+Data Type                   int
+Units                       taskq items
+Range                       1 to MAX_INT
+Default                     4
+Change                      Dynamic
+Versions Affected           v0.7.0
+=========================== ==================================
+
+spl_kmem_cache_kmem_threads
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_kmem_threads`` shows the current number of
+``spl_kmem_cache`` threads. This task queue is responsible for
+allocating new slabs for use by the kmem caches. For the majority of
+systems and workloads only a small number of threads are required.
+
++-----------------------------+---------------------------------------+
+| spl_kmem_cache_kmem_threads | Notes                                 |
++=============================+=======================================+
+| Tags                        | `CPU <#CPU>`__, `memory <#memory>`__  |
++-----------------------------+---------------------------------------+
+| Kernel module               | spl                                   |
++-----------------------------+---------------------------------------+
+| When to change              | read-only                             |
++-----------------------------+---------------------------------------+
+| Data Type                   | int                                   |
++-----------------------------+---------------------------------------+
+| Range                       | 1 to MAX_INT                          |
++-----------------------------+---------------------------------------+
+| Units                       | threads                               |
++-----------------------------+---------------------------------------+
+| Default                     | 4                                     |
++-----------------------------+---------------------------------------+
+| Change                      | read-only, can only be changed prior  |
+|                             | to spl module load                    |
++-----------------------------+---------------------------------------+
+| Versions Affected           | v0.7.0                                |
++-----------------------------+---------------------------------------+
+
+spl_kmem_cache_magazine_size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_magazine_size`` sets the maximum cache magazine size.
+Cache magazines are an optimization designed to minimize the cost of
+allocating memory. They do this by keeping a per-CPU cache of recently
+freed objects, which can then be reallocated without taking a lock. This
+can improve performance on highly contended caches. However, because
+objects in magazines will prevent otherwise empty slabs from being
+immediately released, this may not be ideal for low memory machines.
+
+When this value is set to 0 the magazine size will be automatically
+determined based on the object size. Otherwise magazines will be limited
+to 2-256 objects per magazine (per CPU). Magazines cannot be disabled
+entirely in this implementation.
+ ++------------------------------+--------------------------------------+ +| spl_kmem_cache_magazine_size | Notes | ++==============================+======================================+ +| Tags | `CPU <#CPU>`__, `memory <#memory>`__ | ++------------------------------+--------------------------------------+ +| Kernel module | spl | ++------------------------------+--------------------------------------+ +| When to change | | ++------------------------------+--------------------------------------+ +| Data Type | int | ++------------------------------+--------------------------------------+ +| Units | threads | ++------------------------------+--------------------------------------+ +| Range | 0=automatically scale magazine size, | +| | otherwise 2 to 256 | ++------------------------------+--------------------------------------+ +| Default | 0 | ++------------------------------+--------------------------------------+ +| Change | read-only, can only be changed prior | +| | to spl module load | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.7.0 | ++------------------------------+--------------------------------------+ diff --git a/_sources/Performance and Tuning/Workload Tuning.rst.txt b/_sources/Performance and Tuning/Workload Tuning.rst.txt new file mode 100644 index 000000000..18f0edbc9 --- /dev/null +++ b/_sources/Performance and Tuning/Workload Tuning.rst.txt @@ -0,0 +1,789 @@ +Workload Tuning +=============== + +Below are tips for various workloads. + +.. contents:: Table of Contents + :local: + +.. _basic_concepts: + +Basic concepts +-------------- + +Descriptions of ZFS internals that have an effect on application +performance follow. + +.. _adaptive_replacement_cache: + +Adaptive Replacement Cache +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +For decades, operating systems have used RAM as a cache to avoid the +necessity of waiting on disk IO, which is extremely slow. This concept +is called page replacement. Until ZFS, virtually all filesystems used +the Least Recently Used (LRU) page replacement algorithm in which the +least recently used pages are the first to be replaced. Unfortunately, +the LRU algorithm is vulnerable to cache flushes, where a brief change +in workload that occurs occasionally removes all frequently used data +from cache. The Adaptive Replacement Cache (ARC) algorithm was +implemented in ZFS to replace LRU. It solves this problem by maintaining +four lists: + +#. A list for recently cached entries. +#. A list for recently cached entries that have been accessed more than + once. +#. A list for entries evicted from #1. +#. A list of entries evicited from #2. + +Data is evicted from the first list while an effort is made to keep data +in the second list. In this way, ARC is able to outperform LRU by +providing a superior hit rate. + +In addition, a dedicated cache device (typically a SSD) can be added to +the pool, with +``zpool add POOLNAME cache DEVICENAME``. The cache +device is managed by the L2ARC, which scans entries that are next to be +evicted and writes them to the cache device. The data stored in ARC and +L2ARC can be controlled via the ``primarycache`` and ``secondarycache`` +zfs properties respectively, which can be set on both zvols and +datasets. Possible settings are ``all``, ``none`` and ``metadata``. It +is possible to improve performance when a zvol or dataset hosts an +application that does its own caching by caching only metadata. One +example would be a virtual machine using ZFS. 
Another would be a +database system which manages its own cache (Oracle for instance). +PostgreSQL, by contrast, depends on the OS-level file cache for the +majority of cache. + +.. _alignment_shift_ashift: + +Alignment Shift (ashift) +~~~~~~~~~~~~~~~~~~~~~~~~ + +Top-level vdevs contain an internal property called ashift, which stands +for alignment shift. It is set at vdev creation and it is immutable. It +can be read using the ``zdb`` command. It is calculated as the maximum +base 2 logarithm of the physical sector size of any child vdev and it +alters the disk format such that writes are always done according to it. +This makes 2^ashift the smallest possible IO on a vdev. Configuring +ashift correctly is important because partial sector writes incur a +penalty where the sector must be read into a buffer before it can be +written. ZFS makes the implicit assumption that the sector size reported +by drives is correct and calculates ashift based on that. + +In an ideal world, physical sector size is always reported correctly and +therefore, this requires no attention. Unfortunately, this is not the +case. The sector size on all storage devices was 512-bytes prior to the +creation of flash-based solid state drives. Some operating systems, such +as Windows XP, were written under this assumption and will not function +when drives report a different sector size. + +Flash-based solid state drives came to market around 2007. These devices +report 512-byte sectors, but the actual flash pages, which roughly +correspond to sectors, are never 512-bytes. The early models used +4096-byte pages while the newer models have moved to an 8192-byte page. +In addition, "Advanced Format" hard drives have been created which also +use a 4096-byte sector size. Partial page writes suffer from similar +performance degradation as partial sector writes. In some cases, the +design of NAND-flash makes the performance degradation even worse, but +that is beyond the scope of this description. + +Reporting the correct sector sizes is the responsibility the block +device layer. This unfortunately has made proper handling of devices +that misreport drives different across different platforms. The +respective methods are as follows: + +- `sd.conf `__ + on illumos +- `gnop(8) `__ + on FreeBSD; see for example `FreeBSD on 4K sector + drives `__ + (2011-01-01) +- `ashift= `__ + on ZFS on Linux +- -o ashift= also works with both MacZFS (pool version 8) and ZFS-OSX + (pool version 5000). + +-o ashift= is convenient, but it is flawed in that the creation of pools +containing top level vdevs that have multiple optimal sector sizes +require the use of multiple commands. `A newer +syntax `__ +that will rely on the actual sector sizes has been discussed as a cross +platform replacement and will likely be implemented in the future. + +In addition, there is a `database of +drives known to misreport sector +sizes `__ +to the ZFS on Linux project. It is used to automatically adjust ashift +without the assistance of the system administrator. This approach is +unable to fully compensate for misreported sector sizes whenever drive +identifiers are used ambiguously (e.g. virtual machines, iSCSI LUNs, +some rare SSDs), but it does a great amount of good. The format is +roughly compatible with illumos' sd.conf and it is expected that other +implementations will integrate the database in future releases. 
Strictly +speaking, this database does not belong in ZFS, but the difficulty of +patching the Linux kernel (especially older ones) necessitated that this +be implemented in ZFS itself for Linux. The same is true for MacZFS. +However, FreeBSD and illumos are both able to implement this in the +correct layer. + +Compression +~~~~~~~~~~~ + +Internally, ZFS allocates data using multiples of the device's sector +size, typically either 512 bytes or 4KB (see above). When compression is +enabled, a smaller number of sectors can be allocated for each block. +The uncompressed block size is set by the ``recordsize`` (defaults to +128KB) or ``volblocksize`` (defaults to 8KB) property (for filesystems +vs volumes). + +The following compression algorithms are available: + +- LZ4 + + - New algorithm added after feature flags were created. It is + significantly superior to LZJB in all metrics tested. It is `new + default compression algorithm `__ + (compression=on) in OpenZFS. + It is available on all platforms as of 2020. + +- LZJB + + - Original default compression algorithm (compression=on) for ZFS. + It was created to satisfy the desire for a compression algorithm + suitable for use in filesystems. Specifically, that it provides + fair compression, has a high compression speed, has a high + decompression speed and detects incompressible data + quickly. + +- GZIP (1 through 9) + + - Classic Lempel-Ziv implementation. It provides high compression, + but it often makes IO CPU-bound. + +- ZLE (Zero Length Encoding) + + - A very simple algorithm that only compresses zeroes. + +- ZSTD (Zstandard) + + - Zstandard is a modern, high performance, general compression + algorithm which provides similar or better compression levels to + GZIP, but with much better performance. Zstandard offers a very + wide range of performance/compression trade-off, and is backed by + an extremely fast decoder. + It is available from `OpenZFS 2.0 version `__. + +If you want to use compression and are uncertain which to use, use LZ4. +It averages a 2.1:1 compression ratio while gzip-1 averages 2.7:1, but +gzip is much slower. Both figures are obtained from `testing by the LZ4 +project `__ on the Silesia corpus. The +greater compression ratio of gzip is usually only worthwhile for rarely +accessed data. + +.. _raid_z_stripe_width: + +RAID-Z stripe width +~~~~~~~~~~~~~~~~~~~ + +Choose a RAID-Z stripe width based on your IOPS needs and the amount of +space you are willing to devote to parity information. If you need more +IOPS, use fewer disks per stripe. If you need more usable space, use +more disks per stripe. Trying to optimize your RAID-Z stripe width based +on exact numbers is irrelevant in nearly all cases. See this `blog +post `__ +for more details. + +.. _dataset_recordsize: + +Dataset recordsize +~~~~~~~~~~~~~~~~~~ + +ZFS datasets use an internal recordsize of 128KB by default. The dataset +recordsize is the basic unit of data used for internal copy-on-write on +files. Partial record writes require that data be read from either ARC +(cheap) or disk (expensive). recordsize can be set to any power of 2 +from 512 bytes to 1 megabyte. Software that writes in fixed record +sizes (e.g. databases) will benefit from the use of a matching +recordsize. + +Changing the recordsize on a dataset will only take effect for new +files. If you change the recordsize because your application should +perform better with a different one, you will need to recreate its +files. A cp followed by a mv on each file is sufficient. 
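+
+A minimal sketch of this procedure, assuming a dataset named ``tank/db``
+whose application writes in 16K records:
+
+::
+
+   zfs set recordsize=16K tank/db
+   # rewrite an existing file so it is stored with the new recordsize
+   cp /tank/db/datafile /tank/db/datafile.tmp
+   mv /tank/db/datafile.tmp /tank/db/datafile
+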
Alternatively, +send/recv should recreate the files with the correct recordsize when a +full receive is done. + +.. _larger_record_sizes: + +Larger record sizes +^^^^^^^^^^^^^^^^^^^ + +Record sizes of up to 16M are supported with the large_blocks pool +feature, which is enabled by default on new pools on systems that +support it. + +Record sizes larger than 1M were disabled by default +before openZFS v2.2, +unless the zfs_max_recordsize kernel module parameter was set to allow +sizes higher than 1M. + +\`zfs send\` operations must specify -L +to ensure that larger than 128KB blocks are sent and the receiving pools +must support the large_blocks feature. + +.. _zvol_volblocksize: + +zvol volblocksize +~~~~~~~~~~~~~~~~~ + +Zvols have a ``volblocksize`` property that is analogous to ``recordsize``. +Current default (16KB since v2.2) balances the metadata overhead, compression +opportunities and decent space efficiency on majority of pool configurations +due to 4KB disk physical block rounding (especially on RAIDZ and DRAID), +while incurring some write amplification on guest FSes that run with smaller +block sizes [#VOLBLOCKSIZE]_. + +Users are advised to test their scenarios and see whether the ``volblocksize`` +needs to be changed to favor one or the other: + +- sector alignment of guest FS is crucial +- most of guest FSes use default block size of 4-8KB, so: + + - Larger ``volblocksize`` can help with mostly sequential workloads and + will gain a compression efficiency + + - Smaller ``volblocksize`` can help with random workloads and minimize + IO amplification, but will use more metadata + (e.g. more small IOs will be generated by ZFS) and may have worse + space efficiency (especially on RAIDZ and DRAID) + + - It's meaningless to set ``volblocksize`` less than guest FS's block size + or :ref:`ashift ` + + - See :ref:`Dataset recordsize ` + for additional information + +Deduplication +~~~~~~~~~~~~~ + +Deduplication uses an on-disk hash table, using `extensible +hashing `__ as +implemented in the ZAP (ZFS Attribute Processor). Each cached entry uses +slightly more than 320 bytes of memory. The DDT code relies on ARC for +caching the DDT entries, such that there is no double caching or +internal fragmentation from the kernel memory allocator. Each pool has a +global deduplication table shared across all datasets and zvols on which +deduplication is enabled. Each entry in the hash table is a record of a +unique block in the pool. (Where the block size is set by the +``recordsize`` or ``volblocksize`` properties.) + +The hash table (also known as the DDT or DeDup Table) must be accessed +for every dedup-able block that is written or freed (regardless of +whether it has multiple references). If there is insufficient memory for +the DDT to be cached in memory, each cache miss will require reading a +random block from disk, resulting in poor performance. For example, if +operating on a single 7200RPM drive that can do 100 io/s, uncached DDT +reads would limit overall write throughput to 100 blocks per second, or +400KB/s with 4KB blocks. + +The consequence is that sufficient memory to store deduplication data is +required for good performance. The deduplication data is considered +metadata and therefore can be cached if the ``primarycache`` or +``secondarycache`` properties are set to ``metadata``. In addition, the +deduplication table will compete with other metadata for metadata +storage, which can have a negative effect on performance. 
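+
+On a pool that already uses deduplication, the current number of DDT
+entries (and therefore a rough memory estimate at slightly more than 320
+bytes per entry) can be checked as follows (``tank`` is a placeholder
+pool name):
+
+::
+
+   zpool status -D tank
+   zdb -DD tank
+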
Simulation of +the number of deduplication table entries needed for a given pool can be +done using the -D option to zdb. Then a simple multiplication by +320-bytes can be done to get the approximate memory requirements. +Alternatively, you can estimate an upper bound on the number of unique +blocks by dividing the amount of storage you plan to use on each dataset +(taking into account that partial records each count as a full +recordsize for the purposes of deduplication) by the recordsize and each +zvol by the volblocksize, summing and then multiplying by 320-bytes. + +.. _metaslab_allocator: + +Metaslab Allocator +~~~~~~~~~~~~~~~~~~ + +ZFS top level vdevs are divided into metaslabs from which blocks can be +independently allocated so allow for concurrent IOs to perform +allocations without blocking one another. At present, `there is a +regression `__ on the +Linux and Mac OS X ports that causes serialization to occur. + +By default, the selection of a metaslab is biased toward lower LBAs to +improve performance of spinning disks, but this does not make sense on +solid state media. This behavior can be adjusted globally by setting the +ZFS module's global metaslab_lba_weighting_enabled tuanble to 0. This +tunable is only advisable on systems that only use solid state media for +pools. + +The metaslab allocator will allocate blocks on a first-fit basis when a +metaslab has more than or equal to 4 percent free space and a best-fit +basis when a metaslab has less than 4 percent free space. The former is +much faster than the latter, but it is not possible to tell when this +behavior occurs from the pool's free space. However, the command ``zdb +-mmm $POOLNAME`` will provide this information. + +.. _pool_geometry: + +Pool Geometry +~~~~~~~~~~~~~ + +If small random IOPS are of primary importance, mirrored vdevs will +outperform raidz vdevs. Read IOPS on mirrors will scale with the number +of drives in each mirror while raidz vdevs will each be limited to the +IOPS of the slowest drive. + +If sequential writes are of primary importance, raidz will outperform +mirrored vdevs. Sequential write throughput increases linearly with the +number of data disks in raidz while writes are limited to the slowest +drive in mirrored vdevs. Sequential read performance should be roughly +the same on each. + +Both IOPS and throughput will increase by the respective sums of the +IOPS and throughput of each top level vdev, regardless of whether they +are raidz or mirrors. + +.. _whole_disks_versus_partitions: + +Whole Disks versus Partitions +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +ZFS will behave differently on different platforms when given a whole +disk. + +On illumos, ZFS attempts to enable the write cache on a whole disk. The +illumos UFS driver cannot ensure integrity with the write cache enabled, +so by default Sun/Solaris systems using UFS file system for boot were +shipped with drive write cache disabled (long ago, when Sun was still an +independent company). For safety on illumos, if ZFS is not given the +whole disk, it could be shared with UFS and thus it is not appropriate +for ZFS to enable write cache. In this case, the write cache setting is +not changed and will remain as-is. Today, most vendors ship drives with +write cache enabled by default. + +On Linux, the Linux IO elevator is largely redundant given that ZFS has +its own IO elevator. + +ZFS will also create a GPT partition table own partitions when given a +whole disk under illumos on x86/amd64 and on Linux. 
This is mainly to +make booting through UEFI possible because UEFI requires a small FAT +partition to be able to boot the system. The ZFS driver will be able to +tell the difference between whether the pool had been given the entire +disk or not via the whole_disk field in the label. + +This is not done on FreeBSD. Pools created by FreeBSD will always have +the whole_disk field set to true, such that a pool imported on another +platform that was created on FreeBSD will always be treated as the whole +disks were given to ZFS. + +.. _OS_specific: + +OS/distro-specific recommendations +---------------------------------- + +.. _linux_specific: + +Linux +~~~~~ + +init_on_alloc +^^^^^^^^^^^^^ +Some Linux distributions (at least Debian, Ubuntu) enable +``init_on_alloc`` option as security precaution by default. +This option can help to [#init_on_alloc]_: + + prevent possible information leaks and + make control-flow bugs that depend on uninitialized values more + deterministic. + +Unfortunately, it can lower ARC throughput considerably +(see `bug `__). + +If you're ready to cope with these security risks [#init_on_alloc]_, +you may disable it +by setting ``init_on_alloc=0`` in the GRUB kernel boot parameters. + +.. _general_recommendations: + +General recommendations +----------------------- + +.. _alignment_shift: + +Alignment shift +~~~~~~~~~~~~~~~ + +Make sure that you create your pools such that the vdevs have the +correct alignment shift for your storage device's size. if dealing with +flash media, this is going to be either 12 (4K sectors) or 13 (8K +sectors). For SSD ephemeral storage on Amazon EC2, the proper setting is +12. + +.. _atime_updates: + +Atime Updates +~~~~~~~~~~~~~ + +Set either relatime=on or atime=off to minimize IOs used to update +access time stamps. For backward compatibility with a small percentage +of software that supports it, relatime is preferred when available and +should be set on your entire pool. atime=off should be used more +selectively. + +.. _free_space: + +Free Space +~~~~~~~~~~ + +Keep pool free space above 10% to avoid many metaslabs from reaching the +4% free space threshold to switch from first-fit to best-fit allocation +strategies. When the threshold is hit, the :ref:`metaslab_allocator` becomes very CPU +intensive in an attempt to protect itself from fragmentation. This +reduces IOPS, especially as more metaslabs reach the 4% threshold. + +The recommendation is 10% rather than 5% because metaslabs selection +considers both location and free space unless the global +metaslab_lba_weighting_enabled tunable is set to 0. When that tunable is +0, ZFS will consider only free space, so the the expense of the best-fit +allocator can be avoided by keeping free space above 5%. That setting +should only be used on systems with pools that consist of solid state +drives because it will reduce sequential IO performance on mechanical +disks. + +.. _lz4_compression: + +LZ4 compression +~~~~~~~~~~~~~~~ + +Set compression=lz4 on your pools' root datasets so that all datasets +inherit it unless you have a reason not to enable it. Userland tests of +LZ4 compression of incompressible data in a single thread has shown that +it can process 10GB/sec, so it is unlikely to be a bottleneck even on +incompressible data. Furthermore, incompressible data will be stored +without compression such that reads of incompressible data with +compression enabled will not be subject to decompression. 
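+
+A typical way to apply this recommendation is to set the property once
+on the pool's root dataset and let all children inherit it (the pool
+name is a placeholder):
+
+::
+
+   zfs set compression=lz4 tank
+   zfs get -r compression tank
+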
Writes are so +fast that in-compressible data is unlikely to see a performance penalty +from the use of LZ4 compression. The reduction in IO from LZ4 will +typically be a performance win. + +Note that larger record sizes will increase compression ratios on +compressible data by allowing compression algorithms to process more +data at a time. + +.. _nvme_low_level_formatting_link: + +NVMe low level formatting +~~~~~~~~~~~~~~~~~~~~~~~~~ + +See :ref:`nvme_low_level_formatting`. + +.. _pool_geometry_1: + +Pool Geometry +~~~~~~~~~~~~~ + +Do not put more than ~16 disks in raidz. The rebuild times on mechanical +disks will be excessive when the pool is full. + +.. _synchronous_io: + +Synchronous I/O +~~~~~~~~~~~~~~~ + +If your workload involves fsync or O_SYNC and your pool is backed by +mechanical storage, consider adding one or more SLOG devices. Pools that +have multiple SLOG devices will distribute ZIL operations across them. +The best choice for SLOG device(s) are likely Optane / 3D XPoint SSDs. +See :ref:`optane_3d_xpoint_ssds` +for a description of them. If an Optane / 3D XPoint SSD is an option, +the rest of this section on synchronous I/O need not be read. If Optane +/ 3D XPoint SSDs is not an option, see +:ref:`nand_flash_ssds` for suggestions +for NAND flash SSDs and also read the information below. + +To ensure maximum ZIL performance on NAND flash SSD-based SLOG devices, +you should also overprovison spare area to increase +IOPS [#ssd_iops]_. Only +about 4GB is needed, so the rest can be left as overprovisioned storage. +The choice of 4GB is somewhat arbitrary. Most systems do not write +anything close to 4GB to ZIL between transaction group commits, so +overprovisioning all storage beyond the 4GB partition should be alright. +If a workload needs more, then make it no more than the maximum ARC +size. Even under extreme workloads, ZFS will not benefit from more SLOG +storage than the maximum ARC size. That is half of system memory on +Linux and 3/4 of system memory on illumos. + +.. _overprovisioning_by_secure_erase_and_partition_table_trick: + +Overprovisioning by secure erase and partition table trick +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +You can do this with a mix of a secure erase and a partition table +trick, such as the following: + +#. Run a secure erase on the NAND-flash SSD. +#. Create a partition table on the NAND-flash SSD. +#. Create a 4GB partition. +#. Give the partition to ZFS to use as a log device. + +If using the secure erase and partition table trick, do *not* use the +unpartitioned space for other things, even temporarily. That will reduce +or eliminate the overprovisioning by marking pages as dirty. + +Alternatively, some devices allow you to change the sizes that they +report.This would also work, although a secure erase should be done +prior to changing the reported size to ensure that the SSD recognizes +the additional spare area. Changing the reported size can be done on +drives that support it with \`hdparm -N \` on systems that have +laptop-mode-tools. + +.. _nvme_overprovisioning: + +NVMe overprovisioning +^^^^^^^^^^^^^^^^^^^^^ + +On NVMe, you can use namespaces to achieve overprovisioning: + +#. Do a sanitize command as a precaution to ensure the device is + completely clean. +#. Delete the default namespace. +#. Create a new namespace of size 4GB. +#. Give the namespace to ZFS to use as a log device. e.g. zfs add tank + log /dev/nvme1n1 + +.. 
_whole_disks: + +Whole disks +~~~~~~~~~~~ + +Whole disks should be given to ZFS rather than partitions. If you must +use a partition, make certain that the partition is properly aligned to +avoid read-modify-write overhead. See the section on +:ref:`Alignment Shift (ashift) ` +for a description of proper alignment. Also, see the section on +:ref:`Whole Disks versus Partitions ` +for a description of changes in ZFS behavior when operating on a +partition. + +Single disk RAID 0 arrays from RAID controllers are not equivalent to +whole disks. The :ref:`hardware_raid_controllers` page +explains in detail. + +.. _bit_torrent: + +Bit Torrent +----------- + +Bit torrent performs 16KB random reads/writes. The 16KB writes cause +read-modify-write overhead. The read-modify-write overhead can reduce +performance by a factor of 16 with 128KB record sizes when the amount of +data written exceeds system memory. This can be avoided by using a +dedicated dataset for bit torrent downloads with recordsize=16KB. + +When the files are read sequentially through a HTTP server, the random +nature in which the files were generated creates fragmentation that has +been observed to reduce sequential read performance by a factor of two +on 7200RPM hard disks. If performance is a problem, fragmentation can be +eliminated by rewriting the files sequentially in either of two ways: + +The first method is to configure your client to download the files to a +temporary directory and then copy them into their final location when +the downloads are finished, provided that your client supports this. + +The second method is to use send/recv to recreate a dataset +sequentially. + +In practice, defragmenting files obtained through bit torrent should +only improve performance when the files are stored on magnetic storage +and are subject to significant sequential read workloads after creation. + +.. _database_workloads: + +Database workloads +------------------ + +Setting ``redundant_metadata=most`` can increase IOPS by at least a few +percentage points by eliminating redundant metadata at the lowest level +of the indirect block tree. This comes with the caveat that data loss +will occur if a metadata block pointing to data blocks is corrupted and +there are no duplicate copies, but this is generally not a problem in +production on mirrored or raidz vdevs. + +MySQL +~~~~~ + +InnoDB +^^^^^^ + +Make separate datasets for InnoDB's data files and log files. Set +``recordsize=16K`` on InnoDB's data files to avoid expensive partial record +writes and leave recordsize=128K on the log files. Set +``primarycache=metadata`` on both to prefer InnoDB's +caching [#mysql_basic]_. +Set ``logbias=throughput`` on the data to stop ZIL from writing twice. + +Set ``skip-innodb_doublewrite`` in my.cnf to prevent innodb from writing +twice. The double writes are a data integrity feature meant to protect +against corruption from partially-written records, but those are not +possible on ZFS. It should be noted that `Percona’s +blog had advocated `__ +using an ext4 configuration where double writes were +turned off for a performance gain, but later recanted it because it +caused data corruption. Following a well timed power failure, an in +place filesystem such as ext4 can have half of a 8KB record be old while +the other half would be new. This would be the corruption that caused +Percona to recant its advice. However, ZFS’ copy on write design would +cause it to return the old correct data following a power failure (no +matter what the timing is). 
That prevents the corruption that the double +write feature is intended to prevent from ever happening. The double +write feature is therefore unnecessary on ZFS and can be safely turned +off for better performance. + +On Linux, the driver's AIO implementation is a compatibility shim that +just barely passes the POSIX standard. InnoDB performance suffers when +using its default AIO codepath. Set ``innodb_use_native_aio=0`` and +``innodb_use_atomic_writes=0`` in my.cnf to disable AIO. Both of these +settings must be disabled to disable AIO. + +PostgreSQL +~~~~~~~~~~ + +Make separate datasets for PostgreSQL's data and WAL. Set +``compression=lz4`` and ``recordsize=32K`` (64K also work well, as +does the 128K default) on both. Configure ``full_page_writes = off`` +for PostgreSQL, as ZFS will never commit a partial write. For a database +with large updates, experiment with ``logbias=throughput`` on +PostgreSQL's data to avoid writing twice, but be aware that with this +setting smaller updates can cause severe fragmentation. + +SQLite +~~~~~~ + +Make a separate dataset for the database. Set the recordsize to 64K. Set +the SQLite page size to 65536 +bytes [#sqlite_ps]_. + +Note that SQLite databases typically are not exercised enough to merit +special tuning, but this will provide it. Note the side effect on cache +size mentioned at +SQLite.org [#sqlite_ps_change]_. + +.. _file_servers: + +File servers +------------ + +Create a dedicated dataset for files being served. + +See +:ref:`Sequential workloads ` +for configuration recommendations. + +Samba +~~~~~ +Windows/DOS clients doesn't support case sensitive file names. +If your main workload won't need case sensitivity for other supported clients, +create dataset with ``zfs create -o casesensitivity=insensitive`` +so Samba may search filenames faster in future [#FS_CASEFOLD_FL]_. + +See ``case sensitive`` option in +`smb.conf(5) `__. + +.. _sequential_workloads: + +Sequential workloads +-------------------- + +Set ``recordsize=1M`` on datasets that are subject to sequential workloads. +Read +:ref:`Larger record sizes ` +for documentation on things that should be known before setting 1M +record sizes. + +Set ``compression=lz4`` as per the general recommendation for :ref:`LZ4 +compression `. + +.. _video_games_directories: + +Video games directories +----------------------- + +Create a dedicated dataset, use chown to make it user accessible (or +create a directory under it and use chown on that) and then configure +the game download application to place games there. Specific information +on how to configure various ones is below. + +See +:ref:`Sequential workloads ` +for configuration recommendations before installing games. + +Note that the performance gains from this tuning are likely to be small +and limited to load times. However, the combination of 1M records and +LZ4 will allow more games to be stored, which is why this tuning is +documented despite the performance gains being limited. A steam library +of 300 games (mostly from humble bundle) that had these tweaks applied +to it saw 20% space savings. Both faster load times and significant +space savings are possible on compressible games when this tuning has +been done. Games whose assets are already compressed will see little to +no benefit. + +Lutris +~~~~~~ + +Open the context menu by left clicking on the triple bar icon in the +upper right. Go to "Preferences" and then the "System options" tab. +Change the default installation directory and click save. 
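+
+For example, a dedicated dataset for such game installations could be
+created and made user-accessible as follows (the dataset name,
+mountpoint and user are assumptions):
+
+::
+
+   zfs create -o recordsize=1M -o compression=lz4 tank/games
+   chown -R gamer:gamer /tank/games
+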
+ +Steam +~~~~~ + +Go to "Settings" -> "Downloads" -> "Steam Library Folders" and use "Add +Library Folder" to set the directory for steam to use to store games. +Make sure to set it to the default by right clicking on it and clicking +"Make Default Folder" before closing the dialogue. + +If you'll use Proton to run non-native games, +create dataset with ``zfs create -o casesensitivity=insensitive`` +so Wine may search filenames faster in future [#FS_CASEFOLD_FL]_. + +.. _wine: + +Wine +---- + +Windows file systems' standard behavior is to be case-insensitive. +Create dataset with ``zfs create -o casesensitivity=insensitive`` +so Wine may search filenames faster in future [#FS_CASEFOLD_FL]_. + +.. _virtual_machines: + +Virtual machines +---------------- + +Virtual machine images on ZFS should be stored using either zvols or raw +files to avoid unnecessary overhead. The recordsize/volblocksize and +guest filesystem may be configured to match to avoid overhead from +partial record modification, see :ref:`zvol volblocksize `. +If raw files are used, a separate dataset should be used to make it easy to configure +recordsize independently of other things stored on ZFS. + +.. _qemu_kvm_xen: + +QEMU / KVM / Xen +~~~~~~~~~~~~~~~~ + +AIO should be used to maximize IOPS when using files for guest storage. + +.. rubric:: Footnotes + +.. [#ssd_iops] +.. [#mysql_basic] +.. [#sqlite_ps] +.. [#sqlite_ps_change] +.. [#FS_CASEFOLD_FL] +.. [#init_on_alloc] +.. [#VOLBLOCKSIZE] diff --git a/_sources/Performance and Tuning/ZFS Transaction Delay.rst.txt b/_sources/Performance and Tuning/ZFS Transaction Delay.rst.txt new file mode 100644 index 000000000..1ee539cc7 --- /dev/null +++ b/_sources/Performance and Tuning/ZFS Transaction Delay.rst.txt @@ -0,0 +1,105 @@ +ZFS Transaction Delay +===================== + +ZFS write operations are delayed when the backend storage isn't able to +accommodate the rate of incoming writes. This delay process is known as +the ZFS write throttle. + +If there is already a write transaction waiting, the delay is relative +to when that transaction will finish waiting. Thus the calculated delay +time is independent of the number of threads concurrently executing +transactions. + +If there is only one waiter, the delay is relative to when the +transaction started, rather than the current time. This credits the +transaction for "time already served." For example, if a write +transaction requires reading indirect blocks first, then the delay is +counted at the start of the transaction, just prior to the indirect +block reads. + +The minimum time for a transaction to take is calculated as: + +:: + + min_time = zfs_delay_scale * (dirty - min) / (max - dirty) + min_time is then capped at 100 milliseconds + +The delay has two degrees of freedom that can be adjusted via tunables: + +1. The percentage of dirty data at which we start to delay is defined by + zfs_delay_min_dirty_percent. This is typically be at or above + zfs_vdev_async_write_active_max_dirty_percent so delays occur after + writing at full speed has failed to keep up with the incoming write + rate. +2. The scale of the curve is defined by zfs_delay_scale. Roughly + speaking, this variable determines the amount of delay at the + midpoint of the curve. 
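+
+As a worked example, assume typical values of zfs_delay_scale=500,000
+(nanoseconds), zfs_delay_min_dirty_percent=60 and a zfs_dirty_data_max
+of 4GB. With dirty data at 80% of zfs_dirty_data_max:
+
+::
+
+   min_time = 500,000 ns * (3.2GB - 2.4GB) / (4GB - 3.2GB)
+            = 500,000 ns
+            = 500 microseconds
+
+That 80% point is the midpoint of the delay curve shown below.
+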
+ +:: + + delay + 10ms +-------------------------------------------------------------*+ + | *| + 9ms + *+ + | *| + 8ms + *+ + | * | + 7ms + * + + | * | + 6ms + * + + | * | + 5ms + * + + | * | + 4ms + * + + | * | + 3ms + * + + | * | + 2ms + (midpoint) * + + | | ** | + 1ms + v *** + + | zfs_delay_scale ----------> ******** | + 0 +-------------------------------------*********----------------+ + 0% <- zfs_dirty_data_max -> 100% + +Note that since the delay is added to the outstanding time remaining on +the most recent transaction, the delay is effectively the inverse of +IOPS. Here the midpoint of 500 microseconds translates to 2000 IOPS. The +shape of the curve was chosen such that small changes in the amount of +accumulated dirty data in the first 3/4 of the curve yield relatively +small differences in the amount of delay. + +The effects can be easier to understand when the amount of delay is +represented on a log scale: + +:: + + delay + 100ms +-------------------------------------------------------------++ + + + + | | + + *+ + 10ms + *+ + + ** + + | (midpoint) ** | + + | ** + + 1ms + v **** + + + zfs_delay_scale ----------> ***** + + | **** | + + **** + + 100us + ** + + + * + + | * | + + * + + 10us + * + + + + + | | + + + + +--------------------------------------------------------------+ + 0% <- zfs_dirty_data_max -> 100% + +Note here that only as the amount of dirty data approaches its limit +does the delay start to increase rapidly. The goal of a properly tuned +system should be to keep the amount of dirty data out of that range by +first ensuring that the appropriate limits are set for the I/O scheduler +to reach optimal throughput on the backend storage, and then by changing +the value of zfs_delay_scale to increase the steepness of the curve. diff --git a/_sources/Performance and Tuning/ZIO Scheduler.rst.txt b/_sources/Performance and Tuning/ZIO Scheduler.rst.txt new file mode 100644 index 000000000..53551bf56 --- /dev/null +++ b/_sources/Performance and Tuning/ZIO Scheduler.rst.txt @@ -0,0 +1,93 @@ +ZFS I/O (ZIO) Scheduler +======================= + +ZFS issues I/O operations to leaf vdevs (usually devices) to satisfy and +complete I/Os. The ZIO scheduler determines when and in what order those +operations are issued. Operations are divided into five I/O classes +prioritized in the following order: + ++----------+-------------+-------------------------------------------+ +| Priority | I/O Class | Description | ++==========+=============+===========================================+ +| highest | sync read | most reads | ++----------+-------------+-------------------------------------------+ +| | sync write | as defined by application or via 'zfs' | +| | | 'sync' property | ++----------+-------------+-------------------------------------------+ +| | async read | prefetch reads | ++----------+-------------+-------------------------------------------+ +| | async write | most writes | ++----------+-------------+-------------------------------------------+ +| lowest | scrub read | scan read: includes both scrub and | +| | | resilver | ++----------+-------------+-------------------------------------------+ + +Each queue defines the minimum and maximum number of concurrent +operations issued to the device. In addition, the device has an +aggregate maximum, zfs_vdev_max_active. Note that the sum of the +per-queue minimums must not exceed the aggregate maximum. 
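+
+On Linux these limits are exposed as zfs kernel module parameters; for
+example, the aggregate maximum can be inspected (and, with care,
+adjusted at runtime) through sysfs:
+
+::
+
+   cat /sys/module/zfs/parameters/zfs_vdev_max_active
+   echo 2000 > /sys/module/zfs/parameters/zfs_vdev_max_active
+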
If the sum of +the per-queue maximums exceeds the aggregate maximum, then the number of +active I/Os may reach zfs_vdev_max_active, in which case no further I/Os +are issued regardless of whether all per-queue minimums have been met. + ++-------------+------------------------------------+------------------------------------+ +| I/O Class | Min Active Parameter | Max Active Parameter | ++=============+====================================+====================================+ +| sync read | ``zfs_vdev_sync_read_min_active`` | ``zfs_vdev_sync_read_max_active`` | ++-------------+------------------------------------+------------------------------------+ +| sync write | ``zfs_vdev_sync_write_min_active`` | ``zfs_vdev_sync_write_max_active`` | ++-------------+------------------------------------+------------------------------------+ +| async read | ``zfs_vdev_async_read_min_active`` | ``zfs_vdev_async_read_max_active`` | ++-------------+------------------------------------+------------------------------------+ +| async write | ``zfs_vdev_async_write_min_active``| ``zfs_vdev_async_write_max_active``| ++-------------+------------------------------------+------------------------------------+ +| scrub read | ``zfs_vdev_scrub_min_active`` | ``zfs_vdev_scrub_max_active`` | ++-------------+------------------------------------+------------------------------------+ + +For many physical devices, throughput increases with the number of +concurrent operations, but latency typically suffers. Further, physical +devices typically have a limit at which more concurrent operations have +no effect on throughput or can cause the disk performance to +decrease. + +The ZIO scheduler selects the next operation to issue by first looking +for an I/O class whose minimum has not been satisfied. Once all are +satisfied and the aggregate maximum has not been hit, the scheduler +looks for classes whose maximum has not been satisfied. Iteration +through the I/O classes is done in the order specified above. No further +operations are issued if the aggregate maximum number of concurrent +operations has been hit or if there are no operations queued for an I/O +class that has not hit its maximum. Every time an I/O is queued or an +operation completes, the I/O scheduler looks for new operations to +issue. + +In general, smaller max_active's will lead to lower latency of +synchronous operations. Larger max_active's may lead to higher overall +throughput, depending on underlying storage and the I/O mix. + +The ratio of the queues' max_actives determines the balance of +performance between reads, writes, and scrubs. For example, when there +is contention, increasing zfs_vdev_scrub_max_active will cause the scrub +or resilver to complete more quickly, but reads and writes to have +higher latency and lower throughput. + +All I/O classes have a fixed maximum number of outstanding operations +except for the async write class. Asynchronous writes represent the data +that is committed to stable storage during the syncing stage for +transaction groups (txgs). Transaction groups enter the syncing state +periodically so the number of queued async writes quickly bursts up and +then reduce down to zero. The zfs_txg_timeout tunable (default=5 +seconds) sets the target interval for txg sync. Thus a burst of async +writes every 5 seconds is a normal ZFS I/O pattern. + +Rather than servicing I/Os as quickly as possible, the ZIO scheduler +changes the maximum number of active async write I/Os according to the +amount of dirty data in the pool. 
Since both throughput and latency +typically increase as the number of concurrent operations issued to +physical devices, reducing the burstiness in the number of concurrent +operations also stabilizes the response time of operations from other +queues. This is particularly important for the sync read and write queues, +where the periodic async write bursts of the txg sync can lead to +device-level contention. In broad strokes, the ZIO scheduler issues more +concurrent operations from the async write queue as there's more dirty +data in the pool. diff --git a/_sources/Performance and Tuning/index.rst.txt b/_sources/Performance and Tuning/index.rst.txt new file mode 100644 index 000000000..1d1479b73 --- /dev/null +++ b/_sources/Performance and Tuning/index.rst.txt @@ -0,0 +1,9 @@ +Performance and Tuning +====================== + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + :glob: + + * diff --git a/_sources/Project and Community/Admin Documentation.rst.txt b/_sources/Project and Community/Admin Documentation.rst.txt new file mode 100644 index 000000000..6385f192d --- /dev/null +++ b/_sources/Project and Community/Admin Documentation.rst.txt @@ -0,0 +1,9 @@ +Admin Documentation +=================== + +- `Aaron Toponce's ZFS on Linux User + Guide `__ +- `OpenZFS System + Administration `__ +- `Oracle Solaris ZFS Administration + Guide `__ diff --git a/_sources/Project and Community/FAQ hole birth.rst.txt b/_sources/Project and Community/FAQ hole birth.rst.txt new file mode 100644 index 000000000..52411d674 --- /dev/null +++ b/_sources/Project and Community/FAQ hole birth.rst.txt @@ -0,0 +1,67 @@ +:orphan: + +FAQ Hole birth +============== + +Short explanation +~~~~~~~~~~~~~~~~~ + +The hole_birth feature has/had bugs, the result of which is that, if you +do a ``zfs send -i`` (or ``-R``, since it uses ``-i``) from an affected +dataset, the receiver will not see any checksum or other errors, but the +resulting destination snapshot will not match the source. + +ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the +faulty metadata which causes this issue *on the sender side*. + +FAQ +~~~ + +I have a pool with hole_birth enabled, how do I know if I am affected? +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +It is technically possible to calculate whether you have any affected +files, but it requires scraping zdb output for each file in each +snapshot in each dataset, which is a combinatoric nightmare. (If you +really want it, there is a proof of concept +`here `__. + +Is there any less painful way to fix this if we have already received an affected snapshot? +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +No, the data you need was simply not present in the send stream, +unfortunately, and cannot feasibly be rewritten in place. + +Long explanation +~~~~~~~~~~~~~~~~ + +hole_birth is a feature to speed up ZFS send -i - in particular, ZFS +used to not store metadata on when "holes" (sparse regions) in files +were created, so every zfs send -i needed to include every hole. + +hole_birth, as the name implies, added tracking for the txg (transaction +group) when a hole was created, so that zfs send -i could only send +holes that had a birth_time between (starting snapshot txg) and (ending +snapshot txg), and life was wonderful. 
+ +Unfortunately, hole_birth had a number of edge cases where it could +"forget" to set the birth_time of holes in some cases, causing it to +record the birth_time as 0 (the value used prior to hole_birth, and +essentially equivalent to "since file creation"). + +This meant that, when you did a zfs send -i, since zfs send does not +have any knowledge of the surrounding snapshots when sending a given +snapshot, it would see the creation txg as 0, conclude "oh, it is 0, I +must have already sent this before", and not include it. + +This means that, on the receiving side, it does not know those holes +should exist, and does not create them. This leads to differences +between the source and the destination. + +ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring this +metadata and always sending holes with birth_time 0, configurable using +the tunable known as ``ignore_hole_birth`` or +``send_holes_without_birth_time``. The latter is what OpenZFS +standardized on. ZoL version 0.6.5.8 only has the former, but for any +ZoL version with ``send_holes_without_birth_time``, they point to the +same value, so changing either will work. diff --git a/_sources/Project and Community/FAQ.rst.txt b/_sources/Project and Community/FAQ.rst.txt new file mode 100644 index 000000000..155a8d091 --- /dev/null +++ b/_sources/Project and Community/FAQ.rst.txt @@ -0,0 +1,694 @@ +FAQ +=== + +.. contents:: Table of Contents + :local: + +What is OpenZFS +--------------- + +OpenZFS is an outstanding storage platform that +encompasses the functionality of traditional filesystems, volume +managers, and more, with consistent reliability, functionality and +performance across all distributions. Additional information about +OpenZFS can be found in the `OpenZFS wikipedia +article `__. + +Hardware Requirements +--------------------- + +Because ZFS was originally designed for Sun Solaris it was long +considered a filesystem for large servers and for companies that could +afford the best and most powerful hardware available. But since the +porting of ZFS to numerous OpenSource platforms (The BSDs, Illumos and +Linux - under the umbrella organization "OpenZFS"), these requirements +have been lowered. + +The suggested hardware requirements are: + +- ECC memory. This isn't really a requirement, but it's highly + recommended. +- 8GB+ of memory for the best performance. It's perfectly possible to + run with 2GB or less (and people do), but you'll need more if using + deduplication. + +Do I have to use ECC memory for ZFS? +------------------------------------ + +Using ECC memory for OpenZFS is strongly recommended for enterprise +environments where the strongest data integrity guarantees are required. +Without ECC memory rare random bit flips caused by cosmic rays or by +faulty memory can go undetected. If this were to occur OpenZFS (or any +other filesystem) will write the damaged data to disk and be unable to +automatically detect the corruption. + +Unfortunately, ECC memory is not always supported by consumer grade +hardware. And even when it is, ECC memory will be more expensive. For +home users the additional safety brought by ECC memory might not justify +the cost. It's up to you to determine what level of protection your data +requires. + +Installation +------------ + +OpenZFS is available for FreeBSD and all major Linux distributions. Refer to +the :doc:`getting started <../Getting Started/index>` section of the wiki for +links to installations instructions. 
If your distribution/OS isn't +listed you can always build OpenZFS from the latest official +`tarball `__. + +Supported Architectures +----------------------- + +OpenZFS is regularly compiled for the following architectures: +aarch64, arm, ppc, ppc64, x86, x86_64. + +Supported Linux Kernels +----------------------- + +The `notes `__ for a given +OpenZFS release will include a range of supported kernels. Point +releases will be tagged as needed in order to support the *stable* +kernel available from `kernel.org `__. The +oldest supported kernel is 2.6.32 due to its prominence in Enterprise +Linux distributions. + +.. _32-bit-vs-64-bit-systems: + +32-bit vs 64-bit Systems +------------------------ + +You are **strongly** encouraged to use a 64-bit kernel. OpenZFS +will build for 32-bit systems but you may encounter stability problems. + +ZFS was originally developed for the Solaris kernel which differs from +some OpenZFS platforms in several significant ways. Perhaps most importantly +for ZFS it is common practice in the Solaris kernel to make heavy use of +the virtual address space. However, use of the virtual address space is +strongly discouraged in the Linux kernel. This is particularly true on +32-bit architectures where the virtual address space is limited to 100M +by default. Using the virtual address space on 64-bit Linux kernels is +also discouraged but the address space is so much larger than physical +memory that it is less of an issue. + +If you are bumping up against the virtual memory limit on a 32-bit +system you will see the following message in your system logs. You can +increase the virtual address size with the boot option ``vmalloc=512M``. + +:: + + vmap allocation for size 4198400 failed: use vmalloc= to increase size. + +However, even after making this change your system will likely not be +entirely stable. Proper support for 32-bit systems is contingent upon +the OpenZFS code being weaned off its dependence on virtual memory. This +will take some time to do correctly but it is planned for OpenZFS. This +change is also expected to improve how efficiently OpenZFS manages the +ARC cache and allow for tighter integration with the standard Linux page +cache. + +Booting from ZFS +---------------- + +Booting from ZFS on Linux is possible and many people do it. There are +excellent walk throughs available for +:doc:`Debian <../Getting Started/Debian/index>`, +:doc:`Ubuntu <../Getting Started/Ubuntu/index>`, and +`Gentoo `__. + +On FreeBSD 13+ booting from ZFS is supported out of the box. + +Selecting /dev/ names when creating a pool (Linux) +-------------------------------------------------- + +There are different /dev/ names that can be used when creating a ZFS +pool. Each option has advantages and drawbacks, the right choice for +your ZFS pool really depends on your requirements. For development and +testing using /dev/sdX naming is quick and easy. A typical home server +might prefer /dev/disk/by-id/ naming for simplicity and readability. +While very large configurations with multiple controllers, enclosures, +and switches will likely prefer /dev/disk/by-vdev naming for maximum +control. But in the end, how you choose to identify your disks is up to +you. + +- **/dev/sdX, /dev/hdX:** Best for development/test pools + + - Summary: The top level /dev/ names are the default for consistency + with other ZFS implementations. They are available under all Linux + distributions and are commonly used. 
However, because they are not + persistent they should only be used with ZFS for development/test + pools. + - Benefits: This method is easy for a quick test, the names are + short, and they will be available on all Linux distributions. + - Drawbacks: The names are not persistent and will change depending + on what order the disks are detected in. Adding or removing + hardware for your system can easily cause the names to change. You + would then need to remove the zpool.cache file and re-import the + pool using the new names. + - Example: ``zpool create tank sda sdb`` + +- **/dev/disk/by-id/:** Best for small pools (less than 10 disks) + + - Summary: This directory contains disk identifiers with more human + readable names. The disk identifier usually consists of the + interface type, vendor name, model number, device serial number, + and partition number. This approach is more user friendly because + it simplifies identifying a specific disk. + - Benefits: Nice for small systems with a single disk controller. + Because the names are persistent and guaranteed not to change, it + doesn't matter how the disks are attached to the system. You can + take them all out, randomly mix them up on the desk, put them + back anywhere in the system and your pool will still be + automatically imported correctly. + - Drawbacks: Configuring redundancy groups based on physical + location becomes difficult and error prone. Unreliable on many + personal virtual machine setups because the software does not + generate persistent unique names by default. + - Example: + ``zpool create tank scsi-SATA_Hitachi_HTS7220071201DP1D10DGG6HMRP`` + +- **/dev/disk/by-path/:** Good for large pools (greater than 10 disks) + + - Summary: This approach is to use device names which include the + physical cable layout in the system, which means that a particular + disk is tied to a specific location. The name describes the PCI + bus number, as well as enclosure names and port numbers. This + allows the most control when configuring a large pool. + - Benefits: Encoding the storage topology in the name is not only + helpful for locating a disk in large installations. But it also + allows you to explicitly layout your redundancy groups over + multiple adapters or enclosures. + - Drawbacks: These names are long, cumbersome, and difficult for a + human to manage. + - Example: + ``zpool create tank pci-0000:00:1f.2-scsi-0:0:0:0 pci-0000:00:1f.2-scsi-1:0:0:0`` + +- **/dev/disk/by-vdev/:** Best for large pools (greater than 10 disks) + + - Summary: This approach provides administrative control over device + naming using the configuration file /etc/zfs/vdev_id.conf. Names + for disks in JBODs can be generated automatically to reflect their + physical location by enclosure IDs and slot numbers. The names can + also be manually assigned based on existing udev device links, + including those in /dev/disk/by-path or /dev/disk/by-id. This + allows you to pick your own unique meaningful names for the disks. + These names will be displayed by all the zfs utilities so it can + be used to clarify the administration of a large complex pool. See + the vdev_id and vdev_id.conf man pages for further details. + - Benefits: The main benefit of this approach is that it allows you + to choose meaningful human-readable names. Beyond that, the + benefits depend on the naming method employed. If the names are + derived from the physical path the benefits of /dev/disk/by-path + are realized. 
On the other hand, aliasing the names based on drive + identifiers or WWNs has the same benefits as using + /dev/disk/by-id. + - Drawbacks: This method relies on having a /etc/zfs/vdev_id.conf + file properly configured for your system. To configure this file + please refer to section `Setting up the /etc/zfs/vdev_id.conf + file <#setting-up-the-etc-zfs-vdev-id-conf-file>`__. As with + benefits, the drawbacks of /dev/disk/by-id or /dev/disk/by-path + may apply depending on the naming method employed. + - Example: ``zpool create tank mirror A1 B1 mirror A2 B2`` + +- **/dev/disk/by-uuid/:** Not a great option + + - Summary: One might think from the use of "UUID" that this would + be an ideal option - however, in practice, this ends up listing + one device per **pool** ID, which is not very useful for importing + pools with multiple disks. + +- **/dev/disk/by-partuuid/**/**by-partlabel:** Works only for existing partitions + + - Summary: partition UUID is generated on it's creation, so usage is limited + - Drawbacks: you can't refer to a partition unique ID on + an unpartitioned disk for ``zpool replace``/``add``/``attach``, + and you can't find failed disk easily without a mapping written + down ahead of time. + +Setting up the /etc/zfs/vdev_id.conf file +----------------------------------------- + +In order to use /dev/disk/by-vdev/ naming the ``/etc/zfs/vdev_id.conf`` +must be configured. The format of this file is described in the +vdev_id.conf man page. Several examples follow. + +A non-multipath configuration with direct-attached SAS enclosures and an +arbitrary slot re-mapping. + +:: + + multipath no + topology sas_direct + phys_per_port 4 + + # PCI_SLOT HBA PORT CHANNEL NAME + channel 85:00.0 1 A + channel 85:00.0 0 B + + # Linux Mapped + # Slot Slot + slot 0 2 + slot 1 6 + slot 2 0 + slot 3 3 + slot 4 5 + slot 5 7 + slot 6 4 + slot 7 1 + +A SAS-switch topology. Note that the channel keyword takes only two +arguments in this example. + +:: + + topology sas_switch + + # SWITCH PORT CHANNEL NAME + channel 1 A + channel 2 B + channel 3 C + channel 4 D + +A multipath configuration. Note that channel names have multiple +definitions - one per physical path. + +:: + + multipath yes + + # PCI_SLOT HBA PORT CHANNEL NAME + channel 85:00.0 1 A + channel 85:00.0 0 B + channel 86:00.0 1 A + channel 86:00.0 0 B + +A configuration using device link aliases. + +:: + + # by-vdev + # name fully qualified or base name of device link + alias d1 /dev/disk/by-id/wwn-0x5000c5002de3b9ca + alias d2 wwn-0x5000c5002def789e + +After defining the new disk names run ``udevadm trigger`` to prompt udev +to parse the configuration file. This will result in a new +/dev/disk/by-vdev directory which is populated with symlinks to /dev/sdX +names. 
Following the first example above, you could then create the new +pool of mirrors with the following command: + +:: + + $ zpool create tank \ + mirror A0 B0 mirror A1 B1 mirror A2 B2 mirror A3 B3 \ + mirror A4 B4 mirror A5 B5 mirror A6 B6 mirror A7 B7 + + $ zpool status + pool: tank + state: ONLINE + scan: none requested + config: + + NAME STATE READ WRITE CKSUM + tank ONLINE 0 0 0 + mirror-0 ONLINE 0 0 0 + A0 ONLINE 0 0 0 + B0 ONLINE 0 0 0 + mirror-1 ONLINE 0 0 0 + A1 ONLINE 0 0 0 + B1 ONLINE 0 0 0 + mirror-2 ONLINE 0 0 0 + A2 ONLINE 0 0 0 + B2 ONLINE 0 0 0 + mirror-3 ONLINE 0 0 0 + A3 ONLINE 0 0 0 + B3 ONLINE 0 0 0 + mirror-4 ONLINE 0 0 0 + A4 ONLINE 0 0 0 + B4 ONLINE 0 0 0 + mirror-5 ONLINE 0 0 0 + A5 ONLINE 0 0 0 + B5 ONLINE 0 0 0 + mirror-6 ONLINE 0 0 0 + A6 ONLINE 0 0 0 + B6 ONLINE 0 0 0 + mirror-7 ONLINE 0 0 0 + A7 ONLINE 0 0 0 + B7 ONLINE 0 0 0 + + errors: No known data errors + +Changing /dev/ names on an existing pool +---------------------------------------- + +Changing the /dev/ names on an existing pool can be done by simply +exporting the pool and re-importing it with the -d option to specify +which new names should be used. For example, to use the custom names in +/dev/disk/by-vdev: + +:: + + $ zpool export tank + $ zpool import -d /dev/disk/by-vdev tank + +.. _the-etczfszpoolcache-file: + +The /etc/zfs/zpool.cache file +----------------------------- + +Whenever a pool is imported on the system it will be added to the +``/etc/zfs/zpool.cache file``. This file stores pool configuration +information, such as the device names and pool state. If this file +exists when running the ``zpool import`` command then it will be used to +determine the list of pools available for import. When a pool is not +listed in the cache file it will need to be detected and imported using +the ``zpool import -d /dev/disk/by-id`` command. + +.. _generating-a-new-etczfszpoolcache-file: + +Generating a new /etc/zfs/zpool.cache file +------------------------------------------ + +The ``/etc/zfs/zpool.cache`` file will be automatically updated when +your pool configuration is changed. However, if for some reason it +becomes stale you can force the generation of a new +``/etc/zfs/zpool.cache`` file by setting the cachefile property on the +pool. + +:: + + $ zpool set cachefile=/etc/zfs/zpool.cache tank + +Conversely the cache file can be disabled by setting ``cachefile=none``. +This is useful for failover configurations where the pool should always +be explicitly imported by the failover software. + +:: + + $ zpool set cachefile=none tank + +Sending and Receiving Streams +----------------------------- + +hole_birth Bugs +~~~~~~~~~~~~~~~ + +The hole_birth feature has/had bugs, the result of which is that, if you +do a ``zfs send -i`` (or ``-R``, since it uses ``-i``) from an affected +dataset, the receiver *will not see any checksum or other errors, but +will not match the source*. + +ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the +faulty metadata which causes this issue *on the sender side*. + +For more details, see the :doc:`hole_birth FAQ <./FAQ hole birth>`. + +Sending Large Blocks +~~~~~~~~~~~~~~~~~~~~ + +When sending incremental streams which contain large blocks (>128K) the +``--large-block`` flag must be specified. Inconsistent use of the flag +between incremental sends can result in files being incorrectly zeroed +when they are received. Raw encrypted send/recvs automatically imply the +``--large-block`` flag and are therefore unaffected. 
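+
+A sketch of an incremental large-block send, using placeholder dataset
+and snapshot names:
+
+::
+
+   $ zfs send -L -i tank/data@snap1 tank/data@snap2 | zfs receive backup/data
+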
+ +For more details, see `issue +6224 `__. + +CEPH/ZFS +-------- + +There is a lot of tuning that can be done that's dependent on the +workload that is being put on CEPH/ZFS, as well as some general +guidelines. Some are as follow; + +ZFS Configuration +~~~~~~~~~~~~~~~~~ + +The CEPH filestore back-end heavily relies on xattrs, for optimal +performance all CEPH workloads will benefit from the following ZFS +dataset parameters + +- ``xattr=sa`` +- ``dnodesize=auto`` + +Beyond that typically rbd/cephfs focused workloads benefit from small +recordsize({16K-128K), while objectstore/s3/rados focused workloads +benefit from large recordsize (128K-1M). + +.. _ceph-configuration-cephconf: + +CEPH Configuration (ceph.conf) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Additionally CEPH sets various values internally for handling xattrs +based on the underlying filesystem. As CEPH only officially +supports/detects XFS and BTRFS, for all other filesystems it falls back +to rather `limited "safe" +values `__. +On newer releases, the need for larger xattrs will prevent OSD's from even +starting. + +The officially recommended workaround (`see +here `__) +has some severe downsides, and more specifically is geared toward +filesystems with "limited" xattr support such as ext4. + +ZFS does not have a limit internally to xattrs length, as such we can +treat it similarly to how CEPH treats XFS. We can set overrides to set 3 +internal values to the same as those used with XFS(`see +here `__ +and +`here `__) +and allow it be used without the severe limitations of the "official" +workaround. + +:: + + [osd] + filestore_max_inline_xattrs = 10 + filestore_max_inline_xattr_size = 65536 + filestore_max_xattr_value_size = 65536 + +Other General Guidelines +~~~~~~~~~~~~~~~~~~~~~~~~ + +- Use a separate journal device. Do not collocate CEPH journal on + ZFS dataset if at all possible, this will quickly lead to terrible + fragmentation, not to mention terrible performance upfront even + before fragmentation (CEPH journal does a dsync for every write). +- Use a SLOG device, even with a separate CEPH journal device. For some + workloads, skipping SLOG and setting ``logbias=throughput`` may be + acceptable. +- Use a high-quality SLOG/CEPH journal device. A consumer based SSD, or + even NVMe WILL NOT DO (Samsung 830, 840, 850, etc) for a variety of + reasons. CEPH will kill them quickly, on-top of the performance being + quite low in this use. Generally recommended devices are [Intel DC S3610, + S3700, S3710, P3600, P3700], or [Samsung SM853, SM863], or better. +- If using a high quality SSD or NVMe device (as mentioned above), you + CAN share SLOG and CEPH Journal to good results on single device. A + ratio of 4 HDDs to 1 SSD (Intel DC S3710 200GB), with each SSD + partitioned (remember to align!) to 4x10GB (for ZIL/SLOG) + 4x20GB + (for CEPH journal) has been reported to work well. + +Again - CEPH + ZFS will KILL a consumer based SSD VERY quickly. Even +ignoring the lack of power-loss protection, and endurance ratings, you +will be very disappointed with performance of consumer based SSD under +such a workload. + +Performance Considerations +-------------------------- + +To achieve good performance with your pool there are some easy best +practices you should follow. + +- **Evenly balance your disks across controllers:** Often the limiting + factor for performance is not the disks but the controller. By + balancing your disks evenly across controllers you can often improve + throughput. 
+
+Performance Considerations
+--------------------------
+
+To achieve good performance with your pool there are some easy best
+practices you should follow.
+
+- **Evenly balance your disks across controllers:** Often the limiting
+  factor for performance is not the disks but the controller. By
+  balancing your disks evenly across controllers you can often improve
+  throughput.
+- **Create your pool using whole disks:** When running ``zpool create``,
+  use whole-disk names. This will allow ZFS to automatically partition
+  the disk to ensure correct alignment. It will also improve
+  interoperability with other OpenZFS implementations which honor the
+  wholedisk property.
+- **Have enough memory:** A minimum of 2GB of memory is recommended for
+  ZFS. Additional memory is strongly recommended when the compression
+  and deduplication features are enabled.
+- **Improve performance by setting ashift=12:** You may be able to
+  improve performance for some workloads by setting ``ashift=12``. This
+  tuning can only be set when block devices are first added to a pool,
+  such as when the pool is first created or when a new vdev is added to
+  the pool. This tuning parameter can result in a decrease of capacity
+  for RAIDZ configurations.
+
+Advanced Format Disks
+---------------------
+
+Advanced Format (AF) is a disk format which natively uses a 4,096 byte
+sector size instead of the traditional 512 bytes. To maintain
+compatibility with legacy systems, many AF disks emulate a sector size
+of 512 bytes. By default, ZFS will automatically detect the sector size
+of the drive, so a drive that reports the emulated 512 byte size can end
+up with poorly aligned disk accesses which will greatly degrade pool
+performance.
+
+Therefore, the ability to set the ashift property has been added to the
+zpool command. This allows users to explicitly assign the sector size
+when devices are first added to a pool (typically at pool creation time
+or when adding a vdev to the pool). The ashift values range from 9 to
+16, with the default value 0 meaning that ZFS should auto-detect the
+sector size. This value is actually a bit shift value, so an ashift
+value for 512 bytes is 9 (2^9 = 512) while the ashift value for 4,096
+bytes is 12 (2^12 = 4,096).
+
+To force the pool to use 4,096 byte sectors at pool creation time, you
+may run:
+
+::
+
+   $ zpool create -o ashift=12 tank mirror sda sdb
+
+To force the pool to use 4,096 byte sectors when adding a vdev to a
+pool, you may run:
+
+::
+
+   $ zpool add -o ashift=12 tank mirror sdc sdd
+
+ZVOL used space larger than expected
+------------------------------------
+
+| Depending on the filesystem used on the zvol (e.g. ext4) and the usage
+  (e.g. deletion and creation of many files), the ``used`` and
+  ``referenced`` properties reported by the zvol may be larger than the
+  "actual" space that is being used as reported by the consumer.
+| This can happen due to the way some filesystems work, in which they
+  prefer to allocate files in new untouched blocks rather than the
+  fragmented used blocks marked as free. This forces ZFS to keep
+  referencing all blocks that the underlying filesystem has ever touched.
+| This is in itself not much of a problem, as when the ``used`` property
+  reaches the configured ``volsize`` the underlying filesystem will
+  start reusing blocks. But the problem arises if it is desired to
+  snapshot the zvol, as the space referenced by the snapshots will
+  contain the unused blocks.
+
+| This issue can be prevented by issuing a trim (for example with the
+  ``fstrim`` command on Linux) to allow the kernel to tell ZFS which
+  blocks are unused.
+| Issuing a trim before a snapshot is taken will ensure
+  a minimum snapshot size.
+| On Linux, adding the ``discard`` option for the mounted zvol in
+  ``/etc/fstab`` effectively enables the kernel to issue the trim
+  commands continuously, without the need to execute fstrim on demand.
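+
+As a minimal sketch, assuming a hypothetical zvol ``tank/vol`` formatted
+with ext4 and mounted at ``/mnt/vol``, a manual trim before taking a
+snapshot and a persistent ``discard`` mount option might look like this:
+
+::
+
+   # trim unused blocks, then snapshot while the referenced size is minimal
+   $ fstrim -v /mnt/vol
+   $ zfs snapshot tank/vol@after-trim
+
+   # /etc/fstab entry enabling continuous discard for the zvol
+   /dev/zvol/tank/vol  /mnt/vol  ext4  defaults,discard  0  0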
+
+Using a zvol for a swap device on Linux
+---------------------------------------
+
+You may use a zvol as a swap device but you'll need to configure it
+appropriately.
+
+**CAUTION:** for now swap on zvol may lead to deadlock, in this case
+please send your logs `here `__.
+
+- Set the volume block size to match your system's page size. This
+  tuning prevents ZFS from having to perform read-modify-write
+  operations on a larger block while the system is already low on memory.
+- Set the ``logbias=throughput`` and ``sync=always`` properties. Data
+  written to the volume will be flushed immediately to disk, freeing up
+  memory as quickly as possible.
+- Set ``primarycache=metadata`` to avoid keeping swap data in RAM via
+  the ARC.
+- Disable automatic snapshots of the swap device.
+
+::
+
+   $ zfs create -V 4G -b $(getconf PAGESIZE) \
+         -o logbias=throughput -o sync=always \
+         -o primarycache=metadata \
+         -o com.sun:auto-snapshot=false rpool/swap
+
+Using ZFS on Xen Hypervisor or Xen Dom0 (Linux)
+-----------------------------------------------
+
+It is usually recommended to keep virtual machine storage and hypervisor
+pools quite separate, although a few people have managed to successfully
+deploy and run OpenZFS using the same machine configured as Dom0. There
+are a few caveats:
+
+- Set a fair amount of memory in grub.conf, dedicated to Dom0.
+
+  - dom0_mem=16384M,max:16384M
+
+- Allocate no more than 30-40% of Dom0's memory to ZFS in
+  ``/etc/modprobe.d/zfs.conf``.
+
+  - options zfs zfs_arc_max=6442450944
+
+- Disable Xen's auto-ballooning in ``/etc/xen/xl.conf``
+- Watch out for any Xen bugs, such as `this one `__ related to
+  ballooning
+
+udisks2 creating /dev/mapper/ entries for zvol (Linux)
+------------------------------------------------------
+
+To prevent udisks2 from creating /dev/mapper entries that must be
+manually removed or maintained during zvol remove / rename, create a
+udev rule such as ``/etc/udev/rules.d/80-udisks2-ignore-zfs.rules`` with
+the following contents:
+
+::
+
+   ENV{ID_PART_ENTRY_SCHEME}=="gpt", ENV{ID_FS_TYPE}=="zfs_member", ENV{ID_PART_ENTRY_TYPE}=="6a898cc3-1dd2-11b2-99a6-080020736631", ENV{UDISKS_IGNORE}="1"
+
+Licensing
+---------
+
+License information can be found `here `__.
+
+Reporting a problem
+-------------------
+
+You can open a new issue and search existing issues using the public
+`issue tracker `__. The issue tracker is used to organize outstanding
+bug reports, feature requests, and other development tasks. Anyone may
+post comments after signing up for a GitHub account.
+
+Please make sure that what you're actually seeing is a bug and not a
+support issue. If in doubt, please ask on the mailing list first, and if
+you're then asked to file an issue, do so.
+
+When opening a new issue include this information at the top of the
+issue:
+
+- What distribution you're using and the version.
+- What spl/zfs packages you're using and the version.
+- Describe the problem you're observing.
+- Describe how to reproduce the problem.
+- Include any warnings/errors/backtraces from the system logs.
+
+When a new issue is opened it's not uncommon for a developer to request
+additional information about the problem. In general, the more detail
+you share about a problem the quicker a developer can resolve it. For
+example, providing a simple test case is always exceptionally helpful.
+Be prepared to work with the developer looking into your bug in order
+to get it resolved.
They may ask for information like: + +- Your pool configuration as reported by ``zdb`` or ``zpool status``. +- Your hardware configuration, such as + + - Number of CPUs. + - Amount of memory. + - Whether your system has ECC memory. + - Whether it is running under a VMM/Hypervisor. + - Kernel version. + - Values of the spl/zfs module parameters. + +- Stack traces which may be logged to ``dmesg``. + +Does OpenZFS have a Code of Conduct? +------------------------------------ + +Yes, the OpenZFS community has a code of conduct. See the `Code of +Conduct `__ for details. diff --git a/_sources/Project and Community/Mailing Lists.rst.txt b/_sources/Project and Community/Mailing Lists.rst.txt new file mode 100644 index 000000000..8aba7e735 --- /dev/null +++ b/_sources/Project and Community/Mailing Lists.rst.txt @@ -0,0 +1,36 @@ +.. _mailing_lists: + +Mailing Lists +============= + ++----------------------+----------------------+----------------------+ +|              | Description | List Archive | +|             List     | | | +|                      | | | ++======================+======================+======================+ +| `zfs-announce\ | A low-traffic list | `archive | +| @list.zfsonlinux.\ | for announcements | `__ | +| ups/zfs-announce>`__ | | | ++----------------------+----------------------+----------------------+ +| `zfs-discuss\ | A user discussion | `archive | +| @list.zfsonlinux\ | list for issues | `__ | +| oups/zfs-discuss>`__ | usability | | ++----------------------+----------------------+----------------------+ +| `zfs-\ | A development list | `archive | +| devel@list.zfsonlin\ | for developers to | `__ | +| groups/zfs-devel>`__ | | | ++----------------------+----------------------+----------------------+ +| `devel\ | A | `archive `__ | +| iki/Mailing_list>`__ | developers to review | | +| | ZFS code and | | +| | architecture changes | | +| | from all platforms | | ++----------------------+----------------------+----------------------+ diff --git a/_sources/Project and Community/Signing Keys.rst.txt b/_sources/Project and Community/Signing Keys.rst.txt new file mode 100644 index 000000000..b25a08c35 --- /dev/null +++ b/_sources/Project and Community/Signing Keys.rst.txt @@ -0,0 +1,64 @@ +Signing Keys +============ + +All tagged ZFS on Linux +`releases `__ are signed by +the official maintainer for that branch. These signatures are +automatically verified by GitHub and can be checked locally by +downloading the maintainers public key. + +Maintainers +----------- + +Release branch (spl/zfs-\*-release) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +| **Maintainer:** `Ned Bass `__ +| **Download:** + `pgp.mit.edu `__ +| **Key ID:** C77B9667 +| **Fingerprint:** 29D5 610E AE29 41E3 55A2 FE8A B974 67AA C77B 9667 + +| **Maintainer:** `Tony Hutter `__ +| **Download:** + `pgp.mit.edu `__ +| **Key ID:** D4598027 +| **Fingerprint:** 4F3B A9AB 6D1F 8D68 3DC2 DFB5 6AD8 60EE D459 8027 + +Master branch (master) +~~~~~~~~~~~~~~~~~~~~~~ + +| **Maintainer:** `Brian Behlendorf `__ +| **Download:** + `pgp.mit.edu `__ +| **Key ID:** C6AF658B +| **Fingerprint:** C33D F142 657E D1F7 C328 A296 0AB9 E991 C6AF 658B + +Checking the Signature of a Git Tag +----------------------------------- + +First import the public key listed above in to your key ring. 
+ +:: + + $ gpg --keyserver pgp.mit.edu --recv C6AF658B + gpg: requesting key C6AF658B from hkp server pgp.mit.edu + gpg: key C6AF658B: "Brian Behlendorf " not changed + gpg: Total number processed: 1 + gpg: unchanged: 1 + +After the public key is imported the signature of a git tag can be +verified as shown. + +:: + + $ git tag --verify zfs-0.6.5 + object 7a27ad00ae142b38d4aef8cc0af7a72b4c0e44fe + type commit + tag zfs-0.6.5 + tagger Brian Behlendorf 1441996302 -0700 + + ZFS Version 0.6.5 + gpg: Signature made Fri 11 Sep 2015 11:31:42 AM PDT using DSA key ID C6AF658B + gpg: Good signature from "Brian Behlendorf " + gpg: aka "Brian Behlendorf (LLNL) " diff --git a/_sources/Project and Community/index.rst.txt b/_sources/Project and Community/index.rst.txt new file mode 100644 index 000000000..4ed8122e3 --- /dev/null +++ b/_sources/Project and Community/index.rst.txt @@ -0,0 +1,31 @@ +Project and Community +===================== + +OpenZFS is storage software which combines the functionality of +traditional filesystems, volume manager, and more. OpenZFS includes +protection against data corruption, support for high storage capacities, +efficient data compression, snapshots and copy-on-write clones, +continuous integrity checking and automatic repair, remote replication +with ZFS send and receive, and RAID-Z. + +OpenZFS brings together developers from the illumos, Linux, FreeBSD and +OS X platforms, and a wide range of companies -- both online and at the +annual OpenZFS Developer Summit. High-level goals of the project include +raising awareness of the quality, utility and availability of +open-source implementations of ZFS, encouraging open communication about +ongoing efforts toward improving open-source variants of ZFS, and +ensuring consistent reliability, functionality and performance of all +distributions of ZFS. + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + :glob: + + Admin Documentation + FAQ + Mailing Lists + Signing Keys + Issue Tracker + Releases + Roadmap diff --git a/_sources/_TableOfContents.rst.txt b/_sources/_TableOfContents.rst.txt new file mode 100644 index 000000000..3502e12d9 --- /dev/null +++ b/_sources/_TableOfContents.rst.txt @@ -0,0 +1,12 @@ +.. toctree:: + :maxdepth: 2 + :glob: + + Getting Started/index + Project and Community/index + Developer Resources/index + Performance and Tuning/index + Basic Concepts/index + man/index + msg/index + License diff --git a/_sources/index.rst.txt b/_sources/index.rst.txt new file mode 100644 index 000000000..3b694ccbe --- /dev/null +++ b/_sources/index.rst.txt @@ -0,0 +1,24 @@ +OpenZFS Documentation +===================== + +Welcome to the OpenZFS Documentation. This resource provides documentation for +users and developers working with (or contributing to) the OpenZFS +project. New users or system administrators should refer to the +documentation for their favorite platform to get started. + ++----------------------+----------------------+----------------------+ +| :doc:`Getting Started| :doc:`Project and | :doc:`Developer | +| <./Getting | Community <./Project | Resources ` | and Community/index>`| Resources/index>` | ++======================+======================+======================+ +| How to get started | About the project | Technical | +| with OpenZFS on your | and how to | documentation | +| favorite platform | contribute | discussing the | +| | | OpenZFS | +| | | implementation | ++----------------------+----------------------+----------------------+ + + +Table of Contents: +------------------ +.. 
include:: _TableOfContents.rst diff --git a/_sources/man/index.rst.txt b/_sources/man/index.rst.txt new file mode 100644 index 000000000..e555d5d9b --- /dev/null +++ b/_sources/man/index.rst.txt @@ -0,0 +1,15 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +Man Pages +========= +.. toctree:: + :maxdepth: 1 + :glob: + + master/index + v2.2/index + v2.1/index + v2.0/index + v0.8/index + v0.7/index + v0.6/index diff --git a/_sources/man/master/1/arcstat.1.rst.txt b/_sources/man/master/1/arcstat.1.rst.txt new file mode 100644 index 000000000..74cae1a17 --- /dev/null +++ b/_sources/man/master/1/arcstat.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/arcstat.1 + +arcstat.1 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/arcstat.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/cstyle.1.rst.txt b/_sources/man/master/1/cstyle.1.rst.txt new file mode 100644 index 000000000..2d7beadc0 --- /dev/null +++ b/_sources/man/master/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/index.rst.txt b/_sources/man/master/1/index.rst.txt new file mode 100644 index 000000000..6981144fb --- /dev/null +++ b/_sources/man/master/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/master/1/raidz_test.1.rst.txt b/_sources/man/master/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..08c042614 --- /dev/null +++ b/_sources/man/master/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/test-runner.1.rst.txt b/_sources/man/master/1/test-runner.1.rst.txt new file mode 100644 index 000000000..3b1b16ed1 --- /dev/null +++ b/_sources/man/master/1/test-runner.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/test-runner.1 + +test-runner.1 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/test-runner.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/zhack.1.rst.txt b/_sources/man/master/1/zhack.1.rst.txt new file mode 100644 index 000000000..93c530d91 --- /dev/null +++ b/_sources/man/master/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/ztest.1.rst.txt b/_sources/man/master/1/ztest.1.rst.txt new file mode 100644 index 000000000..9438f4f80 --- /dev/null +++ b/_sources/man/master/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/zvol_wait.1.rst.txt b/_sources/man/master/1/zvol_wait.1.rst.txt new file mode 100644 index 000000000..4d77975f3 --- /dev/null +++ b/_sources/man/master/1/zvol_wait.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/zvol_wait.1 + +zvol_wait.1 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/zvol_wait.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/4/index.rst.txt b/_sources/man/master/4/index.rst.txt new file mode 100644 index 000000000..10e6950ab --- /dev/null +++ b/_sources/man/master/4/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man4/ + +Devices and Special Files (4) +============================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/master/4/spl.4.rst.txt b/_sources/man/master/4/spl.4.rst.txt new file mode 100644 index 000000000..de76f2f77 --- /dev/null +++ b/_sources/man/master/4/spl.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man4/spl.4 + +spl.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man4/spl.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/4/zfs.4.rst.txt b/_sources/man/master/4/zfs.4.rst.txt new file mode 100644 index 000000000..ca6f3c963 --- /dev/null +++ b/_sources/man/master/4/zfs.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man4/zfs.4 + +zfs.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man4/zfs.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/5/index.rst.txt b/_sources/man/master/5/index.rst.txt new file mode 100644 index 000000000..ec202a199 --- /dev/null +++ b/_sources/man/master/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/master/5/vdev_id.conf.5.rst.txt b/_sources/man/master/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..ce71e2bef --- /dev/null +++ b/_sources/man/master/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/dracut.zfs.7.rst.txt b/_sources/man/master/7/dracut.zfs.7.rst.txt new file mode 100644 index 000000000..ab81fda2a --- /dev/null +++ b/_sources/man/master/7/dracut.zfs.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/dracut.zfs.7 + +dracut.zfs.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/dracut.zfs.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/index.rst.txt b/_sources/man/master/7/index.rst.txt new file mode 100644 index 000000000..08a08f746 --- /dev/null +++ b/_sources/man/master/7/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/ + +Miscellaneous (7) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/master/7/vdevprops.7.rst.txt b/_sources/man/master/7/vdevprops.7.rst.txt new file mode 100644 index 000000000..00279c4d0 --- /dev/null +++ b/_sources/man/master/7/vdevprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/vdevprops.7 + +vdevprops.7 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/vdevprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/zfsconcepts.7.rst.txt b/_sources/man/master/7/zfsconcepts.7.rst.txt new file mode 100644 index 000000000..360b75f42 --- /dev/null +++ b/_sources/man/master/7/zfsconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/zfsconcepts.7 + +zfsconcepts.7 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/zfsconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/zfsprops.7.rst.txt b/_sources/man/master/7/zfsprops.7.rst.txt new file mode 100644 index 000000000..32f0bedc1 --- /dev/null +++ b/_sources/man/master/7/zfsprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/zfsprops.7 + +zfsprops.7 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/zfsprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/zpool-features.7.rst.txt b/_sources/man/master/7/zpool-features.7.rst.txt new file mode 100644 index 000000000..e7d8f1122 --- /dev/null +++ b/_sources/man/master/7/zpool-features.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/zpool-features.7 + +zpool-features.7 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/zpool-features.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/zpoolconcepts.7.rst.txt b/_sources/man/master/7/zpoolconcepts.7.rst.txt new file mode 100644 index 000000000..e812be284 --- /dev/null +++ b/_sources/man/master/7/zpoolconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/zpoolconcepts.7 + +zpoolconcepts.7 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/zpoolconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/zpoolprops.7.rst.txt b/_sources/man/master/7/zpoolprops.7.rst.txt new file mode 100644 index 000000000..e871927e7 --- /dev/null +++ b/_sources/man/master/7/zpoolprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/zpoolprops.7 + +zpoolprops.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/zpoolprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/fsck.zfs.8.rst.txt b/_sources/man/master/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..4e701e018 --- /dev/null +++ b/_sources/man/master/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/index.rst.txt b/_sources/man/master/8/index.rst.txt new file mode 100644 index 000000000..99184bac4 --- /dev/null +++ b/_sources/man/master/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/master/8/mount.zfs.8.rst.txt b/_sources/man/master/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..b721f264e --- /dev/null +++ b/_sources/man/master/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/vdev_id.8.rst.txt b/_sources/man/master/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..c3693e44f --- /dev/null +++ b/_sources/man/master/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zdb.8.rst.txt b/_sources/man/master/8/zdb.8.rst.txt new file mode 100644 index 000000000..e9730d2d6 --- /dev/null +++ b/_sources/man/master/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zed.8.rst.txt b/_sources/man/master/8/zed.8.rst.txt new file mode 100644 index 000000000..db0622099 --- /dev/null +++ b/_sources/man/master/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-allow.8.rst.txt b/_sources/man/master/8/zfs-allow.8.rst.txt new file mode 100644 index 000000000..4b440b402 --- /dev/null +++ b/_sources/man/master/8/zfs-allow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-allow.8 + +zfs-allow.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-allow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-bookmark.8.rst.txt b/_sources/man/master/8/zfs-bookmark.8.rst.txt new file mode 100644 index 000000000..2016899db --- /dev/null +++ b/_sources/man/master/8/zfs-bookmark.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-bookmark.8 + +zfs-bookmark.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-bookmark.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-change-key.8.rst.txt b/_sources/man/master/8/zfs-change-key.8.rst.txt new file mode 100644 index 000000000..1e65ca4f7 --- /dev/null +++ b/_sources/man/master/8/zfs-change-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-change-key.8 + +zfs-change-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-change-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-clone.8.rst.txt b/_sources/man/master/8/zfs-clone.8.rst.txt new file mode 100644 index 000000000..73ae2cfab --- /dev/null +++ b/_sources/man/master/8/zfs-clone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-clone.8 + +zfs-clone.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-clone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-create.8.rst.txt b/_sources/man/master/8/zfs-create.8.rst.txt new file mode 100644 index 000000000..91d05c297 --- /dev/null +++ b/_sources/man/master/8/zfs-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-create.8 + +zfs-create.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-destroy.8.rst.txt b/_sources/man/master/8/zfs-destroy.8.rst.txt new file mode 100644 index 000000000..880923e14 --- /dev/null +++ b/_sources/man/master/8/zfs-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-destroy.8 + +zfs-destroy.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-diff.8.rst.txt b/_sources/man/master/8/zfs-diff.8.rst.txt new file mode 100644 index 000000000..2537e6776 --- /dev/null +++ b/_sources/man/master/8/zfs-diff.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-diff.8 + +zfs-diff.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-diff.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-get.8.rst.txt b/_sources/man/master/8/zfs-get.8.rst.txt new file mode 100644 index 000000000..145395060 --- /dev/null +++ b/_sources/man/master/8/zfs-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-get.8 + +zfs-get.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-groupspace.8.rst.txt b/_sources/man/master/8/zfs-groupspace.8.rst.txt new file mode 100644 index 000000000..3eedf7648 --- /dev/null +++ b/_sources/man/master/8/zfs-groupspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-groupspace.8 + +zfs-groupspace.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-groupspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-hold.8.rst.txt b/_sources/man/master/8/zfs-hold.8.rst.txt new file mode 100644 index 000000000..3b7737f2f --- /dev/null +++ b/_sources/man/master/8/zfs-hold.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-hold.8 + +zfs-hold.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-hold.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-inherit.8.rst.txt b/_sources/man/master/8/zfs-inherit.8.rst.txt new file mode 100644 index 000000000..24b85f8bb --- /dev/null +++ b/_sources/man/master/8/zfs-inherit.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-inherit.8 + +zfs-inherit.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-inherit.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-jail.8.rst.txt b/_sources/man/master/8/zfs-jail.8.rst.txt new file mode 100644 index 000000000..3652ae8d4 --- /dev/null +++ b/_sources/man/master/8/zfs-jail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-jail.8 + +zfs-jail.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-jail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-list.8.rst.txt b/_sources/man/master/8/zfs-list.8.rst.txt new file mode 100644 index 000000000..091e258d8 --- /dev/null +++ b/_sources/man/master/8/zfs-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-list.8 + +zfs-list.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-load-key.8.rst.txt b/_sources/man/master/8/zfs-load-key.8.rst.txt new file mode 100644 index 000000000..6c5caea32 --- /dev/null +++ b/_sources/man/master/8/zfs-load-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-load-key.8 + +zfs-load-key.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-load-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-mount-generator.8.rst.txt b/_sources/man/master/8/zfs-mount-generator.8.rst.txt new file mode 100644 index 000000000..af5ccf97c --- /dev/null +++ b/_sources/man/master/8/zfs-mount-generator.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-mount-generator.8 + +zfs-mount-generator.8 +===================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-mount-generator.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-mount.8.rst.txt b/_sources/man/master/8/zfs-mount.8.rst.txt new file mode 100644 index 000000000..de1233778 --- /dev/null +++ b/_sources/man/master/8/zfs-mount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-mount.8 + +zfs-mount.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-mount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-program.8.rst.txt b/_sources/man/master/8/zfs-program.8.rst.txt new file mode 100644 index 000000000..833776b2b --- /dev/null +++ b/_sources/man/master/8/zfs-program.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-program.8 + +zfs-program.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-program.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-project.8.rst.txt b/_sources/man/master/8/zfs-project.8.rst.txt new file mode 100644 index 000000000..9c161e768 --- /dev/null +++ b/_sources/man/master/8/zfs-project.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-project.8 + +zfs-project.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-project.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-projectspace.8.rst.txt b/_sources/man/master/8/zfs-projectspace.8.rst.txt new file mode 100644 index 000000000..9ffefb346 --- /dev/null +++ b/_sources/man/master/8/zfs-projectspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-projectspace.8 + +zfs-projectspace.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-projectspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-promote.8.rst.txt b/_sources/man/master/8/zfs-promote.8.rst.txt new file mode 100644 index 000000000..09eeb9b5a --- /dev/null +++ b/_sources/man/master/8/zfs-promote.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-promote.8 + +zfs-promote.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-promote.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-receive.8.rst.txt b/_sources/man/master/8/zfs-receive.8.rst.txt new file mode 100644 index 000000000..2c9a0852f --- /dev/null +++ b/_sources/man/master/8/zfs-receive.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-receive.8 + +zfs-receive.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-receive.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-recv.8.rst.txt b/_sources/man/master/8/zfs-recv.8.rst.txt new file mode 100644 index 000000000..5ee87738d --- /dev/null +++ b/_sources/man/master/8/zfs-recv.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-recv.8 + +zfs-recv.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-recv.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-redact.8.rst.txt b/_sources/man/master/8/zfs-redact.8.rst.txt new file mode 100644 index 000000000..347080ac0 --- /dev/null +++ b/_sources/man/master/8/zfs-redact.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-redact.8 + +zfs-redact.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-redact.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-release.8.rst.txt b/_sources/man/master/8/zfs-release.8.rst.txt new file mode 100644 index 000000000..fd651c8e0 --- /dev/null +++ b/_sources/man/master/8/zfs-release.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-release.8 + +zfs-release.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-release.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-rename.8.rst.txt b/_sources/man/master/8/zfs-rename.8.rst.txt new file mode 100644 index 000000000..215da65db --- /dev/null +++ b/_sources/man/master/8/zfs-rename.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-rename.8 + +zfs-rename.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-rename.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-rollback.8.rst.txt b/_sources/man/master/8/zfs-rollback.8.rst.txt new file mode 100644 index 000000000..75b9e8829 --- /dev/null +++ b/_sources/man/master/8/zfs-rollback.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-rollback.8 + +zfs-rollback.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-rollback.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-send.8.rst.txt b/_sources/man/master/8/zfs-send.8.rst.txt new file mode 100644 index 000000000..301546001 --- /dev/null +++ b/_sources/man/master/8/zfs-send.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-send.8 + +zfs-send.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-send.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-set.8.rst.txt b/_sources/man/master/8/zfs-set.8.rst.txt new file mode 100644 index 000000000..563f752ef --- /dev/null +++ b/_sources/man/master/8/zfs-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-set.8 + +zfs-set.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-share.8.rst.txt b/_sources/man/master/8/zfs-share.8.rst.txt new file mode 100644 index 000000000..a25d386fc --- /dev/null +++ b/_sources/man/master/8/zfs-share.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-share.8 + +zfs-share.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-share.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-snapshot.8.rst.txt b/_sources/man/master/8/zfs-snapshot.8.rst.txt new file mode 100644 index 000000000..a32c3c7a5 --- /dev/null +++ b/_sources/man/master/8/zfs-snapshot.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-snapshot.8 + +zfs-snapshot.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-snapshot.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-unallow.8.rst.txt b/_sources/man/master/8/zfs-unallow.8.rst.txt new file mode 100644 index 000000000..27a710afb --- /dev/null +++ b/_sources/man/master/8/zfs-unallow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-unallow.8 + +zfs-unallow.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-unallow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-unjail.8.rst.txt b/_sources/man/master/8/zfs-unjail.8.rst.txt new file mode 100644 index 000000000..d3d709c19 --- /dev/null +++ b/_sources/man/master/8/zfs-unjail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-unjail.8 + +zfs-unjail.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-unjail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-unload-key.8.rst.txt b/_sources/man/master/8/zfs-unload-key.8.rst.txt new file mode 100644 index 000000000..d6f24dfe7 --- /dev/null +++ b/_sources/man/master/8/zfs-unload-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-unload-key.8 + +zfs-unload-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-unload-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-unmount.8.rst.txt b/_sources/man/master/8/zfs-unmount.8.rst.txt new file mode 100644 index 000000000..f5aa20432 --- /dev/null +++ b/_sources/man/master/8/zfs-unmount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-unmount.8 + +zfs-unmount.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-unmount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-unzone.8.rst.txt b/_sources/man/master/8/zfs-unzone.8.rst.txt new file mode 100644 index 000000000..b05a9cced --- /dev/null +++ b/_sources/man/master/8/zfs-unzone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-unzone.8 + +zfs-unzone.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-unzone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-upgrade.8.rst.txt b/_sources/man/master/8/zfs-upgrade.8.rst.txt new file mode 100644 index 000000000..697bf7bfb --- /dev/null +++ b/_sources/man/master/8/zfs-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-upgrade.8 + +zfs-upgrade.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-userspace.8.rst.txt b/_sources/man/master/8/zfs-userspace.8.rst.txt new file mode 100644 index 000000000..2898f9f8c --- /dev/null +++ b/_sources/man/master/8/zfs-userspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-userspace.8 + +zfs-userspace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-userspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-wait.8.rst.txt b/_sources/man/master/8/zfs-wait.8.rst.txt new file mode 100644 index 000000000..d2f1ad899 --- /dev/null +++ b/_sources/man/master/8/zfs-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-wait.8 + +zfs-wait.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-zone.8.rst.txt b/_sources/man/master/8/zfs-zone.8.rst.txt new file mode 100644 index 000000000..d03395c04 --- /dev/null +++ b/_sources/man/master/8/zfs-zone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-zone.8 + +zfs-zone.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-zone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs.8.rst.txt b/_sources/man/master/8/zfs.8.rst.txt new file mode 100644 index 000000000..99132cd10 --- /dev/null +++ b/_sources/man/master/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs_ids_to_path.8.rst.txt b/_sources/man/master/8/zfs_ids_to_path.8.rst.txt new file mode 100644 index 000000000..c5339446c --- /dev/null +++ b/_sources/man/master/8/zfs_ids_to_path.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs_ids_to_path.8 + +zfs_ids_to_path.8 +================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs_ids_to_path.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs_prepare_disk.8.rst.txt b/_sources/man/master/8/zfs_prepare_disk.8.rst.txt new file mode 100644 index 000000000..4510a8abe --- /dev/null +++ b/_sources/man/master/8/zfs_prepare_disk.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs_prepare_disk.8 + +zfs_prepare_disk.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs_prepare_disk.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zgenhostid.8.rst.txt b/_sources/man/master/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..ad0d76c44 --- /dev/null +++ b/_sources/man/master/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zinject.8.rst.txt b/_sources/man/master/8/zinject.8.rst.txt new file mode 100644 index 000000000..d52d5f68b --- /dev/null +++ b/_sources/man/master/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-add.8.rst.txt b/_sources/man/master/8/zpool-add.8.rst.txt new file mode 100644 index 000000000..1f315adaf --- /dev/null +++ b/_sources/man/master/8/zpool-add.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-add.8 + +zpool-add.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-add.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-attach.8.rst.txt b/_sources/man/master/8/zpool-attach.8.rst.txt new file mode 100644 index 000000000..06af83321 --- /dev/null +++ b/_sources/man/master/8/zpool-attach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-attach.8 + +zpool-attach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-attach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-checkpoint.8.rst.txt b/_sources/man/master/8/zpool-checkpoint.8.rst.txt new file mode 100644 index 000000000..0f763841b --- /dev/null +++ b/_sources/man/master/8/zpool-checkpoint.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-checkpoint.8 + +zpool-checkpoint.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-checkpoint.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-clear.8.rst.txt b/_sources/man/master/8/zpool-clear.8.rst.txt new file mode 100644 index 000000000..15b49e26c --- /dev/null +++ b/_sources/man/master/8/zpool-clear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-clear.8 + +zpool-clear.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-clear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-create.8.rst.txt b/_sources/man/master/8/zpool-create.8.rst.txt new file mode 100644 index 000000000..9f12988ec --- /dev/null +++ b/_sources/man/master/8/zpool-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-create.8 + +zpool-create.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-destroy.8.rst.txt b/_sources/man/master/8/zpool-destroy.8.rst.txt new file mode 100644 index 000000000..bfa476bdc --- /dev/null +++ b/_sources/man/master/8/zpool-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-destroy.8 + +zpool-destroy.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-detach.8.rst.txt b/_sources/man/master/8/zpool-detach.8.rst.txt new file mode 100644 index 000000000..628ec1477 --- /dev/null +++ b/_sources/man/master/8/zpool-detach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-detach.8 + +zpool-detach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-detach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-events.8.rst.txt b/_sources/man/master/8/zpool-events.8.rst.txt new file mode 100644 index 000000000..15bb149e8 --- /dev/null +++ b/_sources/man/master/8/zpool-events.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-events.8 + +zpool-events.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-events.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-export.8.rst.txt b/_sources/man/master/8/zpool-export.8.rst.txt new file mode 100644 index 000000000..9a5a59a7c --- /dev/null +++ b/_sources/man/master/8/zpool-export.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-export.8 + +zpool-export.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-export.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-get.8.rst.txt b/_sources/man/master/8/zpool-get.8.rst.txt new file mode 100644 index 000000000..1205db06e --- /dev/null +++ b/_sources/man/master/8/zpool-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-get.8 + +zpool-get.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-history.8.rst.txt b/_sources/man/master/8/zpool-history.8.rst.txt new file mode 100644 index 000000000..a34b58617 --- /dev/null +++ b/_sources/man/master/8/zpool-history.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-history.8 + +zpool-history.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-history.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-import.8.rst.txt b/_sources/man/master/8/zpool-import.8.rst.txt new file mode 100644 index 000000000..8d30383bc --- /dev/null +++ b/_sources/man/master/8/zpool-import.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-import.8 + +zpool-import.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-import.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-initialize.8.rst.txt b/_sources/man/master/8/zpool-initialize.8.rst.txt new file mode 100644 index 000000000..c09465f21 --- /dev/null +++ b/_sources/man/master/8/zpool-initialize.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-initialize.8 + +zpool-initialize.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-initialize.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-iostat.8.rst.txt b/_sources/man/master/8/zpool-iostat.8.rst.txt new file mode 100644 index 000000000..fe923dbc6 --- /dev/null +++ b/_sources/man/master/8/zpool-iostat.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-iostat.8 + +zpool-iostat.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-iostat.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-labelclear.8.rst.txt b/_sources/man/master/8/zpool-labelclear.8.rst.txt new file mode 100644 index 000000000..0586d539d --- /dev/null +++ b/_sources/man/master/8/zpool-labelclear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-labelclear.8 + +zpool-labelclear.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-labelclear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-list.8.rst.txt b/_sources/man/master/8/zpool-list.8.rst.txt new file mode 100644 index 000000000..da8884f8c --- /dev/null +++ b/_sources/man/master/8/zpool-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-list.8 + +zpool-list.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-offline.8.rst.txt b/_sources/man/master/8/zpool-offline.8.rst.txt new file mode 100644 index 000000000..c9dc13cad --- /dev/null +++ b/_sources/man/master/8/zpool-offline.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-offline.8 + +zpool-offline.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-offline.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-online.8.rst.txt b/_sources/man/master/8/zpool-online.8.rst.txt new file mode 100644 index 000000000..6873779d1 --- /dev/null +++ b/_sources/man/master/8/zpool-online.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-online.8 + +zpool-online.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-online.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-reguid.8.rst.txt b/_sources/man/master/8/zpool-reguid.8.rst.txt new file mode 100644 index 000000000..735913796 --- /dev/null +++ b/_sources/man/master/8/zpool-reguid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-reguid.8 + +zpool-reguid.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-reguid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-remove.8.rst.txt b/_sources/man/master/8/zpool-remove.8.rst.txt new file mode 100644 index 000000000..f532317b8 --- /dev/null +++ b/_sources/man/master/8/zpool-remove.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-remove.8 + +zpool-remove.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-remove.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-reopen.8.rst.txt b/_sources/man/master/8/zpool-reopen.8.rst.txt new file mode 100644 index 000000000..4ab383016 --- /dev/null +++ b/_sources/man/master/8/zpool-reopen.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-reopen.8 + +zpool-reopen.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-reopen.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-replace.8.rst.txt b/_sources/man/master/8/zpool-replace.8.rst.txt new file mode 100644 index 000000000..2bb16d3bd --- /dev/null +++ b/_sources/man/master/8/zpool-replace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-replace.8 + +zpool-replace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-replace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-resilver.8.rst.txt b/_sources/man/master/8/zpool-resilver.8.rst.txt new file mode 100644 index 000000000..e491136c1 --- /dev/null +++ b/_sources/man/master/8/zpool-resilver.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-resilver.8 + +zpool-resilver.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-resilver.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-scrub.8.rst.txt b/_sources/man/master/8/zpool-scrub.8.rst.txt new file mode 100644 index 000000000..8835c31ed --- /dev/null +++ b/_sources/man/master/8/zpool-scrub.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-scrub.8 + +zpool-scrub.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-scrub.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-set.8.rst.txt b/_sources/man/master/8/zpool-set.8.rst.txt new file mode 100644 index 000000000..c566b9bc6 --- /dev/null +++ b/_sources/man/master/8/zpool-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-set.8 + +zpool-set.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-split.8.rst.txt b/_sources/man/master/8/zpool-split.8.rst.txt new file mode 100644 index 000000000..6a3f01321 --- /dev/null +++ b/_sources/man/master/8/zpool-split.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-split.8 + +zpool-split.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-split.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-status.8.rst.txt b/_sources/man/master/8/zpool-status.8.rst.txt new file mode 100644 index 000000000..54eeb645c --- /dev/null +++ b/_sources/man/master/8/zpool-status.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-status.8 + +zpool-status.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-status.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-sync.8.rst.txt b/_sources/man/master/8/zpool-sync.8.rst.txt new file mode 100644 index 000000000..d82a72b7c --- /dev/null +++ b/_sources/man/master/8/zpool-sync.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-sync.8 + +zpool-sync.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-sync.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-trim.8.rst.txt b/_sources/man/master/8/zpool-trim.8.rst.txt new file mode 100644 index 000000000..48018ac21 --- /dev/null +++ b/_sources/man/master/8/zpool-trim.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-trim.8 + +zpool-trim.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-trim.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-upgrade.8.rst.txt b/_sources/man/master/8/zpool-upgrade.8.rst.txt new file mode 100644 index 000000000..83980bcf6 --- /dev/null +++ b/_sources/man/master/8/zpool-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-upgrade.8 + +zpool-upgrade.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-wait.8.rst.txt b/_sources/man/master/8/zpool-wait.8.rst.txt new file mode 100644 index 000000000..cef33250f --- /dev/null +++ b/_sources/man/master/8/zpool-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-wait.8 + +zpool-wait.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool.8.rst.txt b/_sources/man/master/8/zpool.8.rst.txt new file mode 100644 index 000000000..0ef799edb --- /dev/null +++ b/_sources/man/master/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool_influxdb.8.rst.txt b/_sources/man/master/8/zpool_influxdb.8.rst.txt new file mode 100644 index 000000000..c4bca6e1a --- /dev/null +++ b/_sources/man/master/8/zpool_influxdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool_influxdb.8 + +zpool_influxdb.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool_influxdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zstream.8.rst.txt b/_sources/man/master/8/zstream.8.rst.txt new file mode 100644 index 000000000..ed8ac3b58 --- /dev/null +++ b/_sources/man/master/8/zstream.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zstream.8 + +zstream.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zstream.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zstreamdump.8.rst.txt b/_sources/man/master/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..dd4e94a68 --- /dev/null +++ b/_sources/man/master/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/index.rst.txt b/_sources/man/master/index.rst.txt new file mode 100644 index 000000000..4cfb92b15 --- /dev/null +++ b/_sources/man/master/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/ + +master +====== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v0.6/1/cstyle.1.rst.txt b/_sources/man/v0.6/1/cstyle.1.rst.txt new file mode 100644 index 000000000..068acdb77 --- /dev/null +++ b/_sources/man/v0.6/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/1/index.rst.txt b/_sources/man/v0.6/1/index.rst.txt new file mode 100644 index 000000000..ba7af7efb --- /dev/null +++ b/_sources/man/v0.6/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.6/1/zhack.1.rst.txt b/_sources/man/v0.6/1/zhack.1.rst.txt new file mode 100644 index 000000000..330094d93 --- /dev/null +++ b/_sources/man/v0.6/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/1/zpios.1.rst.txt b/_sources/man/v0.6/1/zpios.1.rst.txt new file mode 100644 index 000000000..36f617243 --- /dev/null +++ b/_sources/man/v0.6/1/zpios.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man1/zpios.1 + +zpios.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man1/zpios.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/1/ztest.1.rst.txt b/_sources/man/v0.6/1/ztest.1.rst.txt new file mode 100644 index 000000000..71112a7af --- /dev/null +++ b/_sources/man/v0.6/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/5/index.rst.txt b/_sources/man/v0.6/5/index.rst.txt new file mode 100644 index 000000000..56a6ae520 --- /dev/null +++ b/_sources/man/v0.6/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.6/5/vdev_id.conf.5.rst.txt b/_sources/man/v0.6/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..9ff7cb8d5 --- /dev/null +++ b/_sources/man/v0.6/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/5/zfs-events.5.rst.txt b/_sources/man/v0.6/5/zfs-events.5.rst.txt new file mode 100644 index 000000000..cd78b4652 --- /dev/null +++ b/_sources/man/v0.6/5/zfs-events.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man5/zfs-events.5 + +zfs-events.5 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man5/zfs-events.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/5/zfs-module-parameters.5.rst.txt b/_sources/man/v0.6/5/zfs-module-parameters.5.rst.txt new file mode 100644 index 000000000..18b1baa7f --- /dev/null +++ b/_sources/man/v0.6/5/zfs-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man5/zfs-module-parameters.5 + +zfs-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man5/zfs-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/5/zpool-features.5.rst.txt b/_sources/man/v0.6/5/zpool-features.5.rst.txt new file mode 100644 index 000000000..428c31ca4 --- /dev/null +++ b/_sources/man/v0.6/5/zpool-features.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man5/zpool-features.5 + +zpool-features.5 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man5/zpool-features.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/fsck.zfs.8.rst.txt b/_sources/man/v0.6/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..e1f9dda12 --- /dev/null +++ b/_sources/man/v0.6/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/index.rst.txt b/_sources/man/v0.6/8/index.rst.txt new file mode 100644 index 000000000..b32eab8cd --- /dev/null +++ b/_sources/man/v0.6/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.6/8/mount.zfs.8.rst.txt b/_sources/man/v0.6/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..5a9fbc6e4 --- /dev/null +++ b/_sources/man/v0.6/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/vdev_id.8.rst.txt b/_sources/man/v0.6/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..9afaad856 --- /dev/null +++ b/_sources/man/v0.6/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zdb.8.rst.txt b/_sources/man/v0.6/8/zdb.8.rst.txt new file mode 100644 index 000000000..90bfa4830 --- /dev/null +++ b/_sources/man/v0.6/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zed.8.rst.txt b/_sources/man/v0.6/8/zed.8.rst.txt new file mode 100644 index 000000000..09bfc47c6 --- /dev/null +++ b/_sources/man/v0.6/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zfs.8.rst.txt b/_sources/man/v0.6/8/zfs.8.rst.txt new file mode 100644 index 000000000..d7ac33c27 --- /dev/null +++ b/_sources/man/v0.6/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zinject.8.rst.txt b/_sources/man/v0.6/8/zinject.8.rst.txt new file mode 100644 index 000000000..361329272 --- /dev/null +++ b/_sources/man/v0.6/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zpool.8.rst.txt b/_sources/man/v0.6/8/zpool.8.rst.txt new file mode 100644 index 000000000..c856f79a4 --- /dev/null +++ b/_sources/man/v0.6/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zstreamdump.8.rst.txt b/_sources/man/v0.6/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..2a33ac124 --- /dev/null +++ b/_sources/man/v0.6/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/index.rst.txt b/_sources/man/v0.6/index.rst.txt new file mode 100644 index 000000000..58e744cac --- /dev/null +++ b/_sources/man/v0.6/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/ + +v0.6 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v0.7/1/cstyle.1.rst.txt b/_sources/man/v0.7/1/cstyle.1.rst.txt new file mode 100644 index 000000000..e9d88519d --- /dev/null +++ b/_sources/man/v0.7/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/1/index.rst.txt b/_sources/man/v0.7/1/index.rst.txt new file mode 100644 index 000000000..6e18a7641 --- /dev/null +++ b/_sources/man/v0.7/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.7/1/raidz_test.1.rst.txt b/_sources/man/v0.7/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..4c834061d --- /dev/null +++ b/_sources/man/v0.7/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/1/zhack.1.rst.txt b/_sources/man/v0.7/1/zhack.1.rst.txt new file mode 100644 index 000000000..a9e774fc3 --- /dev/null +++ b/_sources/man/v0.7/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/1/zpios.1.rst.txt b/_sources/man/v0.7/1/zpios.1.rst.txt new file mode 100644 index 000000000..a04f4a4ad --- /dev/null +++ b/_sources/man/v0.7/1/zpios.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/zpios.1 + +zpios.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man1/zpios.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/1/ztest.1.rst.txt b/_sources/man/v0.7/1/ztest.1.rst.txt new file mode 100644 index 000000000..19f25c5ae --- /dev/null +++ b/_sources/man/v0.7/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/5/index.rst.txt b/_sources/man/v0.7/5/index.rst.txt new file mode 100644 index 000000000..e62c984bd --- /dev/null +++ b/_sources/man/v0.7/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.7/5/vdev_id.conf.5.rst.txt b/_sources/man/v0.7/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..69fa91cac --- /dev/null +++ b/_sources/man/v0.7/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/5/zfs-events.5.rst.txt b/_sources/man/v0.7/5/zfs-events.5.rst.txt new file mode 100644 index 000000000..a0c1c0cda --- /dev/null +++ b/_sources/man/v0.7/5/zfs-events.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man5/zfs-events.5 + +zfs-events.5 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man5/zfs-events.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/5/zfs-module-parameters.5.rst.txt b/_sources/man/v0.7/5/zfs-module-parameters.5.rst.txt new file mode 100644 index 000000000..3759beff1 --- /dev/null +++ b/_sources/man/v0.7/5/zfs-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man5/zfs-module-parameters.5 + +zfs-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man5/zfs-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/5/zpool-features.5.rst.txt b/_sources/man/v0.7/5/zpool-features.5.rst.txt new file mode 100644 index 000000000..1be5db5fa --- /dev/null +++ b/_sources/man/v0.7/5/zpool-features.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man5/zpool-features.5 + +zpool-features.5 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man5/zpool-features.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/fsck.zfs.8.rst.txt b/_sources/man/v0.7/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..ceece6b38 --- /dev/null +++ b/_sources/man/v0.7/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/index.rst.txt b/_sources/man/v0.7/8/index.rst.txt new file mode 100644 index 000000000..d45c02924 --- /dev/null +++ b/_sources/man/v0.7/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.7/8/mount.zfs.8.rst.txt b/_sources/man/v0.7/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..f47fc79de --- /dev/null +++ b/_sources/man/v0.7/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/vdev_id.8.rst.txt b/_sources/man/v0.7/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..4738f3265 --- /dev/null +++ b/_sources/man/v0.7/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zdb.8.rst.txt b/_sources/man/v0.7/8/zdb.8.rst.txt new file mode 100644 index 000000000..a6c71f3c2 --- /dev/null +++ b/_sources/man/v0.7/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zed.8.rst.txt b/_sources/man/v0.7/8/zed.8.rst.txt new file mode 100644 index 000000000..db4a8cd1a --- /dev/null +++ b/_sources/man/v0.7/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zfs.8.rst.txt b/_sources/man/v0.7/8/zfs.8.rst.txt new file mode 100644 index 000000000..31f7cf27a --- /dev/null +++ b/_sources/man/v0.7/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zgenhostid.8.rst.txt b/_sources/man/v0.7/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..daeef3bbc --- /dev/null +++ b/_sources/man/v0.7/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zinject.8.rst.txt b/_sources/man/v0.7/8/zinject.8.rst.txt new file mode 100644 index 000000000..77394e6a8 --- /dev/null +++ b/_sources/man/v0.7/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zpool.8.rst.txt b/_sources/man/v0.7/8/zpool.8.rst.txt new file mode 100644 index 000000000..6669995e9 --- /dev/null +++ b/_sources/man/v0.7/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zstreamdump.8.rst.txt b/_sources/man/v0.7/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..b00520de6 --- /dev/null +++ b/_sources/man/v0.7/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/index.rst.txt b/_sources/man/v0.7/index.rst.txt new file mode 100644 index 000000000..f7348cf6c --- /dev/null +++ b/_sources/man/v0.7/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/ + +v0.7 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v0.8/1/cstyle.1.rst.txt b/_sources/man/v0.8/1/cstyle.1.rst.txt new file mode 100644 index 000000000..38753099d --- /dev/null +++ b/_sources/man/v0.8/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/1/index.rst.txt b/_sources/man/v0.8/1/index.rst.txt new file mode 100644 index 000000000..f39f7cf34 --- /dev/null +++ b/_sources/man/v0.8/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.8/1/raidz_test.1.rst.txt b/_sources/man/v0.8/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..350d2930f --- /dev/null +++ b/_sources/man/v0.8/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/1/zhack.1.rst.txt b/_sources/man/v0.8/1/zhack.1.rst.txt new file mode 100644 index 000000000..b8304b530 --- /dev/null +++ b/_sources/man/v0.8/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/1/ztest.1.rst.txt b/_sources/man/v0.8/1/ztest.1.rst.txt new file mode 100644 index 000000000..d14313e10 --- /dev/null +++ b/_sources/man/v0.8/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/1/zvol_wait.1.rst.txt b/_sources/man/v0.8/1/zvol_wait.1.rst.txt new file mode 100644 index 000000000..1eed1316c --- /dev/null +++ b/_sources/man/v0.8/1/zvol_wait.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/zvol_wait.1 + +zvol_wait.1 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man1/zvol_wait.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/5/index.rst.txt b/_sources/man/v0.8/5/index.rst.txt new file mode 100644 index 000000000..67e29b9fd --- /dev/null +++ b/_sources/man/v0.8/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.8/5/spl-module-parameters.5.rst.txt b/_sources/man/v0.8/5/spl-module-parameters.5.rst.txt new file mode 100644 index 000000000..1096b7b01 --- /dev/null +++ b/_sources/man/v0.8/5/spl-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/spl-module-parameters.5 + +spl-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man5/spl-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/5/vdev_id.conf.5.rst.txt b/_sources/man/v0.8/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..f548cb6c6 --- /dev/null +++ b/_sources/man/v0.8/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/5/zfs-events.5.rst.txt b/_sources/man/v0.8/5/zfs-events.5.rst.txt new file mode 100644 index 000000000..ab3ff7edc --- /dev/null +++ b/_sources/man/v0.8/5/zfs-events.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/zfs-events.5 + +zfs-events.5 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man5/zfs-events.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/5/zfs-module-parameters.5.rst.txt b/_sources/man/v0.8/5/zfs-module-parameters.5.rst.txt new file mode 100644 index 000000000..2e4049079 --- /dev/null +++ b/_sources/man/v0.8/5/zfs-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/zfs-module-parameters.5 + +zfs-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man5/zfs-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/5/zpool-features.5.rst.txt b/_sources/man/v0.8/5/zpool-features.5.rst.txt new file mode 100644 index 000000000..50afa8811 --- /dev/null +++ b/_sources/man/v0.8/5/zpool-features.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/zpool-features.5 + +zpool-features.5 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man5/zpool-features.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/fsck.zfs.8.rst.txt b/_sources/man/v0.8/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..7c1d3f261 --- /dev/null +++ b/_sources/man/v0.8/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/index.rst.txt b/_sources/man/v0.8/8/index.rst.txt new file mode 100644 index 000000000..3ba1e232d --- /dev/null +++ b/_sources/man/v0.8/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.8/8/mount.zfs.8.rst.txt b/_sources/man/v0.8/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..43dfaeb7c --- /dev/null +++ b/_sources/man/v0.8/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/vdev_id.8.rst.txt b/_sources/man/v0.8/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..2037d14b7 --- /dev/null +++ b/_sources/man/v0.8/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zdb.8.rst.txt b/_sources/man/v0.8/8/zdb.8.rst.txt new file mode 100644 index 000000000..36bcb8a73 --- /dev/null +++ b/_sources/man/v0.8/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zed.8.rst.txt b/_sources/man/v0.8/8/zed.8.rst.txt new file mode 100644 index 000000000..15c0c41c2 --- /dev/null +++ b/_sources/man/v0.8/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zfs-mount-generator.8.rst.txt b/_sources/man/v0.8/8/zfs-mount-generator.8.rst.txt new file mode 100644 index 000000000..3cf59bea0 --- /dev/null +++ b/_sources/man/v0.8/8/zfs-mount-generator.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zfs-mount-generator.8 + +zfs-mount-generator.8 +===================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zfs-mount-generator.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zfs-program.8.rst.txt b/_sources/man/v0.8/8/zfs-program.8.rst.txt new file mode 100644 index 000000000..1299e1e38 --- /dev/null +++ b/_sources/man/v0.8/8/zfs-program.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zfs-program.8 + +zfs-program.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zfs-program.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zfs.8.rst.txt b/_sources/man/v0.8/8/zfs.8.rst.txt new file mode 100644 index 000000000..347e69182 --- /dev/null +++ b/_sources/man/v0.8/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zfsprops.8.rst.txt b/_sources/man/v0.8/8/zfsprops.8.rst.txt new file mode 100644 index 000000000..fb51f65d2 --- /dev/null +++ b/_sources/man/v0.8/8/zfsprops.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zfsprops.8 + +zfsprops.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zfsprops.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zgenhostid.8.rst.txt b/_sources/man/v0.8/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..e175327bf --- /dev/null +++ b/_sources/man/v0.8/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zinject.8.rst.txt b/_sources/man/v0.8/8/zinject.8.rst.txt new file mode 100644 index 000000000..8db555875 --- /dev/null +++ b/_sources/man/v0.8/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zpool.8.rst.txt b/_sources/man/v0.8/8/zpool.8.rst.txt new file mode 100644 index 000000000..e771ed419 --- /dev/null +++ b/_sources/man/v0.8/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zstreamdump.8.rst.txt b/_sources/man/v0.8/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..e363c99f5 --- /dev/null +++ b/_sources/man/v0.8/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/index.rst.txt b/_sources/man/v0.8/index.rst.txt new file mode 100644 index 000000000..5b12af500 --- /dev/null +++ b/_sources/man/v0.8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/ + +v0.8 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v2.0/1/arcstat.1.rst.txt b/_sources/man/v2.0/1/arcstat.1.rst.txt new file mode 100644 index 000000000..c33120fe1 --- /dev/null +++ b/_sources/man/v2.0/1/arcstat.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/arcstat.1 + +arcstat.1 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/arcstat.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/1/cstyle.1.rst.txt b/_sources/man/v2.0/1/cstyle.1.rst.txt new file mode 100644 index 000000000..2ea60fd16 --- /dev/null +++ b/_sources/man/v2.0/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/1/index.rst.txt b/_sources/man/v2.0/1/index.rst.txt new file mode 100644 index 000000000..0eef9b1c0 --- /dev/null +++ b/_sources/man/v2.0/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.0/1/raidz_test.1.rst.txt b/_sources/man/v2.0/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..5c1f34a70 --- /dev/null +++ b/_sources/man/v2.0/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/1/zhack.1.rst.txt b/_sources/man/v2.0/1/zhack.1.rst.txt new file mode 100644 index 000000000..30cfe73ee --- /dev/null +++ b/_sources/man/v2.0/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/1/ztest.1.rst.txt b/_sources/man/v2.0/1/ztest.1.rst.txt new file mode 100644 index 000000000..4f8fda834 --- /dev/null +++ b/_sources/man/v2.0/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/1/zvol_wait.1.rst.txt b/_sources/man/v2.0/1/zvol_wait.1.rst.txt new file mode 100644 index 000000000..5a0450a98 --- /dev/null +++ b/_sources/man/v2.0/1/zvol_wait.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/zvol_wait.1 + +zvol_wait.1 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/zvol_wait.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/5/index.rst.txt b/_sources/man/v2.0/5/index.rst.txt new file mode 100644 index 000000000..1af97ff34 --- /dev/null +++ b/_sources/man/v2.0/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.0/5/spl-module-parameters.5.rst.txt b/_sources/man/v2.0/5/spl-module-parameters.5.rst.txt new file mode 100644 index 000000000..d99aca40e --- /dev/null +++ b/_sources/man/v2.0/5/spl-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/spl-module-parameters.5 + +spl-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man5/spl-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/5/vdev_id.conf.5.rst.txt b/_sources/man/v2.0/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..feea5a1f4 --- /dev/null +++ b/_sources/man/v2.0/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/5/zfs-events.5.rst.txt b/_sources/man/v2.0/5/zfs-events.5.rst.txt new file mode 100644 index 000000000..c28504730 --- /dev/null +++ b/_sources/man/v2.0/5/zfs-events.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/zfs-events.5 + +zfs-events.5 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man5/zfs-events.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/5/zfs-module-parameters.5.rst.txt b/_sources/man/v2.0/5/zfs-module-parameters.5.rst.txt new file mode 100644 index 000000000..3218ee4df --- /dev/null +++ b/_sources/man/v2.0/5/zfs-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/zfs-module-parameters.5 + +zfs-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man5/zfs-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/5/zpool-features.5.rst.txt b/_sources/man/v2.0/5/zpool-features.5.rst.txt new file mode 100644 index 000000000..0da76ae71 --- /dev/null +++ b/_sources/man/v2.0/5/zpool-features.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/zpool-features.5 + +zpool-features.5 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man5/zpool-features.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/fsck.zfs.8.rst.txt b/_sources/man/v2.0/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..14e2b9e09 --- /dev/null +++ b/_sources/man/v2.0/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/index.rst.txt b/_sources/man/v2.0/8/index.rst.txt new file mode 100644 index 000000000..3a752f36d --- /dev/null +++ b/_sources/man/v2.0/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.0/8/mount.zfs.8.rst.txt b/_sources/man/v2.0/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..e086ad705 --- /dev/null +++ b/_sources/man/v2.0/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/vdev_id.8.rst.txt b/_sources/man/v2.0/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..557ccc0dd --- /dev/null +++ b/_sources/man/v2.0/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zdb.8.rst.txt b/_sources/man/v2.0/8/zdb.8.rst.txt new file mode 100644 index 000000000..c660f12a8 --- /dev/null +++ b/_sources/man/v2.0/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zed.8.rst.txt b/_sources/man/v2.0/8/zed.8.rst.txt new file mode 100644 index 000000000..8b88ddc27 --- /dev/null +++ b/_sources/man/v2.0/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-allow.8.rst.txt b/_sources/man/v2.0/8/zfs-allow.8.rst.txt new file mode 100644 index 000000000..443e18a9d --- /dev/null +++ b/_sources/man/v2.0/8/zfs-allow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-allow.8 + +zfs-allow.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-allow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-bookmark.8.rst.txt b/_sources/man/v2.0/8/zfs-bookmark.8.rst.txt new file mode 100644 index 000000000..4fef4e902 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-bookmark.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-bookmark.8 + +zfs-bookmark.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-bookmark.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-change-key.8.rst.txt b/_sources/man/v2.0/8/zfs-change-key.8.rst.txt new file mode 100644 index 000000000..eb5a47e95 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-change-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-change-key.8 + +zfs-change-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-change-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-clone.8.rst.txt b/_sources/man/v2.0/8/zfs-clone.8.rst.txt new file mode 100644 index 000000000..e428c95a6 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-clone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-clone.8 + +zfs-clone.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-clone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-create.8.rst.txt b/_sources/man/v2.0/8/zfs-create.8.rst.txt new file mode 100644 index 000000000..82de8cadb --- /dev/null +++ b/_sources/man/v2.0/8/zfs-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-create.8 + +zfs-create.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-destroy.8.rst.txt b/_sources/man/v2.0/8/zfs-destroy.8.rst.txt new file mode 100644 index 000000000..d5ed2f355 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-destroy.8 + +zfs-destroy.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-diff.8.rst.txt b/_sources/man/v2.0/8/zfs-diff.8.rst.txt new file mode 100644 index 000000000..798fbac13 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-diff.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-diff.8 + +zfs-diff.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-diff.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-get.8.rst.txt b/_sources/man/v2.0/8/zfs-get.8.rst.txt new file mode 100644 index 000000000..4ca0901bb --- /dev/null +++ b/_sources/man/v2.0/8/zfs-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-get.8 + +zfs-get.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-groupspace.8.rst.txt b/_sources/man/v2.0/8/zfs-groupspace.8.rst.txt new file mode 100644 index 000000000..634a0d254 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-groupspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-groupspace.8 + +zfs-groupspace.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-groupspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-hold.8.rst.txt b/_sources/man/v2.0/8/zfs-hold.8.rst.txt new file mode 100644 index 000000000..0d0ec6050 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-hold.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-hold.8 + +zfs-hold.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-hold.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-inherit.8.rst.txt b/_sources/man/v2.0/8/zfs-inherit.8.rst.txt new file mode 100644 index 000000000..4c3925b47 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-inherit.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-inherit.8 + +zfs-inherit.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-inherit.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-jail.8.rst.txt b/_sources/man/v2.0/8/zfs-jail.8.rst.txt new file mode 100644 index 000000000..c65e72094 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-jail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-jail.8 + +zfs-jail.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-jail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-list.8.rst.txt b/_sources/man/v2.0/8/zfs-list.8.rst.txt new file mode 100644 index 000000000..10e7fa040 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-list.8 + +zfs-list.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-load-key.8.rst.txt b/_sources/man/v2.0/8/zfs-load-key.8.rst.txt new file mode 100644 index 000000000..1d2e8902f --- /dev/null +++ b/_sources/man/v2.0/8/zfs-load-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-load-key.8 + +zfs-load-key.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-load-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-mount-generator.8.rst.txt b/_sources/man/v2.0/8/zfs-mount-generator.8.rst.txt new file mode 100644 index 000000000..6c7d16c20 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-mount-generator.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-mount-generator.8 + +zfs-mount-generator.8 +===================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-mount-generator.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-mount.8.rst.txt b/_sources/man/v2.0/8/zfs-mount.8.rst.txt new file mode 100644 index 000000000..6aa66de70 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-mount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-mount.8 + +zfs-mount.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-mount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-program.8.rst.txt b/_sources/man/v2.0/8/zfs-program.8.rst.txt new file mode 100644 index 000000000..3f9a12013 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-program.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-program.8 + +zfs-program.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-program.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-project.8.rst.txt b/_sources/man/v2.0/8/zfs-project.8.rst.txt new file mode 100644 index 000000000..6c90e1830 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-project.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-project.8 + +zfs-project.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-project.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-projectspace.8.rst.txt b/_sources/man/v2.0/8/zfs-projectspace.8.rst.txt new file mode 100644 index 000000000..574b2be7f --- /dev/null +++ b/_sources/man/v2.0/8/zfs-projectspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-projectspace.8 + +zfs-projectspace.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-projectspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-promote.8.rst.txt b/_sources/man/v2.0/8/zfs-promote.8.rst.txt new file mode 100644 index 000000000..95edd0be3 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-promote.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-promote.8 + +zfs-promote.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-promote.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-receive.8.rst.txt b/_sources/man/v2.0/8/zfs-receive.8.rst.txt new file mode 100644 index 000000000..45569d4d1 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-receive.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-receive.8 + +zfs-receive.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-receive.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-recv.8.rst.txt b/_sources/man/v2.0/8/zfs-recv.8.rst.txt new file mode 100644 index 000000000..c06bb510d --- /dev/null +++ b/_sources/man/v2.0/8/zfs-recv.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-recv.8 + +zfs-recv.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-recv.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-redact.8.rst.txt b/_sources/man/v2.0/8/zfs-redact.8.rst.txt new file mode 100644 index 000000000..546660ebd --- /dev/null +++ b/_sources/man/v2.0/8/zfs-redact.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-redact.8 + +zfs-redact.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-redact.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-release.8.rst.txt b/_sources/man/v2.0/8/zfs-release.8.rst.txt new file mode 100644 index 000000000..d2eb4b4d9 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-release.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-release.8 + +zfs-release.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-release.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-rename.8.rst.txt b/_sources/man/v2.0/8/zfs-rename.8.rst.txt new file mode 100644 index 000000000..7063d1bef --- /dev/null +++ b/_sources/man/v2.0/8/zfs-rename.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-rename.8 + +zfs-rename.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-rename.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-rollback.8.rst.txt b/_sources/man/v2.0/8/zfs-rollback.8.rst.txt new file mode 100644 index 000000000..80fe00dfb --- /dev/null +++ b/_sources/man/v2.0/8/zfs-rollback.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-rollback.8 + +zfs-rollback.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-rollback.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-send.8.rst.txt b/_sources/man/v2.0/8/zfs-send.8.rst.txt new file mode 100644 index 000000000..ec5c3e502 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-send.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-send.8 + +zfs-send.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-send.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-set.8.rst.txt b/_sources/man/v2.0/8/zfs-set.8.rst.txt new file mode 100644 index 000000000..9020e6166 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-set.8 + +zfs-set.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-share.8.rst.txt b/_sources/man/v2.0/8/zfs-share.8.rst.txt new file mode 100644 index 000000000..20a44cf1f --- /dev/null +++ b/_sources/man/v2.0/8/zfs-share.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-share.8 + +zfs-share.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-share.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-snapshot.8.rst.txt b/_sources/man/v2.0/8/zfs-snapshot.8.rst.txt new file mode 100644 index 000000000..6a22e3219 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-snapshot.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-snapshot.8 + +zfs-snapshot.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-snapshot.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-unallow.8.rst.txt b/_sources/man/v2.0/8/zfs-unallow.8.rst.txt new file mode 100644 index 000000000..2a401cd37 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-unallow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-unallow.8 + +zfs-unallow.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-unallow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-unjail.8.rst.txt b/_sources/man/v2.0/8/zfs-unjail.8.rst.txt new file mode 100644 index 000000000..75350d2cd --- /dev/null +++ b/_sources/man/v2.0/8/zfs-unjail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-unjail.8 + +zfs-unjail.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-unjail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-unload-key.8.rst.txt b/_sources/man/v2.0/8/zfs-unload-key.8.rst.txt new file mode 100644 index 000000000..bc117f140 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-unload-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-unload-key.8 + +zfs-unload-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-unload-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-unmount.8.rst.txt b/_sources/man/v2.0/8/zfs-unmount.8.rst.txt new file mode 100644 index 000000000..4e5ca890d --- /dev/null +++ b/_sources/man/v2.0/8/zfs-unmount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-unmount.8 + +zfs-unmount.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-unmount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-upgrade.8.rst.txt b/_sources/man/v2.0/8/zfs-upgrade.8.rst.txt new file mode 100644 index 000000000..2e807a486 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-upgrade.8 + +zfs-upgrade.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-userspace.8.rst.txt b/_sources/man/v2.0/8/zfs-userspace.8.rst.txt new file mode 100644 index 000000000..1b3e4f4b4 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-userspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-userspace.8 + +zfs-userspace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-userspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-wait.8.rst.txt b/_sources/man/v2.0/8/zfs-wait.8.rst.txt new file mode 100644 index 000000000..e0d78dfd0 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-wait.8 + +zfs-wait.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs.8.rst.txt b/_sources/man/v2.0/8/zfs.8.rst.txt new file mode 100644 index 000000000..5ca7a38ce --- /dev/null +++ b/_sources/man/v2.0/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs_ids_to_path.8.rst.txt b/_sources/man/v2.0/8/zfs_ids_to_path.8.rst.txt new file mode 100644 index 000000000..98c3a7c1f --- /dev/null +++ b/_sources/man/v2.0/8/zfs_ids_to_path.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs_ids_to_path.8 + +zfs_ids_to_path.8 +================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs_ids_to_path.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfsconcepts.8.rst.txt b/_sources/man/v2.0/8/zfsconcepts.8.rst.txt new file mode 100644 index 000000000..e620f8c45 --- /dev/null +++ b/_sources/man/v2.0/8/zfsconcepts.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfsconcepts.8 + +zfsconcepts.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfsconcepts.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfsprops.8.rst.txt b/_sources/man/v2.0/8/zfsprops.8.rst.txt new file mode 100644 index 000000000..1fb9978b9 --- /dev/null +++ b/_sources/man/v2.0/8/zfsprops.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfsprops.8 + +zfsprops.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfsprops.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zgenhostid.8.rst.txt b/_sources/man/v2.0/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..68b3cfd64 --- /dev/null +++ b/_sources/man/v2.0/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zinject.8.rst.txt b/_sources/man/v2.0/8/zinject.8.rst.txt new file mode 100644 index 000000000..49ef330f2 --- /dev/null +++ b/_sources/man/v2.0/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-add.8.rst.txt b/_sources/man/v2.0/8/zpool-add.8.rst.txt new file mode 100644 index 000000000..a137128d7 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-add.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-add.8 + +zpool-add.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-add.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-attach.8.rst.txt b/_sources/man/v2.0/8/zpool-attach.8.rst.txt new file mode 100644 index 000000000..cb989a1ee --- /dev/null +++ b/_sources/man/v2.0/8/zpool-attach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-attach.8 + +zpool-attach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-attach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-checkpoint.8.rst.txt b/_sources/man/v2.0/8/zpool-checkpoint.8.rst.txt new file mode 100644 index 000000000..75045c947 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-checkpoint.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-checkpoint.8 + +zpool-checkpoint.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-checkpoint.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-clear.8.rst.txt b/_sources/man/v2.0/8/zpool-clear.8.rst.txt new file mode 100644 index 000000000..f17298df9 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-clear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-clear.8 + +zpool-clear.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-clear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-create.8.rst.txt b/_sources/man/v2.0/8/zpool-create.8.rst.txt new file mode 100644 index 000000000..74f14c7c4 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-create.8 + +zpool-create.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-destroy.8.rst.txt b/_sources/man/v2.0/8/zpool-destroy.8.rst.txt new file mode 100644 index 000000000..335c29979 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-destroy.8 + +zpool-destroy.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-detach.8.rst.txt b/_sources/man/v2.0/8/zpool-detach.8.rst.txt new file mode 100644 index 000000000..caa2e4f19 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-detach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-detach.8 + +zpool-detach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-detach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-events.8.rst.txt b/_sources/man/v2.0/8/zpool-events.8.rst.txt new file mode 100644 index 000000000..34fa98343 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-events.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-events.8 + +zpool-events.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-events.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-export.8.rst.txt b/_sources/man/v2.0/8/zpool-export.8.rst.txt new file mode 100644 index 000000000..24d8954ed --- /dev/null +++ b/_sources/man/v2.0/8/zpool-export.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-export.8 + +zpool-export.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-export.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-get.8.rst.txt b/_sources/man/v2.0/8/zpool-get.8.rst.txt new file mode 100644 index 000000000..e9d165d89 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-get.8 + +zpool-get.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-history.8.rst.txt b/_sources/man/v2.0/8/zpool-history.8.rst.txt new file mode 100644 index 000000000..fb1196837 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-history.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-history.8 + +zpool-history.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-history.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-import.8.rst.txt b/_sources/man/v2.0/8/zpool-import.8.rst.txt new file mode 100644 index 000000000..4fefc6366 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-import.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-import.8 + +zpool-import.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-import.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-initialize.8.rst.txt b/_sources/man/v2.0/8/zpool-initialize.8.rst.txt new file mode 100644 index 000000000..a6049ba3b --- /dev/null +++ b/_sources/man/v2.0/8/zpool-initialize.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-initialize.8 + +zpool-initialize.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-initialize.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-iostat.8.rst.txt b/_sources/man/v2.0/8/zpool-iostat.8.rst.txt new file mode 100644 index 000000000..4224e46d6 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-iostat.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-iostat.8 + +zpool-iostat.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-iostat.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-labelclear.8.rst.txt b/_sources/man/v2.0/8/zpool-labelclear.8.rst.txt new file mode 100644 index 000000000..453dcf106 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-labelclear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-labelclear.8 + +zpool-labelclear.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-labelclear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-list.8.rst.txt b/_sources/man/v2.0/8/zpool-list.8.rst.txt new file mode 100644 index 000000000..a981d4ed0 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-list.8 + +zpool-list.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-offline.8.rst.txt b/_sources/man/v2.0/8/zpool-offline.8.rst.txt new file mode 100644 index 000000000..1735d904f --- /dev/null +++ b/_sources/man/v2.0/8/zpool-offline.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-offline.8 + +zpool-offline.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-offline.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-online.8.rst.txt b/_sources/man/v2.0/8/zpool-online.8.rst.txt new file mode 100644 index 000000000..b4e74c54a --- /dev/null +++ b/_sources/man/v2.0/8/zpool-online.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-online.8 + +zpool-online.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-online.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-reguid.8.rst.txt b/_sources/man/v2.0/8/zpool-reguid.8.rst.txt new file mode 100644 index 000000000..141a4380c --- /dev/null +++ b/_sources/man/v2.0/8/zpool-reguid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-reguid.8 + +zpool-reguid.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-reguid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-remove.8.rst.txt b/_sources/man/v2.0/8/zpool-remove.8.rst.txt new file mode 100644 index 000000000..db4667f68 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-remove.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-remove.8 + +zpool-remove.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-remove.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-reopen.8.rst.txt b/_sources/man/v2.0/8/zpool-reopen.8.rst.txt new file mode 100644 index 000000000..150a48494 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-reopen.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-reopen.8 + +zpool-reopen.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-reopen.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-replace.8.rst.txt b/_sources/man/v2.0/8/zpool-replace.8.rst.txt new file mode 100644 index 000000000..bc73d5415 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-replace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-replace.8 + +zpool-replace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-replace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-resilver.8.rst.txt b/_sources/man/v2.0/8/zpool-resilver.8.rst.txt new file mode 100644 index 000000000..8e75103da --- /dev/null +++ b/_sources/man/v2.0/8/zpool-resilver.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-resilver.8 + +zpool-resilver.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-resilver.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-scrub.8.rst.txt b/_sources/man/v2.0/8/zpool-scrub.8.rst.txt new file mode 100644 index 000000000..bccc8b22e --- /dev/null +++ b/_sources/man/v2.0/8/zpool-scrub.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-scrub.8 + +zpool-scrub.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-scrub.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-set.8.rst.txt b/_sources/man/v2.0/8/zpool-set.8.rst.txt new file mode 100644 index 000000000..0e218ceb2 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-set.8 + +zpool-set.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-split.8.rst.txt b/_sources/man/v2.0/8/zpool-split.8.rst.txt new file mode 100644 index 000000000..73de77ae1 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-split.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-split.8 + +zpool-split.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-split.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-status.8.rst.txt b/_sources/man/v2.0/8/zpool-status.8.rst.txt new file mode 100644 index 000000000..bacfd18e2 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-status.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-status.8 + +zpool-status.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-status.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-sync.8.rst.txt b/_sources/man/v2.0/8/zpool-sync.8.rst.txt new file mode 100644 index 000000000..531d00e22 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-sync.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-sync.8 + +zpool-sync.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-sync.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-trim.8.rst.txt b/_sources/man/v2.0/8/zpool-trim.8.rst.txt new file mode 100644 index 000000000..ea73cde18 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-trim.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-trim.8 + +zpool-trim.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-trim.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-upgrade.8.rst.txt b/_sources/man/v2.0/8/zpool-upgrade.8.rst.txt new file mode 100644 index 000000000..1429c3192 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-upgrade.8 + +zpool-upgrade.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-wait.8.rst.txt b/_sources/man/v2.0/8/zpool-wait.8.rst.txt new file mode 100644 index 000000000..1365cca74 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-wait.8 + +zpool-wait.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool.8.rst.txt b/_sources/man/v2.0/8/zpool.8.rst.txt new file mode 100644 index 000000000..c3c951048 --- /dev/null +++ b/_sources/man/v2.0/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpoolconcepts.8.rst.txt b/_sources/man/v2.0/8/zpoolconcepts.8.rst.txt new file mode 100644 index 000000000..0d35da910 --- /dev/null +++ b/_sources/man/v2.0/8/zpoolconcepts.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpoolconcepts.8 + +zpoolconcepts.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpoolconcepts.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpoolprops.8.rst.txt b/_sources/man/v2.0/8/zpoolprops.8.rst.txt new file mode 100644 index 000000000..cf3be631e --- /dev/null +++ b/_sources/man/v2.0/8/zpoolprops.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpoolprops.8 + +zpoolprops.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpoolprops.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zstream.8.rst.txt b/_sources/man/v2.0/8/zstream.8.rst.txt new file mode 100644 index 000000000..1177cf86e --- /dev/null +++ b/_sources/man/v2.0/8/zstream.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zstream.8 + +zstream.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zstream.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zstreamdump.8.rst.txt b/_sources/man/v2.0/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..4ea673e21 --- /dev/null +++ b/_sources/man/v2.0/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/index.rst.txt b/_sources/man/v2.0/index.rst.txt new file mode 100644 index 000000000..65e27de1a --- /dev/null +++ b/_sources/man/v2.0/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/ + +v2.0 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v2.1/1/arcstat.1.rst.txt b/_sources/man/v2.1/1/arcstat.1.rst.txt new file mode 100644 index 000000000..2d214b3b2 --- /dev/null +++ b/_sources/man/v2.1/1/arcstat.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man1/arcstat.1 + +arcstat.1 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/arcstat.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/1/cstyle.1.rst.txt b/_sources/man/v2.1/1/cstyle.1.rst.txt new file mode 100644 index 000000000..26b7ab68a --- /dev/null +++ b/_sources/man/v2.1/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/1/index.rst.txt b/_sources/man/v2.1/1/index.rst.txt new file mode 100644 index 000000000..692db8a7f --- /dev/null +++ b/_sources/man/v2.1/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.1/1/raidz_test.1.rst.txt b/_sources/man/v2.1/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..4b5d9a2e1 --- /dev/null +++ b/_sources/man/v2.1/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/1/zhack.1.rst.txt b/_sources/man/v2.1/1/zhack.1.rst.txt new file mode 100644 index 000000000..c13823ee8 --- /dev/null +++ b/_sources/man/v2.1/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/1/ztest.1.rst.txt b/_sources/man/v2.1/1/ztest.1.rst.txt new file mode 100644 index 000000000..9dd72b644 --- /dev/null +++ b/_sources/man/v2.1/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/1/zvol_wait.1.rst.txt b/_sources/man/v2.1/1/zvol_wait.1.rst.txt new file mode 100644 index 000000000..6df93601f --- /dev/null +++ b/_sources/man/v2.1/1/zvol_wait.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man1/zvol_wait.1 + +zvol_wait.1 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/zvol_wait.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/4/index.rst.txt b/_sources/man/v2.1/4/index.rst.txt new file mode 100644 index 000000000..ca0dc884b --- /dev/null +++ b/_sources/man/v2.1/4/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man4/ + +Devices and Special Files (4) +============================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.1/4/spl.4.rst.txt b/_sources/man/v2.1/4/spl.4.rst.txt new file mode 100644 index 000000000..d43b82f3e --- /dev/null +++ b/_sources/man/v2.1/4/spl.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man4/spl.4 + +spl.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man4/spl.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/4/zfs.4.rst.txt b/_sources/man/v2.1/4/zfs.4.rst.txt new file mode 100644 index 000000000..1d178d7cc --- /dev/null +++ b/_sources/man/v2.1/4/zfs.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man4/zfs.4 + +zfs.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man4/zfs.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/5/index.rst.txt b/_sources/man/v2.1/5/index.rst.txt new file mode 100644 index 000000000..c9738b615 --- /dev/null +++ b/_sources/man/v2.1/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.1/5/vdev_id.conf.5.rst.txt b/_sources/man/v2.1/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..568cad7b7 --- /dev/null +++ b/_sources/man/v2.1/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/dracut.zfs.7.rst.txt b/_sources/man/v2.1/7/dracut.zfs.7.rst.txt new file mode 100644 index 000000000..648d29002 --- /dev/null +++ b/_sources/man/v2.1/7/dracut.zfs.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man7/dracut.zfs.7 + +dracut.zfs.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/dracut.zfs.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/index.rst.txt b/_sources/man/v2.1/7/index.rst.txt new file mode 100644 index 000000000..4d0e296fd --- /dev/null +++ b/_sources/man/v2.1/7/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man7/ + +Miscellaneous (7) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.1/7/zfsconcepts.7.rst.txt b/_sources/man/v2.1/7/zfsconcepts.7.rst.txt new file mode 100644 index 000000000..74616944b --- /dev/null +++ b/_sources/man/v2.1/7/zfsconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man7/zfsconcepts.7 + +zfsconcepts.7 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/zfsconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/zfsprops.7.rst.txt b/_sources/man/v2.1/7/zfsprops.7.rst.txt new file mode 100644 index 000000000..e7d25c66e --- /dev/null +++ b/_sources/man/v2.1/7/zfsprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man7/zfsprops.7 + +zfsprops.7 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/zfsprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/zpool-features.7.rst.txt b/_sources/man/v2.1/7/zpool-features.7.rst.txt new file mode 100644 index 000000000..383fb72b6 --- /dev/null +++ b/_sources/man/v2.1/7/zpool-features.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man7/zpool-features.7 + +zpool-features.7 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/zpool-features.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/zpoolconcepts.7.rst.txt b/_sources/man/v2.1/7/zpoolconcepts.7.rst.txt new file mode 100644 index 000000000..51ee265c4 --- /dev/null +++ b/_sources/man/v2.1/7/zpoolconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man7/zpoolconcepts.7 + +zpoolconcepts.7 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/zpoolconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/zpoolprops.7.rst.txt b/_sources/man/v2.1/7/zpoolprops.7.rst.txt new file mode 100644 index 000000000..2b65b5934 --- /dev/null +++ b/_sources/man/v2.1/7/zpoolprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man7/zpoolprops.7 + +zpoolprops.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/zpoolprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/fsck.zfs.8.rst.txt b/_sources/man/v2.1/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..20317a4a3 --- /dev/null +++ b/_sources/man/v2.1/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/index.rst.txt b/_sources/man/v2.1/8/index.rst.txt new file mode 100644 index 000000000..bf0d9a09d --- /dev/null +++ b/_sources/man/v2.1/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.1/8/mount.zfs.8.rst.txt b/_sources/man/v2.1/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..446abf854 --- /dev/null +++ b/_sources/man/v2.1/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/vdev_id.8.rst.txt b/_sources/man/v2.1/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..1fc220ced --- /dev/null +++ b/_sources/man/v2.1/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zdb.8.rst.txt b/_sources/man/v2.1/8/zdb.8.rst.txt new file mode 100644 index 000000000..27eb220ac --- /dev/null +++ b/_sources/man/v2.1/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zed.8.rst.txt b/_sources/man/v2.1/8/zed.8.rst.txt new file mode 100644 index 000000000..e5cb20da4 --- /dev/null +++ b/_sources/man/v2.1/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-allow.8.rst.txt b/_sources/man/v2.1/8/zfs-allow.8.rst.txt new file mode 100644 index 000000000..12b63c3aa --- /dev/null +++ b/_sources/man/v2.1/8/zfs-allow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-allow.8 + +zfs-allow.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-allow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-bookmark.8.rst.txt b/_sources/man/v2.1/8/zfs-bookmark.8.rst.txt new file mode 100644 index 000000000..d149ee388 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-bookmark.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-bookmark.8 + +zfs-bookmark.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-bookmark.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-change-key.8.rst.txt b/_sources/man/v2.1/8/zfs-change-key.8.rst.txt new file mode 100644 index 000000000..f5f31602e --- /dev/null +++ b/_sources/man/v2.1/8/zfs-change-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-change-key.8 + +zfs-change-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-change-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-clone.8.rst.txt b/_sources/man/v2.1/8/zfs-clone.8.rst.txt new file mode 100644 index 000000000..e9683ce9f --- /dev/null +++ b/_sources/man/v2.1/8/zfs-clone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-clone.8 + +zfs-clone.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-clone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-create.8.rst.txt b/_sources/man/v2.1/8/zfs-create.8.rst.txt new file mode 100644 index 000000000..63777d8cb --- /dev/null +++ b/_sources/man/v2.1/8/zfs-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-create.8 + +zfs-create.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-destroy.8.rst.txt b/_sources/man/v2.1/8/zfs-destroy.8.rst.txt new file mode 100644 index 000000000..0ff681b99 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-destroy.8 + +zfs-destroy.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-diff.8.rst.txt b/_sources/man/v2.1/8/zfs-diff.8.rst.txt new file mode 100644 index 000000000..d29f55b30 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-diff.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-diff.8 + +zfs-diff.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-diff.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-get.8.rst.txt b/_sources/man/v2.1/8/zfs-get.8.rst.txt new file mode 100644 index 000000000..ec003fe08 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-get.8 + +zfs-get.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-groupspace.8.rst.txt b/_sources/man/v2.1/8/zfs-groupspace.8.rst.txt new file mode 100644 index 000000000..1f6d65b73 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-groupspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-groupspace.8 + +zfs-groupspace.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-groupspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-hold.8.rst.txt b/_sources/man/v2.1/8/zfs-hold.8.rst.txt new file mode 100644 index 000000000..a46f03739 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-hold.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-hold.8 + +zfs-hold.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-hold.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-inherit.8.rst.txt b/_sources/man/v2.1/8/zfs-inherit.8.rst.txt new file mode 100644 index 000000000..b7614180b --- /dev/null +++ b/_sources/man/v2.1/8/zfs-inherit.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-inherit.8 + +zfs-inherit.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-inherit.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-jail.8.rst.txt b/_sources/man/v2.1/8/zfs-jail.8.rst.txt new file mode 100644 index 000000000..1af66ad88 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-jail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-jail.8 + +zfs-jail.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-jail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-list.8.rst.txt b/_sources/man/v2.1/8/zfs-list.8.rst.txt new file mode 100644 index 000000000..f63c6a0bb --- /dev/null +++ b/_sources/man/v2.1/8/zfs-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-list.8 + +zfs-list.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-load-key.8.rst.txt b/_sources/man/v2.1/8/zfs-load-key.8.rst.txt new file mode 100644 index 000000000..3135970c5 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-load-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-load-key.8 + +zfs-load-key.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-load-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-mount-generator.8.rst.txt b/_sources/man/v2.1/8/zfs-mount-generator.8.rst.txt new file mode 100644 index 000000000..4e047bbc5 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-mount-generator.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-mount-generator.8 + +zfs-mount-generator.8 +===================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-mount-generator.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-mount.8.rst.txt b/_sources/man/v2.1/8/zfs-mount.8.rst.txt new file mode 100644 index 000000000..332edf046 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-mount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-mount.8 + +zfs-mount.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-mount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-program.8.rst.txt b/_sources/man/v2.1/8/zfs-program.8.rst.txt new file mode 100644 index 000000000..a5aa0fcd9 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-program.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-program.8 + +zfs-program.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-program.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-project.8.rst.txt b/_sources/man/v2.1/8/zfs-project.8.rst.txt new file mode 100644 index 000000000..75f4a09a1 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-project.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-project.8 + +zfs-project.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-project.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-projectspace.8.rst.txt b/_sources/man/v2.1/8/zfs-projectspace.8.rst.txt new file mode 100644 index 000000000..56eaeba7a --- /dev/null +++ b/_sources/man/v2.1/8/zfs-projectspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-projectspace.8 + +zfs-projectspace.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-projectspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-promote.8.rst.txt b/_sources/man/v2.1/8/zfs-promote.8.rst.txt new file mode 100644 index 000000000..801d48049 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-promote.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-promote.8 + +zfs-promote.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-promote.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-receive.8.rst.txt b/_sources/man/v2.1/8/zfs-receive.8.rst.txt new file mode 100644 index 000000000..b123bf09e --- /dev/null +++ b/_sources/man/v2.1/8/zfs-receive.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-receive.8 + +zfs-receive.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-receive.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-recv.8.rst.txt b/_sources/man/v2.1/8/zfs-recv.8.rst.txt new file mode 100644 index 000000000..29401a525 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-recv.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-recv.8 + +zfs-recv.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-recv.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-redact.8.rst.txt b/_sources/man/v2.1/8/zfs-redact.8.rst.txt new file mode 100644 index 000000000..78b44597f --- /dev/null +++ b/_sources/man/v2.1/8/zfs-redact.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-redact.8 + +zfs-redact.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-redact.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-release.8.rst.txt b/_sources/man/v2.1/8/zfs-release.8.rst.txt new file mode 100644 index 000000000..8a288e596 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-release.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-release.8 + +zfs-release.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-release.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-rename.8.rst.txt b/_sources/man/v2.1/8/zfs-rename.8.rst.txt new file mode 100644 index 000000000..f180b98f6 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-rename.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-rename.8 + +zfs-rename.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-rename.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-rollback.8.rst.txt b/_sources/man/v2.1/8/zfs-rollback.8.rst.txt new file mode 100644 index 000000000..a31c80d7e --- /dev/null +++ b/_sources/man/v2.1/8/zfs-rollback.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-rollback.8 + +zfs-rollback.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-rollback.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-send.8.rst.txt b/_sources/man/v2.1/8/zfs-send.8.rst.txt new file mode 100644 index 000000000..0b4f93158 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-send.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-send.8 + +zfs-send.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-send.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-set.8.rst.txt b/_sources/man/v2.1/8/zfs-set.8.rst.txt new file mode 100644 index 000000000..23c4f4056 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-set.8 + +zfs-set.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-share.8.rst.txt b/_sources/man/v2.1/8/zfs-share.8.rst.txt new file mode 100644 index 000000000..303a7b3f3 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-share.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-share.8 + +zfs-share.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-share.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-snapshot.8.rst.txt b/_sources/man/v2.1/8/zfs-snapshot.8.rst.txt new file mode 100644 index 000000000..12062dd12 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-snapshot.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-snapshot.8 + +zfs-snapshot.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-snapshot.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-unallow.8.rst.txt b/_sources/man/v2.1/8/zfs-unallow.8.rst.txt new file mode 100644 index 000000000..137f48899 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-unallow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-unallow.8 + +zfs-unallow.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-unallow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-unjail.8.rst.txt b/_sources/man/v2.1/8/zfs-unjail.8.rst.txt new file mode 100644 index 000000000..46f18c58a --- /dev/null +++ b/_sources/man/v2.1/8/zfs-unjail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-unjail.8 + +zfs-unjail.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-unjail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-unload-key.8.rst.txt b/_sources/man/v2.1/8/zfs-unload-key.8.rst.txt new file mode 100644 index 000000000..d7f173af5 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-unload-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-unload-key.8 + +zfs-unload-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-unload-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-unmount.8.rst.txt b/_sources/man/v2.1/8/zfs-unmount.8.rst.txt new file mode 100644 index 000000000..34912825b --- /dev/null +++ b/_sources/man/v2.1/8/zfs-unmount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-unmount.8 + +zfs-unmount.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-unmount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-upgrade.8.rst.txt b/_sources/man/v2.1/8/zfs-upgrade.8.rst.txt new file mode 100644 index 000000000..c8976c84b --- /dev/null +++ b/_sources/man/v2.1/8/zfs-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-upgrade.8 + +zfs-upgrade.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-userspace.8.rst.txt b/_sources/man/v2.1/8/zfs-userspace.8.rst.txt new file mode 100644 index 000000000..5b4afe1be --- /dev/null +++ b/_sources/man/v2.1/8/zfs-userspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-userspace.8 + +zfs-userspace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-userspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-wait.8.rst.txt b/_sources/man/v2.1/8/zfs-wait.8.rst.txt new file mode 100644 index 000000000..726032f8d --- /dev/null +++ b/_sources/man/v2.1/8/zfs-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs-wait.8 + +zfs-wait.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs.8.rst.txt b/_sources/man/v2.1/8/zfs.8.rst.txt new file mode 100644 index 000000000..82bfa9d7d --- /dev/null +++ b/_sources/man/v2.1/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs_ids_to_path.8.rst.txt b/_sources/man/v2.1/8/zfs_ids_to_path.8.rst.txt new file mode 100644 index 000000000..f365d602c --- /dev/null +++ b/_sources/man/v2.1/8/zfs_ids_to_path.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zfs_ids_to_path.8 + +zfs_ids_to_path.8 +================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs_ids_to_path.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zgenhostid.8.rst.txt b/_sources/man/v2.1/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..3385a66f3 --- /dev/null +++ b/_sources/man/v2.1/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zinject.8.rst.txt b/_sources/man/v2.1/8/zinject.8.rst.txt new file mode 100644 index 000000000..53c276ed4 --- /dev/null +++ b/_sources/man/v2.1/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-add.8.rst.txt b/_sources/man/v2.1/8/zpool-add.8.rst.txt new file mode 100644 index 000000000..c8c8f3159 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-add.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-add.8 + +zpool-add.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-add.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-attach.8.rst.txt b/_sources/man/v2.1/8/zpool-attach.8.rst.txt new file mode 100644 index 000000000..663b27559 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-attach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-attach.8 + +zpool-attach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-attach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-checkpoint.8.rst.txt b/_sources/man/v2.1/8/zpool-checkpoint.8.rst.txt new file mode 100644 index 000000000..194462ed8 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-checkpoint.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-checkpoint.8 + +zpool-checkpoint.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-checkpoint.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-clear.8.rst.txt b/_sources/man/v2.1/8/zpool-clear.8.rst.txt new file mode 100644 index 000000000..b7411c3d7 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-clear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-clear.8 + +zpool-clear.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-clear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-create.8.rst.txt b/_sources/man/v2.1/8/zpool-create.8.rst.txt new file mode 100644 index 000000000..4b11d9608 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-create.8 + +zpool-create.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-destroy.8.rst.txt b/_sources/man/v2.1/8/zpool-destroy.8.rst.txt new file mode 100644 index 000000000..f48a44e77 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-destroy.8 + +zpool-destroy.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-detach.8.rst.txt b/_sources/man/v2.1/8/zpool-detach.8.rst.txt new file mode 100644 index 000000000..cf29af77d --- /dev/null +++ b/_sources/man/v2.1/8/zpool-detach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-detach.8 + +zpool-detach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-detach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-events.8.rst.txt b/_sources/man/v2.1/8/zpool-events.8.rst.txt new file mode 100644 index 000000000..23c137ce2 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-events.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-events.8 + +zpool-events.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-events.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-export.8.rst.txt b/_sources/man/v2.1/8/zpool-export.8.rst.txt new file mode 100644 index 000000000..a17dce39f --- /dev/null +++ b/_sources/man/v2.1/8/zpool-export.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-export.8 + +zpool-export.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-export.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-get.8.rst.txt b/_sources/man/v2.1/8/zpool-get.8.rst.txt new file mode 100644 index 000000000..ea5c46b4e --- /dev/null +++ b/_sources/man/v2.1/8/zpool-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-get.8 + +zpool-get.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-history.8.rst.txt b/_sources/man/v2.1/8/zpool-history.8.rst.txt new file mode 100644 index 000000000..1521d38ab --- /dev/null +++ b/_sources/man/v2.1/8/zpool-history.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-history.8 + +zpool-history.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-history.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-import.8.rst.txt b/_sources/man/v2.1/8/zpool-import.8.rst.txt new file mode 100644 index 000000000..6493bd86e --- /dev/null +++ b/_sources/man/v2.1/8/zpool-import.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-import.8 + +zpool-import.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-import.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-initialize.8.rst.txt b/_sources/man/v2.1/8/zpool-initialize.8.rst.txt new file mode 100644 index 000000000..0d86213c8 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-initialize.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-initialize.8 + +zpool-initialize.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-initialize.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-iostat.8.rst.txt b/_sources/man/v2.1/8/zpool-iostat.8.rst.txt new file mode 100644 index 000000000..0e9922e0a --- /dev/null +++ b/_sources/man/v2.1/8/zpool-iostat.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-iostat.8 + +zpool-iostat.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-iostat.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-labelclear.8.rst.txt b/_sources/man/v2.1/8/zpool-labelclear.8.rst.txt new file mode 100644 index 000000000..fe272879a --- /dev/null +++ b/_sources/man/v2.1/8/zpool-labelclear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-labelclear.8 + +zpool-labelclear.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-labelclear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-list.8.rst.txt b/_sources/man/v2.1/8/zpool-list.8.rst.txt new file mode 100644 index 000000000..f79809d59 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-list.8 + +zpool-list.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-offline.8.rst.txt b/_sources/man/v2.1/8/zpool-offline.8.rst.txt new file mode 100644 index 000000000..ec263d179 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-offline.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-offline.8 + +zpool-offline.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-offline.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-online.8.rst.txt b/_sources/man/v2.1/8/zpool-online.8.rst.txt new file mode 100644 index 000000000..04d324f69 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-online.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-online.8 + +zpool-online.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-online.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-reguid.8.rst.txt b/_sources/man/v2.1/8/zpool-reguid.8.rst.txt new file mode 100644 index 000000000..06ed8d39f --- /dev/null +++ b/_sources/man/v2.1/8/zpool-reguid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-reguid.8 + +zpool-reguid.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-reguid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-remove.8.rst.txt b/_sources/man/v2.1/8/zpool-remove.8.rst.txt new file mode 100644 index 000000000..d89e6d162 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-remove.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-remove.8 + +zpool-remove.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-remove.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-reopen.8.rst.txt b/_sources/man/v2.1/8/zpool-reopen.8.rst.txt new file mode 100644 index 000000000..1583e785c --- /dev/null +++ b/_sources/man/v2.1/8/zpool-reopen.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-reopen.8 + +zpool-reopen.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-reopen.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-replace.8.rst.txt b/_sources/man/v2.1/8/zpool-replace.8.rst.txt new file mode 100644 index 000000000..28d2e1596 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-replace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-replace.8 + +zpool-replace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-replace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-resilver.8.rst.txt b/_sources/man/v2.1/8/zpool-resilver.8.rst.txt new file mode 100644 index 000000000..6391cd177 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-resilver.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-resilver.8 + +zpool-resilver.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-resilver.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-scrub.8.rst.txt b/_sources/man/v2.1/8/zpool-scrub.8.rst.txt new file mode 100644 index 000000000..b272ace4c --- /dev/null +++ b/_sources/man/v2.1/8/zpool-scrub.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-scrub.8 + +zpool-scrub.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-scrub.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-set.8.rst.txt b/_sources/man/v2.1/8/zpool-set.8.rst.txt new file mode 100644 index 000000000..4a6f2b63b --- /dev/null +++ b/_sources/man/v2.1/8/zpool-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-set.8 + +zpool-set.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-split.8.rst.txt b/_sources/man/v2.1/8/zpool-split.8.rst.txt new file mode 100644 index 000000000..ea672e695 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-split.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-split.8 + +zpool-split.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-split.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-status.8.rst.txt b/_sources/man/v2.1/8/zpool-status.8.rst.txt new file mode 100644 index 000000000..e43bdf81e --- /dev/null +++ b/_sources/man/v2.1/8/zpool-status.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-status.8 + +zpool-status.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-status.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-sync.8.rst.txt b/_sources/man/v2.1/8/zpool-sync.8.rst.txt new file mode 100644 index 000000000..769e40b7f --- /dev/null +++ b/_sources/man/v2.1/8/zpool-sync.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-sync.8 + +zpool-sync.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-sync.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-trim.8.rst.txt b/_sources/man/v2.1/8/zpool-trim.8.rst.txt new file mode 100644 index 000000000..95db90d30 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-trim.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-trim.8 + +zpool-trim.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-trim.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-upgrade.8.rst.txt b/_sources/man/v2.1/8/zpool-upgrade.8.rst.txt new file mode 100644 index 000000000..c56002d80 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-upgrade.8 + +zpool-upgrade.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-wait.8.rst.txt b/_sources/man/v2.1/8/zpool-wait.8.rst.txt new file mode 100644 index 000000000..59d510743 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool-wait.8 + +zpool-wait.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool.8.rst.txt b/_sources/man/v2.1/8/zpool.8.rst.txt new file mode 100644 index 000000000..8f27bd393 --- /dev/null +++ b/_sources/man/v2.1/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool_influxdb.8.rst.txt b/_sources/man/v2.1/8/zpool_influxdb.8.rst.txt new file mode 100644 index 000000000..a393b2209 --- /dev/null +++ b/_sources/man/v2.1/8/zpool_influxdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zpool_influxdb.8 + +zpool_influxdb.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool_influxdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zstream.8.rst.txt b/_sources/man/v2.1/8/zstream.8.rst.txt new file mode 100644 index 000000000..316a2b97c --- /dev/null +++ b/_sources/man/v2.1/8/zstream.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zstream.8 + +zstream.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zstream.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zstreamdump.8.rst.txt b/_sources/man/v2.1/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..4f517cf5e --- /dev/null +++ b/_sources/man/v2.1/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/index.rst.txt b/_sources/man/v2.1/index.rst.txt new file mode 100644 index 000000000..49ed31fc0 --- /dev/null +++ b/_sources/man/v2.1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.14/man/ + +v2.1 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v2.2/1/arcstat.1.rst.txt b/_sources/man/v2.2/1/arcstat.1.rst.txt new file mode 100644 index 000000000..e66628da6 --- /dev/null +++ b/_sources/man/v2.2/1/arcstat.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man1/arcstat.1 + +arcstat.1 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/arcstat.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/cstyle.1.rst.txt b/_sources/man/v2.2/1/cstyle.1.rst.txt new file mode 100644 index 000000000..346641b48 --- /dev/null +++ b/_sources/man/v2.2/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/index.rst.txt b/_sources/man/v2.2/1/index.rst.txt new file mode 100644 index 000000000..a9a81d93a --- /dev/null +++ b/_sources/man/v2.2/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.2/1/raidz_test.1.rst.txt b/_sources/man/v2.2/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..c9beba4fa --- /dev/null +++ b/_sources/man/v2.2/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/test-runner.1.rst.txt b/_sources/man/v2.2/1/test-runner.1.rst.txt new file mode 100644 index 000000000..0baaa3d78 --- /dev/null +++ b/_sources/man/v2.2/1/test-runner.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man1/test-runner.1 + +test-runner.1 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/test-runner.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/zhack.1.rst.txt b/_sources/man/v2.2/1/zhack.1.rst.txt new file mode 100644 index 000000000..7b605dd5f --- /dev/null +++ b/_sources/man/v2.2/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/ztest.1.rst.txt b/_sources/man/v2.2/1/ztest.1.rst.txt new file mode 100644 index 000000000..bfa641f51 --- /dev/null +++ b/_sources/man/v2.2/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/zvol_wait.1.rst.txt b/_sources/man/v2.2/1/zvol_wait.1.rst.txt new file mode 100644 index 000000000..030dad25a --- /dev/null +++ b/_sources/man/v2.2/1/zvol_wait.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man1/zvol_wait.1 + +zvol_wait.1 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/zvol_wait.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/4/index.rst.txt b/_sources/man/v2.2/4/index.rst.txt new file mode 100644 index 000000000..5581b4b50 --- /dev/null +++ b/_sources/man/v2.2/4/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man4/ + +Devices and Special Files (4) +============================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.2/4/spl.4.rst.txt b/_sources/man/v2.2/4/spl.4.rst.txt new file mode 100644 index 000000000..371496976 --- /dev/null +++ b/_sources/man/v2.2/4/spl.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man4/spl.4 + +spl.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man4/spl.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/4/zfs.4.rst.txt b/_sources/man/v2.2/4/zfs.4.rst.txt new file mode 100644 index 000000000..3c11e4e98 --- /dev/null +++ b/_sources/man/v2.2/4/zfs.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man4/zfs.4 + +zfs.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man4/zfs.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/5/index.rst.txt b/_sources/man/v2.2/5/index.rst.txt new file mode 100644 index 000000000..66724569b --- /dev/null +++ b/_sources/man/v2.2/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.2/5/vdev_id.conf.5.rst.txt b/_sources/man/v2.2/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..1f7ee29c4 --- /dev/null +++ b/_sources/man/v2.2/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/dracut.zfs.7.rst.txt b/_sources/man/v2.2/7/dracut.zfs.7.rst.txt new file mode 100644 index 000000000..4f407870f --- /dev/null +++ b/_sources/man/v2.2/7/dracut.zfs.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man7/dracut.zfs.7 + +dracut.zfs.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/dracut.zfs.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/index.rst.txt b/_sources/man/v2.2/7/index.rst.txt new file mode 100644 index 000000000..527f65cec --- /dev/null +++ b/_sources/man/v2.2/7/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man7/ + +Miscellaneous (7) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.2/7/vdevprops.7.rst.txt b/_sources/man/v2.2/7/vdevprops.7.rst.txt new file mode 100644 index 000000000..6cb36eecc --- /dev/null +++ b/_sources/man/v2.2/7/vdevprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man7/vdevprops.7 + +vdevprops.7 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/vdevprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/zfsconcepts.7.rst.txt b/_sources/man/v2.2/7/zfsconcepts.7.rst.txt new file mode 100644 index 000000000..94d1ebd7f --- /dev/null +++ b/_sources/man/v2.2/7/zfsconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man7/zfsconcepts.7 + +zfsconcepts.7 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/zfsconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/zfsprops.7.rst.txt b/_sources/man/v2.2/7/zfsprops.7.rst.txt new file mode 100644 index 000000000..29994ef4a --- /dev/null +++ b/_sources/man/v2.2/7/zfsprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man7/zfsprops.7 + +zfsprops.7 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/zfsprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/zpool-features.7.rst.txt b/_sources/man/v2.2/7/zpool-features.7.rst.txt new file mode 100644 index 000000000..8ae7e6162 --- /dev/null +++ b/_sources/man/v2.2/7/zpool-features.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man7/zpool-features.7 + +zpool-features.7 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/zpool-features.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/zpoolconcepts.7.rst.txt b/_sources/man/v2.2/7/zpoolconcepts.7.rst.txt new file mode 100644 index 000000000..e6b5702ff --- /dev/null +++ b/_sources/man/v2.2/7/zpoolconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man7/zpoolconcepts.7 + +zpoolconcepts.7 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/zpoolconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/zpoolprops.7.rst.txt b/_sources/man/v2.2/7/zpoolprops.7.rst.txt new file mode 100644 index 000000000..c3993349c --- /dev/null +++ b/_sources/man/v2.2/7/zpoolprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man7/zpoolprops.7 + +zpoolprops.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/zpoolprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/fsck.zfs.8.rst.txt b/_sources/man/v2.2/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..1332d69a9 --- /dev/null +++ b/_sources/man/v2.2/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/index.rst.txt b/_sources/man/v2.2/8/index.rst.txt new file mode 100644 index 000000000..d9480aeb5 --- /dev/null +++ b/_sources/man/v2.2/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.2/8/mount.zfs.8.rst.txt b/_sources/man/v2.2/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..c884dd8a0 --- /dev/null +++ b/_sources/man/v2.2/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/vdev_id.8.rst.txt b/_sources/man/v2.2/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..939f0ac6f --- /dev/null +++ b/_sources/man/v2.2/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zdb.8.rst.txt b/_sources/man/v2.2/8/zdb.8.rst.txt new file mode 100644 index 000000000..f9394c5ac --- /dev/null +++ b/_sources/man/v2.2/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zed.8.rst.txt b/_sources/man/v2.2/8/zed.8.rst.txt new file mode 100644 index 000000000..c86e409de --- /dev/null +++ b/_sources/man/v2.2/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-allow.8.rst.txt b/_sources/man/v2.2/8/zfs-allow.8.rst.txt new file mode 100644 index 000000000..929d1ba69 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-allow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-allow.8 + +zfs-allow.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-allow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-bookmark.8.rst.txt b/_sources/man/v2.2/8/zfs-bookmark.8.rst.txt new file mode 100644 index 000000000..0dffd748a --- /dev/null +++ b/_sources/man/v2.2/8/zfs-bookmark.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-bookmark.8 + +zfs-bookmark.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-bookmark.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-change-key.8.rst.txt b/_sources/man/v2.2/8/zfs-change-key.8.rst.txt new file mode 100644 index 000000000..3f8e298a6 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-change-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-change-key.8 + +zfs-change-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-change-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-clone.8.rst.txt b/_sources/man/v2.2/8/zfs-clone.8.rst.txt new file mode 100644 index 000000000..ebed6e3c2 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-clone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-clone.8 + +zfs-clone.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-clone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-create.8.rst.txt b/_sources/man/v2.2/8/zfs-create.8.rst.txt new file mode 100644 index 000000000..728d2ccdc --- /dev/null +++ b/_sources/man/v2.2/8/zfs-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-create.8 + +zfs-create.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-destroy.8.rst.txt b/_sources/man/v2.2/8/zfs-destroy.8.rst.txt new file mode 100644 index 000000000..729196cf7 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-destroy.8 + +zfs-destroy.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-diff.8.rst.txt b/_sources/man/v2.2/8/zfs-diff.8.rst.txt new file mode 100644 index 000000000..0f94d4217 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-diff.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-diff.8 + +zfs-diff.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-diff.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-get.8.rst.txt b/_sources/man/v2.2/8/zfs-get.8.rst.txt new file mode 100644 index 000000000..14accb4e7 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-get.8 + +zfs-get.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-groupspace.8.rst.txt b/_sources/man/v2.2/8/zfs-groupspace.8.rst.txt new file mode 100644 index 000000000..ead3c777c --- /dev/null +++ b/_sources/man/v2.2/8/zfs-groupspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-groupspace.8 + +zfs-groupspace.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-groupspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-hold.8.rst.txt b/_sources/man/v2.2/8/zfs-hold.8.rst.txt new file mode 100644 index 000000000..d0828b9ce --- /dev/null +++ b/_sources/man/v2.2/8/zfs-hold.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-hold.8 + +zfs-hold.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-hold.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-inherit.8.rst.txt b/_sources/man/v2.2/8/zfs-inherit.8.rst.txt new file mode 100644 index 000000000..d7e7dbd00 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-inherit.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-inherit.8 + +zfs-inherit.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-inherit.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-jail.8.rst.txt b/_sources/man/v2.2/8/zfs-jail.8.rst.txt new file mode 100644 index 000000000..cd2b359f9 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-jail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-jail.8 + +zfs-jail.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-jail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-list.8.rst.txt b/_sources/man/v2.2/8/zfs-list.8.rst.txt new file mode 100644 index 000000000..734de875d --- /dev/null +++ b/_sources/man/v2.2/8/zfs-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-list.8 + +zfs-list.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-load-key.8.rst.txt b/_sources/man/v2.2/8/zfs-load-key.8.rst.txt new file mode 100644 index 000000000..987fd8a96 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-load-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-load-key.8 + +zfs-load-key.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-load-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-mount-generator.8.rst.txt b/_sources/man/v2.2/8/zfs-mount-generator.8.rst.txt new file mode 100644 index 000000000..0cb46100d --- /dev/null +++ b/_sources/man/v2.2/8/zfs-mount-generator.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-mount-generator.8 + +zfs-mount-generator.8 +===================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-mount-generator.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-mount.8.rst.txt b/_sources/man/v2.2/8/zfs-mount.8.rst.txt new file mode 100644 index 000000000..051cfc1f0 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-mount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-mount.8 + +zfs-mount.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-mount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-program.8.rst.txt b/_sources/man/v2.2/8/zfs-program.8.rst.txt new file mode 100644 index 000000000..136dd5d25 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-program.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-program.8 + +zfs-program.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-program.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-project.8.rst.txt b/_sources/man/v2.2/8/zfs-project.8.rst.txt new file mode 100644 index 000000000..02a5fca7a --- /dev/null +++ b/_sources/man/v2.2/8/zfs-project.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-project.8 + +zfs-project.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-project.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-projectspace.8.rst.txt b/_sources/man/v2.2/8/zfs-projectspace.8.rst.txt new file mode 100644 index 000000000..b6784a2d4 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-projectspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-projectspace.8 + +zfs-projectspace.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-projectspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-promote.8.rst.txt b/_sources/man/v2.2/8/zfs-promote.8.rst.txt new file mode 100644 index 000000000..6c068e62f --- /dev/null +++ b/_sources/man/v2.2/8/zfs-promote.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-promote.8 + +zfs-promote.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-promote.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-receive.8.rst.txt b/_sources/man/v2.2/8/zfs-receive.8.rst.txt new file mode 100644 index 000000000..fee288ea4 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-receive.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-receive.8 + +zfs-receive.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-receive.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-recv.8.rst.txt b/_sources/man/v2.2/8/zfs-recv.8.rst.txt new file mode 100644 index 000000000..fb54d822d --- /dev/null +++ b/_sources/man/v2.2/8/zfs-recv.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-recv.8 + +zfs-recv.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-recv.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-redact.8.rst.txt b/_sources/man/v2.2/8/zfs-redact.8.rst.txt new file mode 100644 index 000000000..e31a1f8bd --- /dev/null +++ b/_sources/man/v2.2/8/zfs-redact.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-redact.8 + +zfs-redact.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-redact.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-release.8.rst.txt b/_sources/man/v2.2/8/zfs-release.8.rst.txt new file mode 100644 index 000000000..ef9d1bd37 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-release.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-release.8 + +zfs-release.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-release.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-rename.8.rst.txt b/_sources/man/v2.2/8/zfs-rename.8.rst.txt new file mode 100644 index 000000000..929dfbccb --- /dev/null +++ b/_sources/man/v2.2/8/zfs-rename.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-rename.8 + +zfs-rename.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-rename.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-rollback.8.rst.txt b/_sources/man/v2.2/8/zfs-rollback.8.rst.txt new file mode 100644 index 000000000..899803651 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-rollback.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-rollback.8 + +zfs-rollback.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-rollback.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-send.8.rst.txt b/_sources/man/v2.2/8/zfs-send.8.rst.txt new file mode 100644 index 000000000..1627e43a4 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-send.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-send.8 + +zfs-send.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-send.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-set.8.rst.txt b/_sources/man/v2.2/8/zfs-set.8.rst.txt new file mode 100644 index 000000000..ff5573637 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-set.8 + +zfs-set.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-share.8.rst.txt b/_sources/man/v2.2/8/zfs-share.8.rst.txt new file mode 100644 index 000000000..ddf44d3c3 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-share.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-share.8 + +zfs-share.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-share.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-snapshot.8.rst.txt b/_sources/man/v2.2/8/zfs-snapshot.8.rst.txt new file mode 100644 index 000000000..c3c8aadd0 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-snapshot.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-snapshot.8 + +zfs-snapshot.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-snapshot.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-unallow.8.rst.txt b/_sources/man/v2.2/8/zfs-unallow.8.rst.txt new file mode 100644 index 000000000..1fcd50aec --- /dev/null +++ b/_sources/man/v2.2/8/zfs-unallow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-unallow.8 + +zfs-unallow.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-unallow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-unjail.8.rst.txt b/_sources/man/v2.2/8/zfs-unjail.8.rst.txt new file mode 100644 index 000000000..24b262f72 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-unjail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-unjail.8 + +zfs-unjail.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-unjail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-unload-key.8.rst.txt b/_sources/man/v2.2/8/zfs-unload-key.8.rst.txt new file mode 100644 index 000000000..1b3572b76 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-unload-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-unload-key.8 + +zfs-unload-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-unload-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-unmount.8.rst.txt b/_sources/man/v2.2/8/zfs-unmount.8.rst.txt new file mode 100644 index 000000000..b68e892cc --- /dev/null +++ b/_sources/man/v2.2/8/zfs-unmount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-unmount.8 + +zfs-unmount.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-unmount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-unzone.8.rst.txt b/_sources/man/v2.2/8/zfs-unzone.8.rst.txt new file mode 100644 index 000000000..107047675 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-unzone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-unzone.8 + +zfs-unzone.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-unzone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-upgrade.8.rst.txt b/_sources/man/v2.2/8/zfs-upgrade.8.rst.txt new file mode 100644 index 000000000..3dab7f205 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-upgrade.8 + +zfs-upgrade.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-userspace.8.rst.txt b/_sources/man/v2.2/8/zfs-userspace.8.rst.txt new file mode 100644 index 000000000..2cb1fe594 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-userspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-userspace.8 + +zfs-userspace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-userspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-wait.8.rst.txt b/_sources/man/v2.2/8/zfs-wait.8.rst.txt new file mode 100644 index 000000000..5466cd033 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-wait.8 + +zfs-wait.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-zone.8.rst.txt b/_sources/man/v2.2/8/zfs-zone.8.rst.txt new file mode 100644 index 000000000..7d12be107 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-zone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs-zone.8 + +zfs-zone.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-zone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs.8.rst.txt b/_sources/man/v2.2/8/zfs.8.rst.txt new file mode 100644 index 000000000..f421623a5 --- /dev/null +++ b/_sources/man/v2.2/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs_ids_to_path.8.rst.txt b/_sources/man/v2.2/8/zfs_ids_to_path.8.rst.txt new file mode 100644 index 000000000..b70c823b9 --- /dev/null +++ b/_sources/man/v2.2/8/zfs_ids_to_path.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs_ids_to_path.8 + +zfs_ids_to_path.8 +================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs_ids_to_path.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs_prepare_disk.8.rst.txt b/_sources/man/v2.2/8/zfs_prepare_disk.8.rst.txt new file mode 100644 index 000000000..48493108d --- /dev/null +++ b/_sources/man/v2.2/8/zfs_prepare_disk.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zfs_prepare_disk.8 + +zfs_prepare_disk.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs_prepare_disk.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zgenhostid.8.rst.txt b/_sources/man/v2.2/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..39245745d --- /dev/null +++ b/_sources/man/v2.2/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zinject.8.rst.txt b/_sources/man/v2.2/8/zinject.8.rst.txt new file mode 100644 index 000000000..d219d7187 --- /dev/null +++ b/_sources/man/v2.2/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-add.8.rst.txt b/_sources/man/v2.2/8/zpool-add.8.rst.txt new file mode 100644 index 000000000..66343b028 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-add.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-add.8 + +zpool-add.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-add.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-attach.8.rst.txt b/_sources/man/v2.2/8/zpool-attach.8.rst.txt new file mode 100644 index 000000000..3b42b6032 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-attach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-attach.8 + +zpool-attach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-attach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-checkpoint.8.rst.txt b/_sources/man/v2.2/8/zpool-checkpoint.8.rst.txt new file mode 100644 index 000000000..bdcf46ce1 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-checkpoint.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-checkpoint.8 + +zpool-checkpoint.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-checkpoint.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-clear.8.rst.txt b/_sources/man/v2.2/8/zpool-clear.8.rst.txt new file mode 100644 index 000000000..3cddf23cf --- /dev/null +++ b/_sources/man/v2.2/8/zpool-clear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-clear.8 + +zpool-clear.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-clear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-create.8.rst.txt b/_sources/man/v2.2/8/zpool-create.8.rst.txt new file mode 100644 index 000000000..ad4b2ab2e --- /dev/null +++ b/_sources/man/v2.2/8/zpool-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-create.8 + +zpool-create.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-destroy.8.rst.txt b/_sources/man/v2.2/8/zpool-destroy.8.rst.txt new file mode 100644 index 000000000..d2eee3a1a --- /dev/null +++ b/_sources/man/v2.2/8/zpool-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-destroy.8 + +zpool-destroy.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-detach.8.rst.txt b/_sources/man/v2.2/8/zpool-detach.8.rst.txt new file mode 100644 index 000000000..3f19c3737 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-detach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-detach.8 + +zpool-detach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-detach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-events.8.rst.txt b/_sources/man/v2.2/8/zpool-events.8.rst.txt new file mode 100644 index 000000000..39da2bace --- /dev/null +++ b/_sources/man/v2.2/8/zpool-events.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-events.8 + +zpool-events.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-events.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-export.8.rst.txt b/_sources/man/v2.2/8/zpool-export.8.rst.txt new file mode 100644 index 000000000..b544024e7 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-export.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-export.8 + +zpool-export.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-export.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-get.8.rst.txt b/_sources/man/v2.2/8/zpool-get.8.rst.txt new file mode 100644 index 000000000..123feef85 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-get.8 + +zpool-get.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-history.8.rst.txt b/_sources/man/v2.2/8/zpool-history.8.rst.txt new file mode 100644 index 000000000..98fae35f3 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-history.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-history.8 + +zpool-history.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-history.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-import.8.rst.txt b/_sources/man/v2.2/8/zpool-import.8.rst.txt new file mode 100644 index 000000000..46919923c --- /dev/null +++ b/_sources/man/v2.2/8/zpool-import.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-import.8 + +zpool-import.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-import.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-initialize.8.rst.txt b/_sources/man/v2.2/8/zpool-initialize.8.rst.txt new file mode 100644 index 000000000..b998d237d --- /dev/null +++ b/_sources/man/v2.2/8/zpool-initialize.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-initialize.8 + +zpool-initialize.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-initialize.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-iostat.8.rst.txt b/_sources/man/v2.2/8/zpool-iostat.8.rst.txt new file mode 100644 index 000000000..116209569 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-iostat.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-iostat.8 + +zpool-iostat.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-iostat.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-labelclear.8.rst.txt b/_sources/man/v2.2/8/zpool-labelclear.8.rst.txt new file mode 100644 index 000000000..e311a0725 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-labelclear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-labelclear.8 + +zpool-labelclear.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-labelclear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-list.8.rst.txt b/_sources/man/v2.2/8/zpool-list.8.rst.txt new file mode 100644 index 000000000..924ace88f --- /dev/null +++ b/_sources/man/v2.2/8/zpool-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-list.8 + +zpool-list.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-offline.8.rst.txt b/_sources/man/v2.2/8/zpool-offline.8.rst.txt new file mode 100644 index 000000000..d7015af07 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-offline.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-offline.8 + +zpool-offline.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-offline.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-online.8.rst.txt b/_sources/man/v2.2/8/zpool-online.8.rst.txt new file mode 100644 index 000000000..e44dfb8e4 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-online.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-online.8 + +zpool-online.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-online.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-reguid.8.rst.txt b/_sources/man/v2.2/8/zpool-reguid.8.rst.txt new file mode 100644 index 000000000..1135750f3 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-reguid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-reguid.8 + +zpool-reguid.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-reguid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-remove.8.rst.txt b/_sources/man/v2.2/8/zpool-remove.8.rst.txt new file mode 100644 index 000000000..1b8a46861 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-remove.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-remove.8 + +zpool-remove.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-remove.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-reopen.8.rst.txt b/_sources/man/v2.2/8/zpool-reopen.8.rst.txt new file mode 100644 index 000000000..bcb0680a5 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-reopen.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-reopen.8 + +zpool-reopen.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-reopen.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-replace.8.rst.txt b/_sources/man/v2.2/8/zpool-replace.8.rst.txt new file mode 100644 index 000000000..e3537a908 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-replace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-replace.8 + +zpool-replace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-replace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-resilver.8.rst.txt b/_sources/man/v2.2/8/zpool-resilver.8.rst.txt new file mode 100644 index 000000000..094ea1b47 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-resilver.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-resilver.8 + +zpool-resilver.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-resilver.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-scrub.8.rst.txt b/_sources/man/v2.2/8/zpool-scrub.8.rst.txt new file mode 100644 index 000000000..5e5bde696 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-scrub.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-scrub.8 + +zpool-scrub.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-scrub.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-set.8.rst.txt b/_sources/man/v2.2/8/zpool-set.8.rst.txt new file mode 100644 index 000000000..527d9f338 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-set.8 + +zpool-set.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-split.8.rst.txt b/_sources/man/v2.2/8/zpool-split.8.rst.txt new file mode 100644 index 000000000..65caed9d4 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-split.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-split.8 + +zpool-split.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-split.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-status.8.rst.txt b/_sources/man/v2.2/8/zpool-status.8.rst.txt new file mode 100644 index 000000000..e1113a73e --- /dev/null +++ b/_sources/man/v2.2/8/zpool-status.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-status.8 + +zpool-status.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-status.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-sync.8.rst.txt b/_sources/man/v2.2/8/zpool-sync.8.rst.txt new file mode 100644 index 000000000..37744e8f7 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-sync.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-sync.8 + +zpool-sync.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-sync.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-trim.8.rst.txt b/_sources/man/v2.2/8/zpool-trim.8.rst.txt new file mode 100644 index 000000000..25bbe0add --- /dev/null +++ b/_sources/man/v2.2/8/zpool-trim.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-trim.8 + +zpool-trim.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-trim.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-upgrade.8.rst.txt b/_sources/man/v2.2/8/zpool-upgrade.8.rst.txt new file mode 100644 index 000000000..35858cbfa --- /dev/null +++ b/_sources/man/v2.2/8/zpool-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-upgrade.8 + +zpool-upgrade.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-wait.8.rst.txt b/_sources/man/v2.2/8/zpool-wait.8.rst.txt new file mode 100644 index 000000000..9f3f6a95f --- /dev/null +++ b/_sources/man/v2.2/8/zpool-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool-wait.8 + +zpool-wait.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool.8.rst.txt b/_sources/man/v2.2/8/zpool.8.rst.txt new file mode 100644 index 000000000..b301d7121 --- /dev/null +++ b/_sources/man/v2.2/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool_influxdb.8.rst.txt b/_sources/man/v2.2/8/zpool_influxdb.8.rst.txt new file mode 100644 index 000000000..9ca16c897 --- /dev/null +++ b/_sources/man/v2.2/8/zpool_influxdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zpool_influxdb.8 + +zpool_influxdb.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool_influxdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zstream.8.rst.txt b/_sources/man/v2.2/8/zstream.8.rst.txt new file mode 100644 index 000000000..8085c5e25 --- /dev/null +++ b/_sources/man/v2.2/8/zstream.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zstream.8 + +zstream.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zstream.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zstreamdump.8.rst.txt b/_sources/man/v2.2/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..1851e718e --- /dev/null +++ b/_sources/man/v2.2/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/index.rst.txt b/_sources/man/v2.2/index.rst.txt new file mode 100644 index 000000000..9e481c2c1 --- /dev/null +++ b/_sources/man/v2.2/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.2/man/ + +v2.2 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/msg/ZFS-8000-14/index.rst.txt b/_sources/msg/ZFS-8000-14/index.rst.txt new file mode 100644 index 000000000..5084bfcd8 --- /dev/null +++ b/_sources/msg/ZFS-8000-14/index.rst.txt @@ -0,0 +1,82 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-14 +======================= + +Corrupt ZFS cache +----------------- + ++-------------------------+--------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------+ +| **Severity:** | Critical | ++-------------------------+--------------------------------------+ +| **Description:** | The ZFS cache file is corrupted. | ++-------------------------+--------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------+ +| **Impact:** | ZFS filesystems are not available. | ++-------------------------+--------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +ZFS keeps a list of active pools on the filesystem to avoid having to +scan all devices when the system is booted. If this file is corrupted, +then normally active pools will not be automatically opened. The pools +can be recovered using the ``zpool import`` command: + +:: + + # zpool import + pool: test + id: 12743384782310107047 + state: ONLINE + action: The pool can be imported using its name or numeric identifier. + config: + + test ONLINE + sda9 ONLINE + +This will automatically scan ``/dev`` for any devices part of a pool. +If devices have been made available in an alternate location, use the +``-d`` option to ``zpool import`` to search for devices in a different +directory. + +Once you have determined which pools are available for import, you +can import the pool explicitly by specifying the name or numeric +identifier: + +:: + + # zpool import test + +Alternately, you can import all available pools by specifying the ``-a`` +option. Once a pool has been imported, the ZFS cache will be repaired +so that the pool will appear normally in the future. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-14`` indicates a corrupted ZFS cache file. +Take the documented action to resolve the problem. 
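A minimal sketch of the two ``zpool import`` variants mentioned in the ZFS-8000-14 text above; ``/mnt/disks`` is a placeholder for wherever the devices were actually made available, not a path taken from the original output:

::

    # zpool import -a
    # zpool import -d /mnt/disks

The first form imports every pool that can be discovered by scanning ``/dev``; the second scans the named directory instead, which is the ``-d`` usage the suggested action describes.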
diff --git a/_sources/msg/ZFS-8000-2Q/index.rst.txt b/_sources/msg/ZFS-8000-2Q/index.rst.txt new file mode 100644 index 000000000..3eac49fa6 --- /dev/null +++ b/_sources/msg/ZFS-8000-2Q/index.rst.txt @@ -0,0 +1,134 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-2Q +======================= + +Missing device in replicated configuration +------------------------------------------ + ++-------------------------+--------------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------------+ +| **Severity:** | Major | ++-------------------------+--------------------------------------------------+ +| **Description:** | A device in a replicated configuration could not | +| | be opened. | ++-------------------------+--------------------------------------------------+ +| **Automated Response:** | A hot spare will be activated if available. | ++-------------------------+--------------------------------------------------+ +| **Impact:** | The pool is no longer providing the configured | +| | level of replication. | ++-------------------------+--------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +.. rubric:: For an active pool: + +If this error was encountered while running ``zpool import``, please +see the section below. Otherwise, run ``zpool status -x`` to determine +which pool has experienced a failure: + +:: + + # zpool status -x + pool: test + state: DEGRADED + status: One or more devices could not be opened. Sufficient replicas exist for + the pool to continue functioning in a degraded state. + action: Attach the missing device and online it using 'zpool online'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test DEGRADED 0 0 0 + mirror DEGRADED 0 0 0 + c0t0d0 ONLINE 0 0 0 + c0t0d1 FAULTED 0 0 0 cannot open + + errors: No known data errors + +Determine which device failed to open by looking for a FAULTED device +with an additional 'cannot open' message. If this device has been +inadvertently removed from the system, attach the device and bring it +online with ``zpool online``: + +:: + + # zpool online test c0t0d1 + +If the device is no longer available, the device can be replaced +using the ``zpool replace`` command: + +:: + + # zpool replace test c0t0d1 c0t0d2 + +If the device has been replaced by another disk in the same physical +slot, then the device can be replaced using a single argument to the +``zpool replace`` command: + +:: + + # zpool replace test c0t0d1 + +Existing data will be resilvered to the new device. 
Once the +resilvering completes, the device will be removed from the pool. + +.. rubric:: For an exported pool: + +If this error is encountered during a ``zpool import``, it means that +one of the devices is not attached to the system: + +:: + + # zpool import + pool: test + id: 10121266328238932306 + state: DEGRADED + status: One or more devices are missing from the system. + action: The pool can be imported despite missing or damaged devices. The + fault tolerance of the pool may be compromised if imported. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q + config: + + test DEGRADED + mirror DEGRADED + c0t0d0 ONLINE + c0t0d1 FAULTED cannot open + +Unlike when the pool is active on the system, the device cannot be +replaced while the pool is exported. If the device can be attached to +the system, attach the device and run ``zpool import`` again. + +Alternatively, the pool can be imported as-is, though it will be +placed in the DEGRADED state due to a missing device. The device will +be marked as UNAVAIL. Once the pool has been imported, the missing +device can be replaced as described above. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-2Q`` indicates a device which was unable +to be opened by the ZFS subsystem. diff --git a/_sources/msg/ZFS-8000-3C/index.rst.txt b/_sources/msg/ZFS-8000-3C/index.rst.txt new file mode 100644 index 000000000..fcdb0ccd9 --- /dev/null +++ b/_sources/msg/ZFS-8000-3C/index.rst.txt @@ -0,0 +1,110 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-3C +======================= + +Missing device in non-replicated configuration +---------------------------------------------- + ++-------------------------+--------------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------------+ +| **Severity:** | Critical | ++-------------------------+--------------------------------------------------+ +| **Description:** | A device could not be opened and no replicas are | +| | available. | ++-------------------------+--------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------------------+ +| **Impact:** | The pool is no longer available. | ++-------------------------+--------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +.. rubric:: For an active pool: + +If this error was encountered while running ``zpool import``, please +see the section below. 
Otherwise, run ``zpool status -x`` to determine +which pool has experienced a failure: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: One or more devices could not be opened. There are insufficient + replicas for the pool to continue functioning. + action: Attach the missing device and online it using 'zpool online'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 0 0 insufficient replicas + c0t0d0 ONLINE 0 0 0 + c0t0d1 FAULTED 0 0 0 cannot open + + errors: No known data errors + +If the device has been temporarily detached from the system, attach +the device to the system and run ``zpool status`` again. The pool +should automatically detect the newly attached device and resume +functioning. You may have to mount the filesystems in the pool +explicitly using ``zfs mount -a``. + +If the device is no longer available and cannot be reattached to the +system, then the pool must be destroyed and re-created from a backup +source. + +.. rubric:: For an exported pool: + +If this error is encountered during a ``zpool import``, it means that +one of the devices is not attached to the system: + +:: + + # zpool import + pool: test + id: 10121266328238932306 + state: FAULTED + status: One or more devices are missing from the system. + action: The pool cannot be imported. Attach the missing devices and try again. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C + config: + + test FAULTED insufficient replicas + c0t0d0 ONLINE + c0t0d1 FAULTED cannot open + +The pool cannot be imported until the missing device is attached to +the system. If the device has been made available in an alternate +location, use the ``-d`` option to ``zpool import`` to search for devices +in a different directory. If the missing device is unavailable, then +the pool cannot be imported. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-3C`` indicates a device which was unable +to be opened by the ZFS subsystem. diff --git a/_sources/msg/ZFS-8000-4J/index.rst.txt b/_sources/msg/ZFS-8000-4J/index.rst.txt new file mode 100644 index 000000000..cab39c293 --- /dev/null +++ b/_sources/msg/ZFS-8000-4J/index.rst.txt @@ -0,0 +1,133 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. 
highlight:: none + +Message ID: ZFS-8000-4J +======================= + +Corrupted device label in a replicated configuration +---------------------------------------------------- + ++-------------------------+--------------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------------+ +| **Severity:** | Major | ++-------------------------+--------------------------------------------------+ +| **Description:** | A device could not be opened due to a missing or | +| | invalid device label. | ++-------------------------+--------------------------------------------------+ +| **Automated Response:** | A hot spare will be activated if available. | ++-------------------------+--------------------------------------------------+ +| **Impact:** | The pool is no longer providing the configured | +| | level of replication. | ++-------------------------+--------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +.. rubric:: For an active pool: + +If this error was encountered while running ``zpool import``, please +see the section below. Otherwise, run ``zpool status -x`` to determine +which pool has experienced a failure: + +:: + + # zpool status -x + pool: test + state: DEGRADED + status: One or more devices could not be used because the label is missing or + invalid. Sufficient replicas exist for the pool to continue + functioning in a degraded state. + action: Replace the device using 'zpool replace'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test DEGRADED 0 0 0 + mirror DEGRADED 0 0 0 + c0t0d0 ONLINE 0 0 0 + c0t0d1 FAULTED 0 0 0 corrupted data + + errors: No known data errors + +If the device has been temporarily detached from the system, attach +the device to the system and run ``zpool status`` again. The pool +should automatically detect the newly attached device and resume +functioning. + +If the device is no longer available, it can be replaced using ``zpool +replace``: + +:: + + # zpool replace test c0t0d1 c0t0d2 + +If the device has been replaced by another disk in the same physical +slot, then the device can be replaced using a single argument to the +``zpool replace`` command: + +:: + + # zpool replace test c0t0d1 + +ZFS will begin migrating data to the new device as soon as the +replace is issued. Once the resilvering completes, the original +device (if different from the replacement) will be removed, and the +pool will be restored to the ONLINE state. + +.. rubric:: For an exported pool: + +If this error is encountered while running ``zpool import``, the pool +can be still be imported despite the failure: + +:: + + # zpool import + pool: test + id: 5187963178597328409 + state: DEGRADED + status: One or more devices contains corrupted data. The fault tolerance of + the pool may be compromised if imported. + action: The pool can be imported using its name or numeric identifier. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J + config: + + test DEGRADED + mirror DEGRADED + c0t0d0 ONLINE + c0t0d1 FAULTED corrupted data + +To import the pool, run ``zpool import``: + +:: + + # zpool import test + +Once the pool has been imported, the damaged device can be replaced +according to the above procedure. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-4J`` indicates a device which was unable +to be opened by the ZFS subsystem. 
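A brief supplement to the ZFS-8000-4J procedure above, assuming the same placeholder pool name ``test`` used in its examples: after ``zpool replace`` has been issued, the resilver can be watched with ``zpool status``:

::

    # zpool status test

The status output reports resilver progress; once resilvering completes, the pool should return to the ONLINE state described above.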
diff --git a/_sources/msg/ZFS-8000-5E/index.rst.txt b/_sources/msg/ZFS-8000-5E/index.rst.txt new file mode 100644 index 000000000..0b895153f --- /dev/null +++ b/_sources/msg/ZFS-8000-5E/index.rst.txt @@ -0,0 +1,88 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-5E +======================= + +Corrupted device label in non-replicated configuration +------------------------------------------------------ + ++-------------------------+--------------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------------+ +| **Severity:** | Critical | ++-------------------------+--------------------------------------------------+ +| **Description:** | A device could not be opened due to a missing or | +| | invalid device label and no replicas are | +| | available. | ++-------------------------+--------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------------------+ +| **Impact:** | The pool is no longer available. | ++-------------------------+--------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +.. rubric:: For an active pool: + +If this error was encountered while running ``zpool import``, please see the +section below. Otherwise, run ``zpool status -x`` to determine which pool has +experienced a failure: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: One or more devices could not be used because the the label is missing + or invalid. There are insufficient replicas for the pool to continue + functioning. + action: Destroy and re-create the pool from a backup source. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 0 0 insufficient replicas + c0t0d0 FAULTED 0 0 0 corrupted data + c0t0d1 ONLINE 0 0 0 + + errors: No known data errors + +The device listed as FAULTED with 'corrupted data' cannot be opened due to a +corrupt label. ZFS will be unable to use the pool, and all data within the +pool is irrevocably lost. The pool must be destroyed and recreated from an +appropriate backup source. Using replicated configurations will prevent this +from happening in the future. + +.. rubric:: For an exported pool: + +If this error is encountered during ``zpool import``, the action is the same. +The pool cannot be imported - all data is lost and must be restored from an +appropriate backup source. + +.. 
rubric:: Details + +The Message ID: ``ZFS-8000-5E`` indicates a device which was unable to be +opened by the ZFS subsystem. diff --git a/_sources/msg/ZFS-8000-6X/index.rst.txt b/_sources/msg/ZFS-8000-6X/index.rst.txt new file mode 100644 index 000000000..b6702eb2e --- /dev/null +++ b/_sources/msg/ZFS-8000-6X/index.rst.txt @@ -0,0 +1,80 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-6X +======================= + +Missing top level device +------------------------ + ++-------------------------+--------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------+ +| **Severity:** | Critical | ++-------------------------+--------------------------------------------+ +| **Description:** | One or more top level devices are missing. | ++-------------------------+--------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------------+ +| **Impact:** | The pool cannot be imported. | ++-------------------------+--------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +Run ``zpool import`` to list which pool cannot be imported: + +:: + + # zpool import + pool: test + id: 13783646421373024673 + state: FAULTED + status: One or more devices are missing from the system. + action: The pool cannot be imported. Attach the missing devices and try again. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-6X + config: + + test FAULTED missing device + c0t0d0 ONLINE + + Additional devices are known to be part of this pool, though their + exact configuration cannot be determined. + +ZFS attempts to store enough configuration data on the devices such +that the configuration is recoverable from any subset of devices. In +some cases, particularly when an entire toplevel virtual device is +not attached to the system, ZFS will be unable to determine the +complete configuration. It will always detect that these devices are +missing, even if it cannot identify all of the devices. + +The pool cannot be imported until the unknown missing device is +attached to the system. If the device has been made available in an +alternate location, use the ``-d`` option to ``zpool import`` to search +for devices in a different directory. If the missing device is +unavailable, then the pool cannot be imported. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-6X`` indicates one or more top level +devices are missing from the configuration. 
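+
+As a supplementary sketch (the search directory shown is only an example;
+substitute the location where the missing device was actually made
+available), the ``-d`` option described above can be passed to
+``zpool import``:
+
+::
+
+   # zpool import -d /dev/disk/by-id test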
diff --git a/_sources/msg/ZFS-8000-72/index.rst.txt b/_sources/msg/ZFS-8000-72/index.rst.txt new file mode 100644 index 000000000..e302ea24e --- /dev/null +++ b/_sources/msg/ZFS-8000-72/index.rst.txt @@ -0,0 +1,112 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-72 +======================= + +Corrupted pool metadata +----------------------- + ++-------------------------+-------------------------------------------+ +| **Type:** | Error | ++-------------------------+-------------------------------------------+ +| **Severity:** | Critical | ++-------------------------+-------------------------------------------+ +| **Description:** | The metadata required to open the pool is | +| | corrupt. | ++-------------------------+-------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+-------------------------------------------+ +| **Impact:** | The pool is no longer available. | ++-------------------------+-------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +Even though all the devices are available, the on-disk data has been +corrupted such that the pool cannot be opened. If a recovery action +is presented, the pool can be returned to a usable state. Otherwise, +all data within the pool is lost, and the pool must be destroyed and +restored from an appropriate backup source. ZFS includes built-in +metadata replication to prevent this from happening even for +unreplicated pools, but running in a replicated configuration will +decrease the chances of this happening in the future. + +If this error is encountered during ``zpool import``, see the section +below. Otherwise, run ``zpool status -x`` to determine which pool is +faulted and if a recovery option is available: + +:: + + # zpool status -x + pool: test + id: 13783646421373024673 + state: FAULTED + status: The pool metadata is corrupted and cannot be opened. + action: Recovery is possible, but will result in some data loss. + Returning the pool to its state as of Mon Sep 28 10:24:39 2009 + should correct the problem. Approximately 59 seconds of data + will have to be discarded, irreversibly. Recovery can be + attempted by executing 'zpool clear -F test'. A scrub of the pool + is strongly recommended following a successful recovery. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72 + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 0 2 corrupted data + c0t0d0 ONLINE 0 0 2 + c0t0d1 ONLINE 0 0 2 + +If recovery is unavailable, the recommended action will be: + +:: + + action: Destroy the pool and restore from backup. 
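+
+When the recovery action shown in the earlier status output is available, it
+can be attempted as follows (a sketch reusing the illustrative pool name
+``test``; as the status output notes, a scrub is strongly recommended after a
+successful recovery):
+
+::
+
+   # zpool clear -F test
+   # zpool scrub test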
+ +If this error is encountered during ``zpool import``, and if no recovery option +is mentioned, the pool is unrecoverable and cannot be imported. The pool must +be restored from an appropriate backup source. If a recovery option is +available, the output from ``zpool import`` will look something like the +following: + +:: + + # zpool import share + cannot import 'share': I/O error + Recovery is possible, but will result in some data loss. + Returning the pool to its state as of Sun Sep 27 12:31:07 2009 + should correct the problem. Approximately 53 seconds of data + will have to be discarded, irreversibly. Recovery can be + attempted by executing 'zpool import -F share'. A scrub of the pool + is strongly recommended following a successful recovery. + +Recovery actions are requested with the -F option to either ``zpool +clear`` or ``zpool import``. Recovery will result in some data loss, +because it reverts the pool to an earlier state. A dry-run recovery +check can be performed by adding the ``-n`` option, affirming if recovery +is possible without actually reverting the pool to its earlier state. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-72`` indicates a pool was unable to be +opened due to a detected corruption in the pool metadata. diff --git a/_sources/msg/ZFS-8000-8A/index.rst.txt b/_sources/msg/ZFS-8000-8A/index.rst.txt new file mode 100644 index 000000000..a854e839d --- /dev/null +++ b/_sources/msg/ZFS-8000-8A/index.rst.txt @@ -0,0 +1,111 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-8A +======================= + +Corrupted data +-------------- + ++-------------------------+----------------------------------------------+ +| **Type:** | Error | ++-------------------------+----------------------------------------------+ +| **Severity:** | Critical | ++-------------------------+----------------------------------------------+ +| **Description:** | A file or directory could not be read due to | +| | corrupt data. | ++-------------------------+----------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+----------------------------------------------+ +| **Impact:** | The file or directory is unavailable. | ++-------------------------+----------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +Run ``zpool status -x`` to determine which pool is damaged: + +:: + + # zpool status -x + pool: test + state: ONLINE + status: One or more devices has experienced an error and no valid replicas + are available. Some filesystem data is corrupt, and applications + may have been affected. 
+ action: Destroy the pool and restore from backup. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 2 + c0t0d0 ONLINE 0 0 2 + c0t0d1 ONLINE 0 0 0 + + errors: 1 data errors, use '-v' for a list + +Unfortunately, the data cannot be repaired, and the only choice to +repair the data is to restore the pool from backup. Applications +attempting to access the corrupted data will get an error (EIO), and +data may be permanently lost. + +The list of affected files can be retrieved by using the ``-v`` option to +``zpool status``: + +:: + + # zpool status -xv + pool: test + state: ONLINE + status: One or more devices has experienced an error and no valid replicas + are available. Some filesystem data is corrupt, and applications + may have been affected. + action: Destroy the pool and restore from backup. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 2 + c0t0d0 ONLINE 0 0 2 + c0t0d1 ONLINE 0 0 0 + + errors: Permanent errors have been detected in the following files: + + /export/example/foo + +Damaged files may or may not be able to be removed depending on the +type of corruption. If the corruption is within the plain data, the +file should be removable. If the corruption is in the file metadata, +then the file cannot be removed, though it can be moved to an +alternate location. In either case, the data should be restored from +a backup source. It is also possible for the corruption to be within +pool-wide metadata, resulting in entire datasets being unavailable. +If this is the case, the only option is to destroy the pool and +re-create the datasets from backup. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-8A`` indicates corrupted data exists in +the current pool. diff --git a/_sources/msg/ZFS-8000-9P/index.rst.txt b/_sources/msg/ZFS-8000-9P/index.rst.txt new file mode 100644 index 000000000..e49b099a4 --- /dev/null +++ b/_sources/msg/ZFS-8000-9P/index.rst.txt @@ -0,0 +1,157 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-9P +======================= + +Failing device in replicated configuration +------------------------------------------ + ++-------------------------+----------------------------------------------------+ +| **Type:** | Error | ++-------------------------+----------------------------------------------------+ +| **Severity:** | Minor | ++-------------------------+----------------------------------------------------+ +| **Description:** | A device has experienced uncorrectable errors in a | +| | replicated configuration. 
| ++-------------------------+----------------------------------------------------+ +| **Automated Response:** | ZFS has attempted to repair the affected data. | ++-------------------------+----------------------------------------------------+ +| **Impact:** | The system is unaffected, though errors may | +| | indicate future failure. Future errors may cause | +| | ZFS to automatically fault the device. | ++-------------------------+----------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +Run ``zpool status -x`` to determine which pool has experienced errors: + +:: + + # zpool status + pool: test + state: ONLINE + status: One or more devices has experienced an unrecoverable error. An + attempt was made to correct the error. Applications are unaffected. + action: Determine if the device needs to be replaced, and clear the errors + using 'zpool online' or replace the device with 'zpool replace'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 0 + mirror ONLINE 0 0 0 + c0t0d0 ONLINE 0 0 2 + c0t0d1 ONLINE 0 0 0 + + errors: No known data errors + +Find the device with a non-zero error count for READ, WRITE, or +CKSUM. This indicates that the device has experienced a read I/O +error, write I/O error, or checksum validation error. Because the +device is part of a mirror or RAID-Z device, ZFS was able to recover +from the error and subsequently repair the damaged data. + +If these errors persist over a period of time, ZFS may determine the +device is faulty and mark it as such. However, these error counts may +or may not indicate that the device is unusable. It depends on how +the errors were caused, which the administrator can determine in +advance of any ZFS diagnosis. For example, the following cases will +all produce errors that do not indicate potential device failure: + +- A network attached device lost connectivity but has now + recovered +- A device suffered from a bit flip, an expected event over long + periods of time +- An administrator accidentally wrote over a portion of the disk + using another program + +In these cases, the presence of errors does not indicate that the +device is likely to fail in the future, and therefore does not need +to be replaced. If this is the case, then the device errors should be +cleared using ``zpool clear``: + +:: + + # zpool clear test c0t0d0 + +On the other hand, errors may very well indicate that the device has +failed or is about to fail. If there are continual I/O errors to a +device that is otherwise attached and functioning on the system, it +most likely needs to be replaced. The administrator should check the +system log for any driver messages that may indicate hardware +failure. If it is determined that the device needs to be replaced, +then the ``zpool replace`` command should be used: + +:: + + # zpool replace test c0t0d0 c0t0d2 + +This will attach the new device to the pool and begin resilvering +data to it. Once the resilvering process is complete, the old device +will automatically be removed from the pool, at which point it can +safely be removed from the system. 
If the device needs to be replaced +in-place (because there are no available spare devices), the original +device can be removed and replaced with a new device, at which point +a different form of ``zpool replace`` can be used: + +:: + + # zpool replace test c0t0d0 + +This assumes that the original device at 'c0t0d0' has been replaced +with a new device under the same path, and will be replaced +appropriately. + +You can monitor the progress of the resilvering operation by using +the ``zpool status -x`` command: + +:: + + # zpool status -x + pool: test + state: DEGRADED + status: One or more devices is currently being replaced. The pool may not be + providing the necessary level of replication. + action: Wait for the resilvering operation to complete + scrub: resilver in progress, 0.14% done, 0h0m to go + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 0 + mirror ONLINE 0 0 0 + replacing ONLINE 0 0 0 + c0t0d0 ONLINE 0 0 3 + c0t0d2 ONLINE 0 0 0 58.5K resilvered + c0t0d1 ONLINE 0 0 0 + + errors: No known data errors + +.. rubric:: Details + +The Message ID: ``ZFS-8000-9P`` indicates a device has exceeded the +acceptable limit of errors allowed by the system. See document +`203768 `__ +for additional information. diff --git a/_sources/msg/ZFS-8000-A5/index.rst.txt b/_sources/msg/ZFS-8000-A5/index.rst.txt new file mode 100644 index 000000000..b58c12974 --- /dev/null +++ b/_sources/msg/ZFS-8000-A5/index.rst.txt @@ -0,0 +1,83 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-A5 +======================= + +Incompatible version +-------------------- + ++-------------------------+------------------------------------------------+ +| **Type:** | Error | ++-------------------------+------------------------------------------------+ +| **Severity:** | Major | ++-------------------------+------------------------------------------------+ +| **Description:** | The on-disk version is not compatible with the | +| | running system. | ++-------------------------+------------------------------------------------+ +| **Automated Response:** | No automated response will occur. | ++-------------------------+------------------------------------------------+ +| **Impact:** | The pool is unavailable. | ++-------------------------+------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +If this error is seen during ``zpool import``, see the section below. +Otherwise, run ``zpool status -x`` to determine which pool is faulted: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: The ZFS version for the pool is incompatible with the software running + on this system. + action: Destroy and re-create the pool. 
+ scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 0 0 incompatible version + mirror ONLINE 0 0 0 + sda9 ONLINE 0 0 0 + sdb9 ONLINE 0 0 0 + + errors: No known errors + +The pool cannot be used on this system. Either move the storage to +the system where the pool was originally created, upgrade the current +system software to a more recent version, or destroy the pool and +re-create it from backup. + +If this error is seen during import, the pool cannot be imported on +the current system. The disks must be attached to the system which +originally created the pool, and imported there. + +The list of currently supported versions can be displayed using +``zpool upgrade -v``. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-A5`` indicates a version mismatch exists +between the running system and the on-disk data. diff --git a/_sources/msg/ZFS-8000-ER/index.rst.txt b/_sources/msg/ZFS-8000-ER/index.rst.txt new file mode 100644 index 000000000..b890abc27 --- /dev/null +++ b/_sources/msg/ZFS-8000-ER/index.rst.txt @@ -0,0 +1,320 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-ER +======================= + +ZFS Errata #1 +------------- + ++-------------------------+--------------------------------------------------+ +| **Type:** | Compatibility | ++-------------------------+--------------------------------------------------+ +| **Severity:** | Moderate | ++-------------------------+--------------------------------------------------+ +| **Description:** | The ZFS pool contains an on-disk format | +| | incompatibility. | ++-------------------------+--------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------------------+ +| **Impact:** | Until the pool is scrubbed using OpenZFS version | +| | 0.6.3 or newer the pool may not be imported by | +| | older versions of OpenZFS or other ZFS | +| | implementations. | ++-------------------------+--------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +The pool contains an on-disk format incompatibility. Affected pools +must be imported and scrubbed using the current version of ZFS. This +will return the pool to a state in which it may be imported by other +implementations. This errata only impacts compatibility between ZFS +versions, no user data is at risk as result of this erratum. + +:: + + # zpool status -x + pool: test + state: ONLINE + status: Errata #1 detected. + action: To correct the issue run 'zpool scrub'. 
+ see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER + scan: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 0 + raidz1-0 ONLINE 0 0 0 + vdev0 ONLINE 0 0 0 + vdev1 ONLINE 0 0 0 + vdev2 ONLINE 0 0 0 + vdev3 ONLINE 0 0 0 + + errors: No known data errors + + # zpool scrub test + + # zpool status -x + all pools are healthy + + +ZFS Errata #2 +------------- + ++-------------------------+---------------------------------------------------+ +| **Type:** | Compatibility | ++-------------------------+---------------------------------------------------+ +| **Severity:** | Moderate | ++-------------------------+---------------------------------------------------+ +| **Description:** | The ZFS packages were updated while an | +| | asynchronous destroy was in progress and the pool | +| | contains an on-disk format incompatibility. | ++-------------------------+---------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+---------------------------------------------------+ +| **Impact:** | The pool cannot be imported until the issue is | +| | corrected. | ++-------------------------+---------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +Affected pools must be reverted to the previous ZFS version where +they can be correctly imported. Once imported, all asynchronous +destroy operations must be allowed to complete. The ZFS packages may +then be updated and the pool can be imported cleanly by the newer +software. + +:: + + # zpool import + pool: test + id: 1165955789558693437 + state: ONLINE + status: Errata #2 detected. + action: The pool cannot be imported with this version of ZFS due to + an active asynchronous destroy. Revert to an earlier version + and allow the destroy to complete before updating. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER + config: + + test ONLINE + raidz1-0 ONLINE + vdev0 ONLINE + vdev1 ONLINE + vdev2 ONLINE + vdev3 ONLINE + +Revert to previous ZFS version, import the pool, then wait for the +``freeing`` property to drop to zero. This indicates that all +outstanding asynchronous destroys have completed. + +:: + + # zpool get freeing + NAME PROPERTY VALUE SOURCE + test freeing 0 default + +The ZFS packages may be now be updated and the pool imported. The +on-disk format incompatibility can now be corrected online as +described in `Errata #1 <#1>`__. + + +ZFS Errata #3 +------------- + ++-------------------------+----------------------------------------------------+ +| **Type:** | Compatibility | ++-------------------------+----------------------------------------------------+ +| **Severity:** | Moderate | ++-------------------------+----------------------------------------------------+ +| **Description:** | An encrypted dataset contains an on-disk format | +| | incompatibility. | ++-------------------------+----------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+----------------------------------------------------+ +| **Impact:** | Encrypted datasets created before the ZFS packages | +| | were updated cannot be mounted or opened for | +| | write. The errata impacts the ability of ZFS to | +| | correctly perform raw sends, so this functionality | +| | has been disabled for these datasets. | ++-------------------------+----------------------------------------------------+ + +.. 
rubric:: Suggested Action for System Administrator + +System administrators with affected pools will need to recreate any +encrypted datasets created before the new version of ZFS was used. +This can be accomplished by using ``zfs send`` and ``zfs receive``. +Note, however, that backups can NOT be done with a raw ``zfs send -w``, +since this would preserve the on-disk incompatibility. +Alternatively, system administrators can use conventional tools to +back up data to new encrypted datasets. The new version of ZFS will +prevent new data from being written to the impacted datasets, but +they can still be mounted read-only. + +:: + + # zpool status + pool: test + id: 1165955789558693437 + state: ONLINE + status: Errata #3 detected. + action: To correct the issue backup existing encrypted datasets to new + encrypted datasets and destroy the old ones. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER + config: + + test ONLINE + raidz1-0 ONLINE + vdev0 ONLINE + vdev1 ONLINE + vdev2 ONLINE + vdev3 ONLINE + +Import the pool and backup any existing encrypted datasets to new +datasets. To ensure the new datasets are re-encrypted, be sure to +receive them below an encryption root or use ``zfs receive -o +encryption=on``, then destroy the source dataset. + +:: + + # zfs send test/crypt1@snap1 | zfs receive -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile test/newcrypt1 + # zfs send -I test/crypt1@snap1 test/crypt1@snap5 | zfs receive test/newcrypt1 + # zfs destroy -R test/crypt1 + +New datasets can be mounted read-write and used normally. The errata +will be cleared upon reimporting the pool and the alert will only be +shown again if another dataset is found with the errata. To ensure +that all datasets are on the new version reimport the pool, load all +keys, mount all encrypted datasets, and check ``zpool status``. + +:: + + # zpool export test + # zpool import test + # zfs load-key -a + Enter passphrase for 'test/crypt1': + 1 / 1 key(s) successfully loaded + # zfs mount -a + # zpool status -x + all pools are healthy + + +ZFS Errata #4 +------------- + ++-------------------------+----------------------------------------------------+ +| **Type:** | Compatibility | ++-------------------------+----------------------------------------------------+ +| **Severity:** | Moderate | ++-------------------------+----------------------------------------------------+ +| **Description:** | An encrypted dataset contains an on-disk format | +| | incompatibility. | ++-------------------------+----------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+----------------------------------------------------+ +| **Impact:** | Encrypted datasets created before the ZFS packages | +| | were updated cannot be backed up via a raw send to | +| | an updated system. These datasets also cannot | +| | receive additional snapshots. New encrypted | +| | datasets cannot be created until the | +| | ``bookmark_v2`` feature has been enabled. | ++-------------------------+----------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +First, system administrators with affected pools will need to enable +the ``bookmark_v2`` feature on their pools. Enabling this feature +will prevent this pool from being imported by previous versions of +the ZFS software after any new bookmarks are created (including +read-only imports). 
If the pool contains no encrypted datasets, this +is the only step required. If there are existing encrypted datasets, +administrators will then need to back these datasets up. This can be +done in several ways. Non-raw ``zfs send`` and ``zfs receive`` can be +used as per usual, as can traditional backup tools. Raw receives of +existing encrypted datasets and raw receives into existing encrypted +datasets are currently disabled because ZFS is not able to guarantee +that the stream and the existing dataset came from a consistent +source. This check can be disabled which will allow ZFS to receive +these streams anyway. Note that this can result in datasets with data +that cannot be accessed due to authentication errors if raw and +non-raw receives are mixed over the course of several incremental +backups. To disable this restriction, set the +``zfs_disable_ivset_guid_check`` module parameter to 1. Streams +received this way (as well as any received before the upgrade) will +need to be manually checked by reading the data to ensure they are +not corrupted. Note that ``zpool scrub`` cannot be used for this +purpose because the scrub does not check the cryptographic +authentication codes. For more information on this issue, please +refer to the zfs man page section on ``zfs receive`` which describes +the restrictions on raw sends. + +:: + + # zpool status + pool: test + state: ONLINE + status: Errata #4 detected. + Existing encrypted datasets contain an on-disk incompatibility + which needs to be corrected. + action: To correct the issue enable the bookmark_v2 feature and backup + any existing encrypted datasets to new encrypted datasets and + destroy the old ones. If this pool does not contain any + encrypted datasets, simply enable the bookmark_v2 feature. + see: http://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER + scan: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 0 + /root/vdev0 ONLINE 0 0 0 + + errors: No known data errors + +Import the pool and enable the ``bookmark_v2`` feature. Then backup +any existing encrypted datasets to new datasets. This can be done +with traditional tools or via ``zfs send``. Raw sends will require +that the ``zfs_disable_ivset_guid_check`` is set to 1 on the receive +side. Once this is done, the original datasets should be destroyed. + +:: + + # zpool set feature@bookmark_v2=enabled test + # echo 1 > /sys/module/zfs/parameters/zfs_disable_ivset_guid_check + # zfs send -Rw test/crypt1@snap1 | zfs receive test/newcrypt1 + # zfs send -I test/crypt1@snap1 test/crypt1@snap5 | zfs receive test/newcrypt1 + # zfs destroy -R test/crypt1 + # echo 0 > /sys/module/zfs/parameters/zfs_disable_ivset_guid_check + +The errata will be cleared upon reimporting the pool and the alert +will only be shown again if another dataset is found with the errata. +To check that all datasets are fixed, perform a ``zfs list -t all``, +and check ``zpool status`` once it is completed. + +:: + + # zpool export test + # zpool import test + # zpool scrub # wait for completion + # zpool status -x + all pools are healthy diff --git a/_sources/msg/ZFS-8000-EY/index.rst.txt b/_sources/msg/ZFS-8000-EY/index.rst.txt new file mode 100644 index 000000000..0bd466e8a --- /dev/null +++ b/_sources/msg/ZFS-8000-EY/index.rst.txt @@ -0,0 +1,79 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. 
+ + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-EY +======================= + +ZFS label hostid mismatch +------------------------- + ++-------------------------+---------------------------------------------------+ +| **Type:** | Error | ++-------------------------+---------------------------------------------------+ +| **Severity:** | Major | ++-------------------------+---------------------------------------------------+ +| **Description:** | The ZFS pool was last accessed by another system. | ++-------------------------+---------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+---------------------------------------------------+ +| **Impact:** | ZFS filesystems are not available. | ++-------------------------+---------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +The pool has been written to from another host, and was not cleanly +exported from the other system. Actively importing a pool on multiple +systems will corrupt the pool and leave it in an unrecoverable state. +To determine which system last accessed the pool, run the ``zpool +import`` command: + +:: + + # zpool import + pool: test + id: 14702934086626715962 + state: ONLINE + status: The pool was last accessed by another system. + action: The pool can be imported using its name or numeric identifier and + the '-f' flag. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY + config: + + test ONLINE + c0t0d0 ONLINE + + # zpool import test + cannot import 'test': pool may be in use from other system, it was last + accessed by 'tank' (hostid: 0x1435718c) on Fri Mar 9 15:42:47 2007 + use '-f' to import anyway + +If you are certain that the pool is not being actively accessed by +another system, then you can use the ``-f`` option to ``zpool import`` to +forcibly import the pool. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-EY`` indicates that the pool cannot be +imported as it was last accessed by another system. Take the +documented action to resolve the problem. diff --git a/_sources/msg/ZFS-8000-HC/index.rst.txt b/_sources/msg/ZFS-8000-HC/index.rst.txt new file mode 100644 index 000000000..1a94d7e44 --- /dev/null +++ b/_sources/msg/ZFS-8000-HC/index.rst.txt @@ -0,0 +1,85 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. 
+ If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-HC +======================= + +ZFS pool I/O failures +--------------------- + ++-------------------------+-----------------------------------------+ +| **Type:** | Error | ++-------------------------+-----------------------------------------+ +| **Severity:** | Major | ++-------------------------+-----------------------------------------+ +| **Description:** | The ZFS pool has experienced currently | +| | unrecoverable I/O failures. | ++-------------------------+-----------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+-----------------------------------------+ +| **Impact:** | Read and write I/Os cannot be serviced. | ++-------------------------+-----------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +The pool has experienced I/O failures. Since the ZFS pool property +``failmode`` is set to 'wait', all I/Os (reads and writes) are blocked. +See the zpoolprops(8) manpage for more information on the ``failmode`` +property. Manual intervention is required for I/Os to be serviced. + +You can see which devices are affected by running ``zpool status -x``: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: There are I/O failures. + action: Make sure the affected devices are connected, then run 'zpool clear'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 13 0 insufficient replicas + c0t0d0 FAULTED 0 7 0 experienced I/O failures + c0t1d0 ONLINE 0 0 0 + + errors: 1 data errors, use '-v' for a list + +After you have made sure the affected devices are connected, run ``zpool +clear`` to allow I/O to the pool again: + +:: + + # zpool clear test + +If I/O failures continue to happen, then applications and commands for the pool +may hang. At this point, a reboot may be necessary to allow I/O to the pool +again. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-HC`` indicates that the pool has experienced I/O +failures. Take the documented action to resolve the problem. diff --git a/_sources/msg/ZFS-8000-JQ/index.rst.txt b/_sources/msg/ZFS-8000-JQ/index.rst.txt new file mode 100644 index 000000000..6ffcc2fcc --- /dev/null +++ b/_sources/msg/ZFS-8000-JQ/index.rst.txt @@ -0,0 +1,86 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. 
highlight:: none + +Message ID: ZFS-8000-JQ +======================= + +ZFS pool I/O failures +--------------------- + ++-------------------------+----------------------------------------+ +| **Type:** | Error | ++-------------------------+----------------------------------------+ +| **Severity:** | Major | ++-------------------------+----------------------------------------+ +| **Description:** | The ZFS pool has experienced currently | +| | unrecoverable I/O failures. | ++-------------------------+----------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+----------------------------------------+ +| **Impact:** | Write I/Os cannot be serviced. | ++-------------------------+----------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +The pool has experienced I/O failures. Since the ZFS pool property +``failmode`` is set to 'continue', read I/Os will continue to be +serviced, but write I/Os are blocked. See the zpoolprops(8) manpage for +more information on the ``failmode`` property. Manual intervention is +required for write I/Os to be serviced. You can see which devices are +affected by running ``zpool status -x``: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: There are I/O failures. + action: Make sure the affected devices are connected, then run 'zpool clear'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 13 0 insufficient replicas + sda9 FAULTED 0 7 0 experienced I/O failures + sdb9 ONLINE 0 0 0 + + errors: 1 data errors, use '-v' for a list + +After you have made sure the affected devices are connected, run +``zpool clear`` to allow write I/O to the pool again: + +:: + + # zpool clear test + +If I/O failures continue to happen, then applications and commands +for the pool may hang. At this point, a reboot may be necessary to +allow I/O to the pool again. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-JQ`` indicates that the pool has +experienced I/O failures. Take the documented action to resolve the +problem. diff --git a/_sources/msg/ZFS-8000-K4/index.rst.txt b/_sources/msg/ZFS-8000-K4/index.rst.txt new file mode 100644 index 000000000..c8963d801 --- /dev/null +++ b/_sources/msg/ZFS-8000-K4/index.rst.txt @@ -0,0 +1,132 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. 
highlight:: none + +Message ID: ZFS-8000-K4 +======================= + +ZFS intent log read failure +--------------------------- + ++-------------------------+--------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------+ +| **Severity:** | Major | ++-------------------------+--------------------------------------------+ +| **Description:** | A ZFS intent log device could not be read. | ++-------------------------+--------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------------+ +| **Impact:** | The intent log(s) cannot be replayed. | ++-------------------------+--------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +A ZFS intent log record could not be read due to an error. This may +be due to a missing or broken log device, or a device within the pool +may be experiencing I/O errors. The pool itself is not corrupt but is +missing some pool changes that happened shortly before a power loss +or system failure. These are pool changes that applications had +requested to be written synchronously but had not been committed in +the pool. This transaction group commit currently occurs every five +seconds, and so typically at most five seconds worth of synchronous +writes have been lost. ZFS itself cannot determine if the pool +changes lost are critical to those applications running at the time +of the system failure. This is a decision the administrator must +make. You may want to consider mirroring log devices. First determine +which pool is in error: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: One or more of the intent logs could not be read. + Waiting for adminstrator intervention to fix the faulted pool. + action: Either restore the affected device(s) and run 'zpool online', + or ignore the intent log records by running 'zpool clear'. + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 0 0 bad intent log + c3t2d0 ONLINE 0 0 0 + logs FAULTED 0 0 0 bad intent log + c5t3d0 UNAVAIL 0 0 0 cannot open + +There are two courses of action to resolve this problem. +If the validity of the pool from an application perspective requires +the pool changes then the log devices must be recovered. Make sure +power and cables are connected and that the affected device is +online. Then run ``zpool online`` and then ``zpool clear``: + +:: + + # zpool online test c5t3d0 + # zpool clear test + # zpool status test + pool: test + state: ONLINE + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 0 + c3t2d0 ONLINE 0 0 0 + logs ONLINE 0 0 0 + c5t3d0 ONLINE 0 0 0 + + errors: No known data errors + +The second alternative action is to ignore the most recent pool +changes that could not be read. To do this run ``zpool clear``: + +:: + + # zpool clear test + # zpool status test + pool: test + state: DEGRADED + status: One or more devices could not be opened. Sufficient replicas exist for + the pool to continue functioning in a degraded state. + action: Attach the missing device and online it using 'zpool online'. 
+ see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test DEGRADED 0 0 0 + c3t2d0 ONLINE 0 0 0 + logs DEGRADED 0 0 0 + c5t3d0 UNAVAIL 0 0 0 cannot open + + errors: No known data errors + +Future log records will not use a failed log device but will be +written to the main pool. You should fix or replace any failed log +devices. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-K4`` indicates that a log device is +missing or cannot be read. diff --git a/_sources/msg/index.rst.txt b/_sources/msg/index.rst.txt new file mode 100644 index 000000000..cbcbcdd3e --- /dev/null +++ b/_sources/msg/index.rst.txt @@ -0,0 +1,9 @@ +ZFS Messages +============ + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + :glob: + + ZFS-*/index diff --git a/_static/_sphinx_javascript_frameworks_compat.js b/_static/_sphinx_javascript_frameworks_compat.js new file mode 100644 index 000000000..81415803e --- /dev/null +++ b/_static/_sphinx_javascript_frameworks_compat.js @@ -0,0 +1,123 @@ +/* Compatability shim for jQuery and underscores.js. + * + * Copyright Sphinx contributors + * Released under the two clause BSD licence + */ + +/** + * small helper function to urldecode strings + * + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent#Decoding_query_parameters_from_a_URL + */ +jQuery.urldecode = function(x) { + if (!x) { + return x + } + return decodeURIComponent(x.replace(/\+/g, ' ')); +}; + +/** + * small helper function to urlencode strings + */ +jQuery.urlencode = encodeURIComponent; + +/** + * This function returns the parsed url parameters of the + * current request. Multiple values per key are supported, + * it will always return arrays of strings for the value parts. + */ +jQuery.getQueryParameters = function(s) { + if (typeof s === 'undefined') + s = document.location.search; + var parts = s.substr(s.indexOf('?') + 1).split('&'); + var result = {}; + for (var i = 0; i < parts.length; i++) { + var tmp = parts[i].split('=', 2); + var key = jQuery.urldecode(tmp[0]); + var value = jQuery.urldecode(tmp[1]); + if (key in result) + result[key].push(value); + else + result[key] = [value]; + } + return result; +}; + +/** + * highlight a given string on a jquery object by wrapping it in + * span elements with the given class name. 
+ */ +jQuery.fn.highlightText = function(text, className) { + function highlight(node, addItems) { + if (node.nodeType === 3) { + var val = node.nodeValue; + var pos = val.toLowerCase().indexOf(text); + if (pos >= 0 && + !jQuery(node.parentNode).hasClass(className) && + !jQuery(node.parentNode).hasClass("nohighlight")) { + var span; + var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.className = className; + } + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + node.parentNode.insertBefore(span, node.parentNode.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling)); + node.nodeValue = val.substr(0, pos); + if (isInSVG) { + var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); + var bbox = node.parentElement.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute('class', className); + addItems.push({ + "parent": node.parentNode, + "target": rect}); + } + } + } + else if (!jQuery(node).is("button, select, textarea")) { + jQuery.each(node.childNodes, function() { + highlight(this, addItems); + }); + } + } + var addItems = []; + var result = this.each(function() { + highlight(this, addItems); + }); + for (var i = 0; i < addItems.length; ++i) { + jQuery(addItems[i].parent).before(addItems[i].target); + } + return result; +}; + +/* + * backward compatibility for jQuery.browser + * This will be supported until firefox bug is fixed. + */ +if (!jQuery.browser) { + jQuery.uaMatch = function(ua) { + ua = ua.toLowerCase(); + + var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || + /(webkit)[ \/]([\w.]+)/.exec(ua) || + /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || + /(msie) ([\w.]+)/.exec(ua) || + ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) || + []; + + return { + browser: match[ 1 ] || "", + version: match[ 2 ] || "0" + }; + }; + jQuery.browser = {}; + jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; +} diff --git a/_static/basic.css b/_static/basic.css new file mode 100644 index 000000000..cfc60b86c --- /dev/null +++ b/_static/basic.css @@ -0,0 +1,921 @@ +/* + * basic.css + * ~~~~~~~~~ + * + * Sphinx stylesheet -- basic theme. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ + +/* -- main layout ----------------------------------------------------------- */ + +div.clearer { + clear: both; +} + +div.section::after { + display: block; + content: ''; + clear: left; +} + +/* -- relbar ---------------------------------------------------------------- */ + +div.related { + width: 100%; + font-size: 90%; +} + +div.related h3 { + display: none; +} + +div.related ul { + margin: 0; + padding: 0 0 0 10px; + list-style: none; +} + +div.related li { + display: inline; +} + +div.related li.right { + float: right; + margin-right: 5px; +} + +/* -- sidebar --------------------------------------------------------------- */ + +div.sphinxsidebarwrapper { + padding: 10px 5px 0 10px; +} + +div.sphinxsidebar { + float: left; + width: 230px; + margin-left: -100%; + font-size: 90%; + word-wrap: break-word; + overflow-wrap : break-word; +} + +div.sphinxsidebar ul { + list-style: none; +} + +div.sphinxsidebar ul ul, +div.sphinxsidebar ul.want-points { + margin-left: 20px; + list-style: square; +} + +div.sphinxsidebar ul ul { + margin-top: 0; + margin-bottom: 0; +} + +div.sphinxsidebar form { + margin-top: 10px; +} + +div.sphinxsidebar input { + border: 1px solid #98dbcc; + font-family: sans-serif; + font-size: 1em; +} + +div.sphinxsidebar #searchbox form.search { + overflow: hidden; +} + +div.sphinxsidebar #searchbox input[type="text"] { + float: left; + width: 80%; + padding: 0.25em; + box-sizing: border-box; +} + +div.sphinxsidebar #searchbox input[type="submit"] { + float: left; + width: 20%; + border-left: none; + padding: 0.25em; + box-sizing: border-box; +} + + +img { + border: 0; + max-width: 100%; +} + +/* -- search page ----------------------------------------------------------- */ + +ul.search { + margin: 10px 0 0 20px; + padding: 0; +} + +ul.search li { + padding: 5px 0 5px 20px; + background-image: url(file.png); + background-repeat: no-repeat; + background-position: 0 7px; +} + +ul.search li a { + font-weight: bold; +} + +ul.search li p.context { + color: #888; + margin: 2px 0 0 30px; + text-align: left; +} + +ul.keywordmatches li.goodmatch a { + font-weight: bold; +} + +/* -- index page ------------------------------------------------------------ */ + +table.contentstable { + width: 90%; + margin-left: auto; + margin-right: auto; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + +span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} + +/* -- general index --------------------------------------------------------- */ + +table.indextable { + width: 100%; +} + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable ul { + margin-top: 0; + margin-bottom: 0; + list-style-type: none; +} + +table.indextable > tbody > tr > td > ul { + padding-left: 0em; +} + +table.indextable tr.pcap { + height: 10px; +} + +table.indextable tr.cap { + margin-top: 10px; + background-color: #f2f2f2; +} + +img.toggler { + margin-right: 3px; + margin-top: 3px; + cursor: pointer; +} + +div.modindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +div.genindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +/* -- domain module index --------------------------------------------------- */ + +table.modindextable td { + padding: 2px; + border-collapse: collapse; +} + +/* -- general body styles --------------------------------------------------- */ + 
+div.body { + min-width: 360px; + max-width: 800px; +} + +div.body p, div.body dd, div.body li, div.body blockquote { + -moz-hyphens: auto; + -ms-hyphens: auto; + -webkit-hyphens: auto; + hyphens: auto; +} + +a.headerlink { + visibility: hidden; +} + +h1:hover > a.headerlink, +h2:hover > a.headerlink, +h3:hover > a.headerlink, +h4:hover > a.headerlink, +h5:hover > a.headerlink, +h6:hover > a.headerlink, +dt:hover > a.headerlink, +caption:hover > a.headerlink, +p.caption:hover > a.headerlink, +div.code-block-caption:hover > a.headerlink { + visibility: visible; +} + +div.body p.caption { + text-align: inherit; +} + +div.body td { + text-align: left; +} + +.first { + margin-top: 0 !important; +} + +p.rubric { + margin-top: 30px; + font-weight: bold; +} + +img.align-left, figure.align-left, .figure.align-left, object.align-left { + clear: left; + float: left; + margin-right: 1em; +} + +img.align-right, figure.align-right, .figure.align-right, object.align-right { + clear: right; + float: right; + margin-left: 1em; +} + +img.align-center, figure.align-center, .figure.align-center, object.align-center { + display: block; + margin-left: auto; + margin-right: auto; +} + +img.align-default, figure.align-default, .figure.align-default { + display: block; + margin-left: auto; + margin-right: auto; +} + +.align-left { + text-align: left; +} + +.align-center { + text-align: center; +} + +.align-default { + text-align: center; +} + +.align-right { + text-align: right; +} + +/* -- sidebars -------------------------------------------------------------- */ + +div.sidebar, +aside.sidebar { + margin: 0 0 0.5em 1em; + border: 1px solid #ddb; + padding: 7px; + background-color: #ffe; + width: 40%; + float: right; + clear: right; + overflow-x: auto; +} + +p.sidebar-title { + font-weight: bold; +} + +nav.contents, +aside.topic, +div.admonition, div.topic, blockquote { + clear: left; +} + +/* -- topics ---------------------------------------------------------------- */ + +nav.contents, +aside.topic, +div.topic { + border: 1px solid #ccc; + padding: 7px; + margin: 10px 0 10px 0; +} + +p.topic-title { + font-size: 1.1em; + font-weight: bold; + margin-top: 10px; +} + +/* -- admonitions ----------------------------------------------------------- */ + +div.admonition { + margin-top: 10px; + margin-bottom: 10px; + padding: 7px; +} + +div.admonition dt { + font-weight: bold; +} + +p.admonition-title { + margin: 0px 10px 5px 0px; + font-weight: bold; +} + +div.body p.centered { + text-align: center; + margin-top: 25px; +} + +/* -- content of sidebars/topics/admonitions -------------------------------- */ + +div.sidebar > :last-child, +aside.sidebar > :last-child, +nav.contents > :last-child, +aside.topic > :last-child, +div.topic > :last-child, +div.admonition > :last-child { + margin-bottom: 0; +} + +div.sidebar::after, +aside.sidebar::after, +nav.contents::after, +aside.topic::after, +div.topic::after, +div.admonition::after, +blockquote::after { + display: block; + content: ''; + clear: both; +} + +/* -- tables ---------------------------------------------------------------- */ + +table.docutils { + margin-top: 10px; + margin-bottom: 10px; + border: 0; + border-collapse: collapse; +} + +table.align-center { + margin-left: auto; + margin-right: auto; +} + +table.align-default { + margin-left: auto; + margin-right: auto; +} + +table caption span.caption-number { + font-style: italic; +} + +table caption span.caption-text { +} + +table.docutils td, table.docutils th { + padding: 1px 8px 1px 5px; + border-top: 0; + 
border-left: 0; + border-right: 0; + border-bottom: 1px solid #aaa; +} + +th { + text-align: left; + padding-right: 5px; +} + +table.citation { + border-left: solid 1px gray; + margin-left: 1px; +} + +table.citation td { + border-bottom: none; +} + +th > :first-child, +td > :first-child { + margin-top: 0px; +} + +th > :last-child, +td > :last-child { + margin-bottom: 0px; +} + +/* -- figures --------------------------------------------------------------- */ + +div.figure, figure { + margin: 0.5em; + padding: 0.5em; +} + +div.figure p.caption, figcaption { + padding: 0.3em; +} + +div.figure p.caption span.caption-number, +figcaption span.caption-number { + font-style: italic; +} + +div.figure p.caption span.caption-text, +figcaption span.caption-text { +} + +/* -- field list styles ----------------------------------------------------- */ + +table.field-list td, table.field-list th { + border: 0 !important; +} + +.field-list ul { + margin: 0; + padding-left: 1em; +} + +.field-list p { + margin: 0; +} + +.field-name { + -moz-hyphens: manual; + -ms-hyphens: manual; + -webkit-hyphens: manual; + hyphens: manual; +} + +/* -- hlist styles ---------------------------------------------------------- */ + +table.hlist { + margin: 1em 0; +} + +table.hlist td { + vertical-align: top; +} + +/* -- object description styles --------------------------------------------- */ + +.sig { + font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; +} + +.sig-name, code.descname { + background-color: transparent; + font-weight: bold; +} + +.sig-name { + font-size: 1.1em; +} + +code.descname { + font-size: 1.2em; +} + +.sig-prename, code.descclassname { + background-color: transparent; +} + +.optional { + font-size: 1.3em; +} + +.sig-paren { + font-size: larger; +} + +.sig-param.n { + font-style: italic; +} + +/* C++ specific styling */ + +.sig-inline.c-texpr, +.sig-inline.cpp-texpr { + font-family: unset; +} + +.sig.c .k, .sig.c .kt, +.sig.cpp .k, .sig.cpp .kt { + color: #0033B3; +} + +.sig.c .m, +.sig.cpp .m { + color: #1750EB; +} + +.sig.c .s, .sig.c .sc, +.sig.cpp .s, .sig.cpp .sc { + color: #067D17; +} + + +/* -- other body styles ----------------------------------------------------- */ + +ol.arabic { + list-style: decimal; +} + +ol.loweralpha { + list-style: lower-alpha; +} + +ol.upperalpha { + list-style: upper-alpha; +} + +ol.lowerroman { + list-style: lower-roman; +} + +ol.upperroman { + list-style: upper-roman; +} + +:not(li) > ol > li:first-child > :first-child, +:not(li) > ul > li:first-child > :first-child { + margin-top: 0px; +} + +:not(li) > ol > li:last-child > :last-child, +:not(li) > ul > li:last-child > :last-child { + margin-bottom: 0px; +} + +ol.simple ol p, +ol.simple ul p, +ul.simple ol p, +ul.simple ul p { + margin-top: 0; +} + +ol.simple > li:not(:first-child) > p, +ul.simple > li:not(:first-child) > p { + margin-top: 0; +} + +ol.simple p, +ul.simple p { + margin-bottom: 0; +} + +aside.footnote > span, +div.citation > span { + float: left; +} +aside.footnote > span:last-of-type, +div.citation > span:last-of-type { + padding-right: 0.5em; +} +aside.footnote > p { + margin-left: 2em; +} +div.citation > p { + margin-left: 4em; +} +aside.footnote > p:last-of-type, +div.citation > p:last-of-type { + margin-bottom: 0em; +} +aside.footnote > p:last-of-type:after, +div.citation > p:last-of-type:after { + content: ""; + clear: both; +} + +dl.field-list { + display: grid; + grid-template-columns: fit-content(30%) auto; +} + +dl.field-list > dt { + font-weight: bold; 
+ word-break: break-word; + padding-left: 0.5em; + padding-right: 5px; +} + +dl.field-list > dd { + padding-left: 0.5em; + margin-top: 0em; + margin-left: 0em; + margin-bottom: 0em; +} + +dl { + margin-bottom: 15px; +} + +dd > :first-child { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +.sig dd { + margin-top: 0px; + margin-bottom: 0px; +} + +.sig dl { + margin-top: 0px; + margin-bottom: 0px; +} + +dl > dd:last-child, +dl > dd:last-child > :last-child { + margin-bottom: 0; +} + +dt:target, span.highlighted { + background-color: #fbe54e; +} + +rect.highlighted { + fill: #fbe54e; +} + +dl.glossary dt { + font-weight: bold; + font-size: 1.1em; +} + +.versionmodified { + font-style: italic; +} + +.system-message { + background-color: #fda; + padding: 5px; + border: 3px solid red; +} + +.footnote:target { + background-color: #ffa; +} + +.line-block { + display: block; + margin-top: 1em; + margin-bottom: 1em; +} + +.line-block .line-block { + margin-top: 0; + margin-bottom: 0; + margin-left: 1.5em; +} + +.guilabel, .menuselection { + font-family: sans-serif; +} + +.accelerator { + text-decoration: underline; +} + +.classifier { + font-style: oblique; +} + +.classifier:before { + font-style: normal; + margin: 0 0.5em; + content: ":"; + display: inline-block; +} + +abbr, acronym { + border-bottom: dotted 1px; + cursor: help; +} + +.translated { + background-color: rgba(207, 255, 207, 0.2) +} + +.untranslated { + background-color: rgba(255, 207, 207, 0.2) +} + +/* -- code displays --------------------------------------------------------- */ + +pre { + overflow: auto; + overflow-y: hidden; /* fixes display issues on Chrome browsers */ +} + +pre, div[class*="highlight-"] { + clear: both; +} + +span.pre { + -moz-hyphens: none; + -ms-hyphens: none; + -webkit-hyphens: none; + hyphens: none; + white-space: nowrap; +} + +div[class*="highlight-"] { + margin: 1em 0; +} + +td.linenos pre { + border: 0; + background-color: transparent; + color: #aaa; +} + +table.highlighttable { + display: block; +} + +table.highlighttable tbody { + display: block; +} + +table.highlighttable tr { + display: flex; +} + +table.highlighttable td { + margin: 0; + padding: 0; +} + +table.highlighttable td.linenos { + padding-right: 0.5em; +} + +table.highlighttable td.code { + flex: 1; + overflow: hidden; +} + +.highlight .hll { + display: block; +} + +div.highlight pre, +table.highlighttable pre { + margin: 0; +} + +div.code-block-caption + div { + margin-top: 0; +} + +div.code-block-caption { + margin-top: 1em; + padding: 2px 5px; + font-size: small; +} + +div.code-block-caption code { + background-color: transparent; +} + +table.highlighttable td.linenos, +span.linenos, +div.highlight span.gp { /* gp: Generic.Prompt */ + user-select: none; + -webkit-user-select: text; /* Safari fallback only */ + -webkit-user-select: none; /* Chrome/Safari */ + -moz-user-select: none; /* Firefox */ + -ms-user-select: none; /* IE10+ */ +} + +div.code-block-caption span.caption-number { + padding: 0.1em 0.3em; + font-style: italic; +} + +div.code-block-caption span.caption-text { +} + +div.literal-block-wrapper { + margin: 1em 0; +} + +code.xref, a code { + background-color: transparent; + font-weight: bold; +} + +h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { + background-color: transparent; +} + +.viewcode-link { + float: right; +} + +.viewcode-back { + float: right; + font-family: sans-serif; +} + +div.viewcode-block:target { + margin: 
-1px -10px; + padding: 0 10px; +} + +/* -- math display ---------------------------------------------------------- */ + +img.math { + vertical-align: middle; +} + +div.body div.math p { + text-align: center; +} + +span.eqno { + float: right; +} + +span.eqno a.headerlink { + position: absolute; + z-index: 1; +} + +div.math:hover a.headerlink { + visibility: visible; +} + +/* -- printout stylesheet --------------------------------------------------- */ + +@media print { + div.document, + div.documentwrapper, + div.bodywrapper { + margin: 0 !important; + width: 100%; + } + + div.sphinxsidebar, + div.related, + div.footer, + #top-link { + display: none; + } +} \ No newline at end of file diff --git a/_static/css/badge_only.css b/_static/css/badge_only.css new file mode 100644 index 000000000..c718cee44 --- /dev/null +++ b/_static/css/badge_only.css @@ -0,0 +1 @@ +.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}@font-face{font-family:FontAwesome;font-style:normal;font-weight:400;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#FontAwesome) format("svg")}.fa:before{font-family:FontAwesome;font-style:normal;font-weight:400;line-height:1}.fa:before,a .fa{text-decoration:inherit}.fa:before,a .fa,li .fa{display:inline-block}li .fa-large:before{width:1.875em}ul.fas{list-style-type:none;margin-left:2em;text-indent:-.8em}ul.fas li .fa{width:.8em}ul.fas li .fa-large:before{vertical-align:baseline}.fa-book:before,.icon-book:before{content:"\f02d"}.fa-caret-down:before,.icon-caret-down:before{content:"\f0d7"}.fa-caret-up:before,.icon-caret-up:before{content:"\f0d8"}.fa-caret-left:before,.icon-caret-left:before{content:"\f0d9"}.fa-caret-right:before,.icon-caret-right:before{content:"\f0da"}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60}.rst-versions .rst-current-version:after{clear:both;content:"";display:block}.rst-versions .rst-current-version .fa{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd 
a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}} \ No newline at end of file diff --git a/_static/css/fonts/Roboto-Slab-Bold.woff b/_static/css/fonts/Roboto-Slab-Bold.woff new file mode 100644 index 000000000..6cb600001 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Bold.woff differ diff --git a/_static/css/fonts/Roboto-Slab-Bold.woff2 b/_static/css/fonts/Roboto-Slab-Bold.woff2 new file mode 100644 index 000000000..7059e2314 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Bold.woff2 differ diff --git a/_static/css/fonts/Roboto-Slab-Regular.woff b/_static/css/fonts/Roboto-Slab-Regular.woff new file mode 100644 index 000000000..f815f63f9 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Regular.woff differ diff --git a/_static/css/fonts/Roboto-Slab-Regular.woff2 b/_static/css/fonts/Roboto-Slab-Regular.woff2 new file mode 100644 index 000000000..f2c76e5bd Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Regular.woff2 differ diff --git a/_static/css/fonts/fontawesome-webfont.eot b/_static/css/fonts/fontawesome-webfont.eot new file mode 100644 index 000000000..e9f60ca95 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.eot differ diff --git a/_static/css/fonts/fontawesome-webfont.svg b/_static/css/fonts/fontawesome-webfont.svg new file mode 100644 index 000000000..855c845e5 --- /dev/null +++ b/_static/css/fonts/fontawesome-webfont.svg @@ -0,0 +1,2671 @@ + + + + +Created by FontForge 20120731 at Mon Oct 24 17:37:40 2016 + By ,,, +Copyright Dave Gandy 2016. All rights reserved. 
diff --git a/_static/css/fonts/fontawesome-webfont.ttf b/_static/css/fonts/fontawesome-webfont.ttf new file mode 100644 index 000000000..35acda2fa Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.ttf differ diff --git a/_static/css/fonts/fontawesome-webfont.woff b/_static/css/fonts/fontawesome-webfont.woff new file mode 100644 index 000000000..400014a4b Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.woff differ diff --git a/_static/css/fonts/fontawesome-webfont.woff2 b/_static/css/fonts/fontawesome-webfont.woff2 new file mode 100644 index 000000000..4d13fc604 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.woff2 differ diff --git a/_static/css/fonts/lato-bold-italic.woff b/_static/css/fonts/lato-bold-italic.woff new file mode 100644 index 000000000..88ad05b9f Binary files /dev/null and b/_static/css/fonts/lato-bold-italic.woff differ diff --git a/_static/css/fonts/lato-bold-italic.woff2 b/_static/css/fonts/lato-bold-italic.woff2 new file mode 100644 index 000000000..c4e3d804b Binary files /dev/null and b/_static/css/fonts/lato-bold-italic.woff2 differ diff --git a/_static/css/fonts/lato-bold.woff b/_static/css/fonts/lato-bold.woff new file mode 100644 index 000000000..c6dff51f0 Binary files /dev/null and b/_static/css/fonts/lato-bold.woff differ diff --git a/_static/css/fonts/lato-bold.woff2 b/_static/css/fonts/lato-bold.woff2 new file mode 100644 index 000000000..bb195043c Binary files /dev/null and b/_static/css/fonts/lato-bold.woff2 differ diff --git a/_static/css/fonts/lato-normal-italic.woff b/_static/css/fonts/lato-normal-italic.woff new file mode 100644 index 000000000..76114bc03 Binary files /dev/null and b/_static/css/fonts/lato-normal-italic.woff differ diff --git a/_static/css/fonts/lato-normal-italic.woff2 b/_static/css/fonts/lato-normal-italic.woff2 new file mode 100644 index 000000000..3404f37e2 Binary files /dev/null and b/_static/css/fonts/lato-normal-italic.woff2 differ diff --git a/_static/css/fonts/lato-normal.woff b/_static/css/fonts/lato-normal.woff new file mode 100644 index 000000000..ae1307ff5 Binary files
/dev/null and b/_static/css/fonts/lato-normal.woff differ diff --git a/_static/css/fonts/lato-normal.woff2 b/_static/css/fonts/lato-normal.woff2 new file mode 100644 index 000000000..3bf984332 Binary files /dev/null and b/_static/css/fonts/lato-normal.woff2 differ diff --git a/_static/css/mandoc.css b/_static/css/mandoc.css new file mode 100644 index 000000000..8cf11fcc0 --- /dev/null +++ b/_static/css/mandoc.css @@ -0,0 +1,262 @@ +/* $Id: mandoc.css,v 1.46 2019/06/02 16:57:13 schwarze Exp $ */ +/* + * Standard style sheet for mandoc(1) -Thtml and man.cgi(8). + * + * Written by Ingo Schwarze . + * I place this file into the public domain. + * Permission to use, copy, modify, and distribute it for any purpose + * with or without fee is hereby granted, without any conditions. + */ + +/* + * Edited by George Melikov + * to be integrated with sphinx RTD theme. + */ + +/* override */ +.man_container code { + overflow-x: initial; + background: none; + border: none; + font-size: 100%; +} + +/* OpenZFS styles */ +.man_container .head { + max-width: 640px; + width: 100%; +} +.man_container .head .head-vol { + text-align: center; +} +.man_container .head .head-rtitle { + text-align: right; +} +.man_container .foot td { + padding: 1em; +} + +/* Fix for Chrome */ +.man_container dl dt { + display: initial !important; + color: black !important; +} + +/* Next CSS rules come from upstream file as is, only with needed changes */ + +/* Sections and paragraphs. */ + +.manual-text { + margin-left: 0em; } +.Nd { } +section.Sh { } +h1.Sh { margin-top: 1.2em; + margin-bottom: 0.6em; } +section.Ss { } +h2.Ss { margin-top: 1.2em; + margin-bottom: 0.6em; + font-size: 105%; } +.Pp { margin: 0.6em 0em; } +.Sx { } +.Xr { } + +/* Displays and lists. */ + +.Bd { } +.Bd-indent { margin-left: 3.8em; } + +.Bl-bullet { list-style-type: disc; + padding-left: 1em; } +.Bl-bullet > li { } +.Bl-dash { list-style-type: none; + padding-left: 0em; } +.Bl-dash > li:before { + content: "\2014 "; } +.Bl-item { list-style-type: none; + padding-left: 0em; } +.Bl-item > li { } +.Bl-compact > li { + margin-top: 0em; } + +.Bl-enum { padding-left: 2em; } +.Bl-enum > li { } +.Bl-compact > li { + margin-top: 0em; } + +.Bl-diag { } +.Bl-diag > dt { + font-style: normal; + font-weight: bold; } +.Bl-diag > dd { + margin-left: 0em; } +.Bl-hang { } +.Bl-hang > dt { } +.Bl-hang > dd { + margin-left: 5.5em; } +.Bl-inset { } +.Bl-inset > dt { } +.Bl-inset > dd { + margin-left: 0em; } +.Bl-ohang { } +.Bl-ohang > dt { } +.Bl-ohang > dd { + margin-left: 0em; } +.Bl-tag { margin-top: 0.6em; + margin-left: 5.5em; } +.Bl-tag > dt { + float: left; + margin-top: 0em; + margin-left: -5.5em; + padding-right: 0.5em; + vertical-align: top; } +.Bl-tag > dd { + clear: right; + width: 100%; + margin-top: 0em; + margin-left: 0em; + margin-bottom: 0.6em; + vertical-align: top; + overflow: auto; } +.Bl-compact { margin-top: 0em; } +.Bl-compact > dd { + margin-bottom: 0em; } +.Bl-compact > dt { + margin-top: 0em; } + +.Bl-column { } +.Bl-column > tbody > tr { } +.Bl-column > tbody > tr > td { + margin-top: 1em; } +.Bl-compact > tbody > tr > td { + margin-top: 0em; } + +.Rs { font-style: normal; + font-weight: normal; } +.RsA { } +.RsB { font-style: italic; + font-weight: normal; } +.RsC { } +.RsD { } +.RsI { font-style: italic; + font-weight: normal; } +.RsJ { font-style: italic; + font-weight: normal; } +.RsN { } +.RsO { } +.RsP { } +.RsQ { } +.RsR { } +.RsT { text-decoration: underline; } +.RsU { } +.RsV { } + +.eqn { } +.tbl td { vertical-align: middle; } + +.HP { 
margin-left: 3.8em; + text-indent: -3.8em; } + +/* Semantic markup for command line utilities. */ + +table.Nm { } +code.Nm { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Fl { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Cm { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Ar { font-style: italic; + font-weight: normal; } +.Op { display: inline; } +.Ic { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Ev { font-style: normal; + font-weight: normal; + font-family: monospace; } +.Pa { font-style: italic; + font-weight: normal; } + +/* Semantic markup for function libraries. */ + +.Lb { } +code.In { font-style: normal; + font-weight: bold; + font-family: inherit; } +a.In { } +.Fd { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Ft { font-style: italic; + font-weight: normal; } +.Fn { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Fa { font-style: italic; + font-weight: normal; } +.Vt { font-style: italic; + font-weight: normal; } +.Va { font-style: italic; + font-weight: normal; } +.Dv { font-style: normal; + font-weight: normal; + font-family: monospace; } +.Er { font-style: normal; + font-weight: normal; + font-family: monospace; } + +/* Various semantic markup. */ + +.An { } +.Lk { } +.Mt { } +.Cd { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Ad { font-style: italic; + font-weight: normal; } +.Ms { font-style: normal; + font-weight: bold; } +.St { } +.Ux { } + +/* Physical markup. */ + +.Bf { display: inline; } +.No { font-style: normal; + font-weight: normal; } +.Em { font-style: italic; + font-weight: normal; } +.Sy { font-style: normal; + font-weight: bold; } +.Li { font-style: normal; + font-weight: normal; + font-family: monospace; } + +/* Tooltip support. */ + +h1.Sh, h2.Ss { position: relative; } +.An, .Ar, .Cd, .Cm, .Dv, .Em, .Er, .Ev, .Fa, .Fd, .Fl, .Fn, .Ft, +.Ic, code.In, .Lb, .Lk, .Ms, .Mt, .Nd, code.Nm, .Pa, .Rs, +.St, .Sx, .Sy, .Va, .Vt, .Xr { + display: inline-block; + position: relative; }? + +/* Overrides to avoid excessive margins on small devices. 
*/ + +@media (max-width: 37.5em) { +.manual-text { + margin-left: 0.5em; } +h1.Sh, h2.Ss { margin-left: 0em; } +.Bd-indent { margin-left: 2em; } +.Bl-hang > dd { + margin-left: 2em; } +.Bl-tag { margin-left: 2em; } +.Bl-tag > dt { + margin-left: -2em; } +.HP { margin-left: 2em; + text-indent: -2em; } +} diff --git a/_static/css/theme.css b/_static/css/theme.css new file mode 100644 index 000000000..19a446a0e --- /dev/null +++ b/_static/css/theme.css @@ -0,0 +1,4 @@ +html{box-sizing:border-box}*,:after,:before{box-sizing:inherit}article,aside,details,figcaption,figure,footer,header,hgroup,nav,section{display:block}audio,canvas,video{display:inline-block;*display:inline;*zoom:1}[hidden],audio:not([controls]){display:none}*{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}html{font-size:100%;-webkit-text-size-adjust:100%;-ms-text-size-adjust:100%}body{margin:0}a:active,a:hover{outline:0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:700}blockquote{margin:0}dfn{font-style:italic}ins{background:#ff9;text-decoration:none}ins,mark{color:#000}mark{background:#ff0;font-style:italic;font-weight:700}.rst-content code,.rst-content tt,code,kbd,pre,samp{font-family:monospace,serif;_font-family:courier new,monospace;font-size:1em}pre{white-space:pre}q{quotes:none}q:after,q:before{content:"";content:none}small{font-size:85%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-.5em}sub{bottom:-.25em}dl,ol,ul{margin:0;padding:0;list-style:none;list-style-image:none}li{list-style:none}dd{margin:0}img{border:0;-ms-interpolation-mode:bicubic;vertical-align:middle;max-width:100%}svg:not(:root){overflow:hidden}figure,form{margin:0}label{cursor:pointer}button,input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}button,input{line-height:normal}button,input[type=button],input[type=reset],input[type=submit]{cursor:pointer;-webkit-appearance:button;*overflow:visible}button[disabled],input[disabled]{cursor:default}input[type=search]{-webkit-appearance:textfield;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box}textarea{resize:vertical}table{border-collapse:collapse;border-spacing:0}td{vertical-align:top}.chromeframe{margin:.2em 0;background:#ccc;color:#000;padding:.2em 0}.ir{display:block;border:0;text-indent:-999em;overflow:hidden;background-color:transparent;background-repeat:no-repeat;text-align:left;direction:ltr;*line-height:0}.ir br{display:none}.hidden{display:none!important;visibility:hidden}.visuallyhidden{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.visuallyhidden.focusable:active,.visuallyhidden.focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}.invisible{visibility:hidden}.relative{position:relative}big,small{font-size:100%}@media print{body,html,section{background:none!important}*{box-shadow:none!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}a,a:visited{text-decoration:underline}.ir a:after,a[href^="#"]:after,a[href^="javascript:"]:after{content:""}blockquote,pre{page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}.rst-content .toctree-wrapper>p.caption,h2,h3,p{orphans:3;widows:3}.rst-content .toctree-wrapper>p.caption,h2,h3{page-break-after:avoid}}.btn,.fa:before,.icon:before,.rst-content .admonition,.rst-content 
.admonition-title:before,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .code-block-caption .headerlink:before,.rst-content .danger,.rst-content .eqno .headerlink:before,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-alert,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before,input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week],select,textarea{-webkit-font-smoothing:antialiased}.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}/*! + * Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome + * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License) + */@font-face{font-family:FontAwesome;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713);src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix&v=4.7.0) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#fontawesomeregular) format("svg");font-weight:400;font-style:normal}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{display:inline-block;font:normal normal normal 14px/1 
FontAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.fa-lg{font-size:1.33333em;line-height:.75em;vertical-align:-15%}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-fw{width:1.28571em;text-align:center}.fa-ul{padding-left:0;margin-left:2.14286em;list-style-type:none}.fa-ul>li{position:relative}.fa-li{position:absolute;left:-2.14286em;width:2.14286em;top:.14286em;text-align:center}.fa-li.fa-lg{left:-1.85714em}.fa-border{padding:.2em .25em .15em;border:.08em solid #eee;border-radius:.1em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa-pull-left.icon,.fa.fa-pull-left,.rst-content .code-block-caption .fa-pull-left.headerlink,.rst-content .eqno .fa-pull-left.headerlink,.rst-content .fa-pull-left.admonition-title,.rst-content code.download span.fa-pull-left:first-child,.rst-content dl dt .fa-pull-left.headerlink,.rst-content h1 .fa-pull-left.headerlink,.rst-content h2 .fa-pull-left.headerlink,.rst-content h3 .fa-pull-left.headerlink,.rst-content h4 .fa-pull-left.headerlink,.rst-content h5 .fa-pull-left.headerlink,.rst-content h6 .fa-pull-left.headerlink,.rst-content p .fa-pull-left.headerlink,.rst-content table>caption .fa-pull-left.headerlink,.rst-content tt.download span.fa-pull-left:first-child,.wy-menu-vertical li.current>a button.fa-pull-left.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-left.toctree-expand,.wy-menu-vertical li button.fa-pull-left.toctree-expand{margin-right:.3em}.fa-pull-right.icon,.fa.fa-pull-right,.rst-content .code-block-caption .fa-pull-right.headerlink,.rst-content .eqno .fa-pull-right.headerlink,.rst-content .fa-pull-right.admonition-title,.rst-content code.download span.fa-pull-right:first-child,.rst-content dl dt .fa-pull-right.headerlink,.rst-content h1 .fa-pull-right.headerlink,.rst-content h2 .fa-pull-right.headerlink,.rst-content h3 .fa-pull-right.headerlink,.rst-content h4 .fa-pull-right.headerlink,.rst-content h5 .fa-pull-right.headerlink,.rst-content h6 .fa-pull-right.headerlink,.rst-content p .fa-pull-right.headerlink,.rst-content table>caption .fa-pull-right.headerlink,.rst-content tt.download span.fa-pull-right:first-child,.wy-menu-vertical li.current>a button.fa-pull-right.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-right.toctree-expand,.wy-menu-vertical li button.fa-pull-right.toctree-expand{margin-left:.3em}.pull-right{float:right}.pull-left{float:left}.fa.pull-left,.pull-left.icon,.rst-content .code-block-caption .pull-left.headerlink,.rst-content .eqno .pull-left.headerlink,.rst-content .pull-left.admonition-title,.rst-content code.download span.pull-left:first-child,.rst-content dl dt .pull-left.headerlink,.rst-content h1 .pull-left.headerlink,.rst-content h2 .pull-left.headerlink,.rst-content h3 .pull-left.headerlink,.rst-content h4 .pull-left.headerlink,.rst-content h5 .pull-left.headerlink,.rst-content h6 .pull-left.headerlink,.rst-content p .pull-left.headerlink,.rst-content table>caption .pull-left.headerlink,.rst-content tt.download span.pull-left:first-child,.wy-menu-vertical li.current>a button.pull-left.toctree-expand,.wy-menu-vertical li.on a button.pull-left.toctree-expand,.wy-menu-vertical li button.pull-left.toctree-expand{margin-right:.3em}.fa.pull-right,.pull-right.icon,.rst-content .code-block-caption .pull-right.headerlink,.rst-content .eqno .pull-right.headerlink,.rst-content .pull-right.admonition-title,.rst-content code.download span.pull-right:first-child,.rst-content dl dt 
.pull-right.headerlink,.rst-content h1 .pull-right.headerlink,.rst-content h2 .pull-right.headerlink,.rst-content h3 .pull-right.headerlink,.rst-content h4 .pull-right.headerlink,.rst-content h5 .pull-right.headerlink,.rst-content h6 .pull-right.headerlink,.rst-content p .pull-right.headerlink,.rst-content table>caption .pull-right.headerlink,.rst-content tt.download span.pull-right:first-child,.wy-menu-vertical li.current>a button.pull-right.toctree-expand,.wy-menu-vertical li.on a button.pull-right.toctree-expand,.wy-menu-vertical li button.pull-right.toctree-expand{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s linear infinite;animation:fa-spin 2s linear infinite}.fa-pulse{-webkit-animation:fa-spin 1s steps(8) infinite;animation:fa-spin 1s steps(8) infinite}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scaleX(-1);-ms-transform:scaleX(-1);transform:scaleX(-1)}.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";-webkit-transform:scaleY(-1);-ms-transform:scaleY(-1);transform:scaleY(-1)}:root .fa-flip-horizontal,:root .fa-flip-vertical,:root .fa-rotate-90,:root .fa-rotate-180,:root .fa-rotate-270{filter:none}.fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle}.fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-glass:before{content:""}.fa-music:before{content:""}.fa-search:before,.icon-search:before{content:""}.fa-envelope-o:before{content:""}.fa-heart:before{content:""}.fa-star:before{content:""}.fa-star-o:before{content:""}.fa-user:before{content:""}.fa-film:before{content:""}.fa-th-large:before{content:""}.fa-th:before{content:""}.fa-th-list:before{content:""}.fa-check:before{content:""}.fa-close:before,.fa-remove:before,.fa-times:before{content:""}.fa-search-plus:before{content:""}.fa-search-minus:before{content:""}.fa-power-off:before{content:""}.fa-signal:before{content:""}.fa-cog:before,.fa-gear:before{content:""}.fa-trash-o:before{content:""}.fa-home:before,.icon-home:before{content:""}.fa-file-o:before{content:""}.fa-clock-o:before{content:""}.fa-road:before{content:""}.fa-download:before,.rst-content code.download span:first-child:before,.rst-content tt.download 
span:first-child:before{content:""}.fa-arrow-circle-o-down:before{content:""}.fa-arrow-circle-o-up:before{content:""}.fa-inbox:before{content:""}.fa-play-circle-o:before{content:""}.fa-repeat:before,.fa-rotate-right:before{content:""}.fa-refresh:before{content:""}.fa-list-alt:before{content:""}.fa-lock:before{content:""}.fa-flag:before{content:""}.fa-headphones:before{content:""}.fa-volume-off:before{content:""}.fa-volume-down:before{content:""}.fa-volume-up:before{content:""}.fa-qrcode:before{content:""}.fa-barcode:before{content:""}.fa-tag:before{content:""}.fa-tags:before{content:""}.fa-book:before,.icon-book:before{content:""}.fa-bookmark:before{content:""}.fa-print:before{content:""}.fa-camera:before{content:""}.fa-font:before{content:""}.fa-bold:before{content:""}.fa-italic:before{content:""}.fa-text-height:before{content:""}.fa-text-width:before{content:""}.fa-align-left:before{content:""}.fa-align-center:before{content:""}.fa-align-right:before{content:""}.fa-align-justify:before{content:""}.fa-list:before{content:""}.fa-dedent:before,.fa-outdent:before{content:""}.fa-indent:before{content:""}.fa-video-camera:before{content:""}.fa-image:before,.fa-photo:before,.fa-picture-o:before{content:""}.fa-pencil:before{content:""}.fa-map-marker:before{content:""}.fa-adjust:before{content:""}.fa-tint:before{content:""}.fa-edit:before,.fa-pencil-square-o:before{content:""}.fa-share-square-o:before{content:""}.fa-check-square-o:before{content:""}.fa-arrows:before{content:""}.fa-step-backward:before{content:""}.fa-fast-backward:before{content:""}.fa-backward:before{content:""}.fa-play:before{content:""}.fa-pause:before{content:""}.fa-stop:before{content:""}.fa-forward:before{content:""}.fa-fast-forward:before{content:""}.fa-step-forward:before{content:""}.fa-eject:before{content:""}.fa-chevron-left:before{content:""}.fa-chevron-right:before{content:""}.fa-plus-circle:before{content:""}.fa-minus-circle:before{content:""}.fa-times-circle:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before{content:""}.fa-check-circle:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before{content:""}.fa-question-circle:before{content:""}.fa-info-circle:before{content:""}.fa-crosshairs:before{content:""}.fa-times-circle-o:before{content:""}.fa-check-circle-o:before{content:""}.fa-ban:before{content:""}.fa-arrow-left:before{content:""}.fa-arrow-right:before{content:""}.fa-arrow-up:before{content:""}.fa-arrow-down:before{content:""}.fa-mail-forward:before,.fa-share:before{content:""}.fa-expand:before{content:""}.fa-compress:before{content:""}.fa-plus:before{content:""}.fa-minus:before{content:""}.fa-asterisk:before{content:""}.fa-exclamation-circle:before,.rst-content .admonition-title:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning 
.wy-input-context:before{content:""}.fa-gift:before{content:""}.fa-leaf:before{content:""}.fa-fire:before,.icon-fire:before{content:""}.fa-eye:before{content:""}.fa-eye-slash:before{content:""}.fa-exclamation-triangle:before,.fa-warning:before{content:""}.fa-plane:before{content:""}.fa-calendar:before{content:""}.fa-random:before{content:""}.fa-comment:before{content:""}.fa-magnet:before{content:""}.fa-chevron-up:before{content:""}.fa-chevron-down:before{content:""}.fa-retweet:before{content:""}.fa-shopping-cart:before{content:""}.fa-folder:before{content:""}.fa-folder-open:before{content:""}.fa-arrows-v:before{content:""}.fa-arrows-h:before{content:""}.fa-bar-chart-o:before,.fa-bar-chart:before{content:""}.fa-twitter-square:before{content:""}.fa-facebook-square:before{content:""}.fa-camera-retro:before{content:""}.fa-key:before{content:""}.fa-cogs:before,.fa-gears:before{content:""}.fa-comments:before{content:""}.fa-thumbs-o-up:before{content:""}.fa-thumbs-o-down:before{content:""}.fa-star-half:before{content:""}.fa-heart-o:before{content:""}.fa-sign-out:before{content:""}.fa-linkedin-square:before{content:""}.fa-thumb-tack:before{content:""}.fa-external-link:before{content:""}.fa-sign-in:before{content:""}.fa-trophy:before{content:""}.fa-github-square:before{content:""}.fa-upload:before{content:""}.fa-lemon-o:before{content:""}.fa-phone:before{content:""}.fa-square-o:before{content:""}.fa-bookmark-o:before{content:""}.fa-phone-square:before{content:""}.fa-twitter:before{content:""}.fa-facebook-f:before,.fa-facebook:before{content:""}.fa-github:before,.icon-github:before{content:""}.fa-unlock:before{content:""}.fa-credit-card:before{content:""}.fa-feed:before,.fa-rss:before{content:""}.fa-hdd-o:before{content:""}.fa-bullhorn:before{content:""}.fa-bell:before{content:""}.fa-certificate:before{content:""}.fa-hand-o-right:before{content:""}.fa-hand-o-left:before{content:""}.fa-hand-o-up:before{content:""}.fa-hand-o-down:before{content:""}.fa-arrow-circle-left:before,.icon-circle-arrow-left:before{content:""}.fa-arrow-circle-right:before,.icon-circle-arrow-right:before{content:""}.fa-arrow-circle-up:before{content:""}.fa-arrow-circle-down:before{content:""}.fa-globe:before{content:""}.fa-wrench:before{content:""}.fa-tasks:before{content:""}.fa-filter:before{content:""}.fa-briefcase:before{content:""}.fa-arrows-alt:before{content:""}.fa-group:before,.fa-users:before{content:""}.fa-chain:before,.fa-link:before,.icon-link:before{content:""}.fa-cloud:before{content:""}.fa-flask:before{content:""}.fa-cut:before,.fa-scissors:before{content:""}.fa-copy:before,.fa-files-o:before{content:""}.fa-paperclip:before{content:""}.fa-floppy-o:before,.fa-save:before{content:""}.fa-square:before{content:""}.fa-bars:before,.fa-navicon:before,.fa-reorder:before{content:""}.fa-list-ul:before{content:""}.fa-list-ol:before{content:""}.fa-strikethrough:before{content:""}.fa-underline:before{content:""}.fa-table:before{content:""}.fa-magic:before{content:""}.fa-truck:before{content:""}.fa-pinterest:before{content:""}.fa-pinterest-square:before{content:""}.fa-google-plus-square:before{content:""}.fa-google-plus:before{content:""}.fa-money:before{content:""}.fa-caret-down:before,.icon-caret-down:before,.wy-dropdown 
.caret:before{content:""}.fa-caret-up:before{content:""}.fa-caret-left:before{content:""}.fa-caret-right:before{content:""}.fa-columns:before{content:""}.fa-sort:before,.fa-unsorted:before{content:""}.fa-sort-desc:before,.fa-sort-down:before{content:""}.fa-sort-asc:before,.fa-sort-up:before{content:""}.fa-envelope:before{content:""}.fa-linkedin:before{content:""}.fa-rotate-left:before,.fa-undo:before{content:""}.fa-gavel:before,.fa-legal:before{content:""}.fa-dashboard:before,.fa-tachometer:before{content:""}.fa-comment-o:before{content:""}.fa-comments-o:before{content:""}.fa-bolt:before,.fa-flash:before{content:""}.fa-sitemap:before{content:""}.fa-umbrella:before{content:""}.fa-clipboard:before,.fa-paste:before{content:""}.fa-lightbulb-o:before{content:""}.fa-exchange:before{content:""}.fa-cloud-download:before{content:""}.fa-cloud-upload:before{content:""}.fa-user-md:before{content:""}.fa-stethoscope:before{content:""}.fa-suitcase:before{content:""}.fa-bell-o:before{content:""}.fa-coffee:before{content:""}.fa-cutlery:before{content:""}.fa-file-text-o:before{content:""}.fa-building-o:before{content:""}.fa-hospital-o:before{content:""}.fa-ambulance:before{content:""}.fa-medkit:before{content:""}.fa-fighter-jet:before{content:""}.fa-beer:before{content:""}.fa-h-square:before{content:""}.fa-plus-square:before{content:""}.fa-angle-double-left:before{content:""}.fa-angle-double-right:before{content:""}.fa-angle-double-up:before{content:""}.fa-angle-double-down:before{content:""}.fa-angle-left:before{content:""}.fa-angle-right:before{content:""}.fa-angle-up:before{content:""}.fa-angle-down:before{content:""}.fa-desktop:before{content:""}.fa-laptop:before{content:""}.fa-tablet:before{content:""}.fa-mobile-phone:before,.fa-mobile:before{content:""}.fa-circle-o:before{content:""}.fa-quote-left:before{content:""}.fa-quote-right:before{content:""}.fa-spinner:before{content:""}.fa-circle:before{content:""}.fa-mail-reply:before,.fa-reply:before{content:""}.fa-github-alt:before{content:""}.fa-folder-o:before{content:""}.fa-folder-open-o:before{content:""}.fa-smile-o:before{content:""}.fa-frown-o:before{content:""}.fa-meh-o:before{content:""}.fa-gamepad:before{content:""}.fa-keyboard-o:before{content:""}.fa-flag-o:before{content:""}.fa-flag-checkered:before{content:""}.fa-terminal:before{content:""}.fa-code:before{content:""}.fa-mail-reply-all:before,.fa-reply-all:before{content:""}.fa-star-half-empty:before,.fa-star-half-full:before,.fa-star-half-o:before{content:""}.fa-location-arrow:before{content:""}.fa-crop:before{content:""}.fa-code-fork:before{content:""}.fa-chain-broken:before,.fa-unlink:before{content:""}.fa-question:before{content:""}.fa-info:before{content:""}.fa-exclamation:before{content:""}.fa-superscript:before{content:""}.fa-subscript:before{content:""}.fa-eraser:before{content:""}.fa-puzzle-piece:before{content:""}.fa-microphone:before{content:""}.fa-microphone-slash:before{content:""}.fa-shield:before{content:""}.fa-calendar-o:before{content:""}.fa-fire-extinguisher:before{content:""}.fa-rocket:before{content:""}.fa-maxcdn:before{content:""}.fa-chevron-circle-left:before{content:""}.fa-chevron-circle-right:before{content:""}.fa-chevron-circle-up:before{content:""}.fa-chevron-circle-down:before{content:""}.fa-html5:before{content:""}.fa-css3:before{content:""}.fa-anchor:before{content:""}.fa-unlock-alt:before{content:""}.fa-bullseye:before{content:""}.fa-ellipsis-h:before{content:""}.fa-elli
psis-v:before{content:""}.fa-rss-square:before{content:""}.fa-play-circle:before{content:""}.fa-ticket:before{content:""}.fa-minus-square:before{content:""}.fa-minus-square-o:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before{content:""}.fa-level-up:before{content:""}.fa-level-down:before{content:""}.fa-check-square:before{content:""}.fa-pencil-square:before{content:""}.fa-external-link-square:before{content:""}.fa-share-square:before{content:""}.fa-compass:before{content:""}.fa-caret-square-o-down:before,.fa-toggle-down:before{content:""}.fa-caret-square-o-up:before,.fa-toggle-up:before{content:""}.fa-caret-square-o-right:before,.fa-toggle-right:before{content:""}.fa-eur:before,.fa-euro:before{content:""}.fa-gbp:before{content:""}.fa-dollar:before,.fa-usd:before{content:""}.fa-inr:before,.fa-rupee:before{content:""}.fa-cny:before,.fa-jpy:before,.fa-rmb:before,.fa-yen:before{content:""}.fa-rouble:before,.fa-rub:before,.fa-ruble:before{content:""}.fa-krw:before,.fa-won:before{content:""}.fa-bitcoin:before,.fa-btc:before{content:""}.fa-file:before{content:""}.fa-file-text:before{content:""}.fa-sort-alpha-asc:before{content:""}.fa-sort-alpha-desc:before{content:""}.fa-sort-amount-asc:before{content:""}.fa-sort-amount-desc:before{content:""}.fa-sort-numeric-asc:before{content:""}.fa-sort-numeric-desc:before{content:""}.fa-thumbs-up:before{content:""}.fa-thumbs-down:before{content:""}.fa-youtube-square:before{content:""}.fa-youtube:before{content:""}.fa-xing:before{content:""}.fa-xing-square:before{content:""}.fa-youtube-play:before{content:""}.fa-dropbox:before{content:""}.fa-stack-overflow:before{content:""}.fa-instagram:before{content:""}.fa-flickr:before{content:""}.fa-adn:before{content:""}.fa-bitbucket:before,.icon-bitbucket:before{content:""}.fa-bitbucket-square:before{content:""}.fa-tumblr:before{content:""}.fa-tumblr-square:before{content:""}.fa-long-arrow-down:before{content:""}.fa-long-arrow-up:before{content:""}.fa-long-arrow-left:before{content:""}.fa-long-arrow-right:before{content:""}.fa-apple:before{content:""}.fa-windows:before{content:""}.fa-android:before{content:""}.fa-linux:before{content:""}.fa-dribbble:before{content:""}.fa-skype:before{content:""}.fa-foursquare:before{content:""}.fa-trello:before{content:""}.fa-female:before{content:""}.fa-male:before{content:""}.fa-gittip:before,.fa-gratipay:before{content:""}.fa-sun-o:before{content:""}.fa-moon-o:before{content:""}.fa-archive:before{content:""}.fa-bug:before{content:""}.fa-vk:before{content:""}.fa-weibo:before{content:""}.fa-renren:before{content:""}.fa-pagelines:before{content:""}.fa-stack-exchange:before{content:""}.fa-arrow-circle-o-right:before{content:""}.fa-arrow-circle-o-left:before{content:""}.fa-caret-square-o-left:before,.fa-toggle-left:before{content:""}.fa-dot-circle-o:before{content:""}.fa-wheelchair:before{content:""}.fa-vimeo-square:before{content:""}.fa-try:before,.fa-turkish-lira:before{content:""}.fa-plus-square-o:before,.wy-menu-vertical li 
button.toctree-expand:before{content:""}.fa-space-shuttle:before{content:""}.fa-slack:before{content:""}.fa-envelope-square:before{content:""}.fa-wordpress:before{content:""}.fa-openid:before{content:""}.fa-bank:before,.fa-institution:before,.fa-university:before{content:""}.fa-graduation-cap:before,.fa-mortar-board:before{content:""}.fa-yahoo:before{content:""}.fa-google:before{content:""}.fa-reddit:before{content:""}.fa-reddit-square:before{content:""}.fa-stumbleupon-circle:before{content:""}.fa-stumbleupon:before{content:""}.fa-delicious:before{content:""}.fa-digg:before{content:""}.fa-pied-piper-pp:before{content:""}.fa-pied-piper-alt:before{content:""}.fa-drupal:before{content:""}.fa-joomla:before{content:""}.fa-language:before{content:""}.fa-fax:before{content:""}.fa-building:before{content:""}.fa-child:before{content:""}.fa-paw:before{content:""}.fa-spoon:before{content:""}.fa-cube:before{content:""}.fa-cubes:before{content:""}.fa-behance:before{content:""}.fa-behance-square:before{content:""}.fa-steam:before{content:""}.fa-steam-square:before{content:""}.fa-recycle:before{content:""}.fa-automobile:before,.fa-car:before{content:""}.fa-cab:before,.fa-taxi:before{content:""}.fa-tree:before{content:""}.fa-spotify:before{content:""}.fa-deviantart:before{content:""}.fa-soundcloud:before{content:""}.fa-database:before{content:""}.fa-file-pdf-o:before{content:""}.fa-file-word-o:before{content:""}.fa-file-excel-o:before{content:""}.fa-file-powerpoint-o:before{content:""}.fa-file-image-o:before,.fa-file-photo-o:before,.fa-file-picture-o:before{content:""}.fa-file-archive-o:before,.fa-file-zip-o:before{content:""}.fa-file-audio-o:before,.fa-file-sound-o:before{content:""}.fa-file-movie-o:before,.fa-file-video-o:before{content:""}.fa-file-code-o:before{content:""}.fa-vine:before{content:""}.fa-codepen:before{content:""}.fa-jsfiddle:before{content:""}.fa-life-bouy:before,.fa-life-buoy:before,.fa-life-ring:before,.fa-life-saver:before,.fa-support:before{content:""}.fa-circle-o-notch:before{content:""}.fa-ra:before,.fa-rebel:before,.fa-resistance:before{content:""}.fa-empire:before,.fa-ge:before{content:""}.fa-git-square:before{content:""}.fa-git:before{content:""}.fa-hacker-news:before,.fa-y-combinator-square:before,.fa-yc-square:before{content:""}.fa-tencent-weibo:before{content:""}.fa-qq:before{content:""}.fa-wechat:before,.fa-weixin:before{content:""}.fa-paper-plane:before,.fa-send:before{content:""}.fa-paper-plane-o:before,.fa-send-o:before{content:""}.fa-history:before{content:""}.fa-circle-thin:before{content:""}.fa-header:before{content:""}.fa-paragraph:before{content:""}.fa-sliders:before{content:""}.fa-share-alt:before{content:""}.fa-share-alt-square:before{content:""}.fa-bomb:before{content:""}.fa-futbol-o:before,.fa-soccer-ball-o:before{content:""}.fa-tty:before{content:""}.fa-binoculars:before{content:""}.fa-plug:before{content:""}.fa-slideshare:before{content:""}.fa-twitch:before{content:""}.fa-yelp:before{content:""}.fa-newspaper-o:before{content:""}.fa-wifi:before{content:""}.fa-calculator:before{content:""}.fa-paypal:before{content:""}.fa-google-wallet:before{content:""}.fa-cc-visa:before{content:""}.fa-cc-mastercard:before{content:""}.fa-cc-discover:before{content:""}.fa-cc-amex:before{content:""}.fa-cc-paypal:before{content:""}.fa-cc-stripe:before{content:""}.fa-bell-slash:before{content:""}.fa-bell-slash-o:before{content:""}.fa-trash:before{content:""}.fa-copyright:before{content:""}.f
a-at:before{content:""}.fa-eyedropper:before{content:""}.fa-paint-brush:before{content:""}.fa-birthday-cake:before{content:""}.fa-area-chart:before{content:""}.fa-pie-chart:before{content:""}.fa-line-chart:before{content:""}.fa-lastfm:before{content:""}.fa-lastfm-square:before{content:""}.fa-toggle-off:before{content:""}.fa-toggle-on:before{content:""}.fa-bicycle:before{content:""}.fa-bus:before{content:""}.fa-ioxhost:before{content:""}.fa-angellist:before{content:""}.fa-cc:before{content:""}.fa-ils:before,.fa-shekel:before,.fa-sheqel:before{content:""}.fa-meanpath:before{content:""}.fa-buysellads:before{content:""}.fa-connectdevelop:before{content:""}.fa-dashcube:before{content:""}.fa-forumbee:before{content:""}.fa-leanpub:before{content:""}.fa-sellsy:before{content:""}.fa-shirtsinbulk:before{content:""}.fa-simplybuilt:before{content:""}.fa-skyatlas:before{content:""}.fa-cart-plus:before{content:""}.fa-cart-arrow-down:before{content:""}.fa-diamond:before{content:""}.fa-ship:before{content:""}.fa-user-secret:before{content:""}.fa-motorcycle:before{content:""}.fa-street-view:before{content:""}.fa-heartbeat:before{content:""}.fa-venus:before{content:""}.fa-mars:before{content:""}.fa-mercury:before{content:""}.fa-intersex:before,.fa-transgender:before{content:""}.fa-transgender-alt:before{content:""}.fa-venus-double:before{content:""}.fa-mars-double:before{content:""}.fa-venus-mars:before{content:""}.fa-mars-stroke:before{content:""}.fa-mars-stroke-v:before{content:""}.fa-mars-stroke-h:before{content:""}.fa-neuter:before{content:""}.fa-genderless:before{content:""}.fa-facebook-official:before{content:""}.fa-pinterest-p:before{content:""}.fa-whatsapp:before{content:""}.fa-server:before{content:""}.fa-user-plus:before{content:""}.fa-user-times:before{content:""}.fa-bed:before,.fa-hotel:before{content:""}.fa-viacoin:before{content:""}.fa-train:before{content:""}.fa-subway:before{content:""}.fa-medium:before{content:""}.fa-y-combinator:before,.fa-yc:before{content:""}.fa-optin-monster:before{content:""}.fa-opencart:before{content:""}.fa-expeditedssl:before{content:""}.fa-battery-4:before,.fa-battery-full:before,.fa-battery:before{content:""}.fa-battery-3:before,.fa-battery-three-quarters:before{content:""}.fa-battery-2:before,.fa-battery-half:before{content:""}.fa-battery-1:before,.fa-battery-quarter:before{content:""}.fa-battery-0:before,.fa-battery-empty:before{content:""}.fa-mouse-pointer:before{content:""}.fa-i-cursor:before{content:""}.fa-object-group:before{content:""}.fa-object-ungroup:before{content:""}.fa-sticky-note:before{content:""}.fa-sticky-note-o:before{content:""}.fa-cc-jcb:before{content:""}.fa-cc-diners-club:before{content:""}.fa-clone:before{content:""}.fa-balance-scale:before{content:""}.fa-hourglass-o:before{content:""}.fa-hourglass-1:before,.fa-hourglass-start:before{content:""}.fa-hourglass-2:before,.fa-hourglass-half:before{content:""}.fa-hourglass-3:before,.fa-hourglass-end:before{content:""}.fa-hourglass:before{content:""}.fa-hand-grab-o:before,.fa-hand-rock-o:before{content:""}.fa-hand-paper-o:before,.fa-hand-stop-o:before{content:""}.fa-hand-scissors-o:before{content:""}.fa-hand-lizard-o:before{content:""}.fa-hand-spock-o:before{content:""}.fa-hand-pointer-o:before{content:""}.fa-hand-peace-o:before{content:""}.fa-trademark:before{content:""}.fa-registered:before{content:""}.fa-creative-commons:before{content:""}.fa-gg:before{content:""}.fa-gg-circle:before{content:""}.fa-trip
advisor:before{content:""}.fa-odnoklassniki:before{content:""}.fa-odnoklassniki-square:before{content:""}.fa-get-pocket:before{content:""}.fa-wikipedia-w:before{content:""}.fa-safari:before{content:""}.fa-chrome:before{content:""}.fa-firefox:before{content:""}.fa-opera:before{content:""}.fa-internet-explorer:before{content:""}.fa-television:before,.fa-tv:before{content:""}.fa-contao:before{content:""}.fa-500px:before{content:""}.fa-amazon:before{content:""}.fa-calendar-plus-o:before{content:""}.fa-calendar-minus-o:before{content:""}.fa-calendar-times-o:before{content:""}.fa-calendar-check-o:before{content:""}.fa-industry:before{content:""}.fa-map-pin:before{content:""}.fa-map-signs:before{content:""}.fa-map-o:before{content:""}.fa-map:before{content:""}.fa-commenting:before{content:""}.fa-commenting-o:before{content:""}.fa-houzz:before{content:""}.fa-vimeo:before{content:""}.fa-black-tie:before{content:""}.fa-fonticons:before{content:""}.fa-reddit-alien:before{content:""}.fa-edge:before{content:""}.fa-credit-card-alt:before{content:""}.fa-codiepie:before{content:""}.fa-modx:before{content:""}.fa-fort-awesome:before{content:""}.fa-usb:before{content:""}.fa-product-hunt:before{content:""}.fa-mixcloud:before{content:""}.fa-scribd:before{content:""}.fa-pause-circle:before{content:""}.fa-pause-circle-o:before{content:""}.fa-stop-circle:before{content:""}.fa-stop-circle-o:before{content:""}.fa-shopping-bag:before{content:""}.fa-shopping-basket:before{content:""}.fa-hashtag:before{content:""}.fa-bluetooth:before{content:""}.fa-bluetooth-b:before{content:""}.fa-percent:before{content:""}.fa-gitlab:before,.icon-gitlab:before{content:""}.fa-wpbeginner:before{content:""}.fa-wpforms:before{content:""}.fa-envira:before{content:""}.fa-universal-access:before{content:""}.fa-wheelchair-alt:before{content:""}.fa-question-circle-o:before{content:""}.fa-blind:before{content:""}.fa-audio-description:before{content:""}.fa-volume-control-phone:before{content:""}.fa-braille:before{content:""}.fa-assistive-listening-systems:before{content:""}.fa-american-sign-language-interpreting:before,.fa-asl-interpreting:before{content:""}.fa-deaf:before,.fa-deafness:before,.fa-hard-of-hearing:before{content:""}.fa-glide:before{content:""}.fa-glide-g:before{content:""}.fa-sign-language:before,.fa-signing:before{content:""}.fa-low-vision:before{content:""}.fa-viadeo:before{content:""}.fa-viadeo-square:before{content:""}.fa-snapchat:before{content:""}.fa-snapchat-ghost:before{content:""}.fa-snapchat-square:before{content:""}.fa-pied-piper:before{content:""}.fa-first-order:before{content:""}.fa-yoast:before{content:""}.fa-themeisle:before{content:""}.fa-google-plus-circle:before,.fa-google-plus-official:before{content:""}.fa-fa:before,.fa-font-awesome:before{content:""}.fa-handshake-o:before{content:""}.fa-envelope-open:before{content:""}.fa-envelope-open-o:before{content:""}.fa-linode:before{content:""}.fa-address-book:before{content:""}.fa-address-book-o:before{content:""}.fa-address-card:before,.fa-vcard:before{content:""}.fa-address-card-o:before,.fa-vcard-o:before{content:""}.fa-user-circle:before{content:""}.fa-user-circle-o:before{content:""}.fa-user-o:before{content:""}.fa-id-badge:before{content:""}.fa-drivers-license:before,.fa-id-card:before{content:""}.fa-drivers-license-o:before,.fa-id-card-o:before{content:""}.fa-quora:before{content:""}.fa-free-code-camp:before{content:""}.fa-telegram:before{content:""}.fa-thermometer-4:b
efore,.fa-thermometer-full:before,.fa-thermometer:before{content:""}.fa-thermometer-3:before,.fa-thermometer-three-quarters:before{content:""}.fa-thermometer-2:before,.fa-thermometer-half:before{content:""}.fa-thermometer-1:before,.fa-thermometer-quarter:before{content:""}.fa-thermometer-0:before,.fa-thermometer-empty:before{content:""}.fa-shower:before{content:""}.fa-bath:before,.fa-bathtub:before,.fa-s15:before{content:""}.fa-podcast:before{content:""}.fa-window-maximize:before{content:""}.fa-window-minimize:before{content:""}.fa-window-restore:before{content:""}.fa-times-rectangle:before,.fa-window-close:before{content:""}.fa-times-rectangle-o:before,.fa-window-close-o:before{content:""}.fa-bandcamp:before{content:""}.fa-grav:before{content:""}.fa-etsy:before{content:""}.fa-imdb:before{content:""}.fa-ravelry:before{content:""}.fa-eercast:before{content:""}.fa-microchip:before{content:""}.fa-snowflake-o:before{content:""}.fa-superpowers:before{content:""}.fa-wpexplorer:before{content:""}.fa-meetup:before{content:""}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-dropdown .caret,.wy-inline-validate.wy-inline-validate-danger .wy-input-context,.wy-inline-validate.wy-inline-validate-info .wy-input-context,.wy-inline-validate.wy-inline-validate-success .wy-input-context,.wy-inline-validate.wy-inline-validate-warning .wy-input-context,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{font-family:inherit}.fa:before,.icon:before,.rst-content .admonition-title:before,.rst-content .code-block-caption .headerlink:before,.rst-content .eqno .headerlink:before,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before{font-family:FontAwesome;display:inline-block;font-style:normal;font-weight:400;line-height:1;text-decoration:inherit}.rst-content .code-block-caption a .headerlink,.rst-content .eqno a .headerlink,.rst-content a 
.admonition-title,.rst-content code.download a span:first-child,.rst-content dl dt a .headerlink,.rst-content h1 a .headerlink,.rst-content h2 a .headerlink,.rst-content h3 a .headerlink,.rst-content h4 a .headerlink,.rst-content h5 a .headerlink,.rst-content h6 a .headerlink,.rst-content p.caption a .headerlink,.rst-content p a .headerlink,.rst-content table>caption a .headerlink,.rst-content tt.download a span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li a button.toctree-expand,a .fa,a .icon,a .rst-content .admonition-title,a .rst-content .code-block-caption .headerlink,a .rst-content .eqno .headerlink,a .rst-content code.download span:first-child,a .rst-content dl dt .headerlink,a .rst-content h1 .headerlink,a .rst-content h2 .headerlink,a .rst-content h3 .headerlink,a .rst-content h4 .headerlink,a .rst-content h5 .headerlink,a .rst-content h6 .headerlink,a .rst-content p.caption .headerlink,a .rst-content p .headerlink,a .rst-content table>caption .headerlink,a .rst-content tt.download span:first-child,a .wy-menu-vertical li button.toctree-expand{display:inline-block;text-decoration:inherit}.btn .fa,.btn .icon,.btn .rst-content .admonition-title,.btn .rst-content .code-block-caption .headerlink,.btn .rst-content .eqno .headerlink,.btn .rst-content code.download span:first-child,.btn .rst-content dl dt .headerlink,.btn .rst-content h1 .headerlink,.btn .rst-content h2 .headerlink,.btn .rst-content h3 .headerlink,.btn .rst-content h4 .headerlink,.btn .rst-content h5 .headerlink,.btn .rst-content h6 .headerlink,.btn .rst-content p .headerlink,.btn .rst-content table>caption .headerlink,.btn .rst-content tt.download span:first-child,.btn .wy-menu-vertical li.current>a button.toctree-expand,.btn .wy-menu-vertical li.on a button.toctree-expand,.btn .wy-menu-vertical li button.toctree-expand,.nav .fa,.nav .icon,.nav .rst-content .admonition-title,.nav .rst-content .code-block-caption .headerlink,.nav .rst-content .eqno .headerlink,.nav .rst-content code.download span:first-child,.nav .rst-content dl dt .headerlink,.nav .rst-content h1 .headerlink,.nav .rst-content h2 .headerlink,.nav .rst-content h3 .headerlink,.nav .rst-content h4 .headerlink,.nav .rst-content h5 .headerlink,.nav .rst-content h6 .headerlink,.nav .rst-content p .headerlink,.nav .rst-content table>caption .headerlink,.nav .rst-content tt.download span:first-child,.nav .wy-menu-vertical li.current>a button.toctree-expand,.nav .wy-menu-vertical li.on a button.toctree-expand,.nav .wy-menu-vertical li button.toctree-expand,.rst-content .btn .admonition-title,.rst-content .code-block-caption .btn .headerlink,.rst-content .code-block-caption .nav .headerlink,.rst-content .eqno .btn .headerlink,.rst-content .eqno .nav .headerlink,.rst-content .nav .admonition-title,.rst-content code.download .btn span:first-child,.rst-content code.download .nav span:first-child,.rst-content dl dt .btn .headerlink,.rst-content dl dt .nav .headerlink,.rst-content h1 .btn .headerlink,.rst-content h1 .nav .headerlink,.rst-content h2 .btn .headerlink,.rst-content h2 .nav .headerlink,.rst-content h3 .btn .headerlink,.rst-content h3 .nav .headerlink,.rst-content h4 .btn .headerlink,.rst-content h4 .nav .headerlink,.rst-content h5 .btn .headerlink,.rst-content h5 .nav .headerlink,.rst-content h6 .btn .headerlink,.rst-content h6 .nav .headerlink,.rst-content p .btn .headerlink,.rst-content p .nav .headerlink,.rst-content table>caption .btn .headerlink,.rst-content 
table>caption .nav .headerlink,.rst-content tt.download .btn span:first-child,.rst-content tt.download .nav span:first-child,.wy-menu-vertical li .btn button.toctree-expand,.wy-menu-vertical li.current>a .btn button.toctree-expand,.wy-menu-vertical li.current>a .nav button.toctree-expand,.wy-menu-vertical li .nav button.toctree-expand,.wy-menu-vertical li.on a .btn button.toctree-expand,.wy-menu-vertical li.on a .nav button.toctree-expand{display:inline}.btn .fa-large.icon,.btn .fa.fa-large,.btn .rst-content .code-block-caption .fa-large.headerlink,.btn .rst-content .eqno .fa-large.headerlink,.btn .rst-content .fa-large.admonition-title,.btn .rst-content code.download span.fa-large:first-child,.btn .rst-content dl dt .fa-large.headerlink,.btn .rst-content h1 .fa-large.headerlink,.btn .rst-content h2 .fa-large.headerlink,.btn .rst-content h3 .fa-large.headerlink,.btn .rst-content h4 .fa-large.headerlink,.btn .rst-content h5 .fa-large.headerlink,.btn .rst-content h6 .fa-large.headerlink,.btn .rst-content p .fa-large.headerlink,.btn .rst-content table>caption .fa-large.headerlink,.btn .rst-content tt.download span.fa-large:first-child,.btn .wy-menu-vertical li button.fa-large.toctree-expand,.nav .fa-large.icon,.nav .fa.fa-large,.nav .rst-content .code-block-caption .fa-large.headerlink,.nav .rst-content .eqno .fa-large.headerlink,.nav .rst-content .fa-large.admonition-title,.nav .rst-content code.download span.fa-large:first-child,.nav .rst-content dl dt .fa-large.headerlink,.nav .rst-content h1 .fa-large.headerlink,.nav .rst-content h2 .fa-large.headerlink,.nav .rst-content h3 .fa-large.headerlink,.nav .rst-content h4 .fa-large.headerlink,.nav .rst-content h5 .fa-large.headerlink,.nav .rst-content h6 .fa-large.headerlink,.nav .rst-content p .fa-large.headerlink,.nav .rst-content table>caption .fa-large.headerlink,.nav .rst-content tt.download span.fa-large:first-child,.nav .wy-menu-vertical li button.fa-large.toctree-expand,.rst-content .btn .fa-large.admonition-title,.rst-content .code-block-caption .btn .fa-large.headerlink,.rst-content .code-block-caption .nav .fa-large.headerlink,.rst-content .eqno .btn .fa-large.headerlink,.rst-content .eqno .nav .fa-large.headerlink,.rst-content .nav .fa-large.admonition-title,.rst-content code.download .btn span.fa-large:first-child,.rst-content code.download .nav span.fa-large:first-child,.rst-content dl dt .btn .fa-large.headerlink,.rst-content dl dt .nav .fa-large.headerlink,.rst-content h1 .btn .fa-large.headerlink,.rst-content h1 .nav .fa-large.headerlink,.rst-content h2 .btn .fa-large.headerlink,.rst-content h2 .nav .fa-large.headerlink,.rst-content h3 .btn .fa-large.headerlink,.rst-content h3 .nav .fa-large.headerlink,.rst-content h4 .btn .fa-large.headerlink,.rst-content h4 .nav .fa-large.headerlink,.rst-content h5 .btn .fa-large.headerlink,.rst-content h5 .nav .fa-large.headerlink,.rst-content h6 .btn .fa-large.headerlink,.rst-content h6 .nav .fa-large.headerlink,.rst-content p .btn .fa-large.headerlink,.rst-content p .nav .fa-large.headerlink,.rst-content table>caption .btn .fa-large.headerlink,.rst-content table>caption .nav .fa-large.headerlink,.rst-content tt.download .btn span.fa-large:first-child,.rst-content tt.download .nav span.fa-large:first-child,.wy-menu-vertical li .btn button.fa-large.toctree-expand,.wy-menu-vertical li .nav button.fa-large.toctree-expand{line-height:.9em}.btn .fa-spin.icon,.btn .fa.fa-spin,.btn .rst-content .code-block-caption .fa-spin.headerlink,.btn .rst-content .eqno .fa-spin.headerlink,.btn .rst-content 
.fa-spin.admonition-title,.btn .rst-content code.download span.fa-spin:first-child,.btn .rst-content dl dt .fa-spin.headerlink,.btn .rst-content h1 .fa-spin.headerlink,.btn .rst-content h2 .fa-spin.headerlink,.btn .rst-content h3 .fa-spin.headerlink,.btn .rst-content h4 .fa-spin.headerlink,.btn .rst-content h5 .fa-spin.headerlink,.btn .rst-content h6 .fa-spin.headerlink,.btn .rst-content p .fa-spin.headerlink,.btn .rst-content table>caption .fa-spin.headerlink,.btn .rst-content tt.download span.fa-spin:first-child,.btn .wy-menu-vertical li button.fa-spin.toctree-expand,.nav .fa-spin.icon,.nav .fa.fa-spin,.nav .rst-content .code-block-caption .fa-spin.headerlink,.nav .rst-content .eqno .fa-spin.headerlink,.nav .rst-content .fa-spin.admonition-title,.nav .rst-content code.download span.fa-spin:first-child,.nav .rst-content dl dt .fa-spin.headerlink,.nav .rst-content h1 .fa-spin.headerlink,.nav .rst-content h2 .fa-spin.headerlink,.nav .rst-content h3 .fa-spin.headerlink,.nav .rst-content h4 .fa-spin.headerlink,.nav .rst-content h5 .fa-spin.headerlink,.nav .rst-content h6 .fa-spin.headerlink,.nav .rst-content p .fa-spin.headerlink,.nav .rst-content table>caption .fa-spin.headerlink,.nav .rst-content tt.download span.fa-spin:first-child,.nav .wy-menu-vertical li button.fa-spin.toctree-expand,.rst-content .btn .fa-spin.admonition-title,.rst-content .code-block-caption .btn .fa-spin.headerlink,.rst-content .code-block-caption .nav .fa-spin.headerlink,.rst-content .eqno .btn .fa-spin.headerlink,.rst-content .eqno .nav .fa-spin.headerlink,.rst-content .nav .fa-spin.admonition-title,.rst-content code.download .btn span.fa-spin:first-child,.rst-content code.download .nav span.fa-spin:first-child,.rst-content dl dt .btn .fa-spin.headerlink,.rst-content dl dt .nav .fa-spin.headerlink,.rst-content h1 .btn .fa-spin.headerlink,.rst-content h1 .nav .fa-spin.headerlink,.rst-content h2 .btn .fa-spin.headerlink,.rst-content h2 .nav .fa-spin.headerlink,.rst-content h3 .btn .fa-spin.headerlink,.rst-content h3 .nav .fa-spin.headerlink,.rst-content h4 .btn .fa-spin.headerlink,.rst-content h4 .nav .fa-spin.headerlink,.rst-content h5 .btn .fa-spin.headerlink,.rst-content h5 .nav .fa-spin.headerlink,.rst-content h6 .btn .fa-spin.headerlink,.rst-content h6 .nav .fa-spin.headerlink,.rst-content p .btn .fa-spin.headerlink,.rst-content p .nav .fa-spin.headerlink,.rst-content table>caption .btn .fa-spin.headerlink,.rst-content table>caption .nav .fa-spin.headerlink,.rst-content tt.download .btn span.fa-spin:first-child,.rst-content tt.download .nav span.fa-spin:first-child,.wy-menu-vertical li .btn button.fa-spin.toctree-expand,.wy-menu-vertical li .nav button.fa-spin.toctree-expand{display:inline-block}.btn.fa:before,.btn.icon:before,.rst-content .btn.admonition-title:before,.rst-content .code-block-caption .btn.headerlink:before,.rst-content .eqno .btn.headerlink:before,.rst-content code.download span.btn:first-child:before,.rst-content dl dt .btn.headerlink:before,.rst-content h1 .btn.headerlink:before,.rst-content h2 .btn.headerlink:before,.rst-content h3 .btn.headerlink:before,.rst-content h4 .btn.headerlink:before,.rst-content h5 .btn.headerlink:before,.rst-content h6 .btn.headerlink:before,.rst-content p .btn.headerlink:before,.rst-content table>caption .btn.headerlink:before,.rst-content tt.download span.btn:first-child:before,.wy-menu-vertical li button.btn.toctree-expand:before{opacity:.5;-webkit-transition:opacity .05s ease-in;-moz-transition:opacity .05s ease-in;transition:opacity .05s 
ease-in}.btn.fa:hover:before,.btn.icon:hover:before,.rst-content .btn.admonition-title:hover:before,.rst-content .code-block-caption .btn.headerlink:hover:before,.rst-content .eqno .btn.headerlink:hover:before,.rst-content code.download span.btn:first-child:hover:before,.rst-content dl dt .btn.headerlink:hover:before,.rst-content h1 .btn.headerlink:hover:before,.rst-content h2 .btn.headerlink:hover:before,.rst-content h3 .btn.headerlink:hover:before,.rst-content h4 .btn.headerlink:hover:before,.rst-content h5 .btn.headerlink:hover:before,.rst-content h6 .btn.headerlink:hover:before,.rst-content p .btn.headerlink:hover:before,.rst-content table>caption .btn.headerlink:hover:before,.rst-content tt.download span.btn:first-child:hover:before,.wy-menu-vertical li button.btn.toctree-expand:hover:before{opacity:1}.btn-mini .fa:before,.btn-mini .icon:before,.btn-mini .rst-content .admonition-title:before,.btn-mini .rst-content .code-block-caption .headerlink:before,.btn-mini .rst-content .eqno .headerlink:before,.btn-mini .rst-content code.download span:first-child:before,.btn-mini .rst-content dl dt .headerlink:before,.btn-mini .rst-content h1 .headerlink:before,.btn-mini .rst-content h2 .headerlink:before,.btn-mini .rst-content h3 .headerlink:before,.btn-mini .rst-content h4 .headerlink:before,.btn-mini .rst-content h5 .headerlink:before,.btn-mini .rst-content h6 .headerlink:before,.btn-mini .rst-content p .headerlink:before,.btn-mini .rst-content table>caption .headerlink:before,.btn-mini .rst-content tt.download span:first-child:before,.btn-mini .wy-menu-vertical li button.toctree-expand:before,.rst-content .btn-mini .admonition-title:before,.rst-content .code-block-caption .btn-mini .headerlink:before,.rst-content .eqno .btn-mini .headerlink:before,.rst-content code.download .btn-mini span:first-child:before,.rst-content dl dt .btn-mini .headerlink:before,.rst-content h1 .btn-mini .headerlink:before,.rst-content h2 .btn-mini .headerlink:before,.rst-content h3 .btn-mini .headerlink:before,.rst-content h4 .btn-mini .headerlink:before,.rst-content h5 .btn-mini .headerlink:before,.rst-content h6 .btn-mini .headerlink:before,.rst-content p .btn-mini .headerlink:before,.rst-content table>caption .btn-mini .headerlink:before,.rst-content tt.download .btn-mini span:first-child:before,.wy-menu-vertical li .btn-mini button.toctree-expand:before{font-size:14px;vertical-align:-15%}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.wy-alert{padding:12px;line-height:24px;margin-bottom:24px;background:#e7f2fa}.rst-content .admonition-title,.wy-alert-title{font-weight:700;display:block;color:#fff;background:#6ab0de;padding:6px 12px;margin:-12px -12px 12px}.rst-content .danger,.rst-content .error,.rst-content .wy-alert-danger.admonition,.rst-content .wy-alert-danger.admonition-todo,.rst-content .wy-alert-danger.attention,.rst-content .wy-alert-danger.caution,.rst-content .wy-alert-danger.hint,.rst-content .wy-alert-danger.important,.rst-content .wy-alert-danger.note,.rst-content .wy-alert-danger.seealso,.rst-content .wy-alert-danger.tip,.rst-content .wy-alert-danger.warning,.wy-alert.wy-alert-danger{background:#fdf3f2}.rst-content .danger .admonition-title,.rst-content .danger .wy-alert-title,.rst-content .error .admonition-title,.rst-content .error .wy-alert-title,.rst-content 
.wy-alert-danger.admonition-todo .admonition-title,.rst-content .wy-alert-danger.admonition-todo .wy-alert-title,.rst-content .wy-alert-danger.admonition .admonition-title,.rst-content .wy-alert-danger.admonition .wy-alert-title,.rst-content .wy-alert-danger.attention .admonition-title,.rst-content .wy-alert-danger.attention .wy-alert-title,.rst-content .wy-alert-danger.caution .admonition-title,.rst-content .wy-alert-danger.caution .wy-alert-title,.rst-content .wy-alert-danger.hint .admonition-title,.rst-content .wy-alert-danger.hint .wy-alert-title,.rst-content .wy-alert-danger.important .admonition-title,.rst-content .wy-alert-danger.important .wy-alert-title,.rst-content .wy-alert-danger.note .admonition-title,.rst-content .wy-alert-danger.note .wy-alert-title,.rst-content .wy-alert-danger.seealso .admonition-title,.rst-content .wy-alert-danger.seealso .wy-alert-title,.rst-content .wy-alert-danger.tip .admonition-title,.rst-content .wy-alert-danger.tip .wy-alert-title,.rst-content .wy-alert-danger.warning .admonition-title,.rst-content .wy-alert-danger.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-danger .admonition-title,.wy-alert.wy-alert-danger .rst-content .admonition-title,.wy-alert.wy-alert-danger .wy-alert-title{background:#f29f97}.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .warning,.rst-content .wy-alert-warning.admonition,.rst-content .wy-alert-warning.danger,.rst-content .wy-alert-warning.error,.rst-content .wy-alert-warning.hint,.rst-content .wy-alert-warning.important,.rst-content .wy-alert-warning.note,.rst-content .wy-alert-warning.seealso,.rst-content .wy-alert-warning.tip,.wy-alert.wy-alert-warning{background:#ffedcc}.rst-content .admonition-todo .admonition-title,.rst-content .admonition-todo .wy-alert-title,.rst-content .attention .admonition-title,.rst-content .attention .wy-alert-title,.rst-content .caution .admonition-title,.rst-content .caution .wy-alert-title,.rst-content .warning .admonition-title,.rst-content .warning .wy-alert-title,.rst-content .wy-alert-warning.admonition .admonition-title,.rst-content .wy-alert-warning.admonition .wy-alert-title,.rst-content .wy-alert-warning.danger .admonition-title,.rst-content .wy-alert-warning.danger .wy-alert-title,.rst-content .wy-alert-warning.error .admonition-title,.rst-content .wy-alert-warning.error .wy-alert-title,.rst-content .wy-alert-warning.hint .admonition-title,.rst-content .wy-alert-warning.hint .wy-alert-title,.rst-content .wy-alert-warning.important .admonition-title,.rst-content .wy-alert-warning.important .wy-alert-title,.rst-content .wy-alert-warning.note .admonition-title,.rst-content .wy-alert-warning.note .wy-alert-title,.rst-content .wy-alert-warning.seealso .admonition-title,.rst-content .wy-alert-warning.seealso .wy-alert-title,.rst-content .wy-alert-warning.tip .admonition-title,.rst-content .wy-alert-warning.tip .wy-alert-title,.rst-content .wy-alert.wy-alert-warning .admonition-title,.wy-alert.wy-alert-warning .rst-content .admonition-title,.wy-alert.wy-alert-warning .wy-alert-title{background:#f0b37e}.rst-content .note,.rst-content .seealso,.rst-content .wy-alert-info.admonition,.rst-content .wy-alert-info.admonition-todo,.rst-content .wy-alert-info.attention,.rst-content .wy-alert-info.caution,.rst-content .wy-alert-info.danger,.rst-content .wy-alert-info.error,.rst-content .wy-alert-info.hint,.rst-content .wy-alert-info.important,.rst-content .wy-alert-info.tip,.rst-content 
.wy-alert-info.warning,.wy-alert.wy-alert-info{background:#e7f2fa}.rst-content .note .admonition-title,.rst-content .note .wy-alert-title,.rst-content .seealso .admonition-title,.rst-content .seealso .wy-alert-title,.rst-content .wy-alert-info.admonition-todo .admonition-title,.rst-content .wy-alert-info.admonition-todo .wy-alert-title,.rst-content .wy-alert-info.admonition .admonition-title,.rst-content .wy-alert-info.admonition .wy-alert-title,.rst-content .wy-alert-info.attention .admonition-title,.rst-content .wy-alert-info.attention .wy-alert-title,.rst-content .wy-alert-info.caution .admonition-title,.rst-content .wy-alert-info.caution .wy-alert-title,.rst-content .wy-alert-info.danger .admonition-title,.rst-content .wy-alert-info.danger .wy-alert-title,.rst-content .wy-alert-info.error .admonition-title,.rst-content .wy-alert-info.error .wy-alert-title,.rst-content .wy-alert-info.hint .admonition-title,.rst-content .wy-alert-info.hint .wy-alert-title,.rst-content .wy-alert-info.important .admonition-title,.rst-content .wy-alert-info.important .wy-alert-title,.rst-content .wy-alert-info.tip .admonition-title,.rst-content .wy-alert-info.tip .wy-alert-title,.rst-content .wy-alert-info.warning .admonition-title,.rst-content .wy-alert-info.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-info .admonition-title,.wy-alert.wy-alert-info .rst-content .admonition-title,.wy-alert.wy-alert-info .wy-alert-title{background:#6ab0de}.rst-content .hint,.rst-content .important,.rst-content .tip,.rst-content .wy-alert-success.admonition,.rst-content .wy-alert-success.admonition-todo,.rst-content .wy-alert-success.attention,.rst-content .wy-alert-success.caution,.rst-content .wy-alert-success.danger,.rst-content .wy-alert-success.error,.rst-content .wy-alert-success.note,.rst-content .wy-alert-success.seealso,.rst-content .wy-alert-success.warning,.wy-alert.wy-alert-success{background:#dbfaf4}.rst-content .hint .admonition-title,.rst-content .hint .wy-alert-title,.rst-content .important .admonition-title,.rst-content .important .wy-alert-title,.rst-content .tip .admonition-title,.rst-content .tip .wy-alert-title,.rst-content .wy-alert-success.admonition-todo .admonition-title,.rst-content .wy-alert-success.admonition-todo .wy-alert-title,.rst-content .wy-alert-success.admonition .admonition-title,.rst-content .wy-alert-success.admonition .wy-alert-title,.rst-content .wy-alert-success.attention .admonition-title,.rst-content .wy-alert-success.attention .wy-alert-title,.rst-content .wy-alert-success.caution .admonition-title,.rst-content .wy-alert-success.caution .wy-alert-title,.rst-content .wy-alert-success.danger .admonition-title,.rst-content .wy-alert-success.danger .wy-alert-title,.rst-content .wy-alert-success.error .admonition-title,.rst-content .wy-alert-success.error .wy-alert-title,.rst-content .wy-alert-success.note .admonition-title,.rst-content .wy-alert-success.note .wy-alert-title,.rst-content .wy-alert-success.seealso .admonition-title,.rst-content .wy-alert-success.seealso .wy-alert-title,.rst-content .wy-alert-success.warning .admonition-title,.rst-content .wy-alert-success.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-success .admonition-title,.wy-alert.wy-alert-success .rst-content .admonition-title,.wy-alert.wy-alert-success .wy-alert-title{background:#1abc9c}.rst-content .wy-alert-neutral.admonition,.rst-content .wy-alert-neutral.admonition-todo,.rst-content .wy-alert-neutral.attention,.rst-content .wy-alert-neutral.caution,.rst-content 
.wy-alert-neutral.danger,.rst-content .wy-alert-neutral.error,.rst-content .wy-alert-neutral.hint,.rst-content .wy-alert-neutral.important,.rst-content .wy-alert-neutral.note,.rst-content .wy-alert-neutral.seealso,.rst-content .wy-alert-neutral.tip,.rst-content .wy-alert-neutral.warning,.wy-alert.wy-alert-neutral{background:#f3f6f6}.rst-content .wy-alert-neutral.admonition-todo .admonition-title,.rst-content .wy-alert-neutral.admonition-todo .wy-alert-title,.rst-content .wy-alert-neutral.admonition .admonition-title,.rst-content .wy-alert-neutral.admonition .wy-alert-title,.rst-content .wy-alert-neutral.attention .admonition-title,.rst-content .wy-alert-neutral.attention .wy-alert-title,.rst-content .wy-alert-neutral.caution .admonition-title,.rst-content .wy-alert-neutral.caution .wy-alert-title,.rst-content .wy-alert-neutral.danger .admonition-title,.rst-content .wy-alert-neutral.danger .wy-alert-title,.rst-content .wy-alert-neutral.error .admonition-title,.rst-content .wy-alert-neutral.error .wy-alert-title,.rst-content .wy-alert-neutral.hint .admonition-title,.rst-content .wy-alert-neutral.hint .wy-alert-title,.rst-content .wy-alert-neutral.important .admonition-title,.rst-content .wy-alert-neutral.important .wy-alert-title,.rst-content .wy-alert-neutral.note .admonition-title,.rst-content .wy-alert-neutral.note .wy-alert-title,.rst-content .wy-alert-neutral.seealso .admonition-title,.rst-content .wy-alert-neutral.seealso .wy-alert-title,.rst-content .wy-alert-neutral.tip .admonition-title,.rst-content .wy-alert-neutral.tip .wy-alert-title,.rst-content .wy-alert-neutral.warning .admonition-title,.rst-content .wy-alert-neutral.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-neutral .admonition-title,.wy-alert.wy-alert-neutral .rst-content .admonition-title,.wy-alert.wy-alert-neutral .wy-alert-title{color:#404040;background:#e1e4e5}.rst-content .wy-alert-neutral.admonition-todo a,.rst-content .wy-alert-neutral.admonition a,.rst-content .wy-alert-neutral.attention a,.rst-content .wy-alert-neutral.caution a,.rst-content .wy-alert-neutral.danger a,.rst-content .wy-alert-neutral.error a,.rst-content .wy-alert-neutral.hint a,.rst-content .wy-alert-neutral.important a,.rst-content .wy-alert-neutral.note a,.rst-content .wy-alert-neutral.seealso a,.rst-content .wy-alert-neutral.tip a,.rst-content .wy-alert-neutral.warning a,.wy-alert.wy-alert-neutral a{color:#2980b9}.rst-content .admonition-todo p:last-child,.rst-content .admonition p:last-child,.rst-content .attention p:last-child,.rst-content .caution p:last-child,.rst-content .danger p:last-child,.rst-content .error p:last-child,.rst-content .hint p:last-child,.rst-content .important p:last-child,.rst-content .note p:last-child,.rst-content .seealso p:last-child,.rst-content .tip p:last-child,.rst-content .warning p:last-child,.wy-alert p:last-child{margin-bottom:0}.wy-tray-container{position:fixed;bottom:0;left:0;z-index:600}.wy-tray-container li{display:block;width:300px;background:transparent;color:#fff;text-align:center;box-shadow:0 5px 5px 0 rgba(0,0,0,.1);padding:0 24px;min-width:20%;opacity:0;height:0;line-height:56px;overflow:hidden;-webkit-transition:all .3s ease-in;-moz-transition:all .3s ease-in;transition:all .3s ease-in}.wy-tray-container li.wy-tray-item-success{background:#27ae60}.wy-tray-container li.wy-tray-item-info{background:#2980b9}.wy-tray-container li.wy-tray-item-warning{background:#e67e22}.wy-tray-container li.wy-tray-item-danger{background:#e74c3c}.wy-tray-container li.on{opacity:1;height:56px}@media screen 
and (max-width:768px){.wy-tray-container{bottom:auto;top:0;width:100%}.wy-tray-container li{width:100%}}button{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle;cursor:pointer;line-height:normal;-webkit-appearance:button;*overflow:visible}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}button[disabled]{cursor:default}.btn{display:inline-block;border-radius:2px;line-height:normal;white-space:nowrap;text-align:center;cursor:pointer;font-size:100%;padding:6px 12px 8px;color:#fff;border:1px solid rgba(0,0,0,.1);background-color:#27ae60;text-decoration:none;font-weight:400;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 2px -1px hsla(0,0%,100%,.5),inset 0 -2px 0 0 rgba(0,0,0,.1);outline-none:false;vertical-align:middle;*display:inline;zoom:1;-webkit-user-drag:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;-webkit-transition:all .1s linear;-moz-transition:all .1s linear;transition:all .1s linear}.btn-hover{background:#2e8ece;color:#fff}.btn:hover{background:#2cc36b;color:#fff}.btn:focus{background:#2cc36b;outline:0}.btn:active{box-shadow:inset 0 -1px 0 0 rgba(0,0,0,.05),inset 0 2px 0 0 rgba(0,0,0,.1);padding:8px 12px 6px}.btn:visited{color:#fff}.btn-disabled,.btn-disabled:active,.btn-disabled:focus,.btn-disabled:hover,.btn:disabled{background-image:none;filter:progid:DXImageTransform.Microsoft.gradient(enabled = false);filter:alpha(opacity=40);opacity:.4;cursor:not-allowed;box-shadow:none}.btn::-moz-focus-inner{padding:0;border:0}.btn-small{font-size:80%}.btn-info{background-color:#2980b9!important}.btn-info:hover{background-color:#2e8ece!important}.btn-neutral{background-color:#f3f6f6!important;color:#404040!important}.btn-neutral:hover{background-color:#e5ebeb!important;color:#404040}.btn-neutral:visited{color:#404040!important}.btn-success{background-color:#27ae60!important}.btn-success:hover{background-color:#295!important}.btn-danger{background-color:#e74c3c!important}.btn-danger:hover{background-color:#ea6153!important}.btn-warning{background-color:#e67e22!important}.btn-warning:hover{background-color:#e98b39!important}.btn-invert{background-color:#222}.btn-invert:hover{background-color:#2f2f2f!important}.btn-link{background-color:transparent!important;color:#2980b9;box-shadow:none;border-color:transparent!important}.btn-link:active,.btn-link:hover{background-color:transparent!important;color:#409ad5!important;box-shadow:none}.btn-link:visited{color:#9b59b6}.wy-btn-group .btn,.wy-control .btn{vertical-align:middle}.wy-btn-group{margin-bottom:24px;*zoom:1}.wy-btn-group:after,.wy-btn-group:before{display:table;content:""}.wy-btn-group:after{clear:both}.wy-dropdown{position:relative;display:inline-block}.wy-dropdown-active .wy-dropdown-menu{display:block}.wy-dropdown-menu{position:absolute;left:0;display:none;float:left;top:100%;min-width:100%;background:#fcfcfc;z-index:100;border:1px solid #cfd7dd;box-shadow:0 2px 2px 0 rgba(0,0,0,.1);padding:12px}.wy-dropdown-menu>dd>a{display:block;clear:both;color:#404040;white-space:nowrap;font-size:90%;padding:0 12px;cursor:pointer}.wy-dropdown-menu>dd>a:hover{background:#2980b9;color:#fff}.wy-dropdown-menu>dd.divider{border-top:1px solid #cfd7dd;margin:6px 0}.wy-dropdown-menu>dd.search{padding-bottom:12px}.wy-dropdown-menu>dd.search 
input[type=search]{width:100%}.wy-dropdown-menu>dd.call-to-action{background:#e3e3e3;text-transform:uppercase;font-weight:500;font-size:80%}.wy-dropdown-menu>dd.call-to-action:hover{background:#e3e3e3}.wy-dropdown-menu>dd.call-to-action .btn{color:#fff}.wy-dropdown.wy-dropdown-up .wy-dropdown-menu{bottom:100%;top:auto;left:auto;right:0}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu{background:#fcfcfc;margin-top:2px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a{padding:6px 12px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a:hover{background:#2980b9;color:#fff}.wy-dropdown.wy-dropdown-left .wy-dropdown-menu{right:0;left:auto;text-align:right}.wy-dropdown-arrow:before{content:" ";border-bottom:5px solid #f5f5f5;border-left:5px solid transparent;border-right:5px solid transparent;position:absolute;display:block;top:-4px;left:50%;margin-left:-3px}.wy-dropdown-arrow.wy-dropdown-arrow-left:before{left:11px}.wy-form-stacked select{display:block}.wy-form-aligned .wy-help-inline,.wy-form-aligned input,.wy-form-aligned label,.wy-form-aligned select,.wy-form-aligned textarea{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-form-aligned .wy-control-group>label{display:inline-block;vertical-align:middle;width:10em;margin:6px 12px 0 0;float:left}.wy-form-aligned .wy-control{float:left}.wy-form-aligned .wy-control label{display:block}.wy-form-aligned .wy-control select{margin-top:6px}fieldset{margin:0}fieldset,legend{border:0;padding:0}legend{width:100%;white-space:normal;margin-bottom:24px;font-size:150%;*margin-left:-7px}label,legend{display:block}label{margin:0 0 .3125em;color:#333;font-size:90%}input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}.wy-control-group{margin-bottom:24px;max-width:1200px;margin-left:auto;margin-right:auto;*zoom:1}.wy-control-group:after,.wy-control-group:before{display:table;content:""}.wy-control-group:after{clear:both}.wy-control-group.wy-control-group-required>label:after{content:" *";color:#e74c3c}.wy-control-group .wy-form-full,.wy-control-group .wy-form-halves,.wy-control-group .wy-form-thirds{padding-bottom:12px}.wy-control-group .wy-form-full input[type=color],.wy-control-group .wy-form-full input[type=date],.wy-control-group .wy-form-full input[type=datetime-local],.wy-control-group .wy-form-full input[type=datetime],.wy-control-group .wy-form-full input[type=email],.wy-control-group .wy-form-full input[type=month],.wy-control-group .wy-form-full input[type=number],.wy-control-group .wy-form-full input[type=password],.wy-control-group .wy-form-full input[type=search],.wy-control-group .wy-form-full input[type=tel],.wy-control-group .wy-form-full input[type=text],.wy-control-group .wy-form-full input[type=time],.wy-control-group .wy-form-full input[type=url],.wy-control-group .wy-form-full input[type=week],.wy-control-group .wy-form-full select,.wy-control-group .wy-form-halves input[type=color],.wy-control-group .wy-form-halves input[type=date],.wy-control-group .wy-form-halves input[type=datetime-local],.wy-control-group .wy-form-halves input[type=datetime],.wy-control-group .wy-form-halves input[type=email],.wy-control-group .wy-form-halves input[type=month],.wy-control-group .wy-form-halves input[type=number],.wy-control-group .wy-form-halves input[type=password],.wy-control-group .wy-form-halves input[type=search],.wy-control-group .wy-form-halves input[type=tel],.wy-control-group .wy-form-halves input[type=text],.wy-control-group .wy-form-halves input[type=time],.wy-control-group 
.wy-form-halves input[type=url],.wy-control-group .wy-form-halves input[type=week],.wy-control-group .wy-form-halves select,.wy-control-group .wy-form-thirds input[type=color],.wy-control-group .wy-form-thirds input[type=date],.wy-control-group .wy-form-thirds input[type=datetime-local],.wy-control-group .wy-form-thirds input[type=datetime],.wy-control-group .wy-form-thirds input[type=email],.wy-control-group .wy-form-thirds input[type=month],.wy-control-group .wy-form-thirds input[type=number],.wy-control-group .wy-form-thirds input[type=password],.wy-control-group .wy-form-thirds input[type=search],.wy-control-group .wy-form-thirds input[type=tel],.wy-control-group .wy-form-thirds input[type=text],.wy-control-group .wy-form-thirds input[type=time],.wy-control-group .wy-form-thirds input[type=url],.wy-control-group .wy-form-thirds input[type=week],.wy-control-group .wy-form-thirds select{width:100%}.wy-control-group .wy-form-full{float:left;display:block;width:100%;margin-right:0}.wy-control-group .wy-form-full:last-child{margin-right:0}.wy-control-group .wy-form-halves{float:left;display:block;margin-right:2.35765%;width:48.82117%}.wy-control-group .wy-form-halves:last-child,.wy-control-group .wy-form-halves:nth-of-type(2n){margin-right:0}.wy-control-group .wy-form-halves:nth-of-type(odd){clear:left}.wy-control-group .wy-form-thirds{float:left;display:block;margin-right:2.35765%;width:31.76157%}.wy-control-group .wy-form-thirds:last-child,.wy-control-group .wy-form-thirds:nth-of-type(3n){margin-right:0}.wy-control-group .wy-form-thirds:nth-of-type(3n+1){clear:left}.wy-control-group.wy-control-group-no-input .wy-control,.wy-control-no-input{margin:6px 0 0;font-size:90%}.wy-control-no-input{display:inline-block}.wy-control-group.fluid-input input[type=color],.wy-control-group.fluid-input input[type=date],.wy-control-group.fluid-input input[type=datetime-local],.wy-control-group.fluid-input input[type=datetime],.wy-control-group.fluid-input input[type=email],.wy-control-group.fluid-input input[type=month],.wy-control-group.fluid-input input[type=number],.wy-control-group.fluid-input input[type=password],.wy-control-group.fluid-input input[type=search],.wy-control-group.fluid-input input[type=tel],.wy-control-group.fluid-input input[type=text],.wy-control-group.fluid-input input[type=time],.wy-control-group.fluid-input input[type=url],.wy-control-group.fluid-input input[type=week]{width:100%}.wy-form-message-inline{padding-left:.3em;color:#666;font-size:90%}.wy-form-message{display:block;color:#999;font-size:70%;margin-top:.3125em;font-style:italic}.wy-form-message p{font-size:inherit;font-style:italic;margin-bottom:6px}.wy-form-message p:last-child{margin-bottom:0}input{line-height:normal}input[type=button],input[type=reset],input[type=submit]{-webkit-appearance:button;cursor:pointer;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;*overflow:visible}input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week]{-webkit-appearance:none;padding:6px;display:inline-block;border:1px solid #ccc;font-size:80%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 3px #ddd;border-radius:0;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}input[type=datetime-local]{padding:.34375em 
.625em}input[disabled]{cursor:default}input[type=checkbox],input[type=radio]{padding:0;margin-right:.3125em;*height:13px;*width:13px}input[type=checkbox],input[type=radio],input[type=search]{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}input[type=search]::-webkit-search-cancel-button,input[type=search]::-webkit-search-decoration{-webkit-appearance:none}input[type=color]:focus,input[type=date]:focus,input[type=datetime-local]:focus,input[type=datetime]:focus,input[type=email]:focus,input[type=month]:focus,input[type=number]:focus,input[type=password]:focus,input[type=search]:focus,input[type=tel]:focus,input[type=text]:focus,input[type=time]:focus,input[type=url]:focus,input[type=week]:focus{outline:0;outline:thin dotted\9;border-color:#333}input.no-focus:focus{border-color:#ccc!important}input[type=checkbox]:focus,input[type=file]:focus,input[type=radio]:focus{outline:thin dotted #333;outline:1px auto #129fea}input[type=color][disabled],input[type=date][disabled],input[type=datetime-local][disabled],input[type=datetime][disabled],input[type=email][disabled],input[type=month][disabled],input[type=number][disabled],input[type=password][disabled],input[type=search][disabled],input[type=tel][disabled],input[type=text][disabled],input[type=time][disabled],input[type=url][disabled],input[type=week][disabled]{cursor:not-allowed;background-color:#fafafa}input:focus:invalid,select:focus:invalid,textarea:focus:invalid{color:#e74c3c;border:1px solid #e74c3c}input:focus:invalid:focus,select:focus:invalid:focus,textarea:focus:invalid:focus{border-color:#e74c3c}input[type=checkbox]:focus:invalid:focus,input[type=file]:focus:invalid:focus,input[type=radio]:focus:invalid:focus{outline-color:#e74c3c}input.wy-input-large{padding:12px;font-size:100%}textarea{overflow:auto;vertical-align:top;width:100%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif}select,textarea{padding:.5em .625em;display:inline-block;border:1px solid #ccc;font-size:80%;box-shadow:inset 0 1px 3px #ddd;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}select{border:1px solid #ccc;background-color:#fff}select[multiple]{height:auto}select:focus,textarea:focus{outline:0}input[readonly],select[disabled],select[readonly],textarea[disabled],textarea[readonly]{cursor:not-allowed;background-color:#fafafa}input[type=checkbox][disabled],input[type=radio][disabled]{cursor:not-allowed}.wy-checkbox,.wy-radio{margin:6px 0;color:#404040;display:block}.wy-checkbox input,.wy-radio input{vertical-align:baseline}.wy-form-message-inline{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-input-prefix,.wy-input-suffix{white-space:nowrap;padding:6px}.wy-input-prefix .wy-input-context,.wy-input-suffix .wy-input-context{line-height:27px;padding:0 8px;display:inline-block;font-size:80%;background-color:#f3f6f6;border:1px solid #ccc;color:#999}.wy-input-suffix .wy-input-context{border-left:0}.wy-input-prefix .wy-input-context{border-right:0}.wy-switch{position:relative;display:block;height:24px;margin-top:12px;cursor:pointer}.wy-switch:before{left:0;top:0;width:36px;height:12px;background:#ccc}.wy-switch:after,.wy-switch:before{position:absolute;content:"";display:block;border-radius:4px;-webkit-transition:all .2s ease-in-out;-moz-transition:all .2s ease-in-out;transition:all .2s ease-in-out}.wy-switch:after{width:18px;height:18px;background:#999;left:-3px;top:-3px}.wy-switch 
span{position:absolute;left:48px;display:block;font-size:12px;color:#ccc;line-height:1}.wy-switch.active:before{background:#1e8449}.wy-switch.active:after{left:24px;background:#27ae60}.wy-switch.disabled{cursor:not-allowed;opacity:.8}.wy-control-group.wy-control-group-error .wy-form-message,.wy-control-group.wy-control-group-error>label{color:#e74c3c}.wy-control-group.wy-control-group-error input[type=color],.wy-control-group.wy-control-group-error input[type=date],.wy-control-group.wy-control-group-error input[type=datetime-local],.wy-control-group.wy-control-group-error input[type=datetime],.wy-control-group.wy-control-group-error input[type=email],.wy-control-group.wy-control-group-error input[type=month],.wy-control-group.wy-control-group-error input[type=number],.wy-control-group.wy-control-group-error input[type=password],.wy-control-group.wy-control-group-error input[type=search],.wy-control-group.wy-control-group-error input[type=tel],.wy-control-group.wy-control-group-error input[type=text],.wy-control-group.wy-control-group-error input[type=time],.wy-control-group.wy-control-group-error input[type=url],.wy-control-group.wy-control-group-error input[type=week],.wy-control-group.wy-control-group-error textarea{border:1px solid #e74c3c}.wy-inline-validate{white-space:nowrap}.wy-inline-validate .wy-input-context{padding:.5em .625em;display:inline-block;font-size:80%}.wy-inline-validate.wy-inline-validate-success .wy-input-context{color:#27ae60}.wy-inline-validate.wy-inline-validate-danger .wy-input-context{color:#e74c3c}.wy-inline-validate.wy-inline-validate-warning .wy-input-context{color:#e67e22}.wy-inline-validate.wy-inline-validate-info .wy-input-context{color:#2980b9}.rotate-90{-webkit-transform:rotate(90deg);-moz-transform:rotate(90deg);-ms-transform:rotate(90deg);-o-transform:rotate(90deg);transform:rotate(90deg)}.rotate-180{-webkit-transform:rotate(180deg);-moz-transform:rotate(180deg);-ms-transform:rotate(180deg);-o-transform:rotate(180deg);transform:rotate(180deg)}.rotate-270{-webkit-transform:rotate(270deg);-moz-transform:rotate(270deg);-ms-transform:rotate(270deg);-o-transform:rotate(270deg);transform:rotate(270deg)}.mirror{-webkit-transform:scaleX(-1);-moz-transform:scaleX(-1);-ms-transform:scaleX(-1);-o-transform:scaleX(-1);transform:scaleX(-1)}.mirror.rotate-90{-webkit-transform:scaleX(-1) rotate(90deg);-moz-transform:scaleX(-1) rotate(90deg);-ms-transform:scaleX(-1) rotate(90deg);-o-transform:scaleX(-1) rotate(90deg);transform:scaleX(-1) rotate(90deg)}.mirror.rotate-180{-webkit-transform:scaleX(-1) rotate(180deg);-moz-transform:scaleX(-1) rotate(180deg);-ms-transform:scaleX(-1) rotate(180deg);-o-transform:scaleX(-1) rotate(180deg);transform:scaleX(-1) rotate(180deg)}.mirror.rotate-270{-webkit-transform:scaleX(-1) rotate(270deg);-moz-transform:scaleX(-1) rotate(270deg);-ms-transform:scaleX(-1) rotate(270deg);-o-transform:scaleX(-1) rotate(270deg);transform:scaleX(-1) rotate(270deg)}@media only screen and (max-width:480px){.wy-form button[type=submit]{margin:.7em 0 0}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=text],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week],.wy-form label{margin-bottom:.3em;display:block}.wy-form input[type=color],.wy-form input[type=date],.wy-form 
input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week]{margin-bottom:0}.wy-form-aligned .wy-control-group label{margin-bottom:.3em;text-align:left;display:block;width:100%}.wy-form-aligned .wy-control{margin:1.5em 0 0}.wy-form-message,.wy-form-message-inline,.wy-form .wy-help-inline{display:block;font-size:80%;padding:6px 0}}@media screen and (max-width:768px){.tablet-hide{display:none}}@media screen and (max-width:480px){.mobile-hide{display:none}}.float-left{float:left}.float-right{float:right}.full-width{width:100%}.rst-content table.docutils,.rst-content table.field-list,.wy-table{border-collapse:collapse;border-spacing:0;empty-cells:show;margin-bottom:24px}.rst-content table.docutils caption,.rst-content table.field-list caption,.wy-table caption{color:#000;font:italic 85%/1 arial,sans-serif;padding:1em 0;text-align:center}.rst-content table.docutils td,.rst-content table.docutils th,.rst-content table.field-list td,.rst-content table.field-list th,.wy-table td,.wy-table th{font-size:90%;margin:0;overflow:visible;padding:8px 16px}.rst-content table.docutils td:first-child,.rst-content table.docutils th:first-child,.rst-content table.field-list td:first-child,.rst-content table.field-list th:first-child,.wy-table td:first-child,.wy-table th:first-child{border-left-width:0}.rst-content table.docutils thead,.rst-content table.field-list thead,.wy-table thead{color:#000;text-align:left;vertical-align:bottom;white-space:nowrap}.rst-content table.docutils thead th,.rst-content table.field-list thead th,.wy-table thead th{font-weight:700;border-bottom:2px solid #e1e4e5}.rst-content table.docutils td,.rst-content table.field-list td,.wy-table td{background-color:transparent;vertical-align:middle}.rst-content table.docutils td p,.rst-content table.field-list td p,.wy-table td p{line-height:18px}.rst-content table.docutils td p:last-child,.rst-content table.field-list td p:last-child,.wy-table td p:last-child{margin-bottom:0}.rst-content table.docutils .wy-table-cell-min,.rst-content table.field-list .wy-table-cell-min,.wy-table .wy-table-cell-min{width:1%;padding-right:0}.rst-content table.docutils .wy-table-cell-min input[type=checkbox],.rst-content table.field-list .wy-table-cell-min input[type=checkbox],.wy-table .wy-table-cell-min input[type=checkbox]{margin:0}.wy-table-secondary{color:grey;font-size:90%}.wy-table-tertiary{color:grey;font-size:80%}.rst-content table.docutils:not(.field-list) tr:nth-child(2n-1) td,.wy-table-backed,.wy-table-odd td,.wy-table-striped tr:nth-child(2n-1) td{background-color:#f3f6f6}.rst-content table.docutils,.wy-table-bordered-all{border:1px solid #e1e4e5}.rst-content table.docutils td,.wy-table-bordered-all td{border-bottom:1px solid #e1e4e5;border-left:1px solid #e1e4e5}.rst-content table.docutils tbody>tr:last-child td,.wy-table-bordered-all tbody>tr:last-child td{border-bottom-width:0}.wy-table-bordered{border:1px solid #e1e4e5}.wy-table-bordered-rows td{border-bottom:1px solid #e1e4e5}.wy-table-bordered-rows tbody>tr:last-child td{border-bottom-width:0}.wy-table-horizontal td,.wy-table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #e1e4e5}.wy-table-horizontal tbody>tr:last-child td{border-bottom-width:0}.wy-table-responsive{margin-bottom:24px;max-width:100%;overflow:auto}.wy-table-responsive 
table{margin-bottom:0!important}.wy-table-responsive table td,.wy-table-responsive table th{white-space:nowrap}a{color:#2980b9;text-decoration:none;cursor:pointer}a:hover{color:#3091d1}a:visited{color:#9b59b6}html{height:100%}body,html{overflow-x:hidden}body{font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;font-weight:400;color:#404040;min-height:100%;background:#edf0f2}.wy-text-left{text-align:left}.wy-text-center{text-align:center}.wy-text-right{text-align:right}.wy-text-large{font-size:120%}.wy-text-normal{font-size:100%}.wy-text-small,small{font-size:80%}.wy-text-strike{text-decoration:line-through}.wy-text-warning{color:#e67e22!important}a.wy-text-warning:hover{color:#eb9950!important}.wy-text-info{color:#2980b9!important}a.wy-text-info:hover{color:#409ad5!important}.wy-text-success{color:#27ae60!important}a.wy-text-success:hover{color:#36d278!important}.wy-text-danger{color:#e74c3c!important}a.wy-text-danger:hover{color:#ed7669!important}.wy-text-neutral{color:#404040!important}a.wy-text-neutral:hover{color:#595959!important}.rst-content .toctree-wrapper>p.caption,h1,h2,h3,h4,h5,h6,legend{margin-top:0;font-weight:700;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif}p{line-height:24px;font-size:16px;margin:0 0 24px}h1{font-size:175%}.rst-content .toctree-wrapper>p.caption,h2{font-size:150%}h3{font-size:125%}h4{font-size:115%}h5{font-size:110%}h6{font-size:100%}hr{display:block;height:1px;border:0;border-top:1px solid #e1e4e5;margin:24px 0;padding:0}.rst-content code,.rst-content tt,code{white-space:nowrap;max-width:100%;background:#fff;border:1px solid #e1e4e5;font-size:75%;padding:0 5px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#e74c3c;overflow-x:auto}.rst-content tt.code-large,code.code-large{font-size:90%}.rst-content .section ul,.rst-content .toctree-wrapper ul,.rst-content section ul,.wy-plain-list-disc,article ul{list-style:disc;line-height:24px;margin-bottom:24px}.rst-content .section ul li,.rst-content .toctree-wrapper ul li,.rst-content section ul li,.wy-plain-list-disc li,article ul li{list-style:disc;margin-left:24px}.rst-content .section ul li p:last-child,.rst-content .section ul li ul,.rst-content .toctree-wrapper ul li p:last-child,.rst-content .toctree-wrapper ul li ul,.rst-content section ul li p:last-child,.rst-content section ul li ul,.wy-plain-list-disc li p:last-child,.wy-plain-list-disc li ul,article ul li p:last-child,article ul li ul{margin-bottom:0}.rst-content .section ul li li,.rst-content .toctree-wrapper ul li li,.rst-content section ul li li,.wy-plain-list-disc li li,article ul li li{list-style:circle}.rst-content .section ul li li li,.rst-content .toctree-wrapper ul li li li,.rst-content section ul li li li,.wy-plain-list-disc li li li,article ul li li li{list-style:square}.rst-content .section ul li ol li,.rst-content .toctree-wrapper ul li ol li,.rst-content section ul li ol li,.wy-plain-list-disc li ol li,article ul li ol li{list-style:decimal}.rst-content .section ol,.rst-content .section ol.arabic,.rst-content .toctree-wrapper ol,.rst-content .toctree-wrapper ol.arabic,.rst-content section ol,.rst-content section ol.arabic,.wy-plain-list-decimal,article ol{list-style:decimal;line-height:24px;margin-bottom:24px}.rst-content .section ol.arabic li,.rst-content .section ol li,.rst-content .toctree-wrapper ol.arabic li,.rst-content .toctree-wrapper ol li,.rst-content section ol.arabic li,.rst-content section ol li,.wy-plain-list-decimal li,article ol 
li{list-style:decimal;margin-left:24px}.rst-content .section ol.arabic li ul,.rst-content .section ol li p:last-child,.rst-content .section ol li ul,.rst-content .toctree-wrapper ol.arabic li ul,.rst-content .toctree-wrapper ol li p:last-child,.rst-content .toctree-wrapper ol li ul,.rst-content section ol.arabic li ul,.rst-content section ol li p:last-child,.rst-content section ol li ul,.wy-plain-list-decimal li p:last-child,.wy-plain-list-decimal li ul,article ol li p:last-child,article ol li ul{margin-bottom:0}.rst-content .section ol.arabic li ul li,.rst-content .section ol li ul li,.rst-content .toctree-wrapper ol.arabic li ul li,.rst-content .toctree-wrapper ol li ul li,.rst-content section ol.arabic li ul li,.rst-content section ol li ul li,.wy-plain-list-decimal li ul li,article ol li ul li{list-style:disc}.wy-breadcrumbs{*zoom:1}.wy-breadcrumbs:after,.wy-breadcrumbs:before{display:table;content:""}.wy-breadcrumbs:after{clear:both}.wy-breadcrumbs>li{display:inline-block;padding-top:5px}.wy-breadcrumbs>li.wy-breadcrumbs-aside{float:right}.rst-content .wy-breadcrumbs>li code,.rst-content .wy-breadcrumbs>li tt,.wy-breadcrumbs>li .rst-content tt,.wy-breadcrumbs>li code{all:inherit;color:inherit}.breadcrumb-item:before{content:"/";color:#bbb;font-size:13px;padding:0 6px 0 3px}.wy-breadcrumbs-extra{margin-bottom:0;color:#b3b3b3;font-size:80%;display:inline-block}@media screen and (max-width:480px){.wy-breadcrumbs-extra,.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}@media print{.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}html{font-size:16px}.wy-affix{position:fixed;top:1.618em}.wy-menu a:hover{text-decoration:none}.wy-menu-horiz{*zoom:1}.wy-menu-horiz:after,.wy-menu-horiz:before{display:table;content:""}.wy-menu-horiz:after{clear:both}.wy-menu-horiz li,.wy-menu-horiz ul{display:inline-block}.wy-menu-horiz li:hover{background:hsla(0,0%,100%,.1)}.wy-menu-horiz li.divide-left{border-left:1px solid #404040}.wy-menu-horiz li.divide-right{border-right:1px solid #404040}.wy-menu-horiz a{height:32px;display:inline-block;line-height:32px;padding:0 16px}.wy-menu-vertical{width:300px}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#55a5d9;height:32px;line-height:32px;padding:0 1.618em;margin:12px 0 0;display:block;font-weight:700;text-transform:uppercase;font-size:85%;white-space:nowrap}.wy-menu-vertical ul{margin-bottom:0}.wy-menu-vertical li.divide-top{border-top:1px solid #404040}.wy-menu-vertical li.divide-bottom{border-bottom:1px solid #404040}.wy-menu-vertical li.current{background:#e3e3e3}.wy-menu-vertical li.current a{color:grey;border-right:1px solid #c9c9c9;padding:.4045em 2.427em}.wy-menu-vertical li.current a:hover{background:#d6d6d6}.rst-content .wy-menu-vertical li tt,.wy-menu-vertical li .rst-content tt,.wy-menu-vertical li code{border:none;background:inherit;color:inherit;padding-left:0;padding-right:0}.wy-menu-vertical li button.toctree-expand{display:block;float:left;margin-left:-1.2em;line-height:18px;color:#4d4d4d;border:none;background:none;padding:0}.wy-menu-vertical li.current>a,.wy-menu-vertical li.on a{color:#404040;font-weight:700;position:relative;background:#fcfcfc;border:none;padding:.4045em 1.618em}.wy-menu-vertical li.current>a:hover,.wy-menu-vertical li.on a:hover{background:#fcfcfc}.wy-menu-vertical li.current>a:hover button.toctree-expand,.wy-menu-vertical li.on a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a 
button.toctree-expand{display:block;line-height:18px;color:#333}.wy-menu-vertical li.toctree-l1.current>a{border-bottom:1px solid #c9c9c9;border-top:1px solid #c9c9c9}.wy-menu-vertical .toctree-l1.current .toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .toctree-l11>ul{display:none}.wy-menu-vertical .toctree-l1.current .current.toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .current.toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .current.toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .current.toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .current.toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .current.toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .current.toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .current.toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .current.toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .current.toctree-l11>ul{display:block}.wy-menu-vertical li.toctree-l3,.wy-menu-vertical li.toctree-l4{font-size:.9em}.wy-menu-vertical li.toctree-l2 a,.wy-menu-vertical li.toctree-l3 a,.wy-menu-vertical li.toctree-l4 a,.wy-menu-vertical li.toctree-l5 a,.wy-menu-vertical li.toctree-l6 a,.wy-menu-vertical li.toctree-l7 a,.wy-menu-vertical li.toctree-l8 a,.wy-menu-vertical li.toctree-l9 a,.wy-menu-vertical li.toctree-l10 a{color:#404040}.wy-menu-vertical li.toctree-l2 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l3 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l4 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l5 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l6 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l7 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l8 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l9 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l10 a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a,.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a,.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a,.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a,.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a,.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a,.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a,.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{display:block}.wy-menu-vertical li.toctree-l2.current>a{padding:.4045em 2.427em}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{padding:.4045em 1.618em .4045em 4.045em}.wy-menu-vertical li.toctree-l3.current>a{padding:.4045em 4.045em}.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{padding:.4045em 1.618em .4045em 5.663em}.wy-menu-vertical li.toctree-l4.current>a{padding:.4045em 5.663em}.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a{padding:.4045em 1.618em .4045em 7.281em}.wy-menu-vertical li.toctree-l5.current>a{padding:.4045em 7.281em}.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a{padding:.4045em 1.618em .4045em 8.899em}.wy-menu-vertical li.toctree-l6.current>a{padding:.4045em 
8.899em}.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a{padding:.4045em 1.618em .4045em 10.517em}.wy-menu-vertical li.toctree-l7.current>a{padding:.4045em 10.517em}.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a{padding:.4045em 1.618em .4045em 12.135em}.wy-menu-vertical li.toctree-l8.current>a{padding:.4045em 12.135em}.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a{padding:.4045em 1.618em .4045em 13.753em}.wy-menu-vertical li.toctree-l9.current>a{padding:.4045em 13.753em}.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a{padding:.4045em 1.618em .4045em 15.371em}.wy-menu-vertical li.toctree-l10.current>a{padding:.4045em 15.371em}.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{padding:.4045em 1.618em .4045em 16.989em}.wy-menu-vertical li.toctree-l2.current>a,.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{background:#c9c9c9}.wy-menu-vertical li.toctree-l2 button.toctree-expand{color:#a3a3a3}.wy-menu-vertical li.toctree-l3.current>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{background:#bdbdbd}.wy-menu-vertical li.toctree-l3 button.toctree-expand{color:#969696}.wy-menu-vertical li.current ul{display:block}.wy-menu-vertical li ul{margin-bottom:0;display:none}.wy-menu-vertical li ul li a{margin-bottom:0;color:#d9d9d9;font-weight:400}.wy-menu-vertical a{line-height:18px;padding:.4045em 1.618em;display:block;position:relative;font-size:90%;color:#d9d9d9}.wy-menu-vertical a:hover{background-color:#4e4a4a;cursor:pointer}.wy-menu-vertical a:hover button.toctree-expand{color:#d9d9d9}.wy-menu-vertical a:active{background-color:#2980b9;cursor:pointer;color:#fff}.wy-menu-vertical a:active button.toctree-expand{color:#fff}.wy-side-nav-search{display:block;width:300px;padding:.809em;margin-bottom:.809em;z-index:200;background-color:#2980b9;text-align:center;color:#fcfcfc}.wy-side-nav-search input[type=text]{width:100%;border-radius:50px;padding:6px 12px;border-color:#2472a4}.wy-side-nav-search img{display:block;margin:auto auto .809em;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-side-nav-search .wy-dropdown>a,.wy-side-nav-search>a{color:#fcfcfc;font-size:100%;font-weight:700;display:inline-block;padding:4px 6px;margin-bottom:.809em;max-width:100%}.wy-side-nav-search .wy-dropdown>a:hover,.wy-side-nav-search>a:hover{background:hsla(0,0%,100%,.1)}.wy-side-nav-search .wy-dropdown>a img.logo,.wy-side-nav-search>a img.logo{display:block;margin:0 auto;height:auto;width:auto;border-radius:0;max-width:100%;background:transparent}.wy-side-nav-search .wy-dropdown>a.icon img.logo,.wy-side-nav-search>a.icon img.logo{margin-top:.85em}.wy-side-nav-search>div.version{margin-top:-.4045em;margin-bottom:.809em;font-weight:400;color:hsla(0,0%,100%,.3)}.wy-nav .wy-menu-vertical header{color:#2980b9}.wy-nav .wy-menu-vertical a{color:#b3b3b3}.wy-nav .wy-menu-vertical a:hover{background-color:#2980b9;color:#fff}[data-menu-wrap]{-webkit-transition:all .2s ease-in;-moz-transition:all .2s ease-in;transition:all .2s 
ease-in;position:absolute;opacity:1;width:100%;opacity:0}[data-menu-wrap].move-center{left:0;right:auto;opacity:1}[data-menu-wrap].move-left{right:auto;left:-100%;opacity:0}[data-menu-wrap].move-right{right:-100%;left:auto;opacity:0}.wy-body-for-nav{background:#fcfcfc}.wy-grid-for-nav{position:absolute;width:100%;height:100%}.wy-nav-side{position:fixed;top:0;bottom:0;left:0;padding-bottom:2em;width:300px;overflow-x:hidden;overflow-y:hidden;min-height:100%;color:#9b9b9b;background:#343131;z-index:200}.wy-side-scroll{width:320px;position:relative;overflow-x:hidden;overflow-y:scroll;height:100%}.wy-nav-top{display:none;background:#2980b9;color:#fff;padding:.4045em .809em;position:relative;line-height:50px;text-align:center;font-size:100%;*zoom:1}.wy-nav-top:after,.wy-nav-top:before{display:table;content:""}.wy-nav-top:after{clear:both}.wy-nav-top a{color:#fff;font-weight:700}.wy-nav-top img{margin-right:12px;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-nav-top i{font-size:30px;float:left;cursor:pointer;padding-top:inherit}.wy-nav-content-wrap{margin-left:300px;background:#fcfcfc;min-height:100%}.wy-nav-content{padding:1.618em 3.236em;height:100%;max-width:800px;margin:auto}.wy-body-mask{position:fixed;width:100%;height:100%;background:rgba(0,0,0,.2);display:none;z-index:499}.wy-body-mask.on{display:block}footer{color:grey}footer p{margin-bottom:12px}.rst-content footer span.commit tt,footer span.commit .rst-content tt,footer span.commit code{padding:0;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:1em;background:none;border:none;color:grey}.rst-footer-buttons{*zoom:1}.rst-footer-buttons:after,.rst-footer-buttons:before{width:100%;display:table;content:""}.rst-footer-buttons:after{clear:both}.rst-breadcrumbs-buttons{margin-top:12px;*zoom:1}.rst-breadcrumbs-buttons:after,.rst-breadcrumbs-buttons:before{display:table;content:""}.rst-breadcrumbs-buttons:after{clear:both}#search-results .search li{margin-bottom:24px;border-bottom:1px solid #e1e4e5;padding-bottom:24px}#search-results .search li:first-child{border-top:1px solid #e1e4e5;padding-top:24px}#search-results .search li a{font-size:120%;margin-bottom:12px;display:inline-block}#search-results .context{color:grey;font-size:90%}.genindextable li>ul{margin-left:24px}@media screen and (max-width:768px){.wy-body-for-nav{background:#fcfcfc}.wy-nav-top{display:block}.wy-nav-side{left:-300px}.wy-nav-side.shift{width:85%;left:0}.wy-menu.wy-menu-vertical,.wy-side-nav-search,.wy-side-scroll{width:auto}.wy-nav-content-wrap{margin-left:0}.wy-nav-content-wrap .wy-nav-content{padding:1.618em}.wy-nav-content-wrap.shift{position:fixed;min-width:100%;left:85%;top:0;height:100%;overflow:hidden}}@media screen and (min-width:1100px){.wy-nav-content-wrap{background:rgba(0,0,0,.05)}.wy-nav-content{margin:0;background:#fcfcfc}}@media print{.rst-versions,.wy-nav-side,footer{display:none}.wy-nav-content-wrap{margin-left:0}}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60;*zoom:1}.rst-versions .rst-current-version:after,.rst-versions .rst-current-version:before{display:table;content:""}.rst-versions 
.rst-current-version:after{clear:both}.rst-content .code-block-caption .rst-versions .rst-current-version .headerlink,.rst-content .eqno .rst-versions .rst-current-version .headerlink,.rst-content .rst-versions .rst-current-version .admonition-title,.rst-content code.download .rst-versions .rst-current-version span:first-child,.rst-content dl dt .rst-versions .rst-current-version .headerlink,.rst-content h1 .rst-versions .rst-current-version .headerlink,.rst-content h2 .rst-versions .rst-current-version .headerlink,.rst-content h3 .rst-versions .rst-current-version .headerlink,.rst-content h4 .rst-versions .rst-current-version .headerlink,.rst-content h5 .rst-versions .rst-current-version .headerlink,.rst-content h6 .rst-versions .rst-current-version .headerlink,.rst-content p .rst-versions .rst-current-version .headerlink,.rst-content table>caption .rst-versions .rst-current-version .headerlink,.rst-content tt.download .rst-versions .rst-current-version span:first-child,.rst-versions .rst-current-version .fa,.rst-versions .rst-current-version .icon,.rst-versions .rst-current-version .rst-content .admonition-title,.rst-versions .rst-current-version .rst-content .code-block-caption .headerlink,.rst-versions .rst-current-version .rst-content .eqno .headerlink,.rst-versions .rst-current-version .rst-content code.download span:first-child,.rst-versions .rst-current-version .rst-content dl dt .headerlink,.rst-versions .rst-current-version .rst-content h1 .headerlink,.rst-versions .rst-current-version .rst-content h2 .headerlink,.rst-versions .rst-current-version .rst-content h3 .headerlink,.rst-versions .rst-current-version .rst-content h4 .headerlink,.rst-versions .rst-current-version .rst-content h5 .headerlink,.rst-versions .rst-current-version .rst-content h6 .headerlink,.rst-versions .rst-current-version .rst-content p .headerlink,.rst-versions .rst-current-version .rst-content table>caption .headerlink,.rst-versions .rst-current-version .rst-content tt.download span:first-child,.rst-versions .rst-current-version .wy-menu-vertical li button.toctree-expand,.wy-menu-vertical li .rst-versions .rst-current-version button.toctree-expand{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and 
(max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}}.rst-content .toctree-wrapper>p.caption,.rst-content h1,.rst-content h2,.rst-content h3,.rst-content h4,.rst-content h5,.rst-content h6{margin-bottom:24px}.rst-content img{max-width:100%;height:auto}.rst-content div.figure,.rst-content figure{margin-bottom:24px}.rst-content div.figure .caption-text,.rst-content figure .caption-text{font-style:italic}.rst-content div.figure p:last-child.caption,.rst-content figure p:last-child.caption{margin-bottom:0}.rst-content div.figure.align-center,.rst-content figure.align-center{text-align:center}.rst-content .section>a>img,.rst-content .section>img,.rst-content section>a>img,.rst-content section>img{margin-bottom:24px}.rst-content abbr[title]{text-decoration:none}.rst-content.style-external-links a.reference.external:after{font-family:FontAwesome;content:"\f08e";color:#b3b3b3;vertical-align:super;font-size:60%;margin:0 .2em}.rst-content blockquote{margin-left:24px;line-height:24px;margin-bottom:24px}.rst-content pre.literal-block{white-space:pre;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;display:block;overflow:auto}.rst-content div[class^=highlight],.rst-content pre.literal-block{border:1px solid #e1e4e5;overflow-x:auto;margin:1px 0 24px}.rst-content div[class^=highlight] div[class^=highlight],.rst-content pre.literal-block div[class^=highlight]{padding:0;border:none;margin:0}.rst-content div[class^=highlight] td.code{width:100%}.rst-content .linenodiv pre{border-right:1px solid #e6e9ea;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;user-select:none;pointer-events:none}.rst-content div[class^=highlight] pre{white-space:pre;margin:0;padding:12px;display:block;overflow:auto}.rst-content div[class^=highlight] pre .hll{display:block;margin:0 -12px;padding:0 12px}.rst-content .linenodiv pre,.rst-content div[class^=highlight] pre,.rst-content pre.literal-block{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:12px;line-height:1.4}.rst-content div.highlight .gp,.rst-content div.highlight span.linenos{user-select:none;pointer-events:none}.rst-content div.highlight span.linenos{display:inline-block;padding-left:0;padding-right:12px;margin-right:12px;border-right:1px solid #e6e9ea}.rst-content .code-block-caption{font-style:italic;font-size:85%;line-height:1;padding:1em 0;text-align:center}@media print{.rst-content .codeblock,.rst-content div[class^=highlight],.rst-content div[class^=highlight] pre{white-space:pre-wrap}}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning{clear:both}.rst-content .admonition-todo .last,.rst-content .admonition-todo>:last-child,.rst-content .admonition .last,.rst-content .admonition>:last-child,.rst-content .attention .last,.rst-content .attention>:last-child,.rst-content .caution .last,.rst-content .caution>:last-child,.rst-content .danger .last,.rst-content .danger>:last-child,.rst-content .error .last,.rst-content .error>:last-child,.rst-content .hint .last,.rst-content .hint>:last-child,.rst-content .important .last,.rst-content .important>:last-child,.rst-content .note .last,.rst-content .note>:last-child,.rst-content .seealso 
.last,.rst-content .seealso>:last-child,.rst-content .tip .last,.rst-content .tip>:last-child,.rst-content .warning .last,.rst-content .warning>:last-child{margin-bottom:0}.rst-content .admonition-title:before{margin-right:4px}.rst-content .admonition table{border-color:rgba(0,0,0,.1)}.rst-content .admonition table td,.rst-content .admonition table th{background:transparent!important;border-color:rgba(0,0,0,.1)!important}.rst-content .section ol.loweralpha,.rst-content .section ol.loweralpha>li,.rst-content .toctree-wrapper ol.loweralpha,.rst-content .toctree-wrapper ol.loweralpha>li,.rst-content section ol.loweralpha,.rst-content section ol.loweralpha>li{list-style:lower-alpha}.rst-content .section ol.upperalpha,.rst-content .section ol.upperalpha>li,.rst-content .toctree-wrapper ol.upperalpha,.rst-content .toctree-wrapper ol.upperalpha>li,.rst-content section ol.upperalpha,.rst-content section ol.upperalpha>li{list-style:upper-alpha}.rst-content .section ol li>*,.rst-content .section ul li>*,.rst-content .toctree-wrapper ol li>*,.rst-content .toctree-wrapper ul li>*,.rst-content section ol li>*,.rst-content section ul li>*{margin-top:12px;margin-bottom:12px}.rst-content .section ol li>:first-child,.rst-content .section ul li>:first-child,.rst-content .toctree-wrapper ol li>:first-child,.rst-content .toctree-wrapper ul li>:first-child,.rst-content section ol li>:first-child,.rst-content section ul li>:first-child{margin-top:0}.rst-content .section ol li>p,.rst-content .section ol li>p:last-child,.rst-content .section ul li>p,.rst-content .section ul li>p:last-child,.rst-content .toctree-wrapper ol li>p,.rst-content .toctree-wrapper ol li>p:last-child,.rst-content .toctree-wrapper ul li>p,.rst-content .toctree-wrapper ul li>p:last-child,.rst-content section ol li>p,.rst-content section ol li>p:last-child,.rst-content section ul li>p,.rst-content section ul li>p:last-child{margin-bottom:12px}.rst-content .section ol li>p:only-child,.rst-content .section ol li>p:only-child:last-child,.rst-content .section ul li>p:only-child,.rst-content .section ul li>p:only-child:last-child,.rst-content .toctree-wrapper ol li>p:only-child,.rst-content .toctree-wrapper ol li>p:only-child:last-child,.rst-content .toctree-wrapper ul li>p:only-child,.rst-content .toctree-wrapper ul li>p:only-child:last-child,.rst-content section ol li>p:only-child,.rst-content section ol li>p:only-child:last-child,.rst-content section ul li>p:only-child,.rst-content section ul li>p:only-child:last-child{margin-bottom:0}.rst-content .section ol li>ol,.rst-content .section ol li>ul,.rst-content .section ul li>ol,.rst-content .section ul li>ul,.rst-content .toctree-wrapper ol li>ol,.rst-content .toctree-wrapper ol li>ul,.rst-content .toctree-wrapper ul li>ol,.rst-content .toctree-wrapper ul li>ul,.rst-content section ol li>ol,.rst-content section ol li>ul,.rst-content section ul li>ol,.rst-content section ul li>ul{margin-bottom:12px}.rst-content .section ol.simple li>*,.rst-content .section ol.simple li ol,.rst-content .section ol.simple li ul,.rst-content .section ul.simple li>*,.rst-content .section ul.simple li ol,.rst-content .section ul.simple li ul,.rst-content .toctree-wrapper ol.simple li>*,.rst-content .toctree-wrapper ol.simple li ol,.rst-content .toctree-wrapper ol.simple li ul,.rst-content .toctree-wrapper ul.simple li>*,.rst-content .toctree-wrapper ul.simple li ol,.rst-content .toctree-wrapper ul.simple li ul,.rst-content section ol.simple li>*,.rst-content section ol.simple li ol,.rst-content section ol.simple li 
ul,.rst-content section ul.simple li>*,.rst-content section ul.simple li ol,.rst-content section ul.simple li ul{margin-top:0;margin-bottom:0}.rst-content .line-block{margin-left:0;margin-bottom:24px;line-height:24px}.rst-content .line-block .line-block{margin-left:24px;margin-bottom:0}.rst-content .topic-title{font-weight:700;margin-bottom:12px}.rst-content .toc-backref{color:#404040}.rst-content .align-right{float:right;margin:0 0 24px 24px}.rst-content .align-left{float:left;margin:0 24px 24px 0}.rst-content .align-center{margin:auto}.rst-content .align-center:not(table){display:block}.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink{opacity:0;font-size:14px;font-family:FontAwesome;margin-left:.5em}.rst-content .code-block-caption .headerlink:focus,.rst-content .code-block-caption:hover .headerlink,.rst-content .eqno .headerlink:focus,.rst-content .eqno:hover .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink:focus,.rst-content .toctree-wrapper>p.caption:hover .headerlink,.rst-content dl dt .headerlink:focus,.rst-content dl dt:hover .headerlink,.rst-content h1 .headerlink:focus,.rst-content h1:hover .headerlink,.rst-content h2 .headerlink:focus,.rst-content h2:hover .headerlink,.rst-content h3 .headerlink:focus,.rst-content h3:hover .headerlink,.rst-content h4 .headerlink:focus,.rst-content h4:hover .headerlink,.rst-content h5 .headerlink:focus,.rst-content h5:hover .headerlink,.rst-content h6 .headerlink:focus,.rst-content h6:hover .headerlink,.rst-content p.caption .headerlink:focus,.rst-content p.caption:hover .headerlink,.rst-content p .headerlink:focus,.rst-content p:hover .headerlink,.rst-content table>caption .headerlink:focus,.rst-content table>caption:hover .headerlink{opacity:1}.rst-content p a{overflow-wrap:anywhere}.rst-content .wy-table td p,.rst-content .wy-table td ul,.rst-content .wy-table th p,.rst-content .wy-table th ul,.rst-content table.docutils td p,.rst-content table.docutils td ul,.rst-content table.docutils th p,.rst-content table.docutils th ul,.rst-content table.field-list td p,.rst-content table.field-list td ul,.rst-content table.field-list th p,.rst-content table.field-list th ul{font-size:inherit}.rst-content .btn:focus{outline:2px solid}.rst-content table>caption .headerlink:after{font-size:12px}.rst-content .centered{text-align:center}.rst-content .sidebar{float:right;width:40%;display:block;margin:0 0 24px 24px;padding:24px;background:#f3f6f6;border:1px solid #e1e4e5}.rst-content .sidebar dl,.rst-content .sidebar p,.rst-content .sidebar ul{font-size:90%}.rst-content .sidebar .last,.rst-content .sidebar>:last-child{margin-bottom:0}.rst-content .sidebar .sidebar-title{display:block;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif;font-weight:700;background:#e1e4e5;padding:6px 12px;margin:-24px -24px 24px;font-size:100%}.rst-content .highlighted{background:#f1c40f;box-shadow:0 0 0 2px #f1c40f;display:inline;font-weight:700}.rst-content .citation-reference,.rst-content .footnote-reference{vertical-align:baseline;position:relative;top:-.4em;line-height:0;font-size:90%}.rst-content .citation-reference>span.fn-bracket,.rst-content 
.footnote-reference>span.fn-bracket{display:none}.rst-content .hlist{width:100%}.rst-content dl dt span.classifier:before{content:" : "}.rst-content dl dt span.classifier-delimiter{display:none!important}html.writer-html4 .rst-content table.docutils.citation,html.writer-html4 .rst-content table.docutils.footnote{background:none;border:none}html.writer-html4 .rst-content table.docutils.citation td,html.writer-html4 .rst-content table.docutils.citation tr,html.writer-html4 .rst-content table.docutils.footnote td,html.writer-html4 .rst-content table.docutils.footnote tr{border:none;background-color:transparent!important;white-space:normal}html.writer-html4 .rst-content table.docutils.citation td.label,html.writer-html4 .rst-content table.docutils.footnote td.label{padding-left:0;padding-right:0;vertical-align:top}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{display:grid;grid-template-columns:auto minmax(80%,95%)}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{display:inline-grid;grid-template-columns:max-content auto}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{display:grid;grid-template-columns:auto auto minmax(.65rem,auto) minmax(40%,95%)}html.writer-html5 .rst-content aside.citation>span.label,html.writer-html5 .rst-content aside.footnote>span.label,html.writer-html5 .rst-content div.citation>span.label{grid-column-start:1;grid-column-end:2}html.writer-html5 .rst-content aside.citation>span.backrefs,html.writer-html5 .rst-content aside.footnote>span.backrefs,html.writer-html5 .rst-content div.citation>span.backrefs{grid-column-start:2;grid-column-end:3;grid-row-start:1;grid-row-end:3}html.writer-html5 .rst-content aside.citation>p,html.writer-html5 .rst-content aside.footnote>p,html.writer-html5 .rst-content div.citation>p{grid-column-start:4;grid-column-end:5}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{margin-bottom:24px}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{padding-left:1rem}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dd,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dd,html.writer-html5 .rst-content dl.footnote>dt{margin-bottom:0}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{font-size:.9rem}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.footnote>dt{margin:0 .5rem .5rem 0;line-height:1.2rem;word-break:break-all;font-weight:400}html.writer-html5 .rst-content dl.citation>dt>span.brackets:before,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:before{content:"["}html.writer-html5 .rst-content dl.citation>dt>span.brackets:after,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:after{content:"]"}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a,html.writer-html5 
.rst-content dl.footnote>dt>span.fn-backref>a{word-break:keep-all}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a:not(:first-child):before,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.footnote>dd{margin:0 0 .5rem;line-height:1.2rem}html.writer-html5 .rst-content dl.citation>dd p,html.writer-html5 .rst-content dl.footnote>dd p{font-size:.9rem}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{padding-left:1rem;padding-right:1rem;font-size:.9rem;line-height:1.2rem}html.writer-html5 .rst-content aside.citation p,html.writer-html5 .rst-content aside.footnote p,html.writer-html5 .rst-content div.citation p{font-size:.9rem;line-height:1.2rem;margin-bottom:12px}html.writer-html5 .rst-content aside.citation span.backrefs,html.writer-html5 .rst-content aside.footnote span.backrefs,html.writer-html5 .rst-content div.citation span.backrefs{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content aside.citation span.backrefs>a,html.writer-html5 .rst-content aside.footnote span.backrefs>a,html.writer-html5 .rst-content div.citation span.backrefs>a{word-break:keep-all}html.writer-html5 .rst-content aside.citation span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content aside.footnote span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content div.citation span.backrefs>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content aside.citation span.label,html.writer-html5 .rst-content aside.footnote span.label,html.writer-html5 .rst-content div.citation span.label{line-height:1.2rem}html.writer-html5 .rst-content aside.citation-list,html.writer-html5 .rst-content aside.footnote-list,html.writer-html5 .rst-content div.citation-list{margin-bottom:24px}html.writer-html5 .rst-content dl.option-list kbd{font-size:.9rem}.rst-content table.docutils.footnote,html.writer-html4 .rst-content table.docutils.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content aside.footnote-list aside.footnote,html.writer-html5 .rst-content div.citation-list>div.citation,html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{color:grey}.rst-content table.docutils.footnote code,.rst-content table.docutils.footnote tt,html.writer-html4 .rst-content table.docutils.citation code,html.writer-html4 .rst-content table.docutils.citation tt,html.writer-html5 .rst-content aside.footnote-list aside.footnote code,html.writer-html5 .rst-content aside.footnote-list aside.footnote tt,html.writer-html5 .rst-content aside.footnote code,html.writer-html5 .rst-content aside.footnote tt,html.writer-html5 .rst-content div.citation-list>div.citation code,html.writer-html5 .rst-content div.citation-list>div.citation tt,html.writer-html5 .rst-content dl.citation code,html.writer-html5 .rst-content dl.citation tt,html.writer-html5 .rst-content dl.footnote code,html.writer-html5 .rst-content dl.footnote tt{color:#555}.rst-content .wy-table-responsive.citation,.rst-content .wy-table-responsive.footnote{margin-bottom:0}.rst-content .wy-table-responsive.citation+:not(.citation),.rst-content .wy-table-responsive.footnote+:not(.footnote){margin-top:24px}.rst-content .wy-table-responsive.citation:last-child,.rst-content 
.wy-table-responsive.footnote:last-child{margin-bottom:24px}.rst-content table.docutils th{border-color:#e1e4e5}html.writer-html5 .rst-content table.docutils th{border:1px solid #e1e4e5}html.writer-html5 .rst-content table.docutils td>p,html.writer-html5 .rst-content table.docutils th>p{line-height:1rem;margin-bottom:0;font-size:.9rem}.rst-content table.docutils td .last,.rst-content table.docutils td .last>:last-child{margin-bottom:0}.rst-content table.field-list,.rst-content table.field-list td{border:none}.rst-content table.field-list td p{line-height:inherit}.rst-content table.field-list td>strong{display:inline-block}.rst-content table.field-list .field-name{padding-right:10px;text-align:left;white-space:nowrap}.rst-content table.field-list .field-body{text-align:left}.rst-content code,.rst-content tt{color:#000;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;padding:2px 5px}.rst-content code big,.rst-content code em,.rst-content tt big,.rst-content tt em{font-size:100%!important;line-height:normal}.rst-content code.literal,.rst-content tt.literal{color:#e74c3c;white-space:normal}.rst-content code.xref,.rst-content tt.xref,a .rst-content code,a .rst-content tt{font-weight:700;color:#404040;overflow-wrap:normal}.rst-content kbd,.rst-content pre,.rst-content samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace}.rst-content a code,.rst-content a tt{color:#2980b9}.rst-content dl{margin-bottom:24px}.rst-content dl dt{font-weight:700;margin-bottom:12px}.rst-content dl ol,.rst-content dl p,.rst-content dl table,.rst-content dl ul{margin-bottom:12px}.rst-content dl dd{margin:0 0 12px 24px;line-height:24px}.rst-content dl dd>ol:last-child,.rst-content dl dd>p:last-child,.rst-content dl dd>table:last-child,.rst-content dl dd>ul:last-child{margin-bottom:0}html.writer-html4 .rst-content dl:not(.docutils),html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple){margin-bottom:24px}html.writer-html4 .rst-content dl:not(.docutils)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980b9;border-top:3px solid #6ab0de;padding:6px;position:relative}html.writer-html4 .rst-content dl:not(.docutils)>dt:before,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:before{color:#6ab0de}html.writer-html4 .rst-content dl:not(.docutils)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{margin-bottom:6px;border:none;border-left:3px solid #ccc;background:#f0f0f0;color:#555}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink,html.writer-html5 .rst-content 
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils)>dt:first-child,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:first-child{margin-top:0}html.writer-html4 .rst-content dl:not(.docutils) code.descclassname,html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descclassname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{background-color:transparent;border:none;padding:0;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .optional,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .property,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .property{display:inline-block;padding-right:8px;max-width:100%}html.writer-html4 .rst-content dl:not(.docutils) .k,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .k{font-style:italic}html.writer-html4 .rst-content dl:not(.docutils) .descclassname,html.writer-html4 .rst-content dl:not(.docutils) .descname,html.writer-html4 .rst-content dl:not(.docutils) .sig-name,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .sig-name{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#000}.rst-content .viewcode-back,.rst-content .viewcode-link{display:inline-block;color:#27ae60;font-size:80%;padding-left:24px}.rst-content .viewcode-back{display:block;float:right}.rst-content p.rubric{margin-bottom:12px;font-weight:700}.rst-content 
code.download,.rst-content tt.download{background:inherit;padding:inherit;font-weight:400;font-family:inherit;font-size:inherit;color:inherit;border:inherit;white-space:inherit}.rst-content code.download span:first-child,.rst-content tt.download span:first-child{-webkit-font-smoothing:subpixel-antialiased}.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{margin-right:4px}.rst-content .guilabel,.rst-content .menuselection{font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}.rst-content .guilabel,.rst-content .menuselection{border:1px solid #7fbbe3;background:#e7f2fa}.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>.kbd,.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>kbd{color:inherit;font-size:80%;background-color:#fff;border:1px solid #a6a6a6;border-radius:4px;box-shadow:0 2px grey;padding:2.4px 6px;margin:auto 0}.rst-content .versionmodified{font-style:italic}@media screen and (max-width:480px){.rst-content .sidebar{width:100%}}span[id*=MathJax-Span]{color:#404040}.math{text-align:center}@font-face{font-family:Lato;src:url(fonts/lato-normal.woff2?bd03a2cc277bbbc338d464e679fe9942) format("woff2"),url(fonts/lato-normal.woff?27bd77b9162d388cb8d4c4217c7c5e2a) format("woff");font-weight:400;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold.woff2?cccb897485813c7c256901dbca54ecf2) format("woff2"),url(fonts/lato-bold.woff?d878b6c29b10beca227e9eef4246111b) format("woff");font-weight:700;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold-italic.woff2?0b6bb6725576b072c5d0b02ecdd1900d) format("woff2"),url(fonts/lato-bold-italic.woff?9c7e4e9eb485b4a121c760e61bc3707c) format("woff");font-weight:700;font-style:italic;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-normal-italic.woff2?4eb103b4d12be57cb1d040ed5e162e9d) format("woff2"),url(fonts/lato-normal-italic.woff?f28f2d6482446544ef1ea1ccc6dd5892) format("woff");font-weight:400;font-style:italic;font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:400;src:url(fonts/Roboto-Slab-Regular.woff2?7abf5b8d04d26a2cafea937019bca958) format("woff2"),url(fonts/Roboto-Slab-Regular.woff?c1be9284088d487c5e3ff0a10a92e58c) format("woff");font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:700;src:url(fonts/Roboto-Slab-Bold.woff2?9984f4a9bda09be08e83f2506954adbe) format("woff2"),url(fonts/Roboto-Slab-Bold.woff?bed5564a116b05148e3b3bea6fb1162a) format("woff");font-display:block} \ No newline at end of file diff --git a/_static/css/theme_overrides.css b/_static/css/theme_overrides.css new file mode 100644 index 000000000..730b6fe94 --- /dev/null +++ b/_static/css/theme_overrides.css @@ -0,0 +1,17 @@ +/* override table width restrictions */ +@media screen and (min-width: 767px) { + + .wy-table-responsive table td { + /* !important prevents the common CSS stylesheets from overriding + this as on RTD they are loaded after this stylesheet */ + white-space: normal !important; + } + + .wy-table-responsive { + overflow: visible !important; + } + + .wy-nav-content { + max-width: 1500px !important; + } + } diff --git a/_static/doctools.js b/_static/doctools.js new file mode 100644 index 000000000..d06a71d75 --- /dev/null +++ b/_static/doctools.js @@ -0,0 +1,156 @@ +/* + * doctools.js + * ~~~~~~~~~~~ + * + * Base JavaScript utilities for all Sphinx HTML documentation. 
+ * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([ + "TEXTAREA", + "INPUT", + "SELECT", + "BUTTON", +]); + +const _ready = (callback) => { + if (document.readyState !== "loading") { + callback(); + } else { + document.addEventListener("DOMContentLoaded", callback); + } +}; + +/** + * Small JavaScript module for the documentation. + */ +const Documentation = { + init: () => { + Documentation.initDomainIndexTable(); + Documentation.initOnKeyListeners(); + }, + + /** + * i18n support + */ + TRANSLATIONS: {}, + PLURAL_EXPR: (n) => (n === 1 ? 0 : 1), + LOCALE: "unknown", + + // gettext and ngettext don't access this so that the functions + // can safely bound to a different name (_ = Documentation.gettext) + gettext: (string) => { + const translated = Documentation.TRANSLATIONS[string]; + switch (typeof translated) { + case "undefined": + return string; // no translation + case "string": + return translated; // translation exists + default: + return translated[0]; // (singular, plural) translation tuple exists + } + }, + + ngettext: (singular, plural, n) => { + const translated = Documentation.TRANSLATIONS[singular]; + if (typeof translated !== "undefined") + return translated[Documentation.PLURAL_EXPR(n)]; + return n === 1 ? singular : plural; + }, + + addTranslations: (catalog) => { + Object.assign(Documentation.TRANSLATIONS, catalog.messages); + Documentation.PLURAL_EXPR = new Function( + "n", + `return (${catalog.plural_expr})` + ); + Documentation.LOCALE = catalog.locale; + }, + + /** + * helper function to focus on search bar + */ + focusSearchBar: () => { + document.querySelectorAll("input[name=q]")[0]?.focus(); + }, + + /** + * Initialise the domain index toggle buttons + */ + initDomainIndexTable: () => { + const toggler = (el) => { + const idNumber = el.id.substr(7); + const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`); + if (el.src.substr(-9) === "minus.png") { + el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`; + toggledRows.forEach((el) => (el.style.display = "none")); + } else { + el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`; + toggledRows.forEach((el) => (el.style.display = "")); + } + }; + + const togglerElements = document.querySelectorAll("img.toggler"); + togglerElements.forEach((el) => + el.addEventListener("click", (event) => toggler(event.currentTarget)) + ); + togglerElements.forEach((el) => (el.style.display = "")); + if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler); + }, + + initOnKeyListeners: () => { + // only install a listener if it is really needed + if ( + !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS && + !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS + ) + return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.altKey || event.ctrlKey || event.metaKey) return; + + if (!event.shiftKey) { + switch (event.key) { + case "ArrowLeft": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const prevLink = document.querySelector('link[rel="prev"]'); + if (prevLink && prevLink.href) { + window.location.href = prevLink.href; + event.preventDefault(); + } + break; + case "ArrowRight": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const nextLink = document.querySelector('link[rel="next"]'); 
+ if (nextLink && nextLink.href) { + window.location.href = nextLink.href; + event.preventDefault(); + } + break; + } + } + + // some keyboard layouts may need Shift to get / + switch (event.key) { + case "/": + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break; + Documentation.focusSearchBar(); + event.preventDefault(); + } + }); + }, +}; + +// quick alias for translations +const _ = Documentation.gettext; + +_ready(Documentation.init); diff --git a/_static/documentation_options.js b/_static/documentation_options.js new file mode 100644 index 000000000..b57ae3b83 --- /dev/null +++ b/_static/documentation_options.js @@ -0,0 +1,14 @@ +var DOCUMENTATION_OPTIONS = { + URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'), + VERSION: '', + LANGUAGE: 'en', + COLLAPSE_INDEX: false, + BUILDER: 'html', + FILE_SUFFIX: '.html', + LINK_SUFFIX: '.html', + HAS_SOURCE: true, + SOURCELINK_SUFFIX: '.txt', + NAVIGATION_WITH_KEYS: false, + SHOW_SEARCH_SUMMARY: true, + ENABLE_SEARCH_SHORTCUTS: true, +}; \ No newline at end of file diff --git a/_static/favicon.ico b/_static/favicon.ico new file mode 100644 index 000000000..35ad3d5c1 Binary files /dev/null and b/_static/favicon.ico differ diff --git a/_static/file.png b/_static/file.png new file mode 100644 index 000000000..a858a410e Binary files /dev/null and b/_static/file.png differ diff --git a/_static/img/draid-resilver-hours.png b/_static/img/draid-resilver-hours.png new file mode 100644 index 000000000..41899d28f Binary files /dev/null and b/_static/img/draid-resilver-hours.png differ diff --git a/_static/img/favicon.ico b/_static/img/favicon.ico new file mode 100644 index 000000000..35ad3d5c1 Binary files /dev/null and b/_static/img/favicon.ico differ diff --git a/_static/img/logo/320px-Open-ZFS-Secondary-Logo-Colour-halfsize.png b/_static/img/logo/320px-Open-ZFS-Secondary-Logo-Colour-halfsize.png new file mode 100644 index 000000000..4338899a4 Binary files /dev/null and b/_static/img/logo/320px-Open-ZFS-Secondary-Logo-Colour-halfsize.png differ diff --git a/_static/img/logo/480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png b/_static/img/logo/480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png new file mode 100644 index 000000000..af853062f Binary files /dev/null and b/_static/img/logo/480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png differ diff --git a/_static/img/logo/800px-Open-ZFS-Secondary-Logo-Colour-halfsize.png b/_static/img/logo/800px-Open-ZFS-Secondary-Logo-Colour-halfsize.png new file mode 100644 index 000000000..32fd3e21e Binary files /dev/null and b/_static/img/logo/800px-Open-ZFS-Secondary-Logo-Colour-halfsize.png differ diff --git a/_static/img/logo/logo_main.png b/_static/img/logo/logo_main.png new file mode 100644 index 000000000..cc86e84e7 Binary files /dev/null and b/_static/img/logo/logo_main.png differ diff --git a/_static/img/logo/zof-logo.png b/_static/img/logo/zof-logo.png new file mode 100644 index 000000000..0612f6056 Binary files /dev/null and b/_static/img/logo/zof-logo.png differ diff --git a/_static/img/raidz_draid.png b/_static/img/raidz_draid.png new file mode 100644 index 000000000..b5617cd14 Binary files /dev/null and b/_static/img/raidz_draid.png differ diff --git a/_static/jquery.js b/_static/jquery.js new file mode 100644 index 000000000..c4c6022f2 --- /dev/null +++ b/_static/jquery.js @@ -0,0 +1,2 @@ +/*! 
jQuery v3.6.0 | (c) OpenJS Foundation and other contributors | jquery.org/license */
+[minified jQuery 3.6.0 source, one line]
",2===_t.childNodes.length),S.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(y.createHTMLDocument?((r=(t=E.implementation.createHTMLDocument("")).createElement("base")).href=E.location.href,t.head.appendChild(r)):t=E),o=!n&&[],(i=N.exec(e))?[t.createElement(i[1])]:(i=xe([e],t,o),o&&o.length&&S(o).remove(),S.merge([],i.childNodes)));var r,i,o},S.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return-1").append(S.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},S.expr.pseudos.animated=function(t){return S.grep(S.timers,function(e){return t===e.elem}).length},S.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=S.css(e,"position"),c=S(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=S.css(e,"top"),u=S.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),m(t)&&(t=t.call(e,n,S.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):c.css(f)}},S.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){S.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var e,t,n,r=this[0],i={top:0,left:0};if("fixed"===S.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===S.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=S(e).offset()).top+=S.css(e,"borderTopWidth",!0),i.left+=S.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-S.css(r,"marginTop",!0),left:t.left-i.left-S.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===S.css(e,"position"))e=e.offsetParent;return e||re})}}),S.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var o="pageYOffset"===i;S.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),S.each(["top","left"],function(e,n){S.cssHooks[n]=Fe(y.pixelPosition,function(e,t){if(t)return t=We(e,n),Pe.test(t)?S(e).position()[n]+"px":t})}),S.each({Height:"height",Width:"width"},function(a,s){S.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){S.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?S.css(e,t,i):S.style(e,t,n,i)},s,n?e:void 0,n)}})}),S.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){S.fn[t]=function(e){return this.on(t,e)}}),S.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return 
this.mouseenter(e).mouseleave(t||e)}}),S.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){S.fn[n]=function(e,t){return 0",d.insertBefore(c.lastChild,d.firstChild)}function d(){var a=y.elements;return"string"==typeof a?a.split(" "):a}function e(a,b){var c=y.elements;"string"!=typeof c&&(c=c.join(" ")),"string"!=typeof a&&(a=a.join(" ")),y.elements=c+" "+a,j(b)}function f(a){var b=x[a[v]];return b||(b={},w++,a[v]=w,x[w]=b),b}function g(a,c,d){if(c||(c=b),q)return c.createElement(a);d||(d=f(c));var e;return e=d.cache[a]?d.cache[a].cloneNode():u.test(a)?(d.cache[a]=d.createElem(a)).cloneNode():d.createElem(a),!e.canHaveChildren||t.test(a)||e.tagUrn?e:d.frag.appendChild(e)}function h(a,c){if(a||(a=b),q)return a.createDocumentFragment();c=c||f(a);for(var e=c.frag.cloneNode(),g=0,h=d(),i=h.length;i>g;g++)e.createElement(h[g]);return e}function i(a,b){b.cache||(b.cache={},b.createElem=a.createElement,b.createFrag=a.createDocumentFragment,b.frag=b.createFrag()),a.createElement=function(c){return y.shivMethods?g(c,a,b):b.createElem(c)},a.createDocumentFragment=Function("h,f","return function(){var n=f.cloneNode(),c=n.createElement;h.shivMethods&&("+d().join().replace(/[\w\-:]+/g,function(a){return b.createElem(a),b.frag.createElement(a),'c("'+a+'")'})+");return n}")(y,b.frag)}function j(a){a||(a=b);var d=f(a);return!y.shivCSS||p||d.hasCSS||(d.hasCSS=!!c(a,"article,aside,dialog,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}mark{background:#FF0;color:#000}template{display:none}")),q||i(a,d),a}function k(a){for(var b,c=a.getElementsByTagName("*"),e=c.length,f=RegExp("^(?:"+d().join("|")+")$","i"),g=[];e--;)b=c[e],f.test(b.nodeName)&&g.push(b.applyElement(l(b)));return g}function l(a){for(var b,c=a.attributes,d=c.length,e=a.ownerDocument.createElement(A+":"+a.nodeName);d--;)b=c[d],b.specified&&e.setAttribute(b.nodeName,b.nodeValue);return e.style.cssText=a.style.cssText,e}function m(a){for(var b,c=a.split("{"),e=c.length,f=RegExp("(^|[\\s,>+~])("+d().join("|")+")(?=[[\\s,>+~#.:]|$)","gi"),g="$1"+A+"\\:$2";e--;)b=c[e]=c[e].split("}"),b[b.length-1]=b[b.length-1].replace(f,g),c[e]=b.join("}");return c.join("{")}function n(a){for(var b=a.length;b--;)a[b].removeNode()}function o(a){function b(){clearTimeout(g._removeSheetTimer),d&&d.removeNode(!0),d=null}var d,e,g=f(a),h=a.namespaces,i=a.parentWindow;return!B||a.printShived?a:("undefined"==typeof h[A]&&h.add(A),i.attachEvent("onbeforeprint",function(){b();for(var f,g,h,i=a.styleSheets,j=[],l=i.length,n=Array(l);l--;)n[l]=i[l];for(;h=n.pop();)if(!h.disabled&&z.test(h.media)){try{f=h.imports,g=f.length}catch(o){g=0}for(l=0;g>l;l++)n.push(f[l]);try{j.push(h.cssText)}catch(o){}}j=m(j.reverse().join("")),e=k(a),d=c(a,j)}),i.attachEvent("onafterprint",function(){n(e),clearTimeout(g._removeSheetTimer),g._removeSheetTimer=setTimeout(b,500)}),a.printShived=!0,a)}var p,q,r="3.7.3",s=a.html5||{},t=/^<|^(?:button|map|select|textarea|object|iframe|option|optgroup)$/i,u=/^(?:a|b|code|div|fieldset|h1|h2|h3|h4|h5|h6|i|label|li|ol|p|q|span|strong|style|table|tbody|td|th|tr|ul)$/i,v="_html5shiv",w=0,x={};!function(){try{var a=b.createElement("a");a.innerHTML="",p="hidden"in a,q=1==a.childNodes.length||function(){b.createElement("a");var a=b.createDocumentFragment();return"undefined"==typeof a.cloneNode||"undefined"==typeof a.createDocumentFragment||"undefined"==typeof 
a.createElement}()}catch(c){p=!0,q=!0}}();var y={elements:s.elements||"abbr article aside audio bdi canvas data datalist details dialog figcaption figure footer header hgroup main mark meter nav output picture progress section summary template time video",version:r,shivCSS:s.shivCSS!==!1,supportsUnknownElements:q,shivMethods:s.shivMethods!==!1,type:"default",shivDocument:j,createElement:g,createDocumentFragment:h,addElements:e};a.html5=y,j(b);var z=/^$|\b(?:all|print)\b/,A="html5shiv",B=!q&&function(){var c=b.documentElement;return!("undefined"==typeof b.namespaces||"undefined"==typeof b.parentWindow||"undefined"==typeof c.applyElement||"undefined"==typeof c.removeNode||"undefined"==typeof a.attachEvent)}();y.type+=" print",y.shivPrint=o,o(b),"object"==typeof module&&module.exports&&(module.exports=y)}("undefined"!=typeof window?window:this,document); \ No newline at end of file diff --git a/_static/js/html5shiv.min.js b/_static/js/html5shiv.min.js new file mode 100644 index 000000000..cd1c674f5 --- /dev/null +++ b/_static/js/html5shiv.min.js @@ -0,0 +1,4 @@ +/** +* @preserve HTML5 Shiv 3.7.3 | @afarkas @jdalton @jon_neal @rem | MIT/GPL2 Licensed +*/ +!function(a,b){function c(a,b){var c=a.createElement("p"),d=a.getElementsByTagName("head")[0]||a.documentElement;return c.innerHTML="x",d.insertBefore(c.lastChild,d.firstChild)}function d(){var a=t.elements;return"string"==typeof a?a.split(" "):a}function e(a,b){var c=t.elements;"string"!=typeof c&&(c=c.join(" ")),"string"!=typeof a&&(a=a.join(" ")),t.elements=c+" "+a,j(b)}function f(a){var b=s[a[q]];return b||(b={},r++,a[q]=r,s[r]=b),b}function g(a,c,d){if(c||(c=b),l)return c.createElement(a);d||(d=f(c));var e;return e=d.cache[a]?d.cache[a].cloneNode():p.test(a)?(d.cache[a]=d.createElem(a)).cloneNode():d.createElem(a),!e.canHaveChildren||o.test(a)||e.tagUrn?e:d.frag.appendChild(e)}function h(a,c){if(a||(a=b),l)return a.createDocumentFragment();c=c||f(a);for(var e=c.frag.cloneNode(),g=0,h=d(),i=h.length;i>g;g++)e.createElement(h[g]);return e}function i(a,b){b.cache||(b.cache={},b.createElem=a.createElement,b.createFrag=a.createDocumentFragment,b.frag=b.createFrag()),a.createElement=function(c){return t.shivMethods?g(c,a,b):b.createElem(c)},a.createDocumentFragment=Function("h,f","return function(){var n=f.cloneNode(),c=n.createElement;h.shivMethods&&("+d().join().replace(/[\w\-:]+/g,function(a){return b.createElem(a),b.frag.createElement(a),'c("'+a+'")'})+");return n}")(t,b.frag)}function j(a){a||(a=b);var d=f(a);return!t.shivCSS||k||d.hasCSS||(d.hasCSS=!!c(a,"article,aside,dialog,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}mark{background:#FF0;color:#000}template{display:none}")),l||i(a,d),a}var k,l,m="3.7.3-pre",n=a.html5||{},o=/^<|^(?:button|map|select|textarea|object|iframe|option|optgroup)$/i,p=/^(?:a|b|code|div|fieldset|h1|h2|h3|h4|h5|h6|i|label|li|ol|p|q|span|strong|style|table|tbody|td|th|tr|ul)$/i,q="_html5shiv",r=0,s={};!function(){try{var a=b.createElement("a");a.innerHTML="",k="hidden"in a,l=1==a.childNodes.length||function(){b.createElement("a");var a=b.createDocumentFragment();return"undefined"==typeof a.cloneNode||"undefined"==typeof a.createDocumentFragment||"undefined"==typeof a.createElement}()}catch(c){k=!0,l=!0}}();var t={elements:n.elements||"abbr article aside audio bdi canvas data datalist details dialog figcaption figure footer header hgroup main mark meter nav output picture progress section summary template time 
video",version:m,shivCSS:n.shivCSS!==!1,supportsUnknownElements:l,shivMethods:n.shivMethods!==!1,type:"default",shivDocument:j,createElement:g,createDocumentFragment:h,addElements:e};a.html5=t,j(b),"object"==typeof module&&module.exports&&(module.exports=t)}("undefined"!=typeof window?window:this,document); \ No newline at end of file diff --git a/_static/js/theme.js b/_static/js/theme.js new file mode 100644 index 000000000..1fddb6ee4 --- /dev/null +++ b/_static/js/theme.js @@ -0,0 +1 @@ +!function(n){var e={};function t(i){if(e[i])return e[i].exports;var o=e[i]={i:i,l:!1,exports:{}};return n[i].call(o.exports,o,o.exports,t),o.l=!0,o.exports}t.m=n,t.c=e,t.d=function(n,e,i){t.o(n,e)||Object.defineProperty(n,e,{enumerable:!0,get:i})},t.r=function(n){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(n,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(n,"__esModule",{value:!0})},t.t=function(n,e){if(1&e&&(n=t(n)),8&e)return n;if(4&e&&"object"==typeof n&&n&&n.__esModule)return n;var i=Object.create(null);if(t.r(i),Object.defineProperty(i,"default",{enumerable:!0,value:n}),2&e&&"string"!=typeof n)for(var o in n)t.d(i,o,function(e){return n[e]}.bind(null,o));return i},t.n=function(n){var e=n&&n.__esModule?function(){return n.default}:function(){return n};return t.d(e,"a",e),e},t.o=function(n,e){return Object.prototype.hasOwnProperty.call(n,e)},t.p="",t(t.s=0)}([function(n,e,t){t(1),n.exports=t(3)},function(n,e,t){(function(){var e="undefined"!=typeof window?window.jQuery:t(2);n.exports.ThemeNav={navBar:null,win:null,winScroll:!1,winResize:!1,linkScroll:!1,winPosition:0,winHeight:null,docHeight:null,isRunning:!1,enable:function(n){var t=this;void 0===n&&(n=!0),t.isRunning||(t.isRunning=!0,e((function(e){t.init(e),t.reset(),t.win.on("hashchange",t.reset),n&&t.win.on("scroll",(function(){t.linkScroll||t.winScroll||(t.winScroll=!0,requestAnimationFrame((function(){t.onScroll()})))})),t.win.on("resize",(function(){t.winResize||(t.winResize=!0,requestAnimationFrame((function(){t.onResize()})))})),t.onResize()})))},enableSticky:function(){this.enable(!0)},init:function(n){n(document);var e=this;this.navBar=n("div.wy-side-scroll:first"),this.win=n(window),n(document).on("click","[data-toggle='wy-nav-top']",(function(){n("[data-toggle='wy-nav-shift']").toggleClass("shift"),n("[data-toggle='rst-versions']").toggleClass("shift")})).on("click",".wy-menu-vertical .current ul li a",(function(){var t=n(this);n("[data-toggle='wy-nav-shift']").removeClass("shift"),n("[data-toggle='rst-versions']").toggleClass("shift"),e.toggleCurrent(t),e.hashChange()})).on("click","[data-toggle='rst-current-version']",(function(){n("[data-toggle='rst-versions']").toggleClass("shift-up")})),n("table.docutils:not(.field-list,.footnote,.citation)").wrap("
"),n("table.docutils.footnote").wrap("
"),n("table.docutils.citation").wrap("
"),n(".wy-menu-vertical ul").not(".simple").siblings("a").each((function(){var t=n(this);expand=n(''),expand.on("click",(function(n){return e.toggleCurrent(t),n.stopPropagation(),!1})),t.prepend(expand)}))},reset:function(){var n=encodeURI(window.location.hash)||"#";try{var e=$(".wy-menu-vertical"),t=e.find('[href="'+n+'"]');if(0===t.length){var i=$('.document [id="'+n.substring(1)+'"]').closest("div.section");0===(t=e.find('[href="#'+i.attr("id")+'"]')).length&&(t=e.find('[href="#"]'))}if(t.length>0){$(".wy-menu-vertical .current").removeClass("current").attr("aria-expanded","false"),t.addClass("current").attr("aria-expanded","true"),t.closest("li.toctree-l1").parent().addClass("current").attr("aria-expanded","true");for(let n=1;n<=10;n++)t.closest("li.toctree-l"+n).addClass("current").attr("aria-expanded","true");t[0].scrollIntoView()}}catch(n){console.log("Error expanding nav for anchor",n)}},onScroll:function(){this.winScroll=!1;var n=this.win.scrollTop(),e=n+this.winHeight,t=this.navBar.scrollTop()+(n-this.winPosition);n<0||e>this.docHeight||(this.navBar.scrollTop(t),this.winPosition=n)},onResize:function(){this.winResize=!1,this.winHeight=this.win.height(),this.docHeight=$(document).height()},hashChange:function(){this.linkScroll=!0,this.win.one("hashchange",(function(){this.linkScroll=!1}))},toggleCurrent:function(n){var e=n.closest("li");e.siblings("li.current").removeClass("current").attr("aria-expanded","false"),e.siblings().find("li.current").removeClass("current").attr("aria-expanded","false");var t=e.find("> ul li");t.length&&(t.removeClass("current").attr("aria-expanded","false"),e.toggleClass("current").attr("aria-expanded",(function(n,e){return"true"==e?"false":"true"})))}},"undefined"!=typeof window&&(window.SphinxRtdTheme={Navigation:n.exports.ThemeNav,StickyNav:n.exports.ThemeNav}),function(){for(var n=0,e=["ms","moz","webkit","o"],t=0;t0 + var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1 + var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1 + var s_v = "^(" + C + ")?" 
+ v; // vowel in stem + + this.stemWord = function (w) { + var stem; + var suffix; + var firstch; + var origword = w; + + if (w.length < 3) + return w; + + var re; + var re2; + var re3; + var re4; + + firstch = w.substr(0,1); + if (firstch == "y") + w = firstch.toUpperCase() + w.substr(1); + + // Step 1a + re = /^(.+?)(ss|i)es$/; + re2 = /^(.+?)([^s])s$/; + + if (re.test(w)) + w = w.replace(re,"$1$2"); + else if (re2.test(w)) + w = w.replace(re2,"$1$2"); + + // Step 1b + re = /^(.+?)eed$/; + re2 = /^(.+?)(ed|ing)$/; + if (re.test(w)) { + var fp = re.exec(w); + re = new RegExp(mgr0); + if (re.test(fp[1])) { + re = /.$/; + w = w.replace(re,""); + } + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = new RegExp(s_v); + if (re2.test(stem)) { + w = stem; + re2 = /(at|bl|iz)$/; + re3 = new RegExp("([^aeiouylsz])\\1$"); + re4 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re2.test(w)) + w = w + "e"; + else if (re3.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + else if (re4.test(w)) + w = w + "e"; + } + } + + // Step 1c + re = /^(.+?)y$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(s_v); + if (re.test(stem)) + w = stem + "i"; + } + + // Step 2 + re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step2list[suffix]; + } + + // Step 3 + re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step3list[suffix]; + } + + // Step 4 + re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + re2 = /^(.+?)(s|t)(ion)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + if (re.test(stem)) + w = stem; + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = new RegExp(mgr1); + if (re2.test(stem)) + w = stem; + } + + // Step 5 + re = /^(.+?)e$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + re2 = new RegExp(meq1); + re3 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) + w = stem; + } + re = /ll$/; + re2 = new RegExp(mgr1); + if (re.test(w) && re2.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + + // and turn initial Y back to y + if (firstch == "y") + w = firstch.toLowerCase() + w.substr(1); + return w; + } +} + diff --git a/_static/logo_main.png b/_static/logo_main.png new file mode 100644 index 000000000..cc86e84e7 Binary files /dev/null and b/_static/logo_main.png differ diff --git a/_static/minus.png b/_static/minus.png new file mode 100644 index 000000000..d96755fda Binary files /dev/null and b/_static/minus.png differ diff --git a/_static/plus.png b/_static/plus.png new file mode 100644 index 000000000..7107cec93 Binary files /dev/null and b/_static/plus.png differ diff --git a/_static/pygments.css b/_static/pygments.css new file mode 100644 index 000000000..84ab3030a --- /dev/null +++ b/_static/pygments.css @@ -0,0 +1,75 @@ +pre { line-height: 125%; } +td.linenos .normal { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +span.linenos { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +td.linenos 
.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +.highlight .hll { background-color: #ffffcc } +.highlight { background: #f8f8f8; } +.highlight .c { color: #3D7B7B; font-style: italic } /* Comment */ +.highlight .err { border: 1px solid #FF0000 } /* Error */ +.highlight .k { color: #008000; font-weight: bold } /* Keyword */ +.highlight .o { color: #666666 } /* Operator */ +.highlight .ch { color: #3D7B7B; font-style: italic } /* Comment.Hashbang */ +.highlight .cm { color: #3D7B7B; font-style: italic } /* Comment.Multiline */ +.highlight .cp { color: #9C6500 } /* Comment.Preproc */ +.highlight .cpf { color: #3D7B7B; font-style: italic } /* Comment.PreprocFile */ +.highlight .c1 { color: #3D7B7B; font-style: italic } /* Comment.Single */ +.highlight .cs { color: #3D7B7B; font-style: italic } /* Comment.Special */ +.highlight .gd { color: #A00000 } /* Generic.Deleted */ +.highlight .ge { font-style: italic } /* Generic.Emph */ +.highlight .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */ +.highlight .gr { color: #E40000 } /* Generic.Error */ +.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */ +.highlight .gi { color: #008400 } /* Generic.Inserted */ +.highlight .go { color: #717171 } /* Generic.Output */ +.highlight .gp { color: #000080; font-weight: bold } /* Generic.Prompt */ +.highlight .gs { font-weight: bold } /* Generic.Strong */ +.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */ +.highlight .gt { color: #0044DD } /* Generic.Traceback */ +.highlight .kc { color: #008000; font-weight: bold } /* Keyword.Constant */ +.highlight .kd { color: #008000; font-weight: bold } /* Keyword.Declaration */ +.highlight .kn { color: #008000; font-weight: bold } /* Keyword.Namespace */ +.highlight .kp { color: #008000 } /* Keyword.Pseudo */ +.highlight .kr { color: #008000; font-weight: bold } /* Keyword.Reserved */ +.highlight .kt { color: #B00040 } /* Keyword.Type */ +.highlight .m { color: #666666 } /* Literal.Number */ +.highlight .s { color: #BA2121 } /* Literal.String */ +.highlight .na { color: #687822 } /* Name.Attribute */ +.highlight .nb { color: #008000 } /* Name.Builtin */ +.highlight .nc { color: #0000FF; font-weight: bold } /* Name.Class */ +.highlight .no { color: #880000 } /* Name.Constant */ +.highlight .nd { color: #AA22FF } /* Name.Decorator */ +.highlight .ni { color: #717171; font-weight: bold } /* Name.Entity */ +.highlight .ne { color: #CB3F38; font-weight: bold } /* Name.Exception */ +.highlight .nf { color: #0000FF } /* Name.Function */ +.highlight .nl { color: #767600 } /* Name.Label */ +.highlight .nn { color: #0000FF; font-weight: bold } /* Name.Namespace */ +.highlight .nt { color: #008000; font-weight: bold } /* Name.Tag */ +.highlight .nv { color: #19177C } /* Name.Variable */ +.highlight .ow { color: #AA22FF; font-weight: bold } /* Operator.Word */ +.highlight .w { color: #bbbbbb } /* Text.Whitespace */ +.highlight .mb { color: #666666 } /* Literal.Number.Bin */ +.highlight .mf { color: #666666 } /* Literal.Number.Float */ +.highlight .mh { color: #666666 } /* Literal.Number.Hex */ +.highlight .mi { color: #666666 } /* Literal.Number.Integer */ +.highlight .mo { color: #666666 } /* Literal.Number.Oct */ +.highlight .sa { color: #BA2121 } /* Literal.String.Affix */ +.highlight .sb { color: #BA2121 } /* Literal.String.Backtick */ +.highlight .sc { 
color: #BA2121 } /* Literal.String.Char */ +.highlight .dl { color: #BA2121 } /* Literal.String.Delimiter */ +.highlight .sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */ +.highlight .s2 { color: #BA2121 } /* Literal.String.Double */ +.highlight .se { color: #AA5D1F; font-weight: bold } /* Literal.String.Escape */ +.highlight .sh { color: #BA2121 } /* Literal.String.Heredoc */ +.highlight .si { color: #A45A77; font-weight: bold } /* Literal.String.Interpol */ +.highlight .sx { color: #008000 } /* Literal.String.Other */ +.highlight .sr { color: #A45A77 } /* Literal.String.Regex */ +.highlight .s1 { color: #BA2121 } /* Literal.String.Single */ +.highlight .ss { color: #19177C } /* Literal.String.Symbol */ +.highlight .bp { color: #008000 } /* Name.Builtin.Pseudo */ +.highlight .fm { color: #0000FF } /* Name.Function.Magic */ +.highlight .vc { color: #19177C } /* Name.Variable.Class */ +.highlight .vg { color: #19177C } /* Name.Variable.Global */ +.highlight .vi { color: #19177C } /* Name.Variable.Instance */ +.highlight .vm { color: #19177C } /* Name.Variable.Magic */ +.highlight .il { color: #666666 } /* Literal.Number.Integer.Long */ \ No newline at end of file diff --git a/_static/searchtools.js b/_static/searchtools.js new file mode 100644 index 000000000..97d56a74d --- /dev/null +++ b/_static/searchtools.js @@ -0,0 +1,566 @@ +/* + * searchtools.js + * ~~~~~~~~~~~~~~~~ + * + * Sphinx JavaScript utilities for the full-text search. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +/** + * Simple result scoring code. + */ +if (typeof Scorer === "undefined") { + var Scorer = { + // Implement the following function to further tweak the score for each result + // The function takes a result array [docname, title, anchor, descr, score, filename] + // and returns the new score. + /* + score: result => { + const [docname, title, anchor, descr, score, filename] = result + return score + }, + */ + + // query matches the full name of an object + objNameMatch: 11, + // or matches in the last dotted part of the object name + objPartialMatch: 6, + // Additive scores depending on the priority of the object + objPrio: { + 0: 15, // used to be importantResults + 1: 5, // used to be objectResults + 2: -5, // used to be unimportantResults + }, + // Used when the priority is not in the mapping. 
+ objPrioDefault: 0, + + // query found in title + title: 15, + partialTitle: 7, + // query found in terms + term: 5, + partialTerm: 2, + }; +} + +const _removeChildren = (element) => { + while (element && element.lastChild) element.removeChild(element.lastChild); +}; + +/** + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping + */ +const _escapeRegExp = (string) => + string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string + +const _displayItem = (item, searchTerms) => { + const docBuilder = DOCUMENTATION_OPTIONS.BUILDER; + const docUrlRoot = DOCUMENTATION_OPTIONS.URL_ROOT; + const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX; + const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX; + const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY; + + const [docName, title, anchor, descr, score, _filename] = item; + + let listItem = document.createElement("li"); + let requestUrl; + let linkUrl; + if (docBuilder === "dirhtml") { + // dirhtml builder + let dirname = docName + "/"; + if (dirname.match(/\/index\/$/)) + dirname = dirname.substring(0, dirname.length - 6); + else if (dirname === "index/") dirname = ""; + requestUrl = docUrlRoot + dirname; + linkUrl = requestUrl; + } else { + // normal html builders + requestUrl = docUrlRoot + docName + docFileSuffix; + linkUrl = docName + docLinkSuffix; + } + let linkEl = listItem.appendChild(document.createElement("a")); + linkEl.href = linkUrl + anchor; + linkEl.dataset.score = score; + linkEl.innerHTML = title; + if (descr) + listItem.appendChild(document.createElement("span")).innerHTML = + " (" + descr + ")"; + else if (showSearchSummary) + fetch(requestUrl) + .then((responseData) => responseData.text()) + .then((data) => { + if (data) + listItem.appendChild( + Search.makeSearchSummary(data, searchTerms) + ); + }); + Search.output.appendChild(listItem); +}; +const _finishSearch = (resultCount) => { + Search.stopPulse(); + Search.title.innerText = _("Search Results"); + if (!resultCount) + Search.status.innerText = Documentation.gettext( + "Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories." + ); + else + Search.status.innerText = _( + `Search finished, found ${resultCount} page(s) matching the search query.` + ); +}; +const _displayNextItem = ( + results, + resultCount, + searchTerms +) => { + // results left, load the summary and display it + // this is intended to be dynamic (don't sub resultsCount) + if (results.length) { + _displayItem(results.pop(), searchTerms); + setTimeout( + () => _displayNextItem(results, resultCount, searchTerms), + 5 + ); + } + // search finished, update title and status message + else _finishSearch(resultCount); +}; + +/** + * Default splitQuery function. Can be overridden in ``sphinx.search`` with a + * custom function per language. + * + * The regular expression works by splitting the string on consecutive characters + * that are not Unicode letters, numbers, underscores, or emoji characters. + * This is the same as ``\W+`` in Python, preserving the surrogate pair area. 
+ */ +if (typeof splitQuery === "undefined") { + var splitQuery = (query) => query + .split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu) + .filter(term => term) // remove remaining empty strings +} + +/** + * Search Module + */ +const Search = { + _index: null, + _queued_query: null, + _pulse_status: -1, + + htmlToText: (htmlString) => { + const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html'); + htmlElement.querySelectorAll(".headerlink").forEach((el) => { el.remove() }); + const docContent = htmlElement.querySelector('[role="main"]'); + if (docContent !== undefined) return docContent.textContent; + console.warn( + "Content block not found. Sphinx search tries to obtain it via '[role=main]'. Could you check your theme or template." + ); + return ""; + }, + + init: () => { + const query = new URLSearchParams(window.location.search).get("q"); + document + .querySelectorAll('input[name="q"]') + .forEach((el) => (el.value = query)); + if (query) Search.performSearch(query); + }, + + loadIndex: (url) => + (document.body.appendChild(document.createElement("script")).src = url), + + setIndex: (index) => { + Search._index = index; + if (Search._queued_query !== null) { + const query = Search._queued_query; + Search._queued_query = null; + Search.query(query); + } + }, + + hasIndex: () => Search._index !== null, + + deferQuery: (query) => (Search._queued_query = query), + + stopPulse: () => (Search._pulse_status = -1), + + startPulse: () => { + if (Search._pulse_status >= 0) return; + + const pulse = () => { + Search._pulse_status = (Search._pulse_status + 1) % 4; + Search.dots.innerText = ".".repeat(Search._pulse_status); + if (Search._pulse_status >= 0) window.setTimeout(pulse, 500); + }; + pulse(); + }, + + /** + * perform a search for something (or wait until index is loaded) + */ + performSearch: (query) => { + // create the required interface elements + const searchText = document.createElement("h2"); + searchText.textContent = _("Searching"); + const searchSummary = document.createElement("p"); + searchSummary.classList.add("search-summary"); + searchSummary.innerText = ""; + const searchList = document.createElement("ul"); + searchList.classList.add("search"); + + const out = document.getElementById("search-results"); + Search.title = out.appendChild(searchText); + Search.dots = Search.title.appendChild(document.createElement("span")); + Search.status = out.appendChild(searchSummary); + Search.output = out.appendChild(searchList); + + const searchProgress = document.getElementById("search-progress"); + // Some themes don't use the search progress node + if (searchProgress) { + searchProgress.innerText = _("Preparing search..."); + } + Search.startPulse(); + + // index already loaded, the browser was quick! 
+ if (Search.hasIndex()) Search.query(query); + else Search.deferQuery(query); + }, + + /** + * execute search (requires search index to be loaded) + */ + query: (query) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + const allTitles = Search._index.alltitles; + const indexEntries = Search._index.indexentries; + + // stem the search terms and add them to the correct list + const stemmer = new Stemmer(); + const searchTerms = new Set(); + const excludedTerms = new Set(); + const highlightTerms = new Set(); + const objectTerms = new Set(splitQuery(query.toLowerCase().trim())); + splitQuery(query.trim()).forEach((queryTerm) => { + const queryTermLower = queryTerm.toLowerCase(); + + // maybe skip this "word" + // stopwords array is from language_data.js + if ( + stopwords.indexOf(queryTermLower) !== -1 || + queryTerm.match(/^\d+$/) + ) + return; + + // stem the word + let word = stemmer.stemWord(queryTermLower); + // select the correct list + if (word[0] === "-") excludedTerms.add(word.substr(1)); + else { + searchTerms.add(word); + highlightTerms.add(queryTermLower); + } + }); + + if (SPHINX_HIGHLIGHT_ENABLED) { // set in sphinx_highlight.js + localStorage.setItem("sphinx_highlight_terms", [...highlightTerms].join(" ")) + } + + // console.debug("SEARCH: searching for:"); + // console.info("required: ", [...searchTerms]); + // console.info("excluded: ", [...excludedTerms]); + + // array of [docname, title, anchor, descr, score, filename] + let results = []; + _removeChildren(document.getElementById("search-progress")); + + const queryLower = query.toLowerCase(); + for (const [title, foundTitles] of Object.entries(allTitles)) { + if (title.toLowerCase().includes(queryLower) && (queryLower.length >= title.length/2)) { + for (const [file, id] of foundTitles) { + let score = Math.round(100 * queryLower.length / title.length) + results.push([ + docNames[file], + titles[file] !== title ? `${titles[file]} > ${title}` : title, + id !== null ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // search for explicit entries in index directives + for (const [entry, foundEntries] of Object.entries(indexEntries)) { + if (entry.includes(queryLower) && (queryLower.length >= entry.length/2)) { + for (const [file, id] of foundEntries) { + let score = Math.round(100 * queryLower.length / entry.length) + results.push([ + docNames[file], + titles[file], + id ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // lookup as object + objectTerms.forEach((term) => + results.push(...Search.performObjectSearch(term, objectTerms)) + ); + + // lookup as search terms in fulltext + results.push(...Search.performTermsSearch(searchTerms, excludedTerms)); + + // let the scorer override scores with a custom scoring function + if (Scorer.score) results.forEach((item) => (item[4] = Scorer.score(item))); + + // now sort the results by score (in opposite order of appearance, since the + // display function below uses pop() to retrieve items) and then + // alphabetically + results.sort((a, b) => { + const leftScore = a[4]; + const rightScore = b[4]; + if (leftScore === rightScore) { + // same score: sort alphabetically + const leftTitle = a[1].toLowerCase(); + const rightTitle = b[1].toLowerCase(); + if (leftTitle === rightTitle) return 0; + return leftTitle > rightTitle ? -1 : 1; // inverted is intentional + } + return leftScore > rightScore ? 
1 : -1; + }); + + // remove duplicate search results + // note the reversing of results, so that in the case of duplicates, the highest-scoring entry is kept + let seen = new Set(); + results = results.reverse().reduce((acc, result) => { + let resultStr = result.slice(0, 4).concat([result[5]]).map(v => String(v)).join(','); + if (!seen.has(resultStr)) { + acc.push(result); + seen.add(resultStr); + } + return acc; + }, []); + + results = results.reverse(); + + // for debugging + //Search.lastresults = results.slice(); // a copy + // console.info("search results:", Search.lastresults); + + // print the results + _displayNextItem(results, results.length, searchTerms); + }, + + /** + * search for object names + */ + performObjectSearch: (object, objectTerms) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const objects = Search._index.objects; + const objNames = Search._index.objnames; + const titles = Search._index.titles; + + const results = []; + + const objectSearchCallback = (prefix, match) => { + const name = match[4] + const fullname = (prefix ? prefix + "." : "") + name; + const fullnameLower = fullname.toLowerCase(); + if (fullnameLower.indexOf(object) < 0) return; + + let score = 0; + const parts = fullnameLower.split("."); + + // check for different match types: exact matches of full name or + // "last name" (i.e. last dotted part) + if (fullnameLower === object || parts.slice(-1)[0] === object) + score += Scorer.objNameMatch; + else if (parts.slice(-1)[0].indexOf(object) > -1) + score += Scorer.objPartialMatch; // matches in last name + + const objName = objNames[match[1]][2]; + const title = titles[match[0]]; + + // If more than one term searched for, we require other words to be + // found in the name/title/description + const otherTerms = new Set(objectTerms); + otherTerms.delete(object); + if (otherTerms.size > 0) { + const haystack = `${prefix} ${name} ${objName} ${title}`.toLowerCase(); + if ( + [...otherTerms].some((otherTerm) => haystack.indexOf(otherTerm) < 0) + ) + return; + } + + let anchor = match[3]; + if (anchor === "") anchor = fullname; + else if (anchor === "-") anchor = objNames[match[1]][1] + "-" + fullname; + + const descr = objName + _(", in ") + title; + + // add custom score for some objects according to scorer + if (Scorer.objPrio.hasOwnProperty(match[2])) + score += Scorer.objPrio[match[2]]; + else score += Scorer.objPrioDefault; + + results.push([ + docNames[match[0]], + fullname, + "#" + anchor, + descr, + score, + filenames[match[0]], + ]); + }; + Object.keys(objects).forEach((prefix) => + objects[prefix].forEach((array) => + objectSearchCallback(prefix, array) + ) + ); + return results; + }, + + /** + * search for full-text terms in the index + */ + performTermsSearch: (searchTerms, excludedTerms) => { + // prepare search + const terms = Search._index.terms; + const titleTerms = Search._index.titleterms; + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + + const scoreMap = new Map(); + const fileMap = new Map(); + + // perform the search on the required terms + searchTerms.forEach((word) => { + const files = []; + const arr = [ + { files: terms[word], score: Scorer.term }, + { files: titleTerms[word], score: Scorer.title }, + ]; + // add support for partial matches + if (word.length > 2) { + const escapedWord = _escapeRegExp(word); + Object.keys(terms).forEach((term) => { + if (term.match(escapedWord) && !terms[word]) + arr.push({ 
files: terms[term], score: Scorer.partialTerm }); + }); + Object.keys(titleTerms).forEach((term) => { + if (term.match(escapedWord) && !titleTerms[word]) + arr.push({ files: titleTerms[word], score: Scorer.partialTitle }); + }); + } + + // no match but word was a required one + if (arr.every((record) => record.files === undefined)) return; + + // found search word in contents + arr.forEach((record) => { + if (record.files === undefined) return; + + let recordFiles = record.files; + if (recordFiles.length === undefined) recordFiles = [recordFiles]; + files.push(...recordFiles); + + // set score for the word in each file + recordFiles.forEach((file) => { + if (!scoreMap.has(file)) scoreMap.set(file, {}); + scoreMap.get(file)[word] = record.score; + }); + }); + + // create the mapping + files.forEach((file) => { + if (fileMap.has(file) && fileMap.get(file).indexOf(word) === -1) + fileMap.get(file).push(word); + else fileMap.set(file, [word]); + }); + }); + + // now check if the files don't contain excluded terms + const results = []; + for (const [file, wordList] of fileMap) { + // check if all requirements are matched + + // as search terms with length < 3 are discarded + const filteredTermCount = [...searchTerms].filter( + (term) => term.length > 2 + ).length; + if ( + wordList.length !== searchTerms.size && + wordList.length !== filteredTermCount + ) + continue; + + // ensure that none of the excluded terms is in the search result + if ( + [...excludedTerms].some( + (term) => + terms[term] === file || + titleTerms[term] === file || + (terms[term] || []).includes(file) || + (titleTerms[term] || []).includes(file) + ) + ) + break; + + // select one (max) score for the file. + const score = Math.max(...wordList.map((w) => scoreMap.get(file)[w])); + // add result to the result list + results.push([ + docNames[file], + titles[file], + "", + null, + score, + filenames[file], + ]); + } + return results; + }, + + /** + * helper function to return a node containing the + * search summary for a given text. keywords is a list + * of stemmed words. + */ + makeSearchSummary: (htmlText, keywords) => { + const text = Search.htmlToText(htmlText); + if (text === "") return null; + + const textLower = text.toLowerCase(); + const actualStartPosition = [...keywords] + .map((k) => textLower.indexOf(k.toLowerCase())) + .filter((i) => i > -1) + .slice(-1)[0]; + const startWithContext = Math.max(actualStartPosition - 120, 0); + + const top = startWithContext === 0 ? "" : "..."; + const tail = startWithContext + 240 < text.length ? "..." : ""; + + let summary = document.createElement("p"); + summary.classList.add("context"); + summary.textContent = top + text.substr(startWithContext, 240).trim() + tail; + + return summary; + }, +}; + +_ready(Search.init); diff --git a/_static/sphinx_highlight.js b/_static/sphinx_highlight.js new file mode 100644 index 000000000..aae669d7e --- /dev/null +++ b/_static/sphinx_highlight.js @@ -0,0 +1,144 @@ +/* Highlighting utilities for Sphinx HTML documentation. */ +"use strict"; + +const SPHINX_HIGHLIGHT_ENABLED = true + +/** + * highlight a given string on a node by wrapping it in + * span elements with the given class name. 
+ */ +const _highlight = (node, addItems, text, className) => { + if (node.nodeType === Node.TEXT_NODE) { + const val = node.nodeValue; + const parent = node.parentNode; + const pos = val.toLowerCase().indexOf(text); + if ( + pos >= 0 && + !parent.classList.contains(className) && + !parent.classList.contains("nohighlight") + ) { + let span; + + const closestNode = parent.closest("body, svg, foreignObject"); + const isInSVG = closestNode && closestNode.matches("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.classList.add(className); + } + + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + parent.insertBefore( + span, + parent.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling + ) + ); + node.nodeValue = val.substr(0, pos); + + if (isInSVG) { + const rect = document.createElementNS( + "http://www.w3.org/2000/svg", + "rect" + ); + const bbox = parent.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute("class", className); + addItems.push({ parent: parent, target: rect }); + } + } + } else if (node.matches && !node.matches("button, select, textarea")) { + node.childNodes.forEach((el) => _highlight(el, addItems, text, className)); + } +}; +const _highlightText = (thisNode, text, className) => { + let addItems = []; + _highlight(thisNode, addItems, text, className); + addItems.forEach((obj) => + obj.parent.insertAdjacentElement("beforebegin", obj.target) + ); +}; + +/** + * Small JavaScript module for the documentation. + */ +const SphinxHighlight = { + + /** + * highlight the search words provided in localstorage in the text + */ + highlightSearchWords: () => { + if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight + + // get and clear terms from localstorage + const url = new URL(window.location); + const highlight = + localStorage.getItem("sphinx_highlight_terms") + || url.searchParams.get("highlight") + || ""; + localStorage.removeItem("sphinx_highlight_terms") + url.searchParams.delete("highlight"); + window.history.replaceState({}, "", url); + + // get individual terms from highlight string + const terms = highlight.toLowerCase().split(/\s+/).filter(x => x); + if (terms.length === 0) return; // nothing to do + + // There should never be more than one element matching "div.body" + const divBody = document.querySelectorAll("div.body"); + const body = divBody.length ? 
divBody[0] : document.querySelector("body"); + window.setTimeout(() => { + terms.forEach((term) => _highlightText(body, term, "highlighted")); + }, 10); + + const searchBox = document.getElementById("searchbox"); + if (searchBox === null) return; + searchBox.appendChild( + document + .createRange() + .createContextualFragment( + '" + ) + ); + }, + + /** + * helper function to hide the search marks again + */ + hideSearchWords: () => { + document + .querySelectorAll("#searchbox .highlight-link") + .forEach((el) => el.remove()); + document + .querySelectorAll("span.highlighted") + .forEach((el) => el.classList.remove("highlighted")); + localStorage.removeItem("sphinx_highlight_terms") + }, + + initEscapeListener: () => { + // only install a listener if it is really needed + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return; + if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) { + SphinxHighlight.hideSearchWords(); + event.preventDefault(); + } + }); + }, +}; + +_ready(SphinxHighlight.highlightSearchWords); +_ready(SphinxHighlight.initEscapeListener); diff --git a/genindex.html b/genindex.html new file mode 100644 index 000000000..b362119d2 --- /dev/null +++ b/genindex.html @@ -0,0 +1,117 @@ + + + + + + Index — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +

Index

+ + + + \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 000000000..38cc2bf39 --- /dev/null +++ b/index.html @@ -0,0 +1,236 @@ + + + + + + + OpenZFS Documentation — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + +

OpenZFS Documentation

+

Welcome to the OpenZFS Documentation. This resource provides documentation for users and developers working with (or contributing to) the OpenZFS project. New users or system administrators should refer to the documentation for their favorite platform to get started.

Getting Started: How to get started with OpenZFS on your favorite platform
Project and Community: About the project and how to contribute
Developer Resources: Technical documentation discussing the OpenZFS implementation


Table of Contents:

+ + + + \ No newline at end of file diff --git a/man/1/arcstat.1.html b/man/1/arcstat.1.html new file mode 100644 index 000000000..2539b53e2 --- /dev/null +++ b/man/1/arcstat.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/cstyle.1.html b/man/1/cstyle.1.html new file mode 100644 index 000000000..f0acf936c --- /dev/null +++ b/man/1/cstyle.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/index.html b/man/1/index.html new file mode 100644 index 000000000..9154e3af3 --- /dev/null +++ b/man/1/index.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/raidz_test.1.html b/man/1/raidz_test.1.html new file mode 100644 index 000000000..b2cb6d59a --- /dev/null +++ b/man/1/raidz_test.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/test-runner.1.html b/man/1/test-runner.1.html new file mode 100644 index 000000000..57e7fbf37 --- /dev/null +++ b/man/1/test-runner.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/zhack.1.html b/man/1/zhack.1.html new file mode 100644 index 000000000..184102ada --- /dev/null +++ b/man/1/zhack.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/ztest.1.html b/man/1/ztest.1.html new file mode 100644 index 000000000..ae0758377 --- /dev/null +++ b/man/1/ztest.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/zvol_wait.1.html b/man/1/zvol_wait.1.html new file mode 100644 index 000000000..490e97d5a --- /dev/null +++ b/man/1/zvol_wait.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/4/index.html b/man/4/index.html new file mode 100644 index 000000000..9c72daa96 --- /dev/null +++ b/man/4/index.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/4/spl.4.html b/man/4/spl.4.html new file mode 100644 index 000000000..f939a3465 --- /dev/null +++ b/man/4/spl.4.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/4/zfs.4.html b/man/4/zfs.4.html new file mode 100644 index 000000000..225979924 --- /dev/null +++ b/man/4/zfs.4.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/5/index.html b/man/5/index.html new file mode 100644 index 000000000..d885643bc --- /dev/null +++ b/man/5/index.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/5/vdev_id.conf.5.html b/man/5/vdev_id.conf.5.html new file mode 100644 index 000000000..4fc70b8cc --- /dev/null +++ b/man/5/vdev_id.conf.5.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/dracut.zfs.7.html b/man/7/dracut.zfs.7.html new file mode 100644 index 000000000..13c1d2c7e --- /dev/null +++ b/man/7/dracut.zfs.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/index.html b/man/7/index.html new file mode 100644 index 000000000..87c0d7102 --- /dev/null +++ b/man/7/index.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/vdevprops.7.html b/man/7/vdevprops.7.html new file mode 100644 index 000000000..8a273c9e0 --- /dev/null +++ b/man/7/vdevprops.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/zfsconcepts.7.html b/man/7/zfsconcepts.7.html new file mode 100644 index 000000000..e88177394 --- /dev/null +++ b/man/7/zfsconcepts.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/zfsprops.7.html b/man/7/zfsprops.7.html new file mode 100644 index 000000000..cd36490a2 --- /dev/null +++ b/man/7/zfsprops.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/zpool-features.7.html b/man/7/zpool-features.7.html new file mode 100644 index 000000000..02540b17d --- /dev/null +++ b/man/7/zpool-features.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/zpoolconcepts.7.html b/man/7/zpoolconcepts.7.html new file mode 100644 index 000000000..937d64a76 --- /dev/null +++ b/man/7/zpoolconcepts.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/zpoolprops.7.html b/man/7/zpoolprops.7.html new file mode 100644 index 000000000..a2e6861c4 --- /dev/null +++ b/man/7/zpoolprops.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/fsck.zfs.8.html b/man/8/fsck.zfs.8.html new file mode 100644 index 000000000..8e959547d --- /dev/null +++ b/man/8/fsck.zfs.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/index.html b/man/8/index.html new file mode 100644 index 000000000..fc203fc62 --- /dev/null +++ b/man/8/index.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/mount.zfs.8.html b/man/8/mount.zfs.8.html new file mode 100644 index 000000000..7aeb8c5a4 --- /dev/null +++ b/man/8/mount.zfs.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/vdev_id.8.html b/man/8/vdev_id.8.html new file mode 100644 index 000000000..25fccebb8 --- /dev/null +++ b/man/8/vdev_id.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zdb.8.html b/man/8/zdb.8.html new file mode 100644 index 000000000..54567512b --- /dev/null +++ b/man/8/zdb.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zed.8.html b/man/8/zed.8.html new file mode 100644 index 000000000..e8cc4f05e --- /dev/null +++ b/man/8/zed.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-allow.8.html b/man/8/zfs-allow.8.html new file mode 100644 index 000000000..35b403186 --- /dev/null +++ b/man/8/zfs-allow.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-bookmark.8.html b/man/8/zfs-bookmark.8.html new file mode 100644 index 000000000..80642d0e6 --- /dev/null +++ b/man/8/zfs-bookmark.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-change-key.8.html b/man/8/zfs-change-key.8.html new file mode 100644 index 000000000..5ef1b1842 --- /dev/null +++ b/man/8/zfs-change-key.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-clone.8.html b/man/8/zfs-clone.8.html new file mode 100644 index 000000000..7136da371 --- /dev/null +++ b/man/8/zfs-clone.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-create.8.html b/man/8/zfs-create.8.html new file mode 100644 index 000000000..4c3f4c2ee --- /dev/null +++ b/man/8/zfs-create.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-destroy.8.html b/man/8/zfs-destroy.8.html new file mode 100644 index 000000000..f18354ab7 --- /dev/null +++ b/man/8/zfs-destroy.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-diff.8.html b/man/8/zfs-diff.8.html new file mode 100644 index 000000000..21f4ad6ab --- /dev/null +++ b/man/8/zfs-diff.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-get.8.html b/man/8/zfs-get.8.html new file mode 100644 index 000000000..7655d9f3d --- /dev/null +++ b/man/8/zfs-get.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-groupspace.8.html b/man/8/zfs-groupspace.8.html new file mode 100644 index 000000000..17ae56091 --- /dev/null +++ b/man/8/zfs-groupspace.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-hold.8.html b/man/8/zfs-hold.8.html new file mode 100644 index 000000000..ff1d4c571 --- /dev/null +++ b/man/8/zfs-hold.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-inherit.8.html b/man/8/zfs-inherit.8.html new file mode 100644 index 000000000..1ac647217 --- /dev/null +++ b/man/8/zfs-inherit.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-jail.8.html b/man/8/zfs-jail.8.html new file mode 100644 index 000000000..b627aac63 --- /dev/null +++ b/man/8/zfs-jail.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-list.8.html b/man/8/zfs-list.8.html new file mode 100644 index 000000000..1d7eefab3 --- /dev/null +++ b/man/8/zfs-list.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-load-key.8.html b/man/8/zfs-load-key.8.html new file mode 100644 index 000000000..3e2606394 --- /dev/null +++ b/man/8/zfs-load-key.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-mount-generator.8.html b/man/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..a5c87620a --- /dev/null +++ b/man/8/zfs-mount-generator.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-mount.8.html b/man/8/zfs-mount.8.html new file mode 100644 index 000000000..cbb5a6cf6 --- /dev/null +++ b/man/8/zfs-mount.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-program.8.html b/man/8/zfs-program.8.html new file mode 100644 index 000000000..46d561274 --- /dev/null +++ b/man/8/zfs-program.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-project.8.html b/man/8/zfs-project.8.html new file mode 100644 index 000000000..7c51c4b38 --- /dev/null +++ b/man/8/zfs-project.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-projectspace.8.html b/man/8/zfs-projectspace.8.html new file mode 100644 index 000000000..4c8edeb1e --- /dev/null +++ b/man/8/zfs-projectspace.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-promote.8.html b/man/8/zfs-promote.8.html new file mode 100644 index 000000000..e2319cd4d --- /dev/null +++ b/man/8/zfs-promote.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-receive.8.html b/man/8/zfs-receive.8.html new file mode 100644 index 000000000..48062c2a1 --- /dev/null +++ b/man/8/zfs-receive.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-recv.8.html b/man/8/zfs-recv.8.html new file mode 100644 index 000000000..f17ac89ea --- /dev/null +++ b/man/8/zfs-recv.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-redact.8.html b/man/8/zfs-redact.8.html new file mode 100644 index 000000000..a56a1c890 --- /dev/null +++ b/man/8/zfs-redact.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-release.8.html b/man/8/zfs-release.8.html new file mode 100644 index 000000000..be788f5c7 --- /dev/null +++ b/man/8/zfs-release.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-rename.8.html b/man/8/zfs-rename.8.html new file mode 100644 index 000000000..f34a981f9 --- /dev/null +++ b/man/8/zfs-rename.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-rollback.8.html b/man/8/zfs-rollback.8.html new file mode 100644 index 000000000..f3c24e6a5 --- /dev/null +++ b/man/8/zfs-rollback.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-send.8.html b/man/8/zfs-send.8.html new file mode 100644 index 000000000..b9b019a3f --- /dev/null +++ b/man/8/zfs-send.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-set.8.html b/man/8/zfs-set.8.html new file mode 100644 index 000000000..d57666e11 --- /dev/null +++ b/man/8/zfs-set.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-share.8.html b/man/8/zfs-share.8.html new file mode 100644 index 000000000..8fa24ff49 --- /dev/null +++ b/man/8/zfs-share.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-snapshot.8.html b/man/8/zfs-snapshot.8.html new file mode 100644 index 000000000..1007b2c19 --- /dev/null +++ b/man/8/zfs-snapshot.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-unallow.8.html b/man/8/zfs-unallow.8.html new file mode 100644 index 000000000..732097b36 --- /dev/null +++ b/man/8/zfs-unallow.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-unjail.8.html b/man/8/zfs-unjail.8.html new file mode 100644 index 000000000..9f4351cfd --- /dev/null +++ b/man/8/zfs-unjail.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-unload-key.8.html b/man/8/zfs-unload-key.8.html new file mode 100644 index 000000000..05094fa35 --- /dev/null +++ b/man/8/zfs-unload-key.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-unmount.8.html b/man/8/zfs-unmount.8.html new file mode 100644 index 000000000..2dad6d881 --- /dev/null +++ b/man/8/zfs-unmount.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-unzone.8.html b/man/8/zfs-unzone.8.html new file mode 100644 index 000000000..fbbc20766 --- /dev/null +++ b/man/8/zfs-unzone.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-upgrade.8.html b/man/8/zfs-upgrade.8.html new file mode 100644 index 000000000..844cd7e83 --- /dev/null +++ b/man/8/zfs-upgrade.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-userspace.8.html b/man/8/zfs-userspace.8.html new file mode 100644 index 000000000..dfef04ee1 --- /dev/null +++ b/man/8/zfs-userspace.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-wait.8.html b/man/8/zfs-wait.8.html new file mode 100644 index 000000000..5403c8afc --- /dev/null +++ b/man/8/zfs-wait.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-zone.8.html b/man/8/zfs-zone.8.html new file mode 100644 index 000000000..f96d064e5 --- /dev/null +++ b/man/8/zfs-zone.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs.8.html b/man/8/zfs.8.html new file mode 100644 index 000000000..a2a1af8e8 --- /dev/null +++ b/man/8/zfs.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs_ids_to_path.8.html b/man/8/zfs_ids_to_path.8.html new file mode 100644 index 000000000..5de973af5 --- /dev/null +++ b/man/8/zfs_ids_to_path.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs_prepare_disk.8.html b/man/8/zfs_prepare_disk.8.html new file mode 100644 index 000000000..a7d9658b9 --- /dev/null +++ b/man/8/zfs_prepare_disk.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zgenhostid.8.html b/man/8/zgenhostid.8.html new file mode 100644 index 000000000..bbf3b5f65 --- /dev/null +++ b/man/8/zgenhostid.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zinject.8.html b/man/8/zinject.8.html new file mode 100644 index 000000000..5a8acfda3 --- /dev/null +++ b/man/8/zinject.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-add.8.html b/man/8/zpool-add.8.html new file mode 100644 index 000000000..e6263a5fb --- /dev/null +++ b/man/8/zpool-add.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-attach.8.html b/man/8/zpool-attach.8.html new file mode 100644 index 000000000..ae8cb885f --- /dev/null +++ b/man/8/zpool-attach.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-checkpoint.8.html b/man/8/zpool-checkpoint.8.html new file mode 100644 index 000000000..ad9da4b3b --- /dev/null +++ b/man/8/zpool-checkpoint.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-clear.8.html b/man/8/zpool-clear.8.html new file mode 100644 index 000000000..ba74ee942 --- /dev/null +++ b/man/8/zpool-clear.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-create.8.html b/man/8/zpool-create.8.html new file mode 100644 index 000000000..48fc5062c --- /dev/null +++ b/man/8/zpool-create.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-destroy.8.html b/man/8/zpool-destroy.8.html new file mode 100644 index 000000000..b8ae44e25 --- /dev/null +++ b/man/8/zpool-destroy.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-detach.8.html b/man/8/zpool-detach.8.html new file mode 100644 index 000000000..01eb37fad --- /dev/null +++ b/man/8/zpool-detach.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-events.8.html b/man/8/zpool-events.8.html new file mode 100644 index 000000000..2d2019c77 --- /dev/null +++ b/man/8/zpool-events.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-export.8.html b/man/8/zpool-export.8.html new file mode 100644 index 000000000..8c905dad3 --- /dev/null +++ b/man/8/zpool-export.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-get.8.html b/man/8/zpool-get.8.html new file mode 100644 index 000000000..d88445783 --- /dev/null +++ b/man/8/zpool-get.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-history.8.html b/man/8/zpool-history.8.html new file mode 100644 index 000000000..a9ac60933 --- /dev/null +++ b/man/8/zpool-history.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-import.8.html b/man/8/zpool-import.8.html new file mode 100644 index 000000000..d8d7b6341 --- /dev/null +++ b/man/8/zpool-import.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-initialize.8.html b/man/8/zpool-initialize.8.html new file mode 100644 index 000000000..069c58d04 --- /dev/null +++ b/man/8/zpool-initialize.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-iostat.8.html b/man/8/zpool-iostat.8.html new file mode 100644 index 000000000..fc0369b59 --- /dev/null +++ b/man/8/zpool-iostat.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-labelclear.8.html b/man/8/zpool-labelclear.8.html new file mode 100644 index 000000000..8f70028c2 --- /dev/null +++ b/man/8/zpool-labelclear.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-list.8.html b/man/8/zpool-list.8.html new file mode 100644 index 000000000..7fa9bc2d2 --- /dev/null +++ b/man/8/zpool-list.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-offline.8.html b/man/8/zpool-offline.8.html new file mode 100644 index 000000000..2af57e581 --- /dev/null +++ b/man/8/zpool-offline.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-online.8.html b/man/8/zpool-online.8.html new file mode 100644 index 000000000..18c7f787f --- /dev/null +++ b/man/8/zpool-online.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-reguid.8.html b/man/8/zpool-reguid.8.html new file mode 100644 index 000000000..c1afa1145 --- /dev/null +++ b/man/8/zpool-reguid.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-remove.8.html b/man/8/zpool-remove.8.html new file mode 100644 index 000000000..fb7abab3d --- /dev/null +++ b/man/8/zpool-remove.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-reopen.8.html b/man/8/zpool-reopen.8.html new file mode 100644 index 000000000..0a70ecf71 --- /dev/null +++ b/man/8/zpool-reopen.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-replace.8.html b/man/8/zpool-replace.8.html new file mode 100644 index 000000000..ef59f0fc8 --- /dev/null +++ b/man/8/zpool-replace.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-resilver.8.html b/man/8/zpool-resilver.8.html new file mode 100644 index 000000000..bc4b40297 --- /dev/null +++ b/man/8/zpool-resilver.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-scrub.8.html b/man/8/zpool-scrub.8.html new file mode 100644 index 000000000..7cb99faf9 --- /dev/null +++ b/man/8/zpool-scrub.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-set.8.html b/man/8/zpool-set.8.html new file mode 100644 index 000000000..677b54388 --- /dev/null +++ b/man/8/zpool-set.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-split.8.html b/man/8/zpool-split.8.html new file mode 100644 index 000000000..716ea93ee --- /dev/null +++ b/man/8/zpool-split.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-status.8.html b/man/8/zpool-status.8.html new file mode 100644 index 000000000..1d6fd2346 --- /dev/null +++ b/man/8/zpool-status.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-sync.8.html b/man/8/zpool-sync.8.html new file mode 100644 index 000000000..e17241671 --- /dev/null +++ b/man/8/zpool-sync.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-trim.8.html b/man/8/zpool-trim.8.html new file mode 100644 index 000000000..0ba154699 --- /dev/null +++ b/man/8/zpool-trim.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-upgrade.8.html b/man/8/zpool-upgrade.8.html new file mode 100644 index 000000000..480605f57 --- /dev/null +++ b/man/8/zpool-upgrade.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-wait.8.html b/man/8/zpool-wait.8.html new file mode 100644 index 000000000..0e0cde4ed --- /dev/null +++ b/man/8/zpool-wait.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool.8.html b/man/8/zpool.8.html new file mode 100644 index 000000000..3dd2fcad6 --- /dev/null +++ b/man/8/zpool.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool_influxdb.8.html b/man/8/zpool_influxdb.8.html new file mode 100644 index 000000000..c1fe7c6c3 --- /dev/null +++ b/man/8/zpool_influxdb.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zstream.8.html b/man/8/zstream.8.html new file mode 100644 index 000000000..6ee702beb --- /dev/null +++ b/man/8/zstream.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zstreamdump.8.html b/man/8/zstreamdump.8.html new file mode 100644 index 000000000..7002d45c9 --- /dev/null +++ b/man/8/zstreamdump.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/index.html b/man/index.html new file mode 100644 index 000000000..4431bf420 --- /dev/null +++ b/man/index.html @@ -0,0 +1,140 @@ + + + + + + + Man Pages — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Man Pages

+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/arcstat.1.html b/man/master/1/arcstat.1.html new file mode 100644 index 000000000..1d4726301 --- /dev/null +++ b/man/master/1/arcstat.1.html @@ -0,0 +1,411 @@ + + + + + + + arcstat.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

arcstat.1

+
+ + + + + +
ARCSTAT(1)General Commands ManualARCSTAT(1)
+
+
+

+

arcstat - report ZFS ARC and L2ARC statistics

+
+
+

+ + + + + +
arcstat[-havxp] [-f + field[,field…]] + [-o file] + [-s string] + [interval] [count]
+
+
+

+

arcstat prints various ZFS ARC and L2ARC + statistics in vmstat-like fashion:

+
+
+
+
ARC target size
+
+
Demand hit percentage
+
+
Demand I/O hit percentage
+
+
Demand miss percentage
+
+
Demand data hit percentage
+
+
Demand data I/O hit percentage
+
+
Demand data miss percentage
+
+
Demand metadata hit percentage
+
+
Demand metadata I/O hit percentage
+
+
Demand metadata miss percentage
+
+
MFU list hits per second
+
+
Metadata hit percentage
+
+
Metadata I/O hit percentage
+
+
Metadata miss percentage
+
+
MRU list hits per second
+
+
Prefetch hits percentage
+
+
Prefetch I/O hits percentage
+
+
Prefetch miss percentage
+
+
Prefetch data hits percentage
+
+
Prefetch data I/O hits percentage
+
+
Prefetch data miss percentage
+
+
Prefetch metadata hits percentage
+
+
Prefetch metadata I/O hits percentage
+
+
Prefetch metadata miss percentage
+
+
Demand hits per second
+
+
Demand I/O hits per second
+
+
Demand misses per second
+
+
Demand data hits per second
+
+
Demand data I/O hits per second
+
+
Demand data misses per second
+
+
Demand metadata hits per second
+
+
Demand metadata I/O hits per second
+
+
Demand metadata misses per second
+
+
ARC hit percentage
+
+
ARC hits per second
+
+
ARC I/O hits percentage
+
+
ARC I/O hits per second
+
+
MFU ghost list hits per second
+
+
Metadata hits per second
+
+
Metadata I/O hits per second
+
+
ARC misses per second
+
+
Metadata misses per second
+
+
MRU ghost list hits per second
+
+
Prefetch hits per second
+
+
Prefetch I/O hits per second
+
+
Prefetch misses per second
+
+
Prefetch data hits per second
+
+
Prefetch data I/O hits per second
+
+
Prefetch data misses per second
+
+
Prefetch metadata hits per second
+
+
Prefetch metadata I/O hits per second
+
+
Prefetch metadata misses per second
+
+
Total ARC accesses per second
+
+
Current time
+
+
ARC size
+
+
Alias for size
+
+
Uncached list hits per second
+
+
Demand accesses per second
+
+
Demand data accesses per second
+
+
Demand metadata accesses per second
+
+
evict_skip per second
+
+
ARC miss percentage
+
+
Metadata accesses per second
+
+
Prefetch accesses per second
+
+
Prefetch data accesses per second
+
+
Prefetch metadata accesses per second
+
+
L2ARC access hit percentage
+
+
L2ARC hits per second
+
+
L2ARC misses per second
+
+
Total L2ARC accesses per second
+
+
L2ARC prefetch allocated size per second
+
+
L2ARC prefetch allocated size percentage
+
+
L2ARC MFU allocated size per second
+
+
L2ARC MFU allocated size percentage
+
+
L2ARC MRU allocated size per second
+
+
L2ARC MRU allocated size percentage
+
+
L2ARC data (buf content) allocated size per second
+
+
L2ARC data (buf content) allocated size percentage
+
+
L2ARC metadata (buf content) allocated size per second
+
+
L2ARC metadata (buf content) allocated size percentage
+
+
Size of the L2ARC
+
+
mutex_miss per second
+
+
Bytes read per second from the L2ARC
+
+
L2ARC access miss percentage
+
+
Actual (compressed) size of the L2ARC
+
+
ARC grow disabled
+
+
ARC reclaim needed
+
+
The ARC's idea of how much free memory there is, which includes evictable + memory in the page cache. Since the ARC tries to keep + avail above zero, avail is usually + more instructive to observe than free.
+
+
The ARC's idea of how much free memory is available to it, which is a bit + less than free. May temporarily be negative, in which + case the ARC will reduce the target size c.
+
+
+
+
+

+
+
+
Print all possible stats.
+
+
Display only specific fields. See + DESCRIPTION for supported + statistics.
+
+
Display help message.
+
+
Report statistics to a file instead of the standard output.
+
+
Disable auto-scaling of numerical fields (for raw, machine-parsable + values).
+
+
Display data with a specified separator (default: 2 spaces).
+
+
Print extended stats (same as -f + time,mfu,mru,mfug,mrug,eskip,mtxmis,dread,pread,read).
+
+
Show field headers and definitions
+
+
+
+

+

The following operands are supported:

+
+
+
interval
+
Specify the sampling interval in seconds.
+
count
+
Display only count reports.
+
+
+
+
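A usage sketch combining the options and operands above (the field list reuses names from the -x set documented earlier; the separator and counts are illustrative):

# arcstat -f time,dread,pread,read -s "," 1 10

This samples once per second for ten reports, printing only the listed fields separated by commas.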
+ + + + + +
December 23, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/cstyle.1.html b/man/master/1/cstyle.1.html new file mode 100644 index 000000000..6cb26ce02 --- /dev/null +++ b/man/master/1/cstyle.1.html @@ -0,0 +1,293 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
CSTYLE(1)General Commands ManualCSTYLE(1)
+
+
+

+

cstyle - check for some common stylistic errors in C source files

+
+
+

+ + + + + +
cstyle[-chpvCP] + [file]…
+
+
+

+

cstyle inspects C source files (*.c and *.h) for common stylistic errors. It attempts to check for the cstyle documented in http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. Note that there is much in that document that cannot be checked for; just because your code is cstyle-clean does not mean that you've followed Sun's C style.

+
+
+

+
+
+
Check continuation line indentation inside of functions. Sun's C style states that all statements must be indented to an appropriate tab stop, and any continuation lines after them must be indented exactly four spaces from the start line. This option enables a series of checks designed to find continuation line problems within functions only. The checks have some limitations; see the section on continuation checking, below.
+
+
Performs some of the more picky checks. Includes ANSI #else and #endif rules, and tries to detect spaces after casts. Used as part of the putback checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current + continuation block.
+
+
Check for use of non-POSIX types. Historically, types like + + and + + were used, but they are now deprecated in favor of the POSIX types + , + , + etc. This detects any use of the deprecated types. Used as part of the + putback checks.
+
+
Also print GitHub-Actions-style ::error + output.
+
+
+
+

+
+
+
If set and nonempty, equivalent to -g.
+
+
+
+

+

The continuation checker is a reasonably simple state machine that + knows something about how C is laid out, and can match parenthesis, etc. + over multiple lines. It does have some limitations:

+
    +
1. Preprocessor macros which cause unmatched parentheses will confuse the checker for that line. To fix this, you'll need to make sure that each branch of the #if statement has balanced parentheses.
  2. +
3. Some cpp(1) macros do not require ;s after them. Any such macros should be ALL_CAPS; any lower case letters will cause bad output.

The bad output will generally be corrected after the next ;, {, or }.

    +
  4. +
+Some continuation error messages deserve some additional explanation: +
+
+
A multi-line statement which is not broken at statement boundaries. For + example: +
+
if (this_is_a_long_variable == another_variable) a =
+    b + c;
+
+

Will trigger this error. Instead, do:

+
+
if (this_is_a_long_variable == another_variable)
+    a = b + c;
+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example: +
+
while (do_something(&x) == 0);
+
+

Will trigger this error. Instead, do:

+
+
while (do_something(&x) == 0)
+    ;
+
+
+
+
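As a usage sketch (the file glob is illustrative), the continuation, picky, and POSIX-type checks can be combined with verbose output:

% cstyle -c -p -P -v *.c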
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/index.html b/man/master/1/index.html new file mode 100644 index 000000000..0a56beefb --- /dev/null +++ b/man/master/1/index.html @@ -0,0 +1,159 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/raidz_test.1.html b/man/master/1/raidz_test.1.html new file mode 100644 index 000000000..b051a0406 --- /dev/null +++ b/man/master/1/raidz_test.1.html @@ -0,0 +1,254 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
RAIDZ_TEST(1)General Commands ManualRAIDZ_TEST(1)
+
+
+

+

raidz_test - raidz implementation verification and benchmarking tool

+
+
+

+ + + + + +
raidz_test[-StBevTD] [-a + ashift] [-o + zio_off_shift] [-d + raidz_data_disks] [-s + zio_size_shift] [-r + reflow_offset]
+
+
+

+

The purpose of this tool is to run all supported raidz implementations and verify the results of all methods. It also contains a parameter sweep option where all parameters affecting a RAID-Z block are verified (like ashift size, data offset, data size, etc.). The tool also supports a benchmarking mode using the -B option.

+
+
+

+
+
+
Print a help summary.
+
+ ashift (default: + )
+
Ashift value.
+
+ zio_off_shift (default: + )
+
ZIO offset for each raidz block. The offset's value is + .
+
+ raidz_data_disks (default: + )
+
Number of raidz data disks to use. Additional disks will be used for + parity.
+
+ zio_size_shift (default: + )
+
Size of data for raidz block. The real size is + .
+
+ reflow_offset (default: + )
+
Set raidz expansion offset. The expanded raidz map allocation function + will produce different map configurations depending on this value.
+
-S(weep)
+
Sweep parameter space while verifying the raidz implementations. This option will exhaust most of the valid values for the -aods options. Runtime using this option will be long.
+
-t(imeout)
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
-B(enchmark)
+
All implementations are benchmarked using increasing per disk data size. + Results are given as throughput per disk, measured in MiB/s.
+
-e(xpansion)
+
Use expanded raidz map allocation function.
+
-v(erbose)
+
Increase verbosity.
+
-T(est the test)
+
Debugging option: fail all tests. This is to check if tests would properly + verify bit-exactness.
+
-D(ebug)
+
Debugging option: attach gdb(1) when SIGSEGV or SIGABRT are received.
+
+
+
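For example (a sketch built from the options above), a time-bounded parameter sweep and a standalone benchmark run could look like:

# raidz_test -S -t 120
# raidz_test -B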
+

+

ztest(1)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/test-runner.1.html b/man/master/1/test-runner.1.html new file mode 100644 index 000000000..034e3b18d --- /dev/null +++ b/man/master/1/test-runner.1.html @@ -0,0 +1,437 @@ + + + + + + + test-runner.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

test-runner.1

+
+ + + + + +
RUN(1)General Commands ManualRUN(1)
+
+
+

+

run - find, execute, and log the results of tests

+
+
+

+ + + + + +
run[-dgq] [-o + outputdir] [-pP + script] [-t + -seconds] [-uxX + username] + pathname
+

+
+ + + + + +
run-w runfile + [-gq] [-o + outputdir] [-pP + script] [-t + -seconds] [-uxX + username] + pathname
+

+
+ + + + + +
run-c runfile + [-dq]
+

+
+ + + + + +
run[-h]
+
+
+

+

The run command has three basic modes of operation. With neither -c nor -w, run processes the arguments provided on the command line, adding them to the list for this run. If a specified pathname is an executable file, it is added as a test. If a specified pathname is a directory, the behavior depends upon the presence of -g. If -g is specified, the directory is treated as a test group. See the section on Test Groups below. Without -g, run simply descends into the directory looking for executable files. The tests are then executed, and the results are logged.

+

With -w, run finds + tests in the manner described above. Rather than executing the tests and + logging the results, the test configuration is stored in a + runfile, which can be used in future invocations, or + edited to modify which tests are executed and which options are applied. + Options included on the command line with -w become + defaults in the runfile.

+

With -c, run + parses a runfile, which can specify a series of tests + and test groups to be executed. The tests are then executed, and the results + are logged.

+
+

+

A test group consists of a set of executable files, all of which exist in one directory. The options specified on the command line or in a runfile apply to individual tests in the group. The exception is options pertaining to pre and post scripts, which act on all tests as a group. Rather than running before and after each test, these scripts are run only once each at the start and end of the test group.

+
+
+

+

The specified tests run serially, and are typically assigned results according to exit values. Tests that exit zero and non-zero are marked PASS and FAIL, respectively. When a pre script fails for a test group, only the post script is executed, and the remaining tests are marked SKIP. Any test that exceeds its timeout is terminated, and marked KILLED.

+

By default, tests are executed with the credentials of the + run script. Executing tests with other credentials + is done via sudo(1m), which must be configured to allow + execution without prompting for a password. Environment variables from the + calling shell are available to individual tests. During test execution, the + working directory is changed to outputdir.

+
+
+

+

By default, run will print one line on standard output at the conclusion of each test indicating the test name, result and elapsed time. Additionally, for each invocation of run, a directory is created using the ISO 8601 date format. Within this directory is a file named log containing all the test output with timestamps, and a directory for each test. Within the test directories, there is one file each for standard output, standard error and merged output. The default location for the outputdir is /var/tmp/test_results.

+
+
+

+

The runfile is an INI-style configuration file that describes a test run. The file has one section named DEFAULT, which contains configuration option names and their values in name = value format. The values in this section apply to all the subsequent sections, unless they are also specified there, in which case the default is overridden. The remaining section names are the absolute pathnames of files and directories, describing tests and test groups respectively. The legal option names are:

+
+
+ outputdir = pathname
+
The name of the directory that holds test logs.
+
+ pre = script
+
Run script prior to the test or test group.
+
+ pre_user = username
+
Execute the pre script as username.
+
+ post = script
+
Run script after the test or test group.
+
+ post_user = username
+
Execute the post script as username.
+
+ quiet = True|False
+
If True, only the results summary is printed to standard + out.
+
+ tests = ['filename', …]
+
Specify a list of filenames for this test group. + Only the basename of the absolute path is required. This option is only + valid for test groups, and each filename must be + single quoted.
+
+ timeout = n
+
A timeout value of n seconds.
+
+ user = username
+
Execute the test or test group as username.
+
+
+
+
+

+
+
+ runfile
+
Specify a runfile to be consumed by the run + command.
+
+
Dry run mode. Execute no tests, but print a description of each test that + would have been run.
+
+
Enable kmemleak reporting (Linux only)
+
+
Create test groups from any directories found while searching for + tests.
+
+ outputdir
+
Specify the directory in which to write test results.
+
+ script
+
Run script prior to any test or test group.
+
+ script
+
Run script after any test or test group.
+
+
Print only the results summary to the standard output.
+
+ script
+
Run script as a failsafe after any test is + killed.
+
+ username
+
Execute the failsafe script as username.
+
+ n
+
Specify a timeout value of n seconds per test.
+
+ username
+
Execute tests or test groups as username.
+
+ runfile
+
Specify the name of the runfile to create.
+
+ username
+
Execute the pre script as username.
+
+ username
+
Execute the post script as username.
+
+
+
+

+
+
: Running ad-hoc tests.
+
This example demonstrates the simplest invocation of + run. +
+
% run my-tests
+Test: /home/jkennedy/my-tests/test-01                    [00:02] [PASS]
+Test: /home/jkennedy/my-tests/test-02                    [00:04] [PASS]
+Test: /home/jkennedy/my-tests/test-03                    [00:01] [PASS]
+
+Results Summary
+PASS       3
+
+Running Time:   00:00:07
+Percent passed: 100.0%
+Log directory:  /var/tmp/test_results/20120923T180654
+
+
+
: Creating a runfile + for future use.
+
This example demonstrates creating a runfile with + non-default options. +
+
% run -p setup -x root -g -w new-tests.run new-tests
+% cat new-tests.run
+[DEFAULT]
+pre = setup
+post_user =
+quiet = False
+user =
+timeout = 60
+post =
+pre_user = root
+outputdir = /var/tmp/test_results
+
+[/home/jkennedy/new-tests]
+tests = ['test-01', 'test-02', 'test-03']
+
+
+
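As a follow-on sketch, the runfile created above can then drive a later run via -c:

% run -c new-tests.run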
+
+
+

+

sudo(1m)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/zhack.1.html b/man/master/1/zhack.1.html new file mode 100644 index 000000000..2c769f166 --- /dev/null +++ b/man/master/1/zhack.1.html @@ -0,0 +1,297 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
ZHACK(1)General Commands ManualZHACK(1)
+
+
+

+

zhack - libzpool debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+
+
+ + + + + +
zhackfeature stat pool
+
+
List feature flags.
+
+ + + + + +
zhackfeature enable [-d + description] [-r] + pool guid
+
+
Add a new feature to pool that is uniquely + identified by guid, which is specified in the same + form as a zfs(8) user property. +

The description is a short human + readable explanation of the new feature.

+

The -r flag indicates that + pool can be safely opened in read-only mode by a + system that does not understand the guid + feature.

+
+
+ + + + + +
zhackfeature ref + [-d|-m] + pool guid
+
+
Increment the reference count of the guid feature in + pool. +

The -d flag decrements the reference + count of the guid feature in + pool instead.

+

The -m flag indicates that the + guid feature is now required to read the pool + MOS.

+
+
+ + + + + +
zhacklabel repair [-cu] + device
+
+
Repair labels of a specified device according to + options. +

Flags may be combined to do their functions + simultaneously.

+

The -c flag repairs corrupted label + checksums

+

The -u flag restores the label on a + detached device

+

Example:

+
+ + + + + +
zhack label repair + -cu device +
+ Fix checksums and undetach a device
+
+
+
+
+

+

The following can be passed to all zhack + invocations before any subcommand:

+
+
+ cachefile
+
Read pool configuration from the + cachefile, which is + /etc/zfs/zpool.cache by default.
+
+ dir
+
Search for pool members in + dir. Can be specified more than once.
+
+
+
+

+
+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
+# zhack feature enable -d 'Predict future disk failures.' tank com.example:clairvoyance
+# zhack feature ref tank com.example:clairvoyance
+
+
+
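Continuing the example above (the feature guid is the same illustrative one), the added reference could later be dropped again with the -d flag:

# zhack feature ref -d tank com.example:clairvoyance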
+

+

ztest(1), zpool-features(7), + zfs(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/ztest.1.html b/man/master/1/ztest.1.html new file mode 100644 index 000000000..8fefbdd61 --- /dev/null +++ b/man/master/1/ztest.1.html @@ -0,0 +1,402 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ZTEST(1)General Commands ManualZTEST(1)
+
+
+

+

ztest - was written by the ZFS Developers as a ZFS unit test

+
+
+

+ + + + + +
ztest[-VEG] [-v + vdevs] [-s + size_of_each_vdev] [-a + alignment_shift] [-m + mirror_copies] [-r + raidz_disks/draid_disks] [-R + raid_parity] [-K + raid_kind] [-D + draid_data] [-S + draid_spares] [-C + vdev_class_state] [-d + datasets] [-t + threads] [-g + gang_block_threshold] [-i + initialize_pool_i_times] [-k + kill_percentage] [-p + pool_name] [-T + time] [-z + zil_failure_rate]
+
+ + + + + +
ztest-X [-VG] + [-s size_of_each_vdev] + [-a alignment_shift] + [-r raidz_disks] + [-R raid_parity] + [-d datasets] + [-t threads]
+
+
+

+

ztest was written by the ZFS Developers as a ZFS unit test. The tool was developed in tandem with the ZFS functionality and was executed nightly as one of the many regression tests against the daily build. As features were added to ZFS, unit tests were also added to ztest. In addition, a separate test development team wrote and executed more functional and stress tests.

+

By default ztest runs for ten minutes and + uses block files (stored in /tmp) to create pools + rather than using physical disks. Block files afford + ztest its flexibility to play around with zpool + components without requiring large hardware configurations. However, storing + the block files in /tmp may not work for you if you + have a small tmp directory.

+

By default, ztest is non-verbose. This is why entering the command above will result in ztest quietly executing for 5 minutes. The -V option can be used to increase the verbosity of the tool. Adding multiple -V options is allowed, and the more you add, the chattier ztest becomes.

+

After the ztest run completes, you should notice many ztest.* files lying around. These can safely be removed once the run completes, but should not be removed during a run. You can re-use these files in your next ztest run by using the -E option.

+
+
+

+
+
, + -?, --help
+
Print a help summary.
+
, + --vdevs= (default: + )
+
Number of vdevs.
+
, + --vdev-size= (default: + )
+
Size of each vdev.
+
, + --alignment-shift= (default: + ) + (use + + for random)
+
Alignment shift used in test.
+
, + --mirror-copies= (default: + )
+
Number of mirror copies.
+
, + --raid-disks= (default: 4 + for + raidz/ + for draid)
+
Number of raidz/draid disks.
+
, + --raid-parity= (default: 1)
+
Raid parity (raidz & draid).
+
, + --raid-kind=|||random + (default: random)
+
The kind of RAID config to use. With random the kind + alternates between raidz, eraidz (expandable raidz) and draid.
+
, + --draid-data= (default: 4)
+
Number of data disks in a dRAID redundancy group.
+
, + --draid-spares= (default: 1)
+
Number of dRAID distributed spare disks.
+
, + --datasets= (default: + )
+
Number of datasets.
+
, + --threads= (default: + )
+
Number of threads.
+
, + --gang-block-threshold= (default: + 32K)
+
Gang block threshold.
+
, + --init-count= (default: 1)
+
Number of pool initializations.
+
, + --kill-percentage= (default: + )
+
Kill percentage.
+
, + --pool-name= (default: + )
+
Pool name.
+
, + --vdev-file-directory= (default: + /tmp)
+
File directory for vdev files.
+
, + --multi-host
+
Multi-host; simulate pool imported on remote host.
+
, + --use-existing-pool
+
Use existing pool (use existing pool instead of creating new one).
+
, + --run-time= (default: + s)
+
Total test run time.
+
, + --pass-time= (default: + s)
+
Time per pass.
+
, + --freeze-loops= (default: + )
+
Max loops in + ().
+
, + --alt-ztest=
+
Path to alternate ("older") ztest to + drive, which will be used to initialise the pool, and, a stochastic half + the time, to run the tests. The parallel lib + directory is prepended to LD_LIBRARY_PATH; i.e. + given -B + ./chroots/lenny/usr/bin/ztest, + ./chroots/lenny/usr/lib will be loaded.
+
, + --vdev-class-state=||random + (default: random)
+
The vdev allocation class state.
+
, + --option=variable=value
+
Set global variable to an unsigned 32-bit integer + value (little-endian only).
+
, + --dump-debug
+
Dump zfs_dbgmsg buffer before exiting due to an error.
+
, + --verbose
+
Verbose (use multiple times for ever more verbosity).
+
, + --raidz-expansion
+
Perform a dedicated raidz expansion test.
+
+
+
+

+

To override /tmp as your location for + block files, you can use the -f option:

+
# ztest -f /
+

To get an idea of what ztest is actually + testing try this:

+
# ztest -f / -VVV
+

Maybe you'd like to run ztest for longer? + To do so simply use the -T option and specify the + runlength in seconds like so:

+
# ztest -f / -V -T 120
+
+
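To re-use the block files left behind by a previous run, a sketch combining the -E and -T options described above:

# ztest -E -f / -T 120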
+

+
+
=id
+
Use id instead of the SPL hostid to identify this host. + Intended for use with ztest, but this environment + variable will affect any utility which uses libzpool, including + zpool(8). Since the kernel is unaware of this setting, + results with utilities other than ztest are undefined.
+
=stacksize
+
Limit the default stack size to stacksize bytes for the + purpose of detecting and debugging kernel stack overflows. This value + defaults to 32K which is double the default + Linux + kernel stack size. +

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to + .

+
+
+
+
+

+

zdb(1), zfs(1), + zpool(1), spl(4)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/zvol_wait.1.html b/man/master/1/zvol_wait.1.html new file mode 100644 index 000000000..45089dce4 --- /dev/null +++ b/man/master/1/zvol_wait.1.html @@ -0,0 +1,191 @@ + + + + + + + zvol_wait.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zvol_wait.1

+
+ + + + + +
ZVOL_WAIT(1)General Commands ManualZVOL_WAIT(1)
+
+
+

+

zvol_wait - wait for ZFS volume links to appear in /dev

+
+
+

+ + + + + +
zvol_wait
+
+
+

+

When a ZFS pool is imported, the volumes within it will appear as + block devices. As they're registered, udev(7) + asynchronously creates symlinks under /dev/zvol + using the volumes' names. zvol_wait will wait for + all those symlinks to be created before exiting.

+
+
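A typical use (a sketch; the pool and volume names are illustrative) is to run it after importing a pool and before accessing the volume links:

# zpool import tank
# zvol_wait
# ls /dev/zvol/tank/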
+

+

udev(7)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/4/index.html b/man/master/4/index.html new file mode 100644 index 000000000..9c3813b19 --- /dev/null +++ b/man/master/4/index.html @@ -0,0 +1,149 @@ + + + + + + + Devices and Special Files (4) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Devices and Special Files (4)

+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/4/spl.4.html b/man/master/4/spl.4.html new file mode 100644 index 000000000..c2cc7ee95 --- /dev/null +++ b/man/master/4/spl.4.html @@ -0,0 +1,322 @@ + + + + + + + spl.4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

spl.4

+
+ + + + + +
SPL(4)Device Drivers ManualSPL(4)
+
+
+

+

spl - parameters of the SPL kernel module

+
+
+

+
+
=4 + (uint)
+
The number of threads created for the spl_kmem_cache task queue. This task + queue is responsible for allocating new slabs for use by the kmem caches. + For the majority of systems and workloads only a small number of threads + are required.
+
= + (uint)
+
The preferred number of objects per slab in the cache. In general, a larger value will increase the cache's memory footprint while decreasing the time required to perform an allocation. Conversely, a smaller value will minimize the footprint and improve cache reclaim time, but individual allocations may take longer.
+
= + (64-bit) or 4 (32-bit) (uint)
+
The maximum size of a kmem cache slab in MiB. This effectively limits the + maximum cache object size to + spl_kmem_cache_max_size/spl_kmem_cache_obj_per_slab. +

Caches may not be created with object sized larger than this + limit.

+
+
= + (uint)
+
For small objects the Linux slab allocator should be used to make the most + efficient use of the memory. However, large objects are not supported by + the Linux slab and therefore the SPL implementation is preferred. This + value is used to determine the cutoff between a small and large object. +

Objects of size spl_kmem_cache_slab_limit or + smaller will be allocated using the Linux slab allocator, large objects + use the SPL allocator. A cutoff of 16K was determined to be optimal for + architectures using 4K pages.

+
+
= + (uint)
+
As a general rule kmem_alloc() allocations should be small, preferably just a few pages, since they must be physically contiguous. Therefore, a rate-limited warning will be printed to the console for any kmem_alloc() which exceeds a reasonable threshold.

The default warning threshold is set to eight pages but capped + at 32K to accommodate systems using large pages. This value was selected + to be small enough to ensure the largest allocations are quickly noticed + and fixed. But large enough to avoid logging any warnings when a + allocation size is larger than optimal but not a serious concern. Since + this value is tunable, developers are encouraged to set it lower when + testing so any new largish allocations are quickly caught. These + warnings may be disabled by setting the threshold to zero.

+
+
=KMALLOC_MAX_SIZE/4 + (uint)
+
Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE. Allocations which are marginally smaller than this limit may succeed but should still be avoided due to the expense of locating a contiguous range of free pages. Therefore, a maximum kmem size with a reasonable safety margin of 4x is set. kmem_alloc() allocations larger than this maximum will quickly fail. vmem_alloc() allocations less than or equal to this value will use kmalloc(), but shift to vmalloc() when exceeding this value.
+
=0 + (uint)
+
Cache magazines are an optimization designed to minimize the cost of + allocating memory. They do this by keeping a per-cpu cache of recently + freed objects, which can then be reallocated without taking a lock. This + can improve performance on highly contended caches. However, because + objects in magazines will prevent otherwise empty slabs from being + immediately released this may not be ideal for low memory machines. +

For this reason, + spl_kmem_cache_magazine_size can be used to set a + maximum magazine size. When this value is set to 0 the magazine size + will be automatically determined based on the object size. Otherwise + magazines will be limited to 2-256 objects per magazine (i.e per cpu). + Magazines may never be entirely disabled in this implementation.

+
+
=0 + (ulong)
+
The system hostid; when set, this can be used to uniquely identify a system. By default this value is set to zero, which indicates the hostid is disabled. It can be explicitly enabled by placing a unique non-zero value in /etc/hostid.
+
=/etc/hostid + (charp)
+
The expected path to locate the system hostid when specified. This value + may be overridden for non-standard configurations.
+
=0 + (uint)
+
Cause a kernel panic on assertion failures. When not enabled, the thread + is halted to facilitate further debugging. +

Set to a non-zero value to enable.

+
+
=0 + (uint)
+
Kick stuck taskq to spawn threads. When writing a non-zero value to it, it + will scan all the taskqs. If any of them have a pending task more than 5 + seconds old, it will kick it to spawn more threads. This can be used if + you find a rare deadlock occurs because one or more taskqs didn't spawn a + thread when it should.
+
=0 + (int)
+
Bind taskq threads to specific CPUs. When enabled all taskq threads will + be distributed evenly across the available CPUs. By default, this behavior + is disabled to allow the Linux scheduler the maximum flexibility to + determine where a thread should run.
+
=1 + (int)
+
Allow dynamic taskqs. When enabled taskqs which set the + + flag will by default create only a single thread. New threads will be + created on demand up to a maximum allowed number to facilitate the + completion of outstanding tasks. Threads which are no longer needed will + be promptly destroyed. By default this behavior is enabled but it can be + disabled to aid performance analysis or troubleshooting.
+
=1 + (int)
+
Allow newly created taskq threads to set a non-default scheduler priority. + When enabled, the priority specified when a taskq is created will be + applied to all threads created by that taskq. When disabled all threads + will use the default Linux kernel thread priority. By default, this + behavior is enabled.
+
=4 + (int)
+
The number of items a taskq worker thread must handle without interruption + before requesting a new worker thread be spawned. This is used to control + how quickly taskqs ramp up the number of threads processing the queue. + Because Linux thread creation and destruction are relatively inexpensive a + small default value has been selected. This means that normally threads + will be created aggressively which is desirable. Increasing this value + will result in a slower thread creation rate which may be preferable for + some configurations.
+
= + (uint)
+
The maximum number of tasks per pending list in each taskq shown in /proc/spl/taskq{,-all}. Write 0 to turn off the limit. The proc file will walk the lists with the lock held, so reading it could cause a lock-up if a list grows too large without limiting the output. "(truncated)" will be shown if the list is larger than the limit.
+
= + (uint)
+
(Linux-only) How long a taskq has to have had no work before we tear it + down. Previously, we would tear down a dynamic taskq worker as soon as we + noticed it had no work, but it was observed that this led to a lot of + churn in tearing down things we then immediately spawned anew. In + practice, it seems any nonzero value will remove the vast majority of this + churn, while the nontrivially larger value was chosen to help filter out + the little remaining churn on a mostly idle system. Setting this value to + 0 will revert to the previous behavior.
+
+
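As a general usage sketch (standard Linux module-parameter handling; the parameter and value shown are illustrative, using spl_kmem_cache_slab_limit from above): current values can be read from sysfs at runtime, and persistent settings are normally placed in a modprobe configuration file.

# cat /sys/module/spl/parameters/spl_kmem_cache_slab_limit
# echo "options spl spl_kmem_cache_slab_limit=16384" > /etc/modprobe.d/spl.conf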
+
+ + + + + +
August 24, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/4/zfs.4.html b/man/master/4/zfs.4.html new file mode 100644 index 000000000..ee87eec56 --- /dev/null +++ b/man/master/4/zfs.4.html @@ -0,0 +1,2709 @@ + + + + + + + zfs.4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.4

+
+ + + + + +
ZFS(4)Device Drivers ManualZFS(4)
+
+
+

+

zfs - tuning of the ZFS kernel module

+
+
+

+

The ZFS module supports these parameters:

+
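On Linux these parameters are exposed under /sys/module/zfs/parameters/ and many can be inspected or adjusted at runtime; as a sketch (using the dbuf cache parameter and kstat path referenced below):

# cat /sys/module/zfs/parameters/dbuf_cache_shift
# cat /proc/spl/kstat/zfs/dbufstats

The individual parameters are described in the list that follows.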
+
=UINT64_MAXB + (u64)
+
Maximum size in bytes of the dbuf cache. The target size is determined by + the MIN versus + 1/2^dbuf_cache_shift (1/32nd) of + the target ARC size. The behavior of the dbuf cache and its associated + settings can be observed via the + /proc/spl/kstat/zfs/dbufstats kstat.
+
=UINT64_MAXB + (u64)
+
Maximum size in bytes of the metadata dbuf cache. The target size is the smaller of this value and 1/2^dbuf_metadata_cache_shift (1/64th) of the target ARC size. The behavior of the metadata dbuf cache and its associated settings can be observed via the /proc/spl/kstat/zfs/dbufstats kstat.
+
=10% + (uint)
+
The percentage over dbuf_cache_max_bytes when dbufs must + be evicted directly.
+
=10% + (uint)
+
The percentage below dbuf_cache_max_bytes when the evict + thread stops evicting dbufs.
+
=5 + (uint)
+
Set the size of the dbuf cache (dbuf_cache_max_bytes) to + a log2 fraction of the target ARC size.
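As an example (a sketch, assuming the standard Linux module-parameter and kstat interfaces), the dbuf cache target could be lowered to 1/64th of the target ARC size and its behavior observed via the dbufstats kstat mentioned above:
  echo 6 > /sys/module/zfs/parameters/dbuf_cache_shift    # 1/2^6 = 1/64th of the target ARC size
  cat /proc/spl/kstat/zfs/dbufstats                       # observe the resulting dbuf cache behavior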
+
= + (uint)
+
Set the size of the dbuf metadata cache + (dbuf_metadata_cache_max_bytes) to a log2 fraction of + the target ARC size.
+
=0 + (uint)
+
Set the size of the mutex array for the dbuf cache. When set to + 0 the array is dynamically sized based on total system + memory.
+
=7 + (128) (uint)
+
dnode slots allocated in a single operation as a power of 2. The default + value minimizes lock contention for the bulk operation performed.
+
=134217728B + (128 MiB) (uint)
+
Limit the amount we can prefetch with one call to this amount in bytes. + This helps to limit the amount of memory that can be used by + prefetching.
+
+ (int)
+
Alias for send_holes_without_birth_time.
+
=1|0 + (int)
+
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set + as fast as possible.
+
=200 + (u64)
+
Min feed interval in milliseconds. Requires + l2arc_feed_again=1 and only + applicable in related situations.
+
=1 + (u64)
+
Seconds between L2ARC writing.
+
=8 + (u64)
+
How far through the ARC lists to search for L2ARC cacheable content, + expressed as a multiplier of l2arc_write_max. ARC + persistence across reboots can be achieved with persistent L2ARC by + setting this parameter to 0, allowing the full length of + ARC lists to be searched for cacheable content.
+
=200% + (u64)
+
Scales l2arc_headroom by this percentage when L2ARC + contents are being successfully compressed before writing. A value of + 100 disables this feature.
+
=0|1 + (int)
+
Controls whether buffers present on special vdevs are eligible for caching + into L2ARC. If set to 1, exclude dbufs on special vdevs from being cached + to L2ARC.
+
=0|1 + (int)
+
Controls whether only MFU metadata and data are cached from ARC into + L2ARC. This may be desired to avoid wasting space on L2ARC when + reading/writing large amounts of data that are not expected to be accessed + more than once. +

The default is off, meaning both MRU and MFU data and metadata + are cached. When turning off this feature, some MRU buffers will still + be present in ARC and eventually cached on L2ARC. + If + l2arc_noprefetch=0, some prefetched + buffers will be cached to L2ARC, and those might later transition to + MRU, in which case the l2arc_mru_asize + arcstat will not be 0.

+

Regardless of l2arc_noprefetch, some MFU + buffers might be evicted from ARC, accessed later on as prefetches and + transition to MRU as prefetches. If accessed again they are counted as + MRU and the l2arc_mru_asize arcstat + will not be 0.

+

The ARC status of L2ARC buffers when they + were first cached in L2ARC can be seen in the + l2arc_mru_asize, + , + and + + arcstats when importing the pool or onlining a cache device if + persistent L2ARC is enabled.

+

The + + arcstat does not take into account if this option is enabled as the + information provided by the + + arcstats can be used to decide if toggling this option is appropriate + for the current workload.

+
+
=% + (uint)
+
Percent of ARC size allowed for L2ARC-only headers. Since L2ARC buffers + are not evicted on memory pressure, too many headers on a system with an + irrationally large L2ARC can render it slow or unusable. This parameter + limits L2ARC writes and rebuilds to achieve the target.
+
=0% + (u64)
+
Trims ahead of the current write size (l2arc_write_max) on L2ARC devices by this percentage of write size if we have filled the device. If set to 100 we TRIM twice the space required to accommodate upcoming writes. A minimum of 64 MiB will be trimmed. It also enables TRIM of the whole L2ARC device upon creation or addition to an existing pool or if the header of the device is invalid upon importing a pool or onlining a cache device. A value of 0 disables TRIM on L2ARC altogether and is the default as it can put significant stress on the underlying storage devices. This will vary depending on how well the specific device handles these commands.
+
=1|0 + (int)
+
Do not write buffers to L2ARC if they were prefetched but not used by applications. In case there are prefetched buffers in L2ARC and this option is later set, we do not read the prefetched buffers from L2ARC. Unsetting this option is useful for caching sequential reads from the disks to L2ARC and serving those reads from L2ARC later on. This may be beneficial in case the L2ARC device is significantly faster in sequential reads than the disks of the pool.

Use 1 to disable and 0 to + enable caching/reading prefetches to/from L2ARC.

+
+
=0|1 + (int)
+
No reads during writes.
+
=33554432B + (32 MiB) (u64)
+
Cold L2ARC devices will have l2arc_write_max increased + by this amount while they remain cold.
+
=33554432B + (32 MiB) (u64)
+
Max write bytes per interval.
+
=1|0 + (int)
+
Rebuild the L2ARC when importing a pool (persistent L2ARC). This can be + disabled if there are problems importing a pool or attaching an L2ARC + device (e.g. the L2ARC device is slow in reading stored log metadata, or + the metadata has become somehow fragmented/unusable).
+
=1073741824B + (1 GiB) (u64)
+
Minimum size of an L2ARC device required in order to write log blocks in it. The log blocks are used upon importing the pool to rebuild the persistent L2ARC.

For L2ARC devices less than 1 GiB, the amount + of data + () + evicts is significant compared to the amount of restored L2ARC data. In + this case, do not write log blocks in L2ARC in order not to waste + space.

+
+
=1048576B + (1 MiB) (u64)
+
Metaslab granularity, in bytes. This is roughly similar to what would be + referred to as the "stripe size" in traditional RAID arrays. In + normal operation, ZFS will try to write this amount of data to each disk + before moving on to the next top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group biasing based on their vdevs' over- or + under-utilization relative to the pool.
+
=B + (16 MiB + 1 B) (u64)
+
Make some blocks above a certain size be gang blocks. This option is used + by the test suite to facilitate testing.
+
=3% + (uint)
+
For blocks that could be forced to be a gang block (due to + metaslab_force_ganging), force this many of them to be + gang blocks.
+
=15 + (32 KiB) (int)
+
Default DDT ZAP data block size as a power of 2. Note that changing this + after creating a DDT on the pool will not affect existing DDTs, only newly + created ones.
+
=15 + (32 KiB) (int)
+
Default DDT ZAP indirect block size as a power of 2. Note that changing + this after creating a DDT on the pool will not affect existing DDTs, only + newly created ones.
+
=9 + (512 B) (int)
+
Default dnode block size as a power of 2.
+
= + (128 KiB) (int)
+
Default dnode indirect block size as a power of 2.
+
=1048576B + (1 MiB) (u64)
+
When attempting to log an output nvlist of an ioctl in the on-disk + history, the output will not be stored if it is larger than this size (in + bytes). This must be less than + + (64 MiB). This applies primarily to + () + (cf. zfs-program(8)).
+
=0|1 + (int)
+
Prevent log spacemaps from being destroyed during pool exports and + destroys.
+
=1|0 + (int)
+
Enable/disable segment-based metaslab selection.
+
=2 + (int)
+
When using segment-based metaslab selection, continue allocating from the + active metaslab until this option's worth of buckets have been + exhausted.
+
=0|1 + (int)
+
Load all metaslabs during pool import.
+
=0|1 + (int)
+
Prevent metaslabs from being unloaded.
+
=1|0 + (int)
+
Enable use of the fragmentation metric in computing metaslab weights.
+ +
Maximum distance to search forward from the last offset. Without this + limit, fragmented pools can see + + iterations and + () + becomes the performance limiting factor on high-performance storage. +

With the default setting of 16 + MiB, we typically see less than 500 iterations, + even with very fragmented ashift=9 + pools. The maximum number of iterations possible is + metaslab_df_max_search / 2^(ashift+1). With the + default setting of 16 MiB this is + (with + ashift=9) or + + (with + ashift=).

+
+
=0|1 + (int)
+
If not searching forward (due to metaslab_df_max_search, + , + or + ), + this tunable controls which segment is used. If set, we will use the + largest free segment. If unset, we will use a segment of at least the + requested size.
+
=s + (1 hour) (u64)
+
When we unload a metaslab, we cache the size of the largest free chunk. We + use that cached size to determine whether or not to load a metaslab for a + given allocation. As more frees accumulate in that metaslab while it's + unloaded, the cached max size becomes less and less accurate. After a + number of seconds controlled by this tunable, we stop considering the + cached max size and start considering only the histogram instead.
+
=25% + (uint)
+
When we are loading a new metaslab, we check the amount of memory being + used to store metaslab range trees. If it is over a threshold, we attempt + to unload the least recently used metaslab to prevent the system from + clogging all of its memory with range trees. This tunable sets the + percentage of total system memory that is the threshold.
+
=0|1 + (int)
+
+
    +
  • If unset, we will first try normal allocation.
  • If that fails then we will do a gang allocation.
  • If that fails then we will do a "try hard" gang allocation.
  • If that fails then we will have a multi-layer gang block.

  • If set, we will first try normal allocation.
  • If that fails then we will do a "try hard" allocation.
  • If that fails we will do a gang allocation.
  • If that fails we will do a "try hard" gang allocation.
  • If that fails then we will have a multi-layer gang block.
+
+
=100 + (uint)
+
When not trying hard, we only consider this number of the best metaslabs. + This improves performance, especially when there are many metaslabs per + vdev and the allocation can't actually be satisfied (so we would otherwise + iterate all metaslabs).
+
=200 + (uint)
+
When a vdev is added, target this number of metaslabs per top-level + vdev.
+
= + (512 MiB) (uint)
+
Default lower limit for metaslab size.
+
= + (16 GiB) (uint)
+
Default upper limit for metaslab size.
+
= + (uint)
+
Maximum ashift used when optimizing for logical → physical sector + size on new top-level vdevs. May be increased up to + + (16), but this may negatively impact pool space efficiency.
+
= + (9) (uint)
+
Minimum ashift used when creating new top-level vdevs.
+
=16 + (uint)
+
Minimum number of metaslabs to create in a top-level vdev.
+
=0|1 + (int)
+
Skip label validation steps during pool import. Changing is not + recommended unless you know what you're doing and are recovering a damaged + label.
+
=131072 + (128k) (uint)
+
Practical upper limit of total metaslabs per top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group preloading.
+
=10 + (uint)
+
Maximum number of metaslabs per group to preload
+
=50 + (uint)
+
Percentage of CPUs to run a metaslab preload taskq
+
=1|0 + (int)
+
Give more weight to metaslabs with lower LBAs, assuming they have greater + bandwidth, as is typically the case on a modern constant angular velocity + disk drive.
+
=32 + (uint)
+
After a metaslab is used, we keep it loaded for this many TXGs, to attempt + to reduce unnecessary reloading. Note that both this many TXGs and + metaslab_unload_delay_ms milliseconds must pass before + unloading will occur.
+
=600000ms + (10 min) (uint)
+
After a metaslab is used, we keep it loaded for this many milliseconds, to + attempt to reduce unnecessary reloading. Note, that both this many + milliseconds and metaslab_unload_delay TXGs must pass + before unloading will occur.
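Both conditions must hold before a metaslab is unloaded. As a sketch (assuming the standard Linux module-parameter interface), keeping metaslabs loaded for at least 64 TXGs and 20 minutes could look like:
  echo 64 > /sys/module/zfs/parameters/metaslab_unload_delay
  echo 1200000 > /sys/module/zfs/parameters/metaslab_unload_delay_ms    # 20 minutes, in milliseconds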
+
=3 + (uint)
+
Maximum reference holders being tracked when reference_tracking_enable is + active.
+
= + (ulong)
+
Max amount of memory to use for RAID-Z expansion I/O. This limits how much + I/O can be outstanding at once.
+
=0 + (ulong)
+
For testing, pause RAID-Z expansion when reflow amount reaches this + value.
+
=4 + (ulong)
+
For expanded RAID-Z, aggregate reads that have more rows than this.
+
=3 + (int)
+
Maximum reference holders being tracked when reference_tracking_enable is + active.
+
=0|1 + (int)
+
Track reference holders to + + objects (debug builds only).
+
=1|0 + (int)
+
When set, the hole_birth optimization will not be used, + and all holes will always be sent during a zfs + send. This is useful if you suspect your datasets + are affected by a bug in hole_birth.
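For example (a sketch, assuming the standard Linux module-parameter interface), a dataset suspected to be affected by a hole_birth bug could be sent with all holes included by keeping the optimization disabled:
  echo 1 > /sys/module/zfs/parameters/send_holes_without_birth_time    # always send holes, ignoring hole_birth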
+
=/etc/zfs/zpool.cache + (charp)
+
SPA config file.
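A persistent override can be given at module load time; a minimal sketch using the usual modprobe.d mechanism on Linux:
  # /etc/modprobe.d/zfs.conf
  options zfs spa_config_path=/etc/zfs/zpool.cache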
+
= + (uint)
+
Multiplication factor used to estimate actual disk consumption from the + size of data being written. The default value is a worst case estimate, + but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits.
+
=0|1 + (int)
+
Whether to print the vdev tree in the debugging message buffer during pool + import.
+
=1|0 + (int)
+
Whether to traverse data blocks during an "extreme rewind" + (-X) import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal skips non-metadata blocks. It can be toggled once the import + has started to stop or start the traversal of non-metadata blocks.

+
+
=1|0 + (int)
+
Whether to traverse blocks during an "extreme rewind" + (-X) pool import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal is not performed. It can be toggled once the import has + started to stop or start the traversal.

+
+
=4 + (1/16th) (uint)
+
Sets the maximum number of bytes to consume during pool import to the log2 + fraction of the target ARC size.
+
=5 + (1/32nd) (int)
+
Normally, we don't allow the last + + () + of space in the pool to be consumed. This ensures that we don't run the + pool completely out of space, due to unaccounted changes (e.g. to the + MOS). It also limits the worst-case time to allocate space. If we have + less than this amount of free space, most ZPL operations (e.g. write, + create) will return + .
+
=4 + (int)
+
Determines the number of block allocators to use per spa instance. Capped by the number of actual CPUs in the system.

Note that setting this value too high could result in performance degradation and/or excess fragmentation.

+
+
=0 + (uint)
+
Limits the number of on-disk error log entries that will be converted to + the new format when enabling the + + feature. The default is to convert all log entries.
+
=32768B + (32 KiB) (uint)
+
During top-level vdev removal, chunks of data are copied from the vdev + which may include free space in order to trade bandwidth for IOPS. This + parameter determines the maximum span of free space, in bytes, which will + be included as "unnecessary" data in a chunk of copied data. +

The default value here was chosen to align with + zfs_vdev_read_gap_limit, which is a similar concept + when doing regular reads (but there's no reason it has to be the + same).

+
+
=9 + (512 B) (u64)
+
Logical ashift for file-based devices.
+
=9 + (512 B) (u64)
+
Physical ashift for file-based devices.
+
=1|0 + (int)
+
If set, when we start iterating over a ZAP object, prefetch the entire + object (all leaf blocks). However, this is limited by + dmu_prefetch_max.
+
=131072B + (128 KiB) (int)
+
Maximum micro ZAP size. A micro ZAP is upgraded to a fat ZAP once it grows beyond the specified size.
+
=4194304B + (4 MiB) (uint)
+
Min bytes to prefetch per stream. Prefetch distance starts from the demand access size and quickly grows to this value, doubling on each hit. After that it may grow further by 1/8 per hit, but only if some prefetches issued since the last time haven't completed in time to satisfy the demand request, i.e. the prefetch depth didn't cover the read latency or the pool got saturated.
+
=67108864B + (64 MiB) (uint)
+
Max bytes to prefetch per stream.
+
=67108864B + (64 MiB) (uint)
+
Max bytes to prefetch indirects for per stream.
+
=8 + (uint)
+
Max number of streams per zfetch (prefetch streams per file).
+
=1 + (uint)
+
Min time before inactive prefetch stream can be reclaimed
+
=2 + (uint)
+
Max time before inactive prefetch stream can be deleted
+
=1|0 + (int)
+
Enable the use of scatter/gather lists by the ARC. When disabled, all allocations are forced to be linear in kernel memory. Disabling can improve performance in some code paths at the expense of fragmented kernel memory.
+
=MAX_ORDER-1 + (uint)
+
Maximum number of consecutive memory pages allocated in a single block for + scatter/gather lists. +

The value of MAX_ORDER depends on kernel + configuration.

+
+
=B + (1.5 KiB) (uint)
+
This is the minimum allocation size that will use scatter (page-based) + ABDs. Smaller allocations will use linear ABDs.
+
=0B + (u64)
+
When the number of bytes consumed by dnodes in the ARC exceeds this number of bytes, try to unpin some of it in response to demand for non-metadata. This value acts as a ceiling to the amount of dnode metadata, and defaults to 0, which indicates that the limit is instead derived from zfs_arc_dnode_limit_percent of the ARC meta buffers that may be used for dnodes.
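For example (a sketch, assuming the standard Linux module-parameter interface), an explicit 1 GiB ceiling on dnode metadata, overriding the zfs_arc_dnode_limit_percent based default, could be set with:
  echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_dnode_limit    # 1 GiB; a non-zero value takes precedence over the percent-based limit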
+
=10% + (u64)
+
Percentage that can be consumed by dnodes of ARC meta buffers. +

See also zfs_arc_dnode_limit, which serves a + similar purpose but has a higher priority if nonzero.

+
+
=10% + (u64)
+
Percentage of ARC dnodes to try to scan in response to demand for + non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit.
+
=B + (8 KiB) (uint)
+
The ARC's buffer hash table is sized based on the assumption of an average + block size of this value. This works out to roughly 1 MiB of hash table + per 1 GiB of physical memory with 8-byte pointers. For configurations with + a known larger average block size, this value can be increased to reduce + the memory footprint.
+
=200% + (uint)
+
When + (), + () + waits for this percent of the requested amount of data to be evicted. For + example, by default, for every 2 KiB that's evicted, + 1 KiB of it may be "reused" by a new + allocation. Since this is above 100%, it ensures that + progress is made towards getting arc_size + under arc_c. Since this is + finite, it ensures that allocations can still happen, even during the + potentially long time that arc_size is + more than arc_c.
+
=10 + (uint)
+
Number of ARC headers to evict per sub-list before proceeding to another sub-list. This batch-style operation prevents entire sub-lists from being evicted at once but comes at a cost of additional unlocking and locking.
+
=0s + (uint)
+
If set to a non-zero value, it will replace the arc_grow_retry value with this value. The arc_grow_retry value (default 5s) is the number of seconds the ARC will wait before trying to resume growth after a memory pressure event.
+
=10% + (int)
+
Throttle I/O when free system memory drops below this percentage of total + system memory. Setting this value to 0 will disable the + throttle.
+
=0B + (u64)
+
Max size of ARC in bytes. If 0, then the max size of ARC + is determined by the amount of system memory installed. The larger of + all_system_memory - + 1 GiB and + + × all_system_memory will + be used as the limit. This value must be at least + 67108864B (64 MiB). +

This value can be changed dynamically, with some caveats. It + cannot be set back to 0 while running, and reducing it + below the current ARC size will not cause the ARC to shrink without + memory pressure to induce shrinking.
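As a sketch (the parameter name is not shown above and is assumed here to be zfs_arc_max), capping the ARC at 8 GiB at runtime could look like:
  echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max    # assumed name; must be at least 67108864 (64 MiB)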

+
+
=500 + (uint)
+
Balance between metadata and data on ghost hits. Values above 100 increase + metadata caching by proportionally reducing effect of ghost data hits on + target data/metadata rate.
+
=0B + (u64)
+
Min size of ARC in bytes. If set to + 0, + + will default to consuming the larger of 32 MiB and + all_system_memory / + 32.
+
=0ms(≡1s) + (uint)
+
Minimum time prefetched blocks are locked in the ARC.
+
=0ms(≡6s) + (uint)
+
Minimum time "prescient prefetched" blocks are locked in the + ARC. These blocks are meant to be prefetched fairly aggressively ahead of + the code that may use them.
+
=1 + (int)
+
Number of arc_prune threads. FreeBSD does not need + more than one. Linux may theoretically use one per mount point up to + number of CPUs, but that was not proven to be useful.
+
=0 + (int)
+
Number of missing top-level vdevs which will be allowed during pool import + (only in read-only mode).
+
= + 0 (u64)
+
Maximum size in bytes allowed to be passed as + + for ioctls on /dev/zfs. This prevents a user from + causing the kernel to allocate an excessive amount of memory. When the + limit is exceeded, the ioctl fails with + + and a description of the error is sent to the + zfs-dbgmsg log. This parameter should not need to + be touched under normal circumstances. If 0, equivalent + to a quarter of the user-wired memory limit under + FreeBSD and to 134217728B (128 + MiB) under Linux.
+
=0 + (uint)
+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and metadata objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state, and also applies to other uses of the multilist data structure.

If 0, equivalent to the greater of the + number of online CPUs and 4.

+
+
=8 + (int)
+
The ARC size is considered to be overflowing if it exceeds the current ARC + target size (arc_c) by thresholds determined by this + parameter. Exceeding by (arc_c + >> zfs_arc_overflow_shift) + / 2 starts ARC reclamation + process. If that appears insufficient, exceeding by + (arc_c >> + zfs_arc_overflow_shift) × + blocks + new buffer allocation until the reclaim thread catches up. Started + reclamation process continues till ARC size returns below the target size. +

The default value of 8 causes the + ARC to start reclamation if it exceeds the target size by + of the + target size, and block allocations by + .

+
+
=0 + (uint)
+
If nonzero, this will update + + (default 7) with the new value.
+
=0% + (off) (uint)
+
Percent of pagecache to reclaim ARC to. +

This tunable allows the ZFS ARC to play + more nicely with the kernel's LRU pagecache. It can guarantee that the + ARC size won't collapse under scanning pressure on the pagecache, yet + still allows the ARC to be reclaimed down to + zfs_arc_min if necessary. This value is specified as + percent of pagecache size (as measured by + ), + where that percent may exceed 100. This only operates + during memory pressure/reclaim.

+
+
=10000 + (int)
+
This is a limit on how many pages the ARC shrinker makes available for + eviction in response to one page allocation attempt. Note that in + practice, the kernel's shrinker can ask us to evict up to about four times + this for one allocation attempt. +

The default limit of 10000 (in + practice, + per allocation attempt with 4 KiB pages) limits + the amount of time spent attempting to reclaim ARC memory to less than + 100 ms per allocation attempt, even with a small average compressed + block size of ~8 KiB.

+

The parameter can be set to 0 (zero) to disable the limit, and + only applies on Linux.

+
+
=0B + (u64)
+
The target number of bytes the ARC should leave as free memory on the + system. If zero, equivalent to the bigger of 512 KiB + and + .
+
=1|0 + (int)
+
Disable pool import at module load by ignoring the cache file + (spa_config_path).
+
=20/s + (uint)
+
Rate limit checksum events to this many per second. Note that this should + not be set below the ZED thresholds (currently 10 checksums over 10 + seconds) or else the daemon may not trigger any action.
+
=10% + (uint)
+
This controls the amount of time that a ZIL block (lwb) will remain + "open" when it isn't "full", and it has a thread + waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly + impacting the latency of each individual transaction record (itx).
+
=0ms + (int)
+
Vdev indirection layer (used for device removal) sleeps for this many + milliseconds during mapping generation. Intended for use with the test + suite to throttle vdev removal speed.
+
=25% + (uint)
+
Minimum percent of obsolete bytes in vdev mapping required to attempt to + condense (see zfs_condense_indirect_vdevs_enable). + Intended for use with the test suite to facilitate triggering condensing + as needed.
+
=1|0 + (int)
+
Enable condensing indirect vdev mappings. When set, attempt to condense + indirect vdev mappings if the mapping uses more than + zfs_condense_min_mapping_bytes bytes of memory and if + the obsolete space map object uses more than + zfs_condense_max_obsolete_bytes bytes on-disk. The + condensing process is an attempt to save memory by removing obsolete + mappings.
+
=1073741824B + (1 GiB) (u64)
+
Only attempt to condense indirect vdev mappings if the on-disk size of the + obsolete space map object is greater than this number of bytes (see + zfs_condense_indirect_vdevs_enable).
+
=131072B + (128 KiB) (u64)
+
Minimum size vdev mapping to attempt to condense (see + zfs_condense_indirect_vdevs_enable).
+
=1|0 + (int)
+
Internally ZFS keeps a small log to facilitate debugging. The log is + enabled by default, and can be disabled by unsetting this option. The + contents of the log can be accessed by reading + /proc/spl/kstat/zfs/dbgmsg. Writing + 0 to the file clears the log. +

This setting does not influence debug prints due to + zfs_flags.
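For instance, following the description above:
  cat /proc/spl/kstat/zfs/dbgmsg       # read the internal debug log
  echo 0 > /proc/spl/kstat/zfs/dbgmsg  # clear the log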

+
+
=4194304B + (4 MiB) (uint)
+
Maximum size of the internal ZFS debug log.
+
=0 + (int)
+
Historically used for controlling what reporting was available under + /proc/spl/kstat/zfs. No effect.
+
=1|0 + (int)
+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms, or when an individual I/O + operation takes longer than zfs_deadman_ziotime_ms, then + the operation is considered to be "hung". If + zfs_deadman_enabled is set, then the deadman behavior is + invoked as described by zfs_deadman_failmode. By + default, the deadman is enabled and set to wait which + results in "hung" I/O operations only being logged. The deadman + is automatically disabled when a pool gets suspended.
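As a sketch (assuming the standard Linux module-parameter interface), enabling the deadman and flagging an individual I/O as "hung" after 5 minutes could look like:
  echo 1 > /sys/module/zfs/parameters/zfs_deadman_enabled
  echo 300000 > /sys/module/zfs/parameters/zfs_deadman_ziotime_ms    # 5 minutes, in milliseconds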
+
=wait + (charp)
+
Controls the failure behavior when the deadman detects a "hung" + I/O operation. Valid values are: +
+
+
+
Wait for a "hung" operation to complete. For each + "hung" operation a "deadman" event will be posted + describing that operation.
+
+
Attempt to recover from a "hung" operation by re-dispatching + it to the I/O pipeline if possible.
+
+
Panic the system. This can be used to facilitate automatic fail-over + to a properly configured fail-over partner.
+
+
+
+
=ms + (1 min) (u64)
+
Check time in milliseconds. This defines the frequency at which we check + for hung I/O requests and potentially invoke the + zfs_deadman_failmode behavior.
+
=600000ms + (10 min) (u64)
+
Interval in milliseconds after which the deadman is triggered and also the + interval after which a pool sync operation is considered to be + "hung". Once this limit is exceeded the deadman will be invoked + every zfs_deadman_checktime_ms milliseconds until the + pool sync completes.
+
=ms + (5 min) (u64)
+
Interval in milliseconds after which the deadman is triggered and an + individual I/O operation is considered to be "hung". As long as + the operation remains "hung", the deadman will be invoked every + zfs_deadman_checktime_ms milliseconds until the + operation completes.
+
=0|1 + (int)
+
Enable prefetching dedup-ed blocks which are going to be freed.
+
=60% + (uint)
+
Start to delay each transaction once there is this amount of dirty data, + expressed as a percentage of zfs_dirty_data_max. This + value should be at least + zfs_vdev_async_write_active_max_dirty_percent. + See + ZFS TRANSACTION + DELAY.
+
=500000 + (int)
+
This controls how quickly the transaction delay approaches infinity. + Larger values cause longer delays for a given amount of dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will + smoothly handle between ten times and a tenth of this number. + See + ZFS TRANSACTION + DELAY.

+

zfs_delay_scale × zfs_dirty_data_max must be smaller than 2^64.
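A brief worked example of the guidance above: for a pool expected to sustain roughly 5,000 write operations per second, zfs_delay_scale ≈ 1,000,000,000 / 5,000 = 200,000, which then smoothly handles workloads between roughly 500 and 50,000 operations per second.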

+
+
=0|1 + (int)
+
Disables requirement for IVset GUIDs to be present and match when doing a + raw receive of encrypted datasets. Intended for users whose pools were + created with OpenZFS pre-release versions and now have compatibility + issues.
+
= + (4*10^8) (ulong)
+
Maximum number of uses of a single salt value before generating a new one + for encrypted datasets. The default value is also the maximum.
+
=64 + (uint)
+
Size of the znode hashtable used for holds. +

Due to the need to hold locks on objects that may not exist + yet, kernel mutexes are not created per-object and instead a hashtable + is used where collisions will result in objects waiting when there is + not actually contention on the same object.

+
+
=20/s + (int)
+
Rate limit delay and deadman zevents (which report slow I/O operations) to + this many per second.
+
=1073741824B + (1 GiB) (u64)
+
Upper-bound limit for unflushed metadata changes to be held by the log + spacemap in memory, in bytes.
+
=1000ppm + (0.1%) (u64)
+
Part of overall system memory that ZFS allows to be used for unflushed + metadata changes by the log spacemap, in millionths.
+
=131072 + (128k) (u64)
+
Describes the maximum number of log spacemap blocks allowed for each pool. + The default value means that the space in all the log spacemaps can add up + to no more than 131072 blocks (which means + 16 GiB of logical space before compression and ditto + blocks, assuming that blocksize is 128 KiB). +

This tunable is important because it involves a trade-off between import time after an unclean export and the frequency of flushing metaslabs. The higher this number is, the more log blocks we allow when the pool is active, which means that we flush metaslabs less often and thus decrease the number of I/O operations for spacemap updates per TXG. At the same time though, that means that in the event of an unclean export, there will be more log spacemap blocks for us to read, inducing overhead in the import time of the pool. The lower the number, the more flushing we do, destroying log blocks quicker as they become obsolete faster, which leaves fewer blocks to be read during import time after a crash.

+

Each log spacemap block existing during pool import leads to + approximately one extra logical I/O issued. This is the reason why this + tunable is exposed in terms of blocks rather than space used.

+
+
=1000 + (u64)
+
If the number of metaslabs is small and our incoming rate is high, we + could get into a situation that we are flushing all our metaslabs every + TXG. Thus we always allow at least this many log blocks.
+
=% + (u64)
+
Tunable used to determine the number of blocks that can be used for the + spacemap log, expressed as a percentage of the total number of unflushed + metaslabs in the pool.
+
=1000 + (u64)
+
Tunable limiting maximum time in TXGs any metaslab may remain unflushed. It effectively limits the maximum number of unflushed per-TXG spacemap logs that need to be read after an unclean pool export.
+ +
When enabled, files will not be asynchronously removed from the list of + pending unlinks and the space they consume will be leaked. Once this + option has been disabled and the dataset is remounted, the pending unlinks + will be processed and the freed space returned to the pool. This option is + used by the test suite.
+
= + (ulong)
+
This is used to define a large file for the purposes of deletion. Files containing more than zfs_delete_blocks will be deleted asynchronously, while smaller files are deleted synchronously. Decreasing this value will reduce the time spent in an unlink(2) system call, at the expense of a longer delay before the freed space is available. This only applies on Linux.
+
= + (int)
+
Determines the dirty space limit in bytes. Once this limit is exceeded, + new writes are halted until space frees up. This parameter takes + precedence over zfs_dirty_data_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to + , + capped at zfs_dirty_data_max_max.

+
+
= + (int)
+
Maximum allowable value of zfs_dirty_data_max, expressed + in bytes. This limit is only enforced at module load time, and will be + ignored if zfs_dirty_data_max is later changed. This + parameter takes precedence over + zfs_dirty_data_max_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to min(physical_ram/4, 4GiB), or + min(physical_ram/4, 1GiB) for 32-bit systems.

+
+
=25% + (uint)
+
Maximum allowable value of zfs_dirty_data_max, expressed + as a percentage of physical RAM. This limit is only enforced at module + load time, and will be ignored if zfs_dirty_data_max is + later changed. The parameter zfs_dirty_data_max_max + takes precedence over this one. See + ZFS TRANSACTION + DELAY.
+
=10% + (uint)
+
Determines the dirty space limit, expressed as a percentage of all memory. + Once this limit is exceeded, new writes are halted until space frees up. + The parameter zfs_dirty_data_max takes precedence over + this one. See + ZFS TRANSACTION DELAY. +

Subject to zfs_dirty_data_max_max.
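A brief worked example under the defaults described here: on a system with 64 GiB of RAM, the percentage-based value would be 10% = 6.4 GiB, but zfs_dirty_data_max_max defaults to min(physical_ram/4, 4 GiB) = 4 GiB, so the effective zfs_dirty_data_max is capped at 4 GiB.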

+
+
=20% + (uint)
+
Start syncing out a transaction group if there's at least this much dirty + data (as a percentage of zfs_dirty_data_max). This + should be less than + zfs_vdev_async_write_active_min_dirty_percent.
+
= + (int)
+
The upper limit of write-transaction zil log data size in bytes. Write + operations are throttled when approaching the limit until log data is + cleared out after transaction group sync. Because of some overhead, it + should be set at least 2 times the size of + zfs_dirty_data_max to prevent harming + normal write throughput. It also should be smaller than the size of + the slog device if slog is present. +

Defaults to +

+
+
=% + (uint)
+
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be + preallocated for a file in order to guarantee that later writes will not + run out of space. Instead, fallocate(2) space + preallocation only checks that sufficient space is currently available in + the pool or the user's project quota allocation, and then creates a sparse + file of the requested size. The requested space is multiplied by + zfs_fallocate_reserve_percent to allow additional space + for indirect blocks and other internal metadata. Setting this to + 0 disables support for fallocate(2) + and causes it to return + .
+
=fastest + (string)
+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, + scalar, sse2, + , + avx2, + , + , + and + . + All except fastest and + scalar require instruction set extensions to be + available, and will only appear if ZFS detects that they are present at + runtime. If multiple implementations of fletcher 4 are available, the + fastest will be chosen using a micro benchmark. + Selecting scalar results in the original CPU-based + calculation being used. Selecting any option other than + fastest or + scalar results in vector instructions from the + respective CPU instruction set being used.

+
+
=1|0 + (int)
+
Enable the experimental block cloning feature. If this setting is 0, then + even if feature@block_cloning is enabled, attempts to clone blocks will + act as though the feature is disabled.
+
=fastest + (string)
+
Select a BLAKE3 implementation. +

Supported selectors are: cycle, + fastest, generic, + sse2, + , + avx2, + . + All except cycle, fastest + and generic require + instruction set extensions to be available, and will only appear if ZFS + detects that they are present at runtime. If multiple implementations of + BLAKE3 are available, the fastest will be chosen using a + micro benchmark. You can see the benchmark results by reading this + kstat file: + /proc/spl/kstat/zfs/chksum_bench.
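For example, the benchmark results mentioned above can be read directly, and a specific implementation can be selected by writing the selector to the corresponding module parameter (whose exact name is not shown above and is assumed here to be zfs_blake3_impl):
  cat /proc/spl/kstat/zfs/chksum_bench                    # per-implementation benchmark results
  echo sse2 > /sys/module/zfs/parameters/zfs_blake3_impl  # assumed parameter name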

+
+
=1|0 + (int)
+
Enable/disable the processing of the free_bpobj object.
+
=UINT64_MAX + (unlimited) (u64)
+
Maximum number of blocks freed in a single TXG.
+
= + (10^5) (u64)
+
Maximum number of dedup blocks freed in a single TXG.
+
=3 + (uint)
+
Maximum asynchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum asynchronous read I/O operation active to each device. + See ZFS + I/O SCHEDULER.
+
=60% + (uint)
+
When the pool has more than this much dirty data, use + zfs_vdev_async_write_max_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
+
=30% + (uint)
+
When the pool has less than this much dirty data, use + zfs_vdev_async_write_min_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
+
=10 + (uint)
+
Maximum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Minimum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER. +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of + 2 was chosen as a compromise. A value of + 3 has been shown to improve resilver performance + further at a cost of further increasing latency.
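A brief worked example of the interpolation described above: with the defaults zfs_vdev_async_write_min_active=2 and zfs_vdev_async_write_max_active=10, and the 30%/60% dirty-data thresholds, a pool sitting at 45% of zfs_dirty_data_max (halfway between the thresholds) gets an active async-write limit of roughly 2 + (10 - 2) × 0.5 = 6.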

+
+
=1 + (uint)
+
Maximum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1000 + (uint)
+
The maximum number of I/O operations active to each device. Ideally, this + will be at least the sum of each queue's max_active. + See ZFS + I/O SCHEDULER.
+
=1000 + (uint)
+
Timeout value to wait before determining a device is missing during + import. This is helpful for transient missing paths due to links being + briefly removed and recreated in response to udev events.
+
=3 + (uint)
+
Maximum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Maximum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Minimum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Maximum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Minimum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=5 + (uint)
+
For non-interactive I/O (scrub, resilver, removal, initialize and + rebuild), the number of concurrently-active I/O operations is limited to + , + unless the vdev is "idle". When there are no interactive I/O + operations active (synchronous or otherwise), and + zfs_vdev_nia_delay operations have completed since the + last interactive operation, then the vdev is considered to be + "idle", and the number of concurrently-active non-interactive + operations is increased to zfs_*_max_active. + See ZFS + I/O SCHEDULER.
+
=5 + (uint)
+
Some HDDs tend to prioritize sequential I/O so strongly, that concurrent + random I/O latency reaches several seconds. On some HDDs this happens even + if sequential I/O operations are submitted one at a time, and so setting + zfs_*_max_active= 1 does not help. To + prevent non-interactive I/O, like scrub, from monopolizing the device, no + more than zfs_vdev_nia_credit operations can be sent + while there are outstanding incomplete interactive operations. This + enforced wait ensures the HDD services the interactive I/O within a + reasonable amount of time. See + ZFS I/O SCHEDULER.
+
=1000% + (uint)
+
Maximum number of queued allocations per top-level vdev expressed as a + percentage of zfs_vdev_async_write_max_active, which + allows the system to detect devices that are more capable of handling + allocations and to allocate more blocks to those devices. This allows for + dynamic allocation distribution when devices are imbalanced, as fuller + devices will tend to be slower than empty devices. +

Also see zio_dva_throttle_enabled.

+
+
=32 + (uint)
+
Default queue depth for each vdev IO allocator. Higher values allow for + better coalescing of sequential writes before sending them to the disk, + but can increase transaction commit times.
+
=1 + (uint)
+
Defines if the driver should retire on a given error type. The following + options may be bitwise-ored together: + + + + + + + + + + + + + + + + + + + + + + + + + +
ValueNameDescription
1DeviceNo driver retries on device errors
2TransportNo driver retries on transport errors.
4DriverNo driver retries on driver errors.
+
+
=s + (int)
+
Time before expiring .zfs/snapshot.
+
=0|1 + (int)
+
Allow the creation, removal, or renaming of entries in the + + directory to cause the creation, destruction, or renaming of snapshots. + When enabled, this functionality works both locally and over NFS exports + which have the + + option set.
+
=0 + (int)
+
Set additional debugging flags. The following flags may be bitwise-ored + together: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ValueNameDescription
1ZFS_DEBUG_DPRINTFEnable dprintf entries in the debug log.
*2ZFS_DEBUG_DBUF_VERIFYEnable extra dbuf verifications.
*4ZFS_DEBUG_DNODE_VERIFYEnable extra dnode verifications.
8ZFS_DEBUG_SNAPNAMESEnable snapshot name verification.
*16ZFS_DEBUG_MODIFYCheck for illegally modified ARC buffers.
64ZFS_DEBUG_ZIO_FREEEnable verification of block frees.
128ZFS_DEBUG_HISTOGRAM_VERIFYEnable extra spacemap histogram verifications.
256ZFS_DEBUG_METASLAB_VERIFYVerify space accounting on disk matches in-memory + range_trees.
512ZFS_DEBUG_SET_ERROREnable SET_ERROR and dprintf entries in the debug log.
1024ZFS_DEBUG_INDIRECT_REMAPVerify split blocks created by device removal.
2048ZFS_DEBUG_TRIMVerify TRIM ranges are always within the allocatable range + tree.
4096ZFS_DEBUG_LOG_SPACEMAPVerify that the log summary is consistent with the spacemap log
and enable zfs_dbgmsgs for metaslab loading and + flushing.
+ * Requires debug build.
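For instance, to combine ZFS_DEBUG_DPRINTF (1) and ZFS_DEBUG_SET_ERROR (512) from the table above (a sketch, assuming the standard Linux module-parameter interface):
  echo 513 > /sys/module/zfs/parameters/zfs_flags    # 1 | 512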
+
=0 + (uint)
+
Enables btree verification. The following settings are cumulative:
ValueDescription
1Verify height.
2Verify pointers from children to parent.
3Verify element counts.
4Verify element order. (expensive)
*5Verify unused memory is poisoned. (expensive)
+ * Requires debug build.
+
=0|1 + (int)
+
If destroy encounters an EIO while reading metadata + (e.g. indirect blocks), space referenced by the missing metadata can not + be freed. Normally this causes the background destroy to become + "stalled", as it is unable to make forward progress. While in + this stalled state, all remaining space to free from the + error-encountering filesystem is "temporarily leaked". Set this + flag to cause it to ignore the EIO, permanently leak the + space from indirect blocks that can not be read, and continue to free + everything else that it can. +

The default "stalling" behavior is useful if the + storage partially fails (i.e. some but not all I/O operations fail), and + then later recovers. In this case, we will be able to continue pool + operations while it is partially failed, and when it recovers, we can + continue to free the space, with no leaks. Note, however, that this case + is actually fairly rare.

+

Typically pools either

+
    +
  1. fail completely (but perhaps temporarily, e.g. due to a top-level vdev going offline), or
  2. have localized, permanent errors (e.g. disk returns the wrong data due to bit flip or firmware bug).
+ In the former case, this setting does not matter because the pool will be + suspended and the sync thread will not be able to make forward progress + regardless. In the latter, because the error is permanent, the best we can + do is leak the minimum amount of space, which is what setting this flag + will do. It is therefore reasonable for this flag to normally be set, but + we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.
+
=1000ms + (1s) (uint)
+
During a zfs destroy + operation using the + + feature, a minimum of this much time will be spent working on freeing + blocks per TXG.
+
=500ms + (uint)
+
Similar to zfs_free_min_time_ms, but for cleanup of old + indirection records for removed vdevs.
+
=32768B + (32 KiB) (s64)
+
Largest data block to write to the ZIL. Larger blocks will be treated as + if the dataset being written to had the + = + property set.
+
= + (0xDEADBEEFDEADBEEE) (u64)
+
Pattern written to vdev free space by + zpool-initialize(8).
+
=1048576B + (1 MiB) (u64)
+
Size of writes used by zpool-initialize(8). This option + is used by the test suite.
+
=500000 + (5*10^5) (u64)
+
The threshold size (in block pointers) at which we create a new + sub-livelist. Larger sublists are more costly from a memory perspective + but the fewer sublists there are, the lower the cost of insertion.
+
=% + (int)
+
If the amount of shared space between a snapshot and its clone drops below this threshold, the clone turns off the livelist and reverts to the old deletion method. This is in place because livelists no longer give us a benefit once a clone has been overwritten enough.
+
=0 + (int)
+
Incremented each time an extra ALLOC blkptr is added to a livelist entry + while it is being condensed. This option is used by the test suite to + track race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the synctask — + spa_livelist_condense_sync(). This option is used + by the test suite to trigger race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the open context condensing work in + spa_livelist_condense_cb(). This option is used by + the test suite to trigger race conditions.
+
= + (10^8) (u64)
+
The maximum execution time limit that can be set for a ZFS channel + program, specified as a number of Lua instructions.
+
= + (100 MiB) (u64)
+
The maximum memory limit that can be set for a ZFS channel program, + specified in bytes.
+
=50 + (int)
+
The maximum depth of nested datasets. This value can be tuned temporarily + to fix existing datasets that exceed the predefined limit.
+
=5 + (u64)
+
The number of past TXGs that the flushing algorithm of the log spacemap + feature uses to estimate incoming log blocks.
+
=10 + (u64)
+
Maximum number of rows allowed in the summary of the spacemap log.
+
=16777216 + (16 MiB) (uint)
+
We currently support block sizes from 512 (512 B) + to 16777216 (16 MiB). The + benefits of larger blocks, and thus larger I/O, need to be weighed against + the cost of COWing a giant block to modify one byte. Additionally, very + large blocks can have an impact on I/O latency, and also potentially on + the memory allocator. Therefore, we formerly forbade creating blocks + larger than 1M. Larger blocks could be created by changing it, and pools + with larger blocks can always be imported and used, regardless of this + setting.
+
=0|1 + (int)
+
Allow datasets received with redacted send/receive to be mounted. Normally + disabled because these datasets may be missing key data.
+
=1 + (u64)
+
Minimum number of metaslabs to flush per dirty TXG.
+
=% + (uint)
+
Allow metaslabs to keep their active state as long as their fragmentation + percentage is no more than this value. An active metaslab that exceeds + this threshold will no longer keep its active status allowing better + metaslabs to be selected.
+
=% + (uint)
+
Metaslab groups are considered eligible for allocations if their + fragmentation metric (measured as a percentage) is less than or equal to + this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also + crossed this threshold.
+
=0% + (uint)
+
Defines a threshold at which metaslab groups should be eligible for + allocations. The value is expressed as a percentage of free space beyond + which a metaslab group is always eligible for allocations. If a metaslab + group's free space is less than or equal to the threshold, the allocator + will avoid allocating to that group unless all groups in the pool have + reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of + 0 disables the feature and causes all metaslab groups to + be eligible for allocations. +

This parameter allows one to deal + with pools having heavily imbalanced vdevs such as would be the case + when a new vdev has been added. Setting the threshold to a non-zero + percentage will stop allocations from being made to vdevs that aren't + filled to the specified percentage and allow lesser filled vdevs to + acquire more allocations than they otherwise would under the old + + facility.

+
+
=1|0 + (int)
+
If enabled, ZFS will place DDT data into the special allocation + class.
+
=1|0 + (int)
+
If enabled, ZFS will place user data indirect blocks into the special + allocation class.
+
=0 + (uint)
+
Historical statistics for this many latest multihost updates will be + available in + /proc/spl/kstat/zfs/pool/multihost.
+
=1000ms + (1 s) (u64)
+
Used to control the frequency of multihost writes which are performed when + the + + pool property is on. This is one of the factors used to determine the + length of the activity check during import. +

The multihost write period is + zfs_multihost_interval / + . + On average a multihost write will be issued for each leaf vdev every + zfs_multihost_interval milliseconds. In practice, the + observed period can vary with the I/O load and this observed value is + the delay which is stored in the uberblock.

+
+
=20 + (uint)
+
Used to control the duration of the activity test on import. Smaller + values of zfs_multihost_import_intervals will reduce the + import time but increase the risk of failing to detect an active pool. The + total activity check time is never allowed to drop below one second. +

On import the activity check waits a minimum amount of time + determined by zfs_multihost_interval + × + zfs_multihost_import_intervals, or the same product + computed on the host which last had the pool imported, whichever is + greater. The activity check time may be further extended if the value of + MMP delay found in the best uberblock indicates actual multihost updates + happened at longer intervals than + zfs_multihost_interval. A minimum of 100 + ms is enforced.

+

0 is equivalent to + 1.

+
+
=10 + (uint)
+
Controls the behavior of the pool when multihost write failures or delays + are detected. +

When 0, multihost write failures or delays + are ignored. The failures will still be reported to the ZED which + depending on its configuration may take action such as suspending the + pool or offlining a device.

+

Otherwise, the pool will be suspended if + zfs_multihost_fail_intervals + × + zfs_multihost_interval milliseconds pass without a + successful MMP write. This guarantees the activity test will see MMP + writes if the pool is imported. 1 is + equivalent to 2; this is necessary to prevent + the pool from being suspended due to normal, small I/O latency + variations.
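A brief worked example with the defaults above: zfs_multihost_interval=1000 ms and zfs_multihost_import_intervals=20 give a minimum activity-check time of 1000 ms × 20 = 20 s on import, and with zfs_multihost_fail_intervals=10 the pool is suspended after 10 × 1000 ms = 10 s without a successful MMP write.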

+
+
=0|1 + (int)
+
Set to disable scrub I/O. This results in scrubs not actually scrubbing + data and simply doing a metadata crawl of the pool instead.
+
=0|1 + (int)
+
Set to disable block prefetching for scrubs.
+
=0|1 + (int)
+
Disable cache flush operations on disks when writing. Setting this will + cause pool corruption on power loss if a volatile out-of-order write cache + is enabled.
+
=1|0 + (int)
+
Allow no-operation writes. The occurrence of nopwrites will further depend + on other pool properties (i.a. the checksumming and compression + algorithms).
+
=1|0 + (int)
+
Enable forcing TXG sync to find holes. When enabled forces ZFS to sync + data when + + or + + flags are used allowing holes in a file to be accurately reported. When + disabled holes will not be reported in recently dirtied files.
+
=B + (50 MiB) (int)
+
The number of bytes which should be prefetched during a pool traversal, + like zfs send or other + data crawling operations.
+
=32 + (uint)
+
The number of blocks pointed by indirect (non-L0) block which should be + prefetched during a pool traversal, like zfs + send or other data crawling operations.
+
=30% + (u64)
+
Control percentage of dirtied indirect blocks from frees allowed into one + TXG. After this threshold is crossed, additional frees will wait until the + next TXG. 0 disables this + throttle.
+
=0|1 + (int)
+
Disable predictive prefetch. Note that it leaves "prescient" + prefetch (for, e.g., zfs + send) intact. Unlike predictive prefetch, + prescient prefetch never issues I/O that ends up not being needed, so it + can't hurt performance.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for SHA256 checksums. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for gzip compression. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for AES-GCM encryption. May be unset + after the ZFS modules have been loaded to initialize the QAT hardware as + long as support is compiled in and the QAT driver is present.
+
=1048576B + (1 MiB) (u64)
+
Bytes to read per chunk.
+
=0 + (uint)
+
Historical statistics for this many latest reads will be available in + /proc/spl/kstat/zfs/pool/reads.
+
=0|1 + (int)
+
Include cache hits in read history
+
=1048576B + (1 MiB) (u64)
+
Maximum read segment size to issue when sequentially resilvering a + top-level vdev.
+
=1|0 + (int)
+
Automatically start a pool scrub when the last active sequential resilver + completes in order to verify the checksums of all blocks which have been + resilvered. This is enabled by default and strongly recommended.
+
=67108864B + (64 MiB) (u64)
+
Maximum amount of I/O that can be concurrently issued for a sequential + resilver per leaf device, given in bytes.
+
=4096 + (int)
+
If an indirect split block contains more than this many possible unique + combinations when being reconstructed, consider it too computationally + expensive to check them all. Instead, try at most this many randomly + selected combinations each time the block is accessed. This allows all + segment copies to participate fairly in the reconstruction when all + combinations cannot be checked and prevents repeated use of one bad + copy.
+
=0|1 + (int)
+
Set to attempt to recover from fatal errors. This should only be used as a + last resort, as it typically results in leaked space, or worse.
+
=0|1 + (int)
+
Ignore hard I/O errors during device removal. When set, if a device + encounters a hard I/O error during the removal process the removal will + not be cancelled. This can result in a normally recoverable block becoming + permanently damaged and is hence not recommended. This should only be used + as a last resort when the pool cannot be returned to a healthy state prior + to removing the device.
+
=0|1 + (uint)
+
This is used by the test suite so that it can ensure that certain actions + happen while in the middle of a removal.
+
=16777216B + (16 MiB) (uint)
+
The largest contiguous segment that we will attempt to allocate when + removing a device. If there is a performance problem with attempting to + allocate large blocks, consider decreasing this. The default value is also + the maximum.
+
=0|1 + (int)
+
Ignore the + + feature, causing an operation that would start a resilver to immediately + restart the one in progress.
+
=ms + (3 s) (uint)
+
Resilvers are processed by the sync thread. While resilvering, it will + spend at least this much time working on a resilver between TXG + flushes.
+
=0|1 + (int)
+
If set, remove the DTL (dirty time list) upon completion of a pool scan + (scrub), even if there were unrepairable errors. Intended to be used + during pool repair or recovery to stop resilvering when the pool is next + imported.
+
=1|0 + (int)
+
Automatically start a pool scrub after a RAIDZ expansion completes in + order to verify the checksums of all blocks which have been copied during + the expansion. This is enabled by default and strongly recommended.
+
=1000ms + (1 s) (uint)
+
Scrubs are processed by the sync thread. While scrubbing, it will spend at + least this much time working on a scrub between TXG flushes.
+
=4096 + (uint)
+
Error blocks to be scrubbed in one txg.
+
=s + (2 hour) (uint)
+
To preserve progress across reboots, the sequential scan algorithm + periodically needs to stop metadata scanning and issue all the + verification I/O to disk. The frequency of this flushing is determined by + this tunable.
+
=3 + (uint)
+
This tunable affects how scrub and resilver I/O segments are ordered. A + higher number indicates that we care more about how filled in a segment + is, while a lower number indicates we care more about the size of the + extent without considering the gaps within a segment. This value is only + tunable upon module insertion. Changing the value afterwards will have no + effect on scrub or resilver performance.
+
=0 + (uint)
+
Determines the order that data will be verified while scrubbing or resilvering:
1	Data will be verified as sequentially as possible, given the amount of memory reserved for scrubbing (see zfs_scan_mem_lim_fact). This may improve scrub performance if the pool's data is very fragmented.
2	The largest mostly-contiguous chunk of found data will be verified first. By deferring scrubbing of small segments, we may later find adjacent data to coalesce and increase the segment size.
0	Use strategy 1 during normal verification and strategy 2 while taking a checkpoint.
+
+
+
+
=0|1 + (int)
+
If unset, indicates that scrubs and resilvers will gather metadata in + memory before issuing sequential I/O. Otherwise indicates that the legacy + algorithm will be used, where I/O is initiated as soon as it is + discovered. Unsetting will not affect scrubs or resilvers that are already + in progress.
+
=B + (2 MiB) (int)
+
Sets the largest gap in bytes between scrub/resilver I/O operations that + will still be considered sequential for sorting purposes. Changing this + value will not affect scrubs or resilvers that are already in + progress.
+
=20^-1 + (uint)
+
Maximum fraction of RAM used for I/O sorting by sequential scan algorithm. + This tunable determines the hard limit for I/O sorting memory usage. When + the hard limit is reached we stop scanning metadata and start issuing data + verification I/O. This is done until we get below the soft limit.
+
=20^-1 + (uint)
+
The fraction of the hard limit used to determine the soft limit for I/O sorting by the sequential scan algorithm. When we cross this limit from below, no action is taken. When we cross this limit from above, it is because we are issuing verification I/O. In this case (unless the metadata scan is done) we stop issuing verification I/O and start scanning metadata again until we get to the hard limit.
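A minimal worked example, assuming 64 GiB of RAM with the hard-limit divisor (zfs_scan_mem_lim_fact) and this soft-limit divisor both left at their default of 20:
    echo $(( 64 * 1024 / 20 )) MiB         # hard limit: 3276 MiB of sorting memory
    echo $(( 64 * 1024 / 20 / 20 )) MiB    # soft limit: 163 MiB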
+
=0|1 + (uint)
+
When reporting resilver throughput and estimated completion time use the + performance observed over roughly the last + zfs_scan_report_txgs TXGs. When set to zero performance + is calculated over the time between checkpoints.
+
=0|1 + (int)
+
Enforce tight memory limits on pool scans when a sequential scan is in + progress. When disabled, the memory limit may be exceeded by fast + disks.
+
=0|1 + (int)
+
Freezes a scrub/resilver in progress without actually pausing it. Intended + for testing/debugging.
+
=16777216B + (16 MiB) (int)
+
Maximum amount of data that can be concurrently issued at once for scrubs + and resilvers per leaf device, given in bytes.
+
=0|1 + (int)
+
Allow sending of corrupt data (ignore read/checksum errors when + sending).
+
=1|0 + (int)
+
Include unmodified spill blocks in the send stream. Under certain + circumstances, previous versions of ZFS could incorrectly remove the spill + block from an existing object. Including unmodified copies of the spill + blocks creates a backwards-compatible stream which will recreate a spill + block if it was incorrectly removed.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + send internal queues. The fill fraction controls + the timing with which internal threads are woken up.
+
=1048576B + (1 MiB) (uint)
+
The maximum number of bytes allowed in zfs + send's internal queues.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + send prefetch queue. The fill fraction controls + the timing with which internal threads are woken up.
+
=16777216B + (16 MiB) (uint)
+
The maximum number of bytes that will be prefetched by zfs send. This value must be at least twice the maximum block size in use.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + receive queue. The fill fraction controls the + timing with which internal threads are woken up.
+
=16777216B + (16 MiB) (uint)
+
The maximum number of bytes allowed in the zfs + receive queue. This value must be at least twice + the maximum block size in use.
+
=1048576B + (1 MiB) (uint)
+
The maximum amount of data, in bytes, that zfs + receive will write in one DMU transaction. This is + the uncompressed size, even when receiving a compressed send stream. This + setting will not reduce the write size below a single block. Capped at a + maximum of 32 MiB.
+
=0 + (int)
+
When this variable is set to non-zero a corrective receive:
  1. Does not enforce the restriction of source & destination snapshot GUIDs matching.
  2. If there is an error during healing, the healing receive is not terminated; instead it moves on to the next record.
+
+
=0|1 + (uint)
+
Setting this variable overrides the default logic for estimating block + sizes when doing a zfs + send. The default heuristic is that the average + block size will be the current recordsize. Override this value if most + data in your dataset is not of that size and you require accurate zfs send + size estimates.
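For reference, the estimate this heuristic produces can be checked without transferring anything by using a dry run (the dataset and snapshot names here are hypothetical):
    zfs send -nv tank/data@snap1    # prints only the estimated stream size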
+
=2 + (uint)
+
Flushing of data to disk is done in passes. Defer frees starting in this + pass.
+
=16777216B + (16 MiB) (int)
+
Maximum memory used for prefetching a checkpoint's space map on each vdev + while discarding the checkpoint.
+
=25% + (uint)
+
Only allow small data blocks to be allocated on the special and dedup vdev + types when the available free space percentage on these vdevs exceeds this + value. This ensures reserved space is available for pool metadata as the + special vdevs approach capacity.
+
=8 + (uint)
+
Starting in this sync pass, disable compression (including of metadata). + With the default setting, in practice, we don't have this many sync + passes, so this has no effect. +

The original intent was that disabling compression would help + the sync passes to converge. However, in practice, disabling compression + increases the average number of sync passes; because when we turn + compression off, many blocks' size will change, and thus we have to + re-allocate (not overwrite) them. It also increases the number of + 128 KiB allocations (e.g. for indirect blocks and + spacemaps) because these will not be compressed. The 128 + KiB allocations are especially detrimental to performance on highly + fragmented systems, which may have very few free segments of this size, + and may need to load new metaslabs to satisfy these allocations.

+
+
=2 + (uint)
+
Rewrite new block pointers starting in this pass.
+
=134217728B + (128 MiB) (uint)
+
Maximum size of TRIM command. Larger ranges will be split into chunks no + larger than this value before issuing.
+
=32768B + (32 KiB) (uint)
+
Minimum size of TRIM commands. TRIM ranges smaller than this will be + skipped, unless they're part of a larger range which was chunked. This is + done because it's common for these small TRIMs to negatively impact + overall performance.
+
=0|1 + (uint)
+
Skip uninitialized metaslabs during the TRIM process. This option is + useful for pools constructed from large thinly-provisioned devices where + TRIM operations are slow. As a pool ages, an increasing fraction of the + pool's metaslabs will be initialized, progressively degrading the + usefulness of this option. This setting is stored when starting a manual + TRIM and will persist for the duration of the requested TRIM.
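As a usage sketch (the pool name is hypothetical), a manual TRIM that honours this setting can be started and monitored with:
    zpool trim tank          # start a manual TRIM of all eligible vdevs
    zpool status -t tank     # show per-vdev TRIM progress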
+
=10 + (uint)
+
Maximum number of queued TRIMs outstanding per leaf vdev. The number of + concurrent TRIM commands issued to the device is controlled by + zfs_vdev_trim_min_active and + zfs_vdev_trim_max_active.
+
=32 + (uint)
+
The number of transaction groups' worth of frees which should be + aggregated before TRIM operations are issued to the device. This setting + represents a trade-off between issuing larger, more efficient TRIM + operations and the delay before the recently trimmed space is available + for use by the device. +


Increasing this value will allow frees to be aggregated for a longer time. This will result in larger TRIM operations and potentially increased memory usage. Decreasing this value will have the opposite effect. The default of 32 was determined to be a reasonable compromise.

+
+
=0 + (uint)
+
Historical statistics for this many latest TXGs will be available in + /proc/spl/kstat/zfs/pool/TXGs.
+
=5s + (uint)
+
Flush dirty data to disk at least every this many seconds (maximum TXG + duration).
+
=1048576B + (1 MiB) (uint)
+
Max vdev I/O aggregation size.
+
=131072B + (128 KiB) (uint)
+
Max vdev I/O aggregation size for non-rotating media.
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation when an I/O operation immediately follows its predecessor on rotational vdevs, for the purpose of selecting the least busy mirror member based on load.
+
=5 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member when an I/O + operation lacks locality as defined by + zfs_vdev_mirror_rotating_seek_offset. Operations within + this that are not immediately following the previous operation are + incremented by half.
+
=1048576B + (1 MiB) (int)
+
The maximum distance for the last queued I/O operation in which the + balancing algorithm considers an operation to have locality. + See ZFS + I/O SCHEDULER.
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/O operations do not immediately follow one + another.
+
=1 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member when an I/O + operation lacks locality as defined by the + zfs_vdev_mirror_rotating_seek_offset. Operations within + this that are not immediately following the previous operation are + incremented by half.
+
=32768B + (32 KiB) (uint)
+
Aggregate read I/O operations if the on-disk gap between them is within + this threshold.
+
=4096B + (4 KiB) (uint)
+
Aggregate write I/O operations if the on-disk gap between them is within + this threshold.
+
=fastest + (string)
+
Select the raidz parity implementation to use. +

Variants that don't depend on CPU-specific features may be + selected on module load, as they are supported on all systems. The + remaining options may only be set after the module is loaded, as they + are available only if the implementations are compiled in and supported + on the running system.

+

Once the module is loaded, + /sys/module/zfs/parameters/zfs_vdev_raidz_impl + will show the available options, with the currently selected one + enclosed in square brackets.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
fastestselected by built-in benchmark
originaloriginal implementation
scalarscalar implementation
sse2SSE2 instruction set64-bit x86
ssse3SSSE3 instruction set64-bit x86
avx2AVX2 instruction set64-bit x86
avx512fAVX512F instruction set64-bit x86
avx512bwAVX512F & AVX512BW instruction sets64-bit x86
aarch64_neonNEONAarch64/64-bit ARMv8
aarch64_neonx2NEON with more unrollingAarch64/64-bit ARMv8
powerpc_altivecAltivecPowerPC
+
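For example, the selection can be inspected and changed at runtime through the sysfs path mentioned above (avx2 here is only an illustration; choose an option that your system actually lists as available):
    cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl     # current selection shown in [brackets]
    echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl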
+
+ (charp)
+
. + Prints warning to kernel log for compatibility.
+
=512 + (uint)
+
Max event queue length. Events in the queue can be viewed with + zpool-events(8).
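For reference, queued events can be listed and the queue cleared with zpool-events(8):
    zpool events -v    # print queued events with full payloads
    zpool events -c    # clear the event queue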
+
=2000 + (int)
+
Maximum recent zevent records to retain for duplicate checking. Setting + this to 0 disables duplicate detection.
+
=s + (15 min) (int)
+
Lifespan for a recent ereport that was retained for duplicate + checking.
+
=1048576 + (int)
+
The maximum number of taskq entries that are allowed to be cached. When + this limit is exceeded transaction records (itxs) will be cleaned + synchronously.
+
= + (int)
+
The number of taskq entries that are pre-populated when the taskq is first + created and are immediately available for use.
+
=100% + (int)
+
This controls the number of threads used by + . + The default value of + + will create a maximum of one thread per cpu.
+
=131072B + (128 KiB) (uint)
+
This sets the maximum block size used by the ZIL. On very fragmented + pools, lowering this (typically to + ) can + improve performance.
+
=B + (7.5 KiB) (uint)
+
This sets the maximum number of write bytes logged via WR_COPIED. It tunes + a tradeoff between additional memory copy and possibly worse log space + efficiency vs additional range lock/unlock.
+
=0|1 + (int)
+
Disable the cache flush commands that are normally sent to disk by the ZIL + after an LWB write has completed. Setting this will cause ZIL corruption + on power loss if a volatile out-of-order write cache is enabled.
+
=0|1 + (int)
+
Disable intent logging replay. Can be disabled for recovery from corrupted + ZIL.
+
=67108864B + (64 MiB) (u64)
+
Limit SLOG write size per commit executed with synchronous priority. Any + writes above that will be executed with lower (asynchronous) priority to + limit potential SLOG device abuse by single active ZIL writer.
+
=1|0 + (int)
+
Setting this tunable to zero disables ZIL logging of new + = + records if the + + feature is enabled on the pool. This would only be necessary to work + around bugs in the ZIL logging or replay code for this record type. The + tunable has no effect if the feature is disabled.
+
=64 + (uint)
+
Usually, one metaslab from each normal-class vdev is dedicated for use by + the ZIL to log synchronous writes. However, if there are fewer than + zfs_embedded_slog_min_ms metaslabs in the vdev, this + functionality is disabled. This ensures that we don't set aside an + unreasonable amount of space for the ZIL.
+
=1 + (uint)
+
Whether the heuristic for detecting incompressible data with zstd levels >= 3, using LZ4 and zstd-1 passes, is enabled.
+
=131072 + (uint)
+
Minimum uncompressed size (inclusive) of a record before the early abort heuristic will be attempted.
+
=0|1 + (int)
+
If non-zero, the zio deadman will produce debugging messages (see + zfs_dbgmsg_enable) for all zios, rather than only for + leaf zios possessing a vdev. This is meant to be used by developers to + gain diagnostic information for hang conditions which don't involve a + mutex or other locking primitive: typically conditions in which a thread + in the zio pipeline is looping indefinitely.
+
=ms + (30 s) (int)
+
When an I/O operation takes more than this much time to complete, it's + marked as slow. Each slow operation causes a delay zevent. Slow I/O + counters can be seen with zpool + status -s.
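As a quick usage sketch (the pool name is hypothetical), the per-vdev slow I/O counters and the corresponding delay zevents can be examined with:
    zpool status -s tank    # show per-vdev slow I/O counts
    zpool events            # delay zevents appear here as they are posted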
+
=1|0 + (int)
+
Throttle block allocations in the I/O pipeline. This allows for dynamic + allocation distribution when devices are imbalanced. When enabled, the + maximum number of pending allocations per top-level vdev is limited by + zfs_vdev_queue_depth_pct.
+
=0|1 + (int)
+
Control the naming scheme used when setting new xattrs in the user + namespace. If 0 (the default on Linux), user namespace + xattr names are prefixed with the namespace, to be backwards compatible + with previous versions of ZFS on Linux. If 1 (the + default on FreeBSD), user namespace xattr names + are not prefixed, to be backwards compatible with previous versions of ZFS + on illumos and FreeBSD. +

Either naming scheme can be read on this and future versions + of ZFS, regardless of this tunable, but legacy ZFS on illumos or + FreeBSD are unable to read user namespace xattrs + written in the Linux format, and legacy versions of ZFS on Linux are + unable to read user namespace xattrs written in the legacy ZFS + format.

+

An existing xattr with the alternate naming scheme is removed + when overwriting the xattr so as to not accumulate duplicates.

+
+
=0|1 + (int)
+
Prioritize requeued I/O.
+
=% + (uint)
+
Percentage of online CPUs which will run a worker thread for I/O. These + workers are responsible for I/O work such as compression and checksum + calculations. Fractional number of CPUs will be rounded down. +

The default value of + was chosen to + avoid using all CPUs which can result in latency issues and inconsistent + application performance, especially when slower compression and/or + checksumming is enabled.

+
+
=0 + (uint)
+
Number of worker threads per taskq. Lower values improve I/O ordering and + CPU utilization, while higher reduces lock contention. +

If 0, generate a system-dependent value + close to 6 threads per taskq.

+
+
=0 + (uint)
+
Determines the number of CPUs to run write issue taskqs. +

When 0 (the default), the value to use is computed internally + as the number of actual CPUs in the system divided by the + spa_num_allocators value.

+
+
= (charp)
+
Set the queue and thread configuration for the IO read queues. This is an + advanced debugging parameter. Don't change this unless you understand what + it does.
+
= (charp)
+
Set the queue and thread configuration for the IO write queues. This is an + advanced debugging parameter. Don't change this unless you understand what + it does.
+
=0|1 + (uint)
+
Do not create zvol device nodes. This may slightly improve startup time on + systems with a very large number of zvols.
+
= + (uint)
+
Major number for zvol block devices.
+
= + (long)
+
Discard (TRIM) operations done on zvols will be done in batches of this + many blocks, where block size is determined by the + volblocksize property of a zvol.
+
=131072B + (128 KiB) (uint)
+
When adding a zvol to the system, prefetch this many bytes from the start + and end of the volume. Prefetching these regions of the volume is + desirable, because they are likely to be accessed immediately by + blkid(8) or the kernel partitioner.
+
=0|1 + (uint)
+
When processing I/O requests for a zvol, submit them synchronously. This + effectively limits the queue depth to 1 for each I/O + submitter. When unset, requests are handled asynchronously by a thread + pool. The number of requests which can be handled concurrently is + controlled by zvol_threads. + zvol_request_sync is ignored when running on a kernel + that supports block multiqueue (blk-mq).
+
=0 + (uint)
+
The number of system wide threads to use for processing zvol block IOs. If + 0 (the default) then internally set + zvol_threads to the number of CPUs present or 32 + (whichever is greater).
+
=0 + (uint)
+
The number of threads per zvol to use for queuing IO requests. This + parameter will only appear if your kernel supports + blk-mq and is only read and assigned to a zvol at + zvol load time. If 0 (the default) then internally set + zvol_blk_mq_threads to the number of CPUs present.
+
=0|1 + (uint)
+
Set to 1 to use the blk-mq API + for zvols. Set to 0 (the default) to use the legacy zvol + APIs. This setting can give better or worse zvol performance depending on + the workload. This parameter will only appear if your kernel supports + blk-mq and is only read and assigned to a zvol at + zvol load time.
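A minimal sketch of enabling this at module load time, since the value is only applied when a zvol is loaded (the modprobe configuration path shown is the conventional one on most Linux distributions):
    # /etc/modprobe.d/zfs.conf
    options zfs zvol_use_blk_mq=1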
+
=8 + (uint)
+
If zvol_use_blk_mq is enabled, then process this number of volblocksize-sized blocks per zvol thread. This tunable can be used to favor better performance for zvol reads (lower values) or writes (higher values). If set to 0, then the zvol layer will process the maximum number of blocks per thread that it can. This parameter will only appear if your kernel supports blk-mq and is only applied at each zvol's load time.
+
=0 + (uint)
+
The queue_depth value for the zvol blk-mq + interface. This parameter will only appear if your kernel supports + blk-mq and is only applied at each zvol's load + time. If 0 (the default) then use the kernel's default + queue depth. Values are clamped to the kernel's + BLKDEV_MIN_RQ and + BLKDEV_MAX_RQ/BLKDEV_DEFAULT_RQ + limits.
+
=1 + (uint)
+
Defines zvol block devices behaviour when + =: + +
+
=0|1 + (uint)
+
Enable strict ZVOL quota enforcement. The strict quota enforcement may + have a performance impact.
+
+
+
+

+

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/O operations. The scheduler determines when and in what order those + operations are issued. The scheduler divides operations into five I/O + classes, prioritized in the following order: sync read, sync write, async + read, async write, and scrub/resilver. Each queue defines the minimum and + maximum number of concurrent operations that may be issued to the device. In + addition, the device has an aggregate maximum, + zfs_vdev_max_active. Note that the sum of the per-queue + minima must not exceed the aggregate maximum. If the sum of the per-queue + maxima exceeds the aggregate maximum, then the number of active operations + may reach zfs_vdev_max_active, in which case no further + operations will be issued, regardless of whether all per-queue minima have + been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Furthermore, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been + hit, or if there are no operations queued for an I/O class that has not hit + its maximum. Every time an I/O operation is queued or an operation + completes, the scheduler looks for new operations to issue.

+

In general, smaller max_actives will lead to + lower latency of synchronous operations. Larger + max_actives may lead to higher overall throughput, + depending on underlying storage.

+

The ratio of the queues' max_actives determines + the balance of performance between reads, writes, and scrubs. For example, + increasing zfs_vdev_scrub_max_active will cause the scrub + or resilver to complete more quickly, but reads and writes to have higher + latency and lower throughput.

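As a hedged illustration of that trade-off, temporarily biasing the scheduler toward scrub completion on a running system could look like this (the value 8 is illustrative only):
    echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active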
+

All I/O classes have a fixed maximum number of outstanding + operations, except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically, + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write operations + according to the amount of dirty data in the pool. Since both throughput and + latency typically increase with the number of concurrent operations issued + to physical devices, reducing the burstiness in the number of simultaneous + operations also stabilizes the response time of operations from other + queues, in particular synchronous ones. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there is + more dirty data in the pool.

+
+

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points:

+
+
       |              o---------| <-- zfs_vdev_async_write_max_active
+  ^    |             /^         |
+  |    |            / |         |
+active |           /  |         |
+ I/O   |          /   |         |
+count  |         /    |         |
+       |        /     |         |
+       |-------o      |         | <-- zfs_vdev_async_write_min_active
+      0|_______^______|_________|
+       0%      |      |       100% of zfs_dirty_data_max
+               |      |
+               |      `-- zfs_vdev_async_write_active_max_dirty_percent
+               `--------- zfs_vdev_async_write_active_min_dirty_percent
+
+

Until the amount of dirty data exceeds a minimum percentage of the + dirty data allowed in the pool, the I/O scheduler will limit the number of + concurrent operations to the minimum. As that threshold is crossed, the + number of concurrent operations issued increases linearly to the maximum at + the specified maximum percentage of the dirty data allowed in the pool.

+

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it + exceeds the maximum percentage, this indicates that the rate of incoming + data is greater than the rate that the backend storage can handle. In this + case, we must further throttle incoming writes, as described in the next + section.

+
+
+
+

+

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as

+
min_time = min(zfs_delay_scale + × (dirty + - + ) / + ( + - dirty), 100ms)
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be + at or above zfs_vdev_async_write_active_max_dirty_percent, + so that we only start to delay after writing at full speed has failed to + keep up with the incoming write rate. The scale of the curve is defined by + zfs_delay_scale. Roughly speaking, this variable + determines the amount of delay at the midpoint of the curve.

+
+
delay
+ 10ms +-------------------------------------------------------------*+
+      |                                                             *|
+  9ms +                                                             *+
+      |                                                             *|
+  8ms +                                                             *+
+      |                                                            * |
+  7ms +                                                            * +
+      |                                                            * |
+  6ms +                                                            * +
+      |                                                            * |
+  5ms +                                                           *  +
+      |                                                           *  |
+  4ms +                                                           *  +
+      |                                                           *  |
+  3ms +                                                          *   +
+      |                                                          *   |
+  2ms +                                              (midpoint) *    +
+      |                                                  |    **     |
+  1ms +                                                  v ***       +
+      |             zfs_delay_scale ---------->     ********         |
+    0 +-------------------------------------*********----------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note, that since the delay is added to the outstanding time + remaining on the most recent transaction it's effectively the inverse of + IOPS. Here, the midpoint of 500 us translates to + 2000 IOPS. The shape of the curve was chosen such that + small changes in the amount of accumulated dirty data in the first three + quarters of the curve yield relatively small differences in the amount of + delay.

+

The effects can be easier to understand when the amount of delay + is represented on a logarithmic scale:

+
+
delay
+100ms +-------------------------------------------------------------++
+      +                                                              +
+      |                                                              |
+      +                                                             *+
+ 10ms +                                                             *+
+      +                                                           ** +
+      |                                              (midpoint)  **  |
+      +                                                  |     **    +
+  1ms +                                                  v ****      +
+      +             zfs_delay_scale ---------->        *****         +
+      |                                             ****             |
+      +                                          ****                +
+100us +                                        **                    +
+      +                                       *                      +
+      |                                      *                       |
+      +                                     *                        +
+ 10us +                                     *                        +
+      +                                                              +
+      |                                                              |
+      +                                                              +
+      +--------------------------------------------------------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the back-end storage, and then by changing the value + of zfs_delay_scale to increase the steepness of the + curve.

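A hedged tuning sketch along those lines, adjusting both knobs through sysfs (the values are illustrative only, not recommendations):
    echo 70 > /sys/module/zfs/parameters/zfs_delay_min_dirty_percent    # begin delaying later
    echo 1000000 > /sys/module/zfs/parameters/zfs_delay_scale           # roughly 1 ms of delay at the midpoint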
+
+
+ + + + + +
July 21, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/5/index.html b/man/master/5/index.html new file mode 100644 index 000000000..3849f9324 --- /dev/null +++ b/man/master/5/index.html @@ -0,0 +1,147 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/5/vdev_id.conf.5.html b/man/master/5/vdev_id.conf.5.html new file mode 100644 index 000000000..c86fa5792 --- /dev/null +++ b/man/master/5/vdev_id.conf.5.html @@ -0,0 +1,367 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
VDEV_ID.CONF(5)File Formats ManualVDEV_ID.CONF(5)
+
+
+

+

vdev_id.conf — + configuration file for vdev_id(8)

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of + vdev_id(8) while it is mapping a disk device name to an + alias.

+

The vdev_id.conf file uses a simple format + consisting of a keyword followed by one or more values on a single line. Any + line not beginning with a recognized keyword is ignored. Comments may + optionally begin with a hash character.

+

The following keywords and values are used.

+
+
+ name devlink
+
Maps a device link in the /dev directory hierarchy + to a new device name. The udev rule defining the device link must have run + prior to vdev_id(8). A defined alias takes precedence + over a topology-derived name, but the two naming methods can otherwise + coexist. For example, one might name drives in a JBOD with the + sas_direct topology while naming an internal L2ARC + device with an alias. +


name is the name of the link to the device that will be created under /dev/disk/by-vdev.

+

devlink is the name of the device link + that has already been defined by udev. This may be an absolute path or + the base filename.

+
+
+ [pci_slot] port + name
+
Maps a physical path to a channel name (typically representing a single + disk enclosure).
+ +
Additionally create /dev/by-enclosure symlinks to + the disk enclosure + devices + using the naming scheme from vdev_id.conf. + enclosure_symlinks is only allowed for + sas_direct mode.
+ +
Specify the prefix for the enclosure symlinks in the form + /dev/by-enclosure/prefix⟩-⟨channel⟩⟨num⟩ +

Defaults to + “”.

+
+
+ prefix new + [channel]
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is + specified then the mapping is only applied to slots in the named channel, + otherwise the mapping is applied to all channels. The first-specified + slot rule that can match a slot takes precedence. + Therefore a channel-specific mapping for a given slot should generally + appear before a generic mapping for the same slot. In this way a custom + mapping may be applied to a particular channel and a default mapping + applied to the others.
+
+ yes|no
+
Specifies whether vdev_id(8) will handle only + dm-multipath devices. If set to yes then + vdev_id(8) will examine the first running component disk + of a dm-multipath device as provided by the driver command to determine + the physical path.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
channels are uniquely identified by a SAS switch port number
+
+
+
+ num
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to + determine which HBA or switch port a device is connected to. The default + is .
+
+ bay|phy|port|id|lun|ses
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay: +
+
+
read the slot number from the bay identifier.
+
+
read the slot number from the phy identifier.
+
+
use the SAS port as the slot number.
+
+
use the scsi id as the slot number.
+
+
use the scsi lun as the slot number.
+
+
use the SCSI Enclosure Services (SES) enclosure device slot number, as + reported by sg_ses(8). Intended for use only on + systems where bay is unsupported, noting that + port and id may be unstable across + disk replacement.
+
+
+
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping:

+
+
multipath     no
+topology      sas_direct
+phys_per_port 4
+slot          bay
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         C
+channel 86:00.0  0         D
+
+# Custom mapping for Channel A
+
+#    Linux      Mapped
+#    Slot       Slot      Channel
+slot 1          7         A
+slot 2          10        A
+slot 3          3         A
+slot 4          6         A
+
+# Default mapping for B, C, and D
+
+slot 1          4
+slot 2          2
+slot 3          1
+slot 4          3
+
+

A SAS-switch topology. Note that the channel keyword takes only two arguments in this example:

+
+
topology      sas_switch
+
+#       SWITCH PORT  CHANNEL NAME
+channel 1            A
+channel 2            B
+channel 3            C
+channel 4            D
+
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path:

+
+
multipath yes
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         A
+channel 86:00.0  0         B
+
+

A configuration with enclosure_symlinks enabled:

+
+
multipath yes
+enclosure_symlinks yes
+
+#          PCI_ID      HBA PORT     CHANNEL NAME
+channel    05:00.0     1            U
+channel    05:00.0     0            L
+channel    06:00.0     1            U
+channel    06:00.0     0            L
+
+In addition to the disk symlinks, this configuration will create:
+
/dev/by-enclosure/enc-L0
+/dev/by-enclosure/enc-L1
+/dev/by-enclosure/enc-U0
+/dev/by-enclosure/enc-U1
+
+

A configuration using device link aliases:

+
+
#     by-vdev
+#     name     fully qualified or base name of device link
+alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+alias d2       wwn-0x5000c5002def789e
+
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/dracut.zfs.7.html b/man/master/7/dracut.zfs.7.html new file mode 100644 index 000000000..032dbdfc2 --- /dev/null +++ b/man/master/7/dracut.zfs.7.html @@ -0,0 +1,403 @@ + + + + + + + dracut.zfs.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

dracut.zfs.7

+
+ + + + + +
DRACUT.ZFS(7)Miscellaneous Information ManualDRACUT.ZFS(7)
+
+
+

+

dracut.zfs — + overview of ZFS dracut hooks

+
+
+

+
+
                      parse-zfs.sh → dracut-cmdline.service
+                          |                     ↓
+                          |                     …
+                          |                     ↓
+                          \————————→ dracut-initqueue.service
+                                                |                      zfs-import-opts.sh
+   zfs-load-module.service                      ↓                          |       |
+     |                  |                sysinit.target                    ↓       |
+     ↓                  |                       |        zfs-import-scan.service   ↓
+zfs-import-scan.service ↓                       ↓           | zfs-import-cache.service
+     |   zfs-import-cache.service         basic.target      |     |
+     \__________________|                       |           ↓     ↓
+                        ↓                       |     zfs-load-key.sh
+     zfs-env-bootfs.service                     |         |
+                        ↓                       ↓         ↓
+                 zfs-import.target → dracut-pre-mount.service
+                        |          ↑            |
+                        | dracut-zfs-generator  |
+                        | _____________________/|
+                        |/                      ↓
+                        |                   sysroot.mount ←——— dracut-zfs-generator
+                        |                       |
+                        |                       ↓
+                        |             initrd-root-fs.target ←— zfs-nonroot-necessities.service
+                        |                       |                                 |
+                        |                       ↓                                 |
+                        ↓             dracut-mount.service                        |
+       zfs-snapshot-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        ↓                       …                                 |
+       zfs-rollback-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        |          /sysroot/{usr,etc,lib,&c.} ←———————————————————/
+                        |                       |
+                        |                       ↓
+                        |                initrd-fs.target
+                        \______________________ |
+                                               \|
+                                                ↓
+        export-zfs.sh                      initrd.target
+              |                                 |
+              ↓                                 ↓
+   dracut-shutdown.service                      …
+                                                |
+                                                ↓
+                 zfs-needshutdown.sh → initrd-cleanup.service
+
+

Compare dracut.bootup(7) for the full + flowchart.

+
+
+

+

Under dracut, booting with + ZFS-on-/ is facilitated by a + number of hooks in the 90zfs module.

+

Booting into a ZFS dataset requires + mountpoint=/ to be set on the + dataset containing the root filesystem (henceforth "the boot + dataset") and at the very least either the bootfs + property to be set to that dataset, or the root= kernel + cmdline (or dracut drop-in) argument to specify it.

+

All children of the boot dataset with + = + with mountpoints matching /etc, + /bin, /lib, + /lib??, /libx32, + and /usr globs are deemed + essential and will be mounted as well.

+

zfs-mount-generator(8) is recommended for proper + functioning of the system afterward (correct mount properties, remounting, + &c.).

+
+
+

+
+

+
+
dataset, + dataset
+
Use dataset as the boot dataset. All pluses + (‘+’) are replaced with spaces + (‘ ’).
+
, + root=zfs:, + , + [root=]
+
After import, search for the first pool with the bootfs + property set, use its value as-if specified as the + dataset above.
+
rootfstype=zfs root=dataset
+
Equivalent to + root=zfs:dataset.
+
+ [root=]
+
Equivalent to root=zfs:AUTO.
+
flags
+
Mount the boot dataset with -o + flags; cf. + Temporary Mount + Point Properties in zfsprops(7). These properties + will not last, since all filesystems will be re-mounted from the real + root.
+
+
If specified, dracut-zfs-generator logs to the + journal.
+
+

Be careful about setting neither rootfstype=zfs nor root=zfs:dataset — other automatic boot selection methods, like systemd-gpt-auto-generator and systemd-fstab-generator might take precedence.

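As a hedged example (the pool and dataset names are hypothetical), an explicit kernel command line that avoids that ambiguity might look like:
    root=zfs:rpool/ROOT/default spl_hostid=0x00bab10c bootfs.snapshot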
+
+
+

+
+
[=snapshot-name]
+
Execute zfs snapshot + boot-dataset@snapshot-name + before pivoting to the real root. snapshot-name + defaults to the current kernel release.
+
[=snapshot-name]
+
Execute zfs snapshot + -Rf + boot-dataset@snapshot-name + before pivoting to the real root. snapshot-name + defaults to the current kernel release.
+
host-id
+
Use zgenhostid(8) to set the host ID to + host-id; otherwise, + /etc/hostid inherited from the real root is + used.
+
, + zfs.force, zfsforce
+
Appends -f to all zpool + import invocations; primarily useful in + conjunction with spl_hostid=, or if no host ID was + inherited.
+
+
+
+
+

+
+
parse-zfs.sh + ()
+
Processes spl_hostid=. If root= matches a known pattern above, provides /dev/root and delays the initqueue until zfs(4) is loaded.
+
zfs-import-opts.sh + (systemd environment + generator)
+
Turns zfs_force, zfs.force, + or zfsforce into + ZPOOL_IMPORT_OPTS=-f for + zfs-import-scan.service or + zfs-import-cache.service.
+
zfs-load-key.sh + ()
+
Loads encryption keys for the boot dataset and its essential descendants. +
+
+
=
+
Is prompted for via systemd-ask-password + thrice.
+
=URL, + keylocation=URL
+
network-online.target is started before + loading.
+
=path
+
If path doesn't exist, + udevadm is + settled. If it still doesn't, it's waited for + for up to + s.
+
+
+
+
zfs-env-bootfs.service + (systemd service)
+
After pool import, sets BOOTFS= in the systemd + environment to the first non-null bootfs value in + iteration order.
+
dracut-zfs-generator + (systemd generator)
+
Generates sysroot.mount (using + rootflags=, if any). If an + explicit boot dataset was specified, also generates essential mountpoints + (sysroot-etc.mount, + sysroot-bin.mount, + &c.), otherwise generates + zfs-nonroot-necessities.service which mounts them + explicitly after /sysroot using + BOOTFS=.
+
zfs-snapshot-bootfs.service, + zfs-rollback-bootfs.service + (systemd services)
+
Consume bootfs.snapshot and + bootfs.rollback as described in + CMDLINE. Use + BOOTFS= if no explicit boot dataset was + specified.
+
zfs-needshutdown.sh + ()
+
If any pools were imported, signals that shutdown hooks are required.
+
export-zfs.sh + ()
+
Forcibly exports all pools.
+
/etc/hostid, + /etc/zfs/zpool.cache, + /etc/zfs/vdev_id.conf (regular files)
+
Included verbatim, hostonly.
+
mount-zfs.sh + ()
+
Does nothing on systemd systems (if + dracut-zfs-generator + succeeded). Otherwise, loads encryption key for + the boot dataset from the console or via plymouth. It may not work at + all!
+
+
+
+

+

zfsprops(7), + zpoolprops(7), + dracut-shutdown.service(8), + systemd-fstab-generator(8), + systemd-gpt-auto-generator(8), + zfs-mount-generator(8), + zgenhostid(8)

+
+
+ + + + + +
March 28, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/index.html b/man/master/7/index.html new file mode 100644 index 000000000..9c6a642bd --- /dev/null +++ b/man/master/7/index.html @@ -0,0 +1,159 @@ + + + + + + + Miscellaneous (7) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/man/master/7/vdevprops.7.html b/man/master/7/vdevprops.7.html new file mode 100644 index 000000000..89b8e446a --- /dev/null +++ b/man/master/7/vdevprops.7.html @@ -0,0 +1,330 @@ + + + + + + + vdevprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdevprops.7

+
+ + + + + +
VDEVPROPS(7)Miscellaneous Information ManualVDEVPROPS(7)
+
+
+

+

vdevpropsnative + and user-defined properties of ZFS vdevs

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate vdevs in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every vdev has a set of properties that export statistics about + the vdev as well as control various behaviors. Properties are not inherited + from top-level vdevs, with the exception of checksum_n, checksum_t, io_n, + and io_t.

+

The values of numeric properties can be specified using + human-readable suffixes (for example, + , + , + , + , and so + forth, up to + for zettabyte). The following are all valid (and equal) specifications: + 1536M, 1.5g, + 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase.

+

The following native properties consist of read-only statistics + about the vdev. These properties can not be changed.

+
+
+
Percentage of vdev space used
+
+
state of this vdev such as online, faulted, or offline
+
+
globally unique id of this vdev
+
+
The allocable size of this vdev
+
+
The physical size of this vdev
+
+
The physical sector size of this vdev expressed as the power of two
+
+
The total size of this vdev
+
+
The amount of remaining free space on this vdev
+
+
The amount of allocated space on this vdev
+
+
How much this vdev can expand by
+
+
Percent of fragmentation in this vdev
+
+
The level of parity for this vdev
+
+
The device id for this vdev
+
+
The physical path to the device
+
+
The enclosure path to the device
+
+
Field Replaceable Unit, usually a model number
+
+
Parent of this vdev
+
+
Comma separated list of children of this vdev
+
+
The number of children belonging to this vdev
+
, + , + , +
+
The number of errors of each type encountered by this vdev
+
, + , + , + , + , +
+
The number of I/O operations of each type performed by this vdev
+
, + , + , + , + , +
+
The cumulative size of all operations of each type performed by this + vdev
+
+
If this device is currently being removed from the pool
+
+

The following native properties can be used to change the behavior + of a vdev.

+
+
, + , + , +
+
Tune the fault management daemon by specifying checksum/io thresholds of <N> errors in <T> seconds, respectively. These properties can be set on leaf and top-level vdevs. When the property is set on both the leaf and the top-level vdev, the value of the leaf vdev will be used. If the property is only set on the top-level vdev, this value will be used. The values of these properties do not persist across vdev replacement. For this reason, it is advisable to set the property on the top-level vdev, not on the leaf vdev itself. The default values are 10 errors in 600 seconds.
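As an illustrative sketch (the pool and vdev names are hypothetical), the thresholds would typically be set on a top-level vdev:
    zpool set checksum_n=20 tank mirror-0
    zpool set io_t=300 tank mirror-0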
+
+
A text comment up to 8192 characters long
+
+
The amount of space to reserve for the EFI system partition
+
+
If this device should propagate BIO errors back to ZFS, used to disable failfast.
+
+
The path to the device for this vdev
+
+
If this device should perform new allocations, used to disable a device + when it is scheduled for later removal. See + zpool-remove(8).
+
+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate vdevs.

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings and are never + validated. Use the zpool set + command with a blank value to clear a user property. Property values are + limited to 8192 bytes.
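For example (the property, pool, and vdev names are hypothetical), a user property can be set, read back, and cleared with a blank value:
    zpool set org.example:location=rack12-bay3 tank sda
    zpool get org.example:location tank sda
    zpool set org.example:location= tank sda    # a blank value clears the property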

+
+
+
+

+

zpoolprops(7), + zpool-set(8)

+
+
+ + + + + +
October 30, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/zfsconcepts.7.html b/man/master/7/zfsconcepts.7.html new file mode 100644 index 000000000..54dab3ba3 --- /dev/null +++ b/man/master/7/zfsconcepts.7.html @@ -0,0 +1,326 @@ + + + + + + + zfsconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsconcepts.7

+
+ + + + + +
ZFSCONCEPTS(7)Miscellaneous Information ManualZFSCONCEPTS(7)
+
+
+

+

zfsconcepts — + overview of ZFS concepts

+
+
+

+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system; operations such as mounting and unmounting, taking snapshots, and setting properties are supported on it. The physical storage characteristics, however, are managed by the zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. + Snapshots can be created extremely quickly, and initially consume no + additional space within the pool. As data within the active dataset changes, + the snapshot consumes more data than would otherwise be shared with the + active dataset.

+

Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back; visibility is determined by the property of the parent volume.

+

File system snapshots can be accessed under the + .zfs/snapshot directory in the root of the file + system. Snapshots are automatically mounted on demand and may be unmounted + at regular intervals. The visibility of the .zfs + directory can be controlled by the + + property.
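For instance, if a file system is mounted at /tank/home (a hypothetical mountpoint), its snapshots appear as read-only directories under the hidden .zfs directory:
    ls /tank/home/.zfs/snapshot/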

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks can not be accessed through the filesystem in any way. From a storage standpoint, a bookmark just provides a way to reference when a snapshot was created as a distinct object. Bookmarks are initially tied to a snapshot, not the filesystem or volume, and they will survive if the snapshot itself is destroyed. Since they are very lightweight, there's little incentive to destroy them.

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a + snapshot is cloned, it creates an implicit dependency between the parent and + child. Even though the clone is created somewhere else in the dataset + hierarchy, the original snapshot cannot be destroyed as long as a clone + exists. The + property exposes this dependency, and the destroy + command lists any such dependencies, if they exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.
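A short, hedged sequence showing the snapshot/clone/promote lifecycle described above (all dataset names are hypothetical):
    zfs snapshot tank/projects@before-rework
    zfs clone tank/projects@before-rework tank/projects-rework
    # ...work in the clone, then make it the new origin:
    zfs promote tank/projects-rework
    zfs destroy tank/projects    # now possible, since the dependency has been reversed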

+
+
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set in + the mountpoint property. This directory is created as + needed, and ZFS automatically mounts the file system when the + zfs mount + -a command is invoked (without editing + /etc/fstab). The mountpoint + property can be inherited, so if + has a + mount point of /export/stuff, then + + automatically inherits a mount point of + /export/stuff/user.
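A brief illustration of that inheritance (the pool and dataset names are hypothetical):
    zfs set mountpoint=/export/stuff tank/home
    zfs get -r mountpoint tank/home    # tank/home/user now reports /export/stuff/user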

+

A file system mountpoint property of + prevents the + file system from being mounted.

+

If needed, ZFS file systems can also be managed with traditional tools (mount, umount, /etc/fstab). If a file system's mount point is set to legacy, ZFS makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system. Because pools must be imported before a legacy mount can succeed, administrators should ensure that legacy mounts are only attempted after the zpool import process finishes at boot time. For example, on machines using systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that the zfs-import task finishes before systemd attempts to mount the filesystem. See systemd.mount(5) for details.
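As an illustrative sketch (the dataset name pool/data and mount point /mnt/data are hypothetical), a file system can be switched to legacy mounting with

zfs set mountpoint=legacy pool/data

and given an /etc/fstab entry such as

pool/data  /mnt/data  zfs  defaults,x-systemd.requires=zfs-import.target  0  0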

+
+
+

+

Deduplication is the process of removing redundant data at the block level, reducing the total amount of data stored. If a file system has the dedup property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow I/O and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk I/O.

+

Before creating a pool with deduplication enabled, ensure that you have planned your hardware requirements appropriately and implemented appropriate recovery practices, such as regular backups. Consider using the compression property as a less resource-intensive alternative.
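As a sketch, deduplication can be enabled per dataset rather than pool-wide, and the achieved ratio checked afterwards (the pool and dataset names are hypothetical):

zfs set dedup=on pool/vmimages
zpool list -o name,dedupratio pool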

+
+
+

+

Block cloning is a facility that allows a file (or parts of a + file) to be "cloned", that is, a shallow copy made where the + existing data blocks are referenced rather than copied. Later modifications + to the data will cause a copy of the data block to be taken and that copy + modified. This facility is used to implement "reflinks" or + "file-level copy-on-write".

+

Cloned blocks are tracked in a special on-disk structure called the Block Reference Table (BRT). Unlike deduplication, this table has minimal overhead, so it can be enabled at all times.

+

Also unlike deduplication, cloning must be requested by a user + program. Many common file copying programs, including newer versions of + /bin/cp, will try to create clones automatically. + Look for "clone", "dedupe" or "reflink" in the + documentation for more information.

+

There are some limitations to block cloning. Only whole blocks can be cloned, and blocks can not be cloned if they are not yet written to disk, or if they are encrypted, or if the source and destination recordsize properties differ. The OS may add additional restrictions; for example, most versions of Linux will not allow clones across datasets.
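For example, on Linux with a sufficiently recent GNU coreutils, a clone can be requested explicitly when copying a file (the paths are illustrative); with --reflink=always the copy fails rather than silently falling back to a full copy when cloning is not possible:

cp --reflink=always /pool/data/image.raw /pool/data/image-copy.raw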

+
+
+
+ + + + + +
October 6, 2023    Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/zfsprops.7.html b/man/master/7/zfsprops.7.html new file mode 100644 index 000000000..4691a24d6 --- /dev/null +++ b/man/master/7/zfsprops.7.html @@ -0,0 +1,1553 @@ + + + + + + + zfsprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsprops.7

+
+ + + + + +
ZFSPROPS(7)    Miscellaneous Information Manual    ZFSPROPS(7)
+
+
+

+

zfsprops - native and user-defined properties of ZFS datasets

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.
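For example, the following commands are equivalent ways of setting a 1.5 GiB quota on a hypothetical dataset:

zfs set quota=1536M pool/home
zfs set quota=1.5g pool/home
zfs set quota=1.50GB pool/home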

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
For encrypted datasets, indicates where the dataset is currently + inheriting its encryption key from. Loading or unloading a key for the + encryptionroot will implicitly load / unload the key for + any inheriting datasets (see zfs + load-key and zfs + unload-key for details). Clones will always share + an encryption key with their origin. See the + Encryption section of + zfs-load-key(8) for details.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
Indicates if an encryption key is currently loaded into ZFS. The possible values are none, available, and unavailable. See zfs load-key and zfs unload-key.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.
+
+
A unique identifier for this dataset within the pool. Unlike the dataset's + guid, the + objsetid of a dataset is not transferred to other pools + when the snapshot is copied with a send/receive operation. The + objsetid can be reused (for a new dataset) after the + dataset is deleted.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive + -s, this opaque token can be provided to + zfs send + -t to resume and complete the + zfs receive.
+
+
For bookmarks, this is the list of snapshot guids the bookmark contains a + redaction list for. For snapshots, this is the list of snapshot guids the + snapshot is redacted with respect to.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: filesystem, volume, snapshot, or bookmark.
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section of + zfsconcepts(7)) is space that is referenced + exclusively by this snapshot. If this snapshot is destroyed, the amount + of used space will be freed. Space that is shared by + multiple snapshots isn't accounted for in this metric. When a snapshot + is destroyed, space that was previously shared with this snapshot can + become unique to snapshots adjacent to it, thus changing the used space + of those snapshots. The used space of the latest snapshot can also be + affected by changes in the file system. Note that the + used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced + does not take into account pending changes. Pending changes are + generally accounted for within a few seconds. Committing a change to a + disk using fsync(2) or + does + not necessarily guarantee that the space usage information is updated + immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du + and ls + -s. See the zfs + userspace command for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.
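For example, per-user consumption on a hypothetical file system can be read either as a single property or with the summary command (the user name joe is illustrative):

zfs get userused@joe pool/home
zfs userspace pool/home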

+

The userused@ + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the + following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.

+
+
@user
+
The userobjused property is similar to + userused but instead it counts the number of objects + consumed by a user. This property counts all objects allocated on behalf + of the user, it may differ from the results of system tools such as + df -i. +

When the property xattr=on + is set on a file system additional objects will be created per-file to + store extended attributes. These additional objects are reflected in the + userobjused value and are counted against the user's + userobjquota. When a file system is configured to use + xattr=sa no additional internal + objects are normally required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
@project
+
The amount of space consumed by the specified project in this dataset. + Project is identified via the project identifier (ID) that is object-based + numeral attribute. An object can inherit the project ID from its parent + object (if the parent has the flag of inherit project ID that can be set + and changed via chattr + -/+P or zfs project + -s) when being created. The privileged user can + set and change object's project ID via chattr + -p or zfs project + -s anytime. Space is charged to the project of + each file, as displayed by lsattr + -p or zfs project. See the + userused@user property for more + information. +

The root user, or a user who has been granted the + projectused privilege with zfs + allow, can access all projects' usage.

+
+
@project
+
The projectobjused is similar to + projectused but instead it counts the number of objects + consumed by project. When the property + xattr=on is set on a fileset, ZFS will + create additional objects per-file to store extended attributes. These + additional objects are reflected in the projectobjused + value and are counted against the project's + projectobjquota. When a filesystem is configured to use + xattr=sa no additional internal + objects are required. See the + userobjused@user property for more + information. +

The root user, or a user who has been granted the + projectobjused privilege with zfs + allow, can access all projects' objects usage.

+
+
+
Provides a mechanism to quickly determine whether snapshot list has + changed without having to mount a dataset or iterate the snapshot list. + Specifies the time at which a snapshot for a dataset was last created or + deleted. +

This allows consumers of the snapshot list to be more efficient about how often they query snapshots. The property is persistent across mount and unmount operations only if the extensible_dataset feature is enabled.

+
+
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 16 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space referenced by this dataset, that was + written since the previous snapshot (i.e. that is not referenced by the + previous snapshot).
+
@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which + for clones may be a snapshot in the origin's filesystem (or the origin + of the origin's filesystem, etc.)

+
+
+

The following native properties can be used to change the behavior + of a ZFS dataset.

+
+
aclinherit=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
+
does not inherit any ACEs.
+
+
only inherits inheritable ACEs that specify "deny" + permissions.
+
+
default, removes the + + and + + permissions when the ACE is inherited.
+
+
inherits all inheritable ACEs without any modifications.
+
+
same meaning as passthrough, except that the + , + , + and + + ACEs inherit the execute permission only if the file creation mode + also requests the execute bit.
+
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + POSIX ACLs.

+
+
aclmode=discard|groupmask|passthrough|restricted
+
Controls how an ACL is modified during chmod(2) and how inherited ACEs are + modified by the file creation mode: +
+
+
+
default, deletes all + + except for those representing the mode of the file or directory + requested by chmod(2).
+
+
reduces permissions granted in all + + entries found in the + + such that they are no greater than the group permissions specified by + chmod(2).
+
+
indicates that no changes are made to the ACL other than creating or + updating the necessary ACL entries to represent the new mode of the + file or directory.
+
+
will cause the chmod(2) operation to return an error + when used on any file or directory which has a non-trivial ACL whose + entries can not be represented by a mode. chmod(2) + is required to change the set user ID, set group ID, or sticky bits on + a file or directory, as they do not have equivalent ACL entries. In + order to use chmod(2) on a file or directory with a + non-trivial ACL when aclmode is set to + restricted, you must first remove all ACL entries + which do not represent the current mode.
+
+
+
+
acltype=off|nfsv4|posix
+
Controls whether ACLs are enabled and if so what type of ACL to use. When + this property is set to a type of ACL not supported by the current + platform, the behavior is the same as if it were set to + off. +
+
+
+
default on Linux, when a file system has the acltype + property set to off then ACLs are disabled.
+
+
an alias for off
+
+
default on FreeBSD, indicates that NFSv4-style + ZFS ACLs should be used. These ACLs can be managed with the + getfacl(1) and setfacl(1). The + nfsv4 ZFS ACL type is not yet supported on + Linux.
+
+
indicates POSIX ACLs should be used. POSIX ACLs are specific to Linux + and are not functional on other platforms. POSIX ACLs are stored as an + extended attribute and therefore will not overwrite any existing NFSv4 + ACLs which may be set.
+
+
an alias for posix
+
+
+

To obtain the best performance when setting + posix users are strongly encouraged to set the + xattr=sa property. This will result + in the POSIX ACL being stored more efficiently on disk. But as a + consequence, all new extended attributes will only be accessible from + OpenZFS implementations which support the + xattr=sa property. See the + xattr property for more details.

+
+
atime=on|off
+
Controls whether the access time for files is updated when they are read. + Turning this property off avoids producing write traffic when reading + files and can result in significant performance gains, though it might + confuse mailers and other similar utilities. The values + on and off are equivalent to the + atime and + + mount options. The default value is on. See also + relatime below.
+
canmount=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.

+
+
checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr|blake3
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, + edonr, and blake3 checksum + algorithms require enabling the appropriate features on the pool.

+

Please see zpool-features(7) for more + information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
compression=on|off|gzip|gzip-N|lz4|lzjb|zle|zstd|zstd-N|zstd-fast|zstd-fast-N
+
Controls the compression algorithm used for this dataset. +

When set to on (the default), indicates that + the current default compression algorithm should be used. The default + balances compression and decompression speed, with compression ratio and + is expected to work well on a wide variety of workloads. Unlike all + other settings for this property, on does not select a + fixed compression type. As new compression algorithms are added to ZFS + and enabled on a pool, the default compression algorithm may change. The + current default compression algorithm is either lzjb + or, if the lz4_compress feature is enabled, + lz4.

+

The lz4 compression algorithm + is a high-performance replacement for the lzjb + algorithm. It features significantly faster compression and + decompression, as well as a moderately higher compression ratio than + lzjb, but can only be used on pools with the + lz4_compress feature set to + . See + zpool-features(7) for details on ZFS feature flags and + the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

+

The zstd compression algorithm provides both high compression ratios and good performance. You can specify the zstd level by using the value zstd-N, where N is an integer from 1 (fastest) to 19 (best compression ratio). zstd is equivalent to zstd-3.

+

Faster speeds at the cost of the compression ratio can be requested by setting a negative zstd level. This is done using zstd-fast-N, where N is an integer in [1-10, 20, 30, ..., 100, 500, 1000] which maps to a negative zstd level. The lower the level the faster the compression; 1000 provides the fastest compression and lowest compression ratio. zstd-fast is equivalent to zstd-fast-1.

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its + shortened column name + . + Changing this property affects only newly-written data.

+

When any setting except off is selected, + compression will explicitly check for blocks consisting of only zeroes + (the NUL byte). When a zero-filled block is detected, it is stored as a + hole and not compressed using the indicated compression algorithm.

+

Any block being compressed must be no larger than 7/8 of its + original size after compression, otherwise the compression will not be + considered worthwhile and the block saved uncompressed. Note that when + the logical block is less than 8 times the disk sector size this + effectively reduces the necessary compression ratio; for example, 8 KiB + blocks on disks with 4 KiB disk sectors must compress to 1/2 or less of + their original size.
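As a sketch (the dataset name is hypothetical, and zstd requires the zstd_compress pool feature to be enabled), a specific algorithm and level can be selected and the achieved ratio inspected later:

zfs set compression=zstd-3 pool/data
zfs get compressratio pool/data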

+
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the file system file system being + mounted. See selinux(8) for more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
copies=1|2|3
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a missing + top-level vdev. Do NOT create, for example a two-disk + striped pool and set copies=2 on + some datasets thinking you have setup redundancy for them. When a disk + fails you will not be able to import the pool and will have lost all of + your data.

+

Encrypted datasets may not have + copies=3 since the + implementation stores some encryption metadata where the third copy + would normally be.
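For example, since the property only affects newly-written data, it is best set at creation time (the dataset name is hypothetical):

zfs create -o copies=2 pool/important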

+
+
=on|off
+
Controls whether device nodes can be opened on this file system. The + default value is on. The values on and + off are equivalent to the dev and + + mount options.
+
dedup=off|on|verify|sha256[,verify]|sha512[,verify]|skein[,verify]|edonr,verify|blake3[,verify]
+
Configures deduplication for a dataset. The default value is + off. The default deduplication checksum is + sha256 (this may change in the future). When + dedup is enabled, the checksum defined here overrides + the checksum property. Setting the value to + verify has the same effect as the setting + sha256,verify. +

If set to verify, ZFS will do a byte-to-byte + comparison in case of two blocks having the same signature to make sure + the block contents are identical. Specifying verify is + mandatory for the edonr algorithm.

+

Unless necessary, deduplication should not be enabled on a system. See the Deduplication section of zfsconcepts(7).

+
+
=legacy|auto|||||
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy + requires the large_dnode + pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the + workload makes heavy use of extended attributes. This may be applicable + to SELinux-enabled systems, Lustre servers, and Samba servers, for + example. Literal values are supported for cases where the optimal size + is known in advance and for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode + feature, or if you need to import this pool on a system that doesn't + support the large_dnode + feature.

+

This property can also be referred to by its + shortened column name, + .

+
+
encryption=off|on|aes-128-ccm|aes-192-ccm|aes-256-ccm|aes-128-gcm|aes-192-gcm|aes-256-gcm
+
Controls the encryption cipher suite (block cipher, key length, and mode) + used for this dataset. Requires the encryption feature + to be enabled on the pool. Requires a keyformat to be + set at dataset creation time. +

Selecting encryption=on + when creating a dataset indicates that the default encryption suite will + be selected, which is currently aes-256-gcm. In order + to provide consistent data protection, encryption must be specified at + dataset creation time and it cannot be changed afterwards.

+

For more details and caveats about encryption see the + Encryption section of + zfs-load-key(8).

+
+
keyformat=raw|hex|passphrase
+
Controls what format the user's encryption key will be provided as. This + property is only set when the dataset is encrypted. +

Raw keys and hex keys must be 32 bytes long (regardless of the + chosen encryption suite) and must be randomly generated. A raw key can + be generated with the following command:

+
# dd if=/dev/urandom bs=32 count=1 of=/path/to/output/key
+

Passphrases must be between 8 and 512 bytes long and will be + processed through PBKDF2 before being used (see the + pbkdf2iters property). Even though the encryption + suite cannot be changed after dataset creation, the keyformat can be + with zfs change-key.
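For example, an encrypted file system using a passphrase key can be created as follows (the dataset name is hypothetical; the command prompts for the passphrase because keylocation defaults to prompt):

zfs create -o encryption=on -o keyformat=passphrase pool/secure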

+
+
keylocation=prompt|/absolute/file/path|https://address|http://address
+
Controls where the user's encryption key will be loaded from by default + for commands such as zfs + load-key and zfs + mount -l. This property is + only set for encrypted datasets which are encryption roots. If + unspecified, the default is prompt. +

Even though the encryption suite cannot + be changed after dataset creation, the keylocation can be with either + zfs set or + zfs change-key. If + prompt is selected ZFS will ask for the key at the + command prompt when it is required to access the encrypted data (see + zfs load-key for + details). This setting will also allow the key to be passed in via the + standard input stream, but users should be careful not to place keys + which should be kept secret on the command line. If a file URI is + selected, the key will be loaded from the specified absolute file path. + If an HTTPS or HTTP URL is selected, it will be GETted using + fetch(3), libcurl, or nothing, depending on + compile-time configuration and run-time availability. The + + environment variable can be set to set the location of the concatenated + certificate store. The + + environment variable can be set to override the location of the + directory containing the certificate authority bundle. The + + and + + environment variables can be set to configure the path to the client + certificate and its key.

+
+
=iterations
+
Controls the number of PBKDF2 iterations that a passphrase encryption key should be run through when processing it into an encryption key. This property is only defined when encryption is enabled and a keyformat of passphrase is selected. The goal of PBKDF2 is to significantly increase the computational difficulty needed to brute force a user's passphrase. This is accomplished by forcing the attacker to run each passphrase through a computationally expensive hashing function many times before they arrive at the resulting key. A user who actually knows the passphrase will only have to pay this cost once. As CPUs become better at processing, this number should be raised to ensure that a brute force attack is still not possible. The current default is 350000 and the minimum is 100000. This property may be changed with zfs change-key.
+
=on|off
+
Controls whether processes can be executed from within this file system. + The default value is on. The values on + and off are equivalent to the exec and + + mount options.
+
=on|off
+
Controls internal zvol threading. The value off disables + zvol threading, and zvol relies on application threads. The default value + is on, which enables threading within a zvol. Please + note that this property will be overridden by + + module parameter. This property is only applicable to Linux.
+
=count|none
+
Limits the number of filesystems and volumes that can exist under this + point in the dataset tree. The limit is not enforced if the user is + allowed to change the limit. Setting a filesystem_limit + to on a descendent of a filesystem that already has a + filesystem_limit does not override the ancestor's + filesystem_limit, but rather imposes an additional + limit. This feature must be enabled to be used (see + zpool-features(7)).
+
=size
+
This value represents the threshold block size for including small file + blocks into the special allocation class. Blocks smaller than or equal to + this value will be assigned to the special allocation class while greater + blocks will be assigned to the regular class. Valid values are zero or a + power of two from 512 up to 1048576 (1 MiB). The default size is 0 which + means no small file blocks will be allocated in the special class. +

Before setting this property, a special class vdev must be + added to the pool. See zpoolconcepts(7) for more + details on the special allocation class.

+
+
mountpoint=path|none|legacy
+
Controls the mount point used for this file system. See the + Mount Points section of + zfsconcepts(7) for more information on how this property + is used. +

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none. In addition, any shared file systems are + unshared and shared in the new location.

+

When the mountpoint property is set with + zfs set + -u , the mountpoint property + is updated but dataset is not mounted or unmounted and remains as it was + before.
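For example, with hypothetical names:

zfs set mountpoint=/export/home pool/home
zfs set -u mountpoint=/export/home2 pool/home

The first command remounts pool/home and its inheriting children under /export/home; the second only records the new value without mounting or unmounting anything.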

+
+
=on|off
+
Controls whether the file system should be mounted with + nbmand (Non-blocking mandatory locks). Changes to this + property only take effect when the file system is umounted and remounted. + This was only supported by Linux prior to 5.15, and was buggy there, and + is not supported by FreeBSD. On Solaris it's used + for SMB clients.
+
=on|off
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux and + FreeBSD file systems. On these platforms the + property is on by default. Set to off + to disable overlay mounts for consistency with OpenZFS on other + platforms.
+
=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is + set to all, then both user data and metadata is cached. + If this property is set to none, then neither user data + nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
quota=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.
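For example, to cap a hypothetical dataset and all of its descendents at 20 GiB:

zfs set quota=20G pool/home/user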

+
+
=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(7)).
+
userquota@user=size|none
+
Limits the amount of space consumed by the specified user. User space consumption is identified by the userused@user property.

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace command + for more information.
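For example, using hypothetical user and dataset names:

zfs set userquota@joe=50G pool/home
zfs get userquota@joe pool/home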

+

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + userquota privilege with zfs + allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@ properties + are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the + following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.

+
+
user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
group=size|none
+
Limits the amount of space consumed by the specified group. Group space + consumption is identified by the + group + property. +

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
group=size|none
+
The + + is similar to groupquota but it limits number of objects + a group can consume. Please refer to userobjused for + more information about how objects are counted.
+
project=size|none
+
Limits the amount of space consumed by the specified project. Project + space consumption is identified by the + project + property. Please refer to projectused for more + information about how project is identified and set/changed. +

The root user, or a user who has been granted the + projectquota privilege with zfs + allow, can access all projects' quota.

+
+
project=size|none
+
The projectobjquota is similar to + projectquota but it limits number of objects a project + can consume. Please refer to userobjused for more + information about how objects are counted.
+
readonly=on|off
+
Controls whether this dataset can be modified. The default value is + off. The values on and + off are equivalent to the + and + mount + options. +

This property can also be referred to by its + shortened column name, + .

+
+
recordsize=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.

+

The size specified must be a power of two + greater than or equal to 512 B and less than or + equal to 128 KiB. If the + + feature is enabled on the pool, the size may be up to 1 + MiB. See zpool-features(7) for details on ZFS + feature flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.
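As a sketch for a database that uses fixed 16 KiB records (the dataset name is hypothetical):

zfs set recordsize=16K pool/db

Files created after this point will use 16 KiB blocks; existing files keep their previous block size.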

+

This property can also be referred to by its + shortened column name, + .

+
+
=all|most|some|none
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 1000 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

When set to some, ZFS stores an extra copy + of only critical metadata. This can improve file create performance + since less metadata needs to be written. If a single on-disk block is + corrupt, at worst a single user file can be lost.

+

When set to none, ZFS does not store any + copies of metadata redundantly. If a single on-disk block is corrupt, an + entire dataset can be lost.

+

The default value is all.

+
+
refquota=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
refreservation=size|none|auto
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

If refreservation is set to + auto, a volume is thick provisioned (or "not + sparse"). refreservation=auto + is only supported on volumes. See volsize in the + Native Properties section + for more information about sparse volumes.

+

This property can also be referred to by its + shortened column name, + .

+
+
=on|off
+
Controls the manner in which the access time is updated when + atime=on is set. Turning this property + on causes the access time to be updated relative to the modify or change + time. Access time is only updated if the previous access time was earlier + than the current modify or change time or if the existing access time + hasn't been updated within the past 24 hours. The default value is + on. The values on and + off are equivalent to the relatime and + + mount options.
+
reservation=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its + shortened column name, + .

+
+
=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property + is set to all, then both user data and metadata is + cached. If this property is set to none, then neither + user data nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
=all|none|metadata
+
Controls what speculative prefetch does. If this property is set to + all, then both user data and metadata are prefetched. If + this property is set to none, then neither user data nor + metadata are prefetched. If this property is set to + metadata, then only metadata are prefetched. The default + value is all. +

Please note that the module parameter zfs_disable_prefetch=1 + can be used to totally disable speculative prefetch, bypassing anything + this property does.

+
+
=on|off
+
Controls whether the setuid bit is respected for the file system. The + default value is on. The values on and + off are equivalent to the + and + nosuid mount options.
+
sharesmb=on|off|opts
+
Controls whether the file system is shared by using + and what options are to be used. Otherwise, the file + system is automatically shared and unshared with the + zfs share and + zfs unshare commands. If + the property is set to on, the net(8) command is invoked + to create a + . +

Because SMB shares requires a resource name, a unique resource + name is constructed from the dataset name. The constructed name is a + copy of the dataset name except that the characters in the dataset name, + which would be invalid in the resource name, are replaced with + underscore (_) characters. Linux does not currently support additional + options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) + "Everyone:F" ("F" stands for "full + permissions", i.e. read and write permissions) and no guest access + (which means Samba must be able to authenticate a real user — + passwd(5)/shadow(5)-, LDAP- or + smbpasswd(5)-based) by default. This means that any + additional access control (disallow specific user specific access etc) + must be done on the underlying file system.

+

When the sharesmb property is updated with zfs set -u, the property is set to the desired value, but the operation to share, reshare or unshare the dataset is not performed.

+
+
sharenfs=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are + to be used. A file system with a sharenfs property of + off is managed with the exportfs(8) + command and entries in the /etc/exports file. + Otherwise, the file system is automatically shared and unshared with the + zfs share and + zfs unshare commands. If + the property is set to on, the dataset is shared using + the default options: +
sec=sys,rw,crossmnt,no_subtree_check
+

Please note that the options are comma-separated, unlike those + found in exports(5). This is done to negate the need + for quoting, as well as to make parsing with scripts easier.

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.

+

When the sharenfs property is updated with zfs set -u, the property is set to the desired value, but the operation to share, reshare or unshare the dataset is not performed.

+
+
=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
+
=hidden|visible
+
Controls whether the volume snapshot devices under + /dev/zvol/pool⟩ + are hidden or visible. The default value is hidden.
+
=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section of + zfsconcepts(7). The default value is + hidden.
+
=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX-specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
+
=N|
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
volsize=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse volume" (also + known as "thin provisioned") can be created by specifying the + -s option to the zfs + create -V command, or by + changing the value of the refreservation property (or + reservation property on pool version 8 or earlier) + after the volume has been created. A "sparse volume" is a + volume where the value of refreservation is less than + the size of the volume plus the space required to store its metadata. + Consequently, writes to a sparse volume can fail with + ENOSPC when the pool is low on space. For a + sparse volume, changes to volsize are not reflected in + the refreservation. A volume that is not sparse is + said to be "thick provisioned". A sparse volume can become + thick provisioned by setting refreservation to + auto.

+
+
=default|full|geom|dev|none
+
This property specifies how volumes should be exposed to the OS. Setting + it to full exposes volumes as fully fledged block + devices, providing maximal functionality. The value geom + is just an alias for full and is kept for compatibility. + Setting it to dev hides its partitions. Volumes with + property set to none are not exposed outside ZFS, but + can be snapshotted, cloned, replicated, etc, that can be suitable for + backup purposes. Value default means that volumes + exposition is controlled by system-wide tunable + , + where full, dev and + none are encoded as 1, 2 and 3 respectively. The default + value is full.
+
=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used by OpenZFS.
+
xattr=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two + styles of extended attributes are supported: either directory-based or + system-attribute-based. +

The default value of on enables + directory-based extended attributes. This style of extended attribute + imposes no practical limit on either the size or number of attributes + which can be set on a file. Although under Linux the + getxattr(2) and setxattr(2) system + calls limit the maximum size to 64K. This is the most + compatible style of extended attribute and is supported by all ZFS + implementations.

+

System-attribute-based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk I/O required. Up + to 64K of data may be stored per-file in the space + reserved for system attributes. If there is not enough space available + for an extended attribute then it will be automatically written as a + directory-based xattr. System-attribute-based extended attributes are + not accessible on platforms which do not support the + xattr=sa feature. OpenZFS supports + xattr=sa on both + FreeBSD and Linux.

+

The use of system-attribute-based xattrs is strongly + encouraged for users of SELinux or POSIX ACLs. Both of these features + heavily rely on extended attributes and benefit significantly from the + reduced access time.

+

The values on and + off are equivalent to the xattr and + mount + options.

+
+
=off|on
+
Controls whether the dataset is managed from a jail. See + zfs-jail(8) for more information. Jails are a + FreeBSD feature and this property is not available + on other platforms.
+
=off|on
+
Controls whether the dataset is managed from a non-global zone or + namespace. See zfs-zone(8) for more information. Zoning + is a Linux feature and this property is not available on other + platforms.
+
+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.

+
+
=sensitive||mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".

+
+
=none||||
+
Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+
=on|off
+
Indicates whether the file system should reject file names that include + characters that are not present in the + + character code set. If this property is explicitly set to + off, the normalization property must either not be + explicitly set or be set to none. The default value for + the utf8only property is off. This + property cannot be changed after the file system is created.
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.
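For example (pool and dataset names are illustrative), a file system intended for SMB sharing might be created with all three properties supplied up front, since they cannot be changed later:
zfs create -o casesensitivity=mixed -o normalization=formD -o utf8only=on pool/smbshare
zfs get casesensitivity,normalization,utf8only pool/smbshare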

+
+
+

+

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
+
+
atime/noatime
+
+
auto/noauto
+
+
dev/nodev
+
+
exec/noexec
+
+
ro/rw
+
+
relatime/norelatime
+
+
suid/nosuid
+
+
xattr/noxattr
+
+
mand/nomand
+
=
+
context=
+
=
+
fscontext=
+
=
+
defcontext=
+
=
+
rootcontext=
+
+
+

In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The nosuid option is an alias for nodevices,nosetuid. These properties are reported as "temporary" by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.
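As an illustration (the dataset name is made up), an unmounted dataset can be mounted read-only for a single session without changing its stored properties; zfs get then reports the setting as temporary:
zfs mount -o ro pool/data
zfs get readonly pool/data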

+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.
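A minimal sketch (the property name and dataset are invented) of setting, querying, and clearing a user property:
zfs set com.example:backup-policy=daily pool/data
zfs get com.example:backup-policy pool/data
zfs inherit com.example:backup-policy pool/data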

+
+
+
+ + + + + +
August 8, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/zpool-features.7.html b/man/master/7/zpool-features.7.html new file mode 100644 index 000000000..451f98b5a --- /dev/null +++ b/man/master/7/zpool-features.7.html @@ -0,0 +1,1254 @@ + + + + + + + zpool-features.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.7

+
+ + + + + +
ZPOOL-FEATURES(7)Miscellaneous Information ManualZPOOL-FEATURES(7)
+
+
+

+

zpool-features — + description of ZFS pool features

+
+
+

+

ZFS pool on-disk format versions are specified via "features" which replace the old on-disk format numbers (the last supported on-disk format number is 28). To enable a feature on a pool use the zpool upgrade command, or set the feature@feature-name property to enabled. Please also see the Compatibility feature sets section for information on how sets of features may be enabled together.

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

Since most features can be enabled independently of each other, + the on-disk format of the pool is specified by the set of all features + marked as active on the pool. If the pool was created by + another software version this set may include unsupported features.

+
+

+

Every feature has a GUID of the form + com.example:feature-name. The + reversed DNS name ensures that the feature's GUID is unique across all ZFS + implementations. When unsupported features are encountered on a pool they + will be identified by their GUIDs. Refer to the documentation for the ZFS + implementation that created the pool for information about those + features.

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its GUID which follows the + ‘:’ (i.e. + com.example:feature-name would + have the short name feature-name), however a feature's + short name may differ across ZFS implementations if following the convention + would result in name conflicts.

+
+
+

+

Features can be in one of three states:

+
+
+
This feature's on-disk format changes are in effect on the pool. Support + for this feature is required to import the pool in read-write mode. If + this feature is not read-only compatible, support is also required to + import the pool in read-only mode (see + Read-only + compatibility).
+
+
An administrator has marked this feature as enabled on the pool, but the + feature's on-disk format changes have not been made yet. The pool can + still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support + returning to the enabled state after becoming + active. See feature-specific documentation for + details.
+
+
This feature's on-disk format changes have not been made and will not be + made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they + have been enabled.
+
+

The state of supported features is exposed through pool properties + of the form feature@short-name.
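For example (the pool name is illustrative), the state of a single feature, or of all features, can be read like any other pool property:
# zpool get feature@async_destroy pool
# zpool get all pool | grep feature@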

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as “read-only compatible”. If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly + property during import (see zpool-import(8) for details on + importing pools).
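A sketch of such a read-only import (pool name illustrative):
# zpool import -o readonly=on pool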

+
+
+

+

For each unsupported feature enabled on an imported pool, a pool property named unsupported@feature-name will indicate why the import was allowed despite the unsupported feature. Possible values for this property are:

+
+
+
The feature is in the enabled state and therefore the + pool's on-disk format is still compatible with software that does not + support this feature.
+
+
The feature is read-only compatible and the pool has been imported in + read-only mode.
+
+
+
+

+

Some features depend on other features being enabled in order to + function. Enabling a feature will automatically enable any features it + depends on.

+
+
+

+

It is sometimes necessary for a pool to maintain compatibility with a specific on-disk format, by enabling and disabling particular features. The compatibility feature facilitates this by allowing feature sets to be read from text files. When set to off (the default), compatibility feature sets are disabled (i.e. all features are enabled); when set to legacy, no features are enabled. When set to a comma-separated list of filenames (each filename may either be an absolute path, or relative to /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d), the lists of requested features are read from those files, separated by whitespace and/or commas. Only features present in all files are enabled.

+

Simple sanity checks are applied to the files: they must be + between 1 B and 16 KiB in size, and must end with a newline character.

+

The requested features are applied when a pool is created using zpool create -o compatibility=, and the property controls which features are enabled when using zpool upgrade. zpool status will not show a warning about disabled features which are not part of the requested feature set.

+

The special value legacy prevents any features + from being enabled, either via zpool + upgrade or zpool + set + feature@feature-name=enabled. + This setting also prevents pools from being upgraded to newer on-disk + versions. This is a safety measure to prevent new features from being + accidentally enabled, breaking compatibility.

+

By convention, compatibility files in + /usr/share/zfs/compatibility.d are provided by the + distribution, and include feature sets supported by important versions of + popular distributions, and feature sets commonly supported at the start of + each year. Compatibility files in + /etc/zfs/compatibility.d, if present, will take + precedence over files with the same name in + /usr/share/zfs/compatibility.d.

+

If an unrecognized feature is found in these files, an error + message will be shown. If the unrecognized feature is in a file in + /etc/zfs/compatibility.d, this is treated as an + error and processing will stop. If the unrecognized feature is under + /usr/share/zfs/compatibility.d, this is treated as a + warning and processing will continue. This difference is to allow + distributions to include features which might not be recognized by the + currently-installed binaries.

+

Compatibility files may include comments: any text from + ‘#’ to the end of the line is ignored.

+

Example:

+
+
example# cat /usr/share/zfs/compatibility.d/grub2
+# Features which are supported by GRUB2
+allocation_classes
+async_destroy
+block_cloning
+bookmarks
+device_rebuild
+embedded_data
+empty_bpobj
+enabled_txg
+extensible_dataset
+filesystem_limits
+hole_birth
+large_blocks
+livelist
+log_spacemap
+lz4_compress
+project_quota
+resilver_defer
+spacemap_histogram
+spacemap_v2
+userobj_accounting
+zilsaxattr
+zpool_checkpoint
+
+example# zpool create -o compatibility=grub2 bootpool vdev
+
+

See zpool-create(8) and + zpool-upgrade(8) for more information on how these + commands are affected by feature sets.

+
+
+
+

+

The following features are supported on this system:

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables support for separate allocation + classes.

+

This feature becomes active when a dedicated + allocation class vdev (dedup or special) is created with the + zpool create + or zpool + add commands. With + device removal, it can be returned to the enabled + state if all the dedicated allocation class vdevs are removed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Destroying a file system requires traversing all of its data + in order to return its used space to the pool. Without + async_destroy, the file system is not fully removed + until all space has been reclaimed. If the destroy operation is + interrupted by a reboot or power outage, the next attempt to open the + pool will need to complete the destroy operation synchronously.

+

When async_destroy is enabled, the file + system's data will be reclaimed by a background process, allowing the + destroy operation to complete without traversing the entire file system. + The background process is able to resume interrupted destroys after the + pool has been opened, eliminating the need to finish interrupted + destroys as part of the open operation. The amount of space remaining to + be reclaimed by the background process is available through the + freeing property.

+

This feature is only active while + freeing is non-zero.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the BLAKE3 hash algorithm for + checksum and dedup. BLAKE3 is a secure hash algorithm focused on high + performance.

+

When the blake3 feature is set to + enabled, the administrator can turn on the + blake3 checksum on any dataset using + zfs set + checksum=blake3 + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + blake3, and will return to being + enabled once all filesystems that have ever had their + checksum set to blake3 are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

When this feature is enabled ZFS will use block cloning for operations like copy_file_range(2). Block cloning allows multiple references to a single block to be created. It is much faster than copying the data (as the actual data is neither read nor written) and takes no additional space. Blocks can be cloned across datasets under some conditions (like equal recordsize, the same master encryption key, etc.). ZFS tries its best to clone across datasets, including encrypted ones, but this is limited for various (nontrivial) reasons depending on the OS and/or ZFS internals.

+

This feature becomes active when first block + is cloned. When the last cloned block is freed, it goes back to the + enabled state.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables use of the zfs + bookmark command.

+

This feature is active while any bookmarks exist in the pool. All bookmarks in the pool can be listed by running zfs list -t bookmark -r poolname.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of larger + bookmarks which are needed for other features in ZFS.

+

This feature becomes active when a v2 + bookmark is created and will be returned to the + enabled state when all v2 bookmarks are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset, bookmark_v2
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables additional bookmark accounting fields, enabling the written#bookmark property (space written since a bookmark) and estimates of send stream sizes for incrementals from bookmarks.

+

This feature becomes active when a bookmark + is created and will be returned to the enabled state + when all bookmarks with these fields are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the ability for the + zpool attach and + zpool replace commands + to perform sequential reconstruction (instead of healing reconstruction) + when resilvering.

+

Sequential reconstruction resilvers a device in LBA order + without immediately verifying the checksums. Once complete, a scrub is + started, which then verifies the checksums. This approach allows full + redundancy to be restored to the pool in the minimum amount of time. + This two-phase approach will take longer than a healing resilver when + the time to verify the checksums is included. However, unless there is + additional pool damage, no checksum errors should be reported by the + scrub. This feature is incompatible with raidz configurations. This + feature becomes active while a sequential resilver is + in progress, and returns to enabled when the resilver + completes.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the zpool + remove command to remove top-level vdevs, + evacuating them to reduce the total size of the pool.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.
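For example (pool and vdev names illustrative), removing a top-level mirror evacuates its data to the remaining vdevs; progress is reported by zpool status:
# zpool remove pool mirror-1
# zpool status pool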

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables use of the draid vdev + type. dRAID is a variant of RAID-Z which provides integrated distributed + hot spares that allow faster resilvering while retaining the benefits of + RAID-Z. Data, parity, and spare space are organized in redundancy groups + and distributed evenly over all of the devices.

+

This feature becomes active when creating a + pool which uses the draid vdev type, or when adding a + new draid vdev to an existing pool.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Edon-R hash algorithm for checksum, including for nopwrite (if compression is also enabled, an overwrite of a block whose checksum matches the data being written will be ignored). In an abundance of caution, Edon-R requires verification when used with dedup: zfs set dedup=edonr,verify (see zfs-set(8)).

+

Edon-R is a very high-performance hash algorithm that was part + of the NIST SHA-3 competition. It provides extremely high hash + performance (over 350% faster than SHA-256), but was not selected + because of its unsuitability as a general purpose secure hash algorithm. + This implementation utilizes the new salted checksumming functionality + in ZFS, which means that the checksum is pre-seeded with a secret + 256-bit random key (stored on the pool) before being fed the data block + to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the edonr feature is set to + enabled, the administrator can turn on the + edonr checksum on any dataset using + zfs set + checksum=edonr + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + edonr, and will return to being + enabled once all filesystems that have ever had their + checksum set to edonr are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 + bytes or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of + highly-compressible blocks are stored in the block + “pointer” itself (a misnomer in this case, as it contains + the compressed data, rather than a pointer to its location on disk). + Thus the space of the block (one sector, typically 512 B or 4 KiB) is + saved, and no additional I/O is needed to read and write the data block. + This feature becomes active + as soon as it is enabled and will never return to + being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also + reduces the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobjs) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobjs are empty. This + feature allows us to create each bpobj on-demand, thus eliminating the + empty bpobjs.

+

This feature is active while there are any + filesystems, volumes, or snapshots which were created after enabling + this feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Once this feature is enabled, ZFS records the transaction + group number in which new features are enabled. This has no user-visible + impact, but other features may depend on this feature.

+

This feature becomes active as soon as it is + enabled and will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark_v2, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of natively + encrypted datasets.

+

This feature becomes active when an + encrypted dataset is created and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first + dependent feature uses it, and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables filesystem and snapshot limits. These + limits can be used to control how many filesystems and/or snapshots can + be created at the point in the tree on which the limits are set.

+

This feature is active once either of the + limit properties has been set on a dataset and will never return to + being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the upgraded version of errlog, which required an on-disk error log format change. Now the error log of each head dataset is stored separately in the zap object and keyed by the head id. With this feature enabled, every dataset affected by an error block is listed in the output of zpool status. In the case of encrypted filesystems with unloaded keys, their snapshots and clones cannot be checked for errors and those errors will not be reported; an "access denied" error will be reported instead.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
enabled_txg
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature has/had bugs, + the result of which is that, if you do a zfs + send -i (or + -R, since it uses + -i) from an affected dataset, the receiving + party will not see any checksum or other errors, but the resulting + destination snapshot will not match the source. Its use by + zfs send + -i has been disabled by default (see + + in zfs(4)).

+

This feature improves performance of incremental sends + (zfs send + -i) and receives for objects with many holes. + The most common case of hole-filled objects is zvols.

+

An incremental send stream from snapshot A + to snapshot B contains + information about every block that changed between A + and B. Blocks which did not + change between those snapshots can be identified and omitted from the + stream using a piece of metadata called the “block birth + time”, but birth times are not recorded for holes (blocks filled + only with zeroes). Since holes created after A + cannot be distinguished from holes created + before A, information about every hole in the + entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. + However, when incrementally replicating filesystems or zvols with many + holes (for example a zvol formatted with another filesystem) a lot of + time will be spent sending and receiving unnecessary information about + holes that already exist on the receiving side.

+

Once the hole_birth feature has been enabled + the block birth times of all new holes will be recorded. Incremental + sends between snapshots created after this feature is enabled will use + this new metadata to avoid sending information about holes that already + exist on the receiving side.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the record size on a dataset to be set + larger than 128 KiB.

+

This feature becomes active once a dataset + contains a file with a block size larger than 128 KiB, and will return + to being enabled once all filesystems that have ever + had their recordsize larger than 128 KiB are destroyed.
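As a sketch (dataset name illustrative), the feature is activated by raising recordsize above 128 KiB on a dataset and then writing data to it:
# zfs set recordsize=1M pool/media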

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the size of dnodes in a + dataset to be set larger than 512 B. This feature becomes + active once a dataset contains an object with a dnode + larger than 512 B, which occurs as a result of setting the + + dataset property to a value other than legacy. The + feature will return to being enabled once all + filesystems that have ever contained a dnode larger than 512 B are + destroyed. Large dnodes allow more data to be stored in the bonus + buffer, thus potentially improving performance by avoiding the use of + spill blocks.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows clones to be deleted faster than the traditional method when a large number of random/sparse writes have been made to the clone. All blocks allocated and freed after a clone is created are tracked by the clone's livelist, which is referenced during the deletion of the clone. The feature is activated when a clone is created and remains active until all clones have been destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
com.delphix:spacemap_v2
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature improves performance for heavily-fragmented + pools, especially when workloads are heavy in random-writes. It does so + by logging all the metaslab changes on a single spacemap every TXG + instead of scattering multiple writes to all the metaslab spacemaps.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

lz4 is a high-performance real-time + compression algorithm that features significantly faster compression and + decompression as well as a higher compression ratio than the older + lzjb compression. Typically, lz4 + compression is approximately 50% faster on compressible data and 200% + faster on incompressible data than lzjb. It is also + approximately 80% faster on decompression, while giving approximately a + 10% better compression ratio.

+

When the lz4_compress feature is set to + enabled, the administrator can turn on + lz4 compression on any dataset on the pool using the + zfs-set(8) command. All newly written metadata will be + compressed with the lz4 algorithm.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored + or raidz configuration.

+

When the multi_vdev_crash_dump feature is + set to enabled, the administrator can use + dumpadm(8) to configure a dump device on a pool + comprised of multiple vdevs.

+

Under FreeBSD and Linux this feature + is unused, but registered for compatibility. New pools created on these + systems will have the feature enabled but will never + transition to active, as this functionality is not + required for crash dump support. Existing pools where this feature is + active can be imported.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
device_removal
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature is an enhancement of + device_removal, which will over time reduce the memory + used to track removed devices. When indirect blocks are freed or + remapped, we note that their part of the indirect mapping is + “obsolete” – no longer needed.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account for space and object usage against the project identifier (ID).

+

The project ID is an object-based attribute. When + upgrading an existing filesystem, objects without a project ID will be + assigned a zero project ID. When this feature is enabled, newly created + objects inherit their parent directories' project ID if the parent's + inherit flag is set (via chattr + + or zfs + project + -s|-C). Otherwise, the + new object's project ID will be zero. An object's project ID can be + changed at any time by the owner (or privileged user) via + chattr -p + prjid or zfs + project -p + prjid.

+

This feature will become active as soon as it is enabled and will never return to being enabled. Each filesystem will be upgraded automatically when remounted, or when a new file is created under that filesystem. The upgrade can also be triggered on filesystems via zfs set version=current fs. The upgrade process runs in the background and may take a while to complete for filesystems containing large numbers of files.
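A hedged example (paths, project ID, and quota value are made up): mark a directory so that new files inherit project ID 42, inspect it, and cap that project's space with the projectquota property described in zfsprops(7):
# zfs project -s -p 42 /pool/fs/builds
# zfs project -d /pool/fs/builds
# zfs set projectquota@42=10G pool/fs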

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
none
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the zpool attach subcommand to attach a new device to a RAID-Z group, expanding the total amount of usable space in the pool. See zpool-attach(8).

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmarks, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of redacted + zfs sends, which create + redaction bookmarks storing the list of blocks redacted by the send that + created them. For more information about redacted sends, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the receiving of redacted + zfs send streams, which + create redacted datasets when received. These datasets are missing some + of their blocks, and so cannot be safely mounted, and their contents + cannot be safely read. For more information about redacted receives, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
redaction_bookmarks
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the redaction list created by zfs redact + to store many more entries. It becomes active when a + redaction list is created with more than 36 entries, and returns to + being enabled when no long redaction lists remain in + the pool. For more information about redacted sends, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to postpone new resilvers if an + existing one is already in progress. Without this feature, any new + resilvers will cause the currently running one to be immediately + restarted from the beginning.

+

This feature becomes active once a resilver + has been deferred, and returns to being enabled when + the deferred resilver begins.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the SHA-512/256 truncated hash + algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit + arithmetic of SHA-512 provides an approximate 50% performance boost over + SHA-256 on 64-bit hardware and is thus a good minimum-change replacement + candidate for systems where hash performance is important, but these + systems cannot for whatever reason utilize the faster + skein and + edonr algorithms.

+

When the sha512 feature is set to + enabled, the administrator can turn on the + sha512 checksum on any dataset using + zfs set + checksum=sha512 + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + sha512, and will return to being + enabled once all filesystems that have ever had their + checksum set to sha512 are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm + that was a finalist in the NIST SHA-3 competition. It provides a very + high security margin and high performance on 64-bit hardware (80% faster + than SHA-256). This implementation also utilizes the new salted + checksumming functionality in ZFS, which means that the checksum is + pre-seeded with a secret 256-bit random key (stored on the pool) before + being fed the data block to be checksummed. Thus the produced checksums + are unique to a given pool, preventing hash collision attacks on systems + with dedup.

+

When the skein feature is set to + enabled, the administrator can turn on the + skein checksum on any dataset using + zfs set + checksum=skein + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + skein, and will return to being + enabled once all filesystems that have ever had their + checksum set to skein are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, it will be activated when a new space map object is created, or an existing space map is upgraded to the new format, and never returns back to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the use of the new space map encoding + which consists of two words (instead of one) whenever it is + advantageous. The new encoding allows space maps to represent large + regions of space more efficiently on-disk while also increasing their + maximum addressable offset.

+

This feature becomes active once it is + enabled, and never returns back to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account for object usage by user and group.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled. + Each filesystem will be upgraded automatically when + remounted, or when a new file is created under that filesystem. The + upgrade can also be triggered on filesystems via + zfs set + version=current + fs. The upgrade process runs in + the background and may take a while to complete for filesystems + containing large amounts of files.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature creates a ZAP object for the root vdev.

+

This feature becomes active after the next + zpool import or + zpool reguid. Properties can be retrieved or set + on the root vdev using zpool + get and zpool + set with + as the vdev + name which is an alias for + .

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables + xattr=sa extended attribute logging + in the ZIL. If enabled, extended attribute changes (both + = + and + xattr=sa) are guaranteed to be + durable if either the dataset had + = + set at the time the changes were made, or sync(2) is + called on the dataset after the changes were made.

+

This feature becomes active when a ZIL is + created for at least one dataset and will be returned to the + enabled state when it is destroyed for all datasets + that use this feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the zpool + checkpoint command that can checkpoint the state + of the pool at the time it was issued and later rewind back to it or + discard it.

+

This feature becomes active when the + zpool checkpoint command + is used to checkpoint the pool. The feature will only return back to + being enabled when the pool is rewound or the + checkpoint has been discarded.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

zstd is a high-performance compression algorithm that features a combination of high compression ratios and high speed. Compared to gzip, zstd offers slightly better compression at much higher speeds. Compared to lz4, zstd offers much better compression while being only modestly slower. Typically, zstd compression speed ranges from 250 to 500 MB/s per thread and decompression speed is over 1 GB/s per thread.

+

When the zstd feature is set to + enabled, the administrator can turn on + zstd compression of any dataset using + zfs set + compress=zstd + dset (see zfs-set(8)). This + feature becomes active once a + compress property has been set to + zstd, and will return to being + enabled once all filesystems that have ever had their + compress property set to zstd are + destroyed.

+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
June 23, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/zpoolconcepts.7.html b/man/master/7/zpoolconcepts.7.html new file mode 100644 index 000000000..9db0ef2d5 --- /dev/null +++ b/man/master/7/zpoolconcepts.7.html @@ -0,0 +1,605 @@ + + + + + + + zpoolconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolconcepts.7

+
+ + + + + +
ZPOOLCONCEPTS(7)Miscellaneous Information ManualZPOOLCONCEPTS(7)
+
+
+

+

zpoolconcepts — + overview of ZFS storage pools

+
+
+

+
+

+

A "virtual device" describes a single device or a + collection of devices, organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system on which it + resides. A file must be specified by a full path.
+
+
A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with N disks of size X can hold X bytes and can withstand N-1 devices failing without losing data.
+
, + raidz1, raidz2, + raidz3
+
A distributed-parity layout, similar to RAID-5/6, with improved + distribution of parity, and which does not suffer from the RAID-5/6 + "write hole", (in which data and parity become inconsistent + after a power loss). Data and parity is striped across all disks within a + raidz group, though not necessarily in a consistent stripe width. +

A raidz group can have single, double, or triple parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N disks of size X with P parity disks can hold approximately (N-P)×X bytes and can withstand P devices failing without losing data. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance.

+
+
, + draid1, draid2, + draid3
+
A variant of raidz that provides integrated distributed hot spares, + allowing for faster resilvering, while retaining the benefits of raidz. A + dRAID vdev is constructed from multiple internal raidz groups, each with + D data devices and + P parity devices. These groups + are distributed over all of the children in order to fully utilize the + available disk performance. +

Unlike raidz, dRAID uses a fixed stripe width + (padding as necessary with zeros) to allow fully sequential resilvering. + This fixed stripe width significantly affects both usable capacity and + IOPS. For example, with the default + + and + + disk sectors the minimum allocation size is + . If + using compression, this relatively large allocation size can reduce the + effective compression ratio. When using ZFS volumes (zvols) and dRAID, + the default of the + + property is increased to account for the allocation size. If a dRAID + pool will hold a significant amount of small blocks, it is recommended + to also add a mirrored special vdev to store those + blocks.

+

In regards to I/O, + performance is similar to raidz since, for any read, all + D data disks must be accessed. + Delivered random IOPS can be reasonably approximated as + .

+

Like raidz, a dRAID can have single-, double-, or + triple-parity. The draid1, draid2, + and draid3 types can be used to specify the parity + level. The draid vdev type is an alias for + draid1.

+

A dRAID with N disks + of size X, D + data disks per redundancy group, + P parity level, and + + distributed hot spares can hold approximately + + bytes and can withstand P + devices failing without losing data.

+
+
[parity][:data][:children][:spares]
+
A non-default dRAID configuration can be specified by appending one or more of the following optional arguments to the draid keyword (an example follows this list):
+
parity
+
The parity level (1-3).
+
data
+
The number of data devices per redundancy group. In general, a smaller + value of D will increase IOPS, + improve the compression ratio, and speed up resilvering at the + expense of total usable capacity. Defaults to 8, + unless + + is less than 8.
+
children
+
The expected number of children. Useful as a cross-check when listing + a large number of devices. An error is returned when the provided + number of children differs.
+
spares
+
The number of distributed hot spares. Defaults to zero.
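For example (device names illustrative), the following creates a pool from a single dRAID2 vdev with 4 data disks per redundancy group, 8 children, and 1 distributed hot spare:
# zpool create pool draid2:4d:8c:1s sda sdb sdc sdd sde sdf sdg sdh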
+
+
+
+
A pseudo-vdev which keeps track of available hot spares for a pool. For + more information, see the Hot Spares + section.
+
+
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+
A device solely dedicated for deduplication tables. The redundancy of this + device should match the redundancy of the other normal devices in the + pool. If more than one dedup device is specified, then allocations are + load-balanced between those devices.
+
+
A device dedicated solely for allocating various kinds of internal + metadata, and optionally small file blocks. The redundancy of this device + should match the redundancy of the other normal devices in the pool. If + more than one special device is specified, then allocations are + load-balanced between those devices. +

For more information on special allocations, see the + Special Allocation + Class section.

+
+
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested arbitrarily. A mirror, raidz or + draid virtual device can only be created with files or disks. Mirrors of + mirrors or other such combinations are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. Keywords like mirror + and raidz are used to distinguish + where a group ends and another begins. For example, the following creates a + pool with two root vdevs, each a mirror of two disks:

+
# zpool + create mypool + mirror sda sdb + mirror sdc sdd
+
+
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy, when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning.

+

The health of the top-level vdev, such as a mirror or raidz + device, is potentially impacted by the state of its associated vdevs or + component devices. A top-level vdev or component device is in one of the + following states:

+
+
+
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors exceeds acceptable levels and the device + is degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
  • +
  • The number of I/O errors exceeds acceptable levels. The device could + not be marked as faulted because there are insufficient replicas to + continue functioning.
  • +
+
+
+
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected + values.
  • +
  • The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
  • +
+
+
+
The device was explicitly taken offline by the + zpool offline + command.
+
+
The device is online and functioning.
+
+
The device was physically removed while the system was running. Device + removal detection is hardware-dependent and may not be supported on all + platforms.
+
+
The device could not be opened. If a pool is imported when a device was + unavailable, then the device will be identified by a unique identifier + instead of its path since the path was never correct in the first + place.
+
+

Checksum errors represent events where a disk returned data that + was expected to be correct, but was not. In other words, these are instances + of silent data corruption. The checksum errors are reported in + zpool status and + zpool events. When a block + is stored redundantly, a damaged block may be reconstructed (e.g. from raidz + parity or a mirrored copy). In this case, ZFS reports the checksum error + against the disks that contained damaged data. If a block is unable to be + reconstructed (e.g. due to 3 disks being damaged in a raidz2 group), it is + not possible to determine which disks were silently corrupted. In this case, + checksum errors are reported for all disks on which the block is stored.

+

If a device is removed and later re-attached to the system, ZFS + attempts to bring the device online automatically. Device attachment + detection is hardware-dependent and might not be supported on all + platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool. But, when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a spare vdev with any + number of devices. For example,

+
# zpool + create pool + mirror sda sdb spare + sdc sdd
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again, if another device + fails.

+

If a pool has a shared spare that is currently being used, the + pool cannot be exported, since other pools may use this shared spare, which + may lead to potential data corruption.

+

Shared spares add some risk. If the pools are imported on + different hosts, and both pools suffer a device failure at the same time, + both could attempt to use the spare at the same time. This may not be + detected, resulting in data corruption.

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

The draid vdev type provides distributed hot + spares. These hot spares are named after the dRAID vdev they're a part of + (draid1-2-3 + specifies spare 3 + of vdev 2, + which is a single parity dRAID) and may only be used + by that dRAID vdev. Otherwise, they behave the same as normal hot + spares.

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
# zpool + create pool sda sdb + log sdc
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.

+

Log devices can be added, replaced, attached, detached, and + removed. In addition, log devices are imported and exported as part of the + pool that contains them. Mirrored devices can be removed by specifying the + top-level mirror vdev.

+
+
+

+

Devices can be added to a storage pool as "cache + devices". These devices provide an additional layer of caching between + main memory and disk. For read-heavy workloads, where the working set size + is much larger than what can be cached in main memory, using cache devices + allows much more of this working set to be served from low latency media. + Using cache devices provides the greatest performance improvement for random + read-workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
# zpool + create pool sda sdb + cache sdc sdd
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.

+

The content of the cache devices is + persistent across reboots and restored asynchronously when importing the + pool in L2ARC (persistent L2ARC). This can be disabled by setting + =0. + For cache devices smaller than + , ZFS does + not write the metadata structures required for rebuilding the L2ARC, to + conserve space. This can be changed with + . + The cache device header + () is + updated even if no metadata structures are written. Setting + =0 + will result in scanning the full-length ARC lists for cacheable content to + be written in L2ARC (persistent ARC). If a cache device is added with + zpool add, its label and + header will be overwritten and its contents will not be restored in L2ARC, + even if the device was previously part of the pool. If a cache device is + onlined with zpool online, + its contents will be restored in L2ARC. This is useful in case of memory + pressure, where the contents of the cache device are not fully restored in + L2ARC. The user can off- and online the cache device when there is less + memory pressure, to fully restore its contents to L2ARC.

+
+
+

+

Before starting critical procedures that include destructive + actions (like zfs destroy), + an administrator can checkpoint the pool's state and, in the case of a + mistake or failure, rewind the entire pool back to the checkpoint. + Otherwise, the checkpoint can be discarded when the procedure has completed + successfully.

+

A pool checkpoint can be thought of as a pool-wide snapshot and + should be used with care as it contains every part of the pool's state, from + properties to vdev configuration. Thus, certain operations are not allowed + while a pool has a checkpoint. Specifically, vdev removal/attach/detach, + mirror splitting, and changing the pool's GUID. Adding a new vdev is + supported, but in the case of a rewind it will have to be added again. + Finally, users of this feature should keep in mind that scrubs in a pool + that has a checkpoint do not repair checkpointed data.

+

To create a checkpoint for a pool:

+
# zpool + checkpoint pool
+

To later rewind to its checkpointed state, you need to first + export it and then rewind it during import:

+
# zpool + export pool
+
# zpool + import --rewind-to-checkpoint + pool
+

To discard the checkpoint from a pool:

+
# zpool + checkpoint -d + pool
+

Dataset reservations (controlled by the + + and + + properties) may be unenforceable while a checkpoint exists, because the + checkpoint is allowed to consume the dataset's reservation. Finally, data + that is part of the checkpoint but has been freed in the current state of + the pool won't be scanned during a scrub.

+
+
+

+

Allocations in the special class are dedicated to specific block + types. By default, this includes all metadata, the indirect blocks of user + data, and any deduplication tables. The class can also be provisioned to + accept small file blocks.

+

A pool must always have at least one normal + (non-dedup/-special) vdev before other + devices can be assigned to the special class. If the + special class becomes full, then allocations intended for + it will spill back into the normal class.

+

Deduplication tables can be excluded + from the special class by unsetting the + + ZFS module parameter.

+

Inclusion of small file blocks in the special class is opt-in. Each dataset can control the size of small file blocks allowed in the special class by setting the special_small_blocks property to nonzero. See zfsprops(7) for more info on this property.
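A hedged sketch (pool and device names are illustrative): create a pool with a mirrored special vdev, then allow blocks of up to 32 KiB from the pool's datasets into the special class:
# zpool create pool raidz sda sdb sdc special mirror sdd sde
# zfs set special_small_blocks=32K pool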

+
+
+
+ + + + + +
April 7, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/zpoolprops.7.html b/man/master/7/zpoolprops.7.html new file mode 100644 index 000000000..f26e110d0 --- /dev/null +++ b/man/master/7/zpoolprops.7.html @@ -0,0 +1,511 @@ + + + + + + + zpoolprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolprops.7

+
+ + + + + +
ZPOOLPROPS(7)Miscellaneous Information ManualZPOOLPROPS(7)
+
+
+

+

zpoolprops — + properties of ZFS storage pools

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

User properties have no effect on ZFS behavior. Use them to + annotate pools in a way that is meaningful in your environment. For more + information about user properties, see the + User Properties section.

+

The following are read-only properties:

+
+
+
Amount of storage used within the pool. See + fragmentation and free for more + information.
+
+
The ratio of the total amount of storage that would be required to store + all the cloned blocks without cloning to the actual storage used. The + bcloneratio property is calculated as: +

((bclonesaved + bcloneused) × 100) / bcloneused

+
+
+
The amount of additional storage that would be required if block cloning + was not used.
+
+
The amount of storage used by cloned blocks.
+
+
Percentage of pool space used. This property can also be referred to by + its shortened column name, + .
+
+
Amount of uninitialized space within the pool or device that can be used + to increase the total capacity of the pool. On whole-disk vdevs, this is + the space beyond the end of the GPT – typically occurring when a + LUN is dynamically expanded or a disk replaced with a larger one. On + partition vdevs, this is the space appended to the partition after it was + added to the pool – most likely by resizing it in-place. The space + can be claimed for the pool by bringing it online with + + or using zpool online + -e.
+
+
The amount of fragmentation in the pool. As the amount of space + allocated increases, it becomes more difficult to locate + free space. This may result in lower write performance + compared to pools with more unfragmented free space.
+
+
The amount of free space available in the pool. By contrast, the + zfs(8) available property describes + how much new data can be written to ZFS filesystems/volumes. The zpool + free property is not generally useful for this purpose, + and can be substantially more than the zfs available + space. This discrepancy is due to several factors, including raidz parity; + zfs reservation, quota, refreservation, and refquota properties; and space + set aside by + + (see zfs(4) for more information).
+
+
After a file system or snapshot is destroyed, the space it was using is + returned to the pool asynchronously. freeing is the + amount of space remaining to be reclaimed. Over time + freeing will decrease while free + increases.
+
+
A unique identifier for the pool.
+
+
The current health of the pool. Health can be one of ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, or UNAVAIL.
+
+
Space not released while freeing due to corruption, now + permanently leaked into the pool.
+
+
A unique identifier for the pool. Unlike the guid property, this identifier is generated every time we load the pool (i.e. does not persist across imports/exports) and never changes while the pool is loaded (even if a reguid operation takes place).
+
+
Total size of the storage pool.
+
guid
+
Information about unsupported features that are enabled on the pool. See + zpool-features(7) for details.
+
+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of the + data being written. In addition, ZFS reserves some space for internal + accounting that the zfs(8) command takes into account, but + the zpoolprops command does not. For non-full pools + of a reasonable size, these effects should be invisible. For small pools, or + pools that are close to being completely full, these discrepancies may + become more noticeable.
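For example (pool name illustrative), the read-only space and health properties described above can be inspected with zpool get:
# zpool get size,allocated,free,freeing,fragmentation,capacity,health tank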

+

The following property can be set at creation time and import + time:

+
+
+
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+

The following property can be set only at import time:

+
+
=on|off
+
If set to on, the pool will be imported in read-only + mode. This property can also be referred to by its shortened column name, + .
+
+

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
+
=ashift
+
Pool sector size exponent, to the power of 2 (internally referred to as ashift). Values from 9 to 16, inclusive, are valid; also, the value 0 (the default) means to auto-detect using the kernel's block layer and a ZFS internal exception list. I/O operations will be aligned to the specified size boundaries. Additionally, the minimum (disk) write size will be set to the specified size, so this represents a space/performance trade-off. For optimal performance, the pool sector size should be greater than or equal to the sector size of the underlying disks. The typical case for setting this property is when performance is important and the underlying disks use 4KiB sectors but report 512B sectors to the OS (for compatibility reasons); in that case, set ashift=12 (which is 2^12 = 4096). When set, this property is used as the default hint value in subsequent vdev operations (add, attach and replace). Changing this value will not modify any existing vdev, not even on disk replacement; however it can be used, for instance, to replace a dying 512B-sector disk with a newer 4KiB-sector device: this will probably result in bad performance but at the same time could prevent loss of data. (A combined example of setting this and other pool properties appears after this list.)
+
=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set + to on, the pool will be resized according to the size of + the expanded device. If the device is part of a mirror or raidz then all + devices within that mirror/raidz group must be expanded before the new + space is made available to the pool. The default behavior is + off. This property can also be referred to by its + shortened column name, + .
+
=on|off
+
Controls automatic device replacement. If set to off, + device replacement must be initiated by the administrator by using the + zpool replace command. If + set to on, any new device, found in the same physical + location as a device that previously belonged to the pool, is + automatically formatted and replaced. The default behavior is + off. This property can also be referred to by its + shortened column name, + . + Autoreplace can also be used with virtual disks (like device mapper) + provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf. + See the vdev_id(8) manual page for more details. + Autoreplace and autoonline require the ZFS Event Daemon be configured and + running. See the zed(8) manual page for more + details.
+
=on|off
+
When set to on, space which has been recently freed, and is no longer allocated by the pool, will be periodically trimmed. This allows block device vdevs which support BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system supports hole-punching, to reclaim unused blocks. The default value for this property is off.

Automatic TRIM does not immediately reclaim blocks after a free. Instead, it will optimistically delay allowing smaller ranges to be aggregated into a few larger ones. These can then be issued more efficiently to the storage. TRIM on L2ARC devices is enabled by setting l2arc_trim_ahead.

+

Be aware that automatic trimming of recently freed data blocks can put significant stress on the underlying storage devices. This will vary depending on how well the specific device handles these commands. For lower-end devices it is often possible to achieve most of the benefits of automatic trimming by running an on-demand (manual) TRIM periodically using the zpool trim command.

+
+
=|pool[/dataset]
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the value none + creates a temporary pool that is never cached, and the "" (empty + string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.

+
+
=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
+
=off|legacy|file[,file]…
+
Specifies that the pool maintain compatibility with specific feature sets. When set to off (or unset), compatibility is disabled (all features may be enabled); when set to legacy, no features may be enabled. When set to a comma-separated list of filenames (each filename may either be an absolute path, or relative to /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d) the lists of requested features are read from those files, separated by whitespace and/or commas. Only features present in all files may be enabled.

See zpool-features(7), + zpool-create(8) and zpool-upgrade(8) + for more information on the operation of compatibility feature sets.

+
+
=number
+
This property is deprecated and no longer has any effect.
+
=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
+
+
Blocks all I/O access until the device connectivity is recovered and + the errors are cleared with zpool + clear. This is the default behavior.
+
+
Returns EIO to any new write I/O requests but + allows reads to any of the remaining healthy devices. Any write + requests that have yet to be committed to disk would be blocked.
+
+
Prints out a message to the console and generates a system crash + dump.
+
+
+
feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(7) for details on feature states.
+
=on|off
+
Controls whether information about snapshots associated with this pool is + output when zfs list is + run without the -t option. The default value is + off. This property can also be referred to by its + shortened name, + .
+
=on|off
+
Controls whether a pool activity check should be performed during + zpool import. When a pool + is determined to be active it cannot be imported, even with the + -f option. This property is intended to be used in + failover configurations where multiple hosts have access to a pool on + shared storage. +

Multihost provides protection on import only. It does not + protect against an individual device being used in multiple pools, + regardless of the type of vdev. See the discussion under + zpool create.

+

When this property is on, periodic writes to storage occur to show the pool is in use. See zfs_multihost_interval in the zfs(4) manual page. In order to enable this property each host must set a unique hostid. See zgenhostid(8) and spl(4) for additional details. The default value is off.

+
+
=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
+
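A hypothetical sketch combining several of the properties above (pool and device names, and the chosen values, are illustrative only):
# zpool create -o ashift=12 -o autotrim=on tank mirror sda sdb
# zpool set comment="scratch pool for nightly builds" tank
# zgenhostid                      # give this host a unique hostid first
# zpool set multihost=on tank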

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate pools.

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings and are never + validated. All of the commands that operate on properties + (zpool list, + zpool get, + zpool set, and so forth) can + be used to manipulate both native properties and user properties. Use + zpool set + name= to clear a user property. Property values are + limited to 8192 bytes.
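For instance (pool name and property name hypothetical, using a reversed DNS module component as suggested above):
# zpool set com.example:backup-policy=daily tank
# zpool get com.example:backup-policy tank
# zpool set com.example:backup-policy= tank    # clears the property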

+
+
+
+ + + + + +
April 18, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/fsck.zfs.8.html b/man/master/8/fsck.zfs.8.html new file mode 100644 index 000000000..25f661bf6 --- /dev/null +++ b/man/master/8/fsck.zfs.8.html @@ -0,0 +1,292 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
FSCK.ZFS(8)System Manager's ManualFSCK.ZFS(8)
+
+
+

+

fsck.zfsdummy + ZFS filesystem checker

+
+
+

+ + + + + +
fsck.zfs[options] + dataset
+
+
+

+

fsck.zfs is a thin shell wrapper that at + most checks the status of a dataset's container pool. It is installed by + OpenZFS because some Linux distributions expect a fsck helper for all + filesystems.

+

If more than one dataset is specified, each + is checked in turn and the results binary-ored.

+
+
+

+

Ignored.

+
+
+

+

ZFS datasets are checked by running zpool + scrub on the containing pool. An individual ZFS + dataset is never checked independently of its pool, which is unlike a + regular filesystem.

+

However, the fsck(8) interface still allows it to communicate some errors: if the dataset is in a degraded pool, then fsck.zfs will return exit code 4 to indicate an uncorrected filesystem error.

+

Similarly, if the dataset is in a faulted pool and has a legacy /etc/fstab record, then fsck.zfs will return exit code 8 to indicate a fatal operational error.
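For example (dataset name hypothetical), checking a dataset whose containing pool is healthy is effectively a no-op and would be expected to exit successfully:
# fsck.zfs tank/home; echo $?
0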

+
+
+

+

fstab(5), fsck(8), + zpool-scrub(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/index.html b/man/master/8/index.html new file mode 100644 index 000000000..9d98df6af --- /dev/null +++ b/man/master/8/index.html @@ -0,0 +1,313 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/mount.zfs.8.html b/man/master/8/mount.zfs.8.html new file mode 100644 index 000000000..de220ecd1 --- /dev/null +++ b/man/master/8/mount.zfs.8.html @@ -0,0 +1,299 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
MOUNT.ZFS(8)System Manager's ManualMOUNT.ZFS(8)
+
+
+

+

mount.zfsmount + ZFS filesystem

+
+
+

+ + + + + +
mount.zfs[-sfnvh] [-o + options] dataset + mountpoint
+
+
+

+

The mount.zfs helper is used by mount(8) to mount filesystem snapshots and legacy ZFS filesystems, as well as by zfs(8) when the ZFS_MOUNT_HELPER environment variable is not set. Users should invoke zfs(8) directly in most cases.

+

options are handled according to the Temporary Mount Point Properties section in zfsprops(7), except for those described below.

+

If /etc/mtab is a regular file and + -n was not specified, it will be updated via + libmount.
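A hypothetical manual invocation (dataset and mount point names illustrative), mounting a legacy-mountpoint filesystem and a snapshot:
# mount.zfs tank/legacy /mnt/legacy
# mount.zfs tank/home@yesterday /mnt/restore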

+
+
+

+
+
-s
Ignore unknown (sloppy) mount options.
-f
Do everything except actually executing the system call.
-n
Never update /etc/mtab.
-v
Print resolved mount options and parser state.
-h
Print the usage message.
-o zfsutil
This private flag indicates that mount(8) is being called by the zfs(8) command.
+
+
+
+

+

fstab(5), mount(8), + zfs-mount(8)

+
+
+ + + + + +
May 24, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/vdev_id.8.html b/man/master/8/vdev_id.8.html new file mode 100644 index 000000000..10f46bc3a --- /dev/null +++ b/man/master/8/vdev_id.8.html @@ -0,0 +1,324 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
VDEV_ID(8)System Manager's ManualVDEV_ID(8)
+
+
+

+

vdev_idgenerate + user-friendly names for JBOD disks

+
+
+

+ + + + + +
vdev_id-d dev + -c config_file + -g + sas_direct|sas_switch|scsi + -m -p + phys_per_port
+
+
+

+

vdev_id is a udev helper which parses vdev_id.conf(5) to map a physical path in a storage topology to a channel name. The channel name is combined with a disk enclosure slot number to create an alias that reflects the physical location of the drive. This is particularly helpful when it comes to tasks like replacing failed drives. Slot numbers may also be remapped in case the default numbering is unsatisfactory. The drive aliases will be created as symbolic links in /dev/disk/by-vdev.

+

The currently supported topologies are + sas_direct, sas_switch, and + scsi. A multipath mode is supported in which dm-mpath + devices are handled by examining the first running component disk as + reported by the driver. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating + aliases based on existing udev links in the /dev hierarchy using the + configuration + file keyword. See vdev_id.conf(5) for details.
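As a hypothetical check (device path illustrative), the mapping for a single disk can be computed by hand with the options described below:
# vdev_id -d /dev/sda -c /etc/zfs/vdev_id.conf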

+
+
+

+
+
+ device
+
The device node to classify, like /dev/sda.
+
+ config_file
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
channels are uniquely identified by a SAS switch port number
+
+
+
+
Only handle dm-multipath devices. If specified, examine the first running + component disk of a dm-multipath device as provided by the driver to + determine the physical path.
+
+ phys_per_port
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id internally uses this value to + determine which HBA or switch port a device is connected to. The default + is .
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zdb.8.html b/man/master/8/zdb.8.html new file mode 100644 index 000000000..08e6fe94a --- /dev/null +++ b/man/master/8/zdb.8.html @@ -0,0 +1,806 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's ManualZDB(8)
+
+
+

+

zdbdisplay ZFS + storage pool debugging and consistency information

+
+
+

+ + + + + +
zdb[-AbcdDFGhikLMNPsTvXYy] + [-e [-V] + [-p path]…] + [-I inflight-I/O-ops] + [-o + var=value]… + [-t txg] + [-U cache] + [-x dumpdir] + [-K key] + [poolname[/dataset|objset-ID]] + [object|range…]
+
+ + + + + +
zdb[-AdiPv] [-e + [-V] [-p + path]…] [-U + cache] [-K + key] + poolname[/dataset|objset-ID] + [object|range…]
+
+ + + + + +
zdb-B [-e + [-V] [-p + path]…] [-U + cache] [-K + key] + poolname/objset-ID + [backup-flags]
+
+ + + + + +
zdb-C [-A] + [-U cache] + [poolname]
+
+ + + + + +
zdb-E [-A] + word0:word1:…:word15
+
+ + + + + +
zdb-l [-Aqu] + device
+
+ + + + + +
zdb-m [-AFLPXY] + [-e [-V] + [-p path]…] + [-t txg] + [-U cache] + poolname [vdev + [metaslab]…]
+
+ + + + + +
zdb-O [-K + key] dataset path
+
+ + + + + +
zdb-r [-K + key] dataset path + destination
+
+ + + + + +
zdb-R [-A] + [-e [-V] + [-p path]…] + [-U cache] + poolname + vdev:offset:[lsize/]psize[:flags]
+
+ + + + + +
zdb-S [-AP] + [-e [-V] + [-p path]…] + [-U cache] + poolname
+
+
+

+

The zdb utility displays information about a ZFS pool useful for debugging and performs some amount of consistency checking. It is not a general-purpose tool, and its options (and facilities) may change. It is not an fsck(8) utility.

+

The output of this command in general reflects the on-disk structure of a ZFS pool, and is inherently unstable. The precise output of most invocations is not documented; a knowledge of ZFS internals is assumed.

+

If the dataset argument does not contain any "/" or "@" characters, it is interpreted as a pool name. The root dataset can be specified as "pool/".

+

zdb is an "offline" tool; it + accesses the block devices underneath the pools directly from userspace and + does not care if the pool is imported or datasets are mounted (or even if + the system understands ZFS at all). When operating on an imported and active + pool it is possible, though unlikely, that zdb may interpret inconsistent + pool data and behave erratically.

+
+
+

+

Display options:

+
+
, + --block-stats
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
, + --backup
+
Generate a backup stream, similar to zfs send, but for the numeric objset ID, and without opening the dataset. This can be useful in recovery scenarios if dataset metadata has become corrupted but the dataset itself is readable. The optional flags argument is a string of one or more of the letters e, L, c, and w, which correspond to the same flags in zfs-send(8).
+
, + --checksum
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
, + --config
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
, + --datasets
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. See + -N for determining if + poolname[/dataset|objset-ID] + is to use the specified + dataset|objset-ID as a string + (dataset name) or a number (objset ID) when datasets have numeric names. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs or object ID ranges are specified, display + information about those specific objects or ranges only.

+

An object ID range is specified in terms of a colon-separated + tuple of the form + ⟨start⟩:⟨end⟩[:⟨flags⟩]. The + fields start and end are + integer object identifiers that denote the upper and lower bounds of the + range. An end value of -1 specifies a range with + no upper bound. The flags field optionally + specifies a set of flags, described below, that control which object + types are dumped. By default, all object types are dumped. A minus sign + (-) negates the effect of the flag that follows it and has no effect + unless preceded by the A flag. For example, the + range 0:-1:A-d will dump all object types except for directories.

+

+
+
+
Dump all objects (this is the default)
+
+
Dump ZFS directory objects
+
+
Dump ZFS plain file objects
+
+
Dump SPA space map objects
+
+
Dump ZAP objects
+
-
+
Negate the effect of next flag
+
+
+
, + --dedup-stats
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + × compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
, + --embedded-block-pointer=word0:word1:…:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
, + --history
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
, + --intent-logs
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
, + --checkpointed-state
+
Examine the checkpointed state of the pool. Note, the on disk format of + the pool is not reverted to the checkpointed state.
+
, + --label=device
+
Read the vdev labels and L2ARC header from the specified device. + zdb -l will return 0 if + valid label was found, 1 if error occurred, and 2 if no valid labels were + found. The presence of L2ARC header is indicated by a specific sequence + (L2ARC_DEV_HDR_MAGIC). If there is an accounting error in the size or the + number of L2ARC log blocks zdb + -l will return 1. Each unique configuration is + displayed only once.
+
+ device
+
In addition display label space usage stats. If a valid L2ARC header was + found also display the properties of log blocks used for restoring L2ARC + contents (persistent L2ARC).
+
+ device
+
Display every configuration, unique or not. If a valid L2ARC header was + found also display the properties of log entries in log blocks used for + restoring L2ARC contents (persistent L2ARC). +

If the -q option is also specified, + don't print the labels or the L2ARC header.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
, + --disable-leak-tracking
+
Disable leak detection and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
, + --metaslabs
+
Display the offset, spacemap, free space of each metaslab, all the log + spacemaps and their obsolete entry statistics.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
, + --metaslab-groups
+
Display all "normal" vdev metaslab group information - per-vdev + metaslab count, fragmentation, and free space histogram, as well as + overall pool fragmentation and histogram.
+
+
"Special" vdevs are added to -M's normal output.
+
, + --object-lookups=dataset + path
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Same as -d but force zdb to interpret the + [dataset|objset-ID] in + [poolname[/dataset|objset-ID]] + as a numeric objset ID.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
, + --copy-object=dataset path + destination
+
Copy the specified path inside of the + dataset to the specified destination. Specified + path must be relative to the root of + dataset. This option can be combined with + -v for increasing verbosity.
+
, + --read-block=poolname + vdev:offset:[lsize/]psize[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) + offset (the offset within the vdev) + size (the physical size, or logical size / + physical size) of the block to read and, optionally, + flags (a set of flags, described below).

+

+
+
+ offset
+
Print block pointer at hex offset
+
+
Calculate and display checksums
+
+
Decompress the block. Set environment variable + ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
Verbose output for guessing compression algorithm
+
+
+
, + --io-stats
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
, + --simulate-dedup
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
, + --brt-stats
+
Display block reference table (BRT) statistics, including the size of unique blocks cloned, the space saving as a result of cloning, and the saving ratio.
+
+
Display the per-vdev BRT statistics, including total references.
+
+
Dump the contents of the block reference tables.
+
, + --uberblock
+
Display the current uberblock.
+
+

Other options:

+
+
, + --ignore-assertions
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
, + --exported=[-p + path]…
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
, + --dump-blocks=dumpdir
+
All blocks accessed will be copied to files in the specified directory. + The blocks will be placed in sparse files whose name is the same as that + of the file or device read. zdb can be then run on + the generated files. Note that the -bbc flags are + sufficient to access (and thus copy) all metadata on the pool.
+
, + --automatic-rewind
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
, + --dump-debug-msg
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
, + --inflight=inflight-I/O-ops
+
Limit the number of outstanding checksum I/O operations to the specified + value. The default value is 200. This option affects the performance of + the -c option.
+
, + --key=key
+
Decryption key needed to access an encrypted dataset. This will cause + zdb to attempt to unlock the dataset using the + encryption root, key format and other encryption parameters on the given + dataset. zdb can still inspect pool and dataset + structures on encrypted datasets without unlocking them, but will not be + able to access file names and attributes and object contents. + WARNING: The raw decryption key and any decrypted data will be in + user memory while zdb is running. Other user + programs may be able to extract it by inspecting + zdb as it runs. Exercise extreme caution when + using this option in shared or uncontrolled environments.
+
, + --option=var=value
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
, + --parseable
+
Print numbers in an unscaled form more amenable to parsing, e.g. 1000000 rather than 1M.
+
, + --txg=transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
, + --cachefile=cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
, + --verbose
+
Enable verbosity. Specify multiple times for increased verbosity.
+
, + --verbatim
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
, + --extreme-rewind
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
, + --all-reconstruction
+
Attempt all possible combinations when reconstructing indirect split + blocks. This flag disables the individual I/O deadman timer in order to + allow as much time as required for the attempted reconstruction.
+
, + --livelist
+
Perform validation for livelists that are being deleted. Scans through the + livelist and metaslabs, checking for duplicate entries and compares the + two, checking for potential double frees. If it encounters issues, + warnings will be printed, but the command will not necessarily fail.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+

+
+
# zdb -C rpool
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ …
+
+
+
+

+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ …
+
+
+
+

+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
+

+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ …
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
+
+
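As a further hypothetical example (dataset and file names illustrative), the -O option described above can look up a single file's on-disk metadata by path:
# zdb -O rpool/export/home marks/notes.txt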
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
November 18, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zed.8.html b/man/master/8/zed.8.html new file mode 100644 index 000000000..bc0581461 --- /dev/null +++ b/man/master/8/zed.8.html @@ -0,0 +1,474 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Manager's ManualZED(8)
+
+
+

+

ZEDZFS Event + Daemon

+
+
+

+ + + + + +
ZED[-fFhILMvVZ] [-d + zedletdir] [-p + pidfile] [-P + path] [-s + statefile] [-j + jobs] [-b + buflen]
+
+
+

+

The ZED (ZFS Event Daemon) monitors events + generated by the ZFS kernel module. When a zevent (ZFS Event) is posted, the + ZED will run any ZEDLETs (ZFS Event Daemon Linkage + for Executable Tasks) that have been enabled for the corresponding zevent + class.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Don't daemonise: remain attached to the controlling terminal, log to the + standard I/O streams.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Request that the daemon idle rather than exit when the kernel modules are + not loaded. Processing of events will start, or resume, when the kernel + modules are (re)loaded. Under Linux the kernel modules cannot be unloaded + while the daemon is running.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+ zedletdir
+
Read the enabled ZEDLETs from the specified directory.
+
+ pidfile
+
Write the daemon's process ID to the specified file.
+
+ path
+
Custom $PATH for zedlets to use. Normally zedlets + run in a locked-down environment, with hardcoded paths to the ZFS commands + ($ZFS, $ZPOOL, + $ZED, ), and a + hard-coded $PATH. This is done for security + reasons. However, the ZFS test suite uses a custom PATH for its ZFS + commands, and passes it to ZED with + -P. In short, -P is only + to be used by the ZFS test suite; never use it in production!
+
+ statefile
+
Write the daemon's state to the specified file.
+
+ jobs
+
Allow at most jobs ZEDLETs to run concurrently, + delaying execution of new ones until they finish. Defaults to + .
+
+ buflen
+
Cap kernel event buffer growth to buflen entries. + This buffer is grown when the daemon misses an event, but results in + unreclaimable memory use in the kernel. A value of + removes the + cap. Defaults to + .
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the + zpool events + -v command.

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory + (zedletdir). These can be symlinked or copied from the + + directory; symlinks allow for automatic updates from the installed ZEDLETs, + whereas copies preserve local modifications. As a security measure, since + ownership change is a privileged operation, ZEDLETs must be owned by root. + They must have execute permissions for the user, but they must not have + write permissions for group or other. Dotfiles are ignored.

+

ZEDLETs are named after the zevent class for which they should be invoked. In particular, a ZEDLET will be invoked for a given zevent if either its class or subclass string is a prefix of its filename (and is followed by a non-alphabetic character). As a special case, the prefix all matches all zevents. Multiple ZEDLETs may be invoked for a given zevent.

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given zevent. They should be written under the presumption they can be invoked concurrently, and they should use appropriate locking to access any shared resources. Common variables used by ZEDLETs can be stored in the default rc file which is sourced by scripts; these variables should be prefixed with ZED_.

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner:

+
  1. it is prefixed with ZEVENT_,
  2. it is converted to uppercase, and
  3. each non-alphanumeric character is converted to an underscore.
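For example (the nvpair names here are only illustrative of typical zpool events -v output), the conversion would map:
pool_guid   ->  ZEVENT_POOL_GUID
vdev_state  ->  ZEVENT_VDEV_STATE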

Some additional environment variables have been defined to present + certain nvpair values in a more convenient form. An incomplete list of + zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as “seconds + nanoseconds” since the Epoch.
+
+
The seconds component of + ZEVENT_TIME.
+
+
The nanoseconds component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The alias ("name-version-release") string of the ZFS distribution the daemon is part of.
+
+
The ZFS version the daemon is part of.
+
+
The ZFS release the daemon is part of.
+
+

ZEDLETs may need to call other ZFS commands. The installation paths of the following executables are defined as environment variables: ZDB, ZED, ZFS, ZINJECT, and ZPOOL. These variables may be overridden in the rc file.
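A minimal ZEDLET sketch, assuming a hypothetical filename such as statechange-log.sh placed in the enabled-zedlets directory (root-owned, executable, not group- or other-writable), might look like this:
#!/bin/sh
# Hypothetical ZEDLET: append one line per matching zevent.
# ZEVENT_EID, ZEVENT_CLASS and ZEVENT_TIME_SECS are supplied by the ZED
# as described above; the log path is illustrative only.
echo "eid=${ZEVENT_EID} class=${ZEVENT_CLASS} time=${ZEVENT_TIME_SECS}" \
    >> /var/log/zed-custom.log
exit 0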

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@zfsexecdir@/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state.
+
+
+
+

+
+
SIGHUP
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
SIGINT, SIGTERM
Terminate the daemon.
+
+
+
+

+

zfs(8), zpool(8), + zpool-events(8)

+
+
+

+

The ZED requires root privileges.

+

Do not taunt the ZED.

+
+
+

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Internationalization support via gettext has not been added.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-allow.8.html b/man/master/8/zfs-allow.8.html new file mode 100644 index 000000000..044008a46 --- /dev/null +++ b/man/master/8/zfs-allow.8.html @@ -0,0 +1,956 @@ + + + + + + + zfs-allow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-allow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the + exception of mount, + , + , + , + , + and + . + These permissions cannot be delegated because the Linux + mount(8) command restricts modifications of the global + namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAMETYPENOTES



allowsubcommandMust also have the permission that is being allowed
bookmarksubcommand
clonesubcommandMust also have the create ability and mount ability in + the origin file system
createsubcommandMust also have the mount ability. Must also have the + refreservation ability to create a non-sparse volume.
destroysubcommandMust also have the mount ability
diffsubcommandAllows lookup of paths within a dataset given an object number, and + the ability to create snapshots necessary to zfs diff.
holdsubcommandAllows adding a user hold to a snapshot
load-keysubcommandAllows loading and unloading of encryption key (see zfs + load-key and zfs unload-key).
change-keysubcommandAllows changing an encryption key via zfs change-key.
mountsubcommandAllows mounting/umounting ZFS datasets
promotesubcommandMust also have the mount and promote ability in the + origin file system
receivesubcommandMust also have the mount and create ability
releasesubcommandAllows releasing a user hold which might destroy the snapshot
renamesubcommandMust also have the mount and create ability in the new + parent
rollbacksubcommandMust also have the mount ability
sendsubcommand
sharesubcommandAllows sharing file systems over NFS or SMB protocols
snapshotsubcommandMust also have the mount ability
groupquotaotherAllows accessing any groupquota@ property
groupobjquotaotherAllows accessing any groupobjquota@ + property
groupusedotherAllows reading any groupused@ property
groupobjusedotherAllows reading any groupobjused@ property
userpropotherAllows changing any user property
userquotaotherAllows accessing any userquota@ property
userobjquotaotherAllows accessing any userobjquota@ + property
userusedotherAllows reading any userused@ property
userobjusedotherAllows reading any userobjused@ property
projectobjquotaotherAllows accessing any projectobjquota@ + property
projectquotaotherAllows accessing any projectquota@ + property
projectobjusedotherAllows reading any projectobjused@ + property
projectusedotherAllows reading any projectused@ property
aclinheritproperty
aclmodeproperty
acltypeproperty
atimeproperty
canmountproperty
casesensitivityproperty
checksumproperty
compressionproperty
contextproperty
copiesproperty
dedupproperty
defcontextproperty
devicesproperty
dnodesizeproperty
encryptionproperty
execproperty
filesystem_limitproperty
fscontextproperty
keyformatproperty
keylocationproperty
logbiasproperty
mlslabelproperty
mountpointproperty
nbmandproperty
normalizationproperty
overlayproperty
pbkdf2itersproperty
primarycacheproperty
quotaproperty
readonlyproperty
recordsizeproperty
redundant_metadataproperty
refquotaproperty
refreservationproperty
relatimeproperty
reservationproperty
rootcontextproperty
secondarycacheproperty
setuidproperty
sharenfsproperty
sharesmbproperty
snapdevproperty
snapdirproperty
snapshot_limitproperty
special_small_blocksproperty
syncproperty
utf8onlyproperty
versionproperty
volblocksizeproperty
volmodeproperty
volsizeproperty
vscanproperty
xattrproperty
zonedproperty
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs + allow command. No permissions are explicitly + denied, so other permissions granted are still in effect. For example, if + the permission is granted by an ancestor. If no permissions are specified, + then all permissions for the specified user, + group, or everyone are removed. + Specifying everyone (or using the + -e option) only removes the permissions that were + granted to everyone, not all permissions for every user and group. See the + zfs allow command for a + description of the -ldugec options. +
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
+
+

+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not to destroy anyone else's file system. The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-bookmark.8.html b/man/master/8/zfs-bookmark.8.html new file mode 100644 index 000000000..eb5f332b2 --- /dev/null +++ b/man/master/8/zfs-bookmark.8.html @@ -0,0 +1,291 @@ + + + + + + + zfs-bookmark.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-bookmark.8

+
+ + + + + +
ZFS-BOOKMARK(8)System Manager's ManualZFS-BOOKMARK(8)
+
+
+

+

zfs-bookmark — + create bookmark of ZFS snapshot

+
+
+

+ + + + + +
zfsbookmark + snapshot|bookmark + newbookmark
+
+
+

+

Creates a new bookmark of the given snapshot or bookmark. + Bookmarks mark the point in time when the snapshot was created, and can be + used as the incremental source for a zfs + send.

+

When creating a bookmark from an existing redaction bookmark, the resulting bookmark is not a redaction bookmark.

+

This feature must be enabled to be used. See zpool-features(7) for details on ZFS feature flags and the bookmarks feature.

+
+
+

+
+

+

The following example creates a bookmark to a snapshot. This + bookmark can then be used instead of a snapshot in send streams.

+
# zfs + bookmark + rpool@snapshot + rpool#bookmark
+
+
+
+

+

zfs-destroy(8), zfs-send(8), + zfs-snapshot(8)

+
+
+ + + + + +
May 12, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-change-key.8.html b/man/master/8/zfs-change-key.8.html new file mode 100644 index 000000000..ded668b3e --- /dev/null +++ b/man/master/8/zfs-change-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-change-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-change-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all + children that inherit the keylocation property to be + accessed. The key will be expected in the format specified by the + keyformat and location specified by the + keylocation property. Note that if the + keylocation is set to prompt the + terminal will interactively wait for the key to be entered. Loading a key + will not automatically mount the dataset. If that functionality is + desired, zfs mount + -l will ask for the key and mount the dataset (see + zfs-mount(8)). Once the key is loaded the + keystatus property will become + . +
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all + of its children that inherit the keylocation property. + This requires that the dataset is not currently open or mounted. Once the + key is unloaded the keystatus property will become + . +
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + volume data, file attributes, ACLs, permission bits, directory listings, + FUID mappings, and + / + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires + specifying the encryption and + keyformat properties at creation time, along with an + optional keylocation and + pbkdf2iters. After entering an encryption key, the created + dataset will become an encryption root. Any descendant datasets will inherit + their encryption key from the encryption root by default, meaning that + loading, unloading, or changing the key for the encryption root will + implicitly do the same for all inheriting datasets. If this inheritance is + not desired, simply supply a keyformat when creating the + child dataset or use zfs + change-key to break an existing relationship, + creating a new encryption root on the child. Note that the child's + keyformat may match that of the parent while still + creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, + and pbkdf2iters) do not inherit + like other ZFS properties and instead use the value determined by their + encryption root. Encryption root inheritance can be tracked via the + read-only + + property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted + datasets. Encrypted data cannot be embedded via the + + feature. Encrypted datasets may not have + = + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption, + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost for each block written.
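As an illustrative sketch of the workflow described above (the dataset names and the pbkdf2iters value are assumptions, not output of any particular release): +
# zfs create -o encryption=on -o keyformat=passphrase tank/secure +
# zfs load-key tank/secure            (only needed after a reboot or an explicit unload-key) +
# zfs change-key -o pbkdf2iters=1000000 tank/secure +
# zfs change-key -i -l tank/secure/project   (rejoin a child to its parent's key; the child must already be an encryption root) +
Changing the key rewraps the master key only; as noted above, existing data is not re-encrypted.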

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-clone.8.html b/man/master/8/zfs-clone.8.html new file mode 100644 index 000000000..ac2bb707c --- /dev/null +++ b/man/master/8/zfs-clone.8.html @@ -0,0 +1,315 @@ + + + + + + + zfs-clone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-clone.8

+
+ + + + + +
ZFS-CLONE(8)System Manager's ManualZFS-CLONE(8)
+
+
+

+

zfs-cloneclone + snapshot of ZFS dataset

+
+
+

+ + + + + +
zfsclone [-p] + [-o + property=value]… + snapshot + filesystem|volume
+
+
+

+

See the Clones section of + zfsconcepts(7) for details. The target dataset can be + located anywhere in the ZFS hierarchy, and is created as the same type as + the original.

+
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this
    manner are automatically mounted according to the mountpoint
    property inherited from their parent. If the target filesystem or volume
    already exists, the operation completes successfully.
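The options above can be combined; a hedged example (the target pool/clones/bob and the compression setting are illustrative only): +
# zfs clone -p -o compression=on pool/home/bob@yesterday pool/clones/bob +
The -p flag creates pool/clones if it does not already exist, and the -o flag sets compression locally on the new clone.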
+
+
+
+

+
+

+

The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday.

+
# zfs + clone pool/home/bob@yesterday + pool/clone
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+
+

+

zfs-promote(8), + zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-create.8.html b/man/master/8/zfs-create.8.html new file mode 100644 index 000000000..9328f004f --- /dev/null +++ b/man/master/8/zfs-create.8.html @@ -0,0 +1,452 @@ + + + + + + + zfs-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-create.8

+
+ + + + + +
ZFS-CREATE(8)System Manager's ManualZFS-CREATE(8)
+
+
+

+

zfs-create — + create ZFS dataset

+
+
+

+ + + + + +
zfscreate [-Pnpuv] + [-o + property=value]… + filesystem
+
+ + + + + +
zfscreate [-ps] + [-b blocksize] + [-o + property=value]… + -V size + volume
+
+
+

+
+
zfs create + [-Pnpuv] [-o + property=value]… + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent, unless the -u option is used. +
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset.
    Each line of output contains a key and one or two values, all
    separated by tabs. The create_ancestors and
    create keys have filesystem as
    their only value. The create_ancestors key only
    appears if the -p option is used. The
    property key has two values, a property name and that
    property's value. The property key may appear zero
    or more times, once for each property that will be set local to
    filesystem due to the use of the
    -o option.
+
+
Do not mount the newly created file system.
+
+
Print verbose information about the created dataset.
+
+
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]… + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block
    device in /dev/zvol/path, where path is the name
    of the volume in the ZFS namespace. The size represents the logical size
    as exported by the device. By default, a reservation of equal size is
    created.

size is automatically + rounded up to the nearest multiple of the + .

+
+
+ blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
+ property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See + + in the + section of zfsprops(7) for more + information about sparse volumes.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset.
    Each line of output contains a key and one or two values, all
    separated by tabs. The create_ancestors and
    create keys have volume as their
    only value. The create_ancestors key only appears if
    the -p option is used. The
    property key has two values, a property name and that
    property's value. The property key may appear zero
    or more times, once for each property that will be set local to
    volume due to the use of the
    -b or -o options, as
    well as refreservation if the volume is not sparse.
+
+
Print verbose information about the created dataset.
+
+
+
+
+

+

Swapping to a ZFS volume is prone to deadlock and not recommended. + See OpenZFS FAQ.

+

Swapping to a file on a ZFS filesystem is not supported.
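The following hedged sketch shows a dry-run filesystem creation with machine-parsable output and a sparse volume creation (the pool and dataset names, the compression value, and the sizes are illustrative assumptions): +
# zfs create -Pnv -o compression=on tank/data/logs +
# zfs create -s -b 64K -V 100G tank/vols/scratch +
The first command creates nothing; it only validates the request and prints the create and property keys described above. The second creates a 100 GiB sparse volume with a 64 KiB volblocksize and no reservation.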

+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs + set + mountpoint=/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+
+

+

zfs-destroy(8), zfs-list(8), + zpool-create(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-destroy.8.html b/man/master/8/zfs-destroy.8.html new file mode 100644 index 000000000..e5fc921ad --- /dev/null +++ b/man/master/8/zfs-destroy.8.html @@ -0,0 +1,424 @@ + + + + + + + zfs-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-destroy.8

+
+ + + + + +
ZFS-DESTROY(8)System Manager's ManualZFS-DESTROY(8)
+
+
+

+

zfs-destroy — + destroy ZFS dataset, snapshots, or bookmark

+
+
+

+ + + + + +
zfsdestroy [-Rfnprv] + filesystem|volume
+
+ + + + + +
zfsdestroy [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
+ + + + + +
zfsdestroy + filesystem|volume#bookmark
+
+
+

+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Forcibly unmount file systems. This option has no effect on non-file + systems or unmounted file systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
The given snapshots are destroyed immediately if and only if the
    zfs destroy command
    without the -d option would have destroyed them.
    Such immediate destruction would occur, for example, if the snapshot had
    no clones and the user-initiated reference count were zero.

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same
    filesystem or volume may be specified in a comma-separated list of
    snapshots. Only the snapshot's short name (the part after the
    @) should be
    specified when using a range or comma-separated list to identify
    multiple snapshots.

+
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Destroy immediately. If a snapshot cannot be destroyed now, mark it + for deferred destruction.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.

+
+
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
+
+
+

+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
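A range of snapshots, as described above, can be destroyed in one command; combining it with -nv first is a prudent way to preview what would be removed (the snapshot names are hypothetical): +
# zfs destroy -nv pool/home@%yesterday +
# zfs destroy pool/home@%yesterday +
The empty name before the percent sign implies the oldest snapshot, so this removes every snapshot of pool/home up to and including yesterday.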
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+
+

+

zfs-create(8), zfs-hold(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-diff.8.html b/man/master/8/zfs-diff.8.html new file mode 100644 index 000000000..479c8329e --- /dev/null +++ b/man/master/8/zfs-diff.8.html @@ -0,0 +1,341 @@ + + + + + + + zfs-diff.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-diff.8

+
+ + + + + +
ZFS-DIFF(8)System Manager's ManualZFS-DIFF(8)
+
+
+

+

zfs-diffshow + difference between ZFS snapshots

+
+
+

+ + + + + +
zfsdiff [-FHth] + snapshot + snapshot|filesystem
+
+
+

+

Display the difference between a snapshot of a given filesystem + and another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are:

+
+
+
-
+
The path has been removed
+
+
The path has been created
+
+
The path has been modified
+
+
The path has been renamed
+
+
+
+
+
Display an indication of the type of file, in a manner similar to the + -F option of ls(1). +
+
+
+
Block device
+
+
Character device
+
+
Directory
+
+
Door
+
+
Named pipe
+
+
Symbolic link
+
+
Event port
+
+
Socket
+
+
Regular file
+
+
+
+
+
Give more parsable tab-separated output, without header lines and without + arrows.
+
+
Display the path's inode change time as the first column of output.
+
+
Do not \0ooo-escape non-ASCII paths.
+
+
+
+

+
+

+

The following example shows how to see what has changed between a + prior snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected.

+
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
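For scripted consumption, the same comparison can be requested in parsable form (a sketch; the exact field contents depend on the dataset): +
# zfs diff -H -t tank/test@before tank/test +
With -H the header and arrows are omitted and fields are tab-separated; with -t the inode change time is prepended as the first column.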
+
+
+
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-get.8.html b/man/master/8/zfs-get.8.html new file mode 100644 index 000000000..9ef0e7a1d --- /dev/null +++ b/man/master/8/zfs-get.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-get.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for + more information on what properties can be set and acceptable values. + Numeric values can be specified as exact values, or in a human-readable + form with a suffix of + , + , + , + , + , + , + , + (for bytes, + kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or + zettabytes, respectively). User properties can be set on snapshots. For + more information, see the + section of zfsprops(7). +
+
+
Update the mountpoint, sharenfs, or sharesmb property, but do not mount or
    share the dataset.
+
+
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the
    recursion to depth. A depth of
    1 will
    display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming
    from a source other than those in this list are ignored. Each source
    must be one of the following: local,
    default, inherited,
    temporary, received,
    or none.
    The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs + set + =/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs + set + compression= + pool/home
+
# zfs + set + compression= + pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs + set + =50G + pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.
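The -u flag described above changes a mount-related property without acting on it immediately; a hedged example (the new mountpoint path is hypothetical): +
# zfs set -u mountpoint=/export/home2 pool/home +
The property value is updated, but the file system is not remounted or reshared until it is next mounted or shared explicitly.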

+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-groupspace.8.html b/man/master/8/zfs-groupspace.8.html new file mode 100644 index 000000000..f7e21cc5e --- /dev/null +++ b/man/master/8/zfs-groupspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-groupspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-groupspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + user, + user, + user, + and + user + properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping
    exists. Normal POSIX interfaces (like stat(2),
    ls -l) perform this
    translation, so the -i option allows the
    output from zfs
    userspace to be compared directly with those
    utilities. However, -i may lead to confusion
    if some files were created by an SMB user before an SMB-to-POSIX name
    mapping was established. In such a case, some files will be owned by
    the SMB entity and some by the POSIX entity. However, the
    -i option will report that the POSIX entity
    has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set:
    type, name,
    used,
    quota.
    The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set:
    all,
    posixuser, smbuser,
    posixgroup, smbgroup. The default
    is -t
    posixuser,smbuser. The default can
    be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified
    filesystem or snapshot. This subcommand is identical to
    userspace, except that the project identifier is a
    numeral, not a name; therefore it needs neither the -i
    option for SID-to-POSIX ID translation, nor -n for a
    numeric ID, nor -t for types.
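As a hedged illustration (pool/home is a hypothetical filesystem; only flags and fields described above are used): +
# zfs userspace -o type,name,used,quota -s used pool/home +
# zfs groupspace -n pool/home +
# zfs projectspace pool/home +
The first command sorts users by ascending space consumed, the second prints numeric group IDs, and the third lists per-project usage.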
+
+
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-hold.8.html b/man/master/8/zfs-hold.8.html new file mode 100644 index 000000000..08bdce011 --- /dev/null +++ b/man/master/8/zfs-hold.8.html @@ -0,0 +1,325 @@ + + + + + + + zfs-hold.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-hold.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot
+
+ + + + + +
zfsholds [-rHp] + snapshot
+
+ + + + + +
zfsrelease [-r] + tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rHp] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
Prints holds timestamps as unix epoch timestamps.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
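A hedged illustration of the hold life cycle (the tag name keep and the snapshot are hypothetical): +
# zfs hold -r keep pool/home@yesterday +
# zfs holds pool/home@yesterday +
# zfs destroy pool/home@yesterday       (fails with EBUSY while the hold exists) +
# zfs release -r keep pool/home@yesterday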
+
+
+
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-inherit.8.html b/man/master/8/zfs-inherit.8.html new file mode 100644 index 000000000..6d04f1d59 --- /dev/null +++ b/man/master/8/zfs-inherit.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-inherit.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-inherit.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for + more information on what properties can be set and acceptable values. + Numeric values can be specified as exact values, or in a human-readable + form with a suffix of + , + , + , + , + , + , + , + (for bytes, + kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or + zettabytes, respectively). User properties can be set on snapshots. For + more information, see the + section of zfsprops(7). +
+
+
Update the mountpoint, sharenfs, or sharesmb property, but do not mount or
    share the dataset.
+
+
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the
    recursion to depth. A depth of
    1 will
    display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming
    from a source other than those in this list are ignored. Each source
    must be one of the following: local,
    default, inherited,
    temporary, received,
    or none.
    The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs + set + =/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs + set + compression= + pool/home
+
# zfs + set + compression= + pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs + set + =50G + pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
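To undo local settings such as the compression values set earlier and fall back to inherited or default values, inherit can be applied recursively (a sketch using the same hypothetical datasets): +
# zfs inherit -r compression pool/home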
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-jail.8.html b/man/master/8/zfs-jail.8.html new file mode 100644 index 000000000..e1600944e --- /dev/null +++ b/man/master/8/zfs-jail.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-jail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-jail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail + identified by JID jailid or name + jailname. From now on this file system tree can be + managed from within a jail if the jailed property has + been set. To use this functionality, the jail needs the + + and + + parameters set to + and the + + parameter set to a value lower than + . +

You cannot attach a jailed dataset's children to another jail.
    Nor can you attach the root file system of the jail or any dataset
    which needs to be mounted before the zfs rc script is run inside the
    jail, as it would be attached unmounted until it is mounted from the rc
    script inside the jail.

+

To allow management of the dataset from within a + jail, the jailed property has to be set and the jail + needs access to the /dev/zfs device. The + property + cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
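A hedged sketch of attaching and detaching a dataset (the jail ID 42 and the dataset tank/jails/www are hypothetical; the jail itself must already be configured as described above): +
# zfs set jailed=on tank/jails/www +
# zfs jail 42 tank/jails/www +
  ...manage and mount the dataset from inside the jail... +
# zfs unjail 42 tank/jails/www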
+
+
+
+

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-list.8.html b/man/master/8/zfs-list.8.html new file mode 100644 index 000000000..46e1d44a9 --- /dev/null +++ b/man/master/8/zfs-list.8.html @@ -0,0 +1,371 @@ + + + + + + + zfs-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-list.8

+
+ + + + + +
ZFS-LIST(8)System Manager's ManualZFS-LIST(8)
+
+
+

+

zfs-listlist + properties of ZFS datasets

+
+
+

+ + + + + +
zfslist + [-r|-d + depth] [-Hp] + [-o + property[,property]…] + [-s property]… + [-S property]… + [-t + type[,type]…] + [filesystem|volume|snapshot]…
+
+
+

+

If specified, you can list property information by the absolute
    pathname or the relative pathname. By default, all file systems and volumes
    are displayed. Snapshots are displayed if the
    listsnapshots
    pool property is on (the default is
    off), or if the -t
    snapshot or -t
    all options are specified. The following fields are
    displayed: name, used,
    available,
    referenced,
    mountpoint.

+
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to
    depth. A depth of
    1 will display
    only the dataset and its direct children.
+
+ property
+
A comma-separated list of properties to display. The property must be: + +
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command line.
+
+ property
+
A property for sorting the output by column in ascending order based on + the value of the property. The property must be one of the properties + described in the Properties section + of zfsprops(7) or the value name to + sort by the dataset name. Multiple properties can be specified at one time + using multiple -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
    +
  • Numeric types sort in numeric order.
  • +
  • String types sort in alphabetical order.
  • +
  • Types inappropriate for a row sort that row to the literal bottom, + regardless of the specified ordering.
  • +
+

If no sorting options are specified the existing behavior of + zfs list is + preserved.

+
+
+ property
+
Same as -s, but sorts by property in descending + order.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + , + or all. For example, specifying + -t snapshot displays only + snapshots.
+
+
+
+

+
+

+

The following command lists all active file systems and volumes in
    the system. Snapshots are displayed if
    listsnapshots=on.
    The default is off. See zpoolprops(7)
    for more information on pool properties.

+
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
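Snapshots and sort options can be combined as described above; a hedged sketch (output omitted, dataset names as in the previous example): +
# zfs list -r -t snapshot -o name,used -s used pool/home +
This lists every snapshot under pool/home, showing only the name and used columns, sorted by ascending space usage.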
+
+
+
+
+

+

zfsprops(7), zfs-get(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-load-key.8.html b/man/master/8/zfs-load-key.8.html new file mode 100644 index 000000000..665446f86 --- /dev/null +++ b/man/master/8/zfs-load-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-load-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-load-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all + children that inherit the keylocation property to be + accessed. The key will be expected in the format specified by the + keyformat and location specified by the + keylocation property. Note that if the + keylocation is set to prompt the + terminal will interactively wait for the key to be entered. Loading a key + will not automatically mount the dataset. If that functionality is + desired, zfs mount + -l will ask for the key and mount the dataset (see + zfs-mount(8)). Once the key is loaded the + keystatus property will become + . +
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all + of its children that inherit the keylocation property. + This requires that the dataset is not currently open or mounted. Once the + key is unloaded the keystatus property will become + . +
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + volume data, file attributes, ACLs, permission bits, directory listings, + FUID mappings, and + / + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires + specifying the encryption and + keyformat properties at creation time, along with an + optional keylocation and + pbkdf2iters. After entering an encryption key, the created + dataset will become an encryption root. Any descendant datasets will inherit + their encryption key from the encryption root by default, meaning that + loading, unloading, or changing the key for the encryption root will + implicitly do the same for all inheriting datasets. If this inheritance is + not desired, simply supply a keyformat when creating the + child dataset or use zfs + change-key to break an existing relationship, + creating a new encryption root on the child. Note that the child's + keyformat may match that of the parent while still + creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, + and pbkdf2iters) do not inherit + like other ZFS properties and instead use the value determined by their + encryption root. Encryption root inheritance can be tracked via the + read-only + + property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-mount-generator.8.html b/man/master/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..560bd01f2 --- /dev/null +++ b/man/master/8/zfs-mount-generator.8.html @@ -0,0 +1,439 @@ + + + + + + + zfs-mount-generator.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-mount-generator.8

+
+ + + + + +
ZFS-MOUNT-GENERATOR(8)System Manager's ManualZFS-MOUNT-GENERATOR(8)
+
+
+

+

zfs-mount-generator — + generate systemd mount units for ZFS filesystems

+
+
+

+

@systemdgeneratordir@/zfs-mount-generator

+
+
+

+

zfs-mount-generator is a + systemd.generator(7) that generates native + systemd.mount(5) units for configured ZFS datasets.

+
+

+
+
=
+
+ + or none.
+
=
+
off. Skipped if + only noauto datasets exist for a given mountpoint + and there's more than one. Datasets with + + take precedence over ones with + noauto for the same mountpoint. + Sets logical noauto + flag if noauto. Encryption roots + always generate + zfs-load-key@root.service, + even if off.
+
=, + relatime=, + =, + =, + =, + =, + =
+
Used to generate mount options equivalent to zfs + mount.
+
=, + keylocation=
+
If the dataset is an encryption root, its mount unit will bind to + zfs-load-key@root.service, + with additional dependencies as follows: +
+
+
=
+
None, uses systemd-ask-password(1)
+
=URL + (et al.)
+
=, + After=: + network-online.target
+
=<path>
+
=path
+
+
+ The service also uses the same Wants=, + After=, Requires=, + and RequiresMountsFor=, as the + mount unit.
+
=path[ + path]…
+
+ Requires= for the mount- and key-loading unit.
+
=path[ + path]…
+
+ RequiresMountsFor= for the mount- and key-loading + unit.
+
=unit[ + unit]…
+
+ Before= for the mount unit.
+
=unit[ + unit]…
+
+ After= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + WantedBy= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + RequiredBy= for the mount unit.
+
=(unset)|on|off
+
Waxes or wanes strength of default reverse dependencies of the mount unit, + see below.
+
=on|off
+
on. Defaults to + off.
+
+
+
+

+

Additionally, unless the pool the dataset resides on is imported + at generation time, both units gain + Wants=zfs-import.target and + After=zfs-import.target.

+

Additionally, unless the logical noauto flag is + set, the mount unit gains a reverse-dependency for + local-fs.target of strength

+
+
+
(unset)
+
= + + Before=
+
+
=
+
+
= + + Before=
+
+
+
+
+

+

Because ZFS pools may not be available very early in the boot + process, information on ZFS mountpoints must be stored separately. The + output of

+
zfs + list -Ho + name,⟨every property above in + order⟩
+for datasets that should be mounted by systemd should be kept at + @sysconfdir@/zfs/zfs-list.cache/poolname, + and, if writeable, will be kept synchronized for the entire pool by the + history_event-zfs-list-cacher.sh ZEDLET, if enabled + (see zed(8)). +
+
+
+

+

If the ZFS_DEBUG environment variable is nonzero (or unset and /proc/cmdline contains "debug"), print summary accounting information at the end.

+
+
+

+

To begin, enable tracking for the pool:

+
# touch + @sysconfdir@/zfs/zfs-list.cache/poolname
+Then enable the tracking ZEDLET: +
# ln + -s + @zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh + @sysconfdir@/zfs/zed.d
+
# systemctl + enable + zfs-zed.service
+
# systemctl + restart + zfs-zed.service
+

If no history event is in the queue, inject one to ensure the + ZEDLET runs to refresh the cache file by setting a monitored property + somewhere on the pool:

+
# zfs + set relatime=off + poolname/dset
+
# zfs + inherit relatime + poolname/dset
+

To test the generator output:

+
$ mkdir + /tmp/zfs-mount-generator
+
$ + @systemdgeneratordir@/zfs-mount-generator + /tmp/zfs-mount-generator
+If the generated units are satisfactory, instruct + systemd to re-run all generators: +
# systemctl + daemon-reload
+
+
+

+

systemd.mount(5), + zfs(5), + systemd.generator(7), + zed(8), + zpool-events(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-mount.8.html b/man/master/8/zfs-mount.8.html new file mode 100644 index 000000000..a42fa8d0d --- /dev/null +++ b/man/master/8/zfs-mount.8.html @@ -0,0 +1,338 @@ + + + + + + + zfs-mount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-mount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its mountpoint property, if the path exists and is empty. If mountpoint is set to legacy, the filesystem should instead be mounted using mount(8).
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ -o options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
+
+
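A brief usage sketch (pool and dataset names are hypothetical):
# zfs mount
  (list the ZFS file systems currently mounted)
# zfs mount -o ro -v tank/archive
  (mount read-only for the duration of this mount, reporting progress)
# zfs mount -al
  (mount everything, loading encryption keys where required)
# zfs unmount -u tank/archive
  (unmount and unload its encryption keys)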
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-program.8.html b/man/master/8/zfs-program.8.html new file mode 100644 index 000000000..de14e4d91 --- /dev/null +++ b/man/master/8/zfs-program.8.html @@ -0,0 +1,1007 @@ + + + + + + + zfs-program.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-program.8

+
+ + + + + +
ZFS-PROGRAM(8)System Manager's ManualZFS-PROGRAM(8)
+
+
+

+

zfs-program — + execute ZFS channel programs

+
+
+

+ + + + + +
zfsprogram [-jn] + [-t instruction-limit] + [-m memory-limit] + pool script + [script arguments]
+
+
+

+

The ZFS channel program interface allows ZFS administrative + operations to be run programmatically as a Lua script. The entire script is + executed atomically, with no other administrative operations taking effect + concurrently. A library of ZFS calls is made available to channel program + scripts. Channel programs may only be run with root privileges.

+

A modified version of the Lua 5.2 interpreter is used to run + channel program scripts. The Lua 5.2 manual can be found at + http://www.lua.org/manual/5.2/

+

The channel program given by script will be + run on pool, and any attempts to access or modify + other pools will cause an error.

+
+
+

+
+
+
Display channel program output in JSON format. When this flag is specified and standard output is empty, the channel program has encountered an error. The details of such an error will be printed to standard error in plain text.
+
+
Executes a read-only channel program, which runs faster. The program cannot change on-disk state by calling functions from the zfs.sync submodule. The program can be used to gather information such as properties and to determine whether changes would succeed (zfs.check.*). Without this flag, all pending changes must be synced to disk before a channel program can complete.
+
+ -t instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ -m memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. The + default memory limit is 10 MiB, and can be set to a maximum of 100 + MiB.
+
+

All remaining argument strings will be passed directly to the Lua + script as described in the LUA + INTERFACE section below.

+
+
+

+

A channel program can be invoked either from the command line, or via a library call to lzc_channel_program().

+
+

+

Arguments passed to the channel program are converted to a Lua + table. If invoked from the command line, extra arguments to the Lua script + will be accessible as an array stored in the argument table with the key + 'argv':

+
+
args = ...
+argv = args["argv"]
+-- argv == {1="arg1", 2="arg2", ...}
+
+

If invoked from the libzfs interface, an arbitrary argument list + can be passed to the channel program, which is accessible via the same + "..." syntax in Lua:

+
+
args = ...
+-- args == {"foo"="bar", "baz"={...}, ...}
+
+

Note that because Lua arrays are 1-indexed, arrays passed to Lua + from the libzfs interface will have their indices incremented by 1. That is, + the element in arr[0] in a C array passed to a channel + program will be stored in arr[1] when accessed from + Lua.

+
+
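A small Lua sketch of reading those arguments with the 1-based indexing in mind (the argument values are placeholders):
args = ...
argv = args["argv"]
target = argv[1]    -- the first extra command-line argument
zfs.debug("channel program target: " .. target)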
+

+

Lua return statements take the form:

+
return ret0, ret1, ret2, + ...
+

Return statements returning multiple values are permitted + internally in a channel program script, but attempting to return more than + one value from the top level of the channel program is not permitted and + will throw an error. However, tables containing multiple values can still be + returned. If invoked from the command line, a return statement:

+
+
a = {foo="bar", baz=2}
+return a
+
+

Will be output formatted as:

+
+
Channel program fully executed with return value:
+    return:
+        baz: 2
+        foo: 'bar'
+
+
+
+

+

If the channel program encounters a fatal error while running, a + non-zero exit status will be returned. If more information about the error + is available, a singleton list will be returned detailing the error:

+
error: "error string, including + Lua stack trace"
+

If a fatal error is returned, the channel program may have not + executed at all, may have partially executed, or may have fully executed but + failed to pass a return value back to userland.

+

If the channel program exhausts an instruction or memory limit, a + fatal error will be generated and the program will be stopped, leaving the + program partially executed. No attempt is made to reverse or undo any + operations already performed. Note that because both the instruction count + and amount of memory used by a channel program are deterministic when run + against the same inputs and filesystem state, as long as a channel program + has run successfully once, you can guarantee that it will finish + successfully against a similar size system.

+

If a channel program attempts to return too large a value, the + program will fully execute but exit with a nonzero status code and no return + value.

+

: + ZFS API functions do not generate Fatal Errors when correctly invoked, they + return an error code and the channel program continues executing. See the + ZFS API section below for + function-specific details on error return codes.

+
+
+

+

When invoking a channel program via the libzfs interface, it is + necessary to translate arguments and return values from Lua values to their + C equivalents, and vice-versa.

+

There is a correspondence between nvlist values in C and Lua + tables. A Lua table which is returned from the channel program will be + recursively converted to an nvlist, with table values converted to their + natural equivalents:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
string->string
number->int64
boolean->boolean_value
nil->boolean (no value)
table->nvlist
+

Likewise, table keys are replaced by string equivalents as + follows:

+ + + + + + + + + + + + + + + + + + + +
string->no change
number->signed decimal string ("%lld")
boolean->"true" | "false"
+

Any collision of table key strings (for example, the string + "true" and a true boolean value) will cause a fatal error.

+

Lua numbers are represented internally as signed 64-bit + integers.

+
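As an illustration of these conversion rules (a hypothetical return value, not taken from the manual):
-- Returned to userland, this table becomes an nvlist: string keys are kept,
-- the numeric key 7 becomes the string "7", booleans stay booleans, and the
-- nested table becomes a nested nvlist.
return {destroyed=3, dryrun=true, [7]="seventh", details={reason="ok"}}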
+
+
+

+

The following Lua built-in base library functions are + available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
assertrawlencollectgarbagerawget
errorrawsetgetmetatableselect
ipairssetmetatablenexttonumber
pairstostringrawequaltype
+

All functions in the + , + , + and + + built-in submodules are also available. A complete list and documentation of + these modules is available in the Lua manual.

+

The following base library functions have been disabled and are not available for use in channel programs:

+ + + + + + + + + + +
dofileloadfileloadpcallprintxpcall
+
+
+

+
+

+

Each API function takes a fixed set of required positional + arguments and optional keyword arguments. For example, the destroy function + takes a single positional string argument (the name of the dataset to + destroy) and an optional "defer" keyword boolean argument. When + using parentheses to specify the arguments to a Lua function, only + positional arguments can be used:

+
zfs.sync.destroy("rpool@snap")
+

To use keyword arguments, functions must be called with a single + argument that is a Lua table containing entries mapping integers to + positional arguments and strings to keyword arguments:

+
zfs.sync.destroy({[1]="rpool@snap", defer=true})
+

The Lua language allows curly braces to be used in place of + parenthesis as syntactic sugar for this calling convention:

+
zfs.sync.destroy{"rpool@snap", defer=true}
+
+
+

+

If an API function succeeds, it returns 0. If it fails, it returns + an error code and the channel program continues executing. API functions do + not generate Fatal Errors except in the case of an unrecoverable internal + file system error.

+

In addition to returning an error code, some functions also return + extra details describing what caused the error. This extra description is + given as a second return value, and will always be a Lua table, or Nil if no + error details were returned. Different keys will exist in the error details + table depending on the function and error case. Any such function may be + called expecting a single return value:

+
errno = + zfs.sync.promote(dataset)
+

Or, the error details can be retrieved:

+
+
errno, details = zfs.sync.promote(dataset)
+if (errno == EEXIST) then
+    assert(details ~= Nil)
+    list_of_conflicting_snapshots = details
+end
+
+

The following global aliases for API function error return codes + are defined for use in channel programs:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
EPERMECHILDENODEVENOSPCENOENTEAGAINENOTDIR
ESPIPEESRCHENOMEMEISDIREROFSEINTREACCES
EINVALEMLINKEIOEFAULTENFILEEPIPEENXIO
ENOTBLKEMFILEEDOME2BIGEBUSYENOTTYERANGE
ENOEXECEEXISTETXTBSYEDQUOTEBADFEXDEVEFBIG
+
+
+

+

For detailed descriptions of the exact behavior of any ZFS + administrative operations, see the main zfs(8) manual + page.

+
+
(msg)
+
Record a debug message in the zfs_dbgmsg log. A log of these messages can + be printed via mdb's "::zfs_dbgmsg" command, or can be monitored + live by running +
dtrace -n + 'zfs-dbgmsg{trace(stringof(arg0))}'
+

+
+
msg (string)
+
Debug message to be printed.
+
+
+
(dataset)
+
Returns true if the given dataset exists, or false if it doesn't. A fatal + error will be thrown if the dataset is not in the target pool. That is, in + a channel program running on rpool, + zfs.exists("rpool/nonexistent_fs") returns + false, but + zfs.exists("somepool/fs_that_may_exist") will + error. +

+
+
dataset (string)
+
Dataset to check for existence. Must be in the target pool.
+
+
+
(dataset, + property)
+
Returns two values. First, a string, number or table containing the + property value for the given dataset. Second, a string containing the + source of the property (i.e. the name of the dataset in which it was set + or nil if it is readonly). Throws a Lua error if the dataset is invalid or + the property doesn't exist. Note that Lua only supports int64 number types + whereas ZFS number properties are uint64. This means very large values + (like GUIDs) may wrap around and appear negative. +

+
+
dataset (string)
+
Filesystem or snapshot path to retrieve properties from.
+
property (string)
+
Name of property to retrieve. All filesystem, snapshot and volume + properties are supported except for + and + . + Also supports the + snap + and + bookmark + properties and the + ⟨|⟩⟨|id + properties, though the id must be in numeric form.
+
+
+
+
+
+
The sync submodule contains functions that modify the on-disk state. They + are executed in "syncing context". +

The available sync submodule functions are as follows:

+
+
(dataset, + [defer=true|false])
+
Destroy the given dataset. Returns 0 on successful destroy, or a + nonzero error code if the dataset could not be destroyed (for example, + if the dataset has any active children or clones). +

+
+
dataset (string)
+
Filesystem or snapshot to be destroyed.
+
[defer (boolean)]
+
Valid only for destroying snapshots. If set to true, and the + snapshot has holds or clones, allows the snapshot to be marked for + deferred deletion rather than failing.
+
+
+
(dataset, + property)
+
Clears the specified property in the given dataset, causing it to be + inherited from an ancestor, or restored to the default if no ancestor + property is set. The zfs + inherit -S option has + not been implemented. Returns 0 on success, or a nonzero error code if + the property could not be cleared. +

+
+
dataset (string)
+
Filesystem or snapshot containing the property to clear.
+
property (string)
+
The property to clear. Allowed properties are the same as those + for the zfs + inherit command.
+
+
+
(dataset)
+
Promote the given clone to a filesystem. Returns 0 on successful + promotion, or a nonzero error code otherwise. If EEXIST is returned, + the second return value will be an array of the clone's snapshots + whose names collide with snapshots of the parent filesystem. +

+
+
dataset (string)
+
Clone to be promoted.
+
+
+
(filesystem)
+
Rollback to the previous snapshot for a dataset. Returns 0 on + successful rollback, or a nonzero error code otherwise. Rollbacks can + be performed on filesystems or zvols, but not on snapshots or mounted + datasets. EBUSY is returned in the case where the filesystem is + mounted. +

+
+
filesystem (string)
+
Filesystem to rollback.
+
+
+
(dataset, + property, value)
+
Sets the given property on a dataset. Currently only user properties + are supported. Returns 0 if the property was set, or a nonzero error + code otherwise. +

+
+
dataset (string)
+
The dataset where the property will be set.
+
property (string)
+
The property to set.
+
value (string)
+
The value of the property to be set.
+
+
+
(dataset)
+
Create a snapshot of a filesystem. Returns 0 if the snapshot was + successfully created, and a nonzero error code otherwise. +

Note: Taking a snapshot will fail on any pool older than + legacy version 27. To enable taking snapshots from ZCP scripts, the + pool must be upgraded.

+

+
+
dataset (string)
+
Name of snapshot to create.
+
+
+
(dataset, + oldsnapname, + newsnapname)
+
Rename a snapshot of a filesystem or a volume. Returns 0 if the + snapshot was successfully renamed, and a nonzero error code otherwise. +

+
+
dataset (string)
+
Name of the snapshot's parent dataset.
+
oldsnapname (string)
+
Original name of the snapshot.
+
newsnapname (string)
+
New name of the snapshot.
+
+
+
(source, + newbookmark)
+
Create a bookmark of an existing source snapshot or bookmark. Returns + 0 if the new bookmark was successfully created, and a nonzero error + code otherwise. +

Note: Bookmarking requires the corresponding pool feature + to be enabled.

+

+
+
source (string)
+
Full name of the existing snapshot or bookmark.
+
newbookmark (string)
+
Full name of the new bookmark.
+
+
+
+
+
+
For each function in the zfs.sync submodule, there is a + corresponding zfs.check function which performs a + "dry run" of the same operation. Each takes the same arguments + as its zfs.sync counterpart and returns 0 if the + operation would succeed, or a non-zero error code if it would fail, along + with any other error details. That is, each has the same behavior as the + corresponding sync function except for actually executing the requested + change. For example, + ("fs") + returns 0 if + zfs.sync.destroy("fs") + would successfully destroy the dataset. +

The available zfs.check functions are:

+
+
(dataset, + [defer=true|false])
+
 
+
(dataset)
+
 
+
(filesystem)
+
 
+
(dataset, + property, value)
+
 
+
(dataset)
+
 
+
+
+
+
The zfs.list submodule provides functions for iterating over datasets and + properties. Rather than returning tables, these functions act as Lua + iterators, and are generally used as follows: +
+
for child in zfs.list.children("rpool") do
+    ...
+end
+
+

The available zfs.list functions are:

+
+
(snapshot)
+
Iterate through all clones of the given snapshot. +

+
+
snapshot (string)
+
Must be a valid snapshot path in the current pool.
+
+
+
(dataset)
+
Iterate through all snapshots of the given dataset. Each snapshot is + returned as a string containing the full dataset name, e.g. + "pool/fs@snap". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all direct children of the given dataset. Each child + is returned as a string containing the full dataset name, e.g. + "pool/fs/child". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all bookmarks of the given dataset. Each bookmark is + returned as a string containing the full dataset name, e.g. + "pool/fs#bookmark". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(snapshot)
+
Iterate through all user holds on the given snapshot. Each hold is + returned as a pair of the hold's tag and the timestamp (in seconds + since the epoch) at which it was created. +

+
+
snapshot (string)
+
Must be a valid snapshot.
+
+
+
(dataset)
+
An alias for zfs.list.user_properties (see relevant entry). +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Iterate through all user properties for the given dataset. For each + step of the iteration, output the property name, its value, and its + source. Throws a Lua error if the dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Returns an array of strings, the names of the valid system (non-user + defined) properties for the given dataset. Throws a Lua error if the + dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot or volume.
+
+
+
+
+
+
+
+
+

+
+

+

The following channel program recursively destroys a filesystem + and all its snapshots and children in a naive manner. Note that this does + not involve any error handling or reporting.

+
+
function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        zfs.sync.destroy(snap)
+    end
+    zfs.sync.destroy(root)
+end
+destroy_recursive("pool/somefs")
+
+
+
+

+

A more verbose and robust version of the same channel program, + which properly detects and reports errors, and also takes the dataset to + destroy as a command line argument, would be as follows:

+
+
succeeded = {}
+failed = {}
+
+function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        err = zfs.sync.destroy(snap)
+        if (err ~= 0) then
+            failed[snap] = err
+        else
+            succeeded[snap] = err
+        end
+    end
+    err = zfs.sync.destroy(root)
+    if (err ~= 0) then
+        failed[root] = err
+    else
+        succeeded[root] = err
+    end
+end
+
+args = ...
+argv = args["argv"]
+
+destroy_recursive(argv[1])
+
+results = {}
+results["succeeded"] = succeeded
+results["failed"] = failed
+return results
+
+
+
+

+

The following function performs a forced promote operation by + attempting to promote the given clone and destroying any conflicting + snapshots.

+
+
function force_promote(ds)
+   errno, details = zfs.check.promote(ds)
+   if (errno == EEXIST) then
+       assert(details ~= Nil)
+       for i, snap in ipairs(details) do
+           zfs.sync.destroy(ds .. "@" .. snap)
+       end
+   elseif (errno ~= 0) then
+       return errno
+   end
+   return zfs.sync.promote(ds)
+end
+
+
+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-project.8.html b/man/master/8/zfs-project.8.html new file mode 100644 index 000000000..b747ce153 --- /dev/null +++ b/man/master/8/zfs-project.8.html @@ -0,0 +1,362 @@ + + + + + + + zfs-project.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-project.8

+
+ + + + + +
ZFS-PROJECT(8)System Manager's ManualZFS-PROJECT(8)
+
+
+

+

zfs-project — + manage projects in ZFS filesystem

+
+
+

+ + + + + +
zfsproject + [-d|-r] + file|directory
+
+ + + + + +
zfsproject -C + [-kr] + file|directory
+
+ + + + + +
zfsproject -c + [-0] + [-d|-r] + [-p id] + file|directory
+
+ + + + + +
zfsproject [-p + id] [-rs] + file|directory
+
+
+

+
+
zfs project + [-d|-r] + file|directory
+
List project identifier (ID) and inherit flag of files and directories. +
+
+
Show the directory project ID and inherit flag, not its children.
+
+
List subdirectories recursively.
+
+
+
zfs project + -C [-kr] + file|directory
+
Clear project inherit flag and/or ID on the files and directories. +
+
+
Keep the project ID unchanged. If not specified, the project ID will + be reset to zero.
+
+
Clear subdirectories' flags recursively.
+
+
+
zfs project + -c [-0] + [-d|-r] + [-p id] + file|directory
+
Check project ID and inherit flag on the files and directories: report + entries without the project inherit flag, or with project IDs different + from the target directory's project ID or the one specified with + -p. +
+
+
Delimit filenames with a NUL byte instead of newline, don't output + diagnoses.
+
+
Check the directory project ID and inherit flag, not its + children.
+
+ -p id
+
Compare to id instead of the target files and + directories' project IDs.
+
+
Check subdirectories recursively.
+
+
+
zfs project + -p id + [-rs] + file|directory
+
Set project ID and/or inherit flag on the files and directories. +
+
+ -p id
+
Set the project ID to the given value.
+
+
Set on subdirectories recursively.
+
+
Set project inherit flag on the given files and directories. This is + usually used for setting up tree quotas with + -r. In that case, the directory's project ID + will be set for all its descendants, unless specified explicitly with + -p.
+
+
+
+
+
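A hedged usage sketch, with hypothetical paths and project ID:
# zfs project -p 42 -rs /tank/fs/projA
  (assign project 42 and set the inherit flag recursively, the usual tree-quota setup)
# zfs project -d /tank/fs/projA
  (show the directory's own project ID and inherit flag)
# zfs project -c -r /tank/fs/projA
  (report entries that do not match the directory's project)
# zfs project -C -kr /tank/fs/projA
  (clear the inherit flags recursively while keeping the project IDs)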
+

+

zfs-projectspace(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-projectspace.8.html b/man/master/8/zfs-projectspace.8.html new file mode 100644 index 000000000..995af8fae --- /dev/null +++ b/man/master/8/zfs-projectspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-projectspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-projectspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ -S field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ -o field[,field]…
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ -s field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ -t type[,type]…
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral, not a name; therefore the -i option (SID to POSIX ID translation), the -n option (numeric IDs), and the -t option (types) are not needed.
+
+
+
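A hedged usage sketch (dataset names are hypothetical):
# zfs userspace -o name,used,quota -s used tank/home
  (per-user usage and quotas, sorted by space consumed)
# zfs groupspace -Hp tank/home
  (per-group usage in parsable, header-less form)
# zfs projectspace tank/projects
  (per-project usage and quotas)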
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-promote.8.html b/man/master/8/zfs-promote.8.html new file mode 100644 index 000000000..09b9aa3ef --- /dev/null +++ b/man/master/8/zfs-promote.8.html @@ -0,0 +1,299 @@ + + + + + + + zfs-promote.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-promote.8

+
+ + + + + +
ZFS-PROMOTE(8)System Manager's ManualZFS-PROMOTE(8)
+
+
+

+

zfs-promote — + promote clone dataset to no longer depend on origin + snapshot

+
+
+

+ + + + + +
zfspromote clone
+
+
+

+

The zfs promote + command makes it possible to destroy the dataset that the clone was created + from. The clone parent-child dependency relationship is reversed, so that + the origin dataset becomes a clone of the specified dataset.

+

The snapshot that was cloned, and any snapshots previous to this + snapshot, are now owned by the promoted clone. The space they use moves from + the origin dataset to the promoted clone, so enough space must be available + to accommodate these snapshots. No new space is consumed by this operation, + but the space accounting is adjusted. The promoted clone must not have any + conflicting snapshot names of its own. The zfs + rename subcommand can be used to rename any + conflicting snapshots.

+
+
+

+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+
+

+

zfs-clone(8), + zfs-rename(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-receive.8.html b/man/master/8/zfs-receive.8.html new file mode 100644 index 000000000..741acd431 --- /dev/null +++ b/man/master/8/zfs-receive.8.html @@ -0,0 +1,628 @@ + + + + + + + zfs-receive.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-receive.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+ + + + + +
zfsreceive -c + [-vn] + filesystem|snapshot
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o + property=value or + -x property is specified, it + applies to the effective value of the property throughout the entire + subtree of replicated datasets. Effective property values will be set + (-o) or inherited (-x) + on the topmost in the replicated subtree. In descendant datasets, if the + property is set by the send stream, it will be overridden by forcing the + property to be inherited from the top‐most file system. Received + properties are retained in spite of being overridden and may be restored + with zfs inherit + -S. Specifying -o + origin= + is a special case because, even if origin is a + read-only property and cannot be set, it's allowed to receive the send + stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.
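To make the naming rules concrete (a hypothetical sketch), receiving a stream of poolA/fsA/fsB@snap into poolB yields different targets depending on the flag:
# zfs send poolA/fsA/fsB@snap | zfs receive -d poolB
  (creates poolB/fsA/fsB@snap)
# zfs send poolA/fsA/fsB@snap | zfs receive -e poolB
  (creates poolB/fsB@snap)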

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ -o origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ -o property=value
+
Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. When receiving a stream from zfs send -R, causes the property to be inherited by all descendant datasets, as if zfs inherit property had been run on any descendant datasets that have this property set on the sending system.

If the send stream was sent with + -c then overriding the + compression property will have no effect on + received data but the compression property will be + set. To have the data recompressed on receive remove the + -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs + send tank/test@snap1 | + zfs recv + -o + encryption= + -o + = + -o + keylocation=file:///path/to/keyfile
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with + a stream generated by zfs + send -t + token, where the token + is the value of the + + property of the filesystem or volume which is received into.

+

To use this flag, the storage pool + must have the + + feature enabled. See zpool-features(7) for details + on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ -x property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
zfs receive + -c [-vn] + filesystem|snapshot
+
Attempt to repair data corruption in the specified dataset, by using the + provided stream as the source of healthy data. This method of healing can + only heal data blocks present in the stream. Metadata can not be healed by + corrective receive. Running a scrub is recommended post-healing to ensure + all data corruption was repaired. +

It's important to consider why corruption has happened in the first place. If you have slowly failing hardware, periodically repairing the data is not going to save you from data loss later on when the hardware fails completely.

+
+
+
+
+

+
+

+

The following commands send a full stream and then an incremental + stream to a remote machine, restoring them into + + and + , + respectively. + + must contain the file system + , + and must not initially contain + .

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
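As an additional hedged example (names are hypothetical), an interrupted resumable receive can be continued by feeding the saved token back to zfs send:
# zfs send pool/fs@b | ssh host zfs receive -s poolB/received/fs
  (the transfer is interrupted partway through)
# ssh host zfs get -H -o value receive_resume_token poolB/received/fs
  (prints an opaque token; call it TOKEN)
# zfs send -t TOKEN | ssh host zfs receive -s poolB/received/fs
If resuming is not wanted, zfs receive -A poolB/received/fs discards the saved partial state instead.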
+
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
March 12, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-recv.8.html b/man/master/8/zfs-recv.8.html new file mode 100644 index 000000000..de0e2d9c9 --- /dev/null +++ b/man/master/8/zfs-recv.8.html @@ -0,0 +1,628 @@ + + + + + + + zfs-recv.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-recv.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+ + + + + +
zfsreceive -c + [-vn] + filesystem|snapshot
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o + property=value or + -x property is specified, it + applies to the effective value of the property throughout the entire + subtree of replicated datasets. Effective property values will be set + (-o) or inherited (-x) + on the topmost in the replicated subtree. In descendant datasets, if the + property is set by the send stream, it will be overridden by forcing the + property to be inherited from the top‐most file system. Received + properties are retained in spite of being overridden and may be restored + with zfs inherit + -S. Specifying -o + origin= + is a special case because, even if origin is a + read-only property and cannot be set, it's allowed to receive the send + stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ -o origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ -o property=value
+
Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. When receiving a stream from zfs send -R, causes the property to be inherited by all descendant datasets, as if zfs inherit property had been run on any descendant datasets that have this property set on the sending system.

If the send stream was sent with + -c then overriding the + compression property will have no effect on + received data but the compression property will be + set. To have the data recompressed on receive remove the + -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs + send tank/test@snap1 | + zfs recv + -o + encryption= + -o + = + -o + keylocation=file:///path/to/keyfile
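A fully spelled-out variant might look like the following sketch (the encryption property values and the target dataset name are illustrative assumptions):
# zfs send tank/test@snap1 |
    zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile poolB/test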
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with + a stream generated by zfs + send -t + token, where the token + is the value of the + + property of the filesystem or volume which is received into.

+

To use this flag, the storage pool + must have the + + feature enabled. See zpool-features(7) for details + on ZFS feature flags.
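A typical interrupted-and-resumed transfer might look like the following sketch (host and dataset names are illustrative, and <token> stands for the opaque value reported by the property):
# zfs send tank/fs@snap | ssh host zfs receive -s poolB/backup/fs
  the connection drops partway through the transfer
# ssh host zfs get -H -o value receive_resume_token poolB/backup/fs
# zfs send -t <token> | ssh host zfs receive -s poolB/backup/fs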

+
+
+
The file system that is associated with the received stream is not mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.
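For example, to receive a stream that carries properties while keeping the receiving side's own mount point and share settings (names are illustrative):
# zfs send -p tank/fs@snap |
    ssh host zfs receive -x mountpoint -x sharenfs poolB/backup/fs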

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
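For example, to discard the partial state left behind by an abandoned resumable receive (the dataset name is illustrative):
# zfs receive -A poolB/backup/fs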
+
zfs receive + -c [-vn] + filesystem|snapshot
+
Attempt to repair data corruption in the specified dataset, by using the provided stream as the source of healthy data. This method of healing can only heal data blocks present in the stream. Metadata cannot be healed by corrective receive. Running a scrub after healing is recommended to ensure all data corruption was repaired.

It is important to consider why the corruption happened in the first place. If the hardware is slowly failing, periodically repairing the data will not save you from data loss later on, when the hardware fails completely.
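A sketch of a healing receive, assuming a stream containing a healthy copy of the same snapshot was saved earlier (paths and names are illustrative):
# zfs receive -c tank/fs@monday < /backup/tank-fs-monday.zstream
# zpool scrub tank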

+
+
+
+
+

+
+

+

The following commands send a full stream and then an incremental + stream to a remote machine, restoring them into + + and + , + respectively. + + must contain the file system + , + and must not initially contain + .

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
March 12, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-redact.8.html b/man/master/8/zfs-redact.8.html new file mode 100644 index 000000000..b2ed726cc --- /dev/null +++ b/man/master/8/zfs-redact.8.html @@ -0,0 +1,836 @@ + + + + + + + zfs-redact.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-redact.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfssend [-DLPVbcehnpsvw] + [-R [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPVcensvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-PVenv] + -t receive_resume_token
+
+ + + + + +
zfssend [-PVnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark + redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVbcehnpsvw] [-R + [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from + the first snapshot to the second snapshot. For example, + -I @a fs@d + is similar to -i @a + ; + -i + + ; + -i + + fs@d. The incremental source may be specified as + with the -i option.
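For example, to package every snapshot between two snapshots of the same file system (names are illustrative):
# zfs send -I pool/fs@a pool/fs@d |
    ssh host zfs receive poolB/received/fs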
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
, + --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
, + --exclude + dataset[,dataset]…
+
With -R, -X specifies + a set of datasets (and, hence, their descendants), to be excluded from + the send stream. The root dataset may not be excluded. + -X a + -X b is equivalent to + -X + a,b.
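For instance, a replication stream that leaves out one sensitive child file system might be generated as follows (dataset names are illustrative):
# zfs send -R -X tank/home/private tank/home@backup |
    ssh host zfs receive poolB/home-backup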
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
, + --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes. Streams sent with -c will not + have their data recompressed on the receiver side using + -o compress= + value. The data will stay compressed as it was + from the sender. The new compression property will be set for future + data. Note that uncompressed data from the sender will still attempt + to compress on the receiver, unless you specify + -o compress= + .
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold + command), and indicating to zfs + receive that the holds be applied to the + dataset on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the + source may be the origin snapshot, which must be fully specified + (for example, + , + not just + ).

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
, + --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an + earlier snapshot in the destination's history. It will commonly be an + earlier snapshot in the destination's file system, in which case it + can be specified as the last component of the name (the + or + @ character and following). +

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
  1. To receive, as a clone, an incremental send from the original snapshot + to one of the snapshots it was redacted with respect to. In this case, + the stream will produce a valid dataset when received because all + blocks that were redacted in the parent are guaranteed to be present + in the child's send stream. This use case will produce a normal + snapshot, which can be used just like other snapshots.
  2. +
  3. To receive an incremental send from the original snapshot to something + redacted with respect to a subset of the set of snapshots the initial + snapshot was redacted with respect to. In this case, each block that + was redacted in the original is still redacted (redacting with respect + to additional snapshots causes less data to be redacted (because the + snapshots define what is permitted, and everything else is redacted)). + This use case will produce a new redacted snapshot.
  4. +
  5. To receive an incremental send from a redaction bookmark of the + original snapshot that was created when redacting with respect to a + subset of the set of snapshots the initial snapshot was created with + respect to anything else. A send stream from such a redaction bookmark + will contain all of the blocks necessary to fill in any redacted data, + should it be needed, because the sending system is aware of what + blocks were originally redacted. This will either produce a normal + snapshot or a redacted one, depending on whether the new send stream + is redacted.
  6. +
7. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  8. +
  9. To receive a full send as a clone of the redacted snapshot. Since the + stream is a full send, it definitionally contains all the data needed + to create a new dataset. This use case will either produce a normal + snapshot or a redacted one, depending on whether the full send stream + was redacted.
  10. +
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
+
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
, + --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
+
+
+

+

ZFS has support for a limited version of data subsetting, in the + form of redaction. Using the zfs + redact command, a + can be created that stores a list of blocks containing + sensitive information. When provided to zfs + send, this causes a redacted send + to occur. Redacted sends omit the blocks containing sensitive information, + replacing them with REDACT records. When these send streams are received, a + redacted dataset is created. A redacted dataset cannot be + mounted by default, since it is incomplete. It can be used to receive other + send streams. In this way datasets can be used for data backup and + replication, with all the benefits that zfs send and receive have to offer, + while protecting sensitive information from being stored on less-trusted + machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.
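As a sketch of that workflow (dataset, bookmark, and host names are illustrative assumptions):
# zfs snapshot tank/data@full
# zfs clone tank/data@full tank/data-dev
  remove or overwrite the sensitive files in tank/data-dev
# zfs snapshot tank/data-dev@clean
# zfs redact tank/data@full book1 tank/data-dev@clean
# zfs send --redact book1 tank/data@full |
    ssh host zfs receive poolB/data
# zfs send -i tank/data@full tank/data-dev@clean |
    ssh host zfs receive poolB/data-dev
The final incremental is received as a clone of poolB/data@full, giving the target a complete, usable copy of the sanitized data while the redacted parent never carries the sensitive blocks.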

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.

+
+
+
+

+

See -v.

+
+
+

+
+

+

The following commands send a full stream and then an incremental + stream to a remote machine, restoring them into + + and + , + respectively. + + must contain the file system + , + and must not initially contain + .

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
July 27, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-release.8.html b/man/master/8/zfs-release.8.html new file mode 100644 index 000000000..f95207a21 --- /dev/null +++ b/man/master/8/zfs-release.8.html @@ -0,0 +1,325 @@ + + + + + + + zfs-release.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-release.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot
+
+ + + + + +
zfsholds [-rHp] + snapshot
+
+ + + + + +
zfsrelease [-r] + tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.
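For example (dataset and tag names are illustrative):
# zfs hold keep tank/home@monday
# zfs destroy tank/home@monday
  fails with EBUSY while the hold is in place
# zfs release keep tank/home@monday
# zfs destroy tank/home@monday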

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rHp] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers; use tab-delimited output.
+
+
Prints hold timestamps as Unix epoch timestamps.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
+
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-rename.8.html b/man/master/8/zfs-rename.8.html new file mode 100644 index 000000000..3d6b61f8b --- /dev/null +++ b/man/master/8/zfs-rename.8.html @@ -0,0 +1,375 @@ + + + + + + + zfs-rename.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-rename.8

+
+ + + + + +
ZFS-RENAME(8)System Manager's ManualZFS-RENAME(8)
+
+
+

+

zfs-rename — + rename ZFS dataset

+
+
+

+ + + + + +
zfsrename [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
+ + + + + +
zfsrename -p + [-f] + filesystem|volume + filesystem|volume
+
+ + + + + +
zfsrename -u + [-f] filesystem + filesystem
+
+ + + + + +
zfsrename -r + snapshot snapshot
+
+
+

+
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + -p [-f] + filesystem|volume + filesystem|volume
+
 
+
zfs rename + -u [-f] + filesystem filesystem
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
+
+
Force unmount any file systems that need to be unmounted in the + process. This flag has no effect if used together with the + -u flag.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
Do not remount file systems during rename. If a file system's + mountpoint property is set to + + or + , + the file system is not unmounted even if this option is not + given.
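For instance (dataset names are illustrative):
# zfs rename -p tank/projects/old tank/archive/2023/old
  creates tank/archive and tank/archive/2023 if they do not already exist
# zfs rename -u tank/srv tank/services
  renames the file system without unmounting or remounting it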
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are + the only dataset that can be renamed recursively.
+
+
+
+

+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-rollback.8.html b/man/master/8/zfs-rollback.8.html new file mode 100644 index 000000000..886ff5cde --- /dev/null +++ b/man/master/8/zfs-rollback.8.html @@ -0,0 +1,299 @@ + + + + + + + zfs-rollback.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rollback.8

+
+ + + + + +
ZFS-ROLLBACK(8)System Manager's ManualZFS-ROLLBACK(8)
+
+
+

+

zfs-rollback — + roll ZFS dataset back to snapshot

+
+
+

+ + + + + +
zfsrollback [-Rfr] + snapshot
+
+
+

+

When a dataset is rolled back, all data that has changed since the + snapshot is discarded, and the dataset reverts to the state at the time of + the snapshot. By default, the command refuses to roll back to a snapshot + other than the most recent one. In order to do so, all intermediate + snapshots and bookmarks must be destroyed by specifying the + -r option.

+

The -rR options do not recursively destroy + the child snapshots of a recursive snapshot. Only direct snapshots of the + specified filesystem are destroyed by either of these options. To completely + roll back a recursive snapshot, you must roll back the individual child + snapshots.

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones of + those snapshots.
+
+
Used with the -R option to force an unmount of any + clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
+
+
+
+

+
+

+

The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots:

+
# zfs + rollback -r + pool/home/anne@yesterday
+
+
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-send.8.html b/man/master/8/zfs-send.8.html new file mode 100644 index 000000000..c3624b0c2 --- /dev/null +++ b/man/master/8/zfs-send.8.html @@ -0,0 +1,836 @@ + + + + + + + zfs-send.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-send.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfssend [-DLPVbcehnpsvw] + [-R [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPVcensvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-PVenv] + -t receive_resume_token
+
+ + + + + +
zfssend [-PVnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark + redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVbcehnpsvw] [-R + [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from + the first snapshot to the second snapshot. For example, + -I @a fs@d + is similar to -i @a + ; + -i + + ; + -i + + fs@d. The incremental source may be specified as + with the -i option.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
, + --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
, + --exclude + dataset[,dataset]…
+
With -R, -X specifies + a set of datasets (and, hence, their descendants), to be excluded from + the send stream. The root dataset may not be excluded. + -X a + -X b is equivalent to + -X + a,b.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
, + --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes. Streams sent with -c will not + have their data recompressed on the receiver side using + -o compress= + value. The data will stay compressed as it was + from the sender. The new compression property will be set for future + data. Note that uncompressed data from the sender will still attempt + to compress on the receiver, unless you specify + -o compress= + .
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold + command), and indicating to zfs + receive that the holds be applied to the + dataset on the receiving system.
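For example, to carry an existing hold along with the backup (names are illustrative):
# zfs hold keep tank/data@snap
# zfs send -h tank/data@snap |
    ssh host zfs receive poolB/data
  the keep hold is re-created on poolB/data@snap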
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the + source may be the origin snapshot, which must be fully specified + (for example, + , + not just + ).

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
, + --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an + earlier snapshot in the destination's history. It will commonly be an + earlier snapshot in the destination's file system, in which case it + can be specified as the last component of the name (the + or + @ character and following). +
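For example, a bookmark can stand in for a source snapshot that has since been destroyed, assuming the snapshot was previously sent in full (names are illustrative):
# zfs bookmark pool/fs@a pool/fs#a
# zfs destroy pool/fs@a
# zfs send -i pool/fs#a pool/fs@b |
    ssh host zfs receive poolB/received/fs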

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
  1. To receive, as a clone, an incremental send from the original snapshot + to one of the snapshots it was redacted with respect to. In this case, + the stream will produce a valid dataset when received because all + blocks that were redacted in the parent are guaranteed to be present + in the child's send stream. This use case will produce a normal + snapshot, which can be used just like other snapshots.
  2. +
  3. To receive an incremental send from the original snapshot to something + redacted with respect to a subset of the set of snapshots the initial + snapshot was redacted with respect to. In this case, each block that + was redacted in the original is still redacted (redacting with respect + to additional snapshots causes less data to be redacted (because the + snapshots define what is permitted, and everything else is redacted)). + This use case will produce a new redacted snapshot.
  4. +
  5. To receive an incremental send from a redaction bookmark of the + original snapshot that was created when redacting with respect to a + subset of the set of snapshots the initial snapshot was created with + respect to anything else. A send stream from such a redaction bookmark + will contain all of the blocks necessary to fill in any redacted data, + should it be needed, because the sending system is aware of what + blocks were originally redacted. This will either produce a normal + snapshot or a redacted one, depending on whether the new send stream + is redacted.
  6. +
7. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  8. +
  9. To receive a full send as a clone of the redacted snapshot. Since the + stream is a full send, it definitionally contains all the data needed + to create a new dataset. This use case will either produce a normal + snapshot or a redacted one, depending on whether the full send stream + was redacted.
  10. +
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
+
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
, + --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
+
+
+

+

ZFS has support for a limited version of data subsetting, in the + form of redaction. Using the zfs + redact command, a + can be created that stores a list of blocks containing + sensitive information. When provided to zfs + send, this causes a redacted send + to occur. Redacted sends omit the blocks containing sensitive information, + replacing them with REDACT records. When these send streams are received, a + redacted dataset is created. A redacted dataset cannot be + mounted by default, since it is incomplete. It can be used to receive other + send streams. In this way datasets can be used for data backup and + replication, with all the benefits that zfs send and receive have to offer, + while protecting sensitive information from being stored on less-trusted + machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.
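For instance, once a redaction bookmark exists, a later incremental can be driven from it rather than from a snapshot (the names are illustrative, and tank/data#book1 is assumed to have been created with zfs redact):
# zfs send -i tank/data#book1 tank/data@tomorrow |
    ssh host zfs receive poolB/data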

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.

+
+
+
+

+

See -v.

+
+
+

+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
July 27, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-set.8.html b/man/master/8/zfs-set.8.html new file mode 100644 index 000000000..a395b54b0 --- /dev/null +++ b/man/master/8/zfs-set.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-set.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
+
Update the mountpoint, sharenfs, or sharesmb property, but do not mount or share the dataset.
+
+
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
-d depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
-o field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
-s source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
-t type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs set mountpoint=/export/home pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs set compression=off pool/home
+
# zfs set compression=on pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs set quota=50G pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-share.8.html b/man/master/8/zfs-share.8.html new file mode 100644 index 000000000..e8e6ba659 --- /dev/null +++ b/man/master/8/zfs-share.8.html @@ -0,0 +1,310 @@ + + + + + + + zfs-share.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-share.8

+
+ + + + + +
ZFS-SHARE(8)System Manager's ManualZFS-SHARE(8)
+
+
+

+

zfs-shareshare + and unshare ZFS filesystems

+
+
+

+ + + + + +
zfsshare [-l] + -a|filesystem
+
+ + + + + +
zfsunshare + -a|filesystem|mountpoint
+
+
+

+
+
zfs share + [-l] + -a|filesystem
+
Shares available ZFS file systems. +
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
+
+
+
zfs unshare + -a|filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
+
+
+
+
+
+

+

exports(5), smb.conf(5), + zfsprops(7)

+
+
+ + + + + +
May 17, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-snapshot.8.html b/man/master/8/zfs-snapshot.8.html new file mode 100644 index 000000000..552d8526b --- /dev/null +++ b/man/master/8/zfs-snapshot.8.html @@ -0,0 +1,352 @@ + + + + + + + zfs-snapshot.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-snapshot.8

+
+ + + + + +
ZFS-SNAPSHOT(8)System Manager's ManualZFS-SNAPSHOT(8)
+
+
+

+

zfs-snapshot — + create snapshots of ZFS datasets

+
+
+

+ + + + + +
zfssnapshot [-r] + [-o + property=value]… + dataset@snapname
+
+
+

+

All previous modifications by successful system calls to the file + system are part of the snapshots. Snapshots are taken atomically, so that + all snapshots correspond to the same moment in time. + zfs snap can be used as an + alias for zfs snapshot. See + the Snapshots section of + zfsconcepts(7) for details.

+
+
+ property=value
+
Set the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets
+
+
+
+

+
+

+

The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system.

+
# zfs + snapshot + pool/home/bob@yesterday
+
+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+
+

+

zfs-bookmark(8), zfs-clone(8), + zfs-destroy(8), zfs-diff(8), + zfs-hold(8), zfs-rename(8), + zfs-rollback(8), zfs-send(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-unallow.8.html b/man/master/8/zfs-unallow.8.html new file mode 100644 index 000000000..9971bc760 --- /dev/null +++ b/man/master/8/zfs-unallow.8.html @@ -0,0 +1,956 @@ + + + + + + + zfs-unallow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unallow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAMETYPENOTES



allowsubcommandMust also have the permission that is being allowed
bookmarksubcommand
clonesubcommandMust also have the create ability and mount ability in + the origin file system
createsubcommandMust also have the mount ability. Must also have the + refreservation ability to create a non-sparse volume.
destroysubcommandMust also have the mount ability
diffsubcommandAllows lookup of paths within a dataset given an object number, and + the ability to create snapshots necessary to zfs diff.
holdsubcommandAllows adding a user hold to a snapshot
load-keysubcommandAllows loading and unloading of encryption key (see zfs + load-key and zfs unload-key).
change-keysubcommandAllows changing an encryption key via zfs change-key.
mountsubcommandAllows mounting/umounting ZFS datasets
promotesubcommandMust also have the mount and promote ability in the + origin file system
receivesubcommandMust also have the mount and create ability
releasesubcommandAllows releasing a user hold which might destroy the snapshot
renamesubcommandMust also have the mount and create ability in the new + parent
rollbacksubcommandMust also have the mount ability
sendsubcommand
sharesubcommandAllows sharing file systems over NFS or SMB protocols
snapshotsubcommandMust also have the mount ability
groupquotaotherAllows accessing any groupquota@ property
groupobjquotaotherAllows accessing any groupobjquota@ + property
groupusedotherAllows reading any groupused@ property
groupobjusedotherAllows reading any groupobjused@ property
userpropotherAllows changing any user property
userquotaotherAllows accessing any userquota@ property
userobjquotaotherAllows accessing any userobjquota@ + property
userusedotherAllows reading any userused@ property
userobjusedotherAllows reading any userobjused@ property
projectobjquotaotherAllows accessing any projectobjquota@ + property
projectquotaotherAllows accessing any projectquota@ + property
projectobjusedotherAllows reading any projectobjused@ + property
projectusedotherAllows reading any projectused@ property
aclinheritproperty
aclmodeproperty
acltypeproperty
atimeproperty
canmountproperty
casesensitivityproperty
checksumproperty
compressionproperty
contextproperty
copiesproperty
dedupproperty
defcontextproperty
devicesproperty
dnodesizeproperty
encryptionproperty
execproperty
filesystem_limitproperty
fscontextproperty
keyformatproperty
keylocationproperty
logbiasproperty
mlslabelproperty
mountpointproperty
nbmandproperty
normalizationproperty
overlayproperty
pbkdf2itersproperty
primarycacheproperty
quotaproperty
readonlyproperty
recordsizeproperty
redundant_metadataproperty
refquotaproperty
refreservationproperty
relatimeproperty
reservationproperty
rootcontextproperty
secondarycacheproperty
setuidproperty
sharenfsproperty
sharesmbproperty
snapdevproperty
snapdirproperty
snapshot_limitproperty
special_small_blocksproperty
syncproperty
utf8onlyproperty
versionproperty
volblocksizeproperty
volmodeproperty
volsizeproperty
vscanproperty
xattrproperty
zonedproperty
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs + allow command. No permissions are explicitly + denied, so other permissions granted are still in effect. For example, if + the permission is granted by an ancestor. If no permissions are specified, + then all permissions for the specified user, + group, or everyone are removed. + Specifying everyone (or using the + -e option) only removes the permissions that were + granted to everyone, not all permissions for every user and group. See the + zfs allow command for a + description of the -ldugec options. +
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
+
+

+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-unjail.8.html b/man/master/8/zfs-unjail.8.html new file mode 100644 index 000000000..57d87a31a --- /dev/null +++ b/man/master/8/zfs-unjail.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-unjail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-unjail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail identified by JID jailid or name jailname. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

You cannot attach a jailed dataset's children to another jail. + You can also not attach the root file system of the jail or any dataset + which needs to be mounted before the zfs rc script is run inside the + jail, as it would be attached unmounted until it is mounted from the rc + script inside the jail.

+

To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The quota property cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
+
+
+
+

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-unload-key.8.html b/man/master/8/zfs-unload-key.8.html new file mode 100644 index 000000000..358429deb --- /dev/null +++ b/man/master/8/zfs-unload-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-unload-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unload-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root. Any descendant datasets will inherit their encryption key from the encryption root by default, meaning that loading, unloading, or changing the key for the encryption root will implicitly do the same for all inheriting datasets. If this inheritance is not desired, simply supply a keyformat when creating the child dataset or use zfs change-key to break an existing relationship, creating a new encryption root on the child. Note that the child's keyformat may match that of the parent while still creating a new encryption root, and that changing the encryption property alone does not create a new encryption root; this would simply use a different cipher suite with the same key as its encryption root. The one exception is that clones will always use their origin's encryption key. As a result of this exception, some encryption-related properties (namely keystatus, keyformat, keylocation, and pbkdf2iters) do not inherit like other ZFS properties and instead use the value determined by their encryption root. Encryption root inheritance can be tracked via the read-only encryptionroot property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-unmount.8.html b/man/master/8/zfs-unmount.8.html new file mode 100644 index 000000000..9820098f9 --- /dev/null +++ b/man/master/8/zfs-unmount.8.html @@ -0,0 +1,338 @@ + + + + + + + zfs-unmount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unmount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its mountpoint property, if the path exists and is empty. If mountpoint is set to legacy, the filesystem should instead be mounted using mount(8).
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
+
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-unzone.8.html b/man/master/8/zfs-unzone.8.html new file mode 100644 index 000000000..6e6bd11cb --- /dev/null +++ b/man/master/8/zfs-unzone.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-unzone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-unzone.8

+
+ + + + + +
ZFS-ZONE(8)System Manager's ManualZFS-ZONE(8)
+
+
+

+

zfs-zone, + zfs-unzoneattach and + detach ZFS filesystems to user namespaces

+
+
+

+ + + + + +
zfs zonensfile filesystem
+
+ + + + + +
zfs unzonensfile filesystem
+
+
+

+
+
zfs zone + nsfile filesystem
+
Attach the specified filesystem to the user + namespace identified by nsfile. From now on this + file system tree can be managed from within a user namespace if the + zoned property has been set. +

You cannot attach a zoned dataset's children to another user + namespace. You can also not attach the root file system of the user + namespace or any dataset which needs to be mounted before the zfs + service is run inside the user namespace, as it would be attached + unmounted until it is mounted from the service inside the user + namespace.

+

To allow management of the dataset from within a user namespace, the zoned property has to be set and the user namespace needs access to the /dev/zfs device. The quota property cannot be changed from within a user namespace.

+

After a dataset is attached to a user namespace and the + zoned property is set, a zoned file system cannot be + mounted outside the user namespace, since the user namespace + administrator might have set the mount point to an unacceptable + value.

+
+
zfs unzone + nsfile filesystem
+
Detach the specified filesystem from the user + namespace identified by nsfile.
+
+
+
+

+
+

+

The following example delegates the + tank/users dataset to a user namespace identified by + user namespace file /proc/1234/ns/user.

+
# zfs + zone /proc/1234/ns/user + tank/users
+
+
+
+

+

zfsprops(7)

+
+
+ + + + + +
June 3, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-upgrade.8.html b/man/master/8/zfs-upgrade.8.html new file mode 100644 index 000000000..94ee3cf0f --- /dev/null +++ b/man/master/8/zfs-upgrade.8.html @@ -0,0 +1,317 @@ + + + + + + + zfs-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-upgrade.8

+
+ + + + + +
ZFS-UPGRADE(8)System Manager's ManualZFS-UPGRADE(8)
+
+
+

+

zfs-upgrade — + manage on-disk version of ZFS filesystems

+
+
+

+ + + + + +
zfsupgrade
+
+ + + + + +
zfsupgrade -v
+
+ + + + + +
zfsupgrade [-r] + [-V version] + -a|filesystem
+
+
+

+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] + -a|filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of ZFS. zfs send + streams generated from new snapshots of these file systems cannot be + accessed on systems running older versions of ZFS. +

In general, the file system version is independent of the pool + version. See zpool-features(7) for information on + features of ZFS storage pools.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.

+
+
+ version
+
Upgrade to version. If not specified, upgrade to + the most recent version. This option can only be used to increase the + version number, and only up to the most recent version supported by + this version of ZFS.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
+
+
+
+
+
+

+

zpool-upgrade(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-userspace.8.html b/man/master/8/zfs-userspace.8.html new file mode 100644 index 000000000..6f3eafa7f --- /dev/null +++ b/man/master/8/zfs-userspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-userspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-userspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral, not a name; therefore it needs neither the -i option for SID-to-POSIX-ID translation, nor -n for numeric IDs, nor -t for types.
+
+
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-wait.8.html b/man/master/8/zfs-wait.8.html new file mode 100644 index 000000000..ee7b2d06b --- /dev/null +++ b/man/master/8/zfs-wait.8.html @@ -0,0 +1,282 @@ + + + + + + + zfs-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-wait.8

+
+ + + + + +
ZFS-WAIT(8)System Manager's ManualZFS-WAIT(8)
+
+
+

+

zfs-waitwait + for activity in ZFS filesystem to stop

+
+
+

+ + + + + +
zfswait [-t + activity[,activity]…] + filesystem
+
+
+

+

Waits until all background activity of the given types has ceased + in the given filesystem. The activity could cease because it has completed + or because the filesystem has been destroyed or unmounted. If no activities + are specified, the command waits until background activity of every type + listed below has ceased. If there is no activity of the given types in + progress, the command returns immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
The filesystem's internal delete queue to empty
+
+
+

Note that the internal delete queue does not finish draining until + all large files have had time to be fully destroyed and all open file + handles to unlinked files are closed.

+
+
+

+

lsof(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-zone.8.html b/man/master/8/zfs-zone.8.html new file mode 100644 index 000000000..b78193d20 --- /dev/null +++ b/man/master/8/zfs-zone.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-zone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-zone.8

+
+ + + + + +
ZFS-ZONE(8)System Manager's ManualZFS-ZONE(8)
+
+
+

+

zfs-zone, + zfs-unzoneattach and + detach ZFS filesystems to user namespaces

+
+
+

+ + + + + +
zfs zonensfile filesystem
+
+ + + + + +
zfs unzonensfile filesystem
+
+
+

+
+
zfs zone + nsfile filesystem
+
Attach the specified filesystem to the user + namespace identified by nsfile. From now on this + file system tree can be managed from within a user namespace if the + zoned property has been set. +

You cannot attach a zoned dataset's children to another user + namespace. You can also not attach the root file system of the user + namespace or any dataset which needs to be mounted before the zfs + service is run inside the user namespace, as it would be attached + unmounted until it is mounted from the service inside the user + namespace.

+

To allow management of the dataset from within a user namespace, the zoned property has to be set and the user namespace needs access to the /dev/zfs device. The quota property cannot be changed from within a user namespace.

+

After a dataset is attached to a user namespace and the + zoned property is set, a zoned file system cannot be + mounted outside the user namespace, since the user namespace + administrator might have set the mount point to an unacceptable + value.

+
+
zfs unzone + nsfile filesystem
+
Detach the specified filesystem from the user + namespace identified by nsfile.
+
+
+
+

+
+

+

The following example delegates the + tank/users dataset to a user namespace identified by + user namespace file /proc/1234/ns/user.

+
# zfs + zone /proc/1234/ns/user + tank/users
+
+
+
+

+

zfsprops(7)

+
+
+ + + + + +
June 3, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs.8.html b/man/master/8/zfs.8.html new file mode 100644 index 000000000..6c50e0882 --- /dev/null +++ b/man/master/8/zfs.8.html @@ -0,0 +1,1033 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)System Manager's ManualZFS(8)
+
+
+

+

zfsconfigure + ZFS datasets

+
+
+

+ + + + + +
zfs-?V
+
+ + + + + +
zfsversion
+
+ + + + + +
zfssubcommand + [arguments]
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace:

+

+
pool[/component]/component
+

for example:

+

+
rpool/var/log
+

The maximum length of a dataset name is ZFS_MAX_DATASET_NAME_LEN - 1 ASCII characters (currently 255). Additionally snapshots are allowed to contain a single @ character, while bookmarks are allowed to contain a single # character. / is used as separator between components. The maximum amount of nesting allowed in a path is zfs_max_dataset_nesting levels deep. ZFS tunables are explained in zfs(4).

+

A dataset can be one of the following:

+
+
+
+
Can be mounted within the standard system namespace and behaves like other + file systems. While ZFS file systems are designed to be POSIX-compliant, + known issues exist that prevent compliance in some cases. Applications + that depend on standards conformance might fail due to non-standard + behavior when checking file system free space.
+
+
A logical volume exported as a raw or block device. This type of dataset + should only be used when a block device is required. File systems are + typically used in most environments.
+
+
A read-only version of a file system or volume at a given point in time. + It is specified as + filesystem@name or + volume@name.
+
+
Much like a snapshot, but without the hold on on-disk + data. It can be used as the source of a send (but not for a receive). It + is specified as + filesystem#name or + volume#name.
+
+
+

See zfsconcepts(7) for details.

+
+

+

Properties are divided into two types: native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about properties, see + zfsprops(7).

+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and zvol data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused/projectused data. For an overview of encryption, see zfs-load-key(8).

+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs -V, + --version
+
 
+
zfs version
+
Displays the software version of the zfs userland + utility and the zfs kernel module.
+
+
+

+
+
zfs-list(8)
+
Lists the property information for the given datasets in tabular + form.
+
zfs-create(8)
+
Creates a new ZFS file system or volume.
+
zfs-destroy(8)
+
Destroys the given dataset(s), snapshot(s), or bookmark.
+
zfs-rename(8)
+
Renames the given dataset (filesystem or snapshot).
+
zfs-upgrade(8)
+
Manage upgrading the on-disk version of filesystems.
+
+
+
+

+
+
zfs-snapshot(8)
+
Creates snapshots with the given names.
+
zfs-rollback(8)
+
Roll back the given dataset to a previous snapshot.
+
zfs-hold(8)/zfs-release(8)
+
Add or remove a hold reference to the specified snapshot or snapshots. If a hold exists on a snapshot, attempts to destroy that snapshot by using the zfs destroy command return EBUSY.
+
zfs-diff(8)
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem.
+
+
+
+

+
+
zfs-clone(8)
+
Creates a clone of the given snapshot.
+
zfs-promote(8)
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot.
+
+
+
+

+
+
zfs-send(8)
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark.
+
zfs-receive(8)
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the + zfs-send(8) subcommand, which by default creates a full + stream.
+
zfs-bookmark(8)
+
Creates a new bookmark of the given snapshot or bookmark. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs + send command.
+
zfs-redact(8)
+
Generate a new redaction bookmark. This feature can be used to allow + clones of a filesystem to be made available on a remote system, in the + case where their parent need not (or needs to not) be usable.
+
+
+
+

+
+
zfs-get(8)
+
Displays properties for the given datasets.
+
zfs-set(8)
+
Sets the property or list of properties to the given value(s) for each + dataset.
+
zfs-inherit(8)
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists.
+
+
+
+

+
+
zfs-userspace(8)/zfs-groupspace(8)/zfs-projectspace(8)
+
Displays space consumed by, and quotas on, each user, group, or project in + the specified filesystem or snapshot.
+
zfs-project(8)
+
List, set, or clear project ID and/or inherit flag on the files or + directories.
+
+
+
+

+
+
zfs-mount(8)
+
Displays all ZFS file systems currently mounted, or mount ZFS filesystem + on a path described by its mountpoint property.
+
zfs-unmount(8)
+
Unmounts currently mounted ZFS file systems.
+
+
+
+

+
+
zfs-share(8)
+
Shares available ZFS file systems.
+
zfs-unshare(8)
+
Unshares currently shared ZFS file systems.
+
+
+
+

+
+
zfs-allow(8)
+
Delegate permissions on the specified filesystem or volume.
+
zfs-unallow(8)
+
Remove delegated permissions on the specified filesystem or volume.
+
+
+
+

+
+
zfs-change-key(8)
+
Add or change an encryption key on the specified dataset.
+
zfs-load-key(8)
+
Load the key for the specified encrypted dataset, enabling access.
+
zfs-unload-key(8)
+
Unload a key for the specified dataset, removing the ability to access the + dataset.
+
+
+
+

+
+
zfs-program(8)
+
Execute ZFS administrative operations programmatically via a Lua + script-language channel program.
+
+
+
+

+
+
zfs-jail(8)
+
Attaches a filesystem to a jail.
+
zfs-unjail(8)
+
Detaches a filesystem from a jail.
+
+
+
+

+
+
zfs-wait(8)
+
Wait for background activity in a filesystem to complete.
+
+
+
+
+

+

The zfs utility exits 0 on success, 1 if an error occurs, and 2 if invalid command line options were specified.

+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs + set + mountpoint=/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system.

+
# zfs + snapshot + pool/home/bob@yesterday
+
+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs + set + compression=off + pool/home
+
# zfs + set compression=on + pool/home/anne
+
+
+

+

The following command lists all active file systems and volumes in the system. Snapshots are displayed if the pool property listsnapshots is on. The default is off. See zpoolprops(7) for more information on pool properties.

+
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs + set quota=50G + pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots:

+
# zfs + rollback -r + pool/home/anne@yesterday
+
+
+

+

The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday.

+
# zfs + clone pool/home/bob@yesterday + pool/clone
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to see what has changed between a + prior snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected.

+
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
+

+

The following example creates a bookmark to a snapshot. This + bookmark can then be used instead of a snapshot in send streams.

+
# zfs + bookmark + rpool@snapshot + rpool#bookmark
+
+
+
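A hedged follow-up sketch (the later snapshot and destination names are illustrative): once the bookmark exists, the original snapshot can be destroyed and the bookmark still serves as the incremental source for a later send.
# zfs destroy rpool@snapshot
# zfs send -i rpool#bookmark rpool@newsnap | ssh host zfs receive otherpool/received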

+ Setting sharesmb Property Options on a ZFS File System

+

The following example shows how to share an SMB filesystem through ZFS. Note that a user and their password must be given.

+
# smbmount + //127.0.0.1/share_tmp /mnt/tmp + -o + user=workgroup/turbo,password=obrut,uid=1000
+

Minimal /etc/samba/smb.conf configuration + is required, as follows.

+
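One possible minimal layout is sketched below; the option values are illustrative assumptions, not mandated settings, and should be adapted to the local Samba configuration.
[global]
    # Illustrative: restrict Samba to the loopback interface and allow usershares
    bind interfaces only = yes
    interfaces = lo 127.0.0.1
    usershare max shares = 100
    usershare owner only = no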

Samba will need to bind to the loopback interface for the ZFS + utilities to communicate with Samba. This is the default behavior for most + Linux distributions.

+

Samba must be able to authenticate a user. This can be done in a + number of ways (passwd(5), LDAP, + smbpasswd(5), &c.). How to do this is outside the + scope of this document – refer to smb.conf(5) for + more information.

+

See the USERSHARES section + for all configuration options, in case you need to modify any options of the + share afterwards. Do note that any changes done with the + net(8) command will be undone if the share is ever + unshared (like via a reboot).

+
+
+
+

+
+
+
ZFS_COLOR: Use ANSI color in zfs diff and zfs list output.
+
+
ZFS_MOUNT_HELPER: Cause zfs mount to use mount(8) to mount ZFS datasets. This option is provided for backwards compatibility with older ZFS versions.
+
+
ZFS_SET_PIPE_MAX: Tells zfs to set the maximum pipe size for sends/receives. Disabled by default on Linux due to an unfixed deadlock in Linux's pipe size handling code.
+
+
ZFS_MODULE_TIMEOUT: Time, in seconds, to wait for /dev/zfs to appear. Defaults to 10 seconds, max 600 (10 minutes). If <0, wait forever; if 0, don't wait.
+
+
+
+

+

Committed.

+
+
+

+

attr(1), gzip(1), + ssh(1), chmod(2), + fsync(2), stat(2), + write(2), acl(5), + attributes(5), exports(5), + zfsconcepts(7), zfsprops(7), + exportfs(8), mount(8), + net(8), selinux(8), + zfs-allow(8), zfs-bookmark(8), + zfs-change-key(8), zfs-clone(8), + zfs-create(8), zfs-destroy(8), + zfs-diff(8), zfs-get(8), + zfs-groupspace(8), zfs-hold(8), + zfs-inherit(8), zfs-jail(8), + zfs-list(8), zfs-load-key(8), + zfs-mount(8), zfs-program(8), + zfs-project(8), zfs-projectspace(8), + zfs-promote(8), zfs-receive(8), + zfs-redact(8), zfs-release(8), + zfs-rename(8), zfs-rollback(8), + zfs-send(8), zfs-set(8), + zfs-share(8), zfs-snapshot(8), + zfs-unallow(8), zfs-unjail(8), + zfs-unload-key(8), zfs-unmount(8), + zfs-upgrade(8), + zfs-userspace(8), zfs-wait(8), + zpool(8)

+
+
+ + + + + +
May 12, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs_ids_to_path.8.html b/man/master/8/zfs_ids_to_path.8.html new file mode 100644 index 000000000..4eca4fc14 --- /dev/null +++ b/man/master/8/zfs_ids_to_path.8.html @@ -0,0 +1,274 @@ + + + + + + + zfs_ids_to_path.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_ids_to_path.8

+
+ + + + + +
ZFS_IDS_TO_PATH(8)System Manager's ManualZFS_IDS_TO_PATH(8)
+
+
+

+

zfs_ids_to_path — + convert objset and object ids to names and paths

+
+
+

+ + + + + +
zfs_ids_to_path[-v] pool + objset-id object-id
+
+
+

+

The zfs_ids_to_path utility converts the provided objset and object ids into a path to the file they refer to.

+
+
+
-v: Verbose. Print the dataset name and the file path within the dataset separately. This will work correctly even if the dataset is not mounted.
+
+
+
+
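For example, a hedged invocation (the pool name and the objset/object ids shown are illustrative values such as those reported in error messages):
# zfs_ids_to_path -v tank 21 3370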

+

zdb(8), zfs(8)

+
+
+ + + + + +
April 17, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs_prepare_disk.8.html b/man/master/8/zfs_prepare_disk.8.html new file mode 100644 index 000000000..ef24ca9fd --- /dev/null +++ b/man/master/8/zfs_prepare_disk.8.html @@ -0,0 +1,302 @@ + + + + + + + zfs_prepare_disk.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_prepare_disk.8

+
+ + + + + +
ZFS_PREPARE_DISK(8)System Manager's ManualZFS_PREPARE_DISK(8)
+
+
+

+

zfs_prepare_disk — + special script that gets run before bringing a disk into a + pool

+
+
+

+

zfs_prepare_disk is an optional script + that gets called by libzfs before bringing a disk into a pool. It can be + modified by the user to run whatever commands are necessary to prepare a + disk for inclusion into the pool. For example, users can add lines to + zfs_prepare_disk to do things like update the + drive's firmware or check the drive's health. + zfs_prepare_disk is optional and can be removed if + not needed. libzfs will look for the script at + @zfsexecdir@/zfs_prepare_disk.

+
+

+

zfs_prepare_disk will be passed the + following environment variables:

+

+
+
POOL_NAME
+
+
VDEV_PATH
+
+
VDEV_PREPARE
+
The reason the disk is being prepared for inclusion ('create', 'add', 'replace', or 'autoreplace'). This can be useful if you only want the script to be run under certain actions.
+
VDEV_UPATH
+
The underlying path to the disk. For multipath this would return one of the /dev/sd* paths to the disk. If the device is not a device mapper device, then VDEV_UPATH just returns the same value as VDEV_PATH.
+
VDEV_ENC_SYSFS_PATH
+
+
+

Note that some of these variables may have a blank value. + POOL_NAME is blank at pool creation time, for + example.

+
+
+
+
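A minimal sketch of such a script, assuming only the environment variables documented above (the logging and the placeholder comment are illustrative, not part of the shipped script):
#!/bin/sh
# Illustrative only: do extra work when a drive is being replaced, otherwise no-op.
case "$VDEV_PREPARE" in
    replace|autoreplace)
        logger "zfs_prepare_disk: checking $VDEV_UPATH before it joins $POOL_NAME"
        # A real script might run a vendor firmware or health check here.
        ;;
esac
exit 0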

+

zfs_prepare_disk runs with a limited + $PATH.

+
+
+

+

zfs_prepare_disk should return 0 on + success, non-zero otherwise. If non-zero is returned, the disk will not be + included in the pool.

+
+
+ + + + + +
August 30, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zgenhostid.8.html b/man/master/8/zgenhostid.8.html new file mode 100644 index 000000000..e9a1bbd28 --- /dev/null +++ b/man/master/8/zgenhostid.8.html @@ -0,0 +1,332 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)System Manager's ManualZGENHOSTID(8)
+
+
+

+

zgenhostid — + generate host ID into /etc/hostid

+
+
+

+ + + + + +
zgenhostid[-f] [-o + filename] [hostid]
+
+
+

+

Creates /etc/hostid file and stores the + host ID in it. If hostid was provided, validate and + store that value. Otherwise, randomly generate an ID.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Allow output overwrite.
+
+ filename
+
Write to filename instead of the default + /etc/hostid.
+
hostid
+
Specifies the value to be placed in /etc/hostid. It should be a number with a value between 1 and 2^32-1. If 0, generate a random ID. This value must be unique among your systems. It must be an 8-digit-long hexadecimal number, optionally prefixed by "0x".
+
+
+
+

+

/etc/hostid

+
+
+

+
+
Generate a random hostid and store it
+
+
# + zgenhostid
+
+
Record the libc-generated hostid in + /etc/hostid
+
+
# + zgenhostid + "$(hostid)"
+
+
Record a custom hostid (0xdeadbeef) in + /etc/hostid
+
+
# + zgenhostid + deadbeef
+
+
Record a custom hostid (0x01234567) in + /tmp/hostid and overwrite the file + if it exists
+
+
# + zgenhostid -f + -o /tmp/hostid + 0x01234567
+
+
+
+
+

+

genhostid(1), hostid(1), + spl(4)

+
+
+

+

zgenhostid emulates the + genhostid(1) utility and is provided for use on systems + which do not include the utility or do not provide the + sethostid(3) function.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zinject.8.html b/man/master/8/zinject.8.html new file mode 100644 index 000000000..8650c837c --- /dev/null +++ b/man/master/8/zinject.8.html @@ -0,0 +1,550 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
ZINJECT(8)System Manager's ManualZINJECT(8)
+
+
+

+

zinjectZFS + Fault Injector

+
+
+

+

zinject creates artificial problems in a + ZFS pool by simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+ + + + + +
zinject
+
+
List injection records.
+
+ + + + + +
zinject-b + objset:object:level:start:end + [-f frequency] + -amu [pool]
+
+
Force an error into the pool at a bookmark.
+
+ + + + + +
zinject-c + id|all
+
+
Cancel injection records.
+
+ + + + + +
zinject -d vdev -A degrade|fault pool
+
+
Force a vdev into the DEGRADED or FAULTED state.
+
+ + + + + +
zinject-d vdev + -D + latency:lanes + pool
+
+
Add an artificial delay to I/O requests on a particular device, such that + the requests take a minimum of latency milliseconds + to complete. Each delay has an associated number of + lanes which defines the number of concurrent I/O + requests that can be processed. +

For example, with a single lane delay of 10 ms + (-D + 10:1), the device will only + be able to service a single I/O request at a time with each request + taking 10 ms to complete. So, if only a single request is submitted + every 10 ms, the average latency will be 10 ms; but if more than one + request is submitted every 10 ms, the average latency will be more than + 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D + 10:2), then the device will + be able to service two requests at a time, each with a minimum latency + of 10 ms. So, if two requests are submitted every 10 ms, then the + average latency will be 10 ms; but if more than two requests are + submitted every 10 ms, the average latency will be more than 10 ms.

+

Also note, these delays are additive. So two invocations of + -D + 10:1 are roughly equivalent + to a single invocation of -D + 10:2. This also means, that + one can specify multiple lanes with differing target latencies. For + example, an invocation of -D + 10:1 followed by + -D + 25:2 will create 3 lanes on + the device: one lane with a latency of 10 ms and two lanes with a 25 ms + latency.

+
+
+ + + + + +
zinject-d vdev + [-e device_error] + [-L label_error] + [-T failure] + [-f frequency] + [-F] pool
+
+
Force a vdev error.
+
+ + + + + +
zinject-I [-s + seconds|-g + txgs] pool
+
+
Simulate a hardware failure that fails to honor a cache flush.
+
+ + + + + +
zinject-p function + pool
+
+
Panic inside the specified function.
+
+ + + + + +
zinject -t data -C dvas [-e device_error] [-f frequency] [-l level] [-r range] [-amq] path
+
+
Force an error into the contents of a file.
+
+ + + + + +
zinject -t dnode -C dvas [-e device_error] [-f frequency] [-l level] [-amq] path
+
+
Force an error into the metadnode for a file or directory.
+
+ + + + + +
zinject-t mos_type + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-r range] + [-amqu] pool
+
+
Force an error into the MOS of a pool.
+
+
+
+

+
+
+
Flush the ARC before injection.
+
+ objset:object:level:start:end
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+ dvas
+
Inject the given error only into specific DVAs. The mask should be + specified as a list of 0-indexed DVAs separated by commas + (e.g. + 0,2). This option is not + applicable to logical data errors such as decompress and + decrypt.
+
+ vdev
+
A vdev specified by path or GUID.
+
+ device_error
+
Specify checksum for an ECKSUM error, decompress for a data decompression error, decrypt for a data decryption error, corrupt to flip a bit in the data after a read, dtl for an ECHILD error, io for an EIO error where reopening the device will succeed, or nxio for an ENXIO error where reopening the device will fail.
+
+

For EIO and ENXIO, the "failed" reads or writes + still occur. The probe simply sets the error value reported by the I/O + pipeline so it appears the read or write failed. Decryption errors only + currently work with file data.

+
+
+ frequency
+
Only inject errors a fraction of the time. Expressed as a real number percentage between 0.0001 and 100.
+
+
Fail faster. Do fewer checks.
+
+ txgs
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+ level
+
Inject an error at a particular block level. The default is 0.
+
+ label_error
+
Set the label error region to one of nvlist, pad1, pad2, or uber.
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+ range
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+ seconds
+
Run for this many seconds before reporting failure.
+
+ failure
+
Set the failure type to one of all, claim, free, read, or write.
+
+ mos_type
+
Set this to mos for any data in the MOS, mosdir for an object directory, config for the pool configuration, bpobj for the block pointer list, spacemap for the space map, metaslab for the metaslab, or errlog for the persistent error log.
+
+
+
+
Unload the pool after injection.
+
+
+
+
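As a hedged illustration (the pool name tank and device sda are examples): the first command injects EIO errors on 10% of I/O requests to one device, the second adds a two-lane 25 ms delay, and the last two list and then cancel all injection records.
# zinject -d sda -e io -f 10 tank
# zinject -d sda -D 25:2 tank
# zinject
# zinject -c all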

+
+
+
Run zinject in debug mode.
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-add.8.html b/man/master/8/zpool-add.8.html new file mode 100644 index 000000000..ed277bc37 --- /dev/null +++ b/man/master/8/zpool-add.8.html @@ -0,0 +1,336 @@ + + + + + + + zpool-add.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-add.8

+
+ + + + + +
ZPOOL-ADD(8)System Manager's ManualZPOOL-ADD(8)
+
+
+

+

zpool-addadd + vdevs to ZFS storage pool

+
+
+

+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev
+
+
+

+

Adds the specified virtual devices to the given pool. The vdev specification is described in the Virtual Devices section of zpoolconcepts(7). The behavior of the -f option, and the device checks performed, are described in the zpool create subcommand.

+
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name + regardless of the /dev/disk path used to open + it.
+
+
Displays the configuration that would be used without actually adding the + vdevs. The actual pool creation can still fail due + to insufficient privileges or device sharing.
+
+
Display real paths for vdevs instead of only the + last component of the path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) manual page for a list of valid properties that can be set. The only property supported at the moment is ashift.
+
+
+
+

+
+

+

The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool.

+
# zpool add tank mirror sda sdb
+
+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool add pool cache sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+
+

+

zpool-attach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-remove(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-attach.8.html b/man/master/8/zpool-attach.8.html new file mode 100644 index 000000000..c8177b9bd --- /dev/null +++ b/man/master/8/zpool-attach.8.html @@ -0,0 +1,335 @@ + + + + + + + zpool-attach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-attach.8

+
+ + + + + +
ZPOOL-ATTACH(8)System Manager's ManualZPOOL-ATTACH(8)
+
+
+

+

zpool-attach — + attach new device to existing ZFS vdev

+
+
+

+ + + + + +
zpoolattach [-fsw] + [-o + property=value] + pool device new_device
+
+
+

+

Attaches new_device to the existing device. The behavior differs depending on whether the existing device is a RAID-Z device or a mirror/plain device.

+

If the existing device is a mirror or plain device (e.g. specified + as "sda" or + "mirror-7"), the new device will be + mirrored with the existing device, a resilver will be initiated, and the new + device will contribute to additional redundancy once the resilver completes. + If device is not currently part of a mirrored + configuration, device automatically transforms into a + two-way mirror of device and + new_device. If device is part of + a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately and any + running scrub is cancelled.

+

If the existing device is a RAID-Z device (e.g. specified as + "raidz2-0"), the new device will become part + of that RAID-Z group. A "raidz expansion" will be initiated, and + once the expansion completes, the new device will contribute additional + space to the RAID-Z group. The expansion entails reading all allocated space + from existing disks in the RAID-Z group, and rewriting it to the new disks + in the RAID-Z group (including the newly added + device). Its progress can be monitored with + zpool status.

+

Data redundancy is maintained during and after the expansion. If a + disk fails while the expansion is in progress, the expansion pauses until + the health of the RAID-Z vdev is restored (e.g. by replacing the failed disk + and waiting for reconstruction to complete). Expansion does not change the + number of failures that can be tolerated without data loss (e.g. a RAID-Z2 + is still a RAID-Z2 even after expansion). A RAID-Z vdev can be expanded + multiple times.

+

After the expansion completes, old blocks retain their old + data-to-parity ratio (e.g. 5-wide RAID-Z2 has 3 data and 2 parity) but + distributed among the larger set of disks. New blocks will be written with + the new data-to-parity ratio (e.g. a 5-wide RAID-Z2 which has been expanded + once to 6-wide, has 4 data and 2 parity). However, the vdev's assumed parity + ratio does not change, so slightly less space than is expected may be + reported for newly-written blocks, according to zfs + list, df, + ls -s, and similar + tools.

+

A pool-wide scrub is initiated at the end of the expansion in + order to verify the checksums of all blocks which have been copied during + the expansion.

+
+
+
Forces use of new_device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) + manual page for a list of valid properties that can be set. The only + property supported at the moment is + .
+
+
When attaching to a mirror or plain device, the + new_device is reconstructed sequentially to restore + redundancy as quickly as possible. Checksums are not verified during + sequential reconstruction so a scrub is started when the resilver + completes.
+
+
Waits until new_device has finished resilvering or + expanding before returning.
+
+
+
+
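For example (device names are illustrative): attaching sdb to the plain device sda turns it into a two-way mirror, while attaching a new disk to an existing raidz vdev starts a raidz expansion; -w waits for the operation to finish.
# zpool attach tank sda sdb
# zpool attach -w tank raidz2-0 sdg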

+

zpool-add(8), zpool-detach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-replace(8), + zpool-resilver(8)

+
+
+ + + + + +
June 28, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-checkpoint.8.html b/man/master/8/zpool-checkpoint.8.html new file mode 100644 index 000000000..3c1109c5a --- /dev/null +++ b/man/master/8/zpool-checkpoint.8.html @@ -0,0 +1,290 @@ + + + + + + + zpool-checkpoint.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-checkpoint.8

+
+ + + + + +
ZPOOL-CHECKPOINT(8)System Manager's ManualZPOOL-CHECKPOINT(8)
+
+
+

+

zpool-checkpoint — + check-point current ZFS storage pool state

+
+
+

+ + + + + +
zpoolcheckpoint [-d + [-w]] pool
+
+
+

+

Checkpoints the current state of pool , + which can be later restored by zpool + import --rewind-to-checkpoint. The existence of a + checkpoint in a pool prohibits the following zpool + subcommands: remove, attach, + detach, split, + and reguid. In addition, it + may break reservation boundaries if the pool lacks free space. The + zpool status command + indicates the existence of a checkpoint or the progress of discarding a + checkpoint from a pool. zpool + list can be used to check how much space the + checkpoint takes from the pool.

+
+
+

+
+
, + --discard
+
Discards an existing checkpoint from pool.
+
, + --wait
+
Waits until the checkpoint has finished being discarded before + returning.
+
+
+
+
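A hedged sketch of typical usage (tank is an example pool name): take a checkpoint before a risky change, rewind to it by re-importing if the change goes wrong, or discard it once the change is known to be good.
# zpool checkpoint tank
# zpool export tank
# zpool import --rewind-to-checkpoint tank
# zpool checkpoint -d tank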

+

zfs-snapshot(8), + zpool-import(8), zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-clear.8.html b/man/master/8/zpool-clear.8.html new file mode 100644 index 000000000..9b53176cf --- /dev/null +++ b/man/master/8/zpool-clear.8.html @@ -0,0 +1,284 @@ + + + + + + + zpool-clear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-clear.8

+
+ + + + + +
ZPOOL-CLEAR(8)System Manager's ManualZPOOL-CLEAR(8)
+
+
+

+

zpool-clear — + clear device errors in ZFS storage pool

+
+
+

+ + + + + +
zpoolclear [--power] + pool [device]…
+
+
+

+

Clears device errors in a pool. If no arguments are specified, all + device errors within the pool are cleared. If one or more devices is + specified, only those errors associated with the specified device or devices + are cleared.

+

If the pool was suspended it will be brought back online provided the devices can be accessed. Pools with multihost enabled which have been suspended cannot be resumed. While the pool was suspended, it may have been imported on another host, and resuming I/O could result in pool damage.

+
+
+
Power on the device's slot in the storage enclosure and wait for the device to show up before attempting to clear errors. This is done on all the devices specified. Alternatively, you can set the ZPOOL_AUTO_POWER_ON_SLOT environment variable to always enable this behavior. Note: This flag currently works on Linux only.
+
+
+
+
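For example (pool and device names are illustrative), to clear all device errors in a pool, or only those on a single device:
# zpool clear tank
# zpool clear tank sda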

+

zdb(8), zpool-reopen(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-create.8.html b/man/master/8/zpool-create.8.html new file mode 100644 index 000000000..260ed40dd --- /dev/null +++ b/man/master/8/zpool-create.8.html @@ -0,0 +1,449 @@ + + + + + + + zpool-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-create.8

+
+ + + + + +
ZPOOL-CREATE(8)System Manager's ManualZPOOL-CREATE(8)
+
+
+

+

zpool-create — + create ZFS storage pool

+
+
+

+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]… + [-o + feature@feature=value] + [-o + compatibility=off|legacy|file[,file]…] + [-O + file-system-property=value]… + [-R root] + [-t tname] + pool vdev
+
+
+

+

Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as the underscore ("_"), dash ("-"), colon (":"), space (" "), and period ("."). The pool names mirror, raidz, draid, spare and log are reserved, as are names beginning with mirror, raidz, draid, and spare. The vdev specification is described in the Virtual Devices section of zpoolconcepts(7).

+

The command attempts to verify that each device specified is accessible and not currently in use by another subsystem. However this check is not robust enough to detect simultaneous attempts to use a new device in different pools, even if multihost is enabled. The administrator must ensure that simultaneous invocations of any combination of zpool replace, zpool create, zpool add, or zpool labelclear do not refer to the same device. Using the same device in two pools will result in pool corruption.

+

There are some uses, such as being currently mounted, or specified + as the dedicated dump device, that prevents a device from ever being used by + ZFS. Other uses, such as having a preexisting UFS file system, can be + overridden with -f.

+

The command also checks that the replication strategy for the pool + is consistent. An attempt to combine redundant and non-redundant storage in + a single pool, or to mix disks and files, results in an error unless + -f is specified. The use of differently-sized + devices within a single raidz or mirror group is also flagged as an error + unless -f is specified.

+

Unless the -R option is specified, the default mount point is /pool. The mount point must not exist or must be empty, or else the root dataset will not be able to be mounted. This can be overridden with the -m option.

+

By default all supported features are enabled on the new pool. The -d option and the -o compatibility property (e.g. -o compatibility=2020) can be used to restrict the features that are enabled, so that the pool can be imported on other releases of ZFS.

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled with -o. See + zpool-features(7) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is + /pool or altroot/pool if + altroot is specified. The mount point must be an + absolute path, legacy, or none. For + more information on dataset mount points, see + zfsprops(7).
+
+
Displays the configuration that would be used without actually creating + the pool. The actual pool creation can still fail due to insufficient + privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See zpoolprops(7) for a + list of valid properties that can be set.
+
+ compatibility=off|legacy|file[,file]…
+
Specifies compatibility feature sets. See + zpool-features(7) for more information about + compatibility feature sets.
+
+ feature@feature=value
+
Sets the given pool feature. See the zpool-features(7) + section for a list of valid features that can be set. Value can be either + disabled or enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the pool. + See zfsprops(7) for a list of valid properties that can + be set.
+
+ root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ tname
+
Sets the in-core pool name to tname while the + on-disk name will be the name specified as pool. + This will set the default of the cachefile property to + none. This is intended to handle name space collisions + when creating pools for other systems, such as virtual machines or + physical machines whose pools live on network block devices.
+
+
+
+

+
+

+

The following command creates a pool with a single raidz root vdev + that consists of six disks:

+
# zpool + create tank + raidz sda sdb sdc sdd sde + sdf
+
+
+

+

The following command creates a pool with two mirrors, where each + mirror contains two disks:

+
# zpool + create tank + mirror sda sdb + mirror sdc sdd
+
+
+

+

The following command creates a non-redundant pool using two disk + partitions:

+
# zpool + create tank + sda1 sdb2
+
+
+

+

The following command creates a non-redundant pool using files. + While not recommended, a pool based on files can be useful for experimental + purposes.

+
# zpool + create tank + /path/to/file/a /path/to/file/b
+
+
+

+

The following command creates a new pool with an available hot + spare:

+
# zpool + create tank + mirror sda sdb + spare sdc
+
+
+

+

The following command creates a ZFS storage pool consisting of + two, two-way mirrors and mirrored log devices:

+
# zpool + create pool + mirror sda sdb + mirror sdc sdd log + mirror sde sdf
+
+
+
+

+

zpool-destroy(8), + zpool-export(8), zpool-import(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-destroy.8.html b/man/master/8/zpool-destroy.8.html new file mode 100644 index 000000000..6190eccad --- /dev/null +++ b/man/master/8/zpool-destroy.8.html @@ -0,0 +1,278 @@ + + + + + + + zpool-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-destroy.8

+
+ + + + + +
ZPOOL-DESTROY(8)System Manager's ManualZPOOL-DESTROY(8)
+
+
+

+

zpool-destroy — + destroy ZFS storage pool

+
+
+

+ + + + + +
zpooldestroy [-f] + pool
+
+
+

+

Destroys the given pool, freeing up any devices for other use. + This command tries to unmount any active datasets before destroying the + pool.

+
+
+
Forcefully unmount all active datasets.
+
+
+
+

+
+

+

The following command destroys the pool tank + and any datasets contained within:

+
# zpool + destroy -f + tank
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-detach.8.html b/man/master/8/zpool-detach.8.html new file mode 100644 index 000000000..73ab2ecbe --- /dev/null +++ b/man/master/8/zpool-detach.8.html @@ -0,0 +1,271 @@ + + + + + + + zpool-detach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-detach.8

+
+ + + + + +
ZPOOL-DETACH(8)System Manager's ManualZPOOL-DETACH(8)
+
+
+

+

zpool-detach — + detach device from ZFS mirror

+
+
+

+ + + + + +
zpooldetach pool device
+
+
+

+

Detaches device from a mirror. The operation + is refused if there are no other valid replicas of the data. If + device may be re-added to the pool later on then + consider the zpool offline + command instead.

+
+
+
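For example (names are illustrative), to detach sdb from the mirror it belongs to in pool tank:
# zpool detach tank sdb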

+

zpool-attach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-remove(8), zpool-replace(8), + zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-events.8.html b/man/master/8/zpool-events.8.html new file mode 100644 index 000000000..21d76dca2 --- /dev/null +++ b/man/master/8/zpool-events.8.html @@ -0,0 +1,872 @@ + + + + + + + zpool-events.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-events.8

+
+ + + + + +
ZPOOL-EVENTS(8)System Manager's ManualZPOOL-EVENTS(8)
+
+
+

+

zpool-events — + list recent events generated by kernel

+
+
+

+ + + + + +
zpoolevents [-vHf] + [pool]
+
+ + + + + +
zpoolevents -c
+
+
+

+

Lists all recent events generated by the ZFS kernel modules. These + events are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. For + more information about the subclasses and event payloads that can be + generated see EVENTS and the following + sections.

+
+
+

+
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Print the entire payload for each event.
+
+
+
+
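As a hedged illustration (tank is an example pool name): show verbose payloads for one pool, follow new events as they arrive, and clear the event backlog.
# zpool events -v tank
# zpool events -f
# zpool events -c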

+

These are the different event subclasses. The full event name + would be + , + but only the last part is listed here.

+

+
+
+
Issued when a checksum error has been detected.
+
+
Issued when there is an I/O error in a vdev in the pool.
+
+
Issued when there have been data errors in the pool.
+
+
Issued when an I/O request is determined to be "hung", this can + be caused by lost completion events due to flaky hardware or drivers. See + + in zfs(4) for additional information regarding + "hung" I/O detection and configuration.
+
+
Issued when a completed I/O request exceeds the maximum allowed time + specified by the + + module parameter. This can be an indicator of problems with the underlying + storage device. The number of delay events is ratelimited by the + + module parameter.
+
+
Issued every time a vdev change has been done to the pool.
+
+
Issued when a pool cannot be imported.
+
+
Issued when a pool is destroyed.
+
+
Issued when a pool is exported.
+
+
Issued when a pool is imported.
+
+
Issued when a REGUID (a new unique identifier for the pool has been regenerated) has been detected.
+
+
Issued when the vdev is unknown. Such as trying to clear device errors on a vdev that has failed/been kicked from the system/pool and is no longer available.
+
+
Issued when a vdev could not be opened (because it didn't exist for + example).
+
+
Issued when corrupt data have been detected on a vdev.
+
+
Issued when there are no more replicas to sustain the pool. This would + lead to the pool being + .
+
+
Issued when a missing device in the pool has been detected.
+
+
Issued when the system (kernel) has removed a device, and ZFS notices that the device isn't there any more. This is usually followed by a probe_failure event.
+
+
Issued when the label is OK but invalid.
+
+
Issued when the ashift alignment requirement has increased.
+
+
Issued when a vdev is detached from a mirror (or a spare detached from a vdev where it has been used to replace a failed drive - only works if the original drive has been re-added).
+
+
Issued when clearing device errors in a pool. Such as running + zpool clear on a device in + the pool.
+
+
Issued when a check to see if a given vdev could be opened is + started.
+
+
Issued when a spare has kicked in to replace a failed device.
+
+
Issued when a vdev can be automatically expanded.
+
+
Issued when there is an I/O failure in a vdev in the pool.
+
+
Issued when a probe fails on a vdev. This would occur if a vdev has been kicked from the system outside of ZFS (such as the kernel having removed the device).
+
+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+
+
Issued when a resilver is started.
+
+
Issued when the running resilver has finished.
+
+
Issued when a scrub is started on a pool.
+
+
Issued when a pool has finished scrubbing.
+
+
Issued when a scrub is aborted on a pool.
+
+
Issued when a scrub is resumed on a pool.
+
+
Issued when a scrub is paused on a pool.
+
+
 
+
+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with ZEVENT_.

+

+
+
+
Pool name.
+
+
Failmode - wait, continue, or panic. See the failmode property in zpoolprops(7) for more information.
+
+
The GUID of the pool.
+
+
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, + 4=recover 5=error).
+
+
The GUID of the vdev in question (the vdev failing or operated upon with + zpool clear, etc.).
+
+
Type of vdev - disk, file, mirror, etc. See the Virtual Devices section of zpoolconcepts(7) for more information on possible values.
+
+
Full path of the vdev, including any -partX.
+
+
ID of vdev (if any).
+
+
Physical FRU location.
+
+
State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed + to open, 5=faulted, 6=degraded, 7=healthy).
+
+
The ashift value of the vdev.
+
+
The time the last I/O request completed for the specified vdev.
+
+
The time since the last I/O request completed for the specified vdev.
+
+
List of spares, including full path and any -partX.
+
+
GUID(s) of spares.
+
+
How many read errors that have been detected on the vdev.
+
+
How many write errors that have been detected on the vdev.
+
+
How many checksum errors that have been detected on the vdev.
+
+
GUID of the vdev parent.
+
+
Type of parent. See vdev_type.
+
+
Path of the vdev parent (if any).
+
+
ID of the vdev parent (if any).
+
+
The object set number for a given I/O request.
+
+
The object number for a given I/O request.
+
+
The indirect level for the block. Level 0 is the lowest level and includes + data blocks. Values > 0 indicate metadata blocks at the appropriate + level.
+
+
The block ID for a given I/O request.
+
+
The error number for a failure when handling a given I/O request, + compatible with errno(3) with the value of + + used to indicate a ZFS checksum error.
+
+
The offset in bytes of where to write the I/O request for the specified + vdev.
+
+
The size in bytes of the I/O request.
+
+
The current flags describing how the I/O request should be handled. See + the I/O FLAGS section for the full list of I/O + flags.
+
+
The current stage of the I/O in the pipeline. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The time elapsed (in nanoseconds) waiting for the block layer to complete + the I/O request. Unlike zio_delta, this does not include + any vdev queuing time and is therefore solely a measure of the block layer + performance.
+
+
The time when a given I/O request was submitted.
+
+
The time required to service a given I/O request.
+
+
The previous state of the vdev.
+
+
Checksum algorithm used. See zfsprops(7) for more + information on the available checksum algorithms.
+
+
Whether or not the data is byteswapped.
+
+
[start, end) pairs of corruption offsets. Offsets are always aligned on a 64-bit boundary, and can include some gaps of non-corruption. (See bad_ranges_min_gap)
+
+
In order to bound the size of the bad_ranges array, gaps + of non-corruption less than or equal to + bad_ranges_min_gap bytes have been merged with adjacent + corruption. Always at least 8 bytes, since corruption is detected on a + 64-bit word basis.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits in that range which were clear in the + good data and set in the bad data.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits for that range which were set in the + good data and clear in the bad data.
+
+
If this field exists, it is an array of (bad data & ~(good data)); that is, the bits set in the bad data which are cleared in the good data. Each element corresponds to a byte whose offset is in a range in bad_ranges, and the array is ordered by offset. Thus, the first element is the first byte in the first bad_ranges range, and the last element is the last byte in the last bad_ranges range.
+
+
Like bad_set_bits, but contains (good + data & ~(bad + data)); that is, the bits set in the good data which are cleared in + the bad data.
+
+
+
+

+

The ZFS I/O pipeline is comprised of various stages which are + defined below. The individual stages are used to construct these basic I/O + operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on + an event to describe the life cycle of a given I/O request.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StageBit MaskOperations



ZIO_STAGE_OPEN0x00000001RWFCI
ZIO_STAGE_READ_BP_INIT0x00000002R----
ZIO_STAGE_WRITE_BP_INIT0x00000004-W---
ZIO_STAGE_FREE_BP_INIT0x00000008--F--
ZIO_STAGE_ISSUE_ASYNC0x00000010RWF--
ZIO_STAGE_WRITE_COMPRESS0x00000020-W---
ZIO_STAGE_ENCRYPT0x00000040-W---
ZIO_STAGE_CHECKSUM_GENERATE0x00000080-W---
ZIO_STAGE_NOP_WRITE0x00000100-W---
ZIO_STAGE_BRT_FREE0x00000200--F--
ZIO_STAGE_DDT_READ_START0x00000400R----
ZIO_STAGE_DDT_READ_DONE0x00000800R----
ZIO_STAGE_DDT_WRITE0x00001000-W---
ZIO_STAGE_DDT_FREE0x00002000--F--
ZIO_STAGE_GANG_ASSEMBLE0x00004000RWFC-
ZIO_STAGE_GANG_ISSUE0x00008000RWFC-
ZIO_STAGE_DVA_THROTTLE0x00010000-W---
ZIO_STAGE_DVA_ALLOCATE0x00020000-W---
ZIO_STAGE_DVA_FREE0x00040000--F--
ZIO_STAGE_DVA_CLAIM0x00080000---C-
ZIO_STAGE_READY0x00100000RWFCI
ZIO_STAGE_VDEV_IO_START0x00200000RW--I
ZIO_STAGE_VDEV_IO_DONE0x00400000RW--I
ZIO_STAGE_VDEV_IO_ASSESS0x00800000RW--I
ZIO_STAGE_CHECKSUM_VERIFY0x01000000R----
ZIO_STAGE_DONE0x02000000RWFCI
+
+
+

+

Every I/O request in the pipeline contains a set of flags which + describe its function and are used to govern its behavior. These flags will + be set in an event as a zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FlagBit Mask


ZIO_FLAG_DONT_AGGREGATE0x00000001
ZIO_FLAG_IO_REPAIR0x00000002
ZIO_FLAG_SELF_HEAL0x00000004
ZIO_FLAG_RESILVER0x00000008
ZIO_FLAG_SCRUB0x00000010
ZIO_FLAG_SCAN_THREAD0x00000020
ZIO_FLAG_PHYSICAL0x00000040
ZIO_FLAG_CANFAIL0x00000080
ZIO_FLAG_SPECULATIVE0x00000100
ZIO_FLAG_CONFIG_WRITER0x00000200
ZIO_FLAG_DONT_RETRY0x00000400
ZIO_FLAG_NODATA0x00001000
ZIO_FLAG_INDUCE_DAMAGE0x00002000
ZIO_FLAG_IO_ALLOCATING0x00004000
ZIO_FLAG_IO_RETRY0x00008000
ZIO_FLAG_PROBE0x00010000
ZIO_FLAG_TRYHARD0x00020000
ZIO_FLAG_OPTIONAL0x00040000
ZIO_FLAG_DONT_QUEUE0x00080000
ZIO_FLAG_DONT_PROPAGATE0x00100000
ZIO_FLAG_IO_BYPASS0x00200000
ZIO_FLAG_IO_REWRITE0x00400000
ZIO_FLAG_RAW_COMPRESS0x00800000
ZIO_FLAG_RAW_ENCRYPT0x01000000
ZIO_FLAG_GANG_CHILD0x02000000
ZIO_FLAG_DDT_CHILD0x04000000
ZIO_FLAG_GODFATHER0x08000000
ZIO_FLAG_NOPWRITE0x10000000
ZIO_FLAG_REEXECUTED0x20000000
ZIO_FLAG_DELEGATED0x40000000
ZIO_FLAG_FASTWRITE0x80000000
+
+
+

+

zfs(4), zed(8), + zpool-wait(8)

+
+
+ + + + + +
July 11, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-export.8.html b/man/master/8/zpool-export.8.html new file mode 100644 index 000000000..c10c84372 --- /dev/null +++ b/man/master/8/zpool-export.8.html @@ -0,0 +1,299 @@ + + + + + + + zpool-export.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-export.8

+
+ + + + + +
ZPOOL-EXPORT(8)System Manager's ManualZPOOL-EXPORT(8)
+
+
+

+

zpool-export — + export ZFS storage pools

+
+
+

+ + + + + +
zpoolexport [-f] + -a|pool
+
+
+

+

Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present.

+

Before exporting the pool, all datasets within the pool are + unmounted. A pool can not be exported if it has a shared spare that is + currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, so + that ZFS can label the disks with portable EFI labels. Otherwise, disk + drivers on platforms of different endianness will not recognize the + disks.

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, and allow export of pools with active + shared spares. +

This command will forcefully export the pool even if it has a + shared spare that is currently being used. This may lead to potential + data corruption.

+
+
+
+
+

+
+

+

The following command exports the devices in pool + tank so that they can be relocated or later + imported:

+
# zpool + export tank
+
+
+
+

+

zpool-import(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-get.8.html b/man/master/8/zpool-get.8.html new file mode 100644 index 000000000..0be761657 --- /dev/null +++ b/man/master/8/zpool-get.8.html @@ -0,0 +1,389 @@ + + + + + + + zpool-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-get.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolset + property=value + pool vdev
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified vdevs (or all vdevs if + all-vdevs is used) in the specified pool. These + properties are displayed with the following fields: +
+
+
+
Name of vdev.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the vdevprops(7) manual page for more information on the available vdev properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
+
zpool set + property=value + pool vdev
+
Sets the given property on the specified vdev in the specified pool. See + the vdevprops(7) manual page for more information on + what properties can be set and acceptable values.
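As an illustrative sketch of the retrieval form documented above (the pool name tank is assumed here, not taken from this manual page), the following prints the size and capacity properties in scripted, parsable output:
# zpool get -Hp -o name,value size,capacity tank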
+
+
+
+

+

vdevprops(7), + zpool-features(7), zpoolprops(7), + zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-history.8.html b/man/master/8/zpool-history.8.html new file mode 100644 index 000000000..26e2bd086 --- /dev/null +++ b/man/master/8/zpool-history.8.html @@ -0,0 +1,277 @@ + + + + + + + zpool-history.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-history.8

+
+ + + + + +
ZPOOL-HISTORY(8)System Manager's ManualZPOOL-HISTORY(8)
+
+
+

+

zpool-history — + inspect command history of ZFS storage pools

+
+
+

+ + + + + +
zpoolhistory [-il] + [pool]…
+
+
+

+

Displays the command history of the specified pool(s) or all pools + if no pool is specified.

+
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which in addition to the standard format includes the user name, the hostname, and the zone in which the operation was performed.
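As a brief, hedged example (the pool name tank is assumed), the following displays both internally logged events and long-format records for a single pool:
# zpool history -il tank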
+
+
+
+

+

zpool-checkpoint(8), + zpool-events(8), zpool-status(8), + zpool-wait(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-import.8.html b/man/master/8/zpool-import.8.html new file mode 100644 index 000000000..690de60a9 --- /dev/null +++ b/man/master/8/zpool-import.8.html @@ -0,0 +1,575 @@ + + + + + + + zpool-import.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-import.8

+
+ + + + + +
ZPOOL-IMPORT(8)System Manager's ManualZPOOL-IMPORT(8)
+
+
+

+

zpool-import — + import ZFS storage pools or list available pools

+
+
+

+ + + + + +
zpoolimport [-D] + [-d + dir|device]…
+
+ + + + + +
zpoolimport -a + [-DflmN] [-F + [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root]
+
+ + + + + +
zpoolimport [-Dflmt] + [-F [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
+
+

+
+
zpool import + [-D] [-d + dir|device]…
+
Lists pools available to import. If the -d or + -c options are not specified, this command + searches for devices using libblkid on Linux and geom on + FreeBSD. The -d option can + be specified multiple times, and all directories are searched. If the + device appears to be part of an exported pool, this command displays a + summary of the pool with the name of the pool, a numeric identifier, as + well as the vdev layout and current health of the device for each device + or file. Destroyed pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified. +

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DflmN] + [-F [-nTX]] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Rewinds pool to the checkpointed state. Once the pool is imported with this flag there is no way to undo the rewind. All changes and data that were written after the checkpoint are lost! The only exception is when the readonly mounting option is enabled. In this case, the checkpointed state of the pool is opened and an administrator can see how the pool would look if they were to fully rewind.
+
+
Scan using the default search path; the libblkid cache will not be consulted. A custom search path may be specified by setting the ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dflmt] [-F + [-nTX]] [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path; the libblkid cache will not be consulted. A custom search path may be specified by setting the ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. + : + This option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set + -o + cachefile=none when not explicitly + specified.
+
+
+
+
+
+

+
+

+

The following command displays available pools, and then imports + the pool tank for use on the system. The results from + this command are similar to the following:

+
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
+
+

+

zpool-export(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-initialize.8.html b/man/master/8/zpool-initialize.8.html new file mode 100644 index 000000000..0b857ff2a --- /dev/null +++ b/man/master/8/zpool-initialize.8.html @@ -0,0 +1,298 @@ + + + + + + + zpool-initialize.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-initialize.8

+
+ + + + + +
ZPOOL-INITIALIZE(8)System Manager's ManualZPOOL-INITIALIZE(8)
+
+
+

+

zpool-initialize — + write to unallocated regions of ZFS storage pool

+
+
+

+ + + + + +
zpoolinitialize + [-c|-s + |-u] [-w] + pool [device]…
+
+
+

+

Begins initializing by writing to all unallocated regions on the + specified devices, or all eligible devices in the pool if no individual + devices are specified. Only leaf data or log devices may be initialized.

+
+
, + --cancel
+
Cancel initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no cancellation + will occur on any device.
+
, + --suspend
+
Suspend initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no suspension will + occur on any device. Initializing can then be resumed by running + zpool initialize with no + flags on the relevant target devices.
+
, + --uninit
+
Clears the initialization state on the specified devices, or all eligible devices if none are specified. If the devices are being actively initialized the command will fail. After being cleared, zpool initialize with no flags can be used to re-initialize all unallocated regions on the relevant target devices.
+
, + --wait
+
Wait until the devices have finished initializing before returning.
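A hedged usage sketch (the pool name tank and device name sdb are assumed): the first command initializes a single device and waits for it to finish; the second, run independently, suspends any initialization currently in progress on the pool's eligible devices:
# zpool initialize -w tank sdb
# zpool initialize -s tank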
+
+
+
+

+

zpool-add(8), zpool-attach(8), + zpool-create(8), zpool-online(8), + zpool-replace(8), zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-iostat.8.html b/man/master/8/zpool-iostat.8.html new file mode 100644 index 000000000..a2f49159e --- /dev/null +++ b/man/master/8/zpool-iostat.8.html @@ -0,0 +1,490 @@ + + + + + + + zpool-iostat.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-iostat.8

+
+ + + + + +
ZPOOL-IOSTAT(8)System Manager's ManualZPOOL-IOSTAT(8)
+
+
+

+

zpool-iostat — + display logical I/O statistics for ZFS storage + pools

+
+
+

+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [pool…|[pool + vdev…]|vdev…] + [interval [count]]
+
+
+

+

Displays logical I/O statistics for the given pools/vdevs. Physical I/O statistics may be observed via iostat(1). If writes are located nearby, they may be merged into a single larger operation. Additional I/O may be generated depending on the level of vdev redundancy. To filter output, you may pass in a list of pools, a pool and list of vdevs in that pool, or a list of any vdevs from any pool. If no items are specified, statistics for every pool in the system are shown. When given an interval, the statistics are printed every interval seconds until killed. If the -n flag is specified the headers are displayed only once, otherwise they are displayed periodically. If count is specified, the command exits after count reports are printed. The first report printed is always the statistics since boot regardless of whether interval and count are passed. However, this behavior can be suppressed with the -y flag. Also note that the units of K, M, G, … that are printed in the report are in base 1024. To get the raw values, use the -p flag.

+
+
+ [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool iostat + output. Users can run any script found in their + ~/.zpool.d directory or from the system + /etc/zfs/zpool.d directory. Script names + containing the slash + () character + are not allowed. The default search path can be overridden by setting the + + environment variable. A privileged user can only run + -c if they have the + + environment variable set. If a script requires the use of a privileged + command, like smartctl(8), then it's recommended you + allow the user access to it in /etc/sudoers or add + the user to the /etc/sudoers.d/zfs file. +

If -c is passed without a script name, + it prints a list of all scripts. -c also sets + verbose mode + (-v).

+

Script output should be in the form of "name=value". + The column name is set to "name" and the value is set to + "value". Multiple lines can be used to output multiple + columns. The first line of output not in the "name=value" + format is displayed without a column title, and no more output after + that is displayed. This can be useful for printing error messages. Blank + or NULL values are printed as a '-' to make output AWKable.

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
Underlying path to the vdev (/dev/sd*). For + use with device mapper, multipath, or partitioned vdevs.
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Print headers only once when passed
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Print request size histograms for the leaf vdev's I/O. This includes + histograms of individual I/O (ind) and aggregate I/O (agg). These stats + can be useful for observing how well I/O aggregation is working. Note that + TRIM I/O may exceed 16M, but will be counted as 16M.
+
+
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
+
+
Normally the first line of output reports the statistics since boot: + suppress it.
+
+
Display latency histograms: +
+
+
Total I/O time (queuing + disk I/O time).
+
+
Disk I/O time (time reading/writing the disk).
+
+
Amount of time I/O spent in synchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in asynchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in scrub queue. Does not include disk + time.
+
+
Amount of time I/O spent in rebuild queue. Does not include disk + time.
+
+
+
+
Include average latency statistics: +
+
+
Average total I/O time (queuing + disk I/O time).
+
+
Average disk I/O time (time reading/writing the disk).
+
+
Average amount of time I/O spent in synchronous priority queues. Does + not include disk time.
+
+
Average amount of time I/O spent in asynchronous priority queues. Does + not include disk time.
+
+
Average queuing time in scrub queue. Does not include disk time.
+
+
Average queuing time in trim queue. Does not include disk time.
+
+
Average queuing time in rebuild queue. Does not include disk + time.
+
+
+
+
Include active queue statistics. Each priority queue has both pending + () + and active + () + I/O requests. Pending requests are waiting to be issued to the disk, and + active requests have been issued to disk and are waiting for completion. + These stats are broken out by priority queue: +
+
+
Current number of entries in synchronous priority queues.
+
+
Current number of entries in asynchronous priority queues.
+
+
Current number of entries in scrub queue.
+
+
Current number of entries in trim queue.
+
+
Current number of entries in rebuild queue.
+
+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.

+
+
+
+
+

+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool add pool cache sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+

+

iostat(1), smartctl(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-labelclear.8.html b/man/master/8/zpool-labelclear.8.html new file mode 100644 index 000000000..fa0ee4928 --- /dev/null +++ b/man/master/8/zpool-labelclear.8.html @@ -0,0 +1,275 @@ + + + + + + + zpool-labelclear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-labelclear.8

+
+ + + + + +
ZPOOL-LABELCLEAR(8)System Manager's ManualZPOOL-LABELCLEAR(8)
+
+
+

+

zpool-labelclear — + remove ZFS label information from device

+
+
+

+ + + + + +
zpoollabelclear [-f] + device
+
+
+

+

Removes ZFS label information from the specified + device. If the device is a cache + device, it also removes the L2ARC header (persistent L2ARC). The + device must not be part of an active pool + configuration.

+
+
+
Treat exported or foreign devices as inactive.
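A hedged example (the device path is assumed, not taken from this page): clear stale label information from a disk that previously belonged to an exported pool:
# zpool labelclear -f /dev/sdc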
+
+
+
+

+

zpool-destroy(8), + zpool-detach(8), zpool-remove(8), + zpool-replace(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-list.8.html b/man/master/8/zpool-list.8.html new file mode 100644 index 000000000..423318ae1 --- /dev/null +++ b/man/master/8/zpool-list.8.html @@ -0,0 +1,354 @@ + + + + + + + zpool-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-list.8

+
+ + + + + +
ZPOOL-LIST(8)System Manager's ManualZPOOL-LIST(8)
+
+
+

+

zpool-listlist + information about ZFS storage pools

+
+
+

+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]…] + [-T u|d] + [pool]… [interval + [count]]
+
+
+

+

Lists the given pools along with a health status and space usage. + If no pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until killed. If + count is specified, the command exits after + count reports are printed.

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+ property
+
Comma-separated list of properties to display. See the + zpoolprops(7) manual page for a list of valid + properties. The default list is + , + , + , + , + , + , + , + , + , + .
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs within + the pool, in addition to the pool-wide statistics.
+
+
+
+

+
+

+

The following command lists all available pools on the system. In + this case, the pool zion is faulted due to a missing + device. The results from this command are similar to the following:

+
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
+

+

The following command displays the detailed information for the + pool data. This pool is comprised of a single raidz + vdev where one of its devices increased its capacity by 10 GiB. In this + example, the pool will not be able to utilize this extra capacity until all + the devices under the raidz vdev have been expanded.

+
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
+
+

+

zpool-import(8), + zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-offline.8.html b/man/master/8/zpool-offline.8.html new file mode 100644 index 000000000..809aefc67 --- /dev/null +++ b/man/master/8/zpool-offline.8.html @@ -0,0 +1,318 @@ + + + + + + + zpool-offline.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-offline.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline + [--power|[-ft]] + pool device
+
+ + + + + +
zpoolonline + [--power] + [-e] pool + device
+
+
+

+
+
zpool offline + [--power|[-ft]] + pool device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Power off the device's slot in the storage enclosure. This flag + currently works on Linux only
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [--power] [-e] + pool device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Power on the device's slot in the storage enclosure and wait for the + device to show up before attempting to online it. Alternatively, you + can set the + + environment variable to always enable this behavior. This flag + currently works on Linux only
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
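A hedged illustration (the pool name tank and device sda are assumed): take a device offline only until the next reboot, then later bring it back online and expand it to use any newly available space:
# zpool offline -t tank sda
# zpool online -e tank sda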
+
+
+
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-online.8.html b/man/master/8/zpool-online.8.html new file mode 100644 index 000000000..850dd78df --- /dev/null +++ b/man/master/8/zpool-online.8.html @@ -0,0 +1,318 @@ + + + + + + + zpool-online.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-online.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline + [--power|[-ft]] + pool device
+
+ + + + + +
zpoolonline + [--power] + [-e] pool + device
+
+
+

+
+
zpool offline + [--power|[-ft]] + pool device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Power off the device's slot in the storage enclosure. This flag + currently works on Linux only
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [--power] [-e] + pool device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Power on the device's slot in the storage enclosure and wait for the + device to show up before attempting to online it. Alternatively, you + can set the + + environment variable to always enable this behavior. This flag + currently works on Linux only
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-reguid.8.html b/man/master/8/zpool-reguid.8.html new file mode 100644 index 000000000..3ff0eee86 --- /dev/null +++ b/man/master/8/zpool-reguid.8.html @@ -0,0 +1,268 @@ + + + + + + + zpool-reguid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reguid.8

+
+ + + + + +
ZPOOL-REGUID(8)System Manager's ManualZPOOL-REGUID(8)
+
+
+

+

zpool-reguid — + generate new unique identifier for ZFS storage + pool

+
+
+

+ + + + + +
zpoolreguid pool
+
+
+

+

Generates a new unique identifier for the pool. You must ensure + that all devices in this pool are online and healthy before performing this + action.
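A minimal, hedged example (the pool name tank is assumed):
# zpool reguid tank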

+
+
+

+

zpool-export(8), + zpool-import(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-remove.8.html b/man/master/8/zpool-remove.8.html new file mode 100644 index 000000000..996f29b7d --- /dev/null +++ b/man/master/8/zpool-remove.8.html @@ -0,0 +1,363 @@ + + + + + + + zpool-remove.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-remove.8

+
+ + + + + +
ZPOOL-REMOVE(8)System Manager's ManualZPOOL-REMOVE(8)
+
+
+

+

zpool-remove — + remove devices from ZFS storage pool

+
+
+

+ + + + + +
zpoolremove [-npw] + pool device
+
+ + + + + +
zpoolremove -s + pool
+
+
+

+
+
zpool remove + [-npw] pool + device
+
Removes the specified device from the pool. This command supports removing + hot spare, cache, log, and both mirrored and non-redundant primary + top-level vdevs, including dedup and special vdevs. +

Top-level vdevs can only be removed if the primary pool + storage does not contain a top-level raidz vdev, all top-level vdevs + have the same sector size, and the keys for all encrypted datasets are + loaded.

+

Removing a top-level vdev reduces the + total amount of space in the storage pool. The specified device will be + evacuated by copying all allocated space from it to the other devices in + the pool. In this case, the zpool + remove command initiates the removal and + returns, while the evacuation continues in the background. The removal + progress can be monitored with zpool + status. If an I/O error is encountered during + the removal process it will be cancelled. The + + feature flag must be enabled to remove a top-level vdev, see + zpool-features(7).

+

A mirrored top-level device (log or data) can be removed by + specifying the top- level mirror for the same. Non-log devices or data + devices that are part of a mirrored configuration can be removed using + the zpool detach + command.

+
+
+
Do not actually perform the removal ("No-op"). Instead, + print the estimated amount of memory that will be used by the mapping + table after the removal completes. This is nonzero only for top-level + vdevs.
+
+
+
+
Used in conjunction with the -n flag, displays + numbers as parsable (exact) values.
+
+
Waits until the removal has completed before returning.
+
+
+
zpool remove + -s pool
+
Stops and cancels an in-progress removal of a top-level vdev.
+
+
+
+

+
+

+

The following commands remove the mirrored log device mirror-2 and mirrored top-level data device mirror-1.

+

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
# zpool + remove tank + mirror-2
+

The command to remove the mirrored data + mirror-1 is:

+
# zpool + remove tank + mirror-1
+
+
+
+

+

zpool-add(8), zpool-detach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-replace(8), zpool-split(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-reopen.8.html b/man/master/8/zpool-reopen.8.html new file mode 100644 index 000000000..bf061c80b --- /dev/null +++ b/man/master/8/zpool-reopen.8.html @@ -0,0 +1,270 @@ + + + + + + + zpool-reopen.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reopen.8

+
+ + + + + +
ZPOOL-REOPEN(8)System Manager's ManualZPOOL-REOPEN(8)
+
+
+

+

zpool-reopen — + reopen vdevs associated with ZFS storage pools

+
+
+

+ + + + + +
zpoolreopen [-n] + [pool]…
+
+
+

+

Reopen all vdevs associated with the specified pools, or all pools + if none specified.

+
+
+

+
+
+
Do not restart an in-progress scrub operation. This is not recommended and + can result in partially resilvered devices unless a second scrub is + performed.
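A hedged example (the pool name tank is assumed): reopen the vdevs of a single pool without restarting an in-progress scrub:
# zpool reopen -n tank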
+
+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-replace.8.html b/man/master/8/zpool-replace.8.html new file mode 100644 index 000000000..a1c850f47 --- /dev/null +++ b/man/master/8/zpool-replace.8.html @@ -0,0 +1,304 @@ + + + + + + + zpool-replace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-replace.8

+
+ + + + + +
ZPOOL-REPLACE(8)System Manager's ManualZPOOL-REPLACE(8)
+
+
+

+

zpool-replace — + replace one device with another in ZFS storage + pool

+
+
+

+ + + + + +
zpoolreplace [-fsw] + [-o + property=value] + pool device + [new-device]
+
+
+

+

Replaces device with + new-device. This is equivalent to attaching + new-device, waiting for it to resilver, and then + detaching device. Any in progress scrub will be + cancelled.

+

The size of new-device must be greater than + or equal to the minimum size of all the devices in a mirror or raidz + configuration.

+

new-device is required if the pool is not + redundant. If new-device is not specified, it defaults + to device. This form of replacement is useful after an + existing disk has failed and has been physically replaced. In this case, the + new disk may have the same /dev path as the old + device, even though it is actually a different disk. ZFS recognizes + this.

+
+
+
Forces use of new-device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) manual page for a list of valid properties that can be set. The only property supported at the moment is ashift.
+
+
The new-device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the resilver + completes. Sequential reconstruction is not supported for raidz + configurations.
+
+
Waits until the replacement has completed before returning.
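A hedged sketch (the pool name tank and device names sda and sdb are assumed): replace a failed disk with a new one and wait for the resilver to finish before returning:
# zpool replace -w tank sda sdb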
+
+
+
+

+

zpool-detach(8), + zpool-initialize(8), zpool-online(8), + zpool-resilver(8)

+
+
+ + + + + +
May 29, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-resilver.8.html b/man/master/8/zpool-resilver.8.html new file mode 100644 index 000000000..7cc7cce3c --- /dev/null +++ b/man/master/8/zpool-resilver.8.html @@ -0,0 +1,272 @@ + + + + + + + zpool-resilver.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-resilver.8

+
+ + + + + +
ZPOOL-RESILVER(8)System Manager's ManualZPOOL-RESILVER(8)
+
+
+

+

zpool-resilver — + resilver devices in ZFS storage pools

+
+
+

+ + + + + +
zpoolresilver pool
+
+
+

+

Starts a resilver of the specified pools. If an existing resilver is already running it will be restarted from the beginning. Any drives that were scheduled for a deferred resilver will be added to the new one. This requires the resilver_defer pool feature.
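A minimal, hedged example (the pool name tank is assumed): restart resilvering, folding in any deferred resilvers:
# zpool resilver tank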

+
+
+

+

zpool-iostat(8), + zpool-online(8), zpool-reopen(8), + zpool-replace(8), zpool-scrub(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-scrub.8.html b/man/master/8/zpool-scrub.8.html new file mode 100644 index 000000000..a64e5866b --- /dev/null +++ b/man/master/8/zpool-scrub.8.html @@ -0,0 +1,362 @@ + + + + + + + zpool-scrub.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-scrub.8

+
+ + + + + +
ZPOOL-SCRUB(8)System Manager's ManualZPOOL-SCRUB(8)
+
+
+

+

zpool-scrub — + begin or resume scrub of ZFS storage pools

+
+
+

+ + + + + +
zpoolscrub + [-s|-p] + [-w] [-e] + pool
+
+
+

+

Begins a scrub or resumes a paused scrub. The scrub examines all + data in the specified pools to verify that it checksums correctly. For + replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any + damage discovered during the scrub. The zpool + status command reports the progress of the scrub and + summarizes the results of the scrub upon completion.

+

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be out + of date (for example, when attaching a new device to a mirror or replacing + an existing device), whereas scrubbing examines all data to discover silent + errors due to hardware faults or disk failure.

+

When scrubbing a pool with encrypted filesystems the keys do not + need to be loaded. However, if the keys are not loaded and an unrepairable + checksum error is detected the file name cannot be included in the + zpool status + -v verbose error report.

+

Because scrubbing and resilvering are I/O-intensive operations, + ZFS only allows one at a time.

+

A scrub is split into two parts: metadata scanning and block + scrubbing. The metadata scanning sorts blocks into large sequential ranges + which can then be read much more efficiently from disk when issuing the + scrub I/O.

+

If a scrub is paused, the zpool + scrub resumes it. If a resilver is in progress, ZFS + does not allow a scrub to be started until the resilver completes.

+

Note that, due to changes in pool data on a live system, it is + possible for scrubs to progress slightly beyond 100% completion. During this + period, no completion time estimate will be provided.

+
+
+

+
+
+
Stop scrubbing.
+
+
Pause scrubbing. Scrub pause state and progress are periodically synced to + disk. If the system is restarted or pool is exported during a paused + scrub, even after import, scrub will remain paused until it is resumed. + Once resumed the scrub will pick up from the place where it was last + checkpointed to disk. To resume a paused scrub issue + zpool scrub or + zpool scrub + -e again.
+
+
Wait until scrub has completed before returning.
+
+
Only scrub files with known data errors as reported by zpool status -v. The pool must have been scrubbed at least once with the head_errlog feature enabled to use this option. Error scrubbing cannot be run simultaneously with regular scrubbing or resilvering, nor can it be run when a regular scrub is paused.
+
+
+
+

+
+

+

Status of pool with ongoing scrub:

+

+
+
# zpool status
+  ...
+  scan: scrub in progress since Sun Jul 25 16:07:49 2021
+        403M / 405M scanned at 100M/s, 68.4M / 405M issued at 10.0M/s
+        0B repaired, 16.91% done, 00:00:04 to go
+  ...
+
+

Where metadata which references 403M of file data has been scanned + at 100M/s, and 68.4M of that file data has been scrubbed sequentially at + 10.0M/s.

+
+
+
+

+

On machines using systemd, scrub timers can be enabled on a per-pool basis. weekly and monthly timer units are provided.

+
+
+
systemctl enable + zfs-scrub-weekly@rpool.timer + --now
+
+
systemctl + enable + zfs-scrub-monthly@otherpool.timer + --now
+
+
+
+

+

systemd.timer(5), + zpool-iostat(8), + zpool-resilver(8), + zpool-status(8)

+
+
+ + + + + +
June 22, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-set.8.html b/man/master/8/zpool-set.8.html new file mode 100644 index 000000000..42f245cb9 --- /dev/null +++ b/man/master/8/zpool-set.8.html @@ -0,0 +1,389 @@ + + + + + + + zpool-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-set.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolset + property=value + pool vdev
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified vdevs (or all vdevs if + all-vdevs is used) in the specified pool. These + properties are displayed with the following fields: +
+
+
+
Name of vdev.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the vdevprops(7) manual page for more information on the available vdev properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
+
zpool set + property=value + pool vdev
+
Sets the given property on the specified vdev in the specified pool. See + the vdevprops(7) manual page for more information on + what properties can be set and acceptable values.
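A hedged example of setting a pool property (the pool name tank is assumed, not taken from this page):
# zpool set autotrim=on tank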
+
+
+
+

+

vdevprops(7), + zpool-features(7), zpoolprops(7), + zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-split.8.html b/man/master/8/zpool-split.8.html new file mode 100644 index 000000000..280143f1b --- /dev/null +++ b/man/master/8/zpool-split.8.html @@ -0,0 +1,317 @@ + + + + + + + zpool-split.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-split.8

+
+ + + + + +
ZPOOL-SPLIT(8)System Manager's ManualZPOOL-SPLIT(8)
+
+
+

+

zpool-split — + split devices off ZFS storage pool, creating new + pool

+
+
+

+ + + + + +
zpoolsplit [-gLlnP] + [-o + property=value]… + [-R root] + pool newpool + [device]…
+
+
+

+

Splits devices off pool creating + newpool. All vdevs in pool must + be mirrors and the pool must not be in the process of resilvering. At the + time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool.

+

The optional device specification causes the specified device(s) + to be included in the new pool and, should any devices + remain unspecified, the last device in each mirror is used as would be by + default.

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Indicates that this command will request encryption keys for all encrypted + datasets it attempts to mount as it is bringing the new pool online. Note + that if any datasets have + =, + this command will block waiting for the keys to be entered. Without this + flag, encrypted datasets will be left unavailable until the keys are + loaded.
+
+
Do a dry-run ("No-op") split: do not actually perform it. Print + out the expected configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ property=value
+
Sets the specified property for newpool. See the + zpoolprops(7) manual page for more information on the + available pool properties.
+
+ root
+
Set + + for newpool to root and + automatically import it.
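A hedged sketch (the pool names and altroot path are assumed): split the last device of each mirror off into a new pool and import it under an alternate root:
# zpool split -R /mnt tank tank2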
+
+
+
+

+

zpool-import(8), + zpool-list(8), zpool-remove(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-status.8.html b/man/master/8/zpool-status.8.html new file mode 100644 index 000000000..529c52493 --- /dev/null +++ b/man/master/8/zpool-status.8.html @@ -0,0 +1,371 @@ + + + + + + + zpool-status.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-status.8

+
+ + + + + +
ZPOOL-STATUS(8)System Manager's ManualZPOOL-STATUS(8)
+
+
+

+

zpool-status — + show detailed health status for ZFS storage + pools

+
+
+

+ + + + + +
zpoolstatus [-DigLpPstvx] + [-T u|d] + [-c + [SCRIPT1[,SCRIPT2]…]] + [pool]… [interval + [count]]
+
+
+

+

Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in the + system is displayed. For more information on pool and device health, see the + Device Failure and + Recovery section of zpoolconcepts(7).

+

If a scrub or resilver is in progress, this command reports the + percentage done and the estimated time to completion. Both of these are only + approximate, because the amount of data in the pool and the other workloads + on the system can change.

+
+
+
Display vdev enclosure slot power status (on or off).
+
+ [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool status + output. See the -c option of + zpool iostat for complete + details.
+
+
Display vdev initialization status.
+
+
Display vdev GUIDs instead of the normal device names These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the number of leaf vdev slow I/O operations. This is the number of I/O operations that didn't complete in zio_slow_io_ms milliseconds (30 seconds by default). This does not necessarily mean the I/O operations failed to complete, just took an unreasonably long amount of time. This may indicate a problem with the underlying storage.
+
+
Display vdev TRIM status.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Displays verbose data error information, printing out a complete list of + all data errors since the last complete pool scrub. If the head_errlog + feature is enabled and files containing errors have been removed then the + respective filenames will not be reported in subsequent runs of this + command.
+
+
Only display status for pools that are exhibiting errors or are otherwise + unavailable. Warnings about pools not using the latest on-disk format will + not be included.
+
+
+
+

+
+

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+

+

zpool-events(8), + zpool-history(8), zpool-iostat(8), + zpool-list(8), zpool-resilver(8), + zpool-scrub(8), zpool-wait(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-sync.8.html b/man/master/8/zpool-sync.8.html new file mode 100644 index 000000000..d6673acb9 --- /dev/null +++ b/man/master/8/zpool-sync.8.html @@ -0,0 +1,269 @@ + + + + + + + zpool-sync.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-sync.8

+
+ + + + + +
ZPOOL-SYNC(8)System Manager's ManualZPOOL-SYNC(8)
+
+
+

+

zpool-syncflush + data to primary storage of ZFS storage pools

+
+
+

+ + + + + +
zpoolsync [pool]…
+
+
+

+

This command forces all in-core dirty data to be written to the + primary pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified pools.
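A hedged example (the pool name tank is assumed): force dirty data for a single pool to be written out:
# zpool sync tank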

+
+
+

+

zpoolconcepts(7), + zpool-export(8), zpool-iostat(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-trim.8.html b/man/master/8/zpool-trim.8.html new file mode 100644 index 000000000..ed2ad1a0b --- /dev/null +++ b/man/master/8/zpool-trim.8.html @@ -0,0 +1,326 @@ + + + + + + + zpool-trim.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-trim.8

+
+ + + + + +
ZPOOL-TRIM(8)System Manager's ManualZPOOL-TRIM(8)
+
+
+

+

zpool-trim — + initiate TRIM of free space in ZFS storage pool

+
+
+

+ + + + + +
zpooltrim [-dw] + [-r rate] + [-c|-s] + pool [device]…
+
+
+

+

Initiates an immediate on-demand TRIM operation for all of the + free space in a pool. This operation informs the underlying storage devices + of all blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.

+

A manual on-demand TRIM operation can be initiated irrespective of + the autotrim pool property setting. See the documentation + for the autotrim property above for the types of vdev + devices which can be trimmed.

+
+
, + --secure
+
Causes a secure TRIM to be initiated. When performing a secure TRIM, the + device guarantees that data stored on the trimmed blocks has been erased. + This requires support from the device and is not supported by all + SSDs.
+
, + --rate rate
+
Controls the rate at which the TRIM operation progresses. Without this + option TRIM is executed as quickly as possible. The rate, expressed in + bytes per second, is applied on a per-vdev basis and may be set + differently for each leaf vdev.
+
, + --cancel
+
Cancel trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no cancellation will + occur on any device.
+
, + --suspend
+
Suspend trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no suspension will + occur on any device. Trimming can then be resumed by running + zpool trim with no flags + on the relevant target devices.
+
, + --wait
+
Wait until the devices are done being trimmed before returning.
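A hedged example (the pool name tank is assumed): start a TRIM of all free space in the pool and wait for it to complete before returning:
# zpool trim -w tank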
+
+
+
+

+

On machines using systemd, trim timers can be enabled on a + per-pool basis. weekly and + monthly timer units are provided.

+
+
+
systemctl enable + zfs-trim-weekly@rpool.timer + --now
+
+
systemctl + enable + zfs-trim-monthly@otherpool.timer + --now
+
+
+
+

+

systemd.timer(5), + zpoolprops(7), + zpool-initialize(8), + zpool-wait(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-upgrade.8.html b/man/master/8/zpool-upgrade.8.html new file mode 100644 index 000000000..51b7037cb --- /dev/null +++ b/man/master/8/zpool-upgrade.8.html @@ -0,0 +1,337 @@ + + + + + + + zpool-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-upgrade.8

+
+ + + + + +
ZPOOL-UPGRADE(8)System Manager's ManualZPOOL-UPGRADE(8)
+
+
+

+

zpool-upgrade — + manage version and feature flags of ZFS storage + pools

+
+
+

+ + + + + +
zpoolupgrade
+
+ + + + + +
zpoolupgrade -v
+
+ + + + + +
zpoolupgrade [-V + version] + -a|pool
+
+
+

+
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools (subject to + the -o compatibility + property).
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by this version of ZFS. See + zpool-features(7) for a description of the feature flags + supported by this version of ZFS.
+
zpool upgrade + [-V version] + -a|pool
+
Enables all supported features on the given pool. +

If the pool has specified compatibility feature sets using the + -o compatibility property, + only the features present in all requested compatibility sets will be + enabled. If this property is set to legacy then no + upgrade will take place.

+

Once this is done, the pool will no longer be accessible on + systems that do not support feature flags. See + zpool-features(7) for details on compatibility with + systems that support feature flags, but do not support all features + enabled on the pool.

+
+
+
Enables all supported features (from specified compatibility sets, if + any) on all pools.
+
+ version
+
Upgrade to the specified legacy version. If specified, no features + will be enabled on the pool. This option can only be used to increase + the version number up to the last supported legacy version + number.
+
+
+
+
+
+

+
+

+

The following command upgrades all ZFS Storage pools to the + current version of the software:

+
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
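As a further hedged sketch of the -V flag described above (the pool name tank and the legacy version number 28 are illustrative assumptions only):
# zpool upgrade -V 28 tank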
+
+
+
+

+

zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zpool-history(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-wait.8.html b/man/master/8/zpool-wait.8.html new file mode 100644 index 000000000..0e41d79fc --- /dev/null +++ b/man/master/8/zpool-wait.8.html @@ -0,0 +1,320 @@ + + + + + + + zpool-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-wait.8

+
+ + + + + +
ZPOOL-WAIT(8)System Manager's ManualZPOOL-WAIT(8)
+
+
+

+

zpool-waitwait + for activity to stop in a ZFS storage pool

+
+
+

+ + + + + +
zpoolwait [-Hp] + [-T u|d] + [-t + activity[,activity]…] + pool [interval]
+
+
+

+

Waits until all background activity of the given types has ceased + in the given pool. The activity could cease because it has completed, or + because it has been paused or canceled by a user, or because the pool has + been exported or destroyed. If no activities are specified, the command + waits until background activity of every type listed below has ceased. If + there is no activity of the given types in progress, the command returns + immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
Checkpoint to be discarded
+
+
+ property to become +
+
+
All initializations to cease
+
+
All device replacements to cease
+
+
Device removal to cease
+
+
Resilver to cease
+
+
Scrub to cease
+
+
Manual trim to cease
+
+
Attaching to a RAID-Z vdev to complete
+
+
+

If an interval is provided, the amount of + work remaining, in bytes, for each activity is printed every + interval seconds.

+
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display numbers in parsable (exact) values.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
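As a hedged sketch of the options above (the pool name tank is an assumption), waiting for a scrub to finish, or reporting resilver progress every 5 seconds in scripted, parsable form, might look like:
# zpool wait -t scrub tank
# zpool wait -Hp -t resilver tank 5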
+
+
+
+

+

zpool-checkpoint(8), + zpool-initialize(8), zpool-remove(8), + zpool-replace(8), zpool-resilver(8), + zpool-scrub(8), zpool-status(8), + zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool.8.html b/man/master/8/zpool.8.html new file mode 100644 index 000000000..f5f159be1 --- /dev/null +++ b/man/master/8/zpool.8.html @@ -0,0 +1,838 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)System Manager's ManualZPOOL(8)
+
+
+

+

zpoolconfigure + ZFS storage pools

+
+
+

+ + + + + +
zpool-?V
+
+ + + + + +
zpoolversion
+
+ + + + + +
zpoolsubcommand + [arguments]
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+

For an overview of creating and managing ZFS storage pools see the + zpoolconcepts(7) manual page.

+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool -V, + --version
+
 
+
zpool version
+
Displays the software version of the zpool + userland utility and the ZFS kernel module.
+
+
+

+
+
zpool-create(8)
+
Creates a new storage pool containing the virtual devices specified on the + command line.
+
zpool-initialize(8)
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified.
+
+
+
+

+
+
zpool-destroy(8)
+
Destroys the given pool, freeing up any devices for other use.
+
zpool-labelclear(8)
+
Removes ZFS label information from the specified + device.
+
+
+
+

+
+
zpool-attach(8)/zpool-detach(8)
+
Converts a non-redundant disk into a mirror, or increases the redundancy + level of an existing mirror (attach), or performs + the inverse operation (detach).
+
zpool-add(8)/zpool-remove(8)
+
Adds the specified virtual devices to the given pool, or removes the + specified device from the pool.
+
zpool-replace(8)
+
Replaces an existing device (which may be faulted) with a new one.
+
zpool-split(8)
+
Creates a new pool by splitting all mirrors in an existing pool (which + decreases its redundancy).
+
+
+
+

+

Available pool properties are listed in the + zpoolprops(7) manual page.

+
+
zpool-list(8)
+
Lists the given pools along with a health status and space usage.
+
zpool-get(8)/zpool-set(8)
+
Retrieves the given list of properties (or all properties if + is used) for + the specified storage pool(s).
+
+
+
+

+
+
zpool-status(8)
+
Displays the detailed health status for the given pools.
+
zpool-iostat(8)
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/O + operations may be observed via iostat(1).
+
zpool-events(8)
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + That manual page also describes the subclasses and event payloads that can + be generated.
+
zpool-history(8)
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified.
+
+
+
+

+
+
zpool-scrub(8)
+
Begins a scrub or resumes a paused scrub.
+
zpool-checkpoint(8)
+
Checkpoints the current state of pool, which can be + later restored by zpool + import + --rewind-to-checkpoint.
+
zpool-trim(8)
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.
+
zpool-sync(8)
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified + pool(s).
+
zpool-upgrade(8)
+
Manage the on-disk format version of storage pools.
+
zpool-wait(8)
+
Waits until all background activity of the given types has ceased in the + given pool.
+
+
+
+

+
+
zpool-offline(8)/zpool-online(8)
+
Takes the specified physical device offline or brings it online.
+
zpool-resilver(8)
+
Starts a resilver. If an existing resilver is already running it will be + restarted from the beginning.
+
zpool-reopen(8)
+
Reopen all the vdevs associated with the pool.
+
zpool-clear(8)
+
Clears device errors in a pool.
+
+
+
+

+
+
zpool-import(8)
+
Make disks containing ZFS storage pools available for use on the + system.
+
zpool-export(8)
+
Exports the given pools from the system.
+
zpool-reguid(8)
+
Generates a new unique identifier for the pool.
+
+
+
+
+

+

The following exit values are returned:

+
+
+
+
Successful completion.
+
+
An error occurred.
+
+
Invalid command line options were specified.
+
+
+
+
+

+
+

+

The following command creates a pool with a single raidz root vdev + that consists of six disks:

+
# zpool + create tank + + sda sdb sdc sdd sde sdf
+
+
+

+

The following command creates a pool with two mirrors, where each + mirror contains two disks:

+
# zpool + create tank + mirror sda sdb + mirror sdc sdd
+
+
+

+

The following command creates a non-redundant pool using two disk + partitions:

+
# zpool + create tank + sda1 sdb2
+
+
+

+

The following command creates a non-redundant pool using files. + While not recommended, a pool based on files can be useful for experimental + purposes.

+
# zpool + create tank + /path/to/file/a /path/to/file/b
+
+
+

+

The following command converts an existing single device + sda into a mirror by attaching a second device to it, + sdb.

+
# zpool + attach tank sda + sdb
+
+
+

+

The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool.

+
# zpool + add tank + mirror sda sdb
+
+
+

+

The following command lists all available pools on the system. In + this case, the pool zion is faulted due to a missing + device. The results from this command are similar to the following:

+
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
+

+

The following command destroys the pool tank + and any datasets contained within:

+
# zpool + destroy -f + tank
+
+
+

+

The following command exports the devices in pool + tank so that they can be relocated or later + imported:

+
# zpool + export tank
+
+
+

+

The following command displays available pools, and then imports + the pool tank for use on the system. The results from + this command are similar to the following:

+
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
+

+

The following command upgrades all ZFS Storage pools to the + current version of the software:

+
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
+

+

The following command creates a new pool with an available hot + spare:

+
# zpool + create tank + mirror sda sdb + + sdc
+

If one of the disks were to fail, the pool would be reduced to the + degraded state. The failed device can be replaced using the following + command:

+
# zpool + replace tank + sda sdd
+

Once the data has been resilvered, the spare is automatically + removed and is made available for use should another device fail. The hot + spare can be permanently removed from the pool using the following + command:

+
# zpool + remove tank + sdc
+
+
+

+

The following command creates a ZFS storage pool consisting of + two, two-way mirrors and mirrored log devices:

+
# zpool + create pool + mirror sda sdb + mirror sdc sdd + + sde sdf
+
+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool + add pool + + sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+

+

The following commands remove the mirrored log device + + and mirrored top-level data device + .

+

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
# zpool + remove tank + mirror-2
+

The command to remove the mirrored data + mirror-1 is:

+
# zpool + remove tank + mirror-1
+
+
+

+

The following command displays the detailed information for the + pool data. This pool consists of a single raidz + vdev where one of its devices increased its capacity by 10 GiB. In this + example, the pool will not be able to utilize this extra capacity until all + the devices under the raidz vdev have been expanded.

+
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
+

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes + of running + .
+
+
Use ANSI color in zpool + status and zpool + iostat output.
+
+
Automatically attempt to turn on a drive's enclosure slot power + when running the zpool + online or zpool + clear commands. This has the same effect as + passing the --power option to those commands.
+
+
The maximum time in milliseconds to wait for a slot power sysfs value to + return the correct value after writing it. For example, after writing + "on" to the sysfs enclosure slot power_control file, it can take + some time for the enclosure to power on the slot and return + "on" if you read back the 'power_control' value. Defaults to 30 + seconds (30000ms) if not set.
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
+
+
The maximum time in milliseconds that zpool import + will wait for an expected device to be available.
+
+
If set, suppress warning about non-native vdev ashift in + zpool status. The value is + not used, only the presence or absence of the variable matters.
+
+
Cause zpool subcommands to output vdev guids by + default. This behavior is identical to the zpool + status -g command line + option.
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the + zpool status + -L command line option.
+
+
Cause zpool subcommands to output full vdev path + names by default. This behavior is identical to the + zpool status + -P command line option (see the sketch at the end of this section).
+
+
Older OpenZFS implementations had issues when attempting to display pool + config vdev names if a devid NVP value is present in the + pool's config. +

For example, a pool that originated on illumos platform would + have a devid value in the config and + zpool status would fail + when listing the config. This would also be true for future Linux-based + pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool + add by setting + ZFS_VDEV_DEVID_OPT_OUT.

+

+
+
+
Allow a privileged user to run zpool + status/iostat + -c. Normally, only unprivileged users are allowed + to run -c.
+
+
The search path for scripts when running zpool + status/iostat + -c. This is a colon-separated list of directories + and overrides the default ~/.zpool.d and + /etc/zfs/zpool.d search paths.
+
+
Allow a user to run zpool + status/iostat + -c. If ZPOOL_SCRIPTS_ENABLED is + not set, it is assumed that the user is allowed to run + zpool + status/iostat + -c.
+
+
Time, in seconds, to wait for /dev/zfs to appear. + Defaults to + , max + (10 + minutes). If <0, wait forever; if + 0, don't wait.
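As a hedged sketch of the vdev-naming variables above (the variable name ZPOOL_VDEV_NAME_PATH and the pool name tank are assumptions, not shown verbatim on this page), the following two invocations should produce equivalent output:
# ZPOOL_VDEV_NAME_PATH=1 zpool status tank
# zpool status -P tank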
+
+
+
+

+

+
+
+

+

zfs(4), zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zed(8), zfs(8), + zpool-add(8), zpool-attach(8), + zpool-checkpoint(8), zpool-clear(8), + zpool-create(8), zpool-destroy(8), + zpool-detach(8), zpool-events(8), + zpool-export(8), zpool-get(8), + zpool-history(8), zpool-import(8), + zpool-initialize(8), zpool-iostat(8), + zpool-labelclear(8), zpool-list(8), + zpool-offline(8), zpool-online(8), + zpool-reguid(8), zpool-remove(8), + zpool-reopen(8), zpool-replace(8), + zpool-resilver(8), zpool-scrub(8), + zpool-set(8), zpool-split(8), + zpool-status(8), zpool-sync(8), + zpool-trim(8), zpool-upgrade(8), + zpool-wait(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool_influxdb.8.html b/man/master/8/zpool_influxdb.8.html new file mode 100644 index 000000000..7fdde5625 --- /dev/null +++ b/man/master/8/zpool_influxdb.8.html @@ -0,0 +1,319 @@ + + + + + + + zpool_influxdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool_influxdb.8

+
+ + + + + +
ZPOOL_INFLUXDB(8)System Manager's ManualZPOOL_INFLUXDB(8)
+
+
+

+

zpool_influxdb — + collect ZFS pool statistics in InfluxDB line protocol + format

+
+
+

+ + + + + +
zpool_influxdb[-e|--execd] + [-n|--no-histogram] + [-s|--sum-histogram-buckets] + [-t|--tags + key=value[,key=value]…] + [pool]
+
+
+

+

zpool_influxdb produces + InfluxDB-line-protocol-compatible metrics from zpools. Like the + zpool command, + zpool_influxdb reads the current pool status and + statistics. Unlike the zpool command which is + intended for humans, zpool_influxdb formats the + output in the InfluxDB line protocol. The expected use is as a plugin to a + metrics collector or aggregator, such as Telegraf.

+

By default, zpool_influxdb prints pool + metrics and status in the InfluxDB line protocol format. All pools are + printed, similar to the zpool + status command. Providing a pool name restricts the + output to the named pool.
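As a usage sketch (the pool name tank is assumed for illustration), metrics for a single pool can be printed once, or the collector can be run in daemon mode for Telegraf's execd plugin:
# zpool_influxdb tank
# zpool_influxdb --execd --sum-histogram-buckets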

+
+
+

+
+
, + --execd
+
Run in daemon mode compatible with Telegraf's + execd plugin. In this mode, the pools are sampled + every time a newline appears on the standard input.
+
, + --no-histogram
+
Do not print latency and I/O size histograms. This can reduce the total + amount of data, but one should consider the value brought by the insights + that latency and I/O size distributions provide. The resulting values are + suitable for graphing with Grafana's heatmap plugin.
+
, + --sum-histogram-buckets
+
Accumulates bucket values. By default, the values are not accumulated and + the raw data appears as shown by zpool + iostat. This works well for Grafana's heatmap + plugin. Summing the buckets produces output similar to Prometheus + histograms.
+
, + --tags + key=value[,key=value]…
+
Adds specified tags to the tag set. No sanity checking is performed. See + the InfluxDB Line Protocol format documentation for details on escaping + special characters used in tags.
+
, + --help
+
Print a usage summary.
+
+
+
+

+

zpool-iostat(8), + zpool-status(8), + InfluxDB, + Telegraf, + Grafana, + Prometheus

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zstream.8.html b/man/master/8/zstream.8.html new file mode 100644 index 000000000..751fcb604 --- /dev/null +++ b/man/master/8/zstream.8.html @@ -0,0 +1,406 @@ + + + + + + + zstream.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zstream.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstreamdump [-Cvd] + [file]
+
+ + + + + +
zstreamdecompress [-v] + [object,offset[,type...]]
+
+ + + + + +
zstreamredup [-v] + file
+
+ + + + + +
zstreamtoken resume_token
+
+ + + + + +
zstreamrecompress [-l + level] algorithm
+
+
+

+

The + + utility manipulates ZFS send streams output by the + + command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream + decompress [-v] + [object,offset[,type...]]
+
Decompress selected records in a ZFS send stream provided on standard + input, when the compression type recorded in ZFS metadata may be + incorrect. Specify the object number and byte offset of each record that + you wish to decompress. Optionally specify the compression type. Valid + compression types include off, + , + lz4, + , + , + and . + The default is lz4. Every record for that object + beginning at that offset will be decompressed, if possible. It may not be + possible, because the record may be corrupted in some but not all of the + stream's snapshots. Specifying a compression type of off + will change the stream's metadata accordingly, without attempting + decompression. This can be useful if the record is already uncompressed + but the metadata insists otherwise. The repaired stream will be written to + standard output. +
+
+
Verbose. Print summary of decompressed records.
+
+
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
zstream recompress + [-l level] + algorithm
+
Recompresses a send stream, provided on standard input, using the provided + algorithm and optional level, and writes the modified stream to standard + output. All WRITE records in the send stream will be recompressed, unless + they fail to result in size reduction compared to being left uncompressed. + The provided algorithm can be any valid value for the + compress property. Note that encrypted send + streams cannot be recompressed. +
+
+ level
+
Specifies compression level. Only needed for algorithms where the + level is not implied as part of the name of the algorithm (e.g. gzip-3 + does not require it, while zstd does, if a non-default level is + desired).
+
+
+
+
+
+

+

Heal a dataset that was corrupted due to OpenZFS bug #12762. + First, determine which records are corrupt. That cannot be done + automatically; it requires information beyond ZFS's metadata. If object + is + corrupted at offset + and is + compressed using lz4, then run this command:

+
+
# zfs send -c  | zstream decompress 128,0,lz4 | zfs recv 
+
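As another hedged illustration (the stream file names are assumptions), an lz4-compressed stream saved to a file could be recompressed with zstd at a non-default level:
# zstream recompress -l 19 zstd < stream.lz4 > stream.zstd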
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8), + https://github.com/openzfs/zfs/issues/12762

+
+
+ + + + + +
October 4, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zstreamdump.8.html b/man/master/8/zstreamdump.8.html new file mode 100644 index 000000000..2938cd443 --- /dev/null +++ b/man/master/8/zstreamdump.8.html @@ -0,0 +1,406 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstreamdump [-Cvd] + [file]
+
+ + + + + +
zstreamdecompress [-v] + [object,offset[,type...]]
+
+ + + + + +
zstreamredup [-v] + file
+
+ + + + + +
zstreamtoken resume_token
+
+ + + + + +
zstreamrecompress [-l + level] algorithm
+
+
+

+

The + + utility manipulates ZFS send streams output by the + + command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream + decompress [-v] + [object,offset[,type...]]
+
Decompress selected records in a ZFS send stream provided on standard + input, when the compression type recorded in ZFS metadata may be + incorrect. Specify the object number and byte offset of each record that + you wish to decompress. Optionally specify the compression type. Valid + compression types include off, + , + lz4, + , + , + and . + The default is lz4. Every record for that object + beginning at that offset will be decompressed, if possible. It may not be + possible, because the record may be corrupted in some but not all of the + stream's snapshots. Specifying a compression type of off + will change the stream's metadata accordingly, without attempting + decompression. This can be useful if the record is already uncompressed + but the metadata insists otherwise. The repaired stream will be written to + standard output. +
+
+
Verbose. Print summary of decompressed records.
+
+
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
zstream recompress + [-l level] + algorithm
+
Recompresses a send stream, provided on standard input, using the provided + algorithm and optional level, and writes the modified stream to standard + output. All WRITE records in the send stream will be recompressed, unless + they fail to result in size reduction compared to being left uncompressed. + The provided algorithm can be any valid value for the + compress property. Note that encrypted send + streams cannot be recompressed. +
+
+ level
+
Specifies compression level. Only needed for algorithms where the + level is not implied as part of the name of the algorithm (e.g. gzip-3 + does not require it, while zstd does, if a non-default level is + desired).
+
+
+
+
+
+

+

Heal a dataset that was corrupted due to OpenZFS bug #12762. + First, determine which records are corrupt. That cannot be done + automatically; it requires information beyond ZFS's metadata. If object + is + corrupted at offset + and is + compressed using lz4, then run this command:

+
+
# zfs send -c  | zstream decompress 128,0,lz4 | zfs recv 
+
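As another hedged illustration (the stream file names are assumptions), an lz4-compressed stream saved to a file could be recompressed with zstd at a non-default level:
# zstream recompress -l 19 zstd < stream.lz4 > stream.zstd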
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8), + https://github.com/openzfs/zfs/issues/12762

+
+
+ + + + + +
October 4, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/index.html b/man/master/index.html new file mode 100644 index 000000000..d037e627c --- /dev/null +++ b/man/master/index.html @@ -0,0 +1,147 @@ + + + + + + + master — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/man/v0.6/1/cstyle.1.html b/man/v0.6/1/cstyle.1.html new file mode 100644 index 000000000..e0b445681 --- /dev/null +++ b/man/v0.6/1/cstyle.1.html @@ -0,0 +1,284 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
cstyle(1)General Commands Manualcstyle(1)
+
+
+

+

cstyle - check for some common stylistic errors in C source + files

+
+
+

+

cstyle [-chpvCP] [-o constructs] [file...]

+
+
+

+

cstyle inspects C source files (*.c and *.h) for common + stylistic errors. It attempts to check for the cstyle documented in + http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. Note that + there is much in that document that cannot be checked for; just + because your code is cstyle(1) clean does not mean that you've + followed Sun's C style. Caveat emptor.
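A hedged usage sketch (the file path is an assumption, not part of this page): checking a source file with the stricter putback-style flags might look like:
cstyle -pP module/zfs/arc.c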

+
+
+

+

The following options are supported:

+
+
+
Check continuation line indentation inside of functions. Sun's C style + states that all statements must be indented to an appropriate tab stop, + and any continuation lines after them must be indented exactly four + spaces from the start line. This option enables a series of checks + designed to find continuation line problems within functions only. The + checks have some limitations; see CONTINUATION CHECKING, below.
+
+
Performs heuristic checks that are sometimes wrong. Not generally + used.
+
+
Performs some of the more picky checks. Includes ANSI #else and #endif + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current continuation block.
+
+
Ignore errors in header comments (i.e. block comments starting in the + first column). Not generally used.
+
+
Check for use of non-POSIX types. Historically, types like + "u_int" and "u_long" were used, but they are now + deprecated in favor of the POSIX types uint_t, ulong_t, etc. This detects + any use of the deprecated types. Used as part of the putback checks.
+
+
Allow a comma-separated list of additional constructs. Available + constructs include:
+
+
Allow doxygen-style block comments (/** and /*!)
+
+
Allow splint-style lint comments (/*@...@*/)
+
+
+
+

+

The cstyle rule for the OS/Net consolidation is that all new files + must be -pP clean. For existing files, the following invocations are + run against both the old and new files:

+
+
+
+
+
+
+
+
+

If the old file gave no errors for one of the invocations, the new + file must also give no errors. This way, files can only become more + clean.

+
+
+

+

The continuation checker is a reasonably simple state machine that + knows something about how C is laid out, and can match parentheses, etc. + over multiple lines. It does have some limitations:

+
+
1.
+
Preprocessor macros which cause unmatched parentheses will confuse the + checker for that line. To fix this, you'll need to make sure that each + branch of the #if statement has balanced parentheses.
+
2.
+
Some cpp macros do not require ;s after them. Any such macros + *must* be ALL_CAPS; any lower case letters will cause bad output.
+
+

The bad output will generally be corrected after the next + ;, {, or }.

+

Some continuation error messages deserve some additional + explanation

+
+
+
A multi-line statement which is not broken at statement boundaries. For + example:
+
+
+

if (this_is_a_long_variable == another_variable) a = +
+ b + c;

+

Will trigger this error. Instead, do:

+

if (this_is_a_long_variable == another_variable) +
+ a = b + c;

+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example:
+
+
+

while (do_something(&x) == 0);

+

Will trigger this error. Instead, do:

+

while (do_something(&x) == 0) +
+ ;

+
+

+
+
+ + + + + +
28 March 2005
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/1/index.html b/man/v0.6/1/index.html new file mode 100644 index 000000000..f630dd9b0 --- /dev/null +++ b/man/v0.6/1/index.html @@ -0,0 +1,151 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/1/zhack.1.html b/man/v0.6/1/zhack.1.html new file mode 100644 index 000000000..58892adfb --- /dev/null +++ b/man/v0.6/1/zhack.1.html @@ -0,0 +1,252 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
zhack(1)User Commandszhack(1)
+
+

+
+

+

zhack - libzpool debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+

zhack [-c cachefile] [-d dir] + <subcommand> [arguments]

+
+
+

+

-c cachefile

+
+
+
Read the pool configuration from the cachefile, which is + /etc/zfs/zpool.cache by default.
+
+

-d dir

+
+
+
Search for pool members in the dir path. Can be specified + more than once.
+
+
+
+

+

feature stat pool

+
+
+
List feature flags.
+
+

feature enable [-d description] [-r] pool + guid

+
+
+
Add a new feature to pool that is uniquely identified by + guid, which is specified in the same form as a zfs(8) user + property.
+
+
The description is a short human readable explanation of the new + feature.
+
+
The -r switch indicates that pool can be safely opened in + read-only mode by a system that does not have the guid + feature.
+
+

feature ref [-d|-m] pool guid

+
+
+
Increment the reference count of the guid feature in + pool.
+
+
The -d switch decrements the reference count of the guid + feature in pool.
+
+
The -m switch indicates that the guid feature is now + required to read the pool MOS.
+
+
+
+

+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
# zhack feature enable -d 'Predict future disk failures.' \
+
+ tank com.example:clairvoyance
+
# zhack feature ref tank com.example:clairvoyance
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

splat(1), zfs(8), zpios(1), + zpool-features(5), ztest(1)

+
+
+ + + + + +
2013 MAR 16ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/1/zpios.1.html b/man/v0.6/1/zpios.1.html new file mode 100644 index 000000000..8d88b4c12 --- /dev/null +++ b/man/v0.6/1/zpios.1.html @@ -0,0 +1,384 @@ + + + + + + + zpios.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpios.1

+
+ + + + + +
zpios(1)User Commandszpios(1)
+
+

+
+

+

zpios - Directly test the DMU.

+
+
+

+

zpios [options] <-p pool>

+

+
+
+

+

This utility runs in-kernel DMU performance and stress tests that + do not depend on the ZFS Posix Layer ("ZPL").
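A minimal invocation sketch (the pool name tank is an assumption; all other parameters are left at their defaults):
# zpios -p tank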

+

+
+
+

+

-s regex, --threadcount regex

+
+
+
Start this many threads for each test series, specified as a comma + delimited regular expression. (eg: "-s 1,2,3")
+
+
This option is mutually exclusive with the threadcount_* + options.
+
+

-l regex_low, --threadcount_low + regex_low

+

-h regex_high, --threadcount_high + regex_high

+

-e regex_incr, --threadcount_incr + regex_incr

+
+
+
Start regex_low threads for the first test, add regex_incr + threads for each subsequent test, and start regex_high threads for + the last test.
+
+
These three options must be specified together and are mutually exclusive + with the threadcount option.
+
+

-n regex, --regioncount regex

+
+
+
Create this many regions for each test series, specified as a comma + delimited regular expression. (eg: "-n 512,4096,65536")
+
+
This option is mutually exclusive with the regioncount_* + options.
+
+

-i regex_low, --regioncount_low + regex_low

+

-j regex_high, --regioncount_high + regex_high

+

-k regex_incr, --regioncount_incr + regex_incr

+
+
+
Create regex_low regions for the first test, add regex_incr + regions for each subsequent test, and create regex_high regions for + the last test.
+
+
These three options must be specified together and are mutually exclusive + with the regioncount option.
+
+

-o size, --offset size

+
+
+
Create regions at size offset for each test series, specified as a + comma delimited regular expression with an optional unit suffix. (eg: + "-o 4M" means four megabytes.)
+
+
This option is mutually exclusive with the offset_* options.
+
+

-m size_low, --offset_low + size_low

+

-q size_high, --offset_high + size_high

+

-r size_incr, --offset_incr + size_incr

+
+
+
Create a region at size_low offset for the first test, add + size_incr to the offset for each subsequent test, and create a + region at size_high offset for the last test.
+
+
These three options must be specified together and are mutually exclusive + with the offset option.
+
+

-c size, --chunksize size

+
+
+
Use size chunks for each test, specified as a comma delimited + regular expression with an optional unit suffix. (eg: "-c 1M" + means one megabyte.) The chunk size must be at least the region size.
+
+
This option is mutually exclusive with the chunksize_* + options.
+
+

-a size_low, --chunksize_low + size_low

+

-b size_high, --chunksize_high + size_high

+

-g size_incr, --chunksize_incr + size_incr

+
+
+
Use a size_low chunk size for the first test, add size_incr + to the chunk size for each subsequent test, and use a size_high + chunk size for the last test.
+
+
These three options must be specified together and are mutually exclusive + with the chunksize option.
+
+

-L dmu_flags, --load dmu_flags

+
+
+
Specify dmuio for regular DMU_IO, ssf for single shared file + access, or fpp for per thread access. Use commas to delimit + multiple flags. (eg: "-L dmuio,ssf")
+
+

-p name, --pool name

+
+
+
The pool name, which is mandatory.
+
+

-M test, --name test

+
+
+
An arbitrary string that appears in the program output.
+
+

-x, --cleanup

+
+
+
Enable the DMU_REMOVE flag.
+
+

-P command, --prerun command

+
+
+
Invoke command from the kernel before running the test. Shell + expansion is not performed and the environment is set to HOME=/; + TERM=linux; PATH=/sbin:/usr/sbin:/bin:/usr/bin.
+
+

-R command, --postrun command

+
+
+
Invoke command from the kernel after running the test. Shell + expansion is not performed and the environment is set to HOME=/; + TERM=linux; PATH=/sbin:/usr/sbin:/bin:/usr/bin.
+
+

-G directory, --log directory

+
+
+
Put logging output in this directory.
+
+

-I size, --regionnoise size

+
+
+
Randomly vary the regionsize parameter for each test modulo + size bytes.
+
+

-N size, --chunknoise size

+
+
+
Randomly vary the chunksize parameter for each test modulo + size bytes.
+
+

-T time, --threaddelay time

+
+
+
Randomly vary the execution time for each test modulo time kernel + jiffies.
+
+

-V, --verify

+
+
+
Enable the DMU_VERIFY flag for trivial data verification.
+
+

-z, --zerocopy

+
+
+
Enable the DMU_READ_ZC and DMU_WRITE_ZC flags, which are currently + unimplemented for Linux.
+
+

-O, --nowait

+
+
+
Enable the DMU_WRITE_NOWAIT flag.
+
+

-f, --noprefetch

+
+
+
Enable the DMU_READ_NOPF flag.
+
+

-H, --human-readable

+
+
+
Print PASS and FAIL results explicitly and put unit suffixes on large + numbers.
+
+

-v, --verbose

+
+
+
Increase output verbosity.
+
+

-? , --help

+
+
+
Print the usage message.
+
+
+
+

+

The original zpios implementation was created by Cluster File + Systems Inc and adapted to ZFS on Linux by Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/1/ztest.1.html b/man/v0.6/1/ztest.1.html new file mode 100644 index 000000000..d0494b9c7 --- /dev/null +++ b/man/v0.6/1/ztest.1.html @@ -0,0 +1,337 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ztest(1)User Commandsztest(1)
+
+

+
+

+

ztest - was written by the ZFS Developers as a ZFS unit + test.

+
+
+

+

ztest <options>

+
+
+

+

This manual page documents briefly the ztest command.

+

ztest was written by the ZFS Developers as a ZFS unit test. + The tool was developed in tandem with the ZFS functionality and was executed + nightly as one of the many regression tests against the daily build. As + features were added to ZFS, unit tests were also added to ztest. In + addition, a separate test development team wrote and executed more + functional and stress tests.

+

By default ztest runs for five minutes and uses block files + (stored in /tmp) to create pools rather than using physical disks. Block + files afford ztest its flexibility to play around with zpool + components without requiring large hardware configurations. However, storing + the block files in /tmp may not work for you if you have a small tmp + directory.

+

By default ztest is non-verbose. This is why entering the command above + will result in ztest quietly executing for 5 minutes. The -V option + can be used to increase the verbosity of the tool. Adding multiple -V options + is allowed, and the more you add the more chatty ztest becomes.

+

After the ztest run completes, you should notice many + ztest.* files lying around. Once the run completes you can safely remove + these files. Note that you shouldn't remove these files during a run. You + can re-use these files in your next ztest run by using the -E + option.

+
+
+

+

-?

+
+
+
Print a help summary.
+
+

-v vdevs (default: 5)

+
+
+
Number of vdevs.
+
+

-s size_of_each_vdev (default: 64M)

+
+
+
Size of each vdev.
+
+

-a alignment_shift (default: 9) (use 0 for + random)

+
+
+
Used alignment in test.
+
+

-m mirror_copies (default: 2)

+
+
+
Number of mirror copies.
+
+

-r raidz_disks (default: 4)

+
+
+
Number of raidz disks.
+
+

-R raidz_parity (default: 1)

+
+
+
Raidz parity.
+
+

-d datasets (default: 7)

+
+
+
Number of datasets.
+
+

-t threads (default: 23)

+
+
+
Number of threads.
+
+

-g gang_block_threshold (default: 32K)

+
+
+
Gang block threshold.
+
+

-i initialize_pool_i_times (default: + 1)

+
+
+
Number of pool initialisations.
+
+

-k kill_percentage (default: 70%)

+
+
+
Kill percentage.
+
+

-p pool_name (default: ztest)

+
+
+
Pool name.
+
+

-V(erbose)

+
+
+
Verbose (use multiple times for ever more blather).
+
+

-E(xisting)

+
+
+
Use existing pool (use existing pool instead of creating new one).
+
+

-T time (default: 300 sec)

+
+
+
Total test run time.
+
+

-z zil_failure_rate (default: fail every 2^5 + allocs)

+
+
+
Injected failure rate.
+
+
+
+

+

To override /tmp as your location for block files, you can use the + -f option:

+
+
+
ztest -f /
+
+

To get an idea of what ztest is actually testing try this:

+
+
+
ztest -f / -VVV
+
+

Maybe you'd like to run ztest for longer? To do so, simply use the + -T option and specify the run length in seconds like so:

+
+
+
ztest -f / -V -T 120 +

+
+
+
+
+

+
+
+
Limit the default stack size to stacksize bytes for the purpose of + detecting and debugging kernel stack overflows. For x86_64 platforms this + value should be set as follows to simulate these platforms: 8192 + (Linux), 20480 (Illumos), 16384 (FreeBSD). +

In practice you may need to set these values slightly higher + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN, + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to 256K.
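For example, to simulate the Linux kernel stack limit listed above during a two-minute run (a sketch only; the variable name ZFS_STACK_SIZE is an assumption, as it is not shown verbatim on this page):
ZFS_STACK_SIZE=8192 ztest -V -T 120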

+
+
+
+
+

+

zpool (1), zfs (1), zdb (1),

+
+
+

+

This manual page was transferred to asciidoc by Michael + Gebetsroither <gebi@grml.org> from + http://opensolaris.org/os/community/zfs/ztest/

+
+
+ + + + + +
2009 NOV 01ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/5/index.html b/man/v0.6/5/index.html new file mode 100644 index 000000000..ba2f8e46f --- /dev/null +++ b/man/v0.6/5/index.html @@ -0,0 +1,151 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/5/vdev_id.conf.5.html b/man/v0.6/5/vdev_id.conf.5.html new file mode 100644 index 000000000..09da91feb --- /dev/null +++ b/man/v0.6/5/vdev_id.conf.5.html @@ -0,0 +1,310 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
vdev_id.conf(5)File Formats Manualvdev_id.conf(5)
+
+
+

+

vdev_id.conf - Configuration file for vdev_id

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of vdev_id(8) + while it is mapping a disk device name to an alias.

+

The vdev_id.conf file uses a simple format consisting of a + keyword followed by one or more values on a single line. Any line not + beginning with a recognized keyword is ignored. Comments may optionally + begin with a hash character.

+

The following keywords and values are used.

+
+
+
Maps a device link in the /dev directory hierarchy to a new device name. + The udev rule defining the device link must have run prior to + vdev_id(8). A defined alias takes precedence over a + topology-derived name, but the two naming methods can otherwise coexist. + For example, one might name drives in a JBOD with the sas_direct topology + while naming an internal L2ARC device with an alias. +

name - the name of the link to the device that will be + created in /dev/disk/by-vdev.

+

devlink - the name of the device link that has already + been defined by udev. This may be an absolute path or the base + filename.

+

+
+
+
Maps a physical path to a channel name (typically representing a single + disk enclosure). +

pci_slot - specifies the PCI SLOT of the HBA hosting + the disk enclosure being mapped, as found in the output of + lspci(8). This argument is not used in sas_switch mode.

+

port - specifies the numeric identifier of the HBA or + SAS switch port connected to the disk enclosure being mapped.

+

name - specifies the name of the channel.

+

+
+
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is specified then + the mapping is only applied to slots in the named channel, otherwise the + mapping is applied to all channels. The first-specified slot rule + that can match a slot takes precedence. Therefore a channel-specific + mapping for a given slot should generally appear before a generic mapping + for the same slot. In this way a custom mapping may be applied to a + particular channel and a default mapping applied to the others. +

+
+
+
Specifies whether vdev_id(8) will handle only dm-multipath devices. + If set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely + identified by a PCI slot and a HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+

+
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4. +

+
+
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay. +

bay - read the slot number from the bay identifier.

+

phy - read the slot number from the phy identifier.

+

id - use the scsi id as the slot number.

+

lun - use the scsi lun as the slot number.

+
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping.

+

+
	multipath     no
+	topology      sas_direct
+	phys_per_port 4
+	slot          bay
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         C
+	channel 86:00.0  0         D
+	# Custom mapping for Channel A
+	#    Linux      Mapped
+	#    Slot       Slot      Channel
+	slot 1          7         A
+	slot 2          10        A
+	slot 3          3         A
+	slot 4          6         A
+	# Default mapping for B, C, and D
+	slot 1          4
+	slot 2          2
+	slot 3          1
+	slot 4          3
+

A SAS-switch topology. Note that the channel keyword takes + only two arguments in this example.

+

+
	topology      sas_switch
+	#       SWITCH PORT  CHANNEL NAME
+	channel 1            A
+	channel 2            B
+	channel 3            C
+	channel 4            D
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path.

+

+
	multipath yes
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         A
+	channel 86:00.0  0         B
+

A configuration using device link aliases.

+

+
	#     by-vdev
+	#     name     fully qualified or base name of device link
+	alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+	alias d2       wwn-0x5000c5002def789e
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/5/zfs-events.5.html b/man/v0.6/5/zfs-events.5.html new file mode 100644 index 000000000..34e6f3f3d --- /dev/null +++ b/man/v0.6/5/zfs-events.5.html @@ -0,0 +1,777 @@ + + + + + + + zfs-events.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-events.5

+
+ + + + + +
ZFS-EVENTS(5)File Formats ManualZFS-EVENTS(5)
+
+
+

+

zfs-events - Events created by the ZFS filesystem.

+
+
+

+

Description of the different events generated by the ZFS + stack.

+

Most of these don't have any description. The events generated by + ZFS have never been publicly documented. What is here is intended as a + starting point to provide documentation for all possible events.

+

To view all events created since the loading of the ZFS + infrastructure (i.e., "the module"), run

+

+
zpool events
+

to get a short list, and

+

+
zpool events -v
+

to get a full detail of the events and what information is + available about it.

+

This man page lists the different subclasses that are issued in + the case of an event. The full event name would be + ereport.fs.zfs.SUBCLASS, but we only list the last part here.

+

+
+

+

+

checksum

+
Issued when a checksum error has been detected.
+

+

io

+
Issued when there is an I/O error in a vdev in the + pool.
+

+

data

+
Issued when there have been data errors in the + pool.
+

+

delay

+
Issued when an I/O was slow to complete as defined by the + zio_delay_max module option.
+

+

config.sync

+
Issued every time a vdev change has been made to the + pool.
+

+

zpool

+
Issued when a pool cannot be imported.
+

+

zpool.destroy

+
Issued when a pool is destroyed.
+

+

zpool.export

+
Issued when a pool is exported.
+

+

zpool.import

+
Issued when a pool is imported.
+

+

zpool.reguid

+
Issued when a REGUID (a new unique identifier for the pool + has been regenerated) has been detected.
+

+

vdev.unknown

+
Issued when the vdev is unknown, such as when trying to clear + device errors on a vdev that has failed or been kicked from the system/pool and + is no longer available.
+

+

vdev.open_failed

+
Issued when a vdev could not be opened (because it didn't + exist for example).
+

+

vdev.corrupt_data

+
Issued when corrupt data has been detected on a + vdev.
+

+

vdev.no_replicas

+
Issued when there are no more replicas to sustain the + pool. This would lead to the pool being DEGRADED.
+

+

vdev.bad_guid_sum

+
Issued when a missing device in the pool has been + detected.
+

+

vdev.too_small

+
Issued when the system (kernel) has removed a device, + and ZFS notices that the device isn't there anymore. This is usually followed + by a probe_failure event.
+

+

vdev.bad_label

+
Issued when the label is OK but invalid.
+

+

vdev.bad_ashift

+
Issued when the ashift alignment requirement has + increased.
+

+

vdev.remove

+
Issued when a vdev is detached from a mirror (or a spare is + detached from a vdev where it has been used to replace a failed drive - this only + works if the original drive has been re-added).
+

+

vdev.clear

+
Issued when clearing device errors in a pool, such as by + running zpool clear on a device in the pool.
+

+

vdev.check

+
Issued when a check to see if a given vdev could be + opened is started.
+

+

vdev.spare

+
Issued when a spare has kicked in to replace a failed + device.
+

+

vdev.autoexpand

+
Issued when a vdev can be automatically expanded.
+

+

io_failure

+
Issued when there is an I/O failure in a vdev in the + pool.
+

+

probe_failure

+
Issued when a probe fails on a vdev. This would occur if + a vdev has been kicked from the system outside of ZFS (such as when the kernel + has removed the device).
+

+

log_replay

+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+

+

resilver.start

+
Issued when a resilver is started.
+

+

resilver.finish

+
Issued when the running resilver has finished.
+

+

scrub.start

+
Issued when a scrub is started on a pool.
+

+

scrub.finish

+
Issued when a pool has finished scrubbing.
+

+

bootfs.vdev.attach

+
+

+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with + ZEVENT_.
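As a rough sketch (not an official ZEDLET), a shell script placed in the zed script directory (commonly /etc/zfs/zed.d on ZFS on Linux) could read these payload fields from its environment; the variable names below assume the ZEVENT_ prefixing described above:

  #!/bin/sh
  # Log the subclass and pool of every event zed hands to this script.
  echo "got event ${ZEVENT_SUBCLASS:-unknown} on pool ${ZEVENT_POOL:-?}" \
      | logger -t zed-example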

+

+

pool

+
Pool name.
+

+

pool_failmode

+
Failmode - wait, continue or panic. See zpool(8) (failmode property) for more information.
+

+

pool_guid

+
The GUID of the pool.
+

+

pool_context

+
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
+

+

vdev_guid

+
The GUID of the vdev in question (the vdev failing or + operated upon with zpool clear etc).
+

+

vdev_type

+
Type of vdev - disk, file, mirror + etc. See zpool(8) under Virtual Devices for more information on + possible values.
+

+

vdev_path

+
Full path of the vdev, including any -partX.
+

+

vdev_devid

+
ID of vdev (if any).
+

+

vdev_fru

+
Physical FRU location.
+

+

vdev_state

+
State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healthy).
+

+

vdev_ashift

+
The ashift value of the vdev.
+

+

vdev_complete_ts

+
The time the last I/O completed for the specified + vdev.
+

+

vdev_delta_ts

+
The time since the last I/O completed for the specified + vdev.
+

+

vdev_spare_paths

+
List of spares, including full path and any + -partX.
+

+

vdev_spare_guids

+
GUID(s) of spares.
+

+

vdev_read_errors

+
The number of read errors that have been detected on the vdev.
+

+

vdev_write_errors

+
The number of write errors that have been detected on the vdev.
+

+

vdev_cksum_errors

+
The number of checksum errors that have been detected on the vdev.
+

+

parent_guid

+
GUID of the vdev parent.
+

+

parent_type

+
Type of parent. See vdev_type.
+

+

parent_path

+
Path of the vdev parent (if any).
+

+

parent_devid

+
ID of the vdev parent (if any).
+

+

zio_objset

+
The object set number for a given I/O.
+

+

zio_object

+
The object number for a given I/O.
+

+

zio_level

+
The block level for a given I/O.
+

+

zio_blkid

+
The block ID for a given I/O.
+

+

zio_err

+
The errno for a failure when handling a given I/O.
+

+

zio_offset

+
The offset in bytes of where to write the I/O for the + specified vdev.
+

+

zio_size

+
The size in bytes of the I/O.
+

+

zio_flags

+
The current flags describing how the I/O should be + handled. See the I/O FLAGS section for the full list of I/O + flags.
+

+

zio_stage

+
The current stage of the I/O in the pipeline. See the + I/O STAGES section for a full list of all the I/O stages.
+

+

zio_pipeline

+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+

+

zio_delay

+
The time in ticks (HZ) required for the block layer to + service the I/O. Unlike zio_delta this does not include any vdev + queuing time and is therefore solely a measure of the block layer performance. + On most modern Linux systems HZ is defined as 1000 making a tick equivalent to + 1 millisecond.
+

+

zio_timestamp

+
The time when a given I/O was submitted.
+

+

zio_delta

+
The time required to service a given I/O.
+

+

prev_state

+
The previous state of the vdev.
+

+

cksum_expected

+
The expected checksum value.
+

+

cksum_actual

+
The actual/current checksum value.
+

+

cksum_algorithm

+
Checksum algorithm used. See zfs(8) for more + information on checksum algorithms available.
+

+

cksum_byteswap

+
Checksum value is byte swapped.
+

+

bad_ranges

+
Checksum bad offset ranges.
+

+

bad_ranges_min_gap

+
Checksum allowed minimum gap.
+

+

bad_range_sets

+
For each checksum bad range, the number of bits set.
+

+

bad_range_clears

+
For each checksum bad range, the number of bits cleared.
+

+

bad_set_bits

+
Checksum array of bits set.
+

+

bad_cleared_bits

+
Checksum array of bits cleared.
+

+

bad_set_histogram

+
Checksum histogram of set bits by bit number in a 64-bit + word.
+

+

bad_cleared_histogram

+
Checksum histogram of cleared bits by bit number in a + 64-bit word.
+

+
+
+

+

The ZFS I/O pipeline is composed of various stages, which are defined below. The individual stages are used to construct these basic I/O operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on an event to describe the life cycle of a given I/O.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Stage                          Bit Mask      Operations



ZIO_STAGE_OPEN                 0x00000001    RWFCI
ZIO_STAGE_READ_BP_INIT         0x00000002    R----
ZIO_STAGE_FREE_BP_INIT         0x00000004    --F--
ZIO_STAGE_ISSUE_ASYNC          0x00000008    RWF--
ZIO_STAGE_WRITE_BP_INIT        0x00000010    -W---
ZIO_STAGE_CHECKSUM_GENERATE    0x00000020    -W---
ZIO_STAGE_NOP_WRITE            0x00000040    -W---
ZIO_STAGE_DDT_READ_START       0x00000080    R----
ZIO_STAGE_DDT_READ_DONE        0x00000100    R----
ZIO_STAGE_DDT_WRITE            0x00000200    -W---
ZIO_STAGE_DDT_FREE             0x00000400    --F--
ZIO_STAGE_GANG_ASSEMBLE        0x00000800    RWFC-
ZIO_STAGE_GANG_ISSUE           0x00001000    RWFC-
ZIO_STAGE_DVA_ALLOCATE         0x00002000    -W---
ZIO_STAGE_DVA_FREE             0x00004000    --F--
ZIO_STAGE_DVA_CLAIM            0x00008000    ---C-
ZIO_STAGE_READY                0x00010000    RWFCI
ZIO_STAGE_VDEV_IO_START        0x00020000    RW--I
ZIO_STAGE_VDEV_IO_DONE         0x00040000    RW--I
ZIO_STAGE_VDEV_IO_ASSESS       0x00080000    RW--I
ZIO_STAGE_CHECKSUM_VERIFY      0x00100000    R----
ZIO_STAGE_DONE                 0x00200000    RWFCI
+

+
+
+

+

Every I/O in the pipeline contains a set of flags which describe + its function and are used to govern its behavior. These flags will be set in + an event as an zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Flag                           Bit Mask


ZIO_FLAG_DONT_AGGREGATE        0x00000001
ZIO_FLAG_IO_REPAIR             0x00000002
ZIO_FLAG_SELF_HEAL             0x00000004
ZIO_FLAG_RESILVER              0x00000008
ZIO_FLAG_SCRUB                 0x00000010
ZIO_FLAG_SCAN_THREAD           0x00000020
ZIO_FLAG_PHYSICAL              0x00000040
ZIO_FLAG_CANFAIL               0x00000080
ZIO_FLAG_SPECULATIVE           0x00000100
ZIO_FLAG_CONFIG_WRITER         0x00000200
ZIO_FLAG_DONT_RETRY            0x00000400
ZIO_FLAG_DONT_CACHE            0x00000800
ZIO_FLAG_NODATA                0x00001000
ZIO_FLAG_INDUCE_DAMAGE         0x00002000
ZIO_FLAG_IO_RETRY              0x00004000
ZIO_FLAG_PROBE                 0x00008000
ZIO_FLAG_TRYHARD               0x00010000
ZIO_FLAG_OPTIONAL              0x00020000
ZIO_FLAG_DONT_QUEUE            0x00040000
ZIO_FLAG_DONT_PROPAGATE        0x00080000
ZIO_FLAG_IO_BYPASS             0x00100000
ZIO_FLAG_IO_REWRITE            0x00200000
ZIO_FLAG_RAW                   0x00400000
ZIO_FLAG_GANG_CHILD            0x00800000
ZIO_FLAG_DDT_CHILD             0x01000000
ZIO_FLAG_GODFATHER             0x02000000
ZIO_FLAG_NOPWRITE              0x04000000
ZIO_FLAG_REEXECUTED            0x08000000
ZIO_FLAG_DELEGATED             0x10000000
ZIO_FLAG_FASTWRITE             0x20000000
+
+
+
+ + + + + +
June 6, 2015
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/5/zfs-module-parameters.5.html b/man/v0.6/5/zfs-module-parameters.5.html new file mode 100644 index 000000000..c1684b3e0 --- /dev/null +++ b/man/v0.6/5/zfs-module-parameters.5.html @@ -0,0 +1,1329 @@ + + + + + + + zfs-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-module-parameters.5

+
+ + + + + +
ZFS-MODULE-PARAMETERS(5)        File Formats Manual        ZFS-MODULE-PARAMETERS(5)
+
+
+

+

zfs-module-parameters - ZFS module parameters

+
+
+

+

Description of the different parameters to the ZFS module.
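Most of these parameters can be inspected and, where writable, changed at runtime through sysfs, or set persistently through a modprobe configuration file. A minimal sketch (the parameter name here is just an example taken from this page; paths assume a standard ZFS on Linux installation):

  # read the current value of a parameter
  cat /sys/module/zfs/parameters/zfs_txg_timeout

  # change a writable parameter at runtime
  echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout

  # make a setting persistent across module loads (file name is a common convention)
  echo "options zfs zfs_txg_timeout=10" >> /etc/modprobe.d/zfs.conf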

+

+
+

+

+

ignore_hole_birth (int)

+
When set, the hole_birth optimization will not be used, + and all holes will always be sent on zfs send. Useful if you suspect your + datasets are affected by a bug in hole_birth. +

Use 1 (default) for on and 0 for off.

+
+

+

l2arc_feed_again (int)

+
Turbo L2ARC warmup +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_feed_min_ms (ulong)

+
Min feed interval in milliseconds +

Default value: 200.

+
+

+

l2arc_feed_secs (ulong)

+
Seconds between L2ARC writing +

Default value: 1.

+
+

+

l2arc_headroom (ulong)

+
Number of max device writes to precache +

Default value: 2.

+
+

+

l2arc_headroom_boost (ulong)

+
Compressed l2arc_headroom multiplier +

Default value: 200.

+
+

+

l2arc_nocompress (int)

+
Skip compressing L2ARC buffers +

Use 1 for yes and 0 for no (default).

+
+

+

l2arc_noprefetch (int)

+
Skip caching prefetched buffers +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_norw (int)

+
No reads during writes +

Use 1 for yes and 0 for no (default).

+
+

+

l2arc_write_boost (ulong)

+
Extra write bytes during device warmup +

Default value: 8,388,608.

+
+

+

l2arc_write_max (ulong)

+
Max write bytes per interval +

Default value: 8,388,608.

+
+

+

metaslab_aliquot (ulong)

+
Metaslab granularity, in bytes. This is roughly similar + to what would be referred to as the "stripe size" in traditional + RAID arrays. In normal operation, ZFS will try to write this amount of data to + a top-level vdev before moving on to the next one. +

Default value: 524,288.

+
+

+

metaslab_bias_enabled (int)

+
Enable metaslab group biasing based on its vdev's over- + or under-utilization relative to the pool. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_debug_load (int)

+
Load all metaslabs during pool import. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_debug_unload (int)

+
Prevent metaslabs from being unloaded. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_fragmentation_factor_enabled (int)

+
Enable use of the fragmentation metric in computing + metaslab weights. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslabs_per_vdev (int)

+
When a vdev is added, it will be divided into + approximately (but no more than) this number of metaslabs. +

Default value: 200.

+
+

+

metaslab_preload_enabled (int)

+
Enable metaslab group preloading. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_lba_weighting_enabled (int)

+
Give more weight to metaslabs with lower LBAs, assuming + they have greater bandwidth as is typically the case on a modern constant + angular velocity disk drive. +

Use 1 for yes (default) and 0 for no.

+
+

+

spa_config_path (charp)

+
SPA config file +

Default value: /etc/zfs/zpool.cache.

+
+

+

spa_asize_inflation (int)

+
Multiplication factor used to estimate actual disk + consumption from the size of data being written. The default value is a worst + case estimate, but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits. +

Default value: 24

+
+

+

spa_load_verify_data (int)

+
Whether to traverse data blocks during an "extreme + rewind" (-X) import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal skips non-metadata blocks. It can be toggled once the import has + started to stop or start the traversal of non-metadata blocks.

+

Default value: 1

+
+

+

spa_load_verify_metadata (int)

+
Whether to traverse blocks during an "extreme + rewind" (-X) pool import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all blocks in the pool for verification. If this parameter is set to 0, the traversal is not performed. It can be toggled once the import has started to stop or start the traversal.

+

Default value: 1

+
+

+

spa_load_verify_maxinflight (int)

+
Maximum concurrent I/Os during the traversal performed + during an "extreme rewind" (-X) pool import. +

Default value: 10000

+
+

+

spa_slop_shift (int)

+
Normally, we don't allow the last 3.2% + (1/(2^spa_slop_shift)) of space in the pool to be consumed. This ensures that + we don't run the pool completely out of space, due to unaccounted changes + (e.g. to the MOS). It also limits the worst-case time to allocate space. If we + have less than this amount of free space, most ZPL operations (e.g. write, + create) will return ENOSPC. +

Default value: 5

+
+

+

zfetch_array_rd_sz (ulong)

+
If prefetching is enabled, disable prefetching for reads + larger than this size. +

Default value: 1,048,576.

+
+

+

zfetch_block_cap (uint)

+
Max number of blocks to prefetch at a time +

Default value: 256.

+
+

+

zfetch_max_streams (uint)

+
Max number of streams per zfetch (prefetch streams per + file). +

Default value: 8.

+
+

+

zfetch_min_sec_reap (uint)

+
Min time before an active prefetch stream can be + reclaimed +

Default value: 2.

+
+

+

zfs_arc_average_blocksize (int)

+
The ARC's buffer hash table is sized based on the + assumption of an average block size of zfs_arc_average_blocksize + (default 8K). This works out to roughly 1MB of hash table per 1GB of physical + memory with 8-byte pointers. For configurations with a known larger average + block size this value can be increased to reduce the memory footprint. +

+

Default value: 8192.

+
+

+

zfs_arc_evict_batch_limit (int)

+
Number of ARC headers to evict per sub-list before proceeding to another sub-list. This batch-style operation prevents entire sub-lists from being evicted at once but comes at a cost of additional unlocking and locking.

Default value: 10.

+
+

+

zfs_arc_grow_retry (int)

+
Seconds before growing arc size +

Default value: 5.

+
+

+

zfs_arc_lotsfree_percent (int)

+
Throttle I/O when free system memory drops below this + percentage of total system memory. Setting this value to 0 will disable the + throttle. +

Default value: 10.

+
+

+

zfs_arc_max (ulong)

+
Max arc size +

Default value: 0.

+
+

+

zfs_arc_meta_limit (ulong)

+
The maximum allowed size in bytes that meta data buffers + are allowed to consume in the ARC. When this limit is reached meta data + buffers will be reclaimed even if the overall arc_c_max has not been reached. + This value defaults to 0 which indicates that 3/4 of the ARC may be used for + meta data. +

Default value: 0.

+
+

+

zfs_arc_meta_min (ulong)

+
The minimum allowed size in bytes that meta data buffers may consume in the ARC. This value defaults to 0 which disables a floor on the amount of the ARC devoted to meta data.

Default value: 0.

+
+

+

zfs_arc_meta_prune (int)

+
The number of dentries and inodes to be scanned looking for entries which can be dropped. This may be required when the ARC reaches the zfs_arc_meta_limit because dentries and inodes can pin buffers in the ARC. Increasing this value will cause the dentry and inode caches to be pruned more aggressively. Setting this value to 0 will disable pruning the inode and dentry caches.

Default value: 10,000.

+
+

+

zfs_arc_meta_adjust_restarts (ulong)

+
The number of restart passes to make while scanning the ARC attempting to free buffers in order to stay below the zfs_arc_meta_limit. This value should not need to be tuned but is available to facilitate performance analysis.

Default value: 4096.

+
+

+

zfs_arc_min (ulong)

+
Min arc size +

Default value: 100.

+
+

+

zfs_arc_min_prefetch_lifespan (int)

+
Min life of prefetch block +

Default value: 100.

+
+

+

zfs_arc_num_sublists_per_state (int)

+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and meta data objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state.

Default value: 1 or the number of online CPUs, whichever is greater

+
+

+

zfs_arc_overflow_shift (int)

+
The ARC size is considered to be overflowing if it + exceeds the current ARC target size (arc_c) by a threshold determined by this + parameter. The threshold is calculated as a fraction of arc_c using the + formula "arc_c >> zfs_arc_overflow_shift". +

The default value of 8 causes the ARC to be considered to be + overflowing if it exceeds the target size by 1/256th (0.3%) of the target + size.

+

When the ARC is overflowing, new buffer allocations are stalled + until the reclaim thread catches up and the overflow condition no longer + exists.

+

Default value: 8.

+
+

+

+

zfs_arc_p_min_shift (int)

+
arc_c shift to calc min/max arc_p +

Default value: 4.

+
+

+

zfs_arc_p_aggressive_disable (int)

+
Disable aggressive arc_p growth +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_arc_p_dampener_disable (int)

+
Disable arc_p adapt dampener +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_arc_shrink_shift (int)

+
log2(fraction of arc to reclaim) +

Default value: 5.

+
+

+

zfs_arc_sys_free (ulong)

+
The target number of bytes the ARC should leave as free + memory on the system. Defaults to the larger of 1/64 of physical memory or + 512K. Setting this option to a non-zero value will override the default. +

Default value: 0.

+
+

+

zfs_autoimport_disable (int)

+
Disable pool import at module load by ignoring the cache + file (typically /etc/zfs/zpool.cache). +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_dbgmsg_enable (int)

+
Internally ZFS keeps a small log to facilitate debugging. + By default the log is disabled, to enable it set this option to 1. The + contents of the log can be accessed by reading the /proc/spl/kstat/zfs/dbgmsg + file. Writing 0 to this proc file clears the log. +

Default value: 0.
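As a quick illustration (assuming the module is loaded and sysfs/procfs are mounted in the usual locations), the debug log can be enabled, read, and cleared like this:

  echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
  cat /proc/spl/kstat/zfs/dbgmsg
  echo 0 > /proc/spl/kstat/zfs/dbgmsg    # clears the log, as described above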

+
+

+

zfs_dbgmsg_maxsize (int)

+
The maximum size in bytes of the internal ZFS debug log. +

Default value: 4M.

+
+

+

zfs_dbuf_state_index (int)

+
Calculate arc header index +

Default value: 0.

+
+

+

zfs_deadman_enabled (int)

+
Enable deadman timer +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_deadman_synctime_ms (ulong)

+
Expiration time in milliseconds. This value has two + meanings. First it is used to determine when the spa_deadman() logic should + fire. By default the spa_deadman() will fire if spa_sync() has not completed + in 1000 seconds. Secondly, the value determines if an I/O is considered + "hung". Any I/O that has not completed in zfs_deadman_synctime_ms is + considered "hung" resulting in a zevent being logged. +

Default value: 1,000,000.

+
+

+

zfs_dedup_prefetch (int)

+
Enable prefetching dedup-ed blks +

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_delay_min_dirty_percent (int)

+
Start to delay each transaction once there is this amount + of dirty data, expressed as a percentage of zfs_dirty_data_max. This + value should be >= zfs_vdev_async_write_active_max_dirty_percent. See the + section "ZFS TRANSACTION DELAY". +

Default value: 60.

+
+

+

zfs_delay_scale (int)

+
This controls how quickly the transaction delay + approaches infinity. Larger values cause longer delays for a given amount of + dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will smoothly + handle between 10x and 1/10th this number.

+

See the section "ZFS TRANSACTION DELAY".

+

Note: zfs_delay_scale * zfs_dirty_data_max must be + < 2^64.

+

Default value: 500,000.

+
+

+

zfs_dirty_data_max (int)

+
Determines the dirty space limit in bytes. Once this + limit is exceeded, new writes are halted until space frees up. This parameter + takes precedence over zfs_dirty_data_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 10 percent of all memory, capped at + zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_max_max (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed in bytes. This limit is only enforced at module load time, and will + be ignored if zfs_dirty_data_max is later changed. This parameter takes + precedence over zfs_dirty_data_max_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 25% of physical RAM.

+
+

+

zfs_dirty_data_max_max_percent (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed as a percentage of physical RAM. This limit is only enforced at + module load time, and will be ignored if zfs_dirty_data_max is later + changed. The parameter zfs_dirty_data_max_max takes precedence over + this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 25

+
+

+

zfs_dirty_data_max_percent (int)

+
Determines the dirty space limit, expressed as a + percentage of all memory. Once this limit is exceeded, new writes are halted + until space frees up. The parameter zfs_dirty_data_max takes precedence + over this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 10%, subject to zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_sync (int)

+
Start syncing out a transaction group if there is at + least this much dirty data. +

Default value: 67,108,864.

+
+

+

zfs_free_max_blocks (ulong)

+
Maximum number of blocks freed in a single txg. +

Default value: 100,000.

+
+

+

zfs_vdev_async_read_max_active (int)

+
Maximum asynchronous read I/Os active to each device. See the section "ZFS I/O SCHEDULER".

Default value: 3.

+
+

+

zfs_vdev_async_read_min_active (int)

+
Minimum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_async_write_active_max_dirty_percent (int)

+
When the pool has more than + zfs_vdev_async_write_active_max_dirty_percent dirty data, use + zfs_vdev_async_write_max_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 60.

+
+

+

zfs_vdev_async_write_active_min_dirty_percent (int)

+
When the pool has less than + zfs_vdev_async_write_active_min_dirty_percent dirty data, use + zfs_vdev_async_write_min_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 30.

+
+

+

zfs_vdev_async_write_max_active (int)

+
Maximum asynchronous write I/Os active to each device. See the section "ZFS I/O SCHEDULER".

Default value: 10.

+
+

+

zfs_vdev_async_write_min_active (int)

+
Minimum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of 2 was chosen as + a compromise. A value of 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+

Default value: 2.

+
+

+

zfs_vdev_max_active (int)

+
The maximum number of I/Os active to each device. + Ideally, this will be >= the sum of each queue's max_active. It must be at + least the sum of each queue's min_active. See the section "ZFS I/O + SCHEDULER". +

Default value: 1,000.

+
+

+

zfs_vdev_scrub_max_active (int)

+
Maximum scrub I/Os active to each device. See the section "ZFS I/O SCHEDULER".

Default value: 2.

+
+

+

zfs_vdev_scrub_min_active (int)

+
Minimum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_sync_read_max_active (int)

+
Maximum synchronous read I/Os active to each device. See the section "ZFS I/O SCHEDULER".

Default value: 10.

+
+

+

zfs_vdev_sync_read_min_active (int)

+
Minimum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_max_active (int)

+
Maximum synchronous write I/Os active to each device. See the section "ZFS I/O SCHEDULER".

Default value: 10.

+
+

+

zfs_vdev_sync_write_min_active (int)

+
Minimum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_disable_dup_eviction (int)

+
Disable duplicate buffer eviction +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_expire_snapshot (int)

+
Seconds to expire .zfs/snapshot +

Default value: 300.

+
+

+

zfs_admin_snapshot (int)

+
Allow the creation, removal, or renaming of entries in + the .zfs/snapshot directory to cause the creation, destruction, or renaming of + snapshots. When enabled this functionality works both locally and over NFS + exports which have the 'no_root_squash' option set. This functionality is + disabled by default. +

Use 1 for yes and 0 for no (default).
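When this is enabled, snapshot management maps onto ordinary directory operations under the hidden .zfs directory. A hypothetical illustration (dataset mountpoint and snapshot name are placeholders):

  echo 1 > /sys/module/zfs/parameters/zfs_admin_snapshot
  mkdir /tank/data/.zfs/snapshot/backup-1    # creates snapshot tank/data@backup-1
  rmdir /tank/data/.zfs/snapshot/backup-1    # destroys it again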

+
+

+

zfs_flags (int)

+
Set additional debugging flags. The following flags may + be bitwise-or'd together. +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Value    Symbolic Name
         Description
1        ZFS_DEBUG_DPRINTF
         Enable dprintf entries in the debug log.
2        ZFS_DEBUG_DBUF_VERIFY *
         Enable extra dbuf verifications.
4        ZFS_DEBUG_DNODE_VERIFY *
         Enable extra dnode verifications.
8        ZFS_DEBUG_SNAPNAMES
         Enable snapshot name verification.
16       ZFS_DEBUG_MODIFY
         Check for illegally modified ARC buffers.
32       ZFS_DEBUG_SPA
         Enable spa_dbgmsg entries in the debug log.
64       ZFS_DEBUG_ZIO_FREE
         Enable verification of block frees.
128      ZFS_DEBUG_HISTOGRAM_VERIFY
         Enable extra spacemap histogram verifications.
+

* Requires debug build.

+

Default value: 0.

+
+

+

zfs_free_leak_on_eio (int)

+
If destroy encounters an EIO while reading metadata (e.g. + indirect blocks), space referenced by the missing metadata can not be freed. + Normally this causes the background destroy to become "stalled", as + it is unable to make forward progress. While in this stalled state, all + remaining space to free from the error-encountering filesystem is + "temporarily leaked". Set this flag to cause it to ignore the EIO, + permanently leak the space from indirect blocks that can not be read, and + continue to free everything else that it can. +

The default, "stalling" behavior is useful if the + storage partially fails (i.e. some but not all i/os fail), and then later + recovers. In this case, we will be able to continue pool operations while it + is partially failed, and when it recovers, we can continue to free the + space, with no leaks. However, note that this case is actually fairly + rare.

+

Typically pools either (a) fail completely (but perhaps + temporarily, e.g. a top-level vdev going offline), or (b) have localized, + permanent errors (e.g. disk returns the wrong data due to bit flip or + firmware bug). In case (a), this setting does not matter because the pool + will be suspended and the sync thread will not be able to make forward + progress regardless. In case (b), because the error is permanent, the best + we can do is leak the minimum amount of space, which is what setting this + flag will do. Therefore, it is reasonable for this flag to normally be set, + but we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.

+

Default value: 0.

+
+

+

zfs_free_min_time_ms (int)

+
Min millisecs to free per txg +

Default value: 1,000.

+
+

+

zfs_immediate_write_sz (long)

+
Largest data block to write to zil +

Default value: 32,768.

+
+

+

zfs_max_recordsize (int)

+
We currently support block sizes from 512 bytes to 16MB. + The benefits of larger blocks, and thus larger IO, need to be weighed against + the cost of COWing a giant block to modify one byte. Additionally, very large + blocks can have an impact on i/o latency, and also potentially on the memory + allocator. Therefore, we do not allow the recordsize to be set larger than + zfs_max_recordsize (default 1MB). Larger blocks can be created by changing + this tunable, and pools with larger blocks can always be imported and used, + regardless of this setting. +

Default value: 1,048,576.

+
+

+

zfs_mdcomp_disable (int)

+
Disable meta data compression +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_metaslab_fragmentation_threshold (int)

+
Allow metaslabs to keep their active state as long as + their fragmentation percentage is less than or equal to this value. An active + metaslab that exceeds this threshold will no longer keep its active status + allowing better metaslabs to be selected. +

Default value: 70.

+
+

+

zfs_mg_fragmentation_threshold (int)

+
Metaslab groups are considered eligible for allocations if their fragmentation metric (measured as a percentage) is less than or equal to this value. If a metaslab group exceeds this threshold then it will be skipped unless all metaslab groups within the metaslab class have also crossed this threshold.

Default value: 85.

+
+

+

zfs_mg_noalloc_threshold (int)

+
Defines a threshold at which metaslab groups should be eligible for allocations. The value is expressed as a percentage of free space beyond which a metaslab group is always eligible for allocations. If a metaslab group's free space is less than or equal to the threshold, the allocator will avoid allocating to that group unless all groups in the pool have reached the threshold. Once all groups have reached the threshold, all groups are allowed to accept allocations. The default value of 0 disables the feature and causes all metaslab groups to be eligible for allocations.

This parameter makes it possible to deal with pools having heavily imbalanced vdevs, such as would be the case when a new vdev has been added. Setting the threshold to a non-zero percentage will stop allocations from being made to vdevs that aren't filled to the specified percentage and allow lesser filled vdevs to acquire more allocations than they otherwise would under the old zfs_mg_alloc_failures facility.

+

Default value: 0.

+
+

+

zfs_no_scrub_io (int)

+
Set for no scrub I/O +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_no_scrub_prefetch (int)

+
Set for no scrub prefetching +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nocacheflush (int)

+
Disable cache flushes +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nopwrite_enabled (int)

+
Enable NOP writes +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_pd_bytes_max (int)

+
The number of bytes which should be prefetched. +

Default value: 52,428,800.

+
+

+

zfs_prefetch_disable (int)

+
Disable all ZFS prefetching +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_read_chunk_size (long)

+
Bytes to read per chunk +

Default value: 1,048,576.

+
+

+

zfs_read_history (int)

+
Historic statistics for the last N reads +

Default value: 0.

+
+

+

zfs_read_history_hits (int)

+
Include cache hits in read history +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_recover (int)

+
Set to attempt to recover from fatal errors. This should + only be used as a last resort, as it typically results in leaked space, or + worse. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_resilver_delay (int)

+
Number of ticks to delay prior to issuing a resilver I/O + operation when a non-resilver or non-scrub I/O operation has occurred within + the past zfs_scan_idle ticks. +

Default value: 2.

+
+

+

zfs_resilver_min_time_ms (int)

+
Min millisecs to resilver per txg +

Default value: 3,000.

+
+

+

zfs_scan_idle (int)

+
Idle window in clock ticks. During a scrub or a resilver, + if a non-scrub or non-resilver I/O operation has occurred during this window, + the next scrub or resilver operation is delayed by, respectively + zfs_scrub_delay or zfs_resilver_delay ticks. +

Default value: 50.

+
+

+

zfs_scan_min_time_ms (int)

+
Min millisecs to scrub per txg +

Default value: 1,000.

+
+

+

zfs_scrub_delay (int)

+
Number of ticks to delay prior to issuing a scrub I/O + operation when a non-scrub or non-resilver I/O operation has occurred within + the past zfs_scan_idle ticks. +

Default value: 4.

+
+

+

zfs_send_corrupt_data (int)

+
Allow sending of corrupt data (ignore read/checksum errors when sending data)

Use 1 for yes and 0 for no (default).

+
+

+

zfs_sync_pass_deferred_free (int)

+
Defer frees starting in this pass +

Default value: 2.

+
+

+

zfs_sync_pass_dont_compress (int)

+
Don't compress starting in this pass +

Default value: 5.

+
+

+

zfs_sync_pass_rewrite (int)

+
Rewrite new bps starting in this pass +

Default value: 2.

+
+

+

zfs_top_maxinflight (int)

+
Max I/Os per top-level vdev during scrub or resilver + operations. +

Default value: 32.

+
+

+

zfs_txg_history (int)

+
Historic statistics for the last N txgs +

Default value: 0.

+
+

+

zfs_txg_timeout (int)

+
Max seconds worth of delta per txg +

Default value: 5.

+
+

+

zfs_vdev_aggregation_limit (int)

+
Max vdev I/O aggregation size +

Default value: 131,072.

+
+

+

zfs_vdev_cache_bshift (int)

+
Shift size to inflate reads to

Default value: 16.

+
+

+

zfs_vdev_cache_max (int)

+
Inflate reads smaller than max
+

+

zfs_vdev_cache_size (int)

+
Total size of the per-disk cache +

Default value: 0.

+
+

+

zfs_vdev_mirror_switch_us (int)

+
Switch mirrors every N usecs +

Default value: 10,000.

+
+

+

zfs_vdev_read_gap_limit (int)

+
Aggregate read I/O over gap +

Default value: 32,768.

+
+

+

zfs_vdev_scheduler (charp)

+
I/O scheduler +

Default value: noop.

+
+

+

zfs_vdev_write_gap_limit (int)

+
Aggregate write I/O over gap +

Default value: 4,096.

+
+

+

zfs_zevent_cols (int)

+
Max event column width +

Default value: 80.

+
+

+

zfs_zevent_console (int)

+
Log events to the console +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_zevent_len_max (int)

+
Max event queue length +

Default value: 0.

+
+

+

zil_replay_disable (int)

+
Disable intent logging replay +

Use 1 for yes and 0 for no (default).

+
+

+

zil_slog_limit (ulong)

+
Max commit bytes to separate log device +

Default value: 1,048,576.

+
+

+

zio_delay_max (int)

+
Max zio millisec delay before posting event +

Default value: 30,000.

+
+

+

zio_requeue_io_start_cut_in_line (int)

+
Prioritize requeued I/O +

Default value: 0.

+
+

+

zio_taskq_batch_pct (uint)

+
Percentage of online CPUs (or CPU cores, etc) which will + run a worker thread for IO. These workers are responsible for IO work such as + compression and checksum calculations. Fractional number of CPUs will be + rounded down. +

The default value of 75 was chosen to avoid using all CPUs which + can result in latency issues and inconsistent application performance, + especially when high compression is enabled.

+

Default value: 75.

+
+

+

zvol_inhibit_dev (uint)

+
Do not create zvol device nodes +

Use 1 for yes and 0 for no (default).

+
+

+

zvol_major (uint)

+
Major number for zvol device +

Default value: 230.

+
+

+

zvol_max_discard_blocks (ulong)

+
Max number of blocks to discard at once +

Default value: 16,384.

+
+

+

zvol_prefetch_bytes (uint)

+
When adding a zvol to the system prefetch + zvol_prefetch_bytes from the start and end of the volume. Prefetching + these regions of the volume is desirable because they are likely to be + accessed immediately by blkid(8) or by the kernel scanning for a + partition table. +

Default value: 131,072.

+
+

+
+
+
+

+

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/Os. The I/O scheduler determines when and in what order those operations + are issued. The I/O scheduler divides operations into five I/O classes + prioritized in the following order: sync read, sync write, async read, async + write, and scrub/resilver. Each queue defines the minimum and maximum number + of concurrent operations that may be issued to the device. In addition, the + device has an aggregate maximum, zfs_vdev_max_active. Note that the + sum of the per-queue minimums must not exceed the aggregate maximum. If the + sum of the per-queue maximums exceeds the aggregate maximum, then the number + of active I/Os may reach zfs_vdev_max_active, in which case no + further I/Os will be issued regardless of whether all per-queue minimums + have been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Further, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been hit + or if there are no operations queued for an I/O class that has not hit its + maximum. Every time an I/O is queued or an operation completes, the I/O + scheduler looks for new operations to issue.

+

In general, smaller max_active's will lead to lower latency of + synchronous operations. Larger max_active's may lead to higher overall + throughput, depending on underlying storage.

+

The ratio of the queues' max_actives determines the balance of + performance between reads, writes, and scrubs. E.g., increasing + zfs_vdev_scrub_max_active will cause the scrub or resilver to + complete more quickly, but reads and writes to have higher latency and lower + throughput.

+

All I/O classes have a fixed maximum number of outstanding + operations except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write I/Os according to + the amount of dirty data in the pool. Since both throughput and latency + typically increase with the number of concurrent operations issued to + physical devices, reducing the burstiness in the number of concurrent + operations also stabilizes the response time of operations from other -- and + in particular synchronous -- queues. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there's + more dirty data in the pool.

+

Async Writes

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points.

+
+
       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |        100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent
+Until the amount of dirty data exceeds a minimum percentage of the dirty data + allowed in the pool, the I/O scheduler will limit the number of concurrent + operations to the minimum. As that threshold is crossed, the number of + concurrent operations issued increases linearly to the maximum at the + specified maximum percentage of the dirty data allowed in the pool. +

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the + maximum percentage, this indicates that the rate of incoming data is greater + than the rate that the backend storage can handle. In this case, we must + further throttle incoming writes, as described in the next section.

+

+
+
+

+

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as:

+
+
+ min_time = zfs_delay_scale * (dirty - min) / (max - dirty) +
+ min_time is then capped at 100 milliseconds.
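As a rough worked example (with the assumption, consistent with the 500us midpoint described below, that dirty, min and max are taken as fractions of zfs_dirty_data_max and the result is in nanoseconds), using the defaults zfs_delay_scale=500,000 and zfs_delay_min_dirty_percent=60, a pool sitting at 70% dirty data would see roughly:

  awk 'BEGIN { scale=500000; dirty=0.70; min=0.60; max=1.0;
               printf "min_time ~ %.0f ns\n", scale*(dirty-min)/(max-dirty) }'
  # prints: min_time ~ 166667 ns  (about 167 microseconds per transaction)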
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be at or above + zfs_vdev_async_write_active_max_dirty_percent so that we only start + to delay after writing at full speed has failed to keep up with the incoming + write rate. The scale of the curve is defined by zfs_delay_scale. + Roughly speaking, this variable determines the amount of delay at the + midpoint of the curve.

+

+
delay
+
+ 10ms +-------------------------------------------------------------*+ +
+ | *| +
+ 9ms + *+ +
+ | *| +
+ 8ms + *+ +
+ | * | +
+ 7ms + * + +
+ | * | +
+ 6ms + * + +
+ | * | +
+ 5ms + * + +
+ | * | +
+ 4ms + * + +
+ | * | +
+ 3ms + * + +
+ | * | +
+ 2ms + (midpoint) * + +
+ | | ** | +
+ 1ms + v *** + +
+ | zfs_delay_scale ----------> ******** | +
+ 0 +-------------------------------------*********----------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note that since the delay is added to the outstanding time + remaining on the most recent transaction, the delay is effectively the + inverse of IOPS. Here the midpoint of 500us translates to 2000 IOPS. The + shape of the curve was chosen such that small changes in the amount of + accumulated dirty data in the first 3/4 of the curve yield relatively small + differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a log scale:

+

+
delay
+100ms +-------------------------------------------------------------++
+
+ + + +
+ | | +
+ + *+ +
+ 10ms + *+ +
+ + ** + +
+ | (midpoint) ** | +
+ + | ** + +
+ 1ms + v **** + +
+ + zfs_delay_scale ----------> ***** + +
+ | **** | +
+ + **** + +
+100us + ** + +
+ + * + +
+ | * | +
+ + * + +
+ 10us + * + +
+ + + +
+ | | +
+ + + +
+ +--------------------------------------------------------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the backend storage, and then by changing the value of + zfs_delay_scale to increase the steepness of the curve.

+
+
+ + + + + +
November 16, 2013
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/5/zpool-features.5.html b/man/v0.6/5/zpool-features.5.html new file mode 100644 index 000000000..4f455c26c --- /dev/null +++ b/man/v0.6/5/zpool-features.5.html @@ -0,0 +1,584 @@ + + + + + + + zpool-features.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.5

+
+ + + + + +
ZPOOL-FEATURES(5)                File Formats Manual                ZPOOL-FEATURES(5)
+
+
+

+

zpool-features - ZFS pool feature descriptions

+
+
+

+

ZFS pool on-disk format versions are specified via + "features" which replace the old on-disk format numbers (the last + supported on-disk format number is 28). To enable a feature on a pool use + the upgrade subcommand of the zpool(8) command, or set the + feature@feature_name property to enabled.
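As a sketch (pool and feature names are illustrative), either of the following enables features on an existing pool:

  zpool upgrade tank                              # enable all supported features
  zpool set feature@async_destroy=enabled tank    # enable a single feature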

+

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

+

Since most features can be enabled independently of each other the + on-disk format of the pool is specified by the set of all features marked as + active on the pool. If the pool was created by another software + version this set may include unsupported features.

+
+

+

Every feature has a guid of the form + com.example:feature_name. The reverse DNS name ensures that the + feature's guid is unique across all ZFS implementations. When unsupported + features are encountered on a pool they will be identified by their guids. + Refer to the documentation for the ZFS implementation that created the pool + for information about those features.

+

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its guid which follows the ':' (e.g. + com.example:feature_name would have the short name + feature_name), however a feature's short name may differ across ZFS + implementations if following the convention would result in name + conflicts.

+
+
+

+

Features can be in one of three states:

+

active

+
This feature's on-disk format changes are in effect on + the pool. Support for this feature is required to import the pool in + read-write mode. If this feature is not read-only compatible, support is also + required to import the pool in read-only mode (see "Read-only + compatibility").
+

+

enabled

+
An administrator has marked this feature as enabled on + the pool, but the feature's on-disk format changes have not been made yet. The + pool can still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support returning to the + enabled state after becoming active. See feature-specific + documentation for details.
+

+

disabled

+
This feature's on-disk format changes have not been made + and will not be made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they have been + enabled.
+

+

+

The state of supported features is exposed through pool properties + of the form feature@short_name.
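For example, the state of every feature on a pool can be listed with a command along these lines (the pool name is illustrative):

  zpool get all tank | grep feature@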

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as "read-only compatible". If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly property during + import (see zpool(8) for details on importing pools).
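A pool whose unsupported features are all read-only compatible can still be attached for inspection; roughly:

  zpool import -o readonly=on tank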

+
+
+

+

For each unsupported feature enabled on an imported pool a pool + property named unsupported@feature_guid will indicate why the import + was allowed despite the unsupported feature. Possible values for this + property are:

+

+

inactive

+
The feature is in the enabled state and therefore + the pool's on-disk format is still compatible with software that does not + support this feature.
+

+

readonly

+
The feature is read-only compatible and the pool has been + imported in read-only mode.
+

+
+
+

+

Some features depend on other features being enabled in order to + function properly. Enabling a feature will automatically enable any features + it depends on.

+
+
+
+

+

The following features are supported on this system:

+

async_destroy

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:async_destroy
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

Destroying a file system requires traversing all of its data in + order to return its used space to the pool. Without async_destroy the + file system is not fully removed until all space has been reclaimed. If the + destroy operation is interrupted by a reboot or power outage the next + attempt to open the pool will need to complete the destroy operation + synchronously.

+

When async_destroy is enabled the file system's data will + be reclaimed by a background process, allowing the destroy operation to + complete without traversing the entire file system. The background process + is able to resume interrupted destroys after the pool has been opened, + eliminating the need to finish interrupted destroys as part of the open + operation. The amount of space remaining to be reclaimed by the background + process is available through the freeing property.

+

This feature is only active while freeing is + non-zero.
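For instance, the progress of a background destroy can be watched through the freeing property (pool name is illustrative):

  zpool get freeing tank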

+
+

+

empty_bpobj

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:empty_bpobj
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also reduces + the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobj's) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobj's are empty. This feature + allows us to create each bpobj on-demand, thus eliminating the empty + bpobjs.

+

This feature is active while there are any filesystems, + volumes, or snapshots which were created after enabling this feature.

+
+

+

filesystem_limits

+
+ + + + + + + + + + + + + +
GUID                    com.joyent:filesystem_limits
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            extensible_dataset
+

This feature enables filesystem and snapshot limits. These limits + can be used to control how many filesystems and/or snapshots can be created + at the point in the tree on which the limits are set.

+

This feature is active once either of the limit properties + has been set on a dataset. Once activated the feature is never + deactivated.

+
+

+

lz4_compress

+
+ + + + + + + + + + + + + +
GUID                    org.illumos:lz4_compress
READ-ONLY COMPATIBLE    no
DEPENDENCIES            none
+

lz4 is a high-performance real-time compression algorithm + that features significantly faster compression and decompression as well as + a higher compression ratio than the older lzjb compression. + Typically, lz4 compression is approximately 50% faster on + compressible data and 200% faster on incompressible data than lzjb. + It is also approximately 80% faster on decompression, while giving + approximately 10% better compression ratio.

+

When the lz4_compress feature is set to enabled, the administrator can turn on lz4 compression on any dataset on the pool using the zfs(8) command. Please note that doing so will immediately activate the lz4_compress feature on the underlying pool. Also, all newly written metadata will be compressed with the lz4 algorithm. Since this feature is not read-only compatible, this operation will render the pool unimportable on systems without support for the lz4_compress feature. Booting off of lz4-compressed root pools is supported.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.
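A typical way to start using lz4 once the feature is enabled is simply (the dataset name is illustrative):

  zfs set compression=lz4 tank/data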

+
+

+

spacemap_histogram

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:spacemap_histogram
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, ZFS will set this feature to active when a new space map object is created or an existing space map is upgraded to the new format. Once the feature is active, it will remain in that state until the pool is destroyed.

+

+
+

+

extensible_dataset

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:extensible_dataset
READ-ONLY COMPATIBLE    no
DEPENDENCIES            none
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first dependent + feature uses it, and will be returned to the enabled state when all + datasets that use this feature are destroyed.

+

+
+

+

bookmarks

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:bookmarks
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            extensible_dataset
+

This feature enables use of the zfs bookmark + subcommand.

+

This feature is active while any bookmarks exist in the + pool. All bookmarks in the pool can be listed by running zfs list -t + bookmark -r poolname.
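For example (snapshot and bookmark names are illustrative), a bookmark is created from an existing snapshot and can later serve, roughly, as the source of an incremental send:

  zfs bookmark tank/data@snap1 tank/data#mark1
  zfs send -i tank/data#mark1 tank/data@snap2 | zfs receive backup/data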

+

+
+

+

enabled_txg

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:enabled_txg
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

Once this feature is enabled ZFS records the transaction group + number in which new features are enabled. This has no user-visible impact, + but other features may depend on this feature.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

hole_birth

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:hole_birth
READ-ONLY COMPATIBLE    no
DEPENDENCIES            enabled_txg
+

This feature improves performance of incremental sends ("zfs + send -i") and receives for objects with many holes. The most common + case of hole-filled objects is zvols.

+

An incremental send stream from snapshot A to snapshot + B contains information about every block that changed between + A and B. Blocks which did not change between those snapshots + can be identified and omitted from the stream using a piece of metadata + called the 'block birth time', but birth times are not recorded for holes + (blocks filled only with zeroes). Since holes created after A cannot + be distinguished from holes created before A, information about every + hole in the entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. However, + when incrementally replicating filesystems or zvols with many holes (for + example a zvol formatted with another filesystem) a lot of time will be + spent sending and receiving unnecessary information about holes that already + exist on the receiving side.

+

Once the hole_birth feature has been enabled the block + birth times of all new holes will be recorded. Incremental sends between + snapshots created after this feature is enabled will use this new metadata + to avoid sending information about holes that already exist on the receiving + side.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

embedded_data

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:embedded_data
READ-ONLY COMPATIBLE    no
DEPENDENCIES            none
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 bytes + or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of highly-compressible blocks are stored in the block "pointer" itself (a misnomer in this case, as it contains the compressed data, rather than a pointer to its location on disk). Thus the space of the block (one sector, typically 512 bytes or 4KB) is saved, and no additional i/o is needed to read and write the data block.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

large_blocks

+
+ + + + + + + + + + + + + +
GUID                    org.open-zfs:large_block
READ-ONLY COMPATIBLE    no
DEPENDENCIES            extensible_dataset
+

The large_block feature allows the record size on a dataset + to be set larger than 128KB.

+

This feature becomes active once a recordsize + property has been set larger than 128KB, and will return to being + enabled once all filesystems that have ever had their recordsize + larger than 128KB are destroyed.
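A sketch of putting this to use (names are illustrative; note that zfs_max_recordsize, described in zfs-module-parameters(5), also caps the largest allowed recordsize):

  zpool set feature@large_blocks=enabled tank
  zfs set recordsize=1M tank/media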

+
+

+
+
+

+

zpool(8)

+
+
+ + + + + +
August 27, 2013
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/fsck.zfs.8.html b/man/v0.6/8/fsck.zfs.8.html new file mode 100644 index 000000000..3cdde3639 --- /dev/null +++ b/man/v0.6/8/fsck.zfs.8.html @@ -0,0 +1,215 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
fsck.zfs(8)            System Administration Commands            fsck.zfs(8)
+
+

+
+

+

fsck.zfs - Dummy ZFS filesystem checker.

+

+
+
+

+

fsck.zfs [options] + <dataset>

+

+
+
+

+

fsck.zfs is a shell stub that does nothing and always + returns true. It is installed by ZoL because some Linux distributions expect + a fsck helper for all filesystems.

+

+
+
+

+

All options and the dataset are ignored.

+

+
+
+

+

ZFS datasets are checked by running zpool scrub on the + containing pool. An individual ZFS dataset is never checked independently of + its pool, which is unlike a regular filesystem.
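In other words, the moral equivalent of an fsck for a dataset in pool "tank" (name illustrative) is:

  zpool scrub tank
  zpool status tank      # monitor progress and any errors found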

+

+
+
+

+

On some systems, if the dataset is in a degraded pool, then + it might be appropriate for fsck.zfs to return exit code 4 to + indicate an uncorrected filesystem error.

+

Similarly, if the dataset is in a faulted pool and has a + legacy /etc/fstab record, then fsck.zfs should return exit code 8 to + indicate a fatal operational error.

+

+
+
+

+

Darik Horn <dajhorn@vanadac.com>.

+

+
+
+

+

fsck(8), fstab(5), zpool(8)

+
+
+ + + + + +
2013 MAR 16ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/index.html b/man/v0.6/8/index.html new file mode 100644 index 000000000..93996c694 --- /dev/null +++ b/man/v0.6/8/index.html @@ -0,0 +1,161 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

System Administration Commands (8)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/mount.zfs.8.html b/man/v0.6/8/mount.zfs.8.html new file mode 100644 index 000000000..928cf62f9 --- /dev/null +++ b/man/v0.6/8/mount.zfs.8.html @@ -0,0 +1,264 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
mount.zfs(8)System Administration Commandsmount.zfs(8)
+
+

+
+

+

mount.zfs - mount a ZFS filesystem

+
+
+

+

mount.zfs [-sfnvh] [-o options] dataset + mountpoint

+

+
+
+

+

mount.zfs is part of the zfsutils package for Linux. It is + a helper program that is usually invoked by the mount(8) or + zfs(8) commands to mount a ZFS dataset.

+

All options are handled according to the FILESYSTEM + INDEPENDENT MOUNT OPTIONS section in the mount(8) manual, except for + those described below.

+

The dataset parameter is a ZFS filesystem name, as output + by the zfs list -H -o name command. This parameter never has a + leading slash character and is not a device name.

+

The mountpoint parameter is the path name of a + directory.
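A hedged example invocation (dataset and mountpoint are placeholders); in normal operation mount(8) or zfs(8) call this helper rather than the administrator:

# mount.zfs -v -o ro tank/home /mnt/home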

+

+

+
+
+

+
+
+
Ignore bad or sloppy mount options.
+
+
Do a fake mount; do not perform the mount operation.
+
+
Do not update the /etc/mtab file.
+
+
Increase verbosity.
+
+
Print the usage message.
+
+
This flag sets the SELinux context for all files in the filesystem under that mountpoint.
+
+
This flag sets the SELinux context for the filesystem being mounted.
+
+
This flag sets the SELinux context for unlabeled files.
+
+
This flag sets the SELinux context for the root inode of the + filesystem.
+
+
This private flag indicates that the dataset has an entry in the + /etc/fstab file.
+
+
This private flag disables extended attributes.
+
+
This private flag enables directory-based extended attributes and, if + appropriate, adds a ZFS context to the selinux system policy.
+
+
This private flag enables system attribute-based extended attributes and, if appropriate, adds a ZFS context to the selinux system policy.
+
+
Equivalent to xattr.
+
+
This private flag indicates that mount(8) is being called by the + zfs(8) command. +

+
+
+
+
+

+

ZFS conventionally requires that the mountpoint be an empty + directory, but the Linux implementation inconsistently enforces the + requirement.

+

The mount.zfs helper does not mount the contents of + zvols.

+

+
+
+

+
+
/etc/fstab
+
The static filesystem table.
+
/etc/mtab
+
The mounted filesystem table.
+
+
+
+

+

The primary author of mount.zfs is Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

fstab(5), mount(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/vdev_id.8.html b/man/v0.6/8/vdev_id.8.html new file mode 100644 index 000000000..57f06d808 --- /dev/null +++ b/man/v0.6/8/vdev_id.8.html @@ -0,0 +1,234 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
vdev_id(8)System Manager's Manualvdev_id(8)
+
+
+

+

vdev_id - generate user-friendly names for JBOD disks

+
+
+

+
vdev_id <-d dev> [-c config_file] [-g sas_direct|sas_switch]
+
+ [-m] [-p phys_per_port] +vdev_id -h
+
+
+

+

The vdev_id command is a udev helper which parses the file + /etc/zfs/vdev_id.conf(5) to map a physical path in a storage topology + to a channel name. The channel name is combined with a disk enclosure slot + number to create an alias that reflects the physical location of the drive. + This is particularly helpful when it comes to tasks like replacing failed + drives. Slot numbers may also be re-mapped in case the default numbering is + unsatisfactory. The drive aliases will be created as symbolic links in + /dev/disk/by-vdev.

+

The currently supported topologies are sas_direct and sas_switch. + A multipath mode is supported in which dm-mpath devices are handled by + examining the first-listed running component disk as reported by the + multipath(8) command. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating aliases based on existing + udev links in the /dev hierarchy using the alias configuration file + keyword. See the vdev_id.conf(5) man page for details.
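A minimal sketch of an /etc/zfs/vdev_id.conf for a sas_direct topology (the PCI addresses and channel names are hypothetical), followed by a manual invocation of the helper:

topology sas_direct
phys_per_port 4
channel 85:00.0 1 A
channel 85:00.0 0 B

# vdev_id -d sda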

+

+
+
+

+
+
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+
This is the only mandatory argument. Specifies the name of a device in /dev, e.g. "sda".
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely identified by a PCI slot and an HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+
+
+
Specifies that vdev_id(8) will handle only dm-multipath devices. If + set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4.
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zdb.8.html b/man/v0.6/8/zdb.8.html new file mode 100644 index 000000000..add6484af --- /dev/null +++ b/man/v0.6/8/zdb.8.html @@ -0,0 +1,526 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)ZDB(8)
+
+

+
+

+

zdb - Display zpool debugging and consistency + information

+

+
+
+

+

zdb [-CumdibcsDvhLMXFPA] [-e [-p path...]] [-t + txg] +
+ [-U cache] [-I inflight I/Os] +
+ [poolname [object ...]]

+

+

zdb [-divPA] [-e [-p path...]] [-U cache] +
+ dataset [object ...]

+

+

zdb -m [-MLXFPA] [-t txg] [-e [-p path...]] + [-U cache] +
+ poolname [vdev [metaslab ...]]

+

+

zdb -R [-A] [-e [-p path...]] [-U cache] + poolname +
+ vdev:offset:size[:flags]

+

+

zdb -S [-AP] [-e [-p path...]] [-U cache] + poolname

+

+

zdb -l [-uA] device

+

+

zdb -C [-A] [-U cache]

+

+
+
+

+

The zdb utility displays information about a ZFS pool useful for debugging and performs some amount of consistency checking. It is not a general purpose tool and options (and facilities) may change. This is neither an fsck(8) nor an fsdb(8) utility.

+

+

The output of this command in general reflects the on-disk structure of a ZFS pool and is inherently unstable. The precise output of most invocations is not documented; a knowledge of ZFS internals is assumed.

+

+

If the dataset argument does not contain any / or + @ characters, it is interpreted as a pool name. The root dataset can + be specified as pool/ (pool name followed by a slash).

+

+

When operating on an imported and active pool it is possible, + though unlikely, that zdb may interpret inconsistent pool data and behave + erratically.

+

+
+
+

+

Display options:

+

+

-b

+

+
Display statistics regarding the number, size (logical, + physical and allocated) and deduplication of blocks.
+

+

-c

+

+
Verify the checksum of all metadata blocks while printing + block statistics (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+

+

-C

+

+
Display information about the configuration. If specified + with no other options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file to display, see + -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display the configuration that + would be used were the pool to be imported.

+
+

+

-d

+

+
Display information about datasets. Specified once, + displays basic dataset information: ID, create transaction, size, and object + count. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs are specified, display information about those + specific objects only.

+
+

+

-D

+

+
Display deduplication statistics, including the + deduplication ratio (dedup), compression ratio (compress), inflation due to + the zfs copies property (copies), and an overall effective ratio (dedup * + compress / copies). +

If specified twice, display a histogram of deduplication + statistics, showing the allocated (physically present on disk) and + referenced (logically referenced in the pool) block counts and sizes by + reference count.

+

If specified a third time, display the statistics independently + for each deduplication table.

+

If specified a fourth time, dump the contents of the deduplication + tables describing duplicate blocks.

+

If specified a fifth time, also dump the contents of the + deduplication tables describing unique blocks.

+
+

+

-h

+

+
Display pool history similar to zpool history, but + include internal changes, transaction, and dataset information.
+

+

-i

+

+
Display information about intent log (ZIL) entries + relating to each dataset. If specified multiple times, display counts of each + intent log transaction type.
+

+

-l device

+

+
Display the vdev labels from the specified device. If the + -u option is also specified, also display the uberblocks on this + device.
+

+

-L

+

+
Disable leak tracing and the loading of space maps. By + default, zdb verifies that all non-free blocks are referenced, which + can be very expensive.
+

+

-m

+

+
Display the offset, spacemap, and free space of each metaslab. When specified twice, also display information about the on-disk free space histogram associated with each metaslab. When specified three times, display the maximum contiguous free space, the in-core free space histogram, and the percentage of free space in each space map. When specified four times, display every spacemap record.
+

+

-M

+

+
Display the offset, spacemap, and free space of each + metaslab. When specified twice, also display information about the maximum + contiguous free space and the percentage of free space in each space map. When + specified three times display every spacemap record.
+

+

-R poolname + vdev:offset:size[:flags]

+

+
Read and display a block from the specified device. By + default the block is displayed as a hex dump, but see the description of the + ´r´ flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) offset (the offset within + the vdev) size (the size of the block to read) and, optionally, + flags (a set of flags, described below).

+

+

b offset

+

+
Print block pointer
+

+

d

+

+
Decompress the block
+

+

e

+

+
Byte swap the block
+

+

g

+

+
Dump gang block header
+

+

i

+

+
Dump indirect block
+

+

r

+

+
Dump raw uninterpreted block data
+
+

+

-s

+

+
Report statistics on zdb´s I/O. Display + operation counts, bandwidth, and error counts of I/O to the pool from + zdb.
+

+

-S

+

+
Simulate the effects of deduplication, constructing a DDT + and then display that DDT as with -DD.
+

+

-u

+

+
Display the current uberblock.
+

+

Other options:

+

+

-A

+

+
Do not abort should any assertion fail.
+

+

-AA

+

+
Enable panic recovery, certain errors which would + otherwise be fatal are demoted to warnings.
+

+

-AAA

+

+
Do not abort if asserts fail and also enable panic + recovery.
+

+

-e [-p path]...

+

+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The -p flag specifies the path under which + devices are to be searched.
+

+

-F

+

+
Attempt to make an unreadable pool readable by trying + progressively older transactions.
+

+

-I inflight I/Os

+

+
Limit the number of outstanding checksum I/Os to the + specified value. The default value is 200. This option affects the performance + of the -c option.
+

+

-P

+

+
Print numbers in an unscaled form more amenable to parsing, e.g. 1000000 rather than 1M.
+

+

-t transaction

+

+
Specify the highest transaction to use when searching for + uberblocks. See also the -u and -l options for a means to see + the available uberblocks and their associated transaction numbers.
+

+

-U cachefile

+

+
Use a cache file other than + /etc/zfs/zpool.cache.
+

+

-v

+

+
Enable verbosity. Specify multiple times for increased + verbosity.
+

+

-X

+

+
Attempt ´extreme´ transaction rewind, that + is attempt the same recovery as -F but read transactions otherwise + deemed too old.
+

+

-V

+

+
Attempt a verbatim import. This mimics the behavior of + the kernel when loading a pool from a cachefile.
+

+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+

+
+
+

+

Example 1 Display the configuration of imported pool + 'rpool'

+

+
+

+
# zdb -C rpool
+MOS Configuration:
+
+ version: 28 +
+ name: 'rpool' +
+ ...
+
+

+

+

Example 2 Display basic dataset information about + 'rpool'

+

+
+

+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+
+ ...
+
+

+

+

Example 3 Display basic information about object 0 in + 'rpool/export/home'

+

+
+

+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+ Object lvl iblk dblk dsize lsize %full type +
+ 0 7 16K 16K 15.0K 16K 25.00 DMU dnode
+
+

+

+

Example 4 Display the predicted effect of enabling + deduplication on 'rpool'

+

+
+

+
# zdb -S rpool
+Simulated DDT histogram:
+bucket              allocated                       referenced          
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+
+ 1 694K 27.1G 15.0G 15.0G 694K 27.1G 15.0G 15.0G +
+ 2 35.0K 1.33G 699M 699M 74.7K 2.79G 1.45G 1.45G +
+ ... +dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+

+

+
+
+

+
+
+
Override the default spa_config_path (/etc/zfs/zpool.cache) setting. If the -U flag is also specified, it overrides this environment variable setting.

+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
February 15, 2012
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zed.8.html b/man/v0.6/8/zed.8.html new file mode 100644 index 000000000..dd3c6859b --- /dev/null +++ b/man/v0.6/8/zed.8.html @@ -0,0 +1,370 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Administration CommandsZED(8)
+
+

+
+

+

ZED - ZFS Event Daemon

+

+
+
+

+

zed [-d zedletdir] [-f] [-F] + [-h] [-L] [-M] [-p pidfile] [-s + statefile] [-v] [-V] [-Z]

+

+
+
+

+

ZED (ZFS Event Daemon) monitors events generated by the ZFS + kernel module. When a zevent (ZFS Event) is posted, ZED will run any + ZEDLETs (ZFS Event Daemon Linkage for Executable Tasks) that have been + enabled for the corresponding zevent class.

+

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Run the daemon in the foreground.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+
Read the enabled ZEDLETs from the specified directory.
+
+
Write the daemon's process ID to the specified file.
+
+
Write the daemon's state to the specified file. +

+
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the "zpool + events -v" command.

+

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory. These can be symlinked or copied from the + installed-zedlets directory; symlinks allow for automatic updates + from the installed ZEDLETs, whereas copies preserve local modifications. As + a security measure, ZEDLETs must be owned by root. They must have execute + permissions for the user, but they must not have write permissions for group + or other. Dotfiles are ignored.

+

ZEDLETs are named after the zevent class for which they should be + invoked. In particular, a ZEDLET will be invoked for a given zevent if + either its class or subclass string is a prefix of its filename (and is + followed by a non-alphabetic character). As a special case, the prefix + "all" matches all zevents. Multiple ZEDLETs may be invoked for a + given zevent.
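For example (the paths reflect a typical install and are not guaranteed), enabling the stock all-syslog ZEDLET amounts to symlinking it into the enabled-zedlets directory and sending the daemon a SIGHUP so it rescans the directory:

# ln -s /usr/libexec/zfs/zed.d/all-syslog.sh /etc/zfs/zed.d/
# kill -HUP $(cat /run/zed.pid)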

+

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + "ZED_".

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner: 1) it is prefixed with "ZEVENT_", 2) it is converted to + uppercase, and 3) each non-alphanumeric character is converted to an + underscore. Some additional environment variables have been defined to + present certain nvpair values in a more convenient form. An incomplete list + of zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as + "seconds nanoseconds" since the Epoch.
+
+
The seconds component of ZEVENT_TIME.
+
+
The nanoseconds component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The ZFS alias (name-version-release) string used to build the + daemon.
+
+
The ZFS version used to build the daemon.
+
+
The ZFS release used to build the daemon.
+
+

ZEDLETs may need to call other ZFS commands. The installation + paths of the following executables are defined: ZDB, ZED, + ZFS, ZINJECT, and ZPOOL. These variables can be + overridden in the rc file if needed.
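A minimal ZEDLET sketch (the filename and log path are hypothetical) that relies only on the documented ZEVENT_ naming convention:

#!/bin/sh
# all-logger.sh: append one line per zevent to a local log file
echo "eid=${ZEVENT_EID} class=${ZEVENT_CLASS} subclass=${ZEVENT_SUBCLASS}" >> /var/tmp/zed-events.log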

+

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@libexecdir@/zfs/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state. +

+
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
+
Terminate the daemon. +

+
+
+
+
+

+

ZED requires root privileges.

+

+
+
+

+

Events are processed synchronously by a single thread. This can + delay the processing of simultaneous zevents.

+

There is no maximum timeout for ZEDLET execution. Consequently, a + misbehaving ZEDLET can delay the processing of subsequent zevents.

+

The ownership and permissions of the enabled-zedlets + directory (along with all parent directories) are not checked. If any of + these directories are improperly owned or permissioned, an unprivileged user + could insert a ZEDLET to be executed as root. The requirement that ZEDLETs + be owned by root mitigates this to some extent.

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Some zevent nvpair types are not handled. These are denoted by + zevent environment variables having a "_NOT_IMPLEMENTED_" + value.

+

Internationalization support via gettext has not been added.

+

The configuration file is not yet implemented.

+

The diagnosis engine is not yet implemented.

+

+
+
+

+

ZED (ZFS Event Daemon) is distributed under the terms of + the Common Development and Distribution License Version 1.0 (CDDL-1.0).

+

Developed at Lawrence Livermore National Laboratory + (LLNL-CODE-403049).

+

+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
October 1, 2013ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zfs.8.html b/man/v0.6/8/zfs.8.html new file mode 100644 index 000000000..f26125b11 --- /dev/null +++ b/man/v0.6/8/zfs.8.html @@ -0,0 +1,3315 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
zfs(8)System Administration Commandszfs(8)
+
+
+

+

zfs - configures ZFS file systems

+
+
+

+
zfs [-?]
+

+

+
zfs create [-p] [-o property=value] ... filesystem
+

+

+
zfs create [-ps] [-b blocksize] [-o property=value] ... -V size volume
+

+

+
zfs destroy [-fnpRrv] filesystem|volume
+

+

+
zfs destroy [-dnpRrv] filesystem|volume@snap[%snap][,...]
+

+

+
zfs destroy filesystem|volume#bookmark
+

+

+
zfs snapshot | snap [-r] [-o property=value] ... 
+
+ filesystem@snapname|volume@snapname ...
+

+

+
zfs rollback [-rRf] snapshot
+

+

+
zfs clone [-p] [-o property=value] ... snapshot filesystem|volume
+

+

+
zfs promote clone-filesystem
+

+

+
zfs rename [-f] filesystem|volume|snapshot
+
+ filesystem|volume|snapshot
+

+

+
zfs rename [-fp] filesystem|volume filesystem|volume
+

+

+
zfs rename -r snapshot snapshot
+

+

+
zfs list [-r|-d depth][-Hp][-o property[,property]...] [-t type[,type]..]
+
+ [-s property] ... [-S property] ... [filesystem|volume|snapshot] ...
+

+

+
zfs set property=value filesystem|volume|snapshot ...
+

+

+
zfs get [-r|-d depth][-Hp][-o field[,...]] [-t type[,...]] 
+
+ [-s source[,...]] "all" | property[,...] filesystem|volume|snapshot ...
+

+

+
zfs inherit [-rS] property filesystem|volume|snapshot ...
+

+

+
zfs upgrade [-v]
+

+

+
zfs upgrade [-r] [-V version] -a | filesystem
+

+

+
zfs userspace [-Hinp] [-o field[,...]] [-s field] ...
+
+ [-S field] ... [-t type[,...]] filesystem|snapshot
+

+

+
zfs groupspace [-Hinp] [-o field[,...]] [-s field] ...
+
+ [-S field] ... [-t type[,...]] filesystem|snapshot
+

+

+
zfs mount 
+

+

+
zfs mount [-vO] [-o options] -a | filesystem
+

+

+
zfs unmount | umount [-f] -a | filesystem|mountpoint
+

+

+
zfs share -a | filesystem
+

+

+
zfs unshare -a filesystem|mountpoint
+

+

+
zfs bookmark snapshot bookmark
+

+

+
zfs send [-DnPpRveL] [-[iI] snapshot] snapshot
+

+

+
zfs send [-eL] [-i snapshot|bookmark] filesystem|volume|snapshot
+

+

+
zfs receive | recv [-vnFu] filesystem|volume|snapshot
+

+

+
zfs receive | recv [-vnFu] [-d|-e] filesystem
+

+

+
zfs allow filesystem|volume
+

+

+
zfs allow [-ldug] "everyone"|user|group[,...] perm|@setname[,...] 
+
+ filesystem|volume
+

+

+
zfs allow [-ld] -e perm|@setname[,...] filesystem|volume
+

+

+
zfs allow -c perm|@setname[,...] filesystem|volume
+

+

+
zfs allow -s @setname perm|@setname[,...] filesystem|volume
+

+

+
zfs unallow [-rldug] "everyone"|user|group[,...] [perm|@setname[,... ]] 
+
+ filesystem|volume
+

+

+
zfs unallow [-rld] -e [perm|@setname[,... ]] filesystem|volume
+

+

+
zfs unallow [-r] -c [perm|@setname[ ... ]] filesystem|volume
+

+

+
zfs unallow [-r] -s @setname [perm|@setname[,... ]] filesystem|volume
+

+

+
zfs hold [-r] tag snapshot...
+

+

+
zfs holds [-r] snapshot...
+

+

+
zfs release [-r] tag snapshot...
+

+

+
zfs diff [-FHt] snapshot snapshot|filesystem
+
+
+
+

+

The zfs command configures ZFS datasets within a + ZFS storage pool, as described in zpool(8). A dataset is + identified by a unique path within the ZFS namespace. For + example:

+

+
+

+
pool/{filesystem,volume,snapshot}
+
+

+

+

+

where the maximum length of a dataset name is MAXNAMELEN + (256 bytes).

+

+

A dataset can be one of the following:

+

file system

+

+
A ZFS dataset of type filesystem can be + mounted within the standard system namespace and behaves like other file + systems. While ZFS file systems are designed to be POSIX + compliant, known issues exist that prevent compliance in some cases. + Applications that depend on standards conformance might fail due to + nonstandard behavior when checking file system free space.
+

+

volume

+

+
A logical volume exported as a raw or block device. This + type of dataset should only be used under special circumstances. File systems + are typically used in most environments.
+

+

snapshot

+

+
A read-only version of a file system or volume at a given + point in time. It is specified as filesystem@name or + volume@name.
+

+

bookmark

+

+
Much like a snapshot, but without the hold on + on-disk data. It can be used as the source of a send (but not for a receive). + It is specified as filesystem#name or volume#name.
+

+
+

+

A ZFS storage pool is a logical collection of devices that + provide space for datasets. A storage pool is also the root of the + ZFS file system hierarchy.

+

+

The root of the pool can be accessed as a file system, such as + mounting and unmounting, taking snapshots, and setting properties. The + physical storage characteristics, however, are managed by the + zpool(8) command.

+

+

See zpool(8) for more information on creating and + administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. + Snapshots can be created extremely quickly, and initially consume no + additional space within the pool. As data within the active dataset changes, + the snapshot consumes more data than would otherwise be shared with the + active dataset.

+

+

Snapshots can have arbitrary names. Snapshots of volumes can be + cloned or rolled back. Visibility is determined by the snapdev + property of the parent volume.

+

+

File system snapshots can be accessed under the + .zfs/snapshot directory in the root of the file system. Snapshots are + automatically mounted on demand and may be unmounted at regular intervals. + The visibility of the .zfs directory can be controlled by the + snapdir property.
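For example (names are placeholders), a snapshot is created with zfs snapshot and is then visible under the hidden .zfs directory of the mounted file system:

# zfs snapshot pool/home@monday
# ls /pool/home/.zfs/snapshot/monday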

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

+

Unlike snapshots, bookmarks cannot be accessed through the filesystem in any way. From a storage standpoint a bookmark just provides a way to reference when a snapshot was created as a distinct object. Bookmarks are initially tied to a snapshot, not the filesystem/volume, and they will survive if the snapshot itself is destroyed. Since they are very lightweight, there is little incentive to destroy them.
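A sketch of the intended workflow (dataset and snapshot names are hypothetical): bookmark a snapshot, destroy the snapshot, and later use the bookmark as the incremental source of a send:

# zfs bookmark pool/home@snap1 pool/home#snap1
# zfs destroy pool/home@snap1
# zfs send -i pool/home#snap1 pool/home@snap2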

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

+

Clones can only be created from a snapshot. When a snapshot is + cloned, it creates an implicit dependency between the parent and child. Even + though the clone is created somewhere else in the dataset hierarchy, the + original snapshot cannot be destroyed as long as a clone exists. The + origin property exposes this dependency, and the destroy + command lists any such dependencies, if they exist.

+

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the "origin" file + system to become a clone of the specified file system, which makes it + possible to destroy the file system that the clone was created from.
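For example (names are hypothetical), a clone is created from a snapshot and can later be promoted to reverse the dependency so the original file system can be destroyed:

# zfs clone pool/home@snap1 pool/home_dev
# zfs promote pool/home_dev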

+
+
+

+

Creating a ZFS file system is a simple operation, so the + number of file systems per system is likely to be numerous. To cope with + this, ZFS automatically manages mounting and unmounting file systems + without the need to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

+

By default, file systems are mounted under /path, + where path is the name of the file system in the ZFS + namespace. Directories are created and destroyed as needed.

+

+

A file system can also have a mount point set in the + mountpoint property. This directory is created as needed, and + ZFS automatically mounts the file system when the zfs mount -a + command is invoked (without editing /etc/fstab). The + mountpoint property can be inherited, so if pool/home has a + mount point of /export/stuff, then pool/home/user + automatically inherits a mount point of /export/stuff/user.

+

+

A file system mountpoint property of none prevents + the file system from being mounted.

+

+

If needed, ZFS file systems can also be managed with + traditional tools (mount, umount, /etc/fstab). If a + file system's mount point is set to legacy, ZFS makes no + attempt to manage the file system, and the administrator is responsible for + mounting and unmounting the file system.
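A brief illustration (paths and names are placeholders) of the inherited mountpoint behavior described above:

# zfs set mountpoint=/export/stuff pool/home
# zfs get -r mountpoint pool/home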

+
+
+

+

Deduplication is the process for removing redundant data at the + block-level, reducing the total amount of data stored. If a file system has + the dedup property enabled, duplicate data blocks are removed + synchronously. The result is that only unique data is stored and common + components are shared among files.

+

WARNING: DO NOT ENABLE DEDUPLICATION UNLESS YOU NEED IT AND + KNOW EXACTLY WHAT YOU ARE DOING!

+

Deduplicating data is a very resource-intensive operation. It is generally recommended that you have at least 1.25 GB of RAM per 1 TB of storage when you enable deduplication. Calculating the exact requirements, however, is a somewhat complicated affair. Please see the Oracle Dedup Guide for more information.

+

Enabling deduplication on an improperly-designed system will + result in extreme performance issues (extremely slow filesystem and snapshot + deletions etc.) and can potentially lead to data loss (i.e. unimportable + pool due to memory exhaustion) if your system is not built for this purpose. + Deduplication affects the processing power (CPU), disks (and the controller) + as well as primary (real) memory.

+

Before creating a pool with deduplication enabled, ensure that you + have planned your hardware requirements appropriately and implemented + appropriate recovery practices, such as regular backups.

+

Unless necessary, deduplication should NOT be enabled on a system. + Instead, consider using compression=lz4, as a less resource-intensive + alternative.
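If the goal is simply space savings, the suggested lighter-weight alternative looks like this (the dataset name is a placeholder):

# zfs set compression=lz4 tank/data
# zfs get compressratio tank/data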

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, + native properties are either editable or read-only. User properties have no + effect on ZFS behavior, but you can use them to annotate datasets in + a way that is meaningful in your environment. For more information about + user properties, see the "User Properties" section, below.

+

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

+

The values of numeric properties can be specified using + human-readable suffixes (for example, k, KB, M, + Gb, and so forth, up to Z for zettabyte). The following are + all valid (and equal) specifications:

+

+
+

+
1536M, 1.5g, 1.50GB
+
+

+

+

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, sharenfs, and + sharesmb.

+

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+

available

+

+
The amount of space available to the dataset and all its + children, assuming that there is no other activity in the pool. Because space + is shared within a pool, availability can be limited by any number of factors, + including physical pool size, quotas, reservations, or other datasets within + the pool. +

This property can also be referred to by its shortened column + name, avail.

+
+

+

compressratio

+

+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. The used + property includes descendant datasets, and, for clones, does not include the + space shared with the origin snapshot. For snapshots, the compressratio + is the same as the refcompressratio property. Compression can be turned + on by running: zfs set compression=on dataset. The default value + is off.
+

+

creation

+

+
The time this dataset was created.
+

+

clones

+

+
For snapshots, this property is a comma-separated list of + filesystems or volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the clones property is not + empty, then this snapshot can not be destroyed (even with the -r or + -f options).
+

+

defer_destroy

+

+
This property is on if the snapshot has been + marked for deferred destruction by using the zfs destroy -d + command. Otherwise, the property is off.
+

+

filesystem_count

+

+
The total number of filesystems and volumes that exist + under this location in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree under which the + dataset resides.
+

+

logicalreferenced

+

+
The amount of space that is "logically" + accessible by this dataset. See the referenced property. The logical + space ignores the effect of the compression and copies + properties, giving a quantity closer to the amount of data that applications + see. However, it does include space consumed by metadata. +

This property can also be referred to by its shortened column + name, lrefer.

+
+

+

logicalused

+

+
The amount of space that is "logically" + consumed by this dataset and all its descendents. See the used + property. The logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the amount of data that + applications see. However, it does include space consumed by metadata. +

This property can also be referred to by its shortened column + name, lused.

+
+

+

mounted

+

+
For file systems, indicates whether the file system is + currently mounted. This property can be either yes or no.
+

+

origin

+

+
For cloned file systems or volumes, the snapshot from + which the clone was created. See also the clones property.
+

+

referenced

+

+
The amount of data that is accessible by this dataset, + which may or may not be shared with other datasets in the pool. When a + snapshot or clone is created, it initially references the same amount of space + as the file system or snapshot it was created from, since its contents are + identical. +

This property can also be referred to by its shortened column + name, refer.

+
+

+

refcompressratio

+

+
The compression ratio achieved for the referenced + space of this dataset, expressed as a multiplier. See also the + compressratio property.
+

+

snapshot_count

+

+
The total number of snapshots that exist under this + location in the dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under which the + dataset resides.
+

+

type

+

+
The type of dataset: filesystem, volume, or + snapshot.
+

+

used

+

+
The amount of space consumed by this dataset and all its + descendents. This is the value that is checked against this dataset's quota + and reservation. The space used does not include this dataset's reservation, + but does take into account the reservations of any descendent datasets. The + amount of space that a dataset consumes from its parent, as well as the amount + of space that are freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

When snapshots (see the "Snapshots" section) are + created, their space is initially shared between the snapshot and the file + system, and possibly with previous snapshots. As the file system changes, + space that was previously shared becomes unique to the snapshot, and counted + in the snapshot's space used. Additionally, deleting snapshots can increase + the amount of space unique to (and used by) other snapshots.

+

The amount of space used, available, or referenced does not take + into account pending changes. Pending changes are generally accounted for + within a few seconds. Committing a change to a disk using fsync(2) or + O_SYNC does not necessarily guarantee that the space usage + information is updated immediately.

+
+

+

usedby*

+

+
The usedby* properties decompose the used properties into the various reasons that space is used. Specifically, used = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots. These properties are only available for datasets created on zpool "version 13" pools.
+

+

usedbychildren

+

+
The amount of space used by children of this dataset, + which would be freed if all the dataset's children were destroyed.
+

+

usedbydataset

+

+
The amount of space used by this dataset itself, which + would be freed if the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+

+

usedbyrefreservation

+

+
The amount of space used by a refreservation set + on this dataset, which would be freed if the refreservation was + removed.
+

+

usedbysnapshots

+

+
The amount of space consumed by snapshots of this + dataset. In particular, it is the amount of space that would be freed if all + of this dataset's snapshots were destroyed. Note that this is not simply the + sum of the snapshots' used properties because space can be shared by + multiple snapshots.
+

+

userused@user

+

+
The amount of space consumed by the specified user in + this dataset. Space is charged to the owner of each file, as displayed by + ls -l. The amount of space charged is displayed by du and + ls -s. See the zfs userspace subcommand for more + information. +

Unprivileged users can access only their own space usage. The root + user, or a user who has been granted the userused privilege with + zfs allow, can access everyone's usage.

+

The userused@... properties are not displayed by zfs get + all. The user's name must be appended after the @ symbol, using + one of the following forms:

+
+
+
+
POSIX name (for example, joe)
+
+
+
+
+
+
POSIX numeric ID (for example, 789)
+
+
+
+
+
+
SID name (for example, joe.smith@mydomain)
+
+
+
+
+
+
SID numeric ID (for example, S-1-123-456-789)
+
+
+
+

+

userrefs

+

+
This property is set to the number of user holds on this + snapshot. User holds are set by using the zfs hold command.
+

+

groupused@group

+

+
The amount of space consumed by the specified group in + this dataset. Space is charged to the group of each file, as displayed by + ls -l. See the userused@user property for more + information. +

Unprivileged users can only access their own groups' space usage. + The root user, or a user who has been granted the groupused privilege + with zfs allow, can access all groups' usage.

+
+

+

volblocksize=blocksize

+

+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been written, so it + should be set at volume creation time. The default blocksize for + volumes is 8 Kbytes. Any power of 2 from 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its shortened column + name, volblock.

+
+

+

written

+

+
The amount of referenced space written to this + dataset since the previous snapshot.
+

+

written@snapshot

+

+
The amount of referenced space written to this + dataset since the specified snapshot. This is the space that is referenced by + this dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short snapshot name (just the part after the @), in which case it will be interpreted as a snapshot in the same filesystem as this dataset. The snapshot may also be a full snapshot name (filesystem@snapshot), which for clones may be a snapshot in the origin's filesystem (or the origin of the origin's filesystem, etc).

+
+

+

+

The following native properties can be used to change the behavior + of a ZFS dataset.

+

aclinherit=discard | noallow | + restricted | passthrough | passthrough-x

+

+
Controls how ACL entries are inherited when files + and directories are created. A file system with an aclinherit property + of discard does not inherit any ACL entries. A file system with + an aclinherit property value of noallow only inherits + inheritable ACL entries that specify "deny" permissions. The + property value restricted (the default) removes the write_acl + and write_owner permissions when the ACL entry is inherited. A + file system with an aclinherit property value of passthrough + inherits all inheritable ACL entries without any modifications made to + the ACL entries when they are inherited. A file system with an + aclinherit property value of passthrough-x has the same meaning + as passthrough, except that the owner@, group@, and + everyone@ ACEs inherit the execute permission only if the file + creation mode also requests the execute bit. +

When the property value is set to passthrough, files are + created with a mode determined by the inheritable ACEs. If no + inheritable ACEs exist that affect the mode, then the mode is set in + accordance to the requested mode from the application.

+

The aclinherit property does not apply to Posix ACLs.

+
+

+

acltype=noacl | posixacl

+

+
Controls whether ACLs are enabled and if so what type of + ACL to use. When a file system has the acltype property set to + noacl (the default) then ACLs are disabled. Setting the acltype + property to posixacl indicates Posix ACLs should be used. Posix ACLs + are specific to Linux and are not functional on other platforms. Posix ACLs + are stored as an xattr and therefore will not overwrite any existing ZFS/NFSv4 + ACLs which may be set. Currently only posixacls are supported on Linux. +

To obtain the best performance when setting posixacl users + are strongly encouraged to set the xattr=sa property. This will + result in the Posix ACL being stored more efficiently on disk. But as a + consequence of this all new xattrs will only be accessible from ZFS + implementations which support the xattr=sa property. See the + xattr property for more details.
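A sketch of the recommended combination for Posix ACL use (the dataset name is hypothetical):

# zfs set acltype=posixacl tank/fs
# zfs set xattr=sa tank/fs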

+
+

+

atime=on | off

+

+
Controls whether the access time for files is updated + when they are read. Turning this property off avoids producing write traffic + when reading files and can result in significant performance gains, though it + might confuse mailers and other similar utilities. The default value is + on. See also relatime below.
+

+

canmount=on | off | noauto

+

+
If this property is set to off, the file system + cannot be mounted, and is ignored by zfs mount -a. Setting this + property to off is similar to setting the mountpoint property to + none, except that the dataset still has a normal mountpoint + property, which can be inherited. Setting this property to off allows + datasets to be used solely as a mechanism to inherit properties. One example + of setting canmount=off is to have two datasets with the same + mountpoint, so that the children of both datasets appear in the same + directory, but might have different inherited characteristics. +

When the noauto option is set, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted automatically + when the dataset is created or imported, nor is it mounted by the zfs + mount -a command or unmounted by the zfs unmount -a command.

+

This property is not inherited.

+
+

+

checksum=on | off | fletcher2 | fletcher4 | sha256

+

+
Controls the checksum used to verify data integrity. The + default value is on, which automatically selects an appropriate + algorithm (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on user data. + Disabling checksums is NOT a recommended practice. +

Changing this property affects only newly-written data.

+
+

+

compression=on | off | lzjb | + lz4 | gzip | gzip-N | zle

+

+
Controls the compression algorithm used for this dataset. +

Setting compression to on indicates that the current default compression algorithm should be used. The default balances compression and decompression speed with compression ratio, and is expected to work well on a wide variety of workloads. Unlike all other settings for this property, on does not select a fixed compression type. As new compression algorithms are added to ZFS and enabled on a pool, the default compression algorithm may change. The current default compression algorithm is either lzjb or, if the lz4_compress feature is enabled, lz4.

+

The lzjb compression algorithm is optimized for performance + while providing decent data compression.

+

The lz4 compression algorithm is a high-performance + replacement for the lzjb algorithm. It features significantly faster + compression and decompression, as well as a moderately higher compression + ratio than lzjb, but can only be used on pools with the + lz4_compress feature set to enabled. See + zpool-features(5) for details on ZFS feature flags and the + lz4_compress feature.

+

The gzip compression algorithm uses the same compression as + the gzip(1) command. You can specify the gzip level by using + the value gzip-N where N is an integer from 1 (fastest) + to 9 (best compression ratio). Currently, gzip is equivalent to + gzip-6 (which is also the default for gzip(1)). The zle + compression algorithm compresses runs of zeros.

+

This property can also be referred to by its shortened column name + compress. Changing this property affects only newly-written data.
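For example (the dataset name is a placeholder), selecting a specific gzip level for an archive dataset; only data written afterwards is affected:

# zfs set compression=gzip-9 tank/archive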

+
+

+

copies=1 | 2 | 3

+

+
Controls the number of copies of data stored for this + dataset. These copies are in addition to any redundancy provided by the pool, + for example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated file + and dataset, changing the used property and counting against quotas and + reservations. +

Changing this property only affects newly-written data. Therefore, + set this property at file system creation time by using the -o + copies=N option.

+
+

+

dedup=on | off | verify | + sha256[,verify]

+

+
Controls whether deduplication is in effect for a + dataset. The default value is off. The default checksum used for + deduplication is sha256 (subject to change). When dedup is + enabled, the dedup checksum algorithm overrides the checksum + property. Setting the value to verify is equivalent to specifying + sha256,verify. +

If the property is set to verify, then, whenever two blocks + have the same signature, ZFS will do a byte-for-byte comparison with the + existing block to ensure that the contents are identical.

+

Unless necessary, deduplication should NOT be enabled on a system. + See Deduplication above.

+
+

+

devices=on | off

+

+
Controls whether device nodes can be opened on this file + system. The default value is on.
+

+

exec=on | off

+

+
Controls whether processes can be executed from within + this file system. The default value is on.
+

+

mlslabel=label | none

+

+
The mlslabel property is a sensitivity label that + determines if a dataset can be mounted in a zone on a system with Trusted + Extensions enabled. If the labeled dataset matches the labeled zone, the + dataset can be mounted and accessed from the labeled zone. +

When the mlslabel property is not set, the default value is + none. Setting the mlslabel property to none is + equivalent to removing the property.

+

The mlslabel property can be modified only when Trusted + Extensions is enabled and only with appropriate privilege. Rights to modify + it cannot be delegated. When changing a label to a higher label or setting + the initial dataset label, the {PRIV_FILE_UPGRADE_SL} privilege is + required. When changing a label to a lower label or the default + (none), the {PRIV_FILE_DOWNGRADE_SL} privilege is required. + Changing the dataset to labels other than the default can be done only when + the dataset is not mounted. When a dataset with the default label is mounted + into a labeled-zone, the mount operation automatically sets the + mlslabel property to the label of that zone.

+

When Trusted Extensions is not enabled, only datasets with + the default label (none) can be mounted.

+

Zones are a Solaris feature and are not relevant on Linux.

+
+

+

filesystem_limit=count | none

+

+
Limits the number of filesystems and volumes that can + exist under this point in the dataset tree. The limit is not enforced if the + user is allowed to change the limit. Setting a filesystem_limit on a + descendent of a filesystem that already has a filesystem_limit does not + override the ancestor's filesystem_limit, but rather imposes an additional + limit. This feature must be enabled to be used (see + zpool-features(5)).
+

+

mountpoint=path | none | + legacy

+

+
Controls the mount point used for this file system. See + the "Mount Points" section for more information on how this property + is used. +

When the mountpoint property is changed for a file system, + the file system and any children that inherit the mount point are unmounted. + If the new value is legacy, then they remain unmounted. Otherwise, + they are automatically remounted in the new location if the property was + previously legacy or none, or if they were mounted before the + property was changed. In addition, any shared file systems are unshared and + shared in the new location.

+
+

+

nbmand=on | off

+

+
Controls whether the file system should be mounted with + nbmand (Non Blocking mandatory locks). This is used for CIFS + clients. Changes to this property only take effect when the file system is + umounted and remounted. See mount(8) for more information on + nbmand mounts.
+

+

primarycache=all | none | + metadata

+

+
Controls what is cached in the primary cache (ARC). If + this property is set to all, then both user data and metadata is + cached. If this property is set to none, then neither user data nor + metadata is cached. If this property is set to metadata, then only + metadata is cached. The default value is all.
+

+

quota=size | none

+

+
Limits the amount of space a dataset and its descendents + can consume. This property enforces a hard limit on the amount of space used. + This includes all space consumed by descendents, including file systems and + snapshots. Setting a quota on a descendent of a dataset that already has a + quota does not override the ancestor's quota, but rather imposes an additional + limit. +

Quotas cannot be set on volumes, as the volsize property + acts as an implicit quota.
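For illustration (names are placeholders), a quota on a parent limits it and all descendents, while refquota on a child limits only that child's referenced data:

# zfs set quota=50G pool/home
# zfs set refquota=10G pool/home/user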

+
+

+

snapshot_limit=count | none

+

+
Limits the number of snapshots that can be created on a + dataset and its descendents. Setting a snapshot_limit on a descendent of a + dataset that already has a snapshot_limit does not override the ancestor's + snapshot_limit, but rather imposes an additional limit. The limit is not + enforced if the user is allowed to change the limit. For example, this means + that recursive snapshots taken from the global zone are counted against each + delegated dataset within a zone. This feature must be enabled to be used (see + zpool-features(5)).
+

+

userquota@user=size | none

+

+
Limits the amount of space consumed by the specified + user. Similar to the refquota property, the userquota space + calculation does not include space that is used by descendent datasets, such + as snapshots and clones. User space consumption is identified by the + userspace@user property. +

Enforcement of user quotas may be delayed by several seconds. This delay means that a user might exceed their quota before the system notices that they are over quota and begins to refuse additional writes with the EDQUOT error message. See the zfs userspace subcommand for more information.

+

Unprivileged users can only access their own space usage. The root user, or a user who has been granted the userquota privilege with zfs allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems before + version 4, or on pools before version 15. The userquota@... + properties are not displayed by zfs get all. The user's name must be + appended after the @ symbol, using one of the following forms:

+
+
+
+
POSIX name (for example, joe)
+
+
+
+
+
+
POSIX numeric ID (for example, 789)
+
+
+
+
+
+
SID name (for example, joe.smith@mydomain)
+
+
+
+
+
+
SID numeric ID (for example, S-1-123-456-789)
+
+
+
+

+

groupquota@group=size | + none

+

+
Limits the amount of space consumed by the specified group. Group space consumption is identified by the groupused@group property.

Unprivileged users can access only their own groups' space usage. + The root user, or a user who has been granted the groupquota + privilege with zfs allow, can get and set all groups' quotas.

+
+

+

readonly=on | off

+

+
Controls whether this dataset can be modified. The + default value is off. +

This property can also be referred to by its shortened column + name, rdonly.

+
+

+

recordsize=size

+

+
Specifies a suggested block size for files in the file + system. This property is designed solely for use with database workloads that + access files in fixed-size records. ZFS automatically tunes block sizes + according to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of the database + can result in significant performance gains. Use of this property for + general purpose file systems is strongly discouraged, and may adversely + affect performance.

+

The size specified must be a power of two greater than or equal to + 512 and less than or equal to 128 Kbytes.

+

Changing the file system's recordsize affects only files + created afterward; existing files are unaffected.

+

This property can also be referred to by its shortened column + name, recsize.
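As a sketch (the dataset name is hypothetical), matching recordsize to a database's record size before any data is written:

# zfs create -o recordsize=16K tank/db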

+
+

+

redundant_metadata=all | most

+

+
Controls what types of metadata are stored redundantly. + ZFS stores an extra copy of metadata, so that if a single block is corrupted, + the amount of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and is in + addition to an extra copy specified by the copies property (up to a + total of 3 copies). For example if the pool is mirrored, copies=2, and + redundant_metadata=most, then ZFS stores 6 copies of most metadata, and + 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of all metadata. + If a single on-disk block is corrupt, at worst a single block of user data + (which is recordsize bytes long) can be lost.

+

When set to most, ZFS stores an extra copy of most types of + metadata. This can improve performance of random writes, because less + metadata must be written. In practice, at worst about 100 blocks (of + recordsize bytes each) of user data can be lost if a single on-disk + block is corrupt. The exact behavior of which metadata blocks are stored + redundantly may change in future releases.

+

The default value is all.
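A brief sketch: a database file system that can tolerate reduced metadata redundancy in exchange for fewer metadata writes might be configured as follows (pool/db is a hypothetical dataset):

# zfs set redundant_metadata=most pool/db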

+
+

+

refquota=size | none

+

+
Limits the amount of space a dataset can consume. This + property enforces a hard limit on the amount of space used. This hard limit + does not include space used by descendents, including file systems and + snapshots.
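For example, to cap the space referenced by a hypothetical home file system at 10 Gbytes while still allowing its snapshots and clones to consume additional space:

# zfs set refquota=10G pool/home/bob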
+

+

refreservation=size | none

+

+
The minimum amount of space guaranteed to a dataset, not + including its descendents. When the amount of space used is below this value, + the dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation reservation is accounted + for in the parent datasets' space used, and counts against the parent + datasets' quotas and reservations. +

If refreservation is set, a snapshot is only allowed if + there is enough free pool space outside of this reservation to accommodate + the current number of "referenced" bytes in the dataset.

+

This property can also be referred to by its shortened column + name, refreserv.

+
+

+

relatime=on | off

+

+
Controls the manner in which the access time is updated + when atime=on is set. Turning this property on causes the access + time to be updated relative to the modify or change time. Access time is only + updated if the previous access time was earlier than the current modify or + change time or if the existing access time hasn't been updated within the past + 24 hours. The default value is off.
+

+

reservation=size | none

+

+
The minimum amount of space guaranteed to a dataset and + its descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified by + its reservation. Reservations are accounted for in the parent datasets' space + used, and count against the parent datasets' quotas and reservations. +

This property can also be referred to by its shortened column + name, reserv.

+
+

+

secondarycache=all | none | + metadata

+

+
Controls what is cached in the secondary cache (L2ARC). If this property is set to all, then both user data and metadata are cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.
+

+

setuid=on | off

+

+
Controls whether the set-UID bit is respected for + the file system. The default value is on.
+

+

shareiscsi=on | off

+

+
Like the sharenfs property, shareiscsi + indicates whether a ZFS volume is exported as an iSCSI target. + The acceptable values for this property are on, off, and + type=disk. The default value is off. In the future, other target + types might be supported. For example, tape. +

You might want to set shareiscsi=on for a file system so + that all ZFS volumes within the file system are shared by default. + However, setting this property on a file system has no direct effect.

+
+

+

sharesmb=on | off

+

+
Controls whether the file system is shared by using Samba USERSHARES, and what options are to be used. A file system with the sharesmb property set to off is not shared by ZFS; otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the net(8) command is invoked to create a USERSHARE.

Because SMB shares require a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name except that the characters in the dataset name which would be illegal in the resource name are replaced with underscore (_) characters. The ZFS on Linux driver does not (yet) support additional options which might be available in the Solaris version.

+

If the sharesmb property is set to off, the file + systems are unshared.

+

In Linux, the share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", i.e. read and write permissions) and no guest access by default (which means Samba must be able to authenticate a real user against the system passwd/shadow files, LDAP, or smbpasswd). This means that any additional access control (disallowing specific users, restricting specific access, etc.) must be done on the underlying filesystem.

+

+
Example: mounting an SMB filesystem shared through ZFS (share/tmp). Note that a user name and password must be given.

+

+
+ smbmount //127.0.0.1/share_tmp /mnt/tmp -o + user=workgroup/turbo,password=obrut,uid=1000 +
+
+

+

Minimal /etc/samba/smb.conf configuration

+

+
* Samba will need to listen on 'localhost' (127.0.0.1) for the zfs utilities to communicate with Samba. This is the default behavior for most Linux distributions.

+

* Samba must be able to authenticate a user. This can be done in a number of ways, depending on whether the system password file, LDAP, or the Samba-specific smbpasswd file is used. How to do this is outside the scope of this manual. Please refer to the smb.conf(5) manpage for more information.

+

* See the USERSHARE section of the smb.conf(5) man page for all configuration options, in case you need to modify any options of the share afterwards. Do note that any changes made with the 'net' command will be undone if the share is ever unshared (such as at a reboot). In the future, ZoL will be able to set specific options directly using sharesmb=<option>.

+

+
+

+
+

+

sharenfs=on | off | opts

+

+
Controls whether the file system is shared via NFS, and what options are used. A file system with a sharenfs property of off is managed with the exportfs(8) command and entries in the /etc/exports file. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the dataset is shared using the exportfs(8) command in the following manner (see exportfs(8) for the meaning of the different options):

+
+

+
/usr/sbin/exportfs -i -o sec=sys,rw,no_subtree_check,no_root_squash,mountpoint *:<mountpoint of dataset>
+
+

Otherwise, the exportfs(8) command is invoked with options + equivalent to the contents of this property.

+

When the sharenfs property is changed for a dataset, the + dataset and any children inheriting the property are re-shared with the new + options, only if the property was previously off, or if they were + shared before the property was changed. If the new property is off, + the file systems are unshared.

+
+

+

logbias = latency | throughput

+

+
Provide a hint to ZFS about handling of synchronous + requests in this dataset. If logbias is set to latency (the + default), ZFS will use pool log devices (if configured) to handle the requests + at low latency. If logbias is set to throughput, ZFS will not + use configured pool log devices. ZFS will instead optimize synchronous + operations for global pool throughput and efficient use of resources.
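As an illustration, a file system carrying bulk synchronous writes (for example, a backup target) might be switched to throughput mode, bypassing any separate log device (the dataset name is hypothetical):

# zfs set logbias=throughput pool/backup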
+

+

snapdev=hidden | visible

+

+
Controls whether the snapshot devices of zvols are hidden or visible. The default value is hidden.
+

+

snapdir=hidden | visible

+

+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + "Snapshots" section. The default value is hidden.
+

+

sync=standard | always | + disabled

+

+
Controls the behavior of synchronous requests (e.g. + fsync, O_DSYNC). standard is the POSIX specified behavior of ensuring + all synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to be written and + flushed before its system call returns. This has a large performance penalty. + disabled disables synchronous requests. File system transactions are + only committed to stable storage periodically. This option will give the + highest performance. However, it is very dangerous as ZFS would be ignoring + the synchronous transaction demands of applications such as databases or NFS. + Administrators should only use this option when the risks are + understood.
+

+

version=1 | 2 | current

+

+
The on-disk version of this file system, which is + independent of the pool version. This property can only be set to later + supported versions. See the zfs upgrade command.
+

+

volsize=size

+

+
For volumes, specifies the logical size of the volume. By + default, creating a volume establishes a reservation of equal size. For + storage pools with a version number of 9 or higher, a refreservation is + set instead. Any changes to volsize are reflected in an equivalent + change to the reservation (or refreservation). The volsize can + only be set to a multiple of volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly when + shrinking the size). Extreme care should be used when adjusting the volume + size.

+

Though not recommended, a "sparse volume" (also known as "thin provisioning") can be created by specifying the -s option to the zfs create -V command, or by changing the reservation after the volume has been created. A "sparse volume" is a volume where the reservation is less than the volume size. Consequently, writes to a sparse volume can fail with ENOSPC when the pool is low on space. For a sparse volume, changes to volsize are not reflected in the reservation.
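A minimal sketch of both forms, using hypothetical names: the first command creates a regular 100 Gbyte volume with a matching (ref)reservation, the second creates a sparse volume of the same logical size with no reservation:

# zfs create -V 100G pool/vol
# zfs create -s -V 100G pool/sparsevol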

+
+

+

vscan=on | off

+

+
Controls whether regular files should be scanned for + viruses when a file is opened and closed. In addition to enabling this + property, the virus scan service must also be enabled for virus scanning to + occur. The default value is off.
+

+

xattr=on | off | sa

+

+
Controls whether extended attributes are enabled for this file system. Two styles of extended attributes are supported: directory based and system attribute based.

The default value of on enables directory based extended attributes. This style of xattr imposes no practical limit on either the size or number of xattrs which may be set on a file, although under Linux the getxattr(2) and setxattr(2) system calls limit the maximum xattr size to 64K. This is the most compatible style of xattr and it is supported by the majority of ZFS implementations.

+

System attribute based xattrs may be enabled by setting the value + to sa. The key advantage of this type of xattr is improved + performance. Storing xattrs as system attributes significantly decreases the + amount of disk IO required. Up to 64K of xattr data may be stored per file + in the space reserved for system attributes. If there is not enough space + available for an xattr then it will be automatically written as a directory + based xattr. System attribute based xattrs are not accessible on platforms + which do not support the xattr=sa feature.

+

The use of system attribute based xattrs is strongly encouraged for users of SELinux or POSIX ACLs. Both of these features heavily rely on xattrs and benefit significantly from the reduced xattr access time.
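For example, a file system holding SELinux-labelled data could have system attribute based xattrs enabled as follows (the dataset name is hypothetical):

# zfs set xattr=sa pool/var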

+
+

+

zoned=on | off

+

+
Controls whether the dataset is managed from a non-global + zone. Zones are a Solaris feature and are not relevant on Linux. The default + value is off.
+

+

+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs create or + zpool create commands, these properties are inherited from the parent + dataset. If the parent dataset lacks these properties due to having been + created prior to these features being supported, the new file system will + have the default values for these properties.

+

casesensitivity=sensitive | + insensitive | mixed

+

+
Indicates whether the file name matching algorithm used + by the file system should be case-sensitive, case-insensitive, or allow a + combination of both styles of matching. The default value for the + casesensitivity property is sensitive. Traditionally, UNIX and + POSIX file systems have case-sensitive file names. +

The mixed value for the casesensitivity property + indicates that the file system can support requests for both case-sensitive + and case-insensitive matching behavior. Currently, case-insensitive matching + behavior on a file system that supports mixed behavior is limited to the + Solaris CIFS server product. For more information about the mixed + value behavior, see the Solaris ZFS Administration Guide.

+
+

+

normalization = none | formC | + formD | formKC | formKD

+

+
Indicates whether the file system should perform a Unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+

+

utf8only=on | off

+

+
Indicates whether the file system should reject file + names that include characters that are not present in the UTF-8 + character code set. If this property is explicitly set to off, the + normalization property must either not be explicitly set or be set to + none. The default value for the utf8only property is off. + This property cannot be changed after the file system is created.
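Because these three properties cannot be changed later, they are typically supplied at creation time. A sketch, with a hypothetical dataset name and illustrative values:

# zfs create -o casesensitivity=mixed -o normalization=formD -o utf8only=on pool/share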
+

+

+

The casesensitivity, normalization, and + utf8only properties are also new permissions that can be assigned to + non-privileged users by using the ZFS delegated administration + feature.

+

+

context=SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level

+

+
This flag sets the SELinux context for all files in the filesystem under the mountpoint for that filesystem. See selinux(8) for more information.
+

+

fscontext=SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level

+

+
This flag sets the SELinux context for the filesystem being mounted. See selinux(8) for more information.
+

+

defcontext=SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level

+

+
This flag sets the SELinux context for unlabeled files. + See selinux(8) for more information.
+

+

rootcontext=SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level

+

+
This flag sets the SELinux context for the root inode of + the filesystem. See selinux(8) for more information.
+

+

overlay=on | off

+

+
Allow mounting on a busy directory or a directory which + already contains files/directories. This is the default mount behavior for + Linux filesystems. However, for consistency with ZFS on other platforms + overlay mounts are disabled by default. Set overlay=on to enable + overlay mounts.
+

+
+
+

+

When a file system is mounted, either through mount(8) for + legacy mounts or the zfs mount command for normal file systems, its + mount options are set according to its properties. The correlation between + properties and mount options is as follows:

+

+
+

+
+
PROPERTY    MOUNT OPTION
devices     devices/nodevices
exec        exec/noexec
readonly    ro/rw
setuid      setuid/nosetuid
xattr       xattr/noxattr
atime       atime/noatime
relatime    relatime/norelatime
nbmand      nbmand/nonbmand
+
+

+

+

+

In addition, these options can be set on a per-mount basis using + the -o option, without affecting the property that is stored on disk. + The values specified on the command line override the values stored in the + dataset. The -nosuid option is an alias for + nodevices,nosetuid. These properties are reported as + "temporary" by the zfs get command. If the properties are + changed while the dataset is mounted, the new setting overrides any + temporary settings.
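For example, a file system could be mounted read-only for inspection without changing its stored readonly property (names are illustrative); the temporary setting should then be reported with a temporary source:

# zfs mount -o ro pool/home/bob
# zfs get -o property,value,source readonly pool/home/bob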

+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS + behavior, but applications or administrators can use them to annotate + datasets (file systems, volumes, and snapshots).

+

+

User property names must contain a colon (:) character to + distinguish them from native properties. They may contain lowercase letters, + numbers, and the following punctuation characters: colon (:), dash + (-), period (.), and underscore (_). The expected + convention is that the property name is divided into two portions such as + module:property, but this namespace is not enforced by + ZFS. User property names can be at most 256 characters, and cannot + begin with a dash (-).

+

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the module + component of property names to reduce the chance that two + independently-developed packages use the same property name for different + purposes. For example, property names beginning with com.sun. are + reserved for use by Oracle Corporation (which acquired Sun + Microsystems).

+

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, zfs get, zfs set, and so forth) + can be used to manipulate both native properties and user properties. Use + the zfs inherit command to clear a user property . If the property is + not defined in any parent dataset, it is removed entirely. Property values + are limited to 1024 characters.
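A brief sketch, using a hypothetical reversed-DNS module name: set an annotation, read it back, and clear it again with zfs inherit:

# zfs set com.example:department=12345 pool/accounting
# zfs get com.example:department pool/accounting
# zfs inherit com.example:department pool/accounting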

+
+
+

+

ZFS volumes may be used as Linux swap devices. After + creating the volume with the zfs create command set up and enable the + swap area using the mkswap(8) and swapon(8) commands. Do not + swap to a file on a ZFS file system. A ZFS swap file + configuration is not supported.
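A minimal sketch of creating and enabling a swap volume (the volume name and size are illustrative):

# zfs create -V 4G pool/swap
# mkswap /dev/zvol/pool/swap
# swapon /dev/zvol/pool/swap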

+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

zfs ?

+

+
Displays a help message.
+

+

zfs create [-p] [-o + property=value] ... filesystem

+

+
Creates a new ZFS file system. The file system is + automatically mounted according to the mountpoint property inherited + from the parent. +

-p

+

+
Creates all the non-existing parent datasets. Datasets + created in this manner are automatically mounted according to the + mountpoint property inherited from their parent. Any property specified + on the command line using the -o option is ignored. If the target + filesystem already exists, the operation completes successfully.
+

+

-o property=value

+

+
Sets the specified property as if the command zfs + set property=value was invoked at the same time the dataset + was created. Any editable ZFS property can also be set at creation + time. Multiple -o options can be specified. An error results if the + same property is specified in multiple -o options.
+

+
+

+

zfs create [-ps] [-b blocksize] + [-o property=value] ... -V size + volume

+

+
Creates a volume of the given size. The volume is + exported as a block device in /dev/zvol/path, where path + is the name of the volume in the ZFS namespace. The size represents the + logical size as exported by the device. By default, a reservation of equal + size is created. +

size is automatically rounded up to the nearest 128 Kbytes + to ensure that the volume has an integral number of blocks regardless of + blocksize.

+

-p

+

+
Creates all the non-existing parent datasets. Datasets + created in this manner are automatically mounted according to the + mountpoint property inherited from their parent. Any property specified + on the command line using the -o option is ignored. If the target + filesystem already exists, the operation completes successfully.
+

+

-s

+

+
Creates a sparse volume with no reservation. See + volsize in the Native Properties section for more information about + sparse volumes.
+

+

-o property=value

+

+
Sets the specified property as if the zfs set + property=value command was invoked at the same time the dataset + was created. Any editable ZFS property can also be set at creation + time. Multiple -o options can be specified. An error results if the + same property is specified in multiple -o options.
+

+

-b blocksize

+

+
Equivalent to -o + volblocksize=blocksize. If this option is specified in + conjunction with -o volblocksize, the resulting behavior is + undefined.
+

+
+

+

zfs destroy [-fnpRrv] + filesystem|volume

+

+
Destroys the given dataset. By default, the command + unshares any file systems that are currently shared, unmounts any file systems + that are currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +

-r

+

+
Recursively destroy all children.
+

+

-R

+

+
Recursively destroy all dependents, including cloned file + systems outside the target hierarchy.
+

+

-f

+

+
Force an unmount of any file systems using the unmount + -f command. This option has no effect on non-file systems or unmounted + file systems.
+

+

-n

+

+
Do a dry-run ("No-op") deletion. No data will + be deleted. This is useful in conjunction with the -v or -p + flags to determine what data would be deleted.
+

+

-p

+

+
Print machine-parsable verbose information about the + deleted data.
+

+

-v

+

+
Print verbose information about the deleted data.
+

+

Extreme care should be taken when applying either the -r or + the -R options, as they can destroy large portions of a pool and + cause unexpected behavior for mounted file systems in use.

+
+

+

zfs destroy [-dnpRrv] + filesystem|volume@snap[%snap][,...]

+

+
The given snapshots are destroyed immediately if and only if the zfs destroy command without the -d option would have destroyed them. Such immediate destruction would occur, for example, if a snapshot had no clones and the user-initiated reference count were zero.

If a snapshot does not qualify for immediate destruction, it is + marked for deferred destruction. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, at + which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating the + first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or newest + snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same filesystem + or volume may be specified in a comma-separated list of snapshots. Only the + snapshot's short name (the part after the @) should be specified when + using a range or comma-separated list to identify multiple snapshots.
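For example, an inclusive range of snapshots of a hypothetical file system could be destroyed in one command, with a dry run first to confirm what would be removed (snapshot names are illustrative):

# zfs destroy -nv pool/home/bob@monday%friday
# zfs destroy pool/home/bob@monday%friday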

+

-d

+

+
Defer snapshot deletion.
+

+

-r

+

+
Destroy (or mark for deferred destruction) all snapshots + with this name in descendent file systems.
+

+

-R

+

+
Recursively destroy all clones of these snapshots, + including the clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+

+

-n

+

+
Do a dry-run ("No-op") deletion. No data will + be deleted. This is useful in conjunction with the -v or -p + flags to determine what data would be deleted.
+

+

-p

+

+
Print machine-parsable verbose information about the + deleted data.
+

+

-v

+

+
Print verbose information about the deleted data.
+

+

Extreme care should be taken when applying either the -r or + the -R options, as they can destroy large portions of a pool and + cause unexpected behavior for mounted file systems in use.

+
+

+

+

zfs destroy + filesystem|volume#bookmark

+

+
The given bookmark is destroyed. +

+
+

+

zfs snapshot [-r] [-o + property=value] ... + filesystem@snapname|volume@snapname ...

+

+
Creates snapshots with the given names. All previous + modifications by successful system calls to the file system are part of the + snapshots. Snapshots are taken atomically, so that all snapshots correspond to + the same moment in time. See the "Snapshots" section for details. +

-r

+

+
Recursively create snapshots of all descendent + datasets.
+

+

-o property=value

+

+
Sets the specified property; see zfs create for + details.
+

+
+

+

zfs rollback [-rRf] snapshot

+

+
Roll back the given dataset to a previous snapshot. When + a dataset is rolled back, all data that has changed since the snapshot is + discarded, and the dataset reverts to the state at the time of the snapshot. + By default, the command refuses to roll back to a snapshot other than the most + recent one. In order to do so, all intermediate snapshots and bookmarks must + be destroyed by specifying the -r option. +

The -rR options do not recursively destroy the child snapshots of a recursive snapshot. Only direct snapshots of the specified filesystem are destroyed by either of these options. To completely roll back a recursive snapshot, you must roll back the individual child snapshots.

+

-r

+

+
Destroy any snapshots and bookmarks more recent than the + one specified.
+

+

-R

+

+
Recursively destroy any more recent snapshots and + bookmarks, as well as any clones of those snapshots.
+

+

-f

+

+
Used with the -R option to force an unmount of any + clone file systems that are to be destroyed.
+

+
+

+

zfs clone [-p] [-o + property=value] ... snapshot + filesystem|volume

+

+
Creates a clone of the given snapshot. See the + "Clones" section for details. The target dataset can be located + anywhere in the ZFS hierarchy, and is created as the same type as the + original. +

-p

+

+
Creates all the non-existing parent datasets. Datasets + created in this manner are automatically mounted according to the + mountpoint property inherited from their parent. If the target + filesystem or volume already exists, the operation completes + successfully.
+

+

-o property=value

+

+
Sets the specified property; see zfs create for + details.
+

+
+

+

zfs promote clone-filesystem

+

+
Promotes a clone file system to no longer be dependent on + its "origin" snapshot. This makes it possible to destroy the file + system that the clone was created from. The clone parent-child dependency + relationship is reversed, so that the origin file system becomes a clone of + the specified file system. +

The snapshot that was cloned, and any snapshots previous to this + snapshot, are now owned by the promoted clone. The space they use moves from + the origin file system to the promoted clone, so enough space must be + available to accommodate these snapshots. No new space is consumed by this + operation, but the space accounting is adjusted. The promoted clone must not + have any conflicting snapshot names of its own. The rename subcommand + can be used to rename any conflicting snapshots.

+
+

+

zfs rename [-f] + filesystem|volume|snapshot +
+ filesystem|volume|snapshot +
+ zfs rename [-fp] filesystem|volume + filesystem|volume

+

+
Renames the given dataset. The new target can be located + anywhere in the ZFS hierarchy, with the exception of snapshots. + Snapshots can only be renamed within the parent file system or volume. When + renaming a snapshot, the parent file system of the snapshot does not need to + be specified as part of the second argument. Renamed file systems can inherit + new mount points, in which case they are unmounted and remounted at the new + mount point. +

-p

+

+
Creates all the nonexistent parent datasets. Datasets + created in this manner are automatically mounted according to the + mountpoint property inherited from their parent.
+

+

-f

+

+
Force unmount any filesystems that need to be unmounted + in the process.
+

+
+

+

zfs rename -r snapshot + snapshot

+

+
Recursively rename the snapshots of all descendent + datasets. Snapshots are the only dataset that can be renamed + recursively.
+

+

zfs list [-r|-d depth] + [-Hp] [-o property[,...]] [ -t + type[,...]] [ -s property ] ... [ -S + property ] ... [filesystem|volume|snapshot] + ...

+

+
Lists the property information for the given datasets in + tabular form. If specified, you can list property information by the absolute + pathname or the relative pathname. By default, all file systems and volumes + are displayed. Snapshots are displayed if the listsnaps property is + on (the default is off). When listing hundreds or thousands of + snapshots performance can be improved by restricting the output to only the + name. In that case, it is recommended to use -o name -s name. The + following fields are displayed by default, + name,used,available,referenced,mountpoint. +
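For example, to list only the names of all snapshots beneath a hypothetical file system, sorted by name (the form recommended above for large numbers of snapshots):

# zfs list -r -t snapshot -o name -s name pool/home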

-H

+

+
Used for scripting mode. Do not print headers and + separate fields by a single tab instead of arbitrary white space.
+

+

-p

+

+
Display numbers in parsable (exact) values.
+

+

-r

+

+
Recursively display any children of the dataset on the + command line.
+

+

-d depth

+

+
Recursively display any children of the dataset, limiting + the recursion to depth. A depth of 1 will display only the + dataset and its direct children.
+

+

-o property

+

+
A comma-separated list of properties to display. The + property must be: +
+
+
+
One of the properties described in the "Native Properties" + section
+
+
+
+
+
+
A user property
+
+
+
+
+
+
The value name to display the dataset name
+
+
+
+
+
+
The value space to display space usage properties on file systems + and volumes. This is a shortcut for specifying -o + name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t + filesystem,volume syntax.
+
+
+
+

+

-s property

+

+
A property for sorting the output by column in ascending + order based on the value of the property. The property must be one of the + properties described in the "Properties" section, or the special + value name to sort by the dataset name. Multiple properties can be + specified at one time using multiple -s property options. Multiple + -s options are evaluated from left to right in decreasing order of + importance. +

The following is a list of sorting criteria:

+
+
+
+
Numeric types sort in numeric order.
+
+
+
+
+
+
String types sort in alphabetical order.
+
+
+
+
+
+
Rows whose values are inappropriate for sorting on the selected property sort to the bottom, regardless of the specified ordering.
+
+
+
+
+
+
If no sorting options are specified the existing behavior of zfs + list is preserved.
+
+
+
+

+

-S property

+

+
Same as the -s option, but sorts by property in + descending order.
+

+

-t type

+

+
A comma-separated list of types to display, where + type is one of filesystem, snapshot, snap, + volume, bookmark, or all. For example, specifying -t + snapshot displays only snapshots.
+

+
+

+

zfs set property=value + filesystem|volume|snapshot ...

+

+
Sets the property to the given value for each dataset. + Only some properties can be edited. See the "Properties" section for + more information on what properties can be set and acceptable values. Numeric + values can be specified as exact values, or in a human-readable form with a + suffix of B, K, M, G, T, P, + E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, + petabytes, exabytes, or zettabytes, respectively). User properties can be set + on snapshots. For more information, see the "User Properties" + section.
+

+

zfs get [-r|-d depth] + [-Hp] [-o field[,...] [-t type[,...]] + [-s source[,...] "all" | + property[,...] filesystem|volume|snapshot + ...

+

+
Displays properties for the given datasets. If no + datasets are specified, then the command displays properties for all datasets + on the system. For each property, the following columns are displayed: +

+
+

+
+
+ name Dataset name +
+ property Property name +
+ value Property value +
+ source Property source. Can either be local, default, +
+ temporary, inherited, received, or none (-).
+
+

+

All columns are displayed by default, though this can be + controlled by using the -o option. This command takes a + comma-separated list of properties as described in the "Native + Properties" and "User Properties" sections.

+

The special value all can be used to display all properties that apply to the given dataset's type (filesystem, volume, snapshot, or bookmark).

+

-r

+

+
Recursively display properties for any children.
+

+

-d depth

+

+
Recursively display any children of the dataset, limiting + the recursion to depth. A depth of 1 will display only the + dataset and its direct children.
+

+

-H

+

+
Display output in a form more easily parsed by scripts. + Any headers are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+

+

-o field

+

+
A comma-separated list of columns to display. + name,property,value,source is the default value.
+

+

-s source

+

+
A comma-separated list of sources to display. Those + properties coming from a source other than those in this list are ignored. + Each source must be one of the following: + local,default,inherited,received,temporary,none. The default value is + all sources.
+

+

-p

+

+
Display numbers in parsable (exact) values.
+

+
+

+

zfs inherit [-rS] property + filesystem|volume|snapshot ...

+

+
Clears the specified property, causing it to be inherited + from an ancestor, restored to default if no ancestor has the property set, or + with the -S option reverted to the received value if one exists. See + the "Properties" section for a listing of default values, and + details on which properties can be inherited. +

-r

+

+
Recursively inherit the given property for all + children.
+

-S

+

+
Revert the property to the received value if one exists; + otherwise operate as if the -S option was not specified.
+

+
+

+

zfs upgrade [-v]

+

+
Displays a list of file systems that are not the most + recent version.
+

+

zfs upgrade [-r] [-V version] + [-a | filesystem]

+

+
Upgrades file systems to a new on-disk version. Once this + is done, the file systems will no longer be accessible on systems running + older versions of the software. zfs send streams generated from new + snapshots of these file systems cannot be accessed on systems running older + versions of the software. +

In general, the file system version is independent of the pool + version. See zpool(8) for information on the zpool upgrade + command.

+

In some cases, the file system version and the pool version are + interrelated and the pool version must be upgraded before the file system + version can be upgraded.

+

-a

+

+
Upgrade all file systems on all imported pools.
+

+

filesystem

+

+
Upgrade the specified file system.
+

+

-r

+

+
Upgrade the specified file system and all descendent file + systems
+

+

-V version

+

+
Upgrade to the specified version. If the -V + flag is not specified, this command upgrades to the most recent version. This + option can only be used to increase the version number, and only up to the + most recent version supported by this software.
+

+
+

+

zfs userspace [-Hinp] [-o + field[,...]] [-s field] ... [-S field] + ... [-t type[,...]] filesystem|snapshot

+

+
Displays space consumed by, and quotas on, each user in + the specified filesystem or snapshot. This corresponds to the + userused@user and userquota@user properties. +

-n

+

+
Print numeric ID instead of user/group name.
+

+

-H

+

+
Do not print headers, use tab-delimited output.
+

+

-p

+

+
Use exact (parsable) numeric output.
+

+

-o field[,...]

+

+
Display only the specified fields from the following set: + type, name, used, quota. The default is to display all fields.
+

+

-s field

+

+
Sort output by this field. The -s and -S flags may be specified multiple times to sort first by one field, then by another. The default is -s type -s name.
+

+

-S field

+

+
Sort by this field in reverse order. See -s.
+

+

-t type[,...]

+

+
Print only the specified types from the following set: + all, posixuser, smbuser, posixgroup, smbgroup. The default is -t + posixuser,smbuser. The default can be changed to include group + types.
+

+

-i

+

+
Translate SID to POSIX ID. The POSIX ID may be ephemeral + if no mapping exists. Normal POSIX interfaces (for example, stat(2), + ls -l) perform this translation, so the -i option allows + the output from zfs userspace to be compared directly with those + utilities. However, -i may lead to confusion if some files were created + by an SMB user before a SMB-to-POSIX name mapping was established. In such a + case, some files will be owned by the SMB entity and some by the POSIX entity. + However, the -i option will report that the POSIX entity has the total + usage and quota for both.
+

+
+

+

zfs groupspace [-Hinp] [-o + field[,...]] [-s field] ... [-S field] + ... [-t type[,...]] filesystem|snapshot

+

+
Displays space consumed by, and quotas on, each group in + the specified filesystem or snapshot. This subcommand is identical to zfs + userspace, except that the default types to display are -t + posixgroup,smbgroup.
+

+

zfs mount

+

+
Displays all ZFS file systems currently + mounted.
+

+

zfs mount [-vO] [-o options] + -a | filesystem

+

+
Mounts ZFS file systems. Invoked automatically as + part of the boot process. +

-o options

+

+
An optional, comma-separated list of mount options to use + temporarily for the duration of the mount. See the "Temporary Mount Point + Properties" section for details.
+

+

-O

+

+
Perform an overlay mount. See mount(8) for more + information.
+

+

-v

+

+
Report mount progress.
+

+

-a

+

+
Mount all available ZFS file systems. Invoked + automatically as part of the boot process.
+

+

filesystem

+

+
Mount the specified filesystem.
+

+
+

+

zfs unmount [-f] -a | + filesystem|mountpoint

+

+
Unmounts currently mounted ZFS file systems. + Invoked automatically as part of the shutdown process. +

-f

+

+
Forcefully unmount the file system, even if it is + currently in use.
+

+

-a

+

+
Unmount all available ZFS file systems. Invoked + automatically as part of the boot process.
+

+

filesystem|mountpoint

+

+
Unmount the specified filesystem. The command can also be + given a path to a ZFS file system mount point on the system.
+

+
+

+

zfs share -a | filesystem

+

+
Shares available ZFS file systems. +

-a

+

+
Share all available ZFS file systems. Invoked + automatically as part of the boot process.
+

+

filesystem

+

+
Share the specified filesystem according to the + sharenfs and sharesmb properties. File systems are shared when + the sharenfs or sharesmb property is set.
+

+
+

+

zfs unshare -a | + filesystem|mountpoint

+

+
Unshares currently shared ZFS file systems. This + is invoked automatically as part of the shutdown process. +

-a

+

+
Unshare all available ZFS file systems. Invoked + automatically as part of the boot process.
+

+

filesystem|mountpoint

+

+
Unshare the specified filesystem. The command can also be + given a path to a ZFS file system shared on the system.
+

+
+

+

zfs bookmark snapshot bookmark

+

+
Creates a bookmark of the given snapshot. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs send command. +

This feature must be enabled to be used. See + zpool-features(5) for details on ZFS feature flags and the + bookmarks feature.
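A short sketch, using hypothetical dataset, snapshot, and file names: create a bookmark of a snapshot, free the snapshot, and later use the bookmark as the incremental source for a new send:

# zfs bookmark pool/data@snap1 pool/data#bm1
# zfs destroy pool/data@snap1
# zfs snapshot pool/data@snap2
# zfs send -i pool/data#bm1 pool/data@snap2 > incr.zfs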

+
+

+

+

zfs send [-DnPpRveL] [-[iI] + snapshot] snapshot

+

+
Creates a stream representation of the second snapshot, which is written to standard output. The output can be redirected to a file or to a different system (for example, using ssh(1)). By default, a full stream is generated.

-i snapshot

+

+
Generate an incremental stream from the first + snapshot (the incremental source) to the second snapshot (the + incremental target). The incremental source can be specified as the last + component of the snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin + snapshot, which must be fully specified (for example, pool/fs@origin, + not just @origin).
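For example, an incremental stream between two snapshots of a hypothetical file system could be piped directly to zfs receive on another host (host and dataset names are illustrative, and the destination must already hold the incremental source snapshot):

# zfs send -i @monday pool/data@tuesday | ssh backuphost zfs receive tank/backup/data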

+
+

+

-I snapshot

+

+
Generate a stream package that sends all intermediary + snapshots from the first snapshot to the second snapshot. For example, -I + @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The + incremental source may be specified as with the -i option.
+

+

-R

+

+
Generate a replication stream package, which will + replicate the specified filesystem, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent file + systems, and clones are preserved. +

If the -i or -I flags are used in conjunction with + the -R flag, an incremental replication stream is generated. The + current values of properties, and current snapshot and file system names are + set when the stream is received. If the -F flag is specified when + this stream is received, snapshots and file systems that do not exist on the + sending side are destroyed.

+
+

+

-D

+

+
Generate a deduplicated stream. Blocks which would have been sent multiple times in the send stream will only be sent once. The receiving system must also support this feature to receive a deduplicated stream. This flag can be used regardless of the dataset's dedup property, but performance will be much better if the filesystem uses a dedup-capable checksum (e.g. sha256).
+

+

-L

+

+
Generate a stream which may contain blocks larger than + 128KB. This flag has no effect if the large_blocks pool feature is + disabled, or if the recordsize property of this filesystem has never been set + above 128KB. The receiving system must have the large_blocks pool + feature enabled as well. See zpool-features(5) for details on ZFS + feature flags and the large_blocks feature.
+

+

-e

+

+
Generate a more compact stream by using WRITE_EMBEDDED + records for blocks which are stored more compactly on disk by the + embedded_data pool feature. This flag has no effect if the + embedded_data feature is disabled. The receiving system must have the + embedded_data feature enabled. If the lz4_compress feature is + active on the sending system, then the receiving system must have that feature + enabled as well. See zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+

+

-p

+

+
Include the dataset's properties in the stream. This flag + is implicit when -R is specified. The receiving system must also support this + feature.
+

+

-n

+

+
Do a dry-run ("No-op") send. Do not generate + any actual send data. This is useful in conjunction with the -v or + -P flags to determine what data will be sent. In this case, the verbose + output will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes to + standard error).
+

+

-P

+

+
Print machine-parsable verbose information about the + stream package generated.
+

+

-v

+

+
Print verbose information about the stream package + generated. This information includes a per-second report of how much data has + been sent.
+

The format of the stream is committed. You will be able to receive + your streams on future versions of ZFS.

+
+

+

zfs send [-eL] [-i + snapshot|bookmark] + filesystem|volume|snapshot

+

+
Generate a send stream, which may be of a filesystem, and + may be incremental from a bookmark. If the destination is a filesystem or + volume, the pool must be read-only, or the filesystem must not be mounted. + When the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +

+

-i snapshot|bookmark

+

+
Generate an incremental send stream. The incremental + source must be an earlier snapshot in the destination's history. It will + commonly be an earlier snapshot in the destination's filesystem, in which case + it can be specified as the last component of the name (the # or + @ character and following). +

If the incremental target is a clone, the incremental source can + be the origin snapshot, or an earlier snapshot in the origin's filesystem, + or the origin's origin, etc.

+
+

+

-L

+

+
Generate a stream which may contain blocks larger than + 128KB. This flag has no effect if the large_blocks pool feature is + disabled, or if the recordsize property of this filesystem has never been set + above 128KB. The receiving system must have the large_blocks pool + feature enabled as well. See zpool-features(5) for details on ZFS + feature flags and the large_blocks feature.
+

+

-e

+

+
Generate a more compact stream by using WRITE_EMBEDDED + records for blocks which are stored more compactly on disk by the + embedded_data pool feature. This flag has no effect if the + embedded_data feature is disabled. The receiving system must have the + embedded_data feature enabled. If the lz4_compress feature is + active on the sending system, then the receiving system must have that feature + enabled as well. See zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+

+
+

zfs receive [-vnFu] + filesystem|volume|snapshot +
+ zfs receive [-vnFu] [-d|-e] + filesystem

+

+
Creates a snapshot whose contents are as specified in the + stream provided on standard input. If a full stream is received, then a new + file system is created as well. Streams are created using the zfs send + subcommand, which by default creates a full stream. zfs recv can be + used as an alias for zfs receive. +

If an incremental stream is received, then the destination file + system must already exist, and its most recent snapshot must match the + incremental stream's source. For zvols, the destination device link + is destroyed and recreated, which means the zvol cannot be accessed + during the receive operation.

+

When a snapshot replication package stream that is generated by + using the zfs send -R command is received, any snapshots that + do not exist on the sending location are destroyed by using the zfs + destroy -d command.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and the + use of the -d or -e options.

+

If the argument is a snapshot name, the specified snapshot + is created. If the argument is a file system or volume name, a snapshot with + the same name as the sent snapshot is created within the specified + filesystem or volume. If neither of the -d or -e + options are specified, the provided target snapshot name is used exactly as + provided.

+

The -d and -e options cause the file system name of + the target snapshot to be determined by appending a portion of the sent + snapshot's name to the specified target filesystem. If the -d + option is specified, all but the first element of the sent snapshot's file + system path (usually the pool name) is used and any required intermediate + file systems within the specified one are created. If the -e option + is specified, then only the last element of the sent snapshot's file system + name (i.e. the name of the source file system itself) is used as the target + file system name.
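As an illustration with hypothetical names, receiving a stream of pool/home/bob@snap into tank/backup with -d creates tank/backup/home/bob@snap (the pool name is dropped), while -e creates tank/backup/bob@snap (only the last element is kept):

# zfs send pool/home/bob@snap | zfs receive -d tank/backup
# zfs send pool/home/bob@snap | zfs receive -e tank/backup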

+

-d

+

+
Discard the first element of the sent snapshot's file + system name, using the remaining elements to determine the name of the target + file system for the new snapshot as described in the paragraph above.
+

+

-e

+

+
Discard all but the last element of the sent snapshot's + file system name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+

+

-u

+

+
File system that is associated with the received stream + is not mounted.
+

+

-v

+

+
Print verbose information about the stream and the time + required to perform the receive operation.
+

+

-n

+

+
Do not actually receive the stream. This can be useful in + conjunction with the -v option to verify the name the receive operation + would use.
+

+

-F

+

+
Force a rollback of the file system to the most recent + snapshot before performing the receive operation. If receiving an incremental + replication stream (for example, one generated by zfs send -R -[iI]), + destroy snapshots and file systems that do not exist on the sending + side.
+

+
+

+

zfs allow filesystem | volume

+

+
Displays permissions that have been delegated on the + specified filesystem or volume. See the other forms of zfs allow for + more information.
+

+

zfs allow [-ldug] + "everyone"|user|group[,...] + perm|@setname[,...] filesystem| volume +
+ zfs allow [-ld] -e + perm|@setname[,...] filesystem | volume

+

+
Delegates ZFS administration permission for the + file systems to non-privileged users. +

[-ug] + "everyone"|user|group[,...]

+

+
Specifies to whom the permissions are delegated. Multiple entities can be specified as a comma-separated list. If neither of the -ug options is specified, then the argument is interpreted preferentially as the keyword "everyone", then as a user name, and lastly as a group name. To specify a user or group named "everyone", use the -u or -g option. To specify a group with the same name as a user, use the -g option.
+

+

[-e] perm|@setname[,...]

+

+
Specifies that the permissions be delegated to + "everyone." Multiple permissions may be specified as a + comma-separated list. Permission names are the same as ZFS subcommand + and property names. See the property list below. Property set names, which + begin with an at sign (@) , may be specified. See the -s form + below for details.
+

+

[-ld] filesystem|volume

+

+
Specifies where the permissions are delegated. If neither of the -ld options is specified, or both are, then the permissions are allowed for the file system or volume, and all of its descendents. If only the -l option is used, then the permissions are allowed "locally", only for the specified file system. If only the -d option is used, then the permissions are allowed only for the descendent file systems.
+

+
+

+

+

Permissions are generally the ability to use a ZFS + subcommand or change a ZFS property. The following permissions are + available:

+

+
+

+
NAME              TYPE          NOTES
allow             subcommand    Must also have the permission that is being
                                allowed
clone             subcommand    Must also have the 'create' ability and 'mount'
                                ability in the origin file system
create            subcommand    Must also have the 'mount' ability
destroy           subcommand    Must also have the 'mount' ability
diff              subcommand    Allows lookup of paths within a dataset
                                given an object number, and the ability to
                                create snapshots necessary to 'zfs diff'.
mount             subcommand    Allows mount/umount of ZFS datasets
promote           subcommand    Must also have the 'mount'
                                and 'promote' ability in the origin file system
receive           subcommand    Must also have the 'mount' and 'create' ability
rename            subcommand    Must also have the 'mount' and 'create'
                                ability in the new parent
rollback          subcommand    Must also have the 'mount' ability
send              subcommand
share             subcommand    Allows sharing file systems over NFS or SMB
                                protocols
snapshot          subcommand    Must also have the 'mount' ability
groupquota        other         Allows accessing any groupquota@... property
groupused         other         Allows reading any groupused@... property
userprop          other         Allows changing any user property
userquota         other         Allows accessing any userquota@... property
userused          other         Allows reading any userused@... property
acltype           property
aclinherit        property
atime             property
canmount          property
casesensitivity   property
checksum          property
compression       property
copies            property
dedup             property
devices           property
exec              property
filesystem_limit  property
logbias           property
mlslabel          property
mountpoint        property
nbmand            property
normalization     property
primarycache      property
quota             property
readonly          property
recordsize        property
refquota          property
refreservation    property
reservation       property
secondarycache    property
setuid            property
shareiscsi        property
sharenfs          property
sharesmb          property
snapdir           property
snapshot_limit    property
utf8only          property
version           property
volblocksize      property
volsize           property
vscan             property
xattr             property
zoned             property
+
+

+

+

zfs allow -c + perm|@setname[,...] filesystem|volume

+

+
Sets "create time" permissions. These + permissions are granted (locally) to the creator of any newly-created + descendent file system.
+

+

zfs allow -s @setname + perm|@setname[,...] filesystem|volume

+

+
Defines or adds permissions to a permission set. The set + can be used by other zfs allow commands for the specified file system + and its descendents. Sets are evaluated dynamically, so changes to a set are + immediately reflected. Permission sets follow the same naming restrictions as + ZFS file systems, but the name must begin with an "at sign" + (@), and can be no more than 64 characters long.
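For example, a hypothetical permission set could be defined once, delegated to a user, and then reviewed (user, set, and dataset names are illustrative):

# zfs allow -s @basic create,destroy,mount,snapshot pool/home
# zfs allow joe @basic pool/home
# zfs allow pool/home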
+

+

zfs unallow [-rldug] + "everyone"|user|group[,...] + [perm|@setname[, ...]] filesystem|volume +
+ zfs unallow [-rld] -e [perm|@setname + [,...]] filesystem|volume +
+ zfs unallow [-r] -c + [perm|@setname[,...]] +
+ filesystem|volume

+

+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect (for example, if the permission is also granted by an ancestor). If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying "everyone" (or using the -e option) only removes the permissions that were granted to "everyone", not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.

-r

+

+
Recursively remove the permissions from this file system + and all descendents.
+

+
+

+

zfs unallow [-r] -s @setname + [perm|@setname[,...]] +
+ filesystem|volume

+

+
Removes permissions from a permission set. If no + permissions are specified, then all permissions are removed, thus removing the + set entirely.
+

+

zfs hold [-r] tag + snapshot...

+

+
Adds a single reference, named with the tag + argument, to the specified snapshot or snapshots. Each snapshot has its own + tag namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that snapshot + by using the zfs destroy command return EBUSY.
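A brief sketch, using hypothetical names: place a hold on a snapshot, list its holds, and release the hold so the snapshot can be destroyed again:

# zfs hold keep pool/home/bob@yesterday
# zfs holds pool/home/bob@yesterday
# zfs release keep pool/home/bob@yesterday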

+

-r

+

+
Specifies that a hold with the given tag is applied + recursively to the snapshots of all descendent file systems.
+

+
+

+

zfs holds [-r] snapshot...

+

+
Lists all existing user references for the given snapshot + or snapshots. +

-r

+

+
Lists the holds that are set on the named descendent + snapshots, in addition to listing the holds on the named snapshot.
+

+
+

+

zfs release [-r] tag + snapshot...

+

+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already exist + for each snapshot. +

If a hold exists on a snapshot, attempts to destroy that snapshot + by using the zfs destroy command return EBUSY.

+

-r

+

+
Recursively releases a hold with the given tag on the + snapshots of all descendent file systems.
+

+
+

+

zfs diff [-FHt] snapshot + snapshot|filesystem

+

+
Display the difference between a snapshot of a given + filesystem and another snapshot of that filesystem from a later time or the + current contents of the filesystem. The first column is a character indicating + the type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change time. +

The types of change are: +
+

+
-       The path has been removed
++       The path has been created
+M       The path has been modified
+R       The path has been renamed
+
+

-F

+

+
Display an indication of the type of file, in a manner + similar to the -F option of ls(1). +
+
B       Block device
+C       Character device
+/       Directory
+>       Door
+|       Named pipe
+@       Symbolic link
+P       Event port
+=       Socket
+F       Regular file
+
+
+

-H

+

+
Give more parsable tab-separated output, without header + lines and without arrows.
+

-t

+

+
Display the path's inode change time as the first column + of output.
+

+
+
+
+

+

Example 1 Creating a ZFS File System Hierarchy

+

+

The following commands create a file system named pool/home + and a file system named pool/home/bob. The mount point + /export/home is set for the parent file system, and is automatically + inherited by the child file system.

+

+

+
+

+
# zfs create pool/home
+# zfs set mountpoint=/export/home pool/home
+# zfs create pool/home/bob
+
+

+

+

Example 2 Creating a ZFS Snapshot

+

+

The following command creates a snapshot named yesterday. + This snapshot is mounted on demand in the .zfs/snapshot directory at + the root of the pool/home/bob file system.

+

+

+
+

+
# zfs snapshot pool/home/bob@yesterday
+
+

+

+

Example 3 Creating and Destroying Multiple Snapshots

+

+

The following command creates snapshots named yesterday of + pool/home and all of its descendent file systems. Each snapshot is + mounted on demand in the .zfs/snapshot directory at the root of its + file system. The second command destroys the newly created snapshots.

+

+

+
+

+
# zfs snapshot -r pool/home@yesterday
+# zfs destroy -r pool/home@yesterday
+
+

+

+

Example 4 Disabling and Enabling File System + Compression

+

+

The following command disables the compression property for + all file systems under pool/home. The next command explicitly enables + compression for pool/home/anne.

+

+

+
+

+
# zfs set compression=off pool/home
+# zfs set compression=on pool/home/anne
+
+

+

+

Example 5 Listing ZFS Datasets

+

+

The following command lists all active file systems and volumes in + the system. Snapshots are displayed if the listsnaps property is + on. The default is off. See zpool(8) for more + information on pool properties.

+

+

+
+

+
# zfs list
+
+ NAME USED AVAIL REFER MOUNTPOINT +
+ pool 450K 457G 18K /pool +
+ pool/home 315K 457G 21K /export/home +
+ pool/home/anne 18K 457G 18K /export/home/anne +
+ pool/home/bob 276K 457G 276K /export/home/bob
+
+

+

+

Example 6 Setting a Quota on a ZFS File System

+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob.

+

+

+
+

+
# zfs set quota=50G pool/home/bob
+
+

+

+

Example 7 Listing ZFS Properties

+

+

The following command lists all properties for + pool/home/bob.

+

+

+
+

+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  shareiscsi            off                    default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+pool/home/bob  logbias               latency                default
+pool/home/bob  dedup                 off                    default
+pool/home/bob  mlslabel              none                   default
+pool/home/bob  relatime              off                    default
+
+

+

+

+

The following command gets a single property value.

+

+

+
+

+
# zfs get -H -o value compression pool/home/bob
+on
+
+

+

+

+

The following command lists all properties with local settings for + pool/home/bob.

+

+

+
+

+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+

+

+

Example 8 Rolling Back a ZFS File System

+

+

The following command reverts the contents of + pool/home/anne to the snapshot named yesterday, deleting all + intermediate snapshots.

+

+

+
+

+
# zfs rollback -r pool/home/anne@yesterday
+
+

+

+

Example 9 Creating a ZFS Clone

+

+

The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday.

+

+

+
+

+
# zfs clone pool/home/bob@yesterday pool/clone
+
+

+

+

Example 10 Promoting a ZFS Clone

+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+

+

+
+

+
# zfs create pool/project/production
+
populate /pool/project/production with data
# zfs snapshot pool/project/production@today
# zfs clone pool/project/production@today pool/project/beta
make changes to /pool/project/beta and test them
# zfs promote pool/project/beta
# zfs rename pool/project/production pool/project/legacy
# zfs rename pool/project/beta pool/project/production
once the legacy version is no longer needed, it can be destroyed
# zfs destroy pool/project/legacy
+
+

+

+

Example 11 Inheriting ZFS Properties

+

+

The following command causes pool/home/bob and + pool/home/anne to inherit the checksum property from their + parent.

+

+

+
+

+
# zfs inherit checksum pool/home/bob pool/home/anne
+
+

+

The following command causes pool/home/bob to revert to the + received value for the quota property if it exists.

+

+

+
+

+
# zfs inherit -S quota pool/home/bob
+
+

+

+

Example 12 Remotely Replicating ZFS Data

+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+

+

+
+

+
# zfs send pool/fs@a | \
+
    ssh host zfs receive poolB/received/fs@a
# zfs send -i a pool/fs@b | ssh host \
+ zfs receive poolB/received/fs
+
+

+

+

Example 13 Using the zfs receive -d + Option

+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it into + poolB/received/fsA/fsB@snap. The fsA/fsB@snap portion of the + received snapshot's name is determined from the name of the sent snapshot. + poolB must contain the file system poolB/received. If + poolB/received/fsA does not exist, it is created as an empty file + system.

+

+

+
+

+
# zfs send poolA/fsA/fsB@snap | \
+
+ ssh host zfs receive -d poolB/received
+
+

+

+

Example 14 Setting User Properties

+

+

The following example sets the user-defined + com.example:department property for a dataset.

+

+

+
+

+
# zfs set com.example:department=12345 tank/accounting
+
+

+

+

Example 15 Creating a ZFS Volume as an iSCSI Target + Device

+

+

The following example shows how to create a ZFS volume as + an iSCSI target.

+

+

+
+

+
# zfs create -V 2g pool/volumes/vol1
+# zfs set shareiscsi=on pool/volumes/vol1
+# iscsitadm list target
+Target: pool/volumes/vol1
+
+ iSCSI Name: +
+ iqn.1986-03.com.sun:02:7b4b02a6-3277-eb1b-e686-a24762c52a8c +
+ Connections: 0
+
+

+

+

+

After the iSCSI target is created, set up the iSCSI + initiator. For more information about the Solaris iSCSI initiator, + see iscsitadm(1M).

+

Example 16 Performing a Rolling Snapshot

+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+

+

+
+

+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+

+

+

Example 17 Setting sharenfs Property Options on a + ZFS File System

+

+

The following commands show how to set sharenfs property + options to enable rw access for a set of IP addresses and to + enable root access for system neo on the tank/home file + system.

+

+

+
+

+
# zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
+
+

+

+

+

If you are using DNS for host name resolution, specify the + fully qualified hostname.

+

+

Example 18 Delegating ZFS Administration Permissions on a + ZFS Dataset

+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots on + tank/cindys. The permissions on tank/cindys are also + displayed.

+

+

+
+

+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+-------------------------------------------------------------
+Local+Descendent permissions on (tank/cindys)
+
+ user cindys create,destroy,mount,snapshot +-------------------------------------------------------------
+
+

+

+

+

Because the tank/cindys mount point permission is set to + 755 by default, user cindys will be unable to mount file systems + under tank/cindys. Set an ACL similar to the following syntax + to provide mount point access:

+

+
+

+
# chmod A+user:cindys:add_subdirectory:allow /tank/cindys
+
+

+

+

Example 19 Delegating Create Time Permissions on a ZFS + Dataset

+

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not to destroy anyone else's file system. The permissions on tank/users are also displayed.

+

+

+
+

+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+-------------------------------------------------------------
+Create time permissions on (tank/users)
+
+ create,destroy +Local+Descendent permissions on (tank/users) +
+ group staff create,mount +-------------------------------------------------------------
+
+

+

+

Example 20 Defining and Granting a Permission Set on a ZFS + Dataset

+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+

+

+
+

+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+-------------------------------------------------------------
+Permission sets on (tank/users)
+
+ @pset create,destroy,mount,snapshot +Create time permissions on (tank/users) +
+ create,destroy +Local+Descendent permissions on (tank/users) +
+ group staff @pset,create,mount +-------------------------------------------------------------
+
+

+

+

Example 21 Delegating Property Permissions on a ZFS + Dataset

+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+

+

+
+

+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+-------------------------------------------------------------
+Local+Descendent permissions on (users/home)
+
    user cindys quota,reservation
-------------------------------------------------------------
cindys% zfs set quota=10G users/home/marks
cindys% zfs get quota users/home/marks
NAME              PROPERTY  VALUE  SOURCE
users/home/marks  quota     10G    local
+
+

+

+

Example 22 Removing ZFS Delegated Permissions on a ZFS + Dataset

+

+

The following example shows how to remove the snapshot permission + from the staff group on the tank/users file system. The + permissions on tank/users are also displayed.

+

+

+
+

+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+-------------------------------------------------------------
+Permission sets on (tank/users)
+
+ @pset create,destroy,mount,snapshot +Create time permissions on (tank/users) +
+ create,destroy +Local+Descendent permissions on (tank/users) +
+ group staff @pset,create,mount +-------------------------------------------------------------
+
+

+

+

Example 23 Showing the differences between a snapshot and a + ZFS Dataset

+

+

The following example shows how to see what has changed between a + prior snapshot of a ZFS Dataset and its current state. The -F option + is used to indicate type information for the files affected.

+

+

+
+

+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+

+

+

Example 24 Creating a bookmark

+

+

The following example creates a bookmark of a snapshot. This bookmark can then be used instead of a snapshot in send streams.

+

+

+
+

+
# zfs bookmark rpool@snapshot rpool#bookmark
+
+

+

+
+
+

+
+
+
ZFS_ABORT
    Cause zfs to dump core on exit for the purposes of running ::findleaks.

+
+
+
+
+

+

The following exit values are returned:

+

0

+

+
Successful completion.
+

+

1

+

+
An error occurred.
+

+

2

+

+
Invalid command line options were specified.
+

+
+
+

+

chmod(2), fsync(2), gzip(1), mount(8), + ssh(1), stat(2), write(2), zpool(8)

+
+
+ + + + + +
November 19, 2013ZFS pool 28, filesystem 5
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zinject.8.html b/man/v0.6/8/zinject.8.html new file mode 100644 index 000000000..b8a4a2341 --- /dev/null +++ b/man/v0.6/8/zinject.8.html @@ -0,0 +1,290 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
zinject(8)System Administration Commandszinject(8)
+
+

+
+

+

zinject - ZFS Fault Injector

+
+
+

+

zinject creates artificial problems in a ZFS pool by + simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+
zinject
    List injection records.
+
zinject -b objset:object:level:blkid [-f frequency] [-amu] pool
+
Force an error into the pool at a bookmark.
+
zinject -c <id | all>
+
Cancel injection records.
+
zinject -d vdev -A <degrade|fault> + pool
+
Force a vdev into the DEGRADED or FAULTED state.
+
zinject -d vdev [-e device_error] [-L + label_error] [-T failure] [-F] + pool
+
Force a vdev error.
+
zinject -I [-s seconds | -g txgs] + pool
+
Simulate a hardware failure that fails to honor a cache flush.
+
zinject -p function pool
+
Panic inside the specified function.
+
zinject -t data [-e device_error] [-f + frequency] [-l level] [-r range] + [-amq] path
+
Force an error into the contents of a file.
+
zinject -t dnode [-e device_error] [-f + frequency] [-l level] [-amq] + path
+
Force an error into the metadnode for a file or directory.
+
zinject -t mos_type [-e device_error] [-f + frequency] [-l level] [-r range] + [-amqu] pool
+
Force an error into the MOS of a pool.
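For illustration only (the pool name and file path are hypothetical, and the flags used are those described in the options below), a non-destructive test might inject intermittent read errors into a single file, list the active injection records, and then cancel all of them:
# zinject -t data -e io -f 10 /tank/testfile
# zinject
# zinject -c all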
+
+
+
+

+
+
+
-a
    Flush the ARC before injection.
-b objset:object:level:blkid
    Force an error into the pool at this bookmark tuple. Each number is in hexadecimal, and only one block can be specified.
-d vdev
    A vdev specified by path or GUID.
-e device_error
    Specify checksum for an ECKSUM error, dtl for an ECHILD error, io for an EIO error where reopening the device will succeed, or nxio for an ENXIO error where reopening the device will fail.
-f frequency
    Only inject errors a fraction of the time. Expressed as an integer percentage between 1 and 100.
-F
    Fail faster. Do fewer checks.
-g txgs
    Run for this many transaction groups before reporting failure.
-h
    Print the usage message.
-l level
    Inject an error at a particular block level. The default is 0.
-L label_error
    Set the label error region to one of nvlist, pad1, pad2, or uber.
-m
    Automatically remount the underlying filesystem.
-q
    Quiet mode. Only print the handler number added.
-r range
    Inject an error over a particular logical range of an object, which will be translated to the appropriate blkid range according to the object's properties.
-s seconds
    Run for this many seconds before reporting failure.
-T failure
    Set the failure type to one of all, claim, free, read, or write.
-t mos_type
    Set this to mos for any data in the MOS, mosdir for an object directory, config for the pool configuration, bpobj for the block pointer list, spacemap for the space map, metaslab for the metaslab, or errlog for the persistent error log.
-u
    Unload the pool after injection.

+
+
+
+
+

+
+
+
ZINJECT_DEBUG
    Run zinject in debug mode.

+
+
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com> excerpting the zinject usage message and + source code.

+

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zpool.8.html b/man/v0.6/8/zpool.8.html new file mode 100644 index 000000000..053ef6716 --- /dev/null +++ b/man/v0.6/8/zpool.8.html @@ -0,0 +1,1980 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
zpool(8)System Administration Commandszpool(8)
+
+
+

+

zpool - configures ZFS storage pools

+
+
+

+
zpool [-?]
+

+

+
zpool add [-fgLnP] [-o property=value] pool vdev ...
+

+

+
zpool attach [-f] [-o property=value] pool device new_device
+

+

+
zpool clear pool [device]
+

+

+
zpool create [-fnd] [-o property=value] ... [-O file-system-property=value]
+
+ ... [-m mountpoint] [-R root] [-t tname] pool vdev ...
+

+

+
zpool destroy [-f] pool
+

+

+
zpool detach pool device
+

+

+
zpool events [-vHfc] [pool] ...
+

+

+
zpool export [-a] [-f] pool ...
+

+

+
zpool get [-pH] "all" | property[,...] pool ...
+

+

+
zpool history [-il] [pool] ...
+

+

+
zpool import [-d dir] [-D]
+

+

+
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
+
+ [-D] [-f] [-m] [-N] [-R root] [-F [-n] [-X] [-T]] -a
+

+

+
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
+
+ [-D] [-f] [-m] [-R root] [-F [-n] [-X] [-T]] [-t]] pool |id [newpool]
+

+

+
zpool iostat [-T d | u ] [-gLPvy] [pool] ... [interval[count]]
+

+

+
zpool labelclear [-f] device
+

+

+
zpool list [-T d | u ] [-HgLPv] [-o property[,...]] [pool] ...
+
+ [interval[count]]
+

+

+
zpool offline [-t] pool device ...
+

+

+
zpool online pool device ...
+

+

+
zpool reguid pool
+

+

+
zpool reopen pool
+

+

+
zpool remove pool device ...
+

+

+
zpool replace [-f] [-o property=value]  pool device [new_device]
+

+

+
zpool scrub [-s] pool ...
+

+

+
zpool set property=value pool
+

+

+
zpool split [-gLnP] [-R altroot] [-o property=value] pool newpool [device ...]
+

+

+
zpool status [-gLPvxD] [-T d | u] [pool] ... [interval [count]]
+

+

+
zpool upgrade 
+

+

+
zpool upgrade -v
+

+

+
zpool upgrade [-V version] -a | pool ...
+

+
+
+

+

The zpool command configures ZFS storage pools. A + storage pool is a collection of devices that provides physical storage and + data replication for ZFS datasets.

+

+

All datasets within a storage pool share the same space. See + zfs(8) for information on managing datasets.

+
+

+

A "virtual device" describes a single device or a + collection of devices organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+

disk

+
A block device, typically located under /dev. + ZFS can use individual partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, or it + can be a shorthand name (the relative portion of the path under + "/dev"). For example, "sda" is equivalent to + "/dev/sda". A whole disk can be specified by omitting the partition + designation. When given a whole disk, ZFS automatically labels the + disk, if necessary.
+

+

file

+
A regular file. The use of files as a backing store is + strongly discouraged. It is designed primarily for experimental purposes, as + the fault tolerance of a file is only as good as the file system of which it + is a part. A file must be specified by a full path.
+

+

mirror

+
A mirror of two or more devices. Data is replicated in an + identical fashion across all components of a mirror. A mirror with N + disks of size X can hold X bytes and can withstand (N-1) + devices failing before data integrity is compromised.
+

+

raidz +
+ raidz1 +
+ raidz2 +
+ raidz3

+
A variation on RAID-5 that allows for better + distribution of parity and eliminates the "RAID-5 write hole" + (in which data and parity become inconsistent after a power loss). Data and + parity is striped across all disks within a raidz group. +
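For illustration (the pool and device names are hypothetical), a single raidz group might be created as follows; the available parity variants are described next:
# zpool create tank raidz sda sdb sdc sdd sde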

A raidz group can have single-, double- , or triple parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev type + specifies a single-parity raidz group; the raidz2 vdev + type specifies a double-parity raidz group; and the raidz3 + vdev type specifies a triple-parity raidz group. The + raidz vdev type is an alias for raidz1.

+

A raidz group with N disks of size X with + P parity disks can hold approximately (N-P)*X bytes and + can withstand P device(s) failing before data integrity is + compromised. The minimum number of devices in a raidz group is one + more than the number of parity disks. The recommended number is between 3 + and 9 to help increase performance.

+
+

+

spare

+
A special pseudo-vdev which keeps track of + available hot spares for a pool. For more information, see the "Hot + Spares" section.
+

+

log

+
A separate-intent log device. If more than one log device + is specified, then writes are load-balanced between devices. Log devices can + be mirrored. However, raidz vdev types are not supported for the + intent log. For more information, see the "Intent Log" + section.
+

+

cache

+
A device used to cache storage pool data. A cache device + cannot be configured as a mirror or raidz group. For more information, + see the "Cache Devices" section.
+

+

+

Virtual devices cannot be nested, so a mirror or raidz + virtual device can only contain files or disks. Mirrors of mirrors (or other + combinations) are not allowed.

+

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the + newly available devices.

+

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. The keywords "mirror" and + "raidz" are used to distinguish where a group ends and another + begins. For example, the following creates two root vdevs, each a mirror of + two disks:

+

+
+

+
# zpool create mypool mirror sda sdb mirror sdc sdd
+
+

+

+
+
+

+

ZFS supports a rich set of mechanisms for handling device + failure and data corruption. All metadata and data is checksummed, and + ZFS automatically repairs bad data from a good copy when corruption + is detected.

+

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. + While ZFS supports running in a non-redundant configuration, where + each root vdev is simply a disk or file, this is strongly discouraged. A + single case of bit corruption can render some or all of your data + unavailable.

+

+

A pool's health status is described by one of three states: + online, degraded, or faulted. An online pool has all devices operating + normally. A degraded pool is one in which one or more devices have failed, + but the data is still available due to a redundant configuration. A faulted + pool has corrupted metadata, or one or more faulted devices, and + insufficient replicas to continue functioning.

+

+

The health of the top-level vdev, such as mirror or raidz + device, is potentially impacted by the state of its associated vdevs, or + component devices. A top-level vdev or component device is in one of the + following states:

+

DEGRADED

+
One or more top-level vdevs is in the degraded state + because one or more component devices are offline. Sufficient replicas exist + to continue functioning. +

One or more component devices is in the degraded or faulted state, + but sufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
+
+
+
The number of checksum errors exceeds acceptable levels and the device is + degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
+
+
+
+
+
+
The number of I/O errors exceeds acceptable levels. The device could not + be marked as faulted because there are insufficient replicas to continue + functioning.
+
+
+
+

+

FAULTED

+
One or more top-level vdevs is in the faulted state + because one or more component devices are offline. Insufficient replicas exist + to continue functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
+
+
+
The device could be opened, but the contents did not match expected + values.
+
+
+
+
+
+
The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
+
+
+
+

+

OFFLINE

+
The device was explicitly taken offline by the + "zpool offline" command.
+

+

ONLINE

+
The device is online and functioning.
+

+

REMOVED

+
The device was physically removed while the system was + running. Device removal detection is hardware-dependent and may not be + supported on all platforms.
+

+

UNAVAIL

+
The device could not be opened. If a pool is imported + when a device was unavailable, then the device will be identified by a unique + identifier instead of its path since the path was never correct in the first + place.
+

+

+

If a device is removed and later re-attached to the system, + ZFS attempts to put the device online automatically. Device attach + detection is hardware-dependent and might not be supported on all + platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool, but when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a "spare" vdev with any + number of devices. For example,

+

+
+

+
# zpool create pool mirror sda sdb spare sdc sdd
+
+

+

+

+

Spares can be shared across multiple pools, and can be added with + the "zpool add" command and removed with the "zpool + remove" command. Once a spare replacement is initiated, a new + "spare" vdev is created within the configuration that will + remain there until the original device is replaced. At this point, the hot + spare becomes available again.
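As a brief sketch (pool and device names are hypothetical), a hot spare might be added to and later removed from an existing pool like this:
# zpool add tank spare sde
# zpool remove tank sde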

+

+

If a pool has a shared spare that is currently being used, the + pool can not be exported since other pools may use this shared spare, which + may lead to potential data corruption.

+

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX + requirements for synchronous transactions. For instance, databases often + require their transactions to be on stable storage devices when returning + from a system call. NFS and other applications can also use + fsync() to ensure data stability. By default, the intent log is + allocated from blocks within the main pool. However, it might be possible to + get better performance using separate intent log devices such as + NVRAM or a dedicated disk. For example:

+

+
+

+
# zpool create pool sda sdb log sdc
+
+

+

+

+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an example of mirroring multiple log + devices.

+

+

Log devices can be added, replaced, attached, detached, and + imported and exported as part of the larger pool. Mirrored log devices can + be removed by specifying the top-level mirror for the log.

+
+
+

+

Devices can be added to a storage pool as "cache + devices." These devices provide an additional layer of caching between + main memory and disk. For read-heavy workloads, where the working set size + is much larger than what can be cached in main memory, using cache devices + allow much more of this working set to be served from low latency media. + Using cache devices provides the greatest performance improvement for random + read-workloads of mostly static content.

+

+

To create a pool with cache devices, specify a "cache" + vdev with any number of devices. For example:

+

+
+

+
# zpool create pool sda sdb cache sdc sdd
+
+

+

+

+

Cache devices cannot be mirrored or part of a raidz + configuration. If a read error is encountered on a cache device, that read + I/O is reissued to the original storage pool device, which might be + part of a mirrored or raidz configuration.

+

+

The content of the cache devices is considered volatile, as is the + case with other system caches.

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool. The following are read-only properties:

+

available

+
Amount of storage available within the pool. This + property can also be referred to by its shortened column name, + "avail".
+

+

capacity

+
Percentage of pool space used. This property can also be + referred to by its shortened column name, "cap".
+

+

expandsize

+
Amount of uninitialized space within the pool or device + that can be used to increase the total capacity of the pool. Uninitialized + space consists of any space on an EFI labeled vdev which has not been brought + online (i.e. zpool online -e). This space occurs when a LUN is dynamically + expanded.
+

+

fragmentation

+
The amount of fragmentation in the pool.
+

+

free

+
The amount of free space available in the pool.
+

+

freeing

+
After a file system or snapshot is destroyed, the space + it was using is returned to the pool asynchronously. freeing is + the amount of space remaining to be reclaimed. Over time freeing + will decrease while free increases.
+

+

health

+
The current health of the pool. Health can be + "ONLINE", "DEGRADED", + "FAULTED", " OFFLINE", + "REMOVED", or "UNAVAIL".
+

+

guid

+
A unique identifier for the pool.
+

+

size

+
Total size of the storage pool.
+

+

unsupported@feature_guid

+
+

Information about unsupported features that are enabled on the + pool. See zpool-features(5) for details.

+
+

+

used

+
Amount of storage space used within the pool.
+

+

+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of + the data being written. In addition, ZFS reserves some space for + internal accounting that the zfs(8) command takes into account, but + the zpool command does not. For non-full pools of a reasonable size, + these effects should be invisible. For small pools, or pools that are close + to being completely full, these discrepancies may become more + noticeable.

+

+

+

The following property can be set at creation time:

+

ashift

+

+
Pool sector size exponent, to the power of 2 (internally + referred to as "ashift"). I/O operations will be aligned to the + specified size boundaries. Additionally, the minimum (disk) write size will be + set to the specified size, so this represents a space vs. performance + trade-off. The typical case for setting this property is when performance is + important and the underlying disks use 4KiB sectors but report 512B sectors to + the OS (for compatibility reasons); in that case, set ashift=12 (which + is 1<<12 = 4096). +

For optimal performance, the pool sector size should be greater + than or equal to the sector size of the underlying disks. Since the property + cannot be changed after pool creation, if in a given pool, you ever + want to use drives that report 4KiB sectors, you must set + ashift=12 at pool creation time.

+

Keep in mind that the ashift is vdev specific and not a pool global. This means that when adding new vdevs to an existing pool you may need to specify the ashift.
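For example (assuming a hypothetical pool named tank whose disks use 4KiB sectors), the ashift would be given both at creation time and again when adding a new vdev:
# zpool create -o ashift=12 tank mirror sda sdb
# zpool add -o ashift=12 tank mirror sdc sdd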

+
+

+

+

The following property can be set at creation time and import + time:

+

altroot

+

+
Alternate root directory. If set, this directory is + prepended to any mount points within the pool. This can be used when examining + an unknown pool where the mount points cannot be trusted, or in an alternate + boot environment, where the typical paths are not valid. altroot is not + a persistent property. It is valid only while the system is up. Setting + altroot defaults to using cachefile=none, though this may be + overridden using an explicit setting.
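For instance (the pool name and mount point are illustrative), a pool being examined from a rescue or alternate boot environment might be imported under an alternate root, which per the -R import option also sets cachefile=none:
# zpool import -R /mnt/rescue tank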
+

+

+

The following property can only be set at import time:

+

readonly=on | off

+

+
If set to on, the pool will be imported in + read-only mode: Synchronous data in the intent log will not be accessible, + properties of the pool can not be changed and datasets of the pool can only be + mounted read-only. The readonly property of its datasets will be + implicitly set to on. +

It can also be specified by its column name of rdonly.

+

To write to a read-only pool, an export and import of the pool is required.
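A minimal sketch (tank is a hypothetical pool name): import the pool read-only, and later export and re-import it without the option to make it writable again.
# zpool import -o readonly=on tank
# zpool export tank
# zpool import tank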

+
+

+

+

The following properties can be set at creation time and import + time, and later changed with the zpool set command:

+

autoexpand=on | off

+

+
Controls automatic pool expansion when the underlying LUN + is grown. If set to on, the pool will be resized according to the size + of the expanded device. If the device is part of a mirror or raidz then + all devices within that mirror/raidz group must be expanded before the + new space is made available to the pool. The default behavior is off. + This property can also be referred to by its shortened column name, + expand.
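For example (pool and device names are hypothetical), automatic expansion might be enabled, or, per the expandsize note above, a single already-grown device can be brought to its new size explicitly:
# zpool set autoexpand=on tank
# zpool online -e tank sda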
+

+

autoreplace=on | off

+

+
Controls automatic device replacement. If set to + "off", device replacement must be initiated by the + administrator by using the "zpool replace" command. If set to + "on", any new device, found in the same physical location as + a device that previously belonged to the pool, is automatically formatted and + replaced. The default behavior is "off". This property can + also be referred to by its shortened column name, "replace".
+

+

bootfs=pool/dataset

+

+
Identifies the default bootable dataset for the root + pool. This property is expected to be set mainly by the installation and + upgrade programs.
+

+

cachefile=path | none

+

+
Controls the location of where the pool configuration is + cached. Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in this + cache are automatically imported when the system boots. Some environments, + such as install and clustering, need to cache this information in a different + location so that pools are not automatically imported. Setting this property + caches the pool configuration in a different location that can later be + imported with "zpool import -c". Setting it to the special + value "none" creates a temporary pool that is never cached, + and the special value '' (empty string) uses the default location. +

Multiple pools can share the same cache file. Because the kernel + destroys and recreates this file when pools are added and removed, care + should be taken when attempting to access this file. When the last pool + using a cachefile is exported or destroyed, the file is removed.
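As a brief sketch (tank is a hypothetical pool name), the property might be set so the pool is not cached at boot, and later restored to the default location with the empty string:
# zpool set cachefile=none tank
# zpool set cachefile='' tank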

+
+

+

comment=text

+

+
A text string consisting of printable ASCII characters + that will be stored such that it is available even if the pool becomes + faulted. An administrator can provide additional information about a pool + using this property.
+

+

dedupditto=number

+

+
Threshold for the number of block ditto copies. If the reference count for a deduplicated block increases above this number, a new ditto copy of this block is automatically stored. The default setting is 0 which causes no ditto copies to be created for deduplicated blocks. The minimum legal nonzero setting is 100.
+

+

delegation=on | off

+

+
Controls whether a non-privileged user is granted access + based on the dataset permissions defined on the dataset. See zfs(8) for + more information on ZFS delegated administration.
+

+

failmode=wait | continue | + panic

+

+
Controls the system behavior in the event of catastrophic + pool failure. This condition is typically a result of a loss of connectivity + to the underlying storage device(s) or a failure of all devices within the + pool. The behavior of such an event is determined as follows: +

wait

+
Blocks all I/O access until the device + connectivity is recovered and the errors are cleared. This is the default + behavior.
+

+

continue

+
Returns EIO to any new write I/O requests + but allows reads to any of the remaining healthy devices. Any write requests + that have yet to be committed to disk would be blocked.
+

+

panic

+
Prints out a message to the console and generates a + system crash dump.
+

+
+

+

feature@feature_name=enabled

+
The value of this property is the current state of + feature_name. The only valid value when setting this property is + enabled which moves feature_name to the enabled state. See + zpool-features(5) for details on feature states.
+

+

listsnaps=on | off

+

+
Controls whether information about snapshots associated + with this pool is output when "zfs list" is run without the + -t option. The default value is "off".
+

+

version=version

+

+
The current on-disk version of the pool. This can be + increased, but never decreased. The preferred method of updating pools is with + the "zpool upgrade" command, though this property can be used + when a specific version is needed for backwards compatibility. Once feature + flags are enabled on a pool this property will no longer have a value.
+

+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

+

The zpool command provides subcommands to create and + destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+

zpool -?

+

+
Displays a help message.
+

+

zpool add [-fgLnP] [-o + property=value] pool vdev ...

+

+
Adds the specified virtual devices to the given pool. The + vdev specification is described in the "Virtual Devices" + section. The behavior of the -f option, and the device checks performed + are described in the "zpool create" subcommand. +

-f

+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden in + this manner.
+

+

-g

+
Display vdev GUIDs instead of the normal device names. + These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+

+

-L

+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name regardless of + the /dev/disk/ path used to open it.
+

+

-n

+
Displays the configuration that would be used without + actually adding the vdevs. The actual pool creation can still fail due + to insufficient privileges or device sharing.
+

+

-P

+
Display full paths for vdevs instead of only the last + component of the path. This can be used in conjunction with the -L + flag.
+

+

-o property=value

+

+
Sets the given pool properties. See the + "Properties" section for a list of valid properties that can be set. + The only property supported at the moment is ashift. Do note + that some properties (among them ashift) are not inherited from + a previous vdev. They are vdev specific, not pool specific.
+

Do not add a disk that is currently configured as a quorum device + to a zpool. After a disk is in the pool, that disk can then be configured as + a quorum device.

+
+

+

zpool attach [-f] [-o + property=value] pool device new_device

+

+
Attaches new_device to an existing zpool + device. The existing device cannot be part of a raidz configuration. If + device is not currently part of a mirrored configuration, device + automatically transforms into a two-way mirror of device and + new_device. If device is part of a two-way mirror, attaching + new_device creates a three-way mirror, and so on. In either case, + new_device begins to resilver immediately. +

-f

+
Forces use of new_device, even if its appears to + be in use. Not all devices can be overridden in this manner.
+

+

-o property=value

+

+
Sets the given pool properties. See the + "Properties" section for a list of valid properties that can be set. + The only property supported at the moment is "ashift".
+

+
+

+

zpool clear pool [device] ...

+

+
Clears device errors in a pool. If no arguments are + specified, all device errors within the pool are cleared. If one or more + devices is specified, only those errors associated with the specified device + or devices are cleared.
+

+

zpool create [-fnd] [-o + property=value] ... [-O file-system-property=value] ... + [-m mountpoint] [-R root] [-t + tname] pool vdev ...

+

+
Creates a new storage pool containing the virtual devices + specified on the command line. The pool name must begin with a letter, and can + only contain alphanumeric characters as well as underscore ("_"), + dash ("-"), period ("."), colon (":"), and space + (" "). The pool names "mirror", "raidz", + "spare" and "log" are reserved, as are names beginning + with the pattern "c[0-9]". The vdev specification is + described in the "Virtual Devices" section. +

The command verifies that each device specified is accessible and + not currently in use by another subsystem. There are some uses, such as + being currently mounted, or specified as the dedicated dump device, that + prevents a device from ever being used by ZFS. Other uses, such as + having a preexisting UFS file system, can be overridden with the + -f option.

+

The command also checks that the replication strategy for the pool + is consistent. An attempt to combine redundant and non-redundant storage in + a single pool, or to mix disks and files, results in an error unless + -f is specified. The use of differently sized devices within a single + raidz or mirror group is also flagged as an error unless -f is + specified.

+

Unless the -R option is specified, the default mount point + is "/pool". The mount point must not exist or must be + empty, or else the root dataset cannot be mounted. This can be overridden + with the -m option.

+

By default all supported features are enabled on the new pool + unless the -d option is specified.

+

-f

+

+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden in + this manner.
+

+

-n

+

+
Displays the configuration that would be used without + actually creating the pool. The actual pool creation can still fail due to + insufficient privileges or device sharing.
+

+

-d

+

+
Do not enable any features on the new pool. Individual + features can be enabled by setting their corresponding properties to + enabled with the -o option. See zpool-features(5) for + details about feature properties.
+

+

-o property=value [-o + property=value] ...

+

+
Sets the given pool properties. See the + "Properties" section for a list of valid properties that can be + set.
+

+

-O file-system-property=value +
+ [-O file-system-property=value] ...

+

+
Sets the given file system properties in the root file + system of the pool. See the "Properties" section of zfs(8) + for a list of valid properties that can be set.
+

+

-R root

+

+
Equivalent to "-o + cachefile=none,altroot=root"
+

+

-m mountpoint

+

+
Sets the mount point for the root dataset. The default + mount point is "/pool" or + "altroot/pool" if altroot is specified. The + mount point must be an absolute path, "legacy", or + "none". For more information on dataset mount points, see + zfs(8).
+

+

-t tname

+

+
Sets the in-core pool name to "tname" + while the on-disk name will be the name specified as the pool name + "pool". This will set the default cachefile property to none. + This is intended to handle name space collisions when creating pools for other + systems, such as virtual machines or physical machines whose pools live on + network block devices.
+

+
+

+

zpool destroy [-f] pool

+

+
Destroys the given pool, freeing up any devices for other + use. This command tries to unmount any active datasets before destroying the + pool. +

-f

+
Forces any active datasets contained within the pool to + be unmounted.
+

+
+

+

zpool detach pool device

+

+
Detaches device from a mirror. The operation is + refused if there are no other valid replicas of the data. If device may + be re-added to the pool later on then consider the "zpool + offline" command instead.
+

+

+

zpool events [-vHfc] [pool] ...

+

+
Description of the different events generated by the ZFS + kernel modules. See zfs-events(5) for more information about the + subclasses and event payloads that can be generated. +

+

-v

+
Get a full detail of the events and what information is + available about it.
+

+

-H

+
Scripted mode. Do not display headers, and separate + fields by a single tab instead of arbitrary space.
+

+

-f

+
Follow mode.
+

+

-c

+
Clear all previous events.
+

+
+

+

zpool export [-a] [-f] pool + ...

+

+
Exports the given pools from the system. All devices are + marked as exported, but are still considered in use by other subsystems. The + devices can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present. +

Before exporting the pool, all datasets within the pool are + unmounted. A pool can not be exported if it has a shared spare that is + currently being used.

+

For pools to be portable, you must give the zpool command + whole disks, not just partitions, so that ZFS can label the disks + with portable EFI labels. Otherwise, disk drivers on platforms of + different endianness will not recognize the disks.

+

-a

+
Exports all pools imported on the system.
+

+

-f

+
Forcefully unmount all datasets, using the + "unmount -f" command. +

This command will forcefully export the pool even if it has a + shared spare that is currently being used. This may lead to potential data + corruption.

+
+

+
+

+

zpool get [-p] "all" | + property[,...] pool ...

+

+
Retrieves the given list of properties (or all properties + if "all" is used) for the specified storage pool(s). These + properties are displayed with the following fields: +

+
+

+
+
+ name Name of storage pool +
+ property Property name +
+ value Property value +
+ source Property source, either 'default' or 'local'.
+
+

+

See the "Properties" section for more information on the + available pool properties.
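For example (the pool name is hypothetical), a few properties might be queried, optionally in parseable form:
# zpool get size,capacity,health tank
# zpool get -p free tank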

+

-p

+
Display numbers in parseable (exact) values.
+

+

-H

+
Scripted mode. Do not display headers, and separate + fields by a single tab instead of arbitrary space.
+

+
+

+

zpool history [-il] [pool] ...

+

+
Displays the command history of the specified pools or + all pools if no pool is specified. +

-i

+
Displays internally logged ZFS events in addition + to user initiated events.
+

+

-l

+
Displays log records in long format, which, in addition to the standard format, includes the user name, the hostname, and the zone in which the operation was performed.
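For example (the pool name is hypothetical), the full history including internal events might be shown in long format:
# zpool history -il tank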
+

+
+

+

zpool import [-d dir | -c + cachefile] [-D]

+

+
Lists pools available to import. If the -d option + is not specified, this command searches for devices in "/dev". The + -d option can be specified multiple times, and all directories are + searched. If the device appears to be part of an exported pool, this command + displays a summary of the pool with the name of the pool, a numeric + identifier, as well as the vdev layout and current health of the device + for each device or file. Destroyed pools, pools that were previously destroyed + with the "zpool destroy" command, are not listed unless the + -D option is specified. +

The numeric identifier is unique, and can be used instead of the + pool name when multiple exported pools of the same name are available.
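For example (the directory is illustrative), importable pools whose devices live under a specific directory might be listed with:
# zpool import -d /dev/disk/by-id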

+

-c cachefile

+
Reads configuration from the given cachefile that + was created with the "cachefile" pool property. This + cachefile is used instead of searching for devices.
+

+

-d dir

+
Searches for devices or files in dir. The + -d option can be specified multiple times.
+

+

-D

+
Lists destroyed pools only.
+

+
+

+

zpool import [-o mntopts] [ -o + property=value] ... [-d dir | -c + cachefile] [-D] [-f] [-m] [-N] [-R + root] [-F [-n]] -a

+

+
Imports all pools found in the search directories. + Identical to the previous command, except that all pools with a sufficient + number of devices available are imported. Destroyed pools, pools that were + previously destroyed with the "zpool destroy" command, will + not be imported unless the -D option is specified. +

-o mntopts

+
Comma-separated list of mount options to use when + mounting datasets within the pool. See zfs(8) for a description of + dataset properties and mount options.
+

+

-o property=value

+
Sets the specified property on the imported pool. See the + "Properties" section for more information on the available pool + properties.
+

+

-c cachefile

+
Reads configuration from the given cachefile that + was created with the "cachefile" pool property. This + cachefile is used instead of searching for devices.
+

+

-d dir

+
Searches for devices or files in dir. The + -d option can be specified multiple times. This option is incompatible + with the -c option.
+

+

-D

+
Imports destroyed pools only. The -f option is + also required.
+

+

-f

+
Forces import, even if the pool appears to be potentially + active.
+

+

-F

+
Recovery mode for a non-importable pool. Attempt to + return the pool to an importable state by discarding the last few + transactions. Not all damaged pools can be recovered by using this option. If + successful, the data from the discarded transactions is irretrievably lost. + This option is ignored if the pool is importable or already imported.
+

+

-a

+
Searches for and imports all pools found.
+

+

-m

+
Allows a pool to import when there is a missing log + device.
+

+

-R root

+
Sets the "cachefile" property to + "none" and the "altroot" property to + "root".
+

+

-N

+
Import the pool without mounting any file systems.
+

+

-n

+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does not + actually perform the pool recovery. For more details about pool recovery mode, + see the -F option, above.
+

+

-X

+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This allows + the pool to be rolled back to a txg which is no longer guaranteed to be + consistent. Pools imported at an inconsistent txg may contain uncorrectable + checksum errors. For more details about pool recovery mode, see the -F + option, above. WARNING: This option can be extremely hazardous to the + health of your pool and should only be used as a last resort.
+

+

-T

+
Specify the txg to use for rollback. Implies -FX. + For more details about pool recovery mode, see the -X option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+

+
+

+

zpool import [-o mntopts] [ -o + property=value] ... [-d dir | -c + cachefile] [-D] [-f] [-m] [-R + root] [-F [-n]] [-t]] pool | id + [newpool]

+

+
Imports a specific pool. A pool can be identified by its + name or the numeric identifier. If newpool is specified, the pool is + imported using the name newpool. Otherwise, it is imported with the + same name as its exported name. +

If a device is removed from a system without running + "zpool export" first, the device appears as potentially + active. It cannot be determined if this was a failed export, or whether the + device is really in use from another host. To import a pool in this state, + the -f option is required.

+

-o mntopts

+

+
Comma-separated list of mount options to use when + mounting datasets within the pool. See zfs(8) for a description of + dataset properties and mount options.
+

+

-o property=value

+

+
Sets the specified property on the imported pool. See the + "Properties" section for more information on the available pool + properties.
+

+

-c cachefile

+

+
Reads configuration from the given cachefile that + was created with the "cachefile" pool property. This + cachefile is used instead of searching for devices.
+

+

-d dir

+

+
Searches for devices or files in dir. The + -d option can be specified multiple times. This option is incompatible + with the -c option.
+

+

-D

+

+
Imports destroyed pool. The -f option is also + required.
+

+

-f

+

+
Forces import, even if the pool appears to be potentially + active.
+

+

-F

+

+
Recovery mode for a non-importable pool. Attempt to + return the pool to an importable state by discarding the last few + transactions. Not all damaged pools can be recovered by using this option. If + successful, the data from the discarded transactions is irretrievably lost. + This option is ignored if the pool is importable or already imported.
+

+

-R root

+

+
Sets the "cachefile" property to + "none" and the "altroot" property to + "root".
+

+

-n

+

+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does not + actually perform the pool recovery. For more details about pool recovery mode, + see the -F option, above.
+

+

-X

+

+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This allows + the pool to be rolled back to a txg which is no longer guaranteed to be + consistent. Pools imported at an inconsistent txg may contain uncorrectable + checksum errors. For more details about pool recovery mode, see the -F + option, above. WARNING: This option can be extremely hazardous to the + health of your pool and should only be used as a last resort.
+

+

-T

+

+
Specify the txg to use for rollback. Implies -FX. + For more details about pool recovery mode, see the -X option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+

+

-t

+

+
Used with "newpool". Specifies that + "newpool" is temporary. Temporary pool names last until + export. Ensures that the original pool name will be used in all label updates + and therefore is retained upon export. Will also set -o cachefile=none when + not explicitly specified.
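For example, the illustrative pool tank could be imported under the temporary name tanktmp, reverting to its original name upon export:
# zpool import -t tank tanktmp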
+

+

-m

+

+
Allows a pool to import when there is a missing log + device.
+

+
+

+

zpool iostat [-T d | u] + [-gLPvy] [pool] ... [interval[count]]

+

+
Displays I/O statistics for the given pools. When + given an interval, the statistics are printed every interval seconds + until Ctrl-C is pressed. If no pools are specified, statistics + for every pool in the system are shown. If count is specified, the + command exits after count reports are printed. +

-T u | d

+
Display a time stamp. +

Specify u for a printed representation of the internal + representation of time. See time(2). Specify d for standard + date format. See date(1).

+
+

+

-g

+
Display vdev GUIDs instead of the normal device names. + These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+

+

-L

+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name regardless of + the /dev/disk/ path used to open it.
+

+

-P

+
Display full paths for vdevs instead of only the last + component of the path. This can be used in conjunction with the -L + flag.
+

+

-v

+
Verbose statistics. Reports usage statistics for + individual vdevs within the pool, in addition to the pool-wide + statistics.
+

+

-y

+
Omit statistics since boot. Normally the first line of + output reports the statistics since boot. This option suppresses that first + line of output.
+

+
+

+

zpool labelclear [-f] device

+

+
Removes ZFS label information from the specified device. + The device must not be part of an active pool configuration. +

-f

+
Treat exported or foreign devices as inactive.
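For example, the label on a disk left over from an exported pool could be cleared with (the device name is only illustrative):
# zpool labelclear -f /dev/sdc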
+

+
+

+

zpool list [-T d | u] + [-HgLPv] [-o props[,...]] [pool] ... + [interval[count]]

+

+
Lists the given pools along with a health status and + space usage. If no pools are specified, all pools in the system are + listed. When given an interval, the information is printed every + interval seconds until Ctrl-C is pressed. If count is + specified, the command exits after count reports are printed. +

-H

+
Scripted mode. Do not display headers, and separate + fields by a single tab instead of arbitrary space.
+

+

-g

+
Display vdev GUIDs instead of the normal device names. + These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+

+

-L

+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name regardless of + the /dev/disk/ path used to open it.
+

+

-P

+
Display full paths for vdevs instead of only the last + component of the path. This can be used in conjunction with the -L + flag.
+

-T d | u

+
Display a time stamp. +

Specify u for a printed representation of the internal + representation of time. See time(2). Specify d for standard + date format. See date(1).

+
+

+

-o props

+
Comma-separated list of properties to display. See the + "Properties" section for a list of valid properties. The default + list is "name, size, used, available, fragmentation, expandsize, + capacity, dedupratio, health, altroot"
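For example, a monitoring script could request a tab-separated summary by combining -H with a custom property list (the selection shown is only illustrative):
# zpool list -H -o name,size,capacity,health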
+

+

-v

+
Verbose statistics. Reports usage statistics for + individual vdevs within the pool, in addition to the pool-wide + statistics.
+

+
+

+

zpool offline [-t] pool device + ...

+

+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or write to the device. +

This command is not applicable to spares or cache devices.

+

-t

+
Temporary. Upon reboot, the specified physical device + reverts to its previous state.
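For example, a disk could be taken offline only until the next reboot (pool and device names are illustrative):
# zpool offline -t tank sdc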
+

+
+

+

zpool online [-e] pool + device...

+

+
Brings the specified physical device online. +

This command is not applicable to spares or cache devices.

+

-e

+
Expand the device to use all available space. If the + device is part of a mirror or raidz then all devices must be expanded + before the new space will become available to the pool.
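For example, after every disk in a mirror has been swapped for a larger one, the extra space could be claimed with (pool and device names are illustrative):
# zpool online -e tank sda sdb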
+

+
+

+

zpool reguid pool

+

+
Generates a new unique identifier for the pool. You must + ensure that all devices in this pool are online and healthy before performing + this action.
+

+

zpool reopen pool

+

+
Reopen all the vdevs associated with the pool.
+

+

zpool remove pool device ...

+

+
Removes the specified device from the pool. This command + currently only supports removing hot spares, cache, and log devices. A + mirrored log device can be removed by specifying the top-level mirror for the + log. Non-log devices that are part of a mirrored configuration can be removed + using the zpool detach command. Non-redundant and raidz devices + cannot be removed from a pool.
+

+

zpool replace [-f] [-o + property=value] pool old_device [new_device]

+

+
Replaces old_device with new_device. This + is equivalent to attaching new_device, waiting for it to resilver, and + then detaching old_device. +

The size of new_device must be greater than or equal to the + minimum size of all the devices in a mirror or raidz + configuration.

+

new_device is required if the pool is not redundant. If + new_device is not specified, it defaults to old_device. This + form of replacement is useful after an existing disk has failed and has been + physically replaced. In this case, the new disk may have the same + /dev path as the old device, even though it is actually a different + disk. ZFS recognizes this.
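For example, after physically swapping a failed disk for a new one at the same /dev path, the in-place form could be used (the pool name tank is illustrative):
# zpool replace tank sdb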

+

-f

+
Forces use of new_device, even if it appears to + be in use. Not all devices can be overridden in this manner.
+

+

-o property=value

+

+
Sets the given pool properties. See the + "Properties" section for a list of valid properties that can be set. + The only property supported at the moment is ashift. Do note + that some properties (among them ashift) are not inherited from + a previous vdev. They are vdev specific, not pool specific.
+

+
+

+

zpool scrub [-s] pool ...

+

+
Begins a scrub. The scrub examines all data in the + specified pools to verify that it checksums correctly. For replicated (mirror + or raidz) devices, ZFS automatically repairs any damage + discovered during the scrub. The "zpool status" command + reports the progress of the scrub and summarizes the results of the scrub upon + completion. +

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to + be out of date (for example, when attaching a new device to a mirror or + replacing an existing device), whereas scrubbing examines all data to + discover silent errors due to hardware faults or disk failure.

+

Because scrubbing and resilvering are I/O-intensive + operations, ZFS only allows one at a time. If a scrub is already in + progress, the "zpool scrub" command terminates it and + starts a new scrub. If a resilver is in progress, ZFS does not allow + a scrub to be started until the resilver completes.

+

-s

+
Stop scrubbing.
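For example, a scrub of the illustrative pool tank could be started, and stopped again if it proves too disruptive:
# zpool scrub tank
# zpool scrub -s tank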
+

+
+

+

zpool set property=value + pool

+

+
Sets the given property on the specified pool. See the + "Properties" section for more information on what properties can be + set and acceptable values.
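For example, the autoexpand property could be turned on for the illustrative pool tank:
# zpool set autoexpand=on tank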
+

+

zpool split [-gLnP] [-R altroot] + [-o property=value] pool newpool [device + ...]

+

+
Split devices off pool creating newpool. + All vdevs in pool must be mirrors and the pool must not be in + the process of resilvering. At the time of the split, newpool will be a + replica of pool. By default, the last device in each mirror is split + from pool to create newpool. +

The optional device specification causes the specified + device(s) to be included in the new pool and, should any devices remain + unspecified, the last device in each mirror is used, as it would be by + default.
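For example, the illustrative mirrored pool tank could be split into a new pool named tank2, accepting the default choice of the last device in each mirror:
# zpool split tank tank2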

+

+

-g

+
Display vdev GUIDs instead of the normal device names. + These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+

+

-L

+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name regardless of + the /dev/disk/ path used to open it.
+

+

-n

+

+
Do dry run, do not actually perform the split. Print out + the expected configuration of newpool.
+

+

-P

+
Display full paths for vdevs instead of only the last + component of the path. This can be used in conjunction with the -L + flag.
+

+

-R altroot

+

+
Set altroot for newpool and automatically + import it. This can be useful to avoid mountpoint collisions if newpool + is imported on the same filesystem as pool.
+

+

-o property=value

+

+
Sets the specified property for newpool. See the + “Properties” section for more information on the available pool + properties.
+

+
+

+

zpool status [-gLPvxD] [-T d | u] + [pool] ... [interval [count]]

+

+
Displays the detailed health status for the given pools. + If no pool is specified, then the status of each pool in the system is + displayed. For more information on pool and device health, see the + "Device Failure and Recovery" section. +

If a scrub or resilver is in progress, this command reports the + percentage done and the estimated time to completion. Both of these are only + approximate, because the amount of data in the pool and the other workloads + on the system can change.

+

+

-g

+
Display vdev GUIDs instead of the normal device names. + These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+

+

-L

+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name regardless of + the /dev/disk/ path used to open it.
+

+

-P

+
Display full paths for vdevs instead of only the last + component of the path. This can be used in conjunction with the -L + flag.
+

+

-v

+
Displays verbose data error information, printing out a + complete list of all data errors since the last complete pool scrub.
+

+

-x

+
Only display status for pools that are exhibiting errors + or are otherwise unavailable. Warnings about pools not using the latest + on-disk format will not be included.
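For example, a quick health check that only reports problem pools:
# zpool status -x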
+

+

-D

+
Display a histogram of deduplication statistics, showing + the allocated (physically present on disk) and referenced (logically + referenced in the pool) block counts and sizes by reference count.
+

+

-T d | u

+
Display a time stamp. +

Specify u for a printed representation of the internal + representation of time. See time(2). Specify d for standard + date format. See date(1).

+
+

+

zpool upgrade

+

+
Displays pools which do not have all supported features + enabled and pools formatted using a legacy ZFS version number. These pools can + continue to be used, but some features may not be available. Use + "zpool upgrade -a" to enable all features on all pools.
+

+

zpool upgrade -v

+

+
Displays legacy ZFS versions supported by the + current software. See zpool-features(5) for a description of the feature + flags supported by the current software.
+

+

zpool upgrade [-V version] -a | + pool ...

+

+
Enables all supported features on the given pool. Once + this is done, the pool will no longer be accessible on systems that do not + support feature flags. See zpool-features(5) for details on compatibility + with systems that support feature flags, but do not support all features + enabled on the pool. +

-a

+
Enables all supported features on all pools.
+

+

-V version

+
Upgrade to the specified legacy version. If the -V + flag is specified, no features will be enabled on the pool. This option can + only be used to increase the version number up to the last supported legacy + version number.
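For example, a pool could be upgraded only as far as legacy version 28, leaving feature flags disabled (the pool name tank is illustrative):
# zpool upgrade -V 28 tank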
+

+
+

+
+
+
+
+

+

Example 1 Creating a RAID-Z Storage Pool

+

+

The following command creates a pool with a single raidz + root vdev that consists of six disks.

+

+

+
+

+
# zpool create tank raidz sda sdb sdc sdd sde sdf
+
+

+

+

Example 2 Creating a Mirrored Storage Pool

+

+

The following command creates a pool with two mirrors, where each + mirror contains two disks.

+

+

+
+

+
# zpool create tank mirror sda sdb mirror sdc sdd
+
+

+

+

Example 3 Creating a ZFS Storage Pool by Using + Partitions

+

+

The following command creates an unmirrored pool using two disk + partitions.

+

+

+
+

+
# zpool create tank sda1 sdb2
+
+

+

+

Example 4 Creating a ZFS Storage Pool by Using Files

+

+

The following command creates an unmirrored pool using files. + While not recommended, a pool based on files can be useful for experimental + purposes.

+

+

+
+

+
# zpool create tank /path/to/file/a /path/to/file/b
+
+

+

+

Example 5 Adding a Mirror to a ZFS Storage Pool

+

+

The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way mirrors. The + additional space is immediately available to any datasets within the + pool.

+

+

+
+

+
# zpool add tank mirror sda sdb
+
+

+

+

Example 6 Listing Available ZFS Storage Pools

+

+

The following command lists all available pools on the system. In + this case, the pool zion is faulted due to a missing device.

+

+

+

The results from this command are similar to the following:

+

+

+
+

+
# zpool list
+NAME    SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G    33%         -    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G    48%         -    32%  1.00x  ONLINE  -
+zion       -      -      -      -         -      -      -  FAULTED -
+
+

+

+

Example 7 Destroying a ZFS Storage Pool

+

+

The following command destroys the pool tank and any + datasets contained within.

+

+

+
+

+
# zpool destroy -f tank
+
+

+

+

Example 8 Exporting a ZFS Storage Pool

+

+

The following command exports the devices in pool tank so + that they can be relocated or later imported.

+

+

+
+

+
# zpool export tank
+
+

+

+

Example 9 Importing a ZFS Storage Pool

+

+

The following command displays available pools, and then imports + the pool tank for use on the system.

+

+

+

The results from this command are similar to the following:

+

+

+
+

+
# zpool import
+
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# zpool import tank
+
+

+

+

Example 10 Upgrading All ZFS Storage Pools to the Current + Version

+

+

The following command upgrades all ZFS Storage pools to the + current version of the software.

+

+

+
+

+
# zpool upgrade -a
+This system is currently running ZFS pool version 28.
+
+

+

+

Example 11 Managing Hot Spares

+

+

The following command creates a new pool with an available hot + spare:

+

+

+
+

+
# zpool create tank mirror sda sdb spare sdc
+
+

+

+

+

If one of the disks were to fail, the pool would be reduced to the + degraded state. The failed device can be replaced using the following + command:

+

+

+
+

+
# zpool replace tank sda sdd
+
+

+

+

+

Once the data has been resilvered, the spare is automatically + removed and is made available for use should another device fail. The hot + spare can be permanently removed from the pool using the following + command:

+

+

+
+

+
# zpool remove tank sdc
+
+

+

+

Example 12 Creating a ZFS Pool with Mirrored Separate + Intent Logs

+

+

The following command creates a ZFS storage pool consisting of + two, two-way mirrors and mirrored log devices:

+

+

+
+

+
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \
+
+ sde sdf
+
+

+

+

Example 13 Adding Cache Devices to a ZFS Pool

+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+

+

+
+

+
# zpool add pool cache sdc sdd
+
+

+

+

+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat option as follows:

+

+

+
+

+
# zpool iostat -v pool 5
+
+

+

+

Example 14 Removing a Mirrored Log Device

+

+

The following command removes the mirrored log device + mirror-2.

+

+

+

Given this configuration:

+

+

+
+

+
+
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
+
+

+

+

+

The command to remove the mirrored log mirror-2 is:

+

+

+
+

+
# zpool remove tank mirror-2
+
+

+

+

Example 15 Displaying expanded space on a device

+

+

The following command displays the detailed information for the + data pool. This pool is comprised of a single raidz vdev where + one of its devices increased its capacity by 10GB. In this example, the pool + will not be able to utilize this extra capacity until all the devices under + the raidz vdev have been expanded.

+

+

+
+

+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
+
  raidz1    23.9G  14.6G  9.30G    48%         -
    c1t1d0      -      -      -      -         -
    c1t2d0      -      -      -      -       10G
    c1t3d0      -      -      -      -         -
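Assuming the remaining devices are later replaced with larger ones as well, the additional capacity could then be made available with something along these lines (device names taken from the listing above; enabling the autoexpand pool property is an alternative):
# zpool online -e data c1t1d0 c1t2d0 c1t3d0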
+
+

+
+
+

+

The following exit values are returned:

+

0

+
Successful completion.
+

+

1

+
An error occurred.
+

+

2

+
Invalid command line options were specified.
+

+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes of running + ::findleaks.
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool looks for device + nodes and files. Similar to the -d option in zpool + import.
+
+
Cause zpool subcommands to output vdev guids by default. This + behavior is identical to the zpool status -g command line + option.
+ +
Cause zpool subcommands to follow links for vdev names by default. + This behavior is identical to the zpool status -L command line + option.
+
+
Cause zpool subcommands to output full vdev path names by default. + This behavior is identical to the zpool status -p command line + option. +

+
+
+
+
+

+

zfs(8), zpool-features(5), zfs-events(5)

+
+
+ + + + + +
14 December 2012ZFS pool 28, filesystem 5
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zstreamdump.8.html b/man/v0.6/8/zstreamdump.8.html new file mode 100644 index 000000000..1ed0f92ae --- /dev/null +++ b/man/v0.6/8/zstreamdump.8.html @@ -0,0 +1,197 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
zstreamdump(8)System Administration Commandszstreamdump(8)
+
+
+

+

zstreamdump - filter data in zfs send stream

+
+
+

+
zstreamdump [-C] [-v]
+

+
+
+

+

The zstreamdump utility reads from the output of the zfs + send command, then displays headers and some statistics from that + output. See zfs(1M).

+
+
+

+

The following options are supported:

+

-C

+

+
Suppress the validation of checksums.
+

+

-v

+

+
Verbose. Dump all headers, not only begin and end + headers.
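For example, a replication stream could be inspected as it is generated (the dataset and snapshot names are illustrative):
# zfs send tank/home@snap1 | zstreamdump -v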
+

+
+
+

+

zfs(8)

+
+
+ + + + + +
29 Aug 2012ZFS pool 28, filesystem 5
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/index.html b/man/v0.6/index.html new file mode 100644 index 000000000..6a3b35a39 --- /dev/null +++ b/man/v0.6/index.html @@ -0,0 +1,143 @@ + + + + + + + v0.6 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/cstyle.1.html b/man/v0.7/1/cstyle.1.html new file mode 100644 index 000000000..b6d7588b5 --- /dev/null +++ b/man/v0.7/1/cstyle.1.html @@ -0,0 +1,285 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
cstyle(1)General Commands Manualcstyle(1)
+
+
+

+

cstyle - check for some common stylistic errors in C source + files

+
+
+

+

cstyle [-chpvCP] [-o constructs] [file...]

+
+
+

+

cstyle inspects C source files (*.c and *.h) for common + stylistic errors. It attempts to check for the cstyle documented in + http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. Note that + there is much in that document that cannot be checked for; just + because your code is cstyle(1) clean does not mean that you've + followed Sun's C style. Caveat emptor.
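For example, a single file could be checked with the pickier rules and verbose output (the file name is only illustrative):
cstyle -pPv zio.c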

+
+
+

+

The following options are supported:

+
+
+
Check continuation line indentation inside of functions. Sun's C style + states that all statements must be indented to an appropriate tab stop, + and any continuation lines after them must be indented exactly four + spaces from the start line. This option enables a series of checks + designed to find continuation line problems within functions only. The + checks have some limitations; see CONTINUATION CHECKING, below.
+
+
Performs heuristic checks that are sometimes wrong. Not generally + used.
+
+
Performs some of the more picky checks. Includes ANSI #else and #endif + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current continuation block.
+
+
Ignore errors in header comments (i.e. block comments starting in the + first column). Not generally used.
+
+
Check for use of non-POSIX types. Historically, types like + "u_int" and "u_long" were used, but they are now + deprecated in favor of the POSIX types uint_t, ulong_t, etc. This detects + any use of the deprecated types. Used as part of the putback checks.
+
+
Allow a comma-separated list of additional constructs. Available + constructs include:
+
+
Allow doxygen-style block comments (/** and /*!)
+
+
Allow splint-style lint comments (/*@...@*/)
+
+
+
+

+

The cstyle rule for the OS/Net consolidation is that all new files + must be -pP clean. For existing files, the following invocations are + run against both the old and new files:

+
+
+
+
+
+
+
+
+

If the old file gave no errors for one of the invocations, the new + file must also give no errors. This way, files can only become more + clean.

+
+
+

+

The continuation checker is a reasonably simple state machine that + knows something about how C is laid out, and can match parenthesis, etc. + over multiple lines. It does have some limitations:

+
+
1.
+
Preprocessor macros which cause unmatched parenthesis will confuse the + checker for that line. To fix this, you'll need to make sure that each + branch of the #if statement has balanced parenthesis.
+
2.
+
Some cpp macros do not require ;s after them. Any such macros + *must* be ALL_CAPS; any lower case letters will cause bad output.
+
+

The bad output will generally be corrected after the next + ;, {, or }.

+

Some continuation error messages deserve some additional + explanation

+
+
+
A multi-line statement which is not broken at statement boundaries. For + example:
+
+
+

if (this_is_a_long_variable == another_variable) a = +
+ b + c;

+

Will trigger this error. Instead, do:

+

if (this_is_a_long_variable == another_variable) +
+ a = b + c;

+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example:
+
+
+

while (do_something(&x) == 0);

+

Will trigger this error. Instead, do:

+

while (do_something(&x) == 0) +
+ ;

+
+

+
+
+ + + + + +
28 March 2005
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/index.html b/man/v0.7/1/index.html new file mode 100644 index 000000000..589b1c8ac --- /dev/null +++ b/man/v0.7/1/index.html @@ -0,0 +1,153 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/raidz_test.1.html b/man/v0.7/1/raidz_test.1.html new file mode 100644 index 000000000..c043d3e64 --- /dev/null +++ b/man/v0.7/1/raidz_test.1.html @@ -0,0 +1,260 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
raidz_test(1)User Commandsraidz_test(1)
+
+

+
+

+

raidz_test - raidz implementation verification and + benchmarking tool

+
+
+

+

raidz_test <options>

+
+
+

+

This manual page documents briefly the raidz_test + command.

+

The purpose of this tool is to run all supported raidz implementations + and verify the results of all methods. The tool also contains a parameter sweep + option where all parameters affecting a RAIDZ block are verified (like ashift + size, data offset, data size, etc.). The tool also supports a benchmarking + mode using the -B option.
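For example, a single verification pass with a 4K sector size (an ashift of 12) and otherwise default geometry might be invoked as follows (the option values are illustrative):
raidz_test -a 12 -d 8 -s 19 -v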

+
+
+

+

-h

+
+
+
Print a help summary.
+
+

-a ashift (default: 9)

+
+
+
Ashift value.
+
+

-o zio_off_shift (default: 0)

+
+
+
Zio offset for raidz block. Offset value is 1 << + (zio_off_shift)
+
+

-d raidz_data_disks (default: 8)

+
+
+
Number of raidz data disks to use. Additional disks for parity will be + used during testing.
+
+

-s zio_size_shift (default: 19)

+
+
+
Size of data for raidz block. Size is 1 << (zio_size_shift).
+
+

-S(weep)

+
+
+
Sweep parameter space while verifying the raidz implementations. This + option will exhaust most of the valid values for the -a, -o, -d, and -s options. + Runtime using this option will be long.
+
+

-t(imeout)

+
+
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
+

-B(enchmark)

+
+
+
This options starts the benchmark mode. All implementations are + benchmarked using increasing per disk data size. Results are given as + throughput per disk, measured in MiB/s.
+
+

-v(erbose)

+
+
+
Increase verbosity.
+
+

-T(est the test)

+
+
+
Debugging option. When this option is specified, the tool is expected to fail + all tests. This is to check that the tests properly verify + bit-exactness.
+
+

-D(ebug)

+
+
+
Debugging option. Specify to attach gdb when SIGSEGV or SIGABRT are + received.
+
+

+

+
+
+

+

ztest (1)

+
+
+

+

vdev_raidz, created for ZFS on Linux by Gvozden + Nešković <neskovic@gmail.com>

+
+
+ + + + + +
2016ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/zhack.1.html b/man/v0.7/1/zhack.1.html new file mode 100644 index 000000000..02b6051ee --- /dev/null +++ b/man/v0.7/1/zhack.1.html @@ -0,0 +1,253 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
zhack(1)User Commandszhack(1)
+
+

+
+

+

zhack - libzpool debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+

zhack [-c cachefile] [-d dir] + <subcommand> [arguments]

+
+
+

+

-c cachefile

+
+
+
Read the pool configuration from the cachefile, which is + /etc/zfs/zpool.cache by default.
+
+

-d dir

+
+
+
Search for pool members in the dir path. Can be specified + more than once.
+
+
+
+

+

feature stat pool

+
+
+
List feature flags.
+
+

feature enable [-d description] [-r] pool + guid

+
+
+
Add a new feature to pool that is uniquely identified by + guid, which is specified in the same form as a zfs(8) user + property.
+
+
The description is a short human readable explanation of the new + feature.
+
+
The -r switch indicates that pool can be safely opened in + read-only mode by a system that does not have the guid + feature.
+
+

feature ref [-d|-m] pool guid

+
+
+
Increment the reference count of the guid feature in + pool.
+
+
The -d switch decrements the reference count of the guid + feature in pool.
+
+
The -m switch indicates that the guid feature is now + required to read the pool MOS.
+
+
+
+

+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
# zhack feature enable -d 'Predict future disk failures.' \
+
+ tank com.example:clairvoyance
+
# zhack feature ref tank com.example:clairvoyance
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

splat(1), zfs(8), zpios(1), + zpool-features(5), ztest(1)

+
+
+ + + + + +
2013 MAR 16ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/zpios.1.html b/man/v0.7/1/zpios.1.html new file mode 100644 index 000000000..63d1e0efd --- /dev/null +++ b/man/v0.7/1/zpios.1.html @@ -0,0 +1,420 @@ + + + + + + + zpios.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpios.1

+
+ + + + + +
zpios(1)User Commandszpios(1)
+
+

+
+

+

zpios - Directly test the DMU.

+
+
+

+

zpios [options] <-p pool>

+

+
+
+

+

This utility runs in-kernel DMU performance and stress tests that + do not depend on the ZFS Posix Layer ("ZPL").
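For example, a small run against the illustrative pool tank, with two thread counts and equal chunk and region sizes, might look like:
zpios -p tank -t 1,2 -n 16 -c 1M -s 1M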

+

+
+
+

+

-t regex, --threadcount regex

+
+
+
Start this many threads for each test series, specified as a comma + delimited regular expression. (eg: "-t 1,2,3")
+
+
This option is mutually exclusive with the threadcount_* + options.
+
+

-l regex_low, --threadcount_low + regex_low

+

-h regex_high, --threadcount_high + regex_high

+

-e regex_incr, --threadcount_incr + regex_incr

+
+
+
Start regex_low threads for the first test, add regex_incr + threads for each subsequent test, and start regex_high threads for + the last test.
+
+
These three options must be specified together and are mutually exclusive + with the threadcount option.
+
+

-n regex, --regioncount regex

+
+
+
Create this many regions for each test series, specified as a comma + delimited regular expression. (eg: "-n 512,4096,65536")
+
+
This option is mutually exclusive with the regioncount_* + options.
+
+

-i regex_low, --regioncount_low + regex_low

+

-j regex_high, --regioncount_high + regex_high

+

-k regex_incr, --regioncount_incr + regex_incr

+
+
+
Create regex_low regions for the first test, add regex_incr + regions for each subsequent test, and create regex_high regions for + the last test.
+
+
These three options must be specified together and are mutually exclusive + with the regioncount option.
+
+

-o size, --offset size

+
+
+
Create regions at size offset for each test series, specified as a + comma delimited regular expression with an optional unit suffix. (eg: + "-o 4M" means four megabytes.)
+
+
This option is mutually exclusive with the offset_* options.
+
+

-m size_low, --offset_low + size_low

+

-q size_high, --offset_high + size_high

+

-r size_incr, --offset_incr + size_incr

+
+
+
Create a region at size_low offset for the first test, add + size_incr to the offset for each subsequent test, and create a + region at size_high offset for the last test.
+
+
These three options must be specified together and are mutually exclusive + with the offset option.
+
+

-c size, --chunksize size

+
+
+
Use size chunks for each test, specified as a comma delimited + regular expression with an optional unit suffix. (eg: "-c 1M" + means one megabyte.) The chunk size must be at least the region size.
+
+
This option is mutually exclusive with the chunksize_* + options.
+
+

-a size_low, --chunksize_low + size_low

+

-b size_high, --chunksize_high + size_high

+

-g size_incr, --chunksize_incr + size_incr

+
+
+
Use a size_low chunk size for the first test, add size_incr + to the chunk size for each subsequent test, and use a size_high + chunk size for the last test.
+
+
These three options must be specified together and are mutually exclusive + with the chunksize option.
+
+

-s size, --regionsize size

+
+
+
Use size regions for each test, specified as a comma delimited + regular expression with an optional unit suffix. (eg: "-s 1M" + means one megabyte.)
+
+
This option is mutually exclusive with the regionsize_* + options.
+
+

-A size_low, --regionsize_low + size_low

+

-B size_high, --regionsize_high + size_high

+

-C size_incr, --regionsize_incr + size_incr

+
+
+
Use a size_low region size for the first test, add size_incr + to the region size for each subsequent test, and use a size_high + region size for the last test.
+
+
These three options must be specified together and are mutually exclusive + with the regionsize option.
+
+

-S size | sizes, --blocksize size | + sizes

+
+
+
Use size ZFS blocks for each test, specified as a comma delimited + regular expression with an optional unit suffix. (eg: "-S 1M" + means one megabyte.) The supported range is powers of two from 128K + through 16M. A range of blocks can be tested as follows: "-S + 128K,256K,512K,1M".
+
+

-L dmu_flags, --load dmu_flags

+
+
+
Specify dmuio for regular DMU_IO, ssf for single shared file + access, or fpp for per thread access. Use commas to delimit + multiple flags. (eg: "-L dmuio,ssf")
+
+

-p name, --pool name

+
+
+
The pool name, which is mandatory.
+
+

-M test, --name test

+
+
+
An arbitrary string that appears in the program output.
+
+

-x, --cleanup

+
+
+
Enable the DMU_REMOVE flag.
+
+

-P command, --prerun command

+
+
+
Invoke command from the kernel before running the test. Shell + expansion is not performed and the environment is set to HOME=/; + TERM=linux; PATH=/sbin:/usr/sbin:/bin:/usr/bin.
+
+

-R command, --postrun command

+
+
+
Invoke command from the kernel after running the test. Shell + expansion is not performed and the environment is set to HOME=/; + TERM=linux; PATH=/sbin:/usr/sbin:/bin:/usr/bin.
+
+

-G directory, --log directory

+
+
+
Put logging output in this directory.
+
+

-I size, --regionnoise size

+
+
+
Randomly vary the regionsize parameter for each test modulo + size bytes.
+
+

-N size, --chunknoise size

+
+
+
Randomly vary the chunksize parameter for each test modulo + size bytes.
+
+

-T time, --threaddelay time

+
+
+
Randomly vary the execution time for each test modulo time kernel + jiffies.
+
+

-V, --verify

+
+
+
Enable the DMU_VERIFY flag for trivial data verification.
+
+

-z, --zerocopy

+
+
+
Enable the DMU_READ_ZC and DMU_WRITE_ZC flags, which are currently + unimplemented for Linux.
+
+

-O, --nowait

+
+
+
Enable the DMU_WRITE_NOWAIT flag.
+
+

-f, --noprefetch

+
+
+
Enable the DMU_READ_NOPF flag.
+
+

-H, --human-readable

+
+
+
Print PASS and FAIL results explicitly and put unit suffixes on large + numbers.
+
+

-v, --verbose

+
+
+
Increase output verbosity.
+
+

-? , --help

+
+
+
Print the usage message.
+
+
+
+

+

The original zpios implementation was created by Cluster File + Systems Inc and adapted to ZFS on Linux by Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/ztest.1.html b/man/v0.7/1/ztest.1.html new file mode 100644 index 000000000..067e67d0a --- /dev/null +++ b/man/v0.7/1/ztest.1.html @@ -0,0 +1,344 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ztest(1)User Commandsztest(1)
+
+

+
+

+

ztest - was written by the ZFS Developers as a ZFS unit + test.

+
+
+

+

ztest <options>

+
+
+

+

This manual page documents briefly the ztest command.

+

ztest was written by the ZFS Developers as a ZFS unit test. + The tool was developed in tandem with the ZFS functionality and was executed + nightly as one of the many regression tests against the daily build. As + features were added to ZFS, unit tests were also added to ztest. In + addition, a separate test development team wrote and executed more + functional and stress tests.

+

By default ztest runs for five minutes and uses block files + (stored in /tmp) to create pools rather than using physical disks. Block + files afford ztest its flexibility to play around with zpool + components without requiring large hardware configurations. However, storing + the block files in /tmp may not work for you if you have a small tmp + directory.

+

By default ztest is non-verbose, which is why entering the command above + will result in ztest quietly executing for 5 minutes. The -V option + can be used to increase the verbosity of the tool. Adding multiple -V options + is allowed, and the more you add the more chatty ztest becomes.

+

After the ztest run completes, you should notice many + ztest.* files lying around. Once the run completes you can safely remove + these files. Note that you shouldn't remove these files during a run. You + can re-use these files in your next ztest run by using the -E + option.

+
+
+

+

-?

+
+
+
Print a help summary.
+
+

-v vdevs (default: 5)

+
+
+
Number of vdevs.
+
+

-s size_of_each_vdev (default: 64M)

+
+
+
Size of each vdev.
+
+

-a alignment_shift (default: 9) (use 0 for + random)

+
+
+
Used alignment in test.
+
+

-m mirror_copies (default: 2)

+
+
+
Number of mirror copies.
+
+

-r raidz_disks (default: 4)

+
+
+
Number of raidz disks.
+
+

-R raidz_parity (default: 1)

+
+
+
Raidz parity.
+
+

-d datasets (default: 7)

+
+
+
Number of datasets.
+
+

-t threads (default: 23)

+
+
+
Number of threads.
+
+

-g gang_block_threshold (default: 32K)

+
+
+
Gang block threshold.
+
+

-i initialize_pool_i_times (default: + 1)

+
+
+
Number of pool initialisations.
+
+

-k kill_percentage (default: 70%)

+
+
+
Kill percentage.
+
+

-p pool_name (default: ztest)

+
+
+
Pool name.
+
+

-V(erbose)

+
+
+
Verbose (use multiple times for ever more blather).
+
+

-E(xisting)

+
+
+
Use existing pool (use existing pool instead of creating new one).
+
+

-T time (default: 300 sec)

+
+
+
Total test run time.
+
+

-z zil_failure_rate (default: fail every 2^5 + allocs)

+
+
+
Injected failure rate.
+
+
+
+

+

To override /tmp as your location for block files, you can use the + -f option:

+
+
+
ztest -f /
+
+

To get an idea of what ztest is actually testing try this:

+
+
+
ztest -f / -VVV
+
+

Maybe you'd like to run ztest for longer? To do so simply use the + -T option and specify the runlength in seconds like so:

+
+
+
ztest -f / -V -T 120 +

+
+
+
+
+

+
+
+
Use id instead of the SPL hostid to identify this host. Intended + for use with ztest, but this environment variable will affect any utility + which uses libzpool, including zpool(8). Since the kernel is + unaware of this setting results with utilities other than ztest are + undefined.
+
+
Limit the default stack size to stacksize bytes for the purpose of + detecting and debugging kernel stack overflows. This value defaults to + 32K which is double the default 16K Linux kernel stack size. +

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to 256K.

+
+
+
+
+

+

spl-module-parameters (5), zpool (1), zfs + (1), zdb (1)

+
+
+

+

This manual page was converted to asciidoc by Michael + Gebetsroither <gebi@grml.org> from + http://opensolaris.org/os/community/zfs/ztest/

+
+
+ + + + + +
2009 NOV 01ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/5/index.html b/man/v0.7/5/index.html new file mode 100644 index 000000000..6f734f414 --- /dev/null +++ b/man/v0.7/5/index.html @@ -0,0 +1,151 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/5/vdev_id.conf.5.html b/man/v0.7/5/vdev_id.conf.5.html new file mode 100644 index 000000000..8166c4b0c --- /dev/null +++ b/man/v0.7/5/vdev_id.conf.5.html @@ -0,0 +1,344 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
vdev_id.conf(5)File Formats Manualvdev_id.conf(5)
+
+
+

+

vdev_id.conf - Configuration file for vdev_id

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of vdev_id(8) + while it is mapping a disk device name to an alias.

+

The vdev_id.conf file uses a simple format consisting of a + keyword followed by one or more values on a single line. Any line not + beginning with a recognized keyword is ignored. Comments may optionally + begin with a hash character.

+

The following keywords and values are used.

+
+
+
Maps a device link in the /dev directory hierarchy to a new device name. + The udev rule defining the device link must have run prior to + vdev_id(8). A defined alias takes precedence over a + topology-derived name, but the two naming methods can otherwise coexist. + For example, one might name drives in a JBOD with the sas_direct topology + while naming an internal L2ARC device with an alias. +

name - the name of the link to the device that will be + created in /dev/disk/by-vdev.

+

devlink - the name of the device link that has already + been defined by udev. This may be an absolute path or the base + filename.

+

+
+
+
Maps a physical path to a channel name (typically representing a single + disk enclosure). +

+
+ +
Additionally create /dev/by-enclosure symlinks to the disk enclosure sg + devices using the naming scheme from vdev_id.conf. + enclosure_symlinks is only allowed for sas_direct mode.
+ +
Specify the prefix for the enclosure symlinks in the form of: +

/dev/by-enclosure/<prefix>-<channel><num>

+

Defaults to "enc" if not specified.

+
+
+
pci_slot - specifies the PCI slot of the HBA + hosting the disk enclosure being mapped, as found in the output of + lspci(8). This argument is not used in sas_switch mode.

port - specifies the numeric identifier of the HBA or + SAS switch port connected to the disk enclosure being mapped.

+

name - specifies the name of the channel.

+

+
+
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is specified then + the mapping is only applied to slots in the named channel, otherwise the + mapping is applied to all channels. The first-specified slot rule + that can match a slot takes precedence. Therefore a channel-specific + mapping for a given slot should generally appear before a generic mapping + for the same slot. In this way a custom mapping may be applied to a + particular channel and a default mapping applied to the others. +

+
+
+
Specifies whether vdev_id(8) will handle only dm-multipath devices. + If set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely + identified by a PCI slot and a HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+

+
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4. +

+
+
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay. +

bay - read the slot number from the bay identifier.

+

phy - read the slot number from the phy identifier.

+

port - use the SAS port as the slot number.

+

id - use the scsi id as the slot number.

+

lun - use the scsi lun as the slot number.

+

ses - use the SCSI Enclosure Services (SES) enclosure + device slot number, as reported by sg_ses(8). This is intended + for use only on systems where bay is unsupported, noting that + port and id may be unstable across disk replacement.

+
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping.

+

+
	multipath     no
+	topology      sas_direct
+	phys_per_port 4
+	slot          bay
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         C
+	channel 86:00.0  0         D
+	# Custom mapping for Channel A
+	#    Linux      Mapped
+	#    Slot       Slot      Channel
+	slot 1          7         A
+	slot 2          10        A
+	slot 3          3         A
+	slot 4          6         A
+	# Default mapping for B, C, and D
+	slot 1          4
+	slot 2          2
+	slot 3          1
+	slot 4          3
+

A SAS-switch topology. Note that the channel keyword takes + only two arguments in this example.

+

+
	topology      sas_switch
+	#       SWITCH PORT  CHANNEL NAME
+	channel 1            A
+	channel 2            B
+	channel 3            C
+	channel 4            D
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path.

+

+
	multipath yes
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         A
+	channel 86:00.0  0         B
+

A configuration with enclosure_symlinks enabled.

+

+
	multipath yes
+	enclosure_symlinks yes
+	#          PCI_ID      HBA PORT     CHANNEL NAME
+	channel    05:00.0     1            U
+	channel    05:00.0     0            L
+	channel    06:00.0     1            U
+	channel    06:00.0     0            L
+In addition to the disk symlinks, this configuration will create: +

+
	/dev/by-enclosure/enc-L0
+	/dev/by-enclosure/enc-L1
+	/dev/by-enclosure/enc-U0
+	/dev/by-enclosure/enc-U1
+

A configuration using device link aliases.

+

+
	#     by-vdev
+	#     name     fully qualified or base name of device link
+	alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+	alias d2       wwn-0x5000c5002def789e
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/5/zfs-events.5.html b/man/v0.7/5/zfs-events.5.html new file mode 100644 index 000000000..c4e488bd6 --- /dev/null +++ b/man/v0.7/5/zfs-events.5.html @@ -0,0 +1,777 @@ + + + + + + + zfs-events.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-events.5

+
+ + + + + +
ZFS-EVENTS(5)File Formats ManualZFS-EVENTS(5)
+
+
+

+

zfs-events - Events created by the ZFS filesystem.

+
+
+

+

Description of the different events generated by the ZFS + stack.

+

Most of these don't have any description. The events generated by + ZFS have never been publicly documented. What is here is intended as a + starting point to provide documentation for all possible events.

+

To view all events created since the loading of the ZFS + infrastructure (i.e., "the module"), run

+

+
zpool events
+

to get a short list, and

+

+
zpool events -v
+

to get a full detail of the events and what information is + available about it.

+

This man page lists the different subclasses that are issued in + the case of an event. The full event name would be + ereport.fs.zfs.SUBCLASS, but we only list the last part here.
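For example, new events can be watched as they arrive and a particular subclass singled out (the use of grep here is only illustrative):
zpool events -f | grep checksum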

+

+
+

+

+

checksum

+
Issued when a checksum error has been detected.
+

+

io

+
Issued when there is an I/O error in a vdev in the + pool.
+

+

data

+
Issued when there have been data errors in the + pool.
+

+

delay

+
Issued when an I/O was slow to complete as defined by the + zio_delay_max module option.
+

+

config.sync

+
Issued every time a vdev change has been made to the + pool.
+

+

zpool

+
Issued when a pool cannot be imported.
+

+

zpool.destroy

+
Issued when a pool is destroyed.
+

+

zpool.export

+
Issued when a pool is exported.
+

+

zpool.import

+
Issued when a pool is imported.
+

+

zpool.reguid

+
Issued when a REGUID (a new unique identifier for the pool + has been regenerated) has been detected.
+

+

vdev.unknown

+
Issued when the vdev is unknown. Such as trying to clear + device errors on a vdev that has failed or been kicked from the system/pool and + is no longer available.
+

+

vdev.open_failed

+
Issued when a vdev could not be opened (because it didn't + exist for example).
+

+

vdev.corrupt_data

+
Issued when corrupt data has been detected on a + vdev.
+

+

vdev.no_replicas

+
Issued when there are no more replicas to sustain the + pool. This would lead to the pool being DEGRADED.
+

+

vdev.bad_guid_sum

+
Issued when a missing device in the pool has been + detected.
+

+

vdev.too_small

+
Issued when the system (kernel) has removed a device, + and ZFS notices that the device isn't there any more. This is usually followed + by a probe_failure event.
+

+

vdev.bad_label

+
Issued when the label is OK but invalid.
+

+

vdev.bad_ashift

+
Issued when the ashift alignment requirement has + increased.
+

+

vdev.remove

+
Issued when a vdev is detached from a mirror (or a spare + detached from a vdev where it has been used to replace a failed drive - only + works if the original drive has been re-added).
+

+

vdev.clear

+
Issued when clearing device errors in a pool. Such as + running zpool clear on a device in the pool.
+

+

vdev.check

+
Issued when a check to see if a given vdev could be + opened is started.
+

+

vdev.spare

+
Issued when a spare has kicked in to replace a failed + device.
+

+

vdev.autoexpand

+
Issued when a vdev can be automatically expanded.
+

+

io_failure

+
Issued when there is an I/O failure in a vdev in the + pool.
+

+

probe_failure

+
Issued when a probe fails on a vdev. This would occur if + a vdev has been kicked from the system outside of ZFS (such as when the kernel + has removed the device).
+

+

log_replay

+
Issued when the intent log cannot be replayed. This can + occur in the case of a missing or damaged log device.
+

+

resilver.start

+
Issued when a resilver is started.
+

+

resilver.finish

+
Issued when the running resilver has finished.
+

+

scrub.start

+
Issued when a scrub is started on a pool.
+

+

scrub.finish

+
Issued when a pool has finished scrubbing.
+

+

bootfs.vdev.attach

+
+

+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with + ZEVENT_.

+

+

pool

+
Pool name.
+

+

pool_failmode

+
Failmode - wait, continue or panic. + See zpool(8) (failmode property) for more information.
+

+

pool_guid

+
The GUID of the pool.
+

+

pool_context

+
The load state for the pool (0=none, 1=open, 2=import, + 3=tryimport, 4=recover, 5=error).
+

+

vdev_guid

+
The GUID of the vdev in question (the vdev failing or + operated upon with zpool clear etc).
+

+

vdev_type

+
Type of vdev - disk, file, mirror + etc. See zpool(8) under Virtual Devices for more information on + possible values.
+

+

vdev_path

+
Full path of the vdev, including any -partX.
+

+

vdev_devid

+
ID of vdev (if any).
+

+

vdev_fru

+
Physical FRU location.
+

+

vdev_state

+
State of vdev (0=uninitialized, 1=closed, 2=offline, + 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healthy).
+

+

vdev_ashift

+
The ashift value of the vdev.
+

+

vdev_complete_ts

+
The time the last I/O completed for the specified + vdev.
+

+

vdev_delta_ts

+
The time since the last I/O completed for the specified + vdev.
+

+

vdev_spare_paths

+
List of spares, including full path and any + -partX.
+

+

vdev_spare_guids

+
GUID(s) of spares.
+

+

vdev_read_errors

+
The number of read errors that have been detected on the + vdev.
+

+

vdev_write_errors

+
The number of write errors that have been detected on the + vdev.
+

+

vdev_cksum_errors

+
The number of checksum errors that have been detected on the + vdev.
+

+

parent_guid

+
GUID of the vdev parent.
+

+

parent_type

+
Type of parent. See vdev_type.
+

+

parent_path

+
Path of the vdev parent (if any).
+

+

parent_devid

+
ID of the vdev parent (if any).
+

+

zio_objset

+
The object set number for a given I/O.
+

+

zio_object

+
The object number for a given I/O.
+

+

zio_level

+
The block level for a given I/O.
+

+

zio_blkid

+
The block ID for a given I/O.
+

+

zio_err

+
The errno for a failure when handling a given I/O.
+

+

zio_offset

+
The offset in bytes of where to write the I/O for the + specified vdev.
+

+

zio_size

+
The size in bytes of the I/O.
+

+

zio_flags

+
The current flags describing how the I/O should be + handled. See the I/O FLAGS section for the full list of I/O + flags.
+

+

zio_stage

+
The current stage of the I/O in the pipeline. See the + I/O STAGES section for a full list of all the I/O stages.
+

+

zio_pipeline

+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+

+

zio_delay

+
The time in ticks (HZ) required for the block layer to + service the I/O. Unlike zio_delta this does not include any vdev + queuing time and is therefore solely a measure of the block layer performance. + On most modern Linux systems HZ is defined as 1000 making a tick equivalent to + 1 millisecond.
+

+

zio_timestamp

+
The time when a given I/O was submitted.
+

+

zio_delta

+
The time required to service a given I/O.
+

+

prev_state

+
The previous state of the vdev.
+

+

cksum_expected

+
The expected checksum value.
+

+

cksum_actual

+
The actual/current checksum value.
+

+

cksum_algorithm

+
Checksum algorithm used. See zfs(8) for more + information on checksum algorithms available.
+

+

cksum_byteswap

+
Checksum value is byte swapped.
+

+

bad_ranges

+
Checksum bad offset ranges.
+

+

bad_ranges_min_gap

+
Checksum allowed minimum gap.
+

+

bad_range_sets

+
Checksum for each range the number of bits set.
+

+

bad_range_clears

+
Checksum for each range the number of bits cleared.
+

+

bad_set_bits

+
Checksum array of bits set.
+

+

bad_cleared_bits

+
Checksum array of bits cleared.
+

+

bad_set_histogram

+
Checksum histogram of set bits by bit number in a 64-bit + word.
+

+

bad_cleared_histogram

+
Checksum histogram of cleared bits by bit number in a + 64-bit word.
+

+
+
+

+

The ZFS I/O pipeline is comprised of various stages which are + defined below. The individual stages are used to construct these basic I/O + operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on + an event to describe the life cycle of a given I/O.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Stage                           Bit Mask      Operations
-----                           --------      ----------
ZIO_STAGE_OPEN                  0x00000001    RWFCI
ZIO_STAGE_READ_BP_INIT          0x00000002    R----
ZIO_STAGE_FREE_BP_INIT          0x00000004    --F--
ZIO_STAGE_ISSUE_ASYNC           0x00000008    RWF--
ZIO_STAGE_WRITE_BP_INIT         0x00000010    -W---
ZIO_STAGE_CHECKSUM_GENERATE     0x00000020    -W---
ZIO_STAGE_NOP_WRITE             0x00000040    -W---
ZIO_STAGE_DDT_READ_START        0x00000080    R----
ZIO_STAGE_DDT_READ_DONE         0x00000100    R----
ZIO_STAGE_DDT_WRITE             0x00000200    -W---
ZIO_STAGE_DDT_FREE              0x00000400    --F--
ZIO_STAGE_GANG_ASSEMBLE         0x00000800    RWFC-
ZIO_STAGE_GANG_ISSUE            0x00001000    RWFC-
ZIO_STAGE_DVA_ALLOCATE          0x00002000    -W---
ZIO_STAGE_DVA_FREE              0x00004000    --F--
ZIO_STAGE_DVA_CLAIM             0x00008000    ---C-
ZIO_STAGE_READY                 0x00010000    RWFCI
ZIO_STAGE_VDEV_IO_START         0x00020000    RW--I
ZIO_STAGE_VDEV_IO_DONE          0x00040000    RW--I
ZIO_STAGE_VDEV_IO_ASSESS        0x00080000    RW--I
ZIO_STAGE_CHECKSUM_VERIFY       0x00100000    R----
ZIO_STAGE_DONE                  0x00200000    RWFCI
+

+
+
+

+

Every I/O in the pipeline contains a set of flags which describe + its function and are used to govern its behavior. These flags will be set in + an event as an zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Flag                       Bit Mask
----                       --------
ZIO_FLAG_DONT_AGGREGATE    0x00000001
ZIO_FLAG_IO_REPAIR         0x00000002
ZIO_FLAG_SELF_HEAL         0x00000004
ZIO_FLAG_RESILVER          0x00000008
ZIO_FLAG_SCRUB             0x00000010
ZIO_FLAG_SCAN_THREAD       0x00000020
ZIO_FLAG_PHYSICAL          0x00000040
ZIO_FLAG_CANFAIL           0x00000080
ZIO_FLAG_SPECULATIVE       0x00000100
ZIO_FLAG_CONFIG_WRITER     0x00000200
ZIO_FLAG_DONT_RETRY        0x00000400
ZIO_FLAG_DONT_CACHE        0x00000800
ZIO_FLAG_NODATA            0x00001000
ZIO_FLAG_INDUCE_DAMAGE     0x00002000
ZIO_FLAG_IO_RETRY          0x00004000
ZIO_FLAG_PROBE             0x00008000
ZIO_FLAG_TRYHARD           0x00010000
ZIO_FLAG_OPTIONAL          0x00020000
ZIO_FLAG_DONT_QUEUE        0x00040000
ZIO_FLAG_DONT_PROPAGATE    0x00080000
ZIO_FLAG_IO_BYPASS         0x00100000
ZIO_FLAG_IO_REWRITE        0x00200000
ZIO_FLAG_RAW               0x00400000
ZIO_FLAG_GANG_CHILD        0x00800000
ZIO_FLAG_DDT_CHILD         0x01000000
ZIO_FLAG_GODFATHER         0x02000000
ZIO_FLAG_NOPWRITE          0x04000000
ZIO_FLAG_REEXECUTED        0x08000000
ZIO_FLAG_DELEGATED         0x10000000
ZIO_FLAG_FASTWRITE         0x20000000
+
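As an illustrative sketch (the exact output formatting can differ between versions), events carrying these payload entries can be viewed with the zpool events command:

    # Print recent events together with their full payloads (run as root).
    # Checksum and I/O ereports include entries such as zio_stage,
    # zio_pipeline and zio_flags, which can be decoded using the tables above.
    zpool events -v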
+
+
+ + + + + +
June 6, 2015
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/5/zfs-module-parameters.5.html b/man/v0.7/5/zfs-module-parameters.5.html new file mode 100644 index 000000000..643096435 --- /dev/null +++ b/man/v0.7/5/zfs-module-parameters.5.html @@ -0,0 +1,1739 @@ + + + + + + + zfs-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-module-parameters.5

+
+ + + + + +
ZFS-MODULE-PARAMETERS(5)        File Formats Manual        ZFS-MODULE-PARAMETERS(5)
+
+
+

+

zfs-module-parameters - ZFS module parameters

+
+
+

+

Description of the different parameters to the ZFS module.
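As a brief, hedged illustration (the path and values below are the usual mechanism on ZFS on Linux, not something mandated by this page), most of these parameters can be inspected and changed at runtime through /sys/module/zfs/parameters/, or set persistently with module options; zfs_txg_timeout is used here only as an example:

    # Read the current value of a parameter
    cat /sys/module/zfs/parameters/zfs_txg_timeout
    # Change it at runtime (as root)
    echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout
    # Apply it at every module load
    echo "options zfs zfs_txg_timeout=10" >> /etc/modprobe.d/zfs.conf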

+

+
+

+

+

ignore_hole_birth (int)

+
When set, the hole_birth optimization will not be used, + and all holes will always be sent on zfs send. Useful if you suspect your + datasets are affected by a bug in hole_birth. +

Use 1 for on (default) and 0 for off.

+
+

+

l2arc_feed_again (int)

+
Turbo L2ARC warm-up. When the L2ARC is cold the fill + interval will be set as fast as possible. +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_feed_min_ms (ulong)

+
Minimum feed interval in milliseconds. Only applicable when l2arc_feed_again=1.

Default value: 200.

+
+

+

l2arc_feed_secs (ulong)

+
Seconds between L2ARC writing +

Default value: 1.

+
+

+

l2arc_headroom (ulong)

+
How far through the ARC lists to search for L2ARC + cacheable content, expressed as a multiplier of l2arc_write_max +

Default value: 2.

+
+

+

l2arc_headroom_boost (ulong)

+
Scales l2arc_headroom by this percentage when + L2ARC contents are being successfully compressed before writing. A value of + 100 disables this feature. +

Default value: 200.

+
+

+

l2arc_noprefetch (int)

+
Do not write buffers to L2ARC if they were prefetched but + not used by applications +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_norw (int)

+
No reads during writes +

Use 1 for yes and 0 for no (default).

+
+

+

l2arc_write_boost (ulong)

+
Cold L2ARC devices will have l2arc_write_max + increased by this amount while they remain cold. +

Default value: 8,388,608.

+
+

+

l2arc_write_max (ulong)

+
Max write bytes per interval +

Default value: 8,388,608.

+
+

+

metaslab_aliquot (ulong)

+
Metaslab granularity, in bytes. This is roughly similar + to what would be referred to as the "stripe size" in traditional + RAID arrays. In normal operation, ZFS will try to write this amount of data to + a top-level vdev before moving on to the next one. +

Default value: 524,288.

+
+

+

metaslab_bias_enabled (int)

+
Enable metaslab group biasing based on its vdev's over- + or under-utilization relative to the pool. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_metaslab_segment_weight_enabled (int)

+
Enable/disable segment-based metaslab selection. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_metaslab_switch_threshold (int)

+
When using segment-based metaslab selection, continue + allocating from the active metaslab until zfs_metaslab_switch_threshold + worth of buckets have been exhausted. +

Default value: 2.

+
+

+

metaslab_debug_load (int)

+
Load all metaslabs during pool import. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_debug_unload (int)

+
Prevent metaslabs from being unloaded. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_fragmentation_factor_enabled (int)

+
Enable use of the fragmentation metric in computing + metaslab weights. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslabs_per_vdev (int)

+
When a vdev is added, it will be divided into + approximately (but no more than) this number of metaslabs. +

Default value: 200.

+
+

+

metaslab_preload_enabled (int)

+
Enable metaslab group preloading. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_lba_weighting_enabled (int)

+
Give more weight to metaslabs with lower LBAs, assuming + they have greater bandwidth as is typically the case on a modern constant + angular velocity disk drive. +

Use 1 for yes (default) and 0 for no.

+
+

+

spa_config_path (charp)

+
SPA config file +

Default value: /etc/zfs/zpool.cache.

+
+

+

spa_asize_inflation (int)

+
Multiplication factor used to estimate actual disk + consumption from the size of data being written. The default value is a worst + case estimate, but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits. +

Default value: 24.

+
+

+

spa_load_verify_data (int)

+
Whether to traverse data blocks during an "extreme + rewind" (-X) import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal skips non-metadata blocks. It can be toggled once the import has + started to stop or start the traversal of non-metadata blocks.

+

Default value: 1.

+
+

+

spa_load_verify_metadata (int)

+
Whether to traverse blocks during an "extreme + rewind" (-X) pool import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal is not performed. It can be toggled once the import has started to + stop or start the traversal.

+

Default value: 1.

+
+

+

spa_load_verify_maxinflight (int)

+
Maximum concurrent I/Os during the traversal performed + during an "extreme rewind" (-X) pool import. +

Default value: 10000.

+
+

+

spa_slop_shift (int)

+
Normally, we don't allow the last 3.2% + (1/(2^spa_slop_shift)) of space in the pool to be consumed. This ensures that + we don't run the pool completely out of space, due to unaccounted changes + (e.g. to the MOS). It also limits the worst-case time to allocate space. If we + have less than this amount of free space, most ZPL operations (e.g. write, + create) will return ENOSPC. +

Default value: 5.

+
+

+

zfetch_array_rd_sz (ulong)

+
If prefetching is enabled, disable prefetching for reads + larger than this size. +

Default value: 1,048,576.

+
+

+

zfetch_max_distance (uint)

+
Max bytes to prefetch per stream (default 8MB). +

Default value: 8,388,608.

+
+

+

zfetch_max_streams (uint)

+
Max number of streams per zfetch (prefetch streams per + file). +

Default value: 8.

+
+

+

zfetch_min_sec_reap (uint)

+
Min time before an active prefetch stream can be + reclaimed +

Default value: 2.

+
+

+

zfs_arc_dnode_limit (ulong)

+
When the number of bytes consumed by dnodes in the ARC exceeds this number of bytes, try to unpin some of it in response to demand for non-metadata. This value acts as a ceiling on the amount of dnode metadata, and defaults to 0, which indicates that the percentage of the ARC meta buffers that may be used for dnodes is instead determined by zfs_arc_dnode_limit_percent.

See also zfs_arc_meta_prune which serves a similar purpose + but is used when the amount of metadata in the ARC exceeds + zfs_arc_meta_limit rather than in response to overall demand for + non-metadata.

+

+

Default value: 0.

+
+

+

zfs_arc_dnode_limit_percent (ulong)

+
Percentage that can be consumed by dnodes of ARC meta + buffers. +

See also zfs_arc_dnode_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

Default value: 10.

+
+

+

zfs_arc_dnode_reduce_percent (ulong)

+
Percentage of ARC dnodes to try to scan in response to + demand for non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit. +

+

Default value: 10% of the number of dnodes in the ARC.

+
+

+

zfs_arc_average_blocksize (int)

+
The ARC's buffer hash table is sized based on the + assumption of an average block size of zfs_arc_average_blocksize + (default 8K). This works out to roughly 1MB of hash table per 1GB of physical + memory with 8-byte pointers. For configurations with a known larger average + block size this value can be increased to reduce the memory footprint. +

+

Default value: 8192.

+
+

+

zfs_arc_evict_batch_limit (int)

+
Number of ARC headers to evict per sub-list before proceeding to another sub-list. This batch-style operation prevents entire sub-lists from being evicted at once but comes at a cost of additional unlocking and locking.

Default value: 10.

+
+

+

zfs_arc_grow_retry (int)

+
If set to a non zero value, it will replace the + arc_grow_retry value with this value. The arc_grow_retry value (default 5) is + the number of seconds the ARC will wait before trying to resume growth after a + memory pressure event. +

Default value: 0.

+
+

+

zfs_arc_lotsfree_percent (int)

+
Throttle I/O when free system memory drops below this + percentage of total system memory. Setting this value to 0 will disable the + throttle. +

Default value: 10.

+
+

+

zfs_arc_max (ulong)

+
Maximum size of the ARC in bytes. If set to 0 then it will consume 1/2 of system RAM. This value must be at least 67108864 (64 megabytes).

This value can be changed dynamically with some caveats. It cannot be set back to 0 while running, and reducing it below the current ARC size will not cause the ARC to shrink without memory pressure to induce shrinking.

+

Default value: 0.
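For example, to cap the ARC at 4 GiB on a running system (a hypothetical value chosen only for illustration; adjust for your workload):

    # 4 GiB = 4 * 1024^3 bytes; must be at least 67108864 (64 MiB)
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max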

+
+

+

zfs_arc_meta_adjust_restarts (ulong)

+
The number of restart passes to make while scanning the ARC attempting to free buffers in order to stay below the zfs_arc_meta_limit. This value should not need to be tuned but is available to facilitate performance analysis.

Default value: 4096.

+
+

+

zfs_arc_meta_limit (ulong)

+
The maximum allowed size in bytes that meta data buffers are allowed to consume in the ARC. When this limit is reached meta data buffers will be reclaimed even if the overall arc_c_max has not been reached. This value defaults to 0, which indicates that the percentage of the ARC that may be used for meta data is instead determined by zfs_arc_meta_limit_percent.

This value may be changed dynamically, except that it cannot be set back to 0 for a specific percent of the ARC; it must be set to an explicit value.

+

Default value: 0.

+
+

+

zfs_arc_meta_limit_percent (ulong)

+
Percentage of ARC buffers that can be used for meta data. +

See also zfs_arc_meta_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

+

Default value: 75.

+
+

+

zfs_arc_meta_min (ulong)

+
The minimum allowed size in bytes that meta data buffers may consume in the ARC. This value defaults to 0, which disables a floor on the amount of the ARC devoted to meta data.

Default value: 0.

+
+

+

zfs_arc_meta_prune (int)

+
The number of dentries and inodes to be scanned looking for entries which can be dropped. This may be required when the ARC reaches the zfs_arc_meta_limit because dentries and inodes can pin buffers in the ARC. Increasing this value will cause the dentry and inode caches to be pruned more aggressively. Setting this value to 0 will disable pruning the inode and dentry caches.

Default value: 10,000.

+
+

+

zfs_arc_meta_strategy (int)

+
Define the strategy for ARC meta data buffer eviction (meta reclaim strategy). A value of 0 (META_ONLY) will evict only the ARC meta data buffers. A value of 1 (BALANCED) indicates that additional data buffers may be evicted if that is required in order to evict the required number of meta data buffers.

Default value: 1.

+
+

+

zfs_arc_min (ulong)

+
Minimum size of the ARC in bytes. If set to 0 then arc_c_min will default to consuming the larger of 32M or 1/32 of total system memory.

Default value: 0.

+
+

+

zfs_arc_min_prefetch_lifespan (int)

+
Minimum time prefetched blocks are locked in the ARC, + specified in jiffies. A value of 0 will default to 1 second. +

Default value: 0.

+
+

+

zfs_multilist_num_sublists (int)

+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and meta data objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state, and also applies to other uses of the multilist data structure.

Default value: 4 or the number of online CPUs, whichever is + greater

+
+

+

zfs_arc_overflow_shift (int)

+
The ARC size is considered to be overflowing if it + exceeds the current ARC target size (arc_c) by a threshold determined by this + parameter. The threshold is calculated as a fraction of arc_c using the + formula "arc_c >> zfs_arc_overflow_shift". +

The default value of 8 causes the ARC to be considered to be overflowing if it exceeds the target size by 1/256th (about 0.4%) of the target size.

+

When the ARC is overflowing, new buffer allocations are stalled + until the reclaim thread catches up and the overflow condition no longer + exists.

+

Default value: 8.

+
+

+

+

zfs_arc_p_min_shift (int)

+
If set to a non-zero value, this will update arc_p_min_shift (default 4) with the new value. arc_p_min_shift is used as a shift of arc_c when calculating both the minimum and maximum arc_p.

Default value: 0.

+
+

+

zfs_arc_p_dampener_disable (int)

+
Disable arc_p adapt dampener +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_arc_shrink_shift (int)

+
If set to a non zero value, this will update + arc_shrink_shift (default 7) with the new value. +

Default value: 0.

+
+

+

zfs_arc_pc_percent (uint)

+
Percent of pagecache to reclaim arc to +

This tunable allows ZFS arc to play more nicely with the kernel's + LRU pagecache. It can guarantee that the arc size won't collapse under + scanning pressure on the pagecache, yet still allows arc to be reclaimed + down to zfs_arc_min if necessary. This value is specified as percent of + pagecache size (as measured by NR_FILE_PAGES) where that percent may exceed + 100. This only operates during memory pressure/reclaim.

+

Default value: 0 (disabled).

+
+

+

zfs_arc_sys_free (ulong)

+
The target number of bytes the ARC should leave as free + memory on the system. Defaults to the larger of 1/64 of physical memory or + 512K. Setting this option to a non-zero value will override the default. +

Default value: 0.

+
+

+

zfs_autoimport_disable (int)

+
Disable pool import at module load by ignoring the cache + file (typically /etc/zfs/zpool.cache). +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_checksums_per_second (int)

+
Rate limit checksum events to this many per second. Note + that this should not be set below the zed thresholds (currently 10 checksums + over 10 sec) or else zed may not trigger any action. +

Default value: 20

+
+

+

zfs_commit_timeout_pct (int)

+
This controls the amount of time that a ZIL block (lwb) + will remain "open" when it isn't "full", and it has a + thread waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly impacting + the latency of each individual transaction record (itx). +

Default value: 5%.

+
+

+

zfs_dbgmsg_enable (int)

+
Internally ZFS keeps a small log to facilitate debugging. + By default the log is disabled, to enable it set this option to 1. The + contents of the log can be accessed by reading the /proc/spl/kstat/zfs/dbgmsg + file. Writing 0 to this proc file clears the log. +

Default value: 0.
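For example, using the paths given above:

    echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable   # enable the debug log
    cat /proc/spl/kstat/zfs/dbgmsg                          # read its contents
    echo 0 > /proc/spl/kstat/zfs/dbgmsg                     # clear the log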

+
+

+

zfs_dbgmsg_maxsize (int)

+
The maximum size in bytes of the internal ZFS debug log. +

Default value: 4M.

+
+

+

zfs_dbuf_state_index (int)

+
This feature is currently unused. It is normally used for + controlling what reporting is available under /proc/spl/kstat/zfs. +

Default value: 0.

+
+

+

zfs_deadman_enabled (int)

+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms milliseconds, a "slow spa_sync" + message is logged to the debug log (see zfs_dbgmsg_enable). If + zfs_deadman_enabled is set, all pending IO operations are also checked + and if any haven't completed within zfs_deadman_synctime_ms + milliseconds, a "SLOW IO" message is logged to the debug log and a + "delay" system event with the details of the hung IO is posted. +

Use 1 (default) to enable the slow IO check and 0 to + disable.

+
+

+

zfs_deadman_checktime_ms (int)

+
Once a pool sync operation has taken longer than + zfs_deadman_synctime_ms milliseconds, continue to check for slow + operations every zfs_deadman_checktime_ms milliseconds. +

Default value: 5,000.

+
+

+

zfs_deadman_synctime_ms (ulong)

+
Interval in milliseconds after which the deadman is + triggered and also the interval after which an IO operation is considered to + be "hung" if zfs_deadman_enabled is set. +

See zfs_deadman_enabled.

+

Default value: 1,000,000.

+
+

+

zfs_dedup_prefetch (int)

+
Enable prefetching of deduplicated blocks

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_delay_min_dirty_percent (int)

+
Start to delay each transaction once there is this amount + of dirty data, expressed as a percentage of zfs_dirty_data_max. This + value should be >= zfs_vdev_async_write_active_max_dirty_percent. See the + section "ZFS TRANSACTION DELAY". +

Default value: 60.

+
+

+

zfs_delay_scale (int)

+
This controls how quickly the transaction delay + approaches infinity. Larger values cause longer delays for a given amount of + dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will smoothly + handle between 10x and 1/10th this number.

+

See the section "ZFS TRANSACTION DELAY".

+

Note: zfs_delay_scale * zfs_dirty_data_max must be + < 2^64.

+

Default value: 500,000.

+
+

+

zfs_delays_per_second (int)

+
Rate limit IO delay events to this many per second. +

Default value: 20

+
+

+

zfs_delete_blocks (ulong)

+
This is used to define a large file for the purposes of deletion. Files containing more than zfs_delete_blocks blocks will be deleted asynchronously, while smaller files are deleted synchronously. Decreasing this value will reduce the time spent in an unlink(2) system call, at the expense of a longer delay before the freed space is available.

Default value: 20,480.

+
+

+

zfs_dirty_data_max (int)

+
Determines the dirty space limit in bytes. Once this + limit is exceeded, new writes are halted until space frees up. This parameter + takes precedence over zfs_dirty_data_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 10 percent of all memory, capped at + zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_max_max (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed in bytes. This limit is only enforced at module load time, and will + be ignored if zfs_dirty_data_max is later changed. This parameter takes + precedence over zfs_dirty_data_max_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 25% of physical RAM.

+
+

+

zfs_dirty_data_max_max_percent (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed as a percentage of physical RAM. This limit is only enforced at + module load time, and will be ignored if zfs_dirty_data_max is later + changed. The parameter zfs_dirty_data_max_max takes precedence over + this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 25.

+
+

+

zfs_dirty_data_max_percent (int)

+
Determines the dirty space limit, expressed as a + percentage of all memory. Once this limit is exceeded, new writes are halted + until space frees up. The parameter zfs_dirty_data_max takes precedence + over this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 10%, subject to zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_sync (int)

+
Start syncing out a transaction group if there is at + least this much dirty data. +

Default value: 67,108,864.

+
+

+

zfs_fletcher_4_impl (string)

+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, scalar, sse2, ssse3, avx2, avx512f, and aarch64_neon. All of the selectors except fastest and scalar require instruction set extensions to be available and will only appear if ZFS detects that they are present at runtime. If multiple implementations of fletcher 4 are available, the fastest will be chosen using a micro benchmark. Selecting scalar results in the original, CPU-based calculation being used. Selecting any option other than fastest and scalar results in vector instructions from the respective CPU instruction set being used.

+

Default value: fastest.
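A minimal sketch, assuming the usual sysfs location for ZFS module parameters (the exact listing format of the parameter file may vary by version):

    cat /sys/module/zfs/parameters/zfs_fletcher_4_impl     # show the current selection
    echo scalar > /sys/module/zfs/parameters/zfs_fletcher_4_impl   # force the scalar path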

+
+

+

zfs_free_bpobj_enabled (int)

+
Enable/disable the processing of the free_bpobj object. +

Default value: 1.

+
+

+

zfs_free_max_blocks (ulong)

+
Maximum number of blocks freed in a single txg. +

Default value: 100,000.

+
+

+

zfs_vdev_async_read_max_active (int)

+
Maximum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 3.

+
+

+

zfs_vdev_async_read_min_active (int)

+
Minimum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_async_write_active_max_dirty_percent (int)

+
When the pool has more than + zfs_vdev_async_write_active_max_dirty_percent dirty data, use + zfs_vdev_async_write_max_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 60.

+
+

+

zfs_vdev_async_write_active_min_dirty_percent (int)

+
When the pool has less than + zfs_vdev_async_write_active_min_dirty_percent dirty data, use + zfs_vdev_async_write_min_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 30.

+
+

+

zfs_vdev_async_write_max_active (int)

+
Maximum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_async_write_min_active (int)

+
Minimum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of 2 was chosen as + a compromise. A value of 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+

Default value: 2.

+
+

+

zfs_vdev_max_active (int)

+
The maximum number of I/Os active to each device. + Ideally, this will be >= the sum of each queue's max_active. It must be at + least the sum of each queue's min_active. See the section "ZFS I/O + SCHEDULER". +

Default value: 1,000.

+
+

+

zfs_vdev_scrub_max_active (int)

+
Maximum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_scrub_min_active (int)

+
Minimum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_sync_read_max_active (int)

+
Maximum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_read_min_active (int)

+
Minimum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_max_active (int)

+
Maximum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_min_active (int)

+
Minimum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_queue_depth_pct (int)

+
Maximum number of queued allocations per top-level vdev + expressed as a percentage of zfs_vdev_async_write_max_active which + allows the system to detect devices that are more capable of handling + allocations and to allocate more blocks to those devices. It allows for + dynamic allocation distribution when devices are imbalanced as fuller devices + will tend to be slower than empty devices. +

See also zio_dva_throttle_enabled.

+

Default value: 1000.

+
+

+

zfs_disable_dup_eviction (int)

+
Disable duplicate buffer eviction +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_expire_snapshot (int)

+
Seconds to expire .zfs/snapshot +

Default value: 300.

+
+

+

zfs_admin_snapshot (int)

+
Allow the creation, removal, or renaming of entries in + the .zfs/snapshot directory to cause the creation, destruction, or renaming of + snapshots. When enabled this functionality works both locally and over NFS + exports which have the 'no_root_squash' option set. This functionality is + disabled by default. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_flags (int)

+
Set additional debugging flags. The following flags may + be bitwise-or'd together. +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Value   Symbolic Name                 Description
-----   -------------                 -----------
1       ZFS_DEBUG_DPRINTF             Enable dprintf entries in the debug log.
2       ZFS_DEBUG_DBUF_VERIFY *       Enable extra dbuf verifications.
4       ZFS_DEBUG_DNODE_VERIFY *      Enable extra dnode verifications.
8       ZFS_DEBUG_SNAPNAMES           Enable snapshot name verification.
16      ZFS_DEBUG_MODIFY              Check for illegally modified ARC buffers.
32      ZFS_DEBUG_SPA                 Enable spa_dbgmsg entries in the debug log.
64      ZFS_DEBUG_ZIO_FREE            Enable verification of block frees.
128     ZFS_DEBUG_HISTOGRAM_VERIFY    Enable extra spacemap histogram verifications.
256     ZFS_DEBUG_METASLAB_VERIFY     Verify space accounting on disk matches in-core range_trees.
512     ZFS_DEBUG_SET_ERROR           Enable SET_ERROR and dprintf entries in the debug log.
+

* Requires debug build.

+

Default value: 0.
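As an illustration, flags are combined by bitwise-or'ing (adding) their values; here ZFS_DEBUG_DPRINTF (1) is combined with ZFS_DEBUG_SET_ERROR (512), assuming the usual sysfs path for module parameters:

    # 1 | 512 = 513
    echo 513 > /sys/module/zfs/parameters/zfs_flags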

+
+

+

zfs_free_leak_on_eio (int)

+
If destroy encounters an EIO while reading metadata (e.g. + indirect blocks), space referenced by the missing metadata can not be freed. + Normally this causes the background destroy to become "stalled", as + it is unable to make forward progress. While in this stalled state, all + remaining space to free from the error-encountering filesystem is + "temporarily leaked". Set this flag to cause it to ignore the EIO, + permanently leak the space from indirect blocks that can not be read, and + continue to free everything else that it can. +

The default, "stalling" behavior is useful if the + storage partially fails (i.e. some but not all i/os fail), and then later + recovers. In this case, we will be able to continue pool operations while it + is partially failed, and when it recovers, we can continue to free the + space, with no leaks. However, note that this case is actually fairly + rare.

+

Typically pools either (a) fail completely (but perhaps + temporarily, e.g. a top-level vdev going offline), or (b) have localized, + permanent errors (e.g. disk returns the wrong data due to bit flip or + firmware bug). In case (a), this setting does not matter because the pool + will be suspended and the sync thread will not be able to make forward + progress regardless. In case (b), because the error is permanent, the best + we can do is leak the minimum amount of space, which is what setting this + flag will do. Therefore, it is reasonable for this flag to normally be set, + but we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.

+

Default value: 0.

+
+

+

zfs_free_min_time_ms (int)

+
During a zfs destroy operation using + feature@async_destroy a minimum of this much time will be spent working + on freeing blocks per txg. +

Default value: 1,000.

+
+

+

zfs_immediate_write_sz (long)

+
Largest data block to write to zil. Larger blocks will be + treated as if the dataset being written to had the property setting + logbias=throughput. +

Default value: 32,768.

+
+

+

zfs_max_recordsize (int)

+
We currently support block sizes from 512 bytes to 16MB. + The benefits of larger blocks, and thus larger IO, need to be weighed against + the cost of COWing a giant block to modify one byte. Additionally, very large + blocks can have an impact on i/o latency, and also potentially on the memory + allocator. Therefore, we do not allow the recordsize to be set larger than + zfs_max_recordsize (default 1MB). Larger blocks can be created by changing + this tunable, and pools with larger blocks can always be imported and used, + regardless of this setting. +

Default value: 1,048,576.

+
+

+

zfs_mdcomp_disable (int)

+
Disable meta data compression +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_metaslab_fragmentation_threshold (int)

+
Allow metaslabs to keep their active state as long as + their fragmentation percentage is less than or equal to this value. An active + metaslab that exceeds this threshold will no longer keep its active status + allowing better metaslabs to be selected. +

Default value: 70.

+
+

+

zfs_mg_fragmentation_threshold (int)

+
Metaslab groups are considered eligible for allocations + if their fragmentation metric (measured as a percentage) is less than or equal + to this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also crossed + this threshold. +

Default value: 85.

+
+

+

zfs_mg_noalloc_threshold (int)

+
Defines a threshold at which metaslab groups should be + eligible for allocations. The value is expressed as a percentage of free space + beyond which a metaslab group is always eligible for allocations. If a + metaslab group's free space is less than or equal to the threshold, the + allocator will avoid allocating to that group unless all groups in the pool + have reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of 0 disables the + feature and causes all metaslab groups to be eligible for allocations. +

This parameter allows one to deal with pools having heavily + imbalanced vdevs such as would be the case when a new vdev has been added. + Setting the threshold to a non-zero percentage will stop allocations from + being made to vdevs that aren't filled to the specified percentage and allow + lesser filled vdevs to acquire more allocations than they otherwise would + under the old zfs_mg_alloc_failures facility.

+

Default value: 0.

+
+

+

zfs_multihost_history (int)

+
Historical statistics for the last N multihost updates + will be available in /proc/spl/kstat/zfs/<pool>/multihost +

Default value: 0.

+
+

+

zfs_multihost_interval (ulong)

+
Used to control the frequency of multihost writes which + are performed when the multihost pool property is on. This is one + factor used to determine the length of the activity check during import. +

The multihost write period is zfs_multihost_interval / + leaf-vdevs milliseconds. This means that on average a multihost write + will be issued for each leaf vdev every zfs_multihost_interval + milliseconds. In practice, the observed period can vary with the I/O load + and this observed value is the delay which is stored in the uberblock.

+

On import the activity check waits a minimum amount of time + determined by zfs_multihost_interval * + zfs_multihost_import_intervals. The activity check time may be further + extended if the value of mmp delay found in the best uberblock indicates + actual multihost updates happened at longer intervals than + zfs_multihost_interval. A minimum value of 100ms is + enforced.

+

Default value: 1000.
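A small worked example of the formulas above, using the default values and a hypothetical pool with 10 leaf vdevs:

    awk 'BEGIN {
      interval = 1000;   # zfs_multihost_interval (ms), default
      leaves   = 10;     # hypothetical number of leaf vdevs
      imports  = 10;     # zfs_multihost_import_intervals, default
      printf "multihost write period: %d ms per leaf vdev\n", interval / leaves;
      printf "minimum activity check on import: %d ms\n", interval * imports;
    }'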

+
+

+

zfs_multihost_import_intervals (uint)

+
Used to control the duration of the activity test on + import. Smaller values of zfs_multihost_import_intervals will reduce + the import time but increase the risk of failing to detect an active pool. The + total activity check time is never allowed to drop below one second. A value + of 0 is ignored and treated as if it was set to 1 +

Default value: 10.

+
+

+

zfs_multihost_fail_intervals (uint)

+
Controls the behavior of the pool when multihost write + failures are detected. +

When zfs_multihost_fail_intervals = 0 then multihost write + failures are ignored. The failures will still be reported to the ZED which + depending on its configuration may take action such as suspending the pool + or offlining a device.

+

When zfs_multihost_fail_intervals > 0 then sequential + multihost write failures will cause the pool to be suspended. This occurs + when zfs_multihost_fail_intervals * zfs_multihost_interval + milliseconds have passed since the last successful multihost write. This + guarantees the activity test will see multihost writes if the pool is + imported.

+

Default value: 5.

+
+

+

zfs_no_scrub_io (int)

+
Set for no scrub I/O. This results in scrubs not actually + scrubbing data and simply doing a metadata crawl of the pool instead. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_no_scrub_prefetch (int)

+
Set to disable block prefetching for scrubs. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nocacheflush (int)

+
Disable cache flush operations on disks when writing. + Beware, this may cause corruption if disks re-order writes. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nopwrite_enabled (int)

+
Enable NOP writes +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_dmu_offset_next_sync (int)

+
Enable forcing txg sync to find holes. When enabled + forces ZFS to act like prior versions when SEEK_HOLE or SEEK_DATA flags are + used, which when a dnode is dirty causes txg's to be synced so that this data + can be found. +

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_pd_bytes_max (int)

+
The number of bytes which should be prefetched during a + pool traversal (eg: zfs send or other data crawling operations) +

Default value: 52,428,800.

+
+

+

zfs_per_txg_dirty_frees_percent (ulong)

+
Tunable to control percentage of dirtied blocks from + frees in one TXG. After this threshold is crossed, additional dirty blocks + from frees wait until the next TXG. A value of zero will disable this + throttle. +

Default value: 30. Set to 0 to disable.

+
+

+

+

+

zfs_prefetch_disable (int)

+
This tunable disables predictive prefetch. Note that it + leaves "prescient" prefetch (e.g. prefetch for zfs send) intact. + Unlike predictive prefetch, prescient prefetch never issues i/os that end up + not being needed, so it can't hurt performance. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_read_chunk_size (long)

+
Bytes to read per chunk +

Default value: 1,048,576.

+
+

+

zfs_read_history (int)

+
Historical statistics for the last N reads will be + available in /proc/spl/kstat/zfs/<pool>/reads +

Default value: 0 (no data is kept).

+
+

+

zfs_read_history_hits (int)

+
Include cache hits in read history +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_recover (int)

+
Set to attempt to recover from fatal errors. This should + only be used as a last resort, as it typically results in leaked space, or + worse. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_resilver_delay (int)

+
Number of ticks to delay prior to issuing a resilver I/O + operation when a non-resilver or non-scrub I/O operation has occurred within + the past zfs_scan_idle ticks. +

Default value: 2.

+
+

+

zfs_resilver_min_time_ms (int)

+
Resilvers are processed by the sync thread. While + resilvering it will spend at least this much time working on a resilver + between txg flushes. +

Default value: 3,000.

+
+

+

zfs_scan_ignore_errors (int)

+
If set to a nonzero value, remove the DTL (dirty time + list) upon completion of a pool scan (scrub) even if there were unrepairable + errors. It is intended to be used during pool repair or recovery to stop + resilvering when the pool is next imported. +

Default value: 0.

+
+

+

zfs_scan_idle (int)

+
Idle window in clock ticks. During a scrub or a resilver, + if a non-scrub or non-resilver I/O operation has occurred during this window, + the next scrub or resilver operation is delayed by, respectively + zfs_scrub_delay or zfs_resilver_delay ticks. +

Default value: 50.

+
+

+

zfs_scan_min_time_ms (int)

+
Scrubs are processed by the sync thread. While scrubbing + it will spend at least this much time working on a scrub between txg flushes. +

Default value: 1,000.

+
+

+

zfs_scrub_delay (int)

+
Number of ticks to delay prior to issuing a scrub I/O + operation when a non-scrub or non-resilver I/O operation has occurred within + the past zfs_scan_idle ticks. +

Default value: 4.

+
+

+

zfs_send_corrupt_data (int)

+
Allow sending of corrupt data (ignore read/checksum + errors when sending data) +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_send_queue_length (int)

+
The maximum number of bytes allowed in the zfs + send queue. This value must be at least twice the maximum block size in + use. +

Default value: 16,777,216.

+
+

+

zfs_recv_queue_length (int)

+
+

The maximum number of bytes allowed in the zfs receive + queue. This value must be at least twice the maximum block size in use.

+

Default value: 16,777,216.

+
+

+

zfs_sync_pass_deferred_free (int)

+
Flushing of data to disk is done in passes. Defer frees + starting in this pass +

Default value: 2.

+
+

+

zfs_sync_pass_dont_compress (int)

+
Don't compress starting in this pass +

Default value: 5.

+
+

+

zfs_sync_pass_rewrite (int)

+
Rewrite new block pointers starting in this pass +

Default value: 2.

+
+

+

zfs_top_maxinflight (int)

+
Max concurrent I/Os per top-level vdev (mirrors or raidz + arrays) allowed during scrub or resilver operations. +

Default value: 32.

+
+

+

zfs_txg_history (int)

+
Historical statistics for the last N txgs will be + available in /proc/spl/kstat/zfs/<pool>/txgs +

Default value: 0.

+
+

+

zfs_txg_timeout (int)

+
Flush dirty data to disk at least every N seconds + (maximum txg duration) +

Default value: 5.

+
+

+

zfs_vdev_aggregation_limit (int)

+
Max vdev I/O aggregation size +

Default value: 131,072.

+
+

+

zfs_vdev_cache_bshift (int)

+
Shift size to inflate reads to

Default value: 16 (effectively 65536).

+
+

+

zfs_vdev_cache_max (int)

+
Inflate reads smaller than this value to meet the + zfs_vdev_cache_bshift size (default 64k). +

Default value: 16384.

+
+

+

zfs_vdev_cache_size (int)

+
Total size of the per-disk cache in bytes. +

Currently this feature is disabled as it has been found to not be + helpful for performance and in some cases harmful.

+

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_inc (int)

+
A number by which the balancing algorithm increments the load calculation when an I/O immediately follows its predecessor on rotational vdevs, for the purpose of selecting the least busy mirror member.

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the load calculation for the purpose of selecting the least busy mirror member when an I/O lacks locality as defined by zfs_vdev_mirror_rotating_seek_offset. I/Os within this distance that are not immediately following the previous I/O are incremented by half.

Default value: 5.

+
+

+

zfs_vdev_mirror_rotating_seek_offset (int)

+
The maximum distance for the last queued I/O in which the + balancing algorithm considers an I/O to have locality. See the section + "ZFS I/O SCHEDULER". +

Default value: 1048576.

+
+

+

zfs_vdev_mirror_non_rotating_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/Os do not immediately follow one another. +

Default value: 0.

+
+

+

zfs_vdev_mirror_non_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the load calculation for the purpose of selecting the least busy mirror member when an I/O lacks locality as defined by zfs_vdev_mirror_rotating_seek_offset. I/Os within this distance that are not immediately following the previous I/O are incremented by half.

Default value: 1.

+
+

+

zfs_vdev_read_gap_limit (int)

+
Aggregate read I/O operations if the gap on-disk between + them is within this threshold. +

Default value: 32,768.

+
+

+

zfs_vdev_scheduler (charp)

+
Set the Linux I/O scheduler on whole disk vdevs to this + scheduler. Valid options are noop, cfq, bfq & deadline +

Default value: noop.

+
+

+

zfs_vdev_write_gap_limit (int)

+
Aggregate write I/O over gap +

Default value: 4,096.

+
+

+

zfs_vdev_raidz_impl (string)

+
Parameter for selecting raidz parity implementation to + use. +

Options marked (always) below may be selected on module load as + they are supported on all systems. The remaining options may only be set + after the module is loaded, as they are available only if the + implementations are compiled in and supported on the running system.

+

Once the module is loaded, the content of + /sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options + with the currently selected one enclosed in []. Possible options are: +
+ fastest - (always) implementation selected using built-in benchmark +
+ original - (always) original raidz implementation +
+ scalar - (always) scalar raidz implementation +
+ sse2 - implementation using SSE2 instruction set (64bit x86 only) +
+ ssse3 - implementation using SSSE3 instruction set (64bit x86 only) +
+ avx2 - implementation using AVX2 instruction set (64bit x86 only) +
+ avx512f - implementation using AVX512F instruction set (64bit x86 only) +
+ avx512bw - implementation using AVX512F & AVX512BW instruction sets + (64bit x86 only) +
+ aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only) +
+ aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 + bit ARMv8 only)

+

Default value: fastest.
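For example (a sketch; the options actually available depend on the CPU and how the module was built):

    # The currently selected implementation is shown in []
    cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
    # Select a specific implementation, if it is listed as available
    echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl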

+
+

+

zfs_zevent_cols (int)

+
When zevents are logged to the console use this as the + word wrap width. +

Default value: 80.

+
+

+

zfs_zevent_console (int)

+
Log events to the console +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_zevent_len_max (int)

+
Max event queue length. A value of 0 will result in a + calculated value which increases with the number of CPUs in the system + (minimum 64 events). Events in the queue can be viewed with the zpool + events command. +

Default value: 0.

+
+

+

zil_replay_disable (int)

+
Disable intent log (ZIL) replay. Replay can be disabled to recover from a corrupted ZIL.

Use 1 for yes and 0 for no (default).

+
+

+

zil_slog_bulk (ulong)

+
Limit SLOG write size per commit executed with synchronous priority. Any writes above that will be executed with lower (asynchronous) priority to limit potential SLOG device abuse by a single active ZIL writer.

Default value: 786,432.

+
+

+

zio_delay_max (int)

+
A zevent will be logged if a ZIO operation takes more + than N milliseconds to complete. Note that this is only a logging facility, + not a timeout on operations. +

Default value: 30,000.

+
+

+

zio_dva_throttle_enabled (int)

+
Throttle block allocations in the ZIO pipeline. This + allows for dynamic allocation distribution when devices are imbalanced. When + enabled, the maximum number of pending allocations per top-level vdev is + limited by zfs_vdev_queue_depth_pct. +

Default value: 1.

+
+

+

zio_requeue_io_start_cut_in_line (int)

+
Prioritize requeued I/O +

Default value: 0.

+
+

+

zio_taskq_batch_pct (uint)

+
Percentage of online CPUs (or CPU cores, etc) which will run a worker thread for IO. These workers are responsible for IO work such as compression and checksum calculations. A fractional number of CPUs will be rounded down.

The default value of 75 was chosen to avoid using all CPUs which + can result in latency issues and inconsistent application performance, + especially when high compression is enabled.

+

Default value: 75.

+
+

+

zvol_inhibit_dev (uint)

+
Do not create zvol device nodes. This may slightly + improve startup time on systems with a very large number of zvols. +

Use 1 for yes and 0 for no (default).

+
+

+

zvol_major (uint)

+
Major number for zvol block devices +

Default value: 230.

+
+

+

zvol_max_discard_blocks (ulong)

+
Discard (aka TRIM) operations done on zvols will be done + in batches of this many blocks, where block size is determined by the + volblocksize property of a zvol. +

Default value: 16,384.

+
+

+

zvol_prefetch_bytes (uint)

+
When adding a zvol to the system prefetch + zvol_prefetch_bytes from the start and end of the volume. Prefetching + these regions of the volume is desirable because they are likely to be + accessed immediately by blkid(8) or by the kernel scanning for a + partition table. +

Default value: 131,072.

+
+

+

zvol_request_sync (uint)

+
When processing I/O requests for a zvol, submit them synchronously. This effectively limits the queue depth to 1 for each I/O submitter. When set to 0, requests are handled asynchronously by a thread pool. The number of requests which can be handled concurrently is controlled by zvol_threads.

Default value: 0.

+
+

+

zvol_threads (uint)

+
Max number of threads which can handle zvol I/O requests + concurrently. +

Default value: 32.

+
+

+

zvol_volmode (uint)

+
Defines zvol block devices behaviour when volmode + is set to default. Valid values are 1 (full), 2 (dev) and + 3 (none). +

Default value: 1.

+
+

+

zfs_qat_disable (int)

+
This tunable disables qat hardware acceleration for gzip compression. It is available only if qat acceleration is compiled in and the qat driver is present.

Use 1 for yes and 0 for no (default).

+
+

+
+
+
+

+

ZFS I/O SCHEDULER

ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os. The I/O scheduler determines when and in what order those operations are issued. The I/O scheduler divides operations into five I/O classes prioritized in the following order: sync read, sync write, async read, async write, and scrub/resilver. Each queue defines the minimum and maximum number of concurrent operations that may be issued to the device. In addition, the device has an aggregate maximum, zfs_vdev_max_active. Note that the sum of the per-queue minimums must not exceed the aggregate maximum. If the sum of the per-queue maximums exceeds the aggregate maximum, then the number of active I/Os may reach zfs_vdev_max_active, in which case no further I/Os will be issued regardless of whether all per-queue minimums have been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Further, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been hit + or if there are no operations queued for an I/O class that has not hit its + maximum. Every time an I/O is queued or an operation completes, the I/O + scheduler looks for new operations to issue.

+

In general, smaller max_active's will lead to lower latency of + synchronous operations. Larger max_active's may lead to higher overall + throughput, depending on underlying storage.

+

The ratio of the queues' max_actives determines the balance of + performance between reads, writes, and scrubs. E.g., increasing + zfs_vdev_scrub_max_active will cause the scrub or resilver to + complete more quickly, but reads and writes to have higher latency and lower + throughput.

+

All I/O classes have a fixed maximum number of outstanding + operations except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write I/Os according to + the amount of dirty data in the pool. Since both throughput and latency + typically increase with the number of concurrent operations issued to + physical devices, reducing the burstiness in the number of concurrent + operations also stabilizes the response time of operations from other -- and + in particular synchronous -- queues. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there's + more dirty data in the pool.

+

Async Writes

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points.

+
+
        |              o---------| <-- zfs_vdev_async_write_max_active
   ^    |             /^         |
   |    |            / |         |
 active |           /  |         |
  I/O   |          /   |         |
 count  |         /    |         |
        |        /     |         |
        |-------o      |         | <-- zfs_vdev_async_write_min_active
       0|_______^______|_________|
        0%      |      |           100% of zfs_dirty_data_max
                |      |
                |      `-- zfs_vdev_async_write_active_max_dirty_percent
                `--------- zfs_vdev_async_write_active_min_dirty_percent

Until the amount of dirty data exceeds a minimum percentage of the dirty data allowed in the pool, the I/O scheduler will limit the number of concurrent operations to the minimum. As that threshold is crossed, the number of concurrent operations issued increases linearly to the maximum at the specified maximum percentage of the dirty data allowed in the pool.

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the + maximum percentage, this indicates that the rate of incoming data is greater + than the rate that the backend storage can handle. In this case, we must + further throttle incoming writes, as described in the next section.
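As a hedged sketch of how the end points of that function might be adjusted at runtime (the values are illustrative, not recommendations, and the sysfs path is the usual one for ZFS on Linux):

    # Raise the ceiling used when the pool has a lot of dirty data ...
    echo 16 > /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
    # ... and the floor used when dirty data is low.
    echo 2 > /sys/module/zfs/parameters/zfs_vdev_async_write_min_active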

+

+
+
+

+

ZFS TRANSACTION DELAY

We delay transactions when we've determined that the backend storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as:

+
+
+ min_time = zfs_delay_scale * (dirty - min) / (max - dirty) +
+ min_time is then capped at 100 milliseconds.
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be at or above + zfs_vdev_async_write_active_max_dirty_percent so that we only start + to delay after writing at full speed has failed to keep up with the incoming + write rate. The scale of the curve is defined by zfs_delay_scale. + Roughly speaking, this variable determines the amount of delay at the + midpoint of the curve.
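A small worked example of the formula above, taking min as the dirty-data level at which delays begin and max as zfs_dirty_data_max, with the default zfs_delay_scale and a hypothetical 4 GiB zfs_dirty_data_max; at the midpoint this reproduces the 500us / 2000 IOPS figure mentioned below:

    awk 'BEGIN {
      scale = 500000;            # zfs_delay_scale, default (~ 1e9 / target IOPS)
      max   = 4 * 1024^3;        # zfs_dirty_data_max, assumed 4 GiB
      min   = 0.60 * max;        # zfs_delay_min_dirty_percent = 60
      dirty = 0.80 * max;        # halfway between min and max (the midpoint)
      t = scale * (dirty - min) / (max - dirty);   # nanoseconds
      printf "min_time = %d ns (%.0f us) => ~%.0f IOPS\n", t, t / 1000, 1e9 / t;
    }'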

+

+
delay
+
 10ms +-------------------------------------------------------------*+
      |                                                             *|
  9ms +                                                             *+
      |                                                             *|
  8ms +                                                             *+
      |                                                            * |
  7ms +                                                            * +
      |                                                            * |
  6ms +                                                            * +
      |                                                            * |
  5ms +                                                           *  +
      |                                                           *  |
  4ms +                                                           *  +
      |                                                           *  |
  3ms +                                                          *   +
      |                                                          *   |
  2ms +                                       (midpoint)         *   +
      |                                          |             **    |
  1ms +                                          v         ***       +
      |             zfs_delay_scale ---------->  ********            |
    0 +-------------------------------------*********----------------+
      0%                    <- zfs_dirty_data_max ->               100%
+

Note that since the delay is added to the outstanding time + remaining on the most recent transaction, the delay is effectively the + inverse of IOPS. Here the midpoint of 500us translates to 2000 IOPS. The + shape of the curve was chosen such that small changes in the amount of + accumulated dirty data in the first 3/4 of the curve yield relatively small + differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a log scale:

+

+
delay
100ms +-------------------------------------------------------------++
      +                                                              +
      |                                                              |
      +                                                             *+
 10ms +                                                             *+
      +                                                            ** +
      |                                            (midpoint)     **  |
      +                                                |         **   +
  1ms +                                                v     ****     +
      +             zfs_delay_scale ---------->    *****              +
      |                                        ****                   |
      +                                    ****                       +
100us +                                  **                           +
      +                                 *                             +
      |                                *                              |
      +                               *                               +
 10us +                              *                                +
      +                                                               +
      |                                                               |
      +                                                               +
      +--------------------------------------------------------------+
      0%                    <- zfs_dirty_data_max ->               100%
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the backend storage, and then by changing the value of + zfs_delay_scale to increase the steepness of the curve.

+
+
+ + + + + +
October 28, 2017
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/5/zpool-features.5.html b/man/v0.7/5/zpool-features.5.html new file mode 100644 index 000000000..4e0e5b42a --- /dev/null +++ b/man/v0.7/5/zpool-features.5.html @@ -0,0 +1,771 @@ + + + + + + + zpool-features.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.5

+
+ + + + + +
ZPOOL-FEATURES(5)        File Formats Manual        ZPOOL-FEATURES(5)
+
+
+

+

zpool-features - ZFS pool feature descriptions

+
+
+

+

ZFS pool on-disk format versions are specified via + "features" which replace the old on-disk format numbers (the last + supported on-disk format number is 28). To enable a feature on a pool use + the upgrade subcommand of the zpool(8) command, or set the + feature@feature_name property to enabled.
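For example, using a hypothetical pool named tank:

    zpool set feature@async_destroy=enabled tank   # enable a single feature
    zpool upgrade tank                             # enable all supported features
    zpool get feature@async_destroy tank           # check the feature's state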

+

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

+

Since most features can be enabled independently of each other the + on-disk format of the pool is specified by the set of all features marked as + active on the pool. If the pool was created by another software + version this set may include unsupported features.

+
+

+

Every feature has a guid of the form + com.example:feature_name. The reverse DNS name ensures that the + feature's guid is unique across all ZFS implementations. When unsupported + features are encountered on a pool they will be identified by their guids. + Refer to the documentation for the ZFS implementation that created the pool + for information about those features.

+

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its guid which follows the ':' (e.g. + com.example:feature_name would have the short name + feature_name), however a feature's short name may differ across ZFS + implementations if following the convention would result in name + conflicts.

+
+
+

+

Features can be in one of three states:

+

active

+
This feature's on-disk format changes are in effect on + the pool. Support for this feature is required to import the pool in + read-write mode. If this feature is not read-only compatible, support is also + required to import the pool in read-only mode (see "Read-only + compatibility").
+

+

enabled

+
An administrator has marked this feature as enabled on + the pool, but the feature's on-disk format changes have not been made yet. The + pool can still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support returning to the + enabled state after becoming active. See feature-specific + documentation for details.
+

+

disabled

+
This feature's on-disk format changes have not been made + and will not be made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they have been + enabled.
+

+

+

The state of supported features is exposed through pool properties + of the form feature@short_name.
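For example, the feature states of a pool named tank (a placeholder) can be listed with:
# zpool get all tank | grep feature@
# zpool get feature@async_destroy tank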

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as "read-only compatible". If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly property during + import (see zpool(8) for details on importing pools).
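For instance, a pool whose only unsupported features are read-only compatible could be imported as follows (the pool name is a placeholder):
# zpool import -o readonly=on tank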

+
+
+

+

For each unsupported feature enabled on an imported pool a pool + property named unsupported@feature_guid will indicate why the import + was allowed despite the unsupported feature. Possible values for this + property are:

+

+

inactive

+
The feature is in the enabled state and therefore + the pool's on-disk format is still compatible with software that does not + support this feature.
+

+

readonly

+
The feature is read-only compatible and the pool has been + imported in read-only mode.
+

+
+
+

+

Some features depend on other features being enabled in order to + function properly. Enabling a feature will automatically enable any features + it depends on.

+
+
+
+

+

The following features are supported on this system:

+

async_destroy

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:async_destroy
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

Destroying a file system requires traversing all of its data in + order to return its used space to the pool. Without async_destroy the + file system is not fully removed until all space has been reclaimed. If the + destroy operation is interrupted by a reboot or power outage the next + attempt to open the pool will need to complete the destroy operation + synchronously.

+

When async_destroy is enabled the file system's data will + be reclaimed by a background process, allowing the destroy operation to + complete without traversing the entire file system. The background process + is able to resume interrupted destroys after the pool has been opened, + eliminating the need to finish interrupted destroys as part of the open + operation. The amount of space remaining to be reclaimed by the background + process is available through the freeing property.

+

This feature is only active while freeing is + non-zero.
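The amount of space still to be reclaimed by an in-progress or interrupted background destroy can be checked with, for example (pool name is a placeholder):
# zpool get freeing tank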

+
+

+

empty_bpobj

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:empty_bpobj
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also reduces + the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobj's) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobj's are empty. This feature + allows us to create each bpobj on-demand, thus eliminating the empty + bpobjs.

+

This feature is active while there are any filesystems, + volumes, or snapshots which were created after enabling this feature.

+
+

+

filesystem_limits

+
+ + + + + + + + + + + + + +
GUIDcom.joyent:filesystem_limits
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature enables filesystem and snapshot limits. These limits + can be used to control how many filesystems and/or snapshots can be created + at the point in the tree on which the limits are set.

+

This feature is active once either of the limit properties + has been set on a dataset. Once activated the feature is never + deactivated.

+
+

+

lz4_compress

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:lz4_compress
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

lz4 is a high-performance real-time compression algorithm + that features significantly faster compression and decompression as well as + a higher compression ratio than the older lzjb compression. + Typically, lz4 compression is approximately 50% faster on + compressible data and 200% faster on incompressible data than lzjb. + It is also approximately 80% faster on decompression, while giving + approximately 10% better compression ratio.

+

When the lz4_compress feature is set to enabled, the administrator can turn on lz4 compression on any dataset on the pool using the zfs(8) command. Please note that doing so will immediately activate the lz4_compress feature on the underlying pool. Also, all newly written metadata will be compressed with the lz4 algorithm. Since this feature is not read-only compatible, this operation will render the pool unimportable on systems without support for the lz4_compress feature.

+

Booting off of lz4-compressed root pools is supported.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

spacemap_histogram

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:spacemap_histogram
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, ZFS will set this feature to active when a new space map object is created or an existing space map is upgraded to the new format. Once the feature is active, it will remain in that state until the pool is destroyed.

+

+
+

+

multi_vdev_crash_dump

+
+ + + + + + + + + + +
GUID com.joyent:multi_vdev_crash_dump
READ-ONLY COMPATIBLE no
DEPENDENCIES none
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored or + raidz configuration.

+

When the multi_vdev_crash_dump feature is set to + enabled, the administrator can use the dumpadm(1M) command to + configure a dump device on a pool comprised of multiple vdevs.

+

Under Linux this feature is registered for compatibility but not + used. New pools created under Linux will have the feature enabled but + will never transition to active. This functionality is not + required in order to support crash dumps under Linux. Existing pools where + this feature is active can be imported.

+
+

+

extensible_dataset

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:extensible_dataset
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first dependent + feature uses it, and will be returned to the enabled state when all + datasets that use this feature are destroyed.

+

+
+

+

bookmarks

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:bookmarks
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature enables use of the zfs bookmark + subcommand.

+

This feature is active while any bookmarks exist in the + pool. All bookmarks in the pool can be listed by running zfs list -t + bookmark -r poolname.
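For example, a bookmark could be created from an existing snapshot and then listed as follows (dataset, snapshot, and bookmark names are placeholders):
# zfs bookmark tank/data@snap1 tank/data#mark1
# zfs list -t bookmark -r tank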

+

+
+

+

enabled_txg

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:enabled_txg
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

Once this feature is enabled ZFS records the transaction group + number in which new features are enabled. This has no user-visible impact, + but other features may depend on this feature.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

hole_birth

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:hole_birth
READ-ONLY COMPATIBLEno
DEPENDENCIESenabled_txg
+

This feature improves performance of incremental sends ("zfs + send -i") and receives for objects with many holes. The most common + case of hole-filled objects is zvols.

+

An incremental send stream from snapshot A to snapshot + B contains information about every block that changed between + A and B. Blocks which did not change between those snapshots + can be identified and omitted from the stream using a piece of metadata + called the 'block birth time', but birth times are not recorded for holes + (blocks filled only with zeroes). Since holes created after A cannot + be distinguished from holes created before A, information about every + hole in the entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. However, + when incrementally replicating filesystems or zvols with many holes (for + example a zvol formatted with another filesystem) a lot of time will be + spent sending and receiving unnecessary information about holes that already + exist on the receiving side.

+

Once the hole_birth feature has been enabled the block + birth times of all new holes will be recorded. Incremental sends between + snapshots created after this feature is enabled will use this new metadata + to avoid sending information about holes that already exist on the receiving + side.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

embedded_data

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:embedded_data
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 bytes + or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of highly-compressible + blocks are stored in the block "pointer" itself (a misnomer in + this case, as it contains the compressed data, rather than a pointer to its + location on disk). Thus the space of the block (one sector, typically 512 + bytes or 4KB) is saved, and no additional i/o is needed to read and write + the data block.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

large_blocks

+
+ + + + + + + + + + + + + +
GUIDorg.open-zfs:large_block
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

The large_block feature allows the record size on a dataset + to be set larger than 128KB.

+

This feature becomes active once a recordsize + property has been set larger than 128KB, and will return to being + enabled once all filesystems that have ever had their recordsize + larger than 128KB are destroyed.
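For example, assuming the feature is enabled on the pool, a larger record size could be set with (dataset name and value are placeholders):
# zfs set recordsize=1M tank/data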

+
+

+

large_dnode

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:large_dnode
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

The large_dnode feature allows the size of dnodes in a + dataset to be set larger than 512B.

+

This feature becomes active once a dataset contains an + object with a dnode larger than 512B, which occurs as a result of setting + the dnodesize dataset property to a value other than legacy. + The feature will return to being enabled once all filesystems that + have ever contained a dnode larger than 512B are destroyed. Large dnodes + allow more data to be stored in the bonus buffer, thus potentially improving + performance by avoiding the use of spill blocks.
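For example, the feature is typically activated by changing the dnodesize property away from legacy (dataset name is a placeholder):
# zfs set dnodesize=auto tank/data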

+
+

sha512

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:sha512
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the SHA-512/256 truncated hash + algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit arithmetic + of SHA-512 provides an approximate 50% performance boost over SHA-256 on + 64-bit hardware and is thus a good minimum-change replacement candidate for + systems where hash performance is important, but these systems cannot for + whatever reason utilize the faster skein and edonr + algorithms.

+

When the sha512 feature is set to enabled, the administrator can turn on the sha512 checksum on any dataset using the zfs set checksum=sha512 command (see zfs(8)). This feature becomes active once a checksum property has been set to sha512, and will return to being enabled once all filesystems that have ever had their checksum set to sha512 are destroyed.

+

Booting off of pools utilizing SHA-512/256 is supported.

+

+
+

+

skein

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:skein
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm that + was a finalist in the NIST SHA-3 competition. It provides a very high + security margin and high performance on 64-bit hardware (80% faster than + SHA-256). This implementation also utilizes the new salted checksumming + functionality in ZFS, which means that the checksum is pre-seeded with a + secret 256-bit random key (stored on the pool) before being fed the data + block to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the skein feature is set to enabled, the administrator can turn on the skein checksum on any dataset using the zfs set checksum=skein command (see zfs(8)). This feature becomes active once a checksum property has been set to skein, and will return to being enabled once all filesystems that have ever had their checksum set to skein are destroyed.

+

Booting off of pools using skein is supported.

+

+
+

+

edonr

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:edonr
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the Edon-R hash algorithm for + checksum, including for nopwrite (if compression is also enabled, an + overwrite of a block whose checksum matches the data being written will be + ignored). In an abundance of caution, Edon-R can not be used with dedup + (without verification).

+

Edon-R is a very high-performance hash algorithm that was part of + the NIST SHA-3 competition. It provides extremely high hash performance + (over 350% faster than SHA-256), but was not selected because of its + unsuitability as a general purpose secure hash algorithm. This + implementation utilizes the new salted checksumming functionality in ZFS, + which means that the checksum is pre-seeded with a secret 256-bit random key + (stored on the pool) before being fed the data block to be checksummed. Thus + the produced checksums are unique to a given pool.

+

When the edonr feature is set to enabled, the administrator can turn on the edonr checksum on any dataset using the zfs set checksum=edonr command (see zfs(8)). This feature becomes active once a checksum property has been set to edonr, and will return to being enabled once all filesystems that have ever had their checksum set to edonr are destroyed.

+

Booting off of pools using edonr is supported.

+

+
+

+

userobj_accounting

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:userobj_accounting
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature allows administrators to account for object usage information by user and group.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled. Each filesystem will be upgraded + automatically when remounted, or when new files are created under that + filesystem. The upgrade can also be started manually on filesystems by + running `zfs set version=current <pool/fs>`. The upgrade process runs + in the background and may take a while to complete for filesystems + containing a large number of files.

+

+
+

+
+
+

+

zpool(8)

+
+
+ + + + + +
June 8, 2018
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/fsck.zfs.8.html b/man/v0.7/8/fsck.zfs.8.html new file mode 100644 index 000000000..7b33f31fb --- /dev/null +++ b/man/v0.7/8/fsck.zfs.8.html @@ -0,0 +1,216 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
fsck.zfs(8)System Administration Commandsfsck.zfs(8)
+
+

+
+

+

fsck.zfs - Dummy ZFS filesystem checker.

+

+
+
+

+

fsck.zfs [options] + <dataset>

+

+
+
+

+

fsck.zfs is a shell stub that does nothing and always + returns true. It is installed by ZoL because some Linux distributions expect + a fsck helper for all filesystems.

+

+
+
+

+

All options and the dataset are ignored.

+

+
+
+

+

ZFS datasets are checked by running zpool scrub on the + containing pool. An individual ZFS dataset is never checked independently of + its pool, which is unlike a regular filesystem.
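For example, the rough equivalent of a filesystem check for any dataset in a pool named tank (a placeholder) would be:
# zpool scrub tank
# zpool status tank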

+

+
+
+

+

On some systems, if the dataset is in a degraded pool, then + it might be appropriate for fsck.zfs to return exit code 4 to + indicate an uncorrected filesystem error.

+

Similarly, if the dataset is in a faulted pool and has a + legacy /etc/fstab record, then fsck.zfs should return exit code 8 to + indicate a fatal operational error.

+

+
+
+

+

Darik Horn <dajhorn@vanadac.com>.

+

+
+
+

+

fsck(8), fstab(5), zpool(8)

+
+
+ + + + + +
2013 MAR 16ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/index.html b/man/v0.7/8/index.html new file mode 100644 index 000000000..9684c7e0b --- /dev/null +++ b/man/v0.7/8/index.html @@ -0,0 +1,163 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

System Administration Commands (8)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/mount.zfs.8.html b/man/v0.7/8/mount.zfs.8.html new file mode 100644 index 000000000..c1cf77391 --- /dev/null +++ b/man/v0.7/8/mount.zfs.8.html @@ -0,0 +1,265 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
mount.zfs(8)System Administration Commandsmount.zfs(8)
+
+

+
+

+

mount.zfs - mount a ZFS filesystem

+
+
+

+

mount.zfs [-sfnvh] [-o options] dataset + mountpoint

+

+
+
+

+

mount.zfs is part of the zfsutils package for Linux. It is + a helper program that is usually invoked by the mount(8) or + zfs(8) commands to mount a ZFS dataset.

+

All options are handled according to the FILESYSTEM + INDEPENDENT MOUNT OPTIONS section in the mount(8) manual, except for + those described below.

+

The dataset parameter is a ZFS filesystem name, as output + by the zfs list -H -o name command. This parameter never has a + leading slash character and is not a device name.

+

The mountpoint parameter is the path name of a + directory.
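A typical invocation, normally performed by mount(8) rather than by hand, might look like this (dataset and mountpoint are placeholders):
# mount.zfs -v tank/home /mnt/home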

+

+

+
+
+

+
+
+
Ignore bad or sloppy mount options.
+
+
Do a fake mount; do not perform the mount operation.
+
+
Do not update the /etc/mtab file.
+
+
Increase verbosity.
+
+
Print the usage message.
+
+
This flag sets the SELinux context for all files in the filesystem under + that mountpoint.
+
+
This flag sets the SELinux context for the filesystem being mounted.
+
+
This flag sets the SELinux context for unlabeled files.
+
+
This flag sets the SELinux context for the root inode of the + filesystem.
+
+
This private flag indicates that the dataset has an entry in the + /etc/fstab file.
+
+
This private flag disables extended attributes.
+
+
This private flag enables directory-based extended attributes and, if + appropriate, adds a ZFS context to the selinux system policy.
+
+
This private flag enables system attribute-based extended attributes and, if appropriate, adds a ZFS context to the selinux system policy.
+
+
Equivalent to xattr.
+
+
This private flag indicates that mount(8) is being called by the + zfs(8) command. +

+
+
+
+
+

+

ZFS conventionally requires that the mountpoint be an empty + directory, but the Linux implementation inconsistently enforces the + requirement.

+

The mount.zfs helper does not mount the contents of + zvols.

+

+
+
+

+
+
/etc/fstab
+
The static filesystem table.
+
/etc/mtab
+
The mounted filesystem table.
+
+
+
+

+

The primary author of mount.zfs is Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

fstab(5), mount(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/vdev_id.8.html b/man/v0.7/8/vdev_id.8.html new file mode 100644 index 000000000..d7927eefe --- /dev/null +++ b/man/v0.7/8/vdev_id.8.html @@ -0,0 +1,235 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
vdev_id(8)System Manager's Manualvdev_id(8)
+
+
+

+

vdev_id - generate user-friendly names for JBOD disks

+
+
+

+
vdev_id <-d dev> [-c config_file] [-g sas_direct|sas_switch]
+
+ [-m] [-p phys_per_port] +vdev_id -h
+
+
+

+

The vdev_id command is a udev helper which parses the file + /etc/zfs/vdev_id.conf(5) to map a physical path in a storage topology + to a channel name. The channel name is combined with a disk enclosure slot + number to create an alias that reflects the physical location of the drive. + This is particularly helpful when it comes to tasks like replacing failed + drives. Slot numbers may also be re-mapped in case the default numbering is + unsatisfactory. The drive aliases will be created as symbolic links in + /dev/disk/by-vdev.

+

The currently supported topologies are sas_direct and sas_switch. + A multipath mode is supported in which dm-mpath devices are handled by + examining the first-listed running component disk as reported by the + multipath(8) command. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating aliases based on existing + udev links in the /dev hierarchy using the alias configuration file + keyword. See the vdev_id.conf(5) man page for details.
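As a quick check of the configuration, the mapping for a single disk can be queried and the generated symbolic links listed (the device name sda is only an example):
# vdev_id -d sda
# ls -l /dev/disk/by-vdev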

+

+
+
+

+
+
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+
This is the only mandatory argument. Specifies the name of a device in /dev, e.g. "sda".
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely + identified by a PCI slot and a HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+
+
+
Specifies that vdev_id(8) will handle only dm-multipath devices. If + set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4.
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zdb.8.html b/man/v0.7/8/zdb.8.html new file mode 100644 index 000000000..a9b05a798 --- /dev/null +++ b/man/v0.7/8/zdb.8.html @@ -0,0 +1,568 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's Manual (smm)ZDB(8)
+
+
+

+

zdbdisplay + zpool debugging and consistency information

+
+
+

+ + + + + +
zdb[-AbcdDFGhiLMPsvX] [-e + [-V] [-p + path ...]] [-I + inflight I/Os] [-o + var=value]... + [-t txg] + [-U cache] + [-x dumpdir] + [poolname [object ...]]
+
+ + + + + +
zdb[-AdiPv] [-e + [-V] [-p + path ...]] [-U + cache] dataset + [object ...]
+
+ + + + + +
zdb-C [-A] + [-U cache]
+
+ + + + + +
zdb-E [-A] + word0:word1:...:word15
+
+ + + + + +
zdb-l [-Aqu] + device
+
+ + + + + +
zdb-m [-AFLPX] + [-e [-V] + [-p path ...]] + [-t txg] + [-U cache] + poolname [vdev + [metaslab ...]]
+
+ + + + + +
zdb-O dataset path
+
+ + + + + +
zdb-R [-A] + [-e [-V] + [-p path ...]] + [-U cache] + poolname + vdev:offset:size[:flags]
+
+ + + + + +
zdb-S [-AP] + [-e [-V] + [-p path ...]] + [-U cache] + poolname
+
+
+

+

The zdb utility displays information about a ZFS pool useful for debugging and performs some amount of consistency checking. It is not a general purpose tool, and options (and facilities) may change. This is neither a fsck(1M) nor an fsdb(1M) utility.

+

The output of this command in general reflects the on-disk structure of a ZFS pool, and is inherently unstable. The precise output of most invocations is not documented; a knowledge of ZFS internals is assumed.

+

If the dataset argument does not contain any "/" or "@" characters, it is interpreted as a pool name. The root dataset can be specified as pool/ (pool name followed by a slash).

+

When operating on an imported and active pool it is possible, + though unlikely, that zdb may interpret inconsistent pool data and behave + erratically.

+
+
+

+

Display options:

+
+
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs are specified, display information about those + specific objects only.

+
+
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + * compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
+ word0:word1:...:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
+ device
+
Read the vdev labels from the specified device. zdb -l will return 0 if a valid label was found, 1 if an error occurred, and 2 if no valid labels were found. Each unique configuration is displayed only once.
+
+ device
+
In addition display label space usage stats.
+
+ device
+
Display every configuration, unique or not. +

If the -q option is also specified, + don't print the labels.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
+
Disable leak tracing and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
+ poolname + vdev:offset:size[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) + offset (the offset within the vdev) + size (the size of the block to read) and, + optionally, flags (a set of flags, described + below).

+

+
+
+ offset
+
Print block pointer
+
+
Decompress the block. Set environment variable ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
+
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
+
Display the current uberblock.
+
+

Other options:

+
+
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
+ [-p path ...]
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
+ dumpdir
+
All blocks accessed will be copied to files in the specified directory. The blocks will be placed in sparse files whose name is the same as that of the file or device read. zdb can then be run on the generated files. Note that the -bbc flags are sufficient to access (and thus copy) all metadata on the pool.
+
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
+ inflight I/Os
+
Limit the number of outstanding checksum I/Os to the specified value. The + default value is 200. This option affects the performance of the + -c option.
+
+ var=value ...
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
+
Print numbers in an unscaled form more amenable to parsing, e.g. 1000000 rather than 1M.
+
+ transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
+ cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
+
Enable verbosity. Specify multiple times for increased verbosity.
+
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+
Display the configuration of imported pool + rpool
+
+
+
# zdb -C rpool
+
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ ...
+
+
+
Display basic dataset information about + rpool
+
+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ ...
+
+
+
Display basic information about object 0 in + rpool/export/home
+
+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
Display the predicted effect of enabling deduplication on + rpool
+
+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ ...
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
April 14, 2017Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zed.8.html b/man/v0.7/8/zed.8.html new file mode 100644 index 000000000..25431a992 --- /dev/null +++ b/man/v0.7/8/zed.8.html @@ -0,0 +1,377 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Administration CommandsZED(8)
+
+

+
+

+

ZED - ZFS Event Daemon

+

+
+
+

+

zed [-d zedletdir] [-f] [-F] + [-h] [-L] [-M] [-p pidfile] [-P + path] [-s statefile] [-v] [-V] + [-Z]

+

+
+
+

+

ZED (ZFS Event Daemon) monitors events generated by the ZFS + kernel module. When a zevent (ZFS Event) is posted, ZED will run any + ZEDLETs (ZFS Event Daemon Linkage for Executable Tasks) that have been + enabled for the corresponding zevent class.

+

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Run the daemon in the foreground.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+
Read the enabled ZEDLETs from the specified directory.
+
+
Write the daemon's process ID to the specified file.
+
+
Custom $PATH for zedlets to use. Normally zedlets run in a locked-down + environment, with hardcoded paths to the ZFS commands ($ZFS, $ZPOOL, $ZED, + ...), and a hardcoded $PATH. This is done for security reasons. However, + the ZFS test suite uses a custom PATH for its ZFS commands, and passes it + to zed with -P. In short, -P is only to be used by the ZFS test suite; + never use it in production!
+
+
Write the daemon's state to the specified file.
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the "zpool + events -v" command.

+

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory. These can be symlinked or copied from the + installed-zedlets directory; symlinks allow for automatic updates + from the installed ZEDLETs, whereas copies preserve local modifications. As + a security measure, ZEDLETs must be owned by root. They must have execute + permissions for the user, but they must not have write permissions for group + or other. Dotfiles are ignored.

+

ZEDLETs are named after the zevent class for which they should be + invoked. In particular, a ZEDLET will be invoked for a given zevent if + either its class or subclass string is a prefix of its filename (and is + followed by a non-alphabetic character). As a special case, the prefix + "all" matches all zevents. Multiple ZEDLETs may be invoked for a + given zevent.

+

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + "ZED_".

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner: 1) it is prefixed with "ZEVENT_", 2) it is converted to + uppercase, and 3) each non-alphanumeric character is converted to an + underscore. Some additional environment variables have been defined to + present certain nvpair values in a more convenient form. An incomplete list + of zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as + "seconds nanoseconds" since the Epoch.
+
+
The seconds component of ZEVENT_TIME.
+
+
The nanoseconds component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The ZFS alias (name-version-release) string used to build the + daemon.
+
+
The ZFS version used to build the daemon.
+
+
The ZFS release used to build the daemon.
+
+

ZEDLETs may need to call other ZFS commands. The installation + paths of the following executables are defined: ZDB, ZED, + ZFS, ZINJECT, and ZPOOL. These variables can be + overridden in the rc file if needed.
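As an illustration only, a minimal ZEDLET that logs a one-line summary of every zevent could look like the following (the filename all-log-example.sh and the log path are hypothetical); it relies on the ZEVENT_ variables described above and must be installed in the enabled-zedlets directory, owned by root, and marked executable:
#!/bin/sh
# all-log-example.sh -- hypothetical ZEDLET; the "all" prefix matches every
# zevent class. Append a one-line summary of the event to a log file.
echo "eid=${ZEVENT_EID} class=${ZEVENT_SUBCLASS} time=${ZEVENT_TIME_STRING}" \
    >> /var/log/zed-example.log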

+

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@libexecdir@/zfs/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state. +

+
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
+
Terminate the daemon. +

+
+
+
+
+

+

ZED requires root privileges.

+

+
+
+

+

Events are processed synchronously by a single thread. This can + delay the processing of simultaneous zevents.

+

There is no maximum timeout for ZEDLET execution. Consequently, a + misbehaving ZEDLET can delay the processing of subsequent zevents.

+

The ownership and permissions of the enabled-zedlets + directory (along with all parent directories) are not checked. If any of + these directories are improperly owned or permissioned, an unprivileged user + could insert a ZEDLET to be executed as root. The requirement that ZEDLETs + be owned by root mitigates this to some extent.

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Some zevent nvpair types are not handled. These are denoted by + zevent environment variables having a "_NOT_IMPLEMENTED_" + value.

+

Internationalization support via gettext has not been added.

+

The configuration file is not yet implemented.

+

The diagnosis engine is not yet implemented.

+

+
+
+

+

ZED (ZFS Event Daemon) is distributed under the terms of + the Common Development and Distribution License Version 1.0 (CDDL-1.0).

+

Developed at Lawrence Livermore National Laboratory + (LLNL-CODE-403049).

+

+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
October 1, 2013ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zfs.8.html b/man/v0.7/8/zfs.8.html new file mode 100644 index 000000000..8f22dc4f0 --- /dev/null +++ b/man/v0.7/8/zfs.8.html @@ -0,0 +1,3543 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)System Manager's Manual (smm)ZFS(8)
+
+
+

+

zfsconfigures + ZFS file systems

+
+
+

+ + + + + +
zfs-?
+
+ + + + + +
zfscreate [-p] + [-o + property=value]... + filesystem
+
+ + + + + +
zfscreate [-ps] + [-b blocksize] + [-o + property=value]... + -V size + volume
+
+ + + + + +
zfsdestroy [-Rfnprv] + filesystem|volume
+
+ + + + + +
zfsdestroy [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]...
+
+ + + + + +
zfsdestroy + filesystem|volume#bookmark
+
+ + + + + +
zfssnapshot [-r] + [-o property=value]... + filesystem@snapname|volume@snapname...
+
+ + + + + +
zfsrollback [-Rfr] + snapshot
+
+ + + + + +
zfsclone [-p] + [-o + property=value]... + snapshot + filesystem|volume
+
+ + + + + +
zfspromote + clone-filesystem
+
+ + + + + +
zfsrename [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
+ + + + + +
zfsrename [-fp] + filesystem|volume + filesystem|volume
+
+ + + + + +
zfsrename -r + snapshot snapshot
+
+ + + + + +
zfslist + [-r|-d + depth] [-Hp] + [-o + property[,property]...] + [-s property]... + [-S property]... + [-t + type[,type]...] + [filesystem|volume|snapshot]...
+
+ + + + + +
zfsset + property=value + [property=value]... + filesystem|volume|snapshot...
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + filesystem|volume|snapshot|bookmark...
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot...
+
+ + + + + +
zfsupgrade
+
+ + + + + +
zfsupgrade -v
+
+ + + + + +
zfsupgrade [-r] + [-V version] + -a | filesystem
+
+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Ov] + [-o options] + -a | filesystem
+
+ + + + + +
zfsunmount [-f] + -a | + filesystem|mountpoint
+
+ + + + + +
zfsshare -a | + filesystem
+
+ + + + + +
zfsunshare -a | + filesystem|mountpoint
+
+ + + + + +
zfsbookmark snapshot + bookmark
+
+ + + + + +
zfssend [-DLPRcenpv] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-Lce] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend [-Penv] + -t receive_resume_token
+
+ + + + + +
zfsreceive [-Fnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-Fnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+ + + + + +
zfsallow + filesystem|volume
+
+ + + + + +
zfsallow [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + -@setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfshold [-r] + tag snapshot...
+
+ + + + + +
zfsholds [-r] + snapshot...
+
+ + + + + +
zfsrelease [-r] + tag snapshot...
+
+ + + + + +
zfsdiff [-FHt] + snapshot + snapshot|filesystem
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace. For + example:

+
+
pool/{filesystem,volume,snapshot}
+
+

where the maximum length of a dataset name is + MAXNAMELEN (256 bytes).

+

A dataset can be one of the following:

+
+
+
A ZFS dataset of type filesystem can be mounted within + the standard system namespace and behaves like other file systems. While + ZFS file systems are designed to be POSIX compliant, known issues exist + that prevent compliance in some cases. Applications that depend on + standards conformance might fail due to non-standard behavior when + checking file system free space.
+
+
A logical volume exported as a raw or block device. This type of dataset + should only be used under special circumstances. File systems are + typically used in most environments.
+
+
A read-only version of a file system or volume at a given point in time. + It is specified as + filesystem@name or + volume@name.
+
+
Much like a snapshot, but without the hold on on-disk + data. It can be used as the source of a send (but not for a receive). It + is specified as + filesystem#name or + volume#name.
+
+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system, such as + mounting and unmounting, taking snapshots, and setting properties. The + physical storage characteristics, however, are managed by the + zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. + Snapshots can be created extremely quickly, and initially consume no + additional space within the pool. As data within the active dataset changes, + the snapshot consumes more data than would otherwise be shared with the + active dataset.

+

Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back; visibility is determined by the snapdev property of the parent volume.

+

File system snapshots can be accessed under the + .zfs/snapshot directory in the root of the file + system. Snapshots are automatically mounted on demand and may be unmounted + at regular intervals. The visibility of the .zfs + directory can be controlled by the snapdir property.

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks cannot be accessed through the filesystem in any way. From a storage standpoint a bookmark just provides a way to reference when a snapshot was created as a distinct object. Bookmarks are initially tied to a snapshot, not the filesystem or volume, and they will survive if the snapshot itself is destroyed. Since they are very lightweight, there's little incentive to destroy them.

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a snapshot is + cloned, it creates an implicit dependency between the parent and child. Even + though the clone is created somewhere else in the dataset hierarchy, the + original snapshot cannot be destroyed as long as a clone exists. The + origin property exposes this dependency, and the + destroy command lists any such dependencies, if they + exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.

+
+
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set in the mountpoint property. This directory is created as needed, and ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/fstab). The mountpoint property can be inherited, so if pool/home has a mount point of /export/stuff, then pool/home/user automatically inherits a mount point of /export/stuff/user.

+

A file system mountpoint property of + none prevents the file system from being mounted.

+

If needed, ZFS file systems can also be managed with traditional + tools (mount, umount, + /etc/fstab). If a file system's mount point is set + to legacy, ZFS makes no attempt to manage the file system, + and the administrator is responsible for mounting and unmounting the file + system. Because pools must be imported before a legacy mount can succeed, + administrators should ensure that legacy mounts are only attempted after the + zpool import process finishes at boot time. For example, on machines using + systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that the zfs-import completes before systemd attempts + mounting the filesystem. See systemd.mount(5) for details.
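A hedged example of such a legacy setup (dataset and mountpoint are placeholders) would set the mount point to legacy and add an fstab entry:
# zfs set mountpoint=legacy tank/data
and in /etc/fstab:
tank/data  /mnt/data  zfs  defaults,x-systemd.requires=zfs-import.target  0  0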

+
+
+

+

Deduplication is the process for removing redundant data at the + block level, reducing the total amount of data stored. If a file system has + the dedup property enabled, duplicate data blocks are + removed synchronously. The result is that only unique data is stored and + common components are shared among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow IO and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk IO.

+

Before creating a pool with deduplication enabled, ensure that you have planned your hardware requirements appropriately and implemented appropriate recovery practices, such as regular backups. As a less resource-intensive alternative to deduplication, consider using compression.
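For example (dataset and pool names are placeholders), deduplication can be enabled per dataset and its overall effect on the pool observed afterwards:
# zfs set dedup=on tank/data
# zpool get dedupratio tank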

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its shortened column + name, avail.

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its shortened column name, lrefer.

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its shortened column name, lused.

+
+
+
For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive -s, this opaque token can be provided to + zfs send -t to resume and complete the zfs + receive.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its shortened column name, refer.

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: filesystem, + volume, or snapshot.
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section) is space that is + referenced exclusively by this snapshot. If this snapshot is destroyed, + the amount of used space will be freed. Space that is + shared by multiple snapshots isn't accounted for in this metric. When a + snapshot is destroyed, space that was previously shared with this + snapshot can become unique to snapshots adjacent to it, thus changing + the used space of those snapshots. The used space of the latest snapshot + can also be affected by changes in the file system. Note that the + used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced does not + take into account pending changes. Pending changes are generally + accounted for within a few seconds. Committing a change to a disk using + fsync(2) or O_SYNC does not + necessarily guarantee that the space usage information is updated + immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
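As a quick illustration of this breakdown (the pool name tank is illustrative), the space columns listed by -o space correspond to the usedby* properties:

    zfs list -r -o space tank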
userused@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du and + ls -s. See the + zfs userspace subcommand + for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.

+

The userused@... + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the following forms:

+ +

Files created on Linux always have POSIX owners.

+
+
userobjused@user
+
The userobjused property is similar to userused but instead it counts the number of objects consumed by a user. This property counts all objects allocated on behalf of the user, so it may differ from the results of system tools such as df -i.

When the property xattr=on is set on a file system, additional objects will be created per-file to store extended attributes. These additional objects are reflected in the userobjused value and are counted against the user's userobjquota. When a file system is configured to use xattr=sa, no additional internal objects are normally required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
groupused@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
groupobjused@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 8 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its shortened column name, volblock.

+
+
+
The amount of space referenced by this dataset, that was + written since the previous snapshot (i.e. that is not referenced by the + previous snapshot).
+
written@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which for + clones may be a snapshot in the origin's filesystem (or the origin of + the origin's filesystem, etc.)

+
+
+

The following native properties can be used to change the behavior + of a ZFS dataset.

+
+
aclinherit=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
discard does not inherit any ACEs.
+
+
noallow only inherits inheritable ACEs that specify "deny" permissions.
+
+
restricted (the default) removes the write_acl and write_owner permissions when the ACE is inherited.
+
+
passthrough inherits all inheritable ACEs without any modifications.
+
+
passthrough-x has the same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + posix ACLs.

+
+
acltype=off|noacl|posixacl
+
Controls whether ACLs are enabled and if so what type of ACL to use. +
+
+
off (the default): when a file system has the acltype property set to off then ACLs are disabled.
+
+
noacl is an alias for off.
+
+
posixacl indicates posix ACLs should be used. Posix ACLs are specific to Linux and are not functional on other platforms. Posix ACLs are stored as an extended attribute and therefore will not overwrite any existing NFSv4 ACLs which may be set.
+
+

To obtain the best performance when setting + posixacl users are strongly encouraged to set the + xattr=sa property. This will result in the posix ACL + being stored more efficiently on disk. But as a consequence of this all + new extended attributes will only be accessible from OpenZFS + implementations which support the xattr=sa property. + See the xattr property for more details.

+
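A minimal sketch of the recommended combination (the dataset name is illustrative):

    zfs set acltype=posixacl xattr=sa tank/fs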
+
atime=on|off
+
Controls whether the access time for files is updated when they are read. + Turning this property off avoids producing write traffic when reading + files and can result in significant performance gains, though it might + confuse mailers and other similar utilities. The values + on and off are equivalent to the + atime and + + mount options. The default value is on. See also + relatime below.
+
canmount=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.

+
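A sketch of the two-datasets-one-mountpoint arrangement described above (pool and dataset names are illustrative):

    zfs create -o canmount=off -o mountpoint=/export rpool/export
    zfs create -o canmount=off -o mountpoint=/export tank/export
    zfs create rpool/export/home   # mounted at /export/home
    zfs create tank/export/data    # mounted at /export/data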
+
checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, and edonr checksum algorithms require enabling the appropriate features on the pool. These algorithms are not supported by GRUB and should not be set on the boot filesystem when using GRUB to boot the system. Please see zpool-features(5) for more information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
compression=on|off|gzip|gzip-N|lz4|lzjb|zle
+
Controls the compression algorithm used for this dataset. +

Setting compression to on indicates that the + current default compression algorithm should be used. The default + balances compression and decompression speed, with compression ratio and + is expected to work well on a wide variety of workloads. Unlike all + other settings for this property, on does not select a + fixed compression type. As new compression algorithms are added to ZFS + and enabled on a pool, the default compression algorithm may change. The + current default compression algorithm is either lzjb + or, if the lz4_compress feature is enabled, + lz4.

+

The lz4 compression algorithm is a high-performance replacement for the lzjb algorithm. It features significantly faster compression and decompression, as well as a moderately higher compression ratio than lzjb, but can only be used on pools with the lz4_compress feature set to enabled. See zpool-features(5) for details on ZFS feature flags and the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its shortened column name, compress. Changing this property affects only newly-written data.

+
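For example, to enable LZ4 compression on a dataset and check the resulting ratio (the dataset name is illustrative):

    zfs set compression=lz4 tank/data
    zfs get compression,compressratio tank/data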
+
context=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
fscontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the file system being mounted. See selinux(8) for more information.
+
defcontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
rootcontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
copies=1|2|3
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a missing top-level vdev. Do NOT create, for example, a two-disk striped pool and set copies=2 on some datasets thinking you have set up redundancy for them. When a disk fails you will not be able to import the pool and will have lost all of your data.

+
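For example, extra copies can be requested at creation time so that data written to the dataset is stored redundantly within the pool (the dataset name is illustrative); as noted above, this does not replace pool-level redundancy:

    zfs create -o copies=2 tank/important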
+
devices=on|off
+
Controls whether device nodes can be opened on this file system. The default value is on. The values on and off are equivalent to the dev and nodev mount options.
+
dnodesize=legacy|auto|1k|2k|4k|8k|16k
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy requires the + large_dnode pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the workload makes heavy + use of extended attributes. This may be applicable to SELinux-enabled + systems, Lustre servers, and Samba servers, for example. Literal values + are supported for cases where the optimal size is known in advance and + for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode feature, or if you + need to import this pool on a system that doesn't support the + large_dnode feature.

+

This property can also be referred to by its shortened column name, dnsize.

+
+
exec=on|off
+
Controls whether processes can be executed from within this file system. The default value is on. The values on and off are equivalent to the exec and noexec mount options.
+
filesystem_limit=count|none
+
Limits the number of filesystems and volumes that can exist under this point in the dataset tree. The limit is not enforced if the user is allowed to change the limit. Setting a filesystem_limit on a descendent of a filesystem that already has a filesystem_limit does not override the ancestor's filesystem_limit, but rather imposes an additional limit. This feature must be enabled to be used (see zpool-features(5)).
+
mountpoint=path|none|legacy
+
Controls the mount point used for this file system. See the + Mount Points section for more + information on how this property is used. +

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none, or if they were mounted before the property + was changed. In addition, any shared file systems are unshared and + shared in the new location.

+
+
nbmand=on|off
+
Controls whether the file system should be mounted with + nbmand (Non Blocking mandatory locks). This is used for + SMB clients. Changes to this property only take effect when the file + system is umounted and remounted. See mount(8) for more + information on nbmand mounts. This property is not used + on Linux.
+
overlay=off|on
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux file + systems. For consistency with OpenZFS on other platforms overlay mounts + are off by default. Set to on to + enable overlay mounts.
+
primarycache=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is + set to all, then both user data and metadata is cached. + If this property is set to none, then neither user data + nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
quota=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.

+
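For example, to cap the space used by a file system and everything beneath it (the dataset name is illustrative):

    zfs set quota=50G tank/home
    zfs get quota,used,available tank/home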
+
snapshot_limit=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(5)).
+
userquota@user=size|none
+
Limits the amount of space consumed by the specified user. User space consumption is identified by the userused@user property.

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace subcommand + for more information.

+

Unprivileged users can only access their own space usage. The root user, or a user who has been granted the userquota privilege with zfs allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@... properties are not + displayed by zfs get + all. The user's name must be appended after the + @ symbol, using one of the following forms:

+ +

Files created on Linux always have POSIX owners.

+
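For example, to limit the space charged to a particular user and then inspect usage against that quota (user and dataset names are illustrative):

    zfs set userquota@alice=10G tank/home
    zfs get userquota@alice,userused@alice tank/home
    zfs userspace tank/home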
+
userobjquota@user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
groupquota@group=size|none
+
Limits the amount of space consumed by the specified group. Group space consumption is identified by the groupused@group property.

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
groupobjquota@group=size|none
+
The groupobjquota is similar to groupquota but it limits the number of objects a group can consume. Please refer to userobjused for more information about how objects are counted.
+
readonly=on|off
+
Controls whether this dataset can be modified. The default value is off. The values on and off are equivalent to the ro and rw mount options.

This property can also be referred to by its shortened column name, rdonly.

+
+
recordsize=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.

+

The size specified must be a power of two greater than or + equal to 512 and less than or equal to 128 Kbytes. If the + large_blocks feature is enabled on the pool, the size + may be up to 1 Mbyte. See zpool-features(5) for + details on ZFS feature flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.

+

This property can also be referred to by its shortened column name, recsize.

+
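For example, a database that performs fixed 16K random I/O might benefit from a matching record size (the dataset name is illustrative); as noted above, only files created after the change are affected:

    zfs create -o recordsize=16K tank/db
    zfs set recordsize=16K tank/db    # for an existing dataset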
+
redundant_metadata=all|most
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 100 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

The default value is all.

+
+
refquota=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
refreservation=size|none
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

This property can also be referred to by its shortened column name, refreserv.

+
+
relatime=on|off
+
Controls the manner in which the access time is updated when atime=on is set. Turning this property on causes the access time to be updated relative to the modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time or if the existing access time hasn't been updated within the past 24 hours. The default value is off. The values on and off are equivalent to the relatime and norelatime mount options.
+
reservation=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its shortened column name, reserv.

+
+
secondarycache=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property + is set to all, then both user data and metadata is + cached. If this property is set to none, then neither + user data nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
setuid=on|off
+
Controls whether the setuid bit is respected for the file system. The default value is on. The values on and off are equivalent to the suid and nosuid mount options.
+
sharesmb=on|off|opts
+
Controls whether the file system is shared by using Samba USERSHARES, and what options are to be used. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the net(8) command is invoked to create a USERSHARE.

Because SMB shares require a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name except that the characters in the dataset name which would be invalid in the resource name are replaced with underscore (_) characters. Linux does not currently support additional options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", i.e. read and write permissions) and no guest access (which means Samba must be able to authenticate a real user via system passwd/shadow, LDAP, or smbpasswd) by default. This means that any additional access control (for example, disallowing access for specific users) must be done on the underlying file system.

+
+
sharenfs=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are to be used. A file system with a sharenfs property of off is managed with the exportfs(8) command and entries in the /etc/exports file. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the dataset is shared using the default options:

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.

+
+
logbias=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
+
snapdev=hidden|visible
+
Controls whether the volume snapshot devices under /dev/zvol/&lt;pool&gt; are hidden or visible. The default value is hidden.
+
snapdir=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section. The default value + is hidden.
+
sync=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
+
version=N|current
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
volsize=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse volume" (also known as "thin provisioning") can be created by specifying the -s option to the zfs create -V command, or by changing the reservation after the volume has been created. A "sparse volume" is a volume where the reservation is less than the volume size. Consequently, writes to a sparse volume can fail with ENOSPC when the pool is low on space. For a sparse volume, changes to volsize are not reflected in the reservation.

+
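For example, creating a regular volume and a sparse volume (names and sizes are illustrative):

    zfs create -V 100G tank/vol         # reservation equal to volsize
    zfs create -s -V 100G tank/sparse   # sparse volume, no reservation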
+
volmode=default|full|geom|dev|none
+
This property specifies how volumes should be exposed to the OS. Setting it to full exposes volumes as fully fledged block devices, providing maximal functionality. The value geom is just an alias for full and is kept for compatibility. Setting it to dev hides its partitions. Volumes with the property set to none are not exposed outside ZFS, but can be snapshotted, cloned, replicated, etc., which can be suitable for backup purposes. The value default means that volume exposure is controlled by the system-wide tunable zvol_volmode, where full, dev and none are encoded as 1, 2 and 3 respectively. The default value is full.
+
vscan=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used on Linux.
+
xattr=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two styles of extended attributes are supported: directory based and system attribute based.

The default value of on enables directory based extended attributes. This style of extended attribute imposes no practical limit on either the size or number of attributes which can be set on a file, although under Linux the getxattr(2) and setxattr(2) system calls limit the maximum size to 64K. This is the most compatible style of extended attribute and is supported by all OpenZFS implementations.

+

System attribute based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk IO required. Up to + 64K of data may be stored per-file in the space reserved for system + attributes. If there is not enough space available for an extended + attribute then it will be automatically written as a directory based + xattr. System attribute based extended attributes are not accessible on + platforms which do not support the xattr=sa + feature.

+

The use of system attribute based xattrs is strongly encouraged for users of SELinux or posix ACLs. Both of these features heavily rely on extended attributes and benefit significantly from the reduced access time.

+

The values on and off are equivalent to the xattr and noxattr mount options.

+
+
zoned=on|off
+
Controls whether the dataset is managed from a non-global zone. Zones are + a Solaris feature and are not relevant on Linux. The default value is + off.
+
+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.

+
+
casesensitivity=sensitive|insensitive|mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".

+
+
normalization=none|formC|formD|formKC|formKD
+
Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+
utf8only=on|off
+
Indicates whether the file system should reject file names that include characters that are not present in the UTF-8 character code set. If this property is explicitly set to off, the normalization property must either not be explicitly set or be set to none. The default value for the utf8only property is off. This property cannot be changed after the file system is created.
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.

+
+
+

+

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
    PROPERTY                MOUNT OPTION
+    atime                   atime/noatime
+    canmount                auto/noauto
+    devices                 dev/nodev
+    exec                    exec/noexec
+    readonly                ro/rw
+    relatime                relatime/norelatime
+    setuid                  suid/nosuid
+    xattr                   xattr/noxattr
+
+

In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The nosuid option is an alias for nodevices,nosetuid. These properties are reported as "temporary" by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.

+
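For example, a file system can be mounted read-only and without access time updates for the duration of a single mount, without changing the stored properties (the dataset name is illustrative):

    zfs mount -o ro,noatime tank/data
    zfs get readonly,atime tank/data   # the SOURCE column reports these as temporary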
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the chance + that two independently-developed packages use the same property name for + different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.

+
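For example, an administrator might tag datasets with a site-specific property (the property name and value are purely illustrative):

    zfs set com.example:backup-policy=daily tank/data
    zfs get com.example:backup-policy tank/data
    zfs inherit com.example:backup-policy tank/data   # clears the property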
+
+

+

ZFS volumes may be used as swap devices. After creating the volume with the zfs create -V command, set up and enable the swap area using the mkswap(8) and swapon(8) commands. Do not swap to a file on a ZFS file system. A ZFS swap file configuration is not supported.

+
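A minimal sketch of the procedure described above (volume name and size are illustrative):

    zfs create -V 4G tank/swap
    mkswap /dev/zvol/tank/swap
    swapon /dev/zvol/tank/swap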
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs create + [-p] [-o + property=value]... + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent. +
+
-o property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]... + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block + device in /dev/zvol/path, where + is the name + of the volume in the ZFS namespace. The size represents the logical size + as exported by the device. By default, a reservation of equal size is + created. +

size is automatically rounded up to the + nearest 128 Kbytes to ensure that the volume has an integral number of + blocks regardless of blocksize.

+
+
-b blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
-o property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See + volsize in the + Native Properties section + for more information about sparse volumes.
+
+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Force an unmount of any file systems using the + unmount -f command. + This option has no effect on non-file systems or unmounted file + systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]...
+
The given snapshots are destroyed immediately if and only if the + zfs destroy command + without the -d option would have destroyed it. + Such immediate destruction would occur, for example, if the snapshot had + no clones and the user-initiated reference count were zero. +

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same + filesystem or volume may be specified in a comma-separated list of + snapshots. Only the snapshot's short name (the part after the + @) should be specified when using a range or + comma-separated list to identify multiple snapshots.

+
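For example, assuming a file system with snapshots @monday through @friday (names are illustrative), a dry run of destroying an inclusive range could look like:

    zfs destroy -nv tank/home@monday%wednesday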
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Defer snapshot deletion.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.

+
+
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
zfs snapshot + [-r] [-o + property=value]... + filesystem@snapname|volume@snapname...
+
Creates snapshots with the given names. All previous modifications by + successful system calls to the file system are part of the snapshots. + Snapshots are taken atomically, so that all snapshots correspond to the + same moment in time. See the Snapshots + section for details. +
+
-o property=value
+
Sets the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets
+
+
+
zfs rollback + [-Rfr] snapshot
+
Roll back the given dataset to a previous snapshot. When a dataset is + rolled back, all data that has changed since the snapshot is discarded, + and the dataset reverts to the state at the time of the snapshot. By + default, the command refuses to roll back to a snapshot other than the + most recent one. In order to do so, all intermediate snapshots and + bookmarks must be destroyed by specifying the -r + option. +

The -rR options do not recursively destroy the child snapshots of a recursive snapshot. Only direct snapshots of the specified filesystem are destroyed by either of these options. To completely roll back a recursive snapshot, you must roll back the individual child snapshots.

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones + of those snapshots.
+
+
Used with the -R option to force an unmount of + any clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
+
+
+
zfs clone + [-p] [-o + property=value]... + snapshot + filesystem|volume
+
Creates a clone of the given snapshot. See the + Clones section for details. The target + dataset can be located anywhere in the ZFS hierarchy, and is created as + the same type as the original. +
+
-o property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. If + the target filesystem or volume already exists, the operation + completes successfully.
+
+
+
zfs promote + clone-filesystem
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot. This makes it possible to destroy the file + system that the clone was created from. The clone parent-child dependency + relationship is reversed, so that the origin file system becomes a clone + of the specified file system. +

The snapshot that was cloned, and any snapshots previous to + this snapshot, are now owned by the promoted clone. The space they use + moves from the origin file system to the promoted clone, so enough space + must be available to accommodate these snapshots. No new space is + consumed by this operation, but the space accounting is adjusted. The + promoted clone must not have any conflicting snapshot names of its own. + The rename subcommand can be used to rename any + conflicting snapshots.

+
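A common sequence, sketched with illustrative names, is to clone a snapshot, work in the clone, then promote it so the original file system can be destroyed:

    zfs snapshot tank/prod@pre-upgrade
    zfs clone tank/prod@pre-upgrade tank/test
    zfs promote tank/test
    # tank/prod is now a clone of tank/test@pre-upgrade and may be destroyed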
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + [-fp] + filesystem|volume + filesystem|volume
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
+
+
Force unmount any filesystems that need to be unmounted in the + process.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are + the only dataset that can be renamed recursively.
+
zfs list + [-r|-d + depth] [-Hp] + [-o + property[,property]...] + [-s property]... + [-S property]... + [-t + type[,type]...] + [filesystem|volume|snapshot]...
+
Lists the property information for the given datasets in tabular form. If + specified, you can list property information by the absolute pathname or + the relative pathname. By default, all file systems and volumes are + displayed. Snapshots are displayed if the listsnaps + property is on (the default is off). + The following fields are displayed, + name,used,available,referenced,mountpoint. +
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
-S property
+
Same as the -s option, but sorts by property + in descending order.
+
-d depth
+
Recursively display any children of the dataset, limiting the + recursion to depth. A + depth of 1 will display only + the dataset and its direct children.
+
-o property
+
A comma-separated list of properties to display. The property must be: +
    +
  • One of the properties described in the + Native Properties + section
  • +
  • A user property
  • +
  • The value name to display the dataset name
  • +
  • The value space to display space usage properties on file systems and volumes. This is a shortcut for specifying -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume.
  • +
+
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command + line.
+
-s property
+
A property for sorting the output by column in ascending order based + on the value of the property. The property must be one of the + properties described in the + Properties section, or the + special value name to sort by the dataset name. + Multiple properties can be specified at one time using multiple + -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
    +
  • Numeric types sort in numeric order.
  • +
  • String types sort in alphabetical order.
  • +
  • Types inappropriate for a row sort that row to the literal bottom, + regardless of the specified ordering.
  • +
+

If no sorting options are specified the existing behavior + of zfs list is + preserved.

+
+
-t type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or all. For example, + specifying -t snapshot + displays only snapshots.
+
+
+
zfs set + property=value + [property=value]... + filesystem|volume|snapshot...
+
Sets the property or list of properties to the given value(s) for each dataset. Only some properties can be edited. See the Properties section for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section.
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + filesystem|volume|snapshot|bookmark...
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
    name      Dataset name
+    property  Property name
+    value     Property value
+    source    Property source.  Can either be local, default,
+              temporary, inherited, or none (-).
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections.

+

The special value all can be used to display + all properties that apply to the given dataset's type (filesystem, + volume, snapshot, or bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
-d depth
+
Recursively display any children of the dataset, limiting the + recursion to depth. A depth of + 1 will display only the dataset and its direct + children.
+
-o field
+
A comma-separated list of columns to display. + name,property,, + is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
-s source
+
A comma-separated list of sources to display. Those properties coming + from a source other than those in this list are ignored. Each source + must be one of the following: + , + default, + , + , + and none. The default value is all sources.
+
-t type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot...
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See the Properties + section for a listing of default values, and details on which properties + can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value if one exists; otherwise + operate as if the -S option was not + specified.
+
+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] -a | + filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of the software. zfs + send streams generated from new snapshots of these + file systems cannot be accessed on systems running older versions of the + software. +

In general, the file system version is independent of the pool + version. See zpool(8) for information on the + zpool upgrade + command.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.

+
+
-V version
+
Upgrade to the specified version. If the + -V flag is not specified, this command + upgrades to the most recent version. This option can only be used to + increase the version number, and only up to the most recent version + supported by this software.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
+
+
+
zfs + userspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each user in the specified filesystem or snapshot. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
-S field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (for example, + stat(2), ls + -l) perform this translation, so the + -i option allows the output from + zfs userspace to be + compared directly with those utilities. However, + -i may lead to confusion if some files were + created by an SMB user before a SMB-to-POSIX name mapping was + established. In such a case, some files will be owned by the SMB + entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
-o field[,field]...
+
Display only the specified fields from the following set: + type, name, + used, quota. The default is to + display all fields.
+
+
Use exact (parsable) numeric output.
+
-s field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
-t type[,type]...
+
Print only the specified types from the following set: + all, posixuser, + smbuser, posixgroup, + smbgroup. The default is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + zfs userspace, except that + the default types to display are -t + posixgroup,smbgroup.
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Ov] [-o + options] -a | + filesystem
+
Mounts ZFS file systems. +
+
+
Perform an overlay mount. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Mount the specified filesystem.
+
-o options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + Temporary Mount + Point Properties section for details.
+
+
Report mount progress.
+
+
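For instance (the dataset name tank/home is hypothetical), all ZFS file systems can be mounted at once, a single file system can be mounted temporarily read-only, and the current mounts can be listed:
# zfs mount -a
# zfs mount -o ro tank/home
# zfs mount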
+
zfs unmount + [-f] -a | + filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
Forcefully unmount the file system, even if it is currently in + use.
+
+
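A short sketch (dataset and mount point hypothetical): a file system may be unmounted by dataset name or by mount point, and -f forces the unmount even if it is busy:
# zfs unmount tank/home/bob
# zfs unmount -f /export/home/bob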
+
zfs share + -a | filesystem
+
Shares available ZFS file systems. +
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
+
+
+
zfs unshare + -a | + filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
+
+
+
zfs bookmark + snapshot bookmark
+
Creates a bookmark of the given snapshot. Bookmarks mark the point in time + when the snapshot was created, and can be used as the incremental source + for a zfs send command. +

This feature must be enabled to be used. See zpool-features(5) for details on ZFS feature flags and the bookmarks feature.

+
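A minimal sketch of the intended workflow, assuming a hypothetical tank/data file system and a /backup path: create a bookmark, destroy the snapshot to reclaim its space, and later use the bookmark as the incremental source for a send:
# zfs bookmark tank/data@monday tank/data#monday
# zfs destroy tank/data@monday
# zfs snapshot tank/data@tuesday
# zfs send -i tank/data#monday tank/data@tuesday > /backup/monday-to-tuesday.stream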
+
zfs send + [-DLPRcenpv] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
+ -D, --dedup
+
Generate a deduplicated stream. Blocks which would have been sent + multiple times in the send stream will only be sent once. The + receiving system must also support this feature to receive a + deduplicated stream. This flag can be used regardless of the dataset's + dedup property, but performance will be much better + if the filesystem uses a dedup-capable checksum (for example, + sha256).
+
+ -I snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
+ -L, --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ -P, --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ -R, --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed.

+
+
+ -e, --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. See zpool-features(5) for details on ZFS + feature flags and the embedded_data feature.
+
+ -c, --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ -i snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
+ -n, --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ -p, --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature.
+
+ -v, --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to receive your streams on future versions of ZFS.

+
+
+
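As a sketch (snapshot and pool names hypothetical), a dry run with -nPv reports what an incremental package created with -I would contain, and a replication stream can be piped to a remote pool over ssh:
# zfs send -nPv -I tank/data@a tank/data@d
# zfs send -R tank/data@d | ssh host zfs receive -F poolB/backup/data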
+
zfs send + [-Lce] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
+ -L, --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ -c, --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ -e, --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. See zpool-features(5) for details on ZFS + feature flags and the embedded_data feature.
+
+ -i snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
+
+
zfs send + [-Penv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs receive -s for more details.
+
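A minimal sketch of resuming an interrupted receive (dataset names hypothetical; the token shown is a placeholder for the value printed by zfs get): query the receive_resume_token on the receiving dataset, then feed it back to zfs send -t:
# zfs get -H -o value receive_resume_token poolB/restore/fs
# zfs send -t <token-from-previous-command> | zfs receive -s poolB/restore/fs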
zfs receive + [-Fnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-Fnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost file system in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin=snapshot is a special case because, even if origin is a read-only property and cannot be set, it's allowed to receive the send stream as a clone of the given snapshot.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ -o origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ -o property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked immediately before the + receive. When receiving a stream from zfs + send -R, causes the + property to be inherited by all descendant datasets, as through + zfs inherit + property was run on any descendant datasets that + have this property set on the sending system. +

Any editable property can be set at receive time. Set-once + properties bound to the received data, such as + normalization and + casesensitivity, cannot be set at receive time + even when the datasets are newly created by + zfs receive. + Additionally both settable properties version and + volsize cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with a stream generated by + zfs send + -t token, where the + token is the value of the + receive_resume_token property of the filesystem or + volume which is received into.

+

To use this flag, the storage pool must have the extensible_dataset feature enabled. See zpool-features(5) for details on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ -x property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions on set-once + and special properties apply equally to + -x.

+
+
+
+
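For example, as a sketch with hypothetical dataset names, a replication stream can be received without mounting the new file systems, forcing rollback if needed, overriding compression on the receiving side, and ignoring any mountpoint carried in the stream:
# zfs send -R tank/data@nightly | \
  zfs receive -Fu -o compression=lz4 -x mountpoint poolB/backup/data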
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume +
+ zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
-e|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ -g group[,group]...
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ -u user[,user]...
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]...
+
Specifies to whom the permissions are delegated. Multiple entities can be specified as a comma-separated list. If neither of the -gu options are specified, then the argument is interpreted preferentially as the keyword everyone, then as a user name, and lastly as a group name. To specify a user or group named "everyone", use the -g or -u options. To specify a group with the same name as a user, use the -g option.
+
perm|@setname[,perm|@setname]...
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+
+
NAME             TYPE           NOTES
+allow            subcommand     Must also have the permission that is
+                                being allowed
+clone            subcommand     Must also have the 'create' ability and
+                                'mount' ability in the origin file system
+create           subcommand     Must also have the 'mount' ability
+destroy          subcommand     Must also have the 'mount' ability
+diff             subcommand     Allows lookup of paths within a dataset
+                                given an object number, and the ability
+                                to create snapshots necessary to
+                                'zfs diff'.
+mount            subcommand     Allows mount/umount of ZFS datasets
+promote          subcommand     Must also have the 'mount' and 'promote'
+                                ability in the origin file system
+receive          subcommand     Must also have the 'mount' and 'create'
+                                ability
+rename           subcommand     Must also have the 'mount' and 'create'
+                                ability in the new parent
+rollback         subcommand     Must also have the 'mount' ability
+send             subcommand
+share            subcommand     Allows sharing file systems over NFS
+                                or SMB protocols
+snapshot         subcommand     Must also have the 'mount' ability
+
+groupquota       other          Allows accessing any groupquota@...
+                                property
+groupused        other          Allows reading any groupused@... property
+userprop         other          Allows changing any user property
+userquota        other          Allows accessing any userquota@...
+                                property
+userused         other          Allows reading any userused@... property
+
+aclinherit       property
+acltype          property
+atime            property
+canmount         property
+casesensitivity  property
+checksum         property
+compression      property
+copies           property
+devices          property
+exec             property
+filesystem_limit property
+mountpoint       property
+nbmand           property
+normalization    property
+primarycache     property
+quota            property
+readonly         property
+recordsize       property
+refquota         property
+refreservation   property
+reservation      property
+secondarycache   property
+setuid           property
+sharenfs         property
+sharesmb         property
+snapdir          property
+snapshot_limit   property
+utf8only         property
+version          property
+volblocksize     property
+volsize          property
+vscan            property
+xattr            property
+zoned            property
+
+
+
zfs allow + -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
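A brief sketch (the set name, permissions, and dataset are hypothetical): define a permission set, grant it to a group, and also grant it at create time so that the creator of any newly created descendant file system receives it automatically:
# zfs allow -s @basic mount,snapshot,send tank/projects
# zfs allow -g staff @basic tank/projects
# zfs allow -c @basic tank/projects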
zfs unallow + [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume +
+ zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume +
+ zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect (for example, if the permission is granted by an ancestor). If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
zfs hold + [-r] tag + snapshot...
+
Adds a single reference, named with the tag + argument, to the specified snapshot or snapshots. Each snapshot has its + own tag namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-r] snapshot...
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
+
zfs release + [-r] tag + snapshot...
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return + EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
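As a sketch with a hypothetical snapshot and tag, a recursive hold protects a snapshot tree from zfs destroy until the matching release is issued:
# zfs hold -r keep tank/data@migration
# zfs holds -r tank/data@migration
# zfs release -r keep tank/data@migration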
+
zfs diff + [-FHt] snapshot + snapshot|filesystem
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are: +
+
-       The path has been removed
++       The path has been created
+M       The path has been modified
+R       The path has been renamed
+
+
+
+
Display an indication of the type of file, in a manner similar to the -F option of ls(1).
+
B       Block device
+C       Character device
+/       Directory
+>       Door
+|       Named pipe
+@       Symbolic link
+P       Event port
+=       Socket
+F       Regular file
+
+
+
+
Give more parsable tab-separated output, without header lines and + without arrows.
+
+
Display the path's inode change time as the first column of + output.
+
+
+
+
+
+

+

The zfs utility exits 0 on success, 1 if + an error occurs, and 2 if invalid command line options were specified.

+
+
+

+
+
Creating a ZFS File System Hierarchy
+
The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, + and is automatically inherited by the child file system. +
+
# zfs create pool/home
+# zfs set mountpoint=/export/home pool/home
+# zfs create pool/home/bob
+
+
+
Creating a ZFS Snapshot
+
The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system. +
+
# zfs snapshot pool/home/bob@yesterday
+
+
+
Creating and Destroying Multiple + Snapshots
+
The following command creates snapshots named yesterday + of pool/home and all of its descendent file systems. + Each snapshot is mounted on demand in the + .zfs/snapshot directory at the root of its file + system. The second command destroys the newly created snapshots. +
+
# zfs snapshot -r pool/home@yesterday
+# zfs destroy -r pool/home@yesterday
+
+
+
Disabling and Enabling File System + Compression
+
The following command disables the compression property + for all file systems under pool/home. The next command + explicitly enables compression for + pool/home/anne. +
+
# zfs set compression=off pool/home
+# zfs set compression=on pool/home/anne
+
+
+
Listing ZFS Datasets
+
The following command lists all active file systems and volumes in the + system. Snapshots are displayed if the listsnaps + property is on. The default is off. + See zpool(8) for more information on pool properties. +
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
Setting a Quota on a ZFS File System
+
The following command sets a quota of 50 Gbytes for + pool/home/bob. +
+
# zfs set quota=50G pool/home/bob
+
+
+
Listing ZFS Properties
+
The following command lists all properties for + pool/home/bob. +
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value.

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+ The following command lists all properties with local settings for + pool/home/bob. +
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
Rolling Back a ZFS File System
+
The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots. +
+
# zfs rollback -r pool/home/anne@yesterday
+
+
+
Creating a ZFS Clone
+
The following command creates a writable file system whose initial contents are the same as pool/home/bob@yesterday.
+
# zfs clone pool/home/bob@yesterday pool/clone
+
+
+
Promoting a ZFS Clone
+
The following commands illustrate how to test out changes to a file + system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming: +
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
Inheriting ZFS Properties
+
The following command causes pool/home/bob and + pool/home/anne to inherit the checksum + property from their parent. +
+
# zfs inherit checksum pool/home/bob pool/home/anne
+
+
+
Remotely Replicating ZFS Data
+
The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.
+
# zfs send pool/fs@a | \
+  ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b | \
+  ssh host zfs receive poolB/received/fs
+
+
+
Using the zfs receive -d Option
+
The following command sends a full stream of poolA/fsA/fsB@snap to a remote machine, receiving it into poolB/received/fsA/fsB@snap. The fsA/fsB@snap portion of the received snapshot's name is determined from the name of the sent snapshot. poolB must contain the file system poolB/received. If poolB/received/fsA does not exist, it is created as an empty file system.
+
# zfs send poolA/fsA/fsB@snap | \
+  ssh host zfs receive -d poolB/received
+
+
+
Setting User Properties
+
The following example sets the user-defined com.example:department property for a dataset.
+
# zfs set com.example:department=12345 tank/accounting
+
+
+
Performing a Rolling Snapshot
+
The following example shows how to maintain a history of snapshots with a + consistent naming scheme. To keep a week's worth of snapshots, the user + destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows: +
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
Setting sharenfs Property Options on a ZFS File + System
+
The following commands show how to set sharenfs property options to enable rw access for a set of IP addresses and to enable root access for system "neo" on the tank/home file system.
+
# zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
+
+

If you are using DNS for host name + resolution, specify the fully qualified hostname.

+
+
Delegating ZFS Administration Permissions on a + ZFS Dataset
+
The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots on + tank/cindys. The permissions on + tank/cindys are also displayed. +
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point + access:

+
+
# chmod A+user:cindys:add_subdirectory:allow /tank/cindys
+
+
+
Delegating Create Time Permissions on a ZFS + Dataset
+
The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
Defining and Granting a Permission Set on a ZFS + Dataset
+
The following example shows how to define and grant a permission set on + the tank/users file system. The permissions on + tank/users are also displayed. +
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Delegating Property Permissions on a ZFS + Dataset
+
The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
Removing ZFS Delegated Permissions on a ZFS + Dataset
+
The following example shows how to remove the snapshot permission from the + staff group on the tank/users file + system. The permissions on tank/users are also + displayed. +
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Showing the differences between a snapshot and a + ZFS Dataset
+
The following example shows how to see what has changed between a prior + snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected. +
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
Creating a bookmark
+
The following example creates a bookmark of a snapshot. This bookmark can then be used instead of a snapshot as the incremental source for zfs send streams.
+
# zfs bookmark rpool@snapshot rpool#bookmark
+
+
+
Setting sharesmb Property Options on a ZFS File + System
+
The following example shows how to share an SMB filesystem through ZFS. Note that a user and their password must be provided.
+
# smbmount //127.0.0.1/share_tmp /mnt/tmp \
+  -o user=workgroup/turbo,password=obrut,uid=1000
+
+

Minimal /etc/samba/smb.conf configuration required:

+

Samba will need to listen to 'localhost' (127.0.0.1) for the + ZFS utilities to communicate with Samba. This is the default behavior + for most Linux distributions.

+

Samba must be able to authenticate a user. This can be done in + a number of ways, depending on if using the system password file, LDAP + or the Samba specific smbpasswd file. How to do this is outside the + scope of this manual. Please refer to the smb.conf(5) + man page for more information.

+

See the USERSHARE section of the smb.conf(5) man page for all configuration options, in case you need to modify any options of the share afterwards. Do note that any changes made with the net(8) command will be undone if the share is ever unshared (for example at reboot).

+
+
+
+
+

+


+
+
+

+

gzip(1), ssh(1), + zpool(8), + selinux(8), chmod(2), + stat(2), write(2), + fsync(2), attr(1), + acl(5), exports(5), + exportfs(8), net(8), + attributes(5)

+
+
+ + + + + +
January 5, 2019Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zgenhostid.8.html b/man/v0.7/8/zgenhostid.8.html new file mode 100644 index 000000000..f0df09fcf --- /dev/null +++ b/man/v0.7/8/zgenhostid.8.html @@ -0,0 +1,228 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)System Manager's Manual (smm)ZGENHOSTID(8)
+
+
+

+

zgenhostid — + generate and store a hostid in + /etc/hostid

+
+
+

+ + + + + +
zgenhostid[hostid]
+
+
+

+

If /etc/hostid does not exist, create it and + store a hostid in it. If the user provides [hostid] on + the command line, store that value. Otherwise, randomly generate a value to + store.

+

This emulates the genhostid(1) utility and is + provided for use on systems which do not include the utility.

+
+
+

+

[hostid] Specifies the value to be placed in /etc/hostid. It must be a number with a value between 1 and 2^32-1. This value should be unique among your systems. It must be expressed in hexadecimal and be exactly 8 digits long.

+
+
+

+
+
Generate a random hostid and store it
+
+
+
# zgenhostid
+
+
+
Record the libc-generated hostid in /etc/hostid
+
+
+
# zgenhostid $(hostid)
+
+
+
Record a custom hostid (0xdeadbeef) in /etc/hostid
+
+
+
# zgenhostid deadbeef
+
+
+
+
+
+

+

spl-module-parameters(5), + genhostid(1), hostid(1)

+
+
+ + + + + +
July 24, 2017Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zinject.8.html b/man/v0.7/8/zinject.8.html new file mode 100644 index 000000000..6240ee76e --- /dev/null +++ b/man/v0.7/8/zinject.8.html @@ -0,0 +1,320 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
zinject(8)System Administration Commandszinject(8)
+
+

+
+

+

zinject - ZFS Fault Injector

+
+
+

+

zinject creates artificial problems in a ZFS pool by + simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+
List injection records.
+
zinject -b objset:object:level:blkid [-f frequency] [-amu] pool
+
Force an error into the pool at a bookmark.
+
zinject -c <id | all>
+
Cancel injection records.
+
zinject -d vdev -A <degrade|fault> + pool
+
Force a vdev into the DEGRADED or FAULTED state.
+
zinject -d vdev -D latency:lanes + pool
+
+

Add an artificial delay to IO requests on a particular device, + such that the requests take a minimum of 'latency' milliseconds to + complete. Each delay has an associated number of 'lanes' which defines + the number of concurrent IO requests that can be processed.

+

For example, with a single lane delay of 10 ms (-D 10:1), the + device will only be able to service a single IO request at a time with + each request taking 10 ms to complete. So, if only a single request is + submitted every 10 ms, the average latency will be 10 ms; but if more + than one request is submitted every 10 ms, the average latency will be + more than 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D 10:2), then the device will be able to service two requests at a + time, each with a minimum latency of 10 ms. So, if two requests are + submitted every 10 ms, then the average latency will be 10 ms; but if + more than two requests are submitted every 10 ms, the average latency + will be more than 10 ms.

+

Also note, these delays are additive. So two invocations of + '-D 10:1', is roughly equivalent to a single invocation of '-D 10:2'. + This also means, one can specify multiple lanes with differing target + latencies. For example, an invocation of '-D 10:1' followed by '-D 25:2' + will create 3 lanes on the device; one lane with a latency of 10 ms and + two lanes with a 25 ms latency.

+

+
+
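For example (vdev and pool names hypothetical), the following adds a two-lane 25 ms delay to a device, lists the active injection handlers, and then cancels them all:
# zinject -d sdb -D 25:2 tank
# zinject
# zinject -c all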
zinject -d vdev [-e device_error] [-L + label_error] [-T failure] [-f + frequency] [-F] pool
+
Force a vdev error.
+
zinject -I [-s seconds | -g txgs] + pool
+
Simulate a hardware failure that fails to honor a cache flush.
+
zinject -p function pool
+
Panic inside the specified function.
+
zinject -t data [-e device_error] [-f + frequency] [-l level] [-r range] + [-amq] path
+
Force an error into the contents of a file.
+
zinject -t dnode [-e device_error] [-f + frequency] [-l level] [-amq] + path
+
Force an error into the metadnode for a file or directory.
+
zinject -t mos_type [-e device_error] [-f + frequency] [-l level] [-r range] + [-amqu] pool
+
Force an error into the MOS of a pool.
+
+
+
+

+
+
+
Flush the ARC before injection.
+
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+
A vdev specified by path or GUID.
+
+
Specify checksum for an ECKSUM error, dtl for an ECHILD + error, io for an EIO error where reopening the device will succeed, + or nxio for an ENXIO error where reopening the device will fail. + For EIO and ENXIO, the "failed" reads or writes still occur. The + probe simply sets the error value reported by the I/O pipeline so it + appears the read or write failed.
+
+
Only inject errors a fraction of the time. Expressed as a real number + percentage between 0.0001 and 100.
+
+
Fail faster. Do fewer checks.
+
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+
Inject an error at a particular block level. The default is 0.
+
+
Set the label error region to one of nvlist, pad1, + pad2, or uber.
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+
Run for this many seconds before reporting failure.
+
+
Set the failure type to one of all, claim, free, + read, or write.
+
+
Set this to mos for any data in the MOS, mosdir for an + object directory, config for the pool configuration, bpobj + for the block pointer list, spacemap for the space map, + metaslab for the metaslab, or errlog for the persistent + error log.
+
+
Unload the pool after injection. +

+
+
+
+
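A cautious sketch on a throwaway test pool (path and pool name hypothetical): inject checksum errors into a file's data a small fraction of the time, scrub so ZFS detects and repairs them, then cancel all handlers:
# zinject -t data -e checksum -f 5 /tank/testfile
# zpool scrub tank
# zinject -c all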
+

+
+
+
Run zinject in debug mode. +

+
+
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com> excerpting the zinject usage message and + source code.

+

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zpool.8.html b/man/v0.7/8/zpool.8.html new file mode 100644 index 000000000..76e880c3c --- /dev/null +++ b/man/v0.7/8/zpool.8.html @@ -0,0 +1,2223 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)System Manager's Manual (smm)ZPOOL(8)
+
+
+

+

zpool - configure ZFS storage pools

+
+
+

+ + + + + +
zpool-?
+
+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev...
+
+ + + + + +
zpoolattach [-f] + [-o + property=value] + pool device new_device
+
+ + + + + +
zpoolclear pool + [device]
+
+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]... + [-o + feature@feature=value] + [-O + file-system-property=value]... + [-R root] + pool vdev...
+
+ + + + + +
zpooldestroy [-f] + pool
+
+ + + + + +
zpooldetach pool device
+
+ + + + + +
zpoolevents [-vHfc] + [pool]
+
+ + + + + +
zpoolexport [-a] + [-f] pool...
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]...] + all|property[,property]... + pool...
+
+ + + + + +
zpoolhistory [-il] + [pool]...
+
+ + + + + +
zpoolimport [-D] + [-c + cachefile|-d + dir]
+
+ + + + + +
zpoolimport -a + [-DfmN] [-F + [-n] [-T] + [-X]] [-c + cachefile|-d + dir] [-o + mntopts] [-o + property=value]... + [-R root]
+
+ + + + + +
zpoolimport [-Dfm] + [-F [-n] + [-T] [-X]] + [-c + cachefile|-d + dir] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool [-t]]
+
+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
+ + + + + +
zpoollabelclear [-f] + device
+
+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
+ + + + + +
zpooloffline [-f] + [-t] pool + device...
+
+ + + + + +
zpoolonline [-e] + pool device...
+
+ + + + + +
zpoolreguid pool
+
+ + + + + +
zpoolreopen pool
+
+ + + + + +
zpoolremove pool + device...
+
+ + + + + +
zpoolreplace [-f] + [-o + property=value] + pool device + [new_device]
+
+ + + + + +
zpoolscrub [-s | + -p] pool...
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolsplit [-gLnP] + [-o + property=value]... + [-R root] + pool newpool [device]...
+
+ + + + + +
zpoolstatus [-c + SCRIPT] [-gLPvxD] + [-T u|d] + [pool]... [interval + [count]]
+
+ + + + + +
zpoolsync [pool]...
+
+ + + + + +
zpoolupgrade
+
+ + + + + +
zpoolupgrade -v
+
+ + + + + +
zpoolupgrade [-V + version] + -a|pool...
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+
+

+

A "virtual device" describes a single device or a + collection of devices organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system of which it + is a part. A file must be specified by a full path.
+
+
A mirror of two or more devices. Data is replicated in an identical + fashion across all components of a mirror. A mirror with N disks of size X + can hold X bytes and can withstand (N-1) devices failing before data + integrity is compromised.
+
raidz, raidz1, raidz2, raidz3
+
A variation on RAID-5 that allows for better distribution of parity and + eliminates the RAID-5 "write hole" (in which data and parity + become inconsistent after a power loss). Data and parity is striped across + all disks within a raidz group. +

A raidz group can have single-, double-, or triple-parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N disks of size X with P parity disks can + hold approximately (N-P)*X bytes and can withstand P device(s) failing + before data integrity is compromised. The minimum number of devices in a + raidz group is one more than the number of parity disks. The recommended + number is between 3 and 9 to help increase performance.

+
+
+
A special pseudo-vdev which keeps track of available hot spares for a + pool. For more information, see the Hot + Spares section.
+
+
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested, so a mirror or raidz virtual + device can only contain files or disks. Mirrors of mirrors (or other + combinations) are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. The keywords mirror and + raidz are used to distinguish where a group ends and + another begins. For example, the following creates two root vdevs, each a + mirror of two disks:

+
+
# zpool create mypool mirror sda sdb mirror sdc sdd
+
+
+
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three states: + online, degraded, or faulted. An online pool has all devices operating + normally. A degraded pool is one in which one or more devices have failed, + but the data is still available due to a redundant configuration. A faulted + pool has corrupted metadata, or one or more faulted devices, and + insufficient replicas to continue functioning.

+

The health of the top-level vdev, such as mirror or raidz device, + is potentially impacted by the state of its associated vdevs, or component + devices. A top-level vdev or component device is in one of the following + states:

+
+
+
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors exceeds acceptable levels and the device + is degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
  • +
  • The number of I/O errors exceeds acceptable levels. The device could + not be marked as faulted because there are insufficient replicas to + continue functioning.
  • +
+
+
+
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected + values.
  • +
  • The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
  • +
+
+
+
The device was explicitly taken offline by the + zpool offline + command.
+
+
The device is online and functioning.
+
+
The device was physically removed while the system was running. Device + removal detection is hardware-dependent and may not be supported on all + platforms.
+
+
The device could not be opened. If a pool is imported when a device was + unavailable, then the device will be identified by a unique identifier + instead of its path since the path was never correct in the first + place.
+
+

If a device is removed and later re-attached to the system, ZFS + attempts to put the device online automatically. Device attach detection is + hardware-dependent and might not be supported on all platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool, but when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a spare vdev with any + number of devices. For example,

+
+
# zpool create pool mirror sda sdb spare sdc sdd
+
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again if another device + fails.

+

If a pool has a shared spare that is currently being used, the + pool can not be exported since other pools may use this shared spare, which + may lead to potential data corruption.

+

An in-progress spare replacement can be canceled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
+
# zpool create pool sda sdb log sdc
+
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.

+

Log devices can be added, replaced, attached, detached, and + imported and exported as part of the larger pool. Mirrored log devices can + be removed by specifying the top-level mirror for the log.

+
+
+

+

Devices can be added to a storage pool as "cache devices". These devices provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of this working set to be served from low-latency media. Using cache devices provides the greatest performance improvement for random read workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
+
# zpool create pool sda sdb cache sdc sdd
+
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.

+

The content of the cache devices is considered volatile, as is the + case with other system caches.

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

The following are read-only properties:

+
+
+
Amount of storage available within the pool. This property can also be + referred to by its shortened column name, + .
+
+
Percentage of pool space used. This property can also be referred to by + its shortened column name, + .
+
+
Amount of uninitialized space within the pool or device that can be used to increase the total capacity of the pool. Uninitialized space consists of any space on an EFI-labeled vdev which has not been brought online (e.g., using zpool online -e). This space occurs when a LUN is dynamically expanded.
+
+
The amount of fragmentation in the pool.
+
+
The amount of free space available in the pool.
+
+
After a file system or snapshot is destroyed, the space it was using is + returned to the pool asynchronously. freeing is the + amount of space remaining to be reclaimed. Over time + freeing will decrease while free + increases.
+
+
The current health of the pool. Health can be one of ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, or UNAVAIL.
+
+
A unique identifier for the pool.
+
+
Total size of the storage pool.
+
+
Information about unsupported features that are enabled on the pool. See + zpool-features(5) for details.
+
+
Amount of storage space used within the pool.
+
+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of the + data being written. In addition, ZFS reserves some space for internal + accounting that the zfs(8) command takes into account, but + the zpool command does not. For non-full pools of a + reasonable size, these effects should be invisible. For small pools, or + pools that are close to being completely full, these discrepancies may + become more noticeable.

+

The following property can be set at creation time and import + time:

+
+
+
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+

The following property can be set only at import time:

+
+
=on|off
+
If set to on, the pool will be imported in read-only + mode. This property can also be referred to by its shortened column name, + .
+
+

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
+
=ashift
+
Pool sector size exponent, to the power of 2 (internally referred to as ashift). Values from 9 to 16, inclusive, are valid; also, the special value 0 (the default) means to auto-detect using the kernel's block layer and a ZFS internal exception list. I/O operations will be aligned to the specified size boundaries. Additionally, the minimum (disk) write size will be set to the specified size, so this represents a space vs. performance trade-off. For optimal performance, the pool sector size should be greater than or equal to the sector size of the underlying disks. The typical case for setting this property is when performance is important and the underlying disks use 4KiB sectors but report 512B sectors to the OS (for compatibility reasons); in that case, set ashift=12 (which is 1<<12 = 4096). When set, this property is used as the default hint value in subsequent vdev operations (add, attach and replace). Changing this value will not modify any existing vdev, not even on disk replacement; however it can be used, for instance, to replace a dying 512B-sector disk with a newer 4KiB-sector device: this will probably result in bad performance but at the same time could prevent loss of data.
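For example (pool and device names are illustrative), a pool backed by 4KiB-sector disks might be created with:
# zpool create -o ashift=12 tank mirror sda sdb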
+
=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set + to on, the pool will be resized according to the size of + the expanded device. If the device is part of a mirror or raidz then all + devices within that mirror/raidz group must be expanded before the new + space is made available to the pool. The default behavior is + off. This property can also be referred to by its + shortened column name, + .
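For example (names are illustrative), automatic expansion could be enabled on an existing pool with:
# zpool set autoexpand=on tank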
+
=on|off
+
Controls automatic device replacement. If set to off, + device replacement must be initiated by the administrator by using the + zpool replace command. If + set to on, any new device, found in the same physical + location as a device that previously belonged to the pool, is + automatically formatted and replaced. The default behavior is + off. This property can also be referred to by its + shortened column name, + . + Autoreplace can also be used with virtual disks (like device mapper) + provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf. + See the vdev_id(8) man page for more details. + Autoreplace and autoonline require the ZFS Event Daemon be configured and + running. See the zed(8) man page for more details.
+
=|pool/dataset
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the special value + none creates a temporary pool that is never cached, and + the special value "" (empty string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.

+
+
=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
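For example (pool name and comment text are illustrative):
# zpool set comment="rack 12, shelf 3" tank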
+
=number
+
Threshold for the number of block ditto copies. If the reference count for + a deduplicated block increases above this number, a new ditto copy of this + block is automatically stored. The default setting is 0 + which causes no ditto copies to be created for deduplicated blocks. The + minimum legal nonzero setting is + .
+
=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
+
+
Blocks all I/O access until the device connectivity is recovered and + the errors are cleared. This is the default behavior.
+
+
Returns EIO to any new write I/O requests but + allows reads to any of the remaining healthy devices. Any write + requests that have yet to be committed to disk would be blocked.
+
+
Prints out a message to the console and generates a system crash + dump.
+
+
+
feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(5) for details on feature states.
+
=on|off
+
Controls whether information about snapshots associated with this pool is + output when zfs list is + run without the -t option. The default value is + off. This property can also be referred to by its + shortened name, + .
+
=on|off
+
Controls whether a pool activity check should be performed during zpool import. When a pool is determined to be active it cannot be imported, even with the -f option. This property is intended to be used in failover configurations where multiple hosts have access to a pool on shared storage. When this property is on, periodic writes to storage occur to show the pool is in use. See the zfs-module-parameters(5) man page. In order to enable this property each host must set a unique hostid. See zgenhostid(8) and spl-module-parameters(5) for additional details. The default value is off.
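As a sketch (assuming /etc/hostid has not yet been populated; the pool name is illustrative), multihost protection might be enabled with:
# zgenhostid
# zpool set multihost=on tank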
+
=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool add + [-fgLnP] [-o + property=value] + pool vdev...
+
Adds the specified virtual devices to the given pool. The + vdev specification is described in the + Virtual Devices section. The + behavior of the -f option, and the device checks + performed are described in the zpool + create subcommand. +
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all + symbolic links. This can be used to look up the current block device + name regardless of the /dev/disk/ path used to open it.
+
+
Displays the configuration that would be used without actually adding + the vdevs. The actual pool creation can still + fail due to insufficient privileges or device sharing.
+
+
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
+
zpool attach + [-f] [-o + property=value] + pool device new_device
+
Attaches new_device to the existing + device. The existing device cannot be part of a + raidz configuration. If device is not currently part + of a mirrored configuration, device automatically + transforms into a two-way mirror of device and + new_device. If device is part + of a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately. +
+
+
Forces use of new_device, even if it appears to be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
+
zpool clear + pool [device]
+
Clears device errors in a pool. If no arguments are specified, all device + errors within the pool are cleared. If one or more devices is specified, + only those errors associated with the specified device or devices are + cleared.
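For example (names are illustrative), errors could be cleared for a whole pool or for a single device with:
# zpool clear tank
# zpool clear tank sda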
+
zpool create + [-dfn] [-m + mountpoint] [-o + property=value]... + [-o + feature@feature=value]... + [-O + file-system-property=value]... + [-R root] + [-t tname] + pool vdev...
+
Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as underscore ("_"), dash ("-"), colon (":"), space (" "), and period ("."). The pool names mirror, raidz, spare and log are reserved, as are names beginning with the pattern c[0-9]. The vdev specification is described in the Virtual Devices section.

The command verifies that each device specified is accessible and not currently in use by another subsystem. There are some uses, such as being currently mounted, or specified as the dedicated dump device, that prevent a device from ever being used by ZFS. Other uses, such as having a preexisting UFS file system, can be overridden with the -f option.

+

The command also checks that the replication strategy for the + pool is consistent. An attempt to combine redundant and non-redundant + storage in a single pool, or to mix disks and files, results in an error + unless -f is specified. The use of differently + sized devices within a single raidz or mirror group is also flagged as + an error unless -f is specified.

+

Unless the -R option is specified, the + default mount point is + /pool. The mount point + must not exist or must be empty, or else the root dataset cannot be + mounted. This can be overridden with the -m + option.

+

By default all supported features are enabled on the new pool + unless the -d option is specified.

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled with the -o option. + See zpool-features(5) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is /pool or altroot/pool if altroot is specified. The mount point must be an absolute path, legacy, or none. For more information on dataset mount points, see zfs(8).
+
+
Displays the configuration that would be used without actually + creating the pool. The actual pool creation can still fail due to + insufficient privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set.
+
+ feature@feature=value
+
Sets the given pool feature. See the + zpool-features(5) section for a list of valid + features that can be set. Value can be either disabled or + enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the + pool. See the Properties section + of zfs(8) for a list of valid properties that can be + set.
+
+ root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ tname
+
Sets the in-core pool name to tname while the on-disk name will be the name specified as the pool name. This will set the default cachefile property to none. This is intended to handle name space collisions when creating pools for other systems, such as virtual machines or physical machines whose pools live on network block devices.
+
+
+
zpool destroy + [-f] pool
+
Destroys the given pool, freeing up any devices for other use. This + command tries to unmount any active datasets before destroying the pool. +
+
+
Forces any active datasets contained within the pool to be + unmounted.
+
+
+
zpool detach + pool device
+
Detaches device from a mirror. The operation is + refused if there are no other valid replicas of the data. If device may be + re-added to the pool later on then consider the zpool + offline command instead.
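For example (names are illustrative):
# zpool detach tank sdb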
+
zpool events + [-cfHv] [pool...]
+
Lists all recent events generated by the ZFS kernel modules. These events are consumed by zed(8) and used to automate administrative tasks such as replacing a failed device with a hot spare. For more information about the subclasses and event payloads that can be generated see the zfs-events(5) man page.
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Print the entire payload for each event.
+
+
+
zpool export + [-a] [-f] + pool...
+
Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present. +

Before exporting the pool, all datasets within the pool are + unmounted. A pool can not be exported if it has a shared spare that is + currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, + so that ZFS can label the disks with portable EFI labels. Otherwise, + disk drivers on platforms of different endianness will not recognize the + disks.

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, using the + unmount -f command. +

This command will forcefully export the pool even if it + has a shared spare that is currently being used. This may lead to + potential data corruption.

+
+
+
+
zpool get + [-Hp] [-o + field[,field]...] + all|property[,property]... + pool...
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
        name          Name of storage pool
+        property      Property name
+        value         Property value
+        source        Property source, either 'default' or 'local'.
+
+

See the Properties + section for more information on the available pool properties.

+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display. name,property,value,source is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
+
zpool history + [-il] [pool]...
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified. +
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which, in addition to the standard format, includes the user name, the hostname, and the zone in which the operation was performed.
+
+
+
zpool import + [-D] [-c + cachefile|-d + dir]
+
Lists pools available to import. If the -d option + is not specified, this command searches for devices in + /dev. The -d option can be + specified multiple times, and all directories are searched. If the device + appears to be part of an exported pool, this command displays a summary of + the pool with the name of the pool, a numeric identifier, as well as the + vdev layout and current health of the device for each device or file. + Destroyed pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified. +

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir
+
Searches for devices or files in dir. The + -d option can be specified multiple + times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DfmN] + [-F [-n] + [-T] [-X]] + [-c + cachefile|-d + dir] [-o + mntopts] [-o + property=value]... + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir
+
Searches for devices or files in dir. The + -d option can be specified multiple times. + This option is incompatible with the -c + option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dfm] [-F + [-n] [-t] + [-T] [-X]] + [-c + cachefile|-d + dir] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir
+
Searches for devices or files in dir. The + -d option can be specified multiple times. + This option is incompatible with the -c + option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set -o + cachefile=none when not explicitly specified.
+
+
+
zpool iostat + [[[-c SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
Displays I/O statistics for the given pools/vdevs. You can pass in a list of pools, a pool and list of vdevs in that pool, or a list of any vdevs from any pool. If no items are specified, statistics for every pool in the system are shown. When given an interval, the statistics are printed every interval seconds until ^C is pressed. If count is specified, the command exits after count reports are printed. The first report printed is always the statistics since boot regardless of whether interval and count are passed. However, this behavior can be suppressed with the -y flag. Also note that the size units printed in the report are in base 1024. To get the raw values, use the -p flag.
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + iostat output. Users can run any script found + in their ~/.zpool.d directory or from the + system /etc/zfs/zpool.d directory. Script + names containing the slash (/) character are not allowed. The default + search path can be overridden by setting the ZPOOL_SCRIPTS_PATH + environment variable. A privileged user can run + -c if they have the ZPOOL_SCRIPTS_AS_ROOT + environment variable set. If a script requires the use of a privileged + command, like smartctl(8), then it's recommended you + allow the user access to it in /etc/sudoers or + add the user to the /etc/sudoers.d/zfs file. +

If -c is passed without a script + name, it prints a list of all scripts. -c + also sets verbose mode + (-v).

+

Script output should be in the form of + "name=value". The column name is set to "name" + and the value is set to "value". Multiple lines can be + used to output multiple columns. The first line of output not in the + "name=value" format is displayed without a column title, + and no more output after that is displayed. This can be useful for + printing error messages. Blank or NULL values are printed as a '-' + to make output awk-able.
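As a minimal sketch (the script name and column value are hypothetical), a zpool.d script only needs to emit name=value pairs on standard output:

#!/bin/sh
# Hypothetical ~/.zpool.d/hello script: adds a "hello" column to the output.
echo "hello=world"

It could then be selected with zpool iostat -c hello.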

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
+
+
Underlying path to the vdev (/dev/sd*). For use with device + mapper, multipath, or partitioned vdevs.
+
+
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+
Print request size histograms for the leaf ZIOs. This includes histograms of individual ZIOs (ind) and aggregate ZIOs (agg). These stats can be useful for seeing how well the ZFS IO aggregator is working. Do not confuse these request size stats with the block layer requests; it's possible ZIOs can be broken up before being sent to the block device.
+
+
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
+
+
Omit statistics since boot. Normally the first line of output reports + the statistics since boot. This option suppresses that first line of + output.
+
+
Display latency histograms: +

total_wait: Total IO time (queuing + + disk IO time). disk_wait: Disk IO time (time + reading/writing the disk). syncq_wait: Amount + of time IO spent in synchronous priority queues. Does not include + disk time. asyncq_wait: Amount of time IO + spent in asynchronous priority queues. Does not include disk time. + scrub: Amount of time IO spent in scrub queue. + Does not include disk time.

+
+
+
Include average latency statistics: +

total_wait: Average total IO time + (queuing + disk IO time). disk_wait: Average + disk IO time (time reading/writing the disk). + syncq_wait: Average amount of time IO spent in + synchronous priority queues. Does not include disk time. + asyncq_wait: Average amount of time IO spent + in asynchronous priority queues. Does not include disk time. + scrub: Average queuing time in scrub queue. + Does not include disk time.

+
+
+
Include active queue statistics. Each priority queue has both pending + ( pend) and active ( + activ) IOs. Pending IOs are waiting to be issued + to the disk, and active IOs have been issued to disk and are waiting + for completion. These stats are broken out by priority queue: +

syncq_read/write: Current number of + entries in synchronous priority queues. + asyncq_read/write: Current number of entries + in asynchronous priority queues. scrubq_read: + Current number of entries in scrub queue.

+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.

+
+
+
+
zpool labelclear + [-f] device
+
Removes ZFS label information from the specified + device. The device must not be + part of an active pool configuration. +
+
+
Treat exported or foreign devices as inactive.
+
+
+
zpool list + [-HgLpPv] [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
Lists the given pools along with a health status and space usage. If no + pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until ^C is pressed. + If count is specified, the command exits after + count reports are printed. +
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ property
+
Comma-separated list of properties to display. See the + Properties section for a list of + valid properties. The default list is + + .
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
+
+ u|d
+
Display a time stamp. Specify -u for a printed + representation of the internal representation of time. See + time(2). Specify -d for + standard date format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
+
+
+
zpool offline + [-f] [-t] + pool device...
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device...
+
Brings the specified physical device online. This command is not + applicable to spares or cache devices. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
zpool reguid + pool
+
Generates a new unique identifier for the pool. You must ensure that all + devices in this pool are online and healthy before performing this + action.
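For example (pool name is illustrative):
# zpool reguid tank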
+
zpool reopen + pool
+
Reopen all the vdevs associated with the pool.
+
zpool remove + pool device...
+
Removes the specified device from the pool. This command currently only + supports removing hot spares, cache, and log devices. A mirrored log + device can be removed by specifying the top-level mirror for the log. + Non-log devices that are part of a mirrored configuration can be removed + using the zpool detach + command. Non-redundant and raidz devices cannot be removed from a + pool.
+
zpool replace + [-f] [-o + property=value] + pool device + [new_device]
+
Replaces old_device with + new_device. This is equivalent to attaching + new_device, waiting for it to resilver, and then + detaching old_device. +

The size of new_device must be greater + than or equal to the minimum size of all the devices in a mirror or + raidz configuration.

+

new_device is required if the pool is + not redundant. If new_device is not specified, it + defaults to old_device. This form of replacement + is useful after an existing disk has failed and has been physically + replaced. In this case, the new disk may have the same + /dev path as the old device, even though it is + actually a different disk. ZFS recognizes this.

+
+
+
Forces use of new_device, even if it appears to be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
+
zpool scrub + [-s | -p] + pool...
+
Begins a scrub or resumes a paused scrub. The scrub examines all data in + the specified pools to verify that it checksums correctly. For replicated + (mirror or raidz) devices, ZFS automatically repairs any damage discovered + during the scrub. The zpool + status command reports the progress of the scrub + and summarizes the results of the scrub upon completion. +

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be + out of date (for example, when attaching a new device to a mirror or + replacing an existing device), whereas scrubbing examines all data to + discover silent errors due to hardware faults or disk failure.

+

Because scrubbing and resilvering are I/O-intensive + operations, ZFS only allows one at a time. If a scrub is paused, the + zpool scrub resumes it. + If a resilver is in progress, ZFS does not allow a scrub to be started + until the resilver completes.

+
+
+
Stop scrubbing.
+
+
+
+
Pause scrubbing. Scrub progress is periodically synced to disk so if + the system is restarted or pool is exported during a paused scrub, the + scrub will resume from the place where it was last checkpointed to + disk. To resume a paused scrub issue zpool + scrub again.
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + Properties section for more + information on what properties can be set and acceptable values.
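For example (pool name is illustrative), the failmode property might be changed with:
# zpool set failmode=continue tank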
+
zpool split + [-gLnP] [-o + property=value]... + [-R root] pool + newpool [device ...]
+
Splits devices off pool creating + newpool. All vdevs in pool + must be mirrors and the pool must not be in the process of resilvering. At + the time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool. +

The optional device specification causes the specified + device(s) to be included in the new pool and, + should any devices remain unspecified, the last device in each mirror is + used as would be by default.
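As a sketch (pool names are illustrative), a pool of two-way mirrors could be split into a second, identically laid out pool with:
# zpool split tank tank2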

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Do a dry run; do not actually perform the split. Print out the expected configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
+
+ property=value
+
Sets the specified property for newpool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Set altroot for newpool to + root and automatically import it.
+
+
+
zpool status + [-c + [SCRIPT1[,SCRIPT2]...]] + [-gLPvxD] [-T + u|d] [pool]... + [interval [count]]
+
Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in + the system is displayed. For more information on pool and device health, + see the Device Failure + and Recovery section. +

If a scrub or resilver is in progress, this command reports + the percentage done and the estimated time to completion. Both of these + are only approximate, because the amount of data in the pool and the + other workloads on the system can change.

+
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + status output. See the + -c option of zpool + iostat for complete details.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in + the pool) block counts and sizes by reference count.
+
+ u|d
+
Display a time stamp. Specify -u for a printed + representation of the internal representation of time. See + time(2). Specify -d for + standard date format. See date(1).
+
+
Displays verbose data error information, printing out a complete list + of all data errors since the last complete pool scrub.
+
+
Only display status for pools that are exhibiting errors or are + otherwise unavailable. Warnings about pools not using the latest + on-disk format will not be included.
+
+
+
zpool sync + [pool ...]
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all pools on the system. Otherwise, + it will sync only the specified pool(s).
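For example (pool name is illustrative):
# zpool sync tank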
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools.
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by the current software. See zpool-features(5) for a description of the feature flags supported by the current software.
+
zpool upgrade + [-V version] + -a|pool...
+
Enables all supported features on the given pool. Once this is done, the pool will no longer be accessible on systems that do not support feature flags. See zpool-features(5) for details on compatibility with systems that support feature flags, but do not support all features enabled on the pool.
+
+
Enables all supported features on all pools.
+
+ version
+
Upgrade to the specified legacy version. If the + -V flag is specified, no features will be + enabled on the pool. This option can only be used to increase the + version number up to the last supported legacy version number.
+
+
+
+
+
+
+

+

The following exit values are returned:

+
+
+
Successful completion.
+
+
An error occurred.
+
+
Invalid command line options were specified.
+
+
+
+

+
+
Creating a RAID-Z Storage Pool
+
The following command creates a pool with a single raidz root vdev that + consists of six disks. +
+
# zpool create tank raidz sda sdb sdc sdd sde sdf
+
+
+
Creating a Mirrored Storage Pool
+
The following command creates a pool with two mirrors, where each mirror + contains two disks. +
+
# zpool create tank mirror sda sdb mirror sdc sdd
+
+
+
Creating a ZFS Storage Pool by Using + Partitions
+
The following command creates an unmirrored pool using two disk + partitions. +
+
# zpool create tank sda1 sdb2
+
+
+
Creating a ZFS Storage Pool by Using + Files
+
The following command creates an unmirrored pool using files. While not + recommended, a pool based on files can be useful for experimental + purposes. +
+
# zpool create tank /path/to/file/a /path/to/file/b
+
+
+
Adding a Mirror to a ZFS Storage Pool
+
The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool. +
+
# zpool add tank mirror sda sdb
+
+
+
Listing Available ZFS Storage Pools
+
The following command lists all available pools on the system. In this case, the pool zion is faulted due to a missing device. The results from this command are similar to the following:
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
Destroying a ZFS Storage Pool
+
The following command destroys the pool tank and any + datasets contained within. +
+
# zpool destroy -f tank
+
+
+
Exporting a ZFS Storage Pool
+
The following command exports the devices in pool tank + so that they can be relocated or later imported. +
+
# zpool export tank
+
+
+
Importing a ZFS Storage Pool
+
The following command displays available pools, and then imports the pool + tank for use on the system. The results from this + command are similar to the following: +
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
Upgrading All ZFS Storage Pools to the Current + Version
+
The following command upgrades all ZFS Storage pools to the current + version of the software. +
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
Managing Hot Spares
+
The following command creates a new pool with an available hot spare: +
+
# zpool create tank mirror sda sdb spare sdc
+
+

If one of the disks were to fail, the pool would be reduced to + the degraded state. The failed device can be replaced using the + following command:

+
+
# zpool replace tank sda sdd
+
+

Once the data has been resilvered, the spare is automatically removed and is made available for use should another device fail. The hot spare can be permanently removed from the pool using the following command:

+
+
# zpool remove tank sdc
+
+
+
Creating a ZFS Pool with Mirrored Separate + Intent Logs
+
The following command creates a ZFS storage pool consisting of two, + two-way mirrors and mirrored log devices: +
+
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \
+  sde sdf
+
+
+
Adding Cache Devices to a ZFS Pool
+
The following command adds two disks for use as cache devices to a ZFS + storage pool: +
+
# zpool add pool cache sdc sdd
+
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take + over an hour for them to fill. Capacity and reads can be monitored using + the iostat option as follows:

+
+
# zpool iostat -v pool 5
+
+
+
Removing a Mirrored Log Device
+
The following command removes the mirrored log device + mirror-2. Given this configuration: +
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
+
# zpool remove tank mirror-2
+
+
+
Displaying expanded space on a + device
+
The following command displays the detailed information for the pool data. This pool is comprised of a single raidz vdev where one of its devices increased its capacity by 10GB. In this example, the pool will not be able to utilize this extra capacity until all the devices under the raidz vdev have been expanded.
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
Adding output columns
+
Additional columns can be added to the zpool + status and zpool + iostat output with -c + option. +
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc slaves
+   capacity operations bandwidth
+   pool       alloc free  read  write read  write slaves
+   ---------- ----- ----- ----- ----- ----- ----- ---------
+   tank       20.4G 7.23T 26    152   20.7M 21.6M
+   mirror     20.4G 7.23T 26    152   20.7M 21.6M
+   U1         -     -     0     31    1.46K 20.6M sdb sdff
+   U10        -     -     0     1     3.77K 13.3K sdas sdgw
+   U11        -     -     0     1     288K  13.3K sdat sdgx
+   U12        -     -     0     1     78.4K 13.3K sdau sdgy
+   U13        -     -     0     1     128K  13.3K sdav sdgz
+   U14        -     -     0     1     63.2K 13.3K sdfk sdg
+
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes of running ::findleaks.
+
+
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
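As a sketch (the directory list is illustrative), the search path could be overridden for a single import scan with:
# ZPOOL_IMPORT_PATH=/dev/disk/by-id:/dev zpool import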
+
+
+
+
Cause zpool subcommands to output vdev guids by default. + This behavior is identical to the zpool status + -g command line option.
+
+
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the zpool + status -L command line option.
+
+
+
+
Cause zpool subcommands to output full vdev path names by default. This behavior is identical to the zpool status -P command line option.
+
+
+
+
Older ZFS on Linux implementations had issues when attempting to display + pool config VDEV names if a devid NVP value is present + in the pool's config. +

For example, a pool that originated on illumos platform would + have a devid value in the config and zpool + status would fail when listing the config. This would also be + true for future Linux based pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool add by setting + ZFS_VDEV_DEVID_OPT_OUT.

+
+
+
+
+
Allow a privileged user to run the zpool + status/iostat with the -c option. Normally, + only unprivileged users are allowed to run + -c.
+
+
+
+
The search path for scripts when running zpool + status/iostat with the -c option. This is a + colon-separated list of directories and overrides the default + ~/.zpool.d and + /etc/zfs/zpool.d search paths.
+
+
+
+
Allow a user to run zpool status/iostat with the + -c option. If + ZPOOL_SCRIPTS_ENABLED is not set, it is assumed that the + user is allowed to run zpool status/iostat + -c.
+
+
+
+

+

+
+
+

+

zed(8), zfs(8), + zfs-events(5), zfs-module-parameters(5), + zpool-features(5)

+
+
+ + + + + +
April 27, 2018Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zstreamdump.8.html b/man/v0.7/8/zstreamdump.8.html new file mode 100644 index 000000000..6f692c3f7 --- /dev/null +++ b/man/v0.7/8/zstreamdump.8.html @@ -0,0 +1,198 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
zstreamdump(8)System Administration Commandszstreamdump(8)
+
+
+

+

zstreamdump - filter data in zfs send stream

+
+
+

+
zstreamdump [-C] [-v]
+

+
+
+

+

The zstreamdump utility reads from the output of the zfs send command, then displays headers and some statistics from that output. See zfs(8).
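For example (dataset and snapshot names are illustrative), a send stream could be inspected with:
# zfs send pool/fs@snap | zstreamdump -v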

+
+
+

+

The following options are supported:

+

-C

+

+
Suppress the validation of checksums.
+

+

-v

+

+
Verbose. Dump all headers, not only begin and end + headers.
+

+
+
+

+

zfs(8)

+
+
+ + + + + +
29 Aug 2012ZFS pool 28, filesystem 5
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/index.html b/man/v0.7/index.html new file mode 100644 index 000000000..8c5582154 --- /dev/null +++ b/man/v0.7/index.html @@ -0,0 +1,143 @@ + + + + + + + v0.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/cstyle.1.html b/man/v0.8/1/cstyle.1.html new file mode 100644 index 000000000..a750d879b --- /dev/null +++ b/man/v0.8/1/cstyle.1.html @@ -0,0 +1,285 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
cstyle(1)General Commands Manualcstyle(1)
+
+
+

+

cstyle - check for some common stylistic errors in C source + files

+
+
+

+

cstyle [-chpvCP] [-o constructs] [file...]

+
+
+

+

cstyle inspects C source files (*.c and *.h) for common + stylistic errors. It attempts to check for the cstyle documented in + http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. Note that + there is much in that document that cannot be checked for; just + because your code is cstyle(1) clean does not mean that you've + followed Sun's C style. Caveat emptor.
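For example (the file name is illustrative), a source file could be checked with the picky and POSIX-type checks enabled with:
$ cstyle -pP foo.c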

+
+
+

+

The following options are supported:

+
+
+
Check continuation line indentation inside of functions. Sun's C style + states that all statements must be indented to an appropriate tab stop, + and any continuation lines after them must be indented exactly four + spaces from the start line. This option enables a series of checks + designed to find continuation line problems within functions only. The + checks have some limitations; see CONTINUATION CHECKING, below.
+
+
Performs heuristic checks that are sometimes wrong. Not generally + used.
+
+
Performs some of the more picky checks. Includes ANSI #else and #endif + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current continuation block.
+
+
Ignore errors in header comments (i.e. block comments starting in the + first column). Not generally used.
+
+
Check for use of non-POSIX types. Historically, types like + "u_int" and "u_long" were used, but they are now + deprecated in favor of the POSIX types uint_t, ulong_t, etc. This detects + any use of the deprecated types. Used as part of the putback checks.
+
+
Allow a comma-separated list of additional constructs. Available + constructs include:
+
+
Allow doxygen-style block comments (/** and /*!)
+
+
Allow splint-style lint comments (/*@...@*/)
+
+
+
+

+

The cstyle rule for the OS/Net consolidation is that all new files + must be -pP clean. For existing files, the following invocations are + run against both the old and new files:

+
+
+
+
+
+
+
+
+

If the old file gave no errors for one of the invocations, the new + file must also give no errors. This way, files can only become more + clean.

+
+
+

+

The continuation checker is a reasonably simple state machine that + knows something about how C is laid out, and can match parenthesis, etc. + over multiple lines. It does have some limitations:

+
+
1.
+
Preprocessor macros which cause unmatched parenthesis will confuse the + checker for that line. To fix this, you'll need to make sure that each + branch of the #if statement has balanced parenthesis.
+
2.
+
Some cpp macros do not require ;s after them. Any such macros + *must* be ALL_CAPS; any lower case letters will cause bad output.
+
+

The bad output will generally be corrected after the next + ;, {, or }.

+

Some continuation error messages deserve some additional explanation:

+
+
+
A multi-line statement which is not broken at statement boundaries. For + example:
+
+
+

if (this_is_a_long_variable == another_variable) a = +
+ b + c;

+

Will trigger this error. Instead, do:

+

if (this_is_a_long_variable == another_variable) +
+ a = b + c;

+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example:
+
+
+

while (do_something(&x) == 0);

+

Will trigger this error. Instead, do:

+

while (do_something(&x) == 0) +
+ ;

+
+

+
+
+ + + + + +
28 March 2005
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/index.html b/man/v0.8/1/index.html new file mode 100644 index 000000000..57c235116 --- /dev/null +++ b/man/v0.8/1/index.html @@ -0,0 +1,153 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/raidz_test.1.html b/man/v0.8/1/raidz_test.1.html new file mode 100644 index 000000000..f61ce98d8 --- /dev/null +++ b/man/v0.8/1/raidz_test.1.html @@ -0,0 +1,260 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
raidz_test(1)User Commandsraidz_test(1)
+
+

+
+

+

raidz_test - raidz implementation verification and + benchmarking tool

+
+
+

+

raidz_test <options>

+
+
+

+

This manual page documents briefly the raidz_test + command.

+

The purpose of this tool is to run all supported raidz implementations and verify the results of all methods. The tool also contains a parameter sweep option in which all parameters affecting a RAIDZ block are verified (such as ashift size, data offset, and data size). The tool also supports a benchmarking mode using the -B option.

+
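For example (a sketch only; the option semantics are described under OPTIONS below), one might verify all implementations with a time-bounded parameter sweep and then benchmark them:

	raidz_test -S -t 600
	raidz_test -B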
+
+

+

-h

+
+
+
Print a help summary.
+
+

-a ashift (default: 9)

+
+
+
Ashift value.
+
+

-o zio_off_shift (default: 0)

+
+
+
Zio offset for raidz block. Offset value is 1 << + (zio_off_shift)
+
+

-d raidz_data_disks (default: 8)

+
+
+
Number of raidz data disks to use. Additional disks for parity will be + used during testing.
+
+

-s zio_size_shift (default: 19)

+
+
+
Size of data for raidz block. Size is 1 << (zio_size_shift).
+
+

-S(weep)

+
+
+
Sweep the parameter space while verifying the raidz implementations. This option will exhaust almost all of the valid values for the -a, -o, -d and -s options. The runtime when using this option will be long.
+
+

-t(imeout)

+
+
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
+

-B(enchmark)

+
+
+
This option starts benchmark mode. All implementations are benchmarked using increasing per-disk data sizes. Results are given as throughput per disk, measured in MiB/s.
+
+

-v(erbose)

+
+
+
Increase verbosity.
+
+

-T(est the test)

+
+
+
Debugging option. When this option is specified, the tool is expected to fail all tests. This is used to check that the tests properly verify bit-exactness.
+
+

-D(ebug)

+
+
+
Debugging option. Specify this option to attach gdb when SIGSEGV or SIGABRT is received.
+
+

+

+
+
+

+

ztest (1)

+
+
+

+

vdev_raidz, created for ZFS on Linux by Gvozden + Nešković <neskovic@gmail.com>

+
+
+ + + + + +
2016ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/zhack.1.html b/man/v0.8/1/zhack.1.html new file mode 100644 index 000000000..70a012784 --- /dev/null +++ b/man/v0.8/1/zhack.1.html @@ -0,0 +1,252 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
zhack(1)User Commandszhack(1)
+
+

+
+

+

zhack - libzpool debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+

zhack [-c cachefile] [-d dir] + <subcommand> [arguments]

+
+
+

+

-c cachefile

+
+
+
Read the pool configuration from the cachefile, which is + /etc/zfs/zpool.cache by default.
+
+

-d dir

+
+
+
Search for pool members in the dir path. Can be specified + more than once.
+
+
+
+

+

feature stat pool

+
+
+
List feature flags.
+
+

feature enable [-d description] [-r] pool + guid

+
+
+
Add a new feature to pool that is uniquely identified by + guid, which is specified in the same form as a zfs(8) user + property.
+
+
The description is a short human readable explanation of the new + feature.
+
+
The -r switch indicates that pool can be safely opened in + read-only mode by a system that does not have the guid + feature.
+
+

feature ref [-d|-m] pool guid

+
+
+
Increment the reference count of the guid feature in + pool.
+
+
The -d switch decrements the reference count of the guid + feature in pool.
+
+
The -m switch indicates that the guid feature is now + required to read the pool MOS.
+
+
+
+

+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
# zhack feature enable -d 'Predict future disk failures.' \
+
+ tank com.example:clairvoyance
+
# zhack feature ref tank com.example:clairvoyance
+
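Continuing the example above (a sketch only), the added reference could later be dropped again using the -d switch described earlier:

	# zhack feature ref -d tank com.example:clairvoyance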
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

zfs(8), zpool-features(5), ztest(1)

+
+
+ + + + + +
2013 MAR 16ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/ztest.1.html b/man/v0.8/1/ztest.1.html new file mode 100644 index 000000000..338ed1d86 --- /dev/null +++ b/man/v0.8/1/ztest.1.html @@ -0,0 +1,349 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ztest(1)User Commandsztest(1)
+
+

+
+

+

ztest - was written by the ZFS Developers as a ZFS unit + test.

+
+
+

+

ztest <options>

+
+
+

+

This manual page documents briefly the ztest command.

+

ztest was written by the ZFS Developers as a ZFS unit test. The tool was developed in tandem with the ZFS functionality and was executed nightly as one of the many regression tests against the daily build. As features were added to ZFS, unit tests were also added to ztest. In addition, a separate test development team wrote and executed more functional and stress tests.

+

By default ztest runs for five minutes (see the -T option) and uses block files (stored in /tmp) to create pools rather than using physical disks. Block files afford ztest its flexibility to play around with zpool components without requiring large hardware configurations. However, storing the block files in /tmp may not work for you if you have a small tmp directory.

+

By default ztest is non-verbose, which is why entering the command above will result in ztest quietly executing for 5 minutes. The -V option can be used to increase the verbosity of the tool. Adding multiple -V options is allowed; the more you add, the more chatty ztest becomes.

+

After the ztest run completes, you should notice many ztest.* files lying around. Once the run completes you can safely remove these files; note that you shouldn't remove them during a run. You can re-use these files in your next ztest run by using the -E option.

+
+
+

+

-?

+
+
+
Print a help summary.
+
+

-v vdevs (default: 5)

+
+
+
Number of vdevs.
+
+

-s size_of_each_vdev (default: 64M)

+
+
+
Size of each vdev.
+
+

-a alignment_shift (default: 9) (use 0 for + random)

+
+
+
Used alignment in test.
+
+

-m mirror_copies (default: 2)

+
+
+
Number of mirror copies.
+
+

-r raidz_disks (default: 4)

+
+
+
Number of raidz disks.
+
+

-R raidz_parity (default: 1)

+
+
+
Raidz parity.
+
+

-d datasets (default: 7)

+
+
+
Number of datasets.
+
+

-t threads (default: 23)

+
+
+
Number of threads.
+
+

-g gang_block_threshold (default: 32K)

+
+
+
Gang block threshold.
+
+

-i initialize_pool_i_times (default: + 1)

+
+
+
Number of pool initialisations.
+
+

-k kill_percentage (default: 70%)

+
+
+
Kill percentage.
+
+

-p pool_name (default: ztest)

+
+
+
Pool name.
+
+

-V(erbose)

+
+
+
Verbose (use multiple times for ever more blather).
+
+

-E(xisting)

+
+
+
Use an existing pool instead of creating a new one.
+
+

-T time (default: 300 sec)

+
+
+
Total test run time.
+
+

-z zil_failure_rate (default: fail every 2^5 + allocs)

+
+
+
Injected failure rate.
+
+

-G

+
+
+
Dump zfs_dbgmsg buffer before exiting.
+
+
+
+

+

To override /tmp as your location for block files, you can use the + -f option:

+
+
+
ztest -f /
+
+

To get an idea of what ztest is actually testing try this:

+
+
+
ztest -f / -VVV
+
+

Maybe you'd like to run ztest for longer? To do so, simply use the -T option and specify the run length in seconds like so:

+
+
+
ztest -f / -V -T 120 +

+
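To re-use the block files left over from a previous run (see the -E option above), a longer follow-up run might look like this (illustrative only):

	ztest -E -V -T 600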
+
+
+
+

+
+
+
Use id instead of the SPL hostid to identify this host. Intended for use with ztest, but this environment variable will affect any utility which uses libzpool, including zpool(8). Since the kernel is unaware of this setting, results with utilities other than ztest are undefined.
+
+
Limit the default stack size to stacksize bytes for the purpose of + detecting and debugging kernel stack overflows. This value defaults to + 32K which is double the default 16K Linux kernel stack size. +

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to 256K.

+
+
+
+
+

+

spl-module-parameters (5), zpool (1), zfs (1), zdb (1)

+
+
+

+

This manual page was transferred to asciidoc by Michael + Gebetsroither <gebi@grml.org> from + http://opensolaris.org/os/community/zfs/ztest/

+
+
+ + + + + +
2009 NOV 01ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/zvol_wait.1.html b/man/v0.8/1/zvol_wait.1.html new file mode 100644 index 000000000..884d0219c --- /dev/null +++ b/man/v0.8/1/zvol_wait.1.html @@ -0,0 +1,191 @@ + + + + + + + zvol_wait.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zvol_wait.1

+
+ + + + + +
ZVOL_WAIT(1)General Commands Manual (smm)ZVOL_WAIT(1)
+
+
+

+

zvol_waitWait + for ZFS volume links in + to be + created.

+
+
+

+ + + + + +
zvol_wait
+
+
+

+

When a ZFS pool is imported, ZFS will register each ZFS volume + (zvol) as a disk device with the system. As the disks are registered, + udev(7) will asynchronously create + symlinks under + + using the zvol's name. zvol_wait will wait for all + those symlinks to be created before returning.

+
+
+

+

udev(7)

+
+
+ + + + + +
July 5, 2019Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/index.html b/man/v0.8/5/index.html new file mode 100644 index 000000000..cf6dc050a --- /dev/null +++ b/man/v0.8/5/index.html @@ -0,0 +1,153 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/spl-module-parameters.5.html b/man/v0.8/5/spl-module-parameters.5.html new file mode 100644 index 000000000..71eff4f62 --- /dev/null +++ b/man/v0.8/5/spl-module-parameters.5.html @@ -0,0 +1,387 @@ + + + + + + + spl-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

spl-module-parameters.5

+
+ + + + + +
SPL-MODULE-PARAMETERS(5)File Formats ManualSPL-MODULE-PARAMETERS(5)
+
+
+

+

spl-module-parameters - SPL module parameters

+
+
+

+

Description of the different parameters to the SPL module.

+

+
+

+

+

spl_kmem_cache_expire (uint)

+
Cache expiration is part of default Illumos cache behavior. The idea is that objects in magazines which have not been recently accessed should be returned to the slabs periodically. This is known as cache aging and, when enabled, objects will typically be returned after 15 seconds.

On the other hand Linux slabs are designed to never move objects + back to the slabs unless there is memory pressure. This is possible because + under Linux the cache will be notified when memory is low and objects can be + released.

+

By default only the Linux method is enabled. It has been shown to + improve responsiveness on low memory systems and not negatively impact the + performance of systems with more memory. This policy may be changed by + setting the spl_kmem_cache_expire bit mask as follows, both policies + may be enabled concurrently.

+

0x01 - Aging (Illumos), 0x02 - Low memory (Linux)

+

Default value: 0x02

+
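As a sketch of how this bit mask might be changed (the paths and file names follow common Linux module-parameter conventions and are not taken from this manual), both policies could be enabled at runtime with:

	echo 3 > /sys/module/spl/parameters/spl_kmem_cache_expire

or made persistent across module loads with a line such as "options spl spl_kmem_cache_expire=3" in /etc/modprobe.d/spl.conf.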
+

+

spl_kmem_cache_kmem_threads (uint)

+
The number of threads created for the spl_kmem_cache task + queue. This task queue is responsible for allocating new slabs for use by the + kmem caches. For the majority of systems and workloads only a small number of + threads are required. +

Default value: 4

+
+

+

spl_kmem_cache_reclaim (uint)

+
When this is set it prevents Linux from being able to rapidly reclaim all the memory held by the kmem caches. This may be useful in circumstances where it's preferable that Linux reclaim memory from some other subsystem first. Setting this will increase the likelihood of out-of-memory events on a memory-constrained system.

Default value: 0

+
+

+

spl_kmem_cache_obj_per_slab (uint)

+
The preferred number of objects per slab in the cache. In general, a larger value will increase the cache's memory footprint while decreasing the time required to perform an allocation. Conversely, a smaller value will minimize the footprint and improve cache reclaim time but individual allocations may take longer.

Default value: 8

+
+

+

spl_kmem_cache_obj_per_slab_min (uint)

+
The minimum number of objects allowed per slab. Normally + slabs will contain spl_kmem_cache_obj_per_slab objects but for caches + that contain very large objects it's desirable to only have a few, or even + just one, object per slab. +

Default value: 1

+
+

+

spl_kmem_cache_max_size (uint)

+
The maximum size of a kmem cache slab in MiB. This effectively limits the maximum cache object size to spl_kmem_cache_max_size / spl_kmem_cache_obj_per_slab. Caches may not be created with objects sized larger than this limit.

Default value: 32 (64-bit) or 4 (32-bit)

+
+

+

spl_kmem_cache_slab_limit (uint)

+
For small objects the Linux slab allocator should be used + to make the most efficient use of the memory. However, large objects are not + supported by the Linux slab and therefore the SPL implementation is preferred. + This value is used to determine the cutoff between a small and large object. +

Objects of spl_kmem_cache_slab_limit or smaller will be allocated using the Linux slab allocator; larger objects use the SPL allocator. A cutoff of 16K was determined to be optimal for architectures using 4K pages.

+

Default value: 16,384

+
+

+

spl_kmem_cache_kmem_limit (uint)

+
Depending on the size of a cache object it may be backed + by kmalloc()'d or vmalloc()'d memory. This is because the size of the required + allocation greatly impacts the best way to allocate the memory. +

When objects are small and only a small number of memory pages + need to be allocated, ideally just one, then kmalloc() is very efficient. + However, when allocating multiple pages with kmalloc() it gets increasingly + expensive because the pages must be physically contiguous.

+

For this reason we shift to vmalloc() for slabs of large objects, which removes the need for contiguous pages. We cannot use vmalloc() in all cases because there is significant locking overhead involved. This function takes a single global lock over the entire virtual address range which serializes all allocations. Using slightly different allocation functions for small and large objects allows us to handle a wide range of object sizes.

+

The spl_kmem_cache_kmem_limit value is used to determine this cutoff size. One quarter of the PAGE_SIZE is used as the default value because spl_kmem_cache_obj_per_slab defaults to 16. This means that at most we will need to allocate four contiguous pages.

+

Default value: PAGE_SIZE/4

+
+

+

spl_kmem_alloc_warn (uint)

+
As a general rule kmem_alloc() allocations should be small, preferably just a few pages, since they must be physically contiguous. Therefore, a rate limited warning will be printed to the console for any kmem_alloc() which exceeds a reasonable threshold.

The default warning threshold is set to eight pages but capped at 32K to accommodate systems using large pages. This value was selected to be small enough to ensure the largest allocations are quickly noticed and fixed, but large enough to avoid logging any warnings when an allocation size is larger than optimal but not a serious concern. Since this value is tunable, developers are encouraged to set it lower when testing so any new largish allocations are quickly caught. These warnings may be disabled by setting the threshold to zero.

+

Default value: 32,768

+
+

+

spl_kmem_alloc_max (uint)

+
Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE. Allocations which are marginally smaller than this limit may succeed but should still be avoided due to the expense of locating a contiguous range of free pages. Therefore, a maximum kmem size with a reasonable safety margin of 4x is set. Kmem_alloc() allocations larger than this maximum will quickly fail. Vmem_alloc() allocations less than or equal to this value will use kmalloc(), but shift to vmalloc() when exceeding this value.

Default value: KMALLOC_MAX_SIZE/4

+
+

+

spl_kmem_cache_magazine_size (uint)

+
Cache magazines are an optimization designed to minimize + the cost of allocating memory. They do this by keeping a per-cpu cache of + recently freed objects, which can then be reallocated without taking a lock. + This can improve performance on highly contended caches. However, because + objects in magazines will prevent otherwise empty slabs from being immediately + released this may not be ideal for low memory machines. +

For this reason spl_kmem_cache_magazine_size can be used to + set a maximum magazine size. When this value is set to 0 the magazine size + will be automatically determined based on the object size. Otherwise + magazines will be limited to 2-256 objects per magazine (i.e per cpu). + Magazines may never be entirely disabled in this implementation.

+

Default value: 0

+
+

+

spl_hostid (ulong)

+
The system hostid; when set, this can be used to uniquely identify a system. By default this value is set to zero, which indicates the hostid is disabled. It can be explicitly enabled by placing a unique non-zero value in /etc/hostid.

Default value: 0

+
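For example (a sketch only; the hostid value is arbitrary and the modprobe.d path follows normal Linux conventions), the parameter could be set persistently with a line such as:

	options spl spl_hostid=0x00bab10c

in /etc/modprobe.d/spl.conf, as an alternative to populating /etc/hostid (see spl_hostid_path below).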
+

+

spl_hostid_path (charp)

+
The expected path to locate the system hostid when + specified. This value may be overridden for non-standard configurations. +

Default value: /etc/hostid

+
+

+

spl_panic_halt (uint)

+
Cause a kernel panic on assertion failures. When not + enabled, the thread is halted to facilitate further debugging. +

Set to a non-zero value to enable.

+

Default value: 0

+
+

+

spl_taskq_kick (uint)

+
Kick stuck taskqs to spawn threads. When a non-zero value is written to this parameter, all the taskqs are scanned; if any of them have a pending task more than 5 seconds old, they are kicked to spawn more threads. This can be used if you find that a rare deadlock occurs because one or more taskqs didn't spawn a thread when they should have.

Default value: 0

+
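For example, a stuck taskq could be kicked at runtime with (the path shown follows the usual sysfs layout for module parameters and is given here only as a sketch):

	echo 1 > /sys/module/spl/parameters/spl_taskq_kick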
+

+

spl_taskq_thread_bind (int)

+
Bind taskq threads to specific CPUs. When enabled all + taskq threads will be distributed evenly over the available CPUs. By default, + this behavior is disabled to allow the Linux scheduler the maximum flexibility + to determine where a thread should run. +

Default value: 0

+
+

+

spl_taskq_thread_dynamic (int)

+
Allow dynamic taskqs. When enabled taskqs which set the + TASKQ_DYNAMIC flag will by default create only a single thread. New threads + will be created on demand up to a maximum allowed number to facilitate the + completion of outstanding tasks. Threads which are no longer needed will be + promptly destroyed. By default this behavior is enabled but it can be disabled + to aid performance analysis or troubleshooting. +

Default value: 1

+
+

+

spl_taskq_thread_priority (int)

+
Allow newly created taskq threads to set a non-default + scheduler priority. When enabled the priority specified when a taskq is + created will be applied to all threads created by that taskq. When disabled + all threads will use the default Linux kernel thread priority. By default, + this behavior is enabled. +

Default value: 1

+
+

+

spl_taskq_thread_sequential (int)

+
The number of items a taskq worker thread must handle + without interruption before requesting a new worker thread be spawned. This is + used to control how quickly taskqs ramp up the number of threads processing + the queue. Because Linux thread creation and destruction are relatively + inexpensive a small default value has been selected. This means that normally + threads will be created aggressively which is desirable. Increasing this value + will result in a slower thread creation rate which may be preferable for some + configurations. +

Default value: 4

+
+

+

spl_max_show_tasks (uint)

+
The maximum number of tasks per pending list in each taskq shown in /proc/spl/{taskq,taskq-all}. Write 0 to turn off the limit. The proc file will walk the lists with the lock held, so reading it could cause a lockup if a list grows too large without the output being limited. "(truncated)" will be shown if the list is larger than the limit.

Default value: 512

+
+
+
+
+ + + + + +
October 28, 2017
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/vdev_id.conf.5.html b/man/v0.8/5/vdev_id.conf.5.html new file mode 100644 index 000000000..92f44cc16 --- /dev/null +++ b/man/v0.8/5/vdev_id.conf.5.html @@ -0,0 +1,345 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
vdev_id.conf(5)File Formats Manualvdev_id.conf(5)
+
+
+

+

vdev_id.conf - Configuration file for vdev_id

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of vdev_id(8) + while it is mapping a disk device name to an alias.

+

The vdev_id.conf file uses a simple format consisting of a + keyword followed by one or more values on a single line. Any line not + beginning with a recognized keyword is ignored. Comments may optionally + begin with a hash character.

+

The following keywords and values are used.

+
+
+
Maps a device link in the /dev directory hierarchy to a new device name. + The udev rule defining the device link must have run prior to + vdev_id(8). A defined alias takes precedence over a + topology-derived name, but the two naming methods can otherwise coexist. + For example, one might name drives in a JBOD with the sas_direct topology + while naming an internal L2ARC device with an alias. +

name - the name of the link to the device that will be created in /dev/disk/by-vdev.

+

devlink - the name of the device link that has already + been defined by udev. This may be an absolute path or the base + filename.

+

+
+
+
Maps a physical path to a channel name (typically representing a single + disk enclosure). +

+
+ +
Additionally create /dev/by-enclosure symlinks to the disk enclosure sg + devices using the naming scheme from vdev_id.conf. + enclosure_symlinks is only allowed for sas_direct mode.
+ +
Specify the prefix for the enclosure symlinks in the form of: +

/dev/by-enclosure/<prefix>-<channel><num>

+

Defaults to "enc" if not specified.

+
+
+
hosting the disk enclosure being mapped, as found in the output of + lspci(8). This argument is not used in sas_switch mode. +

port - specifies the numeric identifier of the HBA or + SAS switch port connected to the disk enclosure being mapped.

+

name - specifies the name of the channel.

+

+
+
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is specified then + the mapping is only applied to slots in the named channel, otherwise the + mapping is applied to all channels. The first-specified slot rule + that can match a slot takes precedence. Therefore a channel-specific + mapping for a given slot should generally appear before a generic mapping + for the same slot. In this way a custom mapping may be applied to a + particular channel and a default mapping applied to the others. +

+
+
+
Specifies whether vdev_id(8) will handle only dm-multipath devices. + If set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely + identified by a PCI slot and a HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+

+
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4. +

+
+
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay. +

bay - read the slot number from the bay identifier.

+

phy - read the slot number from the phy identifier.

+

port - use the SAS port as the slot number.

+

id - use the scsi id as the slot number.

+

lun - use the scsi lun as the slot number.

+

ses - use the SCSI Enclosure Services (SES) enclosure + device slot number, as reported by sg_ses(8). This is intended + for use only on systems where bay is unsupported, noting that + port and id may be unstable across disk replacement.

+
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping.

+

+
	multipath     no
+	topology      sas_direct
+	phys_per_port 4
+	slot          bay
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         C
+	channel 86:00.0  0         D
+	# Custom mapping for Channel A
+	#    Linux      Mapped
+	#    Slot       Slot      Channel
+	slot 1          7         A
+	slot 2          10        A
+	slot 3          3         A
+	slot 4          6         A
+	# Default mapping for B, C, and D
+	slot 1          4
+	slot 2          2
+	slot 3          1
+	slot 4          3
+

A SAS-switch topology. Note that the channel keyword takes + only two arguments in this example.

+

+
	topology      sas_switch
+	#       SWITCH PORT  CHANNEL NAME
+	channel 1            A
+	channel 2            B
+	channel 3            C
+	channel 4            D
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path.

+

+
	multipath yes
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         A
+	channel 86:00.0  0         B
+

A configuration with enclosure_symlinks enabled.

+

+
	multipath yes
+	enclosure_symlinks yes
+	#          PCI_ID      HBA PORT     CHANNEL NAME
+	channel    05:00.0     1            U
+	channel    05:00.0     0            L
+	channel    06:00.0     1            U
+	channel    06:00.0     0            L
+In addition to the disk symlinks, this configuration will create:

+
	/dev/by-enclosure/enc-L0
+	/dev/by-enclosure/enc-L1
+	/dev/by-enclosure/enc-U0
+	/dev/by-enclosure/enc-U1
+

A configuration using device link aliases.

+

+
	#     by-vdev
+	#     name     fully qualified or base name of device link
+	alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+	alias d2       wwn-0x5000c5002def789e
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/zfs-events.5.html b/man/v0.8/5/zfs-events.5.html new file mode 100644 index 000000000..dd352c0bf --- /dev/null +++ b/man/v0.8/5/zfs-events.5.html @@ -0,0 +1,848 @@ + + + + + + + zfs-events.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-events.5

+
+ + + + + +
ZFS-EVENTS(5)File Formats ManualZFS-EVENTS(5)
+
+
+

+

zfs-events - Events created by the ZFS filesystem.

+
+
+

+

Description of the different events generated by the ZFS + stack.

+

Most of these don't have any description. The events generated by + ZFS have never been publicly documented. What is here is intended as a + starting point to provide documentation for all possible events.

+

To view all events created since the loading of the ZFS infrastructure (i.e., "the module"), run

+

+
zpool events
+

to get a short list, and

+

+
zpool events -v
+

to get full details of the events and what information is available about them.

+
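New events can also be watched as they arrive (a brief sketch; see zpool(8) for the complete set of zpool events options):

	zpool events -f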

This man page lists the different subclasses that are issued in + the case of an event. The full event name would be + ereport.fs.zfs.SUBCLASS, but we only list the last part here.

+

+
+

+

+

checksum

+
Issued when a checksum error has been detected.
+

+

io

+
Issued when there is an I/O error in a vdev in the + pool.
+

+

data

+
Issued when there have been data errors in the + pool.
+

+

deadman

+
Issued when an I/O is determined to be "hung", + this can be caused by lost completion events due to flaky hardware or drivers. + See the zfs_deadman_failmode module option description for additional + information regarding "hung" I/O detection and configuration.
+

+

delay

+
Issued when a completed I/O exceeds the maximum allowed + time specified by the zio_slow_io_ms module option. This can be an + indicator of problems with the underlying storage device. The number of delay + events is ratelimited by the zfs_slow_io_events_per_second module + parameter.
+

+

config.sync

+
Issued every time a vdev change has been made to the pool.
+

+

zpool

+
Issued when a pool cannot be imported.
+

+

zpool.destroy

+
Issued when a pool is destroyed.
+

+

zpool.export

+
Issued when a pool is exported.
+

+

zpool.import

+
Issued when a pool is imported.
+

+

zpool.reguid

+
Issued when a REGUID (a new unique identifier for the pool) has been generated.
+

+

vdev.unknown

+
Issued when the vdev is unknown, such as when trying to clear device errors on a vdev that has failed or been removed from the system/pool and is no longer available.
+

+

vdev.open_failed

+
Issued when a vdev could not be opened (because it didn't + exist for example).
+

+

vdev.corrupt_data

+
Issued when corrupt data has been detected on a vdev.
+

+

vdev.no_replicas

+
Issued when there are no more replicas to sustain the + pool. This would lead to the pool being DEGRADED.
+

+

vdev.bad_guid_sum

+
Issued when a missing device in the pool has been detected.
+

+

vdev.too_small

+
Issued when the system (kernel) has removed a device, and ZFS notices that the device isn't there anymore. This is usually followed by a probe_failure event.
+

+

vdev.bad_label

+
Issued when the label is OK but invalid.
+

+

vdev.bad_ashift

+
Issued when the ashift alignment requirement has + increased.
+

+

vdev.remove

+
Issued when a vdev is detached from a mirror (or a spare detached from a vdev where it has been used to replace a failed drive - this only works if the original drive has been re-added).
+

+

vdev.clear

+
Issued when clearing device errors in a pool. Such as + running zpool clear on a device in the pool.
+

+

vdev.check

+
Issued when a check to see if a given vdev could be + opened is started.
+

+

vdev.spare

+
Issued when a spare has kicked in to replace a failed device.
+

+

vdev.autoexpand

+
Issued when a vdev can be automatically expanded.
+

+

io_failure

+
Issued when there is an I/O failure in a vdev in the + pool.
+

+

probe_failure

+
Issued when a probe fails on a vdev. This would occur if a vdev has been removed from the system outside of ZFS (such as when the kernel has removed the device).
+

+

log_replay

+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+

+

resilver.start

+
Issued when a resilver is started.
+

+

resilver.finish

+
Issued when the running resilver has finished.
+

+

scrub.start

+
Issued when a scrub is started on a pool.
+

+

scrub.finish

+
Issued when a pool has finished scrubbing.
+

+

scrub.abort

+
Issued when a scrub is aborted on a pool.
+

+

scrub.resume

+
Issued when a scrub is resumed on a pool.
+

+

scrub.paused

+
Issued when a scrub is paused on a pool.
+

+

bootfs.vdev.attach

+
+

+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with + ZEVENT_.

+
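For example, following that convention, the pool payload listed below would be visible to a zed(8) script as the ZEVENT_POOL environment variable (the name is given here only to illustrate the uppercase/prefix rule).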

+

pool

+
Pool name.
+

+

pool_failmode

+
Failmode - wait, continue or panic. + See zpool(8) (failmode property) for more information.
+

+

pool_guid

+
The GUID of the pool.
+

+

pool_context

+
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
+

+

vdev_guid

+
The GUID of the vdev in question (the vdev failing or + operated upon with zpool clear etc).
+

+

vdev_type

+
Type of vdev - disk, file, mirror + etc. See zpool(8) under Virtual Devices for more information on + possible values.
+

+

vdev_path

+
Full path of the vdev, including any -partX.
+

+

vdev_devid

+
ID of vdev (if any).
+

+

vdev_fru

+
Physical FRU location.
+

+

vdev_state

+
State of vdev (0=uninitialized, 1=closed, 2=offline, + 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healthy).
+

+

vdev_ashift

+
The ashift value of the vdev.
+

+

vdev_complete_ts

+
The time the last I/O completed for the specified + vdev.
+

+

vdev_delta_ts

+
The time since the last I/O completed for the specified + vdev.
+

+

vdev_spare_paths

+
List of spares, including full path and any + -partX.
+

+

vdev_spare_guids

+
GUID(s) of spares.
+

+

vdev_read_errors

+
How many read errors have been detected on the vdev.
+

+

vdev_write_errors

+
How many write errors have been detected on the vdev.
+

+

vdev_cksum_errors

+
How many checksum errors have been detected on the vdev.
+

+

parent_guid

+
GUID of the vdev parent.
+

+

parent_type

+
Type of parent. See vdev_type.
+

+

parent_path

+
Path of the vdev parent (if any).
+

+

parent_devid

+
ID of the vdev parent (if any).
+

+

zio_objset

+
The object set number for a given I/O.
+

+

zio_object

+
The object number for a given I/O.
+

+

zio_level

+
The indirect level for the block. Level 0 is the lowest + level and includes data blocks. Values > 0 indicate metadata blocks at the + appropriate level.
+

+

zio_blkid

+
The block ID for a given I/O.
+

+

zio_err

+
The errno for a failure when handling a given I/O. The + errno is compatible with errno(3) with the value for EBADE (0x34) used + to indicate ZFS checksum error.
+

+

zio_offset

+
The offset in bytes of where to write the I/O for the + specified vdev.
+

+

zio_size

+
The size in bytes of the I/O.
+

+

zio_flags

+
The current flags describing how the I/O should be + handled. See the I/O FLAGS section for the full list of I/O + flags.
+

+

zio_stage

+
The current stage of the I/O in the pipeline. See the + I/O STAGES section for a full list of all the I/O stages.
+

+

zio_pipeline

+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+

+

zio_delay

+
The time elapsed (in nanoseconds) waiting for the block + layer to complete the I/O. Unlike zio_delta this does not include any + vdev queuing time and is therefore solely a measure of the block layer + performance.
+

+

zio_timestamp

+
The time when a given I/O was submitted.
+

+

zio_delta

+
The time required to service a given I/O.
+

+

prev_state

+
The previous state of the vdev.
+

+

cksum_expected

+
The expected checksum value for the block.
+

+

cksum_actual

+
The actual checksum value for an errant block.
+

+

cksum_algorithm

+
Checksum algorithm used. See zfs(8) for more + information on checksum algorithms available.
+

+

cksum_byteswap

+
Whether or not the data is byteswapped.
+

+

bad_ranges

+
[start, end) pairs of corruption offsets. Offsets are + always aligned on a 64-bit boundary, and can include some gaps of + non-corruption. (See bad_ranges_min_gap)
+

+

bad_ranges_min_gap

+
In order to bound the size of the bad_ranges + array, gaps of non-corruption less than or equal to bad_ranges_min_gap + bytes have been merged with adjacent corruption. Always at least 8 bytes, + since corruption is detected on a 64-bit word basis.
+

+

bad_range_sets

+
This array has one element per range in + bad_ranges. Each element contains the count of bits in that range which + were clear in the good data and set in the bad data.
+

+

bad_range_clears

+
This array has one element per range in + bad_ranges. Each element contains the count of bits for that range + which were set in the good data and clear in the bad data.
+

+

bad_set_bits

+
If this field exists, it is an array of: (bad data & ~(good data)); that is, the bits set in the bad data which are cleared in the good data. Each element corresponds to a byte whose offset is in a range in bad_ranges, and the array is ordered by offset. Thus, the first element is the first byte in the first bad_ranges range, and the last element is the last byte in the last bad_ranges range.
+

+

bad_cleared_bits

+
Like bad_set_bits, but contains: (good data & + ~(bad data)); that is, the bits set in the good data which are cleared in the + bad data.
+

+

bad_set_histogram

+
If this field exists, it is an array of counters. Each + entry counts bits set in a particular bit of a big-endian uint64 type. The + first entry counts bits set in the high-order bit of the first byte, the 9th + byte, etc, and the last entry counts bits set of the low-order bit of the 8th + byte, the 16th byte, etc. This information is useful for observing a stuck bit + in a parallel data path, such as IDE or parallel SCSI.
+

+

bad_cleared_histogram

+
If this field exists, it is an array of counters. Each + entry counts bit clears in a particular bit of a big-endian uint64 type. The + first entry counts bits clears of the high-order bit of the first byte, the + 9th byte, etc, and the last entry counts clears of the low-order bit of the + 8th byte, the 16th byte, etc. This information is useful for observing a stuck + bit in a parallel data path, such as IDE or parallel SCSI.
+

+
+
+

+

The ZFS I/O pipeline is composed of various stages, which are defined below. The individual stages are used to construct these basic I/O operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on an event to describe the life cycle of a given I/O.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StageBit MaskOperations



ZIO_STAGE_OPEN0x00000001RWFCI
ZIO_STAGE_READ_BP_INIT0x00000002R----
ZIO_STAGE_WRITE_BP_INIT0x00000004-W---
ZIO_STAGE_FREE_BP_INIT0x00000008--F--
ZIO_STAGE_ISSUE_ASYNC0x00000010RWF--
ZIO_STAGE_WRITE_COMPRESS0x00000020-W---
ZIO_STAGE_ENCRYPT0x00000040-W---
ZIO_STAGE_CHECKSUM_GENERATE0x00000080-W---
ZIO_STAGE_NOP_WRITE0x00000100-W---
ZIO_STAGE_DDT_READ_START0x00000200R----
ZIO_STAGE_DDT_READ_DONE0x00000400R----
ZIO_STAGE_DDT_WRITE0x00000800-W---
ZIO_STAGE_DDT_FREE0x00001000--F--
ZIO_STAGE_GANG_ASSEMBLE0x00002000RWFC-
ZIO_STAGE_GANG_ISSUE0x00004000RWFC-
ZIO_STAGE_DVA_THROTTLE0x00008000-W---
ZIO_STAGE_DVA_ALLOCATE0x00010000-W---
ZIO_STAGE_DVA_FREE0x00020000--F--
ZIO_STAGE_DVA_CLAIM0x00040000---C-
ZIO_STAGE_READY0x00080000RWFCI
ZIO_STAGE_VDEV_IO_START0x00100000RW--I
ZIO_STAGE_VDEV_IO_DONE0x00200000RW--I
ZIO_STAGE_VDEV_IO_ASSESS0x00400000RW--I
ZIO_STAGE_CHECKSUM_VERIFY0x00800000R----
ZIO_STAGE_DONE0x01000000RWFCI
+

+
+
+

+

Every I/O in the pipeline contains a set of flags which describe its function and are used to govern its behavior. These flags will be set in an event as a zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FlagBit Mask


ZIO_FLAG_DONT_AGGREGATE0x00000001
ZIO_FLAG_IO_REPAIR0x00000002
ZIO_FLAG_SELF_HEAL0x00000004
ZIO_FLAG_RESILVER0x00000008
ZIO_FLAG_SCRUB0x00000010
ZIO_FLAG_SCAN_THREAD0x00000020
ZIO_FLAG_PHYSICAL0x00000040
ZIO_FLAG_CANFAIL0x00000080
ZIO_FLAG_SPECULATIVE0x00000100
ZIO_FLAG_CONFIG_WRITER0x00000200
ZIO_FLAG_DONT_RETRY0x00000400
ZIO_FLAG_DONT_CACHE0x00000800
ZIO_FLAG_NODATA0x00001000
ZIO_FLAG_INDUCE_DAMAGE0x00002000
ZIO_FLAG_IO_ALLOCATING0x00004000
ZIO_FLAG_IO_RETRY0x00008000
ZIO_FLAG_PROBE0x00010000
ZIO_FLAG_TRYHARD0x00020000
ZIO_FLAG_OPTIONAL0x00040000
ZIO_FLAG_DONT_QUEUE0x00080000
ZIO_FLAG_DONT_PROPAGATE0x00100000
ZIO_FLAG_IO_BYPASS0x00200000
ZIO_FLAG_IO_REWRITE0x00400000
ZIO_FLAG_RAW_COMPRESS0x00800000
ZIO_FLAG_RAW_ENCRYPT0x01000000
ZIO_FLAG_GANG_CHILD0x02000000
ZIO_FLAG_DDT_CHILD0x04000000
ZIO_FLAG_GODFATHER0x08000000
ZIO_FLAG_NOPWRITE0x10000000
ZIO_FLAG_REEXECUTED0x20000000
ZIO_FLAG_DELEGATED0x40000000
ZIO_FLAG_FASTWRITE0x80000000
+
+
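As a worked example of reading the table above, a zio_flags payload value of 0x00000090 decodes to ZIO_FLAG_SCRUB | ZIO_FLAG_CANFAIL (0x00000010 | 0x00000080); the value shown in an event is simply the bitwise OR of the flags that were set for that I/O.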
+
+ + + + + +
October 24, 2018
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/zfs-module-parameters.5.html b/man/v0.8/5/zfs-module-parameters.5.html new file mode 100644 index 000000000..6968334c5 --- /dev/null +++ b/man/v0.8/5/zfs-module-parameters.5.html @@ -0,0 +1,2268 @@ + + + + + + + zfs-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-module-parameters.5

+
+ + + + + +
ZFS-MODULE-PARAMETERS(5)File Formats ManualZFS-MODULE-PARAMETERS(5)
+
+
+

+

zfs-module-parameters - ZFS module parameters

+
+
+

+

Description of the different parameters to the ZFS module.

+
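As a general sketch (the paths follow standard Linux module-parameter conventions rather than anything specific to this manual), most of the parameters below can be inspected and, where writable, changed at runtime through sysfs, for example:

	cat /sys/module/zfs/parameters/dmu_prefetch_max
	echo 134217728 > /sys/module/zfs/parameters/dmu_prefetch_max

A persistent setting would typically go in /etc/modprobe.d/zfs.conf as "options zfs dmu_prefetch_max=134217728".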

+
+

+

+

dbuf_cache_max_bytes (ulong)

+
Maximum size in bytes of the dbuf cache. When 0 + this value will default to 1/2^dbuf_cache_shift (1/32) of the target + ARC size, otherwise the provided value in bytes will be used. The behavior of + the dbuf cache and its associated settings can be observed via the + /proc/spl/kstat/zfs/dbufstats kstat. +

Default value: 0.

+
+

+

dbuf_metadata_cache_max_bytes (ulong)

+
Maximum size in bytes of the metadata dbuf cache. When + 0 this value will default to 1/2^dbuf_cache_shift (1/16) of the + target ARC size, otherwise the provided value in bytes will be used. The + behavior of the metadata dbuf cache and its associated settings can be + observed via the /proc/spl/kstat/zfs/dbufstats kstat. +

Default value: 0.

+
+

+

dbuf_cache_hiwater_pct (uint)

+
The percentage over dbuf_cache_max_bytes when + dbufs must be evicted directly. +

Default value: 10%.

+
+

+

dbuf_cache_lowater_pct (uint)

+
The percentage below dbuf_cache_max_bytes when the + evict thread stops evicting dbufs. +

Default value: 10%.

+
+

+

dbuf_cache_shift (int)

+
Set the size of the dbuf cache, + dbuf_cache_max_bytes, to a log2 fraction of the target arc size. +

Default value: 5.

+
+

+

dbuf_metadata_cache_shift (int)

+
Set the size of the dbuf metadata cache, + dbuf_metadata_cache_max_bytes, to a log2 fraction of the target arc + size. +

Default value: 6.

+
+

+

dmu_prefetch_max (int)

+
Limit the amount we can prefetch with one call to this + amount (in bytes). This helps to limit the amount of memory that can be used + by prefetching. +

Default value: 134,217,728 (128MB).

+
+

+

ignore_hole_birth (int)

+
This is an alias for + send_holes_without_birth_time.
+

+

l2arc_feed_again (int)

+
Turbo L2ARC warm-up. When the L2ARC is cold the fill + interval will be set as fast as possible. +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_feed_min_ms (ulong)

+
Min feed interval in milliseconds. Requires + l2arc_feed_again=1 and only applicable in related situations. +

Default value: 200.

+
+

+

l2arc_feed_secs (ulong)

+
Seconds between L2ARC writing +

Default value: 1.

+
+

+

l2arc_headroom (ulong)

+
How far through the ARC lists to search for L2ARC + cacheable content, expressed as a multiplier of l2arc_write_max +

Default value: 2.

+
+

+

l2arc_headroom_boost (ulong)

+
Scales l2arc_headroom by this percentage when + L2ARC contents are being successfully compressed before writing. A value of + 100 disables this feature. +

Default value: 200%.

+
+

+

l2arc_noprefetch (int)

+
Do not write buffers to L2ARC if they were prefetched but + not used by applications +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_norw (int)

+
No reads during writes +

Use 1 for yes and 0 for no (default).

+
+

+

l2arc_write_boost (ulong)

+
Cold L2ARC devices will have l2arc_write_max + increased by this amount while they remain cold. +

Default value: 8,388,608.

+
+

+

l2arc_write_max (ulong)

+
Max write bytes per interval +

Default value: 8,388,608.

+
+

+

metaslab_aliquot (ulong)

+
Metaslab granularity, in bytes. This is roughly similar + to what would be referred to as the "stripe size" in traditional + RAID arrays. In normal operation, ZFS will try to write this amount of data to + a top-level vdev before moving on to the next one. +

Default value: 524,288.

+
+

+

metaslab_bias_enabled (int)

+
Enable metaslab group biasing based on its vdev's over- + or under-utilization relative to the pool. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_force_ganging (ulong)

+
Make some blocks above a certain size be gang blocks. + This option is used by the test suite to facilitate testing. +

Default value: 16,777,217.

+
+

+

zfs_metaslab_segment_weight_enabled (int)

+
Enable/disable segment-based metaslab selection. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_metaslab_switch_threshold (int)

+
When using segment-based metaslab selection, continue + allocating from the active metaslab until zfs_metaslab_switch_threshold + worth of buckets have been exhausted. +

Default value: 2.

+
+

+

metaslab_debug_load (int)

+
Load all metaslabs during pool import. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_debug_unload (int)

+
Prevent metaslabs from being unloaded. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_fragmentation_factor_enabled (int)

+
Enable use of the fragmentation metric in computing + metaslab weights. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_df_max_search (int)

+
Maximum distance to search forward from the last offset. + Without this limit, fragmented pools can see >100,000 iterations and + metaslab_block_picker() becomes the performance limiting factor on + high-performance storage. +

With the default setting of 16MB, we typically see less than 500 + iterations, even with very fragmented, ashift=9 pools. The maximum number of + iterations possible is: metaslab_df_max_search / (2 * + (1<<ashift)). With the default setting of 16MB this is 16*1024 + (with ashift=9) or 2048 (with ashift=12).

+

Default value: 16,777,216 (16MB)

+
+

+

metaslab_df_use_largest_segment (int)

+
If we are not searching forward (due to + metaslab_df_max_search, metaslab_df_free_pct, or metaslab_df_alloc_threshold), + this tunable controls what segment is used. If it is set, we will use the + largest free segment. If it is not set, we will use a segment of exactly the + requested size (or larger). +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_vdev_default_ms_count (int)

+
When a vdev is added target this number of metaslabs per + top-level vdev. +

Default value: 200.

+
+

+

zfs_vdev_min_ms_count (int)

+
Minimum number of metaslabs to create in a top-level + vdev. +

Default value: 16.

+
+

+

vdev_ms_count_limit (int)

+
Practical upper limit of total metaslabs per top-level + vdev. +

Default value: 131,072.

+
+

+

metaslab_preload_enabled (int)

+
Enable metaslab group preloading. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_lba_weighting_enabled (int)

+
Give more weight to metaslabs with lower LBAs, assuming + they have greater bandwidth as is typically the case on a modern constant + angular velocity disk drive. +

Use 1 for yes (default) and 0 for no.

+
+

+

send_holes_without_birth_time (int)

+
When set, the hole_birth optimization will not be used, + and all holes will always be sent on zfs send. This is useful if you suspect + your datasets are affected by a bug in hole_birth. +

Use 1 for on (default) and 0 for off.

+
+

+

spa_config_path (charp)

+
SPA config file +

Default value: /etc/zfs/zpool.cache.

+
+

+

spa_asize_inflation (int)

+
Multiplication factor used to estimate actual disk + consumption from the size of data being written. The default value is a worst + case estimate, but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits. +

Default value: 24.

+
+

+

spa_load_print_vdev_tree (int)

+
Whether to print the vdev tree in the debugging message + buffer during pool import. Use 0 to disable and 1 to enable. +

Default value: 0.

+
+

+

spa_load_verify_data (int)

+
Whether to traverse data blocks during an "extreme + rewind" (-X) import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal skips non-metadata blocks. It can be toggled once the import has + started to stop or start the traversal of non-metadata blocks.

+

Default value: 1.

+
+

+

spa_load_verify_metadata (int)

+
Whether to traverse blocks during an "extreme + rewind" (-X) pool import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal is not performed. It can be toggled once the import has started to + stop or start the traversal.

+

Default value: 1.

+
+

+

spa_load_verify_shift (int)

+
Sets the maximum number of bytes to consume during pool + import to the log2 fraction of the target arc size. +

Default value: 4.

+
+

+

spa_slop_shift (int)

+
Normally, we don't allow the last 3.2% + (1/(2^spa_slop_shift)) of space in the pool to be consumed. This ensures that + we don't run the pool completely out of space, due to unaccounted changes + (e.g. to the MOS). It also limits the worst-case time to allocate space. If we + have less than this amount of free space, most ZPL operations (e.g. write, + create) will return ENOSPC. +

Default value: 5.

+
+

+

vdev_removal_max_span (int)

+
During top-level vdev removal, chunks of data are copied + from the vdev which may include free space in order to trade bandwidth for + IOPS. This parameter determines the maximum span of free space (in bytes) + which will be included as "unnecessary" data in a chunk of copied + data. +

The default value here was chosen to align with + zfs_vdev_read_gap_limit, which is a similar concept when doing + regular reads (but there's no reason it has to be the same).

+

Default value: 32,768.

+
+

+

zap_iterate_prefetch (int)

+
If this is set, when we start iterating over a ZAP + object, zfs will prefetch the entire object (all leaf blocks). However, this + is limited by dmu_prefetch_max. +

Use 1 for on (default) and 0 for off.

+
+

+

zfetch_array_rd_sz (ulong)

+
If prefetching is enabled, disable prefetching for reads + larger than this size. +

Default value: 1,048,576.

+
+

+

zfetch_max_distance (uint)

+
Max bytes to prefetch per stream (default 8MB). +

Default value: 8,388,608.

+
+

+

zfetch_max_streams (uint)

+
Max number of streams per zfetch (prefetch streams per + file). +

Default value: 8.

+
+

+

zfetch_min_sec_reap (uint)

+
Min time before an active prefetch stream can be + reclaimed +

Default value: 2.

+
+

+

zfs_abd_scatter_min_size (uint)

+
This is the minimum allocation size that will use scatter + (page-based) ABD's. Smaller allocations will use linear ABD's. +

Default value: 1536 (512B and 1KB allocations will be + linear).

+
+

+

zfs_arc_dnode_limit (ulong)

+
When the number of bytes consumed by dnodes in the ARC exceeds this number of bytes, try to unpin some of it in response to demand for non-metadata. This value acts as a ceiling on the amount of dnode metadata, and defaults to 0, which indicates that a percentage of the ARC meta buffers, based on zfs_arc_dnode_limit_percent, may be used for dnodes.

See also zfs_arc_meta_prune which serves a similar purpose + but is used when the amount of metadata in the ARC exceeds + zfs_arc_meta_limit rather than in response to overall demand for + non-metadata.

+

+

Default value: 0.

+
+

+

zfs_arc_dnode_limit_percent (ulong)

+
Percentage that can be consumed by dnodes of ARC meta + buffers. +

See also zfs_arc_dnode_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

Default value: 10%.

+
+

+

zfs_arc_dnode_reduce_percent (ulong)

+
Percentage of ARC dnodes to try to scan in response to + demand for non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit. +

+

Default value: 10% of the number of dnodes in the ARC.

+
+

+

zfs_arc_average_blocksize (int)

+
The ARC's buffer hash table is sized based on the + assumption of an average block size of zfs_arc_average_blocksize + (default 8K). This works out to roughly 1MB of hash table per 1GB of physical + memory with 8-byte pointers. For configurations with a known larger average + block size this value can be increased to reduce the memory footprint. +

+

Default value: 8192.

+
+

+

zfs_arc_evict_batch_limit (int)

+
Number of ARC headers to evict per sub-list before proceeding to another sub-list. This batch-style operation prevents entire sub-lists from being evicted at once but comes at a cost of additional unlocking and locking.

Default value: 10.

+
+

+

zfs_arc_grow_retry (int)

+
If set to a non-zero value, it will replace the arc_grow_retry value with this value. The arc_grow_retry value (default 5) is the number of seconds the ARC will wait before trying to resume growth after a memory pressure event.

Default value: 0.

+
+

+

zfs_arc_lotsfree_percent (int)

+
Throttle I/O when free system memory drops below this + percentage of total system memory. Setting this value to 0 will disable the + throttle. +

Default value: 10%.

+
+

+

zfs_arc_max (ulong)

+
Max arc size of ARC in bytes. If set to 0 then it will + consume 1/2 of system RAM. This value must be at least 67108864 (64 + megabytes). +

This value can be changed dynamically with some caveats. It cannot + be set back to 0 while running and reducing it below the current ARC size + will not cause the ARC to shrink without memory pressure to induce + shrinking.

+

Default value: 0.

+
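For instance (a sketch; the figure is arbitrary), limiting the ARC to 8 GiB on a running system could be done with:

	echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

since 8 GiB = 8 * 1024^3 = 8,589,934,592 bytes, which is above the 64 megabyte minimum noted above.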
+

+

zfs_arc_meta_adjust_restarts (ulong)

+
The number of restart passes to make while scanning the ARC attempting to free buffers in order to stay below the zfs_arc_meta_limit. This value should not need to be tuned but is available to facilitate performance analysis.

Default value: 4096.

+
+

+

zfs_arc_meta_limit (ulong)

+
The maximum allowed size in bytes that meta data buffers are allowed to consume in the ARC. When this limit is reached meta data buffers will be reclaimed even if the overall arc_c_max has not been reached. This value defaults to 0, which indicates that a percentage of the ARC, based on zfs_arc_meta_limit_percent, may be used for meta data.

This value may be changed dynamically, except that it cannot be set back to 0 for a specific percent of the ARC; it must be set to an explicit value.

+

Default value: 0.

+
+

+

zfs_arc_meta_limit_percent (ulong)

+
Percentage of ARC buffers that can be used for meta data. +

See also zfs_arc_meta_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

+

Default value: 75%.

+
+

+

zfs_arc_meta_min (ulong)

+
The minimum allowed size in bytes that meta data buffers may consume in the ARC. This value defaults to 0 which disables a floor on the amount of the ARC devoted to meta data. +

Default value: 0.

+
+

+

zfs_arc_meta_prune (int)

+
The number of dentries and inodes to be scanned looking for entries which can be dropped. This may be required when the ARC reaches the zfs_arc_meta_limit because dentries and inodes can pin buffers in the ARC. Increasing this value will cause the dentry and inode caches to be pruned more aggressively. Setting this value to 0 will disable pruning the inode and dentry caches. +

Default value: 10,000.

+
+

+

zfs_arc_meta_strategy (int)

+
Define the strategy for ARC meta data buffer eviction (meta reclaim strategy). A value of 0 (META_ONLY) will evict only the ARC meta data buffers. A value of 1 (BALANCED) indicates that additional data buffers may be evicted if that is required in order to evict the required number of meta data buffers. +

Default value: 1.

+
+

+

zfs_arc_min (ulong)

+
Minimum size of the ARC in bytes. If set to 0 then arc_c_min will default to consuming the larger of 32M or 1/32 of total system memory. +

Default value: 0.

+
+

+

zfs_arc_min_prefetch_ms (int)

+
Minimum time prefetched blocks are locked in the ARC, + specified in ms. A value of 0 will default to 1000 ms. +

Default value: 0.

+
+

+

zfs_arc_min_prescient_prefetch_ms (int)

+
Minimum time "prescient prefetched" blocks are + locked in the ARC, specified in ms. These blocks are meant to be prefetched + fairly aggressively ahead of the code that may use them. A value of 0 + will default to 6000 ms. +

Default value: 0.

+
+

+

zfs_max_missing_tvds (int)

+
Number of missing top-level vdevs which will be allowed + during pool import (only in read-only mode). +

Default value: 0

+
+

+

zfs_multilist_num_sublists (int)

+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and meta data objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state, and also applies to other uses of the multilist data structure. +

Default value: 4 or the number of online CPUs, whichever is + greater

+
+

+

zfs_arc_overflow_shift (int)

+
The ARC size is considered to be overflowing if it + exceeds the current ARC target size (arc_c) by a threshold determined by this + parameter. The threshold is calculated as a fraction of arc_c using the + formula "arc_c >> zfs_arc_overflow_shift". +

The default value of 8 causes the ARC to be considered to be overflowing if it exceeds the target size by 1/256th (approximately 0.4%) of the target size.

+

When the ARC is overflowing, new buffer allocations are stalled + until the reclaim thread catches up and the overflow condition no longer + exists.

+

Default value: 8.

+
+
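For example, with an ARC target size (arc_c) of 8 GiB and the default shift of 8, the overflow threshold is 8 GiB >> 8 = 32 MiB, so new buffer allocations stall once the ARC exceeds its target size by roughly 32 MiB.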

+

+

zfs_arc_p_min_shift (int)

+
If set to a non-zero value, this will update arc_p_min_shift (default 4) with the new value. arc_p_min_shift is used as a shift of arc_c when calculating both the minimum and maximum arc_p. +

Default value: 0.

+
+

+

zfs_arc_p_dampener_disable (int)

+
Disable arc_p adapt dampener +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_arc_shrink_shift (int)

+
If set to a non zero value, this will update + arc_shrink_shift (default 7) with the new value. +

Default value: 0.

+
+

+

zfs_arc_pc_percent (uint)

+
Percent of pagecache to reclaim arc to +

This tunable allows ZFS arc to play more nicely with the kernel's + LRU pagecache. It can guarantee that the arc size won't collapse under + scanning pressure on the pagecache, yet still allows arc to be reclaimed + down to zfs_arc_min if necessary. This value is specified as percent of + pagecache size (as measured by NR_FILE_PAGES) where that percent may exceed + 100. This only operates during memory pressure/reclaim.

+

Default value: 0% (disabled).

+
+

+

zfs_arc_sys_free (ulong)

+
The target number of bytes the ARC should leave as free + memory on the system. Defaults to the larger of 1/64 of physical memory or + 512K. Setting this option to a non-zero value will override the default. +

Default value: 0.

+
+

+

zfs_autoimport_disable (int)

+
Disable pool import at module load by ignoring the cache + file (typically /etc/zfs/zpool.cache). +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_checksums_per_second (int)

+
Rate limit checksum events to this many per second. Note + that this should not be set below the zed thresholds (currently 10 checksums + over 10 sec) or else zed may not trigger any action. +

Default value: 20

+
+

+

zfs_commit_timeout_pct (int)

+
This controls the amount of time that a ZIL block (lwb) + will remain "open" when it isn't "full", and it has a + thread waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly impacting + the latency of each individual transaction record (itx). +

Default value: 5%.

+
+

+

zfs_condense_indirect_vdevs_enable (int)

+
Enable condensing indirect vdev mappings. When set to a + non-zero value, attempt to condense indirect vdev mappings if the mapping uses + more than zfs_condense_min_mapping_bytes bytes of memory and if the + obsolete space map object uses more than + zfs_condense_max_obsolete_bytes bytes on-disk. The condensing process + is an attempt to save memory by removing obsolete mappings. +

Default value: 1.

+
+

+

zfs_condense_max_obsolete_bytes (ulong)

+
Only attempt to condense indirect vdev mappings if the on-disk size of the obsolete space map object is greater than this number of bytes (see zfs_condense_indirect_vdevs_enable). +

Default value: 1,073,741,824.

+
+

+

zfs_condense_min_mapping_bytes (ulong)

+
Minimum size vdev mapping to attempt to condense (see + zfs_condense_indirect_vdevs_enable). +

Default value: 131,072.

+
+

+

zfs_dbgmsg_enable (int)

+
Internally ZFS keeps a small log to facilitate debugging. + By default the log is disabled, to enable it set this option to 1. The + contents of the log can be accessed by reading the /proc/spl/kstat/zfs/dbgmsg + file. Writing 0 to this proc file clears the log. +

Default value: 0.

+
+
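For example (a minimal sketch, assuming the zfs module is loaded and sysfs/procfs are mounted in the usual locations), the log can be enabled, read, and cleared as follows:

    echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable   # enable the internal debug log
    cat /proc/spl/kstat/zfs/dbgmsg                          # read its contents
    echo 0 > /proc/spl/kstat/zfs/dbgmsg                     # clear the log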

+

zfs_dbgmsg_maxsize (int)

+
The maximum size in bytes of the internal ZFS debug log. +

Default value: 4M.

+
+

+

zfs_dbuf_state_index (int)

+
This feature is currently unused. It would normally be used for controlling what reporting is available under /proc/spl/kstat/zfs. +

Default value: 0.

+
+

+

zfs_deadman_enabled (int)

+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms milliseconds, or when an individual I/O takes + longer than zfs_deadman_ziotime_ms milliseconds, then the operation is + considered to be "hung". If zfs_deadman_enabled is set then + the deadman behavior is invoked as described by the + zfs_deadman_failmode module option. By default the deadman is enabled + and configured to wait which results in "hung" I/Os only + being logged. The deadman is automatically disabled when a pool gets + suspended. +

Default value: 1.

+
+

+

zfs_deadman_failmode (charp)

+
Controls the failure behavior when the deadman detects a + "hung" I/O. Valid values are wait, continue, and + panic. +

wait - Wait for a "hung" I/O to complete. For + each "hung" I/O a "deadman" event will be posted + describing that I/O.

+

continue - Attempt to recover from a "hung" I/O + by re-dispatching it to the I/O pipeline if possible.

+

panic - Panic the system. This can be used to facilitate an + automatic fail-over to a properly configured fail-over partner.

+

Default value: wait.

+
+
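For example, to have the deadman attempt to re-dispatch "hung" I/Os rather than merely logging them, the failure mode can be changed at runtime (shown via sysfs for illustration):

    echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode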

+

zfs_deadman_checktime_ms (int)

+
Check time in milliseconds. This defines the frequency at + which we check for hung I/O and potentially invoke the + zfs_deadman_failmode behavior. +

Default value: 60,000.

+
+

+

zfs_deadman_synctime_ms (ulong)

+
Interval in milliseconds after which the deadman is + triggered and also the interval after which a pool sync operation is + considered to be "hung". Once this limit is exceeded the deadman + will be invoked every zfs_deadman_checktime_ms milliseconds until the + pool sync completes. +

Default value: 600,000.

+
+

+

zfs_deadman_ziotime_ms (ulong)

+
Interval in milliseconds after which the deadman is + triggered and an individual I/O operation is considered to be + "hung". As long as the I/O remains "hung" the deadman will + be invoked every zfs_deadman_checktime_ms milliseconds until the I/O + completes. +

Default value: 300,000.

+
+

+

zfs_dedup_prefetch (int)

+
Enable prefetching of deduplicated blocks +

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_delay_min_dirty_percent (int)

+
Start to delay each transaction once there is this amount + of dirty data, expressed as a percentage of zfs_dirty_data_max. This + value should be >= zfs_vdev_async_write_active_max_dirty_percent. See the + section "ZFS TRANSACTION DELAY". +

Default value: 60%.

+
+

+

zfs_delay_scale (int)

+
This controls how quickly the transaction delay + approaches infinity. Larger values cause longer delays for a given amount of + dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will smoothly + handle between 10x and 1/10th this number.

+

See the section "ZFS TRANSACTION DELAY".

+

Note: zfs_delay_scale * zfs_dirty_data_max must be + < 2^64.

+

Default value: 500,000.

+
+
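As a worked example, for backend storage capable of roughly 2,000 operations per second, 1,000,000,000 / 2,000 = 500,000, which is the default value; such a setting smoothly handles loads between roughly 200 and 20,000 operations per second.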

+

zfs_slow_io_events_per_second (int)

+
Rate limit delay zevents (which report slow I/Os) to this + many per second. +

Default value: 20

+
+

+

zfs_unlink_suspend_progress (uint)

+
When enabled, files will not be asynchronously removed + from the list of pending unlinks and the space they consume will be leaked. + Once this option has been disabled and the dataset is remounted, the pending + unlinks will be processed and the freed space returned to the pool. This + option is used by the test suite to facilitate testing. +

Use 0 (default) to allow progress and 1 to pause progress.

+
+

+

zfs_delete_blocks (ulong)

+
This is used to define a large file for the purposes of deletion. Files containing more than zfs_delete_blocks blocks will be deleted asynchronously, while smaller files are deleted synchronously. Decreasing this value will reduce the time spent in an unlink(2) system call at the expense of a longer delay before the freed space is available. +

Default value: 20,480.

+
+

+

zfs_dirty_data_max (int)

+
Determines the dirty space limit in bytes. Once this + limit is exceeded, new writes are halted until space frees up. This parameter + takes precedence over zfs_dirty_data_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 10% of physical RAM, capped at + zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_max_max (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed in bytes. This limit is only enforced at module load time, and will + be ignored if zfs_dirty_data_max is later changed. This parameter takes + precedence over zfs_dirty_data_max_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 25% of physical RAM.

+
+

+

zfs_dirty_data_max_max_percent (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed as a percentage of physical RAM. This limit is only enforced at + module load time, and will be ignored if zfs_dirty_data_max is later + changed. The parameter zfs_dirty_data_max_max takes precedence over + this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 25%.

+
+

+

zfs_dirty_data_max_percent (int)

+
Determines the dirty space limit, expressed as a + percentage of all memory. Once this limit is exceeded, new writes are halted + until space frees up. The parameter zfs_dirty_data_max takes precedence + over this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 10%, subject to + zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_sync_percent (int)

+
Start syncing out a transaction group if there's at least + this much dirty data as a percentage of zfs_dirty_data_max. This should + be less than zfs_vdev_async_write_active_min_dirty_percent. +

Default value: 20% of zfs_dirty_data_max.

+
+
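As a worked example with the defaults on a system with 32 GiB of RAM: zfs_dirty_data_max is 10% of RAM, about 3.2 GiB, which is below the zfs_dirty_data_max_max cap of 25% (8 GiB); a txg sync is then started once roughly 20% of that, about 655 MiB, of dirty data has accumulated.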

+

zfs_fletcher_4_impl (string)

+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, scalar, sse2, ssse3, avx2, avx512f, and aarch64_neon. All of the selectors except fastest and scalar require instruction set extensions to be available and will only appear if ZFS detects that they are present at runtime. If multiple implementations of fletcher 4 are available, the fastest will be chosen using a micro benchmark. Selecting scalar results in the original, CPU-based calculation being used. Selecting any option other than fastest and scalar results in vector instructions from the respective CPU instruction set being used.

+

Default value: fastest.

+
+
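For example (runtime selection via sysfs; the chosen selector must be one of those ZFS reports as available on the running system, and the parameter file typically lists them with the selected one marked):

    cat /sys/module/zfs/parameters/zfs_fletcher_4_impl     # inspect available selectors
    echo sse2 > /sys/module/zfs/parameters/zfs_fletcher_4_impl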

+

zfs_free_bpobj_enabled (int)

+
Enable/disable the processing of the free_bpobj object. +

Default value: 1.

+
+

+

zfs_async_block_max_blocks (ulong)

+
Maximum number of blocks freed in a single txg. +

Default value: 100,000.

+
+

+

zfs_override_estimate_recordsize (ulong)

+
Record size calculation override for zfs send estimates. +

Default value: 0.

+
+

+

zfs_vdev_async_read_max_active (int)

+
Maximum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 3.

+
+

+

zfs_vdev_async_read_min_active (int)

+
Minimum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_async_write_active_max_dirty_percent (int)

+
When the pool has more than + zfs_vdev_async_write_active_max_dirty_percent dirty data, use + zfs_vdev_async_write_max_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 60%.

+
+

+

zfs_vdev_async_write_active_min_dirty_percent (int)

+
When the pool has less than + zfs_vdev_async_write_active_min_dirty_percent dirty data, use + zfs_vdev_async_write_min_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 30%.

+
+

+

zfs_vdev_async_write_max_active (int)

+
Maximum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_async_write_min_active (int)

+
Minimum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of 2 was chosen as + a compromise. A value of 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+

Default value: 2.

+
+

+

zfs_vdev_initializing_max_active (int)

+
Maximum initializing I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_initializing_min_active (int)

+
Minimum initializing I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_max_active (int)

+
The maximum number of I/Os active to each device. + Ideally, this will be >= the sum of each queue's max_active. It must be at + least the sum of each queue's min_active. See the section "ZFS I/O + SCHEDULER". +

Default value: 1,000.

+
+

+

zfs_vdev_removal_max_active (int)

+
Maximum removal I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_removal_min_active (int)

+
Minimum removal I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_scrub_max_active (int)

+
Maximum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_scrub_min_active (int)

+
Minimum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_sync_read_max_active (int)

+
Maximum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_read_min_active (int)

+
Minimum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_max_active (int)

+
Maximum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_min_active (int)

+
Minimum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_trim_max_active (int)

+
Maximum trim/discard I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_trim_min_active (int)

+
Minimum trim/discard I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_queue_depth_pct (int)

+
Maximum number of queued allocations per top-level vdev + expressed as a percentage of zfs_vdev_async_write_max_active which + allows the system to detect devices that are more capable of handling + allocations and to allocate more blocks to those devices. It allows for + dynamic allocation distribution when devices are imbalanced as fuller devices + will tend to be slower than empty devices. +

See also zio_dva_throttle_enabled.

+

Default value: 1000%.

+
+

+

zfs_expire_snapshot (int)

+
Seconds to expire .zfs/snapshot +

Default value: 300.

+
+

+

zfs_admin_snapshot (int)

+
Allow the creation, removal, or renaming of entries in + the .zfs/snapshot directory to cause the creation, destruction, or renaming of + snapshots. When enabled this functionality works both locally and over NFS + exports which have the 'no_root_squash' option set. This functionality is + disabled by default. +

Use 1 for yes and 0 for no (default).

+
+
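When enabled, ordinary directory operations in .zfs/snapshot map onto snapshot operations. A brief illustration (the pool/dataset and snapshot names are placeholders):

    echo 1 > /sys/module/zfs/parameters/zfs_admin_snapshot
    mkdir /tank/fs/.zfs/snapshot/before-upgrade    # creates snapshot tank/fs@before-upgrade
    rmdir /tank/fs/.zfs/snapshot/before-upgrade    # destroys that snapshot again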

+

zfs_flags (int)

+
Set additional debugging flags. The following flags may + be bitwise-or'd together. +

+
Value   Symbolic Name                  Description
1       ZFS_DEBUG_DPRINTF              Enable dprintf entries in the debug log.
2       ZFS_DEBUG_DBUF_VERIFY *        Enable extra dbuf verifications.
4       ZFS_DEBUG_DNODE_VERIFY *       Enable extra dnode verifications.
8       ZFS_DEBUG_SNAPNAMES            Enable snapshot name verification.
16      ZFS_DEBUG_MODIFY               Check for illegally modified ARC buffers.
64      ZFS_DEBUG_ZIO_FREE             Enable verification of block frees.
128     ZFS_DEBUG_HISTOGRAM_VERIFY     Enable extra spacemap histogram verifications.
256     ZFS_DEBUG_METASLAB_VERIFY      Verify space accounting on disk matches in-core range_trees.
512     ZFS_DEBUG_SET_ERROR            Enable SET_ERROR and dprintf entries in the debug log.
1024    ZFS_DEBUG_INDIRECT_REMAP       Verify split blocks created by device removal.
2048    ZFS_DEBUG_TRIM                 Verify TRIM ranges are always within the allocatable range tree.
+

* Requires debug build.

+

Default value: 0.

+
+
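For example, to enable both ZFS_DEBUG_DPRINTF (1) and ZFS_DEBUG_SET_ERROR (512), the two values are OR'd together (1 | 512 = 513) and written to the parameter (runtime example via sysfs):

    echo 513 > /sys/module/zfs/parameters/zfs_flags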

+

zfs_free_leak_on_eio (int)

+
If destroy encounters an EIO while reading metadata (e.g. + indirect blocks), space referenced by the missing metadata can not be freed. + Normally this causes the background destroy to become "stalled", as + it is unable to make forward progress. While in this stalled state, all + remaining space to free from the error-encountering filesystem is + "temporarily leaked". Set this flag to cause it to ignore the EIO, + permanently leak the space from indirect blocks that can not be read, and + continue to free everything else that it can. +

The default, "stalling" behavior is useful if the + storage partially fails (i.e. some but not all i/os fail), and then later + recovers. In this case, we will be able to continue pool operations while it + is partially failed, and when it recovers, we can continue to free the + space, with no leaks. However, note that this case is actually fairly + rare.

+

Typically pools either (a) fail completely (but perhaps + temporarily, e.g. a top-level vdev going offline), or (b) have localized, + permanent errors (e.g. disk returns the wrong data due to bit flip or + firmware bug). In case (a), this setting does not matter because the pool + will be suspended and the sync thread will not be able to make forward + progress regardless. In case (b), because the error is permanent, the best + we can do is leak the minimum amount of space, which is what setting this + flag will do. Therefore, it is reasonable for this flag to normally be set, + but we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.

+

Default value: 0.

+
+

+

zfs_free_min_time_ms (int)

+
During a zfs destroy operation using + feature@async_destroy a minimum of this much time will be spent working + on freeing blocks per txg. +

Default value: 1,000.

+
+

+

zfs_immediate_write_sz (long)

+
Largest data block to write to zil. Larger blocks will be + treated as if the dataset being written to had the property setting + logbias=throughput. +

Default value: 32,768.

+
+

+

zfs_initialize_value (ulong)

+
Pattern written to vdev free space by zpool + initialize. +

Default value: 16,045,690,984,833,335,022 + (0xdeadbeefdeadbeee).

+
+

+

zfs_lua_max_instrlimit (ulong)

+
The maximum execution time limit that can be set for a + ZFS channel program, specified as a number of Lua instructions. +

Default value: 100,000,000.

+
+

+

zfs_lua_max_memlimit (ulong)

+
The maximum memory limit that can be set for a ZFS + channel program, specified in bytes. +

Default value: 104,857,600.

+
+

+

zfs_max_dataset_nesting (int)

+
The maximum depth of nested datasets. This value can be + tuned temporarily to fix existing datasets that exceed the predefined limit. +

Default value: 50.

+
+

+

zfs_max_recordsize (int)

+
We currently support block sizes from 512 bytes to 16MB. + The benefits of larger blocks, and thus larger I/O, need to be weighed against + the cost of COWing a giant block to modify one byte. Additionally, very large + blocks can have an impact on i/o latency, and also potentially on the memory + allocator. Therefore, we do not allow the recordsize to be set larger than + zfs_max_recordsize (default 1MB). Larger blocks can be created by changing + this tunable, and pools with larger blocks can always be imported and used, + regardless of this setting. +

Default value: 1,048,576.

+
+
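For illustration (the dataset name is a placeholder, and records larger than 128KB also require the pool's large_blocks feature to be enabled), raising the cap allows a larger recordsize to be set:

    echo 4194304 > /sys/module/zfs/parameters/zfs_max_recordsize
    zfs set recordsize=4M tank/dataset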

+

zfs_metaslab_fragmentation_threshold (int)

+
Allow metaslabs to keep their active state as long as + their fragmentation percentage is less than or equal to this value. An active + metaslab that exceeds this threshold will no longer keep its active status + allowing better metaslabs to be selected. +

Default value: 70.

+
+

+

zfs_mg_fragmentation_threshold (int)

+
Metaslab groups are considered eligible for allocations + if their fragmentation metric (measured as a percentage) is less than or equal + to this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also crossed + this threshold. +

Default value: 95.

+
+

+

zfs_mg_noalloc_threshold (int)

+
Defines a threshold at which metaslab groups should be + eligible for allocations. The value is expressed as a percentage of free space + beyond which a metaslab group is always eligible for allocations. If a + metaslab group's free space is less than or equal to the threshold, the + allocator will avoid allocating to that group unless all groups in the pool + have reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of 0 disables the + feature and causes all metaslab groups to be eligible for allocations. +

This parameter allows one to deal with pools having heavily + imbalanced vdevs such as would be the case when a new vdev has been added. + Setting the threshold to a non-zero percentage will stop allocations from + being made to vdevs that aren't filled to the specified percentage and allow + lesser filled vdevs to acquire more allocations than they otherwise would + under the old zfs_mg_alloc_failures facility.

+

Default value: 0.

+
+

+

zfs_ddt_data_is_special (int)

+
If enabled, ZFS will place DDT data into the special + allocation class. +

Default value: 1.

+
+

+

zfs_user_indirect_is_special (int)

+
If enabled, ZFS will place user data (both file and zvol) + indirect blocks into the special allocation class. +

Default value: 1.

+
+

+

zfs_multihost_history (int)

+
Historical statistics for the last N multihost updates + will be available in /proc/spl/kstat/zfs/<pool>/multihost +

Default value: 0.

+
+

+

zfs_multihost_interval (ulong)

+
Used to control the frequency of multihost writes which + are performed when the multihost pool property is on. This is one + factor used to determine the length of the activity check during import. +

The multihost write period is zfs_multihost_interval / + leaf-vdevs milliseconds. On average a multihost write will be issued for + each leaf vdev every zfs_multihost_interval milliseconds. In + practice, the observed period can vary with the I/O load and this observed + value is the delay which is stored in the uberblock.

+

Default value: 1000.

+
+

+

zfs_multihost_import_intervals (uint)

+
Used to control the duration of the activity test on + import. Smaller values of zfs_multihost_import_intervals will reduce + the import time but increase the risk of failing to detect an active pool. The + total activity check time is never allowed to drop below one second. +

On import the activity check waits a minimum amount of time + determined by zfs_multihost_interval * + zfs_multihost_import_intervals, or the same product computed on the host + which last had the pool imported (whichever is greater). The activity check + time may be further extended if the value of mmp delay found in the best + uberblock indicates actual multihost updates happened at longer intervals + than zfs_multihost_interval. A minimum value of 100ms is + enforced.

+

A value of 0 is ignored and treated as if it was set to 1.

+

Default value: 20.

+
+

+

zfs_multihost_fail_intervals (uint)

+
Controls the behavior of the pool when multihost write + failures or delays are detected. +

When zfs_multihost_fail_intervals = 0, multihost write + failures or delays are ignored. The failures will still be reported to the + ZED which depending on its configuration may take action such as suspending + the pool or offlining a device.

+

+

When zfs_multihost_fail_intervals > 0, the pool will be + suspended if zfs_multihost_fail_intervals * zfs_multihost_interval + milliseconds pass without a successful mmp write. This guarantees the + activity test will see mmp writes if the pool is imported. A value of 1 is + ignored and treated as if it was set to 2. This is necessary to prevent the + pool from being suspended due to normal, small I/O latency variations.

+

+

Default value: 10.

+
+
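As a worked example with the defaults: with zfs_multihost_interval=1000 and zfs_multihost_import_intervals=20, the activity check on import waits at least 1000 ms * 20 = 20 seconds; with zfs_multihost_fail_intervals=10, the pool is suspended after 10 * 1000 ms = 10 seconds pass without a successful mmp write.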

+

zfs_no_scrub_io (int)

+
Set for no scrub I/O. This results in scrubs not actually + scrubbing data and simply doing a metadata crawl of the pool instead. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_no_scrub_prefetch (int)

+
Set to disable block prefetching for scrubs. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nocacheflush (int)

+
Disable cache flush operations on disks when writing. + Setting this will cause pool corruption on power loss if a volatile + out-of-order write cache is enabled. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nopwrite_enabled (int)

+
Enable NOP writes +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_dmu_offset_next_sync (int)

+
Enable forcing txg sync to find holes. When enabled, this forces ZFS to act like prior versions when SEEK_HOLE or SEEK_DATA flags are used: if a dnode is dirty, txgs are synced so that the hole/data information can be found. +

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_pd_bytes_max (int)

+
The number of bytes which should be prefetched during a pool traversal (e.g. zfs send or other data crawling operations). +

Default value: 52,428,800.

+
+

+

zfs_per_txg_dirty_frees_percent (ulong)

+
Tunable to control percentage of dirtied indirect blocks + from frees allowed into one TXG. After this threshold is crossed, additional + frees will wait until the next TXG. A value of zero will disable this + throttle. +

Default value: 5, set to 0 to disable.

+
+

+

zfs_prefetch_disable (int)

+
This tunable disables predictive prefetch. Note that it + leaves "prescient" prefetch (e.g. prefetch for zfs send) intact. + Unlike predictive prefetch, prescient prefetch never issues i/os that end up + not being needed, so it can't hurt performance. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_qat_checksum_disable (int)

+
This tunable disables qat hardware acceleration for + sha256 checksums. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_qat_compress_disable (int)

+
This tunable disables qat hardware acceleration for gzip + compression. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_qat_encrypt_disable (int)

+
This tunable disables qat hardware acceleration for + AES-GCM encryption. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_read_chunk_size (long)

+
Bytes to read per chunk +

Default value: 1,048,576.

+
+

+

zfs_read_history (int)

+
Historical statistics for the last N reads will be + available in /proc/spl/kstat/zfs/<pool>/reads +

Default value: 0 (no data is kept).

+
+

+

zfs_read_history_hits (int)

+
Include cache hits in read history +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_reconstruct_indirect_combinations_max (int)

+
If an indirect split block contains more than this many + possible unique combinations when being reconstructed, consider it too + computationally expensive to check them all. Instead, try at most + zfs_reconstruct_indirect_combinations_max randomly-selected + combinations each time the block is accessed. This allows all segment copies + to participate fairly in the reconstruction when all combinations cannot be + checked and prevents repeated use of one bad copy. +

Default value: 4096.

+
+

+

zfs_recover (int)

+
Set to attempt to recover from fatal errors. This should + only be used as a last resort, as it typically results in leaked space, or + worse. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_removal_ignore_errors (int)

+
+

Ignore hard IO errors during device removal. When set, if a device + encounters a hard IO error during the removal process the removal will not + be cancelled. This can result in a normally recoverable block becoming + permanently damaged and is not recommended. This should only be used as a + last resort when the pool cannot be returned to a healthy state prior to + removing the device.

+

Default value: 0.

+
+

+

zfs_removal_suspend_progress (int)

+
+

This is used by the test suite so that it can ensure that certain + actions happen while in the middle of a removal.

+

Default value: 0.

+
+

+

zfs_remove_max_segment (int)

+
+

The largest contiguous segment that we will attempt to allocate + when removing a device. This can be no larger than 16MB. If there is a + performance problem with attempting to allocate large blocks, consider + decreasing this.

+

Default value: 16,777,216 (16MB).

+
+

+

zfs_resilver_min_time_ms (int)

+
Resilvers are processed by the sync thread. While + resilvering it will spend at least this much time working on a resilver + between txg flushes. +

Default value: 3,000.

+
+

+

zfs_scan_ignore_errors (int)

+
If set to a nonzero value, remove the DTL (dirty time + list) upon completion of a pool scan (scrub) even if there were unrepairable + errors. It is intended to be used during pool repair or recovery to stop + resilvering when the pool is next imported. +

Default value: 0.

+
+

+

zfs_scrub_min_time_ms (int)

+
Scrubs are processed by the sync thread. While scrubbing + it will spend at least this much time working on a scrub between txg flushes. +

Default value: 1,000.

+
+

+

zfs_scan_checkpoint_intval (int)

+
To preserve progress across reboots the sequential scan + algorithm periodically needs to stop metadata scanning and issue all the + verifications I/Os to disk. The frequency of this flushing is determined by + the zfs_scan_checkpoint_intval tunable. +

Default value: 7200 seconds (every 2 hours).

+
+

+

zfs_scan_fill_weight (int)

+
This tunable affects how scrub and resilver I/O segments are ordered. A higher number indicates that we care more about how filled in a segment is, while a lower number indicates we care more about the size of the extent without considering the gaps within a segment. This value is only tunable upon module insertion. Changing the value afterwards will have no effect on scrub or resilver performance. +

Default value: 3.

+
+

+

zfs_scan_issue_strategy (int)

+
Determines the order that data will be verified while + scrubbing or resilvering. If set to 1, data will be verified as + sequentially as possible, given the amount of memory reserved for scrubbing + (see zfs_scan_mem_lim_fact). This may improve scrub performance if the + pool's data is very fragmented. If set to 2, the largest + mostly-contiguous chunk of found data will be verified first. By deferring + scrubbing of small segments, we may later find adjacent data to coalesce and + increase the segment size. If set to 0, zfs will use strategy 1 + during normal verification and strategy 2 while taking a checkpoint. +

Default value: 0.

+
+

+

zfs_scan_legacy (int)

+
A value of 0 indicates that scrubs and resilvers will + gather metadata in memory before issuing sequential I/O. A value of 1 + indicates that the legacy algorithm will be used where I/O is initiated as + soon as it is discovered. Changing this value to 0 will not affect scrubs or + resilvers that are already in progress. +

Default value: 0.

+
+

+

zfs_scan_max_ext_gap (int)

+
Indicates the largest gap in bytes between scrub / + resilver I/Os that will still be considered sequential for sorting purposes. + Changing this value will not affect scrubs or resilvers that are already in + progress. +

Default value: 2097152 (2 MB).

+
+

+

zfs_scan_mem_lim_fact (int)

+
Maximum fraction of RAM used for I/O sorting by + sequential scan algorithm. This tunable determines the hard limit for I/O + sorting memory usage. When the hard limit is reached we stop scanning metadata + and start issuing data verification I/O. This is done until we get below the + soft limit. +

Default value: 20 which is 5% of RAM (1/20).

+
+

+

zfs_scan_mem_lim_soft_fact (int)

+
The fraction of the hard limit used to determine the soft limit for I/O sorting by the sequential scan algorithm. When we cross this limit from below no action is taken. When we cross this limit from above it is because we are issuing verification I/O. In this case (unless the metadata scan is done) we stop issuing verification I/O and start scanning metadata again until we get to the hard limit. +

Default value: 20 which is 5% of the hard limit (1/20).

+
+

+

zfs_scan_vdev_limit (int)

+
Maximum amount of data that can be concurrently issued at + once for scrubs and resilvers per leaf device, given in bytes. +

Default value: 41943040.

+
+

+

zfs_send_corrupt_data (int)

+
Allow sending of corrupt data (ignore read/checksum + errors when sending data) +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_send_unmodified_spill_blocks (int)

+
Include unmodified spill blocks in the send stream. Under + certain circumstances previous versions of ZFS could incorrectly remove the + spill block from an existing object. Including unmodified copies of the spill + blocks creates a backwards compatible stream which will recreate a spill block + if it was incorrectly removed. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_send_queue_length (int)

+
The maximum number of bytes allowed in the zfs + send queue. This value must be at least twice the maximum block size in + use. +

Default value: 16,777,216.

+
+

+

zfs_recv_queue_length (int)

+
The maximum number of bytes allowed in the zfs + receive queue. This value must be at least twice the maximum block size in + use. +

Default value: 16,777,216.

+
+

+

zfs_sync_pass_deferred_free (int)

+
Flushing of data to disk is done in passes. Defer frees + starting in this pass +

Default value: 2.

+
+

+

zfs_spa_discard_memory_limit (int)

+
Maximum memory used for prefetching a checkpoint's space + map on each vdev while discarding the checkpoint. +

Default value: 16,777,216.

+
+

+

zfs_special_class_metadata_reserve_pct (int)

+
Only allow small data blocks to be allocated on the + special and dedup vdev types when the available free space percentage on these + vdevs exceeds this value. This ensures reserved space is available for pool + meta data as the special vdevs approach capacity. +

Default value: 25.

+
+

+

zfs_sync_pass_dont_compress (int)

+
Starting in this sync pass, we disable compression + (including of metadata). With the default setting, in practice, we don't have + this many sync passes, so this has no effect. +

The original intent was that disabling compression would help the sync passes to converge. However, in practice disabling compression increases the average number of sync passes, because when we turn compression off, the sizes of many blocks will change and thus we have to re-allocate (not overwrite) them. It also increases the number of 128KB allocations (e.g. for indirect blocks and spacemaps) because these will not be compressed. The 128K allocations are especially detrimental to performance on highly fragmented systems, which may have very few free segments of this size, and may need to load new metaslabs to satisfy 128K allocations.

+

Default value: 8.

+
+

+

zfs_sync_pass_rewrite (int)

+
Rewrite new block pointers starting in this pass +

Default value: 2.

+
+

+

zfs_sync_taskq_batch_pct (int)

+
This controls the number of threads used by the + dp_sync_taskq. The default value of 75% will create a maximum of one thread + per cpu. +

Default value: 75%.

+
+

+

zfs_trim_extent_bytes_max (unsigned int)

+
Maximum size of TRIM commands. Ranges larger than this will be split into chunks no larger than zfs_trim_extent_bytes_max bytes before being issued to the device. +

Default value: 134,217,728.

+
+

+

zfs_trim_extent_bytes_min (unsigned int)

+
Minimum size of TRIM commands. TRIM ranges smaller than this will be skipped unless they're part of a larger range which was broken into chunks. This is done because it's common for these small TRIMs to negatively impact overall performance. This value can be set to 0 to TRIM all unallocated space. +

Default value: 32,768.

+
+

+

zfs_trim_metaslab_skip (unsigned int)

+
Skip uninitialized metaslabs during the TRIM process. This option is useful for pools constructed from large thinly-provisioned devices where TRIM operations are slow. As a pool ages, an increasing fraction of the pool's metaslabs will be initialized, progressively degrading the usefulness of this option. This setting is stored when starting a manual TRIM and will persist for the duration of the requested TRIM. +

Default value: 0.

+
+

+

zfs_trim_queue_limit (unsigned int)

+
Maximum number of queued TRIMs outstanding per leaf vdev. + The number of concurrent TRIM commands issued to the device is controlled by + the zfs_vdev_trim_min_active and zfs_vdev_trim_max_active module + options. +

Default value: 10.

+
+

+

zfs_trim_txg_batch (unsigned int)

+
The number of transaction groups worth of frees which + should be aggregated before TRIM operations are issued to the device. This + setting represents a trade-off between issuing larger, more efficient TRIM + operations and the delay before the recently trimmed space is available for + use by the device. +

Increasing this value will allow frees to be aggregated for a longer time. This will result in larger TRIM operations and potentially increased memory usage. Decreasing this value will have the opposite effect. The default value of 32 was determined to be a reasonable compromise.

+

Default value: 32.

+
+

+

zfs_txg_history (int)

+
Historical statistics for the last N txgs will be + available in /proc/spl/kstat/zfs/<pool>/txgs +

Default value: 0.

+
+

+

zfs_txg_timeout (int)

+
Flush dirty data to disk at least every N seconds + (maximum txg duration) +

Default value: 5.

+
+

+

zfs_vdev_aggregate_trim (int)

+
Allow TRIM I/Os to be aggregated. This is normally not helpful because the extents to be trimmed will already have been aggregated by the metaslab. This option is provided for debugging and performance analysis. +

Default value: 0.

+
+

+

zfs_vdev_aggregation_limit (int)

+
Max vdev I/O aggregation size +

Default value: 1,048,576.

+
+

+

zfs_vdev_aggregation_limit_non_rotating (int)

+
Max vdev I/O aggregation size for non-rotating media +

Default value: 131,072.

+
+

+

zfs_vdev_cache_bshift (int)

+
Shift size to inflate reads to +

Default value: 16 (effectively 65536).

+
+

+

zfs_vdev_cache_max (int)

+
Inflate reads smaller than this value to meet the + zfs_vdev_cache_bshift size (default 64k). +

Default value: 16384.

+
+

+

zfs_vdev_cache_size (int)

+
Total size of the per-disk cache in bytes. +

Currently this feature is disabled as it has been found to not be + helpful for performance and in some cases harmful.

+

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_inc (int)

+
A number by which the balancing algorithm increments the load calculation when an I/O immediately follows its predecessor on rotational vdevs, for the purpose of selecting the least busy mirror member based on load. +

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the load calculation for the purpose of selecting the least busy mirror member when an I/O lacks locality as defined by zfs_vdev_mirror_rotating_seek_offset. I/Os within this window that do not immediately follow the previous I/O are incremented by half of this value. +

Default value: 5.

+
+

+

zfs_vdev_mirror_rotating_seek_offset (int)

+
The maximum distance for the last queued I/O in which the + balancing algorithm considers an I/O to have locality. See the section + "ZFS I/O SCHEDULER". +

Default value: 1048576.

+
+

+

zfs_vdev_mirror_non_rotating_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/Os do not immediately follow one another. +

Default value: 0.

+
+

+

zfs_vdev_mirror_non_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the load calculation for the purpose of selecting the least busy mirror member when an I/O lacks locality as defined by zfs_vdev_mirror_rotating_seek_offset. I/Os within this window that do not immediately follow the previous I/O are incremented by half of this value. +

Default value: 1.

+
+

+

zfs_vdev_read_gap_limit (int)

+
Aggregate read I/O operations if the gap on-disk between + them is within this threshold. +

Default value: 32,768.

+
+

+

zfs_vdev_write_gap_limit (int)

+
Aggregate write I/O over gap +

Default value: 4,096.

+
+

+

zfs_vdev_raidz_impl (string)

+
Parameter for selecting raidz parity implementation to + use. +

Options marked (always) below may be selected on module load as + they are supported on all systems. The remaining options may only be set + after the module is loaded, as they are available only if the + implementations are compiled in and supported on the running system.

+

Once the module is loaded, the content of + /sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options + with the currently selected one enclosed in []. Possible options are: +
+ fastest - (always) implementation selected using built-in benchmark +
+ original - (always) original raidz implementation +
+ scalar - (always) scalar raidz implementation +
+ sse2 - implementation using SSE2 instruction set (64bit x86 only) +
+ ssse3 - implementation using SSSE3 instruction set (64bit x86 only) +
+ avx2 - implementation using AVX2 instruction set (64bit x86 only) +
+ avx512f - implementation using AVX512F instruction set (64bit x86 only) +
+ avx512bw - implementation using AVX512F & AVX512BW instruction sets + (64bit x86 only) +
+ aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only) +
+ aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 + bit ARMv8 only)

+

Default value: fastest.

+
+
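For example (runtime selection via sysfs, using the parameter file named above):

    cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl      # list options, selected one in []
    echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl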

+

zfs_zevent_cols (int)

+
When zevents are logged to the console use this as the + word wrap width. +

Default value: 80.

+
+

+

zfs_zevent_console (int)

+
Log events to the console +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_zevent_len_max (int)

+
Max event queue length. A value of 0 will result in a + calculated value which increases with the number of CPUs in the system + (minimum 64 events). Events in the queue can be viewed with the zpool + events command. +

Default value: 0.

+
+

+

zfs_zil_clean_taskq_maxalloc (int)

+
The maximum number of taskq entries that are allowed to + be cached. When this limit is exceeded transaction records (itxs) will be + cleaned synchronously. +

Default value: 1048576.

+
+

+

zfs_zil_clean_taskq_minalloc (int)

+
The number of taskq entries that are pre-populated when + the taskq is first created and are immediately available for use. +

Default value: 1024.

+
+

+

zfs_zil_clean_taskq_nthr_pct (int)

+
This controls the number of threads used by the + dp_zil_clean_taskq. The default value of 100% will create a maximum of one + thread per cpu. +

Default value: 100%.

+
+

+

zil_maxblocksize (int)

+
This sets the maximum block size used by the ZIL. On very + fragmented pools, lowering this (typically to 36KB) can improve performance. +

Default value: 131072 (128KB).

+
+

+

zil_nocacheflush (int)

+
Disable the cache flush commands that are normally sent + to the disk(s) by the ZIL after an LWB write has completed. Setting this will + cause ZIL corruption on power loss if a volatile out-of-order write cache is + enabled. +

Use 1 for yes and 0 for no (default).

+
+

+

zil_replay_disable (int)

+
Disable intent logging replay. Can be disabled for + recovery from corrupted ZIL +

Use 1 for yes and 0 for no (default).

+
+

+

zil_slog_bulk (ulong)

+
Limit SLOG write size per commit executed with + synchronous priority. Any writes above that will be executed with lower + (asynchronous) priority to limit potential SLOG device abuse by single active + ZIL writer. +

Default value: 786,432.

+
+

+

zio_deadman_log_all (int)

+
If non-zero, the zio deadman will produce debugging + messages (see zfs_dbgmsg_enable) for all zios, rather than only for + leaf zios possessing a vdev. This is meant to be used by developers to gain + diagnostic information for hang conditions which don't involve a mutex or + other locking primitive; typically conditions in which a thread in the zio + pipeline is looping indefinitely. +

Default value: 0.

+
+

+

zio_decompress_fail_fraction (int)

+
If non-zero, this value represents the denominator of the + probability that zfs should induce a decompression failure. For instance, for + a 5% decompression failure rate, this value should be set to 20. +

Default value: 0.

+
+

+

zio_slow_io_ms (int)

+
When an I/O operation takes more than zio_slow_io_ms milliseconds to complete, it is marked as a slow I/O. Each slow I/O causes a delay zevent. Slow I/O counters can be seen with "zpool status -s". +

+

Default value: 30,000.

+
+

+

zio_dva_throttle_enabled (int)

+
Throttle block allocations in the I/O pipeline. This + allows for dynamic allocation distribution when devices are imbalanced. When + enabled, the maximum number of pending allocations per top-level vdev is + limited by zfs_vdev_queue_depth_pct. +

Default value: 1.

+
+

+

zio_requeue_io_start_cut_in_line (int)

+
Prioritize requeued I/O +

Default value: 0.

+
+

+

zio_taskq_batch_pct (uint)

+
Percentage of online CPUs (or CPU cores, etc) which will + run a worker thread for I/O. These workers are responsible for I/O work such + as compression and checksum calculations. Fractional number of CPUs will be + rounded down. +

The default value of 75 was chosen to avoid using all CPUs which + can result in latency issues and inconsistent application performance, + especially when high compression is enabled.

+

Default value: 75.

+
+

+

zvol_inhibit_dev (uint)

+
Do not create zvol device nodes. This may slightly + improve startup time on systems with a very large number of zvols. +

Use 1 for yes and 0 for no (default).

+
+

+

zvol_major (uint)

+
Major number for zvol block devices +

Default value: 230.

+
+

+

zvol_max_discard_blocks (ulong)

+
Discard (aka TRIM) operations done on zvols will be done + in batches of this many blocks, where block size is determined by the + volblocksize property of a zvol. +

Default value: 16,384.

+
+

+

zvol_prefetch_bytes (uint)

+
When adding a zvol to the system prefetch + zvol_prefetch_bytes from the start and end of the volume. Prefetching + these regions of the volume is desirable because they are likely to be + accessed immediately by blkid(8) or by the kernel scanning for a + partition table. +

Default value: 131,072.

+
+

+

zvol_request_sync (uint)

+
When processing I/O requests for a zvol, submit them synchronously. This effectively limits the queue depth to 1 for each I/O submitter. When set to 0 requests are handled asynchronously by a thread pool. The number of requests which can be handled concurrently is controlled by zvol_threads. +

Default value: 0.

+
+

+

zvol_threads (uint)

+
Max number of threads which can handle zvol I/O requests + concurrently. +

Default value: 32.

+
+

+

zvol_volmode (uint)

+
Defines the behaviour of zvol block devices when volmode is set to default. Valid values are 1 (full), 2 (dev) and 3 (none). +

Default value: 1.

+
+

+
+
+
+

+

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/Os. The I/O scheduler determines when and in what order those operations + are issued. The I/O scheduler divides operations into five I/O classes + prioritized in the following order: sync read, sync write, async read, async + write, and scrub/resilver. Each queue defines the minimum and maximum number + of concurrent operations that may be issued to the device. In addition, the + device has an aggregate maximum, zfs_vdev_max_active. Note that the + sum of the per-queue minimums must not exceed the aggregate maximum. If the + sum of the per-queue maximums exceeds the aggregate maximum, then the number + of active I/Os may reach zfs_vdev_max_active, in which case no + further I/Os will be issued regardless of whether all per-queue minimums + have been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Further, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been hit + or if there are no operations queued for an I/O class that has not hit its + maximum. Every time an I/O is queued or an operation completes, the I/O + scheduler looks for new operations to issue.

+

In general, smaller max_active's will lead to lower latency of + synchronous operations. Larger max_active's may lead to higher overall + throughput, depending on underlying storage.

+

The ratio of the queues' max_actives determines the balance of + performance between reads, writes, and scrubs. E.g., increasing + zfs_vdev_scrub_max_active will cause the scrub or resilver to + complete more quickly, but reads and writes to have higher latency and lower + throughput.

+

All I/O classes have a fixed maximum number of outstanding + operations except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write I/Os according to + the amount of dirty data in the pool. Since both throughput and latency + typically increase with the number of concurrent operations issued to + physical devices, reducing the burstiness in the number of concurrent + operations also stabilizes the response time of operations from other -- and + in particular synchronous -- queues. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there's + more dirty data in the pool.

+

Async Writes

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points.

+
+
+        |              o---------| <-- zfs_vdev_async_write_max_active
+   ^    |             /^         |
+   |    |            / |         |
+ active |           /  |         |
+  I/O   |          /   |         |
+ count  |         /    |         |
+        |        /     |         |
+        |-------o      |         | <-- zfs_vdev_async_write_min_active
+       0|_______^______|_________|
+        0%      |      |       100% of zfs_dirty_data_max
+                |      |
+                |      `-- zfs_vdev_async_write_active_max_dirty_percent
+                `--------- zfs_vdev_async_write_active_min_dirty_percent
+Until the amount of dirty data exceeds a minimum percentage of the dirty data + allowed in the pool, the I/O scheduler will limit the number of concurrent + operations to the minimum. As that threshold is crossed, the number of + concurrent operations issued increases linearly to the maximum at the + specified maximum percentage of the dirty data allowed in the pool. +

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the + maximum percentage, this indicates that the rate of incoming data is greater + than the rate that the backend storage can handle. In this case, we must + further throttle incoming writes, as described in the next section.

+
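As a worked example with the defaults (zfs_vdev_async_write_min_active=2, zfs_vdev_async_write_max_active=10, min dirty percent 30%, max dirty percent 60%): at 45% of zfs_dirty_data_max the pool is halfway along the sloped region, so the async write limit is 2 + (45 - 30) / (60 - 30) * (10 - 2) = 6 concurrent operations.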

+
+
+

+

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as:

+
+
min_time = zfs_delay_scale * (dirty - min) / (max - dirty)

min_time is then capped at 100 milliseconds.
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be at or above + zfs_vdev_async_write_active_max_dirty_percent so that we only start + to delay after writing at full speed has failed to keep up with the incoming + write rate. The scale of the curve is defined by zfs_delay_scale. + Roughly speaking, this variable determines the amount of delay at the + midpoint of the curve.
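Both knobs are module parameters on Linux; a sketch of inspecting and adjusting them (paths assume the ZFS on Linux kmod, and the value written below is only an example):

# Percentage of zfs_dirty_data_max at which transaction delays begin
cat /sys/module/zfs/parameters/zfs_delay_min_dirty_percent
# Delay, in nanoseconds, at the midpoint of the curve (500000 = 500us by default)
cat /sys/module/zfs/parameters/zfs_delay_scale
# Steepen the curve so delay ramps up faster as dirty data approaches the limit
echo 1000000 > /sys/module/zfs/parameters/zfs_delay_scale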

+

+
delay
+
  10ms +-------------------------------------------------------------*+
       |                                                             *|
   9ms +                                                             *+
       |                                                             *|
   8ms +                                                             *+
       |                                                            * |
   7ms +                                                           *  +
       |                                                          *   |
   6ms +                                                         *    +
       |                                                        *     |
   5ms +                                                       *      +
       |                                                      *       |
   4ms +                                                     *        +
       |                                                    *         |
   3ms +                                                   *          +
       |                                                  *           |
   2ms +                                       (midpoint) *           +
       |                                       |    **                |
   1ms +                                       v ***                  +
       |       zfs_delay_scale ---------->  ********                  |
     0 +-------------------------------------*********----------------+
       0%                    <- zfs_dirty_data_max ->               100%
+

Note that since the delay is added to the outstanding time + remaining on the most recent transaction, the delay is effectively the + inverse of IOPS. Here the midpoint of 500us translates to 2000 IOPS. The + shape of the curve was chosen such that small changes in the amount of + accumulated dirty data in the first 3/4 of the curve yield relatively small + differences in the amount of delay.
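A small worked example of the formula above makes the midpoint claim concrete; the numbers are illustrative and assume the usual defaults of zfs_delay_min_dirty_percent=60 and zfs_delay_scale=500000:

# min_time = zfs_delay_scale * (dirty - min) / (max - dirty), capped at 100 ms
awk 'BEGIN {
    scale = 500000;                   # zfs_delay_scale, nanoseconds
    max   = 4 * 1024 * 1024 * 1024;   # zfs_dirty_data_max: 4 GiB (illustrative)
    min   = max * 60 / 100;           # zfs_delay_min_dirty_percent = 60
    dirty = max * 80 / 100;           # pool currently has 80% of the limit dirty
    t = scale * (dirty - min) / (max - dirty);
    printf("min_time = %.0f ns (= %.0f us)\n", t, t / 1000);
}'
# Prints 500000 ns (500 us): 80% dirty is the midpoint between 60% and 100%.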

+

The effects can be easier to understand when the amount of delay + is represented on a log scale:

+

+
delay
 100ms +-------------------------------------------------------------++
       +                                                              +
       |                                                              |
       +                                                             *+
  10ms +                                                             *+
       +                                                            ** +
       |                                            (midpoint)    **   |
       +                                                |       **     +
   1ms +                                                v    ****      +
       +             zfs_delay_scale ---------->     *****             +
       |                                          ****                 |
       +                                       ****                    +
 100us +                                     **                        +
       +                                    *                          +
       |                                   *                           |
       +                                   *                           +
  10us +                                  *                            +
       +                                                               +
       |                                                               |
       +                                                               +
       +--------------------------------------------------------------+
       0%                    <- zfs_dirty_data_max ->               100%
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the backend storage, and then by changing the value of + zfs_delay_scale to increase the steepness of the curve.

+
+
+ + + + + +
February 15, 2019
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/zpool-features.5.html b/man/v0.8/5/zpool-features.5.html new file mode 100644 index 000000000..18a5513a2 --- /dev/null +++ b/man/v0.8/5/zpool-features.5.html @@ -0,0 +1,1005 @@ + + + + + + + zpool-features.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.5

+
+ + + + + +
ZPOOL-FEATURES(5)File Formats ManualZPOOL-FEATURES(5)
+
+
+

+

zpool-features - ZFS pool feature descriptions

+
+
+

+

ZFS pool on-disk format versions are specified via + "features" which replace the old on-disk format numbers (the last + supported on-disk format number is 28). To enable a feature on a pool use + the upgrade subcommand of the zpool(8) command, or set the + feature@feature_name property to enabled.
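For example (tank is a placeholder pool name), every feature supported by the running software can be enabled at once, or a single feature can be enabled through its property:

# Enable every feature supported by this software release on the pool
zpool upgrade tank
# Or enable one specific feature through its pool property
zpool set feature@bookmarks=enabled tank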

+

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

+

Since most features can be enabled independently of each other the + on-disk format of the pool is specified by the set of all features marked as + active on the pool. If the pool was created by another software + version this set may include unsupported features.

+
+

+

Every feature has a GUID of the form + com.example:feature_name. The reverse DNS name ensures that the + feature's GUID is unique across all ZFS implementations. When unsupported + features are encountered on a pool they will be identified by their GUIDs. + Refer to the documentation for the ZFS implementation that created the pool + for information about those features.

+

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its GUID which follows the ':' (e.g. + com.example:feature_name would have the short name + feature_name), however a feature's short name may differ across ZFS + implementations if following the convention would result in name + conflicts.

+
+
+

+

Features can be in one of three states:

+

active

+
This feature's on-disk format changes are in effect on + the pool. Support for this feature is required to import the pool in + read-write mode. If this feature is not read-only compatible, support is also + required to import the pool in read-only mode (see "Read-only + compatibility").
+

+

enabled

+
An administrator has marked this feature as enabled on + the pool, but the feature's on-disk format changes have not been made yet. The + pool can still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support returning to the + enabled state after becoming active. See feature-specific + documentation for details.
+

+

disabled

+
This feature's on-disk format changes have not been made + and will not be made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they have been + enabled.
+

+

+

The state of supported features is exposed through pool properties + of the form feature@short_name.
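For example (tank is a placeholder pool name), the state of every feature, or of one specific feature, can be read with zpool get:

# Show disabled/enabled/active state for all features on the pool
zpool get all tank | grep feature@
# Query a single feature property
zpool get feature@async_destroy tank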

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as "read-only compatible". If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly property during + import (see zpool(8) for details on importing pools).
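For example, a pool whose only unsupported features are read-only compatible can be brought in like this (tank is a placeholder pool name):

# Import read-only so no on-disk format changes can be made
zpool import -o readonly=on tank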

+
+
+

+

For each unsupported feature enabled on an imported pool a pool + property named unsupported@feature_name will indicate why the import + was allowed despite the unsupported feature. Possible values for this + property are:

+

+

inactive

+
The feature is in the enabled state and therefore + the pool's on-disk format is still compatible with software that does not + support this feature.
+

+

readonly

+
The feature is read-only compatible and the pool has been + imported in read-only mode.
+

+
+
+

+

Some features depend on other features being enabled in order to + function properly. Enabling a feature will automatically enable any features + it depends on.

+
+
+
+

+

The following features are supported on this system:

+

+

allocation_classes

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:allocation_classes
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature enables support for separate allocation classes.

+

This feature becomes active when a dedicated allocation + class vdev (dedup or special) is created with the zpool create or + zpool add subcommands. With device removal, it can be returned to the + enabled state if all the dedicated allocation class vdevs are + removed.

+
+

+

async_destroy

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:async_destroy
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

Destroying a file system requires traversing all of its data in + order to return its used space to the pool. Without async_destroy the + file system is not fully removed until all space has been reclaimed. If the + destroy operation is interrupted by a reboot or power outage the next + attempt to open the pool will need to complete the destroy operation + synchronously.

+

When async_destroy is enabled the file system's data will + be reclaimed by a background process, allowing the destroy operation to + complete without traversing the entire file system. The background process + is able to resume interrupted destroys after the pool has been opened, + eliminating the need to finish interrupted destroys as part of the open + operation. The amount of space remaining to be reclaimed by the background + process is available through the freeing property.

+

This feature is only active while freeing is + non-zero.

+
+

+

bookmarks

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:bookmarks
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature enables use of the zfs bookmark + subcommand.

+

This feature is active while any bookmarks exist in the + pool. All bookmarks in the pool can be listed by running zfs list -t + bookmark -r poolname.
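For example (tank/data and the snapshot/bookmark names are placeholders):

# Create a bookmark from an existing snapshot, then list bookmarks on the dataset
zfs bookmark tank/data@snap1 tank/data#mark1
zfs list -t bookmark -r tank/data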

+
+

+

bookmark_v2

+
+ + + + + + + + + + + + + +
GUIDcom.datto:bookmark_v2
READ-ONLY COMPATIBLEno
DEPENDENCIESbookmark, extensible_dataset
+

This feature enables the creation and management of larger + bookmarks which are needed for other features in ZFS.

+

This feature becomes active when a v2 bookmark is created + and will be returned to the enabled state when all v2 bookmarks are + destroyed.

+
+

+

device_removal

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:device_removal
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature enables the zpool remove subcommand to remove + top-level vdevs, evacuating them to reduce the total size of the pool.

+

This feature becomes active when the zpool remove + subcommand is used on a top-level vdev, and will never return to being + enabled.

+
+

+

edonr

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:edonr
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the Edon-R hash algorithm for + checksum, including for nopwrite (if compression is also enabled, an + overwrite of a block whose checksum matches the data being written will be + ignored). In an abundance of caution, Edon-R requires verification when used + with dedup: zfs set dedup=edonr,verify. See zfs(8).

+

Edon-R is a very high-performance hash algorithm that was part of + the NIST SHA-3 competition. It provides extremely high hash performance + (over 350% faster than SHA-256), but was not selected because of its + unsuitability as a general purpose secure hash algorithm. This + implementation utilizes the new salted checksumming functionality in ZFS, + which means that the checksum is pre-seeded with a secret 256-bit random key + (stored on the pool) before being fed the data block to be checksummed. Thus + the produced checksums are unique to a given pool.

+

When the edonr feature is set to enabled, the + administrator can turn on the edonr checksum on any dataset using the + zfs set checksum=edonr. See zfs(8). This feature becomes + active once a checksum property has been set to edonr, + and will return to being enabled once all filesystems that have ever + had their checksum set to edonr are destroyed.

+

The edonr feature is not supported by GRUB and must not be + used on the pool if GRUB needs to access the pool (e.g. for /boot).

+
+

+

embedded_data

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:embedded_data
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 bytes + or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of highly-compressible + blocks are stored in the block "pointer" itself (a misnomer in + this case, as it contains the compressed data, rather than a pointer to its + location on disk). Thus the space of the block (one sector, typically 512 + bytes or 4KB) is saved, and no additional i/o is needed to read and write + the data block.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

empty_bpobj

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:empty_bpobj
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also reduces + the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobj's) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobj's are empty. This feature + allows us to create each bpobj on-demand, thus eliminating the empty + bpobjs.

+

This feature is active while there are any filesystems, + volumes, or snapshots which were created after enabling this feature.

+
+

+

enabled_txg

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:enabled_txg
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

Once this feature is enabled ZFS records the transaction group + number in which new features are enabled. This has no user-visible impact, + but other features may depend on this feature.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

encryption

+
+ + + + + + + + + + + + + +
GUIDcom.datto:encryption
READ-ONLY COMPATIBLEno
DEPENDENCIESbookmark_v2, extensible_dataset
+

This feature enables the creation and management of natively + encrypted datasets.

+

This feature becomes active when an encrypted dataset is + created and will be returned to the enabled state when all datasets + that use this feature are destroyed.

+
+

+

extensible_dataset

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:extensible_dataset
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first dependent + feature uses it, and will be returned to the enabled state when all + datasets that use this feature are destroyed.

+
+

+

filesystem_limits

+
+ + + + + + + + + + + + + +
GUIDcom.joyent:filesystem_limits
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature enables filesystem and snapshot limits. These limits + can be used to control how many filesystems and/or snapshots can be created + at the point in the tree on which the limits are set.

+

This feature is active once either of the limit properties + has been set on a dataset. Once activated the feature is never + deactivated.

+
+

+

hole_birth

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:hole_birth
READ-ONLY COMPATIBLEno
DEPENDENCIESenabled_txg
+

This feature has/had bugs, the result of which is that, if you do + a zfs send -i (or -R, since it uses -i) from an + affected dataset, the receiver will not see any checksum or other errors, + but the resulting destination snapshot will not match the source. Its use by + zfs send -i has been disabled by default. See the + send_holes_without_birth_time module parameter in + zfs-module-parameters(5).
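The workaround referred to above can be confirmed on a running system; the sketch below assumes the ZFS on Linux module parameter path:

# 1 (the default) makes zfs send -i ignore hole birth times, sidestepping the bug
cat /sys/module/zfs/parameters/send_holes_without_birth_time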

+

This feature improves performance of incremental sends (zfs + send -i) and receives for objects with many holes. The most common case + of hole-filled objects is zvols.

+

An incremental send stream from snapshot A to snapshot + B contains information about every block that changed between + A and B. Blocks which did not change between those snapshots + can be identified and omitted from the stream using a piece of metadata + called the 'block birth time', but birth times are not recorded for holes + (blocks filled only with zeroes). Since holes created after A cannot + be distinguished from holes created before A, information about every + hole in the entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. However, + when incrementally replicating filesystems or zvols with many holes (for + example a zvol formatted with another filesystem) a lot of time will be + spent sending and receiving unnecessary information about holes that already + exist on the receiving side.

+

Once the hole_birth feature has been enabled the block + birth times of all new holes will be recorded. Incremental sends between + snapshots created after this feature is enabled will use this new metadata + to avoid sending information about holes that already exist on the receiving + side.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

large_blocks

+
+ + + + + + + + + + + + + +
GUIDorg.open-zfs:large_blocks
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

The large_block feature allows the record size on a dataset + to be set larger than 128KB.

+

This feature becomes active once a dataset contains a file + with a block size larger than 128KB, and will return to being enabled + once all filesystems that have ever had their recordsize larger than 128KB + are destroyed.
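For example (tank/media is a placeholder dataset), once the feature is enabled a larger record size can be requested per dataset:

# Allowed only when the large_blocks feature is enabled on the pool
zfs set recordsize=1M tank/media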

+
+

+

large_dnode

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:large_dnode
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

The large_dnode feature allows the size of dnodes in a + dataset to be set larger than 512B.

+

This feature becomes active once a dataset contains an + object with a dnode larger than 512B, which occurs as a result of setting + the dnodesize dataset property to a value other than legacy. + The feature will return to being enabled once all filesystems that + have ever contained a dnode larger than 512B are destroyed. Large dnodes + allow more data to be stored in the bonus buffer, thus potentially improving + performance by avoiding the use of spill blocks.
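For example (tank/fs is a placeholder dataset):

# Let ZFS size dnodes automatically; the feature activates on first use
zfs set dnodesize=auto tank/fs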

+
+

+

lz4_compress

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:lz4_compress
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

lz4 is a high-performance real-time compression algorithm + that features significantly faster compression and decompression as well as + a higher compression ratio than the older lzjb compression. + Typically, lz4 compression is approximately 50% faster on + compressible data and 200% faster on incompressible data than lzjb. + It is also approximately 80% faster on decompression, while giving + approximately 10% better compression ratio.

+

When the lz4_compress feature is set to enabled, the administrator can turn on lz4 compression on any dataset on the pool using the zfs(8) command. Please note that doing so will immediately activate the lz4_compress feature on the underlying pool. Also, all newly written metadata will be compressed with the lz4 algorithm. Since this feature is not read-only compatible, this operation will render the pool unimportable on systems without support for the lz4_compress feature.

+

Booting off of lz4-compressed root pools is supported.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

multi_vdev_crash_dump

+
+ + + + + + + + + + + + + +
GUIDcom.joyent:multi_vdev_crash_dump
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored or + raidz configuration.

+

When the multi_vdev_crash_dump feature is set to + enabled, the administrator can use the dumpadm(1M) command to + configure a dump device on a pool comprised of multiple vdevs.

+

Under Linux this feature is registered for compatibility but not + used. New pools created under Linux will have the feature enabled but + will never transition to active. This functionality is not + required in order to support crash dumps under Linux. Existing pools where + this feature is active can be imported.

+
+

+

obsolete_counts

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:obsolete_counts
READ-ONLY COMPATIBLEyes
DEPENDENCIESdevice_removal
+

This feature is an enhancement of device_removal, which will over + time reduce the memory used to track removed devices. When indirect blocks + are freed or remapped, we note that their part of the indirect mapping is + "obsolete", i.e. no longer needed.

+

This feature becomes active when the zpool remove + subcommand is used on a top-level vdev, and will never return to being + enabled.

+
+

+

project_quota

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:project_quota
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature allows administrators to account for space and object usage against a project identifier (ID).

+

The project ID is a new object-based attribute. When upgrading an existing filesystem, objects without a project ID attribute are assigned a project ID of zero. After this feature is enabled, a newly created object inherits its parent directory's project ID if the parent's inherit flag is set (via chattr +/-P or zfs project [-s|-C]). Otherwise, the new object's project ID is set to zero. An object's project ID can be changed at any time by the owner (or a privileged user) via chattr -p $prjid or zfs project -p $prjid.

+

This feature will become active as soon as it is enabled and will never return to being disabled. Each filesystem will be upgraded automatically when remounted or when a new file is created under that filesystem. The upgrade can also be triggered on a filesystem via `zfs set version=current <pool/fs>`. The upgrade process runs in the background and may take a while to complete for filesystems containing a large number of files.
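A minimal usage sketch (the dataset, directory, project ID, and quota below are placeholders):

# Tag a directory tree with project ID 100 and set the inherit flag
zfs project -p 100 -r -s /tank/fs/projects
# Limit the project's space and review per-project usage
zfs set projectquota@100=10G tank/fs
zfs projectspace tank/fs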

+
+

+

resilver_defer

+
+ + + + + + + + + + + + + +
GUIDcom.datto:resilver_defer
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature allows zfs to postpone new resilvers if an existing + one is already in progress. Without this feature, any new resilvers will + cause the currently running one to be immediately restarted from the + beginning.

+

This feature becomes active once a resilver has been + deferred, and returns to being enabled when the deferred resilver + begins.

+
+

+

sha512

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:sha512
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the SHA-512/256 truncated hash + algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit arithmetic + of SHA-512 provides an approximate 50% performance boost over SHA-256 on + 64-bit hardware and is thus a good minimum-change replacement candidate for + systems where hash performance is important, but these systems cannot for + whatever reason utilize the faster skein and edonr + algorithms.

+

When the sha512 feature is set to enabled, the + administrator can turn on the sha512 checksum on any dataset using + zfs set checksum=sha512. See zfs(8). This feature becomes + active once a checksum property has been set to sha512, + and will return to being enabled once all filesystems that have ever + had their checksum set to sha512 are destroyed.

+

The sha512 feature is not supported by GRUB and must not be + used on the pool if GRUB needs to access the pool (e.g. for /boot).

+
+

+

skein

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:skein
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm that + was a finalist in the NIST SHA-3 competition. It provides a very high + security margin and high performance on 64-bit hardware (80% faster than + SHA-256). This implementation also utilizes the new salted checksumming + functionality in ZFS, which means that the checksum is pre-seeded with a + secret 256-bit random key (stored on the pool) before being fed the data + block to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the skein feature is set to enabled, the + administrator can turn on the skein checksum on any dataset using + zfs set checksum=skein. See zfs(8). This feature becomes + active once a checksum property has been set to skein, + and will return to being enabled once all filesystems that have ever + had their checksum set to skein are destroyed.

+

The skein feature is not supported by GRUB and must not be + used on the pool if GRUB needs to access the pool (e.g. for /boot).

+
+

+

spacemap_histogram

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:spacemap_histogram
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, ZFS will set this feature to active when a new space map object is created or an existing space map is upgraded to the new format. Once the feature is active, it will remain in that state until the pool is destroyed.

+
+

+

spacemap_v2

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:spacemap_v2
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature enables the use of the new space map encoding which + consists of two words (instead of one) whenever it is advantageous. The new + encoding allows space maps to represent large regions of space more + efficiently on-disk while also increasing their maximum addressable + offset.

+

This feature becomes active once it is enabled, and + never returns back to being enabled.

+
+

+

userobj_accounting

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:userobj_accounting
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature allows administrators to account the object usage + information by user and group.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled. Each filesystem will be upgraded + automatically when remounted, or when new files are created under that + filesystem. The upgrade can also be started manually on filesystems by + running `zfs set version=current <pool/fs>`. The upgrade process runs + in the background and may take a while to complete for filesystems + containing a large number of files.

+
+

+

zpool_checkpoint

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:zpool_checkpoint
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature enables the zpool checkpoint subcommand that + can checkpoint the state of the pool at the time it was issued and later + rewind back to it or discard it.

+

This feature becomes active when the zpool + checkpoint subcommand is used to checkpoint the pool. The feature will + only return back to being enabled when the pool is rewound or the + checkpoint has been discarded.
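For example (tank is a placeholder pool name):

# Take a checkpoint before a risky administrative change
zpool checkpoint tank
# Later, either discard the checkpoint...
zpool checkpoint -d tank
# ...or rewind the entire pool back to it
zpool export tank
zpool import --rewind-to-checkpoint tank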

+
+

+
+
+

+

zpool(8)

+
+
+ + + + + +
June 8, 2018
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/fsck.zfs.8.html b/man/v0.8/8/fsck.zfs.8.html new file mode 100644 index 000000000..e986fabcd --- /dev/null +++ b/man/v0.8/8/fsck.zfs.8.html @@ -0,0 +1,219 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
fsck.zfs(8)System Administration Commandsfsck.zfs(8)
+
+

+
+

+

fsck.zfs - Dummy ZFS filesystem checker.

+

+
+
+

+

fsck.zfs [options] + <dataset>

+

+
+
+

+

fsck.zfs is a shell stub that does nothing and always + returns true. It is installed by ZoL because some Linux distributions expect + a fsck helper for all filesystems.

+

+
+
+

+

All options and the dataset are ignored.

+

+
+
+

+

ZFS datasets are checked by running zpool scrub on the + containing pool. An individual ZFS dataset is never checked independently of + its pool, which is unlike a regular filesystem.
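The ZFS equivalent of a filesystem check is therefore a pool scrub (tank is a placeholder pool name):

zpool scrub tank
zpool status -v tank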

+

+
+
+

+

On some systems, if the dataset is in a degraded pool, then + it might be appropriate for fsck.zfs to return exit code 4 to + indicate an uncorrected filesystem error.

+

Similarly, if the dataset is in a faulted pool and has a + legacy /etc/fstab record, then fsck.zfs should return exit code 8 to + indicate a fatal operational error.

+

+
+
+

+

Darik Horn <dajhorn@vanadac.com>.

+

+
+
+

+

fsck(8), fstab(5), zpool(8)

+
+
+ + + + + +
2013 MAR 16ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/index.html b/man/v0.8/8/index.html new file mode 100644 index 000000000..7e8876c58 --- /dev/null +++ b/man/v0.8/8/index.html @@ -0,0 +1,169 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/mount.zfs.8.html b/man/v0.8/8/mount.zfs.8.html new file mode 100644 index 000000000..9e972bf69 --- /dev/null +++ b/man/v0.8/8/mount.zfs.8.html @@ -0,0 +1,268 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
mount.zfs(8)System Administration Commandsmount.zfs(8)
+
+

+
+

+

mount.zfs - mount a ZFS filesystem

+
+
+

+

mount.zfs [-sfnvh] [-o options] dataset + mountpoint

+

+
+
+

+

mount.zfs is part of the zfsutils package for Linux. It is + a helper program that is usually invoked by the mount(8) or + zfs(8) commands to mount a ZFS dataset.

+

All options are handled according to the FILESYSTEM + INDEPENDENT MOUNT OPTIONS section in the mount(8) manual, except for + those described below.

+

The dataset parameter is a ZFS filesystem name, as output + by the zfs list -H -o name command. This parameter never has a + leading slash character and is not a device name.

+

The mountpoint parameter is the path name of a + directory.
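A typical invocation, normally performed for you by mount(8) or zfs mount, looks like this (the dataset and mountpoint are placeholders):

# Mount a dataset read-only on an existing directory
mount.zfs -o ro tank/home /mnt/home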

+

+

+
+
+

+
+
+
Ignore bad or sloppy mount options.
+
+
Do a fake mount; do not perform the mount operation.
+
+
Do not update the /etc/mtab file.
+
+
Increase verbosity.
+
+
Print the usage message.
+
+
This flag sets the SELinux context for all files in the filesystem under + that mountpoint.
+
+
This flag sets the SELinux context for the filesystem being mounted.
+
+
This flag sets the SELinux context for unlabeled files.
+
+
This flag sets the SELinux context for the root inode of the + filesystem.
+
+
This private flag indicates that the dataset has an entry in the + /etc/fstab file.
+
+
This private flag disables extended attributes.
+
+
This private flag enables directory-based extended attributes and, if + appropriate, adds a ZFS context to the selinux system policy.
+
+
This private flag enables system attribute-based extended attributes and, if appropriate, adds a ZFS context to the selinux system policy.
+
+
Equivalent to xattr.
+
+
This private flag indicates that mount(8) is being called by the + zfs(8) command. +

+
+
+
+
+

+

ZFS conventionally requires that the mountpoint be an empty + directory, but the Linux implementation inconsistently enforces the + requirement.

+

The mount.zfs helper does not mount the contents of + zvols.

+

+
+
+

+
+
/etc/fstab
+
The static filesystem table.
+
/etc/mtab
+
The mounted filesystem table.
+
+
+
+

+

The primary author of mount.zfs is Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

fstab(5), mount(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/vdev_id.8.html b/man/v0.8/8/vdev_id.8.html new file mode 100644 index 000000000..7771ea214 --- /dev/null +++ b/man/v0.8/8/vdev_id.8.html @@ -0,0 +1,238 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
vdev_id(8)System Manager's Manualvdev_id(8)
+
+
+

+

vdev_id - generate user-friendly names for JBOD disks

+
+
+

+
vdev_id <-d dev> [-c config_file] [-g sas_direct|sas_switch]
+
+ [-m] [-p phys_per_port] +vdev_id -h
+
+
+

+

The vdev_id command is a udev helper which parses the file + /etc/zfs/vdev_id.conf(5) to map a physical path in a storage topology + to a channel name. The channel name is combined with a disk enclosure slot + number to create an alias that reflects the physical location of the drive. + This is particularly helpful when it comes to tasks like replacing failed + drives. Slot numbers may also be re-mapped in case the default numbering is + unsatisfactory. The drive aliases will be created as symbolic links in + /dev/disk/by-vdev.

+

The currently supported topologies are sas_direct and sas_switch. + A multipath mode is supported in which dm-mpath devices are handled by + examining the first-listed running component disk as reported by the + multipath(8) command. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating aliases based on existing + udev links in the /dev hierarchy using the alias configuration file + keyword. See the vdev_id.conf(5) man page for details.
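A minimal sas_direct configuration might look like the following sketch; the PCI addresses, port numbers, and channel names are illustrative, and vdev_id.conf(5) remains the authoritative reference for the syntax:

# /etc/zfs/vdev_id.conf
multipath     no
topology      sas_direct
phys_per_port 4
#       PCI_SLOT  HBA PORT  CHANNEL NAME
channel 85:00.0   1         A
channel 85:00.0   0         B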

+

+
+
+

+
+
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+
This is the only mandatory argument. Specifies the name of a device in + /dev, i.e. "sda".
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely + identified by a PCI slot and a HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+
+
+
Specifies that vdev_id(8) will handle only dm-multipath devices. If + set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4.
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zdb.8.html b/man/v0.8/8/zdb.8.html new file mode 100644 index 000000000..86944e180 --- /dev/null +++ b/man/v0.8/8/zdb.8.html @@ -0,0 +1,581 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's Manual (smm)ZDB(8)
+
+
+

+

zdbdisplay + zpool debugging and consistency information

+
+
+

+ + + + + +
zdb[-AbcdDFGhikLMPsvXY] [-e + [-V] [-p + path ...]] [-I + inflight I/Os] [-o + var=value]... + [-t txg] + [-U cache] + [-x dumpdir] + [poolname [object ...]]
+
+ + + + + +
zdb[-AdiPv] [-e + [-V] [-p + path ...]] [-U + cache] dataset + [object ...]
+
+ + + + + +
zdb-C [-A] + [-U cache]
+
+ + + + + +
zdb-E [-A] + word0:word1:...:word15
+
+ + + + + +
zdb-l [-Aqu] + device
+
+ + + + + +
zdb-m [-AFLPXY] + [-e [-V] + [-p path ...]] + [-t txg] + [-U cache] + poolname [vdev + [metaslab ...]]
+
+ + + + + +
zdb-O dataset path
+
+ + + + + +
zdb-R [-A] + [-e [-V] + [-p path ...]] + [-U cache] + poolname + vdev:offset:[<lsize>/]<psize>[:flags]
+
+ + + + + +
zdb-S [-AP] + [-e [-V] + [-p path ...]] + [-U cache] + poolname
+
+
+

+

The zdb utility displays information about a ZFS pool useful for debugging and performs some amount of consistency checking. It is not a general purpose tool and options (and facilities) may change. This is not a fsck(8) utility.

+

The output of this command in general reflects the on-disk structure of a ZFS pool, and is inherently unstable. The precise output of most invocations is not documented; a knowledge of ZFS internals is assumed.

+

If the dataset argument does not contain any "/" or "@" characters, it is interpreted as a pool name. The root dataset can be specified as pool/ (pool name followed by a slash).

+

When operating on an imported and active pool it is possible, + though unlikely, that zdb may interpret inconsistent pool data and behave + erratically.

+
+
+

+

Display options:

+
+
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs are specified, display information about those + specific objects only.

+
+
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + * compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
+ word0:word1:...:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
+
Examine the checkpointed state of the pool. Note, the on disk format of + the pool is not reverted to the checkpointed state.
+
+ device
+
Read the vdev labels from the specified device. + zdb -l will return 0 if + valid label was found, 1 if error occurred, and 2 if no valid labels were + found. Each unique configuration is displayed only once.
+
+ device
+
In addition display label space usage stats.
+
+ device
+
Display every configuration, unique or not. +

If the -q option is also specified, + don't print the labels.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
+
Disable leak detection and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
+ poolname + vdev:offset:[<lsize>/]<psize>[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) + offset (the offset within the vdev) + size (the physical size, or logical size / + physical size) of the block to read and, optionally, + flags (a set of flags, described below).

+

+
+
+ offset
+
Print block pointer
+
+
Calculate and display checksums
+
+
Decompress the block. Set environment variable + ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
Verbose output for guessing compression algorithm
+
+
+
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
+
Display the current uberblock.
+
+

Other options:

+
+
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
+ [-p path ...]
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
+ dumpdir
+
All blocks accessed will be copied to files in the specified directory. + The blocks will be placed in sparse files whose name is the same as that + of the file or device read. zdb can be then run on + the generated files. Note that the -bbc flags are + sufficient to access (and thus copy) all metadata on the pool.
+
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
+ inflight I/Os
+
Limit the number of outstanding checksum I/Os to the specified value. The + default value is 200. This option affects the performance of the + -c option.
+
+ var=value ...
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
+
Print numbers in an unscaled form more amenable to parsing, eg. 1000000 + rather than 1M.
+
+ transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
+ cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
+
Enable verbosity. Specify multiple times for increased verbosity.
+
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
+
Attempt all possible combinations when reconstructing indirect split + blocks. This flag disables the individual I/O deadman timer in order to + allow as much time as required for the attempted reconstruction.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+
Display the configuration of imported pool + rpool
+
+
+
# zdb -C rpool
+
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ ...
+
+
+
Display basic dataset information about + rpool
+
+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ ...
+
+
+
Display basic information about object 0 in + rpool/export/home
+
+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
Display the predicted effect of enabling deduplication on + rpool
+
+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ ...
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
April 14, 2019Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zed.8.html b/man/v0.8/8/zed.8.html new file mode 100644 index 000000000..25d4508f2 --- /dev/null +++ b/man/v0.8/8/zed.8.html @@ -0,0 +1,380 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Administration CommandsZED(8)
+
+

+
+

+

ZED - ZFS Event Daemon

+

+
+
+

+

zed [-d zedletdir] [-f] [-F] + [-h] [-L] [-M] [-p pidfile] [-P + path] [-s statefile] [-v] [-V] + [-Z]

+

+
+
+

+

ZED (ZFS Event Daemon) monitors events generated by the ZFS + kernel module. When a zevent (ZFS Event) is posted, ZED will run any + ZEDLETs (ZFS Event Daemon Linkage for Executable Tasks) that have been + enabled for the corresponding zevent class.

+

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Run the daemon in the foreground.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+
Read the enabled ZEDLETs from the specified directory.
+
+
Write the daemon's process ID to the specified file.
+
+
Custom $PATH for zedlets to use. Normally zedlets run in a locked-down + environment, with hardcoded paths to the ZFS commands ($ZFS, $ZPOOL, $ZED, + ...), and a hardcoded $PATH. This is done for security reasons. However, + the ZFS test suite uses a custom PATH for its ZFS commands, and passes it + to zed with -P. In short, -P is only to be used by the ZFS test suite; + never use it in production!
+
+
Write the daemon's state to the specified file.
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the "zpool + events -v" command.

+

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory. These can be symlinked or copied from the + installed-zedlets directory; symlinks allow for automatic updates + from the installed ZEDLETs, whereas copies preserve local modifications. As + a security measure, ZEDLETs must be owned by root. They must have execute + permissions for the user, but they must not have write permissions for group + or other. Dotfiles are ignored.
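For example, one of the shipped ZEDLETs can be enabled with a symlink and the daemon told to rescan; the paths reuse the placeholders from the FILES section, and statechange-notify.sh is just one example ZEDLET:

# Symlink an installed ZEDLET into the enabled directory so it tracks updates
ln -s "@zfsexecdir@/zed.d/statechange-notify.sh" "@sysconfdir@/zfs/zed.d/"
# Ask the running daemon to rescan its enabled-zedlets directory
kill -HUP "$(cat @runstatedir@/zed.pid)"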

+

ZEDLETs are named after the zevent class for which they should be + invoked. In particular, a ZEDLET will be invoked for a given zevent if + either its class or subclass string is a prefix of its filename (and is + followed by a non-alphabetic character). As a special case, the prefix + "all" matches all zevents. Multiple ZEDLETs may be invoked for a + given zevent.

+

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + "ZED_".

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner: 1) it is prefixed with "ZEVENT_", 2) it is converted to + uppercase, and 3) each non-alphanumeric character is converted to an + underscore. Some additional environment variables have been defined to + present certain nvpair values in a more convenient form. An incomplete list + of zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as + "seconds nanoseconds" since the Epoch.
+
+
The seconds component of ZEVENT_TIME.
+
+
The nanoseconds component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The ZFS alias (name-version-release) string used to build the + daemon.
+
+
The ZFS version used to build the daemon.
+
+
The ZFS release used to build the daemon.
+
+

ZEDLETs may need to call other ZFS commands. The installation + paths of the following executables are defined: ZDB, ZED, + ZFS, ZINJECT, and ZPOOL. These variables can be + overridden in the rc file if needed.

+

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@zfsexecdir@/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state. +

+
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
+
Terminate the daemon. +

+
+
+
+
+

+

ZED requires root privileges.

+

+
+
+

+

Events are processed synchronously by a single thread. This can + delay the processing of simultaneous zevents.

+

There is no maximum timeout for ZEDLET execution. Consequently, a + misbehaving ZEDLET can delay the processing of subsequent zevents.

+

The ownership and permissions of the enabled-zedlets + directory (along with all parent directories) are not checked. If any of + these directories are improperly owned or permissioned, an unprivileged user + could insert a ZEDLET to be executed as root. The requirement that ZEDLETs + be owned by root mitigates this to some extent.

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Some zevent nvpair types are not handled. These are denoted by + zevent environment variables having a "_NOT_IMPLEMENTED_" + value.

+

Internationalization support via gettext has not been added.

+

The configuration file is not yet implemented.

+

The diagnosis engine is not yet implemented.

+

+
+
+

+

ZED (ZFS Event Daemon) is distributed under the terms of + the Common Development and Distribution License Version 1.0 (CDDL-1.0).

+

Developed at Lawrence Livermore National Laboratory + (LLNL-CODE-403049).

+

+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
October 1, 2013ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zfs-mount-generator.8.html b/man/v0.8/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..d16ec4efe --- /dev/null +++ b/man/v0.8/8/zfs-mount-generator.8.html @@ -0,0 +1,324 @@ + + + + + + + zfs-mount-generator.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-mount-generator.8

+
+ + + + + +
ZFS-MOUNT-GENERATOR(8)zfs-mount-generatorZFS-MOUNT-GENERATOR(8)
+
+

+

+
+

+

zfs-mount-generator - generates systemd mount units for ZFS

+
+
+

+

@systemdgeneratordir@/zfs-mount-generator

+

+
+
+

+

zfs-mount-generator implements the Generators Specification + of systemd(1), and is called during early boot to generate + systemd.mount(5) units for automatically mounted datasets. Mount + ordering and dependencies are created for all tracked pools (see below).

+

+
+

+

If the dataset is an encryption root, a service that loads the + associated key (either from file or through a systemd-ask-password(1) + prompt) will be created. This service RequiresMountsFor the path of + the key (if file-based) and also copies the mount unit's After, + Before and Requires. All mount units of encrypted datasets add + the key-load service for their encryption root to their Wants and + After. The service will not be Wanted or Required by + local-fs.target directly, and so will only be started manually or as + a dependency of a started mount unit.

+

+
+
+

+

mount unit's Before -> key-load service (if any) -> + mount unit -> mount unit's After

+

It is worth noting that when a mount unit is activated, it activates all available mount units for parent paths to its mountpoint, i.e. activating the mount unit for /tmp/foo/1/2/3 automatically activates all available mount units for /tmp, /tmp/foo, /tmp/foo/1, and /tmp/foo/1/2. This is true for any combination of mount units from any sources, not just ZFS.

+

+
+
+

+

Because ZFS pools may not be available very early in the boot + process, information on ZFS mountpoints must be stored separately. The + output of the command

+

+
zfs list -H -o + name,mountpoint,canmount,atime,relatime,devices,exec,readonly,setuid,nbmand,encroot,keylocation,org.openzfs.systemd:requires,org.openzfs.systemd:requires-mounts-for,org.openzfs.systemd:before,org.openzfs.systemd:after,org.openzfs.systemd:wanted-by,org.openzfs.systemd:required-by,org.openzfs.systemd:nofail,org.openzfs.systemd:ignore +

+
+

for datasets that should be mounted by systemd, should be kept + separate from the pool, at

+

+
@sysconfdir@/zfs/zfs-list.cache/POOLNAME
+

The cache file, if writeable, will be kept synchronized with the + pool state by the ZEDLET

+

+
history_event-zfs-list-cacher.sh .
+
+
+

+

The behavior of the generator script can be influenced by the + following dataset properties:

+

+
+
+
If a dataset has mountpoint set and canmount is not + off, a mount unit will be generated. Additionally, if + canmount is on, local-fs.target will gain a + dependency on the mount unit. +

This behavior is equal to the auto and noauto + legacy mount options, see systemd.mount(5).

+

Encryption roots always generate a key-load service, even for + canmount=off.

+
+
+
Space-separated list of mountpoints to require to be mounted for this + mount unit
+
+
The mount unit and associated key-load service will be ordered before this + space-separated list of units.
+
+
The mount unit and associated key-load service will be ordered after this + space-separated list of units.
+
+
Space-separated list of units that will gain a Wants dependency on + this mount unit. Setting this property implies noauto.
+
+
Space-separated list of units that will gain a Requires dependency + on this mount unit. Setting this property implies noauto.
+
+
Toggles between a Wants and Requires type of dependency + between the mount unit and local-fs.target, if noauto isn't + set or implied. +

on: Mount will be WantedBy local-fs.target

+

off: Mount will be Before and RequiredBy + local-fs.target

+

unset: Mount will be Before and WantedBy + local-fs.target

+
+
+
If set to on, do not generate a mount unit for this dataset. +

+
+
+
+See also systemd.mount(5) +

+
+
+
+

+

To begin, enable tracking for the pool:

+

+
touch + @sysconfdir@/zfs/zfs-list.cache/POOLNAME
+

Then, enable the tracking ZEDLET:

+

+
ln -s + "@zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh" + "@sysconfdir@/zfs/zed.d" +

systemctl enable zfs-zed.service

+

systemctl restart zfs-zed.service

+
+

Force the running of the ZEDLET by setting a monitored property, + e.g. canmount, for at least one dataset in the pool:

+

+
zfs set canmount=on DATASET
+

This forces an update to the stale cache file.

+

To test the generator output, run

+

+
@systemdgeneratordir@/zfs-mount-generator + /tmp/zfs-mount-generator . .
+

This will generate units and dependencies in + /tmp/zfs-mount-generator for you to inspect them. The second and + third argument are ignored.

+

If you're satisfied with the generated units, instruct systemd to + re-run all generators:

+

+
systemctl daemon-reload
+

+

+
+
+

+

zfs(5) zfs-events(5) zed(8) zpool(5) + systemd(1) systemd.target(5) systemd.special(7) + systemd.mount(7)

+
+
+ + + + + +
2020-01-19   ZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zfs-program.8.html b/man/v0.8/8/zfs-program.8.html new file mode 100644 index 000000000..a35b1c49c --- /dev/null +++ b/man/v0.8/8/zfs-program.8.html @@ -0,0 +1,693 @@ + + + + + + + zfs-program.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-program.8

+
+ + + + + +
ZFS-PROGRAM(8)   System Manager's Manual   ZFS-PROGRAM(8)
+
+
+

+

zfs program — + executes ZFS channel programs

+
+
+

+

zfs program [-jn] + [-t instruction-limit] + [-m memory-limit] + pool script

+
+
+

+

The ZFS channel program interface allows ZFS administrative + operations to be run programmatically as a Lua script. The entire script is + executed atomically, with no other administrative operations taking effect + concurrently. A library of ZFS calls is made available to channel program + scripts. Channel programs may only be run with root privileges.

+

A modified version of the Lua 5.2 interpreter is used to run + channel program scripts. The Lua 5.2 manual can be found at:

+ +

The channel program given by script will be + run on pool, and any attempts to access or modify + other pools will cause an error.

+
+
+

+
+
+
Display channel program output in JSON format. When this flag is specified + and standard output is empty, the channel program encountered an error. The + details of such an error will be printed to standard error in plain + text.
+
+
Executes a read-only channel program, which runs faster. The program + cannot change on-disk state by calling functions from the zfs.sync + submodule. The program can be used to gather information such as + property values and to determine whether changes would succeed (zfs.check.*). Without + this flag, all pending changes must be synced to disk before a channel + program can complete.
+
+ instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. The + default memory limit is 10 MB, and can be set to a maximum of 100 MB.
+
+

All remaining argument strings will be passed directly to the Lua + script as described in the LUA + INTERFACE section below.
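As an illustration, a hypothetical read-only reporting script might be run with raised limits, and a separate state-modifying script run without -n; rpool, the script names, and the trailing arguments are all placeholders:

zfs program -n -t 20000000 -m 33554432 rpool report.lua rpool/fs
zfs program rpool cleanup.lua rpool/fs@old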

+
+
+

+

A channel program can be invoked either from the command line, or + via a library call to + ().

+
+

+

Arguments passed to the channel program are converted to a Lua + table. If invoked from the command line, extra arguments to the Lua script + will be accessible as an array stored in the argument table with the key + 'argv':

+
+
args = ...
+argv = args["argv"]
+-- argv == {1="arg1", 2="arg2", ...}
+
+

If invoked from the libZFS interface, an arbitrary argument list + can be passed to the channel program, which is accessible via the same + "..." syntax in Lua:

+
+
args = ...
+-- args == {"foo"="bar", "baz"={...}, ...}
+
+

Note that because Lua arrays are 1-indexed, arrays passed to Lua + from the libZFS interface will have their indices incremented by 1. That is, + the element in arr[0] in a C array passed to a channel + program will be stored in arr[1] when accessed from + Lua.

+
+
+

+

Lua return statements take the form:

+
+
return ret0, ret1, ret2, ...
+
+

Return statements returning multiple values are permitted + internally in a channel program script, but attempting to return more than + one value from the top level of the channel program is not permitted and + will throw an error. However, tables containing multiple values can still be + returned. If invoked from the command line, a return statement:

+
+
a = {foo="bar", baz=2}
+return a
+
+

Will be output formatted as:

+
+
Channel program fully executed with return value:
+    return:
+        baz: 2
+        foo: 'bar'
+
+
+
+

+

If the channel program encounters a fatal error while running, a + non-zero exit status will be returned. If more information about the error + is available, a singleton list will be returned detailing the error:

+
+
error: "error string, including Lua stack trace"
+
+

If a fatal error is returned, the channel program may have not + executed at all, may have partially executed, or may have fully executed but + failed to pass a return value back to userland.

+

If the channel program exhausts an instruction or memory limit, a + fatal error will be generated and the program will be stopped, leaving the + program partially executed. No attempt is made to reverse or undo any + operations already performed. Note that because both the instruction count + and amount of memory used by a channel program are deterministic when run + against the same inputs and filesystem state, as long as a channel program + has run successfully once, you can guarantee that it will finish + successfully against a similar size system.

+

If a channel program attempts to return too large a value, the + program will fully execute but exit with a nonzero status code and no return + value.

+

+ ZFS API functions do not generate Fatal Errors when correctly invoked; they + return an error code and the channel program continues executing. See the + ZFS API section below for + function-specific details on error return codes.

+
+
+

+

When invoking a channel program via the libZFS interface, it is + necessary to translate arguments and return values from Lua values to their + C equivalents, and vice-versa.

+

There is a correspondence between nvlist values in C and Lua + tables. A Lua table which is returned from the channel program will be + recursively converted to an nvlist, with table values converted to their + natural equivalents:

+
+
string -> string
+number -> int64
+boolean -> boolean_value
+nil -> boolean (no value)
+table -> nvlist
+
+

Likewise, table keys are replaced by string equivalents as + follows:

+
+
string -> no change
+number -> signed decimal string ("%lld")
+boolean -> "true" | "false"
+
+

Any collision of table key strings (for example, the string + "true" and a true boolean value) will cause a fatal error.

+

Lua numbers are represented internally as signed 64-bit + integers.
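Putting these conversion rules together, a channel program that wants to hand a structured result back to the caller might end with a return statement such as the following sketch (the keys and values are arbitrary); the table becomes an nvlist containing a string, an int64, and a nested nvlist with a boolean_value:

-- returned table converts to: string, int64, nested nvlist
return {dataset="rpool/data", blocks=1024, flags={verified=true}}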

+
+
+
+

+

The following Lua built-in base library functions are + available:

+
+
assert                  rawlen
+collectgarbage          rawget
+error                   rawset
+getmetatable            select
+ipairs                  setmetatable
+next                    tonumber
+pairs                   tostring
+rawequal                type
+
+

All functions in the + , + , + and + + built-in submodules are also available. A complete list and documentation of + these modules is available in the Lua manual.

+

The following base library functions have been disabled + and are not available for use in channel programs:

+
+
dofile
+loadfile
+load
+pcall
+print
+xpcall
+
+
+
+

+
+

+

Each API function takes a fixed set of required positional + arguments and optional keyword arguments. For example, the destroy function + takes a single positional string argument (the name of the dataset to + destroy) and an optional "defer" keyword boolean argument. When + using parentheses to specify the arguments to a Lua function, only + positional arguments can be used:

+
+
zfs.sync.destroy("rpool@snap")
+
+

To use keyword arguments, functions must be called with a single + argument that is a Lua table containing entries mapping integers to + positional arguments and strings to keyword arguments:

+
+
zfs.sync.destroy({[1]="rpool@snap", defer=true})
+
+

The Lua language allows curly braces to be used in place of + parentheses as syntactic sugar for this calling convention:

+
+
zfs.sync.destroy{"rpool@snap", defer=true}
+
+
+
+

+

If an API function succeeds, it returns 0. If it fails, it returns + an error code and the channel program continues executing. API functions do + not generate Fatal Errors except in the case of an unrecoverable internal + file system error.

+

In addition to returning an error code, some functions also return + extra details describing what caused the error. This extra description is + given as a second return value, and will always be a Lua table, or Nil if no + error details were returned. Different keys will exist in the error details + table depending on the function and error case. Any such function may be + called expecting a single return value:

+
+
errno = zfs.sync.promote(dataset)
+
+

Or, the error details can be retrieved:

+
+
errno, details = zfs.sync.promote(dataset)
+if (errno == EEXIST) then
+    assert(details ~= Nil)
+    list_of_conflicting_snapshots = details
+end
+
+

The following global aliases for API function error return codes + are defined for use in channel programs:

+
+
EPERM     ECHILD      ENODEV      ENOSPC
+ENOENT    EAGAIN      ENOTDIR     ESPIPE
+ESRCH     ENOMEM      EISDIR      EROFS
+EINTR     EACCES      EINVAL      EMLINK
+EIO       EFAULT      ENFILE      EPIPE
+ENXIO     ENOTBLK     EMFILE      EDOM
+E2BIG     EBUSY       ENOTTY      ERANGE
+ENOEXEC   EEXIST      ETXTBSY     EDQUOT
+EBADF     EXDEV       EFBIG
+
+
+
+

+

For detailed descriptions of the exact behavior of any zfs + administrative operations, see the main zfs(8) manual + page.

+
+
+
Record a debug message in the zfs_dbgmsg log. A log of these messages can + be printed via mdb's "::zfs_dbgmsg" command, or can be monitored + live by running: +
+
  dtrace -n 'zfs-dbgmsg{trace(stringof(arg0))}'
+
+

msg (string)

+
Debug message to be printed.
+
+
+
Returns true if the given dataset exists, or false if it doesn't. A fatal + error will be thrown if the dataset is not in the target pool. That is, in + a channel program running on rpool, + zfs.exists("rpool/nonexistent_fs") returns false, but + zfs.exists("somepool/fs_that_may_exist") will error. +

dataset (string)

+
Dataset to check for existence. Must be in the + target pool.
+
+
+
Returns two values. First, a string, number or table containing the + property value for the given dataset. Second, a string containing the + source of the property (i.e. the name of the dataset in which it was set + or nil if it is readonly). Throws a Lua error if the dataset is invalid or + the property doesn't exist. Note that Lua only supports int64 number types + whereas ZFS number properties are uint64. This means very large values + (like guid) may wrap around and appear negative. +

dataset (string)

+
Filesystem or snapshot path to retrieve properties + from.
+

property (string)

+
Name of property to retrieve. All filesystem, + snapshot and volume properties are supported except for 'mounted' and + 'iscsioptions.' Also supports the 'written@snap' and 'written#bookmark' + properties and the '<user|group><quota|used>@id' properties, + though the id must be in numeric form.
+
+
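A short fragment tying these information functions together, assuming the program runs on a hypothetical pool rpool and uses the standard zfs.exists(), zfs.get_prop() and zfs.debug() calls described above:

-- log the compression setting of a dataset, if the dataset exists
if zfs.exists("rpool/data") then
    value, source = zfs.get_prop("rpool/data", "compression")
    zfs.debug("rpool/data compression=" .. tostring(value))
end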
+
+
+
The sync submodule contains functions that modify the on-disk state. They + are executed in "syncing context". +

The available sync submodule functions are as follows:

+
+
+
Destroy the given dataset. Returns 0 on successful destroy, or a + nonzero error code if the dataset could not be destroyed (for example, + if the dataset has any active children or clones). +

dataset (string)

+
Filesystem or snapshot to be destroyed.
+

[optional] defer (boolean)

+
Valid only for destroying snapshots. If set to + true, and the snapshot has holds or clones, allows the snapshot to be + marked for deferred deletion rather than failing.
+
+
+
Promote the given clone to a filesystem. Returns 0 on successful + promotion, or a nonzero error code otherwise. If EEXIST is returned, + the second return value will be an array of the clone's snapshots + whose names collide with snapshots of the parent filesystem. +

dataset (string)

+
Clone to be promoted.
+
+
+
Rollback to the previous snapshot for a dataset. Returns 0 on + successful rollback, or a nonzero error code otherwise. Rollbacks can + be performed on filesystems or zvols, but not on snapshots or mounted + datasets. EBUSY is returned in the case where the filesystem is + mounted. +

filesystem (string)

+
Filesystem to rollback.
+
+
+
Create a snapshot of a filesystem. Returns 0 if the snapshot was + successfully created, and a nonzero error code otherwise. +

Note: Taking a snapshot will fail on any pool older than + legacy version 27. To enable taking snapshots from ZCP scripts, the + pool must be upgraded.

+

dataset (string)

+
Name of snapshot to create.
+
+
+
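A minimal syncing-context sketch (the dataset and snapshot names are placeholders):

-- take a snapshot and record the outcome in the debug log
err = zfs.sync.snapshot("rpool/data@nightly")
if (err ~= 0) then
    zfs.debug("snapshot of rpool/data failed with error " .. err)
end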
+
+
For each function in the zfs.sync submodule, there is a corresponding + zfs.check function which performs a "dry run" of the same + operation. Each takes the same arguments as its zfs.sync counterpart and + returns 0 if the operation would succeed, or a non-zero error code if it + would fail, along with any other error details. That is, each has the same + behavior as the corresponding sync function except for actually executing + the requested change. For example, + + returns 0 if + + would successfully destroy the dataset. +

The available zfs.check functions are:

+
+
+
 
+
+
 
+
+
 
+
+
 
+
+
+
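A common pattern is to probe an operation before committing to it, as in this sketch (names are placeholders):

-- only destroy the snapshot if the dry run says it would succeed
if (zfs.check.destroy("rpool/data@old") == 0) then
    zfs.sync.destroy("rpool/data@old")
end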
+
The zfs.list submodule provides functions for iterating over datasets and + properties. Rather than returning tables, these functions act as Lua + iterators, and are generally used as follows: +
+
for child in zfs.list.children("rpool") do
+    ...
+end
+
+

The available zfs.list functions are:

+
+
+
Iterate through all clones of the given snapshot. +

snapshot (string)

+
Must be a valid snapshot path in the current + pool.
+
+
+
Iterate through all snapshots of the given dataset. Each snapshot is + returned as a string containing the full dataset name, e.g. + "pool/fs@snap". +

dataset (string)

+
Must be a valid filesystem or volume.
+
+
+
Iterate through all direct children of the given dataset. Each child + is returned as a string containing the full dataset name, e.g. + "pool/fs/child". +

dataset (string)

+
Must be a valid filesystem or volume.
+
+
+
Iterate through all user properties for the given dataset. +

dataset (string)

+
Must be a valid filesystem, snapshot, or + volume.
+
+
+
Returns an array of strings, the names of the valid system (non-user + defined) properties for the given dataset. Throws a Lua error if the + dataset is invalid. +

dataset (string)

+
Must be a valid filesystem, snapshot or + volume.
+
+
+
+
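The iterators can also be used purely for reporting, as in this sketch (rpool/data is a placeholder):

-- count the snapshots of a dataset and return the total
count = 0
for snap in zfs.list.snapshots("rpool/data") do
    count = count + 1
end
return count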
+
+
+
+

+
+

+

The following channel program recursively destroys a filesystem + and all its snapshots and children in a naive manner. Note that this does + not involve any error handling or reporting.

+
+
function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        zfs.sync.destroy(snap)
+    end
+    zfs.sync.destroy(root)
+end
+destroy_recursive("pool/somefs")
+
+
+
+

+

A more verbose and robust version of the same channel program, + which properly detects and reports errors, and also takes the dataset to + destroy as a command line argument, would be as follows:

+
+
succeeded = {}
+failed = {}
+
+function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        err = zfs.sync.destroy(snap)
+        if (err ~= 0) then
+            failed[snap] = err
+        else
+            succeeded[snap] = err
+        end
+    end
+    err = zfs.sync.destroy(root)
+    if (err ~= 0) then
+        failed[root] = err
+    else
+        succeeded[root] = err
+    end
+end
+
+args = ...
+argv = args["argv"]
+
+destroy_recursive(argv[1])
+
+results = {}
+results["succeeded"] = succeeded
+results["failed"] = failed
+return results
+
+
+
+

+

The following function performs a forced promote operation by + attempting to promote the given clone and destroying any conflicting + snapshots.

+
+
function force_promote(ds)
+   errno, details = zfs.check.promote(ds)
+   if (errno == EEXIST) then
+       assert(details ~= Nil)
+       for i, snap in ipairs(details) do
+           zfs.sync.destroy(ds .. "@" .. snap)
+       end
+   elseif (errno ~= 0) then
+       return errno
+   end
+   return zfs.sync.promote(ds)
+end
+
+
+
+
+ + + + + +
February 26, 2019   Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zfs.8.html b/man/v0.8/8/zfs.8.html new file mode 100644 index 000000000..4ece7c64d --- /dev/null +++ b/man/v0.8/8/zfs.8.html @@ -0,0 +1,4308 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)   System Manager's Manual (smm)   ZFS(8)
+
+
+

+

zfsconfigures + ZFS file systems

+
+
+

+ + + + + +
zfs-?V
+
+ + + + + +
zfscreate [-p] + [-o + property=value]... + filesystem
+
+ + + + + +
zfscreate [-ps] + [-b blocksize] + [-o + property=value]... + -V size + volume
+
+ + + + + +
zfsdestroy [-Rfnprv] + filesystem|volume
+
+ + + + + +
zfsdestroy [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]...
+
+ + + + + +
zfsdestroy + filesystem|volume#bookmark
+
+ + + + + +
zfssnapshot [-r] + [-o + property=value]... + filesystem@snapname|volume@snapname...
+
+ + + + + +
zfsrollback [-Rfr] + snapshot
+
+ + + + + +
zfsclone [-p] + [-o + property=value]... + snapshot + filesystem|volume
+
+ + + + + +
zfspromote + clone-filesystem
+
+ + + + + +
zfsrename [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
+ + + + + +
zfsrename [-fp] + filesystem|volume + filesystem|volume
+
+ + + + + +
zfsrename -r + snapshot snapshot
+
+ + + + + +
zfslist + [-r|-d + depth] [-Hp] + [-o + property[,property]...] + [-s property]... + [-S property]... + [-t + type[,type]...] + [filesystem|volume|snapshot]...
+
+ + + + + +
zfsset + property=value + [property=value]... + filesystem|volume|snapshot...
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot...
+
+ + + + + +
zfsupgrade
+
+ + + + + +
zfsupgrade -v
+
+ + + + + +
zfsupgrade [-r] + [-V version] + -a | filesystem
+
+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot
+
+ + + + + +
zfsproject + [-d|-r] + file|directory...
+
+ + + + + +
zfsproject -C + [-kr] + file|directory...
+
+ + + + + +
zfsproject -c + [-0] + [-d|-r] + [-p id] + file|directory...
+
+ + + + + +
zfsproject [-p + id] [-rs] + file|directory...
+
+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Olv] + [-o options] + -a | filesystem
+
+ + + + + +
zfsunmount [-f] + -a | + filesystem|mountpoint
+
+ + + + + +
zfsshare -a | + filesystem
+
+ + + + + +
zfsunshare -a | + filesystem|mountpoint
+
+ + + + + +
zfsbookmark snapshot + bookmark
+
+ + + + + +
zfssend [-DLPRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-LPcenvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend [-Penv] + -t receive_resume_token
+
+ + + + + +
zfsreceive [-Fhnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-Fhnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+ + + + + +
zfsallow + filesystem|volume
+
+ + + + + +
zfsallow [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + -@setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfshold [-r] + tag snapshot...
+
+ + + + + +
zfsholds [-rH] + snapshot...
+
+ + + + + +
zfsrelease [-r] + tag snapshot...
+
+ + + + + +
zfsdiff [-FHt] + snapshot + snapshot|filesystem
+
+ + + + + +
zfsprogram [-jn] + [-t instruction-limit] + [-m memory-limit] + pool script [--] arg1 + ...
+
+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a | filesystem
+
+ + + + + +
zfsunload-key [-r] + -a | filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+ + + + + +
zfsversion
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace. For + example:

+
+
pool/{filesystem,volume,snapshot}
+
+

where the maximum length of a dataset name is + MAXNAMELEN (256 bytes) and the maximum amount of + nesting allowed in a path is 50 levels deep.

+

A dataset can be one of the following:

+
+
+
A ZFS dataset of type filesystem can be mounted within + the standard system namespace and behaves like other file systems. While + ZFS file systems are designed to be POSIX compliant, known issues exist + that prevent compliance in some cases. Applications that depend on + standards conformance might fail due to non-standard behavior when + checking file system free space.
+
+
A logical volume exported as a raw or block device. This type of dataset + should only be used when a block device is required. File systems are + typically used in most environments.
+
+
A read-only version of a file system or volume at a given point in time. + It is specified as + filesystem@name or + volume@name.
+
+
Much like a snapshot, but without the hold on on-disk + data. It can be used as the source of a send (but not for a receive). It + is specified as + filesystem#name or + volume#name.
+
+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system, such as + mounting and unmounting, taking snapshots, and setting properties. The + physical storage characteristics, however, are managed by the + zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. + Snapshots can be created extremely quickly, and initially consume no + additional space within the pool. As data within the active dataset changes, + the snapshot consumes more data than would otherwise be shared with the + active dataset.

+

Snapshots can have arbitrary names. Snapshots of volumes can be + cloned or rolled back; their visibility is determined by the + snapdev property of the parent volume.

+

File system snapshots can be accessed under the + .zfs/snapshot directory in the root of the file + system. Snapshots are automatically mounted on demand and may be unmounted + at regular intervals. The visibility of the .zfs + directory can be controlled by the snapdir property.

+
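For example, for a file system mounted at a hypothetical /tank/home, its snapshots can be browsed directly (yesterday is a placeholder snapshot name):

ls /tank/home/.zfs/snapshot
ls /tank/home/.zfs/snapshot/yesterday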
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks cannot be accessed through the + filesystem in any way. From a storage standpoint, a bookmark just provides a + way to reference when a snapshot was created as a distinct object. Bookmarks + are initially tied to a snapshot, not the filesystem or volume, and they + will survive if the snapshot itself is destroyed. Since they are very + lightweight, there's little incentive to destroy them.

+
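One possible workflow, sketched with placeholder names: create a bookmark from a snapshot, and later use the bookmark as the incremental source of a send stream even if the snapshot has since been destroyed:

zfs bookmark rpool/data@2020-01-01 rpool/data#2020-01-01
zfs send -i rpool/data#2020-01-01 rpool/data@2020-02-01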
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a snapshot is + cloned, it creates an implicit dependency between the parent and child. Even + though the clone is created somewhere else in the dataset hierarchy, the + original snapshot cannot be destroyed as long as a clone exists. The + origin property exposes this dependency, and the + destroy command lists any such dependencies, if they + exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.

+
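A typical clone-and-promote sequence, sketched with placeholder names:

zfs snapshot tank/prod@template
zfs clone tank/prod@template tank/test
zfs promote tank/test

After the promote, tank/prod becomes a clone of a snapshot belonging to tank/test, and can be destroyed if it is no longer needed.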
+
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set + in the mountpoint property. This directory is created as + needed, and ZFS automatically mounts the file system when the + zfs mount + -a command is invoked (without editing + /etc/fstab). The mountpoint + property can be inherited, so if pool/home has a mount + point of /export/stuff, then + + automatically inherits a mount point of + /export/stuff/user.

+

A file system mountpoint property of + none prevents the file system from being mounted.

+

If needed, ZFS file systems can also be managed with traditional + tools (mount, umount, + /etc/fstab). If a file system's mount point is set + to legacy, ZFS makes no attempt to manage the file system, + and the administrator is responsible for mounting and unmounting the file + system. Because pools must be imported before a legacy mount can succeed, + administrators should ensure that legacy mounts are only attempted after the + zpool import process finishes at boot time. For example, on machines using + systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that the zfs-import completes before systemd attempts + mounting the filesystem. See systemd.mount(5) for details.
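A minimal legacy-mount setup might therefore look like the following sketch (pool/data and /mnt/data are hypothetical, and the fstab line uses the ordinary fstab(5) format):

zfs set mountpoint=legacy pool/data

# /etc/fstab
pool/data   /mnt/data   zfs   defaults,x-systemd.requires=zfs-import.target   0 0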

+
+
+

+

Deduplication is the process for removing redundant data at the + block level, reducing the total amount of data stored. If a file system has + the dedup property enabled, duplicate data blocks are + removed synchronously. The result is that only unique data is stored and + common components are shared among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.
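To make the guideline concrete: under this rule of thumb, a pool expected to hold 16 TiB of deduplicated data should have roughly 20 GiB of RAM (16 × 1.25 GiB) available for the deduplication tables.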

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow IO and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk IO.

+

Before creating a pool with deduplication + enabled, ensure that you have planned your hardware requirements + appropriately and implemented appropriate recovery practices, such as + regular backups. As an alternative to deduplication consider using + , + as a less resource-intensive alternative.

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using + human-readable suffixes (for example, k, + , + M, + , and so + forth, up to Z for zettabyte). The following are all valid + (and equal) specifications: 1536M, 1.5g, 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its shortened column + name, avail.

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
For encrypted datasets, indicates where the dataset is currently + inheriting its encryption key from. Loading or unloading a key for the + encryptionroot will implicitly load / unload the key for + any inheriting datasets (see zfs + load-key and zfs + unload-key for details). Clones will always share + an encryption key with their origin. See the + Encryption section for details.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
Indicates if an encryption key is currently loaded into ZFS. The possible + values are none, available, and + unavailable. See zfs + load-key and zfs + unload-key.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For file systems, indicates whether the file system is currently mounted. + This property can be either + or + .
+
+
A unique identifier for this dataset within the pool. Unlike the dataset's + guid, the objsetid of a dataset is + not transferred to other pools when the snapshot is copied with a + send/receive operation. The objsetid can be reused (for + a new dataset) after the dataset is deleted.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive -s, this opaque token can be provided to + zfs send -t to resume and complete the zfs + receive.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: filesystem, + volume, or snapshot.
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section) is space that is + referenced exclusively by this snapshot. If this snapshot is destroyed, + the amount of used space will be freed. Space that is + shared by multiple snapshots isn't accounted for in this metric. When a + snapshot is destroyed, space that was previously shared with this + snapshot can become unique to snapshots adjacent to it, thus changing + the used space of those snapshots. The used space of the latest snapshot + can also be affected by changes in the file system. Note that the + used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced does not + take into account pending changes. Pending changes are generally + accounted for within a few seconds. Committing a change to a disk using + fsync(2) or O_SYNC does not + necessarily guarantee that the space usage information is updated + immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du and + ls -s. See the + zfs userspace subcommand + for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.

+

The userused@... + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the following forms:

+ +

Files created on Linux always have POSIX owners.

+
+
@user
+
The userobjused property is similar to + userused but instead it counts the number of objects + consumed by a user. This property counts all objects allocated on behalf + of the user; it may differ from the results of system tools such as + df -i.

When the property xattr=on is set on a file + system, additional objects will be created per file to store extended + attributes. These additional objects are reflected in the + userobjused value and are counted against the user's + userobjquota. When a file system is configured to use + xattr=sa, no additional internal objects are normally + required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
@project
+
The amount of space consumed by the specified project in this dataset. + A project is identified via the project identifier (ID), an object-based + numerical attribute. An object can inherit the project ID from its parent + object (if the parent has the inherit-project-ID flag, which can be set + and changed via chattr + -/+P or zfs project + -s) when it is created. A privileged user can + set and change an object's project ID via chattr + -p or zfs project + -s at any time. Space is charged to the project of + each file, as displayed by lsattr + -p or zfs project. See the + userused@user property for more + information.

The root user, or a user who has been granted the + projectused privilege with zfs + allow, can access all projects' usage.

+
+
@project
+
The projectobjused is similar to + projectused but instead it counts the number of objects + consumed by project. When the property xattr=on is set + on a fileset, ZFS will create additional objects per-file to store + extended attributes. These additional objects are reflected in the + projectobjused value and are counted against the + project's projectobjquota. When a filesystem is + configured to use xattr=sa no additional internal + objects are required. See the + userobjused@user property for more + information. +

The root user, or a user who has been granted the + projectobjused privilege with zfs + allow, can access all projects' objects usage.

+
+
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 8 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space referenced by this dataset, that was + written since the previous snapshot (i.e. that is not referenced by the + previous snapshot).
+
@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which for + clones may be a snapshot in the origin's filesystem (or the origin of + the origin's filesystem, etc.)

+
+
+
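As a quick illustration, several of the read-only statistics above can be inspected together with zfs get (a sketch; tank/home is a hypothetical dataset):

zfs get used,available,referenced,compressratio,creation tank/home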

The following native properties can be used to change the behavior + of a ZFS dataset.

+
+
=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
does not inherit any ACEs.
+
+
only inherits inheritable ACEs that specify "deny" + permissions.
+
+
default, removes the + + and + + permissions when the ACE is inherited.
+
+
inherits all inheritable ACEs without any modifications.
+
+
same meaning as passthrough, except that the + , + , + and + + ACEs inherit the execute permission only if the file creation mode + also requests the execute bit.
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + POSIX ACLs.

+
+
=off|noacl|posixacl
+
Controls whether ACLs are enabled and if so what type of ACL to use. +
+
+
default, when a file system has the acltype property + set to off then ACLs are disabled.
+
+
an alias for off
+
+
indicates POSIX ACLs should be used. POSIX ACLs are specific to Linux + and are not functional on other platforms. POSIX ACLs are stored as an + extended attribute and therefore will not overwrite any existing NFSv4 + ACLs which may be set.
+
+

To obtain the best performance when setting + posixacl users are strongly encouraged to set the + xattr=sa property. This will result in the POSIX ACL + being stored more efficiently on disk. But as a consequence, all new + extended attributes will only be accessible from OpenZFS implementations + which support the xattr=sa property. See the + xattr property for more details.

+
+
=on|off
+
Controls whether the access time for files is updated when they are read. + Turning this property off avoids producing write traffic when reading + files and can result in significant performance gains, though it might + confuse mailers and other similar utilities. The values + on and off are equivalent to the + atime and + + mount options. The default value is on. See also + relatime below.
+
=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.

+
+
=on|off||fletcher4|sha256|noparity|sha512|skein|edonr
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, and + edonr checksum algorithms require enabling the + appropriate features on the pool. These pool features are not supported + by GRUB and must not be used on the pool if GRUB needs to access the + pool (e.g. for /boot).

+

Please see zpool-features(5) for more + information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
=on|off|gzip|gzip-N|lz4|lzjb|zle
+
Controls the compression algorithm used for this dataset. +

Setting compression to on indicates that the + current default compression algorithm should be used. The default + balances compression and decompression speed with compression ratio, and + is expected to work well on a wide variety of workloads. Unlike all + other settings for this property, on does not select a + fixed compression type. As new compression algorithms are added to ZFS + and enabled on a pool, the default compression algorithm may change. The + current default compression algorithm is either lzjb + or, if the lz4_compress feature is enabled, + lz4.

+

The lz4 compression algorithm + is a high-performance replacement for the lzjb + algorithm. It features significantly faster compression and + decompression, as well as a moderately higher compression ratio than + lzjb, but can only be used on pools with the + lz4_compress feature set to + . See + zpool-features(5) for details on ZFS feature flags and + the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm + uses the same compression as the gzip(1) command. You + can specify the gzip level by using the value + gzip-N, where N is + an integer from 1 (fastest) to 9 (best compression ratio). Currently, + gzip is equivalent to + (which + is also the default for gzip(1)).

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its + shortened column name + . + Changing this property affects only newly-written data.

+

When any setting except off is selected, + compression will explicitly check for blocks consisting of only zeroes + (the NUL byte). When a zero-filled block is detected, it is stored as a + hole and not compressed using the indicated compression algorithm.

+

Any block being compressed must be no larger than 7/8 of its + original size after compression, otherwise the compression will not be + considered worthwhile and the block saved uncompressed. Note that when + the logical block is less than 8 times the disk sector size this + effectively reduces the necessary compression ratio; for example 8k + blocks on disks with 4k disk sectors must compress to 1/2 or less of + their original size.

+
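For example, assuming the lz4_compress feature is enabled on the pool, lz4 can be enabled on a hypothetical dataset, affecting only blocks written after the change:

zfs set compression=lz4 pool/data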
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the file system file system being + mounted. See selinux(8) for more information.
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
=1||3
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a + missing top-level vdev. Do NOT create, for example a + two-disk striped pool and set + on + some datasets thinking you have setup redundancy for them. When a disk + fails you will not be able to import the pool and will have lost all of + your data.

+

Encrypted datasets may not have + copies=3 since the implementation + stores some encryption metadata where the third copy would normally + be.

+
+
=on|off
+
Controls whether device nodes can be opened on this file system. The + default value is on. The values on and + off are equivalent to the dev and + + mount options.
+
=off|on|verify||||
+
Configures deduplication for a dataset. The default value is + off. The default deduplication checksum is + sha256 (this may change in the future). When + dedup is enabled, the checksum defined here overrides + the checksum property. Setting the value to + verify has the same effect as the setting + +

If set to verify, ZFS will do a byte-to-byte + comparison when two blocks have the same signature, to make sure + the block contents are identical. Specifying verify is + mandatory for the edonr algorithm.

+

Unless necessary, deduplication should NOT be enabled on a + system. See Deduplication + above.

+
+
=legacy|auto|||||
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy requires the + large_dnode pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the workload makes heavy + use of extended attributes. This may be applicable to SELinux-enabled + systems, Lustre servers, and Samba servers, for example. Literal values + are supported for cases where the optimal size is known in advance and + for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode feature, or if you + need to import this pool on a system that doesn't support the + large_dnode feature.

+

This property can also be referred to by its + shortened column name, + .

+
+
=off|on||||||aes-256-gcm
+
Controls the encryption cipher suite (block cipher, key length, and mode) + used for this dataset. Requires the encryption feature + to be enabled on the pool. Requires a keyformat to be + set at dataset creation time. +

Selecting encryption=on + when creating a dataset indicates that the default encryption suite will + be selected, which is currently aes-256-gcm. In order + to provide consistent data protection, encryption must be specified at + dataset creation time and it cannot be changed afterwards.

+

For more details and caveats about encryption see the + Encryption section.

+
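A sketch of creating an encrypted dataset (tank/secure is a placeholder; keyformat and keylocation are described below):

zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt tank/secure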
+
=||passphrase
+
Controls what format the user's encryption key will be provided as. This + property is only set when the dataset is encrypted. +

Raw keys and hex keys must be 32 bytes long (regardless of the + chosen encryption suite) and must be randomly generated. A raw key can + be generated with the following command:

+
+
# dd if=/dev/urandom of=/path/to/output/key bs=32 count=1
+
+

Passphrases must be between 8 and 512 bytes long and will be + processed through PBKDF2 before being used (see the + pbkdf2iters property). Even though the encryption + suite cannot be changed after dataset creation, the keyformat can be + with zfs change-key.

+
+
=prompt|
+
Controls where the user's encryption key will be loaded from by default + for commands such as zfs + load-key and zfs + mount -l. This property is + only set for encrypted datasets which are encryption roots. If + unspecified, the default is + +

Even though the encryption suite cannot be changed after + dataset creation, the keylocation can be with either + zfs set or + zfs change-key. If + prompt is selected ZFS will ask for the key at the + command prompt when it is required to access the encrypted data (see + zfs load-key for + details). This setting will also allow the key to be passed in via + STDIN, but users should be careful not to place keys which should be + kept secret on the command line. If a file URI is selected, the key will + be loaded from the specified absolute file path.

+
+
=iterations
+
Controls the number of PBKDF2 iterations that a + passphrase encryption key should be run through when + processing it into an encryption key. This property is only defined when + encryption is enabled and a keyformat of passphrase is + selected. The goal of PBKDF2 is to significantly increase the + computational difficulty needed to brute force a user's passphrase. This + is accomplished by forcing the attacker to run each passphrase through a + computationally expensive hashing function many times before they arrive + at the resulting key. A user who actually knows the passphrase will only + have to pay this cost once. As CPUs become better at processing, this + number should be raised to ensure that a brute force attack is still not + possible. The current default is + + and the minimum is + . + This property may be changed with zfs + change-key.
+
=on|off
+
Controls whether processes can be executed from within this file system. + The default value is on. The values on + and off are equivalent to the exec and + + mount options.
+
=count|none
+
Limits the number of filesystems and volumes that can exist under this + point in the dataset tree. The limit is not enforced if the user is + allowed to change the limit. Setting a filesystem_limit + to on a descendent of a filesystem that already has a + filesystem_limit does not override the ancestor's + filesystem_limit, but rather imposes an additional + limit. This feature must be enabled to be used (see + zpool-features(5)).
+
=size
+
This value represents the threshold block size for including small file + blocks into the special allocation class. Blocks smaller than or equal to + this value will be assigned to the special allocation class while greater + blocks will be assigned to the regular class. Valid values are zero or a + power of two from 512B up to 1M. The default size is 0 which means no + small file blocks will be allocated in the special class. +

Before setting this property, a special class vdev must be + added to the pool. See zpool(8) for more details on + the special allocation class.

+
+
=path|none|legacy
+
Controls the mount point used for this file system. See the + Mount Points section for more + information on how this property is used. +

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none, or if they were mounted before the property + was changed. In addition, any shared file systems are unshared and + shared in the new location.

+
+
nbmand=on|off
+
Controls whether the file system should be mounted with + nbmand (Non-Blocking mandatory locks). This is used for + SMB clients. Changes to this property only take effect when the file + system is unmounted and remounted. See mount(8) for more + information on nbmand mounts. This property is not used + on Linux.
+
overlay=off|on
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux file + systems. For consistency with OpenZFS on other platforms overlay mounts + are off by default. Set to on to + enable overlay mounts.
+
primarycache=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is + set to all, then both user data and metadata are cached. + If this property is set to none, then neither user data + nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
quota=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.
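For example (the dataset name is hypothetical), a hard limit on a home file system and everything beneath it could be set as follows:

    zfs set quota=50G tank/home/alice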

+
+
snapshot_limit=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(5)).
+
userquota@user=size|none
+
Limits the amount of space consumed by the specified user. User space + consumption is identified by the + userspace@user + property.

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace subcommand + for more information.

+

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + userquota privilege with zfs + allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@... properties are not + displayed by zfs get + all. The user's name must be appended after the + @ symbol, using one of the following forms:

  • POSIX name (for example, joe)
  • POSIX numeric ID (for example, 789)
  • SID name (for example, joe.smith@mydomain)
  • SID numeric ID (for example, S-1-123-456-789)

Files created on Linux always have POSIX owners.
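As a hedged sketch (the user and dataset names are hypothetical), a per-user quota could be set and then inspected; note that zfs get must name the property explicitly since it is not shown by zfs get all:

    zfs set userquota@alice=10G tank/home
    zfs get userquota@alice tank/home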

+
+
userobjquota@user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
groupquota@group=size|none
+
Limits the amount of space consumed by the specified group. Group space + consumption is identified by the + groupused@group + property.

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
groupobjquota@group=size|none
+
The groupobjquota is similar to groupquota but it limits the + number of objects a group can consume. Please refer to + userobjused for more information about how objects are + counted.
+
projectquota@project=size|none
+
Limits the amount of space consumed by the specified project. Project + space consumption is identified by the + projectused@project + property. Please refer to projectused for more + information about how a project is identified and set/changed.

The root user, or a user who has been granted the + projectquota privilege with zfs + allow, can access all projects' quota.

+
+
projectobjquota@project=size|none
+
The projectobjquota is similar to + projectquota but it limits the number of objects a + project can consume. Please refer to userobjused for more + information about how objects are counted.
+
readonly=on|off
+
Controls whether this dataset can be modified. The default value is + off. The values on and + off are equivalent to the ro and + rw mount options.

This property can also be referred to by its + shortened column name, + rdonly.

+
+
recordsize=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.

+

The size specified must be a power of two greater than or + equal to 512 and less than or equal to 128 Kbytes. If the + large_blocks feature is enabled on the pool, the size + may be up to 1 Mbyte. See zpool-features(5) for + details on ZFS feature flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.

+

This property can also be referred to by its + shortened column name, + recsize.
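For illustration only (the dataset name and the 16K figure are hypothetical and should match the actual database record size), a tuned record size might be set as follows:

    zfs set recordsize=16K tank/db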

+
+
redundant_metadata=all|most
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 100 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

The default value is all.

+
+
refquota=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
refreservation=size|none|auto
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

If refreservation is set to + auto, a volume is thick provisioned (or "not + sparse"). refreservation=auto + is only supported on volumes. See volsize in the + Native Properties section + for more information about sparse volumes.

+

This property can also be referred to by its + shortened column name, + refreserv.

+
+
relatime=on|off
+
Controls the manner in which the access time is updated when + atime=on + is set. Turning this property on causes the access time to be updated + relative to the modify or change time. Access time is only updated if the + previous access time was earlier than the current modify or change time or + if the existing access time hasn't been updated within the past 24 hours. + The default value is off. The values + on and off are equivalent to the + relatime and + norelatime + mount options.
+
reservation=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its + shortened column name, + reserv.

+
+
secondarycache=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property + is set to all, then both user data and metadata are + cached. If this property is set to none, then neither + user data nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
setuid=on|off
+
Controls whether the setuid bit is respected for the file system. The + default value is on. The values on and + off are equivalent to the suid and + nosuid mount options.
+
sharesmb=on|off|opts
+
Controls whether the file system is shared by using + Samba USERSHARES and what options are to be used. Otherwise, the file + system is automatically shared and unshared with the + zfs share and + zfs unshare commands. If + the property is set to on, the net(8) command is invoked + to create a USERSHARE.

Because SMB shares require a resource name, a unique resource + name is constructed from the dataset name. The constructed name is a + copy of the dataset name except that the characters in the dataset name, + which would be invalid in the resource name, are replaced with + underscore (_) characters. Linux does not currently support additional + options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) + "Everyone:F" ("F" stands for "full + permissions", i.e. read and write permissions) and no guest access + by default (which means Samba must be able to authenticate a real user + via system passwd/shadow, LDAP or smbpasswd). This means that any + additional access control (disallowing access for specific users, etc.) + must be done on the underlying file system.

+
+
sharenfs=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are + to be used. A file system with a sharenfs property of + off is managed with the exportfs(8) + command and entries in the + /etc/exports + file. Otherwise, the file system is automatically shared and unshared with + the zfs share and + zfs unshare commands. If + the property is set to on, the dataset is shared using + the default options: +

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.
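A hedged example (the dataset name and subnet are hypothetical); the option string is passed through to the NFS export machinery, so consult exports(5) for valid options:

    zfs set sharenfs="rw=@192.168.0.0/24" tank/export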

+
+
logbias=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
+
snapdev=hidden|visible
+
Controls whether the volume snapshot devices under + /dev/zvol/&lt;pool&gt; + are hidden or visible. The default value is hidden.
+
snapdir=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section. The default value + is hidden.
+
sync=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
+
version=N|current
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
volsize=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse + volume" (also known as "thin provisioned") can be created + by specifying the -s option to the + zfs create + -V command, or by changing the value of the + refreservation property (or + reservation property on pool version 8 or earlier) + after the volume has been created. A "sparse volume" is a + volume where the value of refreservation is less than + the size of the volume plus the space required to store its metadata. + Consequently, writes to a sparse volume can fail with + ENOSPC when the pool is low on space. For a + sparse volume, changes to volsize are not reflected in + the refreservation. + A volume that is not sparse is said to be "thick provisioned". + A sparse volume can become thick provisioned by setting + refreservation to auto.
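A minimal sketch (the volume name is hypothetical) of creating a sparse volume and later converting it to thick provisioning:

    zfs create -s -V 100G tank/vol1          # sparse (thin provisioned)
    zfs set refreservation=auto tank/vol1    # convert to thick provisioned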

+
+
volmode=default|full|geom|dev|none
+
This property specifies how volumes should be exposed to the OS. Setting + it to full exposes volumes as fully fledged block + devices, providing maximal functionality. The value geom + is just an alias for full and is kept for compatibility. + Setting it to dev hides its partitions. Volumes with the + property set to none are not exposed outside ZFS, but + can be snapshotted, cloned, replicated, and so on, which can be suitable for + backup purposes. The value default means that volume + exposure is controlled by the system-wide tunable + zvol_volmode, where full, + dev and none are encoded as 1, 2 and 3 + respectively. The default value is full.
+
vscan=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used on Linux.
+
xattr=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two + styles of extended attributes are supported: directory-based and + system-attribute-based. +

The default value of on enables directory-based + extended attributes. This style of extended attribute imposes no + practical limit on either the size or number of attributes which can be + set on a file, although under Linux the getxattr(2) + and setxattr(2) system calls limit the maximum size to + 64K. This is the most compatible style of extended attribute and is + supported by all OpenZFS implementations.

+

System attribute based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk IO required. Up to + 64K of data may be stored per-file in the space reserved for system + attributes. If there is not enough space available for an extended + attribute then it will be automatically written as a directory based + xattr. System attribute based extended attributes are not accessible on + platforms which do not support the xattr=sa + feature.

+

The use of system attribute based xattrs is strongly + encouraged for users of SELinux or POSIX ACLs. Both of these features + heavily rely on extended attributes and benefit significantly from the + reduced access time.

+

The values on and + off are equivalent to the xattr and + noxattr mount + options.
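For illustration only (the dataset name is hypothetical):

    # Store new extended attributes as system attributes for this dataset.
    zfs set xattr=sa tank/fs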

+
+
zoned=on|off
+
Controls whether the dataset is managed from a non-global zone. Zones are + a Solaris feature and are not relevant on Linux. The default value is + off.
+
+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.

+
+
casesensitivity=sensitive|insensitive|mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".

+
+
normalization=none|formC|formD|formKC|formKD
+
Indicates whether the file system should perform a + Unicode + normalization of file names whenever two file names are compared, and + which normalization algorithm should be used. File names are always stored + unmodified; names are normalized as part of any comparison process. If + this property is set to a legal value other than none, + and the utf8only property was left unspecified, the + utf8only property is automatically set to + on. The default value of the + normalization property is none. This + property cannot be changed after the file system is created.
+
utf8only=on|off
+
Indicates whether the file system should reject file names that include + characters that are not present in the + UTF-8 + character code set. If this property is explicitly set to + off, the normalization property must either not be + explicitly set or be set to none. The default value for + the utf8only property is off. This + property cannot be changed after the file system is created.
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.

+
+
+

Temporary Mount Point Properties

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
    PROPERTY                MOUNT OPTION
+    atime                   atime/noatime
+    canmount                auto/noauto
+    devices                 dev/nodev
+    exec                    exec/noexec
+    readonly                ro/rw
+    relatime                relatime/norelatime
+    setuid                  suid/nosuid
+    xattr                   xattr/noxattr
+
+

In addition, these options can be set on a + per-mount basis using the -o option, without + affecting the property that is stored on disk. The values specified on the + command line override the values stored in the dataset. The + nosuid option is an alias for + nodevices,nosetuid. + These properties are reported as "temporary" by the + zfs get command. If the + properties are changed while the dataset is mounted, the new setting + overrides any temporary settings.
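As a hedged example (the dataset name is hypothetical), a file system could be mounted read-only for a single mount without changing the stored readonly property:

    zfs mount -o ro tank/data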

+
+
+

User Properties

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon + (":") character to distinguish them from native + properties. They may contain lowercase letters, numbers, and the following + punctuation characters: colon (":"), dash + ("-"), period + ("."), and + underscore + ("_"). + The expected convention is that the property name is divided into two + portions such as module:property, but + this namespace is not enforced by ZFS. User property names can be at most + 256 characters, and cannot begin with a dash + ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the chance + that two independently-developed packages use the same property name for + different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.
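A minimal sketch (the property name com.example:backup-policy and the dataset are hypothetical) of setting, reading, and clearing a user property:

    zfs set com.example:backup-policy=daily tank/data
    zfs get com.example:backup-policy tank/data
    zfs inherit com.example:backup-policy tank/data    # clears it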

+
+
+

ZFS Volumes as Swap

ZFS volumes may be used as swap devices. After creating the volume + with the zfs create + -V command, set up and enable the swap area using the + mkswap(8) and swapon(8) commands. Do not + swap to a file on a ZFS file system. A ZFS swap file configuration is not + supported.
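As a hedged sketch (the volume name rpool/swap and the 4G size are hypothetical; matching the block size to the system page size is a common convention, not a requirement stated here):

    zfs create -V 4G -b $(getconf PAGESIZE) rpool/swap
    mkswap /dev/zvol/rpool/swap
    swapon /dev/zvol/rpool/swap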

+
+
+

Encryption

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + zvol data, file attributes, ACLs, permission bits, directory listings, FUID + mappings, and userused / groupused data. + ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the zfs + load-key subcommand for more info on key + loading).

+

Creating an encrypted dataset requires specifying the + encryption and keyformat properties at + creation time, along with an optional keylocation and + pbkdf2iters. After entering an encryption key, the created + dataset will become an encryption root. Any descendant datasets will inherit + their encryption key from the encryption root by default, meaning that + loading, unloading, or changing the key for the encryption root will + implicitly do the same for all inheriting datasets. If this inheritance is + not desired, simply supply a keyformat when creating the + child dataset or use zfs + change-key to break an existing relationship, + creating a new encryption root on the child. Note that the child's + keyformat may match that of the parent while still + creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, and + pbkdf2iters) do not inherit like other ZFS properties and + instead use the value determined by their encryption root. Encryption root + inheritance can be tracked via the read-only + encryptionroot property.
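For illustration only (dataset names are hypothetical), creating a passphrase-protected encryption root whose children inherit its key:

    zfs create -o encryption=on -o keyformat=passphrase tank/secure
    zfs create tank/secure/projects    # inherits the key from tank/secure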

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only dedup against themselves, their + snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data + cannot be embedded via the embedded_data feature. + Encrypted datasets may not have copies=3 + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost per block written.

+
+
+
+

SUBCOMMANDS

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs -V, + --version
+
An alias for the zfs + version subcommand.
+
zfs create + [-p] [-o + property=value]... + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent. +
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]... + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block + device in /dev/zvol/path, where + path is the name + of the volume in the ZFS namespace. The size represents the logical size + as exported by the device. By default, a reservation of equal size is + created. +

size is automatically rounded up to the + nearest 128 Kbytes to ensure that the volume has an integral number of + blocks regardless of blocksize.

+
+
+ blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
+ property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See + volsize in the + Native Properties section + for more information about sparse volumes.
+
+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Force an unmount of any file systems using the + unmount -f command. + This option has no effect on non-file systems or unmounted file + systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]...
+
The given snapshots are destroyed immediately if and only if the + zfs destroy command + without the -d option would have destroyed it. + Such immediate destruction would occur, for example, if the snapshot had + no clones and the user-initiated reference count were zero. +

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same + filesystem or volume may be specified in a comma-separated list of + snapshots. Only the snapshot's short name (the part after the + @) should be specified when using a range or + comma-separated list to identify multiple snapshots.
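A hedged example (file system and snapshot names are hypothetical) combining an inclusive range and an additional named snapshot in one invocation:

    zfs destroy tank/home@monday%wednesday,friday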

+
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Destroy immediately. If a snapshot cannot be destroyed now, mark it + for deferred destruction.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.

+
+
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
zfs snapshot + [-r] [-o + property=value]... + filesystem@snapname|volume@snapname...
+
Creates snapshots with the given names. All previous modifications by + successful system calls to the file system are part of the snapshots. + Snapshots are taken atomically, so that all snapshots correspond to the + same moment in time. zfs + snap can be used as an alias for + zfs snapshot. See the + Snapshots section for details. +
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets
+
+
+
zfs rollback + [-Rfr] snapshot
+
Roll back the given dataset to a previous snapshot. When a dataset is + rolled back, all data that has changed since the snapshot is discarded, + and the dataset reverts to the state at the time of the snapshot. By + default, the command refuses to roll back to a snapshot other than the + most recent one. In order to do so, all intermediate snapshots and + bookmarks must be destroyed by specifying the -r + option. +

The -rR options do not recursively + destroy the child snapshots of a recursive snapshot. Only direct + snapshots of the specified filesystem are destroyed by either of these + options. To completely roll back a recursive snapshot, you must roll back + the individual child snapshots.

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones + of those snapshots.
+
+
Used with the -R option to force an unmount of + any clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
+
+
+
zfs clone + [-p] [-o + property=value]... + snapshot + filesystem|volume
+
Creates a clone of the given snapshot. See the + Clones section for details. The target + dataset can be located anywhere in the ZFS hierarchy, and is created as + the same type as the original. +
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. If + the target filesystem or volume already exists, the operation + completes successfully.
+
+
+
zfs promote + clone-filesystem
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot. This makes it possible to destroy the file + system that the clone was created from. The clone parent-child dependency + relationship is reversed, so that the origin file system becomes a clone + of the specified file system. +

The snapshot that was cloned, and any snapshots previous to + this snapshot, are now owned by the promoted clone. The space they use + moves from the origin file system to the promoted clone, so enough space + must be available to accommodate these snapshots. No new space is + consumed by this operation, but the space accounting is adjusted. The + promoted clone must not have any conflicting snapshot names of its own. + The rename subcommand can be used to rename any + conflicting snapshots.

+
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + [-fp] + filesystem|volume + filesystem|volume
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
+
+
Force unmount any filesystems that need to be unmounted in the + process.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are + the only dataset that can be renamed recursively.
+
zfs list + [-r|-d + depth] [-Hp] + [-o + property[,property]...] + [-s property]... + [-S property]... + [-t + type[,type]...] + [filesystem|volume|snapshot]...
+
Lists the property information for the given datasets in tabular form. If + specified, you can list property information by the absolute pathname or + the relative pathname. By default, all file systems and volumes are + displayed. Snapshots are displayed if the listsnaps + property is on (the default is off). + The following fields are displayed: name, + used, available, + referenced, mountpoint. +
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
+ property
+
Same as the -s option, but sorts by property + in descending order.
+
+ depth
+
Recursively display any children of the dataset, limiting the + recursion to depth. A + depth of 1 will display only + the dataset and its direct children.
+
+ property
+
A comma-separated list of properties to display. The property must be: +
    +
  • One of the properties described in the + Native Properties + section
  • +
  • A user property
  • +
  • The value name to display the dataset name
  • +
  • The value space to + display space usage properties on file systems and volumes. This + is a shortcut for specifying -o + name,avail,used,usedsnap,usedds,usedrefreserv,usedchild + -t + filesystem,volume syntax.
  • +
+
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command + line.
+
+ property
+
A property for sorting the output by column in ascending order based + on the value of the property. The property must be one of the + properties described in the + Properties section or the value + name to sort by the dataset name. Multiple + properties can be specified at one time using multiple + -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
    +
  • Numeric types sort in numeric order.
  • +
  • String types sort in alphabetical order.
  • +
  • Types inappropriate for a row sort that row to the literal bottom, + regardless of the specified ordering.
  • +
+

If no sorting options are specified the existing behavior + of zfs list is + preserved.

+
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or all. For example, + specifying -t snapshot + displays only snapshots.
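As a hedged example (the pool name is hypothetical), listing all snapshots under a pool sorted by descending space used:

    zfs list -r -t snapshot -o name,used -S used tank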
+
+
+
zfs set + property=value + [property=value]... + filesystem|volume|snapshot...
+
Sets the property or list of properties to the given value(s) for each + dataset. Only some properties can be edited. See the + Properties section for more + information on what properties can be set and acceptable values. Numeric + values can be specified as exact values, or in a human-readable form with + a suffix of B, + K, + M, + G, + T, + P, + E, + Z (for bytes, kilobytes, megabytes, gigabytes, + terabytes, petabytes, exabytes, or zettabytes, respectively). User + properties can be set on snapshots. For more information, see the + User Properties section.
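For illustration only (the dataset name is hypothetical), setting two properties in a single invocation:

    zfs set compression=on quota=10G tank/projects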
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
    name      Dataset name
+    property  Property name
+    value     Property value
+    source    Property source  local, default, inherited,
+              temporary, received or none (-).
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections.

+

The value all can be used to display all + properties that apply to the given dataset's type (filesystem, volume, + snapshot, or bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the + recursion to depth. A depth of + 1 will display only the dataset and its direct + children.
+
+ field
+
A comma-separated list of columns to display. + name,property,value,source + is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming + from a source other than those in this list are ignored. Each source + must be one of the following: + local, + default, + inherited, + temporary, + received, + and none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot...
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See the Properties + section for a listing of default values, and details on which properties + can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value if one exists; otherwise + operate as if the -S option was not + specified.
+
+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] -a | + filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of the software. zfs + send streams generated from new snapshots of these + file systems cannot be accessed on systems running older versions of the + software. +

In general, the file system version is independent of the pool + version. See zpool(8) for information on the + zpool upgrade + command.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.

+
+
+ version
+
Upgrade to the specified version. If the + -V flag is not specified, this command + upgrades to the most recent version. This option can only be used to + increase the version number, and only up to the most recent version + supported by this software.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
+
+
+
zfs + userspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each user in the specified + filesystem or snapshot. This corresponds to the + userused@user, + userobjused@user, + userquota@user, + and userobjquota@user properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (for example, + stat(2), ls + -l) perform this translation, so the + -i option allows the output from + zfs userspace to be + compared directly with those utilities. However, + -i may lead to confusion if some files were + created by an SMB user before a SMB-to-POSIX name mapping was + established. In such a case, some files will be owned by the SMB + entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]...
+
Display only the specified fields from the following set: + type, name, + used, quota. The default is to + display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]...
+
Print only the specified types from the following set: + all, posixuser, + smbuser, posixgroup, + smbgroup. The default is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + zfs userspace, except that + the default types to display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot
+
Displays space consumed by, and quotas on, each project in the specified + filesystem or snapshot. This subcommand is identical to + zfs userspace, except that + the project identifier is a numeric ID rather than a name. Consequently, + it needs neither the -i option for SID to POSIX ID + translation, nor -n for numeric IDs, nor + -t for types.
+
zfs project + [-d|-r] + file|directory...
+
List project identifier (ID) and inherit flag of file(s) or directories. +
+
+
Show the directory's project ID and inherit flag, not those of its + children. It overrides any -r + option specified earlier.
+
+
Show subdirectories recursively. It overrides any + -d option specified earlier.
+
+
+
zfs project + -C [-kr] + file|directory...
+
Clear project inherit flag and/or ID on the file(s) or directories. +
+
+
Keep the project ID unchanged. If not specified, the project ID will + be reset to zero.
+
+
Clear on subdirectories recursively.
+
+
+
zfs project + -c [-0] + [-d|-r] + [-p id] + file|directory...
+
Check project ID and inherit flag on the file(s) or directories, report + the entries without project inherit flag or with different project IDs + from the specified (via -p option) value or the + target directory's project ID. +
+
+
Print file name with a trailing NUL instead of newline (by default), + like "find -print0".
+
+
Check the directory's project ID and inherit flag, not those of its + children. It overrides any -r + option specified earlier.
+
+
Specify the referenced ID for comparing with the target file(s) or + directories' project IDs. If not specified, the target (top) + directory's project ID will be used as the referenced one.
+
+
Check subdirectories recursively. It overrides any + -d option specified earlier.
+
+
+
zfs project + [-p id] + [-rs] + file|directory...
+
Set project ID and/or inherit flag on the file(s) or directories. +
+
+
Set the file(s)' or directories' project ID with the given value.
+
+
Set on subdirectories recursively.
+
+
Set the project inherit flag on the given file(s) or directories. It is + usually used to set up a tree quota on a target directory, together with + the -r option. When setting up a tree + quota, the directory's project ID is by default applied to all its + descendants unless you specify the project ID explicitly via the + -p option.
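A minimal sketch (the directory, dataset, and project ID are hypothetical) of marking a directory tree with a project ID and then limiting its space:

    zfs project -s -r -p 42 /tank/fs/projects/webapp
    zfs set projectquota@42=100G tank/fs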
+
+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Olv] [-o + options] -a | + filesystem
+
Mount ZFS filesystem on a path described by its + mountpoint property, if the path exists and is empty. If + mountpoint is set to legacy, the + filesystem should be instead mounted using mount(8). +
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + Temporary Mount + Point Properties section for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is + equivalent to executing zfs + load-key on each encryption root before + mounting it. Note that if a filesystem has a + keylocation of prompt this will + cause the terminal to interactively block after asking for the + key.
+
+
Report mount progress.
+
+
+
zfs unmount + [-f] -a | + filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
Forcefully unmount the file system, even if it is currently in + use.
+
+
+
zfs share + -a | filesystem
+
Shares available ZFS file systems. +
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
+
+
+
zfs unshare + -a | + filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
+
+
+
zfs bookmark + snapshot bookmark
+
Creates a bookmark of the given snapshot. Bookmarks mark the point in time + when the snapshot was created, and can be used as the incremental source + for a zfs send command. +

This feature must be enabled to be used. See + zpool-features(5) for details on ZFS feature flags and + the + + feature.

+
+
zfs send + [-DLPRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
+ --dedup
+
Generate a deduplicated stream. Deduplicated send is deprecated and + will be removed in a future release. (In the future, the flag will + be accepted but a regular, non-deduplicated stream will be generated.) + Blocks which would have been sent multiple times in the send stream + will only be sent once. The receiving system must also support this + feature to receive a deduplicated stream. This flag can be used + regardless of the dataset's dedup property, but + performance will be much better if the filesystem uses a dedup-capable + checksum (for example, sha256).
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from + the first snapshot to the second snapshot. For example, + -I @a fs@d + is similar to -i @a + fs@b; + -i @b + fs@c; + -i @c + fs@d. The incremental source may be specified as + with the -i option.
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
+ --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
+ --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold command), and indicating to + zfs receive that the holds be applied to the dataset + on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the + source may be the origin snapshot, which must be fully specified + (for example, + pool/fs@origin, + not just + @origin).

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.
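As a hedged end-to-end sketch combining the flags above (the host and dataset names are hypothetical), an incremental stream between two snapshots piped to a remote receiver:

    zfs send -v -i tank/fs@monday tank/fs@tuesday | \
        ssh backuphost zfs receive pool/backup/fs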

+
+
+
+
zfs send + [-LPcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
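For example, assuming the embedded_data feature is enabled on both pools (names are illustrative):
# zfs send -e tank/data@tuesday | ssh backuphost zfs receive backup/data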
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent.
+
+
+
zfs send + [-Penv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs receive -s for more details.
+
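For example, an interrupted receive into an illustrative dataset backup/data (originally received with zfs receive -s) can be resumed by reading the token on the receiving system and passing it back to zfs send:
# zfs get -H -o value receive_resume_token backup/data
# zfs send -t $TOKEN | ssh backuphost zfs receive -s backup/data
where $TOKEN stands for the token value printed by the first command.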
zfs receive + [-Fhnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-Fhnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

 If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send and receive + deduplicated send streams is deprecated. In the future, the ability + to receive a deduplicated send stream with zfs + receive will be removed. However, in the future, + a utility will be provided to convert a deduplicated send stream to a + regular (non-deduplicated) stream. This future utility will require that + the send stream be located in a seek-able file, rather than provided by + a pipe.

+

 If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost file system in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin=snapshot is a special case: even though origin is a read-only property and cannot be set, the send stream can be received as a clone of the given snapshot.

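For example, an illustrative replication stream can be received with compression forced on and the mountpoint excluded from the stream:
# zfs send -R tank/data@tuesday | zfs receive -o compression=lz4 -x mountpoint backup/data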
+

Raw encrypted send streams (created with + zfs send + -w ) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using stdin for the send stream. Instead, the + property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+ -F
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+ -d
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+ -e
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+ -h
Skip the receive of holds. There is no effect if holds are not + sent.
+
+ -n
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ -o origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ -o property=value
+
Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. When receiving a stream from zfs send -R, causes the property to be inherited by all descendant datasets, as though zfs inherit property was run on any descendant datasets that have this property set on the sending system.

 Any editable property can be set at receive time. Set-once properties bound to the received data, such as normalization and casesensitivity, cannot be set at receive time even when the datasets are newly created by zfs receive. Additionally, the otherwise settable properties version and volsize cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
+
# zfs send tank/test@snap1 | zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile
+
+

Note that [-o + keylocation=prompt] may + not be specified here, since stdin is already being utilized for the + send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying [-x + encryption] to force the property to be + inherited. Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+ -s
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with a stream generated by + zfs send + -t token, where the + token is the value of the + receive_resume_token property of the filesystem or + volume which is received into.

+

 To use this flag, the storage pool must have the extensible_dataset feature enabled. See zpool-features(5) for details on ZFS feature flags.

+
+
+ -u
File system that is associated with the received stream is not + mounted.
+
+ -v
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ -x property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
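For example, to discard the partial state saved for an illustrative dataset:
# zfs receive -A backup/data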
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

 Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
-e|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ -g group[,group]...
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ -u user[,user]...
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]...
+
Specifies to whom the permissions are delegated. Multiple entities can be specified as a comma-separated list. If neither of the -gu options is specified, then the argument is interpreted preferentially as the keyword everyone, then as a user name, and lastly as a group name. To specify a user or group named "everyone", use the -g or -u options. To specify a group with the same name as a user, use the -g option.
+
perm|@setname[,perm|@setname]...
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+
+
NAME             TYPE           NOTES
+allow            subcommand     Must also have the permission that is
+                                being allowed
+clone            subcommand     Must also have the 'create' ability and
+                                'mount' ability in the origin file system
+create           subcommand     Must also have the 'mount' ability.
+                                Must also have the 'refreservation' ability to
+                                create a non-sparse volume.
+destroy          subcommand     Must also have the 'mount' ability
+diff             subcommand     Allows lookup of paths within a dataset
+                                given an object number, and the ability
+                                to create snapshots necessary to
+                                'zfs diff'.
+load-key         subcommand     Allows loading and unloading of encryption key
+                                (see 'zfs load-key' and 'zfs unload-key').
+change-key       subcommand     Allows changing an encryption key via
+                                'zfs change-key'.
+mount            subcommand     Allows mount/umount of ZFS datasets
+promote          subcommand     Must also have the 'mount' and 'promote'
+                                ability in the origin file system
+receive          subcommand     Must also have the 'mount' and 'create'
+                                ability
+rename           subcommand     Must also have the 'mount' and 'create'
+                                ability in the new parent
+rollback         subcommand     Must also have the 'mount' ability
+send             subcommand
+share            subcommand     Allows sharing file systems over NFS
+                                or SMB protocols
+snapshot         subcommand     Must also have the 'mount' ability
+
+groupquota       other          Allows accessing any groupquota@...
+                                property
+groupused        other          Allows reading any groupused@... property
+userprop         other          Allows changing any user property
+userquota        other          Allows accessing any userquota@...
+                                property
+userused         other          Allows reading any userused@... property
+projectobjquota  other          Allows accessing any projectobjquota@...
+                                property
+projectquota     other          Allows accessing any projectquota@... property
+projectobjused   other          Allows reading any projectobjused@... property
+projectused      other          Allows reading any projectused@... property
+
+aclinherit       property
+acltype          property
+atime            property
+canmount         property
+casesensitivity  property
+checksum         property
+compression      property
+copies           property
+devices          property
+exec             property
+filesystem_limit property
+mountpoint       property
+nbmand           property
+normalization    property
+primarycache     property
+quota            property
+readonly         property
+recordsize       property
+refquota         property
+refreservation   property
+reservation      property
+secondarycache   property
+setuid           property
+sharenfs         property
+sharesmb         property
+snapdir          property
+snapshot_limit   property
+utf8only         property
+version          property
+volblocksize     property
+volsize          property
+vscan            property
+xattr            property
+zoned            property
+
+
+
zfs allow + -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect (for example, if the permission is granted by an ancestor). If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
zfs hold + [-r] tag + snapshot...
+
Adds a single reference, named with the tag + argument, to the specified snapshot or snapshots. Each snapshot has its + own tag namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rH] snapshot...
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
+
zfs release + [-r] tag + snapshot...
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return + EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
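For example, an illustrative snapshot can be protected with a recursive hold, the holds listed, and the hold later released:
# zfs hold -r backup tank/home@monday
# zfs holds -r tank/home@monday
# zfs release -r backup tank/home@monday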
+
zfs diff + [-FHt] snapshot + snapshot|filesystem
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are: +
+
-       The path has been removed
++       The path has been created
+M       The path has been modified
+R       The path has been renamed
+
+
+
+
Display an indication of the type of file, in a manner similar to the -F option of ls(1).
+
B       Block device
+C       Character device
+/       Directory
+>       Door
+|       Named pipe
+@       Symbolic link
+P       Event port
+=       Socket
+F       Regular file
+
+
+
+
Give more parsable tab-separated output, without header lines and + without arrows.
+
+
Display the path's inode change time as the first column of + output.
+
+
+
zfs program + [-jn] [-t + instruction-limit] [-m + memory-limit] pool script [--] + arg1 ...
+
Executes script as a ZFS channel program on + pool. The ZFS channel program interface allows ZFS + administrative operations to be run programmatically via a Lua script. The + entire script is executed atomically, with no other administrative + operations taking effect concurrently. A library of ZFS calls is made + available to channel program scripts. Channel programs may only be run + with root privileges. +

For full documentation of the ZFS channel program interface, + see the manual page for zfs-program(8).

+
+
+
Display channel program output in JSON format. When this flag is specified and standard output is empty, the channel program encountered an error. The details of such an error will be printed to standard error in plain text.
+
+
Executes a read-only channel program, which runs faster. The program cannot change on-disk state by calling functions from the zfs.sync submodule. The program can be used to gather information such as properties and to determine whether changes would succeed (zfs.check.*). Without this flag, all pending changes must be synced to disk before a channel program can complete.
+
+ -t instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ -m memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. + The default memory limit is 10 MB, and can be set to a maximum of 100 + MB. +

All remaining argument strings are passed directly to the + channel program as arguments. See zfs-program(8) + for more information.

+
+
+
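For example, a channel program stored in a hypothetical Lua script /root/report.zcp that only gathers information could be run in read-only mode with JSON output:
# zfs program -n -j tank /root/report.zcp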
+
zfs load-key + [-nr] [-L + keylocation] -a | + filesystem
+
Load the key for filesystem, allowing it and all + children that inherit the keylocation property to be + accessed. The key will be expected in the format specified by the + keyformat and location specified by the + keylocation property. Note that if the + keylocation is set to prompt the + terminal will interactively wait for the key to be entered. Loading a key + will not automatically mount the dataset. If that functionality is + desired, zfs mount + -l will ask for the key and mount the dataset. Once the + key is loaded the keystatus property will become + available. +
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. This will cause zfs to + simply check that the provided key is correct. This command may be run + even if the key is already loaded.
+
+ -L keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
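For example, the key for an illustrative encryption root can be loaded interactively, or from a hypothetical key file by overriding the key location:
# zfs load-key tank/secure
# zfs load-key -L file:///root/secure.key tank/secure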
+
zfs unload-key + [-r] -a | + filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all + of its children that inherit the keylocation property. + This requires that the dataset is not currently open or mounted. Once the + key is unloaded the keystatus property will become + unavailable. +
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Allows a user to change the encryption key used to access a dataset. This + command requires that the existing key for the dataset is already loaded + into ZFS. This command may also be used to change the + keylocation, keyformat, and + pbkdf2iters properties as needed. If the dataset was not + previously an encryption root it will become one. Alternatively, the + -i flag may be provided to cause an encryption + root to inherit the parent's key instead. +
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to "zfs + load-key filesystem; + zfs change-key + filesystem"
+
+ -o property=value
+
Allows the user to set encryption key properties ( + keyformat, keylocation, and + pbkdf2iters ) while changing the key. This is the + only way to alter keyformat and + pbkdf2iters after the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
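For example, an illustrative encryption root could be rekeyed with a new passphrase, or a child encryption root made to inherit its parent's key:
# zfs change-key -o keyformat=passphrase tank/secure
# zfs change-key -i tank/secure/project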
+
zfs version
+
Displays the software version of the zfs userland + utility and the zfs kernel module.
+
+
+
+

+

The zfs utility exits 0 on success, 1 if + an error occurs, and 2 if invalid command line options were specified.

+
+
+

+
+
Creating a ZFS File System Hierarchy
+
The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, + and is automatically inherited by the child file system. +
+
# zfs create pool/home
+# zfs set mountpoint=/export/home pool/home
+# zfs create pool/home/bob
+
+
+
Creating a ZFS Snapshot
+
The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system. +
+
# zfs snapshot pool/home/bob@yesterday
+
+
+
Creating and Destroying Multiple + Snapshots
+
The following command creates snapshots named yesterday + of pool/home and all of its descendent file systems. + Each snapshot is mounted on demand in the + .zfs/snapshot directory at the root of its file + system. The second command destroys the newly created snapshots. +
+
# zfs snapshot -r pool/home@yesterday
+# zfs destroy -r pool/home@yesterday
+
+
+
Disabling and Enabling File System + Compression
+
The following command disables the compression property + for all file systems under pool/home. The next command + explicitly enables compression for + pool/home/anne. +
+
# zfs set compression=off pool/home
+# zfs set compression=on pool/home/anne
+
+
+
Listing ZFS Datasets
+
The following command lists all active file systems and volumes in the + system. Snapshots are displayed if the listsnaps + property is on. The default is off. + See zpool(8) for more information on pool properties. +
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
Setting a Quota on a ZFS File System
+
The following command sets a quota of 50 Gbytes for + pool/home/bob. +
+
# zfs set quota=50G pool/home/bob
+
+
+
Listing ZFS Properties
+
The following command lists all properties for + pool/home/bob. +
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value.

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+ The following command lists all properties with local settings for + pool/home/bob. +
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
Rolling Back a ZFS File System
+
The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots. +
+
# zfs rollback -r pool/home/anne@yesterday
+
+
+
Creating a ZFS Clone
+
The following command creates a writable file system whose initial contents are the same as pool/home/bob@yesterday.
+
# zfs clone pool/home/bob@yesterday pool/clone
+
+
+
Promoting a ZFS Clone
+
The following commands illustrate how to test out changes to a file + system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming: +
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
Inheriting ZFS Properties
+
The following command causes pool/home/bob and + pool/home/anne to inherit the checksum + property from their parent. +
+
# zfs inherit checksum pool/home/bob pool/home/anne
+
+
+
Remotely Replicating ZFS Data
+
The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.
+
# zfs send pool/fs@a | \
+  ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b | \
+  ssh host zfs receive poolB/received/fs
+
+
+
Using the zfs receive -d Option
+
The following command sends a full stream of poolA/fsA/fsB@snap to a remote machine, receiving it into poolB/received/fsA/fsB@snap. The fsA/fsB@snap portion of the received snapshot's name is determined from the name of the sent snapshot. poolB must contain the file system poolB/received. If poolB/received/fsA does not exist, it is created as an empty file system.
+
# zfs send poolA/fsA/fsB@snap | \
+  ssh host zfs receive -d poolB/received
+
+
+
Setting User Properties
+
The following example sets the user-defined com.example:department property for a dataset.
+
# zfs set com.example:department=12345 tank/accounting
+
+
+
Performing a Rolling Snapshot
+
The following example shows how to maintain a history of snapshots with a + consistent naming scheme. To keep a week's worth of snapshots, the user + destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows: +
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
Setting sharenfs Property Options on a ZFS File + System
+
The following commands show how to set sharenfs property options to enable rw access for the set of IP addresses @123.123.0.0/16 and to enable root access for the system neo on the tank/home file system.
+
# zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
+
+

If you are using DNS for host name + resolution, specify the fully qualified hostname.

+
+
Delegating ZFS Administration Permissions on a + ZFS Dataset
+
The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots on + tank/cindys. The permissions on + tank/cindys are also displayed. +
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point + access:

+
+
# chmod A+user:cindys:add_subdirectory:allow /tank/cindys
+
+
+
Delegating Create Time Permissions on a ZFS + Dataset
+
The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not to destroy anyone else's file system. The permissions on tank/users are also displayed.
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
Defining and Granting a Permission Set on a ZFS + Dataset
+
The following example shows how to define and grant a permission set on + the tank/users file system. The permissions on + tank/users are also displayed. +
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Delegating Property Permissions on a ZFS + Dataset
+
The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
Removing ZFS Delegated Permissions on a ZFS + Dataset
+
The following example shows how to remove the snapshot permission from the + staff group on the tank/users file + system. The permissions on tank/users are also + displayed. +
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Showing the differences between a snapshot and a + ZFS Dataset
+
The following example shows how to see what has changed between a prior + snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected. +
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
Creating a bookmark
+
The following example creates a bookmark of a snapshot. This bookmark can then be used instead of a snapshot as the incremental source in send streams.
+
# zfs bookmark rpool@snapshot rpool#bookmark
+
+
+
Setting sharesmb Property Options on a ZFS File + System
+
The following example shows how to share an SMB file system through ZFS. Note that a user and their password must be given.
+
# smbmount //127.0.0.1/share_tmp /mnt/tmp \
+  -o user=workgroup/turbo,password=obrut,uid=1000
+
+

 Minimal smb.conf configuration required:

+

Samba will need to listen to 'localhost' (127.0.0.1) for the + ZFS utilities to communicate with Samba. This is the default behavior + for most Linux distributions.

+

 Samba must be able to authenticate a user. This can be done in a number of ways, depending on whether you are using the system password file, LDAP, or the Samba-specific smbpasswd file. How to do this is outside the scope of this manual. Please refer to the smb.conf(5) man page for more information.

+

 See the USERSHARE section of the smb.conf(5) man page for all configuration options in case you need to modify any options to the share afterwards. Do note that any changes done with the net(8) command will be undone if the share is ever unshared (such as at reboot).

+
+
+
+
+

+

.

+
+
+

+

attr(1), gzip(1), + ssh(1), chmod(2), + fsync(2), stat(2), + write(2), acl(5), + attributes(5), exports(5), + exportfs(8), mount(8), + net(8), selinux(8), + zfs-program(8), zpool(8)

+
+
+ + + + + +
April 30, 2019    Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zfsprops.8.html b/man/v0.8/8/zfsprops.8.html new file mode 100644 index 000000000..781142c8e --- /dev/null +++ b/man/v0.8/8/zfsprops.8.html @@ -0,0 +1,167 @@ + + + + + + + zfsprops.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfsprops.8

+
+ + + + + +
()()
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zgenhostid.8.html b/man/v0.8/8/zgenhostid.8.html new file mode 100644 index 000000000..071a142d6 --- /dev/null +++ b/man/v0.8/8/zgenhostid.8.html @@ -0,0 +1,231 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)    System Manager's Manual (smm)    ZGENHOSTID(8)
+
+
+

+

zgenhostid — + generate and store a hostid in + /etc/hostid

+
+
+

+ + + + + +
zgenhostid[hostid]
+
+
+

+

If /etc/hostid does not exist, create it and + store a hostid in it. If the user provides [hostid] on + the command line, store that value. Otherwise, randomly generate a value to + store.

+

This emulates the genhostid(1) utility and is + provided for use on systems which do not include the utility.

+
+
+

+

 [hostid] Specifies the value to be placed in /etc/hostid. It must be a number with a value between 1 and 2^32-1. This value must be unique among your systems. It must be expressed in hexadecimal and be exactly 8 digits long.

+
+
+

+
+
Generate a random hostid and store it
+
+
+
# zgenhostid
+
+
+
Record the libc-generated hostid in /etc/hostid
+
+
+
# zgenhostid $(hostid)
+
+
+
Record a custom hostid (0xdeadbeef) in /etc/hostid
+
+
+
# zgenhostid deadbeef
+
+
+
+
+
+

+

genhostid(1), hostid(1), + spl-module-parameters(5)

+
+
+ + + + + +
September 16, 2017    Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zinject.8.html b/man/v0.8/8/zinject.8.html new file mode 100644 index 000000000..6b2fac990 --- /dev/null +++ b/man/v0.8/8/zinject.8.html @@ -0,0 +1,332 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
zinject(8)    System Administration Commands    zinject(8)
+
+

+
+

+

zinject - ZFS Fault Injector

+
+
+

+

zinject creates artificial problems in a ZFS pool by + simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+
List injection records.
+
zinject -b objset:object:level:blkd [-f + frequency] [-amu] pool
+
Force an error into the pool at a bookmark.
+
zinject -c <id | all>
+
Cancel injection records.
+
zinject -d vdev -A <degrade|fault> + pool
+
Force a vdev into the DEGRADED or FAULTED state.
+
zinject -d vdev -D latency:lanes + pool
+
+

Add an artificial delay to IO requests on a particular device, + such that the requests take a minimum of 'latency' milliseconds to + complete. Each delay has an associated number of 'lanes' which defines + the number of concurrent IO requests that can be processed.

+

For example, with a single lane delay of 10 ms (-D 10:1), the + device will only be able to service a single IO request at a time with + each request taking 10 ms to complete. So, if only a single request is + submitted every 10 ms, the average latency will be 10 ms; but if more + than one request is submitted every 10 ms, the average latency will be + more than 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D 10:2), then the device will be able to service two requests at a + time, each with a minimum latency of 10 ms. So, if two requests are + submitted every 10 ms, then the average latency will be 10 ms; but if + more than two requests are submitted every 10 ms, the average latency + will be more than 10 ms.

+

 Also note that these delays are additive, so two invocations of '-D 10:1' are roughly equivalent to a single invocation of '-D 10:2'. This also means one can specify multiple lanes with differing target latencies. For example, an invocation of '-D 10:1' followed by '-D 25:2' will create 3 lanes on the device: one lane with a latency of 10 ms and two lanes with a 25 ms latency.

+

+
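For example, an artificial delay of 25 ms with two lanes could be added to a hypothetical device sdb in pool tank, the injection records listed, and then cancelled:
# zinject -d sdb -D 25:2 tank
# zinject
# zinject -c all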
+
zinject -d vdev [-e device_error] [-L + label_error] [-T failure] [-f + frequency] [-F] pool
+
Force a vdev error.
+
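For example, read I/O errors could be injected on a hypothetical device sdb roughly 10% of the time, and later cancelled:
# zinject -d sdb -e io -T read -f 10 tank
# zinject -c all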
zinject -I [-s seconds | -g txgs] + pool
+
Simulate a hardware failure that fails to honor a cache flush.
+
zinject -p function pool
+
Panic inside the specified function.
+
zinject -t data [-C dvas] [-e device_error] [-f + frequency] [-l level] [-r range] + [-amq] path
+
Force an error into the contents of a file.
+
zinject -t dnode [-C dvas] [-e device_error] + [-f frequency] [-l level] [-amq] + path
+
Force an error into the metadnode for a file or directory.
+
zinject -t mos_type [-C dvas] [-e + device_error] [-f frequency] [-l + level] [-r range] [-amqu] + pool
+
Force an error into the MOS of a pool.
+
+
+
+

+
+
+
Flush the ARC before injection.
+
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+
Inject the given error only into specific DVAs. The mask should be + specified as a list of 0-indexed DVAs separated by commas (ex. '0,2'). + This option is not applicable to logical data errors such as + decompress and decrypt.
+
+
A vdev specified by path or GUID.
+
+
Specify checksum for an ECKSUM error, decompress for a data + decompression error, decrypt for a data decryption error, + corrupt to flip a bit in the data after a read, dtl for an + ECHILD error, io for an EIO error where reopening the device will + succeed, or nxio for an ENXIO error where reopening the device will + fail. For EIO and ENXIO, the "failed" reads or writes still + occur. The probe simply sets the error value reported by the I/O pipeline + so it appears the read or write failed. Decryption errors only currently + work with file data.
+
+
Only inject errors a fraction of the time. Expressed as a real number + percentage between 0.0001 and 100.
+
+
Fail faster. Do fewer checks.
+
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+
Inject an error at a particular block level. The default is 0.
+
+
Set the label error region to one of nvlist, pad1, + pad2, or uber.
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+
Run for this many seconds before reporting failure.
+
+
Set the failure type to one of all, claim, free, + read, or write.
+
+
Set this to mos for any data in the MOS, mosdir for an + object directory, config for the pool configuration, bpobj + for the block pointer list, spacemap for the space map, + metaslab for the metaslab, or errlog for the persistent + error log.
+
+
Unload the pool after injection. +

+
+
+
+
+

+
+
+
Run zinject in debug mode. +

+
+
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com> excerpting the zinject usage message and + source code.

+

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28    ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zpool.8.html b/man/v0.8/8/zpool.8.html new file mode 100644 index 000000000..d15bf9ef3 --- /dev/null +++ b/man/v0.8/8/zpool.8.html @@ -0,0 +1,2629 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)    System Manager's Manual (smm)    ZPOOL(8)
+
+
+

+

zpoolconfigure + ZFS storage pools

+
+
+

+ + + + + +
zpool-?V
+
+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev...
+
+ + + + + +
zpoolattach [-f] + [-o + property=value] + pool device new_device
+
+ + + + + +
zpoolcheckpoint [-d, + --discard] pool
+
+ + + + + +
zpoolclear pool + [device]
+
+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]... + [-o + feature@feature=value] + [-O + file-system-property=value]... + [-R root] + pool vdev...
+
+ + + + + +
zpooldestroy [-f] + pool
+
+ + + + + +
zpooldetach pool device
+
+ + + + + +
zpoolevents [-vHf + [pool] | -c]
+
+ + + + + +
zpoolexport [-a] + [-f] pool...
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]...] + all|property[,property]... + [pool]...
+
+ + + + + +
zpoolhistory [-il] + [pool]...
+
+ + + + + +
zpoolimport [-D] + [-d dir|device]
+
+ + + + + +
zpoolimport -a + [-DflmN] [-F + [-n] [-T] + [-X]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root]
+
+ + + + + +
zpoolimport [-Dflm] + [-F [-n] + [-T] [-X]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool [-t]]
+
+ + + + + +
zpoolinitialize [-c | + -s] pool + [device...]
+
+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
+ + + + + +
zpoollabelclear [-f] + device
+
+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
+ + + + + +
zpooloffline [-f] + [-t] pool + device...
+
+ + + + + +
zpoolonline [-e] + pool device...
+
+ + + + + +
zpoolreguid pool
+
+ + + + + +
zpoolreopen [-n] + pool
+
+ + + + + +
zpoolremove [-np] + pool device...
+
+ + + + + +
zpoolremove -s + pool
+
+ + + + + +
zpoolreplace [-f] + [-o + property=value] + pool device + [new_device]
+
+ + + + + +
zpoolresilver pool...
+
+ + + + + +
zpoolscrub [-s | + -p] pool...
+
+ + + + + +
zpooltrim [-d] + [-r rate] + [-c | -s] + pool [device...]
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolsplit [-gLlnP] + [-o + property=value]... + [-R root] + pool newpool [device]...
+
+ + + + + +
zpoolstatus [-c + SCRIPT] [-DigLpPstvx] + [-T u|d] + [pool]... [interval + [count]]
+
+ + + + + +
zpoolsync [pool]...
+
+ + + + + +
zpoolupgrade
+
+ + + + + +
zpoolupgrade -v
+
+ + + + + +
zpoolupgrade [-V + version] + -a|pool...
+
+ + + + + +
zpoolversion
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+
+

+

A "virtual device" describes a single device or a + collection of devices organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+ disk
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+ file
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system of which it + is a part. A file must be specified by a full path.
+
+ mirror
A mirror of two or more devices. Data is replicated in an identical + fashion across all components of a mirror. A mirror with N disks of size X + can hold X bytes and can withstand (N-1) devices failing before data + integrity is compromised.
+
raidz, raidz1, raidz2, raidz3
+
A variation on RAID-5 that allows for better distribution of parity and + eliminates the RAID-5 "write hole" (in which data and parity + become inconsistent after a power loss). Data and parity is striped across + all disks within a raidz group. +

A raidz group can have single-, double-, or triple-parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N disks of size X with P parity disks can + hold approximately (N-P)*X bytes and can withstand P device(s) failing + before data integrity is compromised. The minimum number of devices in a + raidz group is one more than the number of parity disks. The recommended + number is between 3 and 9 to help increase performance.

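For example, a double-parity raidz group over six hypothetical disks could be created with:
# zpool create tank raidz2 sda sdb sdc sdd sde sdf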
+
+
+ spare
A pseudo-vdev which keeps track of available hot spares for a pool. For + more information, see the Hot Spares + section.
+
+ log
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+ dedup
A device dedicated solely for deduplication tables. The redundancy of this + device should match the redundancy of the other normal devices in the + pool. If more than one dedup device is specified, then allocations are + load-balanced between those devices.
+
+ special
A device dedicated solely for allocating various kinds of internal + metadata, and optionally small file blocks. The redundancy of this device + should match the redundancy of the other normal devices in the pool. If + more than one special device is specified, then allocations are + load-balanced between those devices. +

For more information on special allocations, see the + Special Allocation + Class section.

+
cache
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested, so a mirror or raidz virtual + device can only contain files or disks. Mirrors of mirrors (or other + combinations) are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. The keywords mirror and + raidz are used to distinguish where a group ends and + another begins. For example, the following creates two root vdevs, each a + mirror of two disks:

+
+
# zpool create mypool mirror sda sdb mirror sdc sdd
+
+
+
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three states: + online, degraded, or faulted. An online pool has all devices operating + normally. A degraded pool is one in which one or more devices have failed, + but the data is still available due to a redundant configuration. A faulted + pool has corrupted metadata, or one or more faulted devices, and + insufficient replicas to continue functioning.

+

The health of the top-level vdev, such as mirror or raidz device, + is potentially impacted by the state of its associated vdevs, or component + devices. A top-level vdev or component device is in one of the following + states:

+
DEGRADED
+
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors exceeds acceptable levels and the device + is degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
  • +
  • The number of I/O errors exceeds acceptable levels. The device could + not be marked as faulted because there are insufficient replicas to + continue functioning.
  • +
+
FAULTED
+
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected + values.
  • +
  • The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
  • +
+
OFFLINE
+
The device was explicitly taken offline by the + zpool offline + command.
ONLINE
+
The device is online and functioning.
REMOVED
+
The device was physically removed while the system was running. Device + removal detection is hardware-dependent and may not be supported on all + platforms.
UNAVAIL
+
The device could not be opened. If a pool is imported when a device was + unavailable, then the device will be identified by a unique identifier + instead of its path since the path was never correct in the first + place.
+
+

If a device is removed and later re-attached to the system, ZFS + attempts to put the device online automatically. Device attach detection is + hardware-dependent and might not be supported on all platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool, but when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a spare vdev with any + number of devices. For example,

+
+
# zpool create pool mirror sda sdb spare sdc sdd
+
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again if another device + fails.

+

If a pool has a shared spare that is currently being used, the + pool can not be exported since other pools may use this shared spare, which + may lead to potential data corruption.

+

Shared spares add some risk. If the pools are imported on + different hosts, and both pools suffer a device failure at the same time, + both could attempt to use the spare at the same time. This may not be + detected, resulting in data corruption.

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.
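For example, assuming sdd is the hot spare currently standing in for a failed device (names are illustrative), the in-progress replacement can be cancelled with:
# zpool detach pool sdd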

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
+
# zpool create pool sda sdb log sdc
+
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.

+

Log devices can be added, replaced, attached, detached and + removed. In addition, log devices are imported and exported as part of the + pool that contains them. Mirrored devices can be removed by specifying the + top-level mirror vdev.
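For example (illustrative names), a mirrored log can be added to an existing pool with:
# zpool add pool log mirror sdc sdd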

+
+
+

+

Devices can be added to a storage pool as "cache + devices". These devices provide an additional layer of caching between + main memory and disk. For read-heavy workloads, where the working set size + is much larger than what can be cached in main memory, using cache devices + allows much more of this working set to be served from low latency media. + Using cache devices provides the greatest performance improvement for random + read workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
+
# zpool create pool sda sdb cache sdc sdd
+
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.

+

The content of the cache devices is considered volatile, as is the + case with other system caches.

+
+
+

+

Before starting critical procedures that include destructive + actions (e.g. zfs destroy), + an administrator can checkpoint the pool's state and, in the case of a + mistake or failure, rewind the entire pool back to the checkpoint. + Otherwise, the checkpoint can be discarded when the procedure has completed + successfully.

+

A pool checkpoint can be thought of as a pool-wide snapshot and + should be used with care as it contains every part of the pool's state, from + properties to vdev configuration. Thus, while a pool has a checkpoint, + certain operations are not allowed; specifically, vdev + removal/attach/detach, mirror splitting, and changing the pool's guid. + Adding a new vdev is supported, but in the case of a rewind it will have to + be added again. Finally, users of this feature should keep in mind that + scrubs in a pool that has a checkpoint do not repair checkpointed data.

+

To create a checkpoint for a pool:

+
+
# zpool checkpoint pool
+
+

To later rewind to its checkpointed state, you need to first + export it and then rewind it during import:

+
+
# zpool export pool
+# zpool import --rewind-to-checkpoint pool
+
+

To discard the checkpoint from a pool:

+
+
# zpool checkpoint -d pool
+
+

Dataset reservations (controlled by the + reservation or + refreservation zfs properties) may be unenforceable + while a checkpoint exists, because the checkpoint is allowed to consume the + dataset's reservation. Finally, data that is part of the checkpoint but has + been freed in the current state of the pool won't be scanned during a + scrub.

+
+
+

+

The allocations in the special class are dedicated to specific + block types. By default this includes all metadata, the indirect blocks of + user data, and any deduplication tables. The class can also be provisioned + to accept small file blocks.

+

A pool must always have at least one normal (non-dedup/special) + vdev before other devices can be assigned to the special class. If the + special class becomes full, then allocations intended for it will spill back + into the normal class.

+

Deduplication tables can be excluded + from the special class by setting the + zfs_ddt_data_is_special + zfs module parameter to false (0).

+

Inclusion of small file blocks in the + special class is opt-in. Each dataset can control the size of small file + blocks allowed in the special class by setting the + special_small_blocks + dataset property. It defaults to zero, so you must opt in by setting it to a + non-zero value. See zfs(8) for more info on setting this + property.
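For example (an illustrative threshold; the pool and dataset names are placeholders), blocks of 32K or smaller could be admitted to the special class with:
# zfs set special_small_blocks=32K pool/dataset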

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

The following are read-only properties:

+
allocated
+
Amount of storage used within the pool. See + fragmentation and free for more + information.
capacity
+
Percentage of pool space used. This property can also be referred to by + its shortened column name, + cap.
expandsize
+
Amount of uninitialized space within the pool or device that can be used + to increase the total capacity of the pool. Uninitialized space consists + of any space on an EFI labeled vdev which has not been brought online + (e.g, using zpool online + -e). This space occurs when a LUN is dynamically + expanded.
fragmentation
+
The amount of fragmentation in the pool. As the amount of space + allocated increases, it becomes more difficult to locate + free space. This may result in lower write performance + compared to pools with more unfragmented free space.
free
+
The amount of free space available in the pool. By contrast, the + zfs(8) available property describes + how much new data can be written to ZFS filesystems/volumes. The zpool + free property is not generally useful for this purpose, + and can be substantially more than the zfs available + space. This discrepancy is due to several factors, including raidz parity; + zfs reservation, quota, refreservation, and refquota properties; and space + set aside by + spa_slop_shift + (see zfs-module-parameters(5) for more + information).
freeing
+
After a file system or snapshot is destroyed, the space it was using is + returned to the pool asynchronously. freeing is the + amount of space remaining to be reclaimed. Over time + freeing will decrease while free + increases.
health
+
The current health of the pool. Health can be one of + ONLINE, DEGRADED, + FAULTED, + OFFLINE, REMOVED, UNAVAIL.
guid
+
A unique identifier for the pool.
load_guid
+
A unique identifier for the pool. Unlike the guid + property, this identifier is generated every time we load the pool (e.g. + does not persist across imports/exports) and never changes while the pool + is loaded (even if a + reguid + operation takes place).
size
+
Total size of the storage pool.
unsupported@feature_guid
+
Information about unsupported features that are enabled on the pool. See + zpool-features(5) for details.
+
+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of the + data being written. In addition, ZFS reserves some space for internal + accounting that the zfs(8) command takes into account, but + the zpool command does not. For non-full pools of a + reasonable size, these effects should be invisible. For small pools, or + pools that are close to being completely full, these discrepancies may + become more noticeable.

+

The following property can be set at creation time and import + time:

+
altroot
+
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+

The following property can be set only at import time:

+
+
readonly=on|off
+
If set to on, the pool will be imported in read-only + mode. This property can also be referred to by its shortened column name, + rdonly.
+
+

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
+
ashift=ashift
+
Pool sector size exponent, to the power of 2 (internally + referred to as ashift). Values from 9 to 16, inclusive, + are valid; also, the value 0 (the default) means to auto-detect using the + kernel's block layer and a ZFS internal exception list. I/O operations + will be aligned to the specified size boundaries. Additionally, the + minimum (disk) write size will be set to the specified size, so this + represents a space vs. performance trade-off. For optimal performance, the + pool sector size should be greater than or equal to the sector size of the + underlying disks. The typical case for setting this property is when + performance is important and the underlying disks use 4KiB sectors but + report 512B sectors to the OS (for compatibility reasons); in that case, + set + ashift=12 + (which is 1<<12 = 4096). When set, this property is used as the + default hint value in subsequent vdev operations (add, attach and + replace). Changing this value will not modify any existing vdev, not even + on disk replacement; however, it can be used, for instance, to replace a + dying 512B sectors disk with a newer 4KiB sectors device: this will + probably result in bad performance but at the same time could prevent loss + of data.
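For example (illustrative pool and device names), a pool on 4KiB-sector disks that report 512B sectors could be created with:
# zpool create -o ashift=12 tank mirror sda sdb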
+
autoexpand=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set + to on, the pool will be resized according to the size of + the expanded device. If the device is part of a mirror or raidz then all + devices within that mirror/raidz group must be expanded before the new + space is made available to the pool. The default behavior is + off. This property can also be referred to by its + shortened column name, + .
+
autoreplace=on|off
+
Controls automatic device replacement. If set to off, + device replacement must be initiated by the administrator by using the + zpool replace command. If + set to on, any new device, found in the same physical + location as a device that previously belonged to the pool, is + automatically formatted and replaced. The default behavior is + off. This property can also be referred to by its + shortened column name, + . + Autoreplace can also be used with virtual disks (like device mapper) + provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf. + See the vdev_id(8) man page for more details. + Autoreplace and autoonline require the ZFS Event Daemon be configured and + running. See the zed(8) man page for more details.
+
bootfs=(unset)|pool/dataset
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
cachefile=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the value none + creates a temporary pool that is never cached, and the "" (empty + string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.
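For example (the pool name is a placeholder), a pool can be kept out of the default cache file, so it is not imported automatically at boot, with:
# zpool set cachefile=none pool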

+
+
comment=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
+
dedupditto=number
+
This property is deprecated. In a future release, it will no longer have + any effect. +

Threshold for the number of block ditto copies. If + the reference count for a deduplicated block increases above this + number, a new ditto copy of this block is automatically stored. The + default setting is 0 which causes no ditto copies to + be created for deduplicated blocks. The minimum legal nonzero setting is + 100.

+
+
delegation=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
failmode=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
wait
+
Blocks all I/O access until the device connectivity is recovered and + the errors are cleared. This is the default behavior.
continue
+
Returns EIO to any new write I/O requests but + allows reads to any of the remaining healthy devices. Any write + requests that have yet to be committed to disk would be blocked.
panic
+
Prints out a message to the console and generates a system crash + dump.
+
+
+
autotrim=on|off
+
When set to on space which has been recently freed, and + is no longer allocated by the pool, will be periodically trimmed. This + allows block device vdevs which support BLKDISCARD, such as SSDs, or file + vdevs on which the underlying file system supports hole-punching, to + reclaim unused blocks. The default setting for this property is + off. +

Automatic TRIM does not immediately reclaim blocks after a + free. Instead, it will optimistically delay, allowing smaller ranges to + be aggregated into a few larger ones. These can then be issued more + efficiently to the storage.

+

Be aware that automatic trimming of recently freed data blocks + can put significant stress on the underlying storage devices. This will + vary depending on how well the specific device handles these commands. + For lower-end devices it is often possible to achieve most of the + benefits of automatic trimming by running an on-demand (manual) TRIM + periodically using the zpool + trim command.
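For example (the pool name is a placeholder), continuous trimming can be enabled, optionally supplemented by an occasional manual pass:
# zpool set autotrim=on pool
# zpool trim pool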

+
+
feature@feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(5) for details on feature states.
+
listsnapshots=on|off
+
Controls whether information about snapshots associated with this pool is + output when zfs list is + run without the -t option. The default value is + off. This property can also be referred to by its + shortened name, + listsnaps.
+
multihost=on|off
+
Controls whether a pool activity check should be performed during + zpool import. When a pool + is determined to be active it cannot be imported, even with the + -f option. This property is intended to be used in + failover configurations where multiple hosts have access to a pool on + shared storage. +

Multihost provides protection on import only. It does not + protect against an individual device being used in multiple pools, + regardless of the type of vdev. See the discussion under + zpool create.

+

When this property is on, periodic + writes to storage occur to show the pool is in use. See + zfs_multihost_interval + in the zfs-module-parameters(5) man page. In order to + enable this property, each host must set a unique hostid. See + zgenhostid(8) and + spl-module-parameters(5) for additional details. The + default value is off.
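For example (the pool name is a placeholder), each host sharing the storage might generate a hostid and then enable the check:
# zgenhostid
# zpool set multihost=on pool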

+
+
version=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool -V, + --version
+
An alias for the zpool + version subcommand.
+
zpool add + [-fgLnP] [-o + property=value] + pool vdev...
+
Adds the specified virtual devices to the given pool. The + vdev specification is described in the + Virtual Devices section. The + behavior of the -f option, and the device checks + performed are described in the zpool + create subcommand. +
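For example (illustrative names), a dry run of adding another mirror to an existing pool:
# zpool add -n pool mirror sde sdf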
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+
Display vdev GUIDs instead of the normal device + names. These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all + symbolic links. This can be used to look up the current block device + name regardless of the /dev/disk/ path used to open it.
+
+
Displays the configuration that would be used without actually adding + the vdevs. The actual pool creation can still + fail due to insufficient privileges or device sharing.
+
+
Display real paths for vdevs instead of only the + last component of the path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
+
zpool attach + [-f] [-o + property=value] + pool device new_device
+
Attaches new_device to the existing + device. The existing device cannot be part of a + raidz configuration. If device is not currently part + of a mirrored configuration, device automatically + transforms into a two-way mirror of device and + new_device. If device is part + of a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately. +
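For example (illustrative names), a single-disk pool can be converted into a two-way mirror with:
# zpool attach pool sda sdb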
+
+
Forces use of new_device, even if it appears to + be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
+
zpool checkpoint + [-d, --discard] + pool
+
Checkpoints the current state of pool , which can be + later restored by zpool import + --rewind-to-checkpoint. The existence of a checkpoint in a pool + prohibits the following zpool commands: + remove, attach, + detach, split, and + reguid. In addition, it may break reservation + boundaries if the pool lacks free space. The zpool + status command indicates the existence of a + checkpoint or the progress of discarding a checkpoint from a pool. The + zpool list command reports + how much space the checkpoint takes from the pool. +
+
-d, --discard
+
Discards an existing checkpoint from pool.
+
+
+
zpool clear + pool [device]
+
Clears device errors in a pool. If no arguments are specified, all device + errors within the pool are cleared. If one or more devices is specified, + only those errors associated with the specified device or devices are + cleared. If multihost is enabled, and the pool has been suspended, this + will not resume I/O. While the pool was suspended, it may have been + imported on another host, and resuming I/O could result in pool + damage.
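For example (illustrative names), errors recorded against a single device can be cleared with:
# zpool clear pool sda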
+
zpool create + [-dfn] [-m + mountpoint] [-o + property=value]... + [-o + feature@feature=value]... + [-O + file-system-property=value]... + [-R root] + [-t tname] + pool vdev...
+
Creates a new storage pool containing the virtual devices specified on the + command line. The pool name must begin with a letter, and can only contain + alphanumeric characters as well as underscore + ("_"), dash + ("-"), + colon + (":"), + space (" "), and period + ("."). + The pool names mirror, raidz, + spare and log are reserved, as are + names beginning with mirror, raidz, + spare, and the pattern + c[0-9]. + The vdev specification is described in the + Virtual Devices section.

The command attempts to verify that each device + specified is accessible and not currently in use by another subsystem. + However, this check is not robust enough to detect simultaneous attempts + to use a new device in different pools, even if + multihost is enabled. The + administrator must ensure that simultaneous invocations of any + combination of zpool replace, zpool + create, zpool add, or zpool + labelclear do not refer to the same device. Using the same device + in two pools will result in pool corruption.

+

There are some uses, such as being currently mounted or + specified as the dedicated dump device, that prevent a device from ever + being used by ZFS. Other uses, such as having a preexisting UFS file + system, can be overridden with the -f + option.

+

The command also checks that the replication strategy for the + pool is consistent. An attempt to combine redundant and non-redundant + storage in a single pool, or to mix disks and files, results in an error + unless -f is specified. The use of differently + sized devices within a single raidz or mirror group is also flagged as + an error unless -f is specified.

+

Unless the -R option is specified, the + default mount point is + /pool. The mount point + must not exist or must be empty, or else the root dataset cannot be + mounted. This can be overridden with the -m + option.

+

By default all supported features are enabled on the new pool + unless the -d option is specified.

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled with the -o option. + See zpool-features(5) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is + /pool or altroot/pool + if altroot is specified. The mount point must be + an absolute path, + legacy, + or none. For more information on dataset mount + points, see zfs(8).
+
+
Displays the configuration that would be used without actually + creating the pool. The actual pool creation can still fail due to + insufficient privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set.
+
+ feature@feature=value
+
Sets the given pool feature. See the + zpool-features(5) section for a list of valid + features that can be set. Value can be either disabled or + enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the + pool. See the Properties section + of zfs(8) for a list of valid properties that can be + set.
+
+ root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ tname
+
Sets the in-core pool name to + tname, + while the on-disk name will be the name specified as the pool + name. + This will set the default cachefile property to none. This is intended + to handle name space collisions when creating pools for other systems, + such as virtual machines or physical machines whose pools live on + network block devices.
+
+
+
zpool destroy + [-f] pool
+
Destroys the given pool, freeing up any devices for other use. This + command tries to unmount any active datasets before destroying the pool. +
+
+
Forces any active datasets contained within the pool to be + unmounted.
+
+
+
zpool detach + pool device
+
Detaches device from a mirror. The operation is + refused if there are no other valid replicas of the data. If device may be + re-added to the pool later on then consider the zpool + offline command instead.
+
zpool events + [-vHf [pool] | + -c]
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + For more information about the subclasses and event payloads that can be + generated see the zfs-events(5) man page. +
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Print the entire payload for each event.
+
+
+
zpool export + [-a] [-f] + pool...
+
Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present. +

Before exporting the pool, all datasets within the pool are + unmounted. A pool can not be exported if it has a shared spare that is + currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, + so that ZFS can label the disks with portable EFI labels. Otherwise, + disk drivers on platforms of different endianness will not recognize the + disks.

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, using the + unmount -f command. +

This command will forcefully export the pool even if it + has a shared spare that is currently being used. This may lead to + potential data corruption.

+
+
+
+
zpool get + [-Hp] [-o + field[,field]...] + all|property[,property]... + [pool]...
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
        name          Name of storage pool
+        property      Property name
+        value         Property value
+        source        Property source, either 'default' or 'local'.
+
+

See the Properties + section for more information on the available pool properties.
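For example (the pool name is a placeholder), a few properties can be retrieved in script-friendly form with:
# zpool get -Hp size,free,capacity pool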

+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display. + name,property,value,source + is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
+
zpool history + [-il] [pool]...
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified. +
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which in addition to standard + format includes, the user name, the hostname, and the zone in which + the operation was performed.
+
+
+
zpool import + [-D] [-d + dir|device]
+
Lists pools available to import. If the -d option + is not specified, this command searches for devices in + /dev. The -d option can be + specified multiple times, and all directories are searched. If the device + appears to be part of an exported pool, this command displays a summary of + the pool with the name of the pool, a numeric identifier, as well as the + vdev layout and current health of the device for each device or file. + Destroyed pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified. +

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DflmN] + [-F [-n] + [-T] [-X]] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Rewinds pool to the checkpointed state. Once the pool is imported with + this flag there is no way to undo the rewind. All changes and data + that were written after the checkpoint are lost! The only exception is + when the readonly mounting option is enabled. In + this case, the checkpointed state of the pool is opened and an + administrator can see how the pool would look like if they were to + fully rewind.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dflm] [-F + [-n] [-t] + [-T] [-X]] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set -o + cachefile=none when not explicitly specified.
+
+
+
zpool initialize + [-c | -s] + pool [device...]
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified. Only leaf data or log devices may be initialized. +
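For example (illustrative names), initializing can be started on one device and later suspended pool-wide:
# zpool initialize pool sdb
# zpool initialize -s pool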
+
-c, --cancel
+
Cancel initializing on the specified devices, or all eligible devices + if none are specified. If one or more target devices are invalid or + are not currently being initialized, the command will fail and no + cancellation will occur on any device.
+
-s, --suspend
+
Suspend initializing on the specified devices, or all eligible devices + if none are specified. If one or more target devices are invalid or + are not currently being initialized, the command will fail and no + suspension will occur on any device. Initializing can then be resumed + by running zpool + initialize with no flags on the relevant + target devices.
+
+
+
zpool iostat + [[[-c SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/Os + may be observed via iostat(1). If writes are located + nearby, they may be merged into a single larger operation. Additional I/O + may be generated depending on the level of vdev redundancy. To filter + output, you may pass in a list of pools, a pool and list of vdevs in that + pool, or a list of any vdevs from any pool. If no items are specified, + statistics for every pool in the system are shown. When given an + interval, the statistics are printed every + interval seconds until ^C is pressed. If the + -n flag is specified, the headers are displayed + only once; otherwise they are displayed periodically. If count is + specified, the command exits after count reports are printed. The first + report printed is always the statistics since boot regardless of whether + interval and count are passed. + However, this behavior can be suppressed with the + -y flag. Also note that the size units + (K, M, G, ...) that are + printed in the report are in base 1024. To get the raw values, use the + -p flag.
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + iostat output. Users can run any script found + in their ~/.zpool.d directory or from the + system /etc/zfs/zpool.d directory. Script + names containing the slash (/) character are not allowed. The default + search path can be overridden by setting the ZPOOL_SCRIPTS_PATH + environment variable. A privileged user can run + -c if they have the ZPOOL_SCRIPTS_AS_ROOT + environment variable set. If a script requires the use of a privileged + command, like smartctl(8), then it's recommended you + allow the user access to it in /etc/sudoers or + add the user to the /etc/sudoers.d/zfs file. +

If -c is passed without a script + name, it prints a list of all scripts. -c + also sets verbose mode + (-v).

+

Script output should be in the form of + "name=value". The column name is set to "name" + and the value is set to "value". Multiple lines can be + used to output multiple columns. The first line of output not in the + "name=value" format is displayed without a column title, + and no more output after that is displayed. This can be useful for + printing error messages. Blank or NULL values are printed as a '-' + to make output awk-able.

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
+
+
Underlying path to the vdev (/dev/sd*). For use with device + mapper, multipath, or partitioned vdevs.
+
+
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Print headers only once when passed
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+
Print request size histograms for the leaf vdev's IO. This includes + histograms of individual IOs (ind) and aggregate IOs (agg). These + stats can be useful for observing how well IO aggregation is working. + Note that TRIM IOs may exceed 16M, but will be counted as 16M.
+
+
Verbose statistics. Reports usage statistics for individual vdevs + within the pool, in addition to the pool-wide statistics.
+
+
Omit statistics since boot. Normally the first line of output reports + the statistics since boot. This option suppresses that first line of + output.
+
+
Display latency histograms: +

total_wait: Total IO time (queuing + + disk IO time). disk_wait: Disk IO time (time + reading/writing the disk). syncq_wait: Amount + of time IO spent in synchronous priority queues. Does not include + disk time. asyncq_wait: Amount of time IO + spent in asynchronous priority queues. Does not include disk time. + scrub: Amount of time IO spent in scrub queue. + Does not include disk time.

+
+
+
Include average latency statistics: +

total_wait: Average total IO time + (queuing + disk IO time). disk_wait: Average + disk IO time (time reading/writing the disk). + syncq_wait: Average amount of time IO spent in + synchronous priority queues. Does not include disk time. + asyncq_wait: Average amount of time IO spent + in asynchronous priority queues. Does not include disk time. + scrub: Average queuing time in scrub queue. + Does not include disk time. trim: Average + queuing time in trim queue. Does not include disk time.

+
+
+
Include active queue statistics. Each priority queue has both pending + ( pend) and active ( + activ) IOs. Pending IOs are waiting to be issued + to the disk, and active IOs have been issued to disk and are waiting + for completion. These stats are broken out by priority queue: +

syncq_read/write: Current number of + entries in synchronous priority queues. + asyncq_read/write: Current number of entries + in asynchronous priority queues. scrubq_read: + Current number of entries in scrub queue. + trimq_write: Current number of entries in trim + queue.

+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.

+
+
+
+
zpool labelclear + [-f] device
+
Removes ZFS label information from the specified + device. The device must not be + part of an active pool configuration. +
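For example (the device path is illustrative), stale labels can be cleared from a disk that belonged to an exported pool with:
# zpool labelclear -f /dev/sdc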
+
+
Treat exported or foreign devices as inactive.
+
+
+
zpool list + [-HgLpPv] [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
Lists the given pools along with a health status and space usage. If no + pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until ^C is pressed. + If count is specified, the command exits after + count reports are printed. +
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ property
+
Comma-separated list of properties to display. See the + Properties section for a list of + valid properties. The default list is name, + size, allocated, + free, checkpoint, + expandsize, fragmentation, + capacity, dedupratio, + health, altroot.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs + within the pool, in addition to the pool-wide statistics.
+
+
+
zpool offline + [-f] [-t] + pool device...
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
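For example (illustrative names), a disk can be taken offline only until the next reboot with:
# zpool offline -t pool sda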
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device...
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
zpool reguid + pool
+
Generates a new unique identifier for the pool. You must ensure that all + devices in this pool are online and healthy before performing this + action.
+
zpool reopen + [-n] pool
+
Reopen all the vdevs associated with the pool. +
+
+
Do not restart an in-progress scrub operation. This is not recommended + and can result in partially resilvered devices unless a second scrub + is performed.
+
+
+
zpool + remove [-np] + pool device...
+
Removes the specified device from the pool. This command supports removing + hot spare, cache, log, and both mirrored and non-redundant primary + top-level vdevs, including dedup and special vdevs. When the primary pool + storage includes a top-level raidz vdev only hot spare, cache, and log + devices can be removed. +

Removing a top-level vdev reduces the total amount of space in + the storage pool. The specified device will be evacuated by copying all + allocated space from it to the other devices in the pool. In this case, + the zpool remove command + initiates the removal and returns, while the evacuation continues in the + background. The removal progress can be monitored with + zpool status. If an IO + error is encountered during the removal process, it will be cancelled. + The + device_removal + feature flag must be enabled to remove a top-level vdev; see + zpool-features(5).

+

A mirrored top-level device (log or data) can be removed by + specifying the top-level mirror for the same. Non-log devices or data + devices that are part of a mirrored configuration can be removed using + the zpool detach + command.

+
+
+
Do not actually perform the removal ("no-op"). Instead, + print the estimated amount of memory that will be used by the mapping + table after the removal completes. This is nonzero only for top-level + vdevs.
+
+
+
+
Used in conjunction with the -n flag, displays + numbers as parsable (exact) values.
+
+
+
zpool remove + -s pool
+
Stops and cancels an in-progress removal of a top-level vdev.
+
zpool replace + [-f] [-o + property=value] + pool device + [new_device]
+
Replaces old_device with + new_device. This is equivalent to attaching + new_device, waiting for it to resilver, and then + detaching old_device. +

The size of new_device must be greater + than or equal to the minimum size of all the devices in a mirror or + raidz configuration.

+

new_device is required if the pool is + not redundant. If new_device is not specified, it + defaults to old_device. This form of replacement + is useful after an existing disk has failed and has been physically + replaced. In this case, the new disk may have the same + /dev path as the old device, even though it is + actually a different disk. ZFS recognizes this.
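For example (illustrative names), a failed disk in a redundant pool can be replaced with a new one:
# zpool replace pool sda sdb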

+
+
+
Forces use of new_device, even if it appears to + be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
+
zpool scrub + [-s | -p] + pool...
+
Begins a scrub or resumes a paused scrub. The scrub examines all data in + the specified pools to verify that it checksums correctly. For replicated + (mirror or raidz) devices, ZFS automatically repairs any damage discovered + during the scrub. The zpool + status command reports the progress of the scrub + and summarizes the results of the scrub upon completion. +

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be + out of date (for example, when attaching a new device to a mirror or + replacing an existing device), whereas scrubbing examines all data to + discover silent errors due to hardware faults or disk failure.

+

Because scrubbing and resilvering are I/O-intensive + operations, ZFS only allows one at a time. If a scrub is paused, running + zpool scrub again resumes it. + If a resilver is in progress, ZFS does not allow a scrub to be started + until the resilver completes.

+

Note that, due to changes in pool data on a live system, it is + possible for scrubs to progress slightly beyond 100% completion. During + this period, no completion time estimate will be provided.
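For example (the pool name is a placeholder), a scrub can be started and later paused with:
# zpool scrub pool
# zpool scrub -p pool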

+
+
+
Stop scrubbing.
+
+
+
+
Pause scrubbing. Scrub pause state and progress are periodically + synced to disk. If the system is restarted or pool is exported during + a paused scrub, even after import, scrub will remain paused until it + is resumed. Once resumed the scrub will pick up from the place where + it was last checkpointed to disk. To resume a paused scrub issue + zpool scrub + again.
+
+
+
zpool + resilver pool...
+
Starts a resilver. If an existing resilver is already running, it will be + restarted from the beginning. Any drives that were scheduled for a + deferred resilver will be added to the new one. This requires the + resilver_defer + feature.
+
zpool trim + [-d] [-c | + -s] pool + [device...]
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space. +

A manual on-demand TRIM operation can be initiated + irrespective of the autotrim pool property setting. + See the documentation for the autotrim property above + for the types of vdev devices which can be trimmed.

+
+
-d, --secure
+
Causes a secure TRIM to be initiated. When performing a secure TRIM, + the device guarantees that data stored on the trimmed blocks has been + erased. This requires support from the device and is not supported by + all SSDs.
+
+ --rate rate
+
Controls the rate at which the TRIM operation progresses. Without this + option TRIM is executed as quickly as possible. The rate, expressed in + bytes per second, is applied on a per-vdev basis and may be set + differently for each leaf vdev.
+
+ --cancel
+
Cancel trimming on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are + not currently being trimmed, the command will fail and no cancellation + will occur on any device.
+
+ --suspend
+
Suspend trimming on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are + not currently being trimmed, the command will fail and no suspension + will occur on any device. Trimming can then be resumed by running + zpool trim with no + flags on the relevant target devices.
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + Properties section for more + information on what properties can be set and acceptable values.
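For example, assuming a pool named tank and choosing the autotrim property mentioned above (both are placeholders for your own pool and property):

# zpool set autotrim=on tank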
+
zpool split + [-gLlnP] [-o + property=value]... + [-R root] pool + newpool [device ...]
+
Splits devices off pool creating + newpool. All vdevs in pool + must be mirrors and the pool must not be in the process of resilvering. At + the time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool. +

The optional device specification causes the specified device(s) to be included in the new pool and, should any devices remain unspecified, the last device in each mirror is used, as it would be by default.
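A minimal sketch, assuming a pool of mirrors named tank and a hypothetical new pool name tank2; the second form additionally sets an altroot of /mnt and imports the new pool:

# zpool split tank tank2
# zpool split -R /mnt tank tank2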

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the new pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Do dry run, do not actually perform the split. Print out the expected + configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the specified property for newpool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Set altroot for newpool to + root and automatically import it.
+
+
+
zpool status + [-c + [SCRIPT1[,SCRIPT2]...]] + [-DigLpPstvx] [-T + u|d] [pool]... + [interval [count]]
+
Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in + the system is displayed. For more information on pool and device health, + see the Device Failure + and Recovery section. +

If a scrub or resilver is in progress, this command reports + the percentage done and the estimated time to completion. Both of these + are only approximate, because the amount of data in the pool and the + other workloads on the system can change.

+
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + status output. See the + -c option of zpool + iostat for complete details.
+
+
Display vdev initialization status.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in + the pool) block counts and sizes by reference count.
+
+
Display the number of leaf VDEV slow IOs. This is the number of IOs that didn't complete in zio_slow_io_ms milliseconds (default 30 seconds). This does not necessarily mean the IOs failed to complete, just that they took an unreasonably long amount of time. This may indicate a problem with the underlying storage.
+
+
Display vdev TRIM status.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Displays verbose data error information, printing out a complete list + of all data errors since the last complete pool scrub.
+
+
Only display status for pools that are exhibiting errors or are + otherwise unavailable. Warnings about pools not using the latest + on-disk format will not be included.
+
+
+
zpool sync + [pool ...]
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all pools on the system. Otherwise, + it will sync only the specified pool(s).
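For instance (tank is a placeholder pool name), the first command below syncs every imported pool while the second syncs only the named pool:

# zpool sync
# zpool sync tank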
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools.
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by the current software. See zpool-features(5) for a description of the feature flags supported by the current software.
+
zpool upgrade + [-V version] + -a|pool...
+
Enables all supported features on the given pool. Once this is done, the + pool will no longer be accessible on systems that do not support feature + flags. See zpool-features(5) for details on + compatibility with systems that support feature flags, but do not support + all features enabled on the pool. +
+
+
Enables all supported features on all pools.
+
+ version
+
Upgrade to the specified legacy version. If the + -V flag is specified, no features will be + enabled on the pool. This option can only be used to increase the + version number up to the last supported legacy version number.
+
+
+
zpool version
+
Displays the software version of the zpool + userland utility and the zfs kernel module.
+
+
+
+
+

+

The following exit values are returned:

+
+
+
Successful completion.
+
+
An error occurred.
+
+
Invalid command line options were specified.
+
+
+
+

+
+
Creating a RAID-Z Storage Pool
+
The following command creates a pool with a single raidz root vdev that + consists of six disks. +
+
# zpool create tank raidz sda sdb sdc sdd sde sdf
+
+
+
Creating a Mirrored Storage Pool
+
The following command creates a pool with two mirrors, where each mirror + contains two disks. +
+
# zpool create tank mirror sda sdb mirror sdc sdd
+
+
+
Creating a ZFS Storage Pool by Using + Partitions
+
The following command creates an unmirrored pool using two disk + partitions. +
+
# zpool create tank sda1 sdb2
+
+
+
Creating a ZFS Storage Pool by Using + Files
+
The following command creates an unmirrored pool using files. While not + recommended, a pool based on files can be useful for experimental + purposes. +
+
# zpool create tank /path/to/file/a /path/to/file/b
+
+
+
Adding a Mirror to a ZFS Storage Pool
+
The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool. +
+
# zpool add tank mirror sda sdb
+
+
+
Listing Available ZFS Storage Pools
+
The following command lists all available pools on the system. In this case, the pool zion is faulted due to a missing device. The results from this command are similar to the following:
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
Destroying a ZFS Storage Pool
+
The following command destroys the pool tank and any + datasets contained within. +
+
# zpool destroy -f tank
+
+
+
Exporting a ZFS Storage Pool
+
The following command exports the devices in pool tank + so that they can be relocated or later imported. +
+
# zpool export tank
+
+
+
Importing a ZFS Storage Pool
+
The following command displays available pools, and then imports the pool + tank for use on the system. The results from this + command are similar to the following: +
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
Upgrading All ZFS Storage Pools to the Current + Version
+
The following command upgrades all ZFS Storage pools to the current + version of the software. +
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
Managing Hot Spares
+
The following command creates a new pool with an available hot spare: +
+
# zpool create tank mirror sda sdb spare sdc
+
+

If one of the disks were to fail, the pool would be reduced to + the degraded state. The failed device can be replaced using the + following command:

+
+
# zpool replace tank sda sdd
+
+

Once the data has been resilvered, the spare is automatically + removed and is made available for use should another device fail. The + hot spare can be permanently removed from the pool using the following + command:

+
+
# zpool remove tank sdc
+
+
+
Creating a ZFS Pool with Mirrored Separate + Intent Logs
+
The following command creates a ZFS storage pool consisting of two, + two-way mirrors and mirrored log devices: +
+
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \
+  sde sdf
+
+
+
Adding Cache Devices to a ZFS Pool
+
The following command adds two disks for use as cache devices to a ZFS + storage pool: +
+
# zpool add pool cache sdc sdd
+
+

Once added, the cache devices gradually fill with content from main memory. Depending on the size of your cache devices, it could take over an hour for them to fill. Capacity and reads can be monitored using the iostat subcommand as follows:

+
+
# zpool iostat -v pool 5
+
+
+
Removing a Mirrored top-level (Log or Data) + Device
+
The following commands remove the mirrored log device + mirror-2 and mirrored top-level data device + mirror-1. +

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
+
# zpool remove tank mirror-2
+
+

The command to remove the mirrored data + mirror-1 is:

+
+
# zpool remove tank mirror-1
+
+
+
Displaying expanded space on a + device
+
The following command displays the detailed information for the pool data. This pool is composed of a single raidz vdev where one of its devices increased its capacity by 10GB. In this example, the pool will not be able to utilize this extra capacity until all the devices under the raidz vdev have been expanded.
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
Adding output columns
+
Additional columns can be added to the zpool + status and zpool + iostat output with -c + option. +
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc slaves
+   capacity operations bandwidth
+   pool       alloc free  read  write read  write slaves
+   ---------- ----- ----- ----- ----- ----- ----- ---------
+   tank       20.4G 7.23T 26    152   20.7M 21.6M
+   mirror     20.4G 7.23T 26    152   20.7M 21.6M
+   U1         -     -     0     31    1.46K 20.6M sdb sdff
+   U10        -     -     0     1     3.77K 13.3K sdas sdgw
+   U11        -     -     0     1     288K  13.3K sdat sdgx
+   U12        -     -     0     1     78.4K 13.3K sdau sdgy
+   U13        -     -     0     1     128K  13.3K sdav sdgz
+   U14        -     -     0     1     63.2K 13.3K sdfk sdg
+
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes + of running + .
+
+
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
+
+
+
+
The maximum time in milliseconds that zpool import + will wait for an expected device to be available.
+
+
+
+
Cause zpool subcommands to output vdev guids by + default. This behavior is identical to the zpool status + -g command line option.
+
+
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the zpool + status -L command line option.
+
+
+
+
Cause zpool subcommands to output full vdev path + names by default. This behavior is identical to the zpool + status -p command line option.
+
+
+
+
Older ZFS on Linux implementations had issues when attempting to display + pool config VDEV names if a devid NVP value is present + in the pool's config. +

For example, a pool that originated on illumos platform would + have a devid value in the config and zpool + status would fail when listing the config. This would also be + true for future Linux based pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool add by setting + ZFS_VDEV_DEVID_OPT_OUT.
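A hedged example, assuming a pool named tank; the variable merely needs to be set, and the value YES shown here is arbitrary:

# ZFS_VDEV_DEVID_OPT_OUT=YES zpool import tank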

+
+
+
+
+
Allow a privileged user to run the zpool + status/iostat with the -c option. Normally, + only unprivileged users are allowed to run + -c.
+
+
+
+
The search path for scripts when running zpool + status/iostat with the -c option. This is a + colon-separated list of directories and overrides the default + ~/.zpool.d and + /etc/zfs/zpool.d search paths.
+
+
+
+
Allow a user to run zpool status/iostat with the + -c option. If + ZPOOL_SCRIPTS_ENABLED is not set, it is assumed that the + user is allowed to run zpool status/iostat + -c.
+
+
+
+

+

+
+
+

+

zfs-events(5), + zfs-module-parameters(5), + zpool-features(5), zed(8), + zfs(8)

+
+
+ + + + + +
May 2, 2019Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zstreamdump.8.html b/man/v0.8/8/zstreamdump.8.html new file mode 100644 index 000000000..b15ed8b8e --- /dev/null +++ b/man/v0.8/8/zstreamdump.8.html @@ -0,0 +1,205 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
zstreamdump(8)System Administration Commandszstreamdump(8)
+
+
+

+

zstreamdump - filter data in zfs send stream

+
+
+

+
zstreamdump [-C] [-v] [-d]
+

+
+
+

+

The zstreamdump utility reads from the output of the zfs + send command, then displays headers and some statistics from that + output. See zfs(8).
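A brief sketch, where pool/fs@snap and stream.zfs are hypothetical names for a snapshot and a previously saved send stream:

# zfs send pool/fs@snap | zstreamdump
# zstreamdump -v < stream.zfs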

+
+
+

+

The following options are supported:

+

-C

+

+
Suppress the validation of checksums.
+

+

-v

+

+
Verbose. Dump all headers, not only begin and end + headers.
+

+

-d

+

+
Dump contents of blocks modified. Implies verbose.
+

+
+
+

+

zfs(8)

+
+
+ + + + + +
29 Aug 2012ZFS pool 28, filesystem 5
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/index.html b/man/v0.8/index.html new file mode 100644 index 000000000..f0272e0c9 --- /dev/null +++ b/man/v0.8/index.html @@ -0,0 +1,143 @@ + + + + + + + v0.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/arcstat.1.html b/man/v2.0/1/arcstat.1.html new file mode 100644 index 000000000..194cf4a69 --- /dev/null +++ b/man/v2.0/1/arcstat.1.html @@ -0,0 +1,364 @@ + + + + + + + arcstat.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

arcstat.1

+
+ + + + + +
ARCSTAT(1)General Commands ManualARCSTAT(1)
+
+
+

+

arcstat - report ZFS ARC and L2ARC statistics

+
+
+

+
arcstat [-havxp] [-f field[,field]...] [-o file] [-s string] [interval [count]]
+

+
+
+

+

The arcstat utility prints various ZFS ARC and L2ARC statistics in a vmstat-like fashion.
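For example, the following sketch samples a handful of the fields listed below once per second, ten times (the field selection is arbitrary):

arcstat -f time,read,hits,miss,hit%,arcsz 1 10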

+

+

+

+

The arcstat command reports the following information:

+

+

+

c

+
ARC target size
+

+

dh%

+
Demand data hit percentage
+

+

dm%

+
Demand data miss percentage
+

+

mfu

+
MFU list hits per second
+

+

mh%

+
Metadata hit percentage
+

+

mm%

+
Metadata miss percentage
+

+

mru

+
MRU list hits per second
+

+

ph%

+
Prefetch hits percentage
+

+

pm%

+
Prefetch miss percentage
+

+

dhit

+
Demand data hits per second
+

+

dmis

+
Demand data misses per second
+

+

hit%

+
ARC hit percentage
+

+

hits

+
ARC reads per second
+

+

mfug

+
MFU ghost list hits per second
+

+

mhit

+
Metadata hits per second
+

+

miss

+
ARC misses per second
+

+

mmis

+
Metadata misses per second
+

+

mrug

+
MRU ghost list hits per second
+

+

phit

+
Prefetch hits per second
+

+

pmis

+
Prefetch misses per second
+

+

read

+
Total ARC accesses per second
+

+

time

+
Time
+

+

size

+
ARC size
+

+

arcsz

+
Alias for size
+

+

dread

+
Demand data accesses per second
+

+

eskip

+
evict_skip per second
+

+

miss%

+
ARC miss percentage
+

+

mread

+
Metadata accesses per second
+

+

pread

+
Prefetch accesses per second
+

+

l2hit%

+
L2ARC access hit percentage
+

+

l2hits

+
L2ARC hits per second
+

+

l2miss

+
L2ARC misses per second
+

+

l2read

+
Total L2ARC accesses per second
+

+

l2size

+
Size of the L2ARC
+

+

mtxmis

+
mutex_miss per second
+

+

l2bytes

+
Bytes read per second from the L2ARC
+

+

l2miss%

+
L2ARC access miss percentage
+

+

l2asize

+
Actual (compressed) size of the L2ARC
+

+

grow

+
ARC grow disabled
+

+

need

+
ARC reclaim needed
+

+

free

+
The ARC's idea of how much free memory there is, which + includes evictable memory in the page cache. Since the ARC tries to keep + avail above zero, avail is usually more instructive to observe + than free.
+

+

avail

+
The ARC's idea of how much free memory is available to + it, which is a bit less than free. May temporarily be negative, in + which case the ARC will reduce the target size c.
+

+
+
+

+

The following options are supported:

+

+

-a

+
Print all possible stats.
+

+

-f

+
Display only specific fields. See DESCRIPTION for + supported statistics.
+

+

-h

+
Display help message.
+

+

-o

+
Report statistics to a file instead of the standard + output.
+

+

-p

+
Disable auto-scaling of numerical fields (for raw, + machine-parsable values).
+

+

-s

+
Display data with a specified separator (default: 2 + spaces).
+

+

-x

+
Print extended stats (same as -f + time,mfu,mru,mfug,mrug,eskip,mtxmis,dread,pread,read).
+

+

-v

+
Show field headers and definitions
+

+
+
+

+

The following operands are supported:

+

count

+
Display only count reports.
+

+

interval

+
Specify the sampling interval in seconds.
+

+
+
+

+

arcstat was originally written in Perl by Neelakanth Nadgir and + supported only ZFS ARC statistics. Mike Harsch updated it to support L2ARC + statistics. John Hixson ported it to Python for FreeNAS over some beer, + after which many individuals from the OpenZFS community continued to + maintain and improve it.

+
+
+ + + + + +
October 20, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/cstyle.1.html b/man/v2.0/1/cstyle.1.html new file mode 100644 index 000000000..3700e91fe --- /dev/null +++ b/man/v2.0/1/cstyle.1.html @@ -0,0 +1,286 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
CSTYLE(1)General Commands ManualCSTYLE(1)
+
+
+

+

cstyle - check for some common stylistic errors in C source + files

+
+
+

+

cstyle [-chpvCP] [-o constructs] [file...]

+
+
+

+

cstyle inspects C source files (*.c and *.h) for common + stylistic errors. It attempts to check for the cstyle documented in + http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. Note that + there is much in that document that cannot be checked for; just + because your code is cstyle(1) clean does not mean that you've + followed Sun's C style. Caveat emptor.
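As a small illustration (module.c is a placeholder file name), the pickier checks plus the non-POSIX-type check can be run with the first command below, and continuation-line checking with verbose output with the second:

cstyle -p -P module.c
cstyle -c -v module.c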

+
+
+

+

The following options are supported:

+
+
+
Check continuation line indentation inside of functions. Sun's C style + states that all statements must be indented to an appropriate tab stop, + and any continuation lines after them must be indented exactly four + spaces from the start line. This option enables a series of checks + designed to find continuation line problems within functions only. The + checks have some limitations; see CONTINUATION CHECKING, below.
+
+
Performs heuristic checks that are sometimes wrong. Not generally + used.
+
+
Performs some of the more picky checks. Includes ANSI #else and #endif + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current continuation block.
+
+
Ignore errors in header comments (i.e. block comments starting in the + first column). Not generally used.
+
+
Check for use of non-POSIX types. Historically, types like + "u_int" and "u_long" were used, but they are now + deprecated in favor of the POSIX types uint_t, ulong_t, etc. This detects + any use of the deprecated types. Used as part of the putback checks.
+
+
Allow a comma-separated list of additional constructs. Available + constructs include:
+
+
Allow doxygen-style block comments (/** and /*!)
+
+
Allow splint-style lint comments (/*@...@*/)
+
+
+
+

+

The cstyle rule for the OS/Net consolidation is that all new files + must be -pP clean. For existing files, the following invocations are + run against both the old and new files:

+
+
+
+
+
+
+
+
+

If the old file gave no errors for one of the invocations, the new + file must also give no errors. This way, files can only become more + clean.

+
+
+

+

The continuation checker is a reasonably simple state machine that knows something about how C is laid out and can match parentheses, etc. over multiple lines. It does have some limitations:

+
+
1.
+
Preprocessor macros which cause unmatched parentheses will confuse the checker for that line. To fix this, you'll need to make sure that each branch of the #if statement has balanced parentheses.
+
2.
+
Some cpp macros do not require ;s after them. Any such macros + *must* be ALL_CAPS; any lower case letters will cause bad output.
+
+

The bad output will generally be corrected after the next + ;, {, or }.

+

Some continuation error messages deserve some additional explanation:

+
+
+
A multi-line statement which is not broken at statement boundaries. For + example:
+
+
+

if (this_is_a_long_variable == another_variable) a = +
+ b + c;

+

Will trigger this error. Instead, do:

+

if (this_is_a_long_variable == another_variable) +
+ a = b + c;

+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example:
+
+
+

while (do_something(&x) == 0);

+

Will trigger this error. Instead, do:

+

while (do_something(&x) == 0) +
+ ;

+
+

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/index.html b/man/v2.0/1/index.html new file mode 100644 index 000000000..ea31d7096 --- /dev/null +++ b/man/v2.0/1/index.html @@ -0,0 +1,155 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/raidz_test.1.html b/man/v2.0/1/raidz_test.1.html new file mode 100644 index 000000000..27b4863d9 --- /dev/null +++ b/man/v2.0/1/raidz_test.1.html @@ -0,0 +1,261 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
RAIDZ_TEST(1)General Commands ManualRAIDZ_TEST(1)
+
+

+
+

+

raidz_test - raidz implementation verification and + benchmarking tool

+
+
+

+

raidz_test <options>

+
+
+

+

This manual page documents briefly the raidz_test + command.

+

The purpose of this tool is to run all supported raidz implementations and verify the results of all methods. The tool also contains a parameter sweep option in which all parameters affecting a RAIDZ block are verified (such as ashift size, data offset, and data size). The tool also supports a benchmarking mode using the -B option.
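A rough usage sketch; the parameter values below are arbitrary and simply exercise the options documented later in this page. The first invocation verifies the implementations with an ashift of 12, 8 data disks and a 512 KiB zio, and the second runs the benchmark mode:

raidz_test -a 12 -d 8 -s 19 -v
raidz_test -B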

+
+
+

+

-h

+
+
+
Print a help summary.
+
+

-a ashift (default: 9)

+
+
+
Ashift value.
+
+

-o zio_off_shift (default: 0)

+
+
+
Zio offset for raidz block. Offset value is 1 << + (zio_off_shift)
+
+

-d raidz_data_disks (default: 8)

+
+
+
Number of raidz data disks to use. Additional disks for parity will be + used during testing.
+
+

-s zio_size_shift (default: 19)

+
+
+
Size of data for raidz block. Size is 1 << (zio_size_shift).
+
+

-S(weep)

+
+
+
Sweep the parameter space while verifying the raidz implementations. This option will exhaust most of the valid values for the -a, -o, -d and -s options. The runtime using this option will be long.
+
+

-t(imeout)

+
+
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
+

-B(enchmark)

+
+
+
This option starts the benchmark mode. All implementations are benchmarked using increasing per-disk data sizes. Results are given as throughput per disk, measured in MiB/s.
+
+

-v(erbose)

+
+
+
Increase verbosity.
+
+

-T(est the test)

+
+
+
Debugging option. When this option is specified, the tool is expected to fail all tests. This is used to check whether the tests would properly verify bit-exactness.
+
+

-D(ebug)

+
+
+
Debugging option. Specify to attach gdb when SIGSEGV or SIGABRT are + received.
+
+

+

+
+
+

+

ztest (1)

+
+
+

+

vdev_raidz, created for OpenZFS by Gvozden Nešković + <neskovic@gmail.com>

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/zhack.1.html b/man/v2.0/1/zhack.1.html new file mode 100644 index 000000000..343f84624 --- /dev/null +++ b/man/v2.0/1/zhack.1.html @@ -0,0 +1,253 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
ZHACK(1)General Commands ManualZHACK(1)
+
+

+
+

+

zhack - libzpool debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+

zhack [-c cachefile] [-d dir] + <subcommand> [arguments]

+
+
+

+

-c cachefile

+
+
+
Read the pool configuration from the cachefile, which is + /etc/zfs/zpool.cache by default.
+
+

-d dir

+
+
+
Search for pool members in the dir path. Can be specified + more than once.
+
+
+
+

+

feature stat pool

+
+
+
List feature flags.
+
+

feature enable [-d description] [-r] pool + guid

+
+
+
Add a new feature to pool that is uniquely identified by + guid, which is specified in the same form as a zfs(8) user + property.
+
+
The description is a short human readable explanation of the new + feature.
+
+
The -r switch indicates that pool can be safely opened in + read-only mode by a system that does not have the guid + feature.
+
+

feature ref [-d|-m] pool guid

+
+
+
Increment the reference count of the guid feature in + pool.
+
+
The -d switch decrements the reference count of the guid + feature in pool.
+
+
The -m switch indicates that the guid feature is now + required to read the pool MOS.
+
+
+
+

+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
# zhack feature enable -d 'Predict future disk failures.' \
+
+ tank com.example:clairvoyance
+
# zhack feature ref tank com.example:clairvoyance
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

zfs(8), zpool-features(5), ztest(1)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/ztest.1.html b/man/v2.0/1/ztest.1.html new file mode 100644 index 000000000..fd49702bb --- /dev/null +++ b/man/v2.0/1/ztest.1.html @@ -0,0 +1,350 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ZTEST(1)General Commands ManualZTEST(1)
+
+

+
+

+

ztest - was written by the ZFS Developers as a ZFS unit + test.

+
+
+

+

ztest <options>

+
+
+

+

This manual page documents briefly the ztest command.

+

ztest was written by the ZFS Developers as a ZFS unit test. The tool was developed in tandem with the ZFS functionality and was executed nightly as one of the many regression tests against the daily build. As features were added to ZFS, unit tests were also added to ztest. In addition, a separate test development team wrote and executed more functional and stress tests.

+

By default ztest runs for five minutes (see the -T option below) and uses block files (stored in /tmp) to create pools rather than using physical disks. Block files afford ztest its flexibility to play around with zpool components without requiring large hardware configurations. However, storing the block files in /tmp may not work for you if you have a small tmp directory.

+

By default ztest is non-verbose, which is why entering the command above will result in ztest quietly executing for five minutes. The -V option can be used to increase the verbosity of the tool. Adding multiple -V options is allowed, and the more you add the more chatty ztest becomes.

+

After the ztest run completes, you should notice many + ztest.* files lying around. Once the run completes you can safely remove + these files. Note that you shouldn't remove these files during a run. You + can re-use these files in your next ztest run by using the -E + option.

+
+
+

+

-?

+
+
+
Print a help summary.
+
+

-v vdevs (default: 5)

+
+
+
Number of vdevs.
+
+

-s size_of_each_vdev (default: 64M)

+
+
+
Size of each vdev.
+
+

-a alignment_shift (default: 9) (use 0 for + random)

+
+
+
Used alignment in test.
+
+

-m mirror_copies (default: 2)

+
+
+
Number of mirror copies.
+
+

-r raidz_disks (default: 4)

+
+
+
Number of raidz disks.
+
+

-R raidz_parity (default: 1)

+
+
+
Raidz parity.
+
+

-d datasets (default: 7)

+
+
+
Number of datasets.
+
+

-t threads (default: 23)

+
+
+
Number of threads.
+
+

-g gang_block_threshold (default: 32K)

+
+
+
Gang block threshold.
+
+

-i initialize_pool_i_times (default: + 1)

+
+
+
Number of pool initialisations.
+
+

-k kill_percentage (default: 70%)

+
+
+
Kill percentage.
+
+

-p pool_name (default: ztest)

+
+
+
Pool name.
+
+

-V(erbose)

+
+
+
Verbose (use multiple times for ever more blather).
+
+

-E(xisting)

+
+
+
Use existing pool (use existing pool instead of creating new one).
+
+

-T time (default: 300 sec)

+
+
+
Total test run time.
+
+

-z zil_failure_rate (default: fail every 2^5 + allocs)

+
+
+
Injected failure rate.
+
+

-G

+
+
+
Dump zfs_dbgmsg buffer before exiting.
+
+
+
+

+

To override /tmp as your location for block files, you can use the + -f option:

+
+
+
ztest -f /
+
+

To get an idea of what ztest is actually testing try this:

+
+
+
ztest -f / -VVV
+
+

Maybe you'd like to run ztest for longer? To do so simply use the + -T option and specify the runlength in seconds like so:

+
+
+
ztest -f / -V -T 120 +

+
+
+
+
+

+
+
+
Use id instead of the SPL hostid to identify this host. Intended + for use with ztest, but this environment variable will affect any utility + which uses libzpool, including zpool(8). Since the kernel is + unaware of this setting results with utilities other than ztest are + undefined.
+
+
Limit the default stack size to stacksize bytes for the purpose of + detecting and debugging kernel stack overflows. This value defaults to + 32K which is double the default 16K Linux kernel stack size. +

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to 256K.

+
+
+
+
+

+

spl-module-parameters(5), zpool(8), zfs(8), zdb(8)

+
+
+

+

This manual page was transferred to asciidoc by Michael + Gebetsroither <gebi@grml.org> from + http://opensolaris.org/os/community/zfs/ztest/

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/zvol_wait.1.html b/man/v2.0/1/zvol_wait.1.html new file mode 100644 index 000000000..5413620b0 --- /dev/null +++ b/man/v2.0/1/zvol_wait.1.html @@ -0,0 +1,192 @@ + + + + + + + zvol_wait.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zvol_wait.1

+
+ + + + + +
ZVOL_WAIT(1)General Commands Manual (smm)ZVOL_WAIT(1)
+
+
+

+

zvol_waitWait + for ZFS volume links in + to be + created.

+
+
+

+ + + + + +
zvol_wait
+
+
+

+

When a ZFS pool is imported, ZFS will register each ZFS volume + (zvol) as a disk device with the system. As the disks are registered, + udev(7) will asynchronously create + symlinks under + + using the zvol's name. zvol_wait will wait for all + those symlinks to be created before returning.

+
+
+

+

udev(7)

+
+
+ + + + + +
July 5, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/index.html b/man/v2.0/5/index.html new file mode 100644 index 000000000..41dc9bbfc --- /dev/null +++ b/man/v2.0/5/index.html @@ -0,0 +1,153 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/spl-module-parameters.5.html b/man/v2.0/5/spl-module-parameters.5.html new file mode 100644 index 000000000..ec66c3f8a --- /dev/null +++ b/man/v2.0/5/spl-module-parameters.5.html @@ -0,0 +1,365 @@ + + + + + + + spl-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

spl-module-parameters.5

+
+ + + + + +
SPL-MODULE-PARAMETERS(5)File Formats ManualSPL-MODULE-PARAMETERS(5)
+
+
+

+

spl-module-parameters - SPL module parameters

+
+
+

+

Description of the different parameters to the SPL module.

+

+
+

+

+

spl_kmem_cache_expire (uint)

+
Cache expiration is part of the default Illumos cache behavior. The idea is that objects in magazines which have not been recently accessed should be returned to the slabs periodically. This is known as cache aging and, when enabled, objects will typically be returned after 15 seconds.

On the other hand Linux slabs are designed to never move objects + back to the slabs unless there is memory pressure. This is possible because + under Linux the cache will be notified when memory is low and objects can be + released.

+

By default only the Linux method is enabled. It has been shown to + improve responsiveness on low memory systems and not negatively impact the + performance of systems with more memory. This policy may be changed by + setting the spl_kmem_cache_expire bit mask as follows, both policies + may be enabled concurrently.

+

0x01 - Aging (Illumos), 0x02 - Low memory (Linux)

+

Default value: 0x02
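As a sketch of how the bit mask might be applied (0x03 enables both policies; the sysfs path follows the usual /sys/module/<module>/parameters layout and should be verified on your system), either at runtime:

# echo 0x03 > /sys/module/spl/parameters/spl_kmem_cache_expire

or persistently via a modprobe configuration file such as /etc/modprobe.d/spl.conf:

options spl spl_kmem_cache_expire=0x03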

+
+

+

spl_kmem_cache_kmem_threads (uint)

+
The number of threads created for the spl_kmem_cache task + queue. This task queue is responsible for allocating new slabs for use by the + kmem caches. For the majority of systems and workloads only a small number of + threads are required. +

Default value: 4

+
+

+

spl_kmem_cache_reclaim (uint)

+
When this is set it prevents Linux from being able to rapidly reclaim all the memory held by the kmem caches. This may be useful in circumstances where it's preferable that Linux reclaim memory from some other subsystem first. Setting this will increase the likelihood of out-of-memory events on a memory-constrained system.

Default value: 0

+
+

+

spl_kmem_cache_obj_per_slab (uint)

+
The preferred number of objects per slab in the cache. In general, a larger value will increase the cache's memory footprint while decreasing the time required to perform an allocation. Conversely, a smaller value will minimize the footprint and improve cache reclaim time, but individual allocations may take longer.

Default value: 8

+
+

+

spl_kmem_cache_obj_per_slab_min (uint)

+
The minimum number of objects allowed per slab. Normally + slabs will contain spl_kmem_cache_obj_per_slab objects but for caches + that contain very large objects it's desirable to only have a few, or even + just one, object per slab. +

Default value: 1

+
+

+

spl_kmem_cache_max_size (uint)

+
The maximum size of a kmem cache slab in MiB. This effectively limits the maximum cache object size to spl_kmem_cache_max_size / spl_kmem_cache_obj_per_slab. Caches may not be created with objects sized larger than this limit.

Default value: 32 (64-bit) or 4 (32-bit)

+
+

+

spl_kmem_cache_slab_limit (uint)

+
For small objects the Linux slab allocator should be used + to make the most efficient use of the memory. However, large objects are not + supported by the Linux slab and therefore the SPL implementation is preferred. + This value is used to determine the cutoff between a small and large object. +

Objects of spl_kmem_cache_slab_limit or smaller will be + allocated using the Linux slab allocator, large objects use the SPL + allocator. A cutoff of 16K was determined to be optimal for architectures + using 4K pages.

+

Default value: 16,384

+
+

+

spl_kmem_alloc_warn (uint)

+
As a general rule kmem_alloc() allocations should be + small, preferably just a few pages since they must by physically contiguous. + Therefore, a rate limited warning will be printed to the console for any + kmem_alloc() which exceeds a reasonable threshold. +

The default warning threshold is set to eight pages but capped at 32K to accommodate systems using large pages. This value was selected to be small enough to ensure the largest allocations are quickly noticed and fixed, but large enough to avoid logging any warnings when an allocation size is larger than optimal but not a serious concern. Since this value is tunable, developers are encouraged to set it lower when testing so any new largish allocations are quickly caught. These warnings may be disabled by setting the threshold to zero.

+

Default value: 32,768

+
+

+

spl_kmem_alloc_max (uint)

+
Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE. Allocations which are marginally smaller than this limit may succeed but should still be avoided due to the expense of locating a contiguous range of free pages. Therefore, a maximum kmem size with a reasonable safety margin of 4x is set. kmem_alloc() allocations larger than this maximum will quickly fail. vmem_alloc() allocations less than or equal to this value will use kmalloc(), but shift to vmalloc() when exceeding this value.

Default value: KMALLOC_MAX_SIZE/4

+
+

+

spl_kmem_cache_magazine_size (uint)

+
Cache magazines are an optimization designed to minimize the cost of allocating memory. They do this by keeping a per-cpu cache of recently freed objects, which can then be reallocated without taking a lock. This can improve performance on highly contended caches. However, because objects in magazines will prevent otherwise empty slabs from being immediately released, this may not be ideal for low-memory machines.

For this reason spl_kmem_cache_magazine_size can be used to + set a maximum magazine size. When this value is set to 0 the magazine size + will be automatically determined based on the object size. Otherwise + magazines will be limited to 2-256 objects per magazine (i.e per cpu). + Magazines may never be entirely disabled in this implementation.

+

Default value: 0

+
+

+

spl_hostid (ulong)

+
The system hostid; when set, this can be used to uniquely identify a system. By default this value is set to zero, which indicates the hostid is disabled. It can be explicitly enabled by placing a unique non-zero value in /etc/hostid.

Default value: 0
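One hedged way to set this persistently: generate /etc/hostid with the zgenhostid(8) helper shipped with OpenZFS, or set the module option directly in a modprobe configuration file (0x00bab10c below is only an example value):

# zgenhostid
options spl spl_hostid=0x00bab10c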

+
+

+

spl_hostid_path (charp)

+
The expected path to locate the system hostid when + specified. This value may be overridden for non-standard configurations. +

Default value: /etc/hostid

+
+

+

spl_panic_halt (uint)

+
Cause a kernel panic on assertion failures. When not + enabled, the thread is halted to facilitate further debugging. +

Set to a non-zero value to enable.

+

Default value: 0

+
+

+

spl_taskq_kick (uint)

+
Kick stuck taskq to spawn threads. When writing a non-zero value to it, it will scan all the taskqs. If any of them have a pending task more than 5 seconds old, it will kick it to spawn more threads. This can be used if you find a rare deadlock occurs because one or more taskqs didn't spawn a thread when they should have.

Default value: 0

+
+

+

spl_taskq_thread_bind (int)

+
Bind taskq threads to specific CPUs. When enabled all + taskq threads will be distributed evenly over the available CPUs. By default, + this behavior is disabled to allow the Linux scheduler the maximum flexibility + to determine where a thread should run. +

Default value: 0

+
+

+

spl_taskq_thread_dynamic (int)

+
Allow dynamic taskqs. When enabled taskqs which set the + TASKQ_DYNAMIC flag will by default create only a single thread. New threads + will be created on demand up to a maximum allowed number to facilitate the + completion of outstanding tasks. Threads which are no longer needed will be + promptly destroyed. By default this behavior is enabled but it can be disabled + to aid performance analysis or troubleshooting. +

Default value: 1

+
+

+

spl_taskq_thread_priority (int)

+
Allow newly created taskq threads to set a non-default + scheduler priority. When enabled the priority specified when a taskq is + created will be applied to all threads created by that taskq. When disabled + all threads will use the default Linux kernel thread priority. By default, + this behavior is enabled. +

Default value: 1

+
+

+

spl_taskq_thread_sequential (int)

+
The number of items a taskq worker thread must handle + without interruption before requesting a new worker thread be spawned. This is + used to control how quickly taskqs ramp up the number of threads processing + the queue. Because Linux thread creation and destruction are relatively + inexpensive a small default value has been selected. This means that normally + threads will be created aggressively which is desirable. Increasing this value + will result in a slower thread creation rate which may be preferable for some + configurations. +

Default value: 4

+
+

+

spl_max_show_tasks (uint)

+
The maximum number of tasks per pending list in each taskq shown in /proc/spl/{taskq,taskq-all}. Write 0 to turn off the limit. The proc file will walk the lists with the lock held, so reading it could cause a lockup if a list grows too large without limiting the output. "(truncated)" will be shown if the list is larger than the limit.

Default value: 512

+
+
+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/vdev_id.conf.5.html b/man/v2.0/5/vdev_id.conf.5.html new file mode 100644 index 000000000..98eb7c35f --- /dev/null +++ b/man/v2.0/5/vdev_id.conf.5.html @@ -0,0 +1,372 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
VDEV_ID.CONF(5)File Formats ManualVDEV_ID.CONF(5)
+
+
+

+

vdev_id.conf — + Configuration file for vdev_id

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the + default behavior of vdev_id(8) + while it is mapping a disk device name to an alias.

+

The vdev_id.conf file uses a simple format + consisting of a keyword followed by one or more values on a single line. Any + line not beginning with a recognized keyword is ignored. Comments may + optionally begin with a hash character.

+

The following keywords and values are used.

+
+
+ name devlink
+
Maps a device link in the /dev directory hierarchy + to a new device name. The udev rule defining the device link must have run + prior to vdev_id(8). A defined + alias takes precedence over a topology-derived name, but the two naming + methods can otherwise coexist. For example, one might name drives in a + JBOD with the sas_direct topology while naming an + internal L2ARC device with an alias. +

name is the name of the link to the + device that will by created under + /dev/disk/by-vdev.

+

devlink is the name of the device link + that has already been defined by udev. This may be an absolute path or + the base filename.

+
+
+ [pci_slot] port + name
+
Maps a physical path to a channel name (typically representing a single + disk enclosure).
+ +
Additionally create /dev/by-enclosure symlinks to + the disk enclosure + devices + using the naming scheme from vdev_id.conf. + enclosure_symlinks is only allowed for + sas_direct mode.
+ +
Specify the prefix for the enclosure symlinks in the form + /dev/by-enclosure/prefix⟩-⟨channel⟩⟨num⟩ +

Defaults to + “”.

+
+
+ prefix new + [channel]
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is + specified then the mapping is only applied to slots in the named channel, + otherwise the mapping is applied to all channels. The first-specified + slot rule that can match a slot takes precedence. + Therefore a channel-specific mapping for a given slot should generally + appear before a generic mapping for the same slot. In this way a custom + mapping may be applied to a particular channel and a default mapping + applied to the others.
+
+ yes|no
+
Specifies whether vdev_id(8) + will handle only dm-multipath devices. If set to yes + then vdev_id(8) will examine the + first running component disk of a dm-multipath device as provided by the + driver command to determine the physical path.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
channels are uniquely identified by a SAS switch port number
+
+
+
+ num
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) + internally uses this value to determine which HBA or switch port a device + is connected to. The default is + .
+
+ bay|phy|port|id|lun|ses
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay: +
+
+
read the slot number from the bay identifier.
+
+
read the slot number from the phy identifier.
+
+
use the SAS port as the slot number.
+
+
use the scsi id as the slot number.
+
+
use the scsi lun as the slot number.
+
+
use the SCSI Enclosure Services (SES) enclosure device slot number, as + reported by sg_ses(8). Intended for use only on + systems where bay is unsupported, noting that + port and id may be unstable across + disk replacement.
+
+
+
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for + vdev_id(8).
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping:

+
+
multipath     no
+topology      sas_direct
+phys_per_port 4
+slot          bay
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         C
+channel 86:00.0  0         D
+
+# Custom mapping for Channel A
+
+#    Linux      Mapped
+#    Slot       Slot      Channel
+slot 1          7         A
+slot 2          10        A
+slot 3          3         A
+slot 4          6         A
+
+# Default mapping for B, C, and D
+
+slot 1          4
+slot 2          2
+slot 3          1
+slot 4          3
+
+

A SAS-switch topology. Note, that the + channel keyword takes only two arguments in this + example.

+
+
topology      sas_switch
+
+#       SWITCH PORT  CHANNEL NAME
+channel 1            A
+channel 2            B
+channel 3            C
+channel 4            D
+
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path.

+
+
multipath yes
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         A
+channel 86:00.0  0         B
+
+

A configuration with enclosure_symlinks enabled.

+
+
multipath yes
+enclosure_symlinks yes
+
+#          PCI_ID      HBA PORT     CHANNEL NAME
+channel    05:00.0     1            U
+channel    05:00.0     0            L
+channel    06:00.0     1            U
+channel    06:00.0     0            L
+
+

In addition to the disks symlinks, this configuration will + create:

+
+
/dev/by-enclosure/enc-L0
+/dev/by-enclosure/enc-L1
+/dev/by-enclosure/enc-U0
+/dev/by-enclosure/enc-U1
+
+

A configuration using device link aliases.

+
+
#     by-vdev
+#     name     fully qualified or base name of device link
+alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+alias d2       wwn-0x5000c5002def789e
+
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/zfs-events.5.html b/man/v2.0/5/zfs-events.5.html new file mode 100644 index 000000000..cbe2c29e4 --- /dev/null +++ b/man/v2.0/5/zfs-events.5.html @@ -0,0 +1,848 @@ + + + + + + + zfs-events.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-events.5

+
+ + + + + +
ZFS-EVENTS(5)File Formats ManualZFS-EVENTS(5)
+
+
+

+

zfs-events - Events created by the ZFS filesystem.

+
+
+

+

Description of the different events generated by the ZFS + stack.

+

Most of these don't have any description. The events generated by + ZFS have never been publicly documented. What is here is intended as a + starting point to provide documentation for all possible events.

+

To view all events created since the loading of the ZFS infrastructure (i.e., "the module"), run

+

+
zpool events
+

to get a short list, and

+

+
zpool events -v
+

to get a full detail of the events and what information is + available about it.

+

This man page lists the different subclasses that are issued in + the case of an event. The full event name would be + ereport.fs.zfs.SUBCLASS, but we only list the last part here.

+

+
+

+

+

checksum

+
Issued when a checksum error has been detected.
+

+

io

+
Issued when there is an I/O error in a vdev in the + pool.
+

+

data

+
Issued when there have been data errors in the + pool.
+

+

deadman

+
Issued when an I/O is determined to be "hung". This can be caused by lost completion events due to flaky hardware or drivers. See the zfs_deadman_failmode module option description for additional information regarding "hung" I/O detection and configuration.
+

+

delay

+
Issued when a completed I/O exceeds the maximum allowed + time specified by the zio_slow_io_ms module option. This can be an + indicator of problems with the underlying storage device. The number of delay + events is ratelimited by the zfs_slow_io_events_per_second module + parameter.
+

+

config.sync

+
Issued every time a vdev change has been made to the pool.
+

+

zpool

+
Issued when a pool cannot be imported.
+

+

zpool.destroy

+
Issued when a pool is destroyed.
+

+

zpool.export

+
Issued when a pool is exported.
+

+

zpool.import

+
Issued when a pool is imported.
+

+

zpool.reguid

+
Issued when a REGUID (a new unique identifier for the pool has been regenerated) has been detected.
+

+

vdev.unknown

+
Issued when the vdev is unknown. An example is trying to clear device errors on a vdev that has failed or been kicked from the system/pool and is no longer available.
+

+

vdev.open_failed

+
Issued when a vdev could not be opened (because it didn't + exist for example).
+

+

vdev.corrupt_data

+
Issued when corrupt data has been detected on a vdev.
+

+

vdev.no_replicas

+
Issued when there are no more replicas to sustain the + pool. This would lead to the pool being DEGRADED.
+

+

vdev.bad_guid_sum

+
Issued when a missing device in the pool has been detected.
+

+

vdev.too_small

+
Issued when the system (kernel) has removed a device, and ZFS notices that the device isn't there anymore. This is usually followed by a probe_failure event.
+

+

vdev.bad_label

+
Issued when the label is OK but invalid.
+

+

vdev.bad_ashift

+
Issued when the ashift alignment requirement has + increased.
+

+

vdev.remove

+
Issued when a vdev is detached from a mirror (or a spare is detached from a vdev where it has been used to replace a failed drive; this only works if the original drive has been re-added).
+

+

vdev.clear

+
Issued when clearing device errors in a pool. Such as + running zpool clear on a device in the pool.
+

+

vdev.check

+
Issued when a check to see if a given vdev could be + opened is started.
+

+

vdev.spare

+
Issued when a spare has kicked in to replace a failed device.
+

+

vdev.autoexpand

+
Issued when a vdev can be automatically expanded.
+

+

io_failure

+
Issued when there is an I/O failure in a vdev in the + pool.
+

+

probe_failure

+
Issued when a probe fails on a vdev. This would occur if a vdev has been kicked from the system outside of ZFS (such as when the kernel has removed the device).
+

+

log_replay

+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+

+

resilver.start

+
Issued when a resilver is started.
+

+

resilver.finish

+
Issued when the running resilver has finished.
+

+

scrub.start

+
Issued when a scrub is started on a pool.
+

+

scrub.finish

+
Issued when a pool has finished scrubbing.
+

+

scrub.abort

+
Issued when a scrub is aborted on a pool.
+

+

scrub.resume

+
Issued when a scrub is resumed on a pool.
+

+

scrub.paused

+
Issued when a scrub is paused on a pool.
+

+

bootfs.vdev.attach

+
+

+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with + ZEVENT_.

+

+

pool

+
Pool name.
+

+

pool_failmode

+
Failmode - wait, continue or panic. + See zpool(8) (failmode property) for more information.
+

+

pool_guid

+
The GUID of the pool.
+

+

pool_context

+
The load state for the pool (0=none, 1=open, 2=import, + 3=tryimport, 4=recover 5=error).
+

+

vdev_guid

+
The GUID of the vdev in question (the vdev failing or + operated upon with zpool clear etc).
+

+

vdev_type

+
Type of vdev - disk, file, mirror + etc. See zpool(8) under Virtual Devices for more information on + possible values.
+

+

vdev_path

+
Full path of the vdev, including any -partX.
+

+

vdev_devid

+
ID of vdev (if any).
+

+

vdev_fru

+
Physical FRU location.
+

+

vdev_state

+
State of vdev (0=uninitialized, 1=closed, 2=offline, + 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healthy).
+

+

vdev_ashift

+
The ashift value of the vdev.
+

+

vdev_complete_ts

+
The time the last I/O completed for the specified + vdev.
+

+

vdev_delta_ts

+
The time since the last I/O completed for the specified + vdev.
+

+

vdev_spare_paths

+
List of spares, including full path and any + -partX.
+

+

vdev_spare_guids

+
GUID(s) of spares.
+

+

vdev_read_errors

+
The number of read errors that have been detected on the vdev.
+

+

vdev_write_errors

+
The number of write errors that have been detected on the vdev.
+

+

vdev_cksum_errors

+
The number of checksum errors that have been detected on the vdev.
+

+

parent_guid

+
GUID of the vdev parent.
+

+

parent_type

+
Type of parent. See vdev_type.
+

+

parent_path

+
Path of the vdev parent (if any).
+

+

parent_devid

+
ID of the vdev parent (if any).
+

+

zio_objset

+
The object set number for a given I/O.
+

+

zio_object

+
The object number for a given I/O.
+

+

zio_level

+
The indirect level for the block. Level 0 is the lowest + level and includes data blocks. Values > 0 indicate metadata blocks at the + appropriate level.
+

+

zio_blkid

+
The block ID for a given I/O.
+

+

zio_err

+
The errno for a failure when handling a given I/O. The errno is compatible with errno(3), with the value EBADE (0x34) used to indicate a ZFS checksum error.
+

+

zio_offset

+
The offset in bytes of where to write the I/O for the + specified vdev.
+

+

zio_size

+
The size in bytes of the I/O.
+

+

zio_flags

+
The current flags describing how the I/O should be + handled. See the I/O FLAGS section for the full list of I/O + flags.
+

+

zio_stage

+
The current stage of the I/O in the pipeline. See the + I/O STAGES section for a full list of all the I/O stages.
+

+

zio_pipeline

+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+

+

zio_delay

+
The time elapsed (in nanoseconds) waiting for the block + layer to complete the I/O. Unlike zio_delta this does not include any + vdev queuing time and is therefore solely a measure of the block layer + performance.
+

+

zio_timestamp

+
The time when a given I/O was submitted.
+

+

zio_delta

+
The time required to service a given I/O.
+

+

prev_state

+
The previous state of the vdev.
+

+

cksum_expected

+
The expected checksum value for the block.
+

+

cksum_actual

+
The actual checksum value for an errant block.
+

+

cksum_algorithm

+
Checksum algorithm used. See zfs(8) for more + information on checksum algorithms available.
+

+

cksum_byteswap

+
Whether or not the data is byteswapped.
+

+

bad_ranges

+
[start, end) pairs of corruption offsets. Offsets are + always aligned on a 64-bit boundary, and can include some gaps of + non-corruption. (See bad_ranges_min_gap)
+

+

bad_ranges_min_gap

+
In order to bound the size of the bad_ranges + array, gaps of non-corruption less than or equal to bad_ranges_min_gap + bytes have been merged with adjacent corruption. Always at least 8 bytes, + since corruption is detected on a 64-bit word basis.
+

+

bad_range_sets

+
This array has one element per range in + bad_ranges. Each element contains the count of bits in that range which + were clear in the good data and set in the bad data.
+

+

bad_range_clears

+
This array has one element per range in + bad_ranges. Each element contains the count of bits for that range + which were set in the good data and clear in the bad data.
+

+

bad_set_bits

+
If this field exists, it is an array of: (bad data & ~(good data)); that is, the bits set in the bad data which are cleared in the good data. Each element corresponds to a byte whose offset is in a range in bad_ranges, and the array is ordered by offset. Thus, the first element is the first byte in the first bad_ranges range, and the last element is the last byte in the last bad_ranges range.
+

+

bad_cleared_bits

+
Like bad_set_bits, but contains: (good data & + ~(bad data)); that is, the bits set in the good data which are cleared in the + bad data.
+

+

bad_set_histogram

+
If this field exists, it is an array of counters. Each entry counts bits set in a particular bit of a big-endian uint64 type. The first entry counts bits set in the high-order bit of the first byte, the 9th byte, etc., and the last entry counts bits set in the low-order bit of the 8th byte, the 16th byte, etc. This information is useful for observing a stuck bit in a parallel data path, such as IDE or parallel SCSI.
+

+

bad_cleared_histogram

+
If this field exists, it is an array of counters. Each entry counts bit clears in a particular bit of a big-endian uint64 type. The first entry counts clears of the high-order bit of the first byte, the 9th byte, etc., and the last entry counts clears of the low-order bit of the 8th byte, the 16th byte, etc. This information is useful for observing a stuck bit in a parallel data path, such as IDE or parallel SCSI.
+

+
+
+

+

The ZFS I/O pipeline is composed of various stages which are defined below. The individual stages are used to construct these basic I/O operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on an event to describe the life cycle of a given I/O.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Stage                          Bit Mask      Operations
ZIO_STAGE_OPEN                 0x00000001    RWFCI
ZIO_STAGE_READ_BP_INIT         0x00000002    R----
ZIO_STAGE_WRITE_BP_INIT        0x00000004    -W---
ZIO_STAGE_FREE_BP_INIT         0x00000008    --F--
ZIO_STAGE_ISSUE_ASYNC          0x00000010    RWF--
ZIO_STAGE_WRITE_COMPRESS       0x00000020    -W---
ZIO_STAGE_ENCRYPT              0x00000040    -W---
ZIO_STAGE_CHECKSUM_GENERATE    0x00000080    -W---
ZIO_STAGE_NOP_WRITE            0x00000100    -W---
ZIO_STAGE_DDT_READ_START       0x00000200    R----
ZIO_STAGE_DDT_READ_DONE        0x00000400    R----
ZIO_STAGE_DDT_WRITE            0x00000800    -W---
ZIO_STAGE_DDT_FREE             0x00001000    --F--
ZIO_STAGE_GANG_ASSEMBLE        0x00002000    RWFC-
ZIO_STAGE_GANG_ISSUE           0x00004000    RWFC-
ZIO_STAGE_DVA_THROTTLE         0x00008000    -W---
ZIO_STAGE_DVA_ALLOCATE         0x00010000    -W---
ZIO_STAGE_DVA_FREE             0x00020000    --F--
ZIO_STAGE_DVA_CLAIM            0x00040000    ---C-
ZIO_STAGE_READY                0x00080000    RWFCI
ZIO_STAGE_VDEV_IO_START        0x00100000    RW--I
ZIO_STAGE_VDEV_IO_DONE         0x00200000    RW--I
ZIO_STAGE_VDEV_IO_ASSESS       0x00400000    RW--I
ZIO_STAGE_CHECKSUM_VERIFY      0x00800000    R----
ZIO_STAGE_DONE                 0x01000000    RWFCI
+

+
+
+

+

Every I/O in the pipeline contains a set of flags which describe its function and are used to govern its behavior. These flags will be set in an event as a zio_flags payload entry; a short decoding sketch follows the table below.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Flag                         Bit Mask
ZIO_FLAG_DONT_AGGREGATE      0x00000001
ZIO_FLAG_IO_REPAIR           0x00000002
ZIO_FLAG_SELF_HEAL           0x00000004
ZIO_FLAG_RESILVER            0x00000008
ZIO_FLAG_SCRUB               0x00000010
ZIO_FLAG_SCAN_THREAD         0x00000020
ZIO_FLAG_PHYSICAL            0x00000040
ZIO_FLAG_CANFAIL             0x00000080
ZIO_FLAG_SPECULATIVE         0x00000100
ZIO_FLAG_CONFIG_WRITER       0x00000200
ZIO_FLAG_DONT_RETRY          0x00000400
ZIO_FLAG_DONT_CACHE          0x00000800
ZIO_FLAG_NODATA              0x00001000
ZIO_FLAG_INDUCE_DAMAGE       0x00002000
ZIO_FLAG_IO_ALLOCATING       0x00004000
ZIO_FLAG_IO_RETRY            0x00008000
ZIO_FLAG_PROBE               0x00010000
ZIO_FLAG_TRYHARD             0x00020000
ZIO_FLAG_OPTIONAL            0x00040000
ZIO_FLAG_DONT_QUEUE          0x00080000
ZIO_FLAG_DONT_PROPAGATE      0x00100000
ZIO_FLAG_IO_BYPASS           0x00200000
ZIO_FLAG_IO_REWRITE          0x00400000
ZIO_FLAG_RAW_COMPRESS        0x00800000
ZIO_FLAG_RAW_ENCRYPT         0x01000000
ZIO_FLAG_GANG_CHILD          0x02000000
ZIO_FLAG_DDT_CHILD           0x04000000
ZIO_FLAG_GODFATHER           0x08000000
ZIO_FLAG_NOPWRITE            0x10000000
ZIO_FLAG_REEXECUTED          0x20000000
ZIO_FLAG_DELEGATED           0x40000000
ZIO_FLAG_FASTWRITE           0x80000000
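As a sketch of decoding such a mask from zpool events -v output (the mask value below is illustrative):
# Test whether ZIO_FLAG_SCRUB (0x00000010) is set in a zio_flags value.
flags=$(( 0x180090 ))
if [ $(( flags & 0x10 )) -ne 0 ]; then
    echo "this I/O was issued by a scrub"
fi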
+
+
+
+ + + + + +
August 24, 2020    OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/zfs-module-parameters.5.html b/man/v2.0/5/zfs-module-parameters.5.html new file mode 100644 index 000000000..d36a0a08e --- /dev/null +++ b/man/v2.0/5/zfs-module-parameters.5.html @@ -0,0 +1,2797 @@ + + + + + + + zfs-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-module-parameters.5

+
+ + + + + +
ZFS-MODULE-PARAMETERS(5)    File Formats Manual    ZFS-MODULE-PARAMETERS(5)
+
+
+

+

zfs-module-parameters - ZFS module parameters

+
+
+

+

Description of the different parameters to the ZFS module.

+

+
+

+

+

dbuf_cache_max_bytes (ulong)

+
Maximum size in bytes of the dbuf cache. The target size is determined as the minimum of this value and 1/2^dbuf_cache_shift (1/32) of the target ARC size. The behavior of the dbuf cache and its associated settings can be observed via the /proc/spl/kstat/zfs/dbufstats kstat.

Default value: ULONG_MAX.
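As a sketch of observing the dbuf cache via the kstat named above (the specific field names are an assumption and may differ between releases):
# Show current and target dbuf cache sizes.
grep -E 'cache_size_bytes|cache_target_bytes' /proc/spl/kstat/zfs/dbufstats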

+
+

+

dbuf_metadata_cache_max_bytes (ulong)

+
Maximum size in bytes of the metadata dbuf cache. The target size is determined as the minimum of this value and 1/2^dbuf_metadata_cache_shift (1/64) of the target ARC size. The behavior of the metadata dbuf cache and its associated settings can be observed via the /proc/spl/kstat/zfs/dbufstats kstat.

Default value: ULONG_MAX.

+
+

+

dbuf_cache_hiwater_pct (uint)

+
The percentage over dbuf_cache_max_bytes when + dbufs must be evicted directly. +

Default value: 10%.

+
+

+

dbuf_cache_lowater_pct (uint)

+
The percentage below dbuf_cache_max_bytes when the + evict thread stops evicting dbufs. +

Default value: 10%.

+
+

+

dbuf_cache_shift (int)

+
Set the size of the dbuf cache, + dbuf_cache_max_bytes, to a log2 fraction of the target ARC size. +

Default value: 5.

+
+

+

dbuf_metadata_cache_shift (int)

+
Set the size of the dbuf metadata cache, + dbuf_metadata_cache_max_bytes, to a log2 fraction of the target ARC + size. +

Default value: 6.

+
+

+

dmu_object_alloc_chunk_shift (int)

+
dnode slots allocated in a single operation as a power of + 2. The default value minimizes lock contention for the bulk operation + performed. +

Default value: 7 (128).

+
+

+

dmu_prefetch_max (int)

+
Limit the amount of data (in bytes) that can be prefetched with one call. This helps to limit the amount of memory that can be used by prefetching.

Default value: 134,217,728 (128MB).

+
+

+

ignore_hole_birth (int)

+
This is an alias for + send_holes_without_birth_time.
+

+

l2arc_feed_again (int)

+
Turbo L2ARC warm-up. When the L2ARC is cold the fill + interval will be set as fast as possible. +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_feed_min_ms (ulong)

+
Minimum feed interval in milliseconds. Requires l2arc_feed_again=1 and only applies in that case.

Default value: 200.

+
+

+

l2arc_feed_secs (ulong)

+
Seconds between L2ARC writes.

Default value: 1.

+
+

+

l2arc_headroom (ulong)

+
How far through the ARC lists to search for L2ARC + cacheable content, expressed as a multiplier of l2arc_write_max. ARC + persistence across reboots can be achieved with persistent L2ARC by setting + this parameter to 0 allowing the full length of ARC lists to be + searched for cacheable content. +

Default value: 2.

+
+

+

l2arc_headroom_boost (ulong)

+
Scales l2arc_headroom by this percentage when + L2ARC contents are being successfully compressed before writing. A value of + 100 disables this feature. +

Default value: 200%.

+
+

+

l2arc_mfuonly (int)

+
Controls whether only MFU metadata and data are cached + from ARC into L2ARC. This may be desired to avoid wasting space on L2ARC when + reading/writing large amounts of data that are not expected to be accessed + more than once. The default is 0, meaning both MRU and MFU data and + metadata are cached. When turning off (0) this feature some MRU buffers + will still be present in ARC and eventually cached on L2ARC. +

Use 0 for no (default) and 1 for yes.

+
+

+

l2arc_meta_percent (int)

+
Percent of ARC size allowed for L2ARC-only headers. Since L2ARC buffers are not evicted on memory pressure, too large an amount of headers on a system with an irrationally large L2ARC can render it slow or unusable. This parameter limits L2ARC writes and rebuilds to achieve this limit.

Default value: 33%.

+
+

+

l2arc_trim_ahead (ulong)

+
Trims ahead of the current write size (l2arc_write_max) on L2ARC devices by this percentage of write size if we have filled the device. If set to 100 we TRIM twice the space required to accommodate upcoming writes. A minimum of 64MB will be trimmed. It also enables TRIM of the whole L2ARC device upon creation or addition to an existing pool or if the header of the device is invalid upon importing a pool or onlining a cache device. A value of 0 disables TRIM on L2ARC altogether and is the default as it can put significant stress on the underlying storage devices. This will vary depending on how well the specific device handles these commands.

Default value: 0%.

+
+

+

l2arc_noprefetch (int)

+
Do not write buffers to L2ARC if they were prefetched but + not used by applications. +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_norw (int)

+
No reads during writes. +

Use 1 for yes and 0 for no (default).

+
+

+

l2arc_write_boost (ulong)

+
Cold L2ARC devices will have l2arc_write_max + increased by this amount while they remain cold. +

Default value: 8,388,608.

+
+

+

l2arc_write_max (ulong)

+
Max write bytes per interval. +

Default value: 8,388,608.
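As a sketch of making larger L2ARC feed limits persistent across module reloads (the file /etc/modprobe.d/zfs.conf is conventional, not mandated; the values are illustrative):
echo "options zfs l2arc_write_max=16777216 l2arc_write_boost=33554432" \
    >> /etc/modprobe.d/zfs.conf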

+
+

+

l2arc_rebuild_enabled (int)

+
Rebuild the L2ARC when importing a pool (persistent + L2ARC). This can be disabled if there are problems importing a pool or + attaching an L2ARC device (e.g. the L2ARC device is slow in reading stored log + metadata, or the metadata has become somehow fragmented/unusable). +

Use 1 for yes (default) and 0 for no.

+
+

+

l2arc_rebuild_blocks_min_l2size (ulong)

+
Min size (in bytes) of an L2ARC device required in order + to write log blocks in it. The log blocks are used upon importing the pool to + rebuild the L2ARC (persistent L2ARC). Rationale: for L2ARC devices less than + 1GB, the amount of data l2arc_evict() evicts is significant compared to the + amount of restored L2ARC data. In this case do not write log blocks in L2ARC + in order not to waste space. +

Default value: 1,073,741,824 (1GB).

+
+

+

metaslab_aliquot (ulong)

+
Metaslab granularity, in bytes. This is roughly similar + to what would be referred to as the "stripe size" in traditional + RAID arrays. In normal operation, ZFS will try to write this amount of data to + a top-level vdev before moving on to the next one. +

Default value: 524,288.

+
+

+

metaslab_bias_enabled (int)

+
Enable metaslab group biasing based on its vdev's over- + or under-utilization relative to the pool. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_force_ganging (ulong)

+
Make some blocks above a certain size be gang blocks. + This option is used by the test suite to facilitate testing. +

Default value: 16,777,217.

+
+

+

zfs_history_output_max (int)

+
When attempting to log the output nvlist of an ioctl in the on-disk history, the output will not be stored if it is larger than this size (in bytes). This must be less than DMU_MAX_ACCESS (64MB). This applies primarily to zfs_ioc_channel_program().

Default value: 1MB.

+
+

+

zfs_keep_log_spacemaps_at_export (int)

+
Prevent log spacemaps from being destroyed during pool + exports and destroys. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_metaslab_segment_weight_enabled (int)

+
Enable/disable segment-based metaslab selection. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_metaslab_switch_threshold (int)

+
When using segment-based metaslab selection, continue + allocating from the active metaslab until zfs_metaslab_switch_threshold + worth of buckets have been exhausted. +

Default value: 2.

+
+

+

metaslab_debug_load (int)

+
Load all metaslabs during pool import. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_debug_unload (int)

+
Prevent metaslabs from being unloaded. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_fragmentation_factor_enabled (int)

+
Enable use of the fragmentation metric in computing + metaslab weights. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_df_max_search (int)

+
Maximum distance to search forward from the last offset. + Without this limit, fragmented pools can see >100,000 iterations and + metaslab_block_picker() becomes the performance limiting factor on + high-performance storage. +

With the default setting of 16MB, we typically see less than 500 + iterations, even with very fragmented, ashift=9 pools. The maximum number of + iterations possible is: metaslab_df_max_search / (2 * + (1<<ashift)). With the default setting of 16MB this is 16*1024 + (with ashift=9) or 2048 (with ashift=12).

+

Default value: 16,777,216 (16MB)
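The iteration bound quoted above can be reproduced directly from the formula; a quick arithmetic check:
echo $(( 16777216 / (2 * (1 << 9)) ))    # ashift=9:  prints 16384 (16*1024)
echo $(( 16777216 / (2 * (1 << 12)) ))   # ashift=12: prints 2048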

+
+

+

metaslab_df_use_largest_segment (int)

+
If we are not searching forward (due to + metaslab_df_max_search, metaslab_df_free_pct, or metaslab_df_alloc_threshold), + this tunable controls what segment is used. If it is set, we will use the + largest free segment. If it is not set, we will use a segment of exactly the + requested size (or larger). +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_metaslab_max_size_cache_sec (ulong)

+
When we unload a metaslab, we cache the size of the + largest free chunk. We use that cached size to determine whether or not to + load a metaslab for a given allocation. As more frees accumulate in that + metaslab while it's unloaded, the cached max size becomes less and less + accurate. After a number of seconds controlled by this tunable, we stop + considering the cached max size and start considering only the histogram + instead. +

Default value: 3600 seconds (one hour)

+
+

+

zfs_metaslab_mem_limit (int)

+
When we are loading a new metaslab, we check the amount + of memory being used to store metaslab range trees. If it is over a threshold, + we attempt to unload the least recently used metaslab to prevent the system + from clogging all of its memory with range trees. This tunable sets the + percentage of total system memory that is the threshold. +

Default value: 25 percent

+
+

+

zfs_vdev_default_ms_count (int)

+
When a vdev is added, target this number of metaslabs per top-level vdev.

Default value: 200.

+
+

+

zfs_vdev_default_ms_shift (int)

+
Default limit for metaslab size. +

Default value: 29 [meaning (1 << 29) = 512MB].

+
+

+

zfs_vdev_max_auto_ashift (ulong)

+
Maximum ashift used when optimizing for logical -> + physical sector size on new top-level vdevs. +

Default value: ASHIFT_MAX (16).

+
+

+

zfs_vdev_min_auto_ashift (ulong)

+
Minimum ashift used when creating new top-level vdevs. +

Default value: ASHIFT_MIN (9).

+
+

+

zfs_vdev_min_ms_count (int)

+
Minimum number of metaslabs to create in a top-level + vdev. +

Default value: 16.

+
+

+

vdev_validate_skip (int)

+
Skip label validation steps during pool import. Changing this is not recommended unless you know what you are doing and are recovering a damaged label.

Default value: 0.

+
+

+

zfs_vdev_ms_count_limit (int)

+
Practical upper limit of total metaslabs per top-level + vdev. +

Default value: 131,072.

+
+

+

metaslab_preload_enabled (int)

+
Enable metaslab group preloading. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_lba_weighting_enabled (int)

+
Give more weight to metaslabs with lower LBAs, assuming + they have greater bandwidth as is typically the case on a modern constant + angular velocity disk drive. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_unload_delay (int)

+
After a metaslab is used, we keep it loaded for this many + txgs, to attempt to reduce unnecessary reloading. Note that both this many + txgs and metaslab_unload_delay_ms milliseconds must pass before + unloading will occur. +

Default value: 32.

+
+

+

metaslab_unload_delay_ms (int)

+
After a metaslab is used, we keep it loaded for this many + milliseconds, to attempt to reduce unnecessary reloading. Note that both this + many milliseconds and metaslab_unload_delay txgs must pass before + unloading will occur. +

Default value: 600000 (ten minutes).

+
+

+

send_holes_without_birth_time (int)

+
When set, the hole_birth optimization will not be used, + and all holes will always be sent on zfs send. This is useful if you suspect + your datasets are affected by a bug in hole_birth. +

Use 1 for on (default) and 0 for off.

+
+

+

spa_config_path (charp)

+
SPA config file +

Default value: /etc/zfs/zpool.cache.

+
+

+

spa_asize_inflation (int)

+
Multiplication factor used to estimate actual disk + consumption from the size of data being written. The default value is a worst + case estimate, but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits. +

Default value: 24.

+
+

+

spa_load_print_vdev_tree (int)

+
Whether to print the vdev tree in the debugging message + buffer during pool import. Use 0 to disable and 1 to enable. +

Default value: 0.

+
+

+

spa_load_verify_data (int)

+
Whether to traverse data blocks during an "extreme + rewind" (-X) import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal skips non-metadata blocks. It can be toggled once the import has + started to stop or start the traversal of non-metadata blocks.

+

Default value: 1.
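As a sketch of using this tunable before an extreme-rewind import (the pool name tank is illustrative):
# Skip traversal of non-metadata blocks, then attempt the -X import.
echo 0 > /sys/module/zfs/parameters/spa_load_verify_data
zpool import -X tank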

+
+

+

spa_load_verify_metadata (int)

+
Whether to traverse blocks during an "extreme + rewind" (-X) pool import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal is not performed. It can be toggled once the import has started to + stop or start the traversal.

+

Default value: 1.

+
+

+

spa_load_verify_shift (int)

+
Sets the maximum number of bytes to consume during pool + import to the log2 fraction of the target ARC size. +

Default value: 4.

+
+

+

spa_slop_shift (int)

+
Normally, we don't allow the last 3.2% + (1/(2^spa_slop_shift)) of space in the pool to be consumed. This ensures that + we don't run the pool completely out of space, due to unaccounted changes + (e.g. to the MOS). It also limits the worst-case time to allocate space. If we + have less than this amount of free space, most ZPL operations (e.g. write, + create) will return ENOSPC. +

Default value: 5.

+
+

+

vdev_removal_max_span (int)

+
During top-level vdev removal, chunks of data are copied + from the vdev which may include free space in order to trade bandwidth for + IOPS. This parameter determines the maximum span of free space (in bytes) + which will be included as "unnecessary" data in a chunk of copied + data. +

The default value here was chosen to align with + zfs_vdev_read_gap_limit, which is a similar concept when doing + regular reads (but there's no reason it has to be the same).

+

Default value: 32,768.

+
+

+

vdev_file_logical_ashift (ulong)

+
Logical ashift for file-based devices. +

Default value: 9.

+
+

+

vdev_file_physical_ashift (ulong)

+
Physical ashift for file-based devices. +

Default value: 9.

+
+

+

zap_iterate_prefetch (int)

+
If this is set, when we start iterating over a ZAP + object, zfs will prefetch the entire object (all leaf blocks). However, this + is limited by dmu_prefetch_max. +

Use 1 for on (default) and 0 for off.

+
+

+

zfetch_array_rd_sz (ulong)

+
If prefetching is enabled, disable prefetching for reads + larger than this size. +

Default value: 1,048,576.

+
+

+

zfetch_max_distance (uint)

+
Max bytes to prefetch per stream. +

Default value: 8,388,608 (8MB).

+
+

+

zfetch_max_idistance (uint)

+
Max bytes to prefetch indirects for per stream. +

Default value: 67,108,864 (64MB).

+
+

+

zfetch_max_streams (uint)

+
Max number of streams per zfetch (prefetch streams per + file). +

Default value: 8.

+
+

+

zfetch_min_sec_reap (uint)

+
Minimum time before an active prefetch stream can be reclaimed.

Default value: 2.

+
+

+

zfs_abd_scatter_enabled (int)

+
Controls whether scatter/gather lists are used for ARC data buffers. Disabling this forces all allocations to be linear in kernel memory, which can improve performance in some code paths at the expense of fragmented kernel memory.

Default value: 1.

+
+

+

zfs_abd_scatter_max_order (uint)

+
Maximum number of consecutive memory pages allocated in a + single block for scatter/gather lists. Default value is specified by the + kernel itself. +

Default value: 10 at the time of this writing.

+
+

+

zfs_abd_scatter_min_size (uint)

+
This is the minimum allocation size that will use scatter + (page-based) ABD's. Smaller allocations will use linear ABD's. +

Default value: 1536 (512B and 1KB allocations will be + linear).

+
+

+

zfs_arc_dnode_limit (ulong)

+
When the number of bytes consumed by dnodes in the ARC exceeds this number of bytes, try to unpin some of it in response to demand for non-metadata. This value acts as a ceiling on the amount of dnode metadata, and defaults to 0, which indicates that a percentage of the ARC meta buffers, based on zfs_arc_dnode_limit_percent, may be used for dnodes.

See also zfs_arc_meta_prune which serves a similar purpose + but is used when the amount of metadata in the ARC exceeds + zfs_arc_meta_limit rather than in response to overall demand for + non-metadata.

+

+

Default value: 0.

+
+

+

zfs_arc_dnode_limit_percent (ulong)

+
Percentage that can be consumed by dnodes of ARC meta + buffers. +

See also zfs_arc_dnode_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

Default value: 10%.

+
+

+

zfs_arc_dnode_reduce_percent (ulong)

+
Percentage of ARC dnodes to try to scan in response to + demand for non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit. +

+

Default value: 10% of the number of dnodes in the ARC.

+
+

+

zfs_arc_average_blocksize (int)

+
The ARC's buffer hash table is sized based on the + assumption of an average block size of zfs_arc_average_blocksize + (default 8K). This works out to roughly 1MB of hash table per 1GB of physical + memory with 8-byte pointers. For configurations with a known larger average + block size this value can be increased to reduce the memory footprint. +

+

Default value: 8192.

+
+

+

zfs_arc_eviction_pct (int)

+
When arc_is_overflowing(), + arc_get_data_impl() waits for this percent of the requested amount of + data to be evicted. For example, by default for every 2KB that's evicted, 1KB + of it may be "reused" by a new allocation. Since this is above 100%, + it ensures that progress is made towards getting arc_size under + arc_c. Since this is finite, it ensures that allocations can still + happen, even during the potentially long time that arc_size is more + than arc_c. +

Default value: 200.

+
+

+

zfs_arc_evict_batch_limit (int)

+
Number of ARC headers to evict per sub-list before proceeding to another sub-list. This batch-style operation prevents entire sub-lists from being evicted at once but comes at a cost of additional unlocking and locking.

Default value: 10.

+
+

+

zfs_arc_grow_retry (int)

+
If set to a non zero value, it will replace the + arc_grow_retry value with this value. The arc_grow_retry value (default 5) is + the number of seconds the ARC will wait before trying to resume growth after a + memory pressure event. +

Default value: 0.

+
+

+

zfs_arc_lotsfree_percent (int)

+
Throttle I/O when free system memory drops below this + percentage of total system memory. Setting this value to 0 will disable the + throttle. +

Default value: 10%.

+
+

+

zfs_arc_max (ulong)

+
Max size of ARC in bytes. If set to 0 then the max size of ARC is determined by the amount of system memory installed. For Linux, 1/2 of system memory will be used as the limit. For FreeBSD, the larger of all system memory minus 1GB or 5/8 of system memory will be used as the limit. This value must be at least 67108864 (64 megabytes).

This value can be changed dynamically with some caveats. It cannot + be set back to 0 while running and reducing it below the current ARC size + will not cause the ARC to shrink without memory pressure to induce + shrinking.

+

Default value: 0.
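As a sketch of capping the ARC at 8 GiB, both at runtime and persistently (paths follow the standard Linux module parameter layout; note the caveats above about shrinking a running ARC):
echo $(( 8 * 1024 * 1024 * 1024 )) > /sys/module/zfs/parameters/zfs_arc_max
echo "options zfs zfs_arc_max=$(( 8 * 1024 * 1024 * 1024 ))" >> /etc/modprobe.d/zfs.conf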

+
+

+

zfs_arc_meta_adjust_restarts (ulong)

+
The number of restart passes to make while scanning the ARC attempting to free buffers in order to stay below the zfs_arc_meta_limit. This value should not need to be tuned but is available to facilitate performance analysis.

Default value: 4096.

+
+

+

zfs_arc_meta_limit (ulong)

+
The maximum allowed size in bytes that meta data buffers + are allowed to consume in the ARC. When this limit is reached meta data + buffers will be reclaimed even if the overall arc_c_max has not been reached. + This value defaults to 0 which indicates that a percent which is based on + zfs_arc_meta_limit_percent of the ARC may be used for meta data. +

This value may be changed dynamically, except that it cannot be set back to 0 for a specific percent of the ARC; it must be set to an explicit value.

+

Default value: 0.

+
+

+

zfs_arc_meta_limit_percent (ulong)

+
Percentage of ARC buffers that can be used for meta data. +

See also zfs_arc_meta_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

+

Default value: 75%.

+
+

+

zfs_arc_meta_min (ulong)

+
The minimum allowed size in bytes that meta data buffers may consume in the ARC. This value defaults to 0, which disables a floor on the amount of the ARC devoted to meta data.

Default value: 0.

+
+

+

zfs_arc_meta_prune (int)

+
The number of dentries and inodes to be scanned looking for entries which can be dropped. This may be required when the ARC reaches the zfs_arc_meta_limit because dentries and inodes can pin buffers in the ARC. Increasing this value will cause the dentry and inode caches to be pruned more aggressively. Setting this value to 0 will disable pruning the inode and dentry caches.

Default value: 10,000.

+
+

+

zfs_arc_meta_strategy (int)

+
Define the strategy for ARC meta data buffer eviction (meta reclaim strategy). A value of 0 (META_ONLY) will evict only the ARC meta data buffers. A value of 1 (BALANCED) indicates that additional data buffers may be evicted if required in order to evict the required number of meta data buffers.

Default value: 1.

+
+

+

zfs_arc_min (ulong)

+
Min size of ARC in bytes. If set to 0 then arc_c_min will + default to consuming the larger of 32M or 1/32 of total system memory. +

Default value: 0.

+
+

+

zfs_arc_min_prefetch_ms (int)

+
Minimum time prefetched blocks are locked in the ARC, + specified in ms. A value of 0 will default to 1000 ms. +

Default value: 0.

+
+

+

zfs_arc_min_prescient_prefetch_ms (int)

+
Minimum time "prescient prefetched" blocks are + locked in the ARC, specified in ms. These blocks are meant to be prefetched + fairly aggressively ahead of the code that may use them. A value of 0 + will default to 6000 ms. +

Default value: 0.

+
+

+

zfs_max_missing_tvds (int)

+
Number of missing top-level vdevs which will be allowed + during pool import (only in read-only mode). +

Default value: 0

+
+

+

zfs_max_nvlist_src_size (ulong)

+
Maximum size in bytes allowed to be passed as + zc_nvlist_src_size for ioctls on /dev/zfs. This prevents a user from causing + the kernel to allocate an excessive amount of memory. When the limit is + exceeded, the ioctl fails with EINVAL and a description of the error is sent + to the zfs-dbgmsg log. This parameter should not need to be touched under + normal circumstances. On FreeBSD, the default is based on the system limit on + user wired memory. On Linux, the default is 128MB. +

Default value: 0 (kernel decides)

+
+

+

zfs_multilist_num_sublists (int)

+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and meta data objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state, and also applies to other uses of the multilist data structure.

Default value: 4 or the number of online CPUs, whichever is + greater

+
+

+

zfs_arc_overflow_shift (int)

+
The ARC size is considered to be overflowing if it + exceeds the current ARC target size (arc_c) by a threshold determined by this + parameter. The threshold is calculated as a fraction of arc_c using the + formula "arc_c >> zfs_arc_overflow_shift". +

The default value of 8 causes the ARC to be considered to be overflowing if it exceeds the target size by 1/256th (approximately 0.4%) of the target size.

+

When the ARC is overflowing, new buffer allocations are stalled + until the reclaim thread catches up and the overflow condition no longer + exists.

+

Default value: 8.
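A quick arithmetic check of the threshold formula, assuming an arc_c of 4 GiB and the default shift of 8:
echo $(( (4 * 1024 * 1024 * 1024) >> 8 ))   # prints 16777216 (16 MiB over target)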

+
+

+

+

zfs_arc_p_min_shift (int)

+
If set to a non-zero value, this will update arc_p_min_shift (default 4) with the new value. arc_p_min_shift is used as a shift of arc_c when calculating both the minimum and maximum arc_p.

Default value: 0.

+
+

+

zfs_arc_p_dampener_disable (int)

+
Disable arc_p adapt dampener +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_arc_shrink_shift (int)

+
If set to a non zero value, this will update + arc_shrink_shift (default 7) with the new value. +

Default value: 0.

+
+

+

zfs_arc_pc_percent (uint)

+
Percent of pagecache to reclaim arc to +

This tunable allows ZFS arc to play more nicely with the kernel's + LRU pagecache. It can guarantee that the ARC size won't collapse under + scanning pressure on the pagecache, yet still allows arc to be reclaimed + down to zfs_arc_min if necessary. This value is specified as percent of + pagecache size (as measured by NR_FILE_PAGES) where that percent may exceed + 100. This only operates during memory pressure/reclaim.

+

Default value: 0% (disabled).

+
+

+

zfs_arc_shrinker_limit (int)

+
This is a limit on how many pages the ARC shrinker makes + available for eviction in response to one page allocation attempt. Note that + in practice, the kernel's shrinker can ask us to evict up to about 4x this for + one allocation attempt. +

The default limit of 10,000 (in practice, 160MB per allocation + attempt with 4K pages) limits the amount of time spent attempting to reclaim + ARC memory to less than 100ms per allocation attempt, even with a small + average compressed block size of ~8KB.

+

The parameter can be set to 0 (zero) to disable the limit.

+

This parameter only applies on Linux.

+

Default value: 10,000.

+
+

+

zfs_arc_sys_free (ulong)

+
The target number of bytes the ARC should leave as free + memory on the system. Defaults to the larger of 1/64 of physical memory or + 512K. Setting this option to a non-zero value will override the default. +

Default value: 0.

+
+

+

zfs_autoimport_disable (int)

+
Disable pool import at module load by ignoring the cache + file (typically /etc/zfs/zpool.cache). +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_checksum_events_per_second (uint)

+
Rate limit checksum events to this many per second. Note + that this should not be set below the zed thresholds (currently 10 checksums + over 10 sec) or else zed may not trigger any action. +

Default value: 20

+
+

+

zfs_commit_timeout_pct (int)

+
This controls the amount of time that a ZIL block (lwb) + will remain "open" when it isn't "full", and it has a + thread waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly impacting + the latency of each individual transaction record (itx). +

Default value: 5%.

+
+

+

zfs_condense_indirect_commit_entry_delay_ms (int)

+
Vdev indirection layer (used for device removal) sleeps + for this many milliseconds during mapping generation. Intended for use with + the test suite to throttle vdev removal speed. +

Default value: 0 (no throttle).

+
+

+

zfs_condense_indirect_obsolete_pct (int)

+
Minimum percent of obsolete bytes in vdev mapping + required to attempt to condense (see + zfs_condense_indirect_vdevs_enable). Intended for use with the test + suite to facilitate triggering condensing as needed. +

Default value: 25%.

+
+

+

zfs_condense_indirect_vdevs_enable (int)

+
Enable condensing indirect vdev mappings. When set to a + non-zero value, attempt to condense indirect vdev mappings if the mapping uses + more than zfs_condense_min_mapping_bytes bytes of memory and if the + obsolete space map object uses more than + zfs_condense_max_obsolete_bytes bytes on-disk. The condensing process + is an attempt to save memory by removing obsolete mappings. +

Default value: 1.

+
+

+

zfs_condense_max_obsolete_bytes (ulong)

+
Only attempt to condense indirect vdev mappings if the on-disk size of the obsolete space map object is greater than this number of bytes (see zfs_condense_indirect_vdevs_enable).

Default value: 1,073,741,824.

+
+

+

zfs_condense_min_mapping_bytes (ulong)

+
Minimum size vdev mapping to attempt to condense (see + zfs_condense_indirect_vdevs_enable). +

Default value: 131,072.

+
+

+

zfs_dbgmsg_enable (int)

+
Internally ZFS keeps a small log to facilitate debugging. + By default the log is disabled, to enable it set this option to 1. The + contents of the log can be accessed by reading the /proc/spl/kstat/zfs/dbgmsg + file. Writing 0 to this proc file clears the log. +

Default value: 0.
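As a sketch of enabling, reading, and clearing the debug log using the paths named above:
echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
cat /proc/spl/kstat/zfs/dbgmsg
echo 0 > /proc/spl/kstat/zfs/dbgmsg    # clears the log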

+
+

+

zfs_dbgmsg_maxsize (int)

+
The maximum size in bytes of the internal ZFS debug log. +

Default value: 4M.

+
+

+

zfs_dbuf_state_index (int)

+
This feature is currently unused. It is normally used for + controlling what reporting is available under /proc/spl/kstat/zfs. +

Default value: 0.

+
+

+

zfs_deadman_enabled (int)

+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms milliseconds, or when an individual I/O takes + longer than zfs_deadman_ziotime_ms milliseconds, then the operation is + considered to be "hung". If zfs_deadman_enabled is set then + the deadman behavior is invoked as described by the + zfs_deadman_failmode module option. By default the deadman is enabled + and configured to wait which results in "hung" I/Os only + being logged. The deadman is automatically disabled when a pool gets + suspended. +

Default value: 1.

+
+

+

zfs_deadman_failmode (charp)

+
Controls the failure behavior when the deadman detects a + "hung" I/O. Valid values are wait, continue, and + panic. +

wait - Wait for a "hung" I/O to complete. For + each "hung" I/O a "deadman" event will be posted + describing that I/O.

+

continue - Attempt to recover from a "hung" I/O + by re-dispatching it to the I/O pipeline if possible.

+

panic - Panic the system. This can be used to facilitate an + automatic fail-over to a properly configured fail-over partner.

+

Default value: wait.
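As a sketch of switching the deadman from only logging "hung" I/Os to attempting to re-dispatch them:
echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode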

+
+

+

zfs_deadman_checktime_ms (int)

+
Check time in milliseconds. This defines the frequency at + which we check for hung I/O and potentially invoke the + zfs_deadman_failmode behavior. +

Default value: 60,000.

+
+

+

zfs_deadman_synctime_ms (ulong)

+
Interval in milliseconds after which the deadman is + triggered and also the interval after which a pool sync operation is + considered to be "hung". Once this limit is exceeded the deadman + will be invoked every zfs_deadman_checktime_ms milliseconds until the + pool sync completes. +

Default value: 600,000.

+
+

+

zfs_deadman_ziotime_ms (ulong)

+
Interval in milliseconds after which the deadman is + triggered and an individual I/O operation is considered to be + "hung". As long as the I/O remains "hung" the deadman will + be invoked every zfs_deadman_checktime_ms milliseconds until the I/O + completes. +

Default value: 300,000.

+
+

+

zfs_dedup_prefetch (int)

+
Enable prefetching dedup-ed blks +

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_delay_min_dirty_percent (int)

+
Start to delay each transaction once there is this amount + of dirty data, expressed as a percentage of zfs_dirty_data_max. This + value should be >= zfs_vdev_async_write_active_max_dirty_percent. See the + section "ZFS TRANSACTION DELAY". +

Default value: 60%.

+
+

+

zfs_delay_scale (int)

+
This controls how quickly the transaction delay + approaches infinity. Larger values cause longer delays for a given amount of + dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will smoothly + handle between 10x and 1/10th this number.

+

See the section "ZFS TRANSACTION DELAY".

+

Note: zfs_delay_scale * zfs_dirty_data_max must be + < 2^64.

+

Default value: 500,000.
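Applying the guideline above to a pool assumed to sustain roughly 2,000 operations per second reproduces the default:
echo $(( 1000000000 / 2000 ))   # prints 500000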

+
+

+

zfs_disable_ivset_guid_check (int)

+
Disables requirement for IVset guids to be present and + match when doing a raw receive of encrypted datasets. Intended for users whose + pools were created with OpenZFS pre-release versions and now have + compatibility issues. +

Default value: 0.

+
+

+

zfs_key_max_salt_uses (ulong)

+
Maximum number of uses of a single salt value before + generating a new one for encrypted datasets. The default value is also the + maximum that will be accepted. +

Default value: 400,000,000.

+
+

+

zfs_object_mutex_size (uint)

+
Size of the znode hashtable used for holds. +

Due to the need to hold locks on objects that may not exist yet, + kernel mutexes are not created per-object and instead a hashtable is used + where collisions will result in objects waiting when there is not actually + contention on the same object.

+

Default value: 64.

+
+

+

zfs_slow_io_events_per_second (int)

+
Rate limit delay and deadman zevents (which report slow + I/Os) to this many per second. +

Default value: 20

+
+

+

zfs_unflushed_max_mem_amt (ulong)

+
Upper-bound limit for unflushed metadata changes to be + held by the log spacemap in memory (in bytes). +

Default value: 1,073,741,824 (1GB).

+
+

+

zfs_unflushed_max_mem_ppm (ulong)

+
Percentage of the overall system memory that ZFS allows + to be used for unflushed metadata changes by the log spacemap. (value is + calculated over 1000000 for finer granularity). +

Default value: 1000 (which is divided by 1000000, resulting + in the limit to be 0.1% of memory)

+
+

+

zfs_unflushed_log_block_max (ulong)

+
Describes the maximum number of log spacemap blocks + allowed for each pool. The default value of 262144 means that the space in all + the log spacemaps can add up to no more than 262144 blocks (which means 32GB + of logical space before compression and ditto blocks, assuming that blocksize + is 128k). +

This tunable is important because it involves a trade-off between import time after an unclean export and the frequency of flushing metaslabs. The higher this number is, the more log blocks we allow when the pool is active, which means that we flush metaslabs less often and thus decrease the number of I/Os for spacemap updates per TXG. At the same time though, that means that in the event of an unclean export, there will be more log spacemap blocks for us to read, inducing overhead in the import time of the pool. The lower the number, the more often metaslabs are flushed and the faster log blocks become obsolete and are destroyed, which leaves fewer blocks to be read during import after a crash.

+

Each log spacemap block existing during pool import leads to + approximately one extra logical I/O issued. This is the reason why this + tunable is exposed in terms of blocks rather than space used.

+

Default value: 262144 (256K).

+
+

+

zfs_unflushed_log_block_min (ulong)

+
If the number of metaslabs is small and our incoming rate + is high, we could get into a situation that we are flushing all our metaslabs + every TXG. Thus we always allow at least this many log blocks. +

Default value: 1000.

+
+

+

zfs_unflushed_log_block_pct (ulong)

+
Tunable used to determine the number of blocks that can + be used for the spacemap log, expressed as a percentage of the total number of + metaslabs in the pool. +

Default value: 400 (read as 400% - meaning that the + number of log spacemap blocks are capped at 4 times the number of metaslabs + in the pool).

+
+

+

zfs_unlink_suspend_progress (uint)

+
When enabled, files will not be asynchronously removed + from the list of pending unlinks and the space they consume will be leaked. + Once this option has been disabled and the dataset is remounted, the pending + unlinks will be processed and the freed space returned to the pool. This + option is used by the test suite to facilitate testing. +

Uses 0 (default) to allow progress and 1 to pause + progress.

+
+

+

zfs_delete_blocks (ulong)

+
This is used to define a large file for the purposes of deletion. Files containing more than zfs_delete_blocks will be deleted asynchronously while smaller files are deleted synchronously. Decreasing this value will reduce the time spent in an unlink(2) system call at the expense of a longer delay before the freed space is available.

Default value: 20,480.

+
+

+

zfs_dirty_data_max (int)

+
Determines the dirty space limit in bytes. Once this + limit is exceeded, new writes are halted until space frees up. This parameter + takes precedence over zfs_dirty_data_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 10% of physical RAM, capped at + zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_max_max (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed in bytes. This limit is only enforced at module load time, and will + be ignored if zfs_dirty_data_max is later changed. This parameter takes + precedence over zfs_dirty_data_max_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 25% of physical RAM.

+
+

+

zfs_dirty_data_max_max_percent (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed as a percentage of physical RAM. This limit is only enforced at + module load time, and will be ignored if zfs_dirty_data_max is later + changed. The parameter zfs_dirty_data_max_max takes precedence over + this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 25%.

+
+

+

zfs_dirty_data_max_percent (int)

+
Determines the dirty space limit, expressed as a + percentage of all memory. Once this limit is exceeded, new writes are halted + until space frees up. The parameter zfs_dirty_data_max takes precedence + over this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 10%, subject to + zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_sync_percent (int)

+
Start syncing out a transaction group if there's at least + this much dirty data as a percentage of zfs_dirty_data_max. This should + be less than zfs_vdev_async_write_active_min_dirty_percent. +

Default value: 20% of zfs_dirty_data_max.

+
+

+

zfs_fallocate_reserve_percent (uint)

+
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be preallocated for a file in order to guarantee that later writes will not run out of space. Instead, fallocate() space preallocation only checks that sufficient space is currently available in the pool or the user's project quota allocation, and then creates a sparse file of the requested size. The requested space is multiplied by zfs_fallocate_reserve_percent to allow additional space for indirect blocks and other internal metadata. Setting this value to 0 disables support for fallocate(2) and causes fallocate() space preallocation to return EOPNOTSUPP.

Default value: 110%

+
+

+

zfs_fletcher_4_impl (string)

+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, scalar, sse2, ssse3, avx2, avx512f, avx512bw, and aarch64_neon. All of the selectors except fastest and scalar require instruction set extensions to be available and will only appear if ZFS detects that they are present at runtime. If multiple implementations of fletcher 4 are available, the fastest will be chosen using a micro benchmark. Selecting scalar results in the original CPU-based calculation being used. Selecting any option other than fastest and scalar results in vector instructions from the respective CPU instruction set being used.

+

Default value: fastest.
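As a sketch of reviewing the benchmark results and pinning an implementation (the fletcher_4_bench kstat path is assumed from standard Linux builds):
cat /proc/spl/kstat/zfs/fletcher_4_bench
echo scalar > /sys/module/zfs/parameters/zfs_fletcher_4_impl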

+
+

+

zfs_free_bpobj_enabled (int)

+
Enable/disable the processing of the free_bpobj object. +

Default value: 1.

+
+

+

zfs_async_block_max_blocks (ulong)

+
Maximum number of blocks freed in a single txg. +

Default value: ULONG_MAX (unlimited).

+
+

+

zfs_max_async_dedup_frees (ulong)

+
Maximum number of dedup blocks freed in a single txg. +

Default value: 100,000.

+
+

+

zfs_override_estimate_recordsize (ulong)

+
Record size calculation override for zfs send estimates. +

Default value: 0.

+
+

+

zfs_vdev_async_read_max_active (int)

+
Maximum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 3.

+
+

+

zfs_vdev_async_read_min_active (int)

+
Minimum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_async_write_active_max_dirty_percent (int)

+
When the pool has more than + zfs_vdev_async_write_active_max_dirty_percent dirty data, use + zfs_vdev_async_write_max_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 60%.

+
+

+

zfs_vdev_async_write_active_min_dirty_percent (int)

+
When the pool has less than + zfs_vdev_async_write_active_min_dirty_percent dirty data, use + zfs_vdev_async_write_min_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 30%.

+
+

+

zfs_vdev_async_write_max_active (int)

+
Maximum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_async_write_min_active (int)

+
Minimum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of 2 was chosen as + a compromise. A value of 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+

Default value: 2.

+
+

+

zfs_vdev_initializing_max_active (int)

+
Maximum initializing I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_initializing_min_active (int)

+
Minimum initializing I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_max_active (int)

+
The maximum number of I/Os active to each device. + Ideally, this will be >= the sum of each queue's max_active. See the + section "ZFS I/O SCHEDULER". +

Default value: 1,000.

+
+

+

zfs_vdev_rebuild_max_active (int)

+
Maximum sequential resilver I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Default value: 3.

+
+

+

zfs_vdev_rebuild_min_active (int)

+
Minimum sequential resilver I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_removal_max_active (int)

+
Maximum removal I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_removal_min_active (int)

+
Minimum removal I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_scrub_max_active (int)

+
Maximum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_scrub_min_active (int)

+
Minimum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_sync_read_max_active (int)

+
Maximum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_read_min_active (int)

+
Minimum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_max_active (int)

+
Maximum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_min_active (int)

+
Minimum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_trim_max_active (int)

+
Maximum trim/discard I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_trim_min_active (int)

+
Minimum trim/discard I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_nia_delay (int)

+
For non-interactive I/O (scrub, resilver, removal, + initialize and rebuild), the number of concurrently-active I/O's is limited to + *_min_active, unless the vdev is "idle". When there are no + interactive I/Os active (sync or async), and zfs_vdev_nia_delay I/Os have + completed since the last interactive I/O, then the vdev is considered to be + "idle", and the number of concurrently-active non-interactive I/O's + is increased to *_max_active. See the section "ZFS I/O SCHEDULER". +

Default value: 5.

+
+

+

zfs_vdev_nia_credit (int)

+
Some HDDs tend to prioritize sequential I/O so highly that concurrent random I/O latency reaches several seconds. On some HDDs this happens even if sequential I/Os are submitted one at a time, and so setting *_max_active to 1 does not help. To prevent non-interactive I/Os, like scrub, from monopolizing the device, no more than zfs_vdev_nia_credit I/Os can be sent while there are outstanding incomplete interactive I/Os. This enforced wait ensures the HDD services the interactive I/O within a reasonable amount of time. See the section "ZFS I/O SCHEDULER".

Default value: 5.

+
+

+

zfs_vdev_queue_depth_pct (int)

+
Maximum number of queued allocations per top-level vdev + expressed as a percentage of zfs_vdev_async_write_max_active which + allows the system to detect devices that are more capable of handling + allocations and to allocate more blocks to those devices. It allows for + dynamic allocation distribution when devices are imbalanced as fuller devices + will tend to be slower than empty devices. +

See also zio_dva_throttle_enabled.

+

Default value: 1000%.

+
+

+

zfs_expire_snapshot (int)

+
Seconds to expire .zfs/snapshot +

Default value: 300.

+
+

+

zfs_admin_snapshot (int)

+
Allow the creation, removal, or renaming of entries in + the .zfs/snapshot directory to cause the creation, destruction, or renaming of + snapshots. When enabled this functionality works both locally and over NFS + exports which have the 'no_root_squash' option set. This functionality is + disabled by default. +

Use 1 for yes and 0 for no (default).
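As a sketch of using this feature (the dataset mountpoint /tank/data and snapshot name are illustrative):
echo 1 > /sys/module/zfs/parameters/zfs_admin_snapshot
mkdir /tank/data/.zfs/snapshot/before-upgrade    # creates the snapshot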

+
+

+

zfs_flags (int)

+
Set additional debugging flags. The following flags may + be bitwise-or'd together. +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Value   Symbolic Name                 Description
1       ZFS_DEBUG_DPRINTF             Enable dprintf entries in the debug log.
2       ZFS_DEBUG_DBUF_VERIFY *       Enable extra dbuf verifications.
4       ZFS_DEBUG_DNODE_VERIFY *      Enable extra dnode verifications.
8       ZFS_DEBUG_SNAPNAMES           Enable snapshot name verification.
16      ZFS_DEBUG_MODIFY              Check for illegally modified ARC buffers.
64      ZFS_DEBUG_ZIO_FREE            Enable verification of block frees.
128     ZFS_DEBUG_HISTOGRAM_VERIFY    Enable extra spacemap histogram verifications.
256     ZFS_DEBUG_METASLAB_VERIFY     Verify space accounting on disk matches in-core range_trees.
512     ZFS_DEBUG_SET_ERROR           Enable SET_ERROR and dprintf entries in the debug log.
1024    ZFS_DEBUG_INDIRECT_REMAP      Verify split blocks created by device removal.
2048    ZFS_DEBUG_TRIM                Verify TRIM ranges are always within the allocatable range tree.
4096    ZFS_DEBUG_LOG_SPACEMAP        Verify that the log summary is consistent with the spacemap log and enable zfs_dbgmsgs for metaslab loading and flushing.
+

* Requires debug build.

+

Default value: 0.
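For example (a sketch; the sysfs path is the standard location for OpenZFS module parameters on Linux), flags are combined by adding their values: ZFS_DEBUG_SNAPNAMES (8) plus ZFS_DEBUG_MODIFY (16) gives 24.

# 8 | 16 = 24: snapshot name verification plus ARC buffer modification checks
echo 24 > /sys/module/zfs/parameters/zfs_flags
# confirm the current value
cat /sys/module/zfs/parameters/zfs_flags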

+
+

+

zfs_free_leak_on_eio (int)

+
If destroy encounters an EIO while reading metadata (e.g. + indirect blocks), space referenced by the missing metadata can not be freed. + Normally this causes the background destroy to become "stalled", as + it is unable to make forward progress. While in this stalled state, all + remaining space to free from the error-encountering filesystem is + "temporarily leaked". Set this flag to cause it to ignore the EIO, + permanently leak the space from indirect blocks that can not be read, and + continue to free everything else that it can. +

The default, "stalling" behavior is useful if the + storage partially fails (i.e. some but not all i/os fail), and then later + recovers. In this case, we will be able to continue pool operations while it + is partially failed, and when it recovers, we can continue to free the + space, with no leaks. However, note that this case is actually fairly + rare.

+

Typically pools either (a) fail completely (but perhaps + temporarily, e.g. a top-level vdev going offline), or (b) have localized, + permanent errors (e.g. disk returns the wrong data due to bit flip or + firmware bug). In case (a), this setting does not matter because the pool + will be suspended and the sync thread will not be able to make forward + progress regardless. In case (b), because the error is permanent, the best + we can do is leak the minimum amount of space, which is what setting this + flag will do. Therefore, it is reasonable for this flag to normally be set, + but we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.

+

Default value: 0.

+
+

+

zfs_free_min_time_ms (int)

+
During a zfs destroy operation using + feature@async_destroy a minimum of this much time will be spent working + on freeing blocks per txg. +

Default value: 1,000.

+
+

+

zfs_obsolete_min_time_ms (int)

+
Similar to zfs_free_min_time_ms but for cleanup of + old indirection records for removed vdevs. +

Default value: 500.

+
+

+

zfs_immediate_write_sz (long)

+
Largest data block to write to zil. Larger blocks will be + treated as if the dataset being written to had the property setting + logbias=throughput. +

Default value: 32,768.

+
+

+

zfs_initialize_value (ulong)

+
Pattern written to vdev free space by zpool + initialize. +

Default value: 16,045,690,984,833,335,022 + (0xdeadbeefdeadbeee).
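For context, a minimal sketch of driving the initialize operation that writes this pattern (the pool name tank is hypothetical):

# start writing the initialize pattern to free space on all eligible vdevs
zpool initialize tank
# suspend it again; progress is reported by zpool status
zpool initialize -s tank
zpool status tank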

+
+

+

zfs_initialize_chunk_size (ulong)

+
Size of writes used by zpool initialize. This + option is used by the test suite to facilitate testing. +

Default value: 1,048,576

+
+

+

zfs_livelist_max_entries (ulong)

+
The threshold size (in block pointers) at which we create + a new sub-livelist. Larger sublists are more costly from a memory perspective + but the fewer sublists there are, the lower the cost of insertion. +

Default value: 500,000.

+
+

+

zfs_livelist_min_percent_shared (int)

+
If the amount of shared space between a snapshot and its clone drops below this threshold, the clone turns off the livelist and reverts to the old deletion method. This is in place because once a clone has been overwritten enough, livelists no longer give us a benefit.

Default value: 75.

+
+

+

zfs_livelist_condense_new_alloc (int)

+
Incremented each time an extra ALLOC blkptr is added to a + livelist entry while it is being condensed. This option is used by the test + suite to track race conditions. +

Default value: 0.

+
+

+

zfs_livelist_condense_sync_cancel (int)

+
Incremented each time livelist condensing is canceled + while in spa_livelist_condense_sync. This option is used by the test suite to + track race conditions. +

Default value: 0.

+
+

+

zfs_livelist_condense_sync_pause (int)

+
When set, the livelist condense process pauses + indefinitely before executing the synctask - spa_livelist_condense_sync. This + option is used by the test suite to trigger race conditions. +

Default value: 0.

+
+

+

zfs_livelist_condense_zthr_cancel (int)

+
Incremented each time livelist condensing is canceled + while in spa_livelist_condense_cb. This option is used by the test suite to + track race conditions. +

Default value: 0.

+
+

+

zfs_livelist_condense_zthr_pause (int)

+
When set, the livelist condense process pauses + indefinitely before executing the open context condensing work in + spa_livelist_condense_cb. This option is used by the test suite to trigger + race conditions. +

Default value: 0.

+
+

+

zfs_lua_max_instrlimit (ulong)

+
The maximum execution time limit that can be set for a + ZFS channel program, specified as a number of Lua instructions. +

Default value: 100,000,000.

+
+

+

zfs_lua_max_memlimit (ulong)

+
The maximum memory limit that can be set for a ZFS + channel program, specified in bytes. +

Default value: 104,857,600.
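These two tunables cap the per-invocation limits that can be requested with zfs program. A hedged sketch (the pool name and script path are hypothetical):

# request explicit limits for a channel program; neither may exceed
# zfs_lua_max_instrlimit or zfs_lua_max_memlimit above
zfs program -t 10000000 -m 10485760 tank /root/cleanup.lua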

+
+

+

zfs_max_dataset_nesting (int)

+
The maximum depth of nested datasets. This value can be + tuned temporarily to fix existing datasets that exceed the predefined limit. +

Default value: 50.

+
+

+

zfs_max_log_walking (ulong)

+
The number of past TXGs that the flushing algorithm of + the log spacemap feature uses to estimate incoming log blocks. +

Default value: 5.

+
+

+

zfs_max_logsm_summary_length (ulong)

+
Maximum number of rows allowed in the summary of the + spacemap log. +

Default value: 10.

+
+

+

zfs_max_recordsize (int)

+
We currently support block sizes from 512 bytes to 16MB. + The benefits of larger blocks, and thus larger I/O, need to be weighed against + the cost of COWing a giant block to modify one byte. Additionally, very large + blocks can have an impact on i/o latency, and also potentially on the memory + allocator. Therefore, we do not allow the recordsize to be set larger than + zfs_max_recordsize (default 1MB). Larger blocks can be created by changing + this tunable, and pools with larger blocks can always be imported and used, + regardless of this setting. +

Default value: 1,048,576.
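As a hedged example (the dataset name is hypothetical), the module-wide cap is raised first and an individual dataset is then opted into larger records:

# allow record sizes up to 4 MiB module-wide
echo 4194304 > /sys/module/zfs/parameters/zfs_max_recordsize
# then opt a specific dataset in
zfs set recordsize=4M tank/backups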

+
+

+

zfs_allow_redacted_dataset_mount (int)

+
Allow datasets received with redacted send/receive to be + mounted. Normally disabled because these datasets may be missing key data. +

Default value: 0.

+
+

+

zfs_min_metaslabs_to_flush (ulong)

+
Minimum number of metaslabs to flush per dirty TXG +

Default value: 1.

+
+

+

zfs_metaslab_fragmentation_threshold (int)

+
Allow metaslabs to keep their active state as long as + their fragmentation percentage is less than or equal to this value. An active + metaslab that exceeds this threshold will no longer keep its active status + allowing better metaslabs to be selected. +

Default value: 70.

+
+

+

zfs_mg_fragmentation_threshold (int)

+
Metaslab groups are considered eligible for allocations + if their fragmentation metric (measured as a percentage) is less than or equal + to this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also crossed + this threshold. +

Default value: 95.

+
+

+

zfs_mg_noalloc_threshold (int)

+
Defines a threshold at which metaslab groups should be + eligible for allocations. The value is expressed as a percentage of free space + beyond which a metaslab group is always eligible for allocations. If a + metaslab group's free space is less than or equal to the threshold, the + allocator will avoid allocating to that group unless all groups in the pool + have reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of 0 disables the + feature and causes all metaslab groups to be eligible for allocations. +

This parameter allows one to deal with pools having heavily + imbalanced vdevs such as would be the case when a new vdev has been added. + Setting the threshold to a non-zero percentage will stop allocations from + being made to vdevs that aren't filled to the specified percentage and allow + lesser filled vdevs to acquire more allocations than they otherwise would + under the old zfs_mg_alloc_failures facility.

+

Default value: 0.

+
+

+

zfs_ddt_data_is_special (int)

+
If enabled, ZFS will place DDT data into the special + allocation class. +

Default value: 1.

+
+

+

zfs_user_indirect_is_special (int)

+
If enabled, ZFS will place user data (both file and zvol) + indirect blocks into the special allocation class. +

Default value: 1.

+
+

+

zfs_multihost_history (int)

+
Historical statistics for the last N multihost updates + will be available in /proc/spl/kstat/zfs/<pool>/multihost +

Default value: 0.

+
+

+

zfs_multihost_interval (ulong)

+
Used to control the frequency of multihost writes which + are performed when the multihost pool property is on. This is one + factor used to determine the length of the activity check during import. +

The multihost write period is zfs_multihost_interval / + leaf-vdevs milliseconds. On average a multihost write will be issued for + each leaf vdev every zfs_multihost_interval milliseconds. In + practice, the observed period can vary with the I/O load and this observed + value is the delay which is stored in the uberblock.

+

Default value: 1000.

+
+

+

zfs_multihost_import_intervals (uint)

+
Used to control the duration of the activity test on + import. Smaller values of zfs_multihost_import_intervals will reduce + the import time but increase the risk of failing to detect an active pool. The + total activity check time is never allowed to drop below one second. +

On import the activity check waits a minimum amount of time + determined by zfs_multihost_interval * + zfs_multihost_import_intervals, or the same product computed on the host + which last had the pool imported (whichever is greater). The activity check + time may be further extended if the value of mmp delay found in the best + uberblock indicates actual multihost updates happened at longer intervals + than zfs_multihost_interval. A minimum value of 100ms is + enforced.

+

A value of 0 is ignored and treated as if it was set to 1.

+

Default value: 20.

+
+

+

zfs_multihost_fail_intervals (uint)

+
Controls the behavior of the pool when multihost write + failures or delays are detected. +

When zfs_multihost_fail_intervals = 0, multihost write failures or delays are ignored. The failures will still be reported to the ZED, which, depending on its configuration, may take action such as suspending the pool or offlining a device.

+

+

When zfs_multihost_fail_intervals > 0, the pool will be + suspended if zfs_multihost_fail_intervals * zfs_multihost_interval + milliseconds pass without a successful mmp write. This guarantees the + activity test will see mmp writes if the pool is imported. A value of 1 is + ignored and treated as if it was set to 2. This is necessary to prevent the + pool from being suspended due to normal, small I/O latency variations.

+

+

Default value: 10.
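A worked example with the defaults listed above: a multihost write must succeed at least once every zfs_multihost_fail_intervals * zfs_multihost_interval = 10 * 1000 ms = 10 seconds or the pool is suspended, while an importing host waits at least zfs_multihost_import_intervals * zfs_multihost_interval = 20 * 1000 ms = 20 seconds during the activity check. The current values can be read back from sysfs:

cat /sys/module/zfs/parameters/zfs_multihost_interval
cat /sys/module/zfs/parameters/zfs_multihost_fail_intervals
cat /sys/module/zfs/parameters/zfs_multihost_import_intervals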

+
+

+

zfs_no_scrub_io (int)

+
Set for no scrub I/O. This results in scrubs not actually + scrubbing data and simply doing a metadata crawl of the pool instead. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_no_scrub_prefetch (int)

+
Set to disable block prefetching for scrubs. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nocacheflush (int)

+
Disable cache flush operations on disks when writing. + Setting this will cause pool corruption on power loss if a volatile + out-of-order write cache is enabled. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nopwrite_enabled (int)

+
Enable NOP writes +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_dmu_offset_next_sync (int)

+
Enable forcing txg sync to find holes. When enabled, this forces ZFS to act like prior versions when the SEEK_HOLE or SEEK_DATA flags are used: when a dnode is dirty, txgs are synced so that this data can be found.

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_pd_bytes_max (int)

+
The number of bytes which should be prefetched during a pool traversal (e.g. zfs send or other data-crawling operations).

Default value: 52,428,800.

+
+

+

zfs_per_txg_dirty_frees_percent (ulong)

+
Tunable to control percentage of dirtied indirect blocks + from frees allowed into one TXG. After this threshold is crossed, additional + frees will wait until the next TXG. A value of zero will disable this + throttle. +

Default value: 5, set to 0 to disable.

+
+

+

zfs_prefetch_disable (int)

+
This tunable disables predictive prefetch. Note that it + leaves "prescient" prefetch (e.g. prefetch for zfs send) intact. + Unlike predictive prefetch, prescient prefetch never issues i/os that end up + not being needed, so it can't hurt performance. +

Use 1 for yes and 0 for no (default).
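A brief sketch of both the runtime toggle and a persistent setting; the /etc/modprobe.d/zfs.conf path is the conventional location and may differ by distribution:

# disable predictive prefetch immediately
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable
# make the setting persistent across reboots
echo "options zfs zfs_prefetch_disable=1" >> /etc/modprobe.d/zfs.conf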

+
+

+

zfs_qat_checksum_disable (int)

+
This tunable disables qat hardware acceleration for + sha256 checksums. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_qat_compress_disable (int)

+
This tunable disables qat hardware acceleration for gzip + compression. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_qat_encrypt_disable (int)

+
This tunable disables qat hardware acceleration for + AES-GCM encryption. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_read_chunk_size (long)

+
Bytes to read per chunk +

Default value: 1,048,576.

+
+

+

zfs_read_history (int)

+
Historical statistics for the last N reads will be + available in /proc/spl/kstat/zfs/<pool>/reads +

Default value: 0 (no data is kept).

+
+

+

zfs_read_history_hits (int)

+
Include cache hits in read history +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_rebuild_max_segment (ulong)

+
Maximum read segment size to issue when sequentially + resilvering a top-level vdev. +

Default value: 1,048,576.

+
+

+

zfs_reconstruct_indirect_combinations_max (int)

+
If an indirect split block contains more than this many + possible unique combinations when being reconstructed, consider it too + computationally expensive to check them all. Instead, try at most + zfs_reconstruct_indirect_combinations_max randomly-selected + combinations each time the block is accessed. This allows all segment copies + to participate fairly in the reconstruction when all combinations cannot be + checked and prevents repeated use of one bad copy. +

Default value: 4096.

+
+

+

zfs_recover (int)

+
Set to attempt to recover from fatal errors. This should + only be used as a last resort, as it typically results in leaked space, or + worse. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_removal_ignore_errors (int)

+
+

Ignore hard IO errors during device removal. When set, if a device + encounters a hard IO error during the removal process the removal will not + be cancelled. This can result in a normally recoverable block becoming + permanently damaged and is not recommended. This should only be used as a + last resort when the pool cannot be returned to a healthy state prior to + removing the device.

+

Default value: 0.

+
+

+

zfs_removal_suspend_progress (int)

+
+

This is used by the test suite so that it can ensure that certain + actions happen while in the middle of a removal.

+

Default value: 0.

+
+

+

zfs_remove_max_segment (int)

+
+

The largest contiguous segment that we will attempt to allocate + when removing a device. This can be no larger than 16MB. If there is a + performance problem with attempting to allocate large blocks, consider + decreasing this.

+

Default value: 16,777,216 (16MB).

+
+

+

zfs_resilver_disable_defer (int)

+
Disables the resilver_defer feature, causing an + operation that would start a resilver to restart one in progress immediately. +

Default value: 0 (feature enabled).

+
+

+

zfs_resilver_min_time_ms (int)

+
Resilvers are processed by the sync thread. While + resilvering it will spend at least this much time working on a resilver + between txg flushes. +

Default value: 3,000.

+
+

+

zfs_scan_ignore_errors (int)

+
If set to a nonzero value, remove the DTL (dirty time + list) upon completion of a pool scan (scrub) even if there were unrepairable + errors. It is intended to be used during pool repair or recovery to stop + resilvering when the pool is next imported. +

Default value: 0.

+
+

+

zfs_scrub_min_time_ms (int)

+
Scrubs are processed by the sync thread. While scrubbing + it will spend at least this much time working on a scrub between txg flushes. +

Default value: 1,000.

+
+

+

zfs_scan_checkpoint_intval (int)

+
To preserve progress across reboots, the sequential scan algorithm periodically needs to stop metadata scanning and issue all the verification I/Os to disk. The frequency of this flushing is determined by the zfs_scan_checkpoint_intval tunable.

Default value: 7200 seconds (every 2 hours).

+
+

+

zfs_scan_fill_weight (int)

+
This tunable affects how scrub and resilver I/O segments are ordered. A higher number indicates that we care more about how filled in a segment is, while a lower number indicates we care more about the size of the extent without considering the gaps within a segment. This value is only tunable upon module insertion. Changing the value afterwards will have no effect on scrub or resilver performance.

Default value: 3.

+
+

+

zfs_scan_issue_strategy (int)

+
Determines the order that data will be verified while + scrubbing or resilvering. If set to 1, data will be verified as + sequentially as possible, given the amount of memory reserved for scrubbing + (see zfs_scan_mem_lim_fact). This may improve scrub performance if the + pool's data is very fragmented. If set to 2, the largest + mostly-contiguous chunk of found data will be verified first. By deferring + scrubbing of small segments, we may later find adjacent data to coalesce and + increase the segment size. If set to 0, zfs will use strategy 1 + during normal verification and strategy 2 while taking a checkpoint. +

Default value: 0.

+
+

+

zfs_scan_legacy (int)

+
A value of 0 indicates that scrubs and resilvers will + gather metadata in memory before issuing sequential I/O. A value of 1 + indicates that the legacy algorithm will be used where I/O is initiated as + soon as it is discovered. Changing this value to 0 will not affect scrubs or + resilvers that are already in progress. +

Default value: 0.

+
+

+

zfs_scan_max_ext_gap (int)

+
Indicates the largest gap in bytes between scrub / + resilver I/Os that will still be considered sequential for sorting purposes. + Changing this value will not affect scrubs or resilvers that are already in + progress. +

Default value: 2097152 (2 MB).

+
+

+

zfs_scan_mem_lim_fact (int)

+
Maximum fraction of RAM used for I/O sorting by + sequential scan algorithm. This tunable determines the hard limit for I/O + sorting memory usage. When the hard limit is reached we stop scanning metadata + and start issuing data verification I/O. This is done until we get below the + soft limit. +

Default value: 20 which is 5% of RAM (1/20).

+
+

+

zfs_scan_mem_lim_soft_fact (int)

+
The fraction of the hard limit used to determine the soft limit for I/O sorting by the sequential scan algorithm. When we cross this limit from below no action is taken. When we cross this limit from above it is because we are issuing verification I/O. In this case (unless the metadata scan is done) we stop issuing verification I/O and start scanning metadata again until we get to the hard limit.

Default value: 20 which is 5% of the hard limit (1/20).

+
+

+

zfs_scan_strict_mem_lim (int)

+
Enforces tight memory limits on pool scans when a + sequential scan is in progress. When disabled the memory limit may be exceeded + by fast disks. +

Default value: 0.

+
+

+

zfs_scan_suspend_progress (int)

+
Freezes a scrub/resilver in progress without actually + pausing it. Intended for testing/debugging. +

Default value: 0.

+
+

+

+

zfs_scan_vdev_limit (int)

+
Maximum amount of data that can be concurrently issued for scrubs and resilvers per leaf device, given in bytes.

Default value: 41943040.

+
+

+

zfs_send_corrupt_data (int)

+
Allow sending of corrupt data (ignore read/checksum + errors when sending data) +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_send_unmodified_spill_blocks (int)

+
Include unmodified spill blocks in the send stream. Under + certain circumstances previous versions of ZFS could incorrectly remove the + spill block from an existing object. Including unmodified copies of the spill + blocks creates a backwards compatible stream which will recreate a spill block + if it was incorrectly removed. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_send_no_prefetch_queue_ff (int)

+
The fill fraction of the zfs send internal queues. + The fill fraction controls the timing with which internal threads are woken + up. +

Default value: 20.

+
+

+

zfs_send_no_prefetch_queue_length (int)

+
The maximum number of bytes allowed in zfs send's + internal queues. +

Default value: 1,048,576.

+
+

+

zfs_send_queue_ff (int)

+
The fill fraction of the zfs send prefetch queue. + The fill fraction controls the timing with which internal threads are woken + up. +

Default value: 20.

+
+

+

zfs_send_queue_length (int)

+
The maximum number of bytes allowed that will be + prefetched by zfs send. This value must be at least twice the maximum + block size in use. +

Default value: 16,777,216.

+
+

+

zfs_recv_queue_ff (int)

+
The fill fraction of the zfs receive queue. The + fill fraction controls the timing with which internal threads are woken up. +

Default value: 20.

+
+

+

zfs_recv_queue_length (int)

+
The maximum number of bytes allowed in the zfs + receive queue. This value must be at least twice the maximum block size in + use. +

Default value: 16,777,216.

+
+

+

zfs_recv_write_batch_size (int)

+
The maximum amount of data (in bytes) that zfs receive will write in one DMU transaction. This is the uncompressed size, even when receiving a compressed send stream. This setting will not reduce the write size below a single block. Capped at a maximum of 32MB.

Default value: 1MB.

+
+

+

zfs_override_estimate_recordsize (ulong)

+
Setting this variable overrides the default logic for + estimating block sizes when doing a zfs send. The default heuristic is that + the average block size will be the current recordsize. Override this value if + most data in your dataset is not of that size and you require accurate zfs + send size estimates. +

Default value: 0.

+
+

+

zfs_sync_pass_deferred_free (int)

+
Flushing of data to disk is done in passes. Defer frees + starting in this pass +

Default value: 2.

+
+

+

zfs_spa_discard_memory_limit (int)

+
Maximum memory used for prefetching a checkpoint's space + map on each vdev while discarding the checkpoint. +

Default value: 16,777,216.

+
+

+

zfs_special_class_metadata_reserve_pct (int)

+
Only allow small data blocks to be allocated on the special and dedup vdev types when the available free space percentage on these vdevs exceeds this value. This ensures reserved space is available for pool metadata as the special vdevs approach capacity.

Default value: 25.

+
+

+

zfs_sync_pass_dont_compress (int)

+
Starting in this sync pass, we disable compression + (including of metadata). With the default setting, in practice, we don't have + this many sync passes, so this has no effect. +

The original intent was that disabling compression would help the sync passes to converge. However, in practice disabling compression increases the average number of sync passes, because when we turn compression off, many blocks' sizes will change and thus we have to re-allocate (not overwrite) them. It also increases the number of 128KB allocations (e.g. for indirect blocks and spacemaps) because these will not be compressed. The 128KB allocations are especially detrimental to performance on highly fragmented systems, which may have very few free segments of this size, and may need to load new metaslabs to satisfy 128KB allocations.

+

Default value: 8.

+
+

+

zfs_sync_pass_rewrite (int)

+
Rewrite new block pointers starting in this pass +

Default value: 2.

+
+

+

zfs_sync_taskq_batch_pct (int)

+
This controls the number of threads used by the + dp_sync_taskq. The default value of 75% will create a maximum of one thread + per cpu. +

Default value: 75%.

+
+

+

zfs_trim_extent_bytes_max (uint)

+
Maximum size of TRIM command. Ranges larger than this will be split into chunks no larger than zfs_trim_extent_bytes_max bytes before being issued to the device.

Default value: 134,217,728.

+
+

+

zfs_trim_extent_bytes_min (uint)

+
Minimum size of TRIM commands. TRIM ranges smaller than this will be skipped unless they're part of a larger range which was broken into chunks. This is done because it's common for these small TRIMs to negatively impact overall performance. This value can be set to 0 to TRIM all unallocated space.

Default value: 32,768.

+
+

+

zfs_trim_metaslab_skip (uint)

+
Skip uninitialized metaslabs during the TRIM process. This option is useful for pools constructed from large thinly-provisioned devices where TRIM operations are slow. As a pool ages, an increasing fraction of the pool's metaslabs will be initialized, progressively degrading the usefulness of this option. This setting is stored when starting a manual TRIM and will persist for the duration of the requested TRIM.

Default value: 0.
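For reference, a manual TRIM that would pick up this setting can be started and monitored as follows (the pool name is hypothetical):

# begin a manual TRIM of all supported vdevs in the pool
zpool trim tank
# -t shows per-vdev TRIM progress
zpool status -t tank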

+
+

+

zfs_trim_queue_limit (uint)

+
Maximum number of queued TRIMs outstanding per leaf vdev. + The number of concurrent TRIM commands issued to the device is controlled by + the zfs_vdev_trim_min_active and zfs_vdev_trim_max_active module + options. +

Default value: 10.

+
+

+

zfs_trim_txg_batch (uint)

+
The number of transaction groups worth of frees which + should be aggregated before TRIM operations are issued to the device. This + setting represents a trade-off between issuing larger, more efficient TRIM + operations and the delay before the recently trimmed space is available for + use by the device. +

Increasing this value will allow frees to be aggregated for a longer time. This will result in larger TRIM operations and potentially increased memory usage. Decreasing this value will have the opposite effect. The default value of 32 was determined to be a reasonable compromise.

+

Default value: 32.

+
+

+

zfs_txg_history (int)

+
Historical statistics for the last N txgs will be + available in /proc/spl/kstat/zfs/<pool>/txgs +

Default value: 0.

+
+

+

zfs_txg_timeout (int)

+
Flush dirty data to disk at least every N seconds + (maximum txg duration) +

Default value: 5.

+
+

+

zfs_vdev_aggregate_trim (int)

+
Allow TRIM I/Os to be aggregated. This is normally not helpful because the extents to be trimmed will already have been aggregated by the metaslab. This option is provided for debugging and performance analysis.

Default value: 0.

+
+

+

zfs_vdev_aggregation_limit (int)

+
Max vdev I/O aggregation size +

Default value: 1,048,576.

+
+

+

zfs_vdev_aggregation_limit_non_rotating (int)

+
Max vdev I/O aggregation size for non-rotating media +

Default value: 131,072.

+
+

+

zfs_vdev_cache_bshift (int)

+
Shift size to inflate reads to.

Default value: 16 (effectively 65536).

+
+

+

zfs_vdev_cache_max (int)

+
Inflate reads smaller than this value to meet the + zfs_vdev_cache_bshift size (default 64k). +

Default value: 16384.

+
+

+

zfs_vdev_cache_size (int)

+
Total size of the per-disk cache in bytes. +

Currently this feature is disabled as it has been found to not be + helpful for performance and in some cases harmful.

+

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_inc (int)

+
A number by which the balancing algorithm increments the load calculation when an I/O immediately follows its predecessor on rotational vdevs, for the purpose of selecting the least busy mirror member.

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the load calculation for the purpose of selecting the least busy mirror member when an I/O lacks locality as defined by zfs_vdev_mirror_rotating_seek_offset. I/Os within this distance that do not immediately follow the previous I/O are incremented by half.

Default value: 5.

+
+

+

zfs_vdev_mirror_rotating_seek_offset (int)

+
The maximum distance for the last queued I/O in which the + balancing algorithm considers an I/O to have locality. See the section + "ZFS I/O SCHEDULER". +

Default value: 1048576.

+
+

+

zfs_vdev_mirror_non_rotating_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/Os do not immediately follow one another. +

Default value: 0.

+
+

+

zfs_vdev_mirror_non_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the load calculation for the purpose of selecting the least busy mirror member when an I/O lacks locality as defined by zfs_vdev_mirror_rotating_seek_offset. I/Os within this distance that do not immediately follow the previous I/O are incremented by half.

Default value: 1.

+
+

+

zfs_vdev_read_gap_limit (int)

+
Aggregate read I/O operations if the gap on-disk between + them is within this threshold. +

Default value: 32,768.

+
+

+

zfs_vdev_write_gap_limit (int)

+
Aggregate write I/O over gap +

Default value: 4,096.

+
+

+

zfs_vdev_raidz_impl (string)

+
Parameter for selecting raidz parity implementation to + use. +

Options marked (always) below may be selected on module load as + they are supported on all systems. The remaining options may only be set + after the module is loaded, as they are available only if the + implementations are compiled in and supported on the running system.

+

Once the module is loaded, the content of + /sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options + with the currently selected one enclosed in []. Possible options are: +
+ fastest - (always) implementation selected using built-in benchmark +
+ original - (always) original raidz implementation +
+ scalar - (always) scalar raidz implementation +
+ sse2 - implementation using SSE2 instruction set (64bit x86 only) +
+ ssse3 - implementation using SSSE3 instruction set (64bit x86 only) +
+ avx2 - implementation using AVX2 instruction set (64bit x86 only) +
+ avx512f - implementation using AVX512F instruction set (64bit x86 only) +
+ avx512bw - implementation using AVX512F & AVX512BW instruction sets + (64bit x86 only) +
+ aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only) +
+ aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 + bit ARMv8 only) +
+ powerpc_altivec - implementation using Altivec (PowerPC only)

+

Default value: fastest.
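A short sketch of inspecting and overriding the selection at runtime, as described above:

# the currently selected implementation is shown in brackets
cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
# force a specific implementation (only if it is listed as available)
echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl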

+
+

+

zfs_vdev_scheduler (charp)

+
DEPRECATED: This option exists for compatibility + with older user configurations. It does nothing except print a warning to the + kernel log if set. +

+
+

+

zfs_zevent_cols (int)

+
When zevents are logged to the console use this as the + word wrap width. +

Default value: 80.

+
+

+

zfs_zevent_console (int)

+
Log events to the console +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_zevent_len_max (int)

+
Max event queue length. Events in the queue can be viewed + with the zpool events command. +

Default value: 512.
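For example, the queue can be inspected and enlarged at runtime (a sketch):

# view queued events, with full payloads
zpool events -v
# allow more events to be retained before old ones are dropped
echo 1024 > /sys/module/zfs/parameters/zfs_zevent_len_max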

+
+

+

zfs_zevent_retain_max (int)

+
Maximum recent zevent records to retain for duplicate + checking. Setting this value to zero disables duplicate detection. +

Default value: 2000.

+
+

+

zfs_zevent_retain_expire_secs (int)

+
Lifespan for a recent ereport that was retained for + duplicate checking. +

Default value: 900.

+
+

zfs_zil_clean_taskq_maxalloc (int)

+
The maximum number of taskq entries that are allowed to + be cached. When this limit is exceeded transaction records (itxs) will be + cleaned synchronously. +

Default value: 1048576.

+
+

+

zfs_zil_clean_taskq_minalloc (int)

+
The number of taskq entries that are pre-populated when + the taskq is first created and are immediately available for use. +

Default value: 1024.

+
+

+

zfs_zil_clean_taskq_nthr_pct (int)

+
This controls the number of threads used by the + dp_zil_clean_taskq. The default value of 100% will create a maximum of one + thread per cpu. +

Default value: 100%.

+
+

+

zil_maxblocksize (int)

+
This sets the maximum block size used by the ZIL. On very + fragmented pools, lowering this (typically to 36KB) can improve performance. +

Default value: 131072 (128KB).

+
+

+

zil_nocacheflush (int)

+
Disable the cache flush commands that are normally sent + to the disk(s) by the ZIL after an LWB write has completed. Setting this will + cause ZIL corruption on power loss if a volatile out-of-order write cache is + enabled. +

Use 1 for yes and 0 for no (default).

+
+

+

zil_replay_disable (int)

+
Disable intent logging replay. Can be disabled for + recovery from corrupted ZIL +

Use 1 for yes and 0 for no (default).

+
+

+

zil_slog_bulk (ulong)

+
Limit SLOG write size per commit executed with synchronous priority. Any writes above that will be executed with lower (asynchronous) priority to limit potential SLOG device abuse by a single active ZIL writer.

Default value: 786,432.

+
+

+

zio_deadman_log_all (int)

+
If non-zero, the zio deadman will produce debugging + messages (see zfs_dbgmsg_enable) for all zios, rather than only for + leaf zios possessing a vdev. This is meant to be used by developers to gain + diagnostic information for hang conditions which don't involve a mutex or + other locking primitive; typically conditions in which a thread in the zio + pipeline is looping indefinitely. +

Default value: 0.

+
+

+

zio_decompress_fail_fraction (int)

+
If non-zero, this value represents the denominator of the + probability that zfs should induce a decompression failure. For instance, for + a 5% decompression failure rate, this value should be set to 20. +

Default value: 0.

+
+

+

zio_slow_io_ms (int)

+
When an I/O operation takes more than zio_slow_io_ms milliseconds to complete, it is marked as a slow I/O. Each slow I/O causes a delay zevent. Slow I/O counters can be seen with "zpool status -s".

+

Default value: 30,000.
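A quick sketch of lowering the threshold and checking the resulting counters (the pool name is hypothetical):

# treat anything slower than 10 seconds as a slow I/O
echo 10000 > /sys/module/zfs/parameters/zio_slow_io_ms
# show per-vdev slow I/O counts
zpool status -s tank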

+
+

+

zio_dva_throttle_enabled (int)

+
Throttle block allocations in the I/O pipeline. This + allows for dynamic allocation distribution when devices are imbalanced. When + enabled, the maximum number of pending allocations per top-level vdev is + limited by zfs_vdev_queue_depth_pct. +

Default value: 1.

+
+

+

zio_requeue_io_start_cut_in_line (int)

+
Prioritize requeued I/O +

Default value: 0.

+
+

+

zio_taskq_batch_pct (uint)

+
Percentage of online CPUs (or CPU cores, etc) which will + run a worker thread for I/O. These workers are responsible for I/O work such + as compression and checksum calculations. Fractional number of CPUs will be + rounded down. +

The default value of 75 was chosen to avoid using all CPUs which + can result in latency issues and inconsistent application performance, + especially when high compression is enabled.

+

Default value: 75.

+
+

+

zvol_inhibit_dev (uint)

+
Do not create zvol device nodes. This may slightly + improve startup time on systems with a very large number of zvols. +

Use 1 for yes and 0 for no (default).

+
+

+

zvol_major (uint)

+
Major number for zvol block devices +

Default value: 230.

+
+

+

zvol_max_discard_blocks (ulong)

+
Discard (aka TRIM) operations done on zvols will be done + in batches of this many blocks, where block size is determined by the + volblocksize property of a zvol. +

Default value: 16,384.

+
+

+

zvol_prefetch_bytes (uint)

+
When adding a zvol to the system prefetch + zvol_prefetch_bytes from the start and end of the volume. Prefetching + these regions of the volume is desirable because they are likely to be + accessed immediately by blkid(8) or by the kernel scanning for a + partition table. +

Default value: 131,072.

+
+

+

zvol_request_sync (uint)

+
When processing I/O requests for a zvol, submit them synchronously. This effectively limits the queue depth to 1 for each I/O submitter. When set to 0, requests are handled asynchronously by a thread pool. The number of requests which can be handled concurrently is controlled by zvol_threads.

Default value: 0.

+
+

+

zvol_threads (uint)

+
Max number of threads which can handle zvol I/O requests + concurrently. +

Default value: 32.

+
+

+

zvol_volmode (uint)

+
Defines zvol block device behaviour when volmode is set to default. Valid values are 1 (full), 2 (dev) and 3 (none).

Default value: 1.

+
+

+
+
+
+

ZFS I/O SCHEDULER

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/Os. The I/O scheduler determines when and in what order those operations + are issued. The I/O scheduler divides operations into five I/O classes + prioritized in the following order: sync read, sync write, async read, async + write, and scrub/resilver. Each queue defines the minimum and maximum number + of concurrent operations that may be issued to the device. In addition, the + device has an aggregate maximum, zfs_vdev_max_active. Note that the + sum of the per-queue minimums must not exceed the aggregate maximum. If the + sum of the per-queue maximums exceeds the aggregate maximum, then the number + of active I/Os may reach zfs_vdev_max_active, in which case no + further I/Os will be issued regardless of whether all per-queue minimums + have been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Further, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been hit + or if there are no operations queued for an I/O class that has not hit its + maximum. Every time an I/O is queued or an operation completes, the I/O + scheduler looks for new operations to issue.

+

In general, smaller max_active's will lead to lower latency of + synchronous operations. Larger max_active's may lead to higher overall + throughput, depending on underlying storage.

+

The ratio of the queues' max_actives determines the balance of + performance between reads, writes, and scrubs. E.g., increasing + zfs_vdev_scrub_max_active will cause the scrub or resilver to + complete more quickly, but reads and writes to have higher latency and lower + throughput.

+

All I/O classes have a fixed maximum number of outstanding + operations except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write I/Os according to + the amount of dirty data in the pool. Since both throughput and latency + typically increase with the number of concurrent operations issued to + physical devices, reducing the burstiness in the number of concurrent + operations also stabilizes the response time of operations from other -- and + in particular synchronous -- queues. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there's + more dirty data in the pool.

+

Async Writes

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points.

+
+
       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |       100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent
+Until the amount of dirty data exceeds a minimum percentage of the dirty data + allowed in the pool, the I/O scheduler will limit the number of concurrent + operations to the minimum. As that threshold is crossed, the number of + concurrent operations issued increases linearly to the maximum at the + specified maximum percentage of the dirty data allowed in the pool. +
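As a worked illustration with assumed example values (not necessarily the defaults): with zfs_vdev_async_write_active_min_dirty_percent=30, zfs_vdev_async_write_active_max_dirty_percent=60, zfs_vdev_async_write_min_active=2 and zfs_vdev_async_write_max_active=10, a pool whose dirty data sits at 45% of zfs_dirty_data_max is halfway up the slope, so the scheduler allows

active = min_active + (dirty% - min%) / (max% - min%) * (max_active - min_active)
       = 2 + (45 - 30) / (60 - 30) * (10 - 2) = 6

concurrent async write operations per device.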

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the + maximum percentage, this indicates that the rate of incoming data is greater + than the rate that the backend storage can handle. In this case, we must + further throttle incoming writes, as described in the next section.

+

+
+
+

ZFS TRANSACTION DELAY

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as:

+
+
+ min_time = zfs_delay_scale * (dirty - min) / (max - dirty) +
+ min_time is then capped at 100 milliseconds.
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be at or above + zfs_vdev_async_write_active_max_dirty_percent so that we only start + to delay after writing at full speed has failed to keep up with the incoming + write rate. The scale of the curve is defined by zfs_delay_scale. + Roughly speaking, this variable determines the amount of delay at the + midpoint of the curve.
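A worked example at the midpoint of the curve, where the amount of dirty data is halfway between the zfs_delay_min_dirty_percent threshold and zfs_dirty_data_max, so that (dirty - min) equals (max - dirty):

min_time = zfs_delay_scale * (dirty - min) / (max - dirty) = zfs_delay_scale

With the 500us midpoint used in the plots below (zfs_delay_scale = 500,000 ns), each transaction is delayed by roughly 500us, which corresponds to about 1 / 500us = 2000 IOPS.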

+

+
delay
+
+ 10ms +-------------------------------------------------------------*+ +
+ | *| +
+ 9ms + *+ +
+ | *| +
+ 8ms + *+ +
+ | * | +
+ 7ms + * + +
+ | * | +
+ 6ms + * + +
+ | * | +
+ 5ms + * + +
+ | * | +
+ 4ms + * + +
+ | * | +
+ 3ms + * + +
+ | * | +
+ 2ms + (midpoint) * + +
+ | | ** | +
+ 1ms + v *** + +
+ | zfs_delay_scale ----------> ******** | +
+ 0 +-------------------------------------*********----------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note that since the delay is added to the outstanding time + remaining on the most recent transaction, the delay is effectively the + inverse of IOPS. Here the midpoint of 500us translates to 2000 IOPS. The + shape of the curve was chosen such that small changes in the amount of + accumulated dirty data in the first 3/4 of the curve yield relatively small + differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a log scale:

+

+
delay
+100ms +-------------------------------------------------------------++
+
+ + + +
+ | | +
+ + *+ +
+ 10ms + *+ +
+ + ** + +
+ | (midpoint) ** | +
+ + | ** + +
+ 1ms + v **** + +
+ + zfs_delay_scale ----------> ***** + +
+ | **** | +
+ + **** + +
+100us + ** + +
+ + * + +
+ | * | +
+ + * + +
+ 10us + * + +
+ + + +
+ | | +
+ + + +
+ +--------------------------------------------------------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the backend storage, and then by changing the value of + zfs_delay_scale to increase the steepness of the curve.

+
+
+ + + + + +
March 31, 2021    OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/zpool-features.5.html b/man/v2.0/5/zpool-features.5.html new file mode 100644 index 000000000..6e70396eb --- /dev/null +++ b/man/v2.0/5/zpool-features.5.html @@ -0,0 +1,1181 @@ + + + + + + + zpool-features.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.5

+
+ + + + + +
ZPOOL-FEATURES(5)    File Formats Manual    ZPOOL-FEATURES(5)
+
+
+

+

zpool-features - ZFS pool feature descriptions

+
+
+

+

ZFS pool on-disk format versions are specified via + "features" which replace the old on-disk format numbers (the last + supported on-disk format number is 28). To enable a feature on a pool use + the upgrade subcommand of the zpool(8) command, or set the + feature@feature_name property to enabled.
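For example (the pool name is hypothetical), a single feature can be enabled through its property, or every feature supported by the running software can be enabled at once:

# query the current state of one feature
zpool get feature@async_destroy tank
# enable just that feature
zpool set feature@async_destroy=enabled tank
# or enable all features supported by this software version
zpool upgrade tank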

+

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

+

Since most features can be enabled independently of each other the + on-disk format of the pool is specified by the set of all features marked as + active on the pool. If the pool was created by another software + version this set may include unsupported features.

+
+

+

Every feature has a GUID of the form + com.example:feature_name. The reversed DNS name ensures that the + feature's GUID is unique across all ZFS implementations. When unsupported + features are encountered on a pool they will be identified by their GUIDs. + Refer to the documentation for the ZFS implementation that created the pool + for information about those features.

+

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its GUID which follows the ':' (e.g. + com.example:feature_name would have the short name + feature_name), however a feature's short name may differ across ZFS + implementations if following the convention would result in name + conflicts.

+
+
+

+

Features can be in one of three states:

+

active

+
This feature's on-disk format changes are in effect on + the pool. Support for this feature is required to import the pool in + read-write mode. If this feature is not read-only compatible, support is also + required to import the pool in read-only mode (see "Read-only + compatibility").
+

+

enabled

+
An administrator has marked this feature as enabled on + the pool, but the feature's on-disk format changes have not been made yet. The + pool can still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support returning to the + enabled state after becoming active. See feature-specific + documentation for details.
+

+

disabled

+
This feature's on-disk format changes have not been made + and will not be made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they have been + enabled.
+

+

+

The state of supported features is exposed through pool properties + of the form feature@short_name.

+
+
+

Read-only compatibility

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as "read-only compatible". If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly property during + import (see zpool(8) for details on importing pools).

+
+
+

+

For each unsupported feature enabled on an imported pool a pool + property named unsupported@feature_name will indicate why the import + was allowed despite the unsupported feature. Possible values for this + property are:

+

+

inactive

+
The feature is in the enabled state and therefore + the pool's on-disk format is still compatible with software that does not + support this feature.
+

+

readonly

+
The feature is read-only compatible and the pool has been + imported in read-only mode.
+

+
+
+

+

Some features depend on other features being enabled in order to + function properly. Enabling a feature will automatically enable any features + it depends on.

+
+
+
+

+

The following features are supported on this system:

+

+

allocation_classes

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:allocation_classes
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature enables support for separate allocation classes.

+

This feature becomes active when a dedicated allocation + class vdev (dedup or special) is created with the zpool create or + zpool add subcommands. With device removal, it can be returned to the + enabled state if all the dedicated allocation class vdevs are + removed.

+
+

+

async_destroy

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:async_destroy
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

Destroying a file system requires traversing all of its data in + order to return its used space to the pool. Without async_destroy the + file system is not fully removed until all space has been reclaimed. If the + destroy operation is interrupted by a reboot or power outage the next + attempt to open the pool will need to complete the destroy operation + synchronously.

+

When async_destroy is enabled the file system's data will + be reclaimed by a background process, allowing the destroy operation to + complete without traversing the entire file system. The background process + is able to resume interrupted destroys after the pool has been opened, + eliminating the need to finish interrupted destroys as part of the open + operation. The amount of space remaining to be reclaimed by the background + process is available through the freeing property.

+

This feature is only active while freeing is + non-zero.

+
+

+

bookmarks

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:bookmarks
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature enables use of the zfs bookmark + subcommand.

+

This feature is active while any bookmarks exist in the + pool. All bookmarks in the pool can be listed by running zfs list -t + bookmark -r poolname.

+
+

+

bookmark_v2

+
+ + + + + + + + + + + + + +
GUIDcom.datto:bookmark_v2
READ-ONLY COMPATIBLEno
DEPENDENCIESbookmark, extensible_dataset
+

This feature enables the creation and management of larger + bookmarks which are needed for other features in ZFS.

+

This feature becomes active when a v2 bookmark is created + and will be returned to the enabled state when all v2 bookmarks are + destroyed.

+
+

+

bookmark_written

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:bookmark_written
READ-ONLY COMPATIBLEno
DEPENDENCIESbookmark, extensible_dataset, bookmark_v2
+

This feature enables additional bookmark accounting fields, + enabling the written#<bookmark> property (space written since a + bookmark) and estimates of send stream sizes for incrementals from + bookmarks.

+

This feature becomes active when a bookmark is created and + will be returned to the enabled state when all bookmarks with these + fields are destroyed.

+
+

+

device_rebuild

+
+ + + + + + + + + + + + + +
GUIDorg.openzfs:device_rebuild
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature enables the ability for the zpool attach and + zpool replace subcommands to perform sequential reconstruction + (instead of healing reconstruction) when resilvering.

+

Sequential reconstruction resilvers a device in LBA order without immediately verifying the checksums. Once complete, a scrub is started, which then verifies the checksums. This approach allows full redundancy to be restored to the pool in the minimum amount of time. This two-phase approach will take longer than a healing resilver when the time to verify the checksums is included. However, unless there is additional pool damage, no checksum errors should be reported by the scrub. This feature is incompatible with raidz configurations.

+

This feature becomes active while a sequential resilver is + in progress, and returns to enabled when the resilver completes.

+
+

+

device_removal

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:device_removal
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature enables the zpool remove subcommand to remove + top-level vdevs, evacuating them to reduce the total size of the pool.

+

This feature becomes active when the zpool remove + subcommand is used on a top-level vdev, and will never return to being + enabled.

+
+

+

edonr

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:edonr
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the Edon-R hash algorithm for + checksum, including for nopwrite (if compression is also enabled, an + overwrite of a block whose checksum matches the data being written will be + ignored). In an abundance of caution, Edon-R requires verification when used + with dedup: zfs set dedup=edonr,verify. See zfs(8).

+

Edon-R is a very high-performance hash algorithm that was part of + the NIST SHA-3 competition. It provides extremely high hash performance + (over 350% faster than SHA-256), but was not selected because of its + unsuitability as a general purpose secure hash algorithm. This + implementation utilizes the new salted checksumming functionality in ZFS, + which means that the checksum is pre-seeded with a secret 256-bit random key + (stored on the pool) before being fed the data block to be checksummed. Thus + the produced checksums are unique to a given pool.

+

When the edonr feature is set to enabled, the administrator can turn on the edonr checksum on any dataset using zfs set checksum=edonr. See zfs(8). This feature becomes active once a checksum property has been set to edonr, and will return to being enabled once all filesystems that have ever had their checksum set to edonr are destroyed.

+

FreeBSD does not support the edonr feature.

+
+

+

embedded_data

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:embedded_data
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 bytes + or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of highly-compressible + blocks are stored in the block "pointer" itself (a misnomer in + this case, as it contains the compressed data, rather than a pointer to its + location on disk). Thus the space of the block (one sector, typically 512 + bytes or 4KB) is saved, and no additional i/o is needed to read and write + the data block.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

empty_bpobj

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:empty_bpobj
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also reduces + the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobj's) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobj's are empty. This feature + allows us to create each bpobj on-demand, thus eliminating the empty + bpobjs.

+

This feature is active while there are any filesystems, + volumes, or snapshots which were created after enabling this feature.

+
+

+

enabled_txg

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:enabled_txg
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

Once this feature is enabled ZFS records the transaction group + number in which new features are enabled. This has no user-visible impact, + but other features may depend on this feature.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

encryption

+
+ + + + + + + + + + + + + +
GUIDcom.datto:encryption
READ-ONLY COMPATIBLEno
DEPENDENCIESbookmark_v2, extensible_dataset
+

This feature enables the creation and management of natively + encrypted datasets.

+

This feature becomes active when an encrypted dataset is + created and will be returned to the enabled state when all datasets + that use this feature are destroyed.
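For example (dataset names are illustrative), creating a passphrase-encrypted dataset activates the feature and makes the new dataset an encryption root:
# zfs create -o encryption=on -o keyformat=passphrase tank/secret
# zfs get encryption,keystatus,encryptionroot tank/secret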

+
+

+

extensible_dataset

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:extensible_dataset
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first dependent + feature uses it, and will be returned to the enabled state when all + datasets that use this feature are destroyed.

+
+

+

filesystem_limits

+
+ + + + + + + + + + + + + +
GUIDcom.joyent:filesystem_limits
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature enables filesystem and snapshot limits. These limits + can be used to control how many filesystems and/or snapshots can be created + at the point in the tree on which the limits are set.

+

This feature is active once either of the limit properties + has been set on a dataset. Once activated the feature is never + deactivated.
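A brief sketch (names and limit values are illustrative) of setting the limits and inspecting usage via the read-only count properties:
# zfs set filesystem_limit=50 tank/projects
# zfs set snapshot_limit=100 tank/projects
# zfs get filesystem_count,snapshot_count tank/projects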

+
+

+

hole_birth

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:hole_birth
READ-ONLY COMPATIBLEno
DEPENDENCIESenabled_txg
+

This feature has (or had) bugs, the result of which is that, if you do a zfs send -i (or -R, since it uses -i) from an affected dataset, the receiver will not see any checksum or other errors, but the resulting destination snapshot will not match the source. Its use by zfs send -i has been disabled by default. See the send_holes_without_birth_time module parameter in zfs-module-parameters(5).
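On Linux the module parameter can be inspected and changed at runtime, for example (the default of 1, which ignores hole birth times and sends all holes, is assumed here; verify on your system):
# cat /sys/module/zfs/parameters/send_holes_without_birth_time
# echo 0 > /sys/module/zfs/parameters/send_holes_without_birth_time
Setting it to 0 re-enables use of the hole_birth metadata and should only be done if that metadata is trusted on the sending pool.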

+

This feature improves performance of incremental sends (zfs + send -i) and receives for objects with many holes. The most common case + of hole-filled objects is zvols.

+

An incremental send stream from snapshot A to snapshot + B contains information about every block that changed between + A and B. Blocks which did not change between those snapshots + can be identified and omitted from the stream using a piece of metadata + called the 'block birth time', but birth times are not recorded for holes + (blocks filled only with zeroes). Since holes created after A cannot + be distinguished from holes created before A, information about every + hole in the entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. However, + when incrementally replicating filesystems or zvols with many holes (for + example a zvol formatted with another filesystem) a lot of time will be + spent sending and receiving unnecessary information about holes that already + exist on the receiving side.

+

Once the hole_birth feature has been enabled the block + birth times of all new holes will be recorded. Incremental sends between + snapshots created after this feature is enabled will use this new metadata + to avoid sending information about holes that already exist on the receiving + side.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

large_blocks

+
+ + + + + + + + + + + + + +
GUIDorg.open-zfs:large_blocks
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

The large_blocks feature allows the record size on a dataset to be set larger than 128KB.

+

This feature becomes active once a dataset contains a file + with a block size larger than 128KB, and will return to being enabled + once all filesystems that have ever had their recordsize larger than 128KB + are destroyed.
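For example (dataset name illustrative), raising the record size above 128KB activates the feature as soon as a large block is written:
# zfs set recordsize=1M tank/media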

+
+

+

large_dnode

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:large_dnode
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

The large_dnode feature allows the size of dnodes in a + dataset to be set larger than 512B.

+

This feature becomes active once a dataset contains an + object with a dnode larger than 512B, which occurs as a result of setting + the dnodesize dataset property to a value other than legacy. + The feature will return to being enabled once all filesystems that + have ever contained a dnode larger than 512B are destroyed. Large dnodes + allow more data to be stored in the bonus buffer, thus potentially improving + performance by avoiding the use of spill blocks.
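A minimal sketch (dataset name illustrative); auto lets ZFS choose a dnode size per object, and explicit sizes such as 1k through 16k may also be set:
# zfs set dnodesize=auto tank/fs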

+
+

+

livelist

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:livelist
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+This feature allows clones to be deleted faster than the traditional method when a large number of random/sparse writes have been made to the clone. All blocks allocated and freed after a clone is created are tracked by the clone's livelist, which is referenced during the deletion of the clone. The feature is activated when a clone is created and remains active until all clones have been destroyed.
+

+

log_spacemap

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:log_spacemap
READ-ONLY COMPATIBLEyes
DEPENDENCIEScom.delphix:spacemap_v2
+

This feature improves performance for heavily-fragmented pools, + especially when workloads are heavy in random-writes. It does so by logging + all the metaslab changes on a single spacemap every TXG instead of + scattering multiple writes to all the metaslab spacemaps.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

lz4_compress

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:lz4_compress
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

lz4 is a high-performance real-time compression algorithm + that features significantly faster compression and decompression as well as + a higher compression ratio than the older lzjb compression. + Typically, lz4 compression is approximately 50% faster on + compressible data and 200% faster on incompressible data than lzjb. + It is also approximately 80% faster on decompression, while giving + approximately 10% better compression ratio.

+

When the lz4_compress feature is set to enabled, the administrator can turn on lz4 compression on any dataset on the pool using the zfs(8) command. Please note that doing so will immediately activate the lz4_compress feature on the underlying pool. Also, all newly written metadata will be compressed with the lz4 algorithm. Since this feature is not read-only compatible, this operation will render the pool unimportable on systems without support for the lz4_compress feature.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

multi_vdev_crash_dump

+
+ + + + + + + + + + + + + +
GUIDcom.joyent:multi_vdev_crash_dump
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored or + raidz configuration.

+

When the multi_vdev_crash_dump feature is set to + enabled, the administrator can use the dumpadm(1M) command to + configure a dump device on a pool comprised of multiple vdevs.

+

Under FreeBSD and Linux this feature is registered for + compatibility but not used. New pools created under FreeBSD and Linux will + have the feature enabled but will never transition to + active. This functionality is not required in order to support + crash dumps under FreeBSD and Linux. Existing pools where this feature is + active can be imported.

+
+

+

obsolete_counts

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:obsolete_counts
READ-ONLY COMPATIBLEyes
DEPENDENCIESdevice_removal
+

This feature is an enhancement of device_removal, which will over + time reduce the memory used to track removed devices. When indirect blocks + are freed or remapped, we note that their part of the indirect mapping is + "obsolete", i.e. no longer needed.

+

This feature becomes active when the zpool remove + subcommand is used on a top-level vdev, and will never return to being + enabled.

+
+

+

project_quota

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:project_quota
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature allows administrators to account for space and object usage against a project identifier (ID).

+

The project ID is a new object-based attribute. When upgrading an existing filesystem, objects without a project ID attribute are assigned a project ID of zero. After this feature is enabled, newly created objects inherit their parent directory's project ID if the parent's inherit flag is set (via chattr +/-P or zfs project [-s|-C]). Otherwise, the new object's project ID is set to zero. An object's project ID can be changed at any time by the owner (or a privileged user) via chattr -p $prjid or zfs project -p $prjid.

+

This feature will become active as soon as it is enabled and will never return to being enabled. Each filesystem will be upgraded automatically when remounted or when a new file is created under that filesystem. The upgrade can also be triggered on filesystems via `zfs set version=current <pool/fs>`. The upgrade process runs in the background and may take a while to complete for filesystems containing a large number of files.
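An illustrative sketch (project ID, paths, and limits are hypothetical) of tagging a directory tree with a project ID and accounting against it:
# zfs project -p 42 -r -s /tank/fs/projA
# zfs set projectquota@42=10G tank/fs
# zfs projectspace tank/fs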

+
+

+

redaction_bookmarks

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:redaction_bookmarks
READ-ONLY COMPATIBLEno
DEPENDENCIESbookmarks, extensible_dataset
+

This feature enables the use of the redacted zfs send. Redacted + zfs send creates redaction bookmarks, which store the list of blocks + redacted by the send that created them. For more information about redacted + send, see zfs(8).
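The rough shape of the workflow is sketched below with hypothetical names; consult zfs(8) for the authoritative zfs redact and zfs send --redact syntax:
# zfs redact tank/data@snap1 book1 tank/data/clone@redact1
# zfs send --redact book1 tank/data@snap1 > /backup/redacted.zstream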

+

+
+

+

redacted_datasets

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:redacted_datasets
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the receiving of redacted zfs send streams. + Redacted zfs send streams create redacted datasets when received. These + datasets are missing some of their blocks, and so cannot be safely mounted, + and their contents cannot be safely read. For more information about + redacted receive, see zfs(8).

+
+

+

resilver_defer

+
+ + + + + + + + + + + + + +
GUIDcom.datto:resilver_defer
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature allows zfs to postpone new resilvers if an existing + one is already in progress. Without this feature, any new resilvers will + cause the currently running one to be immediately restarted from the + beginning.

+

This feature becomes active once a resilver has been + deferred, and returns to being enabled when the deferred resilver + begins.

+
+

+

sha512

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:sha512
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the SHA-512/256 truncated hash algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit arithmetic of SHA-512 provides an approximate 50% performance boost over SHA-256 on 64-bit hardware and is thus a good minimum-change replacement candidate for systems where hash performance is important but which cannot, for whatever reason, utilize the faster skein and edonr algorithms.

+

When the sha512 feature is set to enabled, the + administrator can turn on the sha512 checksum on any dataset using + zfs set checksum=sha512. See zfs(8). This feature becomes + active once a checksum property has been set to sha512, + and will return to being enabled once all filesystems that have ever + had their checksum set to sha512 are destroyed.

+
+

+

skein

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:skein
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm that + was a finalist in the NIST SHA-3 competition. It provides a very high + security margin and high performance on 64-bit hardware (80% faster than + SHA-256). This implementation also utilizes the new salted checksumming + functionality in ZFS, which means that the checksum is pre-seeded with a + secret 256-bit random key (stored on the pool) before being fed the data + block to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the skein feature is set to enabled, the + administrator can turn on the skein checksum on any dataset using + zfs set checksum=skein. See zfs(8). This feature becomes + active once a checksum property has been set to skein, + and will return to being enabled once all filesystems that have ever + had their checksum set to skein are destroyed.

+
+

+

spacemap_histogram

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:spacemap_histogram
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, ZFS will set this feature to active when a new space map object is created or an existing space map is upgraded to the new format. Once the feature is active, it will remain in that state until the pool is destroyed.

+
+

+

spacemap_v2

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:spacemap_v2
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature enables the use of the new space map encoding which + consists of two words (instead of one) whenever it is advantageous. The new + encoding allows space maps to represent large regions of space more + efficiently on-disk while also increasing their maximum addressable + offset.

+

This feature becomes active once it is enabled, and never returns to being enabled.

+
+

+

userobj_accounting

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:userobj_accounting
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature allows administrators to account for object usage by user and group.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled. Each filesystem will be upgraded + automatically when remounted, or when new files are created under that + filesystem. The upgrade can also be started manually on filesystems by + running `zfs set version=current <pool/fs>`. The upgrade process runs + in the background and may take a while to complete for filesystems + containing a large number of files.
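For example (pool and dataset names illustrative), per-user object counts can be reviewed alongside space usage once the feature is active:
# zfs userspace -o type,name,used,objused tank/fs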

+
+

+

zpool_checkpoint

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:zpool_checkpoint
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature enables the zpool checkpoint subcommand that can checkpoint the state of the pool at the time it was issued and later rewind to it or discard it.

+

This feature becomes active when the zpool checkpoint subcommand is used to checkpoint the pool. The feature will only return to being enabled when the pool is rewound or the checkpoint has been discarded.
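An illustrative sketch of the checkpoint workflow (pool name hypothetical): take a checkpoint before a risky operation, then either discard it or rewind the whole pool to it at import time:
# zpool checkpoint tank
# zpool checkpoint -d tank
# zpool export tank
# zpool import --rewind-to-checkpoint tank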

+
+

+

zstd_compress

+
+ + + + + + + + + + + + + +
GUIDorg.freebsd:zstd_compress
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

zstd is a high-performance compression algorithm that features a combination of high compression ratios and high speed. Compared to gzip, zstd offers slightly better compression at much higher speeds. Compared to lz4, zstd offers much better compression while being only modestly slower. Typically, zstd compression speed ranges from 250 to 500 MB/s per thread and decompression speed is over 1 GB/s per thread.

+

When the zstd feature is set to enabled, the + administrator can turn on zstd compression of any dataset by running + `zfs set compress=zstd <pool/fs>`.

+

This feature becomes active once a compress property + has been set to zstd, and will return to being enabled once + all filesystems that have ever had their compress property set to + zstd are destroyed.
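For example (dataset names illustrative), the default level or an explicit level can be selected; zstd-fast variants trade compression ratio for even higher speed:
# zfs set compression=zstd tank/fs
# zfs set compression=zstd-19 tank/archive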

+
+

+
+
+

+

zpool(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/fsck.zfs.8.html b/man/v2.0/8/fsck.zfs.8.html new file mode 100644 index 000000000..a6567189a --- /dev/null +++ b/man/v2.0/8/fsck.zfs.8.html @@ -0,0 +1,290 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
FSCK.ZFS(8)System Manager's ManualFSCK.ZFS(8)
+
+

+
+

+

fsck.zfs - Dummy ZFS filesystem checker.

+

+
+
+

+

fsck.zfs [options] + <dataset>

+

+
+
+

+

fsck.zfs is a shell stub that does nothing and always + returns true. It is installed by ZoL because some Linux distributions expect + a fsck helper for all filesystems.

+

+
+
+

+

All options and the dataset are ignored.

+

+
+
+

+

ZFS datasets are checked by running zpool scrub on the + containing pool. An individual ZFS dataset is never checked independently of + its pool, which is unlike a regular filesystem.
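For example (pool name illustrative), checking is therefore done at the pool level:
# zpool scrub tank
# zpool status -v tank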

+

+
+
+

+

On some systems, if the dataset is in a degraded pool, then + it might be appropriate for fsck.zfs to return exit code 4 to + indicate an uncorrected filesystem error.

+

Similarly, if the dataset is in a faulted pool and has a + legacy /etc/fstab record, then fsck.zfs should return exit code 8 to + indicate a fatal operational error.

+

+
+
+

+

Darik Horn <dajhorn@vanadac.com>.

+

+
+
+

+

fsck(8), fstab(5), zpool-scrub(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/index.html b/man/v2.0/8/index.html new file mode 100644 index 000000000..8dd51c0f2 --- /dev/null +++ b/man/v2.0/8/index.html @@ -0,0 +1,311 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/mount.zfs.8.html b/man/v2.0/8/mount.zfs.8.html new file mode 100644 index 000000000..dfab77b82 --- /dev/null +++ b/man/v2.0/8/mount.zfs.8.html @@ -0,0 +1,339 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
MOUNT.ZFS(8)System Manager's ManualMOUNT.ZFS(8)
+
+

+
+

+

mount.zfs - mount a ZFS filesystem

+
+
+

+

mount.zfs [-sfnvh] [-o options] dataset + mountpoint

+

+
+
+

+

mount.zfs is part of the zfsutils package for Linux. It is + a helper program that is usually invoked by the mount(8) or + zfs(8) commands to mount a ZFS dataset.

+

All options are handled according to the FILESYSTEM + INDEPENDENT MOUNT OPTIONS section in the mount(8) manual, except for + those described below.

+

The dataset parameter is a ZFS filesystem name, as output + by the zfs list -H -o name command. This parameter never has a + leading slash character and is not a device name.

+

The mountpoint parameter is the path name of a + directory.
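A minimal sketch (names illustrative); the helper is normally invoked indirectly by mount(8), and datasets with mountpoint=legacy are typically mounted this way:
# mount -t zfs tank/home /mnt/home
# mount.zfs -v tank/home /mnt/home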

+

+

+
+
+

+
+
+
Ignore bad or sloppy mount options.
+
+
Do a fake mount; do not perform the mount operation.
+
+
Do not update the /etc/mtab file.
+
+
Increase verbosity.
+
+
Print the usage message.
+
+
This flag sets the SELinux context for all files in the filesystem under + that mountpoint.
+
+
This flag sets the SELinux context for the filesystem being mounted.
+
+
This flag sets the SELinux context for unlabeled files.
+
+
This flag sets the SELinux context for the root inode of the + filesystem.
+
+
This private flag indicates that the dataset has an entry in the + /etc/fstab file.
+
+
This private flag disables extended attributes.
+
+
This private flag enables directory-based extended attributes and, if + appropriate, adds a ZFS context to the selinux system policy.
+
+
This private flag enables system attribute-based extended attributes and, if appropriate, adds a ZFS context to the selinux system policy.
+
+
Equivalent to xattr.
+
+
This private flag indicates that mount(8) is being called by the + zfs(8) command. +

+
+
+
+
+

+

ZFS conventionally requires that the mountpoint be an empty + directory, but the Linux implementation inconsistently enforces the + requirement.

+

The mount.zfs helper does not mount the contents of + zvols.

+

+
+
+

+
+
/etc/fstab
+
The static filesystem table.
+
/etc/mtab
+
The mounted filesystem table.
+
+
+
+

+

The primary author of mount.zfs is Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

fstab(5), mount(8), zfs(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/vdev_id.8.html b/man/v2.0/8/vdev_id.8.html new file mode 100644 index 000000000..0cdd1e7d4 --- /dev/null +++ b/man/v2.0/8/vdev_id.8.html @@ -0,0 +1,322 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
VDEV_ID(8)System Manager's ManualVDEV_ID(8)
+
+
+

+

vdev_idgenerate + user-friendly names for JBOD disks

+
+
+

+ + + + + +
vdev_id-d dev + -c config_file + -g + sas_direct|sas_switch|scsi + -m -p + phys_per_port
+
+
+

+

vdev_id is a udev helper which parses vdev_id.conf(5) to map a physical path in a storage topology to a channel name. The channel name is combined with a disk enclosure slot number to create an alias that reflects the physical location of the drive. This is particularly helpful when it comes to tasks like replacing failed drives. Slot numbers may also be remapped in case the default numbering is unsatisfactory. The drive aliases will be created as symbolic links in /dev/disk/by-vdev.

+

The currently supported topologies are + sas_direct, sas_switch, and + scsi. A multipath mode is supported in which dm-mpath + devices are handled by examining the first running component disk as + reported by the driver. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating aliases based on existing udev links in the /dev hierarchy using the alias configuration file keyword. See vdev_id.conf(5) for details.
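A sketch of a sas_direct configuration (PCI addresses and channel names are hypothetical); see vdev_id.conf(5) for the full syntax:
# /etc/zfs/vdev_id.conf
multipath     no
topology      sas_direct
phys_per_port 4
#       PCI_SLOT  HBA PORT  CHANNEL NAME
channel 85:00.0   1         A
channel 85:00.0   0         B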

+
+
+

+
+
+ device
+
The device node to classify, like /dev/sda.
+
+ config_file
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ sas_direct and scsi
+
channels are uniquely identified by a PCI slot and HBA port number
+
+ sas_switch
channels are uniquely identified by a SAS switch port number
+
+
+
+
Only handle dm-multipath devices. If specified, examine the first running + component disk of a dm-multipath device as provided by the driver to + determine the physical path.
+
+ phys_per_port
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS switch port. vdev_id internally uses this value to determine which HBA or switch port a device is connected to. The default is 4.
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zdb.8.html b/man/v2.0/8/zdb.8.html new file mode 100644 index 000000000..c68d3ee8c --- /dev/null +++ b/man/v2.0/8/zdb.8.html @@ -0,0 +1,697 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's Manual (smm)ZDB(8)
+
+
+

+

zdbdisplay + zpool debugging and consistency information

+
+
+

+ + + + + +
zdb[-AbcdDFGhikLMPsvXYy] + [-e [-V] + [-p path ...]] + [-I inflight I/Os] + [-o + var=value]... + [-t txg] + [-U cache] + [-x dumpdir] + [poolname[/dataset | objset + ID]] [object | range + ...]
+
+ + + + + +
zdb[-AdiPv] [-e + [-V] [-p + path ...]] [-U + cache] poolname[/dataset | + objset ID] [object | + range ...]
+
+ + + + + +
zdb-C [-A] + [-U cache]
+
+ + + + + +
zdb-E [-A] + word0:word1:...:word15
+
+ + + + + +
zdb-l [-Aqu] + device
+
+ + + + + +
zdb-m [-AFLPXY] + [-e [-V] + [-p path ...]] + [-t txg] + [-U cache] + poolname [vdev + [metaslab ...]]
+
+ + + + + +
zdb-O dataset path
+
+ + + + + +
zdb-R [-A] + [-e [-V] + [-p path ...]] + [-U cache] + poolname + vdev:offset:[<lsize>/]<psize>[:flags]
+
+ + + + + +
zdb-S [-AP] + [-e [-V] + [-p path ...]] + [-U cache] + poolname
+
+
+

+

The zdb utility displays information about a ZFS pool useful for debugging and performs some amount of consistency checking. It is not a general purpose tool and options (and facilities) may change. It is not a fsck(8) utility.

+

The output of this command in general reflects the on-disk structure of a ZFS pool, and is inherently unstable. The precise output of most invocations is not documented; a knowledge of ZFS internals is assumed.

+

If the dataset argument does not contain any "/" or "@" characters, it is interpreted as a pool name. The root dataset can be specified as pool/ (pool name followed by a slash).

+

When operating on an imported and active pool it is possible, + though unlikely, that zdb may interpret inconsistent pool data and behave + erratically.

+
+
+

+

Display options:

+
+
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs or object ID ranges are specified, display + information about those specific objects or ranges only.

+

An object ID range is specified in terms of a colon-separated + tuple of the form + ⟨start⟩:⟨end⟩[:⟨flags⟩]. The + fields start and end are + integer object identifiers that denote the upper and lower bounds of the + range. An end value of -1 specifies a range with + no upper bound. The flags field optionally + specifies a set of flags, described below, that control which object + types are dumped. By default, all object types are dumped. A minus sign + (-) negates the effect of the flag that follows it and has no effect + unless preceded by the A flag. For example, the + range 0:-1:A-d will dump all object types except for directories.

+

+
+
+
Dump all objects (this is the default)
+
+
Dump ZFS directory objects
+
+
Dump ZFS plain file objects
+
+
Dump SPA space map objects
+
+
Dump ZAP objects
+
-
+
Negate the effect of next flag
+
+
+
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + * compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
+ word0:word1:...:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
+
Examine the checkpointed state of the pool. Note, the on disk format of + the pool is not reverted to the checkpointed state.
+
+ device
+
Read the vdev labels and L2ARC header from the specified device. zdb -l will return 0 if a valid label was found, 1 if an error occurred, and 2 if no valid labels were found. The presence of an L2ARC header is indicated by a specific sequence (L2ARC_DEV_HDR_MAGIC). If there is an accounting error in the size or the number of L2ARC log blocks, zdb -l will return 1. Each unique configuration is displayed only once.
+
+ device
+
In addition display label space usage stats. If a valid L2ARC header was + found also display the properties of log blocks used for restoring L2ARC + contents (persistent L2ARC).
+
+ device
+
Display every configuration, unique or not. If a valid L2ARC header was + found also display the properties of log entries in log blocks used for + restoring L2ARC contents (persistent L2ARC). +

If the -q option is also specified, + don't print the labels or the L2ARC header.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
+
Disable leak detection and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
+
Display the offset, spacemap, free space of each metaslab, all the log + spacemaps and their obsolete entry statistics.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
+ poolname + vdev:offset:[<lsize>/]<psize>[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) + offset (the offset within the vdev) + size (the physical size, or logical size / + physical size) of the block to read and, optionally, + flags (a set of flags, described below).

+

+
+
+ offset
+
Print block pointer at hex offset
+
+
Calculate and display checksums
+
+
Decompress the block. Set environment variable + ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
Verbose output for guessing compression algorithm
+
+
+
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
+
Display the current uberblock.
+
+

Other options:

+
+
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
+ [-p path ...]
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
+ dumpdir
+
All blocks accessed will be copied to files in the specified directory. The blocks will be placed in sparse files whose name is the same as that of the file or device read. zdb can then be run on the generated files. Note that the -bbc flags are sufficient to access (and thus copy) all metadata on the pool.
+
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
+ inflight I/Os
+
Limit the number of outstanding checksum I/Os to the specified value. The + default value is 200. This option affects the performance of the + -c option.
+
+ var=value ...
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
+
Print numbers in an unscaled form more amenable to parsing, e.g. 1000000 rather than 1M.
+
+ transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
+ cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
+
Enable verbosity. Specify multiple times for increased verbosity.
+
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
+
Attempt all possible combinations when reconstructing indirect split + blocks. This flag disables the individual I/O deadman timer in order to + allow as much time as required for the attempted reconstruction.
+
+
Perform validation for livelists that are being deleted. Scans through the + livelist and metaslabs, checking for duplicate entries and compares the + two, checking for potential double frees. If it encounters issues, + warnings will be printed, but the command will not necessarily fail.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+
Display the configuration of imported pool + rpool
+
+
+
# zdb -C rpool
+
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ ...
+
+
+
Display basic dataset information about + rpool
+
+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ ...
+
+
+
Display basic information about object 0 in + rpool/export/home
+
+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
Display the predicted effect of enabling deduplication on + rpool
+
+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ ...
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
April 14, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zed.8.html b/man/v2.0/8/zed.8.html new file mode 100644 index 000000000..d468b9298 --- /dev/null +++ b/man/v2.0/8/zed.8.html @@ -0,0 +1,456 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Manager's ManualZED(8)
+
+

+
+

+

ZED - ZFS Event Daemon

+

+
+
+

+

zed [-d zedletdir] [-f] [-F] + [-h] [-I] [-L] [-M] [-p pidfile] + [-P path] [-s statefile] [-v] [-V] + [-Z]

+

+
+
+

+

ZED (ZFS Event Daemon) monitors events generated by the ZFS + kernel module. When a zevent (ZFS Event) is posted, ZED will run any + ZEDLETs (ZFS Event Daemon Linkage for Executable Tasks) that have been + enabled for the corresponding zevent class.

+

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Run the daemon in the foreground.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Request that the daemon idle rather than exit when the kernel modules are + not loaded. Processing of events will start, or resume, when the kernel + modules are (re)loaded. Under Linux the kernel modules cannot be unloaded + while the daemon is running.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+
Read the enabled ZEDLETs from the specified directory.
+
+
Write the daemon's process ID to the specified file.
+
+
Custom $PATH for zedlets to use. Normally zedlets run in a locked-down + environment, with hardcoded paths to the ZFS commands ($ZFS, $ZPOOL, $ZED, + ...), and a hardcoded $PATH. This is done for security reasons. However, + the ZFS test suite uses a custom PATH for its ZFS commands, and passes it + to zed with -P. In short, -P is only to be used by the ZFS test suite; + never use it in production!
+
+
Write the daemon's state to the specified file.
+
+
+
+

+

A zevent consists of a list of nvpairs (name/value pairs). Each zevent contains an EID (Event IDentifier) that uniquely identifies it throughout the lifetime of the loaded ZFS kernel module; this EID is a monotonically increasing integer that resets to 1 each time the kernel module is loaded. Each zevent also contains a class string that identifies the type of event. For brevity, a subclass string is defined that omits the leading components of the class string. Additional nvpairs exist to provide event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the "zpool + events -v" command.

+

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory. These can be symlinked or copied from the + installed-zedlets directory; symlinks allow for automatic updates + from the installed ZEDLETs, whereas copies preserve local modifications. As + a security measure, ZEDLETs must be owned by root. They must have execute + permissions for the user, but they must not have write permissions for group + or other. Dotfiles are ignored.

+

ZEDLETs are named after the zevent class for which they should be + invoked. In particular, a ZEDLET will be invoked for a given zevent if + either its class or subclass string is a prefix of its filename (and is + followed by a non-alphabetic character). As a special case, the prefix + "all" matches all zevents. Multiple ZEDLETs may be invoked for a + given zevent.

+

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + "ZED_".

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner: 1) it is prefixed with "ZEVENT_", 2) it is converted to + uppercase, and 3) each non-alphanumeric character is converted to an + underscore. Some additional environment variables have been defined to + present certain nvpair values in a more convenient form. An incomplete list + of zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as + "seconds nanoseconds" since the Epoch.
+
+
The seconds component of ZEVENT_TIME.
+
+
The nanoseconds component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The ZFS alias (name-version-release) string used to build the + daemon.
+
+
The ZFS version used to build the daemon.
+
+
The ZFS release used to build the daemon.
+
+

ZEDLETs may need to call other ZFS commands. The installation + paths of the following executables are defined: ZDB, ZED, + ZFS, ZINJECT, and ZPOOL. These variables can be + overridden in the rc file if needed.
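As an illustrative sketch (the filename and log path are hypothetical), a minimal ZEDLET that runs for every zevent could be installed into the enabled-zedlets directory, owned by root and executable:
#!/bin/sh
# all-logger.sh -- hypothetical ZEDLET: log the EID and class of every zevent
echo "$(date) zevent ${ZEVENT_EID}: ${ZEVENT_CLASS}" >> /var/log/zed-custom.log
exit 0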

+

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@zfsexecdir@/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state. +

+
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
+
Terminate the daemon. +

+
+
+
+
+

+

ZED requires root privileges.

+

+
+
+

+

Events are processed synchronously by a single thread. This can + delay the processing of simultaneous zevents.

+

ZEDLETs are killed after a maximum of ten seconds. This can lead + to a violation of a ZEDLET's atomicity assumptions.

+

The ownership and permissions of the enabled-zedlets + directory (along with all parent directories) are not checked. If any of + these directories are improperly owned or permissioned, an unprivileged user + could insert a ZEDLET to be executed as root. The requirement that ZEDLETs + be owned by root mitigates this to some extent.

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Some zevent nvpair types are not handled. These are denoted by + zevent environment variables having a "_NOT_IMPLEMENTED_" + value.

+

Internationalization support via gettext has not been added.

+

The configuration file is not yet implemented.

+

The diagnosis engine is not yet implemented.

+

+
+
+

+

ZED (ZFS Event Daemon) is distributed under the terms of + the Common Development and Distribution License Version 1.0 (CDDL-1.0).

+

Developed at Lawrence Livermore National Laboratory + (LLNL-CODE-403049).

+

+
+
+

+

zfs(8), zpool(8), zpool-events(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-allow.8.html b/man/v2.0/8/zfs-allow.8.html new file mode 100644 index 000000000..97982903f --- /dev/null +++ b/man/v2.0/8/zfs-allow.8.html @@ -0,0 +1,540 @@ + + + + + + + zfs-allow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-allow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + Delegates ZFS administration permission for the file + systems to non-privileged users.

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]...
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]...
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]...
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]...
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+
+
NAME             TYPE           NOTES
+allow            subcommand     Must also have the permission that is
+                                being allowed
+clone            subcommand     Must also have the 'create' ability and
+                                'mount' ability in the origin file system
+create           subcommand     Must also have the 'mount' ability.
+                                Must also have the 'refreservation' ability to
+                                create a non-sparse volume.
+destroy          subcommand     Must also have the 'mount' ability
+diff             subcommand     Allows lookup of paths within a dataset
+                                given an object number, and the ability
+                                to create snapshots necessary to
+                                'zfs diff'.
+hold             subcommand     Allows adding a user hold to a snapshot
+load-key         subcommand     Allows loading and unloading of encryption key
+                                (see 'zfs load-key' and 'zfs unload-key').
+change-key       subcommand     Allows changing an encryption key via
+                                'zfs change-key'.
+mount            subcommand     Allows mount/umount of ZFS datasets
+promote          subcommand     Must also have the 'mount' and 'promote'
+                                ability in the origin file system
+receive          subcommand     Must also have the 'mount' and 'create'
+                                ability
+release          subcommand     Allows releasing a user hold which might
+                                destroy the snapshot
+rename           subcommand     Must also have the 'mount' and 'create'
+                                ability in the new parent
+rollback         subcommand     Must also have the 'mount' ability
+send             subcommand
+share            subcommand     Allows sharing file systems over NFS
+                                or SMB protocols
+snapshot         subcommand     Must also have the 'mount' ability
+
+groupquota       other          Allows accessing any groupquota@...
+                                property
+groupused        other          Allows reading any groupused@... property
+userprop         other          Allows changing any user property
+userquota        other          Allows accessing any userquota@...
+                                property
+userused         other          Allows reading any userused@... property
+projectobjquota  other          Allows accessing any projectobjquota@...
+                                property
+projectquota     other          Allows accessing any projectquota@... property
+projectobjused   other          Allows reading any projectobjused@... property
+projectused      other          Allows reading any projectused@... property
+
+aclinherit       property
+acltype          property
+atime            property
+canmount         property
+casesensitivity  property
+checksum         property
+compression      property
+copies           property
+devices          property
+exec             property
+filesystem_limit property
+mountpoint       property
+nbmand           property
+normalization    property
+primarycache     property
+quota            property
+readonly         property
+recordsize       property
+refquota         property
+refreservation   property
+reservation      property
+secondarycache   property
+setuid           property
+sharenfs         property
+sharesmb         property
+snapdir          property
+snapshot_limit   property
+utf8only         property
+version          property
+volblocksize     property
+volsize          property
+vscan            property
+xattr            property
+zoned            property
+
+
+
zfs allow + -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
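An illustrative sketch (set, group, and dataset names are hypothetical) of defining a set, delegating it to a group, and reviewing the result:
# zfs allow -s @backupset send,snapshot,hold tank/data
# zfs allow -g staff @backupset tank/data
# zfs allow tank/data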
+
zfs unallow + [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect; for example, a permission granted by an ancestor remains in effect. If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-bookmark.8.html b/man/v2.0/8/zfs-bookmark.8.html new file mode 100644 index 000000000..215d9fe7a --- /dev/null +++ b/man/v2.0/8/zfs-bookmark.8.html @@ -0,0 +1,274 @@ + + + + + + + zfs-bookmark.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-bookmark.8

+
+ + + + + +
ZFS-BOOKMARK(8)System Manager's Manual (smm)ZFS-BOOKMARK(8)
+
+
+

+

zfs-bookmark — + Creates a bookmark of the given snapshot.

+
+
+

+
+
+

+
+
zfs bookmark + snapshot|bookmark + newbookmark
+
Creates a new bookmark of the given snapshot or bookmark. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs-send(8) command. +

When creating a bookmark from an existing redaction bookmark, the resulting bookmark is not a redaction bookmark.

+

This feature must be enabled to be used. See zpool-features(5) for details on ZFS feature flags and the bookmarks feature.
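For example (names illustrative), a bookmark can stand in for a deleted snapshot as the incremental source of a later send:
# zfs snapshot tank/data@monday
# zfs bookmark tank/data@monday tank/data#monday
# zfs destroy tank/data@monday
# zfs snapshot tank/data@tuesday
# zfs send -i tank/data#monday tank/data@tuesday | zfs receive backup/data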

+
+
+
+
+

+

zfs-destroy(8), zfs-send(8), + zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-change-key.8.html b/man/v2.0/8/zfs-change-key.8.html new file mode 100644 index 000000000..a2cbea4c5 --- /dev/null +++ b/man/v2.0/8/zfs-change-key.8.html @@ -0,0 +1,473 @@ + + + + + + + zfs-change-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-change-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + Load, unload, or change the encryption key used to access a + dataset.

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a | filesystem
+
+ + + + + +
zfsunload-key [-r] + -a | filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a | filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt, the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. This will cause zfs to + simply check that the provided key is correct. This command may be run + even if the key is already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a | filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded + into ZFS. This command may also be used to change the + keylocation, keyformat, and + pbkdf2iters properties as needed. If the dataset was not + previously an encryption root it will become one. Alternatively, the + -i flag may be provided to cause an encryption + root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim --secure if + supported by your hardware, otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to "zfs + load-key filesystem; + zfs change-key + filesystem"
+
+ property=value
+
Allows the user to set encryption key properties ( + keyformat, keylocation, and + pbkdf2iters ) while changing the key. This is the + only way to alter keyformat and + pbkdf2iters after the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + zvol data, file attributes, ACLs, permission bits, directory listings, FUID + mappings, and + + / + + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the zfs + load-key subcommand for more info on key + loading).

+

Creating an encrypted dataset requires + specifying the encryption and keyformat + properties at creation time, along with an optional + keylocation and pbkdf2iters. After + entering an encryption key, the created dataset will become an encryption + root. Any descendant datasets will inherit their encryption key from the + encryption root by default, meaning that loading, unloading, or changing the + key for the encryption root will implicitly do the same for all inheriting + datasets. If this inheritance is not desired, simply supply a + keyformat when creating the child dataset or use + zfs change-key to break an + existing relationship, creating a new encryption root on the child. Note + that the child's keyformat may match that of the parent + while still creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, and + pbkdf2iters) do not inherit like other ZFS properties and + instead use the value determined by their encryption root. Encryption root + inheritance can be tracked via the read-only + + property.
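As an illustration only (the pool and dataset names here are hypothetical), an encryption root and a child dataset that becomes its own encryption root could be created as follows:

zfs create -o encryption=on -o keyformat=passphrase tank/secure
zfs create -o keyformat=passphrase tank/secure/project

Because the second command supplies its own keyformat, tank/secure/project becomes a separate encryption root rather than inheriting the key of tank/secure.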

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only dedup against themselves, their + snapshots, and their clones.

+

There are a few limitations on encrypted + datasets. Encrypted data cannot be embedded via the + + feature. Encrypted datasets may not have + = + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost per block written.

+
+
+
+

+

zfs-create(8), zfs-set(8), + zfsprops(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-clone.8.html b/man/v2.0/8/zfs-clone.8.html new file mode 100644 index 000000000..d53e261fb --- /dev/null +++ b/man/v2.0/8/zfs-clone.8.html @@ -0,0 +1,290 @@ + + + + + + + zfs-clone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-clone.8

+
+ + + + + +
ZFS-CLONE(8)System Manager's ManualZFS-CLONE(8)
+
+
+

+

zfs-clone — + Creates a clone of the given snapshot.

+
+
+

+ + + + + +
zfsclone [-p] + [-o + property=value]... + snapshot + filesystem|volume
+
+
+

+
+
zfs clone + [-p] [-o + property=value]... + snapshot + filesystem|volume
+
See the + section of zfsconcepts(8) for details. The target + dataset can be located anywhere in the ZFS hierarchy, and is created as + the same type as the original. +
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + + property inherited from their parent. If the target filesystem or + volume already exists, the operation completes successfully.
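As a brief hedged example (the snapshot and target names are hypothetical), a snapshot can be cloned with any missing parent datasets created in one step:

zfs snapshot tank/home/user@review
zfs clone -p -o mountpoint=/export/review tank/home/user@review tank/clones/user-review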
+
+
+
+
+
+

+

zfs-promote(8), + zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-create.8.html b/man/v2.0/8/zfs-create.8.html new file mode 100644 index 000000000..978861411 --- /dev/null +++ b/man/v2.0/8/zfs-create.8.html @@ -0,0 +1,411 @@ + + + + + + + zfs-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-create.8

+
+ + + + + +
ZFS-CREATE(8)System Manager's ManualZFS-CREATE(8)
+
+
+

+

zfs-create — + Creates a new ZFS file system.

+
+
+

+ + + + + +
zfscreate [-Pnpv] + [-o + property=value]... + filesystem
+
+ + + + + +
zfscreate [-ps] + [-b blocksize] + [-o + property=value]... + -V size + volume
+
+
+

+
+
zfs create + [-Pnpv] [-o + property=value]... + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent. +
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. + Each line of output contains a key and one or two values, all + separated by tabs. The create_ancestors and + create keys have filesystem as + their only value. The create_ancestors key only + appears if the -p option is used. The + property key has two values, a property name and that + property's value. The property key may appear zero + or more times, once for each property that will be set local to + filesystem due to the use of the + -o option.
+
+
Print verbose information about the created dataset.
+
+
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]... + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block + device in /dev/zvol/path, where + is the name + of the volume in the ZFS namespace. The size represents the logical size + as exported by the device. By default, a reservation of equal size is + created. +

size is automatically + rounded up to the nearest multiple of the + .

+
+
+ blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
+ property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See + + in the + section of zfsprops(8) for more + information about sparse volumes.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. + Each line of output contains a key and one or two values, all + separated by tabs. The create_ancestors and + create keys have volume as their + only value. The create_ancestors key only appears if + the -p option is used. The + property key has two values, a property name and that + property's value. The property key may appear zero + or more times, once for each property that will be set local to + volume due to the use of the + -b or -o options, as + well as + + if the volume is not sparse.
+
+
Print verbose information about the created dataset.
+
+
+
+
+

+

ZFS volumes may be used as swap devices. After creating the volume + with the zfs create + -V command, set up and enable the swap area using the + mkswap(8) and swapon(8) commands. Do not + swap to a file on a ZFS file system. A ZFS swap file configuration is not + supported.
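A minimal sketch of that procedure, assuming a hypothetical pool named rpool and a 4 GB swap volume; setting the volume block size to the system page size is one common choice for swap:

zfs create -V 4G -b $(getconf PAGESIZE) rpool/swap
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap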

+
+
+
+

+

zfs-destroy(8), zfs-list(8), + zpool-create(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-destroy.8.html b/man/v2.0/8/zfs-destroy.8.html new file mode 100644 index 000000000..8ff670f53 --- /dev/null +++ b/man/v2.0/8/zfs-destroy.8.html @@ -0,0 +1,368 @@ + + + + + + + zfs-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-destroy.8

+
+ + + + + +
ZFS-DESTROY(8)System Manager's ManualZFS-DESTROY(8)
+
+
+

+

zfs-destroy — + Destroys the given dataset(s), snapshot(s), or + bookmark.

+
+
+

+ + + + + +
zfsdestroy [-Rfnprv] + filesystem|volume
+
+ + + + + +
zfsdestroy [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]...
+
+ + + + + +
zfsdestroy + filesystem|volume#bookmark
+
+
+

+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Force an unmount of any file systems using the + unmount -f command. + This option has no effect on non-file systems or unmounted file + systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]...
+
The given snapshots are destroyed immediately if and only if the + ‘zfs destroy’ command without the + -d option would have destroyed them. Such immediate + destruction would occur, for example, if the snapshot had no clones and + the user-initiated reference count were zero.

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same + filesystem or volume may be specified in a comma-separated list of + snapshots. Only the snapshot's short name (the part after the + ) should be + specified when using a range or comma-separated list to identify + multiple snapshots.
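For example (dataset and snapshot names are hypothetical), a dry run can be used to preview what a range or comma-separated list would remove before actually destroying anything:

zfs destroy -nv tank/home@mon%fri
zfs destroy tank/home@mon%fri
zfs destroy tank/home@mon,wed,fri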

+
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Destroy immediately. If a snapshot cannot be destroyed now, mark it + for deferred destruction.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.

+
+
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
+
+
+

+

zfs-create(8), zfs-hold(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-diff.8.html b/man/v2.0/8/zfs-diff.8.html new file mode 100644 index 000000000..20781799f --- /dev/null +++ b/man/v2.0/8/zfs-diff.8.html @@ -0,0 +1,304 @@ + + + + + + + zfs-diff.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-diff.8

+
+ + + + + +
ZFS-DIFF(8)System Manager's ManualZFS-DIFF(8)
+
+
+

+

zfs-diffDisplay + the difference between two snapshots of a given filesystem.

+
+
+

+ + + + + +
zfsdiff [-FHt] + snapshot + snapshot|filesystem
+
+
+

+
+
zfs diff + [-FHt] snapshot + snapshot|filesystem
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are: +
+
-       The path has been removed
++       The path has been created
+M       The path has been modified
+R       The path has been renamed
+
+
+
+
Display an indication of the type of file, in a manner similar to the + -F option of ls(1). +
+
B       Block device
+C       Character device
+/       Directory
+>       Door
+|       Named pipe
+@       Symbolic link
+P       Event port
+=       Socket
+F       Regular file
+
+
+
+
Give more parsable tab-separated output, without header lines and + without arrows.
+
+
Display the path's inode change time as the first column of + output.
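A hedged example with hypothetical snapshot and filesystem names, producing tab-separated output with file-type indicators; adding -t would prepend the inode change time as described above:

zfs diff -FH tank/docs@yesterday tank/docs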
+
+
+
+
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-get.8.html b/man/v2.0/8/zfs-get.8.html new file mode 100644 index 000000000..11e8d36e0 --- /dev/null +++ b/man/v2.0/8/zfs-get.8.html @@ -0,0 +1,406 @@ + + + + + + + zfs-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-get.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setSets the + property or list of properties to the given value(s) for each + dataset.

+
+
+

+ + + + + +
zfsset + property=value + [property=value]... + filesystem|volume|snapshot...
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot...
+
+
+

+
+
zfs set + property=value + [property=value]... + filesystem|volume|snapshot...
+
Only some properties can be edited. See zfsprops(8) for + more information on what properties can be set and acceptable values. + Numeric values can be specified as exact values, or in a human-readable + form with a suffix of + , + , + , + , + , + , + , + (for bytes, + kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or + zettabytes, respectively). User properties can be set on snapshots. For + more information, see the User Properties section of + zfsprops(8).
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
    name      Dataset name
+    property  Property name
+    value     Property value
+    source    Property source  local, default, inherited,
+              temporary, received or none (-).
+
+

All columns are displayed by default, though this + can be controlled by using the -o option. This + command takes a comma-separated list of properties as described in the + and User Properties sections of + zfsprops(8).

+

The value all can be used to display all + properties that apply to the given dataset's type (filesystem, volume, + snapshot, or bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the + recursion to depth. A depth of + will + display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display. + ,,, + is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming + from a source other than those in this list are ignored. Each source + must be one of the following: + , + , + , + , + , + and + . + The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of + , + , + , + , + or all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot...
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(8) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value if one exists; otherwise + operate as if the -S option was not + specified.
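As an illustration (dataset names are hypothetical), a property can be set, queried recursively, and then reverted to inheritance:

zfs set compression=lz4 tank/data
zfs get -r -o name,property,value,source compression tank/data
zfs inherit -r compression tank/data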
+
+
+
+
+
+

+

zfs-list(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-groupspace.8.html b/man/v2.0/8/zfs-groupspace.8.html new file mode 100644 index 000000000..757ea7a3a --- /dev/null +++ b/man/v2.0/8/zfs-groupspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-groupspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-groupspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + Displays space consumed by, and quotas on, each user or + group in the specified filesystem or snapshot.

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + user, + user, + + and + user + properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (for example, + stat(2), ls + -l) perform this translation, so the + -i option allows the output from + zfs userspace to be + compared directly with those utilities. However, + -i may lead to confusion if some files were + created by an SMB user before a SMB-to-POSIX name mapping was + established. In such a case, some files will be owned by the SMB + entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]...
+
Display only the specified fields from the following set: + type, name, + , + . + The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]...
+
Print only the specified types from the following set: + , + posixuser, smbuser, + posixgroup, smbgroup. The default + is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the project identifier is a + numeral rather than a name; consequently, the -i option (SID + to POSIX ID translation), the -n option (numeric IDs), and the + -t option (types) are not needed.
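A short hedged example with a hypothetical filesystem name:

zfs userspace -o name,used,quota -s used tank/home
zfs groupspace -H -p tank/home
zfs projectspace tank/home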
+
+
+
+

+

zfs-set(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-hold.8.html b/man/v2.0/8/zfs-hold.8.html new file mode 100644 index 000000000..104a537be --- /dev/null +++ b/man/v2.0/8/zfs-hold.8.html @@ -0,0 +1,323 @@ + + + + + + + zfs-hold.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-hold.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdHold a + snapshot to prevent it being removed with the zfs destroy + command.

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot...
+
+ + + + + +
zfsholds [-rH] + snapshot...
+
+ + + + + +
zfsrelease [-r] + tag snapshot...
+
+
+

+
+
zfs hold + [-r] tag + snapshot...
+
Adds a single reference, named with the tag + argument, to the specified snapshot or snapshots. Each snapshot has its + own tag namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rH] snapshot...
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
+
zfs release + [-r] tag + snapshot...
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return + EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
+
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-inherit.8.html b/man/v2.0/8/zfs-inherit.8.html new file mode 100644 index 000000000..4f0d475e9 --- /dev/null +++ b/man/v2.0/8/zfs-inherit.8.html @@ -0,0 +1,406 @@ + + + + + + + zfs-inherit.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-inherit.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setSets the + property or list of properties to the given value(s) for each + dataset.

+
+
+

+ + + + + +
zfsset + property=value + [property=value]... + filesystem|volume|snapshot...
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot...
+
+
+

+
+
zfs set + property=value + [property=value]... + filesystem|volume|snapshot...
+
Only some properties can be edited. See zfsprops(8) for + more information on what properties can be set and acceptable values. + Numeric values can be specified as exact values, or in a human-readable + form with a suffix of + , + , + , + , + , + , + , + (for bytes, + kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or + zettabytes, respectively). User properties can be set on snapshots. For + more information, see the User Properties section of + zfsprops(8).
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
    name      Dataset name
+    property  Property name
+    value     Property value
+    source    Property source  local, default, inherited,
+              temporary, received or none (-).
+
+

All columns are displayed by default, though this + can be controlled by using the -o option. This + command takes a comma-separated list of properties as described in the + and User Properties sections of + zfsprops(8).

+

The value all can be used to display all + properties that apply to the given dataset's type (filesystem, volume, + snapshot, or bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the + recursion to depth. A depth of + will + display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display. + ,,, + is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming + from a source other than those in this list are ignored. Each source + must be one of the following: + , + , + , + , + , + and + . + The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of + , + , + , + , + or all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot...
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(8) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value if one exists; otherwise + operate as if the -S option was not + specified.
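A brief hedged example (dataset name hypothetical) showing exact values and reverting a property to its received value:

zfs set quota=50G tank/home/alice
zfs get -H -p quota tank/home/alice
zfs inherit -S quota tank/home/alice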
+
+
+
+
+
+

+

zfs-list(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-jail.8.html b/man/v2.0/8/zfs-jail.8.html new file mode 100644 index 000000000..7324a4c57 --- /dev/null +++ b/man/v2.0/8/zfs-jail.8.html @@ -0,0 +1,312 @@ + + + + + + + zfs-jail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-jail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jail — + Attaches and detaches ZFS filesystems from FreeBSD jails. + A ZFS dataset can be attached to a jail by using the + "zfs jail" subcommand. You cannot attach a + dataset to one jail and the children of the same dataset to another jail. + You can also not attach the root file system of the jail or any dataset + which needs to be mounted before the zfs rc script is run inside the jail, + as it would be attached unmounted until it is mounted from the rc script + inside the jail. To allow management of the dataset from within a jail, the + jailed property has to be set and the jail needs access to + the /dev/zfs device. The + + property cannot be changed from within a jail. See jail(8) + for information on how to allow mounting ZFS datasets from within a + jail.

+

A ZFS dataset can be detached from a jail + using the "zfs unjail" subcommand.

+

After a dataset is attached to a jail and the jailed property is + set, a jailed file system cannot be mounted outside the jail, since the jail + administrator might have set the mount point to an unacceptable value.

+
+
+

+ + + + + +
zfsjail + jailid|jailname + filesystem
+
+ + + + + +
zfsunjail + jailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid filesystem
+
+

Attaches the specified filesystem to the + jail identified by JID jailid. From now on this + file system tree can be managed from within a jail if the + jailed property has been set. To use this + functionality, the jail needs the allow.mount and + allow.mount.zfs parameters set to 1 and the + enforce_statfs parameter set to a value lower than + 2.

+

See jail(8) for more information on managing + jails and configuring the parameters above.

+
+
zfs unjail + jailid filesystem
+
+

Detaches the specified filesystem from + the jail identified by JID jailid.
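A hedged sketch, assuming a hypothetical dataset and a jail with JID 23 (a jail name may be used in place of the numeric JID, as shown in the synopsis):

zfs set jailed=on tank/jails/www
zfs jail 23 tank/jails/www
zfs unjail 23 tank/jails/www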

+
+
+
+
+

+

zfsprops(8)

+
+
+ + + + + +
December 9, 2019FreeBSD
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-list.8.html b/man/v2.0/8/zfs-list.8.html new file mode 100644 index 000000000..eeaf482b5 --- /dev/null +++ b/man/v2.0/8/zfs-list.8.html @@ -0,0 +1,370 @@ + + + + + + + zfs-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-list.8

+
+ + + + + +
ZFS-LIST(8)System Manager's ManualZFS-LIST(8)
+
+
+

+

zfs-listLists + the property information for the given datasets in tabular form.

+
+
+

+ + + + + +
zfslist + [-r|-d + depth] [-Hp] + [-o + property[,property]...] + [-s property]... + [-S property]... + [-t + type[,type]...] + [filesystem|volume|snapshot]...
+
+
+

+
+
zfs + list + [-r|-d + depth] [-Hp] + [-o + property[,property]...] + [-s property]... + [-S property]... + [-t + type[,type]...] + [filesystem|volume|snapshot]...
+
If specified, you can list property information by the absolute pathname + or the relative pathname. By default, all file systems and volumes are + displayed. Snapshots are displayed if the + + pool property is + (the + default is + ), or + if the -t snapshot or + -t all options are specified. + The following fields are displayed: name, + used, + , + , + . +
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
+ property
+
Same as the -s option, but sorts by property + in descending order.
+
+ depth
+
Recursively display any children of the dataset, limiting the + recursion to depth. A + depth of + will + display only the dataset and its direct children.
+
+ property
+
A comma-separated list of properties to display. The property must be: + +
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command + line.
+
+ property
+
A property for sorting the output by column in ascending order based + on the value of the property. The property must be one of the + properties described in the + + section of zfsprops(8) or the value + name to sort by the dataset name. Multiple + properties can be specified at one time using multiple + -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
    +
  • Numeric types sort in numeric order.
  • +
  • String types sort in alphabetical order.
  • +
  • Types inappropriate for a row sort that row to the literal bottom, + regardless of the specified ordering.
  • +
+

If no sorting options are specified the existing behavior + of zfs list is + preserved.

+
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + , + or all. For example, specifying + -t snapshot displays only + snapshots.
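For example (pool and dataset names are hypothetical):

zfs list -r -t filesystem,volume -o name,used,available,mountpoint -s used tank
zfs list -t snapshot tank/home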
+
+
+
+
+
+

+

zfs-get(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-load-key.8.html b/man/v2.0/8/zfs-load-key.8.html new file mode 100644 index 000000000..5eebc99ff --- /dev/null +++ b/man/v2.0/8/zfs-load-key.8.html @@ -0,0 +1,473 @@ + + + + + + + zfs-load-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-load-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + Load, unload, or change the encryption key used to access a + dataset.

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a | filesystem
+
+ + + + + +
zfsunload-key [-r] + -a | filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a | filesystem
+
Load the key for filesystem, allowing it and all + children that inherit the keylocation property to be + accessed. The key will be expected in the format specified by the + keyformat and location specified by the + keylocation property. Note that if the + keylocation is set to prompt the + terminal will interactively wait for the key to be entered. Loading a key + will not automatically mount the dataset. If that functionality is + desired, zfs mount + -l will ask for the key and mount the dataset (see + zfs-mount(8)). Once the key is loaded the + keystatus property will become + . +
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. This will cause zfs to + simply check that the provided key is correct. This command may be run + even if the key is already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a | filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all + of its children that inherit the keylocation property. + This requires that the dataset is not currently open or mounted. Once the + key is unloaded the keystatus property will become + . +
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded + into ZFS. This command may also be used to change the + keylocation, keyformat, and + pbkdf2iters properties as needed. If the dataset was not + previously an encryption root it will become one. Alternatively, the + -i flag may be provided to cause an encryption + root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim --secure if + supported by your hardware, otherwise zpool + initialize.
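A heavily simplified sketch of that in-place approximation, copying a single dataset; the names are hypothetical and enough free space for a second copy of the data is assumed:

zfs snapshot tank/secure@migrate
zfs create -o encryption=on -o keyformat=passphrase tank/secure_new
zfs send tank/secure@migrate | zfs recv tank/secure_new/data
zfs destroy -r tank/secure
zpool trim --secure tank

If secure TRIM is not supported by the hardware, zpool initialize tank can be used instead, as noted above.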

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to "zfs + load-key filesystem; + zfs change-key + filesystem"
+
+ property=value
+
Allows the user to set encryption key properties ( + keyformat, keylocation, and + pbkdf2iters ) while changing the key. This is the + only way to alter keyformat and + pbkdf2iters after the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + zvol data, file attributes, ACLs, permission bits, directory listings, FUID + mappings, and + + / + + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the zfs + load-key subcommand for more info on key + loading).

+

Creating an encrypted dataset requires + specifying the encryption and keyformat + properties at creation time, along with an optional + keylocation and pbkdf2iters. After + entering an encryption key, the created dataset will become an encryption + root. Any descendant datasets will inherit their encryption key from the + encryption root by default, meaning that loading, unloading, or changing the + key for the encryption root will implicitly do the same for all inheriting + datasets. If this inheritance is not desired, simply supply a + keyformat when creating the child dataset or use + zfs change-key to break an + existing relationship, creating a new encryption root on the child. Note + that the child's keyformat may match that of the parent + while still creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, and + pbkdf2iters) do not inherit like other ZFS properties and + instead use the value determined by their encryption root. Encryption root + inheritance can be tracked via the read-only + + property.
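As an illustration (the dataset name is hypothetical), a typical load/unload cycle for an encryption root whose keylocation is prompt might look like:

zfs load-key tank/secure
zfs mount tank/secure
zfs unmount tank/secure
zfs unload-key tank/secure

zfs mount -l tank/secure combines the first two steps, asking for the key and then mounting the dataset.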

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only dedup against themselves, their + snapshots, and their clones.

+

There are a few limitations on encrypted + datasets. Encrypted data cannot be embedded via the + + feature. Encrypted datasets may not have + = + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost per block written.

+
+
+
+

+

zfs-create(8), zfs-set(8), + zfsprops(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-mount-generator.8.html b/man/v2.0/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..1e8fc712c --- /dev/null +++ b/man/v2.0/8/zfs-mount-generator.8.html @@ -0,0 +1,395 @@ + + + + + + + zfs-mount-generator.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-mount-generator.8

+
+ + + + + +
ZFS-MOUNT-GENERATOR(8)System Manager's ManualZFS-MOUNT-GENERATOR(8)
+
+

+

+
+

+

zfs-mount-generator - generates systemd mount units for ZFS

+
+
+

+

@systemdgeneratordir@/zfs-mount-generator

+

+
+
+

+

zfs-mount-generator implements the Generators Specification + of systemd(1), and is called during early boot to generate + systemd.mount(5) units for automatically mounted datasets. Mount + ordering and dependencies are created for all tracked pools (see below).

+

+
+

+

If the dataset is an encryption root, a service that loads the + associated key (either from file or through a systemd-ask-password(1) + prompt) will be created. This service RequiresMountsFor the path of + the key (if file-based) and also copies the mount unit's After, + Before and Requires. All mount units of encrypted datasets add + the key-load service for their encryption root to their Wants and + After. The service will not be Wanted or Required by + local-fs.target directly, and so will only be started manually or as + a dependency of a started mount unit.

+

+
+
+

+

mount unit's Before -> key-load service (if any) -> + mount unit -> mount unit's After

+

It is worth noting that when a mount unit is activated, it + activates all available mount units for parent paths to its mountpoint, i.e. + activating the mount unit for /tmp/foo/1/2/3 automatically activates all + available mount units for /tmp, /tmp/foo, /tmp/foo/1, and /tmp/foo/1/2. This + is true for any combination of mount units from any sources, not just + ZFS.

+

+
+
+

+

Because ZFS pools may not be available very early in the boot + process, information on ZFS mountpoints must be stored separately. The + output of the command

+

+
zfs list -H -o + name,mountpoint,canmount,atime,relatime,devices,exec,readonly,setuid,nbmand,encroot,keylocation,org.openzfs.systemd:requires,org.openzfs.systemd:requires-mounts-for,org.openzfs.systemd:before,org.openzfs.systemd:after,org.openzfs.systemd:wanted-by,org.openzfs.systemd:required-by,org.openzfs.systemd:nofail,org.openzfs.systemd:ignore +

+
+

for datasets that should be mounted by systemd, should be kept + separate from the pool, at

+

+
@sysconfdir@/zfs/zfs-list.cache/POOLNAME
+

The cache file, if writeable, will be kept synchronized with the + pool state by the ZEDLET

+

+
history_event-zfs-list-cacher.sh .
+
+
+

+

The behavior of the generator script can be influenced by the + following dataset properties:

+

+
+
+
If a dataset has mountpoint set and canmount is not + off, a mount unit will be generated. Additionally, if + canmount is on, local-fs.target will gain a + dependency on the mount unit. +

This behavior is equal to the auto and noauto + legacy mount options, see systemd.mount(5).

+

Encryption roots always generate a key-load service, even for + canmount=off.

+
+
+
Space-separated list of mountpoints to require to be mounted for this + mount unit
+
+
The mount unit and associated key-load service will be ordered before this + space-separated list of units.
+
+
The mount unit and associated key-load service will be ordered after this + space-separated list of units.
+
+
Space-separated list of units that will gain a Wants dependency on + this mount unit. Setting this property implies noauto.
+
+
Space-separated list of units that will gain a Requires dependency + on this mount unit. Setting this property implies noauto.
+
+
Toggles between a Wants and Requires type of dependency + between the mount unit and local-fs.target, if noauto isn't + set or implied. +

on: Mount will be WantedBy local-fs.target

+

off: Mount will be Before and RequiredBy + local-fs.target

+

unset: Mount will be Before and WantedBy + local-fs.target

+
+
+
If set to on, do not generate a mount unit for this dataset. +

+
+
+
+See also systemd.mount(5) +

+
+
+
+

+

To begin, enable tracking for the pool:

+

+
touch + @sysconfdir@/zfs/zfs-list.cache/POOLNAME
+

Then, enable the tracking ZEDLET:

+

+
ln -s + "@zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh" + "@sysconfdir@/zfs/zed.d" +

systemctl enable zfs-zed.service

+

systemctl restart zfs-zed.service

+
+

Force the running of the ZEDLET by setting a monitored property, + e.g. canmount, for at least one dataset in the pool:

+

+
zfs set canmount=on DATASET
+

This forces an update to the stale cache file.

+

To test the generator output, run

+

+
@systemdgeneratordir@/zfs-mount-generator + /tmp/zfs-mount-generator . .
+

This will generate units and dependencies in + /tmp/zfs-mount-generator for you to inspect them. The second and + third arguments are ignored.

+

If you're satisfied with the generated units, instruct systemd to + re-run all generators:

+

+
systemctl daemon-reload
+

+

+
+
+

+

zfs(5) zfs-events(5) zed(8) zpool(5) + systemd(1) systemd.target(5) systemd.special(7) + systemd.mount(7)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-mount.8.html b/man/v2.0/8/zfs-mount.8.html new file mode 100644 index 000000000..73c14ca09 --- /dev/null +++ b/man/v2.0/8/zfs-mount.8.html @@ -0,0 +1,339 @@ + + + + + + + zfs-mount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-mount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountManage + mount state of ZFS file systems.

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a | filesystem
+
+ + + + + +
zfsunmount [-fu] + -a | + filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] -a | + filesystem
+
Mount ZFS filesystem on a path described by its + mountpoint property, if the path exists and is empty. If + mountpoint is set to + , the + filesystem should be instead mounted using mount(8). +
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(8) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is + equivalent to executing zfs + load-key on each encryption root before + mounting it. Note that if a filesystem has a + + of + + this will cause the terminal to interactively block after asking for + the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] -a | + filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
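A brief hedged example with a hypothetical encrypted dataset; -l loads the key before mounting and -u unloads the keys of any encryption roots unmounted by the command:

zfs mount -l tank/secure
zfs mount -a
zfs unmount -u tank/secure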
+
+
+
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-program.8.html b/man/v2.0/8/zfs-program.8.html new file mode 100644 index 000000000..7e289e4f0 --- /dev/null +++ b/man/v2.0/8/zfs-program.8.html @@ -0,0 +1,836 @@ + + + + + + + zfs-program.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-program.8

+
+ + + + + +
ZFS-PROGRAM(8)System Manager's ManualZFS-PROGRAM(8)
+
+
+

+

zfs-program — + executes ZFS channel programs

+
+
+

+ + + + + +
zfsprogram [-jn] + [-t instruction-limit] + [-m memory-limit] + pool script
+
+
+

+

The ZFS channel program interface allows ZFS administrative + operations to be run programmatically as a Lua script. The entire script is + executed atomically, with no other administrative operations taking effect + concurrently. A library of ZFS calls is made available to channel program + scripts. Channel programs may only be run with root privileges.

+

A modified version of the Lua 5.2 interpreter is used to run + channel program scripts. The Lua 5.2 manual can be found at:

+ +

The channel program given by script will be + run on pool, and any attempts to access or modify + other pools will cause an error.
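A hedged invocation example; the pool name, the Lua script path, and the trailing arguments are hypothetical:

zfs program -j -n tank ./cleanup_snaps.lua tank/home@old1 tank/home@old2

Here -j prints the result as JSON and -n runs the program read-only; the trailing strings are passed to the script as argv, as described below.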

+
+
+

+
+
+
Display channel program output in JSON format. When this flag is specified + and standard output is empty, the channel program encountered an error. The + details of such an error will be printed to standard error in plain + text.
+
+
Executes a read-only channel program, which runs faster. The program + cannot change on-disk state by calling functions from the zfs.sync + submodule. The program can be used to gather information such as + properties and determining if changes would succeed (zfs.check.*). Without + this flag, all pending changes must be synced to disk before a channel + program can complete.
+
+ instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. The + default memory limit is 10 MB, and can be set to a maximum of 100 MB.
+
+

All remaining argument strings will be passed directly to the Lua + script as described in the LUA + INTERFACE section below.

+
+
+

+

A channel program can be invoked either from the command line, or + via a library call to + lzc_channel_program().

+
+

+

Arguments passed to the channel program are converted to a Lua + table. If invoked from the command line, extra arguments to the Lua script + will be accessible as an array stored in the argument table with the key + 'argv':

+
+
args = ...
+argv = args["argv"]
+-- argv == {1="arg1", 2="arg2", ...}
+
+

If invoked from the libZFS interface, an arbitrary argument list + can be passed to the channel program, which is accessible via the same + "..." syntax in Lua:

+
+
args = ...
+-- args == {"foo"="bar", "baz"={...}, ...}
+
+

Note that because Lua arrays are 1-indexed, arrays passed to Lua + from the libZFS interface will have their indices incremented by 1. That is, + the element in arr[0] in a C array passed to a channel + program will be stored in arr[1] when accessed from + Lua.

+
+
+

+

Lua return statements take the form:

+
+
return ret0, ret1, ret2, ...
+
+

Return statements returning multiple values are permitted + internally in a channel program script, but attempting to return more than + one value from the top level of the channel program is not permitted and + will throw an error. However, tables containing multiple values can still be + returned. If invoked from the command line, a return statement:

+
+
a = {foo="bar", baz=2}
+return a
+
+

Will be output formatted as:

+
+
Channel program fully executed with return value:
+    return:
+        baz: 2
+        foo: 'bar'
+
+
+
+

+

If the channel program encounters a fatal error while running, a + non-zero exit status will be returned. If more information about the error + is available, a singleton list will be returned detailing the error:

+
+
error: "error string, including Lua stack trace"
+
+

If a fatal error is returned, the channel program may have not + executed at all, may have partially executed, or may have fully executed but + failed to pass a return value back to userland.

+

If the channel program exhausts an instruction or memory limit, a + fatal error will be generated and the program will be stopped, leaving the + program partially executed. No attempt is made to reverse or undo any + operations already performed. Note that because both the instruction count + and amount of memory used by a channel program are deterministic when run + against the same inputs and filesystem state, as long as a channel program + has run successfully once, you can guarantee that it will finish + successfully against a similar size system.

+

If a channel program attempts to return too large a value, the + program will fully execute but exit with a nonzero status code and no return + value.

+

: + ZFS API functions do not generate Fatal Errors when correctly invoked, they + return an error code and the channel program continues executing. See the + ZFS API section below for + function-specific details on error return codes.

+
+
+

+

When invoking a channel program via the libZFS interface, it is + necessary to translate arguments and return values from Lua values to their + C equivalents, and vice-versa.

+

There is a correspondence between nvlist values in C and Lua + tables. A Lua table which is returned from the channel program will be + recursively converted to an nvlist, with table values converted to their + natural equivalents:

+
+
string -> string
+number -> int64
+boolean -> boolean_value
+nil -> boolean (no value)
+table -> nvlist
+
+

Likewise, table keys are replaced by string equivalents as + follows:

+
+
string -> no change
+number -> signed decimal string ("%lld")
+boolean -> "true" | "false"
+
+

Any collision of table key strings (for example, the string + "true" and a true boolean value) will cause a fatal error.

+

Lua numbers are represented internally as signed 64-bit + integers.
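As an illustration of this mapping, a channel program might return a nested table like the following minimal sketch (the dataset name and keys are arbitrary examples, not part of the ZFS API):

-- The returned table becomes an nvlist: string -> string, number -> int64,
-- nested table -> nested nvlist, boolean -> boolean_value.
results = {}
results["dataset"] = "rpool/home"
results["snapcount"] = 3
results["flags"] = {deferred = true}
return results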

+
+
+
+

+

The following Lua built-in base library functions are + available:

+
+
assert                  rawlen
+collectgarbage          rawget
+error                   rawset
+getmetatable            select
+ipairs                  setmetatable
+next                    tonumber
+pairs                   tostring
+rawequal                type
+
+

All functions in the + , + , + and + + built-in submodules are also available. A complete list and documentation of + these modules is available in the Lua manual.

+

The following base library functions have been disabled and are not available for use in channel programs:

+
+
dofile
+loadfile
+load
+pcall
+print
+xpcall
+
+
+
+

+
+

+

Each API function takes a fixed set of required positional + arguments and optional keyword arguments. For example, the destroy function + takes a single positional string argument (the name of the dataset to + destroy) and an optional "defer" keyword boolean argument. When + using parentheses to specify the arguments to a Lua function, only + positional arguments can be used:

+
+
zfs.sync.destroy("rpool@snap")
+
+

To use keyword arguments, functions must be called with a single + argument that is a Lua table containing entries mapping integers to + positional arguments and strings to keyword arguments:

+
+
zfs.sync.destroy({[1]="rpool@snap", defer=true})
+
+

The Lua language allows curly braces to be used in place of parentheses as syntactic sugar for this calling convention:

+
+
zfs.sync.destroy{"rpool@snap", defer=true}
+
+
+
+

+

If an API function succeeds, it returns 0. If it fails, it returns + an error code and the channel program continues executing. API functions do + not generate Fatal Errors except in the case of an unrecoverable internal + file system error.

+

In addition to returning an error code, some functions also return extra details describing what caused the error. This extra description is given as a second return value, and will always be a Lua table, or nil if no error details were returned. Different keys will exist in the error details table depending on the function and error case. Any such function may be called expecting a single return value:

+
+
errno = zfs.sync.promote(dataset)
+
+

Or, the error details can be retrieved:

+
+
errno, details = zfs.sync.promote(dataset)
+if (errno == EEXIST) then
+    assert(details ~= nil)
+    list_of_conflicting_snapshots = details
+end
+
+

The following global aliases for API function error return codes + are defined for use in channel programs:

+
+
EPERM     ECHILD      ENODEV      ENOSPC
+ENOENT    EAGAIN      ENOTDIR     ESPIPE
+ESRCH     ENOMEM      EISDIR      EROFS
+EINTR     EACCES      EINVAL      EMLINK
+EIO       EFAULT      ENFILE      EPIPE
+ENXIO     ENOTBLK     EMFILE      EDOM
+E2BIG     EBUSY       ENOTTY      ERANGE
+ENOEXEC   EEXIST      ETXTBSY     EDQUOT
+EBADF     EXDEV       EFBIG
+
+
+
+

+

For detailed descriptions of the exact behavior of any zfs + administrative operations, see the main zfs(8) manual + page.

+
+
+
Record a debug message in the zfs_dbgmsg log. A log of these messages can + be printed via mdb's "::zfs_dbgmsg" command, or can be monitored + live by running: +
+
  dtrace -n 'zfs-dbgmsg{trace(stringof(arg0))}'
+
+

msg (string)

+
Debug message to be printed.
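For example, a minimal sketch (the message text is arbitrary):

-- Emit a marker into the zfs_dbgmsg log; the call is used only for its logging side effect.
zfs.debug("channel program: starting cleanup")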
+
+
+
Returns true if the given dataset exists, or false if it doesn't. A fatal + error will be thrown if the dataset is not in the target pool. That is, in + a channel program running on rpool, + zfs.exists("rpool/nonexistent_fs") returns false, but + zfs.exists("somepool/fs_that_may_exist") will error. +

dataset (string)

+
Dataset to check for existence. Must be in the + target pool.
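A minimal sketch, assuming a hypothetical dataset in the target pool:

-- Only datasets in the pool the program runs against may be tested.
if zfs.exists("rpool/home") then
    zfs.debug("rpool/home is present")
end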
+
+
+
Returns two values. First, a string, number or table containing the + property value for the given dataset. Second, a string containing the + source of the property (i.e. the name of the dataset in which it was set + or nil if it is readonly). Throws a Lua error if the dataset is invalid or + the property doesn't exist. Note that Lua only supports int64 number types + whereas ZFS number properties are uint64. This means very large values + (like guid) may wrap around and appear negative. +

dataset (string)

+
Filesystem or snapshot path to retrieve properties + from.
+

property (string)

+
Name of property to retrieve. All filesystem, + snapshot and volume properties are supported except for 'mounted' and + 'iscsioptions.' Also supports the 'written@snap' and 'written#bookmark' + properties and the '<user|group><quota|used>@id' properties, + though the id must be in numeric form.
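A minimal sketch showing both return values (the dataset name is hypothetical):

-- value is the property value; source names the dataset where it was set,
-- or is nil for read-only properties.
value, source = zfs.get_prop("rpool/home", "compression")
zfs.debug("compression=" .. tostring(value) .. " from " .. tostring(source))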
+
+
+
+
+
The sync submodule contains functions that modify the on-disk state. They + are executed in "syncing context". +

The available sync submodule functions are as follows:

+
+
+
Destroy the given dataset. Returns 0 on successful destroy, or a + nonzero error code if the dataset could not be destroyed (for example, + if the dataset has any active children or clones). +

dataset (string)

+
Filesystem or snapshot to be destroyed.
+

[optional] defer (boolean)

+
Valid only for destroying snapshots. If set to + true, and the snapshot has holds or clones, allows the snapshot to be + marked for deferred deletion rather than failing.
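A minimal sketch using the defer keyword argument (the snapshot name is hypothetical):

-- Defer destruction if the snapshot still has holds or clones.
err = zfs.sync.destroy{"rpool/home@old", defer=true}
if err ~= 0 then
    zfs.debug("destroy failed: errno " .. err)
end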
+
+
+
Clears the specified property in the given dataset, causing it to be + inherited from an ancestor, or restored to the default if no ancestor + property is set. The ‘zfs inherit + -S’ option has not been implemented. Returns 0 on + success, or a nonzero error code if the property could not be cleared. +

dataset (string)

+
Filesystem or snapshot containing the property + to clear.
+

property (string)

+
The property to clear. Allowed properties are + the same as those for the zfs + inherit command.
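A minimal sketch, assuming this entry is invoked as zfs.sync.inherit (the rendered function name is missing above) and using a hypothetical dataset:

-- Clear a locally set property so it is inherited from an ancestor again.
err = zfs.sync.inherit("rpool/home", "compression")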
+
+
+
Promote the given clone to a filesystem. Returns 0 on successful + promotion, or a nonzero error code otherwise. If EEXIST is returned, + the second return value will be an array of the clone's snapshots + whose names collide with snapshots of the parent filesystem. +

dataset (string)

+
Clone to be promoted.
+
+
+
Roll back the given dataset to its most recent snapshot. Returns 0 on successful rollback, or a nonzero error code otherwise. Rollbacks can be performed on filesystems or zvols, but not on snapshots or mounted datasets. EBUSY is returned in the case where the filesystem is mounted.

filesystem (string)

+
Filesystem to rollback.
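A minimal sketch, assuming this entry is invoked as zfs.sync.rollback (the rendered function name is missing above):

-- Roll the filesystem back to its most recent snapshot; EBUSY if it is mounted.
err = zfs.sync.rollback("rpool/home")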
+
+
+
Sets the given property on a dataset. Currently only user properties + are supported. Returns 0 if the property was set, or a nonzero error + code otherwise. +

dataset (string)

+
The dataset where the property will be + set.
+

property (string)

+
The property to set. Only user properties are + supported.
+

value (string)

+
The value of the property to be set.
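A minimal sketch, assuming this entry is invoked as zfs.sync.set_prop (the rendered function name is missing above) and using a hypothetical user property:

-- Only user properties (names containing a ':') can be set from a channel program.
err = zfs.sync.set_prop("rpool/home", "com.example:backup", "yes")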
+
+
+
Create a snapshot of a filesystem. Returns 0 if the snapshot was + successfully created, and a nonzero error code otherwise. +

Note: Taking a snapshot will fail on any pool older than + legacy version 27. To enable taking snapshots from ZCP scripts, the + pool must be upgraded.

+

dataset (string)

+
Name of snapshot to create.
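A minimal sketch (the snapshot name is hypothetical), assuming this entry is invoked as zfs.sync.snapshot:

-- The argument is the full snapshot name: dataset plus the '@' component.
err = zfs.sync.snapshot("rpool/home@nightly")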
+
+
+
Create a bookmark of an existing source snapshot or bookmark. Returns + 0 if the new bookmark was successfully created, and a nonzero error + code otherwise. +

Note: Bookmarking requires the corresponding pool feature + to be enabled.

+

source (string)

+
Full name of the existing snapshot or + bookmark.
+

newbookmark (string)

+
Full name of the new bookmark.
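A minimal sketch, assuming this entry is invoked as zfs.sync.bookmark (names are hypothetical):

-- Create a bookmark from an existing snapshot.
err = zfs.sync.bookmark("rpool/home@nightly", "rpool/home#nightly")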
+
+
+
+
+
For each function in the zfs.sync submodule, there is a corresponding zfs.check function which performs a "dry run" of the same operation. Each takes the same arguments as its zfs.sync counterpart and returns 0 if the operation would succeed, or a non-zero error code if it would fail, along with any other error details. That is, each has the same behavior as the corresponding sync function except for actually executing the requested change. For example, zfs.check.destroy("fs") returns 0 if zfs.sync.destroy("fs") would successfully destroy the dataset.

The available zfs.check functions are:

+
+
+
 
+
+
 
+
+
 
+
+
 
+
+
 
+
+
+
+
The zfs.list submodule provides functions for iterating over datasets and + properties. Rather than returning tables, these functions act as Lua + iterators, and are generally used as follows: +
+
for child in zfs.list.children("rpool") do
+    ...
+end
+
+

The available zfs.list functions are:

+
+
+
Iterate through all clones of the given snapshot. +

snapshot (string)

+
Must be a valid snapshot path in the current + pool.
+
+
+
Iterate through all snapshots of the given dataset. Each snapshot is + returned as a string containing the full dataset name, e.g. + "pool/fs@snap". +

dataset (string)

+
Must be a valid filesystem or volume.
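For example, a minimal sketch that counts the snapshots of a hypothetical filesystem:

count = 0
for snap in zfs.list.snapshots("rpool/home") do
    zfs.debug("found " .. snap)
    count = count + 1
end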
+
+
+
Iterate through all direct children of the given dataset. Each child + is returned as a string containing the full dataset name, e.g. + "pool/fs/child". +

dataset (string)

+
Must be a valid filesystem or volume.
+
+
+
Iterate through all bookmarks of the given dataset. Each bookmark is + returned as a string containing the full dataset name, e.g. + "pool/fs#bookmark". +

dataset (string)

+
Must be a valid filesystem or volume.
+
+
+
Iterate through all user holds on the given snapshot. Each hold is + returned as a pair of the hold's tag and the timestamp (in seconds + since the epoch) at which it was created. +

snapshot (string)

+
Must be a valid snapshot.
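A minimal sketch, assuming this entry is invoked as zfs.list.holds (the snapshot name is hypothetical):

-- Each iteration yields a hold tag and its creation time in seconds since the epoch.
for tag, time in zfs.list.holds("rpool/home@nightly") do
    zfs.debug("hold " .. tag .. " created at " .. time)
end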
+
+
+
An alias for zfs.list.user_properties (see relevant entry). +

dataset (string)

+
Must be a valid filesystem, snapshot, or + volume.
+
+
+
Iterate through all user properties for the given dataset. For each + step of the iteration, output the property name, its value, and its + source. Throws a Lua error if the dataset is invalid. +

dataset (string)

+
Must be a valid filesystem, snapshot, or + volume.
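A minimal sketch using zfs.list.user_properties (named in the alias entry above) on a hypothetical dataset:

-- Each iteration yields the property name, its value, and its source.
for prop, value, source in zfs.list.user_properties("rpool/home") do
    zfs.debug(prop .. "=" .. value .. " (set on " .. source .. ")")
end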
+
+
+
Returns an array of strings, the names of the valid system (non-user + defined) properties for the given dataset. Throws a Lua error if the + dataset is invalid. +

dataset (string)

+
Must be a valid filesystem, snapshot or + volume.
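A minimal sketch, assuming this entry is invoked as zfs.list.system_properties (the rendered function name is missing above):

-- Unlike the iterators above, this returns a plain array of property names.
props = zfs.list.system_properties("rpool/home")
for _, name in ipairs(props) do
    zfs.debug(name)
end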
+
+
+
+
+
+
+
+

+
+

+

The following channel program recursively destroys a filesystem + and all its snapshots and children in a naive manner. Note that this does + not involve any error handling or reporting.

+
+
function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        zfs.sync.destroy(snap)
+    end
+    zfs.sync.destroy(root)
+end
+destroy_recursive("pool/somefs")
+
+
+
+

+

A more verbose and robust version of the same channel program, + which properly detects and reports errors, and also takes the dataset to + destroy as a command line argument, would be as follows:

+
+
succeeded = {}
+failed = {}
+
+function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        err = zfs.sync.destroy(snap)
+        if (err ~= 0) then
+            failed[snap] = err
+        else
+            succeeded[snap] = err
+        end
+    end
+    err = zfs.sync.destroy(root)
+    if (err ~= 0) then
+        failed[root] = err
+    else
+        succeeded[root] = err
+    end
+end
+
+args = ...
+argv = args["argv"]
+
+destroy_recursive(argv[1])
+
+results = {}
+results["succeeded"] = succeeded
+results["failed"] = failed
+return results
+
+
+
+

+

The following function performs a forced promote operation by + attempting to promote the given clone and destroying any conflicting + snapshots.

+
+
function force_promote(ds)
+   errno, details = zfs.check.promote(ds)
+   if (errno == EEXIST) then
+       assert(details ~= nil)
+       for i, snap in ipairs(details) do
+           zfs.sync.destroy(ds .. "@" .. snap)
+       end
+   elseif (errno ~= 0) then
+       return errno
+   end
+   return zfs.sync.promote(ds)
+end
+
+
+
+
+ + + + + +
January 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-project.8.html b/man/v2.0/8/zfs-project.8.html new file mode 100644 index 000000000..0d0d77862 --- /dev/null +++ b/man/v2.0/8/zfs-project.8.html @@ -0,0 +1,366 @@ + + + + + + + zfs-project.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-project.8

+
+ + + + + +
ZFS-PROJECT(8)System Manager's ManualZFS-PROJECT(8)
+
+
+

+

zfs-project — + List, set, or clear project ID and/or inherit flag on the + file(s) or directories.

+
+
+

+ + + + + +
zfsproject + [-d|-r] + file|directory...
+
+ + + + + +
zfsproject -C + [-kr] + file|directory...
+
+ + + + + +
zfsproject -c + [-0] + [-d|-r] + [-p id] + file|directory...
+
+ + + + + +
zfsproject [-p + id] [-rs] + file|directory...
+
+
+

+
+
zfs project + [-d|-r] + file|directory...
+
List project identifier (ID) and inherit flag of file(s) or directories. +
+
+
Show the project ID and inherit flag of the directory itself, not its children. This overrides any previously specified -r option.
+
+
Show subdirectories recursively. This overrides any previously specified -d option.
+
+
+
zfs project + -C [-kr] + file|directory...
+
Clear project inherit flag and/or ID on the file(s) or directories. +
+
+
Keep the project ID unchanged. If not specified, the project ID will be reset to zero.
+
+
Clear on subdirectories recursively.
+
+
+
zfs project + -c [-0] + [-d|-r] + [-p id] + file|directory...
+
Check the project ID and inherit flag on the file(s) or directories, reporting entries that lack the project inherit flag or whose project IDs differ from the value specified via the -p option or, if none is given, from the target directory's project ID.
+
+
Print file names terminated by a NUL character instead of a newline (the default), as with "find -print0".
+
+
Check the project ID and inherit flag of the directory itself, not its children. This overrides any previously specified -r option.
+
+
Specify the reference project ID to compare against the project IDs of the target file(s) or directories. If not specified, the project ID of the target (top) directory is used as the reference.
+
+
Check subdirectories recursively. This overrides any previously specified -d option.
+
+
+
zfs project + [-p id] + [-rs] + file|directory...
+
Set project ID and/or inherit flag on the file(s) or directories. +
+
+
Set the project ID of the file(s) or directories to the given value.
+
+
Set on subdirectories recursively.
+
+
Set the project inherit flag on the given file(s) or directories. This is typically used to set up a tree quota on a directory target, together with the -r option. When setting up a tree quota, the directory's project ID is by default applied to all of its descendants unless a project ID is specified explicitly via the -p option.
+
+
+
+
+
+

+

zfs-projectspace(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-projectspace.8.html b/man/v2.0/8/zfs-projectspace.8.html new file mode 100644 index 000000000..4ea2ea4e1 --- /dev/null +++ b/man/v2.0/8/zfs-projectspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-projectspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-projectspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + Displays space consumed by, and quotas on, each user or + group in the specified filesystem or snapshot.

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (for example, + stat(2), ls + -l) perform this translation, so the + -i option allows the output from + zfs userspace to be + compared directly with those utilities. However, + -i may lead to confusion if some files were + created by an SMB user before a SMB-to-POSIX name mapping was + established. In such a case, some files will be owned by the SMB + entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]...
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]...
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is numeric rather than a name; consequently, neither the -i option (SID to POSIX ID translation), nor the -n option (numeric IDs), nor the -t option (types) applies.
+
+
+
+

+

zfs-set(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-promote.8.html b/man/v2.0/8/zfs-promote.8.html new file mode 100644 index 000000000..2d92e93c9 --- /dev/null +++ b/man/v2.0/8/zfs-promote.8.html @@ -0,0 +1,280 @@ + + + + + + + zfs-promote.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-promote.8

+
+ + + + + +
ZFS-PROMOTE(8)System Manager's ManualZFS-PROMOTE(8)
+
+
+

+

zfs-promote — + Promotes a clone file system to no longer be dependent on + its origin snapshot.

+
+
+

+ + + + + +
zfspromote + clone-filesystem
+
+
+

+
+
zfs promote + clone-filesystem
+
The promote command makes it possible to destroy + the file system that the clone was created from. The clone parent-child + dependency relationship is reversed, so that the origin file system + becomes a clone of the specified file system. +

The snapshot that was cloned, and any snapshots previous to + this snapshot, are now owned by the promoted clone. The space they use + moves from the origin file system to the promoted clone, so enough space + must be available to accommodate these snapshots. No new space is + consumed by this operation, but the space accounting is adjusted. The + promoted clone must not have any conflicting snapshot names of its own. + The zfs-rename(8) subcommand can be used to rename any + conflicting snapshots.

+
+
+
+
+

+

zfs-clone(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-receive.8.html b/man/v2.0/8/zfs-receive.8.html new file mode 100644 index 000000000..bc0a46b6d --- /dev/null +++ b/man/v2.0/8/zfs-receive.8.html @@ -0,0 +1,557 @@ + + + + + + + zfs-receive.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-receive.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + Creates a snapshot whose contents are as specified in the + stream provided on standard input.

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost dataset in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin=snapshot is a special case because, even though origin is a read-only property and cannot be set, it's allowed to receive the send stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with zfs send -w) may only be received as is, and cannot be re-encrypted, decrypted, or recompressed by the receive process. Unencrypted streams can be received as encrypted datasets, either through inheritance or by specifying encryption parameters with the -o options. Note that the keylocation property cannot be overridden to prompt during a receive. This is because the receive process itself is already using stdin for the send stream. Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. When receiving a stream from zfs send -R, causes the property to be inherited by all descendant datasets, as if zfs inherit property had been run on any descendant datasets that have this property set on the sending system.

If the send stream was sent with -c, then overriding the compression property will have no effect on the received data, but the compression property will still be set. To have the data recompressed on receive, remove the -c flag from the send stream.

+

Any editable property can be set at receive time. Set-once properties bound to the received data, such as normalization and casesensitivity, cannot be set at receive time even when the datasets are newly created by zfs receive. Additionally both settable properties version and volsize cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
+
# zfs send tank/test@snap1 | zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile
+
+

Note that [-o + keylocation=prompt] may + not be specified here, since stdin is already being utilized for the + send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying [-x + encryption] to force the property to be + inherited. Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with a stream generated by zfs send -t token, where the token is the value of the receive_resume_token property of the filesystem or volume which is received into.

+

To use this flag, the storage pool must have the extensible_dataset feature enabled. See zpool-features(5) for details on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
+
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-recv.8.html b/man/v2.0/8/zfs-recv.8.html new file mode 100644 index 000000000..1b17d7af1 --- /dev/null +++ b/man/v2.0/8/zfs-recv.8.html @@ -0,0 +1,557 @@ + + + + + + + zfs-recv.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-recv.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + Creates a snapshot whose contents are as specified in the + stream provided on standard input.

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost dataset in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin=snapshot is a special case because, even though origin is a read-only property and cannot be set, it's allowed to receive the send stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with zfs send -w) may only be received as is, and cannot be re-encrypted, decrypted, or recompressed by the receive process. Unencrypted streams can be received as encrypted datasets, either through inheritance or by specifying encryption parameters with the -o options. Note that the keylocation property cannot be overridden to prompt during a receive. This is because the receive process itself is already using stdin for the send stream. Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. When receiving a stream from zfs send -R, causes the property to be inherited by all descendant datasets, as if zfs inherit property had been run on any descendant datasets that have this property set on the sending system.

If the send stream was sent with -c, then overriding the compression property will have no effect on the received data, but the compression property will still be set. To have the data recompressed on receive, remove the -c flag from the send stream.

+

Any editable property can be set at receive time. Set-once properties bound to the received data, such as normalization and casesensitivity, cannot be set at receive time even when the datasets are newly created by zfs receive. Additionally both settable properties version and volsize cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
+
# zfs send tank/test@snap1 | zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile
+
+

Note that [-o + keylocation=prompt] may + not be specified here, since stdin is already being utilized for the + send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying [-x + encryption] to force the property to be + inherited. Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with a stream generated by zfs send -t token, where the token is the value of the receive_resume_token property of the filesystem or volume which is received into.

+

To use this flag, the storage pool must have the extensible_dataset feature enabled. See zpool-features(5) for details on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
+
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-redact.8.html b/man/v2.0/8/zfs-redact.8.html new file mode 100644 index 000000000..f0ad8515f --- /dev/null +++ b/man/v2.0/8/zfs-redact.8.html @@ -0,0 +1,745 @@ + + + + + + + zfs-redact.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-redact.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + Generate a send stream, which may be of a filesystem, and + may be incremental from a bookmark.

+
+
+

+ + + + + +
zfssend [-DLPRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPRcenpvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPcenpv] +
+ [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-Penv] + -t receive_resume_token
+
+ + + + + +
zfssend [-Pnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark redaction_snapshot...
+
+
+

+
+
zfs send + [-DLPRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
+ --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
+ --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for blocks which are compressed on disk and in memory (see the compression property for details). If the lz4_compress feature is active on the sending system, then the receiving system must have that feature enabled as well. If the large_blocks feature is enabled on the sending system but the -L option is not supplied in conjunction with -c, then the data will be decompressed before sending so it can be split into smaller block sizes. Streams sent with -c will not have their data recompressed on the receiver side using -o compression=value. The data will stay compressed as it was from the sender. The new compression property will be set for future data.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
+ --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold command), and indicating to + zfs receive that the holds be applied to the dataset + on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPRcenpvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPcenpv] +
+ [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from the snapshot being sent that aren't included in the redaction list contained in the bookmark specified by the --redact (or -d) flag. The resulting send stream is said to be redacted with respect to the snapshots the bookmark specified by the --redact flag was created with. The bookmark must have been created by running zfs redact on the snapshot being sent.

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+

1. To receive, as a clone, an incremental send from the + original snapshot to one of the snapshots it was redacted with respect + to. In this case, the stream will produce a valid dataset when received + because all blocks that were redacted in the parent are guaranteed to be + present in the child's send stream. This use case will produce a normal + snapshot, which can be used just like other snapshots.

+

2. To receive an incremental send from the original snapshot + to something redacted with respect to a subset of the set of snapshots + the initial snapshot was redacted with respect to. In this case, each + block that was redacted in the original is still redacted (redacting + with respect to additional snapshots causes less data to be redacted + (because the snapshots define what is permitted, and everything else is + redacted)). This use case will produce a new redacted snapshot.

+

3. To receive an incremental send from a redaction bookmark of the original snapshot (a bookmark created when redacting with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to) to anything else. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.

+

4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.

+

5. To receive a full send as a clone of the redacted snapshot. + Since the stream is a full send, it definitionally contains all the data + needed to create a new dataset. This use case will either produce a + normal snapshot or a redacted one, depending on whether the full send + stream was redacted.

+

These restrictions are detected and enforced by zfs + receive; a redacted send stream will contain the list of snapshots + that the stream is redacted with respect to. These are stored with the + redacted snapshot, and are used to detect and correctly handle the cases + above. Note that for technical reasons, raw sends and redacted sends + cannot be combined at this time.

+
+
zfs send + [-Penv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs receive -s for more details.
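A hypothetical resume sequence, with the token read on the receiving system and fed back to zfs send on the sending side (all names are placeholders):
TOKEN=$(zfs get -H -o value receive_resume_token backuppool/data)    # run on the receiver
zfs send -t "$TOKEN" | ssh backup.example.com zfs receive -s backuppool/data    # run on the sender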
+
zfs send + [-Pnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
+ -S, --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot...
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for more information on the purpose + of this operation. If a redact operation fails partway through (due to an + error or a system failure), the redaction can be resumed by rerunning the + same command.
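A minimal sketch, assuming tank/db-dev and tank/db-research are clones of tank/db@snap1 in which the sensitive data has been replaced (all names hypothetical):
zfs redact tank/db@snap1 book1 tank/db-dev@snap1 tank/db-research@snap1
This creates the redaction bookmark tank/db#book1 for use with zfs send --redact.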
+
+
+

+

ZFS has support for a limited version of data subsetting, in the form of redaction. Using the zfs redact command, a redaction bookmark can be created that stores a list of blocks containing sensitive information. When provided to zfs send, this causes a redacted send to occur. Redacted sends omit the blocks containing sensitive information, replacing them with REDACT records. When these send streams are received, a redacted dataset is created. A redacted dataset cannot be mounted by default, since it is incomplete. It can be used to receive other send streams. In this way datasets can be used for data backup and replication, with all the benefits that zfs send and receive have to offer, while protecting sensitive information from being stored on less-trusted machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs redact command + with a parent snapshot, a bookmark to be created, and a number of redaction + snapshots. These redaction snapshots must be descendants of the parent + snapshot, and they should modify data that is considered sensitive in some + way. Any blocks of data modified by all of the redaction snapshots will be + listed in the redaction bookmark, because it represents the truly sensitive + information. When it comes to the send step, the send process will not send + the blocks listed in the redaction bookmark, instead replacing them with + REDACT records. When received on the target system, this will create a + redacted dataset, missing the data that corresponds to the blocks in the + redaction bookmark on the sending system. The incremental send streams from + the original parent to the redaction snapshots can then also be received on + the target system, and this will produce a complete snapshot that can be + used normally. Incrementals from one snapshot on the parent filesystem and + another can also be done by sending from the redaction bookmark, rather than + the snapshots themselves.
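Continuing the hypothetical names used above, the send/receive step might look like:
zfs send --redact book1 tank/db@snap1 | ssh target.example.com zfs receive pool/db
zfs send -i tank/db@snap1 tank/db-dev@snap1 | ssh target.example.com zfs receive pool/db-dev
The first command replicates the redacted parent; the second fills in one clone's complete data on the target.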

+

In order to make the purpose of the feature more clear, an example is provided. Consider a zfs filesystem containing four files. These files represent information for an online shopping service. One file contains a list of usernames and passwords, another contains purchase histories, a third contains click tracking data, and a fourth contains user preferences. The owner of this data wants to make it available for their development teams to test against, and their market research teams to do analysis on. The development teams need information about user preferences and the click tracking data, while the market research teams need information about purchase histories and user preferences. Neither needs access to the usernames and passwords. However, because all of this data is stored in one ZFS filesystem, it must all be sent and received together. In addition, the owner of the data wants to take advantage of features like compression, checksumming, and snapshots, so they do want to continue to use ZFS to store and transmit their data. Redaction can help them do so. First, they would make two clones of a snapshot of the data on the source. In one clone, they create the setup they want their market research team to see; they delete the usernames and passwords file, and overwrite the click tracking data with dummy information. In another, they create the setup they want the development teams to see, by replacing the passwords with fake information and replacing the purchase histories with randomly generated ones. They would then create a redaction bookmark on the parent snapshot, using snapshots on the two clones as redaction snapshots. The parent can then be sent, redacted, to the target server where the research and development teams have access. Finally, incremental sends from the parent snapshot to each of the clones can be sent to and received on the target server; these snapshots are identical to the ones on the source, and are ready to be used, while the parent snapshot on the target contains none of the username and password data present on the source, because it was removed by the redacted send operation.

+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-release.8.html b/man/v2.0/8/zfs-release.8.html new file mode 100644 index 000000000..ed50a4281 --- /dev/null +++ b/man/v2.0/8/zfs-release.8.html @@ -0,0 +1,323 @@ + + + + + + + zfs-release.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-release.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-hold — Hold a snapshot to prevent it from being removed with the zfs destroy command.

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot...
+
+ + + + + +
zfsholds [-rH] + snapshot...
+
+ + + + + +
zfsrelease [-r] + tag snapshot...
+
+
+

+
+
zfs hold + [-r] tag + snapshot...
+
Adds a single reference, named with the tag + argument, to the specified snapshot or snapshots. Each snapshot has its + own tag namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rH] snapshot...
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
+
zfs release + [-r] tag + snapshot...
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return + EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
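A hypothetical hold lifecycle (tag, dataset, and snapshot names are placeholders):
zfs hold -r keep tank/home@2021-01-01
zfs holds -r tank/home@2021-01-01
zfs release -r keep tank/home@2021-01-01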
+
+
+
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-rename.8.html b/man/v2.0/8/zfs-rename.8.html new file mode 100644 index 000000000..aa384891a --- /dev/null +++ b/man/v2.0/8/zfs-rename.8.html @@ -0,0 +1,333 @@ + + + + + + + zfs-rename.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rename.8

+
+ + + + + +
ZFS-RENAME(8)System Manager's ManualZFS-RENAME(8)
+
+
+

+

zfs-rename — + Renames the given dataset (filesystem or + snapshot).

+
+
+

+ + + + + +
zfsrename [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
+ + + + + +
zfsrename -p + [-f] + filesystem|volume + filesystem|volume
+
+ + + + + +
zfsrename -u + [-f] filesystem + filesystem
+
+ + + + + +
zfsrename -r + snapshot snapshot
+
+
+

+
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + -p [-f] + filesystem|volume + filesystem|volume
+
 
+
zfs rename + -u [-f] + filesystem filesystem
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
+
+
Force unmount any file systems that need to be unmounted in the + process. This flag has no effect if used together with the + -u flag.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
Do not remount file systems during rename. If a file system's + mountpoint property is set to + + or + , + the file system is not unmounted even if this option is not + given.
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are + the only dataset that can be renamed recursively.
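Two hypothetical invocations (dataset names are placeholders): the first creates missing parents with -p, the second renames a snapshot recursively:
zfs rename -p tank/projects/old tank/archive/2020/old
zfs rename -r tank/home@monday tank/home@2021-01-04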
+
+
+
+ + + + + +
September 1, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-rollback.8.html b/man/v2.0/8/zfs-rollback.8.html new file mode 100644 index 000000000..1517d5d94 --- /dev/null +++ b/man/v2.0/8/zfs-rollback.8.html @@ -0,0 +1,290 @@ + + + + + + + zfs-rollback.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rollback.8

+
+ + + + + +
ZFS-ROLLBACK(8)System Manager's ManualZFS-ROLLBACK(8)
+
+
+

+

zfs-rollback — + Roll back the given dataset to a previous + snapshot.

+
+
+

+ + + + + +
zfsrollback [-Rfr] + snapshot
+
+
+

+
+
zfs rollback + [-Rfr] snapshot
+
When a dataset is rolled back, all data that has changed since the + snapshot is discarded, and the dataset reverts to the state at the time of + the snapshot. By default, the command refuses to roll back to a snapshot + other than the most recent one. In order to do so, all intermediate + snapshots and bookmarks must be destroyed by specifying the + -r option. +

The -rR options do not recursively destroy the child snapshots of a recursive snapshot. Only direct snapshots of the specified filesystem are destroyed by either of these options. To completely roll back a recursive snapshot, you must roll back the individual child snapshots.

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones + of those snapshots.
+
+
Used with the -R option to force an unmount of + any clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
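For example (hypothetical names), discarding everything newer than the monday snapshot, including any intervening snapshots and bookmarks:
zfs rollback -r tank/home@monday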
+
+
+
+
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-send.8.html b/man/v2.0/8/zfs-send.8.html new file mode 100644 index 000000000..70a8f1f04 --- /dev/null +++ b/man/v2.0/8/zfs-send.8.html @@ -0,0 +1,745 @@ + + + + + + + zfs-send.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-send.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + Generate a send stream, which may be of a filesystem, and + may be incremental from a bookmark.

+
+
+

+ + + + + +
zfssend [-DLPRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPRcenpvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPcenpv] +
+ [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-Penv] + -t receive_resume_token
+
+ + + + + +
zfssend [-Pnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark redaction_snapshot...
+
+
+

+
+
zfs send + [-DLPRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
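A minimal sketch of a full send piped to another system (pool, dataset, and host names are hypothetical):
zfs send tank/home@monday | ssh backup.example.com zfs receive backuppool/home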
+
+ --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ -I snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
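For instance (hypothetical names), writing every snapshot between @a and @d to a single stream file:
zfs send -I tank/fs@a tank/fs@d > /backup/fs-a-to-d.zstream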
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.
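A hypothetical replication of a filesystem tree and its properties (add -w as well if the datasets are encrypted; names are placeholders):
zfs send -R tank/home@2021-01-01 | ssh backup.example.com zfs receive -F backuppool/home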

+
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
+ --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for blocks which are compressed on disk and in memory (see the compression property for details). If the lz4_compress feature is active on the sending system, then the receiving system must have that feature enabled as well. If the large_blocks feature is enabled on the sending system but the -L option is not supplied in conjunction with -c, then the data will be decompressed before sending so it can be split into smaller block sizes. Streams sent with -c will not have their data recompressed on the receiver side using -o compress=value. The data will stay compressed as it was from the sender. The new compression property will be set for future data.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
+ --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold command), and indicating to + zfs receive that the holds be applied to the dataset + on the receiving system.
+
+ -i snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPRcenpvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
+ -i snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPcenpv] +
+ [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from the snapshot being sent that aren't included in the redaction list contained in the bookmark specified by the --redact (or -d) flag. The resulting send stream is said to be redacted with respect to the snapshots the bookmark specified by the --redact flag was created with. The bookmark must have been created by running zfs redact on the snapshot being sent.

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+

1. To receive, as a clone, an incremental send from the + original snapshot to one of the snapshots it was redacted with respect + to. In this case, the stream will produce a valid dataset when received + because all blocks that were redacted in the parent are guaranteed to be + present in the child's send stream. This use case will produce a normal + snapshot, which can be used just like other snapshots.

+

2. To receive an incremental send from the original snapshot + to something redacted with respect to a subset of the set of snapshots + the initial snapshot was redacted with respect to. In this case, each + block that was redacted in the original is still redacted (redacting + with respect to additional snapshots causes less data to be redacted + (because the snapshots define what is permitted, and everything else is + redacted)). This use case will produce a new redacted snapshot.

+

3. To receive an incremental send from a redaction bookmark of the original snapshot (a bookmark created when redacting with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to) to anything else. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.

+

4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.

+

5. To receive a full send as a clone of the redacted snapshot. + Since the stream is a full send, it definitionally contains all the data + needed to create a new dataset. This use case will either produce a + normal snapshot or a redacted one, depending on whether the full send + stream was redacted.

+

These restrictions are detected and enforced by zfs + receive; a redacted send stream will contain the list of snapshots + that the stream is redacted with respect to. These are stored with the + redacted snapshot, and are used to detect and correctly handle the cases + above. Note that for technical reasons, raw sends and redacted sends + cannot be combined at this time.

+
+
zfs send + [-Penv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs receive -s for more details.
+
zfs send + [-Pnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
+ -S, --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot...
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for more information on the purpose + of this operation. If a redact operation fails partway through (due to an + error or a system failure), the redaction can be resumed by rerunning the + same command.
+
+
+

+

ZFS has support for a limited version of data subsetting, in the form of redaction. Using the zfs redact command, a redaction bookmark can be created that stores a list of blocks containing sensitive information. When provided to zfs send, this causes a redacted send to occur. Redacted sends omit the blocks containing sensitive information, replacing them with REDACT records. When these send streams are received, a redacted dataset is created. A redacted dataset cannot be mounted by default, since it is incomplete. It can be used to receive other send streams. In this way datasets can be used for data backup and replication, with all the benefits that zfs send and receive have to offer, while protecting sensitive information from being stored on less-trusted machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs redact command + with a parent snapshot, a bookmark to be created, and a number of redaction + snapshots. These redaction snapshots must be descendants of the parent + snapshot, and they should modify data that is considered sensitive in some + way. Any blocks of data modified by all of the redaction snapshots will be + listed in the redaction bookmark, because it represents the truly sensitive + information. When it comes to the send step, the send process will not send + the blocks listed in the redaction bookmark, instead replacing them with + REDACT records. When received on the target system, this will create a + redacted dataset, missing the data that corresponds to the blocks in the + redaction bookmark on the sending system. The incremental send streams from + the original parent to the redaction snapshots can then also be received on + the target system, and this will produce a complete snapshot that can be + used normally. Incrementals from one snapshot on the parent filesystem and + another can also be done by sending from the redaction bookmark, rather than + the snapshots themselves.

+

In order to make the purpose of the feature more clear, an example is provided. Consider a zfs filesystem containing four files. These files represent information for an online shopping service. One file contains a list of usernames and passwords, another contains purchase histories, a third contains click tracking data, and a fourth contains user preferences. The owner of this data wants to make it available for their development teams to test against, and their market research teams to do analysis on. The development teams need information about user preferences and the click tracking data, while the market research teams need information about purchase histories and user preferences. Neither needs access to the usernames and passwords. However, because all of this data is stored in one ZFS filesystem, it must all be sent and received together. In addition, the owner of the data wants to take advantage of features like compression, checksumming, and snapshots, so they do want to continue to use ZFS to store and transmit their data. Redaction can help them do so. First, they would make two clones of a snapshot of the data on the source. In one clone, they create the setup they want their market research team to see; they delete the usernames and passwords file, and overwrite the click tracking data with dummy information. In another, they create the setup they want the development teams to see, by replacing the passwords with fake information and replacing the purchase histories with randomly generated ones. They would then create a redaction bookmark on the parent snapshot, using snapshots on the two clones as redaction snapshots. The parent can then be sent, redacted, to the target server where the research and development teams have access. Finally, incremental sends from the parent snapshot to each of the clones can be sent to and received on the target server; these snapshots are identical to the ones on the source, and are ready to be used, while the parent snapshot on the target contains none of the username and password data present on the source, because it was removed by the redacted send operation.

+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-set.8.html b/man/v2.0/8/zfs-set.8.html new file mode 100644 index 000000000..04bf0d6c1 --- /dev/null +++ b/man/v2.0/8/zfs-set.8.html @@ -0,0 +1,406 @@ + + + + + + + zfs-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-set.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-set — Sets the property or list of properties to the given value(s) for each dataset.

+
+
+

+ + + + + +
zfsset + property=value + [property=value]... + filesystem|volume|snapshot...
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot...
+
+
+

+
+
zfs set + property=value + [property=value]... + filesystem|volume|snapshot...
+
Only some properties can be edited. See zfsprops(8) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(8).
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
    name      Dataset name
+    property  Property name
+    value     Property value
+    source    Property source  local, default, inherited,
+              temporary, received or none (-).
+
+

All columns are displayed by default, though this can be controlled by using the -o option. This command takes a comma-separated list of properties as described in the Native Properties and User Properties sections of zfsprops(8).

+

The value all can be used to display all + properties that apply to the given dataset's type (filesystem, volume, + snapshot, or bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ -d depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ -o field
+
A comma-separated list of columns to display. name,property,value,source is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ -s source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, and none. The default value is all sources.
+
+ -t type
+
A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, bookmark, or all.
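For example (hypothetical pool name), listing the locally set compression values of all filesystems under tank:
zfs get -r -t filesystem -s local -o name,value compression tank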
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot...
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(8) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value if one exists; otherwise + operate as if the -S option was not + specified.
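Two hypothetical invocations (dataset names are placeholders):
zfs inherit -r compression tank/home
zfs inherit -S mountpoint tank/home
The first clears compression on tank/home and all children; the second restores mountpoint to its received value, if any.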
+
+
+
+
+
+

+

zfs-list(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-share.8.html b/man/v2.0/8/zfs-share.8.html new file mode 100644 index 000000000..0435616d2 --- /dev/null +++ b/man/v2.0/8/zfs-share.8.html @@ -0,0 +1,300 @@ + + + + + + + zfs-share.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-share.8

+
+ + + + + +
ZFS-SHARE(8)System Manager's ManualZFS-SHARE(8)
+
+
+

+

zfs-share — Shares and unshares available ZFS filesystems.

+
+
+

+ + + + + +
zfsshare -a | + filesystem
+
+ + + + + +
zfsunshare -a | + filesystem|mountpoint
+
+
+

+
+
zfs share + -a | filesystem
+
Shares available ZFS file systems. +
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
+
+
+
zfs unshare + -a | + filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
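A hypothetical sequence (dataset name is a placeholder): enable NFS sharing via the property, then share and later unshare the filesystem:
zfs set sharenfs=on tank/export
zfs share tank/export
zfs unshare tank/export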
+
+
+
+
+
+

+

exports(5), smb.conf(5), + zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-snapshot.8.html b/man/v2.0/8/zfs-snapshot.8.html new file mode 100644 index 000000000..afc45584f --- /dev/null +++ b/man/v2.0/8/zfs-snapshot.8.html @@ -0,0 +1,291 @@ + + + + + + + zfs-snapshot.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-snapshot.8

+
+ + + + + +
ZFS-SNAPSHOT(8)System Manager's ManualZFS-SNAPSHOT(8)
+
+
+

+

zfs-snapshot — + Creates snapshots with the given names.

+
+
+

+ + + + + +
zfssnapshot [-r] + [-o + property=value]... + filesystem@snapname|volume@snapname...
+
+
+

+
+
zfs + snapshot [-r] + [-o + property=value]... + filesystem@snapname|volume@snapname...
+
All previous modifications by successful system calls to the file system are part of the snapshots. Snapshots are taken atomically, so that all snapshots correspond to the same moment in time. zfs snap can be used as an alias for zfs snapshot. See the Snapshots section of zfsconcepts(8) for details.
+
+ -o property=value
+
Sets the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets
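Combining the options above in a hypothetical invocation (names and the user property are placeholders):
zfs snapshot -r -o com.example:note=pre-upgrade tank/home@2021-01-04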
+
+
+
+
+
+

+

zfs-bookmark(8), zfs-clone(8), + zfs-destroy(8), zfs-diff(8), + zfs-hold(8), zfs-rename(8), + zfs-rollback(8), zfs-send(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-unallow.8.html b/man/v2.0/8/zfs-unallow.8.html new file mode 100644 index 000000000..0b30658bb --- /dev/null +++ b/man/v2.0/8/zfs-unallow.8.html @@ -0,0 +1,540 @@ + + + + + + + zfs-unallow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unallow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + Delegates ZFS administration permission for the file + systems to non-privileged users.

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
-e|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ -g group[,group]...
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ -u user[,user]...
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]...
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]...
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+
+
NAME             TYPE           NOTES
+allow            subcommand     Must also have the permission that is
+                                being allowed
+clone            subcommand     Must also have the 'create' ability and
+                                'mount' ability in the origin file system
+create           subcommand     Must also have the 'mount' ability.
+                                Must also have the 'refreservation' ability to
+                                create a non-sparse volume.
+destroy          subcommand     Must also have the 'mount' ability
+diff             subcommand     Allows lookup of paths within a dataset
+                                given an object number, and the ability
+                                to create snapshots necessary to
+                                'zfs diff'.
+hold             subcommand     Allows adding a user hold to a snapshot
+load-key         subcommand     Allows loading and unloading of encryption key
+                                (see 'zfs load-key' and 'zfs unload-key').
+change-key       subcommand     Allows changing an encryption key via
+                                'zfs change-key'.
+mount            subcommand     Allows mount/umount of ZFS datasets
+promote          subcommand     Must also have the 'mount' and 'promote'
+                                ability in the origin file system
+receive          subcommand     Must also have the 'mount' and 'create'
+                                ability
+release          subcommand     Allows releasing a user hold which might
+                                destroy the snapshot
+rename           subcommand     Must also have the 'mount' and 'create'
+                                ability in the new parent
+rollback         subcommand     Must also have the 'mount' ability
+send             subcommand
+share            subcommand     Allows sharing file systems over NFS
+                                or SMB protocols
+snapshot         subcommand     Must also have the 'mount' ability
+
+groupquota       other          Allows accessing any groupquota@...
+                                property
+groupused        other          Allows reading any groupused@... property
+userprop         other          Allows changing any user property
+userquota        other          Allows accessing any userquota@...
+                                property
+userused         other          Allows reading any userused@... property
+projectobjquota  other          Allows accessing any projectobjquota@...
+                                property
+projectquota     other          Allows accessing any projectquota@... property
+projectobjused   other          Allows reading any projectobjused@... property
+projectused      other          Allows reading any projectused@... property
+
+aclinherit       property
+acltype          property
+atime            property
+canmount         property
+casesensitivity  property
+checksum         property
+compression      property
+copies           property
+devices          property
+exec             property
+filesystem_limit property
+mountpoint       property
+nbmand           property
+normalization    property
+primarycache     property
+quota            property
+readonly         property
+recordsize       property
+refquota         property
+refreservation   property
+reservation      property
+secondarycache   property
+setuid           property
+sharenfs         property
+sharesmb         property
+snapdir          property
+snapshot_limit   property
+utf8only         property
+version          property
+volblocksize     property
+volsize          property
+vscan            property
+xattr            property
+zoned            property
+
+
+
zfs allow + -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
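As a sketch (user, set, and dataset names are hypothetical), a permission set can be defined once and then delegated to a user:
zfs allow -s @backup send,snapshot,hold tank
zfs allow -u alice @backup tank/users/alice
zfs allow tank/users/alice
The last command displays the delegations now in effect on tank/users/alice.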
+
zfs unallow + [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions that were granted with the zfs + allow command. No permissions are explicitly + denied, so other permissions granted are still in effect. For example, if + the permission is granted by an ancestor. If no permissions are specified, + then all permissions for the specified user, + group, or everyone are removed. + Specifying everyone (or using the + -e option) only removes the permissions that were + granted to everyone, not all permissions for every user and group. See the + zfs allow command for a + description of the -ldugec options. +
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-unjail.8.html b/man/v2.0/8/zfs-unjail.8.html new file mode 100644 index 000000000..2c0076e2e --- /dev/null +++ b/man/v2.0/8/zfs-unjail.8.html @@ -0,0 +1,312 @@ + + + + + + + zfs-unjail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unjail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jail — Attaches and detaches ZFS filesystems from FreeBSD jails. A ZFS dataset can be attached to a jail by using the "zfs jail" subcommand. You cannot attach a dataset to one jail and the children of the same dataset to another jail. You can also not attach the root file system of the jail or any dataset which needs to be mounted before the zfs rc script is run inside the jail, as it would be attached unmounted until it is mounted from the rc script inside the jail. To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The jailed property cannot be changed from within a jail. See jail(8) for information on how to allow mounting ZFS datasets from within a jail.

+

A ZFS dataset can be detached from a jail + using the "zfs unjail" subcommand.

+

After a dataset is attached to a jail and the jailed property is + set, a jailed file system cannot be mounted outside the jail, since the jail + administrator might have set the mount point to an unacceptable value.

+
+
+

+ + + + + +
zfsjail + jailid|jailname + filesystem
+
+ + + + + +
zfsunjail + jailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid filesystem
+
+

Attaches the specified filesystem to the jail identified by JID jailid. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

+

See jail(8) for more information on managing + jails and configuring the parameters above.
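A hypothetical attach/detach sequence (the JID and dataset name are placeholders):
zfs set jailed=on tank/jails/www
zfs jail 42 tank/jails/www
zfs unjail 42 tank/jails/www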

+
+
zfs unjail + jailid filesystem
+
+

Detaches the specified filesystem from + the jail identified by JID jailid.

+
+
+
+
+

+

zfsprops(8)

+
+
+ + + + + +
December 9, 2019FreeBSD
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-unload-key.8.html b/man/v2.0/8/zfs-unload-key.8.html new file mode 100644 index 000000000..ac3f8e21c --- /dev/null +++ b/man/v2.0/8/zfs-unload-key.8.html @@ -0,0 +1,473 @@ + + + + + + + zfs-unload-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unload-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + Load, unload, or change the encryption key used to access a + dataset.

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a | filesystem
+
+ + + + + +
zfsunload-key [-r] + -a | filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a | filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. This will cause zfs to + simply check that the provided key is correct. This command may be run + even if the key is already loaded.
+
+ -L keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
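For example (hypothetical dataset name), loading the key for an encryption root and then mounting it:
zfs load-key tank/secure
zfs mount tank/secure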
+
+
+
zfs + unload-key [-r] + -a | filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded + into ZFS. This command may also be used to change the + keylocation, keyformat, and + pbkdf2iters properties as needed. If the dataset was not + previously an encryption root it will become one. Alternatively, the + -i flag may be provided to cause an encryption + root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim --secure if + supported by your hardware, otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to "zfs + load-key filesystem; + zfs change-key + filesystem"
+
+ property=value
+
Allows the user to set encryption key properties ( + keyformat, keylocation, and + pbkdf2iters ) while changing the key. This is the + only way to alter keyformat and + pbkdf2iters after the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
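As a brief usage sketch (the dataset name tank/secure is a placeholder and is assumed to be an existing encryption root using keyformat=passphrase), the subcommands above are commonly combined as follows:
# zfs load-key tank/secure
# zfs mount tank/secure
# zfs change-key -o keyformat=passphrase tank/secure
  when finished, unmount the dataset and discard the key from memory
# zfs unmount tank/secure
# zfs unload-key tank/secure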
+
+
+
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + zvol data, file attributes, ACLs, permission bits, directory listings, FUID + mappings, and + + / + + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the zfs + load-key subcommand for more info on key + loading).

+

Creating an encrypted dataset requires + specifying the encryption and keyformat + properties at creation time, along with an optional + keylocation and pbkdf2iters. After + entering an encryption key, the created dataset will become an encryption + root. Any descendant datasets will inherit their encryption key from the + encryption root by default, meaning that loading, unloading, or changing the + key for the encryption root will implicitly do the same for all inheriting + datasets. If this inheritance is not desired, simply supply a + keyformat when creating the child dataset or use + zfs change-key to break an + existing relationship, creating a new encryption root on the child. Note + that the child's keyformat may match that of the parent + while still creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, and + pbkdf2iters) do not inherit like other ZFS properties and + instead use the value determined by their encryption root. Encryption root + inheritance can be tracked via the read-only + + property.
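For example (a sketch only; the pool and dataset names are placeholders), an encryption root and an inheriting child dataset can be created as follows:
# zfs create -o encryption=on -o keyformat=passphrase tank/secure
# zfs create tank/secure/projects
# zfs get encryptionroot tank/secure/projects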

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only dedup against themselves, their + snapshots, and their clones.

+

There are a few limitations on encrypted + datasets. Encrypted data cannot be embedded via the + + feature. Encrypted datasets may not have + = + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost per block written.

+
+
+
+

+

zfs-create(8), zfs-set(8), + zfsprops(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-unmount.8.html b/man/v2.0/8/zfs-unmount.8.html new file mode 100644 index 000000000..cf17a2f9e --- /dev/null +++ b/man/v2.0/8/zfs-unmount.8.html @@ -0,0 +1,339 @@ + + + + + + + zfs-unmount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unmount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountManage + mount state of ZFS file systems.

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a | filesystem
+
+ + + + + +
zfsunmount [-fu] + -a | + filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] -a | + filesystem
+
Mount ZFS filesystem on a path described by its + mountpoint property, if the path exists and is empty. If + mountpoint is set to + , the + filesystem should be instead mounted using mount(8). +
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(8) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is + equivalent to executing zfs + load-key on each encryption root before + mounting it. Note that if a filesystem has a + + of + + this will cause the terminal to interactively block after asking for + the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] -a | + filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
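A short usage sketch follows; the dataset and path names are placeholders, and tank/secure is assumed to be an encrypted filesystem.
# zfs mount -a
# zfs mount -l tank/secure
# zfs unmount -u tank/secure
# zfs unmount /export/home/bob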
+
+
+
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-upgrade.8.html b/man/v2.0/8/zfs-upgrade.8.html new file mode 100644 index 000000000..3bb0b8b45 --- /dev/null +++ b/man/v2.0/8/zfs-upgrade.8.html @@ -0,0 +1,319 @@ + + + + + + + zfs-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-upgrade.8

+
+ + + + + +
ZFS-UPGRADE(8)System Manager's ManualZFS-UPGRADE(8)
+
+
+

+

zfs-upgrade — + Manage upgrading the on-disk version of + filesystems.

+
+
+

+ + + + + +
zfsupgrade
+
+ + + + + +
zfsupgrade -v
+
+ + + + + +
zfsupgrade [-r] + [-V version] + -a | filesystem
+
+
+

+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] -a | + filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of the software. zfs + send streams generated from new snapshots of these + file systems cannot be accessed on systems running older versions of the + software. +

In general, the file system version is independent of the pool + version. See zpool(8) for information on the + zpool upgrade + command.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.

+
+
+ version
+
Upgrade to the specified version. If the + -V flag is not specified, this command + upgrades to the most recent version. This option can only be used to + increase the version number, and only up to the most recent version + supported by this software.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
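A short usage sketch (the dataset name is a placeholder):
# zfs upgrade
# zfs upgrade -v
# zfs upgrade -r pool/home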
+
+
+
+
+
+

+

zpool-upgrade(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-userspace.8.html b/man/v2.0/8/zfs-userspace.8.html new file mode 100644 index 000000000..2e5a2fe4a --- /dev/null +++ b/man/v2.0/8/zfs-userspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-userspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-userspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + Displays space consumed by, and quotas on, each user or + group in the specified filesystem or snapshot.

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + user, + user, + + and + user + properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (for example, + stat(2), ls + -l) perform this translation, so the + -i option allows the output from + zfs userspace to be + compared directly with those utilities. However, + -i may lead to confusion if some files were + created by an SMB user before a SMB-to-POSIX name mapping was + established. In such a case, some files will be owned by the SMB + entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]...
+
Display only the specified fields from the following set: + type, name, + , + . + The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]...
+
Print only the specified types from the following set: + , + posixuser, smbuser, + posixgroup, smbgroup. The default + is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral rather than a name; consequently, the -i option (SID to POSIX ID translation), the -n option (numeric ID), and the -t option (types) do not apply.
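A brief usage sketch (the dataset name is a placeholder; the field names follow the sets described above):
# zfs userspace pool/home
# zfs userspace -o name,used -s used pool/home
# zfs groupspace -H -p pool/home
# zfs projectspace pool/home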
+
+
+
+

+

zfs-set(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-wait.8.html b/man/v2.0/8/zfs-wait.8.html new file mode 100644 index 000000000..2b9786a65 --- /dev/null +++ b/man/v2.0/8/zfs-wait.8.html @@ -0,0 +1,284 @@ + + + + + + + zfs-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-wait.8

+
+ + + + + +
ZFS-WAIT(8)System Manager's ManualZFS-WAIT(8)
+
+
+

+

zfs-waitWait + for background activity to stop in a ZFS filesystem

+
+
+

+ + + + + +
zfswait [-t + activity[,activity]...] + fs
+
+
+

+
+
zfs wait + [-t + activity[,activity]...] + fs
+
Waits until all background activity of the given types has ceased in the + given filesystem. The activity could cease because it has completed or + because the filesystem has been destroyed or unmounted. If no activities + are specified, the command waits until background activity of every type + listed below has ceased. If there is no activity of the given types in + progress, the command returns immediately. +

These are the possible values for + activity, along with what each one waits for:

+
+
        deleteq       The filesystem's internal delete queue to empty
+
+

Note that the internal delete queue does not finish draining + until all large files have had time to be fully destroyed and all open + file handles to unlinked files are closed.
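For example (the filesystem name is a placeholder), to block until the delete queue of a filesystem has drained:
# zfs wait -t deleteq tank/home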

+
+
+
+
+

+

lsof(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs.8.html b/man/v2.0/8/zfs.8.html new file mode 100644 index 000000000..797d98782 --- /dev/null +++ b/man/v2.0/8/zfs.8.html @@ -0,0 +1,984 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)System Manager's ManualZFS(8)
+
+
+

+

zfsconfigures + ZFS file systems

+
+
+

+ + + + + +
zfs-?V
+
+ + + + + +
zfsversion
+
+ + + + + +
zfs<subcommand> + [<args>]
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace. For + example:

+
+
pool/{filesystem,volume,snapshot}
+
+

where the maximum length of a dataset name is + MAXNAMELEN (256 bytes) and the maximum amount of + nesting allowed in a path is 50 levels deep.

+

A dataset can be one of the following:

+
+
+
A ZFS dataset of type + + can be mounted within the standard system namespace and behaves like other + file systems. While ZFS file systems are designed to be POSIX compliant, + known issues exist that prevent compliance in some cases. Applications + that depend on standards conformance might fail due to non-standard + behavior when checking file system free space.
+
+
A logical volume exported as a raw or block device. This type of dataset + should only be used when a block device is required. File systems are + typically used in most environments.
+
+
A read-only version of a file system or volume at a given point in time. + It is specified as + filesystem@name or + volume@name.
+
+
Much like a snapshot, but without the hold on on-disk + data. It can be used as the source of a send (but not for a receive). It + is specified as + filesystem#name or + volume#name.
+
+

For details see zfsconcepts(8).

+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about properties, see + the zfsprops(8) man page.

+
+
+

+

Enabling the + + feature allows for the creation of encrypted filesystems and volumes. ZFS + will encrypt file and zvol data, file attributes, ACLs, permission bits, + directory listings, FUID mappings, and + + / + + data. For an overview of encryption see the + zfs-load-key(8) command manual.

+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs -V, + --version
+
An alias for the zfs + version subcommand.
+
zfs version
+
Displays the software version of the zfs userland + utility and the zfs kernel module.
+
+
+

+
+
zfs-list(8)
+
Lists the property information for the given datasets in tabular + form.
+
zfs-create(8)
+
Creates a new ZFS file system or volume.
+
zfs-destroy(8)
+
Destroys the given dataset(s), snapshot(s), or bookmark.
+
zfs-rename(8)
+
Renames the given dataset (filesystem or snapshot).
+
zfs-upgrade(8)
+
Manage upgrading the on-disk version of filesystems.
+
+
+
+

+
+
zfs-snapshot(8)
+
Creates snapshots with the given names.
+
zfs-rollback(8)
+
Roll back the given dataset to a previous snapshot.
+
zfs-hold(8) / zfs-release(8)
+
Add or remove a hold reference to the specified snapshot or snapshots. If + a hold exists on a snapshot, attempts to destroy that snapshot by using + the zfs destroy command + return EBUSY.
+
zfs-diff(8)
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem.
+
+
+
+

+
+
zfs-clone(8)
+
Creates a clone of the given snapshot.
+
zfs-promote(8)
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot.
+
+
+
+

+
+
zfs-send(8)
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark.
+
zfs-receive(8)
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the + zfs-send(8) subcommand, which by default creates a full + stream.
+
zfs-bookmark(8)
+
Creates a new bookmark of the given snapshot or bookmark. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs + send command.
+
zfs-redact(8)
+
Generate a new redaction bookmark. This feature can be used to allow + clones of a filesystem to be made available on a remote system, in the + case where their parent need not (or needs to not) be usable.
+
+
+
+

+
+
zfs-get(8)
+
Displays properties for the given datasets.
+
zfs-set(8)
+
Sets the property or list of properties to the given value(s) for each + dataset.
+
zfs-inherit(8)
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists.
+
+
+
+

+
+
zfs-userspace(8) / zfs-groupspace(8) / + zfs-projectspace(8)
+
Displays space consumed by, and quotas on, each user, group, or project in + the specified filesystem or snapshot.
+
zfs-project(8)
+
List, set, or clear project ID and/or inherit flag on the file(s) or + directories.
+
+
+
+

+
+
zfs-mount(8)
+
Displays all ZFS file systems currently mounted, or mount ZFS filesystem + on a path described by its + + property.
+
zfs-unmount(8)
+
Unmounts currently mounted ZFS file systems.
+
+
+
+

+
+
zfs-share(8)
+
Shares available ZFS file systems.
+
zfs-unshare(8)
+
Unshares currently shared ZFS file systems.
+
+
+
+

+
+
zfs-allow(8)
+
Delegate permissions on the specified filesystem or volume.
+
zfs-unallow(8)
+
Remove delegated permissions on the specified filesystem or volume.
+
+
+
+

+
+
zfs-change-key(8)
+
Add or change an encryption key on the specified dataset.
+
zfs-load-key(8)
+
Load the key for the specified encrypted dataset, enabling access.
+
zfs-unload-key(8)
+
Unload a key for the specified dataset, removing the ability to access the + dataset.
+
+
+
+

+
+
zfs-program(8)
+
Execute ZFS administrative operations programmatically via a Lua + script-language channel program.
+
+
+
+

+
+
zfs-jail(8)
+
Attaches a filesystem to a jail.
+
zfs-unjail(8)
+
Detaches a filesystem from a jail.
+
+
+
+

+
+
zfs-wait(8)
+
Wait for background activity in a filesystem to complete.
+
+
+
+
+

+

The zfs utility exits 0 on success, 1 if + an error occurs, and 2 if invalid command line options were specified.

+
+
+

+
+
Creating a ZFS File System Hierarchy
+
The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, + and is automatically inherited by the child file system. +
+
# zfs create pool/home
+# zfs set mountpoint=/export/home pool/home
+# zfs create pool/home/bob
+
+
+
Creating a ZFS Snapshot
+
The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system. +
+
# zfs snapshot pool/home/bob@yesterday
+
+
+
Creating and Destroying Multiple + Snapshots
+
The following command creates snapshots named yesterday + of pool/home and all of its descendent file systems. + Each snapshot is mounted on demand in the + .zfs/snapshot directory at the root of its file + system. The second command destroys the newly created snapshots. +
+
# zfs snapshot -r pool/home@yesterday
+# zfs destroy -r pool/home@yesterday
+
+
+
Disabling and Enabling File System + Compression
+
The following command disables the compression property + for all file systems under pool/home. The next command + explicitly enables compression for + pool/home/anne. +
+
# zfs set compression=off pool/home
+# zfs set compression=on pool/home/anne
+
+
+
Listing ZFS Datasets
+
The following command lists all active file systems and volumes in the + system. Snapshots are displayed if the + + property is + . The + default is + . See + zpool(8) for more information on pool properties. +
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
Setting a Quota on a ZFS File System
+
The following command sets a quota of 50 Gbytes for + pool/home/bob. +
+
# zfs set quota=50G pool/home/bob
+
+
+
Listing ZFS Properties
+
The following command lists all properties for + pool/home/bob. +
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value.

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+ The following command lists all properties with local settings for + pool/home/bob. +
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
Rolling Back a ZFS File System
+
The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots. +
+
# zfs rollback -r pool/home/anne@yesterday
+
+
+
Creating a ZFS Clone
+
The following command creates a writable file system whose initial + contents are the same as + . +
+
# zfs clone pool/home/bob@yesterday pool/clone
+
+
+
Promoting a ZFS Clone
+
The following commands illustrate how to test out changes to a file + system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming: +
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
Inheriting ZFS Properties
+
The following command causes pool/home/bob and + pool/home/anne to inherit the + + property from their parent. +
+
# zfs inherit checksum pool/home/bob pool/home/anne
+
+
+
Remotely Replicating ZFS Data
+
The following commands send a full stream and then an incremental stream + to a remote machine, restoring them into + + and + , + respectively. poolB must contain the file system + poolB/received, and must not initially contain + . +
+
# zfs send pool/fs@a | \
+  ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b | \
+  ssh host zfs receive poolB/received/fs
+
+
+
Using the zfs receive -d Option
+
The following command sends a full stream of + + to a remote machine, receiving it into + . + The + + portion of the received snapshot's name is determined from the name of the + sent snapshot. poolB must contain the file system + poolB/received. If + + does not exist, it is created as an empty file system. +
+
# zfs send poolA/fsA/fsB@snap | \
+  ssh host zfs receive -d poolB/received
+
+
+
Setting User Properties
+
The following example sets the user-defined + + property for a dataset. +
+
# zfs set com.example:department=12345 tank/accounting
+
+
+
Performing a Rolling Snapshot
+
The following example shows how to maintain a history of snapshots with a + consistent naming scheme. To keep a week's worth of snapshots, the user + destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows: +
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
Setting sharenfs Property Options on a ZFS File + System
+
The following commands show how to set + + property options to enable + access + for a set of + addresses + and to enable root access for system + on the + + file system. +
+
# zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
+
+

If you are using + for host name + resolution, specify the fully qualified hostname.

+
+
Delegating ZFS Administration Permissions on a + ZFS Dataset
+
The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots on + tank/cindys. The permissions on + tank/cindys are also displayed. +
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point + access:

+
+
# chmod A+user:cindys:add_subdirectory:allow /tank/cindys
+
+
+
Delegating Create Time Permissions on a ZFS + Dataset
+
The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not to destroy anyone else's file system. The permissions on tank/users are also displayed.
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
Defining and Granting a Permission Set on a ZFS + Dataset
+
The following example shows how to define and grant a permission set on + the tank/users file system. The permissions on + tank/users are also displayed. +
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Delegating Property Permissions on a ZFS + Dataset
+
The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
Removing ZFS Delegated Permissions on a ZFS + Dataset
+
The following example shows how to remove the snapshot permission from the + staff group on the tank/users file + system. The permissions on tank/users are also + displayed. +
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Showing the differences between a snapshot and a + ZFS Dataset
+
The following example shows how to see what has changed between a prior + snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected. +
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
Creating a bookmark
+
The following example creates a bookmark of a snapshot. This bookmark can then be used instead of a snapshot in send streams.
+
# zfs bookmark rpool@snapshot rpool#bookmark
+
+
+
Setting sharesmb Property Options on a ZFS File + System
+
The following example shows how to share an SMB filesystem through ZFS. Note that a user and their password must be given.
+
# smbmount //127.0.0.1/share_tmp /mnt/tmp \
+  -o user=workgroup/turbo,password=obrut,uid=1000
+
+

Minimal + + configuration required:

+

Samba will need to listen to 'localhost' (127.0.0.1) for the + ZFS utilities to communicate with Samba. This is the default behavior + for most Linux distributions.

+

Samba must be able to authenticate a user. This can be done in a number of ways, depending on whether you are using the system password file, LDAP, or the Samba-specific smbpasswd file. How to do this is outside the scope of this manual; please refer to the smb.conf(5) man page for more information.

+

See the + of the smb.conf(5) man page for all + configuration options in case you need to modify any options to the + share afterwards. Do note that any changes done with the + net(8) command will be undone if the share is ever + unshared (such as at a reboot etc).

+
+
+
+
+

+
+
+
Cause zfs mount to use + + to mount zfs datasets. This option is provided for backwards compatibility + with older zfs versions.
+
+
+
+

+

.

+
+
+

+

attr(1), gzip(1), + ssh(1), chmod(2), + fsync(2), stat(2), + write(2), acl(5), + attributes(5), exports(5), + exportfs(8), mount(8), + net(8), selinux(8), + zfs-allow(8), zfs-bookmark(8), + zfs-change-key(8), zfs-clone(8), + zfs-create(8), zfs-destroy(8), + zfs-diff(8), zfs-get(8), + zfs-groupspace(8), zfs-hold(8), + zfs-inherit(8), zfs-jail(8), + zfs-list(8), zfs-load-key(8), + zfs-mount(8), zfs-program(8), + zfs-project(8), zfs-projectspace(8), + zfs-promote(8), zfs-receive(8), + zfs-redact(8), zfs-release(8), + zfs-rename(8), zfs-rollback(8), + zfs-send(8), zfs-set(8), + zfs-share(8), zfs-snapshot(8), + zfs-unallow(8), zfs-unjail(8), + zfs-unload-key(8), zfs-unmount(8), + zfs-upgrade(8), + zfs-userspace(8), zfs-wait(8), + zfsconcepts(8), zfsprops(8), + zpool(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs_ids_to_path.8.html b/man/v2.0/8/zfs_ids_to_path.8.html new file mode 100644 index 000000000..c4750964b --- /dev/null +++ b/man/v2.0/8/zfs_ids_to_path.8.html @@ -0,0 +1,279 @@ + + + + + + + zfs_ids_to_path.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_ids_to_path.8

+
+ + + + + +
ZFS_IDS_TO_PATH(8)System Manager's ManualZFS_IDS_TO_PATH(8)
+
+
+

+

zfs_ids_to_path — + convert objset and object ids to names and paths

+
+
+

+ + + + + +
zfs_ids_to_path[-v] pool + objset id object id
+
+ + + + + +
zfs_ids_to_path
+
+
+

+

The + + utility converts a provided objset and object id into a path to the file + that those ids refer to.

+
+
+
Verbose. Print the dataset name and the file path within the dataset + separately. This will work correctly even if the dataset is not + mounted.
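A brief sketch of typical usage; the pool name and the objset/object ids shown are placeholders (such ids may come, for example, from pool error reports):
# zfs_ids_to_path -v tank 54 12345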
+
+
+
+

+

zfs(8), zdb(8)

+
+
+ + + + + +
April 17, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfsconcepts.8.html b/man/v2.0/8/zfsconcepts.8.html new file mode 100644 index 000000000..6afe9023a --- /dev/null +++ b/man/v2.0/8/zfsconcepts.8.html @@ -0,0 +1,376 @@ + + + + + + + zfsconcepts.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsconcepts.8

+
+ + + + + +
ZFSCONCEPTS(8)System Manager's ManualZFSCONCEPTS(8)
+
+
+

+

zfsconceptsAn + overview of ZFS concepts.

+
+
+

+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system, such as + mounting and unmounting, taking snapshots, and setting properties. The + physical storage characteristics, however, are managed by the + zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. + Snapshots can be created extremely quickly, and initially consume no + additional space within the pool. As data within the active dataset changes, + the snapshot consumes more data than would otherwise be shared with the + active dataset.

+

Snapshots can have arbitrary names. Snapshots of + volumes can be cloned or rolled back, visibility is determined by the + property + of the parent volume.

+

File system snapshots can be accessed under the + .zfs/snapshot directory in the root of the file + system. Snapshots are automatically mounted on demand and may be unmounted + at regular intervals. The visibility of the .zfs + directory can be controlled by the + + property.
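For example (the dataset name and mount point are placeholders), a snapshot can be created and its contents browsed through the hidden .zfs directory:
# zfs snapshot pool/home/bob@monday
# ls /export/home/bob/.zfs/snapshot/monday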

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks cannot be accessed through the filesystem in any way. From a storage standpoint, a bookmark simply provides a distinct object that records the point in time at which a snapshot was created. Bookmarks are initially tied to a snapshot, not the filesystem or volume, and they will survive if the snapshot itself is destroyed. Since they are very lightweight, there is little incentive to destroy them.
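The following sketch shows one common use (all names are placeholders, and the receiving side is assumed to already hold a copy of the monday snapshot from an earlier full send): a bookmark preserves an incremental source even after the snapshot it was created from has been destroyed.
# zfs snapshot pool/data@monday
# zfs bookmark pool/data@monday pool/data#monday
# zfs destroy pool/data@monday
# zfs snapshot pool/data@tuesday
# zfs send -i pool/data#monday pool/data@tuesday | ssh host zfs receive poolB/received/data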

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a + snapshot is cloned, it creates an implicit dependency between the parent and + child. Even though the clone is created somewhere else in the dataset + hierarchy, the original snapshot cannot be destroyed as long as a clone + exists. The + property exposes this dependency, and the destroy + command lists any such dependencies, if they exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.

+
+
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set in + the mountpoint property. This directory is created as + needed, and ZFS automatically mounts the file system when the + zfs mount + -a command is invoked (without editing + /etc/fstab). The mountpoint + property can be inherited, so if + has a + mount point of /export/stuff, then + + automatically inherits a mount point of + /export/stuff/user.

+

A file system mountpoint property of + prevents the + file system from being mounted.

+

If needed, ZFS file systems can also be managed with + traditional tools (mount, + umount, /etc/fstab). If a + file system's mount point is set to + , ZFS makes + no attempt to manage the file system, and the administrator is responsible + for mounting and unmounting the file system. Because pools must be imported + before a legacy mount can succeed, administrators should ensure that legacy + mounts are only attempted after the zpool import process finishes at boot + time. For example, on machines using systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that the zfs-import completes before systemd attempts + mounting the filesystem. See systemd.mount(5) for details.
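For example (the dataset and directory names are placeholders), a legacy mount point managed outside of ZFS might look like this, either run by hand or expressed as an /etc/fstab entry:
# zfs set mountpoint=legacy pool/home/bob
# mount -t zfs pool/home/bob /mnt/bob
  or, in /etc/fstab:
pool/home/bob  /mnt/bob  zfs  defaults,x-systemd.requires=zfs-import.target  0  0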

+
+
+

+

Deduplication is the process for removing redundant data at the + block level, reducing the total amount of data stored. If a file system has + the + + property enabled, duplicate data blocks are removed synchronously. The + result is that only unique data is stored and common components are shared + among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow IO and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk IO.

+

Before creating a pool with deduplication + enabled, ensure that you have planned your hardware requirements + appropriately and implemented appropriate recovery practices, such as + regular backups. As an alternative to deduplication consider using + , + as a less resource-intensive alternative.
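If deduplication is nevertheless desired, the following is a minimal sketch of enabling it and checking its effect (the pool and dataset names are placeholders):
# zfs set dedup=on tank/data
# zpool list -o name,size,allocated,dedupratio tank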

+
+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfsprops.8.html b/man/v2.0/8/zfsprops.8.html new file mode 100644 index 000000000..43204a80d --- /dev/null +++ b/man/v2.0/8/zfsprops.8.html @@ -0,0 +1,1534 @@ + + + + + + + zfsprops.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfsprops.8

+
+ + + + + +
ZFSPROPS(8)System Manager's ManualZFSPROPS(8)
+
+
+

+

zfsprops — Native and user-defined properties of ZFS datasets.

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using + human-readable suffixes (for example, + , + , + , + , and so + forth, up to + for zettabyte). The following are all valid (and equal) specifications: + 1536M, 1.5g, 1.50GB.
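For instance (the dataset name is a placeholder), the following commands set the same 1.5 gigabyte quota, and zfs get -p reports the stored value in exact bytes:
# zfs set quota=1536M pool/home/bob
# zfs set quota=1.5g pool/home/bob
# zfs get -p quota pool/home/bob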

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
For encrypted datasets, indicates where the dataset is currently + inheriting its encryption key from. Loading or unloading a key for the + encryptionroot will implicitly load / unload the key for + any inheriting datasets (see zfs + load-key and zfs + unload-key for details). Clones will always share + an encryption key with their origin. See the Encryption + section of zfs-load-key(8) for details.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
Indicates if an encryption key is currently loaded into ZFS. The possible + values are none, available, and + . + See zfs load-key and + zfs unload-key.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For file systems, indicates whether the file system is currently mounted. + This property can be either + or + .
+
+
A unique identifier for this dataset within the pool. Unlike the dataset's + guid , the objsetid of a dataset is + not transferred to other pools when the snapshot is copied with a + send/receive operation. The objsetid can be reused (for + a new dataset) after the dataset is deleted.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive -s, this opaque token can be provided to + zfs send -t to resume and complete the zfs + receive.
+
+
For bookmarks, this is the list of snapshot guids the bookmark contains a + redaction list for. For snapshots, this is the list of snapshot guids the + snapshot is redacted with respect to.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: filesystem, + , + snapshot, or + .
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section of zfsconcepts(8)) + is space that is referenced exclusively by this snapshot. If this + snapshot is destroyed, the amount of used space will + be freed. Space that is shared by multiple snapshots isn't accounted for + in this metric. When a snapshot is destroyed, space that was previously + shared with this snapshot can become unique to snapshots adjacent to it, + thus changing the used space of those snapshots. The used space of the + latest snapshot can also be affected by changes in the file system. Note + that the used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced does not + take into account pending changes. Pending changes are generally + accounted for within a few seconds. Committing a change to a disk using + fsync(2) or O_SYNC does not + necessarily guarantee that the space usage information is updated + immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du and + ls -s. See the + zfs userspace subcommand + for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.

+

The userused@... + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the following forms:

+ +

Files created on Linux always have POSIX owners.

+
+
@user
+
The userobjused property is similar to userused, but it instead counts the number of objects consumed by a user. This property counts all objects allocated on behalf of the user; it may differ from the results of system tools such as df -i.

When the property xattr=on is set on a file + system additional objects will be created per-file to store extended + attributes. These additional objects are reflected in the + userobjused value and are counted against the user's + userobjquota. When a file system is configured to use + xattr=sa no additional internal objects are normally + required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
@project
+
The amount of space consumed by the specified project in this dataset. A project is identified via the project identifier (ID), an object-based numeric attribute. An object can inherit the project ID from its parent object when it is created, provided the parent has the inherit-project-ID flag set (this flag can be set and changed via chattr -/+P or zfs project -s). A privileged user can set and change an object's project ID via chattr -p or zfs project -s at any time. Space is charged to the project of each file, as displayed by lsattr -p or zfs project. See the userused@user property for more information.

The root user, or a user who has been granted the + projectused privilege with zfs + allow, can access all projects' usage.

+
+
@project
+
The projectobjused property is similar to projectused, but it instead counts the number of objects consumed by the project. When the property xattr=on is set on a fileset, ZFS will create additional objects per file to store extended attributes. These additional objects are reflected in the projectobjused value and are counted against the project's projectobjquota. When a filesystem is configured to use xattr=sa, no additional internal objects are required. See the userobjused@user property for more information.

The root user, or a user who has been granted the + projectobjused privilege with zfs + allow, can access all projects' objects usage.

+
+
volblocksize=blocksize
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 8 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its + shortened column name, + .
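For example, a volume intended for a workload that issues 16K I/Os might be created with a matching block size; the pool and volume names below are hypothetical:
# zfs create -o volblocksize=16K -V 10G pool/vol16k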

+
+
written
+
The amount of space referenced by this dataset that was written since the previous snapshot (i.e., space that is not referenced by the previous snapshot).
+
written@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which for + clones may be a snapshot in the origin's filesystem (or the origin of + the origin's filesystem, etc.)

+
+
+

The following native properties can be used to change the behavior + of a ZFS dataset.

+
+
aclinherit=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
does not inherit any ACEs.
+
+
only inherits inheritable ACEs that specify "deny" + permissions.
+
+
default, removes the + + and + + permissions when the ACE is inherited.
+
+
inherits all inheritable ACEs without any modifications.
+
+
same meaning as passthrough, except that the + , + , + and + + ACEs inherit the execute permission only if the file creation mode + also requests the execute bit.
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + POSIX ACLs.

+
+
aclmode=discard|groupmask|passthrough|restricted
+
Controls how an ACL is modified during chmod(2) and how inherited ACEs are + modified by the file creation mode. +
+
+
default, deletes all + + except for those representing the mode of the file or directory + requested by chmod(2).
+
+
reduces permissions granted in all + + entries found in the + + such that they are no greater than the group permissions specified by + chmod(2).
+
+
indicates that no changes are made to the ACL other than creating or + updating the necessary ACL entries to represent the new mode of the + file or directory.
+
+
will cause the chmod(2) operation to return an error + when used on any file or directory which has a non-trivial ACL whose + entries can not be represented by a mode. chmod(2) + is required to change the set user ID, set group ID, or sticky bits on + a file or directory, as they do not have equivalent ACL entries. In + order to use chmod(2) on a file or directory with a + non-trivial ACL when aclmode is set to + restricted, you must first remove all ACL entries + which do not represent the current mode.
+
+
+
acltype=off|nfsv4|posix
+
Controls whether ACLs are enabled and if so what type of ACL to use. When + this property is set to a type of ACL not supported by the current + platform, the behavior is the same as if it were set to + off. +
+
+
default on Linux, when a file system has the acltype + property set to off then ACLs are disabled.
+
+
an alias for off
+
+
default on FreeBSD, indicates that NFSv4-style ZFS ACLs should be + used. These ACLs can be managed with the getfacl(1) + and setfacl(1) commands on FreeBSD. The + nfsv4 ZFS ACL type is not yet supported on + Linux.
+
+
indicates POSIX ACLs should be used. POSIX ACLs are specific to Linux + and are not functional on other platforms. POSIX ACLs are stored as an + extended attribute and therefore will not overwrite any existing NFSv4 + ACLs which may be set.
+
+
an alias for posix
+
+

To obtain the best performance when setting + posix users are strongly encouraged to set the + xattr=sa property. This will result in the POSIX ACL + being stored more efficiently on disk. But as a consequence, all new + extended attributes will only be accessible from OpenZFS implementations + which support the xattr=sa property. See the + xattr property for more details.
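For example, a file system intended to use POSIX ACLs might be configured as follows (pool/fs is a hypothetical dataset name):
# zfs set acltype=posix pool/fs
# zfs set xattr=sa pool/fs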

+
+
atime=on|off
+
Controls whether the access time for files is updated when they are read. + Turning this property off avoids producing write traffic when reading + files and can result in significant performance gains, though it might + confuse mailers and other similar utilities. The values + on and off are equivalent to the + atime and + + mount options. The default value is on. See also + relatime below.
+
canmount=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.
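As a sketch of the two-dataset layout described above (poolA and poolB are hypothetical pool names), both parents stay unmounted while their children share one directory tree:
# zfs create -o canmount=off -o mountpoint=/export/stuff poolA/stuff
# zfs create -o canmount=off -o mountpoint=/export/stuff poolB/stuff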

+
+
checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, and + edonr checksum algorithms require enabling the + appropriate features on the pool. FreeBSD does not support the + edonr algorithm.

+

Please see zpool-features(5) for more + information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
compression=on|off|gzip|gzip-N|lz4|lzjb|zle|zstd|zstd-N|zstd-fast|zstd-fast-N
+
Controls the compression algorithm used for this dataset. +

Setting compression to on indicates that the current default compression algorithm should be used. The default balances compression and decompression speed with compression ratio, and is expected to work well on a wide variety of workloads. Unlike all other settings for this property, on does not select a fixed compression type. As new compression algorithms are added to ZFS and enabled on a pool, the default compression algorithm may change. The current default compression algorithm is either lzjb or, if the lz4_compress feature is enabled, lz4.

+

The lz4 compression algorithm is a high-performance replacement for the lzjb algorithm. It features significantly faster compression and decompression, as well as a moderately higher compression ratio than lzjb, but can only be used on pools with the lz4_compress feature set to enabled. See zpool-features(5) for details on ZFS feature flags and the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

+

The zstd compression algorithm provides both high compression ratios and good performance. You can specify the zstd level by using the value zstd-N, where N is an integer from 1 (fastest) to 19 (best compression ratio). zstd is equivalent to zstd-3.

+

Faster speeds at the cost of the compression ratio can be requested by setting a negative zstd level. This is done using zstd-fast-N, where N is an integer in [1-9,10,20,30,...,100,500,1000] which maps to a negative zstd level. The lower the level the faster the compression - 1000 provides the fastest compression and lowest compression ratio. zstd-fast is equivalent to zstd-fast-1.

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its + shortened column name + . + Changing this property affects only newly-written data.

+

When any setting except off is selected, + compression will explicitly check for blocks consisting of only zeroes + (the NUL byte). When a zero-filled block is detected, it is stored as a + hole and not compressed using the indicated compression algorithm.

+

Any block being compressed must be no larger than 7/8 of its + original size after compression, otherwise the compression will not be + considered worthwhile and the block saved uncompressed. Note that when + the logical block is less than 8 times the disk sector size this + effectively reduces the necessary compression ratio; for example 8k + blocks on disks with 4k disk sectors must compress to 1/2 or less of + their original size.
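For example, compression could be enabled on an existing dataset (pool/data is a hypothetical name; only data written after the change is compressed):
# zfs set compression=lz4 pool/data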

+
+
context=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
fscontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the file system being mounted. See selinux(8) for more information.
+
defcontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
rootcontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
copies=1|2|3
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a + missing top-level vdev. Do NOT create, for example a + two-disk striped pool and set + on + some datasets thinking you have setup redundancy for them. When a disk + fails you will not be able to import the pool and will have lost all of + your data.

+

Encrypted datasets may not have + copies=3 since the implementation + stores some encryption metadata where the third copy would normally + be.
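For example, extra copies are typically requested at creation time so that all data written to the dataset is covered (pool/important is a hypothetical dataset name):
# zfs create -o copies=2 pool/important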

+
+
devices=on|off
+
Controls whether device nodes can be opened on this file system. The + default value is on. The values on and + off are equivalent to the dev and + + mount options.
+
dedup=off|on|verify|sha256[,verify]|sha512[,verify]|skein[,verify]|edonr,verify
+
Configures deduplication for a dataset. The default value is off. The default deduplication checksum is sha256 (this may change in the future). When dedup is enabled, the checksum defined here overrides the checksum property. Setting the value to verify has the same effect as the setting sha256,verify.

If set to verify, ZFS will do a byte-to-byte + comparison in case of two blocks having the same signature to make sure + the block contents are identical. Specifying verify is + mandatory for the edonr algorithm.

+

Unless necessary, deduplication should NOT + be enabled on a system. See the + + section of zfsconcepts(8).

+
+
dnodesize=legacy|auto|1k|2k|4k|8k|16k
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy requires the + large_dnode pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the workload makes heavy + use of extended attributes. This may be applicable to SELinux-enabled + systems, Lustre servers, and Samba servers, for example. Literal values + are supported for cases where the optimal size is known in advance and + for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode feature, or if you + need to import this pool on a system that doesn't support the + large_dnode feature.

+

This property can also be referred to by its + shortened column name, + .

+
+
encryption=off|on|aes-128-ccm|aes-192-ccm|aes-256-ccm|aes-128-gcm|aes-192-gcm|aes-256-gcm
+
Controls the encryption cipher suite (block cipher, key length, and mode) + used for this dataset. Requires the encryption feature + to be enabled on the pool. Requires a keyformat to be + set at dataset creation time. +

Selecting encryption=on + when creating a dataset indicates that the default encryption suite will + be selected, which is currently aes-256-gcm. In order + to provide consistent data protection, encryption must be specified at + dataset creation time and it cannot be changed afterwards.

+

For more details and caveats about encryption see the + Encryption section of + zfs-load-key(8).
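For example, an encrypted file system might be created as follows, assuming the encryption feature is enabled on the pool; pool/secure is a hypothetical name and the command prompts for a passphrase:
# zfs create -o encryption=on -o keyformat=passphrase pool/secure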

+
+
keyformat=raw|hex|passphrase
+
Controls what format the user's encryption key will be provided as. This + property is only set when the dataset is encrypted. +

Raw keys and hex keys must be 32 bytes long (regardless of the + chosen encryption suite) and must be randomly generated. A raw key can + be generated with the following command:

+
+
# dd if=/dev/urandom of=/path/to/output/key bs=32 count=1
+
+

Passphrases must be between 8 and 512 bytes long and will be + processed through PBKDF2 before being used (see the + pbkdf2iters property). Even though the encryption + suite cannot be changed after dataset creation, the keyformat can be + with zfs change-key.

+
+
keylocation=prompt|file:///absolute/file/path
+
Controls where the user's encryption key will be loaded from by default for commands such as zfs load-key and zfs mount -l. This property is only set for encrypted datasets which are encryption roots. If unspecified, the default is prompt.

Even though the encryption suite cannot be changed after + dataset creation, the keylocation can be with either + zfs set or + zfs change-key. If + prompt is selected ZFS will ask for the key at the + command prompt when it is required to access the encrypted data (see + zfs load-key for + details). This setting will also allow the key to be passed in via + STDIN, but users should be careful not to place keys which should be + kept secret on the command line. If a file URI is selected, the key will + be loaded from the specified absolute file path.

+
+
pbkdf2iters=iterations
+
Controls the number of PBKDF2 iterations that a passphrase encryption key should be run through when processing it into an encryption key. This property is only defined when encryption is enabled and a keyformat of passphrase is selected. The goal of PBKDF2 is to significantly increase the computational difficulty needed to brute force a user's passphrase. This is accomplished by forcing the attacker to run each passphrase through a computationally expensive hashing function many times before they arrive at the resulting key. A user who actually knows the passphrase will only have to pay this cost once. As CPUs become better at processing, this number should be raised to ensure that a brute force attack is still not possible. The current default is 350000 and the minimum is 100000. This property may be changed with zfs change-key.
+
exec=on|off
+
Controls whether processes can be executed from within this file system. + The default value is on. The values on + and off are equivalent to the exec and + + mount options.
+
filesystem_limit=count|none
+
Limits the number of filesystems and volumes that can exist under this point in the dataset tree. The limit is not enforced if the user is allowed to change the limit. Setting a filesystem_limit on a descendent of a filesystem that already has a filesystem_limit does not override the ancestor's filesystem_limit, but rather imposes an additional limit. This feature must be enabled to be used (see zpool-features(5)).
+
special_small_blocks=size
+
This value represents the threshold block size for including small file blocks into the special allocation class. Blocks smaller than or equal to this value will be assigned to the special allocation class while larger blocks will be assigned to the regular class. Valid values are zero or a power of two from 512B up to 1M. The default size is 0 which means no small file blocks will be allocated in the special class. +

Before setting this property, a special class vdev must be + added to the pool. See zpoolconcepts(8) for more + details on the special allocation class.

+
+
mountpoint=path|none|legacy
+
Controls the mount point used for this file system. See the + section of zfsconcepts(8) for more + information on how this property is used. +

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none, or if they were mounted before the property + was changed. In addition, any shared file systems are unshared and + shared in the new location.
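For example (pool/data and /export/data are hypothetical names):
# zfs set mountpoint=/export/data pool/data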

+
+
nbmand=on|off
+
Controls whether the file system should be mounted with nbmand (Non Blocking mandatory locks). This is used for SMB clients. Changes to this property only take effect when the file system is unmounted and remounted. See mount(8) for more information on nbmand mounts. This property is not used on Linux.
+
overlay=on|off
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux and + FreeBSD file systems. On these platforms the property is + on by default. Set to off to disable + overlay mounts for consistency with OpenZFS on other platforms.
+
primarycache=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is + set to all, then both user data and metadata is cached. + If this property is set to none, then neither user data + nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
quota=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.
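For example (pool/home is a hypothetical dataset name):
# zfs set quota=100G pool/home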

+
+
snapshot_limit=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(5)).
+
userquota@user=size|none
+
Limits the amount of space consumed by the specified user. User space + consumption is identified by the + user + property. +

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace subcommand + for more information.

+

Unprivileged users can only access their own quota. The root user, or a user who has been granted the userquota privilege with zfs allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@... properties are not + displayed by zfs get + all. The user's name must be appended after the + @ symbol, using one of the following forms:

POSIX name (for example, joe)
POSIX numeric ID (for example, 789)
SID name (for example, joe.smith@mydomain)
SID numeric ID (for example, S-1-123-456-789)

Files created on Linux always have POSIX owners.
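For example, a quota could be set for a hypothetical user alice on a hypothetical dataset pool/home:
# zfs set userquota@alice=10G pool/home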

+
+
userobjquota@user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
groupquota@group=size|none
+
Limits the amount of space consumed by the specified group. Group space + consumption is identified by the + group + property. +

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
groupobjquota@group=size|none
+
The groupobjquota is similar to groupquota but it limits the number of objects a group can consume. Please refer to userobjused for more information about how objects are counted.
+
projectquota@project=size|none
+
Limits the amount of space consumed by the specified project. Project + space consumption is identified by the + project + property. Please refer to projectused for more + information about how project is identified and set/changed. +

The root user, or a user who has been granted the + projectquota privilege with zfs + allow, can access all projects' quota.
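As a sketch, a directory could be assigned a project ID and that project then limited; the directory, dataset name, and ID 100 are hypothetical:
# zfs project -s -p 100 /pool/data/projectX
# zfs set projectquota@100=50G pool/data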

+
+
projectobjquota@project=size|none
+
The projectobjquota is similar to + projectquota but it limits number of objects a project + can consume. Please refer to userobjused for more + information about how objects are counted.
+
readonly=on|off
+
Controls whether this dataset can be modified. The default value is + off. The values on and + off are equivalent to the + and + mount + options. +

This property can also be referred to by its + shortened column name, + .

+
+
recordsize=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.

+

The size specified must be a power of two + greater than or equal to 512 and less than or equal to 128 Kbytes. If + the + + feature is enabled on the pool, the size may be up to 1 Mbyte. See + zpool-features(5) for details on ZFS feature + flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.
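For example, a dataset holding database files might be tuned to the database's page size (pool/db and 16K are hypothetical, illustrative values):
# zfs set recordsize=16K pool/db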

+

This property can also be referred to by its + shortened column name, + .

+
+
redundant_metadata=all|most
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 100 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

The default value is all.

+
+
refquota=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
refreservation=size|none|auto
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

If refreservation is set to + auto, a volume is thick provisioned (or "not + sparse"). refreservation=auto + is only supported on volumes. See volsize in the + Native Properties section + for more information about sparse volumes.

+

This property can also be referred to by its + shortened column name, + .

+
+
relatime=on|off
+
Controls the manner in which the access time is updated when + + is set. Turning this property on causes the access time to be updated + relative to the modify or change time. Access time is only updated if the + previous access time was earlier than the current modify or change time or + if the existing access time hasn't been updated within the past 24 hours. + The default value is off. The values + on and off are equivalent to the + relatime and + + mount options.
+
reservation=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its + shortened column name, + .

+
+
secondarycache=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property + is set to all, then both user data and metadata is + cached. If this property is set to none, then neither + user data nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
setuid=on|off
+
Controls whether the setuid bit is respected for the file system. The + default value is on. The values on and + off are equivalent to the + and + nosuid mount options.
+
sharesmb=on|off|opts
+
Controls whether the file system is shared by using Samba USERSHARES and what options are to be used. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the net(8) command is invoked to create a USERSHARE. +

Because SMB shares require a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name except that the characters in the dataset name, which would be invalid in the resource name, are replaced with underscore (_) characters. Linux does not currently support additional options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", i.e. read and write permissions) and no guest access (which means Samba must be able to authenticate a real user via system passwd/shadow, LDAP or smbpasswd) by default. This means that any additional access control (disallowing access for specific users, etc.) must be done on the underlying file system.

+
+
sharenfs=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are to be used. A file system with a sharenfs property of off is managed with the exportfs(8) command and entries in the /etc/exports file. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the dataset is shared using the default options:

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.
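For example, on Linux a dataset might be exported read-write to one subnet using exports(5)-style options; the dataset name and subnet are hypothetical:
# zfs set sharenfs='rw=@192.168.1.0/24' pool/export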

+
+
logbias=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
+
snapdev=hidden|visible
+
Controls whether the volume snapshot devices under /dev/zvol/<pool> are hidden or visible. The default value is hidden.
+
snapdir=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section of zfsconcepts(8). + The default value is hidden.
+
sync=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
+
version=N|current
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
volsize=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse volume" (also known as "thin provisioned") can be created by specifying the -s option to the zfs create -V command, or by changing the value of the refreservation property (or reservation property on pool version 8 or earlier) after the volume has been created. A "sparse volume" is a volume where the value of refreservation is less than the size of the volume plus the space required to store its metadata. Consequently, writes to a sparse volume can fail with ENOSPC when the pool is low on space. For a sparse volume, changes to volsize are not reflected in the refreservation. A volume that is not sparse is said to be "thick provisioned". A sparse volume can become thick provisioned by setting refreservation to auto.
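For example, a sparse (thin provisioned) volume could be created as follows (pool/images/vm1 is a hypothetical name):
# zfs create -s -V 100G pool/images/vm1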

+
+
volmode=default|full|geom|dev|none
+
This property specifies how volumes should be exposed to the OS. Setting it to full exposes volumes as fully fledged block devices, providing maximal functionality. The value geom is just an alias for full and is kept for compatibility. Setting it to dev hides its partitions. Volumes with the property set to none are not exposed outside ZFS, but can be snapshotted, cloned, replicated, and so on, which can be suitable for backup purposes. The value default means that volume exposure is controlled by the system-wide tunable zvol_volmode, where full, dev and none are encoded as 1, 2 and 3 respectively. The default value is full.
+
vscan=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used on Linux.
+
xattr=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two + styles of extended attributes are supported: either directory based or + system attribute based. +

The default value of on enables directory based extended attributes. This style of extended attribute imposes no practical limit on either the size or number of attributes which can be set on a file, although under Linux the getxattr(2) and setxattr(2) system calls limit the maximum size to 64K. This is the most compatible style of extended attribute and is supported by all ZFS implementations.

+

System attribute based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk IO required. Up to + 64K of data may be stored per-file in the space reserved for system + attributes. If there is not enough space available for an extended + attribute then it will be automatically written as a directory based + xattr. System attribute based extended attributes are not accessible on + platforms which do not support the xattr=sa feature. + OpenZFS supports xattr=sa on both FreeBSD and + Linux.

+

The use of system attribute based xattrs is strongly + encouraged for users of SELinux or POSIX ACLs. Both of these features + heavily rely on extended attributes and benefit significantly from the + reduced access time.

+

The values on and + off are equivalent to the xattr and + mount + options.

+
+
jailed=off|on
+
Controls whether the dataset is managed from a jail. See the + "Jails" section in + zfs(8) for more information. Jails are a FreeBSD feature + and are not relevant on other platforms. The default value is + off.
+
zoned=on|off
+
Controls whether the dataset is managed from a non-global zone. Zones are + a Solaris feature and are not relevant on other platforms. The default + value is off.
+
+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.

+
+
casesensitivity=sensitive|insensitive|mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".

+
+
normalization=none|formC|formD|formKC|formKD
+
Indicates whether the file system should perform a Unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+
utf8only=on|off
+
Indicates whether the file system should reject file names that include characters that are not present in the UTF-8 character code set. If this property is explicitly set to off, the normalization property must either not be explicitly set or be set to none. The default value for the utf8only property is off. This property cannot be changed after the file system is created.
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.

+
+
+

+

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
    PROPERTY                MOUNT OPTION
+    atime                   atime/noatime
+    canmount                auto/noauto
+    devices                 dev/nodev
+    exec                    exec/noexec
+    readonly                ro/rw
+    relatime                relatime/norelatime
+    setuid                  suid/nosuid
+    xattr                   xattr/noxattr
+
+

In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The nosuid option is an alias for nodevices,nosetuid. These properties are reported as "temporary" by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.

+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is + strongly suggested to use a reversed + domain name for + the module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.
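For example, a site-specific annotation could be set and later cleared; the reversed-domain prefix com.example and the property name are hypothetical:
# zfs set com.example:backup-policy=weekly pool/data
# zfs inherit com.example:backup-policy pool/data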

+
+
+
+ + + + + +
May 5, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zgenhostid.8.html b/man/v2.0/8/zgenhostid.8.html new file mode 100644 index 000000000..19896cc61 --- /dev/null +++ b/man/v2.0/8/zgenhostid.8.html @@ -0,0 +1,331 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)System Manager's Manual (smm)ZGENHOSTID(8)
+
+
+

+

zgenhostid — generate and store a hostid in /etc/hostid

+
+
+

+ + + + + +
zgenhostid[-f] [-o + filename] [hostid]
+
+
+

+

Creates /etc/hostid file and stores hostid + in it. If the user provides [hostid] on the command + line, validates and stores that value. Otherwise, randomly generates a value + to store.

+
+
+
Display a summary of the command-line options.
+
+
Force file overwrite.
+
+ filename
+
Write to filename instead of default + /etc/hostid
+
hostid
+
Specifies the value to be placed in /etc/hostid. It should be a number with a value between 1 and 2^32-1. If it is 0, zgenhostid will generate a random hostid. This value must be unique among your systems. It must be expressed in hexadecimal and be exactly 8 digits long, optionally prefixed by 0x.
+
+
+
+

+

/etc/hostid

+
+
+

+
+
Generate a random hostid and store it
+
+
+
# zgenhostid
+
+
+
Record the libc-generated hostid in + /etc/hostid
+
+
+
# zgenhostid "$(hostid)"
+
+
+
Record a custom hostid (0xdeadbeef) in + /etc/hostid
+
+
+
# zgenhostid deadbeef
+
+
+
Record a custom hostid (0x01234567) in + /tmp/hostid
+
and overwrite the file if it exists +
+
# zgenhostid -f -o /tmp/hostid 0x01234567
+
+
+
+
+
+

+

genhostid(1), hostid(1), + sethostid(3), + spl-module-parameters(5)

+
+
+

+

zgenhostid emulates the + genhostid(1) utility and is provided for use on systems + which do not include the utility or do not provide + sethostid(3) call.

+
+
+ + + + + +
March 18, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zinject.8.html b/man/v2.0/8/zinject.8.html new file mode 100644 index 000000000..2435c226c --- /dev/null +++ b/man/v2.0/8/zinject.8.html @@ -0,0 +1,403 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
ZINJECT(8)System Manager's ManualZINJECT(8)
+
+

+
+

+

zinject - ZFS Fault Injector

+
+
+

+

zinject creates artificial problems in a ZFS pool by + simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+
List injection records.
+
zinject -b objset:object:level:blkd [-f + frequency] [-amu] pool
+
Force an error into the pool at a bookmark.
+
zinject -c <id | all>
+
Cancel injection records.
+
zinject -d vdev -A <degrade|fault> + pool
+
Force a vdev into the DEGRADED or FAULTED state.
+
zinject -d vdev -D latency:lanes + pool
+
+

Add an artificial delay to IO requests on a particular device, + such that the requests take a minimum of 'latency' milliseconds to + complete. Each delay has an associated number of 'lanes' which defines + the number of concurrent IO requests that can be processed.

+

For example, with a single lane delay of 10 ms (-D 10:1), the + device will only be able to service a single IO request at a time with + each request taking 10 ms to complete. So, if only a single request is + submitted every 10 ms, the average latency will be 10 ms; but if more + than one request is submitted every 10 ms, the average latency will be + more than 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D 10:2), then the device will be able to service two requests at a + time, each with a minimum latency of 10 ms. So, if two requests are + submitted every 10 ms, then the average latency will be 10 ms; but if + more than two requests are submitted every 10 ms, the average latency + will be more than 10 ms.

+

Also note, these delays are additive. So two invocations of + '-D 10:1', is roughly equivalent to a single invocation of '-D 10:2'. + This also means, one can specify multiple lanes with differing target + latencies. For example, an invocation of '-D 10:1' followed by '-D 25:2' + will create 3 lanes on the device; one lane with a latency of 10 ms and + two lanes with a 25 ms latency.
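As an illustrative (and dangerous, test-pool-only) sketch using hypothetical device and pool names, the following adds two 25 ms lanes to device sdb in pool tank:
# zinject -d sdb -D 25:2 tank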

+

+
+
zinject -d vdev [-e device_error] [-L + label_error] [-T failure] [-f + frequency] [-F] pool
+
Force a vdev error.
+
zinject -I [-s seconds | -g txgs] + pool
+
Simulate a hardware failure that fails to honor a cache flush.
+
zinject -p function pool
+
Panic inside the specified function.
+
zinject -t data [-C dvas] [-e device_error] [-f + frequency] [-l level] [-r range] + [-amq] path
+
Force an error into the contents of a file.
+
zinject -t dnode [-C dvas] [-e device_error] + [-f frequency] [-l level] [-amq] + path
+
Force an error into the metadnode for a file or directory.
+
zinject -t mos_type [-C dvas] [-e + device_error] [-f frequency] [-l + level] [-r range] [-amqu] + pool
+
Force an error into the MOS of a pool.
+
+
+
+

+
+
+
Flush the ARC before injection.
+
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+
Inject the given error only into specific DVAs. The mask should be + specified as a list of 0-indexed DVAs separated by commas (ex. '0,2'). + This option is not applicable to logical data errors such as + decompress and decrypt.
+
+
A vdev specified by path or GUID.
+
+
Specify checksum for an ECKSUM error, decompress for a data + decompression error, decrypt for a data decryption error, + corrupt to flip a bit in the data after a read, dtl for an + ECHILD error, io for an EIO error where reopening the device will + succeed, or nxio for an ENXIO error where reopening the device will + fail. For EIO and ENXIO, the "failed" reads or writes still + occur. The probe simply sets the error value reported by the I/O pipeline + so it appears the read or write failed. Decryption errors only currently + work with file data.
+
+
Only inject errors a fraction of the time. Expressed as a real number + percentage between 0.0001 and 100.
+
+
Fail faster. Do fewer checks.
+
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+
Inject an error at a particular block level. The default is 0.
+
+
Set the label error region to one of nvlist, pad1, + pad2, or uber.
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+
Run for this many seconds before reporting failure.
+
+
Set the failure type to one of all, claim, free, + read, or write.
+
+
Set this to mos for any data in the MOS, mosdir for an + object directory, config for the pool configuration, bpobj + for the block pointer list, spacemap for the space map, + metaslab for the metaslab, or errlog for the persistent + error log.
+
+
Unload the pool after injection. +

+
+
+
+
+

+
+
+
Run zinject in debug mode. +

+
+
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com> excerpting the zinject usage message and + source code.

+

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-add.8.html b/man/v2.0/8/zpool-add.8.html new file mode 100644 index 000000000..616899d4d --- /dev/null +++ b/man/v2.0/8/zpool-add.8.html @@ -0,0 +1,308 @@ + + + + + + + zpool-add.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-add.8

+
+ + + + + +
ZPOOL-ADD(8)System Manager's ManualZPOOL-ADD(8)
+
+
+

+

zpool-addAdds + specified virtual devices to a ZFS storage pool

+
+
+

+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev...
+
+
+

+
+
zpool add + [-fgLnP] [-o + property=value] + pool vdev...
+
Adds the specified virtual devices to the given pool. The vdev specification is described in the Virtual Devices section of zpoolconcepts(8). The behavior of the -f option, and the device checks performed, are described in the zpool create subcommand. +
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all + symbolic links. This can be used to look up the current block device + name regardless of the /dev/disk/ path used to open it.
+
+
Displays the configuration that would be used without actually adding + the vdevs. The actual pool creation can still + fail due to insufficient privileges or device sharing.
+
+
Display real paths for vdevs instead of only the + last component of the path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the given pool properties. See the + zpoolprops(8) manual page for a list of valid + properties that can be set. The only property supported at the moment + is ashift.
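For example, a dry run of adding a new mirror vdev might look like this (tank, sdc and sdd are hypothetical names):
# zpool add -n tank mirror sdc sdd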
+
+
+
+
+
+

+

zpool-remove(8), + zpool-attach(8), zpool-import(8), + zpool-initialize(8), zpool-online(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-attach.8.html b/man/v2.0/8/zpool-attach.8.html new file mode 100644 index 000000000..0233dc367 --- /dev/null +++ b/man/v2.0/8/zpool-attach.8.html @@ -0,0 +1,305 @@ + + + + + + + zpool-attach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-attach.8

+
+ + + + + +
ZPOOL-ATTACH(8)System Manager's ManualZPOOL-ATTACH(8)
+
+
+

+

zpool-attach — + Attach a new device to an existing ZFS virtual device + (vdev).

+
+
+

+ + + + + +
zpoolattach [-fsw] + [-o + property=value] + pool device new_device
+
+
+

+
+
zpool attach + [-fsw] [-o + property=value] + pool device new_device
+
Attaches new_device to the existing + device. The existing device cannot be part of a + raidz configuration. If device is not currently part + of a mirrored configuration, device automatically + transforms into a two-way mirror of device and + new_device. If device is part + of a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately and any + running scrub is cancelled. +
+
+
Forces use of new_device, even if it appears to + be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + zpoolprops(8) manual page for a list of valid + properties that can be set. The only property supported at the moment + is ashift.
+
+
The new_device is reconstructed sequentially to restore redundancy as quickly as possible. Checksums are not verified during sequential reconstruction so a scrub is started when the resilver completes. Sequential reconstruction is not supported for raidz configurations.
+
+
Waits until new_device has finished resilvering + before returning.
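For example, a single disk could be converted into a two-way mirror, waiting for the resilver to finish (tank, sda and sdb are hypothetical names):
# zpool attach -w tank sda sdb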
+
+
+
+
+
+

+

zpool-detach(8), zpool-add(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-replace(8), + zpool-resilver(8)

+
+
+ + + + + +
May 15, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-checkpoint.8.html b/man/v2.0/8/zpool-checkpoint.8.html new file mode 100644 index 000000000..17117c02a --- /dev/null +++ b/man/v2.0/8/zpool-checkpoint.8.html @@ -0,0 +1,293 @@ + + + + + + + zpool-checkpoint.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-checkpoint.8

+
+ + + + + +
ZPOOL-CHECKPOINT(8)System Manager's ManualZPOOL-CHECKPOINT(8)
+
+
+

+

zpool-checkpoint — + Checkpoints the current state of a ZFS storage + pool

+
+
+

+ + + + + +
zpoolcheckpoint [-d, + --discard [-w, + --wait]] pool
+
+
+

+
+
zpool checkpoint + [-d, --discard + [-w, --wait]] + pool
+
Checkpoints the current state of pool, which can later be restored by zpool import --rewind-to-checkpoint. The existence of a checkpoint in a pool prohibits the following zpool commands: remove, attach, detach, split, and reguid. In addition, it may break reservation boundaries if the pool lacks free space. The zpool status command indicates the existence of a checkpoint or the progress of discarding a checkpoint from a pool. The zpool list command reports how much space the checkpoint takes from the pool. +
+
+ --discard
+
Discards an existing checkpoint from pool.
+
+ --wait
+
Waits until the checkpoint has finished being discarded before + returning.
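As a sketch with a hypothetical pool name, a checkpoint is taken and later rewound to (rewinding requires exporting and re-importing the pool):
# zpool checkpoint tank
# zpool export tank
# zpool import --rewind-to-checkpoint tank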
+
+
+
+
+
+

+

zpool-import(8), + zpool-status(8), zfs-snapshot(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-clear.8.html b/man/v2.0/8/zpool-clear.8.html new file mode 100644 index 000000000..8acbe0d28 --- /dev/null +++ b/man/v2.0/8/zpool-clear.8.html @@ -0,0 +1,274 @@ + + + + + + + zpool-clear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-clear.8

+
+ + + + + +
ZPOOL-CLEAR(8)System Manager's ManualZPOOL-CLEAR(8)
+
+
+

+

zpool-clear — + Clears device errors in a ZFS storage pool.

+
+
+

+ + + + + +
zpoolclear pool + [device]
+
+
+

+
+
zpool clear + pool [device]
+
Clears device errors in a pool. If no arguments are specified, all device + errors within the pool are cleared. If one or more devices is specified, + only those errors associated with the specified device or devices are + cleared. If multihost is enabled, and the pool has been suspended, this + will not resume I/O. While the pool was suspended, it may have been + imported on another host, and resuming I/O could result in pool + damage.
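For example, errors recorded against a single device could be cleared as follows (tank and sda are hypothetical names):
# zpool clear tank sda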
+
+
+
+

+

zdb(8), zpool-reopen(8), + zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-create.8.html b/man/v2.0/8/zpool-create.8.html new file mode 100644 index 000000000..0645be7a6 --- /dev/null +++ b/man/v2.0/8/zpool-create.8.html @@ -0,0 +1,392 @@ + + + + + + + zpool-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-create.8

+
+ + + + + +
ZPOOL-CREATE(8)System Manager's ManualZPOOL-CREATE(8)
+
+
+

+

zpool-create — + Creates a new ZFS storage pool

+
+
+

+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]... + [-o + feature@feature=value] + [-O + file-system-property=value]... + [-R root] + pool vdev...
+
+
+

+
+
zpool create + [-dfn] [-m + mountpoint] [-o + property=value]... + [-o + feature@feature=value]... + [-O + file-system-property=value]... + [-R root] + [-t tname] + pool vdev...
+
Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as underscore ("_"), dash ("-"), colon (":"), space (" "), and period ("."). The pool names mirror, raidz, spare and log are reserved, as are names beginning with mirror, raidz, spare, and the pattern c[0-9]. The vdev specification is described in the Virtual Devices section of zpoolconcepts(8). +

The command attempts to verify that each device specified is accessible and not currently in use by another subsystem. However this check is not robust enough to detect simultaneous attempts to use a new device in different pools, even if multihost is enabled. The administrator must ensure that simultaneous invocations of any combination of zpool replace, zpool create, zpool add, or zpool labelclear do not refer to the same device. Using the same device in two pools will result in pool corruption.

+

There are some uses, such as being currently mounted, or + specified as the dedicated dump device, that prevent a device from ever + being used by ZFS. Other uses, such as having a preexisting UFS file + system, can be overridden with the -f + option.

+

The command also checks that the replication strategy for the + pool is consistent. An attempt to combine redundant and non-redundant + storage in a single pool, or to mix disks and files, results in an error + unless -f is specified. The use of differently + sized devices within a single raidz or mirror group is also flagged as + an error unless -f is specified.

+

Unless the -R option is specified, the + default mount point is + /pool. The mount point + must not exist or must be empty, or else the root dataset cannot be + mounted. This can be overridden with the -m + option.

+

By default all supported features are enabled on the new pool + unless the -d option is specified.

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled + with the -o option. See + zpool-features(5) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is + /pool or altroot/pool + if altroot is specified. The mount point must be + an absolute path, + legacy, + or none. For more information on dataset mount + points, see zfs(8).
+
+
Displays the configuration that would be used without actually + creating the pool. The actual pool creation can still fail due to + insufficient privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See the + zpoolprops(8) manual page for a list of valid + properties that can be set.
+
+ feature@feature=value
+
Sets the given pool feature. See the + zpool-features(5) section for a list of valid + features that can be set. Value can be either disabled or + enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the + pool. See the zfsprops(8) manual page for a list of + valid properties that can be set.
+
+ root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ tname
+
Sets the in-core pool name to + tname + while the on-disk name will be the name specified as the pool name + pool. + This will set the default cachefile property to none. This is intended + to handle name space collisions when creating pools for other systems, + such as virtual machines or physical machines whose pools live on + network block devices.
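A minimal usage sketch; the pool name tank and the disks sdb, sdc, and sdd are hypothetical:
# create a two-way mirror pool
zpool create tank mirror sdb sdc
# create a raidz pool with a pool property and a root-dataset property
zpool create -o ashift=12 -O compression=lz4 tank raidz sdb sdc sdd
# dry run: print the configuration without creating the pool
zpool create -n tank mirror sdb sdc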
+
+
+
+
+
+

+

zpool-destroy(8), + zpool-export(8), zpool-import(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-destroy.8.html b/man/v2.0/8/zpool-destroy.8.html new file mode 100644 index 000000000..121b444e4 --- /dev/null +++ b/man/v2.0/8/zpool-destroy.8.html @@ -0,0 +1,270 @@ + + + + + + + zpool-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-destroy.8

+
+ + + + + +
ZPOOL-DESTROY(8)System Manager's ManualZPOOL-DESTROY(8)
+
+
+

+

zpool-destroy — + Destroys the given ZFS storage pool, freeing up any devices + for other use

+
+
+

+ + + + + +
zpooldestroy [-f] + pool
+
+
+

+
+
zpool destroy + [-f] pool
+
Destroys the given pool, freeing up any devices for other use. This + command tries to unmount any active datasets before destroying the pool. +
+
+
Forces any active datasets contained within the pool to be + unmounted.
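A minimal usage sketch; the pool name tank is hypothetical:
# destroy the pool; -f forcibly unmounts active datasets first
zpool destroy tank
zpool destroy -f tank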
+
+
+
+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-detach.8.html b/man/v2.0/8/zpool-detach.8.html new file mode 100644 index 000000000..82dbf5c13 --- /dev/null +++ b/man/v2.0/8/zpool-detach.8.html @@ -0,0 +1,274 @@ + + + + + + + zpool-detach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-detach.8

+
+ + + + + +
ZPOOL-DETACH(8)System Manager's ManualZPOOL-DETACH(8)
+
+
+

+

zpool-detach — + Detaches a device from a ZFS mirror vdev (virtual + device)

+
+
+

+ + + + + +
zpooldetach pool device
+
+
+

+
+
zpool detach + pool device
+
Detaches device from a mirror. The operation is + refused if there are no other valid replicas of the data. If device may be + re-added to the pool later on then consider the + zpool offline + command instead.
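A minimal usage sketch; the pool name tank and the mirror member sdc are hypothetical:
# permanently detach one side of a mirror
zpool detach tank sdc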
+
+
+
+

+

zpool-attach(8), + zpool-offline(8), zpool-labelclear(8), + zpool-remove(8), zpool-replace(8), + zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-events.8.html b/man/v2.0/8/zpool-events.8.html new file mode 100644 index 000000000..ff2ebfd15 --- /dev/null +++ b/man/v2.0/8/zpool-events.8.html @@ -0,0 +1,287 @@ + + + + + + + zpool-events.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-events.8

+
+ + + + + +
ZPOOL-EVENTS(8)System Manager's ManualZPOOL-EVENTS(8)
+
+
+

+

zpool-events — + Lists all recent events generated by the ZFS kernel + modules

+
+
+

+ + + + + +
zpoolevents [-vHf + [pool] | -c]
+
+
+

+
+
zpool events + [-vHf [pool] | + -c]
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + For more information about the subclasses and event payloads that can be + generated see the zfs-events(5) man page. +
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Print the entire payload for each event.
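A minimal usage sketch; the pool name tank is hypothetical:
# show full event payloads, follow events for one pool, then clear the log
zpool events -v
zpool events -f tank
zpool events -c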
+
+
+
+
+
+

+

zed(8), zpool-wait(8), + zfs-events(5), + zfs-module-parameters(5)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-export.8.html b/man/v2.0/8/zpool-export.8.html new file mode 100644 index 000000000..4c35c3cfb --- /dev/null +++ b/man/v2.0/8/zpool-export.8.html @@ -0,0 +1,293 @@ + + + + + + + zpool-export.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-export.8

+
+ + + + + +
ZPOOL-EXPORT(8)System Manager's ManualZPOOL-EXPORT(8)
+
+
+

+

zpool-export — + Exports the given ZFS storage pools from the + system

+
+
+

+ + + + + +
zpoolexport [-a] + [-f] pool...
+
+
+

+
+
zpool export + [-a] [-f] + pool...
+
Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present. +

Before exporting the pool, all datasets within the pool are + unmounted. A pool can not be exported if it has a shared spare that is + currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, + so that ZFS can label the disks with portable EFI labels. Otherwise, + disk drivers on platforms of different endianness will not recognize the + disks.

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, using the + unmount -f command. + This option is not supported on Linux. +

This command will forcefully export the pool even if it + has a shared spare that is currently being used. This may lead to + potential data corruption.
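A minimal usage sketch; the pool name tank is hypothetical:
# export one pool, or every imported pool
zpool export tank
zpool export -a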

+
+
+
+
+
+
+

+

zpool-import(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-get.8.html b/man/v2.0/8/zpool-get.8.html new file mode 100644 index 000000000..27025f64d --- /dev/null +++ b/man/v2.0/8/zpool-get.8.html @@ -0,0 +1,313 @@ + + + + + + + zpool-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-get.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + Retrieves properties for the specified ZFS storage + pool(s)

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]...] + all|property[,property]... + [pool]...
+
+ + + + + +
zpoolset + property=value + pool
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]...] + all|property[,property]... + [pool]...
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
        name          Name of storage pool
+        property      Property name
+        value         Property value
+        source        Property source, either 'default' or 'local'.
+
+

See the zpoolprops(8) manual page for more + information on the available pool properties.

+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display. + name,property,value,source + is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(8) manual page for more information on what + properties can be set and acceptable values.
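A minimal usage sketch; the pool name tank is hypothetical:
# get all properties, get selected properties script-friendly, set a property
zpool get all tank
zpool get -H -o name,value capacity,health tank
zpool set autoexpand=on tank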
+
+
+
+

+

zpoolprops(8), zpool-list(8), + zpool-features(5)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-history.8.html b/man/v2.0/8/zpool-history.8.html new file mode 100644 index 000000000..e106a8212 --- /dev/null +++ b/man/v2.0/8/zpool-history.8.html @@ -0,0 +1,281 @@ + + + + + + + zpool-history.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-history.8

+
+ + + + + +
ZPOOL-HISTORY(8)System Manager's ManualZPOOL-HISTORY(8)
+
+
+

+

zpool-history — + Displays the command history of the specified ZFS storage + pool(s)

+
+
+

+ + + + + +
zpoolhistory [-il] + [pool]...
+
+
+

+
+
zpool history + [-il] [pool]...
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified. +
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which in addition to standard + format includes, the user name, the hostname, and the zone in which + the operation was performed.
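A minimal usage sketch; the pool name tank is hypothetical:
# show the command history, then include internal events in long format
zpool history tank
zpool history -il tank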
+
+
+
+
+
+

+

zpool-checkpoint(8), + zpool-events(8), zpool-status(8), + zpool-wait(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-import.8.html b/man/v2.0/8/zpool-import.8.html new file mode 100644 index 000000000..cd7e976f2 --- /dev/null +++ b/man/v2.0/8/zpool-import.8.html @@ -0,0 +1,546 @@ + + + + + + + zpool-import.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-import.8

+
+ + + + + +
ZPOOL-IMPORT(8)System Manager's ManualZPOOL-IMPORT(8)
+
+
+

+

zpool-import — + Lists ZFS storage pools available to import or import the + specified pools

+
+
+

+ + + + + +
zpoolimport [-D] + [-d dir|device]
+
+ + + + + +
zpoolimport -a + [-DflmN] [-F + [-n] [-T] + [-X]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root]
+
+ + + + + +
zpoolimport [-Dflm] + [-F [-n] + [-T] [-X]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool [-t]]
+
+
+

+
+
zpool import + [-D] [-d + dir|device]
+
Lists pools available to import. If the -d + or -c options are not + specified, this command searches for devices using libblkid on Linux and + geom on FreeBSD. The -d option can be specified + multiple times, and all directories are searched. If the device appears to + be part of an exported pool, this command displays a summary of the pool + with the name of the pool, a numeric identifier, as well as the vdev + layout and current health of the device for each device or file. Destroyed + pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified.

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DflmN] + [-F [-n] + [-T] [-X]] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(8) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Rewinds pool to the checkpointed state. Once the pool is imported with + this flag there is no way to undo the rewind. All changes and data + that were written after the checkpoint are lost! The only exception is + when the + readonly + mounting option is enabled. In this case, the checkpointed state of + the pool is opened and an administrator can see how the pool would + look if they were to fully rewind.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dflm] [-F + [-n] [-t] + [-T] [-X]] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(8) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set -o + cachefile=none when not explicitly specified.
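A minimal usage sketch; the pool names tank and newtank and the search directory are hypothetical:
# list importable pools, import by name, import from a specific directory,
# and force-import under a new name
zpool import
zpool import tank
zpool import -d /dev/disk/by-id tank
zpool import -f tank newtank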
+
+
+
+
+
+

+

zpool-export(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-initialize.8.html b/man/v2.0/8/zpool-initialize.8.html new file mode 100644 index 000000000..98110da28 --- /dev/null +++ b/man/v2.0/8/zpool-initialize.8.html @@ -0,0 +1,297 @@ + + + + + + + zpool-initialize.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-initialize.8

+
+ + + + + +
ZPOOL-INITIALIZE(8)System Manager's ManualZPOOL-INITIALIZE(8)
+
+
+

+

zpool-initialize — + Write to all unallocated regions of eligible devices in a + ZFS storage pool

+
+
+

+ + + + + +
zpoolinitialize [-c | + -s] [-w] + pool [device...]
+
+
+

+
+
zpool initialize + [-c | -s] + [-w] pool + [device...]
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified. Only leaf data or log devices may be initialized. +
+
+ --cancel
+
Cancel initializing on the specified devices, or all eligible devices + if none are specified. If one or more target devices are invalid or + are not currently being initialized, the command will fail and no + cancellation will occur on any device.
+
+ --suspend
+
Suspend initializing on the specified devices, or all eligible devices + if none are specified. If one or more target devices are invalid or + are not currently being initialized, the command will fail and no + suspension will occur on any device. Initializing can then be resumed + by running zpool + initialize with no flags on the relevant + target devices.
+
+ --wait
+
Wait until the devices have finished initializing before + returning.
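A minimal usage sketch; the pool name tank and the device sdb are hypothetical:
# initialize all eligible devices, wait on one device, then cancel
zpool initialize tank
zpool initialize -w tank sdb
zpool initialize -c tank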
+
+
+
+
+
+

+

zpool-add(8), zpool-attach(8), + zpool-create(8), zpool-online(8), + zpool-replace(8), zpool-trim(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-iostat.8.html b/man/v2.0/8/zpool-iostat.8.html new file mode 100644 index 000000000..80cbe6cc4 --- /dev/null +++ b/man/v2.0/8/zpool-iostat.8.html @@ -0,0 +1,427 @@ + + + + + + + zpool-iostat.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-iostat.8

+
+ + + + + +
ZPOOL-IOSTAT(8)System Manager's ManualZPOOL-IOSTAT(8)
+
+
+

+

zpool-iostat — + Display logical I/O statistics for the given ZFS storage + pools/vdevs

+
+
+

+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
+
+

+
+
zpool iostat + [[[-c SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/Os + may be observed via iostat(1). If writes are located + nearby, they may be merged into a single larger operation. Additional I/O + may be generated depending on the level of vdev redundancy. To filter + output, you may pass in a list of pools, a pool and list of vdevs in that + pool, or a list of any vdevs from any pool. If no items are specified, + statistics for every pool in the system are shown. When given an + interval, the statistics are printed every + interval seconds until ^C is pressed. If the + -n flag is specified the headers are displayed + only once, otherwise they are displayed periodically. If count is + specified, the command exits after count reports are printed. The first + report printed is always the statistics since boot regardless of whether + interval and count are passed. + However, this behavior can be suppressed with the + -y flag. Also note that the units of + K, M, G, and so on that are + printed in the report are in base 1024. To get the raw values, use the + -p flag.
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + iostat output. Users can run any script found + in their ~/.zpool.d directory or from the + system /etc/zfs/zpool.d directory. Script + names containing the slash (/) character are not allowed. The default + search path can be overridden by setting the ZPOOL_SCRIPTS_PATH + environment variable. A privileged user can run + -c if they have the ZPOOL_SCRIPTS_AS_ROOT + environment variable set. If a script requires the use of a privileged + command, like smartctl(8), then it's recommended you + allow the user access to it in /etc/sudoers or + add the user to the /etc/sudoers.d/zfs file. +

If -c is passed without a script + name, it prints a list of all scripts. -c + also sets verbose mode + (-v).

+

Script output should be in the form of + "name=value". The column name is set to "name" + and the value is set to "value". Multiple lines can be + used to output multiple columns. The first line of output not in the + "name=value" format is displayed without a column title, + and no more output after that is displayed. This can be useful for + printing error messages. Blank or NULL values are printed as a '-' + to make output awk-able.

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
+
+
Underlying path to the vdev (/dev/sd*). For use with device + mapper, multipath, or partitioned vdevs.
+
+
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Print headers only once when passed
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+
Print request size histograms for the leaf vdev's IO. This includes + histograms of individual IOs (ind) and aggregate IOs (agg). These + stats can be useful for observing how well IO aggregation is working. + Note that TRIM IOs may exceed 16M, but will be counted as 16M.
+
+
Verbose statistics. Reports usage statistics for individual vdevs + within the pool, in addition to the pool-wide statistics.
+
+
Omit statistics since boot. Normally the first line of output reports + the statistics since boot. This option suppresses that first line of + output. interval
+
+
Display latency histograms: +

total_wait: Total IO time (queuing + + disk IO time). disk_wait: Disk IO time (time + reading/writing the disk). syncq_wait: Amount + of time IO spent in synchronous priority queues. Does not include + disk time. asyncq_wait: Amount of time IO + spent in asynchronous priority queues. Does not include disk time. + scrub: Amount of time IO spent in scrub queue. + Does not include disk time.

+
+
+
Include average latency statistics: +

total_wait: Average total IO time + (queuing + disk IO time). disk_wait: Average + disk IO time (time reading/writing the disk). + syncq_wait: Average amount of time IO spent in + synchronous priority queues. Does not include disk time. + asyncq_wait: Average amount of time IO spent + in asynchronous priority queues. Does not include disk time. + scrub: Average queuing time in scrub queue. + Does not include disk time. trim: Average + queuing time in trim queue. Does not include disk time.

+
+
+
Include active queue statistics. Each priority queue has both pending + ( pend) and active ( + activ) IOs. Pending IOs are waiting to be issued + to the disk, and active IOs have been issued to disk and are waiting + for completion. These stats are broken out by priority queue: +

syncq_read/write: Current number of + entries in synchronous priority queues. + asyncq_read/write: Current number of entries + in asynchronous priority queues. scrubq_read: + Current number of entries in scrub queue. + trimq_write: Current number of entries in trim + queue.

+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.
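A minimal usage sketch; the pool name tank is hypothetical:
# per-vdev statistics every 5 seconds, latency histograms, request-size histograms
zpool iostat -v tank 5
zpool iostat -l tank 5 3
zpool iostat -r tank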

+
+
+
+
+
+
+

+

zpool-list(8), + zpool-status(8), iostat(1), + smartctl(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-labelclear.8.html b/man/v2.0/8/zpool-labelclear.8.html new file mode 100644 index 000000000..fca94445c --- /dev/null +++ b/man/v2.0/8/zpool-labelclear.8.html @@ -0,0 +1,279 @@ + + + + + + + zpool-labelclear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-labelclear.8

+
+ + + + + +
ZPOOL-LABELCLEAR(8)System Manager's ManualZPOOL-LABELCLEAR(8)
+
+
+

+

zpool-labelclear — + Removes ZFS label information from the specified physical + device

+
+
+

+ + + + + +
zpoollabelclear [-f] + device
+
+
+

+
+
zpool labelclear + [-f] device
+
Removes ZFS label information from the specified + device. If the device is a + cache device, it also removes the L2ARC header (persistent L2ARC). The + device must not be part of an active pool + configuration. +
+
+
Treat exported or foreign devices as inactive.
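A minimal usage sketch; the device path /dev/sdb is hypothetical and must not belong to an active pool:
zpool labelclear /dev/sdb
zpool labelclear -f /dev/sdb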
+
+
+
+
+
+

+

zpool-destroy(8), + zpool-detach(8), zpool-remove(8), + zpool-replace(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-list.8.html b/man/v2.0/8/zpool-list.8.html new file mode 100644 index 000000000..da0acd2c7 --- /dev/null +++ b/man/v2.0/8/zpool-list.8.html @@ -0,0 +1,320 @@ + + + + + + + zpool-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-list.8

+
+ + + + + +
ZPOOL-LIST(8)System Manager's ManualZPOOL-LIST(8)
+
+
+

+

zpool-listLists + ZFS storage pools along with a health status and space usage

+
+
+

+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
+
+

+
+
zpool list + [-HgLpPv] [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
Lists the given pools along with a health status and space usage. If no + pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until ^C is pressed. + If count is specified, the command exits after + count reports are printed. +
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ property
+
Comma-separated list of properties to display. See the + zpoolprops(8) manual page for a list of valid + properties. The default list is name, + size, allocated, + free, checkpoint, + expandsize, fragmentation, + capacity, dedupratio, + health, altroot.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs + within the pool, in addition to the pool-wide statistics.
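A minimal usage sketch; the pool name tank is hypothetical:
# list all pools, show per-vdev detail, and produce script-friendly output
zpool list
zpool list -v tank
zpool list -Hp -o name,size,capacity,health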
+
+
+
+
+
+

+

zpool-import(8), + zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-offline.8.html b/man/v2.0/8/zpool-offline.8.html new file mode 100644 index 000000000..82d4b8ad9 --- /dev/null +++ b/man/v2.0/8/zpool-offline.8.html @@ -0,0 +1,304 @@ + + + + + + + zpool-offline.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-offline.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + Take a physical device in a ZFS storage pool + offline

+
+
+

+ + + + + +
zpooloffline [-f] + [-t] pool + device...
+
+ + + + + +
zpoolonline [-e] + pool device...
+
+
+

+
+
zpool offline + [-f] [-t] + pool device...
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device...
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
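A minimal usage sketch; the pool name tank and the device sdb are hypothetical:
# take a device offline, then bring it back online and expand it
zpool offline tank sdb
zpool online -e tank sdb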
+
+
+
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-online.8.html b/man/v2.0/8/zpool-online.8.html new file mode 100644 index 000000000..2dafbc566 --- /dev/null +++ b/man/v2.0/8/zpool-online.8.html @@ -0,0 +1,304 @@ + + + + + + + zpool-online.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-online.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + Take a physical device in a ZFS storage pool + offline

+
+
+

+ + + + + +
zpooloffline [-f] + [-t] pool + device...
+
+ + + + + +
zpoolonline [-e] + pool device...
+
+
+

+
+
zpool offline + [-f] [-t] + pool device...
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device...
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-reguid.8.html b/man/v2.0/8/zpool-reguid.8.html new file mode 100644 index 000000000..c4a1cacbb --- /dev/null +++ b/man/v2.0/8/zpool-reguid.8.html @@ -0,0 +1,270 @@ + + + + + + + zpool-reguid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reguid.8

+
+ + + + + +
ZPOOL-REGUID(8)System Manager's ManualZPOOL-REGUID(8)
+
+
+

+

zpool-reguid — + Generate a new unique identifier for a ZFS storage + pool

+
+
+

+ + + + + +
zpoolreguid pool
+
+
+

+
+
zpool reguid + pool
+
Generates a new unique identifier for the pool. You must ensure that all + devices in this pool are online and healthy before performing this + action.
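A minimal usage sketch; the pool name tank is hypothetical:
zpool reguid tank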
+
+
+
+

+

zpool-export(8), + zpool-import(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-remove.8.html b/man/v2.0/8/zpool-remove.8.html new file mode 100644 index 000000000..2336588df --- /dev/null +++ b/man/v2.0/8/zpool-remove.8.html @@ -0,0 +1,318 @@ + + + + + + + zpool-remove.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-remove.8

+
+ + + + + +
ZPOOL-REMOVE(8)System Manager's ManualZPOOL-REMOVE(8)
+
+
+

+

zpool-remove — + Remove a device from a ZFS storage pool

+
+
+

+ + + + + +
zpoolremove [-npw] + pool device...
+
+ + + + + +
zpoolremove -s + pool
+
+
+

+
+
zpool + remove [-npw] + pool device...
+
Removes the specified device from the pool. This command supports removing + hot spare, cache, log, and both mirrored and non-redundant primary + top-level vdevs, including dedup and special vdevs. When the primary pool + storage includes a top-level raidz vdev only hot spare, cache, and log + devices can be removed. Note that keys for all encrypted datasets must be + loaded for top-level vdevs to be removed. +

Removing a top-level vdev reduces the total amount of space in + the storage pool. The specified device will be evacuated by copying all + allocated space from it to the other devices in the pool. In this case, + the zpool remove command + initiates the removal and returns, while the evacuation continues in the + background. The removal progress can be monitored with + zpool status. If an IO + error is encountered during the removal process it will be cancelled. + The + device_removal + feature flag must be enabled to remove a top-level vdev, see + zpool-features(5).

+

A mirrored top-level device (log or data) can be removed by + specifying the top-level mirror for the same. Non-log devices or data + devices that are part of a mirrored configuration can be removed using + the zpool detach + command.

+
+
+
Do not actually perform the removal ("no-op"). Instead, + print the estimated amount of memory that will be used by the mapping + table after the removal completes. This is nonzero only for top-level + vdevs.
+
+
+
+
Used in conjunction with the -n flag, displays + numbers as parsable (exact) values.
+
+
Waits until the removal has completed before returning.
+
+
+
zpool remove + -s pool
+
Stops and cancels an in-progress removal of a top-level vdev.
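A minimal usage sketch; the pool name tank and the vdev names sdb and mirror-1 are hypothetical:
# remove a device, preview the memory cost of a top-level removal, cancel a removal
zpool remove tank sdb
zpool remove -n tank mirror-1
zpool remove -s tank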
+
+
+
+

+

zpool-add(8), zpool-detach(8), + zpool-offline(8), zpool-labelclear(8), + zpool-replace(8), zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-reopen.8.html b/man/v2.0/8/zpool-reopen.8.html new file mode 100644 index 000000000..96f75dcac --- /dev/null +++ b/man/v2.0/8/zpool-reopen.8.html @@ -0,0 +1,270 @@ + + + + + + + zpool-reopen.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reopen.8

+
+ + + + + +
ZPOOL-REOPEN(8)System Manager's ManualZPOOL-REOPEN(8)
+
+
+

+

zpool-reopen — + Reopen all virtual devices (vdevs) associated with a ZFS + storage pool

+
+
+

+ + + + + +
zpoolreopen [-n] + pool
+
+
+

+
+
zpool reopen + [-n] pool
+
Reopen all the vdevs associated with the pool. +
+
+
Do not restart an in-progress scrub operation. This is not recommended + and can result in partially resilvered devices unless a second scrub + is performed.
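A minimal usage sketch; the pool name tank is hypothetical:
zpool reopen tank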
+
+
+
+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-replace.8.html b/man/v2.0/8/zpool-replace.8.html new file mode 100644 index 000000000..89532bd14 --- /dev/null +++ b/man/v2.0/8/zpool-replace.8.html @@ -0,0 +1,311 @@ + + + + + + + zpool-replace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-replace.8

+
+ + + + + +
ZPOOL-REPLACE(8)System Manager's ManualZPOOL-REPLACE(8)
+
+
+

+

zpool-replace — + Replace one device with another in a ZFS storage + pool

+
+
+

+ + + + + +
zpoolreplace [-fsw] + [-o + property=value] + pool device + [new_device]
+
+
+

+
+
zpool replace + [-fsw] [-o + property=value] + pool device + [new_device]
+
Replaces old_device with + new_device. This is equivalent to attaching + new_device, waiting for it to resilver, and then + detaching old_device. Any in progress scrub will be + cancelled. +

The size of new_device must be greater + than or equal to the minimum size of all the devices in a mirror or + raidz configuration.

+

new_device is required if the pool is + not redundant. If new_device is not specified, it + defaults to old_device. This form of replacement + is useful after an existing disk has failed and has been physically + replaced. In this case, the new disk may have the same + /dev path as the old device, even though it is + actually a different disk. ZFS recognizes this.

+
+
+
Forces use of new_device, even if it appears to + be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + zpoolprops(8) manual page for a list of valid + properties that can be set. The only property supported at the moment + is + ashift.
+
+
The new_device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the + resilver completes. Sequential reconstruction is not supported for + raidz configurations.
+
+
Waits until the replacement has completed before returning.
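A minimal usage sketch; the pool name tank, the failed device sdb, and the new device sdc are hypothetical:
# replace with a different disk, or with a new disk inserted in the same slot
zpool replace tank sdb sdc
zpool replace -w tank sdb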
+
+
+
+
+
+

+

zpool-detach(8), + zpool-initialize(8), zpool-online(8), + zpool-resilver(8)

+
+
+ + + + + +
May 15, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-resilver.8.html b/man/v2.0/8/zpool-resilver.8.html new file mode 100644 index 000000000..1b1f86570 --- /dev/null +++ b/man/v2.0/8/zpool-resilver.8.html @@ -0,0 +1,274 @@ + + + + + + + zpool-resilver.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-resilver.8

+
+ + + + + +
ZPOOL-RESILVER(8)System Manager's ManualZPOOL-RESILVER(8)
+
+
+

+

zpool-resilver — + Start a resilver of a device in a ZFS storage + pool

+
+
+

+ + + + + +
zpoolresilver pool...
+
+
+

+
+
zpool + resilver pool...
+
Starts a resilver. If an existing resilver is already running it will be + restarted from the beginning. Any drives that were scheduled for a + deferred resilver will be added to the new one. This requires the + resilver_defer + feature.
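A minimal usage sketch; the pool name tank is hypothetical:
zpool resilver tank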
+
+
+
+

+

zpool-iostat(8), + zpool-online(8), zpool-reopen(8), + zpool-replace(8), zpool-scrub(8), + zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-scrub.8.html b/man/v2.0/8/zpool-scrub.8.html new file mode 100644 index 000000000..39f0c6ca6 --- /dev/null +++ b/man/v2.0/8/zpool-scrub.8.html @@ -0,0 +1,306 @@ + + + + + + + zpool-scrub.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-scrub.8

+
+ + + + + +
ZPOOL-SCRUB(8)System Manager's ManualZPOOL-SCRUB(8)
+
+
+

+

zpool-scrub — + Begin a scrub or resume a paused scrub of a ZFS storage + pool

+
+
+

+ + + + + +
zpoolscrub [-s | + -p] [-w] + pool...
+
+
+

+
+
zpool scrub + [-s | -p] + [-w] pool...
+
Begins a scrub or resumes a paused scrub. The scrub examines all data in + the specified pools to verify that it checksums correctly. For replicated + (mirror or raidz) devices, ZFS automatically repairs any damage discovered + during the scrub. The zpool + status command reports the progress of the scrub + and summarizes the results of the scrub upon completion. +

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be + out of date (for example, when attaching a new device to a mirror or + replacing an existing device), whereas scrubbing examines all data to + discover silent errors due to hardware faults or disk failure.

+

Because scrubbing and resilvering are I/O-intensive + operations, ZFS only allows one at a time. If a scrub is paused, the + zpool scrub resumes it. + If a resilver is in progress, ZFS does not allow a scrub to be started + until the resilver completes.

+

Note that, due to changes in pool data on a live system, it is + possible for scrubs to progress slightly beyond 100% completion. During + this period, no completion time estimate will be provided.

+
+
+
Stop scrubbing.
+
+
+
+
Pause scrubbing. Scrub pause state and progress are periodically + synced to disk. If the system is restarted or pool is exported during + a paused scrub, even after import, scrub will remain paused until it + is resumed. Once resumed the scrub will pick up from the place where + it was last checkpointed to disk. To resume a paused scrub issue + zpool scrub + again.
+
+
Wait until scrub has completed before returning.
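A minimal usage sketch; the pool name tank is hypothetical:
zpool scrub tank
zpool scrub -p tank    # pause the scrub
zpool scrub tank       # resume a paused scrub
zpool scrub -s tank    # stop the scrub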
+
+
+
+
+
+

+

zpool-iostat(8), + zpool-resilver(8), zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-set.8.html b/man/v2.0/8/zpool-set.8.html new file mode 100644 index 000000000..d9057e52c --- /dev/null +++ b/man/v2.0/8/zpool-set.8.html @@ -0,0 +1,313 @@ + + + + + + + zpool-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-set.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + Retrieves properties for the specified ZFS storage + pool(s)

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]...] + all|property[,property]... + [pool]...
+
+ + + + + +
zpoolset + property=value + pool
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]...] + all|property[,property]... + [pool]...
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
        name          Name of storage pool
+        property      Property name
+        value         Property value
+        source        Property source, either 'default' or 'local'.
+
+

See the zpoolprops(8) manual page for more + information on the available pool properties.

+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display. + name,property,value,source + is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(8) manual page for more information on what + properties can be set and acceptable values.
+
+
+
+

+

zpoolprops(8), zpool-list(8), + zpool-features(5)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-split.8.html b/man/v2.0/8/zpool-split.8.html new file mode 100644 index 000000000..3bf8dbdee --- /dev/null +++ b/man/v2.0/8/zpool-split.8.html @@ -0,0 +1,324 @@ + + + + + + + zpool-split.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-split.8

+
+ + + + + +
ZPOOL-SPLIT(8)System Manager's ManualZPOOL-SPLIT(8)
+
+
+

+

zpool-split — + Split devices off a ZFS storage pool creating a new + pool

+
+
+

+ + + + + +
zpoolsplit [-gLlnP] + [-o + property=value]... + [-R root] + pool newpool [device]...
+
+
+

+
+
zpool split + [-gLlnP] [-o + property=value]... + [-R root] pool + newpool [device ...]
+
Splits devices off pool creating + newpool. All vdevs in pool + must be mirrors and the pool must not be in the process of resilvering. At + the time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool. +

The optional device specification causes the specified + device(s) to be included in the new pool and, + should any devices remain unspecified, the last device in each mirror is + used as would be by default.

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the new pool + online. Note that if any datasets have a + keylocation + of + prompt + this command will block waiting for the keys to be entered. Without + this flag encrypted datasets will be left unavailable until the keys + are loaded.
+
+
Do dry run, do not actually perform the split. Print out the expected + configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the specified property for newpool. See the + zpoolprops(8) manual page for more information on + the available pool properties.
+
+ root
+
Set + altroot + for newpool to root and + automatically import it.
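A minimal usage sketch; the mirrored pool tank and the new pool name tank2 are hypothetical:
# split off a new pool, preview the split, or split and import under an altroot
zpool split tank tank2
zpool split -n tank tank2
zpool split -R /mnt tank tank2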
+
+
+
+
+
+

+

zpool-import(8), + zpool-list(8), zpool-remove(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-status.8.html b/man/v2.0/8/zpool-status.8.html new file mode 100644 index 000000000..8c453dc47 --- /dev/null +++ b/man/v2.0/8/zpool-status.8.html @@ -0,0 +1,337 @@ + + + + + + + zpool-status.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-status.8

+
+ + + + + +
ZPOOL-STATUS(8)System Manager's ManualZPOOL-STATUS(8)
+
+
+

+

zpool-status — + Display detailed health status for the given ZFS storage + pools

+
+
+

+ + + + + +
zpoolstatus [-c + SCRIPT] [-DigLpPstvx] + [-T u|d] + [pool]... [interval + [count]]
+
+
+

+
+
zpool status + [-c + [SCRIPT1[,SCRIPT2]...]] + [-DigLpPstvx] [-T + u|d] [pool]... + [interval [count]]
+
Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in + the system is displayed. For more information on pool and device health, + see the Device Failure and Recovery section of zpoolconcepts(8).

If a scrub or resilver is in progress, this command reports + the percentage done and the estimated time to completion. Both of these + are only approximate, because the amount of data in the pool and the + other workloads on the system can change.

+
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + status output. See the + -c option of zpool + iostat for complete details.
+
+
Display vdev initialization status.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in + the pool) block counts and sizes by reference count.
+
+
Display the number of leaf VDEV slow IOs. This is the number of IOs + that didn't complete in zio_slow_io_ms milliseconds (default 30 + seconds). This does not necessarily mean the IOs failed to complete, + just took an unreasonably long amount of time. This may indicate a + problem with the underlying storage.
+
+
Display vdev TRIM status.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Displays verbose data error information, printing out a complete list + of all data errors since the last complete pool scrub.
+
+
Only display status for pools that are exhibiting errors or are + otherwise unavailable. Warnings about pools not using the latest + on-disk format will not be included.
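A minimal usage sketch; the pool name tank is hypothetical:
# overall status, verbose error detail for one pool, only unhealthy pools
zpool status
zpool status -v tank
zpool status -x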
+
+
+
+
+
+

+

zpool-events(8), + zpool-history(8), zpool-iostat(8), + zpool-list(8), zpool-resilver(8), + zpool-scrub(8), zpool-wait(8)

+
+
+ + + + + +
May 15, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-sync.8.html b/man/v2.0/8/zpool-sync.8.html new file mode 100644 index 000000000..ca07c1d1b --- /dev/null +++ b/man/v2.0/8/zpool-sync.8.html @@ -0,0 +1,273 @@ + + + + + + + zpool-sync.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-sync.8

+
+ + + + + +
ZPOOL-SYNC(8)System Manager's ManualZPOOL-SYNC(8)
+
+
+

+

zpool-syncForce + data to be written to primary storage of a ZFS storage pool and update + reporting data

+
+
+

+ + + + + +
zpoolsync [pool]...
+
+
+

+
+
zpool sync + [pool ...]
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will + sync all pools on the system. Otherwise, it will sync only the specified + pool(s).
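A minimal usage sketch; the pool name tank is hypothetical:
# sync every pool, or only one pool
zpool sync
zpool sync tank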
+
+
+
+

+

zpoolconcepts(8), + zpool-export(8), zpool-iostat(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-trim.8.html b/man/v2.0/8/zpool-trim.8.html new file mode 100644 index 000000000..2917a11e7 --- /dev/null +++ b/man/v2.0/8/zpool-trim.8.html @@ -0,0 +1,312 @@ + + + + + + + zpool-trim.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-trim.8

+
+ + + + + +
ZPOOL-TRIM(8)System Manager's ManualZPOOL-TRIM(8)
+
+
+

+

zpool-trim — + Initiate immediate TRIM operations for all free space in a + ZFS storage pool

+
+
+

+ + + + + +
zpooltrim [-dw] + [-r rate] + [-c | -s] + pool [device...]
+
+
+

+
+
zpool trim + [-dw] [-c | + -s] pool + [device...]
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space. +

A manual on-demand TRIM operation can be initiated + irrespective of the autotrim pool property setting. + See the documentation for the autotrim property above + for the types of vdev devices which can be trimmed.

+
+
+ --secure
+
Causes a secure TRIM to be initiated. When performing a secure TRIM, + the device guarantees that data stored on the trimmed blocks has been + erased. This requires support from the device and is not supported by + all SSDs.
+
+ --rate rate
+
Controls the rate at which the TRIM operation progresses. Without this + option TRIM is executed as quickly as possible. The rate, expressed in + bytes per second, is applied on a per-vdev basis and may be set + differently for each leaf vdev.
+
+ --cancel
+
Cancel trimming on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are + not currently being trimmed, the command will fail and no cancellation + will occur on any device.
+
+ --suspend
+
Suspend trimming on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are + not currently being trimmed, the command will fail and no suspension + will occur on any device. Trimming can then be resumed by running + zpool trim with no + flags on the relevant target devices.
+
+ --wait
+
Wait until the devices are done being trimmed before returning.
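A minimal usage sketch; the pool name tank is hypothetical:
zpool trim tank
zpool trim -w tank    # wait for the trim to finish
zpool trim -c tank    # cancel an in-progress trim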
+
+
+
+
+
+

+

zpool-initialize(8), + zpool-wait(8), zpoolprops(8)

+
+
+ + + + + +
February 25, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-upgrade.8.html b/man/v2.0/8/zpool-upgrade.8.html new file mode 100644 index 000000000..4a8284b65 --- /dev/null +++ b/man/v2.0/8/zpool-upgrade.8.html @@ -0,0 +1,312 @@ + + + + + + + zpool-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-upgrade.8

+
+ + + + + +
ZPOOL-UPGRADE(8)System Manager's ManualZPOOL-UPGRADE(8)
+
+
+

+

zpool-upgrade — + Manage version and feature flags of ZFS storage + pools

+
+
+

+ + + + + +
zpool upgrade
+
+ + + + + +
zpool upgrade -v
+
+ + + + + +
zpool upgrade [-V + version] + -a|pool...
+
+
+

+
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools.
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by the current software. See + zpool-features(5) for a description of feature flags + features supported by the current software.
+
zpool upgrade + [-V version] + -a|pool...
+
Enables all supported features on the given pool. Once this is done, the + pool will no longer be accessible on systems that do not support feature + flags. See zpool-features(5) for details on + compatibility with systems that support feature flags, but do not support + all features enabled on the pool. +
+
+ -a
Enables all supported features on all pools.
+
+ -V version
+
Upgrade to the specified legacy version. If the + -V flag is specified, no features will be + enabled on the pool. This option can only be used to increase the + version number up to the last supported legacy version number.
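As a sketch of typical usage (the pool name tank is illustrative): first list pools that can be upgraded, then enable all supported features on one pool or on every pool:
# zpool upgrade
# zpool upgrade tank
# zpool upgrade -a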
+
+
+
+
+
+

+

zpool-features(5), + zpoolconcepts(8), zpoolprops(8), + zpool-history(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-wait.8.html b/man/v2.0/8/zpool-wait.8.html new file mode 100644 index 000000000..22537b709 --- /dev/null +++ b/man/v2.0/8/zpool-wait.8.html @@ -0,0 +1,312 @@ + + + + + + + zpool-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-wait.8

+
+ + + + + +
ZPOOL-WAIT(8)System Manager's ManualZPOOL-WAIT(8)
+
+
+

+

zpool-wait — Wait + for background activity to stop in a ZFS storage pool

+
+
+

+ + + + + +
zpool wait [-Hp] + [-T u|d] + [-t + activity[,activity]...] + pool [interval]
+
+
+

+
+
zpool wait + [-Hp] [-T + u|d] [-t + activity[,activity]...] + pool [interval]
+
Waits until all background activity of the given types has ceased in the + given pool. The activity could cease because it has completed, or because + it has been paused or canceled by a user, or because the pool has been + exported or destroyed. If no activities are specified, the command waits + until background activity of every type listed below has ceased. If there + is no activity of the given types in progress, the command returns + immediately. +

These are the possible values for + activity, along with what each one waits for:

+
+
        discard       Checkpoint to be discarded
+        free          'freeing' property to become 0
+        initialize    All initializations to cease
+        replace       All device replacements to cease
+        remove        Device removal to cease
+        resilver      Resilver to cease
+        scrub         Scrub to cease
+        trim          Manual trim to cease
+
+

If an interval is provided, the amount + of work remaining, in bytes, for each activity is printed every + interval seconds.

+
+
+ -H
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ -p
Display numbers in parsable (exact) values.
+
+ -T u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
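For example (a sketch; the pool name tank is illustrative), waiting for an in-progress scrub and resilver to finish while printing the remaining work every 10 seconds might look like:
# zpool wait -t scrub,resilver tank 10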
+
+
+
+
+
+

+

zpool-status(8), + zpool-checkpoint(8), + zpool-initialize(8), zpool-replace(8), + zpool-remove(8), zpool-resilver(8), + zpool-scrub(8), zpool-trim(8)

+
+
+ + + + + +
February 25, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool.8.html b/man/v2.0/8/zpool.8.html new file mode 100644 index 000000000..2e2b6881f --- /dev/null +++ b/man/v2.0/8/zpool.8.html @@ -0,0 +1,798 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)System Manager's ManualZPOOL(8)
+
+
+

+

zpool — configure + ZFS storage pools

+
+
+

+ + + + + +
zpool -?V
+
+ + + + + +
zpool version
+
+ + + + + +
zpool <subcommand> + [<args>]
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+

For an overview of creating and managing ZFS storage pools see the + zpoolconcepts(8) manual page.

+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool -V, + --version
+
An alias for the zpool + version subcommand.
+
zpool version
+
Displays the software version of the zpool + userland utility and the zfs kernel module.
+
+
+

+
+
zpool-create(8)
+
Creates a new storage pool containing the virtual devices specified on the + command line.
+
zpool-initialize(8)
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified.
+
+
+
+

+
+
zpool-destroy(8)
+
Destroys the given pool, freeing up any devices for other use.
+
zpool-labelclear(8)
+
Removes ZFS label information from the specified + device.
+
+
+
+

+
+
zpool-attach(8) / zpool-detach(8)
+
Increases or decreases redundancy by attach-ing or + detach-ing a device on an existing vdev (virtual + device).
+
zpool-add(8) / zpool-remove(8)
+
Adds the specified virtual devices to the given pool, or removes the + specified device from the pool.
+
zpool-replace(8)
+
Replaces an existing device (which may be faulted) with a new one.
+
zpool-split(8)
+
Creates a new pool by splitting all mirrors in an existing pool (which + decreases its redundancy).
+
+
+
+

+

Available pool properties listed in the + zpoolprops(8) manual page.

+
+
zpool-list(8)
+
Lists the given pools along with a health status and space usage.
+
zpool-get(8) / + zpool-set(8)
+
Retrieves the given list of properties (or all properties if + all is used) for + the specified storage pool(s).
+
+
+
+

+
+
zpool-status(8)
+
Displays the detailed health status for the given pools.
+
zpool-iostat(8)
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/Os + may be observed via iostat(1).
+
zpool-events(8)
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + For more information about the subclasses and event payloads that can be + generated see the zfs-events(5) man page.
+
zpool-history(8)
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified.
+
+
+
+

+
+
zpool-scrub(8)
+
Begins a scrub or resumes a paused scrub.
+
zpool-checkpoint(8)
+
Checkpoints the current state of pool , which can be + later restored by zpool import + --rewind-to-checkpoint.
+
zpool-trim(8)
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.
+
zpool-sync(8)
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + will + sync all pools on the system. Otherwise, it will sync only the specified + pool(s).
+
zpool-upgrade(8)
+
Manage the on-disk format version of storage pools.
+
zpool-wait(8)
+
Waits until all background activity of the given types has ceased in the + given pool.
+
+
+
+

+
+
zpool-offline(8) zpool-online(8)
+
Takes the specified physical device offline or brings it online.
+
zpool-resilver(8)
+
Starts a resilver. If an existing resilver is already running it will be + restarted from the beginning.
+
zpool-reopen(8)
+
Reopen all the vdevs associated with the pool.
+
zpool-clear(8)
+
Clears device errors in a pool.
+
+
+
+

+
+
zpool-import(8)
+
Make disks containing ZFS storage pools available for use on the + system.
+
zpool-export(8)
+
Exports the given pools from the system.
+
zpool-reguid(8)
+
Generates a new unique identifier for the pool.
+
+
+
+
+

+

The following exit values are returned:

+
+
+ 0
Successful completion.
+
+ 1
An error occurred.
+
+ 2
Invalid command line options were specified.
+
+
+
+

+
+
Creating a RAID-Z Storage Pool
+
The following command creates a pool with a single raidz root vdev that + consists of six disks. +
+
# zpool create tank raidz sda sdb sdc sdd sde sdf
+
+
+
Creating a Mirrored Storage Pool
+
The following command creates a pool with two mirrors, where each mirror + contains two disks. +
+
# zpool create tank mirror sda sdb mirror sdc sdd
+
+
+
Creating a ZFS Storage Pool by Using + Partitions
+
The following command creates an unmirrored pool using two disk + partitions. +
+
# zpool create tank sda1 sdb2
+
+
+
Creating a ZFS Storage Pool by Using + Files
+
The following command creates an unmirrored pool using files. While not + recommended, a pool based on files can be useful for experimental + purposes. +
+
# zpool create tank /path/to/file/a /path/to/file/b
+
+
+
Adding a Mirror to a ZFS Storage Pool
+
The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool. +
+
# zpool add tank mirror sda sdb
+
+
+
Listing Available ZFS Storage Pools
+
The following command lists all available pools on the system. In this + case, the pool zion is + faulted due to a missing device. The results from this command are similar + to the following:
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
Destroying a ZFS Storage Pool
+
The following command destroys the pool tank and any + datasets contained within. +
+
# zpool destroy -f tank
+
+
+
Exporting a ZFS Storage Pool
+
The following command exports the devices in pool tank + so that they can be relocated or later imported. +
+
# zpool export tank
+
+
+
Importing a ZFS Storage Pool
+
The following command displays available pools, and then imports the pool + tank for use on the system. The results from this + command are similar to the following: +
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
Upgrading All ZFS Storage Pools to the Current + Version
+
The following command upgrades all ZFS Storage pools to the current + version of the software. +
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
Managing Hot Spares
+
The following command creates a new pool with an available hot spare: +
+
# zpool create tank mirror sda sdb spare sdc
+
+

If one of the disks were to fail, the pool would be reduced to + the degraded state. The failed device can be replaced using the + following command:

+
+
# zpool replace tank sda sdd
+
+

Once the data has been resilvered, the spare is automatically + removed and is made available for use should another device fail. The + hot spare can be permanently removed from the pool using the following + command:

+
+
# zpool remove tank sdc
+
+
+
Creating a ZFS Pool with Mirrored Separate + Intent Logs
+
The following command creates a ZFS storage pool consisting of two, + two-way mirrors and mirrored log devices: +
+
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \
+  sde sdf
+
+
+
Adding Cache Devices to a ZFS Pool
+
The following command adds two disks for use as cache devices to a ZFS + storage pool: +
+
# zpool add pool cache sdc sdd
+
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take + over an hour for them to fill. Capacity and reads can be monitored using + the iostat option as follows:

+
+
# zpool iostat -v pool 5
+
+
+
Removing a Mirrored top-level (Log or Data) + Device
+
The following commands remove the mirrored log device + mirror-2 and mirrored top-level data device + mirror-1. +

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
+
# zpool remove tank mirror-2
+
+

The command to remove the mirrored data + mirror-1 is:

+
+
# zpool remove tank mirror-1
+
+
+
Displaying expanded space on a + device
+
The following command displays the detailed information for the pool + data. + This pool is comprised of a single raidz vdev where one of its devices + increased its capacity by 10GB. In this example, the pool will not be able + to utilize this extra capacity until all the devices under the raidz vdev + have been expanded.
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
Adding output columns
+
Additional columns can be added to the zpool + status and zpool + iostat output with -c + option. +
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes + of running + .
+
+
+
+
Use ANSI color in zpool status output.
+
+
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
+
+
+
+
The maximum time in milliseconds that zpool import + will wait for an expected device to be available.
+
+
+
+
If set, suppress warning about non-native vdev ashift in + zpool status. The value is not used, only the + presence or absence of the variable matters.
+
+
+
+
Cause zpool subcommands to output vdev guids by + default. This behavior is identical to the zpool status + -g command line option.
+
+
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the zpool + status -L command line option.
+
+
+
+
Cause zpool subcommands to output full vdev path + names by default. This behavior is identical to the zpool + status -P command line option.
+
+
+
+
Older OpenZFS implementations had issues when attempting to display pool + config VDEV names if a devid NVP value is present in the + pool's config. +

For example, a pool that originated on illumos platform would + have a devid value in the config and zpool + status would fail when listing the config. This would also be + true for future Linux based pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool add by setting + ZFS_VDEV_DEVID_OPT_OUT.

+
+
+
+
+
Allow a privileged user to run the zpool + status/iostat with the -c option. Normally, + only unprivileged users are allowed to run + -c.
+
+
+
+
The search path for scripts when running zpool + status/iostat with the -c option. This is a + colon-separated list of directories and overrides the default + ~/.zpool.d and + /etc/zfs/zpool.d search paths.
+
+
+
+
Allow a user to run zpool status/iostat with the + -c option. If + ZPOOL_SCRIPTS_ENABLED is not set, it is assumed that the + user is allowed to run zpool status/iostat + -c.
+
+
+
+

+

+
+
+

+

zfs-events(5), + zfs-module-parameters(5), + zpool-features(5), zed(8), + zfs(8), zpool-add(8), + zpool-attach(8), zpool-checkpoint(8), + zpool-clear(8), zpool-create(8), + zpool-destroy(8), zpool-detach(8), + zpool-events(8), zpool-export(8), + zpool-get(8), zpool-history(8), + zpool-import(8), zpool-initialize(8), + zpool-iostat(8), zpool-labelclear(8), + zpool-list(8), zpool-offline(8), + zpool-online(8), zpool-reguid(8), + zpool-remove(8), zpool-reopen(8), + zpool-replace(8), zpool-resilver(8), + zpool-scrub(8), zpool-set(8), + zpool-split(8), zpool-status(8), + zpool-sync(8), zpool-trim(8), + zpool-upgrade(8), zpool-wait(8), + zpoolconcepts(8), zpoolprops(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpoolconcepts.8.html b/man/v2.0/8/zpoolconcepts.8.html new file mode 100644 index 000000000..4c42cbfec --- /dev/null +++ b/man/v2.0/8/zpoolconcepts.8.html @@ -0,0 +1,572 @@ + + + + + + + zpoolconcepts.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolconcepts.8

+
+ + + + + +
ZPOOLCONCEPTS(8)System Manager's ManualZPOOLCONCEPTS(8)
+
+
+

+

zpoolconcepts — + overview of ZFS storage pools

+
+
+

+
+

+

A "virtual device" describes a single device or a + collection of devices organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system of which it + is a part. A file must be specified by a full path.
+
+
A mirror of two or more devices. Data is replicated in an identical + fashion across all components of a mirror. A mirror with N disks of size X + can hold X bytes and can withstand (N-1) devices failing before data + integrity is compromised.
+
, + raidz1, raidz2, + raidz3
+
A variation on RAID-5 that allows for better distribution of parity and + eliminates the RAID-5 "write hole" (in which data and parity + become inconsistent after a power loss). Data and parity is striped across + all disks within a raidz group. +

A raidz group can have single-, double-, or triple-parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N disks of size X with P parity disks can + hold approximately (N-P)*X bytes and can withstand P device(s) failing + before data integrity is compromised. The minimum number of devices in a + raidz group is one more than the number of parity disks. The recommended + number is between 3 and 9 to help increase performance.

+
+
+
A pseudo-vdev which keeps track of available hot spares for a pool. For + more information, see the Hot Spares + section.
+
+
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+
A device dedicated solely for deduplication tables. The redundancy of this + device should match the redundancy of the other normal devices in the + pool. If more than one dedup device is specified, then allocations are + load-balanced between those devices.
+
+
A device dedicated solely for allocating various kinds of internal + metadata, and optionally small file blocks. The redundancy of this device + should match the redundancy of the other normal devices in the pool. If + more than one special device is specified, then allocations are + load-balanced between those devices. +

For more information on special allocations, see the + Special Allocation + Class section.

+
+
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested, so a mirror or raidz virtual + device can only contain files or disks. Mirrors of mirrors (or other + combinations) are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. The keywords mirror and + raidz are used to distinguish where a group ends and + another begins. For example, the following creates two root vdevs, each a + mirror of two disks:

+
+
# zpool create mypool mirror sda sdb mirror sdc sdd
+
+
+
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three states: + online, degraded, or faulted. An online pool has all devices operating + normally. A degraded pool is one in which one or more devices have failed, + but the data is still available due to a redundant configuration. A faulted + pool has corrupted metadata, or one or more faulted devices, and + insufficient replicas to continue functioning.

+

The health of the top-level vdev, such as mirror or raidz device, + is potentially impacted by the state of its associated vdevs, or component + devices. A top-level vdev or component device is in one of the following + states:

+
+
+ DEGRADED
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors exceeds acceptable levels and the device + is degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
  • +
  • The number of I/O errors exceeds acceptable levels. The device could + not be marked as faulted because there are insufficient replicas to + continue functioning.
  • +
+
+
+ FAULTED
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected + values.
  • +
  • The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
  • +
+
+ OFFLINE
+
The device was explicitly taken offline by the + zpool offline + command.
+
+ ONLINE
The device is online and functioning.
+
+ REMOVED
The device was physically removed while the system was running. Device + removal detection is hardware-dependent and may not be supported on all + platforms.
+
+ UNAVAIL
The device could not be opened. If a pool is imported when a device was + unavailable, then the device will be identified by a unique identifier + instead of its path since the path was never correct in the first + place.
+
+

If a device is removed and later re-attached to the system, ZFS + attempts to put the device online automatically. Device attach detection is + hardware-dependent and might not be supported on all platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool, but when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a spare vdev with any + number of devices. For example,

+
+
# zpool create pool mirror sda sdb spare sdc sdd
+
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again if another device + fails.

+

If a pool has a shared spare that is currently being used, the + pool can not be exported since other pools may use this shared spare, which + may lead to potential data corruption.

+

Shared spares add some risk. If the pools are imported on + different hosts, and both pools suffer a device failure at the same time, + both could attempt to use the spare at the same time. This may not be + detected, resulting in data corruption.

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
+
# zpool create pool sda sdb log sdc
+
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.

+

Log devices can be added, replaced, attached, detached and + removed. In addition, log devices are imported and exported as part of the + pool that contains them. Mirrored devices can be removed by specifying the + top-level mirror vdev.

+
+
+

+

Devices can be added to a storage pool as "cache + devices". These devices provide an additional layer of caching between + main memory and disk. For read-heavy workloads, where the working set size + is much larger than what can be cached in main memory, using cache devices + allow much more of this working set to be served from low latency media. + Using cache devices provides the greatest performance improvement for random + read-workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
+
# zpool create pool sda sdb cache sdc sdd
+
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.

+

The content of the cache devices is + persistent across reboots and restored asynchronously when importing the + pool in L2ARC (persistent L2ARC). This can be disabled by setting + . For cache devices smaller than 1GB we do not write the metadata + structures required for rebuilding the L2ARC in order not to waste space. + This can be changed with + . + The cache device header (512 bytes) is updated even if no metadata + structures are written. Setting + will result in scanning the full-length ARC lists for cacheable + content to be written in L2ARC (persistent ARC). If a cache device is added + with zpool add its label and + header will be overwritten and its contents are not going to be restored in + L2ARC, even if the device was previously part of the pool. If a cache device + is onlined with zpool online + its contents will be restored in L2ARC. This is useful in case of memory + pressure where the contents of the cache device are not fully restored in + L2ARC. The user can off/online the cache device when there is less memory + pressure in order to fully restore its contents to L2ARC.

+
+
+

+

Before starting critical procedures that include destructive + actions (e.g zfs destroy ), + an administrator can checkpoint the pool's state and in the case of a + mistake or failure, rewind the entire pool back to the checkpoint. + Otherwise, the checkpoint can be discarded when the procedure has completed + successfully.

+

A pool checkpoint can be thought of as a pool-wide snapshot and + should be used with care as it contains every part of the pool's state, from + properties to vdev configuration. Thus, while a pool has a checkpoint + certain operations are not allowed. Specifically, vdev + removal/attach/detach, mirror splitting, and changing the pool's guid. + Adding a new vdev is supported but in the case of a rewind it will have to + be added again. Finally, users of this feature should keep in mind that + scrubs in a pool that has a checkpoint do not repair checkpointed data.

+

To create a checkpoint for a pool:

+
+
# zpool checkpoint pool
+
+

To later rewind to its checkpointed state, you need to first + export it and then rewind it during import:

+
+
# zpool export pool
+# zpool import --rewind-to-checkpoint pool
+
+

To discard the checkpoint from a pool:

+
+
# zpool checkpoint -d pool
+
+

Dataset reservations (controlled by the + reservation or + refreservation zfs properties) may be unenforceable + while a checkpoint exists, because the checkpoint is allowed to consume the + dataset's reservation. Finally, data that is part of the checkpoint but has + been freed in the current state of the pool won't be scanned during a + scrub.

+
+
+

+

The allocations in the special class are dedicated to specific + block types. By default this includes all metadata, the indirect blocks of + user data, and any deduplication tables. The class can also be provisioned + to accept small file blocks.

+

A pool must always have at least one normal (non-dedup/special) + vdev before other devices can be assigned to the special class. If the + special class becomes full, then allocations intended for it will spill back + into the normal class.

+

Deduplication tables can be excluded + from the special class by setting the + + zfs module parameter to false (0).

+

Inclusion of small file blocks in the + special class is opt-in. Each dataset can control the size of small file + blocks allowed in the special class by setting the + + dataset property. It defaults to zero, so you must opt-in by setting it to a + non-zero value. See zfs(8) for more info on setting this + property.

+
+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpoolprops.8.html b/man/v2.0/8/zpoolprops.8.html new file mode 100644 index 000000000..6828f61b3 --- /dev/null +++ b/man/v2.0/8/zpoolprops.8.html @@ -0,0 +1,513 @@ + + + + + + + zpoolprops.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolprops.8

+
+ + + + + +
ZPOOLPROPS(8)System Manager's ManualZPOOLPROPS(8)
+
+
+

+

zpoolprops — + available properties for ZFS storage pools

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

The following are read-only properties:

+
+
+
Amount of storage used within the pool. See + fragmentation and free for more + information.
+
+
Percentage of pool space used. This property can also be referred to by + its shortened column name, + .
+
+
Amount of uninitialized space within the pool or device that can be used + to increase the total capacity of the pool. On whole-disk vdevs, this is + the space beyond the end of the GPT – typically occurring when a + LUN is dynamically expanded or a disk replaced with a larger one. On + partition vdevs, this is the space appended to the partition after it was + added to the pool – most likely by resizing it in-place. The space + can be claimed for the pool by bringing it online with + + or using zpool online + -e.
+
+
The amount of fragmentation in the pool. As the amount of space + allocated increases, it becomes more difficult to locate + free space. This may result in lower write performance + compared to pools with more unfragmented free space.
+
+
The amount of free space available in the pool. By contrast, the + zfs(8) available property describes + how much new data can be written to ZFS filesystems/volumes. The zpool + free property is not generally useful for this purpose, + and can be substantially more than the zfs available + space. This discrepancy is due to several factors, including raidz parity; + zfs reservation, quota, refreservation, and refquota properties; and space + set aside by + + (see zfs-module-parameters(5) for more + information).
+
+
After a file system or snapshot is destroyed, the space it was using is + returned to the pool asynchronously. freeing is the + amount of space remaining to be reclaimed. Over time + freeing will decrease while free + increases.
+
+
The current health of the pool. Health can be one of + ONLINE, + DEGRADED, + FAULTED, + OFFLINE, + REMOVED.
+
+
A unique identifier for the pool.
+
+
A unique identifier for the pool. Unlike the guid + property, this identifier is generated every time we load the pool (e.g. + does not persist across imports/exports) and never changes while the pool + is loaded (even if a + + operation takes place).
+
+
Total size of the storage pool.
+
+
Information about unsupported features that are enabled on the pool. See + zpool-features(5) for details.
+
+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of the + data being written. In addition, ZFS reserves some space for internal + accounting that the zfs(8) command takes into account, but + the zpoolprops command does not. For non-full pools + of a reasonable size, these effects should be invisible. For small pools, or + pools that are close to being completely full, these discrepancies may + become more noticeable.

+

The following property can be set at creation time and import + time:

+
+
+ altroot
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+

The following property can be set only at import time:

+
+
+ readonly=on|off
+
If set to on, the pool will be imported in read-only + mode. This property can also be referred to by its shortened column name, + .
+
+

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
+
+ ashift=ashift
+
Pool sector size exponent, to the power of 2 (internally + referred to as ashift). Values from 9 to 16, inclusive, + are valid; also, the value 0 (the default) means to auto-detect using the + kernel's block layer and a ZFS internal exception list. I/O operations + will be aligned to the specified size boundaries. Additionally, the + minimum (disk) write size will be set to the specified size, so this + represents a space vs. performance trade-off. For optimal performance, the + pool sector size should be greater than or equal to the sector size of the + underlying disks. The typical case for setting this property is when + performance is important and the underlying disks use 4KiB sectors but + report 512B sectors to the OS (for compatibility reasons); in that case, + set ashift=12 + (which is 1<<12 = 4096). When set, this property is used as the + default hint value in subsequent vdev operations (add, attach and + replace). Changing this value will not modify any existing vdev, not even + on disk replacement; however it can be used, for instance, to replace a + dying 512B sectors disk with a newer 4KiB sectors device: this will + probably result in bad performance but at the same time could prevent loss + of data.
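For example (a sketch; device names are illustrative), a mirrored pool can be forced to use 4KiB sectors at creation time and the value verified afterwards:
# zpool create -o ashift=12 tank mirror sda sdb
# zpool get ashift tank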
+
+ autoexpand=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set + to on, the pool will be resized according to the size of + the expanded device. If the device is part of a mirror or raidz then all + devices within that mirror/raidz group must be expanded before the new + space is made available to the pool. The default behavior is + off. This property can also be referred to by its + shortened column name, + .
+
+ autoreplace=on|off
+
Controls automatic device replacement. If set to off, + device replacement must be initiated by the administrator by using the + zpool replace command. If + set to on, any new device, found in the same physical + location as a device that previously belonged to the pool, is + automatically formatted and replaced. The default behavior is + off. This property can also be referred to by its + shortened column name, + . + Autoreplace can also be used with virtual disks (like device mapper) + provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf. + See the vdev_id(8) man page for more details. + Autoreplace and autoonline require the ZFS Event Daemon be configured and + running. See the zed(8) man page for more details.
+
+ autotrim=on|off
+
When set to on space which has been recently freed, and + is no longer allocated by the pool, will be periodically trimmed. This + allows block device vdevs which support BLKDISCARD, such as SSDs, or file + vdevs on which the underlying file system supports hole-punching, to + reclaim unused blocks. The default setting for this property is + off. +

Automatic TRIM does not immediately + reclaim blocks after a free. Instead, it will optimistically delay + allowing smaller ranges to be aggregated in to a few larger ones. These + can then be issued more efficiently to the storage. TRIM on L2ARC + devices is enabled by setting + .

+

Be aware that automatic trimming of recently freed data blocks + can put significant stress on the underlying storage devices. This will + vary depending on how well the specific device handles these commands. + For lower end devices it is often possible to achieve most of the + benefits of automatic trimming by running an on-demand (manual) TRIM + periodically using the zpool + trim command.
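For example (a sketch; the pool name tank is illustrative), automatic trimming can be enabled, or a manual TRIM run periodically instead, with:
# zpool set autotrim=on tank
# zpool trim tank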

+
+
+ bootfs=|pool/dataset
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
+ cachefile=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the value none + creates a temporary pool that is never cached, and the "" (empty + string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.

+
+
+ comment=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
+
+ dedupditto=number
+
This property is deprecated and no longer has any effect.
+
+ delegation=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
+ failmode=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
+
+ wait
Blocks all I/O access until the device connectivity is recovered and + the errors are cleared. This is the default behavior.
+
+ continue
Returns EIO to any new write I/O requests but + allows reads to any of the remaining healthy devices. Any write + requests that have yet to be committed to disk would be blocked.
+
+ panic
Prints out a message to the console and generates a system crash + dump.
+
+
+
feature@feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(5) for details on feature states.
+
+ listsnapshots=on|off
+
Controls whether information about snapshots associated with this pool is + output when zfs list is + run without the -t option. The default value is + off. This property can also be referred to by its + shortened name, + .
+
+ multihost=on|off
+
Controls whether a pool activity check should be performed during + zpool import. When a pool + is determined to be active it cannot be imported, even with the + -f option. This property is intended to be used in + failover configurations where multiple hosts have access to a pool on + shared storage. +

Multihost provides protection on import only. It + does not protect against an individual device being used in multiple + pools, regardless of the type of vdev. See the discussion under +

+

When this property is on, periodic + writes to storage occur to show the pool is in use. See + + in the zfs-module-parameters(5) man page. In order to + enable this property each host must set a unique hostid. See + zgenhostid(8) + spl-module-parameters(5) for additional details. The + default value is off.

+
+
+ version=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zstream.8.html b/man/v2.0/8/zstream.8.html new file mode 100644 index 000000000..cd85c0038 --- /dev/null +++ b/man/v2.0/8/zstream.8.html @@ -0,0 +1,325 @@ + + + + + + + zstream.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zstream.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate zfs send streams

+
+
+

+ + + + + +
zstream dump [-Cvd] + [file]
+
+ + + + + +
zstream redup [-v] + file
+
+ + + + + +
zstream token resume_token
+
+
+

+

The + + utility manipulates zfs send streams, which are the output of the + + command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
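For example (a sketch; dataset and snapshot names are illustrative), a send stream can be inspected without receiving it by piping it directly into zstream dump:
# zfs send tank/fs@snap | zstream dump -v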
+
+ -C
Suppress the validation of checksums.
+
+ -v
Verbose. Print metadata for each record.
+
+ -d
Dump data contained in each record. Implies verbose.
+
+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
+
# zstream redup DEDUP_STREAM_FILE | zfs receive ...
+
+
+
+ -v
Verbose. Print summary of converted records.
+
+
+
+
+
+

+

zfs(8), zfs-send(8), + zfs-receive(8)

+
+
+ + + + + +
March 25, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zstreamdump.8.html b/man/v2.0/8/zstreamdump.8.html new file mode 100644 index 000000000..8a560c869 --- /dev/null +++ b/man/v2.0/8/zstreamdump.8.html @@ -0,0 +1,276 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
ZSTREAMDUMP(8)System Manager's ManualZSTREAMDUMP(8)
+
+
+

+

zstreamdump - filter data in zfs send stream

+
+
+

+
zstreamdump [-C] [-v] [-d]
+

+
+
+

+

The zstreamdump utility reads from the output of the zfs + send command, then displays headers and some statistics from that + output. See zfs(8).
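For example (a sketch; dataset and snapshot names are illustrative):
# zfs send tank/fs@snap | zstreamdump -v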

+
+
+

+

The following options are supported:

+

-C

+

+
Suppress the validation of checksums.
+

+

-v

+

+
Verbose. Dump all headers, not only begin and end + headers.
+

+

-d

+

+
Dump contents of blocks modified. Implies verbose.
+

+
+
+

+

zfs(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/index.html b/man/v2.0/index.html new file mode 100644 index 000000000..5d84036dc --- /dev/null +++ b/man/v2.0/index.html @@ -0,0 +1,143 @@ + + + + + + + v2.0 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/arcstat.1.html b/man/v2.1/1/arcstat.1.html new file mode 100644 index 000000000..10e10d9fb --- /dev/null +++ b/man/v2.1/1/arcstat.1.html @@ -0,0 +1,336 @@ + + + + + + + arcstat.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

arcstat.1

+
+ + + + + +
ARCSTAT(1)General Commands ManualARCSTAT(1)
+
+
+

+

arcstat — report + ZFS ARC and L2ARC statistics

+
+
+

+ + + + + +
arcstat [-havxp] [-f + field[,field…]] + [-o file] + [-s string] + [interval] [count]
+
+
+

+

arcstat prints various ZFS ARC and L2ARC + statistics in vmstat-like fashion:

+
+
+
+
ARC target size
+
+
Demand data hit percentage
+
+
Demand data miss percentage
+
+
MFU list hits per second
+
+
Metadata hit percentage
+
+
Metadata miss percentage
+
+
MRU list hits per second
+
+
Prefetch hits percentage
+
+
Prefetch miss percentage
+
+
Demand data hits per second
+
+
Demand data misses per second
+
+
ARC hit percentage
+
+
ARC reads per second
+
+
MFU ghost list hits per second
+
+
Metadata hits per second
+
+
ARC misses per second
+
+
Metadata misses per second
+
+
MRU ghost list hits per second
+
+
Prefetch hits per second
+
+
Prefetch misses per second
+
+
Total ARC accesses per second
+
+
Current time
+
+
ARC size
+
+
Alias for size
+
+
Demand data accesses per second
+
+
evict_skip per second
+
+
ARC miss percentage
+
+
Metadata accesses per second
+
+
Prefetch accesses per second
+
+
L2ARC access hit percentage
+
+
L2ARC hits per second
+
+
L2ARC misses per second
+
+
Total L2ARC accesses per second
+
+
L2ARC prefetch allocated size per second
+
+
L2ARC prefetch allocated size percentage
+
+
L2ARC MFU allocated size per second
+
+
L2ARC MFU allocated size percentage
+
+
L2ARC MRU allocated size per second
+
+
L2ARC MRU allocated size percentage
+
+
L2ARC data (buf content) allocated size per second
+
+
L2ARC data (buf content) allocated size percentage
+
+
L2ARC metadata (buf content) allocated size per second
+
+
L2ARC metadata (buf content) allocated size percentage
+
+
Size of the L2ARC
+
+
mutex_miss per second
+
+
Bytes read per second from the L2ARC
+
+
L2ARC access miss percentage
+
+
Actual (compressed) size of the L2ARC
+
+
ARC grow disabled
+
+
ARC reclaim needed
+
+
The ARC's idea of how much free memory there is, which includes evictable + memory in the page cache. Since the ARC tries to keep + avail above zero, avail is usually + more instructive to observe than free.
+
+
The ARC's idea of how much free memory is available to it, which is a bit + less than free. May temporarily be negative, in which + case the ARC will reduce the target size c.
+
+
+
+
+

+
+
+ -a
Print all possible stats.
+
+ -f field[,field…]
Display only specific fields. See + DESCRIPTION for supported + statistics.
+
+ -h
Display help message.
+
+ -o file
Report statistics to a file instead of the standard output.
+
+ -p
Disable auto-scaling of numerical fields (for raw, machine-parsable + values).
+
+ -s string
Display data with a specified separator (default: 2 spaces).
+
+ -x
Print extended stats (same as -f + time,mfu,mru,mfug,mrug,eskip,mtxmis,dread,pread,read).
+
+ -v
Show field headers and definitions
+
+
+
+

+

The following operands are supported:

+
+
+
interval
+
Specify the sampling interval in seconds.
+
count
+
Display only count reports.
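For example (a sketch), the default statistics can be sampled every second for ten reports, or a few selected fields printed continuously:
$ arcstat 1 10
$ arcstat -f time,read,dread,pread 1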
+
+
+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/cstyle.1.html b/man/v2.1/1/cstyle.1.html new file mode 100644 index 000000000..bcbe75829 --- /dev/null +++ b/man/v2.1/1/cstyle.1.html @@ -0,0 +1,304 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
CSTYLE(1)General Commands ManualCSTYLE(1)
+
+
+

+

cstyle — check for + some common stylistic errors in C source files

+
+
+

+ + + + + +
cstyle [-chpvCP] [-o + construct[,construct…]] + [file]…
+
+
+

+

cstyle inspects C source files (*.c and + *.h) for common stylistic errors. It attempts to check for the cstyle + documented in + http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. + Note that there is much in that document that cannot + be checked for; just because your code is + cstyle-clean does not mean that you've followed + Sun's C style.
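For example (a sketch; the file path is illustrative), a source file can be checked with continuation-line checking, the pickier checks, POSIX-type checking and verbose output:
$ cstyle -cpvP module/zfs/arc.c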

+
+
+

+

The following options are supported:

+
+
+
Check continuation line indentation inside of functions. Sun's C style + states that all statements must be indented to an appropriate tab stop, + and any continuation lines after them must be indented + + four spaces from the start line. This option enables a series of checks + designed to find continuation line problems within functions only. The + checks have some limitations; see + , below.
+
+
Performs heuristic checks that are sometimes wrong. Not generally + used.
+
+
Performs some of the more picky checks. Includes ANSI + + and + + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current + continuation block.
+
+
Ignore errors in header comments (i.e. block comments starting in the + first column). Not generally used.
+
+
Check for use of non-POSIX types. Historically, types like + + and + + were used, but they are now deprecated in favor of the POSIX types + , + , + etc. This detects any use of the deprecated types. Used as part of the + putback checks.
+
+ construct[,construct…]
+
Available constructs include: +
+
+
Allow doxygen-style block comments + ( + and + ).
+
+
Allow splint-style lint comments + (...).
+
+
+
+
+
+

+

The continuation checker is a reasonably simple state machine that + knows something about how C is laid out, and can match parenthesis, etc. + over multiple lines. It does have some limitations:

+
    +
  1. Preprocessor macros which cause unmatched parenthesis will + confuse the checker for that line. To fix this, you'll need to make sure + that each branch of the + statement has + balanced parenthesis.
  2. +
  3. Some cpp(1) macros do not require + ;s after them. Any such macros + be ALL_CAPS; + any lower case letters will cause bad output. +

    The bad output will generally be corrected after the + next ;, + , + or + .

    +
  4. +
+Some continuation error messages deserve some additional explanation: +
+
+
A multi-line statement which is not broken at statement boundaries. For + example: +
+
if (this_is_a_long_variable == another_variable) a =
+    b + c;
+
+

Will trigger this error. Instead, do:

+
+
if (this_is_a_long_variable == another_variable)
+    a = b + c;
+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example: +
+
while (do_something(&x) == 0);
+
+

Will trigger this error. Instead, do:

+
+
while (do_something(&x) == 0)
+    ;
+
+
+
+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/index.html b/man/v2.1/1/index.html new file mode 100644 index 000000000..80a3e1df2 --- /dev/null +++ b/man/v2.1/1/index.html @@ -0,0 +1,157 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/raidz_test.1.html b/man/v2.1/1/raidz_test.1.html new file mode 100644 index 000000000..5da0ce0e2 --- /dev/null +++ b/man/v2.1/1/raidz_test.1.html @@ -0,0 +1,253 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
RAIDZ_TEST(1)General Commands ManualRAIDZ_TEST(1)
+
+
+

+

raidz_testraidz + implementation verification and benchmarking tool

+
+
+

+ + + + + +
raidz_test[-StBevTD] [-a + ashift] [-o + zio_off_shift] [-d + raidz_data_disks] [-s + zio_size_shift] [-r + reflow_offset]
+
+
+

+

The purpose of this tool is to run all supported raidz implementations and verify the results of all methods. It also contains a parameter sweep option where all parameters affecting a RAIDZ block are verified (like ashift size, data offset, data size, etc.). The tool also supports a benchmarking mode using the -B option.

+
+
+

+
+
+
Print a help summary.
+
+ ashift (default: + )
+
Ashift value.
+
+ zio_off_shift (default: + )
+
ZIO offset for each raidz block. The offset's value is 2^zio_off_shift.
+
+ raidz_data_disks (default: + )
+
Number of raidz data disks to use. Additional disks will be used for + parity.
+
+ zio_size_shift (default: + )
+
Size of data for raidz block. The real size is 2^zio_size_shift.
+
+ reflow_offset (default: + )
+
Set raidz expansion offset. The expanded raidz map allocation function + will produce different map configurations depending on this value.
+
-S (weep)
+
Sweep parameter space while verifying the raidz implementations. This option will exhaust almost all of the valid values for the -aods options. Runtime using this option will be long.
+
-t (imeout)
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
-B (enchmark)
+
All implementations are benchmarked using increasing per disk data size. + Results are given as throughput per disk, measured in MiB/s.
+
-e (xpansion)
+
Use expanded raidz map allocation function.
+
-v (erbose)
+
Increase verbosity.
+
-T (est the test)
+
Debugging option: fail all tests. This is to check if tests would properly + verify bit-exactness.
+
-D (ebug)
+
Debugging option: attach gdb(1) when + + or + + are received.
+
+
+
+
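For example (a sketch of typical invocations, not taken from this page), the benchmarking and sweep modes described above might be run as:

# raidz_test -B
# raidz_test -S -t 60

The first command benchmarks all available raidz implementations, while the second sweeps the parameter space, bounding the sweep to roughly 60 seconds of wall time with -t.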

+

ztest(1)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/zhack.1.html b/man/v2.1/1/zhack.1.html new file mode 100644 index 000000000..e9e517f67 --- /dev/null +++ b/man/v2.1/1/zhack.1.html @@ -0,0 +1,267 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
ZHACK(1)General Commands ManualZHACK(1)
+
+
+

+

zhacklibzpool + debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+
+
+ + + + + +
zhackfeature stat pool
+
+
List feature flags.
+
+ + + + + +
zhackfeature enable [-d + description] [-r] + pool guid
+
+
Add a new feature to pool that is uniquely + identified by guid, which is specified in the same + form as a zfs(8) user property. +

The description is a short human + readable explanation of the new feature.

+

The -r flag indicates that + pool can be safely opened in read-only mode by a + system that does not understand the guid + feature.

+
+
+ + + + + +
zhackfeature ref + [-d|-m] + pool guid
+
+
Increment the reference count of the guid feature in + pool. +

The -d flag decrements the reference + count of the guid feature in + pool instead.

+

The -m flag indicates that the + guid feature is now required to read the pool + MOS.

+
+
+
+
+

+

The following can be passed to all zhack + invocations before any subcommand:

+
+
+ cachefile
+
Read pool configuration from the + cachefile, which is + /etc/zfs/zpool.cache by default.
+
+ dir
+
Search for pool members in + dir. Can be specified more than once.
+
+
+
+

+
+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
+# zhack feature enable -d 'Predict future disk failures.' tank com.example:clairvoyance
+# zhack feature ref tank com.example:clairvoyance
+
+
+
+
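A further illustrative example (the device directory is only a placeholder): when no cache file is available, the -d option described above can point zhack at the directory containing the pool's member devices:

# zhack -d /dev/disk/by-id feature stat tank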

+

ztest(1), zpool-features(7), + zfs(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/ztest.1.html b/man/v2.1/1/ztest.1.html new file mode 100644 index 000000000..5b037d13b --- /dev/null +++ b/man/v2.1/1/ztest.1.html @@ -0,0 +1,379 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ZTEST(1)General Commands ManualZTEST(1)
+
+
+

+

ztestwas + written by the ZFS Developers as a ZFS unit test

+
+
+

+ + + + + +
ztest[-VEG] [-v + vdevs] [-s + size_of_each_vdev] [-a + alignment_shift] [-m + mirror_copies] [-r + raidz_disks/draid_disks] [-R + raid_parity] [-K + raid_kind] [-D + draid_data] [-S + draid_spares] [-C + vdev_class_state] [-d + datasets] [-t + threads] [-g + gang_block_threshold] [-i + initialize_pool_i_times] [-k + kill_percentage] [-p + pool_name] [-T + time] [-z + zil_failure_rate]
+
+
+

+

ztest was written by the ZFS Developers as a ZFS unit test. The tool was developed in tandem with the ZFS functionality and was executed nightly as one of the many regression tests against the daily build. As features were added to ZFS, unit tests were also added to ztest. In addition, a separate test development team wrote and executed more functional and stress tests.

+

By default ztest runs for ten minutes and + uses block files (stored in /tmp) to create pools + rather than using physical disks. Block files afford + ztest its flexibility to play around with zpool + components without requiring large hardware configurations. However, storing + the block files in /tmp may not work for you if you + have a small tmp directory.

+

By default, ztest is non-verbose. This is why entering the command above will result in ztest quietly executing for 5 minutes. The -V option can be used to increase the verbosity of the tool. Adding multiple -V options is allowed and the more you add the more chatty ztest becomes.

+

After the ztest run completes, you should notice many ztest.* files lying around. These can be safely removed once the run completes, but should not be removed while a run is in progress. You can re-use these files in your next ztest run by using the -E option.

+
+
+

+
+
, + -?, --help
+
Print a help summary.
+
, + --vdevs= (default: + )
+
Number of vdevs.
+
, + --vdev-size= (default: + )
+
Size of each vdev.
+
, + --alignment-shift= (default: + ) + (use + + for random)
+
Alignment shift used in test.
+
, + --mirror-copies= (default: + )
+
Number of mirror copies.
+
, + --raid-disks= (default: 4 + for + raidz/ + for draid)
+
Number of raidz/draid disks.
+
, + --raid-parity= (default: 1)
+
Raid parity (raidz & draid).
+
, + --raid-kind=||random + (default: random)
+
The kind of RAID config to use. With random the kind + alternates between raidz and draid.
+
, + --draid-data= (default: 4)
+
Number of data disks in a dRAID redundancy group.
+
, + --draid-spares= (default: 1)
+
Number of dRAID distributed spare disks.
+
, + --datasets= (default: + )
+
Number of datasets.
+
, + --threads= (default: + )
+
Number of threads.
+
, + --gang-block-threshold= (default: + 32K)
+
Gang block threshold.
+
, + --init-count= (default: 1)
+
Number of pool initializations.
+
, + --kill-percentage= (default: + )
+
Kill percentage.
+
, + --pool-name= (default: + )
+
Pool name.
+
, + --vdev-file-directory= (default: + /tmp)
+
File directory for vdev files.
+
, + --multi-host
+
Multi-host; simulate pool imported on remote host.
+
, + --use-existing-pool
+
Use an existing pool instead of creating a new one.
+
, + --run-time= (default: + s)
+
Total test run time.
+
, + --pass-time= (default: + s)
+
Time per pass.
+
, + --freeze-loops= (default: + )
+
Max loops in + ().
+
, + --alt-ztest=
+
Alternate ztest path.
+
, + --vdev-class-state=||random + (default: random)
+
The vdev allocation class state.
+
, + --option=variable=value
+
Set global variable to an unsigned 32-bit integer + value (little-endian only).
+
, + --dump-debug
+
Dump zfs_dbgmsg buffer before exiting due to an error.
+
, + --verbose
+
Verbose (use multiple times for ever more verbosity).
+
+
+
+

+

To override /tmp as your location for + block files, you can use the -f option:

+
# ztest -f /
+

To get an idea of what ztest is actually + testing try this:

+
# ztest -f / -VVV
+

Maybe you'd like to run ztest for longer? + To do so simply use the -T option and specify the + runlength in seconds like so:

+
# ztest -f / -V -T 120
+
+
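As an additional illustrative example (not shown on this page originally), the block files left by a previous run can be reused by combining the -E option described above with the same vdev file directory:

# ztest -E -f / -T 120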
+

+
+
ZFS_HOSTID=id
+
Use id instead of the SPL hostid to identify this host. + Intended for use with ztest, but this environment + variable will affect any utility which uses libzpool, including + zpool(8). Since the kernel is unaware of this setting, + results with utilities other than ztest are undefined.
+
ZFS_STACK_SIZE=stacksize
+
Limit the default stack size to stacksize bytes for the + purpose of detecting and debugging kernel stack overflows. This value + defaults to 32K which is double the default + Linux + kernel stack size. +

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to + .

+
+
+
+
+
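An illustrative sketch of the environment variables above (assuming the names ZFS_HOSTID and ZFS_STACK_SIZE; the values shown are arbitrary examples):

# ZFS_HOSTID=0xdeadbeef ZFS_STACK_SIZE=65536 ztest -V -T 60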

+

zdb(1), zfs(1), + zpool(1), spl(4)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/zvol_wait.1.html b/man/v2.1/1/zvol_wait.1.html new file mode 100644 index 000000000..ae654edb7 --- /dev/null +++ b/man/v2.1/1/zvol_wait.1.html @@ -0,0 +1,190 @@ + + + + + + + zvol_wait.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zvol_wait.1

+
+ + + + + +
ZVOL_WAIT(1)General Commands ManualZVOL_WAIT(1)
+
+
+

+

zvol_waitwait + for ZFS volume links to appear in /dev

+
+
+

+ + + + + +
zvol_wait
+
+
+

+

When a ZFS pool is imported, the volumes within it will appear as + block devices. As they're registered, udev(7) + asynchronously creates symlinks under /dev/zvol + using the volumes' names. zvol_wait will wait for + all those symlinks to be created before exiting.

+
+
+
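A usage sketch (the pool name is a placeholder): zvol_wait is typically run right after an import, before starting services that expect the volume device nodes to exist:

# zpool import tank && zvol_wait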

+

udev(7)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/4/index.html b/man/v2.1/4/index.html new file mode 100644 index 000000000..2d52512f2 --- /dev/null +++ b/man/v2.1/4/index.html @@ -0,0 +1,149 @@ + + + + + + + Devices and Special Files (4) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Devices and Special Files (4)

+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/4/spl.4.html b/man/v2.1/4/spl.4.html new file mode 100644 index 000000000..7d702cc43 --- /dev/null +++ b/man/v2.1/4/spl.4.html @@ -0,0 +1,319 @@ + + + + + + + spl.4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

spl.4

+
+ + + + + +
SPL(4)Device Drivers ManualSPL(4)
+
+
+

+

splparameters + of the SPL kernel module

+
+
+

+
+
=4 + (uint)
+
The number of threads created for the spl_kmem_cache task queue. This task + queue is responsible for allocating new slabs for use by the kmem caches. + For the majority of systems and workloads only a small number of threads + are required.
+
=0 + (uint)
+
When this is set it prevents Linux from being able to rapidly reclaim all the memory held by the kmem caches. This may be useful in circumstances where it's preferable that Linux reclaim memory from some other subsystem first. Setting this will increase the likelihood of out-of-memory events on a memory-constrained system.
+
= + (uint)
+
The preferred number of objects per slab in the cache. In general, a larger value will increase the cache's memory footprint while decreasing the time required to perform an allocation. Conversely, a smaller value will minimize the footprint and improve cache reclaim time, but individual allocations may take longer.
+
= + (64-bit) or 4 (32-bit) (uint)
+
The maximum size of a kmem cache slab in MiB. This effectively limits the + maximum cache object size to + spl_kmem_cache_max_size/spl_kmem_cache_obj_per_slab. +

Caches may not be created with object sized larger than this + limit.

+
+
= + (uint)
+
For small objects the Linux slab allocator should be used to make the most + efficient use of the memory. However, large objects are not supported by + the Linux slab and therefore the SPL implementation is preferred. This + value is used to determine the cutoff between a small and large object. +

Objects of size spl_kmem_cache_slab_limit or + smaller will be allocated using the Linux slab allocator, large objects + use the SPL allocator. A cutoff of 16K was determined to be optimal for + architectures using 4K pages.

+
+
= + (uint)
+
As a general rule kmem_alloc() allocations should be small, preferably just a few pages, since they must be physically contiguous. Therefore, a rate limited warning will be printed to the console for any kmem_alloc() which exceeds a reasonable threshold.

The default warning threshold is set to eight pages but capped at 32K to accommodate systems using large pages. This value was selected to be small enough to ensure the largest allocations are quickly noticed and fixed, but large enough to avoid logging any warnings when an allocation size is larger than optimal but not a serious concern. Since this value is tunable, developers are encouraged to set it lower when testing so any new largish allocations are quickly caught. These warnings may be disabled by setting the threshold to zero.

+
+
=KMALLOC_MAX_SIZE/4 + (uint)
+
Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE. Allocations which are marginally smaller than this limit may succeed but should still be avoided due to the expense of locating a contiguous range of free pages. Therefore, a maximum kmem size with a reasonable safety margin of 4x is set. kmem_alloc() allocations larger than this maximum will quickly fail. vmem_alloc() allocations less than or equal to this value will use kmalloc(), but shift to vmalloc() when exceeding this value.
+
=0 + (uint)
+
Cache magazines are an optimization designed to minimize the cost of + allocating memory. They do this by keeping a per-cpu cache of recently + freed objects, which can then be reallocated without taking a lock. This + can improve performance on highly contended caches. However, because + objects in magazines will prevent otherwise empty slabs from being + immediately released this may not be ideal for low memory machines. +

For this reason, + spl_kmem_cache_magazine_size can be used to set a + maximum magazine size. When this value is set to 0 the magazine size + will be automatically determined based on the object size. Otherwise + magazines will be limited to 2-256 objects per magazine (i.e per cpu). + Magazines may never be entirely disabled in this implementation.

+
+
=0 + (ulong)
+
The system hostid, when set this can be used to uniquely identify a + system. By default this value is set to zero which indicates the hostid is + disabled. It can be explicitly enabled by placing a unique non-zero value + in /etc/hostid.
+
=/etc/hostid + (charp)
+
The expected path to locate the system hostid when specified. This value + may be overridden for non-standard configurations.
+
=0 + (uint)
+
Cause a kernel panic on assertion failures. When not enabled, the thread + is halted to facilitate further debugging. +

Set to a non-zero value to enable.

+
+
=0 + (uint)
+
Kick stuck taskq to spawn threads. When writing a non-zero value to it, it + will scan all the taskqs. If any of them have a pending task more than 5 + seconds old, it will kick it to spawn more threads. This can be used if + you find a rare deadlock occurs because one or more taskqs didn't spawn a + thread when it should.
+
=0 + (int)
+
Bind taskq threads to specific CPUs. When enabled all taskq threads will + be distributed evenly across the available CPUs. By default, this behavior + is disabled to allow the Linux scheduler the maximum flexibility to + determine where a thread should run.
+
=1 + (int)
+
Allow dynamic taskqs. When enabled taskqs which set the + + flag will by default create only a single thread. New threads will be + created on demand up to a maximum allowed number to facilitate the + completion of outstanding tasks. Threads which are no longer needed will + be promptly destroyed. By default this behavior is enabled but it can be + disabled to aid performance analysis or troubleshooting.
+
=1 + (int)
+
Allow newly created taskq threads to set a non-default scheduler priority. + When enabled, the priority specified when a taskq is created will be + applied to all threads created by that taskq. When disabled all threads + will use the default Linux kernel thread priority. By default, this + behavior is enabled.
+
=4 + (int)
+
The number of items a taskq worker thread must handle without interruption + before requesting a new worker thread be spawned. This is used to control + how quickly taskqs ramp up the number of threads processing the queue. + Because Linux thread creation and destruction are relatively inexpensive a + small default value has been selected. This means that normally threads + will be created aggressively which is desirable. Increasing this value + will result in a slower thread creation rate which may be preferable for + some configurations.
+
= + (uint)
+
The maximum number of tasks per pending list in each taskq shown in /proc/spl/taskq{,-all}. Write 0 to turn off the limit. The proc file will walk the lists with the lock held, so reading it could cause a lock-up if the list grows too large without limiting the output. "(truncated)" will be shown if the list is larger than the limit.
+
+
+
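As an illustration of how the parameters above are normally handled (the parameter chosen is only an example), current values can be read from sysfs and load-time defaults pinned with a modprobe option; whether a particular parameter may also be changed at runtime depends on its permissions under /sys/module/spl/parameters/:

# cat /sys/module/spl/parameters/spl_taskq_thread_dynamic
# echo "options spl spl_taskq_thread_dynamic=0" > /etc/modprobe.d/spl.conf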
+ + + + + +
August 24, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/4/zfs.4.html b/man/v2.1/4/zfs.4.html new file mode 100644 index 000000000..549db6538 --- /dev/null +++ b/man/v2.1/4/zfs.4.html @@ -0,0 +1,2581 @@ + + + + + + + zfs.4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.4

+
+ + + + + +
ZFS(4)Device Drivers ManualZFS(4)
+
+
+

+

zfstuning of + the ZFS kernel module

+
+
+

+

The ZFS module supports these parameters:

+
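Before the individual parameters, a brief sketch of how they are usually inspected and adjusted (zfs_arc_max is used purely as an example; not every parameter is writable at runtime):

# cat /sys/module/zfs/parameters/zfs_arc_max
# echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
# echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

The first two commands read and change the value on a running system, while the modprobe.d entry makes the setting persistent across module reloads.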
+
=ULONG_MAXB + (ulong)
+
Maximum size in bytes of the dbuf cache. The target size is determined by + the MIN versus + 1/2^dbuf_cache_shift (1/32nd) of + the target ARC size. The behavior of the dbuf cache and its associated + settings can be observed via the + /proc/spl/kstat/zfs/dbufstats kstat.
+
=ULONG_MAXB + (ulong)
+
Maximum size in bytes of the metadata dbuf cache. The target size is + determined by the MIN versus + 1/2^dbuf_metadata_cache_shift + (1/64th) of the target ARC size. The behavior of the metadata dbuf cache + and its associated settings can be observed via the + /proc/spl/kstat/zfs/dbufstats kstat.
+
=10% + (uint)
+
The percentage over dbuf_cache_max_bytes when dbufs must + be evicted directly.
+
=10% + (uint)
+
The percentage below dbuf_cache_max_bytes when the evict + thread stops evicting dbufs.
+
=5 + (int)
+
Set the size of the dbuf cache (dbuf_cache_max_bytes) to + a log2 fraction of the target ARC size.
+
= + (int)
+
Set the size of the dbuf metadata cache + (dbuf_metadata_cache_max_bytes) to a log2 fraction of + the target ARC size.
+
=7 + (128) (int)
+
dnode slots allocated in a single operation as a power of 2. The default + value minimizes lock contention for the bulk operation performed.
+
=134217728B + (128MB) (int)
+
Limit the amount we can prefetch with one call to this amount in bytes. + This helps to limit the amount of memory that can be used by + prefetching.
+
+ (int)
+
Alias for send_holes_without_birth_time.
+
=1|0 + (int)
+
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set + as fast as possible.
+
=200 + (ulong)
+
Min feed interval in milliseconds. Requires + l2arc_feed_again=1 and only + applicable in related situations.
+
=1 + (ulong)
+
Seconds between L2ARC writing.
+
=2 + (ulong)
+
How far through the ARC lists to search for L2ARC cacheable content, + expressed as a multiplier of l2arc_write_max. ARC + persistence across reboots can be achieved with persistent L2ARC by + setting this parameter to 0, allowing the full length of + ARC lists to be searched for cacheable content.
+
=200% + (ulong)
+
Scales l2arc_headroom by this percentage when L2ARC + contents are being successfully compressed before writing. A value of + 100 disables this feature.
+
=0|1 + (int)
+
Controls whether buffers present on special vdevs are eligible for caching into L2ARC. If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
+
=0|1 + (int)
+
Controls whether only MFU metadata and data are cached from ARC into + L2ARC. This may be desired to avoid wasting space on L2ARC when + reading/writing large amounts of data that are not expected to be accessed + more than once. +

The default is off, meaning both MRU and MFU data and metadata + are cached. When turning off this feature, some MRU buffers will still + be present in ARC and eventually cached on L2ARC. + If + l2arc_noprefetch=0, some prefetched + buffers will be cached to L2ARC, and those might later transition to + MRU, in which case the l2arc_mru_asize + arcstat will not be 0.

+

Regardless of l2arc_noprefetch, some MFU + buffers might be evicted from ARC, accessed later on as prefetches and + transition to MRU as prefetches. If accessed again they are counted as + MRU and the l2arc_mru_asize arcstat + will not be 0.

+

The ARC status of L2ARC buffers when they + were first cached in L2ARC can be seen in the + l2arc_mru_asize, + , + and + + arcstats when importing the pool or onlining a cache device if + persistent L2ARC is enabled.

+

The + + arcstat does not take into account if this option is enabled as the + information provided by the + + arcstats can be used to decide if toggling this option is appropriate + for the current workload.

+
+
=% + (int)
+
Percent of ARC size allowed for L2ARC-only headers. Since L2ARC buffers + are not evicted on memory pressure, too many headers on a system with an + irrationally large L2ARC can render it slow or unusable. This parameter + limits L2ARC writes and rebuilds to achieve the target.
+
=0% + (ulong)
+
Trims ahead of the current write size (l2arc_write_max) + on L2ARC devices by this percentage of write size if we have filled the + device. If set to 100 we TRIM twice the space required + to accommodate upcoming writes. A minimum of + + will be trimmed. It also enables TRIM of the whole L2ARC device upon + creation or addition to an existing pool or if the header of the device is + invalid upon importing a pool or onlining a cache device. A value of + 0 disables TRIM on L2ARC altogether and is the default + as it can put significant stress on the underlying storage devices. This + will vary depending of how well the specific device handles these + commands.
+
=1|0 + (int)
+
Do not write buffers to L2ARC if they were prefetched but not used by + applications. In case there are prefetched buffers in L2ARC and this + option is later set, we do not read the prefetched buffers from L2ARC. + Unsetting this option is useful for caching sequential reads from the + disks to L2ARC and serve those reads from L2ARC later on. This may be + beneficial in case the L2ARC device is significantly faster in sequential + reads than the disks of the pool. +

Use 1 to disable and 0 to + enable caching/reading prefetches to/from L2ARC.

+
+
=0|1 + (int)
+
No reads during writes.
+
=8388608B + (8MB) (ulong)
+
Cold L2ARC devices will have l2arc_write_max increased + by this amount while they remain cold.
+
=8388608B + (8MB) (ulong)
+
Max write bytes per interval.
+
=1|0 + (int)
+
Rebuild the L2ARC when importing a pool (persistent L2ARC). This can be + disabled if there are problems importing a pool or attaching an L2ARC + device (e.g. the L2ARC device is slow in reading stored log metadata, or + the metadata has become somehow fragmented/unusable).
+
=1073741824B + (1GB) (ulong)
+
Minimum size of an L2ARC device required in order to write log blocks in it. The log blocks are used upon importing the pool to rebuild the persistent L2ARC.

For L2ARC devices less than 1GB, the amount + of data + () + evicts is significant compared to the amount of restored L2ARC data. In + this case, do not write log blocks in L2ARC in order not to waste + space.

+
+
=1048576B + (1MB) (ulong)
+
Metaslab granularity, in bytes. This is roughly similar to what would be + referred to as the "stripe size" in traditional RAID arrays. In + normal operation, ZFS will try to write this amount of data to each disk + before moving on to the next top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group biasing based on their vdevs' over- or + under-utilization relative to the pool.
+
=BB + (16MB + 1B) (ulong)
+
Make some blocks above a certain size be gang blocks. This option is used + by the test suite to facilitate testing.
+
=9 + (512 B) (int)
+
Default dnode block size as a power of 2.
+
= + (128 KiB) (int)
+
Default dnode indirect block size as a power of 2.
+
=1048576BB + (1MB) (int)
+
When attempting to log an output nvlist of an ioctl in the on-disk + history, the output will not be stored if it is larger than this size (in + bytes). This must be less than + + (64MB). This applies primarily to + () + (cf. zfs-program(8)).
+
=0|1 + (int)
+
Prevent log spacemaps from being destroyed during pool exports and + destroys.
+
=1|0 + (int)
+
Enable/disable segment-based metaslab selection.
+
=2 + (int)
+
When using segment-based metaslab selection, continue allocating from the + active metaslab until this option's worth of buckets have been + exhausted.
+
=0|1 + (int)
+
Load all metaslabs during pool import.
+
=0|1 + (int)
+
Prevent metaslabs from being unloaded.
+
=1|0 + (int)
+
Enable use of the fragmentation metric in computing metaslab weights.
+ +
Maximum distance to search forward from the last offset. Without this + limit, fragmented pools can see + + iterations and + () + becomes the performance limiting factor on high-performance storage. +

With the default setting of + 16MB, we typically see less than 500 + iterations, even with very fragmented + ashift=9 pools. The maximum number + of iterations possible is metaslab_df_max_search / + 2^(ashift+1). With the default setting of 16MB + this is + (with + ashift=9) or + + (with + ashift=).

+
+
=0|1 + (int)
+
If not searching forward (due to metaslab_df_max_search, + , + or + ), + this tunable controls which segment is used. If set, we will use the + largest free segment. If unset, we will use a segment of at least the + requested size.
+
=s + (1h) (ulong)
+
When we unload a metaslab, we cache the size of the largest free chunk. We + use that cached size to determine whether or not to load a metaslab for a + given allocation. As more frees accumulate in that metaslab while it's + unloaded, the cached max size becomes less and less accurate. After a + number of seconds controlled by this tunable, we stop considering the + cached max size and start considering only the histogram instead.
+
=25% + (int)
+
When we are loading a new metaslab, we check the amount of memory being + used to store metaslab range trees. If it is over a threshold, we attempt + to unload the least recently used metaslab to prevent the system from + clogging all of its memory with range trees. This tunable sets the + percentage of total system memory that is the threshold.
+
=0|1 + (int)
+
+
    +
  • If unset, we will first try normal allocation.
  • +
  • If that fails then we will do a gang allocation.
  • +
  • If that fails then we will do a "try hard" gang + allocation.
  • +
  • If that fails then we will have a multi-layer gang block.
  • +
+

+
    +
  • If set, we will first try normal allocation.
  • +
  • If that fails then we will do a "try hard" allocation.
  • +
  • If that fails we will do a gang allocation.
  • +
  • If that fails we will do a "try hard" gang allocation.
  • +
  • If that fails then we will have a multi-layer gang block.
  • +
+
+
=100 + (int)
+
When not trying hard, we only consider this number of the best metaslabs. + This improves performance, especially when there are many metaslabs per + vdev and the allocation can't actually be satisfied (so we would otherwise + iterate all metaslabs).
+
=200 + (int)
+
When a vdev is added, target this number of metaslabs per top-level + vdev.
+
= + (512MB) (int)
+
Default limit for metaslab size.
+
= + (ulong)
+
Maximum ashift used when optimizing for logical -> physical sector size + on new top-level vdevs. May be increased up to + + (16), but this may negatively impact pool space efficiency.
+
= + (9) (ulong)
+
Minimum ashift used when creating new top-level vdevs.
+
=16 + (int)
+
Minimum number of metaslabs to create in a top-level vdev.
+
=0|1 + (int)
+
Skip label validation steps during pool import. Changing is not + recommended unless you know what you're doing and are recovering a damaged + label.
+
=131072 + (128k) (int)
+
Practical upper limit of total metaslabs per top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group preloading.
+
=1|0 + (int)
+
Give more weight to metaslabs with lower LBAs, assuming they have greater + bandwidth, as is typically the case on a modern constant angular velocity + disk drive.
+
=32 + (int)
+
After a metaslab is used, we keep it loaded for this many TXGs, to attempt + to reduce unnecessary reloading. Note that both this many TXGs and + metaslab_unload_delay_ms milliseconds must pass before + unloading will occur.
+
=600000ms + (10min) (int)
+
After a metaslab is used, we keep it loaded for this many milliseconds, to + attempt to reduce unnecessary reloading. Note, that both this many + milliseconds and metaslab_unload_delay TXGs must pass + before unloading will occur.
+
=3 + (int)
+
Maximum reference holders being tracked when reference_tracking_enable is + active.
+
=0|1 + (int)
+
Track reference holders to + + objects (debug builds only).
+
=1|0 + (int)
+
When set, the hole_birth optimization will not be used, + and all holes will always be sent during a zfs + send. This is useful if you suspect your datasets + are affected by a bug in hole_birth.
+
=/etc/zfs/zpool.cache + (charp)
+
SPA config file.
+
= + (int)
+
Multiplication factor used to estimate actual disk consumption from the + size of data being written. The default value is a worst case estimate, + but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits.
+
=0|1 + (int)
+
Whether to print the vdev tree in the debugging message buffer during pool + import.
+
=1|0 + (int)
+
Whether to traverse data blocks during an "extreme rewind" + (-X) import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal skips non-metadata blocks. It can be toggled once the import + has started to stop or start the traversal of non-metadata blocks.

+
+
=1|0 + (int)
+
Whether to traverse blocks during an "extreme rewind" + (-X) pool import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal is not performed. It can be toggled once the import has + started to stop or start the traversal.

+
+
=4 + (1/16th) (int)
+
Sets the maximum number of bytes to consume during pool import to the log2 + fraction of the target ARC size.
+
=5 + (1/32nd) (int)
+
Normally, we don't allow the last + + () + of space in the pool to be consumed. This ensures that we don't run the + pool completely out of space, due to unaccounted changes (e.g. to the + MOS). It also limits the worst-case time to allocate space. If we have + less than this amount of free space, most ZPL operations (e.g. write, + create) will return + .
+
=32768B + (32kB) (int)
+
During top-level vdev removal, chunks of data are copied from the vdev + which may include free space in order to trade bandwidth for IOPS. This + parameter determines the maximum span of free space, in bytes, which will + be included as "unnecessary" data in a chunk of copied data. +

The default value here was chosen to align with + zfs_vdev_read_gap_limit, which is a similar concept + when doing regular reads (but there's no reason it has to be the + same).

+
+
=9 + (512B) (ulong)
+
Logical ashift for file-based devices.
+
=9 + (512B) (ulong)
+
Physical ashift for file-based devices.
+
=1|0 + (int)
+
If set, when we start iterating over a ZAP object, prefetch the entire + object (all leaf blocks). However, this is limited by + dmu_prefetch_max.
+
=1048576B + (1MB) (ulong)
+
If prefetching is enabled, disable prefetching for reads larger than this + size.
+
=4194304B + (4 MiB) (uint)
+
Min bytes to prefetch per stream. Prefetch distance starts from the demand access size and quickly grows to this value, doubling on each hit. After that it may grow further by 1/8 per hit, but only if some prefetches since the last time haven't completed in time to satisfy the demand request, i.e. the prefetch depth didn't cover the read latency or the pool got saturated.
+
=67108864B + (64 MiB) (uint)
+
Max bytes to prefetch per stream.
+
=67108864B + (64MB) (uint)
+
Max bytes to prefetch indirects for per stream.
+
=8 + (uint)
+
Max number of streams per zfetch (prefetch streams per file).
+
=1 + (uint)
+
Min time before inactive prefetch stream can be reclaimed
+
=2 + (uint)
+
Max time before inactive prefetch stream can be deleted
+
=1|0 + (int)
+
Controls whether the ARC may use scatter/gather lists for its buffers; when disabled, all allocations are forced to be linear in kernel memory. Disabling can improve performance in some code paths at the expense of fragmented kernel memory.
+
= + (uint)
+
Maximum number of consecutive memory pages allocated in a single block for + scatter/gather lists. +

The value of + + depends on kernel configuration.

+
+
=B + (1.5kB) (uint)
+
This is the minimum allocation size that will use scatter (page-based) + ABDs. Smaller allocations will use linear ABDs.
+
=0B + (ulong)
+
When the number of bytes consumed by dnodes in the ARC exceeds this number of bytes, try to unpin some of it in response to demand for non-metadata. This value acts as a ceiling to the amount of dnode metadata, and defaults to 0, which indicates that the limit is instead derived from zfs_arc_dnode_limit_percent of the ARC meta buffers that may be used for dnodes.

Also see zfs_arc_meta_prune which serves a + similar purpose but is used when the amount of metadata in the ARC + exceeds zfs_arc_meta_limit rather than in response to + overall demand for non-metadata.

+
+
=10% + (ulong)
+
Percentage that can be consumed by dnodes of ARC meta buffers. +

See also zfs_arc_dnode_limit, which serves a + similar purpose but has a higher priority if nonzero.

+
+
=10% + (ulong)
+
Percentage of ARC dnodes to try to scan in response to demand for + non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit.
+
=B + (8kB) (int)
+
The ARC's buffer hash table is sized based on the assumption of an average + block size of this value. This works out to roughly 1MB of hash table per + 1GB of physical memory with 8-byte pointers. For configurations with a + known larger average block size, this value can be increased to reduce the + memory footprint.
+
=200% + (int)
+
When + (), + () + waits for this percent of the requested amount of data to be evicted. For + example, by default, for every + that's + evicted, + of it + may be "reused" by a new allocation. Since this is above + 100%, it ensures that progress is made towards getting + arc_size under + arc_c. Since this is finite, it ensures that allocations + can still happen, even during the potentially long time that + arc_size is more than + arc_c.
+
=10 + (int)
+
Number of ARC headers to evict per sub-list before proceeding to another sub-list. This batch-style operation prevents entire sub-lists from being evicted at once but comes at a cost of additional unlocking and locking.
+
=0s + (int)
+
If set to a non zero value, it will replace the + arc_grow_retry value with this value. The + arc_grow_retry value (default + 5s) is the number of seconds the ARC will wait before + trying to resume growth after a memory pressure event.
+
=10% + (int)
+
Throttle I/O when free system memory drops below this percentage of total + system memory. Setting this value to 0 will disable the + throttle.
+
=0B + (ulong)
+
Max size of ARC in bytes. If 0, then the max size of ARC + is determined by the amount of system memory installed. Under Linux, half + of system memory will be used as the limit. Under + FreeBSD, the larger of + and + will be used as the limit. This value must be at + least 67108864B (64MB). +

This value can be changed dynamically, with some caveats. It + cannot be set back to 0 while running, and reducing it + below the current ARC size will not cause the ARC to shrink without + memory pressure to induce shrinking.

+
+
=4096 + (ulong)
+
The number of restart passes to make while scanning the ARC attempting to free buffers in order to stay below the zfs_arc_meta_limit. This value should not need to be tuned but is available to facilitate performance analysis.
+
=0B + (ulong)
+
The maximum allowed size in bytes that metadata buffers are allowed to + consume in the ARC. When this limit is reached, metadata buffers will be + reclaimed, even if the overall + + has not been reached. It defaults to 0, which indicates + that a percentage based on zfs_arc_meta_limit_percent of + the ARC may be used for metadata. +

This value may be changed dynamically, except that it must be set to an explicit value (it cannot be set back to 0).

+
+
=75% + (ulong)
+
Percentage of ARC buffers that can be used for metadata. +

See also zfs_arc_meta_limit, which serves a + similar purpose but has a higher priority if nonzero.

+
+
=0B + (ulong)
+
The minimum allowed size in bytes that metadata buffers may consume in the + ARC.
+
=10000 + (int)
+
The number of dentries and inodes to be scanned looking for entries which can be dropped. This may be required when the ARC reaches the zfs_arc_meta_limit because dentries and inodes can pin buffers in the ARC. Increasing this value will cause the dentry and inode caches to be pruned more aggressively. Setting this value to 0 will disable pruning the inode and dentry caches.
+
=1|0 + (int)
+
Define the strategy for ARC metadata buffer eviction (meta reclaim + strategy): +
+
+
+ (META_ONLY)
+
evict only the ARC metadata buffers
+
+ (BALANCED)
+
additional data buffers may be evicted if required to evict the + required number of metadata buffers.
+
+
+
+
=0B + (ulong)
+
Min size of ARC in bytes. If set to + 0, + + will default to consuming the larger of 32MB + or + .
+
=0ms(≡1s) + (int)
+
Minimum time prefetched blocks are locked in the ARC.
+
=0ms(≡6s) + (int)
+
Minimum time "prescient prefetched" blocks are locked in the + ARC. These blocks are meant to be prefetched fairly aggressively ahead of + the code that may use them.
+
=1 + (int)
+
Number of arc_prune threads. FreeBSD does not need + more than one. Linux may theoretically use one per mount point up to + number of CPUs, but that was not proven to be useful.
+
=0 + (int)
+
Number of missing top-level vdevs which will be allowed during pool import + (only in read-only mode).
+
= + 0 (ulong)
+
Maximum size in bytes allowed to be passed as + + for ioctls on /dev/zfs. This prevents a user from + causing the kernel to allocate an excessive amount of memory. When the + limit is exceeded, the ioctl fails with + + and a description of the error is sent to the + zfs-dbgmsg log. This parameter should not need to + be touched under normal circumstances. If 0, equivalent + to a quarter of the user-wired memory limit under + FreeBSD and to 134217728B + (128MB) under Linux.
+
=0 + (int)
+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and metadata objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state, and also applies to other uses of the multilist data structure.

If 0, equivalent to the greater of the + number of online CPUs and 4.

+
+
=8 + (int)
+
The ARC size is considered to be overflowing if it exceeds the current ARC + target size (arc_c) by thresholds determined by this + parameter. Exceeding by (arc_c >> + zfs_arc_overflow_shift) * 0.5 starts ARC reclamation + process. If that appears insufficient, exceeding by (arc_c + >> zfs_arc_overflow_shift) * 1.5 blocks new + buffer allocation until the reclaim thread catches up. Started reclamation + process continues till ARC size returns below the target size. +

The default value of 8 causes the + ARC to start reclamation if it exceeds the target size by + of the + target size, and block allocations by + .

+
+
=0 + (int)
+
If nonzero, this will update arc_p_min_shift (default 4) with the new value. arc_p_min_shift is used as a shift of arc_c when calculating the minimum arc_p size.
+
=1|0 + (int)
+
Disable arc_p adapt dampener, which reduces the maximum + single adjustment to arc_p.
+
=0 + (int)
+
If nonzero, this will update + + (default 7) with the new value.
+
=0% + (off) (uint)
+
Percent of pagecache to reclaim ARC to. +

This tunable allows the ZFS ARC to play + more nicely with the kernel's LRU pagecache. It can guarantee that the + ARC size won't collapse under scanning pressure on the pagecache, yet + still allows the ARC to be reclaimed down to + zfs_arc_min if necessary. This value is specified as + percent of pagecache size (as measured by + ), + where that percent may exceed 100. This only operates + during memory pressure/reclaim.

+
+
=10000 + (int)
+
This is a limit on how many pages the ARC shrinker makes available for + eviction in response to one page allocation attempt. Note that in + practice, the kernel's shrinker can ask us to evict up to about four times + this for one allocation attempt. +

The default limit of 10000 (in + practice, + per allocation attempt with 4kB pages) limits + the amount of time spent attempting to reclaim ARC memory to less than + 100ms per allocation attempt, even with a small average compressed block + size of ~8kB.

+

The parameter can be set to 0 (zero) to disable the limit, and + only applies on Linux.

+
+
=0B + (ulong)
+
The target number of bytes the ARC should leave as free memory on the + system. If zero, equivalent to the bigger of + + and + .
+
=1|0 + (int)
+
Disable pool import at module load by ignoring the cache file + (spa_config_path).
+
=20/s + (uint)
+
Rate limit checksum events to this many per second. Note that this should + not be set below the ZED thresholds (currently 10 checksums over 10 + seconds) or else the daemon may not trigger any action.
+
=5% + (int)
+
This controls the amount of time that a ZIL block (lwb) will remain + "open" when it isn't "full", and it has a thread + waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly + impacting the latency of each individual transaction record (itx).
+
=0ms + (int)
+
Vdev indirection layer (used for device removal) sleeps for this many + milliseconds during mapping generation. Intended for use with the test + suite to throttle vdev removal speed.
+
=25% + (int)
+
Minimum percent of obsolete bytes in vdev mapping required to attempt to + condense (see zfs_condense_indirect_vdevs_enable). + Intended for use with the test suite to facilitate triggering condensing + as needed.
+
=1|0 + (int)
+
Enable condensing indirect vdev mappings. When set, attempt to condense + indirect vdev mappings if the mapping uses more than + zfs_condense_min_mapping_bytes bytes of memory and if + the obsolete space map object uses more than + zfs_condense_max_obsolete_bytes bytes on-disk. The + condensing process is an attempt to save memory by removing obsolete + mappings.
+
=1073741824B + (1GB) (ulong)
+
Only attempt to condense indirect vdev mappings if the on-disk size of the + obsolete space map object is greater than this number of bytes (see + zfs_condense_indirect_vdevs_enable).
+
=131072B + (128kB) (ulong)
+
Minimum size vdev mapping to attempt to condense (see + zfs_condense_indirect_vdevs_enable).
+
=1|0 + (int)
+
Internally ZFS keeps a small log to facilitate debugging. The log is + enabled by default, and can be disabled by unsetting this option. The + contents of the log can be accessed by reading + /proc/spl/kstat/zfs/dbgmsg. Writing + 0 to the file clears the log. +

This setting does not influence debug prints due to + zfs_flags.

+
+
=4194304B + (4MB) (int)
+
Maximum size of the internal ZFS debug log.
+
=0 + (int)
+
Historically used for controlling what reporting was available under + /proc/spl/kstat/zfs. No effect.
+
=1|0 + (int)
+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms, or when an individual I/O + operation takes longer than zfs_deadman_ziotime_ms, then + the operation is considered to be "hung". If + zfs_deadman_enabled is set, then the deadman behavior is + invoked as described by zfs_deadman_failmode. By + default, the deadman is enabled and set to wait which + results in "hung" I/Os only being logged. The deadman is + automatically disabled when a pool gets suspended.
+
=wait + (charp)
+
Controls the failure behavior when the deadman detects a "hung" + I/O operation. Valid values are: +
+
+
+
Wait for a "hung" operation to complete. For each + "hung" operation a "deadman" event will be posted + describing that operation.
+
+
Attempt to recover from a "hung" operation by re-dispatching + it to the I/O pipeline if possible.
+
+
Panic the system. This can be used to facilitate automatic fail-over + to a properly configured fail-over partner.
+
+
+
+
=ms + (1min) (int)
+
Check time in milliseconds. This defines the frequency at which we check + for hung I/O requests and potentially invoke the + zfs_deadman_failmode behavior.
+
=600000ms + (10min) (ulong)
+
Interval in milliseconds after which the deadman is triggered and also the + interval after which a pool sync operation is considered to be + "hung". Once this limit is exceeded the deadman will be invoked + every zfs_deadman_checktime_ms milliseconds until the + pool sync completes.
+
=ms + (5min) (ulong)
+
Interval in milliseconds after which the deadman is triggered and an + individual I/O operation is considered to be "hung". As long as + the operation remains "hung", the deadman will be invoked every + zfs_deadman_checktime_ms milliseconds until the + operation completes.
+
=0|1 + (int)
+
Enable prefetching dedup-ed blocks which are going to be freed.
+
=60% + (int)
+
Start to delay each transaction once there is this amount of dirty data, + expressed as a percentage of zfs_dirty_data_max. This + value should be at least + zfs_vdev_async_write_active_max_dirty_percent. + See + ZFS TRANSACTION + DELAY.
+
=500000 + (int)
+
This controls how quickly the transaction delay approaches infinity. + Larger values cause longer delays for a given amount of dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will + smoothly handle between ten times and a tenth of this number. + See + ZFS TRANSACTION + DELAY.

+

zfs_delay_scale * zfs_dirty_data_max must be smaller than 2^64.

+
+
=0|1 + (int)
+
Disables requirement for IVset GUIDs to be present and match when doing a + raw receive of encrypted datasets. Intended for users whose pools were + created with OpenZFS pre-release versions and now have compatibility + issues.
+
= + (4*10^8) (ulong)
+
Maximum number of uses of a single salt value before generating a new one + for encrypted datasets. The default value is also the maximum.
+
=64 + (uint)
+
Size of the znode hashtable used for holds. +

Due to the need to hold locks on objects that may not exist + yet, kernel mutexes are not created per-object and instead a hashtable + is used where collisions will result in objects waiting when there is + not actually contention on the same object.

+
+
=20/s + (int)
+
Rate limit delay and deadman zevents (which report slow I/Os) to this many + per second.
+
=1073741824B + (1GB) (ulong)
+
Upper-bound limit for unflushed metadata changes to be held by the log + spacemap in memory, in bytes.
+
=1000ppm + (0.1%) (ulong)
+
Part of overall system memory that ZFS allows to be used for unflushed + metadata changes by the log spacemap, in millionths.
+
=131072 + (128k) (ulong)
+
Describes the maximum number of log spacemap blocks allowed for each pool. + The default value means that the space in all the log spacemaps can add up + to no more than 131072 blocks (which means + of + logical space before compression and ditto blocks, assuming that blocksize + is 128kB). +

This tunable is important because it involves a trade-off between import time after an unclean export and the frequency of flushing metaslabs. The higher this number is, the more log blocks we allow when the pool is active, which means that we flush metaslabs less often and thus decrease the number of I/Os for spacemap updates per TXG. At the same time though, that means that in the event of an unclean export, there will be more log spacemap blocks for us to read, inducing overhead in the import time of the pool. The lower the number, the more often flushing occurs, destroying log blocks quicker as they become obsolete faster, which leaves fewer blocks to be read during import time after a crash.

+

Each log spacemap block existing during pool import leads to + approximately one extra logical I/O issued. This is the reason why this + tunable is exposed in terms of blocks rather than space used.

+
+
=1000 + (ulong)
+
If the number of metaslabs is small and our incoming rate is high, we + could get into a situation that we are flushing all our metaslabs every + TXG. Thus we always allow at least this many log blocks.
+
=% + (ulong)
+
Tunable used to determine the number of blocks that can be used for the + spacemap log, expressed as a percentage of the total number of unflushed + metaslabs in the pool.
+
=1000 + (ulong)
+
Tunable limiting maximum time in TXGs any metaslab may remain unflushed. + It effectively limits maximum number of unflushed per-TXG spacemap logs + that need to be read after unclean pool export.
+ +
When enabled, files will not be asynchronously removed from the list of + pending unlinks and the space they consume will be leaked. Once this + option has been disabled and the dataset is remounted, the pending unlinks + will be processed and the freed space returned to the pool. This option is + used by the test suite.
+
= + (ulong)
+
This is used to define a large file for the purposes of deletion. Files containing more than zfs_delete_blocks will be deleted asynchronously, while smaller files are deleted synchronously. Decreasing this value will reduce the time spent in an unlink(2) system call, at the expense of a longer delay before the freed space is available.
+
= + (int)
+
Determines the dirty space limit in bytes. Once this limit is exceeded, + new writes are halted until space frees up. This parameter takes + precedence over zfs_dirty_data_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to + , + capped at zfs_dirty_data_max_max.

+
+
= + (int)
+
Maximum allowable value of zfs_dirty_data_max, expressed + in bytes. This limit is only enforced at module load time, and will be + ignored if zfs_dirty_data_max is later changed. This + parameter takes precedence over + zfs_dirty_data_max_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to + ,

+
+
=25% + (int)
+
Maximum allowable value of zfs_dirty_data_max, expressed + as a percentage of physical RAM. This limit is only enforced at module + load time, and will be ignored if zfs_dirty_data_max is + later changed. The parameter zfs_dirty_data_max_max + takes precedence over this one. See + ZFS TRANSACTION + DELAY.
+
=10% + (int)
+
Determines the dirty space limit, expressed as a percentage of all memory. + Once this limit is exceeded, new writes are halted until space frees up. + The parameter zfs_dirty_data_max takes precedence over + this one. See + ZFS TRANSACTION DELAY. +

Subject to zfs_dirty_data_max_max.

+
+
=20% + (int)
+
Start syncing out a transaction group if there's at least this much dirty + data (as a percentage of zfs_dirty_data_max). This + should be less than + zfs_vdev_async_write_active_min_dirty_percent.
+
= + (int)
+
The upper limit of write-transaction zil log data size in bytes. Write operations are throttled when approaching the limit until log data is cleared out after transaction group sync. Because of some overhead, it should be set to at least 2 times the size of zfs_dirty_data_max to prevent harming normal write throughput. It also should be smaller than the size of the slog device if slog is present.

Defaults to +

+
+
=% + (uint)
+
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be + preallocated for a file in order to guarantee that later writes will not + run out of space. Instead, fallocate(2) space + preallocation only checks that sufficient space is currently available in + the pool or the user's project quota allocation, and then creates a sparse + file of the requested size. The requested space is multiplied by + zfs_fallocate_reserve_percent to allow additional space + for indirect blocks and other internal metadata. Setting this to + 0 disables support for fallocate(2) + and causes it to return + .
+
=fastest + (string)
+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, + scalar, + , + , + , + , + , + and + . + All except fastest and + scalar require instruction set extensions to be + available, and will only appear if ZFS detects that they are present at + runtime. If multiple implementations of fletcher 4 are available, the + fastest will be chosen using a micro benchmark. + Selecting scalar results in the original CPU-based + calculation being used. Selecting any option other than + fastest or + scalar results in vector instructions from the + respective CPU instruction set being used.

+
+
=1|0 + (int)
+
Enable/disable the processing of the free_bpobj object.
+
=ULONG_MAX + (unlimited) (ulong)
+
Maximum number of blocks freed in a single TXG.
+
= + (10^5) (ulong)
+
Maximum number of dedup blocks freed in a single TXG.
+
=0 + (ulong)
+
If nonzero, override the record size calculation for zfs send estimates.
+
=3 + (int)
+
Maximum asynchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum asynchronous read I/O operations active to each device. See ZFS I/O SCHEDULER.
+
=60% + (int)
+
When the pool has more than this much dirty data, use + zfs_vdev_async_write_max_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
+
=30% + (int)
+
When the pool has less than this much dirty data, use + zfs_vdev_async_write_min_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
+
=30 + (int)
+
Maximum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (int)
+
Minimum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER. +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of + 2 was chosen as a compromise. A value of + 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+
+
=1 + (int)
+
Maximum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1000 + (int)
+
The maximum number of I/O operations active to each device. Ideally, this + will be at least the sum of each queue's max_active. + See ZFS + I/O SCHEDULER.
+
=1000 + (uint)
+
Timeout value to wait before determining a device is missing during + import. This is helpful for transient missing paths due to links being + briefly removed and recreated in response to udev events.
+
=3 + (int)
+
Maximum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (int)
+
Maximum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (int)
+
Maximum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (int)
+
Maximum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (int)
+
Minimum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (int)
+
Maximum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (int)
+
Minimum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (int)
+
Maximum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=5 + (int)
+
For non-interactive I/O (scrub, resilver, removal, initialize and rebuild), the number of concurrently-active I/O operations is limited to , unless the vdev is "idle". When there are no interactive I/O operations active (synchronous or otherwise), and zfs_vdev_nia_delay operations have completed since the last interactive operation, then the vdev is considered to be "idle", and the number of concurrently-active non-interactive operations is increased to zfs_*_max_active. See ZFS I/O SCHEDULER.
+
=5 + (int)
+
Some HDDs tend to prioritize sequential I/O so strongly, that concurrent + random I/O latency reaches several seconds. On some HDDs this happens even + if sequential I/O operations are submitted one at a time, and so setting + zfs_*_max_active= 1 does not help. To + prevent non-interactive I/O, like scrub, from monopolizing the device, no + more than zfs_vdev_nia_credit operations can be sent + while there are outstanding incomplete interactive operations. This + enforced wait ensures the HDD services the interactive I/O within a + reasonable amount of time. See + ZFS I/O SCHEDULER.
+
=1000% + (int)
+
Maximum number of queued allocations per top-level vdev expressed as a + percentage of zfs_vdev_async_write_max_active, which + allows the system to detect devices that are more capable of handling + allocations and to allocate more blocks to those devices. This allows for + dynamic allocation distribution when devices are imbalanced, as fuller + devices will tend to be slower than empty devices. +

Also see zio_dva_throttle_enabled.

+
+
=s + (int)
+
Time before expiring .zfs/snapshot.
+
=0|1 + (int)
+
Allow the creation, removal, or renaming of entries in the + + directory to cause the creation, destruction, or renaming of snapshots. + When enabled, this functionality works both locally and over NFS exports + which have the + + option set.
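When this behaviour is enabled (on Linux the tunable is assumed to be named zfs_admin_snapshot; the dataset path below is hypothetical), snapshots can be managed with plain directory operations:
# Allow snapshot administration through the .zfs/snapshot directory
echo 1 > /sys/module/zfs/parameters/zfs_admin_snapshot
# Create, then destroy, a snapshot of a hypothetical dataset
mkdir /tank/data/.zfs/snapshot/before-upgrade
rmdir /tank/data/.zfs/snapshot/before-upgrade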
+
=0 + (int)
+
Set additional debugging flags. The following flags may be bitwise-ored together:
Value   Symbolic Name                Description
1       ZFS_DEBUG_DPRINTF            Enable dprintf entries in the debug log.
*2      ZFS_DEBUG_DBUF_VERIFY        Enable extra dbuf verifications.
*4      ZFS_DEBUG_DNODE_VERIFY       Enable extra dnode verifications.
8       ZFS_DEBUG_SNAPNAMES          Enable snapshot name verification.
16      ZFS_DEBUG_MODIFY             Check for illegally modified ARC buffers.
64      ZFS_DEBUG_ZIO_FREE           Enable verification of block frees.
128     ZFS_DEBUG_HISTOGRAM_VERIFY   Enable extra spacemap histogram verifications.
256     ZFS_DEBUG_METASLAB_VERIFY    Verify space accounting on disk matches in-memory range_trees.
512     ZFS_DEBUG_SET_ERROR          Enable SET_ERROR and dprintf entries in the debug log.
1024    ZFS_DEBUG_INDIRECT_REMAP     Verify split blocks created by device removal.
2048    ZFS_DEBUG_TRIM               Verify TRIM ranges are always within the allocatable range tree.
4096    ZFS_DEBUG_LOG_SPACEMAP       Verify that the log summary is consistent with the spacemap log
                                     and enable zfs_dbgmsgs for metaslab loading and flushing.
+ * Requires debug build.
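For example, flags are combined by OR-ing their values; the sketch below assumes the tunable is exposed as zfs_flags under /sys/module/zfs/parameters on Linux:
# Enable ZFS_DEBUG_DPRINTF (1) and ZFS_DEBUG_SET_ERROR (512): 1 | 512 = 513
echo $(( 1 | 512 )) > /sys/module/zfs/parameters/zfs_flags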
+
=0 + (uint)
+
Enables btree verification. The following settings are cumulative:
Value   Description
1       Verify height.
2       Verify pointers from children to parent.
3       Verify element counts.
4       Verify element order. (expensive)
*5      Verify unused memory is poisoned. (expensive)
+ * Requires debug build.
+
=0|1 + (int)
+
If destroy encounters an EIO while reading metadata + (e.g. indirect blocks), space referenced by the missing metadata can not + be freed. Normally this causes the background destroy to become + "stalled", as it is unable to make forward progress. While in + this stalled state, all remaining space to free from the + error-encountering filesystem is "temporarily leaked". Set this + flag to cause it to ignore the EIO, permanently leak the + space from indirect blocks that can not be read, and continue to free + everything else that it can. +

The default "stalling" behavior is useful if the + storage partially fails (i.e. some but not all I/O operations fail), and + then later recovers. In this case, we will be able to continue pool + operations while it is partially failed, and when it recovers, we can + continue to free the space, with no leaks. Note, however, that this case + is actually fairly rare.

+

Typically pools either

+
    +
  1. fail completely (but perhaps temporarily, e.g. due to a top-level vdev going offline), or
  2. have localized, permanent errors (e.g. disk returns the wrong data due to bit flip or firmware bug).
+ In the former case, this setting does not matter because the pool will be + suspended and the sync thread will not be able to make forward progress + regardless. In the latter, because the error is permanent, the best we can + do is leak the minimum amount of space, which is what setting this flag + will do. It is therefore reasonable for this flag to normally be set, but + we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.
+
=1000ms + (1s) (int)
+
During a zfs destroy + operation using the + + feature, a minimum of this much time will be spent working on freeing + blocks per TXG.
+
=500ms + (int)
+
Similar to zfs_free_min_time_ms, but for cleanup of old + indirection records for removed vdevs.
+
=32768B + (32kB) (long)
+
Largest data block to write to the ZIL. Larger blocks will be treated as + if the dataset being written to had the + = + property set.
+
= + (0xDEADBEEFDEADBEEE) (ulong)
+
Pattern written to vdev free space by + zpool-initialize(8).
+
=1048576B + (1MB) (ulong)
+
Size of writes used by zpool-initialize(8). This option + is used by the test suite.
+
=500000 + (5*10^5) (ulong)
+
The threshold size (in block pointers) at which we create a new + sub-livelist. Larger sublists are more costly from a memory perspective + but the fewer sublists there are, the lower the cost of insertion.
+
=75% + (int)
+
If the amount of shared space between a snapshot and its clone drops below this threshold, the clone turns off the livelist and reverts to the old deletion method. This is in place because livelists no longer give us a benefit once a clone has been overwritten enough.
+
=0 + (int)
+
Incremented each time an extra ALLOC blkptr is added to a livelist entry + while it is being condensed. This option is used by the test suite to + track race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the synctask - + spa_livelist_condense_sync(). This option is used + by the test suite to trigger race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the open context condensing work in + spa_livelist_condense_cb(). This option is used by + the test suite to trigger race conditions.
+
= + (10^8) (ulong)
+
The maximum execution time limit that can be set for a ZFS channel + program, specified as a number of Lua instructions.
+
= + (100MB) (ulong)
+
The maximum memory limit that can be set for a ZFS channel program, + specified in bytes.
+
= + (int)
+
The maximum depth of nested datasets. This value can be tuned temporarily + to fix existing datasets that exceed the predefined limit.
+
=5 + (ulong)
+
The number of past TXGs that the flushing algorithm of the log spacemap + feature uses to estimate incoming log blocks.
+
=10 + (ulong)
+
Maximum number of rows allowed in the summary of the spacemap log.
+
=1048576 + (1MB) (int)
+
We currently support block sizes from + + to 16MB. The benefits of larger + blocks, and thus larger I/O, need to be weighed against the cost of COWing + a giant block to modify one byte. Additionally, very large blocks can have + an impact on I/O latency, and also potentially on the memory allocator. + Therefore, we do not allow the recordsize to be set larger than this + tunable. Larger blocks can be created by changing it, and pools with + larger blocks can always be imported and used, regardless of this + setting.
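As a sketch (the pool and dataset names are placeholders, and the tunable is assumed to be exposed as zfs_max_recordsize on Linux), raising this limit lets a correspondingly larger recordsize be set:
# Allow records up to 16 MiB, then use them on one dataset
# (the pool must have the large_blocks feature enabled)
echo 16777216 > /sys/module/zfs/parameters/zfs_max_recordsize
zfs set recordsize=16M tank/bigfiles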
+
=0|1 + (int)
+
Allow datasets received with redacted send/receive to be mounted. Normally + disabled because these datasets may be missing key data.
+
=1 + (ulong)
+
Minimum number of metaslabs to flush per dirty TXG.
+
=% + (int)
+
Allow metaslabs to keep their active state as long as their fragmentation + percentage is no more than this value. An active metaslab that exceeds + this threshold will no longer keep its active status allowing better + metaslabs to be selected.
+
=% + (int)
+
Metaslab groups are considered eligible for allocations if their + fragmentation metric (measured as a percentage) is less than or equal to + this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also + crossed this threshold.
+
=0% + (int)
+
Defines a threshold at which metaslab groups should be eligible for + allocations. The value is expressed as a percentage of free space beyond + which a metaslab group is always eligible for allocations. If a metaslab + group's free space is less than or equal to the threshold, the allocator + will avoid allocating to that group unless all groups in the pool have + reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of + 0 disables the feature and causes all metaslab groups to + be eligible for allocations. +

This parameter allows one to deal + with pools having heavily imbalanced vdevs such as would be the case + when a new vdev has been added. Setting the threshold to a non-zero + percentage will stop allocations from being made to vdevs that aren't + filled to the specified percentage and allow lesser filled vdevs to + acquire more allocations than they otherwise would under the old + + facility.

+
+
=1|0 + (int)
+
If enabled, ZFS will place DDT data into the special allocation + class.
+
=1|0 + (int)
+
If enabled, ZFS will place user data indirect blocks into the special + allocation class.
+
=0 + (int)
+
Historical statistics for this many latest multihost updates will be + available in + /proc/spl/kstat/zfs/pool/multihost.
+
=1000ms + (1s) (ulong)
+
Used to control the frequency of multihost writes which are performed when + the + + pool property is on. This is one of the factors used to determine the + length of the activity check during import. +

The multihost write period is + zfs_multihost_interval / leaf-vdevs. On average a + multihost write will be issued for each leaf vdev every + zfs_multihost_interval milliseconds. In practice, the + observed period can vary with the I/O load and this observed value is + the delay which is stored in the uberblock.

+
+
=20 + (uint)
+
Used to control the duration of the activity test on import. Smaller + values of zfs_multihost_import_intervals will reduce the + import time but increase the risk of failing to detect an active pool. The + total activity check time is never allowed to drop below one second. +

On import the activity check waits a minimum amount + of time determined by zfs_multihost_interval * + zfs_multihost_import_intervals, or the same product computed on the + host which last had the pool imported, whichever is greater. The + activity check time may be further extended if the value of MMP delay + found in the best uberblock indicates actual multihost updates happened + at longer intervals than zfs_multihost_interval. A + minimum of + is + enforced.
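For instance, with the default interval of 1000 ms and 20 import intervals, the minimum activity-check wait on import works out as follows (an illustrative calculation only):
# zfs_multihost_interval * zfs_multihost_import_intervals, in milliseconds
echo $(( 1000 * 20 ))   # 20000 ms, i.e. at least 20 seconds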

+

0 is equivalent to + 1.

+
+
=10 + (uint)
+
Controls the behavior of the pool when multihost write failures or delays + are detected. +

When 0, multihost write failures or delays are ignored. The failures will still be reported to the ZED, which, depending on its configuration, may take action such as suspending the pool or offlining a device.

+

Otherwise, the pool will be suspended if + zfs_multihost_fail_intervals * zfs_multihost_interval + milliseconds pass without a successful MMP write. This guarantees the + activity test will see MMP writes if the pool is imported. + 1 is equivalent to + 2; this is necessary to prevent the pool from being + suspended due to normal, small I/O latency variations.

+
+
=0|1 + (int)
+
Set to disable scrub I/O. This results in scrubs not actually scrubbing + data and simply doing a metadata crawl of the pool instead.
+
=0|1 + (int)
+
Set to disable block prefetching for scrubs.
+
=0|1 + (int)
+
Disable cache flush operations on disks when writing. Setting this will + cause pool corruption on power loss if a volatile out-of-order write cache + is enabled.
+
=1|0 + (int)
+
Allow no-operation writes. The occurrence of nopwrites will further depend + on other pool properties (i.a. the checksumming and compression + algorithms).
+
=1|0 + (int)
+
Enable forcing TXG sync to find holes. When enabled forces ZFS to sync + data when + + or + + flags are used allowing holes in a file to be accurately reported. When + disabled holes will not be reported in recently dirtied files.
+
=B + (50MB) (int)
+
The number of bytes which should be prefetched during a pool traversal, + like zfs send or other + data crawling operations.
+
=32 + (int)
+
The number of blocks pointed to by an indirect (non-L0) block which should be prefetched during a pool traversal, like zfs send or other data crawling operations.
+
=30% + (ulong)
+
Control percentage of dirtied indirect blocks from frees allowed into one + TXG. After this threshold is crossed, additional frees will wait until the + next TXG. 0 disables this + throttle.
+
=0|1 + (int)
+
Disable predictive prefetch. Note that it leaves "prescient" prefetch (e.g. for zfs send) intact. Unlike predictive prefetch, prescient prefetch never issues I/O that ends up not being needed, so it can't hurt performance.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for SHA256 checksums. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for gzip compression. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for AES-GCM encryption. May be unset + after the ZFS modules have been loaded to initialize the QAT hardware as + long as support is compiled in and the QAT driver is present.
+
=1048576B + (1MB) (long)
+
Bytes to read per chunk.
+
=0 + (int)
+
Historical statistics for this many latest reads will be available in + /proc/spl/kstat/zfs/pool/reads.
+
=0|1 + (int)
+
Include cache hits in read history.
+
=1048576B + (1MB) (ulong)
+
Maximum read segment size to issue when sequentially resilvering a + top-level vdev.
+
=1|0 + (int)
+
Automatically start a pool scrub when the last active sequential resilver + completes in order to verify the checksums of all blocks which have been + resilvered. This is enabled by default and strongly recommended.
+
=67108864B + (64 MiB) (ulong)
+
Maximum amount of I/O that can be concurrently issued for a sequential + resilver per leaf device, given in bytes.
+
=4096 + (int)
+
If an indirect split block contains more than this many possible unique + combinations when being reconstructed, consider it too computationally + expensive to check them all. Instead, try at most this many randomly + selected combinations each time the block is accessed. This allows all + segment copies to participate fairly in the reconstruction when all + combinations cannot be checked and prevents repeated use of one bad + copy.
+
=0|1 + (int)
+
Set to attempt to recover from fatal errors. This should only be used as a + last resort, as it typically results in leaked space, or worse.
+
=0|1 + (int)
+
Ignore hard IO errors during device removal. When set, if a device + encounters a hard IO error during the removal process the removal will not + be cancelled. This can result in a normally recoverable block becoming + permanently damaged and is hence not recommended. This should only be used + as a last resort when the pool cannot be returned to a healthy state prior + to removing the device.
+
=0|1 + (int)
+
This is used by the test suite so that it can ensure that certain actions + happen while in the middle of a removal.
+
=16777216B + (16MB) (int)
+
The largest contiguous segment that we will attempt to allocate when + removing a device. If there is a performance problem with attempting to + allocate large blocks, consider decreasing this. The default value is also + the maximum.
+
=0|1 + (int)
+
Ignore the + + feature, causing an operation that would start a resilver to immediately + restart the one in progress.
+
=ms + (3s) (int)
+
Resilvers are processed by the sync thread. While resilvering, it will + spend at least this much time working on a resilver between TXG + flushes.
+
=0|1 + (int)
+
If set, remove the DTL (dirty time list) upon completion of a pool scan + (scrub), even if there were unrepairable errors. Intended to be used + during pool repair or recovery to stop resilvering when the pool is next + imported.
+
=1000ms + (1s) (int)
+
Scrubs are processed by the sync thread. While scrubbing, it will spend at + least this much time working on a scrub between TXG flushes.
+
=s + (2h) (int)
+
To preserve progress across reboots, the sequential scan algorithm + periodically needs to stop metadata scanning and issue all the + verification I/O to disk. The frequency of this flushing is determined by + this tunable.
+
=3 + (int)
+
This tunable affects how scrub and resilver I/O segments are ordered. A higher number indicates that we care more about how filled in a segment is, while a lower number indicates we care more about the size of the extent without considering the gaps within a segment. This value is only tunable upon module insertion. Changing the value afterwards will have no effect on scrub or resilver performance.
+
=0 + (int)
+
Determines the order that data will be verified while scrubbing or + resilvering: +
+
+
+
Data will be verified as sequentially as possible, given the amount of + memory reserved for scrubbing (see + zfs_scan_mem_lim_fact). This may improve scrub + performance if the pool's data is very fragmented.
+
+
The largest mostly-contiguous chunk of found data will be verified + first. By deferring scrubbing of small segments, we may later find + adjacent data to coalesce and increase the segment size.
+
+
1 during normal + verification and strategy + 2 while taking a + checkpoint.
+
+
+
+
=0|1 + (int)
+
If unset, indicates that scrubs and resilvers will gather metadata in + memory before issuing sequential I/O. Otherwise indicates that the legacy + algorithm will be used, where I/O is initiated as soon as it is + discovered. Unsetting will not affect scrubs or resilvers that are already + in progress.
+
=B + (2MB) (int)
+
Sets the largest gap in bytes between scrub/resilver I/O operations that + will still be considered sequential for sorting purposes. Changing this + value will not affect scrubs or resilvers that are already in + progress.
+
=20^-1 + (int)
+
Maximum fraction of RAM used for I/O sorting by sequential scan algorithm. + This tunable determines the hard limit for I/O sorting memory usage. When + the hard limit is reached we stop scanning metadata and start issuing data + verification I/O. This is done until we get below the soft limit.
+
=20^-1 + (int)
+
The fraction of the hard limit used to determine the soft limit for I/O sorting by the sequential scan algorithm. When we cross this limit from below, no action is taken. When we cross this limit from above, it is because we are issuing verification I/O. In this case (unless the metadata scan is done) we stop issuing verification I/O and start scanning metadata again until we get to the hard limit.
+
=0|1 + (uint)
+
When reporting resilver throughput and estimated completion time, use the performance observed over roughly the last zfs_scan_report_txgs TXGs. When set to zero, performance is calculated over the time between checkpoints.
+
=0|1 + (int)
+
Enforce tight memory limits on pool scans when a sequential scan is in + progress. When disabled, the memory limit may be exceeded by fast + disks.
+
=0|1 + (int)
+
Freezes a scrub/resilver in progress without actually pausing it. Intended + for testing/debugging.
+
=16777216B + (16 MiB) (int)
+
Maximum amount of data that can be concurrently issued at once for scrubs + and resilvers per leaf device, given in bytes.
+
=0|1 + (int)
+
Allow sending of corrupt data (ignore read/checksum errors when + sending).
+
=1|0 + (int)
+
Include unmodified spill blocks in the send stream. Under certain + circumstances, previous versions of ZFS could incorrectly remove the spill + block from an existing object. Including unmodified copies of the spill + blocks creates a backwards-compatible stream which will recreate a spill + block if it was incorrectly removed.
+
=20^-1 + (int)
+
The fill fraction of the zfs + send internal queues. The fill fraction controls + the timing with which internal threads are woken up.
+
=1048576B + (1MB) (int)
+
The maximum number of bytes allowed in zfs + send's internal queues.
+
=20^-1 + (int)
+
The fill fraction of the zfs + send prefetch queue. The fill fraction controls + the timing with which internal threads are woken up.
+
=16777216B + (16MB) (int)
+
The maximum number of bytes allowed that will be prefetched by + zfs send. This value must + be at least twice the maximum block size in use.
+
=20^-1 + (int)
+
The fill fraction of the zfs + receive queue. The fill fraction controls the + timing with which internal threads are woken up.
+
=16777216B + (16MB) (int)
+
The maximum number of bytes allowed in the zfs + receive queue. This value must be at least twice + the maximum block size in use.
+
=1048576B + (1MB) (int)
+
The maximum amount of data, in bytes, that zfs + receive will write in one DMU transaction. This is + the uncompressed size, even when receiving a compressed send stream. This + setting will not reduce the write size below a single block. Capped at a + maximum of 32MB.
+
=0|1 + (ulong)
+
Setting this variable overrides the default logic for estimating block + sizes when doing a zfs + send. The default heuristic is that the average + block size will be the current recordsize. Override this value if most + data in your dataset is not of that size and you require accurate zfs send + size estimates.
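The estimate itself can be inspected without transferring any data by using a dry run; the dataset and snapshot names below are hypothetical:
# Print an estimated stream size without sending anything
zfs send -nvP tank/data@today
# Estimate an incremental stream between two snapshots
zfs send -nvP -i tank/data@yesterday tank/data@today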
+
=2 + (int)
+
Flushing of data to disk is done in passes. Defer frees starting in this + pass.
+
=16777216B + (16MB) (int)
+
Maximum memory used for prefetching a checkpoint's space map on each vdev + while discarding the checkpoint.
+
=25% + (int)
+
Only allow small data blocks to be allocated on the special and dedup vdev + types when the available free space percentage on these vdevs exceeds this + value. This ensures reserved space is available for pool metadata as the + special vdevs approach capacity.
+
=8 + (int)
+
Starting in this sync pass, disable compression (including of metadata). + With the default setting, in practice, we don't have this many sync + passes, so this has no effect. +

The original intent was that disabling compression would help + the sync passes to converge. However, in practice, disabling compression + increases the average number of sync passes; because when we turn + compression off, many blocks' size will change, and thus we have to + re-allocate (not overwrite) them. It also increases the number of + 128kB allocations (e.g. for indirect blocks and + spacemaps) because these will not be compressed. The + 128kB allocations are especially detrimental to + performance on highly fragmented systems, which may have very few free + segments of this size, and may need to load new metaslabs to satisfy + these allocations.

+
+
=2 + (int)
+
Rewrite new block pointers starting in this pass.
+
=75% + (int)
+
This controls the number of threads used by + . + The default value of + will + create a maximum of one thread per CPU.
+
=134217728B + (128MB) (uint)
+
Maximum size of TRIM command. Larger ranges will be split into chunks no + larger than this value before issuing.
+
=32768B + (32kB) (uint)
+
Minimum size of TRIM commands. TRIM ranges smaller than this will be + skipped, unless they're part of a larger range which was chunked. This is + done because it's common for these small TRIMs to negatively impact + overall performance.
+
=0|1 + (uint)
+
Skip uninitialized metaslabs during the TRIM process. This option is + useful for pools constructed from large thinly-provisioned devices where + TRIM operations are slow. As a pool ages, an increasing fraction of the + pool's metaslabs will be initialized, progressively degrading the + usefulness of this option. This setting is stored when starting a manual + TRIM and will persist for the duration of the requested TRIM.
+
=10 + (uint)
+
Maximum number of queued TRIMs outstanding per leaf vdev. The number of + concurrent TRIM commands issued to the device is controlled by + zfs_vdev_trim_min_active and + zfs_vdev_trim_max_active.
+
=32 + (uint)
+
The number of transaction groups' worth of frees which should be + aggregated before TRIM operations are issued to the device. This setting + represents a trade-off between issuing larger, more efficient TRIM + operations and the delay before the recently trimmed space is available + for use by the device. +

Increasing this value will allow frees to be aggregated for a longer time. This will result in larger TRIM operations and potentially increased memory usage. Decreasing this value will have the opposite effect. The default of 32 was determined to be a reasonable compromise.

+
+
=0 + (int)
+
Historical statistics for this many latest TXGs will be available in + /proc/spl/kstat/zfs/pool/TXGs.
+
=5s + (int)
+
Flush dirty data to disk at least every this many seconds (maximum TXG + duration).
+
=0|1 + (int)
+
Allow TRIM I/Os to be aggregated. This is normally not helpful because the extents to be trimmed will have already been aggregated by the metaslab. This option is provided for debugging and performance analysis.
+
=1048576B + (1MB) (int)
+
Max vdev I/O aggregation size.
+
=131072B + (128kB) (int)
+
Max vdev I/O aggregation size for non-rotating media.
+
=16 + (64kB) (int)
+
Shift size to inflate reads to.
+
=16384B + (16kB) (int)
+
Inflate reads smaller than this value to meet the + zfs_vdev_cache_bshift size (default + ).
+
=0 + (int)
+
Total size of the per-disk cache in bytes. +

Currently this feature is disabled, as it has been found to + not be helpful for performance and in some cases harmful.

+
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation for the purpose of selecting the least busy mirror member when an I/O operation immediately follows its predecessor on rotational vdevs.
+
=5 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member when an I/O + operation lacks locality as defined by + zfs_vdev_mirror_rotating_seek_offset. Operations within + this that are not immediately following the previous operation are + incremented by half.
+
=1048576B + (1MB) (int)
+
The maximum distance for the last queued I/O operation in which the + balancing algorithm considers an operation to have locality. + See ZFS + I/O SCHEDULER.
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/O operations do not immediately follow one + another.
+
=1 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member when an I/O + operation lacks locality as defined by the + zfs_vdev_mirror_rotating_seek_offset. Operations within + this that are not immediately following the previous operation are + incremented by half.
+
=32768B + (32kB) (int)
+
Aggregate read I/O operations if the on-disk gap between them is within + this threshold.
+
=4096B + (4kB) (int)
+
Aggregate write I/O operations if the on-disk gap between them is within + this threshold.
+
=fastest + (string)
+
Select the raidz parity implementation to use. +

Variants that don't depend on CPU-specific features may be + selected on module load, as they are supported on all systems. The + remaining options may only be set after the module is loaded, as they + are available only if the implementations are compiled in and supported + on the running system.

+

Once the module is loaded, + /sys/module/zfs/parameters/zfs_vdev_raidz_impl + will show the available options, with the currently selected one + enclosed in square brackets.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
fastest            selected by built-in benchmark
original           original implementation
scalar             scalar implementation
sse2               SSE2 instruction set                    64-bit x86
ssse3              SSSE3 instruction set                   64-bit x86
avx2               AVX2 instruction set                    64-bit x86
avx512f            AVX512F instruction set                 64-bit x86
avx512bw           AVX512F & AVX512BW instruction sets     64-bit x86
aarch64_neon       NEON                                    Aarch64/64-bit ARMv8
aarch64_neonx2     NEON with more unrolling                Aarch64/64-bit ARMv8
powerpc_altivec    Altivec                                 PowerPC
+
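A brief sketch of inspecting and changing the selection at runtime, using the parameter file named above (exact output varies by platform and build):
# Show the available implementations; the active one is in square brackets
cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
# Force the portable scalar implementation
echo scalar > /sys/module/zfs/parameters/zfs_vdev_raidz_impl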
+ (charp)
+
. + Prints warning to kernel log for compatibility.
+
= + (int)
+
Max event queue length. Events in the queue can be viewed with + zpool-events(8).
+
=2000 + (int)
+
Maximum recent zevent records to retain for duplicate checking. Setting + this to 0 disables duplicate detection.
+
=s + (15min) (int)
+
Lifespan for a recent ereport that was retained for duplicate + checking.
+
=1048576 + (int)
+
The maximum number of taskq entries that are allowed to be cached. When + this limit is exceeded transaction records (itxs) will be cleaned + synchronously.
+
= + (int)
+
The number of taskq entries that are pre-populated when the taskq is first + created and are immediately available for use.
+
=100% + (int)
+
This controls the number of threads used by + . + The default value of + + will create a maximum of one thread per cpu.
+
=131072B + (128kB) (int)
+
This sets the maximum block size used by the ZIL. On very fragmented + pools, lowering this (typically to + ) + can improve performance.
+
= + (u64)
+
This sets the minimum delay, in nanoseconds, for which the ZIL will delay a block commit while waiting for more records. If ZIL writes are too fast, the kernel may not be able to sleep for so short an interval, increasing log latency above what is allowed by zfs_commit_timeout_pct.
+
=0|1 + (int)
+
Disable the cache flush commands that are normally sent to disk by the ZIL + after an LWB write has completed. Setting this will cause ZIL corruption + on power loss if a volatile out-of-order write cache is enabled.
+
=0|1 + (int)
+
Disable intent logging replay. Can be disabled for recovery from corrupted + ZIL.
+
=B + (768kB) (ulong)
+
Limit SLOG write size per commit executed with synchronous priority. Any + writes above that will be executed with lower (asynchronous) priority to + limit potential SLOG device abuse by single active ZIL writer.
+
=64 + (int)
+
Usually, one metaslab from each normal-class vdev is dedicated for use by + the ZIL to log synchronous writes. However, if there are fewer than + zfs_embedded_slog_min_ms metaslabs in the vdev, this + functionality is disabled. This ensures that we don't set aside an + unreasonable amount of space for the ZIL.
+
=0|1 + (int)
+
If non-zero, the zio deadman will produce debugging messages (see + zfs_dbgmsg_enable) for all zios, rather than only for + leaf zios possessing a vdev. This is meant to be used by developers to + gain diagnostic information for hang conditions which don't involve a + mutex or other locking primitive: typically conditions in which a thread + in the zio pipeline is looping indefinitely.
+
=ms + (30s) (int)
+
When an I/O operation takes more than this much time to complete, it's + marked as slow. Each slow operation causes a delay zevent. Slow I/O + counters can be seen with zpool + status -s.
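The accumulated slow I/O counters can then be reviewed per device (the pool name is a placeholder):
# Show per-vdev slow I/O counters for a pool
zpool status -s tank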
+
=1|0 + (int)
+
Throttle block allocations in the I/O pipeline. This allows for dynamic + allocation distribution when devices are imbalanced. When enabled, the + maximum number of pending allocations per top-level vdev is limited by + zfs_vdev_queue_depth_pct.
+
=0|1 + (int)
+
Prioritize requeued I/O.
+
=% + (uint)
+
Percentage of online CPUs which will run a worker thread for I/O. These + workers are responsible for I/O work such as compression and checksum + calculations. Fractional number of CPUs will be rounded down. +

The default value of + was chosen to + avoid using all CPUs which can result in latency issues and inconsistent + application performance, especially when slower compression and/or + checksumming is enabled.

+
+
=0 + (uint)
+
Number of worker threads per taskq. Lower values improve I/O ordering and CPU utilization, while higher values reduce lock contention.

If 0, generate a system-dependent value + close to 6 threads per taskq.

+
+
=0|1 + (uint)
+
Do not create zvol device nodes. This may slightly improve startup time on + systems with a very large number of zvols.
+
= + (uint)
+
Major number for zvol block devices.
+
=16384 + (ulong)
+
Discard (TRIM) operations done on zvols will be done in batches of this + many blocks, where block size is determined by the + + property of a zvol.
+
=131072B + (128kB) (uint)
+
When adding a zvol to the system, prefetch this many bytes from the start + and end of the volume. Prefetching these regions of the volume is + desirable, because they are likely to be accessed immediately by + blkid(8) or the kernel partitioner.
+
=0|1 + (uint)
+
When processing I/O requests for a zvol, submit them synchronously. This + effectively limits the queue depth to 1 for each I/O + submitter. When unset, requests are handled asynchronously by a thread + pool. The number of requests which can be handled concurrently is + controlled by zvol_threads.
+
=32 + (uint)
+
Max number of threads which can handle zvol I/O requests + concurrently.
+
=1 + (uint)
+
Defines zvol block devices behaviour when + =: + +
+
+
+
+

+

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/O operations. The scheduler determines when and in what order those + operations are issued. The scheduler divides operations into five I/O + classes, prioritized in the following order: sync read, sync write, async + read, async write, and scrub/resilver. Each queue defines the minimum and + maximum number of concurrent operations that may be issued to the device. In + addition, the device has an aggregate maximum, + zfs_vdev_max_active. Note that the sum of the per-queue + minima must not exceed the aggregate maximum. If the sum of the per-queue + maxima exceeds the aggregate maximum, then the number of active operations + may reach zfs_vdev_max_active, in which case no further + operations will be issued, regardless of whether all per-queue minima have + been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Furthermore, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been + hit, or if there are no operations queued for an I/O class that has not hit + its maximum. Every time an I/O operation is queued or an operation + completes, the scheduler looks for new operations to issue.

+

In general, smaller max_actives will lead to + lower latency of synchronous operations. Larger + max_actives may lead to higher overall throughput, + depending on underlying storage.

+

The ratio of the queues' max_actives determines + the balance of performance between reads, writes, and scrubs. For example, + increasing zfs_vdev_scrub_max_active will cause the scrub + or resilver to complete more quickly, but reads and writes to have higher + latency and lower throughput.
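On Linux, the current per-class limits can be reviewed in one pass through sysfs; this sketch assumes the min/max tunables follow the zfs_vdev_*_active naming used throughout this page:
# Print each per-queue minimum/maximum alongside its parameter name
grep . /sys/module/zfs/parameters/zfs_vdev_*_active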

+

All I/O classes have a fixed maximum number of outstanding + operations, except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically, + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write operations + according to the amount of dirty data in the pool. Since both throughput and + latency typically increase with the number of concurrent operations issued + to physical devices, reducing the burstiness in the number of concurrent + operations also stabilizes the response time of operations from other + – and in particular synchronous – queues. In broad strokes, + the I/O scheduler will issue more concurrent operations from the async write + queue as there's more dirty data in the pool.

+
+

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points:

+
+
       |              o---------| <-- zfs_vdev_async_write_max_active
+  ^    |             /^         |
+  |    |            / |         |
+active |           /  |         |
+ I/O   |          /   |         |
+count  |         /    |         |
+       |        /     |         |
+       |-------o      |         | <-- zfs_vdev_async_write_min_active
+      0|_______^______|_________|
+       0%      |      |       100% of zfs_dirty_data_max
+               |      |
+               |      `-- zfs_vdev_async_write_active_max_dirty_percent
+               `--------- zfs_vdev_async_write_active_min_dirty_percent
+
+

Until the amount of dirty data exceeds a minimum percentage of the + dirty data allowed in the pool, the I/O scheduler will limit the number of + concurrent operations to the minimum. As that threshold is crossed, the + number of concurrent operations issued increases linearly to the maximum at + the specified maximum percentage of the dirty data allowed in the pool.

+

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it + exceeds the maximum percentage, this indicates that the rate of incoming + data is greater than the rate that the backend storage can handle. In this + case, we must further throttle incoming writes, as described in the next + section.

+
+
+
+

+

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as

+
min_time = + min(zfs_delay_scale * (dirty - min) / (max + - dirty), 100ms)
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be + at or above zfs_vdev_async_write_active_max_dirty_percent, + so that we only start to delay after writing at full speed has failed to + keep up with the incoming write rate. The scale of the curve is defined by + zfs_delay_scale. Roughly speaking, this variable + determines the amount of delay at the midpoint of the curve.
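A worked example may help. The sketch below assumes that min is zfs_delay_min_dirty_percent of zfs_dirty_data_max and max is zfs_dirty_data_max itself, and uses illustrative values of a 4 GiB dirty-data limit, a 60% delay threshold and a 500000 ns scale:
awk 'BEGIN {
    max = 4 * 1024^3;          # assumed zfs_dirty_data_max (4 GiB)
    min = max * 60 / 100;      # assumed zfs_delay_min_dirty_percent = 60
    dirty = 3.5 * 1024^3;      # 3.5 GiB of dirty data outstanding
    scale = 500000;            # assumed zfs_delay_scale, in nanoseconds
    ns = scale * (dirty - min) / (max - dirty);
    printf "per-transaction delay ~ %.1f ms\n", ns / 1e6;   # about 1.1 ms here
}'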

+
+
delay
+ 10ms +-------------------------------------------------------------*+
+      |                                                             *|
+  9ms +                                                             *+
+      |                                                             *|
+  8ms +                                                             *+
+      |                                                            * |
+  7ms +                                                            * +
+      |                                                            * |
+  6ms +                                                            * +
+      |                                                            * |
+  5ms +                                                           *  +
+      |                                                           *  |
+  4ms +                                                           *  +
+      |                                                           *  |
+  3ms +                                                          *   +
+      |                                                          *   |
+  2ms +                                              (midpoint) *    +
+      |                                                  |    **     |
+  1ms +                                                  v ***       +
+      |             zfs_delay_scale ---------->     ********         |
+    0 +-------------------------------------*********----------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note that, since the delay is added to the outstanding time remaining on the most recent transaction, it is effectively the inverse of IOPS. Here, the midpoint of translates to 2000 IOPS. The shape of the curve was chosen such that small changes in the amount of accumulated dirty data in the first three quarters of the curve yield relatively small differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a logarithmic scale:

+
+
delay
+100ms +-------------------------------------------------------------++
+      +                                                              +
+      |                                                              |
+      +                                                             *+
+ 10ms +                                                             *+
+      +                                                           ** +
+      |                                              (midpoint)  **  |
+      +                                                  |     **    +
+  1ms +                                                  v ****      +
+      +             zfs_delay_scale ---------->        *****         +
+      |                                             ****             |
+      +                                          ****                +
+100us +                                        **                    +
+      +                                       *                      +
+      |                                      *                       |
+      +                                     *                        +
+ 10us +                                     *                        +
+      +                                                              +
+      |                                                              |
+      +                                                              +
+      +--------------------------------------------------------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the back-end storage, and then by changing the value + of zfs_delay_scale to increase the steepness of the + curve.

+
+
+ + + + + +
January 10, 2023    Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/5/index.html b/man/v2.1/5/index.html new file mode 100644 index 000000000..467921b78 --- /dev/null +++ b/man/v2.1/5/index.html @@ -0,0 +1,147 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/5/vdev_id.conf.5.html b/man/v2.1/5/vdev_id.conf.5.html new file mode 100644 index 000000000..4f1f9a5be --- /dev/null +++ b/man/v2.1/5/vdev_id.conf.5.html @@ -0,0 +1,367 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
VDEV_ID.CONF(5)    File Formats Manual    VDEV_ID.CONF(5)
+
+
+

+

vdev_id.conf — + configuration file for vdev_id(8)

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of + vdev_id(8) while it is mapping a disk device name to an + alias.

+

The vdev_id.conf file uses a simple format + consisting of a keyword followed by one or more values on a single line. Any + line not beginning with a recognized keyword is ignored. Comments may + optionally begin with a hash character.

+

The following keywords and values are used.

+
+
+ name devlink
+
Maps a device link in the /dev directory hierarchy + to a new device name. The udev rule defining the device link must have run + prior to vdev_id(8). A defined alias takes precedence + over a topology-derived name, but the two naming methods can otherwise + coexist. For example, one might name drives in a JBOD with the + sas_direct topology while naming an internal L2ARC + device with an alias. +

name is the name of the link to the device that will be created under /dev/disk/by-vdev.

+

devlink is the name of the device link + that has already been defined by udev. This may be an absolute path or + the base filename.

+
+
+ [pci_slot] port + name
+
Maps a physical path to a channel name (typically representing a single + disk enclosure).
+ +
Additionally create /dev/by-enclosure symlinks to + the disk enclosure + devices + using the naming scheme from vdev_id.conf. + enclosure_symlinks is only allowed for + sas_direct mode.
+ +
Specify the prefix for the enclosure symlinks in the form + /dev/by-enclosure/prefix⟩-⟨channel⟩⟨num⟩ +

Defaults to + “”.

+
+
+ prefix new + [channel]
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is + specified then the mapping is only applied to slots in the named channel, + otherwise the mapping is applied to all channels. The first-specified + slot rule that can match a slot takes precedence. + Therefore a channel-specific mapping for a given slot should generally + appear before a generic mapping for the same slot. In this way a custom + mapping may be applied to a particular channel and a default mapping + applied to the others.
+
+ yes|no
+
Specifies whether vdev_id(8) will handle only + dm-multipath devices. If set to yes then + vdev_id(8) will examine the first running component disk + of a dm-multipath device as provided by the driver command to determine + the physical path.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
channels are uniquely identified by a SAS switch port number
+
+
+
+ num
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to + determine which HBA or switch port a device is connected to. The default + is .
+
+ bay|phy|port|id|lun|ses
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay: +
+
+
read the slot number from the bay identifier.
+
+
read the slot number from the phy identifier.
+
+
use the SAS port as the slot number.
+
+
use the scsi id as the slot number.
+
+
use the scsi lun as the slot number.
+
+
use the SCSI Enclosure Services (SES) enclosure device slot number, as + reported by sg_ses(8). Intended for use only on + systems where bay is unsupported, noting that + port and id may be unstable across + disk replacement.
+
+
+
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping:

+
+
multipath     no
+topology      sas_direct
+phys_per_port 4
+slot          bay
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         C
+channel 86:00.0  0         D
+
+# Custom mapping for Channel A
+
+#    Linux      Mapped
+#    Slot       Slot      Channel
+slot 1          7         A
+slot 2          10        A
+slot 3          3         A
+slot 4          6         A
+
+# Default mapping for B, C, and D
+
+slot 1          4
+slot 2          2
+slot 3          1
+slot 4          3
+
+

A SAS-switch topology. Note that the channel keyword takes only two arguments in this example:

+
+
topology      sas_switch
+
+#       SWITCH PORT  CHANNEL NAME
+channel 1            A
+channel 2            B
+channel 3            C
+channel 4            D
+
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path:

+
+
multipath yes
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         A
+channel 86:00.0  0         B
+
+

A configuration with enclosure_symlinks enabled:

+
+
multipath yes
+enclosure_symlinks yes
+
+#          PCI_ID      HBA PORT     CHANNEL NAME
+channel    05:00.0     1            U
+channel    05:00.0     0            L
+channel    06:00.0     1            U
+channel    06:00.0     0            L
+
+In addition to the disk symlinks, this configuration will create:
+
/dev/by-enclosure/enc-L0
+/dev/by-enclosure/enc-L1
+/dev/by-enclosure/enc-U0
+/dev/by-enclosure/enc-U1
+
+

A configuration using device link aliases:

+
+
#     by-vdev
+#     name     fully qualified or base name of device link
+alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+alias d2       wwn-0x5000c5002def789e
+
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
May 26, 2021    Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/dracut.zfs.7.html b/man/v2.1/7/dracut.zfs.7.html new file mode 100644 index 000000000..9b529c30f --- /dev/null +++ b/man/v2.1/7/dracut.zfs.7.html @@ -0,0 +1,402 @@ + + + + + + + dracut.zfs.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

dracut.zfs.7

+
+ + + + + +
DRACUT.ZFS(7)    Miscellaneous Information Manual    DRACUT.ZFS(7)
+
+
+

+

dracut.zfs — + overview of ZFS dracut hooks

+
+
+

+
+
                      parse-zfs.sh → dracut-cmdline.service
+                          |                     ↓
+                          |                     …
+                          |                     ↓
+                          \————————→ dracut-initqueue.service
+                                                |                      zfs-import-opts.sh
+   zfs-load-module.service                      ↓                          |       |
+     |                  |                sysinit.target                    ↓       |
+     ↓                  |                       |        zfs-import-scan.service   ↓
+zfs-import-scan.service ↓                       ↓           | zfs-import-cache.service
+     |   zfs-import-cache.service         basic.target      |     |
+     \__________________|                       |           ↓     ↓
+                        ↓                       |     zfs-load-key.sh
+     zfs-env-bootfs.service                     |         |
+                        ↓                       ↓         ↓
+                 zfs-import.target → dracut-pre-mount.service
+                        |          ↑            |
+                        | dracut-zfs-generator  |
+                        | _____________________/|
+                        |/                      ↓
+                        |                   sysroot.mount ←——— dracut-zfs-generator
+                        |                       |
+                        |                       ↓
+                        |             initrd-root-fs.target ←— zfs-nonroot-necessities.service
+                        |                       |                                 |
+                        |                       ↓                                 |
+                        ↓             dracut-mount.service                        |
+       zfs-snapshot-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        ↓                       …                                 |
+       zfs-rollback-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        |          /sysroot/{usr,etc,lib,&c.} ←———————————————————/
+                        |                       |
+                        |                       ↓
+                        |                initrd-fs.target
+                        \______________________ |
+                                               \|
+                                                ↓
+        export-zfs.sh                      initrd.target
+              |                                 |
+              ↓                                 ↓
+   dracut-shutdown.service                      …
+                                                |
+                                                ↓
+                 zfs-needshutdown.sh → initrd-cleanup.service
+
+

Compare dracut.bootup(7) for the full + flowchart.

+
+
+

+

Under dracut, booting with + ZFS-on-/ is facilitated by a + number of hooks in the 90zfs module.

+

Booting into a ZFS dataset requires + mountpoint=/ to be set on the + dataset containing the root filesystem (henceforth "the boot + dataset") and at the very least either the bootfs + property to be set to that dataset, or the root= kernel + cmdline (or dracut drop-in) argument to specify it.

+

All children of the boot dataset with canmount=on and with mountpoints matching the /etc, /bin, /lib, /lib??, /libx32, and /usr globs are deemed essential and will be mounted as well.

+

zfs-mount-generator(8) is recommended for proper + functioning of the system afterward (correct mount properties, remounting, + &c.).

+
+
+

+
+

+
+
root=dataset, root=zfs:dataset
+
Use dataset as the boot dataset. All pluses + (‘+’) are replaced with spaces + (‘ ’).
+
root=zfs:AUTO, root=zfs:, root=, [root=]
+
After import, search for the first pool with the bootfs + property set, use its value as-if specified as the + dataset above.
+
rootfstype=zfs root=dataset
+
Equivalent to + root=zfs:dataset.
+
rootfstype=zfs [root=]
+
Equivalent to root=zfs:AUTO.
+
rootflags=flags
+
Mount the boot dataset with -o + flags; cf. + Temporary Mount + Point Properties in zfsprops(7). These properties + will not last, since all filesystems will be re-mounted from the real + root.
+
+
If specified, dracut-zfs-generator logs to the + journal.
+
+

Be careful about setting neither rootfstype=zfs nor root=zfs:dataset — other automatic boot selection methods, like systemd-gpt-auto-generator and systemd-fstab-generator, might take precedence.

+
+
+

+
+
bootfs.snapshot[=snapshot-name]
+
Execute zfs snapshot + boot-dataset@snapshot-name + before pivoting to the real root. snapshot-name + defaults to the current kernel release.
+
bootfs.rollback[=snapshot-name]
+
Execute zfs rollback -Rf boot-dataset@snapshot-name before pivoting to the real root. snapshot-name defaults to the current kernel release.
+
spl_hostid=host-id
+
Use zgenhostid(8) to set the host ID to + host-id; otherwise, + /etc/hostid inherited from the real root is + used.
+
zfs_force, zfs.force, zfsforce
+
Appends -f to all zpool + import invocations; primarily useful in + conjunction with spl_hostid=, or if no host ID was + inherited.
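As a sketch, a complete kernel command line combining these parameters might look like the following (the pool, dataset, and host ID are illustrative, not defaults):

root=zfs:rpool/ROOT/default bootfs.snapshot spl_hostid=deadbeef zfs.force

This would mount rpool/ROOT/default as the root, snapshot it under the current kernel release name before pivoting, set the host ID, and force the pool import.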
+
+
+
+
+

+
+
parse-zfs.sh + ()
+
Processes spl_hostid=. If root= matches a known pattern above, provides /dev/root and delays the initqueue until zfs(4) is loaded.
+
zfs-import-opts.sh + (systemd environment + generator)
+
Turns zfs_force, zfs.force, + or zfsforce into + ZPOOL_IMPORT_OPTS=-f for + zfs-import-scan.service or + zfs-import-cache.service.
+
zfs-load-key.sh + ()
+
Loads encryption keys for the boot dataset and its essential descendants. +
+
+
keylocation=prompt
+
Is prompted for via systemd-ask-password + thrice.
+
keylocation=https://URL, keylocation=http://URL
+
network-online.target is started before + loading.
+
keylocation=file://path
+
If path doesn't exist, + udevadm is + settled. If it still doesn't, it's waited for + for up to + s.
+
+
+
+
zfs-env-bootfs.service + (systemd service)
+
After pool import, sets BOOTFS= in the systemd + environment to the first non-null bootfs value in + iteration order.
+
dracut-zfs-generator + (systemd generator)
+
Generates sysroot.mount (using + rootflags=, if any). If an + explicit boot dataset was specified, also generates essential mountpoints + (sysroot-etc.mount, + sysroot-bin.mount, + &c.), otherwise generates + zfs-nonroot-necessities.service which mounts them + explicitly after /sysroot using + BOOTFS=.
+
zfs-snapshot-bootfs.service, + zfs-rollback-bootfs.service + (systemd services)
+
Consume bootfs.snapshot and + bootfs.rollback as described in + CMDLINE. Use + BOOTFS= if no explicit boot dataset was + specified.
+
zfs-needshutdown.sh + ()
+
If any pools were imported, signals that shutdown hooks are required.
+
export-zfs.sh + ()
+
Forcibly exports all pools.
+
/etc/hostid, + /etc/zfs/zpool.cache, + /etc/zfs/vdev_id.conf (regular files)
+
Included verbatim, hostonly.
+
mount-zfs.sh + ()
+
Does nothing on systemd systems (if + dracut-zfs-generator + succeeded). Otherwise, loads encryption key for + the boot dataset from the console or via plymouth. It may not work at + all!
+
+
+
+

+

zfsprops(7), + zpoolprops(7), + dracut-shutdown.service(8), + systemd-fstab-generator(8), + systemd-gpt-auto-generator(8), + zfs-mount-generator(8), + zgenhostid(8)

+
+
+ + + + + +
March 28, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/index.html b/man/v2.1/7/index.html new file mode 100644 index 000000000..da5227095 --- /dev/null +++ b/man/v2.1/7/index.html @@ -0,0 +1,157 @@ + + + + + + + Miscellaneous (7) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/man/v2.1/7/zfsconcepts.7.html b/man/v2.1/7/zfsconcepts.7.html new file mode 100644 index 000000000..e4a222780 --- /dev/null +++ b/man/v2.1/7/zfsconcepts.7.html @@ -0,0 +1,301 @@ + + + + + + + zfsconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsconcepts.7

+
+ + + + + +
ZFSCONCEPTS(7)Miscellaneous Information ManualZFSCONCEPTS(7)
+
+
+

+

zfsconcepts — + overview of ZFS concepts

+
+
+

+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system; for example, it can be mounted and unmounted, snapshots can be taken of it, and properties can be set on it. The physical storage characteristics, however, are managed by the zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. Snapshots can be created extremely quickly, and initially consume no additional space within the pool. As data within the active dataset changes, the snapshot consumes additional space by continuing to reference the old data, preventing it from being freed.

+

Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back; their visibility is determined by the snapdev property of the parent volume.

+

File system snapshots can be accessed under the .zfs/snapshot directory in the root of the file system. Snapshots are automatically mounted on demand and may be unmounted at regular intervals. The visibility of the .zfs directory can be controlled by the snapdir property.
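For example (a sketch; the pool/home dataset and its mount point are illustrative), a snapshot can be created and then browsed read-only through the hidden directory:

zfs snapshot pool/home@monday
ls /pool/home/.zfs/snapshot/monday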

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks cannot be accessed through the filesystem in any way. From a storage standpoint, a bookmark just provides a way to reference when a snapshot was created as a distinct object. Bookmarks are initially tied to a snapshot, not the filesystem or volume, and they will survive if the snapshot itself is destroyed. Since they are very lightweight, there's little incentive to destroy them.
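A sketch of the typical workflow, with illustrative names: a bookmark is created from a snapshot and later used as the incremental source of a send stream even after that snapshot is gone:

zfs bookmark pool/home@monday pool/home#monday
zfs destroy pool/home@monday
zfs send -i pool/home#monday pool/home@tuesday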

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The origin property exposes this dependency, and the destroy command lists any such dependencies, if they exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.
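A sketch of this workflow with illustrative names:

zfs snapshot pool/project@stable
zfs clone pool/project@stable pool/project-test
zfs promote pool/project-test

After the promotion, pool/project becomes a clone of a snapshot now owned by pool/project-test and can be destroyed if it is no longer needed.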

+
+
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set in the mountpoint property. This directory is created as needed, and ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/fstab). The mountpoint property can be inherited, so if pool/home has a mount point of /export/stuff, then pool/home/user automatically inherits a mount point of /export/stuff/user.

+

A file system mountpoint property of none prevents the file system from being mounted.

+

If needed, ZFS file systems can also be managed with traditional tools (mount, umount, /etc/fstab). If a file system's mount point is set to legacy, ZFS makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system. Because pools must be imported before a legacy mount can succeed, administrators should ensure that legacy mounts are only attempted after the zpool import process finishes at boot time. For example, on machines using systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that zfs-import completes before systemd attempts mounting the filesystem. See systemd.mount(5) for details.
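For example (a sketch; the dataset and mount point are illustrative), a legacy-managed file system could be configured with

zfs set mountpoint=legacy pool/var

and the following /etc/fstab entry:

pool/var  /var  zfs  defaults,x-systemd.requires=zfs-import.target  0  0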

+
+
+

+

Deduplication is the process of removing redundant data at the block level, reducing the total amount of data stored. If a file system has the dedup property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow IO and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk IO.

+

Before creating a pool with deduplication enabled, ensure that you have planned your hardware requirements appropriately and implemented appropriate recovery practices, such as regular backups. Consider using the compression property as a less resource-intensive alternative.
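If deduplication is enabled despite these caveats, its effect can be monitored at the pool level; a sketch with illustrative names:

zfs set dedup=on pool/build
zpool list -o name,size,allocated,dedupratio pool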

+
+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/zfsprops.7.html b/man/v2.1/7/zfsprops.7.html new file mode 100644 index 000000000..18147ec84 --- /dev/null +++ b/man/v2.1/7/zfsprops.7.html @@ -0,0 +1,1494 @@ + + + + + + + zfsprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsprops.7

+
+ + + + + +
ZFSPROPS(7)Miscellaneous Information ManualZFSPROPS(7)
+
+
+

+

zfspropsnative + and user-defined properties of ZFS datasets

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
For encrypted datasets, indicates where the dataset is currently + inheriting its encryption key from. Loading or unloading a key for the + encryptionroot will implicitly load / unload the key for + any inheriting datasets (see zfs + load-key and zfs + unload-key for details). Clones will always share + an encryption key with their origin. See the + Encryption section of + zfs-load-key(8) for details.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
Indicates if an encryption key is currently loaded into ZFS. The possible values are none, available, and unavailable. See zfs load-key and zfs unload-key.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.
+
+
A unique identifier for this dataset within the pool. Unlike the dataset's + guid, the + objsetid of a dataset is not transferred to other pools + when the snapshot is copied with a send/receive operation. The + objsetid can be reused (for a new dataset) after the + dataset is deleted.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive + -s, this opaque token can be provided to + zfs send + -t to resume and complete the + zfs receive.
+
+
For bookmarks, this is the list of snapshot guids the bookmark contains a + redaction list for. For snapshots, this is the list of snapshot guids the + snapshot is redacted with respect to.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: filesystem, volume, snapshot, or bookmark.
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section of + zfsconcepts(7)) is space that is referenced + exclusively by this snapshot. If this snapshot is destroyed, the amount + of used space will be freed. Space that is shared by + multiple snapshots isn't accounted for in this metric. When a snapshot + is destroyed, space that was previously shared with this snapshot can + become unique to snapshots adjacent to it, thus changing the used space + of those snapshots. The used space of the latest snapshot can also be + affected by changes in the file system. Note that the + used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced does not take into account pending changes. Pending changes are generally accounted for within a few seconds. Committing a change to a disk using fsync(2) or O_SYNC does not necessarily guarantee that the space usage information is updated immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
userused@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du + and ls + -s. See the zfs + userspace command for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.

+

The userused@... + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the + following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.
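For example, per-user space and object usage for a file system can be listed with (dataset name illustrative):

zfs userspace pool/home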

+
+
userobjused@user
+
The userobjused property is similar to + userused but instead it counts the number of objects + consumed by a user. This property counts all objects allocated on behalf + of the user, it may differ from the results of system tools such as + df -i. +

When the property xattr=on + is set on a file system additional objects will be created per-file to + store extended attributes. These additional objects are reflected in the + userobjused value and are counted against the user's + userobjquota. When a file system is configured to use + xattr=sa no additional internal + objects are normally required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
groupused@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
groupobjused@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
projectused@project
+
The amount of space consumed by the specified project in this dataset. + Project is identified via the project identifier (ID) that is object-based + numeral attribute. An object can inherit the project ID from its parent + object (if the parent has the flag of inherit project ID that can be set + and changed via chattr + -/+P or zfs project + -s) when being created. The privileged user can + set and change object's project ID via chattr + -p or zfs project + -s anytime. Space is charged to the project of + each file, as displayed by lsattr + -p or zfs project. See the + userused@user property for more + information. +

The root user, or a user who has been granted the + projectused privilege with zfs + allow, can access all projects' usage.

+
+
projectobjused@project
+
The projectobjused is similar to + projectused but instead it counts the number of objects + consumed by project. When the property + xattr=on is set on a fileset, ZFS will + create additional objects per-file to store extended attributes. These + additional objects are reflected in the projectobjused + value and are counted against the project's + projectobjquota. When a filesystem is configured to use + xattr=sa no additional internal + objects are required. See the + userobjused@user property for more + information. +

The root user, or a user who has been granted the + projectobjused privilege with zfs + allow, can access all projects' objects usage.

+
+
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 8 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space referenced by this dataset, that was + written since the previous snapshot (i.e. that is not referenced by the + previous snapshot).
+
written@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which + for clones may be a snapshot in the origin's filesystem (or the origin + of the origin's filesystem, etc.)

+
+
+

The following native properties can be used to change the behavior + of a ZFS dataset.

+
+
=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
+
does not inherit any ACEs.
+
+
only inherits inheritable ACEs that specify "deny" + permissions.
+
+
default, removes the + + and + + permissions when the ACE is inherited.
+
+
inherits all inheritable ACEs without any modifications.
+
+
same meaning as passthrough, except that the + , + , + and + + ACEs inherit the execute permission only if the file creation mode + also requests the execute bit.
+
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + POSIX ACLs.

+
+
=discard|groupmask|passthrough|restricted
+
Controls how an ACL is modified during chmod(2) and how inherited ACEs are + modified by the file creation mode: +
+
+
+
default, deletes all + + except for those representing the mode of the file or directory + requested by chmod(2).
+
+
reduces permissions granted in all + + entries found in the + + such that they are no greater than the group permissions specified by + chmod(2).
+
+
indicates that no changes are made to the ACL other than creating or + updating the necessary ACL entries to represent the new mode of the + file or directory.
+
+
will cause the chmod(2) operation to return an error + when used on any file or directory which has a non-trivial ACL whose + entries can not be represented by a mode. chmod(2) + is required to change the set user ID, set group ID, or sticky bits on + a file or directory, as they do not have equivalent ACL entries. In + order to use chmod(2) on a file or directory with a + non-trivial ACL when aclmode is set to + restricted, you must first remove all ACL entries + which do not represent the current mode.
+
+
+
+
=off|nfsv4|posix
+
Controls whether ACLs are enabled and if so what type of ACL to use. When + this property is set to a type of ACL not supported by the current + platform, the behavior is the same as if it were set to + off. +
+
+
+
default on Linux, when a file system has the acltype + property set to off then ACLs are disabled.
+
+
an alias for off
+
+
default on FreeBSD, indicates that NFSv4-style + ZFS ACLs should be used. These ACLs can be managed with the + getfacl(1) and setfacl(1). The + nfsv4 ZFS ACL type is not yet supported on + Linux.
+
+
indicates POSIX ACLs should be used. POSIX ACLs are specific to Linux + and are not functional on other platforms. POSIX ACLs are stored as an + extended attribute and therefore will not overwrite any existing NFSv4 + ACLs which may be set.
+
+
an alias for posix
+
+
+

To obtain the best performance when setting posix, users are strongly encouraged to also set the xattr=sa property. This will result in the POSIX ACL being stored more efficiently on disk. But as a consequence, all new extended attributes will only be accessible from OpenZFS implementations which support the xattr=sa property. See the xattr property for more details.
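A minimal sketch of that recommendation (the dataset name is illustrative):

zfs set acltype=posix pool/home
zfs set xattr=sa pool/home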

+
+
=on|off
+
Controls whether the access time for files is updated when they are read. Turning this property off avoids producing write traffic when reading files and can result in significant performance gains, though it might confuse mailers and other similar utilities. The values on and off are equivalent to the atime and noatime mount options. The default value is on. See also relatime below.
+
=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.

+
+
=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, and + edonr checksum algorithms require enabling the + appropriate features on the pool. FreeBSD does + not support the edonr algorithm.

+

Please see zpool-features(7) for more + information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
=on|off|gzip|gzip-N|lz4|lzjb|zle|zstd|zstd-N|zstd-fast|zstd-fast-N
+
Controls the compression algorithm used for this dataset. +

Setting compression to on indicates that the + current default compression algorithm should be used. The default + balances compression and decompression speed, with compression ratio and + is expected to work well on a wide variety of workloads. Unlike all + other settings for this property, on does not select a + fixed compression type. As new compression algorithms are added to ZFS + and enabled on a pool, the default compression algorithm may change. The + current default compression algorithm is either lzjb + or, if the lz4_compress feature is enabled, + lz4.

+

The lz4 compression algorithm is a high-performance replacement for the lzjb algorithm. It features significantly faster compression and decompression, as well as a moderately higher compression ratio than lzjb, but can only be used on pools with the lz4_compress feature set to enabled. See zpool-features(7) for details on ZFS feature flags and the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

+

The zstd compression algorithm provides both high compression ratios and good performance. You can specify the zstd level by using the value zstd-N, where N is an integer from 1 (fastest) to 19 (best compression ratio). zstd is equivalent to zstd-3.

+

Faster speeds at the cost of the compression ratio can be requested by setting a negative zstd level. This is done using zstd-fast-N, where N is an integer in [1-9,10,20,30,...,100,500,1000] which maps to a negative zstd level. The lower the level the faster the compression - 1000 provides the fastest compression and lowest compression ratio. zstd-fast is equivalent to zstd-fast-1.

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its + shortened column name + . + Changing this property affects only newly-written data.

+

When any setting except off is selected, + compression will explicitly check for blocks consisting of only zeroes + (the NUL byte). When a zero-filled block is detected, it is stored as a + hole and not compressed using the indicated compression algorithm.

+

Any block being compressed must be no larger than 7/8 of its + original size after compression, otherwise the compression will not be + considered worthwhile and the block saved uncompressed. Note that when + the logical block is less than 8 times the disk sector size this + effectively reduces the necessary compression ratio; for example, 8kB + blocks on disks with 4kB disk sectors must compress to 1/2 or less of + their original size.
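For example (a sketch; the dataset name is illustrative, and the zstd setting requires the zstd feature to be enabled on the pool):

zfs set compression=zstd-9 pool/logs
zfs get compressratio pool/logs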

+
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the file system file system being + mounted. See selinux(8) for more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
=1|2|3
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a missing + top-level vdev. Do NOT create, for example a two-disk + striped pool and set copies=2 on + some datasets thinking you have setup redundancy for them. When a disk + fails you will not be able to import the pool and will have lost all of + your data.

+

Encrypted datasets may not have + copies=3 since the + implementation stores some encryption metadata where the third copy + would normally be.

+
+
=on|off
+
Controls whether device nodes can be opened on this file system. The default value is on. The values on and off are equivalent to the dev and nodev mount options.
+
=off|on|verify|sha256[,verify]|sha512[,verify]|skein[,verify]|edonr,verify
+
Configures deduplication for a dataset. The default value is + off. The default deduplication checksum is + sha256 (this may change in the future). When + dedup is enabled, the checksum defined here overrides + the checksum property. Setting the value to + verify has the same effect as the setting + sha256,verify. +

If set to verify, ZFS will do a byte-to-byte + comparison in case of two blocks having the same signature to make sure + the block contents are identical. Specifying verify is + mandatory for the edonr algorithm.

+

Unless necessary, deduplication should not be enabled on a system. See the Deduplication section of zfsconcepts(7).

+
+
=legacy|auto|1k|2k|4k|8k|16k
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy + requires the large_dnode + pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the + workload makes heavy use of extended attributes. This may be applicable + to SELinux-enabled systems, Lustre servers, and Samba servers, for + example. Literal values are supported for cases where the optimal size + is known in advance and for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode + feature, or if you need to import this pool on a system that doesn't + support the large_dnode + feature.

+

This property can also be referred to by its + shortened column name, + .

+
+
=off|on|aes-128-ccm|aes-192-ccm|aes-256-ccm|aes-128-gcm|aes-192-gcm|aes-256-gcm
+
Controls the encryption cipher suite (block cipher, key length, and mode) + used for this dataset. Requires the encryption feature + to be enabled on the pool. Requires a keyformat to be + set at dataset creation time. +

Selecting encryption=on + when creating a dataset indicates that the default encryption suite will + be selected, which is currently aes-256-gcm. In order + to provide consistent data protection, encryption must be specified at + dataset creation time and it cannot be changed afterwards.

+

For more details and caveats about encryption see the + Encryption section of + zfs-load-key(8).
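A minimal sketch of creating an encryption root at dataset creation time, with an illustrative dataset name (the passphrase is prompted for, since prompt is the default keylocation):

zfs create -o encryption=on -o keyformat=passphrase pool/secure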

+
+
=raw|hex|passphrase
+
Controls what format the user's encryption key will be provided as. This + property is only set when the dataset is encrypted. +

Raw keys and hex keys must be 32 bytes long (regardless of the + chosen encryption suite) and must be randomly generated. A raw key can + be generated with the following command:

+
# dd if=/dev/urandom bs=32 count=1 of=/path/to/output/key
+

Passphrases must be between 8 and 512 bytes long and will be + processed through PBKDF2 before being used (see the + pbkdf2iters property). Even though the encryption + suite cannot be changed after dataset creation, the keyformat can be + with zfs change-key.

+
+
=prompt|file:///absolute/file/path|https://<address>|http://<address>
+
Controls where the user's encryption key will be loaded from by default + for commands such as zfs + load-key and zfs + mount -l. This property is + only set for encrypted datasets which are encryption roots. If + unspecified, the default is prompt. +

Even though the encryption suite cannot be changed after + dataset creation, the keylocation can be with either + zfs set or + zfs change-key. If + prompt is selected ZFS will ask for the key at the + command prompt when it is required to access the encrypted data (see + zfs load-key for + details). This setting will also allow the key to be passed in via the + standard input stream, but users should be careful not to place keys + which should be kept secret on the command line. If a file URI is + selected, the key will be loaded from the specified absolute file path. + If an HTTPS or HTTP URL is selected, it will be GETted using + fetch(3), libcurl, or nothing, depending on + compile-time configuration and run-time availability. The + SSL_CA_CERT_FILE environment variable can be set + to set the location of the concatenated certificate store. The + SSL_CA_CERT_PATH environment variable can be set + to override the location of the directory containing the certificate + authority bundle. The SSL_CLIENT_CERT_FILE and + SSL_CLIENT_KEY_FILE environment variables can be + set to configure the path to the client certificate and its key.

+
+
=iterations
+
Controls the number of PBKDF2 iterations that a passphrase encryption key should be run through when processing it into an encryption key. This property is only defined when encryption is enabled and a keyformat of passphrase is selected. The goal of PBKDF2 is to significantly increase the computational difficulty needed to brute force a user's passphrase. This is accomplished by forcing the attacker to run each passphrase through a computationally expensive hashing function many times before they arrive at the resulting key. A user who actually knows the passphrase will only have to pay this cost once. As CPUs become better at processing, this number should be raised to ensure that a brute force attack is still not possible. The current default is 350000 and the minimum is 100000. This property may be changed with zfs change-key.
+
=on|off
+
Controls whether processes can be executed from within this file system. The default value is on. The values on and off are equivalent to the exec and noexec mount options.
+
=count|none
+
Limits the number of filesystems and volumes that can exist under this + point in the dataset tree. The limit is not enforced if the user is + allowed to change the limit. Setting a filesystem_limit + to on a descendent of a filesystem that already has a + filesystem_limit does not override the ancestor's + filesystem_limit, but rather imposes an additional + limit. This feature must be enabled to be used (see + zpool-features(7)).
+
=size
+
This value represents the threshold block size for including small file + blocks into the special allocation class. Blocks smaller than or equal to + this value will be assigned to the special allocation class while greater + blocks will be assigned to the regular class. Valid values are zero or a + power of two from 512B up to 1M. The default size is 0 which means no + small file blocks will be allocated in the special class. +

Before setting this property, a special class vdev must be + added to the pool. See zpoolconcepts(7) for more + details on the special allocation class.

+
+
=path|none|legacy
+
Controls the mount point used for this file system. See the + Mount Points section of + zfsconcepts(7) for more information on how this property + is used. +

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none, or if they were mounted before the property + was changed. In addition, any shared file systems are unshared and + shared in the new location.
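For example (names illustrative):

zfs set mountpoint=/export/home pool/home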

+
+
=on|off
+
Controls whether the file system should be mounted with + nbmand (Non-blocking mandatory locks). This is used for + SMB clients. Changes to this property only take effect when the file + system is umounted and remounted. Support for these locks is scarce and + not described by POSIX.
+
=on|off
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux and + FreeBSD file systems. On these platforms the + property is on by default. Set to off + to disable overlay mounts for consistency with OpenZFS on other + platforms.
+
=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is + set to all, then both user data and metadata is cached. + If this property is set to none, then neither user data + nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.
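For example (dataset name illustrative):

zfs set quota=50G pool/home/alice
zfs get quota,used,available pool/home/alice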

+
+
=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(7)).
+
userquota@user=size|none
+
Limits the amount of space consumed by the specified user. User space + consumption is identified by the + user + property. +

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace command + for more information.

+

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + userquota privilege with zfs + allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@... properties are not + displayed by zfs get + all. The user's name must be appended after the + @ symbol, using one of the following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.
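For example, assuming an illustrative POSIX user named joe:

zfs set userquota@joe=25G pool/home
zfs get userquota@joe,userused@joe pool/home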

+
+
userobjquota@user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
groupquota@group=size|none
+
Limits the amount of space consumed by the specified group. Group space + consumption is identified by the + group + property. +

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
groupobjquota@group=size|none
+
The groupobjquota is similar to groupquota but it limits the number of objects a group can consume. Please refer to userobjused for more information about how objects are counted.
+
projectquota@project=size|none
+
Limits the amount of space consumed by the specified project. Project + space consumption is identified by the + project + property. Please refer to projectused for more + information about how project is identified and set/changed. +

The root user, or a user who has been granted the + projectquota privilege with zfs + allow, can access all projects' quota.

+
+
projectobjquota@project=size|none
+
The projectobjquota is similar to + projectquota but it limits number of objects a project + can consume. Please refer to userobjused for more + information about how objects are counted.
+
=on|off
+
Controls whether this dataset can be modified. The default value is off. The values on and off are equivalent to the ro and rw mount options.

This property can also be referred to by its + shortened column name, + .

+
+
=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.
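For example (a sketch; the dataset name and record size are illustrative), a file system for a database doing 16K random I/O could be created with:

zfs create -o recordsize=16K pool/db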

+

The size specified must be a power of two greater than or equal to 512B and less than or equal to 128kB. If the large_blocks feature is enabled on the pool, the size may be up to 1MB. See zpool-features(7) for details on ZFS feature flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.

+

This property can also be referred to by its + shortened column name, + .

+
+
=all|most|some|none
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 1000 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

When set to some, ZFS stores an extra copy + of only critical metadata. This can improve file create performance + since less metadata needs to be written. If a single on-disk block is + corrupt, at worst a single user file can be lost.

+

When set to none, ZFS does not store any + copies of metadata redundantly. If a single on-disk block is corrupt, an + entire dataset can be lost.

+

The default value is all.

+
+
=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
=size|none|auto
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

If refreservation is set to + auto, a volume is thick provisioned (or "not + sparse"). refreservation=auto + is only supported on volumes. See volsize in the + Native Properties section + for more information about sparse volumes.

+

This property can also be referred to by its + shortened column name, + .

+
+
=on|off
+
Controls the manner in which the access time is updated when atime=on is set. Turning this property on causes the access time to be updated relative to the modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time or if the existing access time hasn't been updated within the past 24 hours. The default value is off. The values on and off are equivalent to the relatime and norelatime mount options.
+
=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its + shortened column name, + .

+
+
=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property + is set to all, then both user data and metadata is + cached. If this property is set to none, then neither + user data nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
=on|off
+
Controls whether the setuid bit is respected for the file system. The default value is on. The values on and off are equivalent to the suid and nosuid mount options.
+
=on|off|opts
+
Controls whether the file system is shared by using Samba USERSHARES, and what options are to be used. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the net(8) command is invoked to create a USERSHARE.

Because SMB shares requires a resource name, a unique resource + name is constructed from the dataset name. The constructed name is a + copy of the dataset name except that the characters in the dataset name, + which would be invalid in the resource name, are replaced with + underscore (_) characters. Linux does not currently support additional + options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) + "Everyone:F" ("F" stands for "full + permissions", i.e. read and write permissions) and no guest access + (which means Samba must be able to authenticate a real user, system + passwd/shadow, LDAP or smbpasswd based) by default. This means that any + additional access control (disallow specific user specific access etc) + must be done on the underlying file system.

+
+
=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are + to be used. A file system with a sharenfs property of + off is managed with the exportfs(8) + command and entries in the /etc/exports file. + Otherwise, the file system is automatically shared and unshared with the + zfs share and + zfs unshare commands. If + the property is set to on, the dataset is shared using + the default options: +
sec=sys,rw,crossmnt,no_subtree_check
+

Please note that the options are comma-separated, unlike those + found in exports(5). This is done to negate the need + for quoting, as well as to make parsing with scripts easier.

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.
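For example (dataset name illustrative; the second command mirrors the default option string shown above):

zfs set sharenfs=on pool/export
zfs set sharenfs=sec=sys,rw,crossmnt,no_subtree_check pool/export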

+
+
=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
+
=hidden|visible
+
Controls whether the volume snapshot devices under /dev/zvol/⟨pool⟩ are hidden or visible. The default value is hidden.
+
=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section of + zfsconcepts(7). The default value is + hidden.
+
=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX-specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
+
=N|current
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
volsize=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse volume" (also + known as "thin provisioned") can be created by specifying the + -s option to the zfs + create -V command, or by + changing the value of the refreservation property (or + reservation property on pool version 8 or earlier) + after the volume has been created. A "sparse volume" is a + volume where the value of refreservation is less than + the size of the volume plus the space required to store its metadata. + Consequently, writes to a sparse volume can fail with + ENOSPC when the pool is low on space. For a + sparse volume, changes to volsize are not reflected in + the refreservation. A volume that is not sparse is + said to be "thick provisioned". A sparse volume can become + thick provisioned by setting refreservation to + auto.
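As a sketch (the volume name and size are placeholders), a sparse volume could be created and later converted back to thick provisioning:
zfs create -s -V 100G pool_name/sparse_vol
zfs set refreservation=auto pool_name/sparse_vol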

+
+
volmode=default|full|geom|dev|none
+
This property specifies how volumes should be exposed to the OS. Setting it to full exposes volumes as fully fledged block devices, providing maximal functionality. The value geom is just an alias for full and is kept for compatibility. Setting it to dev hides its partitions. Volumes with the property set to none are not exposed outside ZFS, but can still be snapshotted, cloned, replicated, and so on, which can be suitable for backup purposes. The value default means that volume exposure is controlled by the system-wide tunable zvol_volmode, where full, dev and none are encoded as 1, 2 and 3 respectively. The default value is full.
+
vscan=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used on Linux.
+
xattr=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two + styles of extended attributes are supported: either directory based or + system attribute based. +

The default value of on enables directory based extended attributes. This style of extended attribute imposes no practical limit on either the size or number of attributes which can be set on a file, although under Linux the getxattr(2) and setxattr(2) system calls limit the maximum size to 64K. This is the most compatible style of extended attribute and is supported by all ZFS implementations.

+

System attribute based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk IO required. Up to + 64K of data may be stored per-file in the space reserved for system + attributes. If there is not enough space available for an extended + attribute then it will be automatically written as a directory based + xattr. System attribute based extended attributes are not accessible on + platforms which do not support the + xattr=sa feature.

+

The use of system attribute based xattrs is strongly + encouraged for users of SELinux or POSIX ACLs. Both of these features + heavily rely on extended attributes and benefit significantly from the + reduced access time.
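A minimal illustration of switching to system attribute based xattrs, assuming a placeholder dataset name:
zfs set xattr=sa pool_name/dataset_name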

+

The values on and off are equivalent to the xattr and noxattr mount options.

+
+
jailed=off|on
+
Controls whether the dataset is managed from a jail. See + zfs-jail(8) for more information. Jails are a + FreeBSD feature and are not relevant on other + platforms. The default value is off.
+
zoned=on|off
+
Controls whether the dataset is managed from a non-global zone. Zones are + a Solaris feature and are not relevant on other platforms. The default + value is off.
+
+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.
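For example (the dataset name and chosen values are illustrative only), the three creation-time properties discussed below might be specified together when the file system is created:
zfs create -o casesensitivity=mixed -o normalization=formD -o utf8only=on pool_name/smb_share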

+
+
casesensitivity=sensitive|insensitive|mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".

+
+
normalization=none|formC|formD|formKC|formKD
+
Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized only as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+
utf8only=on|off
+
Indicates whether the file system should reject file names that include characters that are not present in the UTF-8 character code set. If this property is explicitly set to off, the normalization property must either not be explicitly set or be set to none. The default value for the utf8only property is off. This property cannot be changed after the file system is created.
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.

+
+
+

+

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
+
+
atime/noatime
+
+
auto/noauto
+
+
dev/nodev
+
+
exec/noexec
+
+
ro/rw
+
+
relatime/norelatime
+
+
suid/nosuid
+
+
xattr/noxattr
+
+
mand/nomand
+
context=
+
context=
+
fscontext=
+
fscontext=
+
defcontext=
+
defcontext=
+
rootcontext=
+
rootcontext=
+
+
+

In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The nosuid option is an alias for nodevices,nosetuid. These properties are reported as "temporary" by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.
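As an illustrative sketch, a dataset might be temporarily mounted read-only without changing its stored readonly property (the dataset name is a placeholder):
zfs mount -o ro pool_name/dataset_name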

+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.
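A hedged example, using a made-up property name in the reversed-DNS style suggested above:
zfs set com.example:backup-policy=daily pool_name/dataset_name
zfs get com.example:backup-policy pool_name/dataset_name
zfs inherit com.example:backup-policy pool_name/dataset_name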

+
+
+
+ + + + + +
July 21, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/zpool-features.7.html b/man/v2.1/7/zpool-features.7.html new file mode 100644 index 000000000..435ef92cb --- /dev/null +++ b/man/v2.1/7/zpool-features.7.html @@ -0,0 +1,1101 @@ + + + + + + + zpool-features.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.7

+
+ + + + + +
ZPOOL-FEATURES(7)Miscellaneous Information ManualZPOOL-FEATURES(7)
+
+
+

+

zpool-features — + description of ZFS pool features

+
+
+

+

ZFS pool on-disk format versions are specified via "features" which replace the old on-disk format numbers (the last supported on-disk format number is 28). To enable a feature on a pool use the zpool upgrade command, or set the feature@feature-name property to enabled. Please also see the Compatibility feature sets section for information on how sets of features may be enabled together.
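For illustration (the pool name is a placeholder; async_destroy is used only as a sample feature), a single feature could be enabled without upgrading the whole pool:
# zpool set feature@async_destroy=enabled pool_name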

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

Since most features can be enabled independently of each other, + the on-disk format of the pool is specified by the set of all features + marked as active on the pool. If the pool was created by + another software version this set may include unsupported features.

+
+

+

Every feature has a GUID of the form + com.example:feature-name. The + reversed DNS name ensures that the feature's GUID is unique across all ZFS + implementations. When unsupported features are encountered on a pool they + will be identified by their GUIDs. Refer to the documentation for the ZFS + implementation that created the pool for information about those + features.

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its GUID which follows the + ‘:’ (i.e. + com.example:feature-name would + have the short name feature-name), however a feature's + short name may differ across ZFS implementations if following the convention + would result in name conflicts.

+
+
+

+

Features can be in one of three states:

+
+
+
This feature's on-disk format changes are in effect on the pool. Support + for this feature is required to import the pool in read-write mode. If + this feature is not read-only compatible, support is also required to + import the pool in read-only mode (see + Read-only + compatibility).
+
+
An administrator has marked this feature as enabled on the pool, but the + feature's on-disk format changes have not been made yet. The pool can + still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support + returning to the enabled state after becoming + active. See feature-specific documentation for + details.
+
+
This feature's on-disk format changes have not been made and will not be + made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they + have been enabled.
+
+

The state of supported features is exposed through pool properties + of the form feature@short-name.

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as “read-only compatible”. If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly + property during import (see zpool-import(8) for details on + importing pools).
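A sketch of such an import, with a placeholder pool name:
# zpool import -o readonly=on pool_name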

+
+
+

+

For each unsupported feature enabled on an imported pool, a pool property named unsupported@feature-name will indicate why the import was allowed despite the unsupported feature. Possible values for this property are:

+
+
+
The feature is in the enabled state and therefore the + pool's on-disk format is still compatible with software that does not + support this feature.
+
+
The feature is read-only compatible and the pool has been imported in + read-only mode.
+
+
+
+

+

Some features depend on other features being enabled in order to + function. Enabling a feature will automatically enable any features it + depends on.

+
+
+

+

It is sometimes necessary for a pool to maintain compatibility with a specific on-disk format, by enabling and disabling particular features. The compatibility feature facilitates this by allowing feature sets to be read from text files. When set to off (the default), compatibility feature sets are disabled (i.e. all features are enabled); when set to legacy, no features are enabled. When set to a comma-separated list of filenames (each filename may either be an absolute path, or relative to /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d), the lists of requested features are read from those files, separated by whitespace and/or commas. Only features present in all files are enabled.

+

Simple sanity checks are applied to the files: they must be + between 1B and 16kB in size, and must end with a newline character.

+

The requested features are applied when a pool is created using zpool create -o compatibility= and control which features are enabled when using zpool upgrade. zpool status will not show a warning about disabled features which are not part of the requested feature set.

+

The special value legacy prevents any features + from being enabled, either via zpool + upgrade or zpool + set + feature@feature-name=enabled. + This setting also prevents pools from being upgraded to newer on-disk + versions. This is a safety measure to prevent new features from being + accidentally enabled, breaking compatibility.

+

By convention, compatibility files in + /usr/share/zfs/compatibility.d are provided by the + distribution, and include feature sets supported by important versions of + popular distributions, and feature sets commonly supported at the start of + each year. Compatibility files in + /etc/zfs/compatibility.d, if present, will take + precedence over files with the same name in + /usr/share/zfs/compatibility.d.

+

If an unrecognized feature is found in these files, an error + message will be shown. If the unrecognized feature is in a file in + /etc/zfs/compatibility.d, this is treated as an + error and processing will stop. If the unrecognized feature is under + /usr/share/zfs/compatibility.d, this is treated as a + warning and processing will continue. This difference is to allow + distributions to include features which might not be recognized by the + currently-installed binaries.

+

Compatibility files may include comments: any text from + ‘#’ to the end of the line is ignored.

+

Example:

+
+
example# cat /usr/share/zfs/compatibility.d/grub2
+# Features which are supported by GRUB2
+async_destroy
+bookmarks
+embedded_data
+empty_bpobj
+enabled_txg
+extensible_dataset
+filesystem_limits
+hole_birth
+large_blocks
+lz4_compress
+spacemap_histogram
+
+example# zpool create -o compatibility=grub2 bootpool vdev
+
+

See zpool-create(8) and + zpool-upgrade(8) for more information on how these + commands are affected by feature sets.

+
+
+
+

+

The following features are supported on this system:

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables support for separate allocation + classes.

+

This feature becomes active when a dedicated + allocation class vdev (dedup or special) is created with the + zpool create + or zpool + add commands. With + device removal, it can be returned to the enabled + state if all the dedicated allocation class vdevs are removed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Destroying a file system requires traversing all of its data + in order to return its used space to the pool. Without + async_destroy, the file system is not fully removed + until all space has been reclaimed. If the destroy operation is + interrupted by a reboot or power outage, the next attempt to open the + pool will need to complete the destroy operation synchronously.

+

When async_destroy is enabled, the file + system's data will be reclaimed by a background process, allowing the + destroy operation to complete without traversing the entire file system. + The background process is able to resume interrupted destroys after the + pool has been opened, eliminating the need to finish interrupted + destroys as part of the open operation. The amount of space remaining to + be reclaimed by the background process is available through the + freeing property.

+

This feature is only active while + freeing is non-zero.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables use of the zfs + bookmark command.

+

This feature is active while any bookmarks exist in the pool. All bookmarks in the pool can be listed by running zfs list -t bookmark -r poolname.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of larger + bookmarks which are needed for other features in ZFS.

+

This feature becomes active when a v2 + bookmark is created and will be returned to the + enabled state when all v2 bookmarks are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset, bookmark_v2
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables additional bookmark accounting fields, enabling the written#bookmark property (space written since a bookmark) and estimates of send stream sizes for incrementals from bookmarks.

+

This feature becomes active when a bookmark + is created and will be returned to the enabled state + when all bookmarks with these fields are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the ability for the + zpool attach and + zpool replace commands + to perform sequential reconstruction (instead of healing reconstruction) + when resilvering.

+

Sequential reconstruction resilvers a device in LBA order + without immediately verifying the checksums. Once complete, a scrub is + started, which then verifies the checksums. This approach allows full + redundancy to be restored to the pool in the minimum amount of time. + This two-phase approach will take longer than a healing resilver when + the time to verify the checksums is included. However, unless there is + additional pool damage, no checksum errors should be reported by the + scrub. This feature is incompatible with raidz configurations. This + feature becomes active while a sequential resilver is + in progress, and returns to enabled when the resilver + completes.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the zpool + remove command to remove top-level vdevs, + evacuating them to reduce the total size of the pool.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables use of the draid vdev + type. dRAID is a variant of raidz which provides integrated distributed + hot spares that allow faster resilvering while retaining the benefits of + raidz. Data, parity, and spare space are organized in redundancy groups + and distributed evenly over all of the devices.

+

This feature becomes active when creating a + pool which uses the draid vdev type, or when adding a + new draid vdev to an existing pool.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Edon-R hash algorithm for checksum, including for nopwrite (if compression is also enabled, an overwrite of a block whose checksum matches the data being written will be ignored). In an abundance of caution, Edon-R requires verification when used with dedup: zfs set dedup=edonr,verify (see zfs-set(8)).

+

Edon-R is a very high-performance hash algorithm that was part + of the NIST SHA-3 competition. It provides extremely high hash + performance (over 350% faster than SHA-256), but was not selected + because of its unsuitability as a general purpose secure hash algorithm. + This implementation utilizes the new salted checksumming functionality + in ZFS, which means that the checksum is pre-seeded with a secret + 256-bit random key (stored on the pool) before being fed the data block + to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the edonr feature is set to + enabled, the administrator can turn on the + edonr checksum on any dataset using + zfs set + checksum=edonr + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + edonr, and will return to being + enabled once all filesystems that have ever had their + checksum set to edonr are destroyed.

+

FreeBSD does not support the + edonr feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 + bytes or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of + highly-compressible blocks are stored in the block "pointer" + itself (a misnomer in this case, as it contains the compressed data, + rather than a pointer to its location on disk). Thus the space of the + block (one sector, typically 512B or 4kB) is saved, and no additional + I/O is needed to read and write the data block. This + feature becomes active as soon + as it is enabled and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also + reduces the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobjs) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobjs are empty. This + feature allows us to create each bpobj on-demand, thus eliminating the + empty bpobjs.

+

This feature is active while there are any + filesystems, volumes, or snapshots which were created after enabling + this feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Once this feature is enabled, ZFS records the transaction + group number in which new features are enabled. This has no user-visible + impact, but other features may depend on this feature.

+

This feature becomes active +
+ as soon as it is enabled and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark_v2, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of natively + encrypted datasets.

+

This feature becomes active when an + encrypted dataset is created and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first + dependent feature uses it, and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables filesystem and snapshot limits. These + limits can be used to control how many filesystems and/or snapshots can + be created at the point in the tree on which the limits are set.

+

This feature is active once either of the + limit properties has been set on a dataset. Once activated the feature + is never deactivated.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
enabled_txg
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature has/had bugs, the result of which is that, if you do a zfs send -i (or -R, since it uses -i) from an affected dataset, the receiving party will not see any checksum or other errors, but the resulting destination snapshot will not match the source. Its use by zfs send -i has been disabled by default (see send_holes_without_birth_time in zfs(4)).

+

This feature improves performance of incremental sends + (zfs send + -i) and receives for objects with many holes. + The most common case of hole-filled objects is zvols.

+

An incremental send stream from snapshot A + to snapshot B contains + information about every block that changed between A + and B. Blocks which did not + change between those snapshots can be identified and omitted from the + stream using a piece of metadata called the "block birth + time", but birth times are not recorded for holes (blocks filled + only with zeroes). Since holes created after A + cannot be distinguished from holes created + before A, information about every hole in the + entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. + However, when incrementally replicating filesystems or zvols with many + holes (for example a zvol formatted with another filesystem) a lot of + time will be spent sending and receiving unnecessary information about + holes that already exist on the receiving side.

+

Once the hole_birth feature has been enabled + the block birth times of all new holes will be recorded. Incremental + sends between snapshots created after this feature is enabled will use + this new metadata to avoid sending information about holes that already + exist on the receiving side.
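As an illustrative sketch only (snapshot and dataset names below are placeholders), an incremental send between two snapshots taken after the feature was enabled looks like:
# zfs send -i pool/fs@snapA pool/fs@snapB | zfs receive backup/fs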

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the record size on a dataset to be set + larger than 128kB.

+

This feature becomes active once a dataset + contains a file with a block size larger than 128kB, and will return to + being enabled once all filesystems that have ever had + their recordsize larger than 128kB are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the size of dnodes in a dataset to be set larger than 512B. This feature becomes active once a dataset contains an object with a dnode larger than 512B, which occurs as a result of setting the dnodesize dataset property to a value other than legacy. The feature will return to being enabled once all filesystems that have ever contained a dnode larger than 512B are destroyed. Large dnodes allow more data to be stored in the bonus buffer, thus potentially improving performance by avoiding the use of spill blocks.
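For illustration (the dataset name is a placeholder), the dnodesize property mentioned above might be set as:
zfs set dnodesize=auto pool_name/dataset_name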

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows clones to be deleted faster than the traditional method when a large number of random/sparse writes have been made to the clone. All blocks allocated and freed after a clone is created are tracked by the clone's livelist, which is referenced during the deletion of the clone. The feature is activated when a clone is created and remains active until all clones have been destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
com.delphix:spacemap_v2
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature improves performance for heavily-fragmented + pools, especially when workloads are heavy in random-writes. It does so + by logging all the metaslab changes on a single spacemap every TXG + instead of scattering multiple writes to all the metaslab spacemaps.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

lz4 is a high-performance real-time + compression algorithm that features significantly faster compression and + decompression as well as a higher compression ratio than the older + lzjb compression. Typically, lz4 + compression is approximately 50% faster on compressible data and 200% + faster on incompressible data than lzjb. It is also + approximately 80% faster on decompression, while giving approximately a + 10% better compression ratio.

+

When the lz4_compress feature is set to + enabled, the administrator can turn on + lz4 compression on any dataset on the pool using the + zfs-set(8) command. All newly written metadata will be + compressed with the lz4 algorithm.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored + or raidz configuration.

+

When the multi_vdev_crash_dump feature is + set to enabled, the administrator can use + dumpadm(1M) to configure a dump device on a pool + comprised of multiple vdevs.

+

Under FreeBSD and Linux this feature + is unused, but registered for compatibility. New pools created on these + systems will have the feature enabled but will never + transition to active, as this functionality is not + required for crash dump support. Existing pools where this feature is + active can be imported.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
device_removal
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature is an enhancement of + device_removal, which will over time reduce the memory + used to track removed devices. When indirect blocks are freed or + remapped, we note that their part of the indirect mapping is + "obsolete" – no longer needed.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account the spaces and + objects usage information against the project identifier (ID).

+

The project ID is an object-based attribute. When upgrading an existing filesystem, objects without a project ID will be assigned a zero project ID. When this feature is enabled, newly created objects inherit their parent directories' project ID if the parent's inherit flag is set (via chattr +P or zfs project -s|-C). Otherwise, the new object's project ID will be zero. An object's project ID can be changed at any time by the owner (or privileged user) via chattr -p prjid or zfs project -p prjid.

+

This feature will become active as soon as + it is enabled and will never return to being disabled. + Each filesystem will be upgraded automatically when + remounted, or when a new file is created under that filesystem. The + upgrade can also be triggered on filesystems via + zfs set + version=current + fs. The upgrade process runs in + the background and may take a while to complete for filesystems + containing large amounts of files.
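A hedged sketch of the commands mentioned above, using a placeholder path and project ID:
# chattr -p 1000 +P /pool_name/fs/dir
# zfs project -s -p 1000 /pool_name/fs/dir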

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmarks, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of redacted + zfs sends, which create + redaction bookmarks storing the list of blocks redacted by the send that + created them. For more information about redacted sends, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the receiving of redacted zfs send streams, which create redacted datasets when received. These datasets are missing some of their blocks, and so cannot be safely mounted, and their contents cannot be safely read. For more information about redacted receives, see zfs-send(8).

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to postpone new resilvers if an + existing one is already in progress. Without this feature, any new + resilvers will cause the currently running one to be immediately + restarted from the beginning.

+

This feature becomes active once a resilver + has been deferred, and returns to being enabled when + the deferred resilver begins.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the SHA-512/256 truncated hash + algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit + arithmetic of SHA-512 provides an approximate 50% performance boost over + SHA-256 on 64-bit hardware and is thus a good minimum-change replacement + candidate for systems where hash performance is important, but these + systems cannot for whatever reason utilize the faster + skein and + edonr algorithms.

+

When the sha512 feature is set to + enabled, the administrator can turn on the + sha512 checksum on any dataset using + zfs set + checksum=sha512 + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + sha512, and will return to being + enabled once all filesystems that have ever had their + checksum set to sha512 are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm + that was a finalist in the NIST SHA-3 competition. It provides a very + high security margin and high performance on 64-bit hardware (80% faster + than SHA-256). This implementation also utilizes the new salted + checksumming functionality in ZFS, which means that the checksum is + pre-seeded with a secret 256-bit random key (stored on the pool) before + being fed the data block to be checksummed. Thus the produced checksums + are unique to a given pool, preventing hash collision attacks on systems + with dedup.

+

When the skein feature is set to + enabled, the administrator can turn on the + skein checksum on any dataset using + zfs set + checksum=skein + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + skein, and will return to being + enabled once all filesystems that have ever had their + checksum set to skein are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, it will be activated when a new space map object is created, or an existing space map is upgraded to the new format, and never returns back to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the use of the new space map encoding + which consists of two words (instead of one) whenever it is + advantageous. The new encoding allows space maps to represent large + regions of space more efficiently on-disk while also increasing their + maximum addressable offset.

+

This feature becomes active once it is + enabled, and never returns back to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account the object usage + information by user and group.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled. + Each filesystem will be upgraded automatically when + remounted, or when a new file is created under that filesystem. The + upgrade can also be triggered on filesystems via + zfs set + version=current + fs. The upgrade process runs in + the background and may take a while to complete for filesystems + containing large amounts of files.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the zpool + checkpoint command that can checkpoint the state + of the pool at the time it was issued and later rewind back to it or + discard it.

+

This feature becomes active when the + zpool checkpoint command + is used to checkpoint the pool. The feature will only return back to + being enabled when the pool is rewound or the + checkpoint has been discarded.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

zstd is a high-performance compression algorithm that features a combination of high compression ratios and high speed. Compared to gzip, zstd offers slightly better compression at much higher speeds. Compared to lz4, zstd offers much better compression while being only modestly slower. Typically, zstd compression speed ranges from 250 to 500 MB/s per thread and decompression speed is over 1 GB/s per thread.

+

When the zstd feature is set to + enabled, the administrator can turn on + zstd compression of any dataset using + zfs set + compress=zstd + dset (see zfs-set(8)). This + feature becomes active once a + compress property has been set to + zstd, and will return to being + enabled once all filesystems that have ever had their + compress property set to zstd are + destroyed.

+
+
+
+
+

+

zpool(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/zpoolconcepts.7.html b/man/v2.1/7/zpoolconcepts.7.html new file mode 100644 index 000000000..298edc24c --- /dev/null +++ b/man/v2.1/7/zpoolconcepts.7.html @@ -0,0 +1,602 @@ + + + + + + + zpoolconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolconcepts.7

+
+ + + + + +
ZPOOLCONCEPTS(7)Miscellaneous Information ManualZPOOLCONCEPTS(7)
+
+
+

+

zpoolconcepts — + overview of ZFS storage pools

+
+
+

+
+

+

A "virtual device" describes a single device or a + collection of devices organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system on which it + resides. A file must be specified by a full path.
+
+
A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with N disks of size X can hold X bytes and can withstand N-1 devices failing without losing data.
+
raidz, raidz1, raidz2, raidz3
+
A variation on RAID-5 that allows for better distribution of parity and + eliminates the RAID-5 "write hole" (in which data and parity + become inconsistent after a power loss). Data and parity is striped across + all disks within a raidz group. +

A raidz group can have single, double, or triple parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N disks of size X with P parity disks can hold approximately (N-P)×X bytes and can withstand P devices failing without losing data. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance.

+
+
draid, draid1, draid2, draid3
+
A variant of raidz that provides integrated distributed hot spares which + allows for faster resilvering while retaining the benefits of raidz. A + dRAID vdev is constructed from multiple internal raidz groups, each with + D data devices and + P parity devices. These groups + are distributed over all of the children in order to fully utilize the + available disk performance. +

Unlike raidz, dRAID uses a fixed stripe width (padding as necessary with zeros) to allow fully sequential resilvering. This fixed stripe width significantly affects both usable capacity and IOPS. For example, with the default D=8 and 4 KiB disk sectors the minimum allocation size is 32 KiB. If using compression, this relatively large allocation size can reduce the effective compression ratio. When using ZFS volumes and dRAID, the default of the volblocksize property is increased to account for the allocation size. If a dRAID pool will hold a significant amount of small blocks, it is recommended to also add a mirrored special vdev to store those blocks.

+

In regard to I/O, performance is similar to raidz since for any read all D data disks must be accessed. Delivered random IOPS can be reasonably approximated as floor((N-S)/(D+P)) × single_drive_IOPS.

+

Like raidz, a dRAID can have single-, double-, or triple-parity. The draid1, draid2, and draid3 types can be used to specify the parity level. The draid vdev type is an alias for draid1.

+

A dRAID with N disks of size X, D data disks per redundancy group, P parity level, and S distributed hot spares can hold approximately (N-S)/(D+P) × D × X bytes and can withstand P devices failing without losing data.

+
+
draid[parity][:data][:children][:spares]
+
A non-default dRAID configuration can be specified by appending one or + more of the following optional arguments to the draid + keyword: +
+
parity
+
The parity level (1-3).
+
data
+
The number of data devices per redundancy group. In general, a smaller value of D will increase IOPS, improve the compression ratio, and speed up resilvering at the expense of total usable capacity. Defaults to 8, unless N-P-S is less than 8.
+
children
+
The expected number of children. Useful as a cross-check when listing + a large number of devices. An error is returned when the provided + number of children differs.
+
spares
+
The number of distributed hot spares. Defaults to zero.
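Putting these optional arguments together, a hypothetical layout (device names and counts are placeholders, not a recommendation) might be created as:
# zpool create pool_name draid2:4d:11c:1s sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk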
+
+
+
+
A pseudo-vdev which keeps track of available hot spares for a pool. For + more information, see the Hot Spares + section.
+
+
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+
A device dedicated solely for deduplication tables. The redundancy of this + device should match the redundancy of the other normal devices in the + pool. If more than one dedup device is specified, then allocations are + load-balanced between those devices.
+
+
A device dedicated solely for allocating various kinds of internal + metadata, and optionally small file blocks. The redundancy of this device + should match the redundancy of the other normal devices in the pool. If + more than one special device is specified, then allocations are + load-balanced between those devices. +

For more information on special allocations, see the + Special Allocation + Class section.

+
+
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested, so a mirror or raidz virtual + device can only contain files or disks. Mirrors of mirrors (or other + combinations) are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. Keywords like mirror + and raidz are used to distinguish + where a group ends and another begins. For example, the following creates a + pool with two root vdevs, each a mirror of two disks:

+
# zpool + create mypool + mirror sda sdb + mirror sdc sdd
+
+
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning.

+

The health of the top-level vdev, such as a mirror or raidz + device, is potentially impacted by the state of its associated vdevs, or + component devices. A top-level vdev or component device is in one of the + following states:

+
+
+
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors exceeds acceptable levels and the device + is degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
  • +
  • The number of I/O errors exceeds acceptable levels. The device could + not be marked as faulted because there are insufficient replicas to + continue functioning.
  • +
+
+
+
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected + values.
  • +
  • The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
  • +
+
+
+
The device was explicitly taken offline by the + zpool offline + command.
+
+
The device is online and functioning.
+
+
The device was physically removed while the system was running. Device + removal detection is hardware-dependent and may not be supported on all + platforms.
+
+
The device could not be opened. If a pool is imported when a device was + unavailable, then the device will be identified by a unique identifier + instead of its path since the path was never correct in the first + place.
+
+

Checksum errors represent events where a disk returned data that + was expected to be correct, but was not. In other words, these are instances + of silent data corruption. The checksum errors are reported in + zpool status and + zpool events. When a block + is stored redundantly, a damaged block may be reconstructed (e.g. from raidz + parity or a mirrored copy). In this case, ZFS reports the checksum error + against the disks that contained damaged data. If a block is unable to be + reconstructed (e.g. due to 3 disks being damaged in a raidz2 group), it is + not possible to determine which disks were silently corrupted. In this case, + checksum errors are reported for all disks on which the block is stored.

+

If a device is removed and later re-attached to the system, ZFS attempts to online the device automatically. Device attachment detection is hardware-dependent and might not be supported on all platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool, but when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a spare vdev with any + number of devices. For example,

+
# zpool + create pool + mirror sda sdb spare + sdc sdd
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again if another device + fails.
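As a sketch (device names are placeholders), a spare might later be added to or removed from an existing pool:
# zpool add pool spare sde
# zpool remove pool sde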

+

If a pool has a shared spare that is currently being used, the pool cannot be exported, since other pools may use this shared spare, which may lead to potential data corruption.

+

Shared spares add some risk. If the pools are imported on + different hosts, and both pools suffer a device failure at the same time, + both could attempt to use the spare at the same time. This may not be + detected, resulting in data corruption.

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

The draid vdev type provides distributed hot + spares. These hot spares are named after the dRAID vdev they're a part of + (draid1-2-3 + specifies spare 3 + of vdev 2, + which is a single parity dRAID) and may only be used + by that dRAID vdev. Otherwise, they behave the same as normal hot + spares.

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
# zpool + create pool sda sdb + log sdc
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.

+

Log devices can be added, replaced, attached, detached and + removed. In addition, log devices are imported and exported as part of the + pool that contains them. Mirrored devices can be removed by specifying the + top-level mirror vdev.
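A hedged example of adding a mirrored log device to an existing pool (device names are placeholders):
# zpool add pool log mirror sdd sde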

+
+
+

+

Devices can be added to a storage pool as "cache + devices". These devices provide an additional layer of caching between + main memory and disk. For read-heavy workloads, where the working set size + is much larger than what can be cached in main memory, using cache devices + allows much more of this working set to be served from low latency media. + Using cache devices provides the greatest performance improvement for random + read-workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
# zpool + create pool sda sdb + cache sdc sdd
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.

+

The content of the cache devices is persistent across reboots and restored asynchronously when importing the pool in L2ARC (persistent L2ARC). This can be disabled by setting l2arc_rebuild_enabled=0. For cache devices smaller than 1 GiB, we do not write the metadata structures required for rebuilding the L2ARC, in order not to waste space. This can be changed with l2arc_rebuild_blocks_min_l2size. The cache device header (512 bytes) is updated even if no metadata structures are written. Setting l2arc_headroom=0 will result in scanning the full-length ARC lists for cacheable content to be written in L2ARC (persistent ARC). If a cache device is added with zpool add, its label and header will be overwritten and its contents are not going to be restored in L2ARC, even if the device was previously part of the pool. If a cache device is onlined with zpool online, its contents will be restored in L2ARC. This is useful in case of memory pressure, where the contents of the cache device are not fully restored in L2ARC. The user can off- and online the cache device when there is less memory pressure in order to fully restore its contents to L2ARC.

+
+
+

+

Before starting critical procedures that include destructive + actions (like zfs destroy), + an administrator can checkpoint the pool's state and in the case of a + mistake or failure, rewind the entire pool back to the checkpoint. + Otherwise, the checkpoint can be discarded when the procedure has completed + successfully.

+

A pool checkpoint can be thought of as a pool-wide snapshot and + should be used with care as it contains every part of the pool's state, from + properties to vdev configuration. Thus, certain operations are not allowed + while a pool has a checkpoint. Specifically, vdev removal/attach/detach, + mirror splitting, and changing the pool's GUID. Adding a new vdev is + supported, but in the case of a rewind it will have to be added again. + Finally, users of this feature should keep in mind that scrubs in a pool + that has a checkpoint do not repair checkpointed data.

+

To create a checkpoint for a pool:

+
# zpool + checkpoint pool
+

To later rewind to its checkpointed state, you need to first + export it and then rewind it during import:

+
# zpool + export pool
+
# zpool + import --rewind-to-checkpoint + pool
+

To discard the checkpoint from a pool:

+
# zpool + checkpoint -d + pool
+

Dataset reservations (controlled by the reservation and refreservation properties) may be unenforceable while a checkpoint exists, because the checkpoint is allowed to consume the dataset's reservation. Finally, data that is part of the checkpoint but has been freed in the current state of the pool won't be scanned during a scrub.

+
+
+

+

Allocations in the special class are dedicated to specific block + types. By default this includes all metadata, the indirect blocks of user + data, and any deduplication tables. The class can also be provisioned to + accept small file blocks.

+

A pool must always have at least one normal + (non-dedup/-special) vdev before other + devices can be assigned to the special class. If the + special class becomes full, then allocations intended for + it will spill back into the normal class.

+

Deduplication tables can be excluded from the special class by unsetting the zfs_ddt_data_is_special ZFS module parameter.

+

Inclusion of small file blocks in the special class is opt-in. Each dataset can control the size of small file blocks allowed in the special class by setting the special_small_blocks property to nonzero. See zfsprops(7) for more info on this property.
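As an illustrative sketch (device names, dataset name, and the block size threshold are placeholders), a special vdev might be added and small file blocks opted in as follows:
# zpool add pool special mirror nvme0n1 nvme1n1
# zfs set special_small_blocks=32K pool/dataset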

+
+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/zpoolprops.7.html b/man/v2.1/7/zpoolprops.7.html new file mode 100644 index 000000000..96b188e1c --- /dev/null +++ b/man/v2.1/7/zpoolprops.7.html @@ -0,0 +1,457 @@ + + + + + + + zpoolprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolprops.7

+
+ + + + + +
ZPOOLPROPS(7)Miscellaneous Information ManualZPOOLPROPS(7)
+
+
+

+

zpoolprops — + properties of ZFS storage pools

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

The following are read-only properties:

+
+
+
Amount of storage used within the pool. See + fragmentation and free for more + information.
+
+
Percentage of pool space used. This property can also be referred to by + its shortened column name, + .
+
+
Amount of uninitialized space within the pool or device that can be used + to increase the total capacity of the pool. On whole-disk vdevs, this is + the space beyond the end of the GPT – typically occurring when a + LUN is dynamically expanded or a disk replaced with a larger one. On + partition vdevs, this is the space appended to the partition after it was + added to the pool – most likely by resizing it in-place. The space + can be claimed for the pool by bringing it online with + + or using zpool online + -e.
+
+
The amount of fragmentation in the pool. As the amount of space + allocated increases, it becomes more difficult to locate + free space. This may result in lower write performance + compared to pools with more unfragmented free space.
+
+
The amount of free space available in the pool. By contrast, the + zfs(8) available property describes + how much new data can be written to ZFS filesystems/volumes. The zpool + free property is not generally useful for this purpose, + and can be substantially more than the zfs available + space. This discrepancy is due to several factors, including raidz parity; + zfs reservation, quota, refreservation, and refquota properties; and space + set aside by + + (see zfs(4) for more information).
+
+
After a file system or snapshot is destroyed, the space it was using is + returned to the pool asynchronously. freeing is the + amount of space remaining to be reclaimed. Over time + freeing will decrease while free + increases.
+
+
Space not released while freeing due to corruption, now + permanently leaked into the pool.
+
+
The current health of the pool. Health can be one of + , + , + , + , + .
+
+
A unique identifier for the pool.
+
+
A unique identifier for the pool. Unlike the guid + property, this identifier is generated every time we load the pool (i.e. + does not persist across imports/exports) and never changes while the pool + is loaded (even if a + + operation takes place).
+
+
Total size of the storage pool.
+
guid
+
Information about unsupported features that are enabled on the pool. See + zpool-features(7) for details.
+
+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of the + data being written. In addition, ZFS reserves some space for internal + accounting that the zfs(8) command takes into account, but + the zpoolprops command does not. For non-full pools + of a reasonable size, these effects should be invisible. For small pools, or + pools that are close to being completely full, these discrepancies may + become more noticeable.
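For example, the read-only properties described above can be inspected with zpool get (a sketch; tank is a placeholder pool name):
# zpool get size,allocated,free,freeing,fragmentation,capacity,health tank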

+

The following property can be set at creation time and import + time:

+
+
+
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+
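For example, an unknown pool can be examined under an alternate root so that its mount points cannot interfere with the running system (a sketch; the pool name and directory are placeholders):
# zpool import -o altroot=/mnt tank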

The following property can be set only at import time:

+
+
=on|off
+
If set to on, the pool will be imported in read-only + mode. This property can also be referred to by its shortened column name, + .
+
+
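For example, to import a pool in read-only mode (a sketch; tank is a placeholder pool name):
# zpool import -o readonly=on tank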

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
+
=ashift
+
Pool sector size exponent, to the power of + (internally + referred to as ashift). Values from 9 to 16, inclusive, + are valid; also, the value 0 (the default) means to auto-detect using the + kernel's block layer and a ZFS internal exception list. I/O operations + will be aligned to the specified size boundaries. Additionally, the + minimum (disk) write size will be set to the specified size, so this + represents a space vs. performance trade-off. For optimal performance, the + pool sector size should be greater than or equal to the sector size of the + underlying disks. The typical case for setting this property is when + performance is important and the underlying disks use 4KiB sectors but + report 512B sectors to the OS (for compatibility reasons); in that case, + set + ashift= + (which is + + = + ). + When set, this property is used as the default hint value in subsequent + vdev operations (add, attach and replace). Changing this value will not + modify any existing vdev, not even on disk replacement; however it can be + used, for instance, to replace a dying 512B sectors disk with a newer 4KiB + sectors device: this will probably result in bad performance but at the + same time could prevent loss of data.
+
=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set + to on, the pool will be resized according to the size of + the expanded device. If the device is part of a mirror or raidz then all + devices within that mirror/raidz group must be expanded before the new + space is made available to the pool. The default behavior is + off. This property can also be referred to by its + shortened column name, + .
+
=on|off
+
Controls automatic device replacement. If set to off, + device replacement must be initiated by the administrator by using the + zpool replace command. If + set to on, any new device, found in the same physical + location as a device that previously belonged to the pool, is + automatically formatted and replaced. The default behavior is + off. This property can also be referred to by its + shortened column name, + . + Autoreplace can also be used with virtual disks (like device mapper) + provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf. + See the vdev_id(8) manual page for more details. + Autoreplace and autoonline require the ZFS Event Daemon be configured and + running. See the zed(8) manual page for more + details.
+
=on|off
+
When set to on space which has been recently freed, and + is no longer allocated by the pool, will be periodically trimmed. This + allows block device vdevs which support BLKDISCARD, such as SSDs, or file + vdevs on which the underlying file system supports hole-punching, to + reclaim unused blocks. The default value for this property is + off. +

Automatic TRIM does not immediately + reclaim blocks after a free. Instead, it will optimistically delay + allowing smaller ranges to be aggregated into a few larger ones. These + can then be issued more efficiently to the storage. TRIM on L2ARC + devices is enabled by setting + .

+

Be aware that automatic trimming of recently freed data blocks + can put significant stress on the underlying storage devices. This will + vary depending on how well the specific device handles these commands. + For lower-end devices it is often possible to achieve most of the + benefits of automatic trimming by running an on-demand (manual) TRIM + periodically using the zpool + trim command.

+
+
=|pool[/dataset]
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the value none + creates a temporary pool that is never cached, and the "" (empty + string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.

+
+
=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
+
=off|legacy|file[,file]…
+
Specifies that the pool maintain compatibility with specific feature sets. + When set to off (or unset), compatibility is disabled + (all features may be enabled); when set to legacy, no + features may be enabled. When set to a comma-separated list of filenames + (each filename may either be an absolute path, or relative to + /etc/zfs/compatibility.d or + /usr/share/zfs/compatibility.d) the lists of + requested features are read from those files, separated by whitespace + and/or commas. Only features present in all files may be enabled.

See zpool-features(7), + zpool-create(8) and zpool-upgrade(8) + for more information on the operation of compatibility feature sets.

+
+
=number
+
This property is deprecated and no longer has any effect.
+
=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
+
+
Blocks all I/O access until the device connectivity is recovered and + the errors are cleared with zpool + clear. This is the default behavior.
+
+
Returns EIO to any new write I/O requests but + allows reads to any of the remaining healthy devices. Any write + requests that have yet to be committed to disk would be blocked.
+
+
Prints out a message to the console and generates a system crash + dump.
+
+
+
feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(7) for details on feature states.
+
=on|off
+
Controls whether information about snapshots associated with this pool is + output when zfs list is + run without the -t option. The default value is + off. This property can also be referred to by its + shortened name, + .
+
=on|off
+
Controls whether a pool activity check should be performed during + zpool import. When a pool + is determined to be active it cannot be imported, even with the + -f option. This property is intended to be used in + failover configurations where multiple hosts have access to a pool on + shared storage. +

Multihost provides protection on import only. It does not + protect against an individual device being used in multiple pools, + regardless of the type of vdev. See the discussion under + zpool create.

+

When this property is on, periodic + writes to storage occur to show the pool is in use. See + + in the zfs(4) manual page. In order to enable this + property each host must set a unique hostid. See + zgenhostid(8) and + spl(4) for additional details. The default value is + off.

+
+
=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
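A few illustrative invocations of the settable properties above (pool and device names are placeholders; ashift=12 corresponds to 4 KiB sectors as described under the ashift property):
# zpool create -o ashift=12 tank mirror sda sdb
# zpool set autotrim=on tank
# zpool set comment="backup pool for host01" tank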
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/fsck.zfs.8.html b/man/v2.1/8/fsck.zfs.8.html new file mode 100644 index 000000000..8cdb8ab5b --- /dev/null +++ b/man/v2.1/8/fsck.zfs.8.html @@ -0,0 +1,289 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
FSCK.ZFS(8)System Manager's ManualFSCK.ZFS(8)
+
+
+

+

fsck.zfsdummy + ZFS filesystem checker

+
+
+

+ + + + + +
fsck.zfs[options] + dataset
+
+
+

+

fsck.zfs is a thin shell wrapper that at + most checks the status of a dataset's container pool. It is installed by + OpenZFS because some Linux distributions expect a fsck helper for all + filesystems.

+

If more than one dataset is specified, each + is checked in turn and the results binary-ored.

+
+
+

+

Ignored.

+
+
+

+

ZFS datasets are checked by running zpool + scrub on the containing pool. An individual ZFS + dataset is never checked independently of its pool, which is unlike a + regular filesystem.

+

However, the fsck(8) interface still + allows it to communicate some errors: if the dataset + is in a degraded pool, then fsck.zfs will return + exit code to indicate + an uncorrected filesystem error.

+

Similarly, if the dataset is in a + faulted pool and has a legacy /etc/fstab record, + then fsck.zfs will return exit code + to indicate a fatal + operational error.
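A minimal usage sketch (the dataset name is a placeholder); the exit status reflects the state of the containing pool as described above:
# fsck.zfs tank/home
# echo $?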

+
+
+

+

fstab(5), fsck(8), + zpool-scrub(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/index.html b/man/v2.1/8/index.html new file mode 100644 index 000000000..e6d46099c --- /dev/null +++ b/man/v2.1/8/index.html @@ -0,0 +1,307 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/mount.zfs.8.html b/man/v2.1/8/mount.zfs.8.html new file mode 100644 index 000000000..14444dbf5 --- /dev/null +++ b/man/v2.1/8/mount.zfs.8.html @@ -0,0 +1,296 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
MOUNT.ZFS(8)System Manager's ManualMOUNT.ZFS(8)
+
+
+

+

mount.zfsmount + ZFS filesystem

+
+
+

+ + + + + +
mount.zfs[-sfnvh] [-o + options] dataset + mountpoint
+
+
+

+

The mount.zfs helper is used by + mount(8) to mount filesystem snapshots and + legacy + ZFS filesystems, as well as by zfs(8) when the + + environment variable is not set. Users should invoke + zfs(8) in most cases.

+

options are handled according + to the section in zfsprops(7), except + for those described below.

+

If /etc/mtab is a regular file and + -n was not specified, it will be updated via + libmount.
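For example, a dataset whose mountpoint property is set to legacy is normally mounted through mount(8), which in turn calls this helper (a sketch; the dataset and directory names are placeholders):
# mount -t zfs tank/legacy /mnt/legacy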

+
+
+

+
+
+
Ignore unknown (sloppy) mount options.
+
+
Do everything except actually executing the system call.
+
+
Never update /etc/mtab.
+
+
Print resolved mount options and parser state.
+
+
Print the usage message.
+
+ zfsutil
+
This private flag indicates that mount(8) is being + called by the zfs(8) command.
+
+
+
+

+

fstab(5), mount(8), + zfs-mount(8)

+
+
+ + + + + +
May 24, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/vdev_id.8.html b/man/v2.1/8/vdev_id.8.html new file mode 100644 index 000000000..258236716 --- /dev/null +++ b/man/v2.1/8/vdev_id.8.html @@ -0,0 +1,321 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
VDEV_ID(8)System Manager's ManualVDEV_ID(8)
+
+
+

+

vdev_idgenerate + user-friendly names for JBOD disks

+
+
+

+ + + + + +
vdev_id-d dev + -c config_file + -g + sas_direct|sas_switch|scsi + -m -p + phys_per_port
+
+
+

+

vdev_id is a udev helper that parses + vdev_id.conf(5) to map a physical path in a storage + topology to a channel name. The channel name is combined with a disk + enclosure slot number to create an alias that reflects the physical location + of the drive. This is particularly useful for tasks such as + replacing failed drives. Slot numbers may also be remapped in case the + default numbering is unsatisfactory. The drive aliases will be created as + symbolic links in /dev/disk/by-vdev.

+

The currently supported topologies are + sas_direct, sas_switch, and + scsi. A multipath mode is supported in which dm-mpath + devices are handled by examining the first running component disk as + reported by the driver. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating + aliases based on existing udev links in the /dev hierarchy using the + configuration + file keyword. See vdev_id.conf(5) for details.
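As an illustrative sketch of the keyword referenced above (assumed here to be alias, per vdev_id.conf(5); the link target is a placeholder), a configuration entry and a manual test run might look like:
alias d1 /dev/disk/by-id/wwn-0x5000c5002de3b9ca
# vdev_id -d /dev/sda -c /etc/zfs/vdev_id.conf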

+
+
+

+
+
+ device
+
The device node to classify, like /dev/sda.
+
+ config_file
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
channels are uniquely identified by a SAS switch port number
+
+
+
+
Only handle dm-multipath devices. If specified, examine the first running + component disk of a dm-multipath device as provided by the driver to + determine the physical path.
+
+ phys_per_port
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id internally uses this value to + determine which HBA or switch port a device is connected to. The default + is .
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zdb.8.html b/man/v2.1/8/zdb.8.html new file mode 100644 index 000000000..ea9d9b495 --- /dev/null +++ b/man/v2.1/8/zdb.8.html @@ -0,0 +1,723 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's ManualZDB(8)
+
+
+

+

zdbdisplay ZFS + storage pool debugging and consistency information

+
+
+

+ + + + + +
zdb[-AbcdDFGhikLMNPsvXYy] + [-e [-V] + [-p path]…] + [-I inflight I/Os] + [-o + var=value]… + [-t txg] + [-U cache] + [-x dumpdir] + [poolname[/dataset | + objset ID]] + [object|range…]
+
+ + + + + +
zdb[-AdiPv] [-e + [-V] [-p + path]…] [-U + cache] + poolname[/dataset + | objset ID] + [object|range…]
+
+ + + + + +
zdb-C [-A] + [-U cache]
+
+ + + + + +
zdb-E [-A] + word0:word1:…:word15
+
+ + + + + +
zdb-l [-Aqu] + device
+
+ + + + + +
zdb-m [-AFLPXY] + [-e [-V] + [-p path]…] + [-t txg] + [-U cache] + poolname [vdev + [metaslab]…]
+
+ + + + + +
zdb-O dataset path
+
+ + + + + +
zdb-r dataset path + destination
+
+ + + + + +
zdb-R [-A] + [-e [-V] + [-p path]…] + [-U cache] + poolname + vdev:offset:[lsize/]psize[:flags]
+
+ + + + + +
zdb-S [-AP] + [-e [-V] + [-p path]…] + [-U cache] + poolname
+
+
+

+

The zdb utility displays information about + a ZFS pool useful for debugging and performs some amount of consistency + checking. It is not a general-purpose tool, and its options (and facilities) + may change. It is not an fsck(8) utility.

+

The output of this command in general reflects the on-disk + structure of a ZFS pool, and is inherently unstable. The precise output of + most invocations is not documented; knowledge of ZFS internals is + assumed.

+

If the dataset argument does not + contain any + "" or + "" + characters, it is interpreted as a pool name. The root dataset can be + specified as "pool/".

+

When operating on an imported and active pool it is possible, + though unlikely, that zdb may interpret inconsistent pool data and behave + erratically.

+
+
+

+

Display options:

+
+
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. See + -N for determining if + [poolname[/dataset | + objset ID]] is to use the specified + [dataset | objset ID] as a + string (dataset name) or a number (objset ID) when datasets have numeric + names. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs or object ID ranges are specified, display + information about those specific objects or ranges only.

+

An object ID range is specified in terms of a colon-separated + tuple of the form + ⟨start⟩:⟨end⟩[:⟨flags⟩]. The + fields start and end are + integer object identifiers that denote the lower and upper bounds of the + range. An end value of -1 specifies a range with + no upper bound. The flags field optionally + specifies a set of flags, described below, that control which object + types are dumped. By default, all object types are dumped. A minus sign + (-) negates the effect of the flag that follows it and has no effect + unless preceded by the A flag. For example, the + range 0:-1:A-d will dump all object types except for directories.

+

+
+
+
Dump all objects (this is the default)
+
+
Dump ZFS directory objects
+
+
Dump ZFS plain file objects
+
+
Dump SPA space map objects
+
+
Dump ZAP objects
+
-
+
Negate the effect of next flag
+
+
+
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + * compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
+ word0:word1:…:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
+
Examine the checkpointed state of the pool. Note, the on disk format of + the pool is not reverted to the checkpointed state.
+
+ device
+
Read the vdev labels and L2ARC header from the specified device. + zdb -l will return 0 if a + valid label was found, 1 if an error occurred, and 2 if no valid labels were + found. The presence of an L2ARC header is indicated by a specific sequence + (L2ARC_DEV_HDR_MAGIC). If there is an accounting error in the size or the + number of L2ARC log blocks, zdb + -l will return 1. Each unique configuration is + displayed only once.
+
+ device
+
In addition display label space usage stats. If a valid L2ARC header was + found also display the properties of log blocks used for restoring L2ARC + contents (persistent L2ARC).
+
+ device
+
Display every configuration, unique or not. If a valid L2ARC header was + found also display the properties of log entries in log blocks used for + restoring L2ARC contents (persistent L2ARC). +

If the -q option is also specified, + don't print the labels or the L2ARC header.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
+
Disable leak detection and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
+
Display the offset, spacemap, free space of each metaslab, all the log + spacemaps and their obsolete entry statistics.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Same as -d but force zdb to interpret the + [dataset | objset ID] in + [poolname[/dataset | + objset ID]] as a numeric objset ID.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
+ dataset path destination
+
Copy the specified path inside of the + dataset to the specified destination. Specified + path must be relative to the root of + dataset. This option can be combined with + -v for increasing verbosity.
+
+ poolname + vdev:offset:[lsize/]psize[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple of + vdev (an integer vdev identifier), + offset (the offset within the vdev), and + size (the physical size, or logical size / + physical size) of the block to read, and, optionally, + flags (a set of flags, described below).

+

+
+
+ offset
+
Print block pointer at hex offset
+
+
Calculate and display checksums
+
+
Decompress the block. Set environment variable + ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
Verbose output for guessing compression algorithm
+
+
+
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
+
Display the current uberblock.
+
+

Other options:

+
+
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
+ [-p path]…
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
+ dumpdir
+
All blocks accessed will be copied to files in the specified directory. + The blocks will be placed in sparse files whose name is the same as that + of the file or device read. zdb can then be run on + the generated files. Note that the -bbc flags are + sufficient to access (and thus copy) all metadata on the pool.
+
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
+ inflight I/Os
+
Limit the number of outstanding checksum I/Os to the specified value. The + default value is 200. This option affects the performance of the + -c option.
+
+ var=value …
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
+
Print numbers in an unscaled form more amenable to parsing, e.g. + + rather than + .
+
+ transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
+ cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
+
Enable verbosity. Specify multiple times for increased verbosity.
+
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
+
Attempt all possible combinations when reconstructing indirect split + blocks. This flag disables the individual I/O deadman timer in order to + allow as much time as required for the attempted reconstruction.
+
+
Perform validation for livelists that are being deleted. Scans through the + livelist and metaslabs, checking for duplicate entries and compares the + two, checking for potential double frees. If it encounters issues, + warnings will be printed, but the command will not necessarily fail.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+
: Display the configuration of imported pool + rpool
+
+
+
# zdb -C rpool
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ …
+
+
+
: Display basic dataset information about + rpool
+
+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ …
+
+
+
: Display basic information about object 0 in + rpool/export/home
+
+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
: Display the predicted effect of enabling deduplication on + rpool
+
+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ …
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
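: Display the vdev labels of a pool member device (an illustrative sketch; the device path is a placeholder)
# zdb -l /dev/sda1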
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
October 7, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zed.8.html b/man/v2.1/8/zed.8.html new file mode 100644 index 000000000..5e8325d57 --- /dev/null +++ b/man/v2.1/8/zed.8.html @@ -0,0 +1,462 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Manager's ManualZED(8)
+
+
+

+

ZEDZFS Event + Daemon

+
+
+

+ + + + + +
ZED[-fFhILMvVZ] [-d + zedletdir] [-p + pidfile] [-P + path] [-s + statefile] [-j + jobs]
+
+
+

+

The ZED (ZFS Event Daemon) monitors events + generated by the ZFS kernel module. When a zevent (ZFS Event) is posted, the + ZED will run any ZEDLETs (ZFS Event Daemon Linkage + for Executable Tasks) that have been enabled for the corresponding zevent + class.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Don't daemonise: remain attached to the controlling terminal, log to the + standard I/O streams.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Request that the daemon idle rather than exit when the kernel modules are + not loaded. Processing of events will start, or resume, when the kernel + modules are (re)loaded. Under Linux the kernel modules cannot be unloaded + while the daemon is running.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+ zedletdir
+
Read the enabled ZEDLETs from the specified directory.
+
+ pidfile
+
Write the daemon's process ID to the specified file.
+
+ path
+
Custom $PATH for zedlets to use. Normally zedlets + run in a locked-down environment, with hardcoded paths to the ZFS commands + ($ZFS, $ZPOOL, + $ZED, ...), and a + hard-coded $PATH. This is done for security + reasons. However, the ZFS test suite uses a custom PATH for its ZFS + commands, and passes it to ZED with + -P. In short, -P is only + to be used by the ZFS test suite; never use it in production!
+
+ statefile
+
Write the daemon's state to the specified file.
+
+ jobs
+
Allow at most jobs ZEDLETs to run concurrently, + delaying execution of new ones until they finish. Defaults to + .
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the + zpool events + -v command.

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory + (zedletdir). These can be symlinked or copied from the + + directory; symlinks allow for automatic updates from the installed ZEDLETs, + whereas copies preserve local modifications. As a security measure, since + ownership change is a privileged operation, ZEDLETs must be owned by root. + They must have execute permissions for the user, but they must not have + write permissions for group or other. Dotfiles are ignored.
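For example, an installed ZEDLET can be enabled by symlinking it into the enabled-zedlets directory (a sketch using the default path placeholders shown in the FILES section below, which expand per distribution; all-syslog.sh is assumed to be one of the installed ZEDLETs):
# ln -s @zfsexecdir@/zed.d/all-syslog.sh @sysconfdir@/zfs/zed.d/all-syslog.sh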

+

ZEDLETs are named after the zevent class for which they + should be invoked. In particular, a ZEDLET will be invoked for a given + zevent if either its class or subclass string is a prefix of its filename + (and is followed by a non-alphabetic character). As a special case, the + prefix matches + all zevents. Multiple ZEDLETs may be invoked for a given zevent.

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + .

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner:

+
    +
  1. it is prefixed with + ,
  2. it is converted to uppercase, and
  3. each non-alphanumeric character is converted to an underscore.
+

Some additional environment variables have been defined to present + certain nvpair values in a more convenient form. An incomplete list of + zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as “seconds + nanoseconds” since the Epoch.
+
+
The seconds component of + ZEVENT_TIME.
+
+
The + + component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The alias + (“--”) + string of the ZFS distribution the daemon is part of.
+
+
The ZFS version the daemon is part of.
+
+
The ZFS release the daemon is part of.
+
+

ZEDLETs may need to call other ZFS commands. The + installation paths of the following executables are defined as environment + variables: , + , + , + , + and + . + These variables may be overridden in the rc file.

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@zfsexecdir@/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state.
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
, +
+
Terminate the daemon.
+
+
+
+

+

zfs(8), zpool(8), + zpool-events(8)

+
+
+

+

The ZED requires root privileges.

+

Do not taunt the ZED.

+
+
+

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Internationalization support via gettext has not been added.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-allow.8.html b/man/v2.1/8/zfs-allow.8.html new file mode 100644 index 000000000..0e7d496fb --- /dev/null +++ b/man/v2.1/8/zfs-allow.8.html @@ -0,0 +1,848 @@ + + + + + + + zfs-allow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-allow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the + exception of + , + , + , + , + , + and + . + These permissions cannot be delegated because the Linux + mount(8) command restricts modifications of the global + namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAMETYPENOTES



allowsubcommandMust also have the permission that is being allowed
bookmarksubcommand
clonesubcommandMust also have the create ability and mount ability in + the origin file system
createsubcommandMust also have the mount ability. Must also have the + refreservation ability to create a non-sparse volume.
destroysubcommandMust also have the mount ability
diffsubcommandAllows lookup of paths within a dataset given an object number, and + the ability to create snapshots necessary to zfs diff.
holdsubcommandAllows adding a user hold to a snapshot
load-keysubcommandAllows loading and unloading of encryption key (see zfs + load-key and zfs unload-key).
change-keysubcommandAllows changing an encryption key via zfs change-key.
mountsubcommandAllows mounting/umounting ZFS datasets
promotesubcommandMust also have the mount and promote ability in the + origin file system
receivesubcommandMust also have the mount and create ability
releasesubcommandAllows releasing a user hold which might destroy the snapshot
renamesubcommandMust also have the mount and create ability in the new + parent
rollbacksubcommandMust also have the mount ability
sendsubcommand
sharesubcommandAllows sharing file systems over NFS or SMB protocols
snapshotsubcommandMust also have the mount ability
groupquotaotherAllows accessing any groupquota@... property
groupobjquotaotherAllows accessing any groupobjquota@... property
groupusedotherAllows reading any groupused@... property
groupobjusedotherAllows reading any groupobjused@... property
userpropotherAllows changing any user property
userquotaotherAllows accessing any userquota@... property
userobjquotaotherAllows accessing any userobjquota@... property
userusedotherAllows reading any userused@... property
userobjusedotherAllows reading any userobjused@... property
projectobjquotaotherAllows accessing any projectobjquota@... property
projectquotaotherAllows accessing any projectquota@... property
projectobjusedotherAllows reading any projectobjused@... property
projectusedotherAllows reading any projectused@... property
aclinheritproperty
aclmodeproperty
acltypeproperty
atimeproperty
canmountproperty
casesensitivityproperty
checksumproperty
compressionproperty
contextproperty
copiesproperty
dedupproperty
defcontextproperty
devicesproperty
dnodesizeproperty
encryptionproperty
execproperty
filesystem_limitproperty
fscontextproperty
keyformatproperty
keylocationproperty
logbiasproperty
mlslabelproperty
mountpointproperty
nbmandproperty
normalizationproperty
overlayproperty
pbkdf2itersproperty
primarycacheproperty
quotaproperty
readonlyproperty
recordsizeproperty
redundant_metadataproperty
refquotaproperty
refreservationproperty
relatimeproperty
reservationproperty
rootcontextproperty
secondarycacheproperty
setuidproperty
sharenfsproperty
sharesmbproperty
snapdevproperty
snapdirproperty
snapshot_limitproperty
special_small_blocksproperty
syncproperty
utf8onlyproperty
versionproperty
volblocksizeproperty
volmodeproperty
volsizeproperty
vscanproperty
xattrproperty
zonedproperty
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs + allow command. No permissions are explicitly + denied, so other permissions granted are still in effect; for example, a + permission may remain in effect because it was granted by an ancestor. If no permissions are specified, + then all permissions for the specified user, + group, or everyone are removed. + Specifying everyone (or using the + -e option) only removes the permissions that were + granted to everyone, not all permissions for every user and group. See the + zfs allow command for a + description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
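An illustrative sketch of delegating and later revoking permissions (the user and dataset names are placeholders): the second command lists the delegations currently in effect.
# zfs allow cindys create,destroy,mount,snapshot tank/home/cindys
# zfs allow tank/home/cindys
# zfs unallow cindys destroy tank/home/cindys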
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-bookmark.8.html b/man/v2.1/8/zfs-bookmark.8.html new file mode 100644 index 000000000..5b4b7f51b --- /dev/null +++ b/man/v2.1/8/zfs-bookmark.8.html @@ -0,0 +1,275 @@ + + + + + + + zfs-bookmark.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-bookmark.8

+
+ + + + + +
ZFS-BOOKMARK(8)System Manager's ManualZFS-BOOKMARK(8)
+
+
+

+

zfs-bookmark — + create bookmark of ZFS snapshot

+
+
+

+ + + + + +
zfsbookmark + snapshot|bookmark + newbookmark
+
+
+

+

Creates a new bookmark of the given snapshot or bookmark. + Bookmarks mark the point in time when the snapshot was created, and can be + used as the incremental source for a zfs + send.

+

When creating a bookmark from an existing redaction + bookmark, the resulting bookmark is + not a redaction + bookmark.

+

This feature must be enabled to be used. See + zpool-features(7) for details on ZFS feature flags and the + + feature.
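An illustrative sketch (dataset, snapshot, bookmark, and output names are placeholders): create a bookmark, destroy the snapshot it was made from, and later use the bookmark as the incremental source of a send:
# zfs bookmark tank/data@monday tank/data#monday
# zfs destroy tank/data@monday
# zfs send -i tank/data#monday tank/data@tuesday > /backup/tuesday.zstream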

+
+
+

+

zfs-destroy(8), zfs-send(8), + zfs-snapshot(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-change-key.8.html b/man/v2.1/8/zfs-change-key.8.html new file mode 100644 index 000000000..b357cc626 --- /dev/null +++ b/man/v2.1/8/zfs-change-key.8.html @@ -0,0 +1,473 @@ + + + + + + + zfs-change-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-change-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all + children that inherit the keylocation property to be + accessed. The key will be expected in the format specified by the + keyformat and location specified by the + keylocation property. Note that if the + keylocation is set to prompt the + terminal will interactively wait for the key to be entered. Loading a key + will not automatically mount the dataset. If that functionality is + desired, zfs mount + -l will ask for the key and mount the dataset (see + zfs-mount(8)). Once the key is loaded the + keystatus property will become + . +
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all + of its children that inherit the keylocation property. + This requires that the dataset is not currently open or mounted. Once the + key is unloaded the keystatus property will become + . +
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + volume data, file attributes, ACLs, permission bits, directory listings, + FUID mappings, and + / + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires + specifying the encryption and + keyformat properties at creation time, along with an + optional keylocation and + pbkdf2iters. After entering an encryption key, the created + dataset will become an encryption root. Any descendant datasets will inherit + their encryption key from the encryption root by default, meaning that + loading, unloading, or changing the key for the encryption root will + implicitly do the same for all inheriting datasets. If this inheritance is + not desired, simply supply a keyformat when creating the + child dataset or use zfs + change-key to break an existing relationship, + creating a new encryption root on the child. Note that the child's + keyformat may match that of the parent while still + creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, + and pbkdf2iters) do not inherit + like other ZFS properties and instead use the value determined by their + encryption root. Encryption root inheritance can be tracked via the + read-only + + property.
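A minimal sketch (dataset names are placeholders): the first command creates an encryption root, the second creates a child that inherits its key, and the third turns that child into its own encryption root as described above:
# zfs create -o encryption=on -o keyformat=passphrase tank/secure
# zfs create tank/secure/projects
# zfs change-key -o keyformat=passphrase tank/secure/projects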

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted + datasets. Encrypted data cannot be embedded via the + + feature. Encrypted datasets may not have + = + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption, + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-clone.8.html b/man/v2.1/8/zfs-clone.8.html new file mode 100644 index 000000000..805f95b45 --- /dev/null +++ b/man/v2.1/8/zfs-clone.8.html @@ -0,0 +1,281 @@ + + + + + + + zfs-clone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-clone.8

+
+ + + + + +
ZFS-CLONE(8)System Manager's ManualZFS-CLONE(8)
+
+
+

+

zfs-cloneclone + snapshot of ZFS dataset

+
+
+

+ + + + + +
zfsclone [-p] + [-o + property=value]… + snapshot + filesystem|volume
+
+
+

+

See the Clones section of + zfsconcepts(7) for details. The target dataset can be + located anywhere in the ZFS hierarchy, and is created as the same type as + the original.

+
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the mountpoint property inherited from their parent. If the target filesystem or volume already exists, the operation completes successfully.
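A minimal usage sketch (the snapshot and target names are placeholders); -p creates any missing parents of the target:
# zfs snapshot tank/home/user@monday
# zfs clone -p -o mountpoint=/export/scratch tank/home/user@monday tank/scratch/user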
+
+
+
+

+

zfs-promote(8), + zfs-snapshot(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-create.8.html b/man/v2.1/8/zfs-create.8.html new file mode 100644 index 000000000..128eab51e --- /dev/null +++ b/man/v2.1/8/zfs-create.8.html @@ -0,0 +1,411 @@ + + + + + + + zfs-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-create.8

+
+ + + + + +
ZFS-CREATE(8)System Manager's ManualZFS-CREATE(8)
+
+
+

+

zfs-create — + create ZFS dataset

+
+
+

+ + + + + +
zfscreate [-Pnpuv] + [-o + property=value]… + filesystem
+
+ + + + + +
zfscreate [-ps] + [-b blocksize] + [-o + property=value]… + -V size + volume
+
+
+

+
+
zfs create + [-Pnpuv] [-o + property=value]… + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent, unless the -u option is used. +
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have filesystem as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to filesystem due to the use of the -o option.
+
+
Do not mount the newly created file system.
+
+
Print verbose information about the created dataset.
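As an illustrative sketch (dataset names and property values are placeholders), a dry run can be combined with -v to preview what would be created before committing:
# zfs create -nv -o quota=10G tank/projects/db
# zfs create -p -o mountpoint=/export/projects -o compression=lz4 tank/projects/web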
+
+
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]… + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block device in /dev/zvol/path, where path is the name of the volume in the ZFS namespace. The size represents the logical size as exported by the device. By default, a reservation of equal size is created.

size is automatically rounded up to the nearest multiple of the blocksize.

+
+
+ blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
+ property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See refreservation in the Native Properties section of zfsprops(7) for more information about sparse volumes.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have volume as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to volume due to the use of the -b or -o options, as well as refreservation if the volume is not sparse.
+
+
Print verbose information about the created dataset.
+
+
+
+
+

+

ZFS volumes may be used as swap devices. After creating the volume with zfs create -V, enable the swap area using the swapon(8) command. Swapping to files on ZFS filesystems is not supported.
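A hedged sketch of such a swap setup on Linux (the volume name and size are placeholders; mkswap(8) and swapon(8) are standard Linux utilities, not part of ZFS):
# zfs create -V 4G -b $(getconf PAGESIZE) rpool/swap
# mkswap /dev/zvol/rpool/swap
# swapon /dev/zvol/rpool/swap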

+
+
+
+

+

zfs-destroy(8), zfs-list(8), + zpool-create(8)

+
+
+ + + + + +
December 1, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-destroy.8.html b/man/v2.1/8/zfs-destroy.8.html new file mode 100644 index 000000000..d6cd06f19 --- /dev/null +++ b/man/v2.1/8/zfs-destroy.8.html @@ -0,0 +1,364 @@ + + + + + + + zfs-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-destroy.8

+
+ + + + + +
ZFS-DESTROY(8)System Manager's ManualZFS-DESTROY(8)
+
+
+

+

zfs-destroy — + destroy ZFS dataset, snapshots, or bookmark

+
+
+

+ + + + + +
zfsdestroy [-Rfnprv] + filesystem|volume
+
+ + + + + +
zfsdestroy [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
+ + + + + +
zfsdestroy + filesystem|volume#bookmark
+
+
+

+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Forcibly unmount file systems. This option has no effect on non-file + systems or unmounted file systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
The given snapshots are destroyed immediately if and only if the zfs destroy command without the -d option would have destroyed them. Such immediate destruction would occur, for example, if the snapshot had no clones and the user-initiated reference count were zero.

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same filesystem or volume may be specified in a comma-separated list of snapshots. Only the snapshot's short name (the part after the @) should be specified when using a range or comma-separated list to identify multiple snapshots.

+
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Destroy immediately. If a snapshot cannot be destroyed now, mark it + for deferred destruction.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.
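An illustrative sketch (dataset and snapshot names are placeholders); -n combined with -v previews which snapshots in a range would be destroyed before actually destroying them:
# zfs destroy -nv tank/home@snap1%snap4
# zfs destroy -rv tank/home@snap1%snap4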

+
+
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
+
+
+

+

zfs-create(8), zfs-hold(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-diff.8.html b/man/v2.1/8/zfs-diff.8.html new file mode 100644 index 000000000..d176a8382 --- /dev/null +++ b/man/v2.1/8/zfs-diff.8.html @@ -0,0 +1,317 @@ + + + + + + + zfs-diff.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-diff.8

+
+ + + + + +
ZFS-DIFF(8)System Manager's ManualZFS-DIFF(8)
+
+
+

+

zfs-diffshow + difference between ZFS snapshots

+
+
+

+ + + + + +
zfsdiff [-FHth] + snapshot + snapshot|filesystem
+
+
+

+

Display the difference between a snapshot of a given filesystem + and another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are:

+
+
+
-	The path has been removed
+	The path has been created
M	The path has been modified
R	The path has been renamed
+
+
+
+
+
Display an indication of the type of file, in a manner similar to the + -F option of ls(1). +
+
+
+
B	Block device
C	Character device
/	Directory
>	Door
|	Named pipe
@	Symbolic link
P	Event port
=	Socket
F	Regular file
+
+
+
+
+
Give more parsable tab-separated output, without header lines and without + arrows.
+
+
Display the path's inode change time as the first column of output.
+
+
Do not \0ooo-escape non-ASCII paths.
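A minimal usage sketch (dataset and snapshot names are placeholders):
# zfs diff -FHt tank/home@yesterday tank/home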
+
+
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
May 29, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-get.8.html b/man/v2.1/8/zfs-get.8.html new file mode 100644 index 000000000..1c4405afa --- /dev/null +++ b/man/v2.1/8/zfs-get.8.html @@ -0,0 +1,407 @@ + + + + + + + zfs-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-get.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
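A brief usage sketch tying the three subcommands together (dataset names and values are placeholders):
# zfs set quota=50G compression=lz4 tank/home/user
# zfs get -r -t filesystem -o name,property,value,source quota tank/home
# zfs inherit -r compression tank/home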
+
+
+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-groupspace.8.html b/man/v2.1/8/zfs-groupspace.8.html new file mode 100644 index 000000000..72223f2da --- /dev/null +++ b/man/v2.1/8/zfs-groupspace.8.html @@ -0,0 +1,387 @@ + + + + + + + zfs-groupspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-groupspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral, not a name; it therefore needs neither the -i option (SID-to-POSIX-ID translation), nor -n (numeric IDs), nor -t (types).
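An illustrative sketch (the dataset name is a placeholder):
# zfs userspace -p -o name,used,quota -s used tank/home
# zfs groupspace -t posixgroup tank/home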
+
+
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-hold.8.html b/man/v2.1/8/zfs-hold.8.html new file mode 100644 index 000000000..333634e07 --- /dev/null +++ b/man/v2.1/8/zfs-hold.8.html @@ -0,0 +1,320 @@ + + + + + + + zfs-hold.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-hold.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot
+
+ + + + + +
zfsholds [-rH] + snapshot
+
+ + + + + +
zfsrelease [-r] + tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rH] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
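A minimal usage sketch (the tag and snapshot name are placeholders):
# zfs hold -r backups tank/home@monday
# zfs holds -r tank/home@monday
# zfs release -r backups tank/home@monday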
+
+
+
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-inherit.8.html b/man/v2.1/8/zfs-inherit.8.html new file mode 100644 index 000000000..c7b7a802f --- /dev/null +++ b/man/v2.1/8/zfs-inherit.8.html @@ -0,0 +1,407 @@ + + + + + + + zfs-inherit.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-inherit.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-jail.8.html b/man/v2.1/8/zfs-jail.8.html new file mode 100644 index 000000000..35f967eb3 --- /dev/null +++ b/man/v2.1/8/zfs-jail.8.html @@ -0,0 +1,311 @@ + + + + + + + zfs-jail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-jail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail identified by JID jailid or name jailname. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

You cannot attach a jailed dataset's children to another jail. + You can also not attach the root file system of the jail or any dataset + which needs to be mounted before the zfs rc script is run inside the + jail, as it would be attached unmounted until it is mounted from the rc + script inside the jail.

+

To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The quota property cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
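An illustrative sketch on FreeBSD (the JID and dataset name are placeholders):
# zfs set jailed=on tank/jails/www
# zfs jail 42 tank/jails/www
# zfs unjail 42 tank/jails/www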
+
+
+
+

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-list.8.html b/man/v2.1/8/zfs-list.8.html new file mode 100644 index 000000000..a5bd33dcb --- /dev/null +++ b/man/v2.1/8/zfs-list.8.html @@ -0,0 +1,351 @@ + + + + + + + zfs-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-list.8

+
+ + + + + +
ZFS-LIST(8)System Manager's ManualZFS-LIST(8)
+
+
+

+

zfs-listlist + properties of ZFS datasets

+
+
+

+ + + + + +
zfslist + [-r|-d + depth] [-Hp] + [-o + property[,property]…] + [-s property]… + [-S property]… + [-t + type[,type]…] + [filesystem|volume|snapshot]…
+
+
+

+

If specified, you can list property information by the absolute pathname or the relative pathname. By default, all file systems and volumes are displayed. Snapshots are displayed if the listsnapshots pool property is on (the default is off), or if the -t snapshot or -t all options are specified. The following fields are displayed: name, used, available, referenced, mountpoint.

+
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
+ property
+
Same as the -s option, but sorts by property in + descending order.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ property
+
A comma-separated list of properties to display. The property must be: + +
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command line.
+
+ property
+
A property for sorting the output by column in ascending order based on + the value of the property. The property must be one of the properties + described in the Properties section + of zfsprops(7) or the value name to + sort by the dataset name. Multiple properties can be specified at one time + using multiple -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
  • Numeric types sort in numeric order.
  • String types sort in alphabetical order.
  • Types inappropriate for a row sort that row to the literal bottom, regardless of the specified ordering.
+

If no sorting options are specified the existing behavior of + zfs list is + preserved.

+
+
+ type
+
A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, bookmark, or all. For example, specifying -t snapshot displays only snapshots.
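A brief usage sketch (the pool name is a placeholder):
# zfs list -r -t filesystem,volume -o name,used,available,mountpoint -s used tank
# zfs list -Hp -o name,used -d 1 tank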
+
+
+
+

+

zfsprops(7), zfs-get(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-load-key.8.html b/man/v2.1/8/zfs-load-key.8.html new file mode 100644 index 000000000..4a7fcdb98 --- /dev/null +++ b/man/v2.1/8/zfs-load-key.8.html @@ -0,0 +1,473 @@ + + + + + + + zfs-load-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-load-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
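An illustrative sketch (the dataset name and iteration count are placeholders); the key is loaded recursively, rewrapped with a new passphrase, and finally unloaded:
# zfs load-key -r rpool/encrypted
# zfs change-key -l -o keyformat=passphrase -o pbkdf2iters=1000000 rpool/encrypted
# zfs unload-key rpool/encrypted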
+
+
+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root. Any descendant datasets will inherit their encryption key from the encryption root by default, meaning that loading, unloading, or changing the key for the encryption root will implicitly do the same for all inheriting datasets. If this inheritance is not desired, simply supply a keyformat when creating the child dataset or use zfs change-key to break an existing relationship, creating a new encryption root on the child. Note that the child's keyformat may match that of the parent while still creating a new encryption root, and that changing the encryption property alone does not create a new encryption root; this would simply use a different cipher suite with the same key as its encryption root. The one exception is that clones will always use their origin's encryption key. As a result of this exception, some encryption-related properties (namely keystatus, keyformat, keylocation, and pbkdf2iters) do not inherit like other ZFS properties and instead use the value determined by their encryption root. Encryption root inheritance can be tracked via the read-only encryptionroot property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-mount-generator.8.html b/man/v2.1/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..a43bc1adb --- /dev/null +++ b/man/v2.1/8/zfs-mount-generator.8.html @@ -0,0 +1,436 @@ + + + + + + + zfs-mount-generator.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-mount-generator.8

+
+ + + + + +
ZFS-MOUNT-GENERATOR(8)System Manager's ManualZFS-MOUNT-GENERATOR(8)
+
+
+

+

zfs-mount-generator — + generate systemd mount units for ZFS filesystems

+
+
+

+

@systemdgeneratordir@/zfs-mount-generator

+
+
+

+

zfs-mount-generator is a + systemd.generator(7) that generates native + systemd.mount(5) units for configured ZFS datasets.

+
+

+
+
=
+
+ + or none.
+
=
+
off. Skipped if + only noauto datasets exist for a given mountpoint + and there's more than one. Datasets with + + take precedence over ones with + noauto for the same mountpoint. + Sets logical noauto + flag if noauto. Encryption roots + always generate + zfs-load-key@root.service, + even if off.
+
=, + relatime=, + =, + =, + =, + =, + =
+
Used to generate mount options equivalent to zfs + mount.
+
=, + keylocation=
+
If the dataset is an encryption root, its mount unit will bind to + zfs-load-key@root.service, + with additional dependencies as follows: +
+
+
=
+
None, uses systemd-ask-password(1)
+
=URL + (et al.)
+
=, + After=: + network-online.target
+
=<path>
+
=path
+
+
+ The service also uses the same Wants=, + After=, Requires=, + and RequiresMountsFor=, as the + mount unit.
+
=path[ + path]…
+
+ Requires= for the mount- and key-loading unit.
+
=path[ + path]…
+
+ RequiresMountsFor= for the mount- and key-loading + unit.
+
=unit[ + unit]…
+
+ Before= for the mount unit.
+
=unit[ + unit]…
+
+ After= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + WantedBy= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + RequiredBy= for the mount unit.
+
=(unset)|on|off
+
Waxes or wanes strength of default reverse dependencies of the mount unit, + see below.
+
=on|off
+
on. Defaults to + off.
+
+
+
+

+

Additionally, unless the pool the dataset resides on is imported + at generation time, both units gain + Wants=zfs-import.target and + After=zfs-import.target.

+

Additionally, unless the logical noauto flag is + set, the mount unit gains a reverse-dependency for + local-fs.target of strength

+
+
+
(unset)
+
= + + Before=
+
+
=
+
+
= + + Before=
+
+
+
+
+

+

Because ZFS pools may not be available very early in the boot + process, information on ZFS mountpoints must be stored separately. The + output of

+
zfs + list -Ho + name,⟨every property above in + order⟩
+for datasets that should be mounted by systemd should be kept at + @sysconfdir@/zfs/zfs-list.cache/poolname, + and, if writeable, will be kept synchronized for the entire pool by the + history_event-zfs-list-cacher.sh ZEDLET, if enabled + (see zed(8)). +
+
+
+

+

If the ZFS_DEBUG environment variable is nonzero (or unset and /proc/cmdline contains "debug"), print summary accounting information at the end.

+
+
+

+

To begin, enable tracking for the pool:

+
# touch + @sysconfdir@/zfs/zfs-list.cache/poolname
+Then enable the tracking ZEDLET: +
# ln + -s + @zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh + @sysconfdir@/zfs/zed.d
+
# systemctl + enable + zfs-zed.service
+
# systemctl + restart + zfs-zed.service
+

If no history event is in the queue, inject one to ensure the + ZEDLET runs to refresh the cache file by setting a monitored property + somewhere on the pool:

+
# zfs + set relatime=off + poolname/dset
+
# zfs + inherit relatime + poolname/dset
+

To test the generator output:

+
$ mkdir + /tmp/zfs-mount-generator
+
$ + @systemdgeneratordir@/zfs-mount-generator + /tmp/zfs-mount-generator
+If the generated units are satisfactory, instruct + systemd to re-run all generators: +
# systemctl + daemon-reload
+
+
+

+

systemd.mount(5), + zfs(5), + systemd.generator(7), + zed(8), + zpool-events(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-mount.8.html b/man/v2.1/8/zfs-mount.8.html new file mode 100644 index 000000000..c1ba7f34e --- /dev/null +++ b/man/v2.1/8/zfs-mount.8.html @@ -0,0 +1,335 @@ + + + + + + + zfs-mount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-mount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its mountpoint property, if the path exists and is empty. If mountpoint is set to legacy, the filesystem should be instead mounted using mount(8).
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily for the duration of the mount. See the Temporary Mount Point Properties section of zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
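A brief usage sketch (dataset names and the mount point are placeholders):
# zfs mount
# zfs mount -l -o ro tank/secure
# zfs unmount /export/home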
+
+
+
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-program.8.html b/man/v2.1/8/zfs-program.8.html new file mode 100644 index 000000000..f8df30cc3 --- /dev/null +++ b/man/v2.1/8/zfs-program.8.html @@ -0,0 +1,988 @@ + + + + + + + zfs-program.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-program.8

+
+ + + + + +
ZFS-PROGRAM(8)System Manager's ManualZFS-PROGRAM(8)
+
+
+

+

zfs-program — + execute ZFS channel programs

+
+
+

+ + + + + +
zfsprogram [-jn] + [-t instruction-limit] + [-m memory-limit] + pool script + [script arguments]
+
+
+

+

The ZFS channel program interface allows ZFS administrative + operations to be run programmatically as a Lua script. The entire script is + executed atomically, with no other administrative operations taking effect + concurrently. A library of ZFS calls is made available to channel program + scripts. Channel programs may only be run with root privileges.

+

A modified version of the Lua 5.2 interpreter is used to run + channel program scripts. The Lua 5.2 manual can be found at + http://www.lua.org/manual/5.2/

+

The channel program given by script will be + run on pool, and any attempts to access or modify + other pools will cause an error.

+
+
+

+
+
+
Display channel program output in JSON format. When this flag is specified and standard output is empty, the channel program encountered an error. The details of such an error will be printed to standard error in plain text.
+
+
Executes a read-only channel program, which runs faster. The program + cannot change on-disk state by calling functions from the zfs.sync + submodule. The program can be used to gather information such as + properties and determining if changes would succeed (zfs.check.*). Without + this flag, all pending changes must be synced to disk before a channel + program can complete.
+
+ instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. The + default memory limit is 10 MB, and can be set to a maximum of 100 MB.
+
+

All remaining argument strings will be passed directly to the Lua + script as described in the LUA + INTERFACE section below.
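A hedged invocation sketch (the pool name, script path, and script argument are placeholders); the second form is a read-only dry run with JSON output:
# zfs program -t 20000000 -m 20971520 rpool ./prune.zcp rpool/scratch
# zfs program -n -j rpool ./prune.zcp rpool/scratch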

+
+
+

+

A channel program can be invoked either from the command line, or via a library call to lzc_channel_program().

+
+

+

Arguments passed to the channel program are converted to a Lua + table. If invoked from the command line, extra arguments to the Lua script + will be accessible as an array stored in the argument table with the key + 'argv':

+
+
args = ...
+argv = args["argv"]
+-- argv == {1="arg1", 2="arg2", ...}
+
+

If invoked from the libZFS interface, an arbitrary argument list + can be passed to the channel program, which is accessible via the same + "..." syntax in Lua:

+
+
args = ...
+-- args == {"foo"="bar", "baz"={...}, ...}
+
+

Note that because Lua arrays are 1-indexed, arrays passed to Lua + from the libZFS interface will have their indices incremented by 1. That is, + the element in arr[0] in a C array passed to a channel + program will be stored in arr[1] when accessed from + Lua.

+
+
+

+

Lua return statements take the form:

+
return ret0, ret1, ret2, + ...
+

Return statements returning multiple values are permitted + internally in a channel program script, but attempting to return more than + one value from the top level of the channel program is not permitted and + will throw an error. However, tables containing multiple values can still be + returned. If invoked from the command line, a return statement:

+
+
a = {foo="bar", baz=2}
+return a
+
+

Will be output formatted as:

+
+
Channel program fully executed with return value:
+    return:
+        baz: 2
+        foo: 'bar'
+
+
+
+

+

If the channel program encounters a fatal error while running, a + non-zero exit status will be returned. If more information about the error + is available, a singleton list will be returned detailing the error:

+
error: "error string, including + Lua stack trace"
+

If a fatal error is returned, the channel program may have not + executed at all, may have partially executed, or may have fully executed but + failed to pass a return value back to userland.

+

If the channel program exhausts an instruction or memory limit, a + fatal error will be generated and the program will be stopped, leaving the + program partially executed. No attempt is made to reverse or undo any + operations already performed. Note that because both the instruction count + and amount of memory used by a channel program are deterministic when run + against the same inputs and filesystem state, as long as a channel program + has run successfully once, you can guarantee that it will finish + successfully against a similar size system.

+

If a channel program attempts to return too large a value, the + program will fully execute but exit with a nonzero status code and no return + value.

+

Note: ZFS API functions do not generate Fatal Errors when correctly invoked; they return an error code and the channel program continues executing. See the ZFS API section below for function-specific details on error return codes.

+
+
+

+

When invoking a channel program via the libZFS interface, it is + necessary to translate arguments and return values from Lua values to their + C equivalents, and vice-versa.

+

There is a correspondence between nvlist values in C and Lua + tables. A Lua table which is returned from the channel program will be + recursively converted to an nvlist, with table values converted to their + natural equivalents:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
string->string
number->int64
boolean->boolean_value
nil->boolean (no value)
table->nvlist
+

Likewise, table keys are replaced by string equivalents as + follows:

+ + + + + + + + + + + + + + + + + + + +
string->no change
number->signed decimal string ("%lld")
boolean->"true" | "false"
+

Any collision of table key strings (for example, the string + "true" and a true boolean value) will cause a fatal error.

+

Lua numbers are represented internally as signed 64-bit + integers.

+
+
+
+

+

The following Lua built-in base library functions are + available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
assertrawlencollectgarbagerawget
errorrawsetgetmetatableselect
ipairssetmetatablenexttonumber
pairstostringrawequaltype
+

All functions in the coroutine, string, and table built-in submodules are also available. A complete list and documentation of these modules is available in the Lua manual.

+

The following base library functions have been disabled and are not available for use in channel programs:

+ + + + + + + + + + +
dofileloadfileloadpcallprintxpcall
+
+
+

+
+

+

Each API function takes a fixed set of required positional + arguments and optional keyword arguments. For example, the destroy function + takes a single positional string argument (the name of the dataset to + destroy) and an optional "defer" keyword boolean argument. When + using parentheses to specify the arguments to a Lua function, only + positional arguments can be used:

+
zfs.sync.destroy("rpool@snap")
+

To use keyword arguments, functions must be called with a single + argument that is a Lua table containing entries mapping integers to + positional arguments and strings to keyword arguments:

+
zfs.sync.destroy({1="rpool@snap", + defer=true})
+

The Lua language allows curly braces to be used in place of + parenthesis as syntactic sugar for this calling convention:

+
zfs.sync.destroy{"rpool@snap", defer=true}
+
+
+

+

If an API function succeeds, it returns 0. If it fails, it returns + an error code and the channel program continues executing. API functions do + not generate Fatal Errors except in the case of an unrecoverable internal + file system error.

+

In addition to returning an error code, some functions also return + extra details describing what caused the error. This extra description is + given as a second return value, and will always be a Lua table, or Nil if no + error details were returned. Different keys will exist in the error details + table depending on the function and error case. Any such function may be + called expecting a single return value:

+
errno = + zfs.sync.promote(dataset)
+

Or, the error details can be retrieved:

+
+
errno, details = zfs.sync.promote(dataset)
+if (errno == EEXIST) then
+    assert(details ~= Nil)
+    list_of_conflicting_snapshots = details
+end
+
+

The following global aliases for API function error return codes + are defined for use in channel programs:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
EPERMECHILDENODEVENOSPCENOENTEAGAINENOTDIR
ESPIPEESRCHENOMEMEISDIREROFSEINTREACCES
EINVALEMLINKEIOEFAULTENFILEEPIPEENXIO
ENOTBLKEMFILEEDOME2BIGEBUSYENOTTYERANGE
ENOEXECEEXISTETXTBSYEDQUOTEBADFEXDEVEFBIG
+
+
+

+

For detailed descriptions of the exact behavior of any ZFS + administrative operations, see the main zfs(8) manual + page.

+
+
zfs.debug(msg)
+
Record a debug message in the zfs_dbgmsg log. A log of these messages can + be printed via mdb's "::zfs_dbgmsg" command, or can be monitored + live by running +
dtrace -n + 'zfs-dbgmsg{trace(stringof(arg0))}'
+

+
+
msg (string)
+
Debug message to be printed.
+
+
+
zfs.exists(dataset)
+
Returns true if the given dataset exists, or false if it doesn't. A fatal + error will be thrown if the dataset is not in the target pool. That is, in + a channel program running on rpool, + zfs.exists("rpool/nonexistent_fs") returns + false, but + zfs.exists("somepool/fs_that_may_exist") will + error. +

+
+
dataset (string)
+
Dataset to check for existence. Must be in the target pool.
+
+
+
zfs.get_prop(dataset, property)
+
Returns two values. First, a string, number or table containing the + property value for the given dataset. Second, a string containing the + source of the property (i.e. the name of the dataset in which it was set + or nil if it is readonly). Throws a Lua error if the dataset is invalid or + the property doesn't exist. Note that Lua only supports int64 number types + whereas ZFS number properties are uint64. This means very large values + (like GUIDs) may wrap around and appear negative. +

+
+
dataset (string)
+
Filesystem or snapshot path to retrieve properties from.
+
property (string)
+
Name of property to retrieve. All filesystem, snapshot and volume + properties are supported except for + and + . + Also supports the + snap + and + bookmark + properties and the + ⟨|⟩⟨|id + properties, though the id must be in numeric form.
+
+
+
+
+
+
The sync submodule contains functions that modify the on-disk state. They + are executed in "syncing context". +

The available sync submodule functions are as follows:

+
+
zfs.sync.destroy(dataset, [defer=true|false])
+
Destroy the given dataset. Returns 0 on successful destroy, or a + nonzero error code if the dataset could not be destroyed (for example, + if the dataset has any active children or clones). +

+
+
dataset (string)
+
Filesystem or snapshot to be destroyed.
+
[defer (boolean)]
+
Valid only for destroying snapshots. If set to true, and the + snapshot has holds or clones, allows the snapshot to be marked for + deferred deletion rather than failing.
+
+
+
zfs.sync.inherit(dataset, property)
+
Clears the specified property in the given dataset, causing it to be + inherited from an ancestor, or restored to the default if no ancestor + property is set. The zfs + inherit -S option has + not been implemented. Returns 0 on success, or a nonzero error code if + the property could not be cleared. +

+
+
dataset (string)
+
Filesystem or snapshot containing the property to clear.
+
property (string)
+
The property to clear. Allowed properties are the same as those + for the zfs + inherit command.
+
+
+
zfs.sync.promote(dataset)
+
Promote the given clone to a filesystem. Returns 0 on successful + promotion, or a nonzero error code otherwise. If EEXIST is returned, + the second return value will be an array of the clone's snapshots + whose names collide with snapshots of the parent filesystem. +

+
+
dataset (string)
+
Clone to be promoted.
+
+
+
zfs.sync.rollback(filesystem)
+
Rollback to the previous snapshot for a dataset. Returns 0 on + successful rollback, or a nonzero error code otherwise. Rollbacks can + be performed on filesystems or zvols, but not on snapshots or mounted + datasets. EBUSY is returned in the case where the filesystem is + mounted. +

+
+
filesystem (string)
+
Filesystem to rollback.
+
+
+
(dataset, + property, value)
+
Sets the given property on a dataset. Currently only user properties + are supported. Returns 0 if the property was set, or a nonzero error + code otherwise. +

+
+
dataset (string)
+
The dataset where the property will be set.
+
property (string)
+
The property to set.
+
value (string)
+
The value of the property to be set.
+
+
+
(dataset)
+
Create a snapshot of a filesystem. Returns 0 if the snapshot was + successfully created, and a nonzero error code otherwise. +

Note: Taking a snapshot will fail on any pool older than + legacy version 27. To enable taking snapshots from ZCP scripts, the + pool must be upgraded.

+

+
+
dataset (string)
+
Name of snapshot to create.
+
+
+
(source, + newbookmark)
+
Create a bookmark of an existing source snapshot or bookmark. Returns + 0 if the new bookmark was successfully created, and a nonzero error + code otherwise. +

Note: Bookmarking requires the corresponding pool feature + to be enabled.

+

+
+
source (string)
+
Full name of the existing snapshot or bookmark.
+
newbookmark (string)
+
Full name of the new bookmark.
+
+
+
+
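Tying the two calls above together, a minimal hedged sketch that snapshots a filesystem and then bookmarks that snapshot, propagating any error code back to the caller; the name rpool/data is a placeholder.
-- Sketch only: "rpool/data" is a hypothetical filesystem in the target pool.
err = zfs.sync.snapshot("rpool/data@nightly")
if (err == 0) then
    err = zfs.sync.bookmark("rpool/data@nightly", "rpool/data#nightly")
end
return err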
+
+
For each function in the zfs.sync submodule, there is a + corresponding zfs.check function which performs a + "dry run" of the same operation. Each takes the same arguments + as its zfs.sync counterpart and returns 0 if the + operation would succeed, or a non-zero error code if it would fail, along + with any other error details. That is, each has the same behavior as the + corresponding sync function except for actually executing the requested + change. For example, + ("fs") + returns 0 if + zfs.sync.destroy("fs") + would successfully destroy the dataset. +

The available zfs.check functions are:

+
+
(dataset, + [defer=true|false])
+
 
+
(dataset)
+
 
+
(filesystem)
+
 
+
(dataset, + property, value)
+
 
+
(dataset)
+
 
+
+
+
+
The zfs.list submodule provides functions for iterating over datasets and + properties. Rather than returning tables, these functions act as Lua + iterators, and are generally used as follows: +
+
for child in zfs.list.children("rpool") do
+    ...
+end
+
+

The available zfs.list functions are:

+
+
(snapshot)
+
Iterate through all clones of the given snapshot. +

+
+
snapshot (string)
+
Must be a valid snapshot path in the current pool.
+
+
+
(dataset)
+
Iterate through all snapshots of the given dataset. Each snapshot is + returned as a string containing the full dataset name, e.g. + "pool/fs@snap". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all direct children of the given dataset. Each child + is returned as a string containing the full dataset name, e.g. + "pool/fs/child". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all bookmarks of the given dataset. Each bookmark is + returned as a string containing the full dataset name, e.g. + "pool/fs#bookmark". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(snapshot)
+
Iterate through all user holds on the given snapshot. Each hold is + returned as a pair of the hold's tag and the timestamp (in seconds + since the epoch) at which it was created. +

+
+
snapshot (string)
+
Must be a valid snapshot.
+
+
+
(dataset)
+
An alias for zfs.list.user_properties (see relevant entry). +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Iterate through all user properties for the given dataset. For each + step of the iteration, output the property name, its value, and its + source. Throws a Lua error if the dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Returns an array of strings, the names of the valid system (non-user + defined) properties for the given dataset. Throws a Lua error if the + dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot or volume.
+
+
+
+
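The holds and user_properties iterators described above yield more than one value per step (tag and timestamp, or name, value and source, respectively). A hedged sketch that collects both into the program's return value; the dataset names are placeholders.
-- Sketch only: "rpool/data" and "rpool/data@snap" are hypothetical names.
uprops = {}
for prop, value, source in zfs.list.user_properties("rpool/data") do
    uprops[prop] = value
end
holds = {}
for tag, timestamp in zfs.list.holds("rpool/data@snap") do
    holds[tag] = timestamp
end
return {properties=uprops, holds=holds}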
+
+
+
+
+

+
+

+

The following channel program recursively destroys a filesystem + and all its snapshots and children in a naive manner. Note that this does + not involve any error handling or reporting.

+
+
function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        zfs.sync.destroy(snap)
+    end
+    zfs.sync.destroy(root)
+end
+destroy_recursive("pool/somefs")
+
+
+
+

+

A more verbose and robust version of the same channel program, + which properly detects and reports errors, and also takes the dataset to + destroy as a command line argument, would be as follows:

+
+
succeeded = {}
+failed = {}
+
+function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        err = zfs.sync.destroy(snap)
+        if (err ~= 0) then
+            failed[snap] = err
+        else
+            succeeded[snap] = err
+        end
+    end
+    err = zfs.sync.destroy(root)
+    if (err ~= 0) then
+        failed[root] = err
+    else
+        succeeded[root] = err
+    end
+end
+
+args = ...
+argv = args["argv"]
+
+destroy_recursive(argv[1])
+
+results = {}
+results["succeeded"] = succeeded
+results["failed"] = failed
+return results
+
+
+
+

+

The following function performs a forced promote operation by + attempting to promote the given clone and destroying any conflicting + snapshots.

+
+
function force_promote(ds)
+   errno, details = zfs.check.promote(ds)
+   if (errno == EEXIST) then
+       assert(details ~= nil)
+       for i, snap in ipairs(details) do
+           zfs.sync.destroy(ds .. "@" .. snap)
+       end
+   elseif (errno ~= 0) then
+       return errno
+   end
+   return zfs.sync.promote(ds)
+end
+
+
+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-project.8.html b/man/v2.1/8/zfs-project.8.html new file mode 100644 index 000000000..2d0b3f43c --- /dev/null +++ b/man/v2.1/8/zfs-project.8.html @@ -0,0 +1,358 @@ + + + + + + + zfs-project.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-project.8

+
+ + + + + +
ZFS-PROJECT(8)System Manager's ManualZFS-PROJECT(8)
+
+
+

+

zfs-project — + manage projects in ZFS filesystem

+
+
+

+ + + + + +
zfsproject + [-d|-r] + file|directory
+
+ + + + + +
zfsproject -C + [-kr] + file|directory
+
+ + + + + +
zfsproject -c + [-0] + [-d|-r] + [-p id] + file|directory
+
+ + + + + +
zfsproject [-p + id] [-rs] + file|directory
+
+
+

+
+
zfs project + [-d|-r] + file|directory
+
List project identifier (ID) and inherit flag of files and directories. +
+
+
Show the directory project ID and inherit flag, not its children.
+
+
List subdirectories recursively.
+
+
+
zfs project + -C [-kr] + file|directory
+
Clear project inherit flag and/or ID on the files and directories. +
+
+
Keep the project ID unchanged. If not specified, the project ID will + be reset to zero.
+
+
Clear subdirectories' flags recursively.
+
+
+
zfs project + -c [-0] + [-d|-r] + [-p id] + file|directory
+
Check project ID and inherit flag on the files and directories: report + entries without the project inherit flag, or with project IDs different + from the target directory's project ID or the one specified with + -p. +
+
+
Delimit filenames with a NUL byte instead of newline.
+
+
Check the directory project ID and inherit flag, not its + children.
+
+ id
+
Compare to id instead of the target files and + directories' project IDs.
+
+
Check subdirectories recursively.
+
+
+
zfs project + -p id + [-rs] + file|directory
+
Set project ID and/or inherit flag on the files and directories. +
+
+ id
+
Set the project ID to the given value.
+
+
Set on subdirectories recursively.
+
+
Set project inherit flag on the given files and directories. This is + usually used for setting up tree quotas with + -r. In that case, the directory's project ID + will be set for all its descendants, unless specified explicitly with + -p.
+
+
+
+
+
+
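As a hedged usage sketch (the directory /tank/fs/projA and project ID 100 are hypothetical), a tree quota is typically prepared by tagging a directory tree with a project ID plus the inherit flag, then shown and checked:
# zfs project -p 100 -rs /tank/fs/projA
# zfs project -d /tank/fs/projA
# zfs project -c -p 100 -r /tank/fs/projA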

+

zfs-projectspace(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-projectspace.8.html b/man/v2.1/8/zfs-projectspace.8.html new file mode 100644 index 000000000..d1b4f37cf --- /dev/null +++ b/man/v2.1/8/zfs-projectspace.8.html @@ -0,0 +1,387 @@ + + + + + + + zfs-projectspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-projectspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + user, + user, + user, + and + user + properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: + type, name, + , + . + The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: + , + posixuser, smbuser, + posixgroup, smbgroup. The default + is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral rather than a name; consequently the -i option (SID to POSIX ID translation), the -n option (numeric ID), and the -t option (types) do not apply.
+
+
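A hedged usage sketch against a hypothetical filesystem tank/home, using only the options described above (-H for tab-delimited output, -p for parsable numeric output):
# zfs userspace tank/home
# zfs groupspace -H -p tank/home
# zfs projectspace tank/home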
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-promote.8.html b/man/v2.1/8/zfs-promote.8.html new file mode 100644 index 000000000..f59b17313 --- /dev/null +++ b/man/v2.1/8/zfs-promote.8.html @@ -0,0 +1,274 @@ + + + + + + + zfs-promote.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-promote.8

+
+ + + + + +
ZFS-PROMOTE(8)System Manager's ManualZFS-PROMOTE(8)
+
+
+

+

zfs-promote — + promote clone dataset to no longer depend on origin + snapshot

+
+
+

+ + + + + +
zfspromote clone
+
+
+

+

The zfs promote + command makes it possible to destroy the dataset that the clone was created + from. The clone parent-child dependency relationship is reversed, so that + the origin dataset becomes a clone of the specified dataset.

+

The snapshot that was cloned, and any snapshots previous to this + snapshot, are now owned by the promoted clone. The space they use moves from + the origin dataset to the promoted clone, so enough space must be available + to accommodate these snapshots. No new space is consumed by this operation, + but the space accounting is adjusted. The promoted clone must not have any + conflicting snapshot names of its own. The zfs + rename subcommand can be used to rename any + conflicting snapshots.

+
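A hedged sketch with hypothetical names: pool/project/beta is a clone of pool/project/alpha@v1 and carries a snapshot name that collides with one on the origin, so it is renamed before the promotion:
# zfs rename pool/project/beta@v1 pool/project/beta@v1-clone
# zfs promote pool/project/beta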
+
+

+

zfs-clone(8), + zfs-rename(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-receive.8.html b/man/v2.1/8/zfs-receive.8.html new file mode 100644 index 000000000..58d06d0ac --- /dev/null +++ b/man/v2.1/8/zfs-receive.8.html @@ -0,0 +1,560 @@ + + + + + + + zfs-receive.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-receive.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the + destination file system must already exist, and its most recent snapshot + must match the incremental stream's source. For + , the + destination device link is destroyed and recreated, which means the + + cannot be accessed during the receive + operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost dataset in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin= is a special case because, even if origin is a read-only property and cannot be set, it's allowed to receive the send stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
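For instance (an illustrative case with hypothetical names), receiving the stream of poolA/fs/data@snap into poolB/backup with -d would create poolB/backup/fs/data@snap, whereas -e would create poolB/backup/data@snap.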
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + immediately before the receive. When receiving a stream from + zfs send + -R, causes the property to be inherited by all + descendant datasets, as through zfs + inherit property was run on + any descendant datasets that have this property set on the sending + system. +

If the send stream was sent with -c, then overriding the compression property will have no effect on the received data, but the compression property will be set. To have the data recompressed on receive, remove the -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs + send tank/test@snap1 | + zfs recv + -o + encryption= + -o + = + -o + keylocation=file:///path/to/keyfile
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with + a stream generated by zfs + send -t + token, where the token + is the value of the + + property of the filesystem or volume which is received into.

+

To use this flag, the storage pool + must have the + + feature enabled. See zpool-features(7) for details + on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
+
+
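A hedged end-to-end sketch of a resumable receive; the pool names and the host backuphost are hypothetical, and <token> stands for the value read from the receive_resume_token property on the destination:
# zfs send tank/data@snap1 | ssh backuphost zfs receive -s pool/backup/data
# ssh backuphost zfs get -H -o value receive_resume_token pool/backup/data
# zfs send -t <token> | ssh backuphost zfs receive -s pool/backup/data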
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-recv.8.html b/man/v2.1/8/zfs-recv.8.html new file mode 100644 index 000000000..f2f0b573a --- /dev/null +++ b/man/v2.1/8/zfs-recv.8.html @@ -0,0 +1,560 @@ + + + + + + + zfs-recv.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-recv.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the + destination file system must already exist, and its most recent snapshot + must match the incremental stream's source. For + , the + destination device link is destroyed and recreated, which means the + + cannot be accessed during the receive + operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost dataset in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin= is a special case because, even if origin is a read-only property and cannot be set, it's allowed to receive the send stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + immediately before the receive. When receiving a stream from + zfs send + -R, causes the property to be inherited by all + descendant datasets, as through zfs + inherit property was run on + any descendant datasets that have this property set on the sending + system. +

If the send stream was sent with -c, then overriding the compression property will have no effect on the received data, but the compression property will be set. To have the data recompressed on receive, remove the -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs + send tank/test@snap1 | + zfs recv + -o + encryption= + -o + = + -o + keylocation=file:///path/to/keyfile
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with + a stream generated by zfs + send -t + token, where the token + is the value of the + + property of the filesystem or volume which is received into.

+

To use this flag, the storage pool + must have the + + feature enabled. See zpool-features(7) for details + on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
+
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-redact.8.html b/man/v2.1/8/zfs-redact.8.html new file mode 100644 index 000000000..b62da9b5a --- /dev/null +++ b/man/v2.1/8/zfs-redact.8.html @@ -0,0 +1,766 @@ + + + + + + + zfs-redact.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-redact.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfssend [-DLPVRbcehnpsvw] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPVcensvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-PVenv] + -t receive_resume_token
+
+ + + + + +
zfssend [-PVnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark + redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from + the first snapshot to the second snapshot. For example, + -I @a fs@d + is similar to -i @a + ; + -i + + ; + -i + + fs@d. The incremental source may be specified as + with the -i option.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
, + --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
, + --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes. Streams sent with -c will not + have their data recompressed on the receiver side using + -o + = + value. The data will stay compressed as it was + from the sender. The new compression property will be set for future + data.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold + command), and indicating to zfs + receive that the holds be applied to the + dataset on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the + source may be the origin snapshot, which must be fully specified + (for example, + , + not just + ).

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
, + --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an + earlier snapshot in the destination's history. It will commonly be an + earlier snapshot in the destination's file system, in which case it + can be specified as the last component of the name (the + or + @ character and following). +

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
  1. To receive, as a clone, an incremental send from the original snapshot to one of the snapshots it was redacted with respect to. In this case, the stream will produce a valid dataset when received because all blocks that were redacted in the parent are guaranteed to be present in the child's send stream. This use case will produce a normal snapshot, which can be used just like other snapshots.
  2. To receive an incremental send from the original snapshot to something redacted with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. In this case, each block that was redacted in the original is still redacted (redacting with respect to additional snapshots causes less data to be redacted, because the snapshots define what is permitted, and everything else is redacted). This use case will produce a new redacted snapshot.
  3. To receive an incremental send from a redaction bookmark of the original snapshot that was created when redacting with respect to a subset of the set of snapshots the initial snapshot was created with respect to anything else. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  5. To receive a full send as a clone of the redacted snapshot. Since the stream is a full send, it definitionally contains all the data needed to create a new dataset. This use case will either produce a normal snapshot or a redacted one, depending on whether the full send stream was redacted.
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
+
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
, + --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
+
+
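Before the discussion of redaction below, a hedged sketch of the most common use of the forms above: a full send followed by an incremental (-i) send of the same dataset. The dataset, pool, and host names are hypothetical.
# zfs send tank/data@monday | ssh backuphost zfs receive pool/backup/data
# zfs send -i @monday tank/data@tuesday | ssh backuphost zfs receive pool/backup/data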
+

+

ZFS has support for a limited version of data subsetting, in the + form of redaction. Using the zfs + redact command, a + can be created that stores a list of blocks containing + sensitive information. When provided to zfs + send, this causes a redacted send + to occur. Redacted sends omit the blocks containing sensitive information, + replacing them with REDACT records. When these send streams are received, a + redacted dataset is created. A redacted dataset cannot be + mounted by default, since it is incomplete. It can be used to receive other + send streams. In this way datasets can be used for data backup and + replication, with all the benefits that zfs send and receive have to offer, + while protecting sensitive information from being stored on less-trusted + machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.

+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
January 12, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-release.8.html b/man/v2.1/8/zfs-release.8.html new file mode 100644 index 000000000..f8af6f80e --- /dev/null +++ b/man/v2.1/8/zfs-release.8.html @@ -0,0 +1,320 @@ + + + + + + + zfs-release.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-release.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot
+
+ + + + + +
zfsholds [-rH] + snapshot
+
+ + + + + +
zfsrelease [-r] + tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rH] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
+
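As a short illustration of the hold lifecycle (the tag and dataset names are hypothetical):

    # Place a hold named "backup" on a snapshot and its descendants
    zfs hold -r backup tank/home@monday
    # List the holds, tab-delimited and without headers
    zfs holds -rH tank/home@monday
    # zfs destroy tank/home@monday would now fail with EBUSY
    # Release the hold once it is no longer needed
    zfs release -r backup tank/home@monday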
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-rename.8.html b/man/v2.1/8/zfs-rename.8.html new file mode 100644 index 000000000..b6372a970 --- /dev/null +++ b/man/v2.1/8/zfs-rename.8.html @@ -0,0 +1,331 @@ + + + + + + + zfs-rename.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rename.8

+
+ + + + + +
ZFS-RENAME(8)System Manager's ManualZFS-RENAME(8)
+
+
+

+

zfs-rename — + rename ZFS dataset

+
+
+

+ + + + + +
zfsrename [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
+ + + + + +
zfsrename -p + [-f] + filesystem|volume + filesystem|volume
+
+ + + + + +
zfsrename -u + [-f] filesystem + filesystem
+
+ + + + + +
zfsrename -r + snapshot snapshot
+
+
+

+
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + -p [-f] + filesystem|volume + filesystem|volume
+
 
+
zfs rename + -u [-f] + filesystem filesystem
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
+
+
Force unmount any file systems that need to be unmounted in the + process. This flag has no effect if used together with the + -u flag.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
Do not remount file systems during rename. If a file system's mountpoint property is set to legacy or none, the file system is not unmounted even if this option is not given.
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are + the only dataset that can be renamed recursively.
+
+
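For illustration (the dataset and snapshot names are hypothetical):

    # Rename a filesystem, creating any missing parent datasets
    zfs rename -p tank/home/alice tank/archive/home/alice
    # Rename a snapshot recursively across all descendent datasets
    zfs rename -r tank/home@monday tank/home@last-monday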
+
+ + + + + +
September 1, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-rollback.8.html b/man/v2.1/8/zfs-rollback.8.html new file mode 100644 index 000000000..bd044cee2 --- /dev/null +++ b/man/v2.1/8/zfs-rollback.8.html @@ -0,0 +1,283 @@ + + + + + + + zfs-rollback.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rollback.8

+
+ + + + + +
ZFS-ROLLBACK(8)System Manager's ManualZFS-ROLLBACK(8)
+
+
+

+

zfs-rollback — + roll ZFS dataset back to snapshot

+
+
+

+ + + + + +
zfsrollback [-Rfr] + snapshot
+
+
+

+

When a dataset is rolled back, all data that has changed since the + snapshot is discarded, and the dataset reverts to the state at the time of + the snapshot. By default, the command refuses to roll back to a snapshot + other than the most recent one. In order to do so, all intermediate + snapshots and bookmarks must be destroyed by specifying the + -r option.

+

The -rR options do not recursively destroy + the child snapshots of a recursive snapshot. Only direct snapshots of the + specified filesystem are destroyed by either of these options. To completely + roll back a recursive snapshot, you must roll back the individual child + snapshots.

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones of + those snapshots.
+
+
Used with the -R option to force an unmount of any + clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
+
+
+
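For illustration (the snapshot names are hypothetical):

    # Roll back to the most recent snapshot of the dataset
    zfs rollback tank/home@tuesday
    # Roll back further, destroying the intermediate snapshots and bookmarks
    zfs rollback -r tank/home@sunday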
+

+

zfs-snapshot(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-send.8.html b/man/v2.1/8/zfs-send.8.html new file mode 100644 index 000000000..963204d2b --- /dev/null +++ b/man/v2.1/8/zfs-send.8.html @@ -0,0 +1,766 @@ + + + + + + + zfs-send.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-send.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfssend [-DLPVRbcehnpsvw] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPVcensvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-PVenv] + -t receive_resume_token
+
+ + + + + +
zfssend [-PVnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark + redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
, + --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
, + --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for blocks which are compressed on disk and in memory (see the compression property for details). If the lz4_compress feature is active on the sending system, then the receiving system must have that feature enabled as well. If the large_blocks feature is enabled on the sending system but the -L option is not supplied in conjunction with -c, then the data will be decompressed before sending so it can be split into smaller block sizes. Streams sent with -c will not have their data recompressed on the receiver side using -o compression=value. The data will stay compressed as it was from the sender. The new compression property will be set for future data.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold + command), and indicating to zfs + receive that the holds be applied to the + dataset on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
, + --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
1. To receive, as a clone, an incremental send from the original snapshot to one of the snapshots it was redacted with respect to. In this case, the stream will produce a valid dataset when received because all blocks that were redacted in the parent are guaranteed to be present in the child's send stream. This use case will produce a normal snapshot, which can be used just like other snapshots.
2. To receive an incremental send from the original snapshot to something redacted with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. In this case, each block that was redacted in the original is still redacted (redacting with respect to additional snapshots causes less data to be redacted, because the snapshots define what is permitted, and everything else is redacted). This use case will produce a new redacted snapshot.
3. To receive an incremental send from a redaction bookmark of the original snapshot that was created when redacting with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
5. To receive a full send as a clone of the redacted snapshot. Since the stream is a full send, it definitionally contains all the data needed to create a new dataset. This use case will either produce a normal snapshot or a redacted one, depending on whether the full send stream was redacted.
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
+
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
, + --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
+
+
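The resumable forms above (-t and -S) pair with zfs receive -s. A hedged sketch, with hypothetical pool and host names:

    # On the target, receive with -s so an interrupted stream leaves a resume token
    zfs send tank/data@snap | ssh target zfs receive -s pool/data
    # After an interruption, read the token from the target ...
    TOKEN=$(ssh target zfs get -H -o value receive_resume_token pool/data)
    # ... and resume sending from where the previous stream stopped
    zfs send -t "$TOKEN" | ssh target zfs receive -s pool/data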
+

+

ZFS has support for a limited version of data subsetting, in the form of redaction. Using the zfs redact command, a redaction bookmark can be created that stores a list of blocks containing sensitive information. When provided to zfs send, this causes a redacted send to occur. Redacted sends omit the blocks containing sensitive information, replacing them with REDACT records. When these send streams are received, a redacted dataset is created. A redacted dataset cannot be mounted by default, since it is incomplete. It can be used to receive other send streams. In this way datasets can be used for data backup and replication, with all the benefits that zfs send and receive have to offer, while protecting sensitive information from being stored on less-trusted machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.

+
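A sketch of the commands behind this example, using hypothetical names (tank/shop for the source filesystem, target for the receiving host):

    # Snapshot the source and create one clone per audience
    zfs snapshot tank/shop@base
    zfs clone tank/shop@base tank/shop-research
    zfs clone tank/shop@base tank/shop-dev
    # (remove or replace the sensitive files in each clone, then snapshot them)
    zfs snapshot tank/shop-research@base
    zfs snapshot tank/shop-dev@base
    # The redaction bookmark records blocks modified by both redaction snapshots
    zfs redact tank/shop@base book tank/shop-research@base tank/shop-dev@base
    # Send the parent redacted, then send the sanitized clones as incrementals
    zfs send --redact book tank/shop@base | ssh target zfs receive pool/shop
    zfs send -i tank/shop@base tank/shop-research@base | ssh target zfs receive pool/shop-research
    zfs send -i tank/shop@base tank/shop-dev@base | ssh target zfs receive pool/shop-dev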
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
January 12, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-set.8.html b/man/v2.1/8/zfs-set.8.html new file mode 100644 index 000000000..d51cce0d5 --- /dev/null +++ b/man/v2.1/8/zfs-set.8.html @@ -0,0 +1,407 @@ + + + + + + + zfs-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-set.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+
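A brief illustration of the three subcommands together (the dataset and property values are hypothetical):

    # Set a quota, then check the value and where it comes from
    zfs set quota=20G tank/home/alice
    zfs get -o name,property,value,source quota tank/home/alice
    # Clear the local setting so the value is inherited (or reset to default)
    zfs inherit quota tank/home/alice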

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-share.8.html b/man/v2.1/8/zfs-share.8.html new file mode 100644 index 000000000..cd5514c78 --- /dev/null +++ b/man/v2.1/8/zfs-share.8.html @@ -0,0 +1,307 @@ + + + + + + + zfs-share.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-share.8

+
+ + + + + +
ZFS-SHARE(8)System Manager's ManualZFS-SHARE(8)
+
+
+

+

zfs-shareshare + and unshare ZFS filesystems

+
+
+

+ + + + + +
zfsshare [-l] + -a|filesystem
+
+ + + + + +
zfsunshare + -a|filesystem|mountpoint
+
+
+

+
+
zfs share + [-l] + -a|filesystem
+
Shares available ZFS file systems. +
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
+
+
+
zfs unshare + -a|filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
+
+
+
+
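For illustration (the dataset name is hypothetical):

    # Enable NFS sharing via the property, then share and unshare the filesystem
    zfs set sharenfs=on tank/export
    zfs share tank/export
    zfs unshare tank/export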
+
+

+

exports(5), smb.conf(5), + zfsprops(7)

+
+
+ + + + + +
May 17, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-snapshot.8.html b/man/v2.1/8/zfs-snapshot.8.html new file mode 100644 index 000000000..ec3a7d834 --- /dev/null +++ b/man/v2.1/8/zfs-snapshot.8.html @@ -0,0 +1,281 @@ + + + + + + + zfs-snapshot.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-snapshot.8

+
+ + + + + +
ZFS-SNAPSHOT(8)System Manager's ManualZFS-SNAPSHOT(8)
+
+
+

+

zfs-snapshot — + create snapshots of ZFS datasets

+
+
+

+ + + + + +
zfssnapshot [-r] + [-o + property=value]… + dataset@snapname
+
+
+

+

All previous modifications by successful system calls to the file + system are part of the snapshots. Snapshots are taken atomically, so that + all snapshots correspond to the same moment in time. + zfs snap can be used as an + alias for zfs snapshot. See + the Snapshots section of + zfsconcepts(7) for details.

+
+
+ property=value
+
Set the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets
+
+
+
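For illustration (the snapshot names and the user property are hypothetical):

    # Atomically snapshot a filesystem and all of its descendants
    zfs snapshot -r tank/home@nightly
    # Set a user property on the snapshot at creation time
    zfs snapshot -o com.example:note=pre-upgrade tank/data@pre-upgrade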
+

+

zfs-bookmark(8), zfs-clone(8), + zfs-destroy(8), zfs-diff(8), + zfs-hold(8), zfs-rename(8), + zfs-rollback(8), zfs-send(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-unallow.8.html b/man/v2.1/8/zfs-unallow.8.html new file mode 100644 index 000000000..005cdb91c --- /dev/null +++ b/man/v2.1/8/zfs-unallow.8.html @@ -0,0 +1,848 @@ + + + + + + + zfs-unallow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unallow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAME                  TYPE        NOTES
allow                 subcommand  Must also have the permission that is being allowed
bookmark              subcommand
clone                 subcommand  Must also have the create ability and mount ability in the origin file system
create                subcommand  Must also have the mount ability. Must also have the refreservation ability to create a non-sparse volume.
destroy               subcommand  Must also have the mount ability
diff                  subcommand  Allows lookup of paths within a dataset given an object number, and the ability to create snapshots necessary to zfs diff.
hold                  subcommand  Allows adding a user hold to a snapshot
load-key              subcommand  Allows loading and unloading of encryption key (see zfs load-key and zfs unload-key).
change-key            subcommand  Allows changing an encryption key via zfs change-key.
mount                 subcommand  Allows mounting/umounting ZFS datasets
promote               subcommand  Must also have the mount and promote ability in the origin file system
receive               subcommand  Must also have the mount and create ability
release               subcommand  Allows releasing a user hold which might destroy the snapshot
rename                subcommand  Must also have the mount and create ability in the new parent
rollback              subcommand  Must also have the mount ability
send                  subcommand
share                 subcommand  Allows sharing file systems over NFS or SMB protocols
snapshot              subcommand  Must also have the mount ability
groupquota            other       Allows accessing any groupquota@... property
groupobjquota         other       Allows accessing any groupobjquota@... property
groupused             other       Allows reading any groupused@... property
groupobjused          other       Allows reading any groupobjused@... property
userprop              other       Allows changing any user property
userquota             other       Allows accessing any userquota@... property
userobjquota          other       Allows accessing any userobjquota@... property
userused              other       Allows reading any userused@... property
userobjused           other       Allows reading any userobjused@... property
projectobjquota       other       Allows accessing any projectobjquota@... property
projectquota          other       Allows accessing any projectquota@... property
projectobjused        other       Allows reading any projectobjused@... property
projectused           other       Allows reading any projectused@... property
aclinherit            property
aclmode               property
acltype               property
atime                 property
canmount              property
casesensitivity       property
checksum              property
compression           property
context               property
copies                property
dedup                 property
defcontext            property
devices               property
dnodesize             property
encryption            property
exec                  property
filesystem_limit      property
fscontext             property
keyformat             property
keylocation           property
logbias               property
mlslabel              property
mountpoint            property
nbmand                property
normalization         property
overlay               property
pbkdf2iters           property
primarycache          property
quota                 property
readonly              property
recordsize            property
redundant_metadata    property
refquota              property
refreservation        property
relatime              property
reservation           property
rootcontext           property
secondarycache        property
setuid                property
sharenfs              property
sharesmb              property
snapdev               property
snapdir               property
snapshot_limit        property
special_small_blocks  property
sync                  property
utf8only              property
version               property
volblocksize          property
volmode               property
volsize               property
vscan                 property
xattr                 property
zoned                 property
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs + allow command. No permissions are explicitly + denied, so other permissions granted are still in effect. For example, if + the permission is granted by an ancestor. If no permissions are specified, + then all permissions for the specified user, + group, or everyone are removed. + Specifying everyone (or using the + -e option) only removes the permissions that were + granted to everyone, not all permissions for every user and group. See the + zfs allow command for a + description of the -ldugec options. +
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
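For illustration (the user and dataset names are hypothetical):

    # Delegate snapshot and send rights to a user on a subtree
    zfs allow -u alice snapshot,send tank/projects
    # Review the delegations currently in effect
    zfs allow tank/projects
    # Remove exactly the permissions that were granted above
    zfs unallow -u alice snapshot,send tank/projects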
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-unjail.8.html b/man/v2.1/8/zfs-unjail.8.html new file mode 100644 index 000000000..7d8a011d5 --- /dev/null +++ b/man/v2.1/8/zfs-unjail.8.html @@ -0,0 +1,311 @@ + + + + + + + zfs-unjail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unjail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail identified by JID jailid or name jailname. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

You cannot attach a jailed dataset's children to another jail. + You can also not attach the root file system of the jail or any dataset + which needs to be mounted before the zfs rc script is run inside the + jail, as it would be attached unmounted until it is mounted from the rc + script inside the jail.

+

To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The quota property cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
+
+
+
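For illustration on FreeBSD (the jail ID and dataset name are hypothetical):

    # Mark the dataset as manageable from within a jail, then attach it
    zfs set jailed=on tank/jails/www
    zfs jail 23 tank/jails/www
    # Detach it again when the jail is torn down
    zfs unjail 23 tank/jails/www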
+

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-unload-key.8.html b/man/v2.1/8/zfs-unload-key.8.html new file mode 100644 index 000000000..9a24886ea --- /dev/null +++ b/man/v2.1/8/zfs-unload-key.8.html @@ -0,0 +1,473 @@ + + + + + + + zfs-unload-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unload-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
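For illustration (the dataset name and key file location are hypothetical):

    # Load the key for an encryption root (prompts when keylocation=prompt)
    zfs load-key tank/secure
    # Dry-run check that a key file is correct without changing key state
    zfs load-key -n -L file:///root/tank-secure.key tank/secure
    # Change the wrapping passphrase; the master key and data stay unchanged
    zfs change-key -o keyformat=passphrase tank/secure
    # Unload the key after the dataset has been unmounted
    zfs unload-key tank/secure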
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root. Any descendant datasets will inherit their encryption key from the encryption root by default, meaning that loading, unloading, or changing the key for the encryption root will implicitly do the same for all inheriting datasets. If this inheritance is not desired, simply supply a keyformat when creating the child dataset or use zfs change-key to break an existing relationship, creating a new encryption root on the child. Note that the child's keyformat may match that of the parent while still creating a new encryption root, and that changing the encryption property alone does not create a new encryption root; this would simply use a different cipher suite with the same key as its encryption root. The one exception is that clones will always use their origin's encryption key. As a result of this exception, some encryption-related properties (namely keystatus, keyformat, keylocation, and pbkdf2iters) do not inherit like other ZFS properties and instead use the value determined by their encryption root. Encryption root inheritance can be tracked via the read-only encryptionroot property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-unmount.8.html b/man/v2.1/8/zfs-unmount.8.html new file mode 100644 index 000000000..29d721c4e --- /dev/null +++ b/man/v2.1/8/zfs-unmount.8.html @@ -0,0 +1,335 @@ + + + + + + + zfs-unmount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unmount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its mountpoint property, if the path exists and is empty. If mountpoint is set to legacy, the filesystem should be instead mounted using mount(8).
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily for the duration of the mount. See the Temporary Mount Point Properties section of zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
+
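For illustration (the dataset names are hypothetical):

    # Mount all filesystems, loading encryption keys as needed
    zfs mount -l -a
    # Mount one filesystem read-only for the duration of this mount
    zfs mount -o ro tank/media
    # Unmount it again, unloading its encryption key if it is an encryption root
    zfs unmount -u tank/media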
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-upgrade.8.html b/man/v2.1/8/zfs-upgrade.8.html new file mode 100644 index 000000000..32e2d5aac --- /dev/null +++ b/man/v2.1/8/zfs-upgrade.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-upgrade.8

+
+ + + + + +
ZFS-UPGRADE(8)System Manager's ManualZFS-UPGRADE(8)
+
+
+

+

zfs-upgrade — + manage on-disk version of ZFS filesystems

+
+
+

+ + + + + +
zfsupgrade
+
+ + + + + +
zfsupgrade -v
+
+ + + + + +
zfsupgrade [-r] + [-V version] + -a|filesystem
+
+
+

+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] + -a|filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of ZFS. zfs send + streams generated from new snapshots of these file systems cannot be + accessed on systems running older versions of ZFS. +

In general, the file system version is independent of the pool + version. See zpool-features(7) for information on + features of ZFS storage pools.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.

+
+
+ version
+
Upgrade to version. If not specified, upgrade to + the most recent version. This option can only be used to increase the + version number, and only up to the most recent version supported by + this version of ZFS.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
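For example (the dataset name is illustrative), a filesystem and all of its descendents can be upgraded to the most recent supported version with:

# zfs upgrade -r pool/home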
+
+
+
+
+
+

+

zpool-upgrade(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-userspace.8.html b/man/v2.1/8/zfs-userspace.8.html new file mode 100644 index 000000000..c4bfa754f --- /dev/null +++ b/man/v2.1/8/zfs-userspace.8.html @@ -0,0 +1,387 @@ + + + + + + + zfs-userspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-userspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + user, + user, + user, + and + user + properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: + type, name, + , + . + The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: + , + posixuser, smbuser, + posixgroup, smbgroup. The default + is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral, not a name; it therefore needs neither the -i option for SID-to-POSIX-ID translation, nor -n for numeric IDs, nor -t for types.
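For example (the dataset name is illustrative), per-user and per-group usage for a filesystem can be displayed with:

# zfs userspace pool/home
# zfs groupspace pool/home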
+
+
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-wait.8.html b/man/v2.1/8/zfs-wait.8.html new file mode 100644 index 000000000..27bcdee51 --- /dev/null +++ b/man/v2.1/8/zfs-wait.8.html @@ -0,0 +1,279 @@ + + + + + + + zfs-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-wait.8

+
+ + + + + +
ZFS-WAIT(8)System Manager's ManualZFS-WAIT(8)
+
+
+

+

zfs-waitwait + for activity in ZFS filesystem to stop

+
+
+

+ + + + + +
zfswait [-t + activity[,activity]…] + filesystem
+
+
+

+

Waits until all background activity of the given types has ceased + in the given filesystem. The activity could cease because it has completed + or because the filesystem has been destroyed or unmounted. If no activities + are specified, the command waits until background activity of every type + listed below has ceased. If there is no activity of the given types in + progress, the command returns immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
The filesystem's internal delete queue to empty
+
+
+

Note that the internal delete queue does not finish draining until + all large files have had time to be fully destroyed and all open file + handles to unlinked files are closed.
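For example (the dataset name is illustrative), the following waits until all background activity of every listed type has ceased in a filesystem:

# zfs wait pool/home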

+
+
+

+

lsof(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs.8.html b/man/v2.1/8/zfs.8.html new file mode 100644 index 000000000..b5cb708b2 --- /dev/null +++ b/man/v2.1/8/zfs.8.html @@ -0,0 +1,995 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)System Manager's ManualZFS(8)
+
+
+

+

zfsconfigure + ZFS datasets

+
+
+

+ + + + + +
zfs-?V
+
+ + + + + +
zfsversion
+
+ + + + + +
zfssubcommand + [arguments]
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace. For + example:

+
pool/{filesystem,volume,snapshot}
+

where the maximum length of a dataset name is + + (256B) and the maximum amount of nesting allowed in a path is 50 levels + deep.

+

A dataset can be one of the following:

+
+
+
+
Can be mounted within the standard system namespace and behaves like other + file systems. While ZFS file systems are designed to be POSIX-compliant, + known issues exist that prevent compliance in some cases. Applications + that depend on standards conformance might fail due to non-standard + behavior when checking file system free space.
+
+
A logical volume exported as a raw or block device. This type of dataset + should only be used when a block device is required. File systems are + typically used in most environments.
+
+
A read-only version of a file system or volume at a given point in time. + It is specified as + filesystem@name or + volume@name.
+
+
Much like a snapshot, but without the hold on on-disk + data. It can be used as the source of a send (but not for a receive). It + is specified as + filesystem#name or + volume#name.
+
+
+

See zfsconcepts(7) for details.

+
+

+

Properties are divided into two types: native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about properties, see + zfsprops(7).

+
+
+

+

Enabling the + + feature allows for the creation of encrypted filesystems and volumes. ZFS + will encrypt file and zvol data, file attributes, ACLs, permission bits, + directory listings, FUID mappings, and + // + data. For an overview of encryption, see + zfs-load-key(8).

+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs -V, + --version
+
 
+
zfs version
+
Displays the software version of the zfs userland + utility and the zfs kernel module.
+
+
+

+
+
zfs-list(8)
+
Lists the property information for the given datasets in tabular + form.
+
zfs-create(8)
+
Creates a new ZFS file system or volume.
+
zfs-destroy(8)
+
Destroys the given dataset(s), snapshot(s), or bookmark.
+
zfs-rename(8)
+
Renames the given dataset (filesystem or snapshot).
+
zfs-upgrade(8)
+
Manage upgrading the on-disk version of filesystems.
+
+
+
+

+
+
zfs-snapshot(8)
+
Creates snapshots with the given names.
+
zfs-rollback(8)
+
Roll back the given dataset to a previous snapshot.
+
zfs-hold(8)/zfs-release(8)
+
Add or remove a hold reference to the specified snapshot or snapshots. If + a hold exists on a snapshot, attempts to destroy that snapshot by using + the zfs destroy command + return + .
+
zfs-diff(8)
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem.
+
+
+
+

+
+
zfs-clone(8)
+
Creates a clone of the given snapshot.
+
zfs-promote(8)
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot.
+
+
+
+

+
+
zfs-send(8)
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark.
+
zfs-receive(8)
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the + zfs-send(8) subcommand, which by default creates a full + stream.
+
zfs-bookmark(8)
+
Creates a new bookmark of the given snapshot or bookmark. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs + send command.
+
zfs-redact(8)
+
Generate a new redaction bookmark. This feature can be used to allow + clones of a filesystem to be made available on a remote system, in the + case where their parent need not (or needs to not) be usable.
+
+
+
+

+
+
zfs-get(8)
+
Displays properties for the given datasets.
+
zfs-set(8)
+
Sets the property or list of properties to the given value(s) for each + dataset.
+
zfs-inherit(8)
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists.
+
+
+
+

+
+
zfs-userspace(8)/zfs-groupspace(8)/zfs-projectspace(8)
+
Displays space consumed by, and quotas on, each user, group, or project in + the specified filesystem or snapshot.
+
zfs-project(8)
+
List, set, or clear project ID and/or inherit flag on the file(s) or + directories.
+
+
+
+

+
+
zfs-mount(8)
+
Displays all ZFS file systems currently mounted, or mount ZFS filesystem + on a path described by its mountpoint property.
+
zfs-unmount(8)
+
Unmounts currently mounted ZFS file systems.
+
+
+
+

+
+
zfs-share(8)
+
Shares available ZFS file systems.
+
zfs-unshare(8)
+
Unshares currently shared ZFS file systems.
+
+
+
+

+
+
zfs-allow(8)
+
Delegate permissions on the specified filesystem or volume.
+
zfs-unallow(8)
+
Remove delegated permissions on the specified filesystem or volume.
+
+
+
+

+
+
zfs-change-key(8)
+
Add or change an encryption key on the specified dataset.
+
zfs-load-key(8)
+
Load the key for the specified encrypted dataset, enabling access.
+
zfs-unload-key(8)
+
Unload a key for the specified dataset, removing the ability to access the + dataset.
+
+
+
+

+
+
zfs-program(8)
+
Execute ZFS administrative operations programmatically via a Lua + script-language channel program.
+
+
+
+

+
+
zfs-jail(8)
+
Attaches a filesystem to a jail.
+
zfs-unjail(8)
+
Detaches a filesystem from a jail.
+
+
+
+

+
+
zfs-wait(8)
+
Wait for background activity in a filesystem to complete.
+
+
+
+
+

+

The zfs utility exits + on success, + if an error + occurs, and if + invalid command line options were specified.

+
+
+

+
+
: Creating a ZFS File System Hierarchy
+
The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, + and is automatically inherited by the child file system. +
# zfs + create + pool/home
+
# zfs + set + mountpoint=/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
: Creating a ZFS Snapshot
+
The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system. +
# zfs + snapshot + pool/home/bob@yesterday
+
+
: Creating and Destroying Multiple + Snapshots
+
The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. + Each snapshot is mounted on demand in the + .zfs/snapshot directory at the root of its file + system. The second command destroys the newly created snapshots. +
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
: Disabling and Enabling File System + Compression
+
The following command disables the compression property + for all file systems under pool/home. The next + command explicitly enables compression for + pool/home/anne. +
# zfs + set + compression=off + pool/home
+
# zfs + set + compression=on + pool/home/anne
+
+
: Listing ZFS Datasets
+
The following command lists all active file systems and volumes in the + system. Snapshots are displayed if + =on. + The default is off. See zpoolprops(7) + for more information on pool properties. +
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
: Setting a Quota on a ZFS File System
+
The following command sets a quota of 50 Gbytes for + pool/home/bob: +
# zfs + set quota=50G + pool/home/bob
+
+
: Listing ZFS Properties
+
The following command lists all properties for + pool/home/bob: +
+
# zfs get  pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings + for pool/home/bob:

+
+
# zfs get -r -s  -o ,,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
: Rolling Back a ZFS File System
+
The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots: +
# zfs + rollback -r + pool/home/anne@yesterday
+
+
: Creating a ZFS Clone
+
The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday. +
# zfs + clone pool/home/bob@yesterday + pool/clone
+
+
: Promoting a ZFS Clone
+
The following commands illustrate how to test out changes to a file + system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming: +
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
: Inheriting ZFS Properties
+
The following command causes pool/home/bob + and pool/home/anne to + inherit the checksum property from their parent. +
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
: Remotely Replicating ZFS Data
+
The following commands send a full stream and then an incremental stream + to a remote machine, restoring them into + + and + , + respectively. + + must contain the file system + , + and must not initially contain + . +
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
: Using the zfs + receive -d + Option
+
The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as + an empty file system. +
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
: Setting User Properties
+
The following example sets the user-defined + com.example:department + property for a dataset: +
# zfs + set + com.example:department=12345 + tank/accounting
+
+
: Performing a Rolling Snapshot
+
The following example shows how to maintain a history of snapshots with a + consistent naming scheme. To keep a week's worth of snapshots, the user + destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows: +
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
: Setting sharenfs Property Options on a ZFS File + System
+
The following commands show how to set sharenfs property + options to enable read-write access for a set of IP addresses and to + enable root access for system "neo" on the + tank/home file system: +
# zfs + set + sharenfs='rw=@123.123.0.0/16,root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
: Delegating ZFS Administration Permissions on a + ZFS Dataset
+
The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take + snapshots on tank/cindys. The permissions on + tank/cindys are also displayed. +
+
# zfs allow ,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys + will be unable to mount file systems under + tank/cindys. Add an ACE similar to the following + syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
: Delegating Create Time Permissions on a ZFS + Dataset
+
The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
: Defining and Granting a Permission Set on a ZFS + Dataset
+
The following example shows how to define and grant a permission set on + the tank/users file system. The permissions on + tank/users are also displayed. +
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
: Delegating Property Permissions on a ZFS + Dataset
+
The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.
+
# zfs allow cindys quota, users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
: Removing ZFS Delegated Permissions on a ZFS + Dataset
+
The following example shows how to remove the snapshot permission from the + staff group on the tank/users file + system. The permissions on tank/users are also + displayed. +
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
: Showing the differences between a snapshot and + a ZFS Dataset
+
The following example shows how to see what has changed between a prior + snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected. +
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
: Creating a bookmark
+
The following example creates a bookmark of a snapshot. This bookmark can then be used instead of a snapshot in send streams.
# zfs + bookmark + rpool@snapshot + rpool#bookmark
+
+
: Setting + + Property Options on a ZFS File System
+
The following example shows how to share an SMB filesystem through ZFS. Note that a user and their password must be given.
# + smbmount //127.0.0.1/share_tmp + /mnt/tmp -o + user=workgroup/turbo,password=obrut,uid=1000
+

Minimal /etc/samba/smb.conf + configuration is required, as follows.

+

Samba will need to bind to the loopback interface for the ZFS + utilities to communicate with Samba. This is the default behavior for + most Linux distributions.

+

Samba must be able to authenticate a user. This can be done in + a number of ways (passwd(5), LDAP, + smbpasswd(5), &c.). How to do this is outside the + scope of this document – refer to smb.conf(5) + for more information.

+

See the USERSHARES + section for all configuration options, in case you need to modify any + options of the share afterwards. Do note that any changes done with the + net(8) command will be undone if the share is ever + unshared (like via a reboot).

+
+
+
+
+

+
+
+
Use ANSI color in zfs diff + and zfs list output.
+
+
+
+
Cause zfs mount to use + mount(8) to mount ZFS datasets. This option is provided + for backwards compatibility with older ZFS versions.
+
+
+
+
Tells zfs to set the maximum pipe size for sends/receives. Disabled by default on Linux due to an unfixed deadlock in Linux's pipe size handling code.
+
+
+
+

+

.

+
+
+

+

attr(1), gzip(1), + ssh(1), chmod(2), + fsync(2), stat(2), + write(2), acl(5), + attributes(5), exports(5), + zfsconcepts(7), zfsprops(7), + exportfs(8), mount(8), + net(8), selinux(8), + zfs-allow(8), zfs-bookmark(8), + zfs-change-key(8), zfs-clone(8), + zfs-create(8), zfs-destroy(8), + zfs-diff(8), zfs-get(8), + zfs-groupspace(8), zfs-hold(8), + zfs-inherit(8), zfs-jail(8), + zfs-list(8), zfs-load-key(8), + zfs-mount(8), zfs-program(8), + zfs-project(8), zfs-projectspace(8), + zfs-promote(8), zfs-receive(8), + zfs-redact(8), zfs-release(8), + zfs-rename(8), zfs-rollback(8), + zfs-send(8), zfs-set(8), + zfs-share(8), zfs-snapshot(8), + zfs-unallow(8), zfs-unjail(8), + zfs-unload-key(8), zfs-unmount(8), + zfs-upgrade(8), + zfs-userspace(8), zfs-wait(8), + zpool(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs_ids_to_path.8.html b/man/v2.1/8/zfs_ids_to_path.8.html new file mode 100644 index 000000000..d4fe5c7f7 --- /dev/null +++ b/man/v2.1/8/zfs_ids_to_path.8.html @@ -0,0 +1,271 @@ + + + + + + + zfs_ids_to_path.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_ids_to_path.8

+
+ + + + + +
ZFS_IDS_TO_PATH(8)System Manager's ManualZFS_IDS_TO_PATH(8)
+
+
+

+

zfs_ids_to_path — + convert objset and object ids to names and paths

+
+
+

+ + + + + +
zfs_ids_to_path[-v] pool + objset-id object-id
+
+
+

+

The + + utility converts a provided objset and object ids into a path to the file + they refer to.

+
+
+
Verbose. Print the dataset name and the file path within the dataset + separately. This will work correctly even if the dataset is not + mounted.
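For example (the pool name and both IDs are hypothetical; real values typically come from error reports such as zpool status -v or zpool events output):

# zfs_ids_to_path -v tank 0x54 0x2c1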
+
+
+
+

+

zdb(8), zfs(8)

+
+
+ + + + + +
April 17, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zgenhostid.8.html b/man/v2.1/8/zgenhostid.8.html new file mode 100644 index 000000000..4414f894c --- /dev/null +++ b/man/v2.1/8/zgenhostid.8.html @@ -0,0 +1,329 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)System Manager's ManualZGENHOSTID(8)
+
+
+

+

zgenhostid — + generate host ID into /etc/hostid

+
+
+

+ + + + + +
zgenhostid[-f] [-o + filename] [hostid]
+
+
+

+

Creates the /etc/hostid file and stores the host ID in it. If hostid was provided, validates and stores that value. Otherwise, randomly generates an ID.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Allow output overwrite.
+
+ filename
+
Write to filename instead of the default + /etc/hostid.
+
hostid
+
Specifies the value to be placed in /etc/hostid. + It should be a number with a value between 1 and 2^32-1. If + , generate a random + ID. This value must be unique among your systems. It + must be an 8-digit-long hexadecimal number, optionally + prefixed by "0x".
+
+
+
+

+

/etc/hostid

+
+
+

+
+
Generate a random hostid and store it
+
+
# + zgenhostid
+
+
Record the libc-generated hostid in + /etc/hostid
+
+
# + zgenhostid + "$(hostid)"
+
+
Record a custom hostid (0xdeadbeef) in + /etc/hostid
+
+
# + zgenhostid + deadbeef
+
+
Record a custom hostid (0x01234567) in + /tmp/hostid and overwrite the file + if it exists
+
+
# + zgenhostid -f + -o /tmp/hostid + 0x01234567
+
+
+
+
+

+

genhostid(1), hostid(1), + spl(4)

+
+
+

+

zgenhostid emulates the + genhostid(1) utility and is provided for use on systems + which do not include the utility or do not provide the + sethostid(3) function.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zinject.8.html b/man/v2.1/8/zinject.8.html new file mode 100644 index 000000000..83f454ce8 --- /dev/null +++ b/man/v2.1/8/zinject.8.html @@ -0,0 +1,547 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
ZINJECT(8)System Manager's ManualZINJECT(8)
+
+
+

+

zinjectZFS + Fault Injector

+
+
+

+

zinject creates artificial problems in a + ZFS pool by simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+ + + + + +
zinject
+
+
List injection records.
+
+ + + + + +
zinject-b + objset:object:level:start:end + [-f frequency] + -amu [pool]
+
+
Force an error into the pool at a bookmark.
+
+ + + + + +
zinject-c + id|all
+
+
Cancel injection records.
+
+ + + + + +
zinject-d vdev + -A + | + pool
+
+
Force a vdev into the DEGRADED or FAULTED state.
+
+ + + + + +
zinject-d vdev + -D + latency:lanes + pool
+
+
Add an artificial delay to IO requests on a particular device, such that + the requests take a minimum of latency milliseconds + to complete. Each delay has an associated number of + lanes which defines the number of concurrent IO + requests that can be processed. +

For example, with a single lane delay of 10 ms + (-D + 10:1), the device will only + be able to service a single IO request at a time with each request + taking 10 ms to complete. So, if only a single request is submitted + every 10 ms, the average latency will be 10 ms; but if more than one + request is submitted every 10 ms, the average latency will be more than + 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D + 10:2), then the device will + be able to service two requests at a time, each with a minimum latency + of 10 ms. So, if two requests are submitted every 10 ms, then the + average latency will be 10 ms; but if more than two requests are + submitted every 10 ms, the average latency will be more than 10 ms.

+

Also note, these delays are additive. So two invocations of + -D + 10:1 are roughly equivalent + to a single invocation of -D + 10:2. This also means, that + one can specify multiple lanes with differing target latencies. For + example, an invocation of -D + 10:1 followed by + -D + 25:2 will create 3 lanes on + the device: one lane with a latency of 10 ms and two lanes with a 25 ms + latency.
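As an illustrative sketch (pool and device names are hypothetical), the following injects a single 25 ms, two-lane delay on one device and later cancels all injection records:

# zinject -d sdb -D 25:2 tank
# zinject -c all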

+
+
+ + + + + +
zinject-d vdev + [-e device_error] + [-L label_error] + [-T failure] + [-f frequency] + [-F] pool
+
+
Force a vdev error.
+
+ + + + + +
zinject-I [-s + seconds|-g + txgs] pool
+
+
Simulate a hardware failure that fails to honor a cache flush.
+
+ + + + + +
zinject-p function + pool
+
+
Panic inside the specified function.
+
+ + + + + +
zinject-t + + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-r range] + [-amq] path
+
+
Force an error into the contents of a file.
+
+ + + + + +
zinject-t + + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-amq] path
+
+
Force an error into the metadnode for a file or directory.
+
+ + + + + +
zinject-t mos_type + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-r range] + [-amqu] pool
+
+
Force an error into the MOS of a pool.
+
+
+
+

+
+
+
Flush the ARC before injection.
+
+ objset:object:level:start:end
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+ dvas
+
Inject the given error only into specific DVAs. The mask should be + specified as a list of 0-indexed DVAs separated by commas + (ex. + 0,2). This option is not + applicable to logical data errors such as decompress and + decrypt.
+
+ vdev
+
A vdev specified by path or GUID.
+
+ device_error
+
Specify +
+
+
for an ECKSUM error,
+
+
for a data decompression error,
+
+
for a data decryption error,
+
+
to flip a bit in the data after a read,
+
+
for an ECHILD error,
+
+
for an EIO error where reopening the device will succeed, or
+
+
for an ENXIO error where reopening the device will fail.
+
+

For EIO and ENXIO, the "failed" reads or writes + still occur. The probe simply sets the error value reported by the I/O + pipeline so it appears the read or write failed. Decryption errors only + currently work with file data.

+
+
+ frequency
+
Only inject errors a fraction of the time. Expressed as a real number + percentage between + + and + .
+
+
Fail faster. Do fewer checks.
+
+ txgs
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+ level
+
Inject an error at a particular block level. The default is + .
+
+ label_error
+
Set the label error region to one of + , + , + , or + .
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+ range
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+ seconds
+
Run for this many seconds before reporting failure.
+
+ failure
+
Set the failure type to one of all, + , + , + , or + .
+
+ mos_type
+
Set this to +
+
+
for any data in the MOS,
+
+
for an object directory,
+
+
for the pool configuration,
+
+
for the block pointer list,
+
+
for the space map,
+
+
for the metaslab, or
+
+
for the persistent error log.
+
+
+
+
Unload the pool after injection.
+
+
+
+

+
+
+
Run zinject in debug mode.
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-add.8.html b/man/v2.1/8/zpool-add.8.html new file mode 100644 index 000000000..7d028c3f6 --- /dev/null +++ b/man/v2.1/8/zpool-add.8.html @@ -0,0 +1,301 @@ + + + + + + + zpool-add.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-add.8

+
+ + + + + +
ZPOOL-ADD(8)System Manager's ManualZPOOL-ADD(8)
+
+
+

+

zpool-addadd + vdevs to ZFS storage pool

+
+
+

+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev
+
+
+

+

Adds the specified virtual devices to the given pool. The + vdev specification is described in the + section of zpoolconcepts(7). The behavior + of the -f option, and the device checks performed + are described in the zpool + create subcommand.

+
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name + regardless of the /dev/disk path used to open + it.
+
+
Displays the configuration that would be used without actually adding the + vdevs. The actual pool creation can still fail due + to insufficient privileges or device sharing.
+
+
Display real paths for vdevs instead of only the + last component of the path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) + manual page for a list of valid properties that can be set. The only + property supported at the moment is + .
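For example (pool and device names are hypothetical), the following previews and then performs the addition of a new mirror vdev:

# zpool add -n tank mirror sdc sdd
# zpool add tank mirror sdc sdd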
+
+
+
+

+

zpool-attach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-remove(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-attach.8.html b/man/v2.1/8/zpool-attach.8.html new file mode 100644 index 000000000..da6403a0e --- /dev/null +++ b/man/v2.1/8/zpool-attach.8.html @@ -0,0 +1,296 @@ + + + + + + + zpool-attach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-attach.8

+
+ + + + + +
ZPOOL-ATTACH(8)System Manager's ManualZPOOL-ATTACH(8)
+
+
+

+

zpool-attach — + attach new device to existing ZFS vdev

+
+
+

+ + + + + +
zpoolattach [-fsw] + [-o + property=value] + pool device new_device
+
+
+

+

Attaches new_device to the existing + device. The existing device cannot be part of a raidz + configuration. If device is not currently part of a + mirrored configuration, device automatically + transforms into a two-way mirror of device and + new_device. If device is part of + a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately and any + running scrub is cancelled.

+
+
+
Forces use of new_device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) + manual page for a list of valid properties that can be set. The only + property supported at the moment is + .
+
+
The new_device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the resilver + completes. Sequential reconstruction is not supported for raidz + configurations.
+
+
Waits until new_device has finished resilvering + before returning.
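For example (pool and device names are hypothetical), the following attaches a new disk to an existing device and waits for the resilver to finish:

# zpool attach -w tank sda sdb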
+
+
+
+

+

zpool-add(8), zpool-detach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-replace(8), + zpool-resilver(8)

+
+
+ + + + + +
May 15, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-checkpoint.8.html b/man/v2.1/8/zpool-checkpoint.8.html new file mode 100644 index 000000000..e9b72c895 --- /dev/null +++ b/man/v2.1/8/zpool-checkpoint.8.html @@ -0,0 +1,287 @@ + + + + + + + zpool-checkpoint.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-checkpoint.8

+
+ + + + + +
ZPOOL-CHECKPOINT(8)System Manager's ManualZPOOL-CHECKPOINT(8)
+
+
+

+

zpool-checkpoint — + check-point current ZFS storage pool state

+
+
+

+ + + + + +
zpoolcheckpoint [-d + [-w]] pool
+
+
+

+

Checkpoints the current state of pool , + which can be later restored by zpool + import --rewind-to-checkpoint. The existence of a + checkpoint in a pool prohibits the following zpool + subcommands: remove, attach, + detach, split, + and reguid. In addition, it + may break reservation boundaries if the pool lacks free space. The + zpool status command + indicates the existence of a checkpoint or the progress of discarding a + checkpoint from a pool. zpool + list can be used to check how much space the + checkpoint takes from the pool.

+
+
+

+
+
, + --discard
+
Discards an existing checkpoint from pool.
+
, + --wait
+
Waits until the checkpoint has finished being discarded before + returning.
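For example (the pool name is illustrative), a checkpoint can be taken, discarded, or, after the pool has been exported, rewound to:

# zpool checkpoint tank
# zpool checkpoint -d tank
# zpool import --rewind-to-checkpoint tank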
+
+
+
+

+

zfs-snapshot(8), + zpool-import(8), zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-clear.8.html b/man/v2.1/8/zpool-clear.8.html new file mode 100644 index 000000000..ec1f56f0f --- /dev/null +++ b/man/v2.1/8/zpool-clear.8.html @@ -0,0 +1,272 @@ + + + + + + + zpool-clear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-clear.8

+
+ + + + + +
ZPOOL-CLEAR(8)System Manager's ManualZPOOL-CLEAR(8)
+
+
+

+

zpool-clear — + clear device errors in ZFS storage pool

+
+
+

+ + + + + +
zpoolclear pool + [device]…
+
+
+

+

Clears device errors in a pool. If no arguments are specified, all + device errors within the pool are cleared. If one or more devices is + specified, only those errors associated with the specified device or devices + are cleared.

+

If the pool was suspended it will be brought back + online provided the devices can be accessed. Pools with + + enabled which have been suspended cannot be resumed. While the pool was + suspended, it may have been imported on another host, and resuming I/O could + result in pool damage.
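For example (pool and device names are hypothetical), all errors in a pool, or only those on a single device, can be cleared with:

# zpool clear tank
# zpool clear tank sdb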

+
+
+

+

zdb(8), zpool-reopen(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-create.8.html b/man/v2.1/8/zpool-create.8.html new file mode 100644 index 000000000..43e39c6dd --- /dev/null +++ b/man/v2.1/8/zpool-create.8.html @@ -0,0 +1,382 @@ + + + + + + + zpool-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-create.8

+
+ + + + + +
ZPOOL-CREATE(8)System Manager's ManualZPOOL-CREATE(8)
+
+
+

+

zpool-create — + create ZFS storage pool

+
+
+

+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]… + [-o + feature@feature=value] + [-o + compatibility=off|legacy|file[,file]…] + [-O + file-system-property=value]… + [-R root] + [-t tname] + pool vdev
+
+
+

+

Creates a new storage pool containing the virtual devices + specified on the command line. The pool name must begin with a letter, and + can only contain alphanumeric characters as well as the underscore + (""), + dash + (""), + colon + (""), + space (" "), and period + (""). + The pool names mirror, raidz, + draid, spare and + are + reserved, as are names beginning with mirror, + raidz, draid, and + spare. The vdev specification is + described in the Virtual Devices + section of zpoolconcepts(7).

+

The command attempts to verify that each device specified is accessible and not currently in use by another subsystem. However, this check is not robust enough to detect simultaneous attempts to use a new device in different pools, even if = enabled. The administrator must ensure that simultaneous invocations of any combination of zpool replace, zpool create, zpool add, or zpool labelclear do not refer to the same device. Using the same device in two pools will result in pool corruption.

+

There are some uses, such as being currently mounted, or specified as the dedicated dump device, that prevent a device from ever being used by ZFS. Other uses, such as having a preexisting UFS file system, can be overridden with -f.

+

The command also checks that the replication strategy for the pool + is consistent. An attempt to combine redundant and non-redundant storage in + a single pool, or to mix disks and files, results in an error unless + -f is specified. The use of differently-sized + devices within a single raidz or mirror group is also flagged as an error + unless -f is specified.

+

Unless the -R option is specified, the default mount point is /pool. The mount point must not exist or must be empty, or else the root dataset will not be able to be mounted. This can be overridden with the -m option.

+

By default all supported features are enabled + on the new pool. The -d option and the + -o compatibility property (e.g + -o + =2020) + can be used to restrict the features that are enabled, so that the pool can + be imported on other releases of ZFS.

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled with -o. See + zpool-features(7) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is + /pool or altroot/pool if + altroot is specified. The mount point must be an + absolute path, legacy, or none. For + more information on dataset mount points, see + zfsprops(7).
+
+
Displays the configuration that would be used without actually creating + the pool. The actual pool creation can still fail due to insufficient + privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See zpoolprops(7) for a + list of valid properties that can be set.
+
+ compatibility=off|legacy|file[,file]…
+
Specifies compatibility feature sets. See + zpool-features(7) for more information about + compatibility feature sets.
+
+ feature@feature=value
+
Sets the given pool feature. See the zpool-features(7) + section for a list of valid features that can be set. Value can be either + disabled or enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the pool. + See zfsprops(7) for a list of valid properties that can + be set.
+
+ root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ tname
+
Sets the in-core pool name to tname while the + on-disk name will be the name specified as pool. + This will set the default of the cachefile property to + none. This is intended to handle name space collisions + when creating pools for other systems, such as virtual machines or + physical machines whose pools live on network block devices.
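For example (pool name, mount point, and device names are hypothetical), the following previews and then creates a mirrored pool with an explicit mount point:

# zpool create -n tank mirror sda sdb
# zpool create -m /export/tank tank mirror sda sdb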
+
+
+
+

+

zpool-destroy(8), + zpool-export(8), zpool-import(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-destroy.8.html b/man/v2.1/8/zpool-destroy.8.html new file mode 100644 index 000000000..a0022f3d7 --- /dev/null +++ b/man/v2.1/8/zpool-destroy.8.html @@ -0,0 +1,263 @@ + + + + + + + zpool-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-destroy.8

+
+ + + + + +
ZPOOL-DESTROY(8)System Manager's ManualZPOOL-DESTROY(8)
+
+
+

+

zpool-destroy — + destroy ZFS storage pool

+
+
+

+ + + + + +
zpooldestroy [-f] + pool
+
+
+

+

Destroys the given pool, freeing up any devices for other use. + This command tries to unmount any active datasets before destroying the + pool.

+
+
+
Forcefully unmount all active datasets.
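For example (the pool name is illustrative):

# zpool destroy -f tank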
+
+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-detach.8.html b/man/v2.1/8/zpool-detach.8.html new file mode 100644 index 000000000..0fe7d0786 --- /dev/null +++ b/man/v2.1/8/zpool-detach.8.html @@ -0,0 +1,268 @@ + + + + + + + zpool-detach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-detach.8

+
+ + + + + +
ZPOOL-DETACH(8)System Manager's ManualZPOOL-DETACH(8)
+
+
+

+

zpool-detach — + detach device from ZFS mirror

+
+
+

+ + + + + +
zpooldetach pool device
+
+
+

+

Detaches device from a mirror. The operation + is refused if there are no other valid replicas of the data. If + device may be re-added to the pool later on then + consider the zpool offline + command instead.
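For example (pool and device names are hypothetical), one side of a mirror can be detached with:

# zpool detach tank sdb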

+
+
+

+

zpool-attach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-remove(8), zpool-replace(8), + zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-events.8.html b/man/v2.1/8/zpool-events.8.html new file mode 100644 index 000000000..c02300aff --- /dev/null +++ b/man/v2.1/8/zpool-events.8.html @@ -0,0 +1,884 @@ + + + + + + + zpool-events.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-events.8

+
+ + + + + +
ZPOOL-EVENTS(8)System Manager's ManualZPOOL-EVENTS(8)
+
+
+

+

zpool-events — + list recent events generated by kernel

+
+
+

+ + + + + +
zpoolevents [-vHf] + [pool]
+
+ + + + + +
zpoolevents -c
+
+
+

+

Lists all recent events generated by the ZFS kernel modules. These + events are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. For + more information about the subclasses and event payloads that can be + generated see EVENTS and the following + sections.

+
+
+

+
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Print the entire payload for each event.
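For example (the pool name is illustrative), verbose events for a single pool can be followed, or the event history cleared, with:

# zpool events -vf tank
# zpool events -c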
+
+
+
+

+

These are the different event subclasses. The full event name + would be + , + but only the last part is listed here.

+

+
+
+
Issued when a checksum error has been detected.
+
+
Issued when there is an I/O error in a vdev in the pool.
+
+
Issued when there have been data errors in the pool.
+
+
Issued when an I/O request is determined to be "hung", this can + be caused by lost completion events due to flaky hardware or drivers. See + + in zfs(4) for additional information regarding + "hung" I/O detection and configuration.
+
+
Issued when a completed I/O request exceeds the maximum allowed time + specified by the + + module parameter. This can be an indicator of problems with the underlying + storage device. The number of delay events is ratelimited by the + + module parameter.
+
+
Issued every time a vdev change has been made to the pool.
+
+
Issued when a pool cannot be imported.
+
+
Issued when a pool is destroyed.
+
+
Issued when a pool is exported.
+
+
Issued when a pool is imported.
+
+
Issued when a REGUID (a new unique identifier for the pool has been regenerated) has been detected.
+
+
Issued when the vdev is unknown, such as when trying to clear device errors on a vdev that has failed or been kicked from the system/pool and is no longer available.
+
+
Issued when a vdev could not be opened (because it didn't exist for + example).
+
+
Issued when corrupt data have been detected on a vdev.
+
+
Issued when there are no more replicas to sustain the pool. This would + lead to the pool being + .
+
+
Issued when a missing device in the pool has been detected.
+
+
Issued when the system (kernel) has removed a device, and ZFS notices that the device isn't there any more. This is usually followed by a probe_failure event.
+
+
Issued when the label is OK but invalid.
+
+
Issued when the ashift alignment requirement has increased.
+
+
Issued when a vdev is detached from a mirror (or a spare detached from a vdev where it has been used to replace a failed drive - this only works if the original drive has been re-added).
+
+
Issued when clearing device errors in a pool. Such as running + zpool clear on a device in + the pool.
+
+
Issued when a check to see if a given vdev could be opened is + started.
+
+
Issued when a spare has kicked in to replace a failed device.
+
+
Issued when a vdev can be automatically expanded.
+
+
Issued when there is an I/O failure in a vdev in the pool.
+
+
Issued when a probe fails on a vdev. This would occur if a vdev has been kicked from the system outside of ZFS (such as when the kernel has removed the device).
+
+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+
+
Issued when a resilver is started.
+
+
Issued when the running resilver has finished.
+
+
Issued when a scrub is started on a pool.
+
+
Issued when a pool has finished scrubbing.
+
+
Issued when a scrub is aborted on a pool.
+
+
Issued when a scrub is resumed on a pool.
+
+
Issued when a scrub is paused on a pool.
+
+
 
+
+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to + uppercase and prefixed with + .

+

+
+
+
Pool name.
+
+
Failmode - + , + , + or + . + See the + + property in zpoolprops(7) for more information.
+
+
The GUID of the pool.
+
+
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
+
+
The GUID of the vdev in question (the vdev failing or operated upon with + zpool clear, etc.).
+
+
Type of vdev - + , + , + , + etc. See the + section of zpoolconcepts(7) for more + information on possible values.
+
+
Full path of the vdev, including any -partX.
+
+
ID of vdev (if any).
+
+
Physical FRU location.
+
+
State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed + to open, 5=faulted, 6=degraded, 7=healthy).
+
+
The ashift value of the vdev.
+
+
The time the last I/O request completed for the specified vdev.
+
+
The time since the last I/O request completed for the specified vdev.
+
+
List of spares, including full path and any -partX.
+
+
GUID(s) of spares.
+
+
How many read errors that have been detected on the vdev.
+
+
How many write errors that have been detected on the vdev.
+
+
How many checksum errors that have been detected on the vdev.
+
+
GUID of the vdev parent.
+
+
Type of parent. See vdev_type.
+
+
Path of the vdev parent (if any).
+
+
ID of the vdev parent (if any).
+
+
The object set number for a given I/O request.
+
+
The object number for a given I/O request.
+
+
The indirect level for the block. Level 0 is the lowest level and includes + data blocks. Values > 0 indicate metadata blocks at the appropriate + level.
+
+
The block ID for a given I/O request.
+
+
The error number for a failure when handling a given I/O request, + compatible with errno(3) with the value of + + used to indicate a ZFS checksum error.
+
+
The offset in bytes of where to write the I/O request for the specified + vdev.
+
+
The size in bytes of the I/O request.
+
+
The current flags describing how the I/O request should be handled. See + the I/O FLAGS section for the full list of I/O + flags.
+
+
The current stage of the I/O in the pipeline. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The time elapsed (in nanoseconds) waiting for the block layer to complete + the I/O request. Unlike zio_delta, this does not include + any vdev queuing time and is therefore solely a measure of the block layer + performance.
+
+
The time when a given I/O request was submitted.
+
+
The time required to service a given I/O request.
+
+
The previous state of the vdev.
+
+
The expected checksum value for the block.
+
+
The actual checksum value for an errant block.
+
+
Checksum algorithm used. See zfsprops(7) for more + information on the available checksum algorithms.
+
+
Whether or not the data is byteswapped.
+
+
start, + end) pairs of corruption offsets. Offsets are always + aligned on a 64-bit boundary, and can include some gaps of non-corruption. + (See bad_ranges_min_gap)
+
+
In order to bound the size of the bad_ranges array, gaps + of non-corruption less than or equal to + bad_ranges_min_gap bytes have been merged with adjacent + corruption. Always at least 8 bytes, since corruption is detected on a + 64-bit word basis.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits in that range which were clear in the + good data and set in the bad data.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits for that range which were set in the + good data and clear in the bad data.
+
+
If this field exists, it is an array of (bad data & ~(good data)); that is, the bits set in the bad data which are cleared in the good data. Each element corresponds to a byte whose offset is in a range in bad_ranges, and the array is ordered by offset. Thus, the first element is the first byte in the first bad_ranges range, and the last element is the last byte in the last bad_ranges range.
+
+
Like bad_set_bits, but contains (good + data & ~(bad + data)); that is, the bits set in the good data which are cleared in + the bad data.
+
+
If this field exists, it is an array of counters. Each entry counts bits set in a particular bit of a big-endian uint64 type. The first entry counts bits set in the high-order bit of the first byte, the 9th byte, etc., and the last entry counts bits set in the low-order bit of the 8th byte, the 16th byte, etc. This information is useful for observing a stuck bit in a parallel data path, such as IDE or parallel SCSI.
+
+
If this field exists, it is an array of counters. Each entry counts bits cleared in a particular bit of a big-endian uint64 type. The first entry counts clears of the high-order bit of the first byte, the 9th byte, etc., and the last entry counts clears of the low-order bit of the 8th byte, the 16th byte, etc. This information is useful for observing a stuck bit in a parallel data path, such as IDE or parallel SCSI.
+
+
+
+
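These payload fields can be inspected directly from the command line with zpool events. A minimal sketch, assuming a shell with root privileges; the exact fields shown depend on the events currently in the kernel buffer:

  # Print all pending events together with their full payload contents.
  zpool events -v
  # Follow new events as they arrive, similar to tail -f.
  zpool events -f
  # Clear the in-kernel event buffer once the events have been reviewed.
  zpool events -c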

+

The ZFS I/O pipeline is comprised of various stages which are + defined below. The individual stages are used to construct these basic I/O + operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on + an event to describe the life cycle of a given I/O request.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StageBit MaskOperations



ZIO_STAGE_OPEN0x00000001RWFCI
ZIO_STAGE_READ_BP_INIT0x00000002R----
ZIO_STAGE_WRITE_BP_INIT0x00000004-W---
ZIO_STAGE_FREE_BP_INIT0x00000008--F--
ZIO_STAGE_ISSUE_ASYNC0x00000010RWF--
ZIO_STAGE_WRITE_COMPRESS0x00000020-W---
ZIO_STAGE_ENCRYPT0x00000040-W---
ZIO_STAGE_CHECKSUM_GENERATE0x00000080-W---
ZIO_STAGE_NOP_WRITE0x00000100-W---
ZIO_STAGE_DDT_READ_START0x00000200R----
ZIO_STAGE_DDT_READ_DONE0x00000400R----
ZIO_STAGE_DDT_WRITE0x00000800-W---
ZIO_STAGE_DDT_FREE0x00001000--F--
ZIO_STAGE_GANG_ASSEMBLE0x00002000RWFC-
ZIO_STAGE_GANG_ISSUE0x00004000RWFC-
ZIO_STAGE_DVA_THROTTLE0x00008000-W---
ZIO_STAGE_DVA_ALLOCATE0x00010000-W---
ZIO_STAGE_DVA_FREE0x00020000--F--
ZIO_STAGE_DVA_CLAIM0x00040000---C-
ZIO_STAGE_READY0x00080000RWFCI
ZIO_STAGE_VDEV_IO_START0x00100000RW--I
ZIO_STAGE_VDEV_IO_DONE0x00200000RW--I
ZIO_STAGE_VDEV_IO_ASSESS0x00400000RW--I
ZIO_STAGE_CHECKSUM_VERIFY0x00800000R----
ZIO_STAGE_DONE0x01000000RWFCI
+
+
+

+

Every I/O request in the pipeline contains a set of flags which + describe its function and are used to govern its behavior. These flags will + be set in an event as a zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FlagBit Mask


ZIO_FLAG_DONT_AGGREGATE0x00000001
ZIO_FLAG_IO_REPAIR0x00000002
ZIO_FLAG_SELF_HEAL0x00000004
ZIO_FLAG_RESILVER0x00000008
ZIO_FLAG_SCRUB0x00000010
ZIO_FLAG_SCAN_THREAD0x00000020
ZIO_FLAG_PHYSICAL0x00000040
ZIO_FLAG_CANFAIL0x00000080
ZIO_FLAG_SPECULATIVE0x00000100
ZIO_FLAG_CONFIG_WRITER0x00000200
ZIO_FLAG_DONT_RETRY0x00000400
ZIO_FLAG_DONT_CACHE0x00000800
ZIO_FLAG_NODATA0x00001000
ZIO_FLAG_INDUCE_DAMAGE0x00002000
ZIO_FLAG_IO_ALLOCATING0x00004000
ZIO_FLAG_IO_RETRY0x00008000
ZIO_FLAG_PROBE0x00010000
ZIO_FLAG_TRYHARD0x00020000
ZIO_FLAG_OPTIONAL0x00040000
ZIO_FLAG_DONT_QUEUE0x00080000
ZIO_FLAG_DONT_PROPAGATE0x00100000
ZIO_FLAG_IO_BYPASS0x00200000
ZIO_FLAG_IO_REWRITE0x00400000
ZIO_FLAG_RAW_COMPRESS0x00800000
ZIO_FLAG_RAW_ENCRYPT0x01000000
ZIO_FLAG_GANG_CHILD0x02000000
ZIO_FLAG_DDT_CHILD0x04000000
ZIO_FLAG_GODFATHER0x08000000
ZIO_FLAG_NOPWRITE0x10000000
ZIO_FLAG_REEXECUTED0x20000000
ZIO_FLAG_DELEGATED0x40000000
ZIO_FLAG_FASTWRITE0x80000000
+
+
+
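Both zio_stage and zio_flags appear in event payloads as raw bit masks, so individual stages and flags can be tested with ordinary shell arithmetic. A minimal bash sketch; the two hexadecimal values are hypothetical numbers copied out of a zpool events -v payload:

  # Hypothetical payload values.
  zio_stage=0x00100000
  zio_flags=0x00000018
  # 0x00100000 is ZIO_STAGE_VDEV_IO_START in the table above.
  (( zio_stage & 0x00100000 )) && echo "I/O had reached VDEV_IO_START"
  # 0x00000008 is ZIO_FLAG_RESILVER and 0x00000010 is ZIO_FLAG_SCRUB.
  (( zio_flags & 0x00000008 )) && echo "resilver I/O"
  (( zio_flags & 0x00000010 )) && echo "scrub I/O"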

+

zfs(4), zed(8), + zpool-wait(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-export.8.html b/man/v2.1/8/zpool-export.8.html new file mode 100644 index 000000000..37a27e8d2 --- /dev/null +++ b/man/v2.1/8/zpool-export.8.html @@ -0,0 +1,284 @@ + + + + + + + zpool-export.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-export.8

+
+ + + + + +
ZPOOL-EXPORT(8)System Manager's ManualZPOOL-EXPORT(8)
+
+
+

+

zpool-export — + export ZFS storage pools

+
+
+

+ + + + + +
zpoolexport [-f] + -a|pool
+
+
+

+

Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present.

+

Before exporting the pool, all datasets within the pool are + unmounted. A pool can not be exported if it has a shared spare that is + currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, so + that ZFS can label the disks with portable EFI labels. Otherwise, disk + drivers on platforms of different endianness will not recognize the + disks.
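A minimal usage sketch; the pool name tank is an assumption:

  # Cleanly unmount all datasets and export the pool.
  zpool export tank
  # Export every imported pool on the system.
  zpool export -a
  # Force the export even if datasets are busy (see the -f caveats below).
  zpool export -f tank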

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, and allow export of pools with active + shared spares. +

This command will forcefully export the pool even if it has a + shared spare that is currently being used. This may lead to potential + data corruption.

+
+
+
+
+

+

zpool-import(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-get.8.html b/man/v2.1/8/zpool-get.8.html new file mode 100644 index 000000000..a50906337 --- /dev/null +++ b/man/v2.1/8/zpool-get.8.html @@ -0,0 +1,321 @@ + + + + + + + zpool-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-get.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolset + property=value + pool
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either + + or + .
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
+
+
+
+
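Putting the subcommands and options above together, a minimal sketch; the pool name tank and the chosen properties are assumptions:

  # Show every pool property along with its source.
  zpool get all tank
  # Script-friendly output: no headers, tab-separated fields, exact values.
  zpool get -Hp capacity,health,free tank
  # Restrict the columns that are printed.
  zpool get -o name,property,value size tank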

+

zpool-features(7), + zpoolprops(7), zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-history.8.html b/man/v2.1/8/zpool-history.8.html new file mode 100644 index 000000000..4737dfde0 --- /dev/null +++ b/man/v2.1/8/zpool-history.8.html @@ -0,0 +1,274 @@ + + + + + + + zpool-history.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-history.8

+
+ + + + + +
ZPOOL-HISTORY(8)System Manager's ManualZPOOL-HISTORY(8)
+
+
+

+

zpool-history — + inspect command history of ZFS storage pools

+
+
+

+ + + + + +
zpoolhistory [-il] + [pool]…
+
+
+

+

Displays the command history of the specified pool(s) or all pools + if no pool is specified.

+
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which, in addition to the standard format, includes the user name, the hostname, and the zone in which the operation was performed.
+
+
+
+
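A minimal sketch; the pool name tank is an assumption:

  # Show the user-initiated commands recorded for the pool.
  zpool history tank
  # Include internally logged events, in long format with user, host, and zone.
  zpool history -il tank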

+

zpool-checkpoint(8), + zpool-events(8), zpool-status(8), + zpool-wait(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-import.8.html b/man/v2.1/8/zpool-import.8.html new file mode 100644 index 000000000..913b3a103 --- /dev/null +++ b/man/v2.1/8/zpool-import.8.html @@ -0,0 +1,547 @@ + + + + + + + zpool-import.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-import.8

+
+ + + + + +
ZPOOL-IMPORT(8)System Manager's ManualZPOOL-IMPORT(8)
+
+
+

+

zpool-import — + import ZFS storage pools or list available pools

+
+
+

+ + + + + +
zpoolimport [-D] + [-d + dir|device]…
+
+ + + + + +
zpoolimport -a + [-DflmN] [-F + [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root]
+
+ + + + + +
zpoolimport [-Dflmt] + [-F [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
+
+

+
+
zpool import + [-D] [-d + dir|device]…
+
Lists pools available to import. If the -d or + -c options are not specified, this command + searches for devices using libblkid on Linux and geom on + FreeBSD. The -d option can + be specified multiple times, and all directories are searched. If the + device appears to be part of an exported pool, this command displays a + summary of the pool with the name of the pool, a numeric identifier, as + well as the vdev layout and current health of the device for each device + or file. Destroyed pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified. +

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DflmN] + [-F [-nTX]] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Rewinds the pool to the checkpointed state. Once the pool is imported with this flag there is no way to undo the rewind. All changes and data that were written after the checkpoint are lost! The only exception is when the mounting option is enabled. In this case, the checkpointed state of the pool is opened and an administrator can see how the pool would look if they were to fully rewind.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dflmt] [-F + [-nTX]] [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies -FX. For more details about pool recovery mode, see the -X option, above. WARNING: This option can be extremely hazardous to the health of your pool and should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set + -o + cachefile=none when not explicitly + specified.
+
+
+
+
+
+
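A minimal sketch of the common invocations; the pool name, search directory, and new name are assumptions:

  # List pools that are available for import.
  zpool import
  # Import a pool by name, searching a specific directory for its devices.
  zpool import -d /dev/disk/by-id tank
  # Import a pool that was not cleanly exported on another host (use with care).
  zpool import -f tank
  # Import a pool under a different name.
  zpool import tank newtank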

+

zpool-export(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-initialize.8.html b/man/v2.1/8/zpool-initialize.8.html new file mode 100644 index 000000000..82e384764 --- /dev/null +++ b/man/v2.1/8/zpool-initialize.8.html @@ -0,0 +1,295 @@ + + + + + + + zpool-initialize.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-initialize.8

+
+ + + + + +
ZPOOL-INITIALIZE(8)System Manager's ManualZPOOL-INITIALIZE(8)
+
+
+

+

zpool-initialize — + write to unallocated regions of ZFS storage pool

+
+
+

+ + + + + +
zpoolinitialize + [-c|-s + |-u] [-w] + pool [device]…
+
+
+

+

Begins initializing by writing to all unallocated regions on the + specified devices, or all eligible devices in the pool if no individual + devices are specified. Only leaf data or log devices may be initialized.

+
+
, + --cancel
+
Cancel initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no cancellation + will occur on any device.
+
, + --suspend
+
Suspend initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no suspension will + occur on any device. Initializing can then be resumed by running + zpool initialize with no + flags on the relevant target devices.
+
, + --uninit
+
Clears the initialization state on the specified devices, or all eligible devices if none are specified. If the devices are being actively initialized the command will fail. After being cleared, zpool initialize with no flags can be used to re-initialize all unallocated regions on the relevant target devices.
+
, + --wait
+
Wait until the devices have finished initializing before returning.
+
+
+
+
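A minimal sketch; the pool name tank and the device name sdb are assumptions:

  # Start initializing every eligible device in the pool.
  zpool initialize tank
  # Initialize a single device and wait for it to finish.
  zpool initialize -w tank sdb
  # Suspend initialization, then resume it later.
  zpool initialize -s tank
  zpool initialize tank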

+

zpool-add(8), zpool-attach(8), + zpool-create(8), zpool-online(8), + zpool-replace(8), zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-iostat.8.html b/man/v2.1/8/zpool-iostat.8.html new file mode 100644 index 000000000..54aede2ec --- /dev/null +++ b/man/v2.1/8/zpool-iostat.8.html @@ -0,0 +1,430 @@ + + + + + + + zpool-iostat.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-iostat.8

+
+ + + + + +
ZPOOL-IOSTAT(8)System Manager's ManualZPOOL-IOSTAT(8)
+
+
+

+

zpool-iostat — + display logical I/O statistics for ZFS storage + pools

+
+
+

+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [pool…|[pool + vdev…]|vdev…] + [interval [count]]
+
+
+

+

Displays logical I/O statistics for the given pools/vdevs. Physical I/O statistics may be observed via iostat(1). If writes are located nearby, they may be merged into a single larger operation. Additional I/O may be generated depending on the level of vdev redundancy. To filter output, you may pass in a list of pools, a pool and list of vdevs in that pool, or a list of any vdevs from any pool. If no items are specified, statistics for every pool in the system are shown. When given an interval, the statistics are printed every interval seconds until killed. If the -n flag is specified, the headers are displayed only once; otherwise they are displayed periodically. If count is specified, the command exits after count reports are printed. The first report printed is always the statistics since boot, regardless of whether interval and count are passed. However, this behavior can be suppressed with the -y flag. Also note that the units of , , … that are printed in the report are in base 1024. To get the raw values, use the -p flag.

+
+
+ [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool iostat + output. Users can run any script found in their + ~/.zpool.d directory or from the system + /etc/zfs/zpool.d directory. Script names + containing the slash + () character + are not allowed. The default search path can be overridden by setting the + + environment variable. A privileged user can only run + -c if they have the + + environment variable set. If a script requires the use of a privileged + command, like smartctl(8), then it's recommended you + allow the user access to it in /etc/sudoers or add + the user to the /etc/sudoers.d/zfs file. +

If -c is passed without a script name, + it prints a list of all scripts. -c also sets + verbose mode + (-v).

+

Script output should be in the form of "name=value". + The column name is set to "name" and the value is set to + "value". Multiple lines can be used to output multiple + columns. The first line of output not in the "name=value" + format is displayed without a column title, and no more output after + that is displayed. This can be useful for printing error messages. Blank + or NULL values are printed as a '-' to make output AWKable.

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
Underlying path to the vdev (/dev/sd*). For + use with device mapper, multipath, or partitioned vdevs.
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Print headers only once when passed
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Print request size histograms for the leaf vdev's I/O. This includes + histograms of individual I/O (ind) and aggregate I/O (agg). These stats + can be useful for observing how well I/O aggregation is working. Note that + TRIM I/O may exceed 16M, but will be counted as 16M.
+
+
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
+
+
Normally the first line of output reports the statistics since boot: + suppress it.
+
+
Display latency histograms: +
+
+
Total I/O time (queuing + disk I/O time).
+
+
Disk I/O time (time reading/writing the disk).
+
+
Amount of time I/O spent in synchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in asynchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in scrub queue. Does not include disk + time.
+
+
+
+
Include average latency statistics: +
+
+
Average total I/O time (queuing + disk I/O time).
+
+
Average disk I/O time (time reading/writing the disk).
+
+
Average amount of time I/O spent in synchronous priority queues. Does + not include disk time.
+
+
Average amount of time I/O spent in asynchronous priority queues. Does + not include disk time.
+
+
Average queuing time in scrub queue. Does not include disk time.
+
+
Average queuing time in trim queue. Does not include disk time.
+
+
+
+
Include active queue statistics. Each priority queue has both pending + () + and active + () + I/O requests. Pending requests are waiting to be issued to the disk, and + active requests have been issued to disk and are waiting for completion. + These stats are broken out by priority queue: +
+
+
Current number of entries in synchronous priority queues.
+
+
Current number of entries in asynchronous priority queues.
+
+
Current number of entries in scrub queue.
+
+
Current number of entries in trim queue.
+
+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.

+
+
+
+
+
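A minimal sketch; the pool name tank and the intervals are assumptions:

  # Pool-wide statistics every 5 seconds until interrupted.
  zpool iostat tank 5
  # Per-vdev statistics with latency and queue-depth columns, three reports.
  zpool iostat -vlq tank 5 3
  # Request-size histograms for the leaf vdevs.
  zpool iostat -r tank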

+

iostat(1), smartctl(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-labelclear.8.html b/man/v2.1/8/zpool-labelclear.8.html new file mode 100644 index 000000000..495777129 --- /dev/null +++ b/man/v2.1/8/zpool-labelclear.8.html @@ -0,0 +1,272 @@ + + + + + + + zpool-labelclear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-labelclear.8

+
+ + + + + +
ZPOOL-LABELCLEAR(8)System Manager's ManualZPOOL-LABELCLEAR(8)
+
+
+

+

zpool-labelclear — + remove ZFS label information from device

+
+
+

+ + + + + +
zpoollabelclear [-f] + device
+
+
+

+

Removes ZFS label information from the specified + device. If the device is a cache + device, it also removes the L2ARC header (persistent L2ARC). The + device must not be part of an active pool + configuration.

+
+
+
Treat exported or foreign devices as inactive.
+
+
+
+
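A minimal sketch; the device path is an assumption and should be double-checked, since the label is destroyed:

  # Remove a stale ZFS label from a disk that is no longer part of any pool.
  zpool labelclear /dev/sdb1
  # Also clear labels that still appear to belong to an exported pool.
  zpool labelclear -f /dev/sdb1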

+

zpool-destroy(8), + zpool-detach(8), zpool-remove(8), + zpool-replace(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-list.8.html b/man/v2.1/8/zpool-list.8.html new file mode 100644 index 000000000..15b7469ce --- /dev/null +++ b/man/v2.1/8/zpool-list.8.html @@ -0,0 +1,316 @@ + + + + + + + zpool-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-list.8

+
+ + + + + +
ZPOOL-LIST(8)System Manager's ManualZPOOL-LIST(8)
+
+
+

+

zpool-listlist + information about ZFS storage pools

+
+
+

+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]…] + [-T u|d] + [pool]… [interval + [count]]
+
+
+

+

Lists the given pools along with a health status and space usage. + If no pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until killed. If + count is specified, the command exits after + count reports are printed.

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+ property
+
Comma-separated list of properties to display. See the + zpoolprops(7) manual page for a list of valid + properties. The default list is + , + , + , + , + , + , + , + , + , + .
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs within + the pool, in addition to the pool-wide statistics.
+
+
+
+
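A minimal sketch; the pool name and the property selection are assumptions:

  # One summary line per pool.
  zpool list
  # Choose the columns and print exact, script-friendly numbers.
  zpool list -Hp -o name,size,allocated,free,health
  # Refresh the listing for one pool every 10 seconds.
  zpool list tank 10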

+

zpool-import(8), + zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-offline.8.html b/man/v2.1/8/zpool-offline.8.html new file mode 100644 index 000000000..0201a0658 --- /dev/null +++ b/man/v2.1/8/zpool-offline.8.html @@ -0,0 +1,302 @@ + + + + + + + zpool-offline.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-offline.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline [-ft] + pool device
+
+ + + + + +
zpoolonline [-e] + pool device
+
+
+

+
+
zpool offline + [-ft] pool + device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
+
+
+
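A minimal sketch; the pool name tank and the device name sdb are assumptions:

  # Take a disk offline only until the next reboot.
  zpool offline -t tank sdb
  # Bring it back online and grow into any newly available capacity.
  zpool online -e tank sdb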

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-online.8.html b/man/v2.1/8/zpool-online.8.html new file mode 100644 index 000000000..4f3ed487b --- /dev/null +++ b/man/v2.1/8/zpool-online.8.html @@ -0,0 +1,302 @@ + + + + + + + zpool-online.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-online.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline [-ft] + pool device
+
+ + + + + +
zpoolonline [-e] + pool device
+
+
+

+
+
zpool offline + [-ft] pool + device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-reguid.8.html b/man/v2.1/8/zpool-reguid.8.html new file mode 100644 index 000000000..e00fb8f36 --- /dev/null +++ b/man/v2.1/8/zpool-reguid.8.html @@ -0,0 +1,265 @@ + + + + + + + zpool-reguid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reguid.8

+
+ + + + + +
ZPOOL-REGUID(8)System Manager's ManualZPOOL-REGUID(8)
+
+
+

+

zpool-reguid — + generate new unique identifier for ZFS storage + pool

+
+
+

+ + + + + +
zpoolreguid pool
+
+
+

+

Generates a new unique identifier for the pool. You must ensure + that all devices in this pool are online and healthy before performing this + action.
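A minimal sketch; the pool name tank is an assumption:

  # Assign a fresh GUID, for example after a pool's disks have been block-copied.
  zpool reguid tank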

+
+
+

+

zpool-export(8), + zpool-import(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-remove.8.html b/man/v2.1/8/zpool-remove.8.html new file mode 100644 index 000000000..2d397d708 --- /dev/null +++ b/man/v2.1/8/zpool-remove.8.html @@ -0,0 +1,318 @@ + + + + + + + zpool-remove.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-remove.8

+
+ + + + + +
ZPOOL-REMOVE(8)System Manager's ManualZPOOL-REMOVE(8)
+
+
+

+

zpool-remove — + remove devices from ZFS storage pool

+
+
+

+ + + + + +
zpoolremove [-npw] + pool device
+
+ + + + + +
zpoolremove -s + pool
+
+
+

+
+
zpool remove + [-npw] pool + device
+
Removes the specified device from the pool. This command supports removing + hot spare, cache, log, and both mirrored and non-redundant primary + top-level vdevs, including dedup and special vdevs. +

Top-level vdevs can only be removed if the primary pool + storage does not contain a top-level raidz vdev, all top-level vdevs + have the same sector size, and the keys for all encrypted datasets are + loaded.

+

Removing a top-level vdev reduces the + total amount of space in the storage pool. The specified device will be + evacuated by copying all allocated space from it to the other devices in + the pool. In this case, the zpool + remove command initiates the removal and + returns, while the evacuation continues in the background. The removal + progress can be monitored with zpool + status. If an IO error is encountered during the + removal process it will be cancelled. The + + feature flag must be enabled to remove a top-level vdev, see + zpool-features(7).

+

A mirrored top-level device (log or data) can be removed by + specifying the top-level mirror for the same. Non-log devices or data + devices that are part of a mirrored configuration can be removed using + the zpool detach + command.

+
+
+
Do not actually perform the removal ("No-op"). Instead, + print the estimated amount of memory that will be used by the mapping + table after the removal completes. This is nonzero only for top-level + vdevs.
+
+
+
+
Used in conjunction with the -n flag, displays + numbers as parsable (exact) values.
+
+
Waits until the removal has completed before returning.
+
+
+
zpool remove + -s pool
+
Stops and cancels an in-progress removal of a top-level vdev.
+
+
+
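A minimal sketch; the pool name tank and the vdev name mirror-1 are assumptions:

  # Estimate the mapping-table memory cost without removing anything.
  zpool remove -n tank mirror-1
  # Start the removal and wait for the evacuation to complete.
  zpool remove -w tank mirror-1
  # Cancel an in-progress top-level vdev removal.
  zpool remove -s tank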
+

+

zpool-add(8), zpool-detach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-replace(8), zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-reopen.8.html b/man/v2.1/8/zpool-reopen.8.html new file mode 100644 index 000000000..cdf3aa875 --- /dev/null +++ b/man/v2.1/8/zpool-reopen.8.html @@ -0,0 +1,267 @@ + + + + + + + zpool-reopen.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reopen.8

+
+ + + + + +
ZPOOL-REOPEN(8)System Manager's ManualZPOOL-REOPEN(8)
+
+
+

+

zpool-reopen — + reopen vdevs associated with ZFS storage pools

+
+
+

+ + + + + +
zpoolreopen [-n] + [pool]…
+
+
+

+

Reopen all vdevs associated with the specified pools, or all pools + if none specified.
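A minimal sketch; the pool name tank is an assumption:

  # Reopen the pool's vdevs, for example after a device path has reappeared.
  zpool reopen tank
  # Same, but do not restart an in-progress scrub (see the -n caveat below).
  zpool reopen -n tank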

+
+
+

+
+
+
Do not restart an in-progress scrub operation. This is not recommended and + can result in partially resilvered devices unless a second scrub is + performed.
+
+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-replace.8.html b/man/v2.1/8/zpool-replace.8.html new file mode 100644 index 000000000..df1e1d07a --- /dev/null +++ b/man/v2.1/8/zpool-replace.8.html @@ -0,0 +1,301 @@ + + + + + + + zpool-replace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-replace.8

+
+ + + + + +
ZPOOL-REPLACE(8)System Manager's ManualZPOOL-REPLACE(8)
+
+
+

+

zpool-replace — + replace one device with another in ZFS storage + pool

+
+
+

+ + + + + +
zpoolreplace [-fsw] + [-o + property=value] + pool device + [new-device]
+
+
+

+

Replaces device with + new-device. This is equivalent to attaching + new-device, waiting for it to resilver, and then + detaching device. Any in progress scrub will be + cancelled.

+

The size of new-device must be greater than + or equal to the minimum size of all the devices in a mirror or raidz + configuration.

+

new-device is required if the pool is not + redundant. If new-device is not specified, it defaults + to device. This form of replacement is useful after an + existing disk has failed and has been physically replaced. In this case, the + new disk may have the same /dev path as the old + device, even though it is actually a different disk. ZFS recognizes + this.
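A minimal sketch; the pool and device names are assumptions:

  # Replace a failing disk with a new one and wait for the resilver to finish.
  zpool replace -w tank sdb sdc
  # The failed disk was physically swapped and the new disk uses the same path.
  zpool replace tank sdb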

+
+
+
Forces use of new-device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) + manual page for a list of valid properties that can be set. The only + property supported at the moment is + .
+
+
The new-device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the resilver + completes. Sequential reconstruction is not supported for raidz + configurations.
+
+
Waits until the replacement has completed before returning.
+
+
+
+

+

zpool-detach(8), + zpool-initialize(8), zpool-online(8), + zpool-resilver(8)

+
+
+ + + + + +
May 29, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-resilver.8.html b/man/v2.1/8/zpool-resilver.8.html new file mode 100644 index 000000000..435ed2443 --- /dev/null +++ b/man/v2.1/8/zpool-resilver.8.html @@ -0,0 +1,269 @@ + + + + + + + zpool-resilver.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-resilver.8

+
+ + + + + +
ZPOOL-RESILVER(8)System Manager's ManualZPOOL-RESILVER(8)
+
+
+

+

zpool-resilver — + resilver devices in ZFS storage pools

+
+
+

+ + + + + +
zpoolresilver pool
+
+
+

+

Starts a resilver of the specified pools. If an existing resilver + is already running it will be restarted from the beginning. Any drives that + were scheduled for a deferred resilver will be added to the new one. This + requires the + + pool feature.

+
+
+

+

zpool-iostat(8), + zpool-online(8), zpool-reopen(8), + zpool-replace(8), zpool-scrub(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-scrub.8.html b/man/v2.1/8/zpool-scrub.8.html new file mode 100644 index 000000000..70d03fa5f --- /dev/null +++ b/man/v2.1/8/zpool-scrub.8.html @@ -0,0 +1,347 @@ + + + + + + + zpool-scrub.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-scrub.8

+
+ + + + + +
ZPOOL-SCRUB(8)System Manager's ManualZPOOL-SCRUB(8)
+
+
+

+

zpool-scrub — + begin or resume scrub of ZFS storage pools

+
+
+

+ + + + + +
zpoolscrub + [-s|-p] + [-w] pool
+
+
+

+

Begins a scrub or resumes a paused scrub. The scrub examines all + data in the specified pools to verify that it checksums correctly. For + replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any + damage discovered during the scrub. The zpool + status command reports the progress of the scrub and + summarizes the results of the scrub upon completion.

+

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be out + of date (for example, when attaching a new device to a mirror or replacing + an existing device), whereas scrubbing examines all data to discover silent + errors due to hardware faults or disk failure.

+

Because scrubbing and resilvering are I/O-intensive operations, + ZFS only allows one at a time.

+

A scrub is split into two parts: metadata scanning and block + scrubbing. The metadata scanning sorts blocks into large sequential ranges + which can then be read much more efficiently from disk when issuing the + scrub I/O.

+

If a scrub is paused, the zpool + scrub resumes it. If a resilver is in progress, ZFS + does not allow a scrub to be started until the resilver completes.

+

Note that, due to changes in pool data on a live system, it is + possible for scrubs to progress slightly beyond 100% completion. During this + period, no completion time estimate will be provided.

+
+
+

+
+
+
Stop scrubbing.
+
+
Pause scrubbing. Scrub pause state and progress are periodically synced to + disk. If the system is restarted or pool is exported during a paused + scrub, even after import, scrub will remain paused until it is resumed. + Once resumed the scrub will pick up from the place where it was last + checkpointed to disk. To resume a paused scrub issue + zpool scrub again.
+
+
Wait until scrub has completed before returning.
+
+
+
+
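A minimal sketch of driving a scrub by hand; the pool name tank is an assumption:

  # Start a scrub and return immediately; check progress with zpool status.
  zpool scrub tank
  # Pause it, then resume it later from where it left off.
  zpool scrub -p tank
  zpool scrub tank
  # Or block until the scrub has finished.
  zpool scrub -w tank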

+
+
: +
+
Output: +
+
# zpool status
+  ...
+  scan: scrub in progress since Sun Jul 25 16:07:49 2021
+        403M scanned at 100M/s, 68.4M issued at 10.0M/s, 405M total
+        0B repaired, 16.91% done, 00:00:04 to go
+  ...
+
+ Where: +
    +
  • Metadata which references 403M of file data has been scanned at + 100M/s, and 68.4M of that file data has been scrubbed sequentially at + 10.0M/s.
  • +
+
+
+
+
+

+

On machines using systemd, scrub timers can be enabled on a per-pool basis. weekly and monthly timer units are provided.

+
+
+
systemctl enable + zfs-scrub-weekly@rpool.timer + --now
+
+
systemctl + enable + zfs-scrub-monthly@otherpool.timer + --now
+
+
+
+

+

systemd.timer(5), + zpool-iostat(8), + zpool-resilver(8), + zpool-status(8)

+
+
+ + + + + +
July 25, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-set.8.html b/man/v2.1/8/zpool-set.8.html new file mode 100644 index 000000000..b9c8ddbe5 --- /dev/null +++ b/man/v2.1/8/zpool-set.8.html @@ -0,0 +1,321 @@ + + + + + + + zpool-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-set.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolset + property=value + pool
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either + + or + .
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
+
+
+
+
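A minimal sketch; the pool name tank and the property values are assumptions:

  # Attach a human-readable comment to the pool.
  zpool set comment="replicated offsite nightly" tank
  # Enable automatic TRIM of freed space.
  zpool set autotrim=on tank
  # Confirm the new values.
  zpool get comment,autotrim tank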

+

zpool-features(7), + zpoolprops(7), zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-split.8.html b/man/v2.1/8/zpool-split.8.html new file mode 100644 index 000000000..2418ce0fe --- /dev/null +++ b/man/v2.1/8/zpool-split.8.html @@ -0,0 +1,314 @@ + + + + + + + zpool-split.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-split.8

+
+ + + + + +
ZPOOL-SPLIT(8)System Manager's ManualZPOOL-SPLIT(8)
+
+
+

+

zpool-split — + split devices off ZFS storage pool, creating new + pool

+
+
+

+ + + + + +
zpoolsplit [-gLlnP] + [-o + property=value]… + [-R root] + pool newpool + [device]…
+
+
+

+

Splits devices off pool creating + newpool. All vdevs in pool must + be mirrors and the pool must not be in the process of resilvering. At the + time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool.

+

The optional device specification causes the specified device(s) + to be included in the new pool and, should any devices + remain unspecified, the last device in each mirror is used as would be by + default.

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Indicates that this command will request encryption keys for all encrypted + datasets it attempts to mount as it is bringing the new pool online. Note + that if any datasets have + =, + this command will block waiting for the keys to be entered. Without this + flag, encrypted datasets will be left unavailable until the keys are + loaded.
+
+
Do a dry-run ("No-op") split: do not actually perform it. Print + out the expected configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ property=value
+
Sets the specified property for newpool. See the + zpoolprops(7) manual page for more information on the + available pool properties.
+
+ root
+
Set + + for newpool to root and + automatically import it.
+
+
+
+
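A minimal sketch; the pool name tank and the new pool name tankcopy are assumptions:

  # Preview which devices would end up in the new pool.
  zpool split -n tank tankcopy
  # Perform the split and import the new pool under an alternate root.
  zpool split -R /mnt/tankcopy tank tankcopy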

+

zpool-import(8), + zpool-list(8), zpool-remove(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-status.8.html b/man/v2.1/8/zpool-status.8.html new file mode 100644 index 000000000..ee64c8ed3 --- /dev/null +++ b/man/v2.1/8/zpool-status.8.html @@ -0,0 +1,329 @@ + + + + + + + zpool-status.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-status.8

+
+ + + + + +
ZPOOL-STATUS(8)System Manager's ManualZPOOL-STATUS(8)
+
+
+

+

zpool-status — + show detailed health status for ZFS storage + pools

+
+
+

+ + + + + +
zpoolstatus [-DigLpPstvx] + [-T u|d] + [-c + [SCRIPT1[,SCRIPT2]…]] + [pool]… [interval + [count]]
+
+
+

+

Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in the + system is displayed. For more information on pool and device health, see the + Device Failure and + Recovery section of zpoolconcepts(7).

+

If a scrub or resilver is in progress, this command reports the + percentage done and the estimated time to completion. Both of these are only + approximate, because the amount of data in the pool and the other workloads + on the system can change.

+
+
+ [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool status + output. See the -c option of + zpool iostat for complete + details.
+
+
Display vdev initialization status.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the number of leaf VDEV slow IOs. This is the number of IOs that + didn't complete in + + milliseconds (default 30 seconds). This does not necessarily mean the IOs + failed to complete, just took an unreasonably long amount of time. This + may indicate a problem with the underlying storage.
+
+
Display vdev TRIM status.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Displays verbose data error information, printing out a complete list of + all data errors since the last complete pool scrub.
+
+
Only display status for pools that are exhibiting errors or are otherwise + unavailable. Warnings about pools not using the latest on-disk format will + not be included.
+
+
+
+
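A minimal sketch; the pool name tank is an assumption:

  # Full status, including a list of files affected by any data errors.
  zpool status -v tank
  # Only show pools that are exhibiting problems.
  zpool status -x
  # Refresh the status every 5 seconds with exact, parsable numbers.
  zpool status -p tank 5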

+

zpool-events(8), + zpool-history(8), zpool-iostat(8), + zpool-list(8), zpool-resilver(8), + zpool-scrub(8), zpool-wait(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-sync.8.html b/man/v2.1/8/zpool-sync.8.html new file mode 100644 index 000000000..182357591 --- /dev/null +++ b/man/v2.1/8/zpool-sync.8.html @@ -0,0 +1,266 @@ + + + + + + + zpool-sync.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-sync.8

+
+ + + + + +
ZPOOL-SYNC(8)System Manager's ManualZPOOL-SYNC(8)
+
+
+

+

zpool-syncflush + data to primary storage of ZFS storage pools

+
+
+

+ + + + + +
zpoolsync [pool]…
+
+
+

+

This command forces all in-core dirty data to be written to the + primary pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified pools.
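A minimal sketch; the pool name tank is an assumption:

  # Flush dirty data for every imported pool.
  zpool sync
  # Flush only one pool.
  zpool sync tank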

+
+
+

+

zpoolconcepts(7), + zpool-export(8), zpool-iostat(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-trim.8.html b/man/v2.1/8/zpool-trim.8.html new file mode 100644 index 000000000..28786af6a --- /dev/null +++ b/man/v2.1/8/zpool-trim.8.html @@ -0,0 +1,303 @@ + + + + + + + zpool-trim.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-trim.8

+
+ + + + + +
ZPOOL-TRIM(8)System Manager's ManualZPOOL-TRIM(8)
+
+
+

+

zpool-trim — + initiate TRIM of free space in ZFS storage pool

+
+
+

+ + + + + +
zpooltrim [-dw] + [-r rate] + [-c|-s] + pool [device]…
+
+
+

+

Initiates an immediate on-demand TRIM operation for all of the + free space in a pool. This operation informs the underlying storage devices + of all blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.

+

A manual on-demand TRIM operation can be initiated irrespective of the autotrim pool property setting. See the documentation for the autotrim property in zpoolprops(7) for the types of vdev devices which can be trimmed.

+
+
, + --secure
+
Causes a secure TRIM to be initiated. When performing a secure TRIM, the + device guarantees that data stored on the trimmed blocks has been erased. + This requires support from the device and is not supported by all + SSDs.
+
, + --rate rate
+
Controls the rate at which the TRIM operation progresses. Without this + option TRIM is executed as quickly as possible. The rate, expressed in + bytes per second, is applied on a per-vdev basis and may be set + differently for each leaf vdev.
+
, + --cancel
+
Cancel trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no cancellation will + occur on any device.
+
, + --suspend
+
Suspend trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no suspension will + occur on any device. Trimming can then be resumed by running + zpool trim with no flags + on the relevant target devices.
+
, + --wait
+
Wait until the devices are done being trimmed before returning.
+
+
+
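A minimal sketch; the pool name tank and the device name sdb are assumptions:

  # TRIM all free space in the pool.
  zpool trim tank
  # Securely TRIM a single device and wait for it to finish.
  zpool trim -d -w tank sdb
  # Suspend the TRIM, then resume it later.
  zpool trim -s tank
  zpool trim tank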
+

+

zpoolprops(7), + zpool-initialize(8), zpool-wait(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-upgrade.8.html b/man/v2.1/8/zpool-upgrade.8.html new file mode 100644 index 000000000..b298a4488 --- /dev/null +++ b/man/v2.1/8/zpool-upgrade.8.html @@ -0,0 +1,320 @@ + + + + + + + zpool-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-upgrade.8

+
+ + + + + +
ZPOOL-UPGRADE(8)System Manager's ManualZPOOL-UPGRADE(8)
+
+
+

+

zpool-upgrade — + manage version and feature flags of ZFS storage + pools

+
+
+

+ + + + + +
zpoolupgrade
+
+ + + + + +
zpoolupgrade -v
+
+ + + + + +
zpoolupgrade [-V + version] + -a|pool
+
+
+

+
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools (subject to + the -o compatibility + property).
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by this version of ZFS. See zpool-features(7) for a description of the feature flags supported by this version of ZFS.
+
zpool upgrade + [-V version] + -a|pool
+
Enables all supported features on the given pool. +

If the pool has specified compatibility feature sets using the + -o compatibility property, + only the features present in all requested compatibility sets will be + enabled. If this property is set to legacy then no + upgrade will take place.

+

Once this is done, the pool will no longer be accessible on + systems that do not support feature flags. See + zpool-features(7) for details on compatibility with + systems that support feature flags, but do not support all features + enabled on the pool.

+
+
+
Enables all supported features (from specified compatibility sets, if + any) on all pools.
+
+ version
+
Upgrade to the specified legacy version. If specified, no features + will be enabled on the pool. This option can only be used to increase + the version number up to the last supported legacy version + number.
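As a usage sketch, a typical session first lists the pools that can be upgraded and then enables all supported features on them, subject to any compatibility property:
# zpool upgrade
# zpool upgrade -a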
+
+
+
+
+
+

+

zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zpool-history(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-wait.8.html b/man/v2.1/8/zpool-wait.8.html new file mode 100644 index 000000000..52a531b78 --- /dev/null +++ b/man/v2.1/8/zpool-wait.8.html @@ -0,0 +1,315 @@ + + + + + + + zpool-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-wait.8

+
+ + + + + +
ZPOOL-WAIT(8)System Manager's ManualZPOOL-WAIT(8)
+
+
+

+

zpool-wait — wait for activity to stop in a ZFS storage pool

+
+
+

+ + + + + +
zpool wait [-Hp] [-T u|d] [-t activity[,activity]…] pool [interval]
+
+
+

+

Waits until all background activity of the given types has ceased + in the given pool. The activity could cease because it has completed, or + because it has been paused or canceled by a user, or because the pool has + been exported or destroyed. If no activities are specified, the command + waits until background activity of every type listed below has ceased. If + there is no activity of the given types in progress, the command returns + immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
Checkpoint to be discarded
+
+
+ property to become +
+
+
All initializations to cease
+
+
All device replacements to cease
+
+
Device removal to cease
+
+
Resilver to cease
+
+
Scrub to cease
+
+
Manual trim to cease
+
+
+

If an interval is provided, the amount of + work remaining, in bytes, for each activity is printed every + interval seconds.
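For example, to block until a scrub of a pool completes, or to watch the remaining resilver work every 5 seconds (the pool name is only an example):
# zpool wait -t scrub tank
# zpool wait -t resilver tank 5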

+
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display numbers in parsable (exact) values.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
+
+

+

zpool-checkpoint(8), + zpool-initialize(8), zpool-remove(8), + zpool-replace(8), zpool-resilver(8), + zpool-scrub(8), zpool-status(8), + zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool.8.html b/man/v2.1/8/zpool.8.html new file mode 100644 index 000000000..8cda0e7a6 --- /dev/null +++ b/man/v2.1/8/zpool.8.html @@ -0,0 +1,786 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)System Manager's ManualZPOOL(8)
+
+
+

+

zpool — configure ZFS storage pools

+
+
+

+ + + + + +
zpool -?V
+
+ + + + + +
zpool version
+
+ + + + + +
zpool subcommand [arguments]
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+

For an overview of creating and managing ZFS storage pools see the + zpoolconcepts(7) manual page.

+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool -V, + --version
+
 
+
zpool version
+
Displays the software version of the zpool + userland utility and the ZFS kernel module.
+
+
+

+
+
zpool-create(8)
+
Creates a new storage pool containing the virtual devices specified on the + command line.
+
zpool-initialize(8)
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified.
+
+
+
+

+
+
zpool-destroy(8)
+
Destroys the given pool, freeing up any devices for other use.
+
zpool-labelclear(8)
+
Removes ZFS label information from the specified + device.
+
+
+
+

+
+
zpool-attach(8)/zpool-detach(8)
+
Increases or decreases redundancy by attaching or + detaching a device on an existing vdev (virtual + device).
+
zpool-add(8)/zpool-remove(8)
+
Adds the specified virtual devices to the given pool, or removes the + specified device from the pool.
+
zpool-replace(8)
+
Replaces an existing device (which may be faulted) with a new one.
+
zpool-split(8)
+
Creates a new pool by splitting all mirrors in an existing pool (which + decreases its redundancy).
+
+
+
+

+

Available pool properties are listed in the zpoolprops(7) manual page.

+
+
zpool-list(8)
+
Lists the given pools along with a health status and space usage.
+
zpool-get(8)/zpool-set(8)
+
Retrieves the given list of properties (or all properties if + is used) for + the specified storage pool(s).
+
+
+
+

+
+
zpool-status(8)
+
Displays the detailed health status for the given pools.
+
zpool-iostat(8)
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/Os + may be observed via iostat(1).
+
zpool-events(8)
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + That manual page also describes the subclasses and event payloads that can + be generated.
+
zpool-history(8)
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified.
+
+
+
+

+
+
zpool-scrub(8)
+
Begins a scrub or resumes a paused scrub.
+
zpool-checkpoint(8)
+
Checkpoints the current state of pool, which can be + later restored by zpool + import + --rewind-to-checkpoint.
+
zpool-trim(8)
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.
+
zpool-sync(8)
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified + pool(s).
+
zpool-upgrade(8)
+
Manage the on-disk format version of storage pools.
+
zpool-wait(8)
+
Waits until all background activity of the given types has ceased in the + given pool.
+
+
+
+

+
+
zpool-offline(8)/zpool-online(8)
+
Takes the specified physical device offline or brings it online.
+
zpool-resilver(8)
+
Starts a resilver. If an existing resilver is already running it will be + restarted from the beginning.
+
zpool-reopen(8)
+
Reopen all the vdevs associated with the pool.
+
zpool-clear(8)
+
Clears device errors in a pool.
+
+
+
+

+
+
zpool-import(8)
+
Make disks containing ZFS storage pools available for use on the + system.
+
zpool-export(8)
+
Exports the given pools from the system.
+
zpool-reguid(8)
+
Generates a new unique identifier for the pool.
+
+
+
+
+

+

The following exit values are returned:

+
+
+
+
Successful completion.
+
+
An error occurred.
+
+
Invalid command line options were specified.
+
+
+
+
+

+
+
: Creating a RAID-Z Storage Pool
+
The following command creates a pool with a single raidz root vdev that + consists of six disks: +
# zpool create tank raidz sda sdb sdc sdd sde sdf
+
+
: Creating a Mirrored Storage Pool
+
The following command creates a pool with two mirrors, where each mirror + contains two disks: +
# zpool + create tank + mirror sda sdb + mirror sdc sdd
+
+
: Creating a ZFS Storage Pool by Using + Partitions
+
The following command creates an unmirrored pool using two disk + partitions: +
# zpool + create tank sda1 + sdb2
+
+
: Creating a ZFS Storage Pool by Using + Files
+
The following command creates an unmirrored pool using files. While not + recommended, a pool based on files can be useful for experimental + purposes. +
# zpool + create tank /path/to/file/a + /path/to/file/b
+
+
: Adding a Mirror to a ZFS Storage + Pool
+
The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of + two-way mirrors. The additional space is immediately available to any + datasets within the pool. +
# zpool + add tank + mirror sda sdb
+
+
: Listing Available ZFS Storage Pools
+
The following command lists all available pools on the system. In this + case, the pool zion is faulted due to a missing + device. The results from this command are similar to the following: +
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
: Destroying a ZFS Storage Pool
+
The following command destroys the pool tank and any + datasets contained within: +
# zpool + destroy -f + tank
+
+
: Exporting a ZFS Storage Pool
+
The following command exports the devices in pool + tank so that they can be relocated or later + imported: +
# zpool + export tank
+
+
: Importing a ZFS Storage Pool
+
The following command displays available pools, and then imports the pool + tank for use on the system. The results from this + command are similar to the following: +
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
: Upgrading All ZFS Storage Pools to the Current + Version
+
The following command upgrades all ZFS Storage pools to the current + version of the software: +
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
: Managing Hot Spares
+
The following command creates a new pool with an available hot spare: +
# zpool create tank mirror sda sdb spare sdc
+

If one of the disks were to fail, the pool would be reduced to + the degraded state. The failed device can be replaced using the + following command:

+
# zpool + replace tank sda + sdd
+

Once the data has been resilvered, the spare is automatically + removed and is made available for use should another device fail. The + hot spare can be permanently removed from the pool using the following + command:

+
# zpool + remove tank sdc
+
+
: Creating a ZFS Pool with Mirrored Separate + Intent Logs
+
The following command creates a ZFS storage pool consisting of two, + two-way mirrors and mirrored log devices: +
# zpool create pool mirror sda sdb mirror sdc sdd log mirror sde sdf
+
+
: Adding Cache Devices to a ZFS Pool
+
The following command adds two disks for use as cache devices to a ZFS + storage pool: +
# zpool add pool cache sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take + over an hour for them to fill. Capacity and reads can be monitored using + the iostat subcommand as follows:

+
# zpool + iostat -v + pool 5
+
+
: Removing a Mirrored top-level (Log or Data) + Device
+
The following commands remove the mirrored log device mirror-2 and the mirrored top-level data device mirror-1.

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
# zpool + remove tank + mirror-2
+

The command to remove the mirrored data + mirror-1 is:

+
# zpool + remove tank + mirror-1
+
+
: Displaying expanded space on a + device
+
The following command displays the detailed information for the pool + data. This pool is comprised of a single raidz vdev + where one of its devices increased its capacity by 10GB. In this example, + the pool will not be able to utilize this extra capacity until all the + devices under the raidz vdev have been expanded. +
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
: Adding output columns
+
Additional columns can be added to the zpool + status and + zpool iostat + output with -c. +
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes + of running + .
+
+
Use ANSI color in zpool status and + zpool iostat output.
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
+
+
The maximum time in milliseconds that zpool import + will wait for an expected device to be available.
+
+
If set, suppress warning about non-native vdev ashift in + zpool status. The value is not used, only the + presence or absence of the variable matters.
+
+
Cause zpool subcommands to output vdev guids by + default. This behavior is identical to the zpool + status -g command line + option.
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the + zpool status + -L command line option.
+
+
Cause zpool subcommands to output full vdev path + names by default. This behavior is identical to the + zpool status + -P command line option.
+
+
Older OpenZFS implementations had issues when attempting to display pool + config VDEV names if a devid NVP value is present in the + pool's config. +

For example, a pool that originated on the illumos platform would have a devid value in the config, and zpool status would fail when listing the config. This would also be true for future Linux-based pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool + add by setting + ZFS_VDEV_DEVID_OPT_OUT.

+

+
+
+
Allow a privileged user to run zpool status/iostat + -c. Normally, only unprivileged users are allowed + to run -c.
+
+
The search path for scripts when running zpool + status/iostat -c. This is a colon-separated + list of directories and overrides the default + ~/.zpool.d and + /etc/zfs/zpool.d search paths.
+
+
Allow a user to run zpool status/iostat + -c. If ZPOOL_SCRIPTS_ENABLED is + not set, it is assumed that the user is allowed to run + zpool + status/iostat + -c.
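As an illustration of how these environment variables are typically used (the pool name and search directory are only examples):
# ZFS_VDEV_DEVID_OPT_OUT=YES zpool import tank
# ZPOOL_SCRIPTS_PATH=/opt/zpool.d zpool status -c size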
+
+
+
+

+

+
+
+

+

zfs(4), zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zed(8), zfs(8), + zpool-add(8), zpool-attach(8), + zpool-checkpoint(8), zpool-clear(8), + zpool-create(8), zpool-destroy(8), + zpool-detach(8), zpool-events(8), + zpool-export(8), zpool-get(8), + zpool-history(8), zpool-import(8), + zpool-initialize(8), zpool-iostat(8), + zpool-labelclear(8), zpool-list(8), + zpool-offline(8), zpool-online(8), + zpool-reguid(8), zpool-remove(8), + zpool-reopen(8), zpool-replace(8), + zpool-resilver(8), zpool-scrub(8), + zpool-set(8), zpool-split(8), + zpool-status(8), zpool-sync(8), + zpool-trim(8), zpool-upgrade(8), + zpool-wait(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool_influxdb.8.html b/man/v2.1/8/zpool_influxdb.8.html new file mode 100644 index 000000000..4e5de1ec3 --- /dev/null +++ b/man/v2.1/8/zpool_influxdb.8.html @@ -0,0 +1,316 @@ + + + + + + + zpool_influxdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool_influxdb.8

+
+ + + + + +
ZPOOL_INFLUXDB(8)System Manager's ManualZPOOL_INFLUXDB(8)
+
+
+

+

zpool_influxdb — + collect ZFS pool statistics in InfluxDB line protocol + format

+
+
+

+ + + + + +
zpool_influxdb [-e|--execd] [-n|--no-histogram] [-s|--sum-histogram-buckets] [-t|--tags key=value[,key=value]…] [pool]
+
+
+

+

zpool_influxdb produces + InfluxDB-line-protocol-compatible metrics from zpools. Like the + zpool command, + zpool_influxdb reads the current pool status and + statistics. Unlike the zpool command which is + intended for humans, zpool_influxdb formats the + output in the InfluxDB line protocol. The expected use is as a plugin to a + metrics collector or aggregator, such as Telegraf.

+

By default, zpool_influxdb prints pool + metrics and status in the InfluxDB line protocol format. All pools are + printed, similar to the zpool + status command. Providing a pool name restricts the + output to the named pool.
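For example, metrics for a single pool can be printed without histograms, or with extra tags attached for the collector (the pool name and tag are only examples):
# zpool_influxdb --no-histogram tank
# zpool_influxdb --tags datacenter=mydc tank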

+
+
+

+
+
, + --execd
+
Run in daemon mode compatible with Telegraf's + execd plugin. In this mode, the pools are sampled + every time a newline appears on the standard input.
+
, + --no-histogram
+
Do not print latency and I/O size histograms. This can reduce the total + amount of data, but one should consider the value brought by the insights + that latency and I/O size distributions provide. The resulting values are + suitable for graphing with Grafana's heatmap plugin.
+
, + --sum-histogram-buckets
+
Accumulates bucket values. By default, the values are not accumulated and + the raw data appears as shown by zpool + iostat. This works well for Grafana's heatmap + plugin. Summing the buckets produces output similar to Prometheus + histograms.
+
, + --tags + key=value[,key=value]…
+
Adds specified tags to the tag set. No sanity checking is performed. See + the InfluxDB Line Protocol format documentation for details on escaping + special characters used in tags.
+
, + --help
+
Print a usage summary.
+
+
+
+

+

zpool-iostat(8), + zpool-status(8), + InfluxDB, + Telegraf, + Grafana, + Prometheus

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zstream.8.html b/man/v2.1/8/zstream.8.html new file mode 100644 index 000000000..e448bd3c9 --- /dev/null +++ b/man/v2.1/8/zstream.8.html @@ -0,0 +1,328 @@ + + + + + + + zstream.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zstream.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstream dump [-Cvd] [file]
+
+ + + + + +
zstream redup [-v] file
+
+ + + + + +
zstream token resume_token
+
+
+

+

The zstream utility manipulates ZFS send streams output by the zfs send command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.
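For example, a send stream can be inspected on the fly without first saving it to a file (the dataset and snapshot names are only examples):
# zfs send tank/fs@snap | zstream dump -v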

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8)

+
+
+ + + + + +
May 8, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zstreamdump.8.html b/man/v2.1/8/zstreamdump.8.html new file mode 100644 index 000000000..1119dd79b --- /dev/null +++ b/man/v2.1/8/zstreamdump.8.html @@ -0,0 +1,328 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstream dump [-Cvd] [file]
+
+ + + + + +
zstream redup [-v] file
+
+ + + + + +
zstream token resume_token
+
+
+

+

The zstream utility manipulates ZFS send streams output by the zfs send command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8)

+
+
+ + + + + +
May 8, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/index.html b/man/v2.1/index.html new file mode 100644 index 000000000..97431502a --- /dev/null +++ b/man/v2.1/index.html @@ -0,0 +1,147 @@ + + + + + + + v2.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/man/v2.2/1/arcstat.1.html b/man/v2.2/1/arcstat.1.html new file mode 100644 index 000000000..f63b5c25b --- /dev/null +++ b/man/v2.2/1/arcstat.1.html @@ -0,0 +1,411 @@ + + + + + + + arcstat.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

arcstat.1

+
+ + + + + +
ARCSTAT(1)General Commands ManualARCSTAT(1)
+
+
+

+

arcstat — report ZFS ARC and L2ARC statistics

+
+
+

+ + + + + +
arcstat [-havxp] [-f field[,field…]] [-o file] [-s string] [interval] [count]
+
+
+

+

arcstat prints various ZFS ARC and L2ARC + statistics in vmstat-like fashion:

+
+
+
+
ARC target size
+
+
Demand hit percentage
+
+
Demand I/O hit percentage
+
+
Demand miss percentage
+
+
Demand data hit percentage
+
+
Demand data I/O hit percentage
+
+
Demand data miss percentage
+
+
Demand metadata hit percentage
+
+
Demand metadata I/O hit percentage
+
+
Demand metadata miss percentage
+
+
MFU list hits per second
+
+
Metadata hit percentage
+
+
Metadata I/O hit percentage
+
+
Metadata miss percentage
+
+
MRU list hits per second
+
+
Prefetch hits percentage
+
+
Prefetch I/O hits percentage
+
+
Prefetch miss percentage
+
+
Prefetch data hits percentage
+
+
Prefetch data I/O hits percentage
+
+
Prefetch data miss percentage
+
+
Prefetch metadata hits percentage
+
+
Prefetch metadata I/O hits percentage
+
+
Prefetch metadata miss percentage
+
+
Demand hits per second
+
+
Demand I/O hits per second
+
+
Demand misses per second
+
+
Demand data hits per second
+
+
Demand data I/O hits per second
+
+
Demand data misses per second
+
+
Demand metadata hits per second
+
+
Demand metadata I/O hits per second
+
+
Demand metadata misses per second
+
+
ARC hit percentage
+
+
ARC hits per second
+
+
ARC I/O hits percentage
+
+
ARC I/O hits per second
+
+
MFU ghost list hits per second
+
+
Metadata hits per second
+
+
Metadata I/O hits per second
+
+
ARC misses per second
+
+
Metadata misses per second
+
+
MRU ghost list hits per second
+
+
Prefetch hits per second
+
+
Prefetch I/O hits per second
+
+
Prefetch misses per second
+
+
Prefetch data hits per second
+
+
Prefetch data I/O hits per second
+
+
Prefetch data misses per second
+
+
Prefetch metadata hits per second
+
+
Prefetch metadata I/O hits per second
+
+
Prefetch metadata misses per second
+
+
Total ARC accesses per second
+
+
Current time
+
+
ARC size
+
+
Alias for size
+
+
Uncached list hits per second
+
+
Demand accesses per second
+
+
Demand data accesses per second
+
+
Demand metadata accesses per second
+
+
evict_skip per second
+
+
ARC miss percentage
+
+
Metadata accesses per second
+
+
Prefetch accesses per second
+
+
Prefetch data accesses per second
+
+
Prefetch metadata accesses per second
+
+
L2ARC access hit percentage
+
+
L2ARC hits per second
+
+
L2ARC misses per second
+
+
Total L2ARC accesses per second
+
+
L2ARC prefetch allocated size per second
+
+
L2ARC prefetch allocated size percentage
+
+
L2ARC MFU allocated size per second
+
+
L2ARC MFU allocated size percentage
+
+
L2ARC MRU allocated size per second
+
+
L2ARC MRU allocated size percentage
+
+
L2ARC data (buf content) allocated size per second
+
+
L2ARC data (buf content) allocated size percentage
+
+
L2ARC metadata (buf content) allocated size per second
+
+
L2ARC metadata (buf content) allocated size percentage
+
+
Size of the L2ARC
+
+
mutex_miss per second
+
+
Bytes read per second from the L2ARC
+
+
L2ARC access miss percentage
+
+
Actual (compressed) size of the L2ARC
+
+
ARC grow disabled
+
+
ARC reclaim needed
+
+
The ARC's idea of how much free memory there is, which includes evictable + memory in the page cache. Since the ARC tries to keep + avail above zero, avail is usually + more instructive to observe than free.
+
+
The ARC's idea of how much free memory is available to it, which is a bit + less than free. May temporarily be negative, in which + case the ARC will reduce the target size c.
+
+
+
+
+

+
+
+
Print all possible stats.
+
+
Display only specific fields. See + DESCRIPTION for supported + statistics.
+
+
Display help message.
+
+
Report statistics to a file instead of the standard output.
+
+
Disable auto-scaling of numerical fields (for raw, machine-parsable + values).
+
+
Display data with a specified separator (default: 2 spaces).
+
+
Print extended stats (same as -f + time,mfu,mru,mfug,mrug,eskip,mtxmis,dread,pread,read).
+
+
Show field headers and definitions
+
+
+
+

+

The following operands are supported:

+
+
+
interval
+
Specify the sampling interval in seconds.
+
count
+
Display only count reports.
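For example, to print a custom set of fields once per second for ten samples, or to log the extended statistics to a file every 5 seconds (the field list and output path are only examples):
# arcstat -f time,read,hit%,miss%,arcsz,c 1 10
# arcstat -x -o /var/tmp/arcstat.log 5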
+
+
+
+
+ + + + + +
December 23, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/cstyle.1.html b/man/v2.2/1/cstyle.1.html new file mode 100644 index 000000000..667362f13 --- /dev/null +++ b/man/v2.2/1/cstyle.1.html @@ -0,0 +1,293 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
CSTYLE(1)General Commands ManualCSTYLE(1)
+
+
+

+

cstyle — check for some common stylistic errors in C source files

+
+
+

+ + + + + +
cstyle [-chpvCP] [file]…
+
+
+

+

cstyle inspects C source files (*.c and *.h) for common stylistic errors. It attempts to check for the cstyle documented in http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. Note that there is much in that document that cannot be checked for; just because your code is cstyle-clean does not mean that you've followed Sun's C style. Caveat emptor.
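For example, a source file can be checked with the pickier and POSIX-type checks enabled and verbose output (the file path is only an example):
% cstyle -p -P -v module/zfs/arc.c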

+
+
+

+
+
+
Check continuation line indentation inside of functions. Sun's C style + states that all statements must be indented to an appropriate tab stop, + and any continuation lines after them must be indented + + four spaces from the start line. This option enables a series of checks + designed to find continuation line problems within functions only. The + checks have some limitations; see + , below.
+
+
Performs some of the more picky checks. Includes ANSI + + and + + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current + continuation block.
+
+
Check for use of non-POSIX types. Historically, types like u_int and u_long were used, but they are now deprecated in favor of the POSIX types uint_t, ulong_t, etc. This detects any use of the deprecated types. Used as part of the putback checks.
+
+
Also print GitHub-Actions-style ::error + output.
+
+
+
+

+
+
+
If set and nonempty, equivalent to -g.
+
+
+
+

+

The continuation checker is a reasonably simple state machine that + knows something about how C is laid out, and can match parenthesis, etc. + over multiple lines. It does have some limitations:

+
    +
  1. Preprocessor macros which cause unmatched parenthesis will confuse the checker for that line. To fix this, you'll need to make sure that each branch of the #if statement has balanced parenthesis.
  2. Some cpp(1) macros do not require ;s after them. Any such macros must be ALL_CAPS; any lower case letters will cause bad output. The bad output will generally be corrected after the next ;, {, or }.
Some continuation error messages deserve some additional explanation:
+
+
A multi-line statement which is not broken at statement boundaries. For + example: +
+
if (this_is_a_long_variable == another_variable) a =
+    b + c;
+
+

Will trigger this error. Instead, do:

+
+
if (this_is_a_long_variable == another_variable)
+    a = b + c;
+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example: +
+
while (do_something(&x) == 0);
+
+

Will trigger this error. Instead, do:

+
+
while (do_something(&x) == 0)
+    ;
+
+
+
+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/index.html b/man/v2.2/1/index.html new file mode 100644 index 000000000..1d9b0e4c1 --- /dev/null +++ b/man/v2.2/1/index.html @@ -0,0 +1,159 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/raidz_test.1.html b/man/v2.2/1/raidz_test.1.html new file mode 100644 index 000000000..88d37d2bd --- /dev/null +++ b/man/v2.2/1/raidz_test.1.html @@ -0,0 +1,254 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
RAIDZ_TEST(1)General Commands ManualRAIDZ_TEST(1)
+
+
+

+

raidz_test — raidz implementation verification and benchmarking tool

+
+
+

+ + + + + +
raidz_test [-StBevTD] [-a ashift] [-o zio_off_shift] [-d raidz_data_disks] [-s zio_size_shift] [-r reflow_offset]
+
+
+

+

The purpose of this tool is to run all supported raidz implementations and verify the results of all methods. It also contains a parameter sweep option where all parameters affecting a RAID-Z block are verified (like ashift size, data offset, data size, etc.). The tool also supports a benchmarking mode using the -B option.
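For example, to benchmark the available implementations, or to run a time-bounded parameter sweep (the flag combinations are only illustrative):
# raidz_test -B
# raidz_test -S -t 60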

+
+
+

+
+
+
Print a help summary.
+
+ ashift (default: + )
+
Ashift value.
+
+ zio_off_shift (default: + )
+
ZIO offset for each raidz block. The offset's value is + .
+
+ raidz_data_disks (default: + )
+
Number of raidz data disks to use. Additional disks will be used for + parity.
+
+ zio_size_shift (default: + )
+
Size of data for raidz block. The real size is + .
+
+ reflow_offset (default: + )
+
Set raidz expansion offset. The expanded raidz map allocation function + will produce different map configurations depending on this value.
+
(weep)
+
Sweep parameter space while verifying the raidz implementations. This option will exhaust most of the valid values for the -aods options. Runtime using this option will be long.
+
(imeout)
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
(enchmark)
+
All implementations are benchmarked using increasing per disk data size. + Results are given as throughput per disk, measured in MiB/s.
+
(xpansion)
+
Use expanded raidz map allocation function.
+
(erbose)
+
Increase verbosity.
+
(est + the test)
+
Debugging option: fail all tests. This is to check if tests would properly + verify bit-exactness.
+
(ebug)
+
Debugging option: attach gdb(1) when + + or + + are received.
+
+
+
+

+

ztest(1)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/test-runner.1.html b/man/v2.2/1/test-runner.1.html new file mode 100644 index 000000000..917196d2e --- /dev/null +++ b/man/v2.2/1/test-runner.1.html @@ -0,0 +1,437 @@ + + + + + + + test-runner.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

test-runner.1

+
+ + + + + +
RUN(1)General Commands ManualRUN(1)
+
+
+

+

run — find, execute, and log the results of tests

+
+
+

+ + + + + +
run [-dgq] [-o outputdir] [-pP script] [-t seconds] [-uxX username] pathname
+

+
+ + + + + +
run -w runfile [-gq] [-o outputdir] [-pP script] [-t seconds] [-uxX username] pathname
+

+
+ + + + + +
run -c runfile [-dq]
+

+
+ + + + + +
run [-h]
+
+
+

+

run command has three basic modes of + operation. With neither -c nor + -w, run processes the + arguments provided on the command line, adding them to the list for this + run. If a specified pathname is an executable file, it + is added as a test. If a specified pathname is a + directory, the behavior depends upon the presence of + -g. If -g is specified, the + directory is treated as a test group. See the section on + below. Without -g, + run simply descends into the directory looking for + executable files. The tests are then executed, and the results are + logged.

+

With -w, run finds + tests in the manner described above. Rather than executing the tests and + logging the results, the test configuration is stored in a + runfile, which can be used in future invocations, or + edited to modify which tests are executed and which options are applied. + Options included on the command line with -w become + defaults in the runfile.

+

With -c, run + parses a runfile, which can specify a series of tests + and test groups to be executed. The tests are then executed, and the results + are logged.

+
+

+

A test group is comprised of a set of executable files, all of + which exist in one directory. The options specified on the command line or + in a runfile apply to individual tests in the group. + The exception is options pertaining to pre and post scripts, which act on + all tests as a group. Rather than running before and after each test, these + scripts are run only once each at the start and end of the test group.

+
+
+

+

The specified tests run serially, and are typically assigned + results according to exit values. Tests that exit zero and non-zero are + marked + and + , + respectively. When a pre script fails for a test group, only the post script + is executed, and the remaining tests are marked + . + Any test that exceeds its timeout is terminated, and + marked + .

+

By default, tests are executed with the credentials of the + run script. Executing tests with other credentials + is done via sudo(1m), which must be configured to allow + execution without prompting for a password. Environment variables from the + calling shell are available to individual tests. During test execution, the + working directory is changed to outputdir.

+
+
+

+

By default, run will print one line on + standard output at the conclusion of each test indicating the test name, + result and elapsed time. Additionally, for each invocation of + run, a directory is created using the ISO 8601 date + format. Within this directory is a file named + + containing all the test output with timestamps, and a directory for each + test. Within the test directories, there is one file each for standard + output, standard error and merged output. The default location for the + outputdir is + /var/tmp/test_results.

+
+
+

+

The runfile is an INI-style configuration + file that describes a test run. The file has one section named + , + which contains configuration option names and their values in + + = value format. The values in + this section apply to all the subsequent sections, unless they are also + specified there, in which case the default is overridden. The remaining + section names are the absolute pathnames of files and directories, + describing tests and test groups respectively. The legal option names + are:

+
+
+ = pathname
+
The name of the directory that holds test logs.
+
+ = script
+
Run script prior to the test or test group.
+
+ = username
+
Execute the pre script as username.
+
+ = script
+
Run script after the test or test group.
+
+ = username
+
Execute the post script as username.
+
+ = + True|
+
If True, only the results summary is printed to standard + out.
+
+ = ['filename', + ]
+
Specify a list of filenames for this test group. + Only the basename of the absolute path is required. This option is only + valid for test groups, and each filename must be + single quoted.
+
+ = n
+
A timeout value of n seconds.
+
+ = username
+
Execute the test or test group as username.
+
+
+
+
+

+
+
+ runfile
+
Specify a runfile to be consumed by the run + command.
+
+
Dry run mode. Execute no tests, but print a description of each test that + would have been run.
+
+
Enable kmemleak reporting (Linux only)
+
+
Create test groups from any directories found while searching for + tests.
+
+ outputdir
+
Specify the directory in which to write test results.
+
+ script
+
Run script prior to any test or test group.
+
+ script
+
Run script after any test or test group.
+
+
Print only the results summary to the standard output.
+
+ script
+
Run script as a failsafe after any test is + killed.
+
+ username
+
Execute the failsafe script as username.
+
+ n
+
Specify a timeout value of n seconds per test.
+
+ username
+
Execute tests or test groups as username.
+
+ runfile
+
Specify the name of the runfile to create.
+
+ username
+
Execute the pre script as username.
+
+ username
+
Execute the post script as username.
+
+
+
+

+
+
: Running ad-hoc tests.
+
This example demonstrates the simplest invocation of + run. +
+
% run my-tests
+Test: /home/jkennedy/my-tests/test-01                    [00:02] [PASS]
+Test: /home/jkennedy/my-tests/test-02                    [00:04] [PASS]
+Test: /home/jkennedy/my-tests/test-03                    [00:01] [PASS]
+
+Results Summary
+PASS       3
+
+Running Time:   00:00:07
+Percent passed: 100.0%
+Log directory:  /var/tmp/test_results/20120923T180654
+
+
+
: Creating a runfile + for future use.
+
This example demonstrates creating a runfile with + non-default options. +
+
% run -p setup -x root -g -w new-tests.run new-tests
+% cat new-tests.run
+[DEFAULT]
+pre = setup
+post_user =
+quiet = False
+user =
+timeout = 60
+post =
+pre_user = root
+outputdir = /var/tmp/test_results
+
+[/home/jkennedy/new-tests]
+tests = ['test-01', 'test-02', 'test-03']
+
+
+
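The runfile saved above can then be consumed by a later invocation, which executes the configured tests and logs the results:
% run -c new-tests.run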
+
+
+

+

sudo(1m)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/zhack.1.html b/man/v2.2/1/zhack.1.html new file mode 100644 index 000000000..41d1d43e4 --- /dev/null +++ b/man/v2.2/1/zhack.1.html @@ -0,0 +1,297 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
ZHACK(1)General Commands ManualZHACK(1)
+
+
+

+

zhack — libzpool debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+
+
+ + + + + +
zhack feature stat pool
+
+
List feature flags.
+
+ + + + + +
zhack feature enable [-d description] [-r] pool guid
+
+
Add a new feature to pool that is uniquely + identified by guid, which is specified in the same + form as a zfs(8) user property. +

The description is a short human + readable explanation of the new feature.

+

The -r flag indicates that + pool can be safely opened in read-only mode by a + system that does not understand the guid + feature.

+
+
+ + + + + +
zhack feature ref [-d|-m] pool guid
+
+
Increment the reference count of the guid feature in + pool. +

The -d flag decrements the reference + count of the guid feature in + pool instead.

+

The -m flag indicates that the + guid feature is now required to read the pool + MOS.

+
+
+ + + + + +
zhack label repair [-cu] device
+
+
Repair labels of a specified device according to + options. +

Flags may be combined to do their functions + simultaneously.

+

The -c flag repairs corrupted label + checksums

+

The -u flag restores the label on a + detached device

+

Example:

+
+ + + + + +
zhack label repair + -cu device +
+ Fix checksums and undetach a device
+
+
+
+
+

+

The following can be passed to all zhack + invocations before any subcommand:

+
+
+ cachefile
+
Read pool configuration from the + cachefile, which is + /etc/zfs/zpool.cache by default.
+
+ dir
+
Search for pool members in + dir. Can be specified more than once.
+
+
+
+

+
+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
+# zhack feature enable -d 'Predict future disk failures.' tank com.example:clairvoyance
+# zhack feature ref tank com.example:clairvoyance
+
+
+
+

+

ztest(1), zpool-features(7), + zfs(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/ztest.1.html b/man/v2.2/1/ztest.1.html new file mode 100644 index 000000000..870cd4a08 --- /dev/null +++ b/man/v2.2/1/ztest.1.html @@ -0,0 +1,386 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ZTEST(1)General Commands ManualZTEST(1)
+
+
+

+

ztest — was written by the ZFS Developers as a ZFS unit test

+
+
+

+ + + + + +
ztest [-VEG] [-v vdevs] [-s size_of_each_vdev] [-a alignment_shift] [-m mirror_copies] [-r raidz_disks/draid_disks] [-R raid_parity] [-K raid_kind] [-D draid_data] [-S draid_spares] [-C vdev_class_state] [-d datasets] [-t threads] [-g gang_block_threshold] [-i initialize_pool_i_times] [-k kill_percentage] [-p pool_name] [-T time] [-z zil_failure_rate]
+
+
+

+

ztest was written by the ZFS Developers as a ZFS unit test. The tool was developed in tandem with the ZFS functionality and was executed nightly as one of the many regression tests against the daily build. As features were added to ZFS, unit tests were also added to ztest. In addition, a separate test development team wrote and executed more functional and stress tests.

+

By default ztest runs for ten minutes and + uses block files (stored in /tmp) to create pools + rather than using physical disks. Block files afford + ztest its flexibility to play around with zpool + components without requiring large hardware configurations. However, storing + the block files in /tmp may not work for you if you + have a small tmp directory.

+

By default, ztest is non-verbose, so invoking it with no options results in it executing quietly for the duration of the run. The -V option can be used to increase the verbosity of the tool. Adding multiple -V options is allowed, and the more you add the more chatty ztest becomes.

+

After the ztest run completes, you should + notice many ztest.* files lying around. Once the run + completes you can safely remove these files. Note that you shouldn't remove + these files during a run. You can re-use these files in your next + ztest run by using the -E + option.
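For example, a later run can re-use the existing pool files and run for another two minutes (the flag combination is only illustrative):
# ztest -E -T 120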

+
+
+

+
+
, + -?, --help
+
Print a help summary.
+
, + --vdevs= (default: + )
+
Number of vdevs.
+
, + --vdev-size= (default: + )
+
Size of each vdev.
+
, + --alignment-shift= (default: + ) + (use + + for random)
+
Alignment shift used in test.
+
, + --mirror-copies= (default: + )
+
Number of mirror copies.
+
, + --raid-disks= (default: 4 + for + raidz/ + for draid)
+
Number of raidz/draid disks.
+
, + --raid-parity= (default: 1)
+
Raid parity (raidz & draid).
+
, + --raid-kind=||random + (default: random)
+
The kind of RAID config to use. With random the kind + alternates between raidz and draid.
+
, + --draid-data= (default: 4)
+
Number of data disks in a dRAID redundancy group.
+
, + --draid-spares= (default: 1)
+
Number of dRAID distributed spare disks.
+
, + --datasets= (default: + )
+
Number of datasets.
+
, + --threads= (default: + )
+
Number of threads.
+
, + --gang-block-threshold= (default: + 32K)
+
Gang block threshold.
+
, + --init-count= (default: 1)
+
Number of pool initializations.
+
, + --kill-percentage= (default: + )
+
Kill percentage.
+
, + --pool-name= (default: + )
+
Pool name.
+
, + --vdev-file-directory= (default: + /tmp)
+
File directory for vdev files.
+
, + --multi-host
+
Multi-host; simulate pool imported on remote host.
+
, + --use-existing-pool
+
Use existing pool (use existing pool instead of creating new one).
+
, + --run-time= (default: + s)
+
Total test run time.
+
, + --pass-time= (default: + s)
+
Time per pass.
+
, + --freeze-loops= (default: + )
+
Max loops in + ().
+
, + --alt-ztest=
+
Path to alternate ("older") ztest to + drive, which will be used to initialise the pool, and, a stochastic half + the time, to run the tests. The parallel lib + directory is prepended to LD_LIBRARY_PATH; i.e. + given -B + ./chroots/lenny/usr/bin/ztest, + ./chroots/lenny/usr/lib will be loaded.
+
, + --vdev-class-state=||random + (default: random)
+
The vdev allocation class state.
+
, + --option=variable=value
+
Set global variable to an unsigned 32-bit integer + value (little-endian only).
+
, + --dump-debug
+
Dump zfs_dbgmsg buffer before exiting due to an error.
+
, + --verbose
+
Verbose (use multiple times for ever more verbosity).
+
+
+
+

+

To override /tmp as your location for + block files, you can use the -f option:

+
# ztest -f /
+

To get an idea of what ztest is actually + testing try this:

+
# ztest -f / -VVV
+

Maybe you'd like to run ztest for longer? + To do so simply use the -T option and specify the + runlength in seconds like so:

+
# ztest -f / -V -T 120
+
+
+

+
+
=id
+
Use id instead of the SPL hostid to identify this host. + Intended for use with ztest, but this environment + variable will affect any utility which uses libzpool, including + zpool(8). Since the kernel is unaware of this setting, + results with utilities other than ztest are undefined.
+
=stacksize
+
Limit the default stack size to stacksize bytes for the + purpose of detecting and debugging kernel stack overflows. This value + defaults to 32K which is double the default + Linux + kernel stack size. +

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to + .

+
+
+
+
+

+

zdb(1), zfs(1), + zpool(1), spl(4)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/zvol_wait.1.html b/man/v2.2/1/zvol_wait.1.html new file mode 100644 index 000000000..78c530350 --- /dev/null +++ b/man/v2.2/1/zvol_wait.1.html @@ -0,0 +1,191 @@ + + + + + + + zvol_wait.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zvol_wait.1

+
+ + + + + +
ZVOL_WAIT(1)General Commands ManualZVOL_WAIT(1)
+
+
+

+

zvol_wait — wait for ZFS volume links to appear in /dev

+
+
+

+ + + + + +
zvol_wait
+
+
+

+

When a ZFS pool is imported, the volumes within it will appear as + block devices. As they're registered, udev(7) + asynchronously creates symlinks under /dev/zvol + using the volumes' names. zvol_wait will wait for + all those symlinks to be created before exiting.
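For example, a script that imports a pool and then uses its volumes would typically wait for the links first (the pool name is only an example):
# zpool import tank
# zvol_wait
# ls /dev/zvol/tank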

+
+
+

+

udev(7)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/4/index.html b/man/v2.2/4/index.html new file mode 100644 index 000000000..8ebe7c89a --- /dev/null +++ b/man/v2.2/4/index.html @@ -0,0 +1,149 @@ + + + + + + + Devices and Special Files (4) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Devices and Special Files (4)

+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/4/spl.4.html b/man/v2.2/4/spl.4.html new file mode 100644 index 000000000..00ae7a2d1 --- /dev/null +++ b/man/v2.2/4/spl.4.html @@ -0,0 +1,329 @@ + + + + + + + spl.4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

spl.4

+
+ + + + + +
SPL(4)Device Drivers ManualSPL(4)
+
+
+

+

spl — parameters of the SPL kernel module

+
+
+

+
+
=4 + (uint)
+
The number of threads created for the spl_kmem_cache task queue. This task + queue is responsible for allocating new slabs for use by the kmem caches. + For the majority of systems and workloads only a small number of threads + are required.
+
=0 + (uint)
+
When this is set it prevents Linux from being able to rapidly reclaim all the memory held by the kmem caches. This may be useful in circumstances where it's preferable that Linux reclaim memory from some other subsystem first. Setting this will increase the likelihood of out-of-memory events on a memory-constrained system.
+
= + (uint)
+
The preferred number of objects per slab in the cache. In general, a larger value will increase the cache's memory footprint while decreasing the time required to perform an allocation. Conversely, a smaller value will minimize the footprint and improve cache reclaim time, but individual allocations may take longer.
+
= + (64-bit) or 4 (32-bit) (uint)
+
The maximum size of a kmem cache slab in MiB. This effectively limits the + maximum cache object size to + spl_kmem_cache_max_size/spl_kmem_cache_obj_per_slab. +

Caches may not be created with objects sized larger than this limit.

+
+
= + (uint)
+
For small objects the Linux slab allocator should be used to make the most + efficient use of the memory. However, large objects are not supported by + the Linux slab and therefore the SPL implementation is preferred. This + value is used to determine the cutoff between a small and large object. +

Objects of size spl_kmem_cache_slab_limit or + smaller will be allocated using the Linux slab allocator, large objects + use the SPL allocator. A cutoff of 16K was determined to be optimal for + architectures using 4K pages.

+
+
= + (uint)
+
As a general rule + () + allocations should be small, preferably just a few pages, since they must + by physically contiguous. Therefore, a rate limited warning will be + printed to the console for any kmem_alloc() which + exceeds a reasonable threshold. +

The default warning threshold is set to eight pages but capped at 32K to accommodate systems using large pages. This value was selected to be small enough to ensure the largest allocations are quickly noticed and fixed, but large enough to avoid logging any warnings when an allocation size is larger than optimal but not a serious concern. Since this value is tunable, developers are encouraged to set it lower when testing so any new largish allocations are quickly caught. These warnings may be disabled by setting the threshold to zero.

+
+
=KMALLOC_MAX_SIZE/4 + (uint)
+
Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE. Allocations which are marginally smaller than this limit may succeed but should still be avoided due to the expense of locating a contiguous range of free pages. Therefore, a maximum kmem size with a reasonable safety margin of 4x is set. kmem_alloc() allocations larger than this maximum will quickly fail. vmem_alloc() allocations less than or equal to this value will use kmalloc(), but shift to vmalloc() when exceeding this value.
+
=0 + (uint)
+
Cache magazines are an optimization designed to minimize the cost of + allocating memory. They do this by keeping a per-cpu cache of recently + freed objects, which can then be reallocated without taking a lock. This + can improve performance on highly contended caches. However, because + objects in magazines will prevent otherwise empty slabs from being + immediately released this may not be ideal for low memory machines. +

For this reason, + spl_kmem_cache_magazine_size can be used to set a + maximum magazine size. When this value is set to 0 the magazine size + will be automatically determined based on the object size. Otherwise + magazines will be limited to 2-256 objects per magazine (i.e per cpu). + Magazines may never be entirely disabled in this implementation.

+
+
=0 + (ulong)
+
The system hostid, when set this can be used to uniquely identify a + system. By default this value is set to zero which indicates the hostid is + disabled. It can be explicitly enabled by placing a unique non-zero value + in /etc/hostid.
+
=/etc/hostid + (charp)
+
The expected path to locate the system hostid when specified. This value + may be overridden for non-standard configurations.
+
=0 + (uint)
+
Cause a kernel panic on assertion failures. When not enabled, the thread + is halted to facilitate further debugging. +

Set to a non-zero value to enable.

+
+
=0 + (uint)
+
Kick stuck taskq to spawn threads. When writing a non-zero value to it, it + will scan all the taskqs. If any of them have a pending task more than 5 + seconds old, it will kick it to spawn more threads. This can be used if + you find a rare deadlock occurs because one or more taskqs didn't spawn a + thread when it should.
+
=0 + (int)
+
Bind taskq threads to specific CPUs. When enabled all taskq threads will + be distributed evenly across the available CPUs. By default, this behavior + is disabled to allow the Linux scheduler the maximum flexibility to + determine where a thread should run.
+
=1 + (int)
+
Allow dynamic taskqs. When enabled taskqs which set the + + flag will by default create only a single thread. New threads will be + created on demand up to a maximum allowed number to facilitate the + completion of outstanding tasks. Threads which are no longer needed will + be promptly destroyed. By default this behavior is enabled but it can be + disabled to aid performance analysis or troubleshooting.
+
=1 + (int)
+
Allow newly created taskq threads to set a non-default scheduler priority. + When enabled, the priority specified when a taskq is created will be + applied to all threads created by that taskq. When disabled all threads + will use the default Linux kernel thread priority. By default, this + behavior is enabled.
+
=4 + (int)
+
The number of items a taskq worker thread must handle without interruption before requesting a new worker thread be spawned. This is used to control how quickly taskqs ramp up the number of threads processing the queue. Because Linux thread creation and destruction are relatively inexpensive, a small default value has been selected. This means that normally threads will be created aggressively, which is desirable. Increasing this value will result in a slower thread creation rate, which may be preferable for some configurations.
+
= + (uint)
+
The maximum number of tasks per pending list in each taskq shown in /proc/spl/taskq{,-all}. Write 0 to turn off the limit. The proc file walks the lists with a lock held, so reading it could cause a lock-up if a list grows too large without limiting the output. "(truncated)" will be shown if the list is larger than the limit.
+
= + (uint)
+
(Linux-only) How long a taskq has to have had no work before we tear it + down. Previously, we would tear down a dynamic taskq worker as soon as we + noticed it had no work, but it was observed that this led to a lot of + churn in tearing down things we then immediately spawned anew. In + practice, it seems any nonzero value will remove the vast majority of this + churn, while the nontrivially larger value was chosen to help filter out + the little remaining churn on a mostly idle system. Setting this value to + 0 will revert to the previous behavior.
+
+
+
+ + + + + +
August 24, 2020    Debian
+
+ + +
+
+ +
+
+
+
\ No newline at end of file
diff --git a/man/v2.2/4/zfs.4.html b/man/v2.2/4/zfs.4.html
new file mode 100644
index 000000000..406b9f62b
--- /dev/null
+++ b/man/v2.2/4/zfs.4.html
@@ -0,0 +1,2680 @@
+ zfs.4 — OpenZFS documentation
+ + +
+ +
+
+ +
+
+ +
+

zfs.4

+
+ + + + + +
ZFS(4)    Device Drivers Manual    ZFS(4)
+
+
+

+

zfs - tuning of the ZFS kernel module

+
+
+

+

The ZFS module supports these parameters:

+
+
=UINT64_MAXB + (u64)
+
Maximum size in bytes of the dbuf cache. The target size is the smaller of this value and 1/2^dbuf_cache_shift (1/32nd) of the target ARC size. The behavior of the dbuf cache and its associated settings can be observed via the /proc/spl/kstat/zfs/dbufstats kstat.
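For example, the current and target dbuf cache sizes can be inspected through the kstat mentioned above; the grep pattern is only a convenience and assumes the usual cache_* field naming:

    grep -i cache /proc/spl/kstat/zfs/dbufstats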
+
=UINT64_MAXB + (u64)
+
Maximum size in bytes of the metadata dbuf cache. The target size is the smaller of this value and 1/2^dbuf_metadata_cache_shift (1/64th) of the target ARC size. The behavior of the metadata dbuf cache and its associated settings can be observed via the /proc/spl/kstat/zfs/dbufstats kstat.
+
=10% + (uint)
+
The percentage over dbuf_cache_max_bytes when dbufs must + be evicted directly.
+
=10% + (uint)
+
The percentage below dbuf_cache_max_bytes when the evict + thread stops evicting dbufs.
+
=5 + (uint)
+
Set the size of the dbuf cache (dbuf_cache_max_bytes) to + a log2 fraction of the target ARC size.
+
= + (uint)
+
Set the size of the dbuf metadata cache + (dbuf_metadata_cache_max_bytes) to a log2 fraction of + the target ARC size.
+
=0 + (uint)
+
Set the size of the mutex array for the dbuf cache. When set to + 0 the array is dynamically sized based on total system + memory.
+
=7 + (128) (uint)
+
dnode slots allocated in a single operation as a power of 2. The default + value minimizes lock contention for the bulk operation performed.
+
=134217728B + (128 MiB) (uint)
+
Limit the amount we can prefetch with one call to this amount in bytes. + This helps to limit the amount of memory that can be used by + prefetching.
+
+ (int)
+
Alias for send_holes_without_birth_time.
+
=1|0 + (int)
+
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set + as fast as possible.
+
=200 + (u64)
+
Min feed interval in milliseconds. Requires + l2arc_feed_again=1 and only + applicable in related situations.
+
=1 + (u64)
+
Seconds between L2ARC writing.
+
=2 + (u64)
+
How far through the ARC lists to search for L2ARC cacheable content, + expressed as a multiplier of l2arc_write_max. ARC + persistence across reboots can be achieved with persistent L2ARC by + setting this parameter to 0, allowing the full length of + ARC lists to be searched for cacheable content.
+
=200% + (u64)
+
Scales l2arc_headroom by this percentage when L2ARC + contents are being successfully compressed before writing. A value of + 100 disables this feature.
+
=0|1 + (int)
+
Controls whether buffers present on special vdevs are eligible for caching + into L2ARC. If set to 1, exclude dbufs on special vdevs from being cached + to L2ARC.
+
=0|1 + (int)
+
Controls whether only MFU metadata and data are cached from ARC into + L2ARC. This may be desired to avoid wasting space on L2ARC when + reading/writing large amounts of data that are not expected to be accessed + more than once. +

The default is off, meaning both MRU and MFU data and metadata + are cached. When turning off this feature, some MRU buffers will still + be present in ARC and eventually cached on L2ARC. + If + l2arc_noprefetch=0, some prefetched + buffers will be cached to L2ARC, and those might later transition to + MRU, in which case the l2arc_mru_asize + arcstat will not be 0.

+

Regardless of l2arc_noprefetch, some MFU + buffers might be evicted from ARC, accessed later on as prefetches and + transition to MRU as prefetches. If accessed again they are counted as + MRU and the l2arc_mru_asize arcstat + will not be 0.

+

The ARC status of L2ARC buffers when they + were first cached in L2ARC can be seen in the + l2arc_mru_asize, + , + and + + arcstats when importing the pool or onlining a cache device if + persistent L2ARC is enabled.

+

The + + arcstat does not take into account if this option is enabled as the + information provided by the + + arcstats can be used to decide if toggling this option is appropriate + for the current workload.

+
+
=% + (uint)
+
Percent of ARC size allowed for L2ARC-only headers. Since L2ARC buffers + are not evicted on memory pressure, too many headers on a system with an + irrationally large L2ARC can render it slow or unusable. This parameter + limits L2ARC writes and rebuilds to achieve the target.
+
=0% + (u64)
+
Trims ahead of the current write size (l2arc_write_max) on L2ARC devices by this percentage of write size if we have filled the device. If set to 100, we TRIM twice the space required to accommodate upcoming writes. A minimum of 64 MiB will be trimmed. It also enables TRIM of the whole L2ARC device upon creation or addition to an existing pool or if the header of the device is invalid upon importing a pool or onlining a cache device. A value of 0 disables TRIM on L2ARC altogether and is the default as it can put significant stress on the underlying storage devices. This will vary depending on how well the specific device handles these commands.
+
=1|0 + (int)
+
Do not write buffers to L2ARC if they were prefetched but not used by applications. In case there are prefetched buffers in L2ARC and this option is later set, we do not read the prefetched buffers from L2ARC. Unsetting this option is useful for caching sequential reads from the disks to L2ARC and serving those reads from L2ARC later on. This may be beneficial in case the L2ARC device is significantly faster in sequential reads than the disks of the pool.

Use 1 to disable and 0 to + enable caching/reading prefetches to/from L2ARC.

+
+
=0|1 + (int)
+
No reads during writes.
+
=8388608B + (8 MiB) (u64)
+
Cold L2ARC devices will have l2arc_write_max increased + by this amount while they remain cold.
+
=8388608B + (8 MiB) (u64)
+
Max write bytes per interval.
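A hedged sketch of raising the L2ARC feed rate persistently via a modprobe options file; the file name, the 32 MiB/64 MiB values, and the parameter name l2arc_write_boost for the cold-device boost described above are assumptions for illustration, not recommendations:

    # /etc/modprobe.d/zfs.conf
    options zfs l2arc_write_max=33554432 l2arc_write_boost=67108864

The same parameters can normally also be changed at runtime by writing to /sys/module/zfs/parameters/l2arc_write_max.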
+
=1|0 + (int)
+
Rebuild the L2ARC when importing a pool (persistent L2ARC). This can be + disabled if there are problems importing a pool or attaching an L2ARC + device (e.g. the L2ARC device is slow in reading stored log metadata, or + the metadata has become somehow fragmented/unusable).
+
=1073741824B + (1 GiB) (u64)
+
Minimum size of an L2ARC device required in order to write log blocks in it. The log blocks are used upon importing the pool to rebuild the persistent L2ARC.

For L2ARC devices less than 1 GiB, the amount + of data + () + evicts is significant compared to the amount of restored L2ARC data. In + this case, do not write log blocks in L2ARC in order not to waste + space.

+
+
=1048576B + (1 MiB) (u64)
+
Metaslab granularity, in bytes. This is roughly similar to what would be + referred to as the "stripe size" in traditional RAID arrays. In + normal operation, ZFS will try to write this amount of data to each disk + before moving on to the next top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group biasing based on their vdevs' over- or + under-utilization relative to the pool.
+
=B + (16 MiB + 1 B) (u64)
+
Make some blocks above a certain size be gang blocks. This option is used + by the test suite to facilitate testing.
+
=3% + (uint)
+
For blocks that could be forced to be a gang block (due to + metaslab_force_ganging), force this many of them to be + gang blocks.
+
=15 + (32 KiB) (int)
+
Default DDT ZAP data block size as a power of 2. Note that changing this + after creating a DDT on the pool will not affect existing DDTs, only newly + created ones.
+
=15 + (32 KiB) (int)
+
Default DDT ZAP indirect block size as a power of 2. Note that changing + this after creating a DDT on the pool will not affect existing DDTs, only + newly created ones.
+
=9 + (512 B) (int)
+
Default dnode block size as a power of 2.
+
= + (128 KiB) (int)
+
Default dnode indirect block size as a power of 2.
+
=1048576B + (1 MiB) (u64)
+
When attempting to log an output nvlist of an ioctl in the on-disk + history, the output will not be stored if it is larger than this size (in + bytes). This must be less than + + (64 MiB). This applies primarily to + () + (cf. zfs-program(8)).
+
=0|1 + (int)
+
Prevent log spacemaps from being destroyed during pool exports and + destroys.
+
=1|0 + (int)
+
Enable/disable segment-based metaslab selection.
+
=2 + (int)
+
When using segment-based metaslab selection, continue allocating from the + active metaslab until this option's worth of buckets have been + exhausted.
+
=0|1 + (int)
+
Load all metaslabs during pool import.
+
=0|1 + (int)
+
Prevent metaslabs from being unloaded.
+
=1|0 + (int)
+
Enable use of the fragmentation metric in computing metaslab weights.
+ +
Maximum distance to search forward from the last offset. Without this + limit, fragmented pools can see + + iterations and + () + becomes the performance limiting factor on high-performance storage. +

With the default setting of 16 + MiB, we typically see less than 500 iterations, + even with very fragmented ashift=9 + pools. The maximum number of iterations possible is + metaslab_df_max_search / 2^(ashift+1). With the + default setting of 16 MiB this is + (with + ashift=9) or + + (with + ashift=).

+
+
=0|1 + (int)
+
If not searching forward (due to metaslab_df_max_search, + , + or + ), + this tunable controls which segment is used. If set, we will use the + largest free segment. If unset, we will use a segment of at least the + requested size.
+
=s + (1 hour) (u64)
+
When we unload a metaslab, we cache the size of the largest free chunk. We + use that cached size to determine whether or not to load a metaslab for a + given allocation. As more frees accumulate in that metaslab while it's + unloaded, the cached max size becomes less and less accurate. After a + number of seconds controlled by this tunable, we stop considering the + cached max size and start considering only the histogram instead.
+
=25% + (uint)
+
When we are loading a new metaslab, we check the amount of memory being + used to store metaslab range trees. If it is over a threshold, we attempt + to unload the least recently used metaslab to prevent the system from + clogging all of its memory with range trees. This tunable sets the + percentage of total system memory that is the threshold.
+
=0|1 + (int)
+
+
    +
  • If unset, we will first try normal allocation.
  • If that fails then we will do a gang allocation.
  • If that fails then we will do a "try hard" gang allocation.
  • If that fails then we will have a multi-layer gang block.
 
  • If set, we will first try normal allocation.
  • If that fails then we will do a "try hard" allocation.
  • If that fails we will do a gang allocation.
  • If that fails we will do a "try hard" gang allocation.
  • If that fails then we will have a multi-layer gang block.
+
+
=100 + (uint)
+
When not trying hard, we only consider this number of the best metaslabs. + This improves performance, especially when there are many metaslabs per + vdev and the allocation can't actually be satisfied (so we would otherwise + iterate all metaslabs).
+
=200 + (uint)
+
When a vdev is added, target this number of metaslabs per top-level + vdev.
+
= + (512 MiB) (uint)
+
Default lower limit for metaslab size.
+
= + (16 GiB) (uint)
+
Default upper limit for metaslab size.
+
= + (uint)
+
Maximum ashift used when optimizing for logical → physical sector + size on new top-level vdevs. May be increased up to + + (16), but this may negatively impact pool space efficiency.
+
= + (9) (uint)
+
Minimum ashift used when creating new top-level vdevs.
+
=16 + (uint)
+
Minimum number of metaslabs to create in a top-level vdev.
+
=0|1 + (int)
+
Skip label validation steps during pool import. Changing is not + recommended unless you know what you're doing and are recovering a damaged + label.
+
=131072 + (128k) (uint)
+
Practical upper limit of total metaslabs per top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group preloading.
+
=10 + (uint)
+
Maximum number of metaslabs per group to preload.
+
=50 + (uint)
+
Percentage of CPUs to run a metaslab preload taskq.
+
=1|0 + (int)
+
Give more weight to metaslabs with lower LBAs, assuming they have greater + bandwidth, as is typically the case on a modern constant angular velocity + disk drive.
+
=32 + (uint)
+
After a metaslab is used, we keep it loaded for this many TXGs, to attempt + to reduce unnecessary reloading. Note that both this many TXGs and + metaslab_unload_delay_ms milliseconds must pass before + unloading will occur.
+
=600000ms + (10 min) (uint)
+
After a metaslab is used, we keep it loaded for this many milliseconds, to attempt to reduce unnecessary reloading. Note that both this many milliseconds and metaslab_unload_delay TXGs must pass before unloading will occur.
+
=3 + (uint)
+
Maximum reference holders being tracked when reference_tracking_enable is + active.
+
=0|1 + (int)
+
Track reference holders to + + objects (debug builds only).
+
=1|0 + (int)
+
When set, the hole_birth optimization will not be used, + and all holes will always be sent during a zfs + send. This is useful if you suspect your datasets + are affected by a bug in hole_birth.
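A minimal sketch of applying this workaround before an incremental send; the dataset, snapshot, and output file names are placeholders:

    # ensure the hole_birth optimization is not used for subsequent sends
    echo 1 > /sys/module/zfs/parameters/send_holes_without_birth_time
    zfs send -i tank/fs@old tank/fs@new > /backup/fs-incr.zstream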
+
=/etc/zfs/zpool.cache + (charp)
+
SPA config file.
+
= + (uint)
+
Multiplication factor used to estimate actual disk consumption from the + size of data being written. The default value is a worst case estimate, + but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits.
+
=0|1 + (int)
+
Whether to print the vdev tree in the debugging message buffer during pool + import.
+
=1|0 + (int)
+
Whether to traverse data blocks during an "extreme rewind" + (-X) import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal skips non-metadata blocks. It can be toggled once the import + has started to stop or start the traversal of non-metadata blocks.

+
+
=1|0 + (int)
+
Whether to traverse blocks during an "extreme rewind" + (-X) pool import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal is not performed. It can be toggled once the import has + started to stop or start the traversal.

+
+
=4 + (1/16th) (uint)
+
Sets the maximum number of bytes to consume during pool import to the log2 + fraction of the target ARC size.
+
=5 + (1/32nd) (int)
+
Normally, we don't allow the last + + () + of space in the pool to be consumed. This ensures that we don't run the + pool completely out of space, due to unaccounted changes (e.g. to the + MOS). It also limits the worst-case time to allocate space. If we have + less than this amount of free space, most ZPL operations (e.g. write, + create) will return + .
+
=0 + (uint)
+
Limits the number of on-disk error log entries that will be converted to + the new format when enabling the + + feature. The default is to convert all log entries.
+
=32768B + (32 KiB) (uint)
+
During top-level vdev removal, chunks of data are copied from the vdev + which may include free space in order to trade bandwidth for IOPS. This + parameter determines the maximum span of free space, in bytes, which will + be included as "unnecessary" data in a chunk of copied data. +

The default value here was chosen to align with + zfs_vdev_read_gap_limit, which is a similar concept + when doing regular reads (but there's no reason it has to be the + same).

+
+
=9 + (512 B) (u64)
+
Logical ashift for file-based devices.
+
=9 + (512 B) (u64)
+
Physical ashift for file-based devices.
+
=1|0 + (int)
+
If set, when we start iterating over a ZAP object, prefetch the entire + object (all leaf blocks). However, this is limited by + dmu_prefetch_max.
+
=131072B + (128 KiB) (int)
+
Maximum micro ZAP size. A micro ZAP is upgraded to a fat ZAP, once it + grows beyond the specified size.
+
=4194304B + (4 MiB) (uint)
+
Min bytes to prefetch per stream. Prefetch distance starts from the demand access size and quickly grows to this value, doubling on each hit. After that it may grow further by 1/8 per hit, but only if some prefetches since the last time haven't completed in time to satisfy the demand request, i.e. the prefetch depth didn't cover the read latency or the pool got saturated.
+
=67108864B + (64 MiB) (uint)
+
Max bytes to prefetch per stream.
+
=67108864B + (64 MiB) (uint)
+
Max bytes to prefetch indirects for per stream.
+
=8 + (uint)
+
Max number of streams per zfetch (prefetch streams per file).
+
=1 + (uint)
+
Min time before an inactive prefetch stream can be reclaimed.
+
=2 + (uint)
+
Max time before an inactive prefetch stream can be deleted.
+
=1|0 + (int)
+
Enables the ARC to use scatter/gather lists. When disabled, all allocations are forced to be linear in kernel memory. Disabling can improve performance in some code paths at the expense of fragmented kernel memory.
+
=MAX_ORDER-1 + (uint)
+
Maximum number of consecutive memory pages allocated in a single block for + scatter/gather lists. +

The value of MAX_ORDER depends on kernel + configuration.

+
+
=B + (1.5 KiB) (uint)
+
This is the minimum allocation size that will use scatter (page-based) + ABDs. Smaller allocations will use linear ABDs.
+
=0B + (u64)
+
When the number of bytes consumed by dnodes in the ARC exceeds this number of bytes, try to unpin some of it in response to demand for non-metadata. This value acts as a ceiling on the amount of dnode metadata, and defaults to 0, which indicates that the limit is instead derived from zfs_arc_dnode_limit_percent of the ARC meta buffers that may be used for dnodes.
+
=10% + (u64)
+
Percentage that can be consumed by dnodes of ARC meta buffers. +

See also zfs_arc_dnode_limit, which serves a + similar purpose but has a higher priority if nonzero.

+
+
=10% + (u64)
+
Percentage of ARC dnodes to try to scan in response to demand for + non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit.
+
=B + (8 KiB) (uint)
+
The ARC's buffer hash table is sized based on the assumption of an average + block size of this value. This works out to roughly 1 MiB of hash table + per 1 GiB of physical memory with 8-byte pointers. For configurations with + a known larger average block size, this value can be increased to reduce + the memory footprint.
+
=200% + (uint)
+
When + (), + () + waits for this percent of the requested amount of data to be evicted. For + example, by default, for every 2 KiB that's evicted, + 1 KiB of it may be "reused" by a new + allocation. Since this is above 100%, it ensures that + progress is made towards getting arc_size + under arc_c. Since this is + finite, it ensures that allocations can still happen, even during the + potentially long time that arc_size is + more than arc_c.
+
=10 + (uint)
+
Number of ARC headers to evict per sub-list before proceeding to another sub-list. This batch-style operation prevents entire sub-lists from being evicted at once but comes at a cost of additional unlocking and locking.
+
=0s + (uint)
+
If set to a non-zero value, it will replace the arc_grow_retry value with this value. The arc_grow_retry value (default 5s) is the number of seconds the ARC will wait before trying to resume growth after a memory pressure event.
+
=10% + (int)
+
Throttle I/O when free system memory drops below this percentage of total + system memory. Setting this value to 0 will disable the + throttle.
+
=0B + (u64)
+
Max size of ARC in bytes. If 0, then the max size of ARC + is determined by the amount of system memory installed. Under Linux, half + of system memory will be used as the limit. Under + FreeBSD, the larger of + all_system_memory - + 1 GiB and + + × all_system_memory will + be used as the limit. This value must be at least + 67108864B (64 MiB). +

This value can be changed dynamically, with some caveats. It + cannot be set back to 0 while running, and reducing it + below the current ARC size will not cause the ARC to shrink without + memory pressure to induce shrinking.
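A minimal sketch, assuming the conventional Linux parameter name zfs_arc_max for this tunable; the 8 GiB figure is purely illustrative:

    # cap the ARC at 8 GiB on the running system (illustrative value)
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
    # and for future module loads
    echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf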

+
+
=500 + (uint)
+
Balance between metadata and data on ghost hits. Values above 100 increase + metadata caching by proportionally reducing effect of ghost data hits on + target data/metadata rate.
+
=0B + (u64)
+
Min size of ARC in bytes. If set to + 0, + + will default to consuming the larger of 32 MiB and + all_system_memory / + 32.
+
=0ms(≡1s) + (uint)
+
Minimum time prefetched blocks are locked in the ARC.
+
=0ms(≡6s) + (uint)
+
Minimum time "prescient prefetched" blocks are locked in the + ARC. These blocks are meant to be prefetched fairly aggressively ahead of + the code that may use them.
+
=1 + (int)
+
Number of arc_prune threads. FreeBSD does not need more than one. Linux may theoretically use one per mount point up to the number of CPUs, but that was not proven to be useful.
+
=0 + (int)
+
Number of missing top-level vdevs which will be allowed during pool import + (only in read-only mode).
+
= + 0 (u64)
+
Maximum size in bytes allowed to be passed as + + for ioctls on /dev/zfs. This prevents a user from + causing the kernel to allocate an excessive amount of memory. When the + limit is exceeded, the ioctl fails with + + and a description of the error is sent to the + zfs-dbgmsg log. This parameter should not need to + be touched under normal circumstances. If 0, equivalent + to a quarter of the user-wired memory limit under + FreeBSD and to 134217728B (128 + MiB) under Linux.
+
=0 + (uint)
+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and metadata objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state, and also applies to other uses of the multilist data structure.

If 0, equivalent to the greater of the + number of online CPUs and 4.

+
+
=8 + (int)
+
The ARC size is considered to be overflowing if it exceeds the current ARC + target size (arc_c) by thresholds determined by this + parameter. Exceeding by (arc_c + >> zfs_arc_overflow_shift) + / 2 starts ARC reclamation + process. If that appears insufficient, exceeding by + (arc_c >> + zfs_arc_overflow_shift) × + blocks + new buffer allocation until the reclaim thread catches up. Started + reclamation process continues till ARC size returns below the target size. +

The default value of 8 causes the + ARC to start reclamation if it exceeds the target size by + of the + target size, and block allocations by + .

+
+
=0 + (uint)
+
If nonzero, this will update + + (default 7) with the new value.
+
=0% + (off) (uint)
+
Percent of pagecache to reclaim ARC to. +

This tunable allows the ZFS ARC to play + more nicely with the kernel's LRU pagecache. It can guarantee that the + ARC size won't collapse under scanning pressure on the pagecache, yet + still allows the ARC to be reclaimed down to + zfs_arc_min if necessary. This value is specified as + percent of pagecache size (as measured by + ), + where that percent may exceed 100. This only operates + during memory pressure/reclaim.

+
+
=10000 + (int)
+
This is a limit on how many pages the ARC shrinker makes available for + eviction in response to one page allocation attempt. Note that in + practice, the kernel's shrinker can ask us to evict up to about four times + this for one allocation attempt. +

The default limit of 10000 (in + practice, + per allocation attempt with 4 KiB pages) limits + the amount of time spent attempting to reclaim ARC memory to less than + 100 ms per allocation attempt, even with a small average compressed + block size of ~8 KiB.

+

The parameter can be set to 0 (zero) to disable the limit, and + only applies on Linux.

+
+
=0B + (u64)
+
The target number of bytes the ARC should leave as free memory on the + system. If zero, equivalent to the bigger of 512 KiB + and + .
+
=1|0 + (int)
+
Disable pool import at module load by ignoring the cache file + (spa_config_path).
+
=20/s + (uint)
+
Rate limit checksum events to this many per second. Note that this should + not be set below the ZED thresholds (currently 10 checksums over 10 + seconds) or else the daemon may not trigger any action.
+
=5% + (uint)
+
This controls the amount of time that a ZIL block (lwb) will remain + "open" when it isn't "full", and it has a thread + waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly + impacting the latency of each individual transaction record (itx).
+
=0ms + (int)
+
Vdev indirection layer (used for device removal) sleeps for this many + milliseconds during mapping generation. Intended for use with the test + suite to throttle vdev removal speed.
+
=25% + (uint)
+
Minimum percent of obsolete bytes in vdev mapping required to attempt to + condense (see zfs_condense_indirect_vdevs_enable). + Intended for use with the test suite to facilitate triggering condensing + as needed.
+
=1|0 + (int)
+
Enable condensing indirect vdev mappings. When set, attempt to condense + indirect vdev mappings if the mapping uses more than + zfs_condense_min_mapping_bytes bytes of memory and if + the obsolete space map object uses more than + zfs_condense_max_obsolete_bytes bytes on-disk. The + condensing process is an attempt to save memory by removing obsolete + mappings.
+
=1073741824B + (1 GiB) (u64)
+
Only attempt to condense indirect vdev mappings if the on-disk size of the + obsolete space map object is greater than this number of bytes (see + zfs_condense_indirect_vdevs_enable).
+
=131072B + (128 KiB) (u64)
+
Minimum size vdev mapping to attempt to condense (see + zfs_condense_indirect_vdevs_enable).
+
=1|0 + (int)
+
Internally ZFS keeps a small log to facilitate debugging. The log is + enabled by default, and can be disabled by unsetting this option. The + contents of the log can be accessed by reading + /proc/spl/kstat/zfs/dbgmsg. Writing + 0 to the file clears the log. +

This setting does not influence debug prints due to + zfs_flags.
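For example, the log can be read and cleared through the kstat described above:

    cat /proc/spl/kstat/zfs/dbgmsg        # dump the in-kernel debug log
    echo 0 > /proc/spl/kstat/zfs/dbgmsg   # clear it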

+
+
=4194304B + (4 MiB) (uint)
+
Maximum size of the internal ZFS debug log.
+
=0 + (int)
+
Historically used for controlling what reporting was available under + /proc/spl/kstat/zfs. No effect.
+
=1|0 + (int)
+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms, or when an individual I/O + operation takes longer than zfs_deadman_ziotime_ms, then + the operation is considered to be "hung". If + zfs_deadman_enabled is set, then the deadman behavior is + invoked as described by zfs_deadman_failmode. By + default, the deadman is enabled and set to wait which + results in "hung" I/O operations only being logged. The deadman + is automatically disabled when a pool gets suspended.
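A hedged sketch of inspecting and adjusting the deadman at runtime; the failure-mode value names wait, continue, and panic are assumed here and correspond to the behaviours listed for the next parameter:

    cat /sys/module/zfs/parameters/zfs_deadman_enabled
    cat /sys/module/zfs/parameters/zfs_deadman_failmode
    # example only: re-dispatch "hung" I/O instead of merely logging it
    echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode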
+
=wait + (charp)
+
Controls the failure behavior when the deadman detects a "hung" + I/O operation. Valid values are: +
+
+
+
Wait for a "hung" operation to complete. For each + "hung" operation a "deadman" event will be posted + describing that operation.
+
+
Attempt to recover from a "hung" operation by re-dispatching + it to the I/O pipeline if possible.
+
+
Panic the system. This can be used to facilitate automatic fail-over + to a properly configured fail-over partner.
+
+
+
+
=ms + (1 min) (u64)
+
Check time in milliseconds. This defines the frequency at which we check + for hung I/O requests and potentially invoke the + zfs_deadman_failmode behavior.
+
=600000ms + (10 min) (u64)
+
Interval in milliseconds after which the deadman is triggered and also the + interval after which a pool sync operation is considered to be + "hung". Once this limit is exceeded the deadman will be invoked + every zfs_deadman_checktime_ms milliseconds until the + pool sync completes.
+
=ms + (5 min) (u64)
+
Interval in milliseconds after which the deadman is triggered and an + individual I/O operation is considered to be "hung". As long as + the operation remains "hung", the deadman will be invoked every + zfs_deadman_checktime_ms milliseconds until the + operation completes.
+
=0|1 + (int)
+
Enable prefetching dedup-ed blocks which are going to be freed.
+
=60% + (uint)
+
Start to delay each transaction once there is this amount of dirty data, + expressed as a percentage of zfs_dirty_data_max. This + value should be at least + zfs_vdev_async_write_active_max_dirty_percent. + See + ZFS TRANSACTION + DELAY.
+
=500000 + (int)
+
This controls how quickly the transaction delay approaches infinity. + Larger values cause longer delays for a given amount of dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will + smoothly handle between ten times and a tenth of this number. + See + ZFS TRANSACTION + DELAY.

+

zfs_delay_scale × zfs_dirty_data_max must be smaller than 2^64.
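As an illustrative calculation: a pool that can sustain roughly 2000 write operations per second suggests zfs_delay_scale ≈ 10^9 / 2000 = 500000, which is the default; per the rule above, that setting then smoothly handles workloads between roughly 200 and 20000 operations per second.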

+
+
=0|1 + (int)
+
Disables requirement for IVset GUIDs to be present and match when doing a + raw receive of encrypted datasets. Intended for users whose pools were + created with OpenZFS pre-release versions and now have compatibility + issues.
+
= + (4*10^8) (ulong)
+
Maximum number of uses of a single salt value before generating a new one + for encrypted datasets. The default value is also the maximum.
+
=64 + (uint)
+
Size of the znode hashtable used for holds. +

Due to the need to hold locks on objects that may not exist + yet, kernel mutexes are not created per-object and instead a hashtable + is used where collisions will result in objects waiting when there is + not actually contention on the same object.

+
+
=20/s + (int)
+
Rate limit delay and deadman zevents (which report slow I/O operations) to + this many per second.
+
=1073741824B + (1 GiB) (u64)
+
Upper-bound limit for unflushed metadata changes to be held by the log + spacemap in memory, in bytes.
+
=1000ppm + (0.1%) (u64)
+
Part of overall system memory that ZFS allows to be used for unflushed + metadata changes by the log spacemap, in millionths.
+
=131072 + (128k) (u64)
+
Describes the maximum number of log spacemap blocks allowed for each pool. + The default value means that the space in all the log spacemaps can add up + to no more than 131072 blocks (which means + 16 GiB of logical space before compression and ditto + blocks, assuming that blocksize is 128 KiB). +

This tunable is important because it involves a trade-off between import time after an unclean export and the frequency of flushing metaslabs. The higher this number is, the more log blocks we allow when the pool is active, which means that we flush metaslabs less often and thus decrease the number of I/O operations for spacemap updates per TXG. At the same time though, that means that in the event of an unclean export, there will be more log spacemap blocks for us to read, inducing overhead in the import time of the pool. The lower the number, the more flushing increases, destroying log blocks more quickly as they become obsolete, which leaves fewer blocks to be read during import after a crash.

+

Each log spacemap block existing during pool import leads to + approximately one extra logical I/O issued. This is the reason why this + tunable is exposed in terms of blocks rather than space used.

+
+
=1000 + (u64)
+
If the number of metaslabs is small and our incoming rate is high, we + could get into a situation that we are flushing all our metaslabs every + TXG. Thus we always allow at least this many log blocks.
+
=% + (u64)
+
Tunable used to determine the number of blocks that can be used for the + spacemap log, expressed as a percentage of the total number of unflushed + metaslabs in the pool.
+
=1000 + (u64)
+
Tunable limiting maximum time in TXGs any metaslab may remain unflushed. + It effectively limits maximum number of unflushed per-TXG spacemap logs + that need to be read after unclean pool export.
+ +
When enabled, files will not be asynchronously removed from the list of + pending unlinks and the space they consume will be leaked. Once this + option has been disabled and the dataset is remounted, the pending unlinks + will be processed and the freed space returned to the pool. This option is + used by the test suite.
+
= + (ulong)
+
This is used to define a large file for the purposes of deletion. Files containing more than zfs_delete_blocks will be deleted asynchronously, while smaller files are deleted synchronously. Decreasing this value will reduce the time spent in an unlink(2) system call, at the expense of a longer delay before the freed space is available. This only applies on Linux.
+
= + (int)
+
Determines the dirty space limit in bytes. Once this limit is exceeded, + new writes are halted until space frees up. This parameter takes + precedence over zfs_dirty_data_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to + , + capped at zfs_dirty_data_max_max.

+
+
= + (int)
+
Maximum allowable value of zfs_dirty_data_max, expressed + in bytes. This limit is only enforced at module load time, and will be + ignored if zfs_dirty_data_max is later changed. This + parameter takes precedence over + zfs_dirty_data_max_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to min(physical_ram/4, 4GiB), or + min(physical_ram/4, 1GiB) for 32-bit systems.
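As a worked illustration of these defaults: on a host with 64 GiB of RAM, the 10% default of zfs_dirty_data_max_percent suggests about 6.4 GiB, but zfs_dirty_data_max_max caps it at min(64 GiB / 4, 4 GiB) = 4 GiB. On Linux the effective value can be read at runtime with:

    cat /sys/module/zfs/parameters/zfs_dirty_data_max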

+
+
=25% + (uint)
+
Maximum allowable value of zfs_dirty_data_max, expressed + as a percentage of physical RAM. This limit is only enforced at module + load time, and will be ignored if zfs_dirty_data_max is + later changed. The parameter zfs_dirty_data_max_max + takes precedence over this one. See + ZFS TRANSACTION + DELAY.
+
=10% + (uint)
+
Determines the dirty space limit, expressed as a percentage of all memory. + Once this limit is exceeded, new writes are halted until space frees up. + The parameter zfs_dirty_data_max takes precedence over + this one. See + ZFS TRANSACTION DELAY. +

Subject to zfs_dirty_data_max_max.

+
+
=20% + (uint)
+
Start syncing out a transaction group if there's at least this much dirty + data (as a percentage of zfs_dirty_data_max). This + should be less than + zfs_vdev_async_write_active_min_dirty_percent.
+
= + (int)
+
The upper limit of write-transaction ZIL log data size in bytes. Write operations are throttled when approaching the limit until log data is cleared out after transaction group sync. Because of some overhead, it should be set to at least twice the size of zfs_dirty_data_max to prevent harming normal write throughput. It also should be smaller than the size of the slog device if slog is present.

Defaults to +

+
+
=% + (uint)
+
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be + preallocated for a file in order to guarantee that later writes will not + run out of space. Instead, fallocate(2) space + preallocation only checks that sufficient space is currently available in + the pool or the user's project quota allocation, and then creates a sparse + file of the requested size. The requested space is multiplied by + zfs_fallocate_reserve_percent to allow additional space + for indirect blocks and other internal metadata. Setting this to + 0 disables support for fallocate(2) + and causes it to return + .
+
=fastest + (string)
+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, + scalar, sse2, + , + avx2, + , + , + and + . + All except fastest and + scalar require instruction set extensions to be + available, and will only appear if ZFS detects that they are present at + runtime. If multiple implementations of fletcher 4 are available, the + fastest will be chosen using a micro benchmark. + Selecting scalar results in the original CPU-based + calculation being used. Selecting any option other than + fastest or + scalar results in vector instructions from the + respective CPU instruction set being used.

+
+
=1|0 + (int)
+
Enable the experimental block cloning feature. If this setting is 0, then + even if feature@block_cloning is enabled, attempts to clone blocks will + act as though the feature is disabled.
+
=fastest + (string)
+
Select a BLAKE3 implementation. +

Supported selectors are: cycle, + fastest, generic, + sse2, + , + avx2, + . + All except cycle, fastest + and generic require + instruction set extensions to be available, and will only appear if ZFS + detects that they are present at runtime. If multiple implementations of + BLAKE3 are available, the fastest will be chosen using a + micro benchmark. You can see the benchmark results by reading this + kstat file: + /proc/spl/kstat/zfs/chksum_bench.
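For example, the implementations that were detected and their measured throughput can be listed by reading the kstat mentioned above:

    cat /proc/spl/kstat/zfs/chksum_bench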

+
+
=1|0 + (int)
+
Enable/disable the processing of the free_bpobj object.
+
=UINT64_MAX + (unlimited) (u64)
+
Maximum number of blocks freed in a single TXG.
+
= + (10^5) (u64)
+
Maximum number of dedup blocks freed in a single TXG.
+
=3 + (uint)
+
Maximum asynchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum asynchronous read I/O operation active to each device. + See ZFS + I/O SCHEDULER.
+
=60% + (uint)
+
When the pool has more than this much dirty data, use + zfs_vdev_async_write_max_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
+
=30% + (uint)
+
When the pool has less than this much dirty data, use + zfs_vdev_async_write_min_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
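As a worked illustration using the defaults shown in this section (2 and 10 active operations, thresholds of 30% and 60%): with dirty data at 45% of zfs_dirty_data_max, i.e. halfway between the thresholds, the linearly interpolated limit is 2 + (10 - 2) × (45 - 30)/(60 - 30) = 6 active asynchronous writes per device.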
+
=10 + (uint)
+
Maximum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Minimum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER. +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of + 2 was chosen as a compromise. A value of + 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+
+
=1 + (uint)
+
Maximum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1000 + (uint)
+
The maximum number of I/O operations active to each device. Ideally, this + will be at least the sum of each queue's max_active. + See ZFS + I/O SCHEDULER.
+
=1000 + (uint)
+
Timeout value to wait before determining a device is missing during + import. This is helpful for transient missing paths due to links being + briefly removed and recreated in response to udev events.
+
=3 + (uint)
+
Maximum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Maximum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Minimum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Maximum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Minimum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=5 + (uint)
+
For non-interactive I/O (scrub, resilver, removal, initialize and + rebuild), the number of concurrently-active I/O operations is limited to + , + unless the vdev is "idle". When there are no interactive I/O + operations active (synchronous or otherwise), and + zfs_vdev_nia_delay operations have completed since the + last interactive operation, then the vdev is considered to be + "idle", and the number of concurrently-active non-interactive + operations is increased to zfs_*_max_active. + See ZFS + I/O SCHEDULER.
+
=5 + (uint)
+
Some HDDs tend to prioritize sequential I/O so strongly, that concurrent + random I/O latency reaches several seconds. On some HDDs this happens even + if sequential I/O operations are submitted one at a time, and so setting + zfs_*_max_active= 1 does not help. To + prevent non-interactive I/O, like scrub, from monopolizing the device, no + more than zfs_vdev_nia_credit operations can be sent + while there are outstanding incomplete interactive operations. This + enforced wait ensures the HDD services the interactive I/O within a + reasonable amount of time. See + ZFS I/O SCHEDULER.
+
=1000% + (uint)
+
Maximum number of queued allocations per top-level vdev expressed as a + percentage of zfs_vdev_async_write_max_active, which + allows the system to detect devices that are more capable of handling + allocations and to allocate more blocks to those devices. This allows for + dynamic allocation distribution when devices are imbalanced, as fuller + devices will tend to be slower than empty devices. +

Also see zio_dva_throttle_enabled.

+
+
=32 + (uint)
+
Default queue depth for each vdev IO allocator. Higher values allow for + better coalescing of sequential writes before sending them to the disk, + but can increase transaction commit times.
+
=1 + (uint)
+
Defines if the driver should retire on a given error type. The following + options may be bitwise-ored together: + + + + + + + + + + + + + + + + + + + + + + + + + +
Value   Name        Description
1       Device      No driver retries on device errors.
2       Transport   No driver retries on transport errors.
4       Driver      No driver retries on driver errors.
+
+
=s + (int)
+
Time before expiring .zfs/snapshot.
+
=0|1 + (int)
+
Allow the creation, removal, or renaming of entries in the + + directory to cause the creation, destruction, or renaming of snapshots. + When enabled, this functionality works both locally and over NFS exports + which have the + + option set.
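When this functionality is enabled, ordinary directory operations manage snapshots; a minimal sketch (the dataset mountpoint and snapshot name are placeholders):

    mkdir /tank/data/.zfs/snapshot/before-upgrade    # creates the snapshot
    rmdir /tank/data/.zfs/snapshot/before-upgrade    # destroys it again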
+
=0 + (int)
+
Set additional debugging flags. The following flags may be bitwise-ored + together: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Value   Name                         Description
1       ZFS_DEBUG_DPRINTF            Enable dprintf entries in the debug log.
*2      ZFS_DEBUG_DBUF_VERIFY        Enable extra dbuf verifications.
*4      ZFS_DEBUG_DNODE_VERIFY       Enable extra dnode verifications.
8       ZFS_DEBUG_SNAPNAMES          Enable snapshot name verification.
*16     ZFS_DEBUG_MODIFY             Check for illegally modified ARC buffers.
64      ZFS_DEBUG_ZIO_FREE           Enable verification of block frees.
128     ZFS_DEBUG_HISTOGRAM_VERIFY   Enable extra spacemap histogram verifications.
256     ZFS_DEBUG_METASLAB_VERIFY    Verify space accounting on disk matches in-memory range_trees.
512     ZFS_DEBUG_SET_ERROR          Enable SET_ERROR and dprintf entries in the debug log.
1024    ZFS_DEBUG_INDIRECT_REMAP     Verify split blocks created by device removal.
2048    ZFS_DEBUG_TRIM               Verify TRIM ranges are always within the allocatable range tree.
4096    ZFS_DEBUG_LOG_SPACEMAP       Verify that the log summary is consistent with the spacemap log and enable zfs_dbgmsgs for metaslab loading and flushing.
+ * Requires debug build.
+
=0 + (uint)
+
Enables btree verification. The following settings are cumulative:
Value   Description
1       Verify height.
2       Verify pointers from children to parent.
3       Verify element counts.
4       Verify element order. (expensive)
*5      Verify unused memory is poisoned. (expensive)
+ * Requires debug build.
+
=0|1 + (int)
+
If destroy encounters an EIO while reading metadata + (e.g. indirect blocks), space referenced by the missing metadata can not + be freed. Normally this causes the background destroy to become + "stalled", as it is unable to make forward progress. While in + this stalled state, all remaining space to free from the + error-encountering filesystem is "temporarily leaked". Set this + flag to cause it to ignore the EIO, permanently leak the + space from indirect blocks that can not be read, and continue to free + everything else that it can. +

The default "stalling" behavior is useful if the + storage partially fails (i.e. some but not all I/O operations fail), and + then later recovers. In this case, we will be able to continue pool + operations while it is partially failed, and when it recovers, we can + continue to free the space, with no leaks. Note, however, that this case + is actually fairly rare.

+

Typically pools either

+
    +
  1. fail completely (but perhaps temporarily, e.g. due to a top-level vdev + going offline), or
  2. +
  3. have localized, permanent errors (e.g. disk returns the wrong data due + to bit flip or firmware bug).
  4. +
+ In the former case, this setting does not matter because the pool will be + suspended and the sync thread will not be able to make forward progress + regardless. In the latter, because the error is permanent, the best we can + do is leak the minimum amount of space, which is what setting this flag + will do. It is therefore reasonable for this flag to normally be set, but + we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.
+
=1000ms + (1s) (uint)
+
During a zfs destroy + operation using the + + feature, a minimum of this much time will be spent working on freeing + blocks per TXG.
+
=500ms + (uint)
+
Similar to zfs_free_min_time_ms, but for cleanup of old + indirection records for removed vdevs.
+
=32768B + (32 KiB) (s64)
+
Largest data block to write to the ZIL. Larger blocks will be treated as + if the dataset being written to had the + = + property set.
+
= + (0xDEADBEEFDEADBEEE) (u64)
+
Pattern written to vdev free space by + zpool-initialize(8).
+
=1048576B + (1 MiB) (u64)
+
Size of writes used by zpool-initialize(8). This option + is used by the test suite.
+
=500000 + (5*10^5) (u64)
+
The threshold size (in block pointers) at which we create a new + sub-livelist. Larger sublists are more costly from a memory perspective + but the fewer sublists there are, the lower the cost of insertion.
+
=75% + (int)
+
If the amount of shared space between a snapshot and its clone drops below this threshold, the clone turns off the livelist and reverts to the old deletion method. This is in place because livelists no longer give us a benefit once a clone has been overwritten enough.
+
=0 + (int)
+
Incremented each time an extra ALLOC blkptr is added to a livelist entry + while it is being condensed. This option is used by the test suite to + track race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the synctask — + spa_livelist_condense_sync(). This option is used + by the test suite to trigger race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the open context condensing work in + spa_livelist_condense_cb(). This option is used by + the test suite to trigger race conditions.
+
= + (10^8) (u64)
+
The maximum execution time limit that can be set for a ZFS channel + program, specified as a number of Lua instructions.
+
= + (100 MiB) (u64)
+
The maximum memory limit that can be set for a ZFS channel program, + specified in bytes.
+
=50 + (int)
+
The maximum depth of nested datasets. This value can be tuned temporarily + to fix existing datasets that exceed the predefined limit.
+
=5 + (u64)
+
The number of past TXGs that the flushing algorithm of the log spacemap + feature uses to estimate incoming log blocks.
+
=10 + (u64)
+
Maximum number of rows allowed in the summary of the spacemap log.
+
=16777216 + (16 MiB) (uint)
+
We currently support block sizes from 512 (512 B) to 16777216 (16 MiB). The benefits of larger blocks, and thus larger I/O, need to be weighed against the cost of COWing a giant block to modify one byte. Additionally, very large blocks can have an impact on I/O latency, and also potentially on the memory allocator. Therefore, we formerly forbade creating blocks larger than 1M. Larger blocks can be created by changing this tunable, and pools with larger blocks can always be imported and used, regardless of this setting.
+
=0|1 + (int)
+
Allow datasets received with redacted send/receive to be mounted. Normally + disabled because these datasets may be missing key data.
+
=1 + (u64)
+
Minimum number of metaslabs to flush per dirty TXG.
+
=% + (uint)
+
Allow metaslabs to keep their active state as long as their fragmentation + percentage is no more than this value. An active metaslab that exceeds + this threshold will no longer keep its active status allowing better + metaslabs to be selected.
+
=% + (uint)
+
Metaslab groups are considered eligible for allocations if their + fragmentation metric (measured as a percentage) is less than or equal to + this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also + crossed this threshold.
+
=0% + (uint)
+
Defines a threshold at which metaslab groups should be eligible for + allocations. The value is expressed as a percentage of free space beyond + which a metaslab group is always eligible for allocations. If a metaslab + group's free space is less than or equal to the threshold, the allocator + will avoid allocating to that group unless all groups in the pool have + reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of + 0 disables the feature and causes all metaslab groups to + be eligible for allocations. +

This parameter allows one to deal + with pools having heavily imbalanced vdevs such as would be the case + when a new vdev has been added. Setting the threshold to a non-zero + percentage will stop allocations from being made to vdevs that aren't + filled to the specified percentage and allow lesser filled vdevs to + acquire more allocations than they otherwise would under the old + + facility.

+
+
=1|0 + (int)
+
If enabled, ZFS will place DDT data into the special allocation + class.
+
=1|0 + (int)
+
If enabled, ZFS will place user data indirect blocks into the special + allocation class.
+
=0 + (uint)
+
Historical statistics for this many latest multihost updates will be + available in + /proc/spl/kstat/zfs/pool/multihost.
+
=1000ms + (1 s) (u64)
+
Used to control the frequency of multihost writes which are performed when + the + + pool property is on. This is one of the factors used to determine the + length of the activity check during import. +

The multihost write period is + zfs_multihost_interval / + . + On average a multihost write will be issued for each leaf vdev every + zfs_multihost_interval milliseconds. In practice, the + observed period can vary with the I/O load and this observed value is + the delay which is stored in the uberblock.

+
+
=20 + (uint)
+
Used to control the duration of the activity test on import. Smaller + values of zfs_multihost_import_intervals will reduce the + import time but increase the risk of failing to detect an active pool. The + total activity check time is never allowed to drop below one second. +

On import the activity check waits a minimum amount of time + determined by zfs_multihost_interval + × + zfs_multihost_import_intervals, or the same product + computed on the host which last had the pool imported, whichever is + greater. The activity check time may be further extended if the value of + MMP delay found in the best uberblock indicates actual multihost updates + happened at longer intervals than + zfs_multihost_interval. A minimum of 100 + ms is enforced.

+

0 is equivalent to + 1.

+
+
=10 + (uint)
+
Controls the behavior of the pool when multihost write failures or delays + are detected. +

When 0, multihost write failures or delays + are ignored. The failures will still be reported to the ZED which + depending on its configuration may take action such as suspending the + pool or offlining a device.

+

Otherwise, the pool will be suspended if + zfs_multihost_fail_intervals + × + zfs_multihost_interval milliseconds pass without a + successful MMP write. This guarantees the activity test will see MMP + writes if the pool is imported. 1 is + equivalent to 2; this is necessary to prevent + the pool from being suspended due to normal, small I/O latency + variations.
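As an illustrative calculation with the defaults shown here (zfs_multihost_interval = 1000 ms, zfs_multihost_import_intervals = 20, zfs_multihost_fail_intervals = 10): an importing host waits at least 1000 ms × 20 = 20 seconds for the activity check, while an already-imported pool is suspended after 10 × 1000 ms = 10 seconds without a successful MMP write.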

+
+
=0|1 + (int)
+
Set to disable scrub I/O. This results in scrubs not actually scrubbing + data and simply doing a metadata crawl of the pool instead.
+
=0|1 + (int)
+
Set to disable block prefetching for scrubs.
+
=0|1 + (int)
+
Disable cache flush operations on disks when writing. Setting this will + cause pool corruption on power loss if a volatile out-of-order write cache + is enabled.
+
=1|0 + (int)
+
Allow no-operation writes. The occurrence of nopwrites will further depend on other pool properties (among others, the checksumming and compression algorithms).
+
=1|0 + (int)
+
Enable forcing TXG sync to find holes. When enabled forces ZFS to sync + data when + + or + + flags are used allowing holes in a file to be accurately reported. When + disabled holes will not be reported in recently dirtied files.
+
=B + (50 MiB) (int)
+
The number of bytes which should be prefetched during a pool traversal, + like zfs send or other + data crawling operations.
+
=32 + (uint)
+
The number of blocks pointed by indirect (non-L0) block which should be + prefetched during a pool traversal, like zfs + send or other data crawling operations.
+
=30% + (u64)
+
Control percentage of dirtied indirect blocks from frees allowed into one + TXG. After this threshold is crossed, additional frees will wait until the + next TXG. 0 disables this + throttle.
+
=0|1 + (int)
+
Disable predictive prefetch. Note that it leaves "prescient" + prefetch (for, e.g., zfs + send) intact. Unlike predictive prefetch, + prescient prefetch never issues I/O that ends up not being needed, so it + can't hurt performance.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for SHA256 checksums. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for gzip compression. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for AES-GCM encryption. May be unset + after the ZFS modules have been loaded to initialize the QAT hardware as + long as support is compiled in and the QAT driver is present.
+
=1048576B + (1 MiB) (u64)
+
Bytes to read per chunk.
+
=0 + (uint)
+
Historical statistics for this many latest reads will be available in + /proc/spl/kstat/zfs/pool/reads.
+
=0|1 + (int)
+
Include cache hits in read history.
+
=1048576B + (1 MiB) (u64)
+
Maximum read segment size to issue when sequentially resilvering a + top-level vdev.
+
=1|0 + (int)
+
Automatically start a pool scrub when the last active sequential resilver + completes in order to verify the checksums of all blocks which have been + resilvered. This is enabled by default and strongly recommended.
+
=67108864B + (64 MiB) (u64)
+
Maximum amount of I/O that can be concurrently issued for a sequential + resilver per leaf device, given in bytes.
+
=4096 + (int)
+
If an indirect split block contains more than this many possible unique + combinations when being reconstructed, consider it too computationally + expensive to check them all. Instead, try at most this many randomly + selected combinations each time the block is accessed. This allows all + segment copies to participate fairly in the reconstruction when all + combinations cannot be checked and prevents repeated use of one bad + copy.
+
=0|1 + (int)
+
Set to attempt to recover from fatal errors. This should only be used as a + last resort, as it typically results in leaked space, or worse.
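A hedged example of a last-resort, read-only import with this tunable enabled (assuming the upstream parameter name zfs_recover and a placeholder pool name):
    echo 1 > /sys/module/zfs/parameters/zfs_recover
    zpool import -o readonly=on tank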
+
=0|1 + (int)
+
Ignore hard I/O errors during device removal. When set, if a device + encounters a hard I/O error during the removal process the removal will + not be cancelled. This can result in a normally recoverable block becoming + permanently damaged and is hence not recommended. This should only be used + as a last resort when the pool cannot be returned to a healthy state prior + to removing the device.
+
=0|1 + (uint)
+
This is used by the test suite so that it can ensure that certain actions + happen while in the middle of a removal.
+
=16777216B + (16 MiB) (uint)
+
The largest contiguous segment that we will attempt to allocate when + removing a device. If there is a performance problem with attempting to + allocate large blocks, consider decreasing this. The default value is also + the maximum.
+
=0|1 + (int)
+
Ignore the + + feature, causing an operation that would start a resilver to immediately + restart the one in progress.
+
=ms + (3 s) (uint)
+
Resilvers are processed by the sync thread. While resilvering, it will + spend at least this much time working on a resilver between TXG + flushes.
+
=0|1 + (int)
+
If set, remove the DTL (dirty time list) upon completion of a pool scan + (scrub), even if there were unrepairable errors. Intended to be used + during pool repair or recovery to stop resilvering when the pool is next + imported.
+
=1000ms + (1 s) (uint)
+
Scrubs are processed by the sync thread. While scrubbing, it will spend at + least this much time working on a scrub between TXG flushes.
+
=4096 + (uint)
+
Error blocks to be scrubbed in one txg.
+
=s + (2 hour) (uint)
+
To preserve progress across reboots, the sequential scan algorithm + periodically needs to stop metadata scanning and issue all the + verification I/O to disk. The frequency of this flushing is determined by + this tunable.
+
=3 + (uint)
+
This tunable affects how scrub and resilver I/O segments are ordered. A + higher number indicates that we care more about how filled in a segment + is, while a lower number indicates we care more about the size of the + extent without considering the gaps within a segment. This value is only + tunable upon module insertion. Changing the value afterwards will have no + effect on scrub or resilver performance.
+
=0 + (uint)
+
Determines the order that data will be verified while scrubbing or + resilvering: +
+
+
+
Data will be verified as sequentially as possible, given the amount of + memory reserved for scrubbing (see + zfs_scan_mem_lim_fact). This may improve scrub + performance if the pool's data is very fragmented.
+
+
The largest mostly-contiguous chunk of found data will be verified + first. By deferring scrubbing of small segments, we may later find + adjacent data to coalesce and increase the segment size.
+
+
1 during normal + verification and strategy + 2 while taking a + checkpoint.
+
+
+
+
=0|1 + (int)
+
If unset, indicates that scrubs and resilvers will gather metadata in + memory before issuing sequential I/O. Otherwise indicates that the legacy + algorithm will be used, where I/O is initiated as soon as it is + discovered. Unsetting will not affect scrubs or resilvers that are already + in progress.
+
=B + (2 MiB) (int)
+
Sets the largest gap in bytes between scrub/resilver I/O operations that + will still be considered sequential for sorting purposes. Changing this + value will not affect scrubs or resilvers that are already in + progress.
+
=20^-1 + (uint)
+
Maximum fraction of RAM used for I/O sorting by sequential scan algorithm. + This tunable determines the hard limit for I/O sorting memory usage. When + the hard limit is reached we stop scanning metadata and start issuing data + verification I/O. This is done until we get below the soft limit.
+
=20^-1 + (uint)
+
The fraction of the hard limit used to determine the soft limit for I/O sorting by the sequential scan algorithm. When we cross this limit from below, no action is taken. When we cross this limit from above, it is because we are issuing verification I/O. In this case (unless the metadata scan is done) we stop issuing verification I/O and start scanning metadata again until we get to the hard limit.
+
=0|1 + (uint)
+
When reporting resilver throughput and estimated completion time, use the performance observed over roughly the last zfs_scan_report_txgs TXGs. When set to zero, performance is calculated over the time between checkpoints.
+
=0|1 + (int)
+
Enforce tight memory limits on pool scans when a sequential scan is in + progress. When disabled, the memory limit may be exceeded by fast + disks.
+
=0|1 + (int)
+
Freezes a scrub/resilver in progress without actually pausing it. Intended + for testing/debugging.
+
=16777216B + (16 MiB) (int)
+
Maximum amount of data that can be concurrently issued at once for scrubs + and resilvers per leaf device, given in bytes.
+
=0|1 + (int)
+
Allow sending of corrupt data (ignore read/checksum errors when + sending).
+
=1|0 + (int)
+
Include unmodified spill blocks in the send stream. Under certain + circumstances, previous versions of ZFS could incorrectly remove the spill + block from an existing object. Including unmodified copies of the spill + blocks creates a backwards-compatible stream which will recreate a spill + block if it was incorrectly removed.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + send internal queues. The fill fraction controls + the timing with which internal threads are woken up.
+
=1048576B + (1 MiB) (uint)
+
The maximum number of bytes allowed in zfs + send's internal queues.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + send prefetch queue. The fill fraction controls + the timing with which internal threads are woken up.
+
=16777216B + (16 MiB) (uint)
+
The maximum number of bytes allowed that will be prefetched by + zfs send. This value must + be at least twice the maximum block size in use.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + receive queue. The fill fraction controls the + timing with which internal threads are woken up.
+
=16777216B + (16 MiB) (uint)
+
The maximum number of bytes allowed in the zfs + receive queue. This value must be at least twice + the maximum block size in use.
+
=1048576B + (1 MiB) (uint)
+
The maximum amount of data, in bytes, that zfs + receive will write in one DMU transaction. This is + the uncompressed size, even when receiving a compressed send stream. This + setting will not reduce the write size below a single block. Capped at a + maximum of 32 MiB.
+
=0 + (int)
+
When this variable is set to non-zero a corrective receive: +
    +
  1. Does not enforce the restriction of source & destination snapshot GUIDs matching.
  2. If there is an error during healing, the healing receive is not terminated; instead, it moves on to the next record.
+
+
=0|1 + (uint)
+
Setting this variable overrides the default logic for estimating block + sizes when doing a zfs + send. The default heuristic is that the average + block size will be the current recordsize. Override this value if most + data in your dataset is not of that size and you require accurate zfs send + size estimates.
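For reference, an estimate can be inspected with a dry-run send before deciding whether to override the heuristic (dataset and snapshot names are placeholders):
    # -n performs a dry run, -v prints the estimated stream size
    zfs send -nv pool/dataset@snap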
+
=2 + (uint)
+
Flushing of data to disk is done in passes. Defer frees starting in this + pass.
+
=16777216B + (16 MiB) (int)
+
Maximum memory used for prefetching a checkpoint's space map on each vdev + while discarding the checkpoint.
+
=25% + (uint)
+
Only allow small data blocks to be allocated on the special and dedup vdev + types when the available free space percentage on these vdevs exceeds this + value. This ensures reserved space is available for pool metadata as the + special vdevs approach capacity.
+
=8 + (uint)
+
Starting in this sync pass, disable compression (including of metadata). + With the default setting, in practice, we don't have this many sync + passes, so this has no effect. +

The original intent was that disabling compression would help + the sync passes to converge. However, in practice, disabling compression + increases the average number of sync passes; because when we turn + compression off, many blocks' size will change, and thus we have to + re-allocate (not overwrite) them. It also increases the number of + 128 KiB allocations (e.g. for indirect blocks and + spacemaps) because these will not be compressed. The 128 + KiB allocations are especially detrimental to performance on highly + fragmented systems, which may have very few free segments of this size, + and may need to load new metaslabs to satisfy these allocations.

+
+
=2 + (uint)
+
Rewrite new block pointers starting in this pass.
+
=75% + (int)
+
This controls the number of threads used by + . + The default value of + will + create a maximum of one thread per CPU.
+
=134217728B + (128 MiB) (uint)
+
Maximum size of TRIM command. Larger ranges will be split into chunks no + larger than this value before issuing.
+
=32768B + (32 KiB) (uint)
+
Minimum size of TRIM commands. TRIM ranges smaller than this will be + skipped, unless they're part of a larger range which was chunked. This is + done because it's common for these small TRIMs to negatively impact + overall performance.
+
=0|1 + (uint)
+
Skip uninitialized metaslabs during the TRIM process. This option is + useful for pools constructed from large thinly-provisioned devices where + TRIM operations are slow. As a pool ages, an increasing fraction of the + pool's metaslabs will be initialized, progressively degrading the + usefulness of this option. This setting is stored when starting a manual + TRIM and will persist for the duration of the requested TRIM.
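A short usage sketch for starting a manual TRIM and checking its progress (pool name is a placeholder):
    zpool trim tank
    zpool status -t tank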
+
=10 + (uint)
+
Maximum number of queued TRIMs outstanding per leaf vdev. The number of + concurrent TRIM commands issued to the device is controlled by + zfs_vdev_trim_min_active and + zfs_vdev_trim_max_active.
+
=32 + (uint)
+
The number of transaction groups' worth of frees which should be + aggregated before TRIM operations are issued to the device. This setting + represents a trade-off between issuing larger, more efficient TRIM + operations and the delay before the recently trimmed space is available + for use by the device. +

Increasing this value will allow frees to be aggregated for a longer time. This will result in larger TRIM operations and potentially increased memory usage. Decreasing this value will have the opposite effect. The default of 32 was determined to be a reasonable compromise.

+
+
=0 + (uint)
+
Historical statistics for this many latest TXGs will be available in + /proc/spl/kstat/zfs/pool/TXGs.
+
=5s + (uint)
+
Flush dirty data to disk at least every this many seconds (maximum TXG + duration).
+
=1048576B + (1 MiB) (uint)
+
Max vdev I/O aggregation size.
+
=131072B + (128 KiB) (uint)
+
Max vdev I/O aggregation size for non-rotating media.
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation when an I/O operation immediately follows its predecessor on rotational vdevs, for the purpose of selecting the least busy mirror member.
+
=5 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member when an I/O + operation lacks locality as defined by + zfs_vdev_mirror_rotating_seek_offset. Operations within + this that are not immediately following the previous operation are + incremented by half.
+
=1048576B + (1 MiB) (int)
+
The maximum distance for the last queued I/O operation in which the + balancing algorithm considers an operation to have locality. + See ZFS + I/O SCHEDULER.
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/O operations do not immediately follow one + another.
+
=1 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member when an I/O + operation lacks locality as defined by the + zfs_vdev_mirror_rotating_seek_offset. Operations within + this that are not immediately following the previous operation are + incremented by half.
+
=32768B + (32 KiB) (uint)
+
Aggregate read I/O operations if the on-disk gap between them is within + this threshold.
+
=4096B + (4 KiB) (uint)
+
Aggregate write I/O operations if the on-disk gap between them is within + this threshold.
+
=fastest + (string)
+
Select the raidz parity implementation to use. +

Variants that don't depend on CPU-specific features may be + selected on module load, as they are supported on all systems. The + remaining options may only be set after the module is loaded, as they + are available only if the implementations are compiled in and supported + on the running system.

+

Once the module is loaded, + /sys/module/zfs/parameters/zfs_vdev_raidz_impl + will show the available options, with the currently selected one + enclosed in square brackets.
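For example (scalar is just one of the options listed below; an implementation must be compiled in and supported to be selectable):
    # The active implementation is shown in square brackets
    cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
    # Select a specific implementation
    echo scalar > /sys/module/zfs/parameters/zfs_vdev_raidz_impl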

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
fastestselected by built-in benchmark
originaloriginal implementation
scalarscalar implementation
sse2SSE2 instruction set64-bit x86
ssse3SSSE3 instruction set64-bit x86
avx2AVX2 instruction set64-bit x86
avx512fAVX512F instruction set64-bit x86
avx512bwAVX512F & AVX512BW instruction sets64-bit x86
aarch64_neonNEONAarch64/64-bit ARMv8
aarch64_neonx2NEON with more unrollingAarch64/64-bit ARMv8
powerpc_altivecAltivecPowerPC
+
+
+ (charp)
+
. + Prints warning to kernel log for compatibility.
+
=512 + (uint)
+
Max event queue length. Events in the queue can be viewed with + zpool-events(8).
+
=2000 + (int)
+
Maximum recent zevent records to retain for duplicate checking. Setting + this to 0 disables duplicate detection.
+
=s + (15 min) (int)
+
Lifespan for a recent ereport that was retained for duplicate + checking.
+
=1048576 + (int)
+
The maximum number of taskq entries that are allowed to be cached. When + this limit is exceeded transaction records (itxs) will be cleaned + synchronously.
+
= + (int)
+
The number of taskq entries that are pre-populated when the taskq is first + created and are immediately available for use.
+
=100% + (int)
+
This controls the number of threads used by + . + The default value of + + will create a maximum of one thread per cpu.
+
=131072B + (128 KiB) (uint)
+
This sets the maximum block size used by the ZIL. On very fragmented + pools, lowering this (typically to + ) can + improve performance.
+
=B + (7.5 KiB) (uint)
+
This sets the maximum number of write bytes logged via WR_COPIED. It tunes + a tradeoff between additional memory copy and possibly worse log space + efficiency vs additional range lock/unlock.
+
= + (u64)
+
This sets the minimum delay, in nanoseconds, that the ZIL is willing to wait before committing a block, in the hope of gathering more records. If ZIL writes arrive too quickly, the kernel may not be able to sleep for such a short interval, increasing log latency above what zfs_commit_timeout_pct allows.
+
=0|1 + (int)
+
Disable the cache flush commands that are normally sent to disk by the ZIL + after an LWB write has completed. Setting this will cause ZIL corruption + on power loss if a volatile out-of-order write cache is enabled.
+
=0|1 + (int)
+
Disable intent logging replay. Can be disabled for recovery from corrupted + ZIL.
+
=67108864B + (64 MiB) (u64)
+
Limit SLOG write size per commit executed with synchronous priority. Any + writes above that will be executed with lower (asynchronous) priority to + limit potential SLOG device abuse by single active ZIL writer.
+
=1|0 + (int)
+
Setting this tunable to zero disables ZIL logging of new + = + records if the + + feature is enabled on the pool. This would only be necessary to work + around bugs in the ZIL logging or replay code for this record type. The + tunable has no effect if the feature is disabled.
+
=64 + (uint)
+
Usually, one metaslab from each normal-class vdev is dedicated for use by + the ZIL to log synchronous writes. However, if there are fewer than + zfs_embedded_slog_min_ms metaslabs in the vdev, this + functionality is disabled. This ensures that we don't set aside an + unreasonable amount of space for the ZIL.
+
=1 + (uint)
+
Whether the heuristic for detecting incompressible data with zstd levels >= 3, using LZ4 and zstd-1 passes, is enabled.
+
=131072 + (uint)
+
Minimal uncompressed size (inclusive) of a record before the early abort + heuristic will be attempted.
+
=0|1 + (int)
+
If non-zero, the zio deadman will produce debugging messages (see + zfs_dbgmsg_enable) for all zios, rather than only for + leaf zios possessing a vdev. This is meant to be used by developers to + gain diagnostic information for hang conditions which don't involve a + mutex or other locking primitive: typically conditions in which a thread + in the zio pipeline is looping indefinitely.
+
=ms + (30 s) (int)
+
When an I/O operation takes more than this much time to complete, it's + marked as slow. Each slow operation causes a delay zevent. Slow I/O + counters can be seen with zpool + status -s.
+
=1|0 + (int)
+
Throttle block allocations in the I/O pipeline. This allows for dynamic + allocation distribution when devices are imbalanced. When enabled, the + maximum number of pending allocations per top-level vdev is limited by + zfs_vdev_queue_depth_pct.
+
=0|1 + (int)
+
Control the naming scheme used when setting new xattrs in the user + namespace. If 0 (the default on Linux), user namespace + xattr names are prefixed with the namespace, to be backwards compatible + with previous versions of ZFS on Linux. If 1 (the + default on FreeBSD), user namespace xattr names + are not prefixed, to be backwards compatible with previous versions of ZFS + on illumos and FreeBSD. +

Either naming scheme can be read on this and future versions + of ZFS, regardless of this tunable, but legacy ZFS on illumos or + FreeBSD are unable to read user namespace xattrs + written in the Linux format, and legacy versions of ZFS on Linux are + unable to read user namespace xattrs written in the legacy ZFS + format.

+

An existing xattr with the alternate naming scheme is removed + when overwriting the xattr so as to not accumulate duplicates.

+
+
=0|1 + (int)
+
Prioritize requeued I/O.
+
=% + (uint)
+
Percentage of online CPUs which will run a worker thread for I/O. These + workers are responsible for I/O work such as compression and checksum + calculations. Fractional number of CPUs will be rounded down. +

The default value of + was chosen to + avoid using all CPUs which can result in latency issues and inconsistent + application performance, especially when slower compression and/or + checksumming is enabled.

+
+
=0 + (uint)
+
Number of worker threads per taskq. Lower values improve I/O ordering and + CPU utilization, while higher reduces lock contention. +

If 0, generate a system-dependent value + close to 6 threads per taskq.

+
+
=0|1 + (uint)
+
Do not create zvol device nodes. This may slightly improve startup time on + systems with a very large number of zvols.
+
= + (uint)
+
Major number for zvol block devices.
+
= + (long)
+
Discard (TRIM) operations done on zvols will be done in batches of this + many blocks, where block size is determined by the + volblocksize property of a zvol.
+
=131072B + (128 KiB) (uint)
+
When adding a zvol to the system, prefetch this many bytes from the start + and end of the volume. Prefetching these regions of the volume is + desirable, because they are likely to be accessed immediately by + blkid(8) or the kernel partitioner.
+
=0|1 + (uint)
+
When processing I/O requests for a zvol, submit them synchronously. This + effectively limits the queue depth to 1 for each I/O + submitter. When unset, requests are handled asynchronously by a thread + pool. The number of requests which can be handled concurrently is + controlled by zvol_threads. + zvol_request_sync is ignored when running on a kernel + that supports block multiqueue (blk-mq).
+
=0 + (uint)
+
The number of system wide threads to use for processing zvol block IOs. If + 0 (the default) then internally set + zvol_threads to the number of CPUs present or 32 + (whichever is greater).
+
=0 + (uint)
+
The number of threads per zvol to use for queuing IO requests. This + parameter will only appear if your kernel supports + blk-mq and is only read and assigned to a zvol at + zvol load time. If 0 (the default) then internally set + zvol_blk_mq_threads to the number of CPUs present.
+
=0|1 + (uint)
+
Set to 1 to use the blk-mq API + for zvols. Set to 0 (the default) to use the legacy zvol + APIs. This setting can give better or worse zvol performance depending on + the workload. This parameter will only appear if your kernel supports + blk-mq and is only read and assigned to a zvol at + zvol load time.
+
=8 + (uint)
+
If zvol_use_blk_mq is enabled, then process this number of volblocksize-sized blocks per zvol thread. This tunable can be used to favor better performance for zvol reads (lower values) or writes (higher values). If set to 0, then the zvol layer will process the maximum number of blocks per thread that it can. This parameter will only appear if your kernel supports blk-mq and is only applied at each zvol's load time.
+
=0 + (uint)
+
The queue_depth value for the zvol blk-mq + interface. This parameter will only appear if your kernel supports + blk-mq and is only applied at each zvol's load + time. If 0 (the default) then use the kernel's default + queue depth. Values are clamped to the kernel's + BLKDEV_MIN_RQ and + BLKDEV_MAX_RQ/BLKDEV_DEFAULT_RQ + limits.
+
=1 + (uint)
+
Defines zvol block devices behaviour when + =: + +
+
=0|1 + (uint)
+
Enable strict ZVOL quota enforcement. The strict quota enforcement may + have a performance impact.
+
+
+
+

+

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/O operations. The scheduler determines when and in what order those + operations are issued. The scheduler divides operations into five I/O + classes, prioritized in the following order: sync read, sync write, async + read, async write, and scrub/resilver. Each queue defines the minimum and + maximum number of concurrent operations that may be issued to the device. In + addition, the device has an aggregate maximum, + zfs_vdev_max_active. Note that the sum of the per-queue + minima must not exceed the aggregate maximum. If the sum of the per-queue + maxima exceeds the aggregate maximum, then the number of active operations + may reach zfs_vdev_max_active, in which case no further + operations will be issued, regardless of whether all per-queue minima have + been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Furthermore, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been + hit, or if there are no operations queued for an I/O class that has not hit + its maximum. Every time an I/O operation is queued or an operation + completes, the scheduler looks for new operations to issue.

+

In general, smaller max_actives will lead to + lower latency of synchronous operations. Larger + max_actives may lead to higher overall throughput, + depending on underlying storage.

+

The ratio of the queues' max_actives determines + the balance of performance between reads, writes, and scrubs. For example, + increasing zfs_vdev_scrub_max_active will cause the scrub + or resilver to complete more quickly, but reads and writes to have higher + latency and lower throughput.

+

All I/O classes have a fixed maximum number of outstanding + operations, except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically, + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write operations + according to the amount of dirty data in the pool. Since both throughput and + latency typically increase with the number of concurrent operations issued + to physical devices, reducing the burstiness in the number of simultaneous + operations also stabilizes the response time of operations from other + queues, in particular synchronous ones. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there is + more dirty data in the pool.

+
+

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points:

+
+
       |              o---------| <-- zfs_vdev_async_write_max_active
+  ^    |             /^         |
+  |    |            / |         |
+active |           /  |         |
+ I/O   |          /   |         |
+count  |         /    |         |
+       |        /     |         |
+       |-------o      |         | <-- zfs_vdev_async_write_min_active
+      0|_______^______|_________|
+       0%      |      |       100% of zfs_dirty_data_max
+               |      |
+               |      `-- zfs_vdev_async_write_active_max_dirty_percent
+               `--------- zfs_vdev_async_write_active_min_dirty_percent
+
+

Until the amount of dirty data exceeds a minimum percentage of the + dirty data allowed in the pool, the I/O scheduler will limit the number of + concurrent operations to the minimum. As that threshold is crossed, the + number of concurrent operations issued increases linearly to the maximum at + the specified maximum percentage of the dirty data allowed in the pool.
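As a worked example with assumed (not default) values: if zfs_vdev_async_write_min_active = 2, zfs_vdev_async_write_max_active = 10, and the active range runs from 30% to 60% of zfs_dirty_data_max, then at 45% dirty the scheduler issues roughly 2 + (45 - 30)/(60 - 30) x (10 - 2) = 6 concurrent async writes.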

+

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it + exceeds the maximum percentage, this indicates that the rate of incoming + data is greater than the rate that the backend storage can handle. In this + case, we must further throttle incoming writes, as described in the next + section.

+
+
+
+

+

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as

+
min_time = min(zfs_delay_scale + × (dirty + - + ) / + ( + - dirty), 100ms)
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be + at or above zfs_vdev_async_write_active_max_dirty_percent, + so that we only start to delay after writing at full speed has failed to + keep up with the incoming write rate. The scale of the curve is defined by + zfs_delay_scale. Roughly speaking, this variable + determines the amount of delay at the midpoint of the curve.
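A worked example with illustrative values: assume zfs_dirty_data_max = 4 GiB, delays beginning at 60% dirty (2.4 GiB), and zfs_delay_scale = 500 000 ns. At 3.2 GiB dirty (the midpoint of the remaining range) the delay is 500 000 x (3.2 - 2.4)/(4.0 - 3.2) = 500 us; at 3.6 GiB it grows to 500 000 x (3.6 - 2.4)/(4.0 - 3.6) = 1 500 000 ns = 1.5 ms, still well under the 100 ms cap.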

+
+
delay
+ 10ms +-------------------------------------------------------------*+
+      |                                                             *|
+  9ms +                                                             *+
+      |                                                             *|
+  8ms +                                                             *+
+      |                                                            * |
+  7ms +                                                            * +
+      |                                                            * |
+  6ms +                                                            * +
+      |                                                            * |
+  5ms +                                                           *  +
+      |                                                           *  |
+  4ms +                                                           *  +
+      |                                                           *  |
+  3ms +                                                          *   +
+      |                                                          *   |
+  2ms +                                              (midpoint) *    +
+      |                                                  |    **     |
+  1ms +                                                  v ***       +
+      |             zfs_delay_scale ---------->     ********         |
+    0 +-------------------------------------*********----------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note that, since the delay is added to the outstanding time remaining on the most recent transaction, it is effectively the inverse of IOPS. Here, the midpoint of 500 us translates to 2000 IOPS. The shape of the curve was chosen such that small changes in the amount of accumulated dirty data in the first three quarters of the curve yield relatively small differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a logarithmic scale:

+
+
delay
+100ms +-------------------------------------------------------------++
+      +                                                              +
+      |                                                              |
+      +                                                             *+
+ 10ms +                                                             *+
+      +                                                           ** +
+      |                                              (midpoint)  **  |
+      +                                                  |     **    +
+  1ms +                                                  v ****      +
+      +             zfs_delay_scale ---------->        *****         +
+      |                                             ****             |
+      +                                          ****                +
+100us +                                        **                    +
+      +                                       *                      +
+      |                                      *                       |
+      +                                     *                        +
+ 10us +                                     *                        +
+      +                                                              +
+      |                                                              |
+      +                                                              +
+      +--------------------------------------------------------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the back-end storage, and then by changing the value + of zfs_delay_scale to increase the steepness of the + curve.

+
+
+ + + + + +
July 21, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/5/index.html b/man/v2.2/5/index.html new file mode 100644 index 000000000..93533a90f --- /dev/null +++ b/man/v2.2/5/index.html @@ -0,0 +1,147 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/5/vdev_id.conf.5.html b/man/v2.2/5/vdev_id.conf.5.html new file mode 100644 index 000000000..93806cb19 --- /dev/null +++ b/man/v2.2/5/vdev_id.conf.5.html @@ -0,0 +1,367 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
VDEV_ID.CONF(5)File Formats ManualVDEV_ID.CONF(5)
+
+
+

+

vdev_id.conf — + configuration file for vdev_id(8)

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of + vdev_id(8) while it is mapping a disk device name to an + alias.

+

The vdev_id.conf file uses a simple format + consisting of a keyword followed by one or more values on a single line. Any + line not beginning with a recognized keyword is ignored. Comments may + optionally begin with a hash character.

+

The following keywords and values are used.

+
+
+ name devlink
+
Maps a device link in the /dev directory hierarchy + to a new device name. The udev rule defining the device link must have run + prior to vdev_id(8). A defined alias takes precedence + over a topology-derived name, but the two naming methods can otherwise + coexist. For example, one might name drives in a JBOD with the + sas_direct topology while naming an internal L2ARC + device with an alias. +

name is the name of the link to the device that will be created under /dev/disk/by-vdev.

+

devlink is the name of the device link + that has already been defined by udev. This may be an absolute path or + the base filename.

+
+
+ [pci_slot] port + name
+
Maps a physical path to a channel name (typically representing a single + disk enclosure).
+ +
Additionally create /dev/by-enclosure symlinks to + the disk enclosure + devices + using the naming scheme from vdev_id.conf. + enclosure_symlinks is only allowed for + sas_direct mode.
+ +
Specify the prefix for the enclosure symlinks in the form + /dev/by-enclosure/prefix⟩-⟨channel⟩⟨num⟩ +

Defaults to + “”.

+
+
+ prefix new + [channel]
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is + specified then the mapping is only applied to slots in the named channel, + otherwise the mapping is applied to all channels. The first-specified + slot rule that can match a slot takes precedence. + Therefore a channel-specific mapping for a given slot should generally + appear before a generic mapping for the same slot. In this way a custom + mapping may be applied to a particular channel and a default mapping + applied to the others.
+
+ yes|no
+
Specifies whether vdev_id(8) will handle only + dm-multipath devices. If set to yes then + vdev_id(8) will examine the first running component disk + of a dm-multipath device as provided by the driver command to determine + the physical path.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
channels are uniquely identified by a SAS switch port number
+
+
+
+ num
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to + determine which HBA or switch port a device is connected to. The default + is .
+
+ bay|phy|port|id|lun|ses
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay: +
+
+
read the slot number from the bay identifier.
+
+
read the slot number from the phy identifier.
+
+
use the SAS port as the slot number.
+
+
use the scsi id as the slot number.
+
+
use the scsi lun as the slot number.
+
+
use the SCSI Enclosure Services (SES) enclosure device slot number, as + reported by sg_ses(8). Intended for use only on + systems where bay is unsupported, noting that + port and id may be unstable across + disk replacement.
+
+
+
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping:

+
+
multipath     no
+topology      sas_direct
+phys_per_port 4
+slot          bay
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         C
+channel 86:00.0  0         D
+
+# Custom mapping for Channel A
+
+#    Linux      Mapped
+#    Slot       Slot      Channel
+slot 1          7         A
+slot 2          10        A
+slot 3          3         A
+slot 4          6         A
+
+# Default mapping for B, C, and D
+
+slot 1          4
+slot 2          2
+slot 3          1
+slot 4          3
+
+

A SAS-switch topology. Note that the channel keyword takes only two arguments in this example:

+
+
topology      sas_switch
+
+#       SWITCH PORT  CHANNEL NAME
+channel 1            A
+channel 2            B
+channel 3            C
+channel 4            D
+
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path:

+
+
multipath yes
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         A
+channel 86:00.0  0         B
+
+

A configuration with enclosure_symlinks enabled:

+
+
multipath yes
+enclosure_symlinks yes
+
+#          PCI_ID      HBA PORT     CHANNEL NAME
+channel    05:00.0     1            U
+channel    05:00.0     0            L
+channel    06:00.0     1            U
+channel    06:00.0     0            L
+
+In addition to the disks symlinks, this configuration will create: +
+
/dev/by-enclosure/enc-L0
+/dev/by-enclosure/enc-L1
+/dev/by-enclosure/enc-U0
+/dev/by-enclosure/enc-U1
+
+

A configuration using device link aliases:

+
+
#     by-vdev
+#     name     fully qualified or base name of device link
+alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+alias d2       wwn-0x5000c5002def789e
+
+
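After editing vdev_id.conf, one hedged way to regenerate the by-vdev links (the exact udev workflow may vary by distribution):
    udevadm trigger --subsystem-match=block
    udevadm settle
    ls -l /dev/disk/by-vdev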
+
+

+

vdev_id(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/dracut.zfs.7.html b/man/v2.2/7/dracut.zfs.7.html new file mode 100644 index 000000000..c83c85e37 --- /dev/null +++ b/man/v2.2/7/dracut.zfs.7.html @@ -0,0 +1,403 @@ + + + + + + + dracut.zfs.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

dracut.zfs.7

+
+ + + + + +
DRACUT.ZFS(7)Miscellaneous Information ManualDRACUT.ZFS(7)
+
+
+

+

dracut.zfs — + overview of ZFS dracut hooks

+
+
+

+
+
                      parse-zfs.sh → dracut-cmdline.service
+                          |                     ↓
+                          |                     …
+                          |                     ↓
+                          \————————→ dracut-initqueue.service
+                                                |                      zfs-import-opts.sh
+   zfs-load-module.service                      ↓                          |       |
+     |                  |                sysinit.target                    ↓       |
+     ↓                  |                       |        zfs-import-scan.service   ↓
+zfs-import-scan.service ↓                       ↓           | zfs-import-cache.service
+     |   zfs-import-cache.service         basic.target      |     |
+     \__________________|                       |           ↓     ↓
+                        ↓                       |     zfs-load-key.sh
+     zfs-env-bootfs.service                     |         |
+                        ↓                       ↓         ↓
+                 zfs-import.target → dracut-pre-mount.service
+                        |          ↑            |
+                        | dracut-zfs-generator  |
+                        | _____________________/|
+                        |/                      ↓
+                        |                   sysroot.mount ←——— dracut-zfs-generator
+                        |                       |
+                        |                       ↓
+                        |             initrd-root-fs.target ←— zfs-nonroot-necessities.service
+                        |                       |                                 |
+                        |                       ↓                                 |
+                        ↓             dracut-mount.service                        |
+       zfs-snapshot-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        ↓                       …                                 |
+       zfs-rollback-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        |          /sysroot/{usr,etc,lib,&c.} ←———————————————————/
+                        |                       |
+                        |                       ↓
+                        |                initrd-fs.target
+                        \______________________ |
+                                               \|
+                                                ↓
+        export-zfs.sh                      initrd.target
+              |                                 |
+              ↓                                 ↓
+   dracut-shutdown.service                      …
+                                                |
+                                                ↓
+                 zfs-needshutdown.sh → initrd-cleanup.service
+
+

Compare dracut.bootup(7) for the full + flowchart.

+
+
+

+

Under dracut, booting with + ZFS-on-/ is facilitated by a + number of hooks in the 90zfs module.

+

Booting into a ZFS dataset requires + mountpoint=/ to be set on the + dataset containing the root filesystem (henceforth "the boot + dataset") and at the very least either the bootfs + property to be set to that dataset, or the root= kernel + cmdline (or dracut drop-in) argument to specify it.

+

All children of the boot dataset with + = + with mountpoints matching /etc, + /bin, /lib, + /lib??, /libx32, + and /usr globs are deemed + essential and will be mounted as well.

+

zfs-mount-generator(8) is recommended for proper + functioning of the system afterward (correct mount properties, remounting, + &c.).

+
+
+

+
+

+
+
dataset, + dataset
+
Use dataset as the boot dataset. All pluses + (‘+’) are replaced with spaces + (‘ ’).
+
, + root=zfs:, + , + [root=]
+
After import, search for the first pool with the bootfs + property set, use its value as-if specified as the + dataset above.
+
rootfstype=zfs root=dataset
+
Equivalent to + root=zfs:dataset.
+
+ [root=]
+
Equivalent to root=zfs:AUTO.
+
flags
+
Mount the boot dataset with -o + flags; cf. + Temporary Mount + Point Properties in zfsprops(7). These properties + will not last, since all filesystems will be re-mounted from the real + root.
+
+
If specified, dracut-zfs-generator logs to the + journal.
+
+

Be careful about setting neither rootfstype=zfs + nor root=zfs:dataset — other + automatic boot selection methods, like + systemd-gpt-auto-generator and + systemd-fstab-generator might take precedent.

+
+
+

+
+
[=snapshot-name]
+
Execute zfs snapshot + boot-dataset@snapshot-name + before pivoting to the real root. snapshot-name + defaults to the current kernel release.
+
[=snapshot-name]
+
Execute zfs snapshot + -Rf + boot-dataset@snapshot-name + before pivoting to the real root. snapshot-name + defaults to the current kernel release.
+
host-id
+
Use zgenhostid(8) to set the host ID to + host-id; otherwise, + /etc/hostid inherited from the real root is + used.
+
, + zfs.force, zfsforce
+
Appends -f to all zpool + import invocations; primarily useful in + conjunction with spl_hostid=, or if no host ID was + inherited.
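Putting several of the options above together, a hypothetical kernel command line might look like the following (dataset and snapshot names are placeholders):
    root=zfs:rpool/ROOT/os bootfs.snapshot=pre-upgrade zfs.force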
+
+
+
+
+

+
+
parse-zfs.sh + ()
+
Processes spl_hostid=. If root= matches a known pattern, above, provides /dev/root and delays the initqueue until zfs(4) is loaded.
+
zfs-import-opts.sh + (systemd environment + generator)
+
Turns zfs_force, zfs.force, + or zfsforce into + ZPOOL_IMPORT_OPTS=-f for + zfs-import-scan.service or + zfs-import-cache.service.
+
zfs-load-key.sh + ()
+
Loads encryption keys for the boot dataset and its essential descendants. +
+
+
=
+
Is prompted for via systemd-ask-password + thrice.
+
=URL, + keylocation=URL
+
network-online.target is started before + loading.
+
=path
+
If path doesn't exist, + udevadm is + settled. If it still doesn't, it's waited for + for up to + s.
+
+
+
+
zfs-env-bootfs.service + (systemd service)
+
After pool import, sets BOOTFS= in the systemd + environment to the first non-null bootfs value in + iteration order.
+
dracut-zfs-generator + (systemd generator)
+
Generates sysroot.mount (using + rootflags=, if any). If an + explicit boot dataset was specified, also generates essential mountpoints + (sysroot-etc.mount, + sysroot-bin.mount, + &c.), otherwise generates + zfs-nonroot-necessities.service which mounts them + explicitly after /sysroot using + BOOTFS=.
+
zfs-snapshot-bootfs.service, + zfs-rollback-bootfs.service + (systemd services)
+
Consume bootfs.snapshot and + bootfs.rollback as described in + CMDLINE. Use + BOOTFS= if no explicit boot dataset was + specified.
+
zfs-needshutdown.sh + ()
+
If any pools were imported, signals that shutdown hooks are required.
+
export-zfs.sh + ()
+
Forcibly exports all pools.
+
/etc/hostid, + /etc/zfs/zpool.cache, + /etc/zfs/vdev_id.conf (regular files)
+
Included verbatim, hostonly.
+
mount-zfs.sh + ()
+
Does nothing on systemd systems (if + dracut-zfs-generator + succeeded). Otherwise, loads encryption key for + the boot dataset from the console or via plymouth. It may not work at + all!
+
+
+
+

+

zfsprops(7), + zpoolprops(7), + dracut-shutdown.service(8), + systemd-fstab-generator(8), + systemd-gpt-auto-generator(8), + zfs-mount-generator(8), + zgenhostid(8)

+
+
+ + + + + +
March 28, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/index.html b/man/v2.2/7/index.html new file mode 100644 index 000000000..1d891240e --- /dev/null +++ b/man/v2.2/7/index.html @@ -0,0 +1,159 @@ + + + + + + + Miscellaneous (7) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/man/v2.2/7/vdevprops.7.html b/man/v2.2/7/vdevprops.7.html new file mode 100644 index 000000000..4535b44fb --- /dev/null +++ b/man/v2.2/7/vdevprops.7.html @@ -0,0 +1,330 @@ + + + + + + + vdevprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdevprops.7

+
+ + + + + +
VDEVPROPS(7)Miscellaneous Information ManualVDEVPROPS(7)
+
+
+

+

vdevpropsnative + and user-defined properties of ZFS vdevs

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate vdevs in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every vdev has a set of properties that export statistics about + the vdev as well as control various behaviors. Properties are not inherited + from top-level vdevs, with the exception of checksum_n, checksum_t, io_n, + and io_t.

+

The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase.

+

The following native properties consist of read-only statistics + about the vdev. These properties can not be changed.

+
+
+
Percentage of vdev space used
+
+
state of this vdev such as online, faulted, or offline
+
+
globally unique id of this vdev
+
+
The allocable size of this vdev
+
+
The physical size of this vdev
+
+
The physical sector size of this vdev expressed as the power of two
+
+
The total size of this vdev
+
+
The amount of remaining free space on this vdev
+
+
The amount of allocated space on this vdev
+
+
How much this vdev can expand by
+
+
Percent of fragmentation in this vdev
+
+
The level of parity for this vdev
+
+
The device id for this vdev
+
+
The physical path to the device
+
+
The enclosure path to the device
+
+
Field Replaceable Unit, usually a model number
+
+
Parent of this vdev
+
+
Comma separated list of children of this vdev
+
+
The number of children belonging to this vdev
+
, + , + , +
+
The number of errors of each type encountered by this vdev
+
, + , + , + , + , +
+
The number of I/O operations of each type performed by this vdev
+
, + , + , + , + , +
+
The cumulative size of all operations of each type performed by this + vdev
+
+
If this device is currently being removed from the pool
+
+

The following native properties can be used to change the behavior + of a vdev.

+
+
, + , + , +
+
Tune the fault management daemon by specifying checksum/io thresholds of <N> errors in <T> seconds, respectively. These properties can be set on leaf and top-level vdevs. When the property is set on both the leaf and the top-level vdev, the value of the leaf vdev will be used. If the property is only set on the top-level vdev, that value will be used. The values of these properties do not persist across vdev replacement. For this reason, it is advisable to set the property on the top-level vdev, not on the leaf vdev itself. The default values are 10 errors in 600 seconds.
+
+
A text comment up to 8192 characters long
+
+
The amount of space to reserve for the EFI system partition
+
+
If this device should propagate BIO errors back to ZFS; used to disable failfast.
+
+
The path to the device for this vdev
+
+
If this device should perform new allocations, used to disable a device + when it is scheduled for later removal. See + zpool-remove(8).
+
+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate vdevs.

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings and are never + validated. Use the zpool set + command with a blank value to clear a user property. Property values are + limited to 8192 bytes.
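A brief sketch of setting, reading, and clearing a vdev user property (property, pool, and vdev names are placeholders):
    zpool set com.example:location=rack12-bay3 tank mirror-0
    zpool get com.example:location tank mirror-0
    zpool set com.example:location= tank mirror-0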

+
+
+
+

+

zpoolprops(7), + zpool-set(8)

+
+
+ + + + + +
October 30, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/zfsconcepts.7.html b/man/v2.2/7/zfsconcepts.7.html new file mode 100644 index 000000000..5162b421c --- /dev/null +++ b/man/v2.2/7/zfsconcepts.7.html @@ -0,0 +1,326 @@ + + + + + + + zfsconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsconcepts.7

+
+ + + + + +
ZFSCONCEPTS(7)Miscellaneous Information ManualZFSCONCEPTS(7)
+
+
+

+

zfsconcepts — + overview of ZFS concepts

+
+
+

+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system, such as + mounting and unmounting, taking snapshots, and setting properties. The + physical storage characteristics, however, are managed by the + zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. + Snapshots can be created extremely quickly, and initially consume no + additional space within the pool. As data within the active dataset changes, + the snapshot consumes more data than would otherwise be shared with the + active dataset.

+

Snapshots can have arbitrary names. Snapshots of + volumes can be cloned or rolled back, visibility is determined by the + property + of the parent volume.

+

File system snapshots can be accessed under the + .zfs/snapshot directory in the root of the file + system. Snapshots are automatically mounted on demand and may be unmounted + at regular intervals. The visibility of the .zfs + directory can be controlled by the + + property.

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks can not be accessed through the filesystem in any way. From a storage standpoint a bookmark just provides a way to reference when a snapshot was created as a distinct object. Bookmarks are initially tied to a snapshot, not the filesystem or volume, and they will survive if the snapshot itself is destroyed. Since they are very lightweight, there's little incentive to destroy them.

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a + snapshot is cloned, it creates an implicit dependency between the parent and + child. Even though the clone is created somewhere else in the dataset + hierarchy, the original snapshot cannot be destroyed as long as a clone + exists. The + property exposes this dependency, and the destroy + command lists any such dependencies, if they exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.
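A typical workflow might look like this (dataset names are illustrative):
zfs snapshot tank/project@today
zfs clone tank/project@today tank/project-work
zfs promote tank/project-work
zfs destroy tank/project
After the promotion the @today snapshot belongs to tank/project-work, so the file system the clone was created from can be destroyed.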

+
+
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set in the mountpoint property. This directory is created as needed, and ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/fstab). The mountpoint property can be inherited, so if pool/home has a mount point of /export/stuff, then pool/home/user automatically inherits a mount point of /export/stuff/user.

+

A file system mountpoint property of none prevents the file system from being mounted.

+

If needed, ZFS file systems can also be managed with traditional tools (mount, umount, /etc/fstab). If a file system's mount point is set to legacy, ZFS makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system. Because pools must be imported before a legacy mount can succeed, administrators should ensure that legacy mounts are only attempted after the zpool import process finishes at boot time. For example, on machines using systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that the zfs-import completes before systemd attempts + mounting the filesystem. See systemd.mount(5) for + details.
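For example (pool and dataset names are placeholders), one file system can be managed by ZFS while another is handed over to the legacy tools:
zfs set mountpoint=/export/stuff tank/home
zfs set mountpoint=legacy tank/archive
mount -t zfs tank/archive /mnt/archive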

+
+
+

+

Deduplication is the process of removing redundant data at the block level, reducing the total amount of data stored. If a file system has the dedup property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow I/O and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk I/O.

+

Before creating a pool with deduplication enabled, ensure that you have planned your hardware requirements appropriately and implemented appropriate recovery practices, such as regular backups. Consider using the compression property as a less resource-intensive alternative.
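For example (the dataset name is hypothetical), deduplication is enabled per dataset, and its overall effectiveness can be checked at the pool level:
zfs set dedup=on tank/builds
zpool get dedupratio tank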

+
+
+

+

Block cloning is a facility that allows a file (or parts of a + file) to be "cloned", that is, a shallow copy made where the + existing data blocks are referenced rather than copied. Later modifications + to the data will cause a copy of the data block to be taken and that copy + modified. This facility is used to implement "reflinks" or + "file-level copy-on-write".

+

Cloned blocks are tracked in a special on-disk structure called the Block Reference Table (BRT). Unlike the deduplication table, the BRT has minimal overhead, so block cloning can be enabled at all times.

+

Also unlike deduplication, cloning must be requested by a user + program. Many common file copying programs, including newer versions of + /bin/cp, will try to create clones automatically. + Look for "clone", "dedupe" or "reflink" in the + documentation for more information.

+

There are some limitations to block cloning. Only whole blocks can be cloned, and blocks can not be cloned if they are not yet written to disk, if they are encrypted, or if the source and destination recordsize properties differ. The OS may add additional restrictions; for example, most versions of Linux will not allow clones across datasets.
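For example, on Linux with a coreutils cp that supports reflinks, a clone can be requested explicitly; on pools where the block_cloning feature is enabled, the pool-level bclone* properties report the space saved (pool and file names are placeholders):
cp --reflink=always /tank/data/image.iso /tank/data/image-copy.iso
zpool get bcloneused,bclonesaved,bcloneratio tank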

+
+
+
+ + + + + +
October 6, 2023                                 Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/zfsprops.7.html b/man/v2.2/7/zfsprops.7.html new file mode 100644 index 000000000..ced7501bc --- /dev/null +++ b/man/v2.2/7/zfsprops.7.html @@ -0,0 +1,1535 @@ + + + + + + + zfsprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsprops.7

+
+ + + + + +
ZFSPROPS(7)             Miscellaneous Information Manual             ZFSPROPS(7)
+
+
+

+

zfspropsnative + and user-defined properties of ZFS datasets

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its shortened column name, avail.

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
For encrypted datasets, indicates where the dataset is currently + inheriting its encryption key from. Loading or unloading a key for the + encryptionroot will implicitly load / unload the key for + any inheriting datasets (see zfs + load-key and zfs + unload-key for details). Clones will always share + an encryption key with their origin. See the + Encryption section of + zfs-load-key(8) for details.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
Indicates if an encryption key is currently loaded into ZFS. The possible values are none, available, and unavailable. See zfs load-key and zfs unload-key.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its shortened column name, lrefer.

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its shortened column name, lused.

+
+
+
For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.
+
+
A unique identifier for this dataset within the pool. Unlike the dataset's + guid, the + objsetid of a dataset is not transferred to other pools + when the snapshot is copied with a send/receive operation. The + objsetid can be reused (for a new dataset) after the + dataset is deleted.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive + -s, this opaque token can be provided to + zfs send + -t to resume and complete the + zfs receive.
+
+
For bookmarks, this is the list of snapshot guids the bookmark contains a + redaction list for. For snapshots, this is the list of snapshot guids the + snapshot is redacted with respect to.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its shortened column name, refer.

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: filesystem, volume, snapshot, or bookmark.
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section of + zfsconcepts(7)) is space that is referenced + exclusively by this snapshot. If this snapshot is destroyed, the amount + of used space will be freed. Space that is shared by + multiple snapshots isn't accounted for in this metric. When a snapshot + is destroyed, space that was previously shared with this snapshot can + become unique to snapshots adjacent to it, thus changing the used space + of those snapshots. The used space of the latest snapshot can also be + affected by changes in the file system. Note that the + used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced does not take into account pending changes. Pending changes are generally accounted for within a few seconds. Committing a change to a disk using fsync(2) or O_SYNC does not necessarily guarantee that the space usage information is updated immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
userused@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du + and ls + -s. See the zfs + userspace command for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.

+

The userused@ + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the + following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.
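Per-user accounting can be inspected, for example (dataset and user names are hypothetical), with:
zfs userspace tank/home
zfs get userused@joe tank/home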

+
+
userobjused@user
+
The userobjused property is similar to + userused but instead it counts the number of objects + consumed by a user. This property counts all objects allocated on behalf + of the user, it may differ from the results of system tools such as + df -i. +

When the property xattr=on + is set on a file system additional objects will be created per-file to + store extended attributes. These additional objects are reflected in the + userobjused value and are counted against the user's + userobjquota. When a file system is configured to use + xattr=sa no additional internal + objects are normally required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
groupused@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
groupobjused@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
projectused@project
+
The amount of space consumed by the specified project in this dataset. + Project is identified via the project identifier (ID) that is object-based + numeral attribute. An object can inherit the project ID from its parent + object (if the parent has the flag of inherit project ID that can be set + and changed via chattr + -/+P or zfs project + -s) when being created. The privileged user can + set and change object's project ID via chattr + -p or zfs project + -s anytime. Space is charged to the project of + each file, as displayed by lsattr + -p or zfs project. See the + userused@user property for more + information. +

The root user, or a user who has been granted the + projectused privilege with zfs + allow, can access all projects' usage.

+
+
projectobjused@project
+
The projectobjused is similar to + projectused but instead it counts the number of objects + consumed by project. When the property + xattr=on is set on a fileset, ZFS will + create additional objects per-file to store extended attributes. These + additional objects are reflected in the projectobjused + value and are counted against the project's + projectobjquota. When a filesystem is configured to use + xattr=sa no additional internal + objects are required. See the + userobjused@user property for more + information. +

The root user, or a user who has been granted the + projectobjused privilege with zfs + allow, can access all projects' objects usage.

+
+
+
Provides a mechanism to quickly determine whether snapshot list has + changed without having to mount a dataset or iterate the snapshot list. + Specifies the time at which a snapshot for a dataset was last created or + deleted. +

This allows us to be more efficient in how often we query snapshots. The property is persistent across mount and unmount operations only if the extensible_dataset feature is enabled.

+
+
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 16 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its shortened column name, volblock.

+
+
+
The amount of space referenced by this dataset, that was + written since the previous snapshot (i.e. that is not referenced by the + previous snapshot).
+
written@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which + for clones may be a snapshot in the origin's filesystem (or the origin + of the origin's filesystem, etc.)

+
+
+

The following native properties can be used to change the behavior + of a ZFS dataset.

+
+
aclinherit=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
+
discard
does not inherit any ACEs.
+
+
noallow
only inherits inheritable ACEs that specify "deny" permissions.
+
+
restricted
default, removes the write_acl and write_owner permissions when the ACE is inherited.
+
+
passthrough
inherits all inheritable ACEs without any modifications.
+
+
passthrough-x
same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.
+
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + POSIX ACLs.

+
+
aclmode=discard|groupmask|passthrough|restricted
+
Controls how an ACL is modified during chmod(2) and how inherited ACEs are + modified by the file creation mode: +
+
+
+
discard
default, deletes all ACEs except for those representing the mode of the file or directory requested by chmod(2).
+
+
groupmask
reduces permissions granted in all ALLOW entries found in the ACL such that they are no greater than the group permissions specified by chmod(2).
+
+
passthrough
indicates that no changes are made to the ACL other than creating or updating the necessary ACL entries to represent the new mode of the file or directory.
+
+
restricted
will cause the chmod(2) operation to return an error when used on any file or directory which has a non-trivial ACL whose entries can not be represented by a mode. chmod(2) is required to change the set user ID, set group ID, or sticky bits on a file or directory, as they do not have equivalent ACL entries. In order to use chmod(2) on a file or directory with a non-trivial ACL when aclmode is set to restricted, you must first remove all ACL entries which do not represent the current mode.
+
+
+
+
acltype=off|nfsv4|posix
+
Controls whether ACLs are enabled and if so what type of ACL to use. When + this property is set to a type of ACL not supported by the current + platform, the behavior is the same as if it were set to + off. +
+
+
+
off
default on Linux, when a file system has the acltype property set to off then ACLs are disabled.
+
+
noacl
an alias for off
+
+
nfsv4
default on FreeBSD, indicates that NFSv4-style ZFS ACLs should be used. These ACLs can be managed with the getfacl(1) and setfacl(1) commands. The nfsv4 ZFS ACL type is not yet supported on Linux.
+
+
posix
indicates POSIX ACLs should be used. POSIX ACLs are specific to Linux and are not functional on other platforms. POSIX ACLs are stored as an extended attribute and therefore will not overwrite any existing NFSv4 ACLs which may be set.
+
+
posixacl
an alias for posix
+
+
+

To obtain the best performance when setting + posix users are strongly encouraged to set the + xattr=sa property. This will result + in the POSIX ACL being stored more efficiently on disk. But as a + consequence, all new extended attributes will only be accessible from + OpenZFS implementations which support the + xattr=sa property. See the + xattr property for more details.

+
+
atime=on|off
+
Controls whether the access time for files is updated when they are read. Turning this property off avoids producing write traffic when reading files and can result in significant performance gains, though it might confuse mailers and other similar utilities. The values on and off are equivalent to the atime and noatime mount options. The default value is on. See also relatime below.
+
canmount=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.
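A common pattern (dataset names are illustrative) uses canmount=off so that the children of two differently-configured parents share a single directory tree:
zfs create -o canmount=off -o mountpoint=/home tank/accounting
zfs create -o canmount=off -o mountpoint=/home tank/engineering
zfs create tank/accounting/alice
zfs create tank/engineering/bob
Here tank/accounting/alice mounts at /home/alice and tank/engineering/bob at /home/bob, while the parent datasets themselves are never mounted.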

+
+
checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr|blake3
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, + edonr, and blake3 checksum + algorithms require enabling the appropriate features on the pool.

+

Please see zpool-features(7) for more + information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
compression=on|off|gzip|gzip-N|lz4|lzjb|zle|zstd|zstd-N|zstd-fast|zstd-fast-N
+
Controls the compression algorithm used for this dataset. +

When set to on (the default), indicates that + the current default compression algorithm should be used. The default + balances compression and decompression speed, with compression ratio and + is expected to work well on a wide variety of workloads. Unlike all + other settings for this property, on does not select a + fixed compression type. As new compression algorithms are added to ZFS + and enabled on a pool, the default compression algorithm may change. The + current default compression algorithm is either lzjb + or, if the lz4_compress feature is enabled, + lz4.

+

The lz4 compression algorithm is a high-performance replacement for the lzjb algorithm. It features significantly faster compression and decompression, as well as a moderately higher compression ratio than lzjb, but can only be used on pools with the lz4_compress feature set to enabled. See zpool-features(7) for details on ZFS feature flags and the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

+

The zstd compression algorithm provides both high compression ratios and good performance. You can specify the zstd level by using the value zstd-N, where N is an integer from 1 (fastest) to 19 (best compression ratio). zstd is equivalent to zstd-3.

+

Faster speeds at the cost of the compression ratio can be requested by setting a negative zstd level. This is done using zstd-fast-N, where N is an integer in [1-10, 20, 30, ..., 100, 500, 1000] which maps to a negative zstd level. The lower the level the faster the compression; 1000 provides the fastest compression and lowest compression ratio. zstd-fast is equivalent to zstd-fast-1.

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its shortened column name, compress. Changing this property affects only newly-written data.

+

When any setting except off is selected, + compression will explicitly check for blocks consisting of only zeroes + (the NUL byte). When a zero-filled block is detected, it is stored as a + hole and not compressed using the indicated compression algorithm.

+

Any block being compressed must be no larger than 7/8 of its + original size after compression, otherwise the compression will not be + considered worthwhile and the block saved uncompressed. Note that when + the logical block is less than 8 times the disk sector size this + effectively reduces the necessary compression ratio; for example, 8 KiB + blocks on disks with 4 KiB disk sectors must compress to 1/2 or less of + their original size.
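For example (the dataset name is hypothetical, and the zstd feature must be available on the pool), a specific zstd level can be selected; only blocks written after the change use it:
zfs set compression=zstd-3 tank/data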

+
+
context=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
fscontext=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the file system being mounted. See selinux(8) for more information.
+
defcontext=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
rootcontext=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
copies=1|2|3
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a missing + top-level vdev. Do NOT create, for example a two-disk + striped pool and set copies=2 on + some datasets thinking you have setup redundancy for them. When a disk + fails you will not be able to import the pool and will have lost all of + your data.

+

Encrypted datasets may not have + copies=3 since the + implementation stores some encryption metadata where the third copy + would normally be.
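For example (the dataset name is a placeholder), extra copies are best requested at creation time so that all data written to the dataset is covered:
zfs create -o copies=2 tank/important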

+
+
devices=on|off
+
Controls whether device nodes can be opened on this file system. The default value is on. The values on and off are equivalent to the dev and nodev mount options.
+
dedup=off|on|verify|sha256[,verify]|sha512[,verify]|skein[,verify]|edonr,verify|blake3[,verify]
+
Configures deduplication for a dataset. The default value is + off. The default deduplication checksum is + sha256 (this may change in the future). When + dedup is enabled, the checksum defined here overrides + the checksum property. Setting the value to + verify has the same effect as the setting + sha256,verify. +

If set to verify, ZFS will do a byte-to-byte + comparison in case of two blocks having the same signature to make sure + the block contents are identical. Specifying verify is + mandatory for the edonr algorithm.

+

Unless necessary, deduplication should not be enabled on a system. See the Deduplication section of zfsconcepts(7).

+
+
dnodesize=legacy|auto|1k|2k|4k|8k|16k
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy + requires the large_dnode + pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the + workload makes heavy use of extended attributes. This may be applicable + to SELinux-enabled systems, Lustre servers, and Samba servers, for + example. Literal values are supported for cases where the optimal size + is known in advance and for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode + feature, or if you need to import this pool on a system that doesn't + support the large_dnode + feature.

+

This property can also be referred to by its shortened column name, dnsize.

+
+
encryption=off|on|aes-128-ccm|aes-192-ccm|aes-256-ccm|aes-128-gcm|aes-192-gcm|aes-256-gcm
+
Controls the encryption cipher suite (block cipher, key length, and mode) + used for this dataset. Requires the encryption feature + to be enabled on the pool. Requires a keyformat to be + set at dataset creation time. +

Selecting encryption=on + when creating a dataset indicates that the default encryption suite will + be selected, which is currently aes-256-gcm. In order + to provide consistent data protection, encryption must be specified at + dataset creation time and it cannot be changed afterwards.

+

For more details and caveats about encryption see the + Encryption section of + zfs-load-key(8).
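A minimal sketch (the dataset name is a placeholder): creating an encrypted file system protected by a passphrase that is prompted for interactively:
zfs create -o encryption=on -o keyformat=passphrase tank/secure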

+
+
keyformat=raw|hex|passphrase
+
Controls what format the user's encryption key will be provided as. This + property is only set when the dataset is encrypted. +

Raw keys and hex keys must be 32 bytes long (regardless of the + chosen encryption suite) and must be randomly generated. A raw key can + be generated with the following command:

+
# dd if=/dev/urandom bs=32 count=1 of=/path/to/output/key
+

Passphrases must be between 8 and 512 bytes long and will be + processed through PBKDF2 before being used (see the + pbkdf2iters property). Even though the encryption + suite cannot be changed after dataset creation, the keyformat can be + with zfs change-key.

+
+
keylocation=prompt|file:///absolute/file/path|https://address|http://address
+
Controls where the user's encryption key will be loaded from by default + for commands such as zfs + load-key and zfs + mount -l. This property is + only set for encrypted datasets which are encryption roots. If + unspecified, the default is prompt. +

Even though the encryption suite cannot + be changed after dataset creation, the keylocation can be with either + zfs set or + zfs change-key. If + prompt is selected ZFS will ask for the key at the + command prompt when it is required to access the encrypted data (see + zfs load-key for + details). This setting will also allow the key to be passed in via the + standard input stream, but users should be careful not to place keys + which should be kept secret on the command line. If a file URI is + selected, the key will be loaded from the specified absolute file path. + If an HTTPS or HTTP URL is selected, it will be GETted using + fetch(3), libcurl, or nothing, depending on + compile-time configuration and run-time availability. The + + environment variable can be set to set the location of the concatenated + certificate store. The + + environment variable can be set to override the location of the + directory containing the certificate authority bundle. The + + and + + environment variables can be set to configure the path to the client + certificate and its key.

+
+
pbkdf2iters=iterations
+
Controls the number of PBKDF2 iterations that a passphrase encryption key should be run through when processing it into an encryption key. This property is only defined when encryption is enabled and a keyformat of passphrase is selected. The goal of PBKDF2 is to significantly increase the computational difficulty needed to brute force a user's passphrase. This is accomplished by forcing the attacker to run each passphrase through a computationally expensive hashing function many times before they arrive at the resulting key. A user who actually knows the passphrase will only have to pay this cost once. As CPUs become better at processing, this number should be raised to ensure that a brute force attack is still not possible. The current default is 350000 and the minimum is 100000. This property may be changed with zfs change-key.
+
exec=on|off
+
Controls whether processes can be executed from within this file system. The default value is on. The values on and off are equivalent to the exec and noexec mount options.
+
filesystem_limit=count|none
+
Limits the number of filesystems and volumes that can exist under this + point in the dataset tree. The limit is not enforced if the user is + allowed to change the limit. Setting a filesystem_limit + to on a descendent of a filesystem that already has a + filesystem_limit does not override the ancestor's + filesystem_limit, but rather imposes an additional + limit. This feature must be enabled to be used (see + zpool-features(7)).
+
special_small_blocks=size
+
This value represents the threshold block size for including small file + blocks into the special allocation class. Blocks smaller than or equal to + this value will be assigned to the special allocation class while greater + blocks will be assigned to the regular class. Valid values are zero or a + power of two from 512 up to 1048576 (1 MiB). The default size is 0 which + means no small file blocks will be allocated in the special class. +

Before setting this property, a special class vdev must be + added to the pool. See zpoolconcepts(7) for more + details on the special allocation class.

+
+
mountpoint=path|none|legacy
+
Controls the mount point used for this file system. See the + Mount Points section of + zfsconcepts(7) for more information on how this property + is used. +

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none. In addition, any shared file systems are + unshared and shared in the new location.

+

When the mountpoint property is set with zfs set -u, the mountpoint property is updated but the dataset is not mounted or unmounted and remains as it was before.

+
+
nbmand=on|off
+
Controls whether the file system should be mounted with + nbmand (Non-blocking mandatory locks). Changes to this + property only take effect when the file system is umounted and remounted. + This was only supported by Linux prior to 5.15, and was buggy there, and + is not supported by FreeBSD. On Solaris it's used + for SMB clients.
+
overlay=on|off
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux and + FreeBSD file systems. On these platforms the + property is on by default. Set to off + to disable overlay mounts for consistency with OpenZFS on other + platforms.
+
primarycache=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is + set to all, then both user data and metadata is cached. + If this property is set to none, then neither user data + nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
quota=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.
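For example (the dataset name is a placeholder):
zfs set quota=100G tank/home
zfs get quota,used,available tank/home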

+
+
snapshot_limit=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(7)).
+
userquota@user=size|none
+
Limits the amount of space consumed by the specified user. User space consumption is identified by the userused@user property.

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace command + for more information.

+

Unprivileged users can only access their own space usage. The root user, or a user who has been granted the userquota privilege with zfs allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@ properties + are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the + following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.
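For example (user and dataset names are hypothetical):
zfs set userquota@joe=50G tank/home
zfs get userquota@joe tank/home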

+
+
userobjquota@user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
groupquota@group=size|none
+
Limits the amount of space consumed by the specified group. Group space consumption is identified by the groupused@group property.

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
groupobjquota@group=size|none
+
The + + is similar to groupquota but it limits number of objects + a group can consume. Please refer to userobjused for + more information about how objects are counted.
+
projectquota@project=size|none
+
Limits the amount of space consumed by the specified project. Project space consumption is identified by the projectused@project property. Please refer to projectused for more information about how the project is identified and set/changed.

The root user, or a user who has been granted the + projectquota privilege with zfs + allow, can access all projects' quota.

+
+
projectobjquota@project=size|none
+
The projectobjquota is similar to + projectquota but it limits number of objects a project + can consume. Please refer to userobjused for more + information about how objects are counted.
+
readonly=on|off
+
Controls whether this dataset can be modified. The default value is off. The values on and off are equivalent to the ro and rw mount options.

This property can also be referred to by its shortened column name, rdonly.

+
+
recordsize=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.

+

The size specified must be a power of two greater than or equal to 512 B and less than or equal to 128 KiB. If the large_blocks feature is enabled on the pool, the size may be up to 1 MiB. See zpool-features(7) for details on ZFS feature flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.

+

This property can also be referred to by its shortened column name, recsize.
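For example, a file system holding database files accessed in fixed-size records might use (the dataset name is a placeholder):
zfs set recordsize=16K tank/db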

+
+
redundant_metadata=all|most|some|none
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 1000 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

When set to some, ZFS stores an extra copy + of only critical metadata. This can improve file create performance + since less metadata needs to be written. If a single on-disk block is + corrupt, at worst a single user file can be lost.

+

When set to none, ZFS does not store any + copies of metadata redundantly. If a single on-disk block is corrupt, an + entire dataset can be lost.

+

The default value is all.

+
+
refquota=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
refreservation=size|none|auto
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

If refreservation is set to + auto, a volume is thick provisioned (or "not + sparse"). refreservation=auto + is only supported on volumes. See volsize in the + Native Properties section + for more information about sparse volumes.

+

This property can also be referred to by its shortened column name, refreserv.

+
+
relatime=on|off
+
Controls the manner in which the access time is updated when atime=on is set. Turning this property on causes the access time to be updated relative to the modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time or if the existing access time hasn't been updated within the past 24 hours. The default value is on. The values on and off are equivalent to the relatime and norelatime mount options.
+
reservation=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its shortened column name, reserv.

+
+
secondarycache=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property + is set to all, then both user data and metadata is + cached. If this property is set to none, then neither + user data nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
setuid=on|off
+
Controls whether the setuid bit is respected for the file system. The default value is on. The values on and off are equivalent to the suid and nosuid mount options.
+
sharesmb=on|off|opts
+
Controls whether the file system is shared by using Samba USERSHARES, and what options are to be used. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the net(8) command is invoked to create a usershare.

Because SMB shares require a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name except that the characters in the dataset name, which would be invalid in the resource name, are replaced with underscore (_) characters. Linux does not currently support additional options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) + "Everyone:F" ("F" stands for "full + permissions", i.e. read and write permissions) and no guest access + (which means Samba must be able to authenticate a real user — + passwd(5)/shadow(5)-, LDAP- or + smbpasswd(5)-based) by default. This means that any + additional access control (disallow specific user specific access etc) + must be done on the underlying file system.

+

When the sharesmb property is updated with zfs set -u, the property is set to the desired value, but the operation to share, reshare or unshare the dataset is not performed.

+
+
sharenfs=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are + to be used. A file system with a sharenfs property of + off is managed with the exportfs(8) + command and entries in the /etc/exports file. + Otherwise, the file system is automatically shared and unshared with the + zfs share and + zfs unshare commands. If + the property is set to on, the dataset is shared using + the default options: +
sec=sys,rw,crossmnt,no_subtree_check
+

Please note that the options are comma-separated, unlike those + found in exports(5). This is done to negate the need + for quoting, as well as to make parsing with scripts easier.

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.

+

When the sharenfs property is updated with zfs set -u, the property is set to the desired value, but the operation to share, reshare or unshare the dataset is not performed.
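For example (the dataset name is a placeholder; any non-default options are passed through to exportfs(8)):
zfs set sharenfs=on tank/export
zfs set sharenfs=rw,crossmnt,no_subtree_check tank/export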

+
+
logbias=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
+
snapdev=hidden|visible
+
Controls whether the volume snapshot devices under /dev/zvol/⟨pool⟩ are hidden or visible. The default value is hidden.
+
snapdir=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section of + zfsconcepts(7). The default value is + hidden.
+
sync=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX-specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
+
version=N|current
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
volsize=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse volume" (also + known as "thin provisioned") can be created by specifying the + -s option to the zfs + create -V command, or by + changing the value of the refreservation property (or + reservation property on pool version 8 or earlier) + after the volume has been created. A "sparse volume" is a + volume where the value of refreservation is less than + the size of the volume plus the space required to store its metadata. + Consequently, writes to a sparse volume can fail with + ENOSPC when the pool is low on space. For a + sparse volume, changes to volsize are not reflected in + the refreservation. A volume that is not sparse is + said to be "thick provisioned". A sparse volume can become + thick provisioned by setting refreservation to + auto.

+
+
volmode=default|full|geom|dev|none
+
This property specifies how volumes should be exposed to the OS. Setting it to full exposes volumes as fully fledged block devices, providing maximal functionality. The value geom is just an alias for full and is kept for compatibility. Setting it to dev hides its partitions. Volumes with the property set to none are not exposed outside ZFS, but can be snapshotted, cloned, replicated, and so on; this can be suitable for backup purposes. The value default means that volume exposure is controlled by the system-wide tunable zvol_volmode, where full, dev and none are encoded as 1, 2 and 3 respectively. The default value is full.
+
vscan=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used by OpenZFS.
+
xattr=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two + styles of extended attributes are supported: either directory-based or + system-attribute-based. +

The default value of on enables + directory-based extended attributes. This style of extended attribute + imposes no practical limit on either the size or number of attributes + which can be set on a file. Although under Linux the + getxattr(2) and setxattr(2) system + calls limit the maximum size to 64K. This is the most + compatible style of extended attribute and is supported by all ZFS + implementations.

+

System-attribute-based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk I/O required. Up + to 64K of data may be stored per-file in the space + reserved for system attributes. If there is not enough space available + for an extended attribute then it will be automatically written as a + directory-based xattr. System-attribute-based extended attributes are + not accessible on platforms which do not support the + xattr=sa feature. OpenZFS supports + xattr=sa on both + FreeBSD and Linux.

+

The use of system-attribute-based xattrs is strongly + encouraged for users of SELinux or POSIX ACLs. Both of these features + heavily rely on extended attributes and benefit significantly from the + reduced access time.

+

The values on and off are equivalent to the xattr and noxattr mount options.
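For example (the dataset name is a placeholder), system-attribute-based extended attributes are selected with:
zfs set xattr=sa tank/fs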

+
+
jailed=off|on
+
Controls whether the dataset is managed from a jail. See + zfs-jail(8) for more information. Jails are a + FreeBSD feature and this property is not available + on other platforms.
+
zoned=off|on
+
Controls whether the dataset is managed from a non-global zone or + namespace. See zfs-zone(8) for more information. Zoning + is a Linux feature and this property is not available on other + platforms.
+
+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.

+
+
casesensitivity=sensitive|insensitive|mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".

+
+
normalization=none|formC|formD|formKC|formKD
+
Indicates whether the file system should perform a Unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized only as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+
utf8only=on|off
+
Indicates whether the file system should reject file names that include characters that are not present in the UTF-8 character code set. If this property is explicitly set to off, the normalization property must either not be explicitly set or be set to none. The default value for the utf8only property is off. This property cannot be changed after the file system is created.
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.
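For example, these properties might be set at creation time for a file system intended to be shared with the SMB server (the dataset name and the chosen normalization form are illustrative):
# zfs create -o casesensitivity=mixed -o normalization=formD -o utf8only=on pool/share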

+
+
+

+

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
+
+
atime/noatime
+
+
auto/noauto
+
+
dev/nodev
+
+
exec/noexec
+
+
ro/rw
+
+
relatime/norelatime
+
+
suid/nosuid
+
+
xattr/noxattr
+
+
mand/nomand
+
=
+
context=
+
=
+
fscontext=
+
=
+
defcontext=
+
=
+
rootcontext=
+
+
+

In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The nosuid option is an alias for nodevices,nosetuid. These properties are reported as "temporary" by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.
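For example, a dataset might be mounted read-only temporarily without changing its stored properties (the dataset name is illustrative); the resulting readonly setting is then reported with a "temporary" source by zfs get until the dataset is remounted:
# zfs mount -o ro pool/data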

+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.
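For example (the property name and dataset are illustrative):
# zfs set com.example:backup-policy=daily pool/data
# zfs get com.example:backup-policy pool/data
# zfs inherit com.example:backup-policy pool/data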

+
+
+
+ + + + + +
August 8, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/zpool-features.7.html b/man/v2.2/7/zpool-features.7.html new file mode 100644 index 000000000..6cf0309af --- /dev/null +++ b/man/v2.2/7/zpool-features.7.html @@ -0,0 +1,1219 @@ + + + + + + + zpool-features.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.7

+
+ + + + + +
ZPOOL-FEATURES(7)Miscellaneous Information ManualZPOOL-FEATURES(7)
+
+
+

+

zpool-features — + description of ZFS pool features

+
+
+

+

ZFS pool on-disk format versions are specified via “features” which replace the old on-disk format numbers (the last supported on-disk format number is 28). To enable a feature on a pool use the zpool upgrade command, or set the feature@feature-name property to enabled. Please also see the Compatibility feature sets section for information on how sets of features may be enabled together.
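For example, a single feature might be enabled on an existing pool, or all supported features enabled at once (the pool name is illustrative):
# zpool set feature@bookmarks=enabled pool
# zpool upgrade pool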

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

Since most features can be enabled independently of each other, + the on-disk format of the pool is specified by the set of all features + marked as active on the pool. If the pool was created by + another software version this set may include unsupported features.

+
+

+

Every feature has a GUID of the form + com.example:feature-name. The + reversed DNS name ensures that the feature's GUID is unique across all ZFS + implementations. When unsupported features are encountered on a pool they + will be identified by their GUIDs. Refer to the documentation for the ZFS + implementation that created the pool for information about those + features.

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its GUID which follows the + ‘:’ (i.e. + com.example:feature-name would + have the short name feature-name), however a feature's + short name may differ across ZFS implementations if following the convention + would result in name conflicts.

+
+
+

+

Features can be in one of three states:

+
+
+
This feature's on-disk format changes are in effect on the pool. Support + for this feature is required to import the pool in read-write mode. If + this feature is not read-only compatible, support is also required to + import the pool in read-only mode (see + Read-only + compatibility).
+
+
An administrator has marked this feature as enabled on the pool, but the + feature's on-disk format changes have not been made yet. The pool can + still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support + returning to the enabled state after becoming + active. See feature-specific documentation for + details.
+
+
This feature's on-disk format changes have not been made and will not be + made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they + have been enabled.
+
+

The state of supported features is exposed through pool properties + of the form feature@short-name.
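For example, the state of one feature can be queried like any other pool property (the pool name is illustrative):
# zpool get feature@async_destroy pool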

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as “read-only compatible”. If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly + property during import (see zpool-import(8) for details on + importing pools).
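For example (the pool name is illustrative):
# zpool import -o readonly=on pool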

+
+
+

+

For each unsupported feature enabled on an imported pool, a pool property named unsupported@feature-name will indicate why the import was allowed despite the unsupported feature. Possible values for this property are:

+
+
+
The feature is in the enabled state and therefore the + pool's on-disk format is still compatible with software that does not + support this feature.
+
+
The feature is read-only compatible and the pool has been imported in + read-only mode.
+
+
+
+

+

Some features depend on other features being enabled in order to + function. Enabling a feature will automatically enable any features it + depends on.

+
+
+

+

It is sometimes necessary for a pool to maintain compatibility with a specific on-disk format, by enabling and disabling particular features. The compatibility feature facilitates this by allowing feature sets to be read from text files. When set to off (the default), compatibility feature sets are disabled (i.e. all features are enabled); when set to legacy, no features are enabled. When set to a comma-separated list of filenames (each filename may either be an absolute path, or relative to /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d), the lists of requested features are read from those files, separated by whitespace and/or commas. Only features present in all files are enabled.

+

Simple sanity checks are applied to the files: they must be + between 1 B and 16 KiB in size, and must end with a newline character.

+

The requested features are applied when a pool is created using zpool create -o compatibility= and control which features are enabled when using zpool upgrade. zpool status will not show a warning about disabled features which are not part of the requested feature set.
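A compatibility feature set may also be applied to an existing pool with zpool set, for example (the pool name is illustrative; grub2 refers to the compatibility file shown in the example below):
# zpool set compatibility=grub2 pool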

+

The special value legacy prevents any features + from being enabled, either via zpool + upgrade or zpool + set + feature@feature-name=enabled. + This setting also prevents pools from being upgraded to newer on-disk + versions. This is a safety measure to prevent new features from being + accidentally enabled, breaking compatibility.

+

By convention, compatibility files in + /usr/share/zfs/compatibility.d are provided by the + distribution, and include feature sets supported by important versions of + popular distributions, and feature sets commonly supported at the start of + each year. Compatibility files in + /etc/zfs/compatibility.d, if present, will take + precedence over files with the same name in + /usr/share/zfs/compatibility.d.

+

If an unrecognized feature is found in these files, an error + message will be shown. If the unrecognized feature is in a file in + /etc/zfs/compatibility.d, this is treated as an + error and processing will stop. If the unrecognized feature is under + /usr/share/zfs/compatibility.d, this is treated as a + warning and processing will continue. This difference is to allow + distributions to include features which might not be recognized by the + currently-installed binaries.

+

Compatibility files may include comments: any text from + ‘#’ to the end of the line is ignored.

+

Example:

+
+
example# cat /usr/share/zfs/compatibility.d/grub2
+# Features which are supported by GRUB2
+allocation_classes
+async_destroy
+block_cloning
+bookmarks
+device_rebuild
+embedded_data
+empty_bpobj
+enabled_txg
+extensible_dataset
+filesystem_limits
+hole_birth
+large_blocks
+livelist
+log_spacemap
+lz4_compress
+project_quota
+resilver_defer
+spacemap_histogram
+spacemap_v2
+userobj_accounting
+zilsaxattr
+zpool_checkpoint
+
+example# zpool create -o compatibility=grub2 bootpool vdev
+
+

See zpool-create(8) and + zpool-upgrade(8) for more information on how these + commands are affected by feature sets.

+
+
+
+

+

The following features are supported on this system:

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables support for separate allocation + classes.

+

This feature becomes active when a dedicated + allocation class vdev (dedup or special) is created with the + zpool create + or zpool + add commands. With + device removal, it can be returned to the enabled + state if all the dedicated allocation class vdevs are removed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Destroying a file system requires traversing all of its data + in order to return its used space to the pool. Without + async_destroy, the file system is not fully removed + until all space has been reclaimed. If the destroy operation is + interrupted by a reboot or power outage, the next attempt to open the + pool will need to complete the destroy operation synchronously.

+

When async_destroy is enabled, the file + system's data will be reclaimed by a background process, allowing the + destroy operation to complete without traversing the entire file system. + The background process is able to resume interrupted destroys after the + pool has been opened, eliminating the need to finish interrupted + destroys as part of the open operation. The amount of space remaining to + be reclaimed by the background process is available through the + freeing property.

+

This feature is only active while + freeing is non-zero.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the BLAKE3 hash algorithm for + checksum and dedup. BLAKE3 is a secure hash algorithm focused on high + performance.

+

When the blake3 feature is set to + enabled, the administrator can turn on the + blake3 checksum on any dataset using + zfs set + checksum=blake3 + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + blake3, and will return to being + enabled once all filesystems that have ever had their + checksum set to blake3 are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

When this feature is enabled ZFS will use block cloning for operations like copy_file_range(2). Block cloning allows multiple references to a single block to be created. It is much faster than copying the data (as the actual data is neither read nor written) and takes no additional space. Blocks can be cloned across datasets under some conditions (like disabled encryption and equal recordsize).

+

This feature becomes active when first block + is cloned. When the last cloned block is freed, it goes back to the + enabled state.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables use of the zfs + bookmark command.

+

This feature is active while any bookmarks exist in the pool. All bookmarks in the pool can be listed by running zfs list -t bookmark -r poolname.
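For example, a bookmark might be created from a snapshot and later used as the source of an incremental send (the dataset and snapshot names are illustrative):
# zfs bookmark pool/fs@snap1 pool/fs#snap1
# zfs send -i pool/fs#snap1 pool/fs@snap2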

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of larger + bookmarks which are needed for other features in ZFS.

+

This feature becomes active when a v2 + bookmark is created and will be returned to the + enabled state when all v2 bookmarks are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset, bookmark_v2
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables additional bookmark accounting fields, enabling the written#bookmark property (space written since a bookmark) and estimates of send stream sizes for incrementals from bookmarks.

+

This feature becomes active when a bookmark + is created and will be returned to the enabled state + when all bookmarks with these fields are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the ability for the + zpool attach and + zpool replace commands + to perform sequential reconstruction (instead of healing reconstruction) + when resilvering.

+

Sequential reconstruction resilvers a device in LBA order + without immediately verifying the checksums. Once complete, a scrub is + started, which then verifies the checksums. This approach allows full + redundancy to be restored to the pool in the minimum amount of time. + This two-phase approach will take longer than a healing resilver when + the time to verify the checksums is included. However, unless there is + additional pool damage, no checksum errors should be reported by the + scrub. This feature is incompatible with raidz configurations. This + feature becomes active while a sequential resilver is + in progress, and returns to enabled when the resilver + completes.
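For example, a sequential resilver might be requested when attaching a new device (the pool and device names are illustrative; -s here refers to the sequential-resilver option accepted by zpool-attach(8) and zpool-replace(8)):
# zpool attach -s pool sda sdb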

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the zpool + remove command to remove top-level vdevs, + evacuating them to reduce the total size of the pool.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.
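For example, a top-level mirror vdev might be evacuated and removed as follows (the pool and vdev names are illustrative):
# zpool remove pool mirror-1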

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables use of the draid vdev + type. dRAID is a variant of RAID-Z which provides integrated distributed + hot spares that allow faster resilvering while retaining the benefits of + RAID-Z. Data, parity, and spare space are organized in redundancy groups + and distributed evenly over all of the devices.

+

This feature becomes active when creating a + pool which uses the draid vdev type, or when adding a + new draid vdev to an existing pool.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Edon-R hash algorithm for checksum, including for nopwrite (if compression is also enabled, an overwrite of a block whose checksum matches the data being written will be ignored). In an abundance of caution, Edon-R requires verification when used with dedup: zfs set dedup=edonr,verify (see zfs-set(8)).

+

Edon-R is a very high-performance hash algorithm that was part + of the NIST SHA-3 competition. It provides extremely high hash + performance (over 350% faster than SHA-256), but was not selected + because of its unsuitability as a general purpose secure hash algorithm. + This implementation utilizes the new salted checksumming functionality + in ZFS, which means that the checksum is pre-seeded with a secret + 256-bit random key (stored on the pool) before being fed the data block + to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the edonr feature is set to + enabled, the administrator can turn on the + edonr checksum on any dataset using + zfs set + checksum=edonr + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + edonr, and will return to being + enabled once all filesystems that have ever had their + checksum set to edonr are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 + bytes or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of + highly-compressible blocks are stored in the block + “pointer” itself (a misnomer in this case, as it contains + the compressed data, rather than a pointer to its location on disk). + Thus the space of the block (one sector, typically 512 B or 4 KiB) is + saved, and no additional I/O is needed to read and write the data block. + This feature becomes active + as soon as it is enabled and will never return to + being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also + reduces the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobjs) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobjs are empty. This + feature allows us to create each bpobj on-demand, thus eliminating the + empty bpobjs.

+

This feature is active while there are any + filesystems, volumes, or snapshots which were created after enabling + this feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Once this feature is enabled, ZFS records the transaction + group number in which new features are enabled. This has no user-visible + impact, but other features may depend on this feature.

+

This feature becomes active as soon as it is + enabled and will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark_v2, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of natively + encrypted datasets.

+

This feature becomes active when an + encrypted dataset is created and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first + dependent feature uses it, and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables filesystem and snapshot limits. These + limits can be used to control how many filesystems and/or snapshots can + be created at the point in the tree on which the limits are set.

+

This feature is active once either of the + limit properties has been set on a dataset and will never return to + being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the upgraded version of errlog, which required an on-disk error log format change. Now the error log of each head dataset is stored separately in the zap object and keyed by the head id. With this feature enabled, every dataset affected by an error block is listed in the output of zpool status. In case of encrypted filesystems with unloaded keys we are unable to check their snapshots or clones for errors, and these will not be reported; an "access denied" error will be reported instead.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
enabled_txg
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature has/had bugs, the result of which is that, if you do a zfs send -i (or -R, since it uses -i) from an affected dataset, the receiving party will not see any checksum or other errors, but the resulting destination snapshot will not match the source. Its use by zfs send -i has been disabled by default (see send_holes_without_birth_time in zfs(4)).

+

This feature improves performance of incremental sends + (zfs send + -i) and receives for objects with many holes. + The most common case of hole-filled objects is zvols.

+

An incremental send stream from snapshot A + to snapshot B contains + information about every block that changed between A + and B. Blocks which did not + change between those snapshots can be identified and omitted from the + stream using a piece of metadata called the “block birth + time”, but birth times are not recorded for holes (blocks filled + only with zeroes). Since holes created after A + cannot be distinguished from holes created + before A, information about every hole in the + entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. + However, when incrementally replicating filesystems or zvols with many + holes (for example a zvol formatted with another filesystem) a lot of + time will be spent sending and receiving unnecessary information about + holes that already exist on the receiving side.

+

Once the hole_birth feature has been enabled + the block birth times of all new holes will be recorded. Incremental + sends between snapshots created after this feature is enabled will use + this new metadata to avoid sending information about holes that already + exist on the receiving side.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the record size on a dataset to be set + larger than 128 KiB.

+

This feature becomes active once a dataset + contains a file with a block size larger than 128 KiB, and will return + to being enabled once all filesystems that have ever + had their recordsize larger than 128 KiB are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the size of dnodes in a dataset to be set larger than 512 B. This feature becomes active once a dataset contains an object with a dnode larger than 512 B, which occurs as a result of setting the dnodesize dataset property to a value other than legacy. The feature will return to being enabled once all filesystems that have ever contained a dnode larger than 512 B are destroyed. Large dnodes allow more data to be stored in the bonus buffer, thus potentially improving performance by avoiding the use of spill blocks.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows clones to be deleted faster than the traditional method when a large number of random/sparse writes have been made to the clone. All blocks allocated and freed after a clone is created are tracked by the clone's livelist, which is referenced during the deletion of the clone. The feature is activated when a clone is created and remains active until all clones have been destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
com.delphix:spacemap_v2
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature improves performance for heavily-fragmented + pools, especially when workloads are heavy in random-writes. It does so + by logging all the metaslab changes on a single spacemap every TXG + instead of scattering multiple writes to all the metaslab spacemaps.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

lz4 is a high-performance real-time + compression algorithm that features significantly faster compression and + decompression as well as a higher compression ratio than the older + lzjb compression. Typically, lz4 + compression is approximately 50% faster on compressible data and 200% + faster on incompressible data than lzjb. It is also + approximately 80% faster on decompression, while giving approximately a + 10% better compression ratio.

+

When the lz4_compress feature is set to + enabled, the administrator can turn on + lz4 compression on any dataset on the pool using the + zfs-set(8) command. All newly written metadata will be + compressed with the lz4 algorithm.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored + or raidz configuration.

+

When the multi_vdev_crash_dump feature is + set to enabled, the administrator can use + dumpadm(8) to configure a dump device on a pool + comprised of multiple vdevs.

+

Under FreeBSD and Linux this feature + is unused, but registered for compatibility. New pools created on these + systems will have the feature enabled but will never + transition to active, as this functionality is not + required for crash dump support. Existing pools where this feature is + active can be imported.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
device_removal
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature is an enhancement of + device_removal, which will over time reduce the memory + used to track removed devices. When indirect blocks are freed or + remapped, we note that their part of the indirect mapping is + “obsolete” – no longer needed.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account for space and object usage against the project identifier (ID).

+

The project ID is an object-based attribute. When + upgrading an existing filesystem, objects without a project ID will be + assigned a zero project ID. When this feature is enabled, newly created + objects inherit their parent directories' project ID if the parent's + inherit flag is set (via chattr + + or zfs + project + -s|-C). Otherwise, the + new object's project ID will be zero. An object's project ID can be + changed at any time by the owner (or privileged user) via + chattr -p + prjid or zfs + project -p + prjid.

+

This feature will become active as soon as it is enabled and will never return to being enabled. Each filesystem will be upgraded automatically when remounted, or when a new file is created under that filesystem. The upgrade can also be triggered on filesystems via zfs set version=current fs. The upgrade process runs in the background and may take a while to complete for filesystems containing a large number of files.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmarks, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of redacted + zfs sends, which create + redaction bookmarks storing the list of blocks redacted by the send that + created them. For more information about redacted sends, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the receiving of redacted + zfs send streams, which + create redacted datasets when received. These datasets are missing some + of their blocks, and so cannot be safely mounted, and their contents + cannot be safely read. For more information about redacted receives, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to postpone new resilvers if an + existing one is already in progress. Without this feature, any new + resilvers will cause the currently running one to be immediately + restarted from the beginning.

+

This feature becomes active once a resilver + has been deferred, and returns to being enabled when + the deferred resilver begins.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the SHA-512/256 truncated hash + algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit + arithmetic of SHA-512 provides an approximate 50% performance boost over + SHA-256 on 64-bit hardware and is thus a good minimum-change replacement + candidate for systems where hash performance is important, but these + systems cannot for whatever reason utilize the faster + skein and + edonr algorithms.

+

When the sha512 feature is set to + enabled, the administrator can turn on the + sha512 checksum on any dataset using + zfs set + checksum=sha512 + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + sha512, and will return to being + enabled once all filesystems that have ever had their + checksum set to sha512 are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm + that was a finalist in the NIST SHA-3 competition. It provides a very + high security margin and high performance on 64-bit hardware (80% faster + than SHA-256). This implementation also utilizes the new salted + checksumming functionality in ZFS, which means that the checksum is + pre-seeded with a secret 256-bit random key (stored on the pool) before + being fed the data block to be checksummed. Thus the produced checksums + are unique to a given pool, preventing hash collision attacks on systems + with dedup.

+

When the skein feature is set to + enabled, the administrator can turn on the + skein checksum on any dataset using + zfs set + checksum=skein + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + skein, and will return to being + enabled once all filesystems that have ever had their + checksum set to skein are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, it will be activated when a new space map object is created, or an existing space map is upgraded to the new format, and it never returns to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the use of the new space map encoding + which consists of two words (instead of one) whenever it is + advantageous. The new encoding allows space maps to represent large + regions of space more efficiently on-disk while also increasing their + maximum addressable offset.

+

This feature becomes active once it is + enabled, and never returns back to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account the object usage + information by user and group.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled. + Each filesystem will be upgraded automatically when + remounted, or when a new file is created under that filesystem. The + upgrade can also be triggered on filesystems via + zfs set + version=current + fs. The upgrade process runs in + the background and may take a while to complete for filesystems + containing large amounts of files.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature creates a ZAP object for the root vdev.

+

This feature becomes active after the next + zpool import or + zpool reguid. Properties can be retrieved or set + on the root vdev using zpool + get and zpool + set with + as the vdev + name which is an alias for + .

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables + xattr=sa extended attribute logging + in the ZIL. If enabled, extended attribute changes (both + = + and + xattr=sa) are guaranteed to be + durable if either the dataset had + = + set at the time the changes were made, or sync(2) is + called on the dataset after the changes were made.

+

This feature becomes active when a ZIL is + created for at least one dataset and will be returned to the + enabled state when it is destroyed for all datasets + that use this feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the zpool + checkpoint command that can checkpoint the state + of the pool at the time it was issued and later rewind back to it or + discard it.

+

This feature becomes active when the + zpool checkpoint command + is used to checkpoint the pool. The feature will only return back to + being enabled when the pool is rewound or the + checkpoint has been discarded.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

zstd is a high-performance compression algorithm that features a combination of high compression ratios and high speed. Compared to gzip, zstd offers slightly better compression at much higher speeds. Compared to lz4, zstd offers much better compression while being only modestly slower. Typically, zstd compression speed ranges from 250 to 500 MB/s per thread and decompression speed is over 1 GB/s per thread.

+

When the zstd feature is set to + enabled, the administrator can turn on + zstd compression of any dataset using + zfs set + compress=zstd + dset (see zfs-set(8)). This + feature becomes active once a + compress property has been set to + zstd, and will return to being + enabled once all filesystems that have ever had their + compress property set to zstd are + destroyed.

+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
June 23, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/zpoolconcepts.7.html b/man/v2.2/7/zpoolconcepts.7.html new file mode 100644 index 000000000..7e482004c --- /dev/null +++ b/man/v2.2/7/zpoolconcepts.7.html @@ -0,0 +1,605 @@ + + + + + + + zpoolconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolconcepts.7

+
+ + + + + +
ZPOOLCONCEPTS(7)Miscellaneous Information ManualZPOOLCONCEPTS(7)
+
+
+

+

zpoolconcepts — + overview of ZFS storage pools

+
+
+

+
+

+

A "virtual device" describes a single device or a + collection of devices, organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system on which it + resides. A file must be specified by a full path.
+
+
A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with N disks of size X can hold X bytes and can withstand N-1 devices failing, without losing data.
+
, + raidz1, raidz2, + raidz3
+
A distributed-parity layout, similar to RAID-5/6, with improved distribution of parity, and which does not suffer from the RAID-5/6 "write hole" (in which data and parity become inconsistent after a power loss). Data and parity are striped across all disks within a raidz group, though not necessarily in a consistent stripe width.

A raidz group can have single, double, or triple parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N disks of size X with P parity disks can hold approximately (N-P)×X bytes and can withstand P devices failing without losing data. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance.

+
+
, + draid1, draid2, + draid3
+
A variant of raidz that provides integrated distributed hot spares, + allowing for faster resilvering, while retaining the benefits of raidz. A + dRAID vdev is constructed from multiple internal raidz groups, each with + D data devices and + P parity devices. These groups + are distributed over all of the children in order to fully utilize the + available disk performance. +

Unlike raidz, dRAID uses a fixed stripe width (padding as necessary with zeros) to allow fully sequential resilvering. This fixed stripe width significantly affects both usable capacity and IOPS. For example, with the default of 8 data disks and 4 KiB disk sectors the minimum allocation size is 32 KiB. If using compression, this relatively large allocation size can reduce the effective compression ratio. When using ZFS volumes (zvols) and dRAID, the default of the volblocksize property is increased to account for the allocation size. If a dRAID pool will hold a significant amount of small blocks, it is recommended to also add a mirrored special vdev to store those blocks.

+

In regard to I/O, performance is similar to raidz since, for any read, all D data disks must be accessed. Delivered random IOPS can be reasonably approximated as floor((N-S)/(D+P)) × single_drive_IOPS.

+

Like raidz, a dRAID can have single-, double-, or + triple-parity. The draid1, draid2, + and draid3 types can be used to specify the parity + level. The draid vdev type is an alias for + draid1.

+

A dRAID with N disks of size X, D data disks per redundancy group, P parity level, and S distributed hot spares can hold approximately (N-S)×(D/(D+P))×X bytes and can withstand P devices failing without losing data.

+
+
[parity][:data][:children][:spares]
+
A non-default dRAID configuration can be specified by appending one or more of the following optional arguments to the draid keyword (an example follows this list):
+
parity
+
The parity level (1-3).
+
data
+
The number of data devices per redundancy group. In general, a smaller + value of D will increase IOPS, + improve the compression ratio, and speed up resilvering at the + expense of total usable capacity. Defaults to 8, + unless + + is less than 8.
+
children
+
The expected number of children. Useful as a cross-check when listing + a large number of devices. An error is returned when the provided + number of children differs.
+
spares
+
The number of distributed hot spares. Defaults to zero.
+
+
+
+
A pseudo-vdev which keeps track of available hot spares for a pool. For + more information, see the Hot Spares + section.
+
+
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+
A device solely dedicated for deduplication tables. The redundancy of this + device should match the redundancy of the other normal devices in the + pool. If more than one dedup device is specified, then allocations are + load-balanced between those devices.
+
+
A device dedicated solely for allocating various kinds of internal + metadata, and optionally small file blocks. The redundancy of this device + should match the redundancy of the other normal devices in the pool. If + more than one special device is specified, then allocations are + load-balanced between those devices. +

For more information on special allocations, see the + Special Allocation + Class section.

+
+
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested arbitrarily. A mirror, raidz or + draid virtual device can only be created with files or disks. Mirrors of + mirrors or other such combinations are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. Keywords like mirror + and raidz are used to distinguish + where a group ends and another begins. For example, the following creates a + pool with two root vdevs, each a mirror of two disks:

+
# zpool + create mypool + mirror sda sdb + mirror sdc sdd
+
+
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy, when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning.

+

The health of the top-level vdev, such as a mirror or raidz + device, is potentially impacted by the state of its associated vdevs or + component devices. A top-level vdev or component device is in one of the + following states:

+
+
+
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors exceeds acceptable levels and the device + is degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
  • +
  • The number of I/O errors exceeds acceptable levels. The device could + not be marked as faulted because there are insufficient replicas to + continue functioning.
  • +
+
+
+
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected + values.
  • +
  • The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
  • +
+
+
+
The device was explicitly taken offline by the + zpool offline + command.
+
+
The device is online and functioning.
+
+
The device was physically removed while the system was running. Device + removal detection is hardware-dependent and may not be supported on all + platforms.
+
+
The device could not be opened. If a pool is imported when a device was + unavailable, then the device will be identified by a unique identifier + instead of its path since the path was never correct in the first + place.
+
+

Checksum errors represent events where a disk returned data that + was expected to be correct, but was not. In other words, these are instances + of silent data corruption. The checksum errors are reported in + zpool status and + zpool events. When a block + is stored redundantly, a damaged block may be reconstructed (e.g. from raidz + parity or a mirrored copy). In this case, ZFS reports the checksum error + against the disks that contained damaged data. If a block is unable to be + reconstructed (e.g. due to 3 disks being damaged in a raidz2 group), it is + not possible to determine which disks were silently corrupted. In this case, + checksum errors are reported for all disks on which the block is stored.

+

If a device is removed and later re-attached to the system, ZFS + attempts to bring the device online automatically. Device attachment + detection is hardware-dependent and might not be supported on all + platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot spares". These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. To create a pool with hot spares, specify a spare vdev with any number of devices. For example,

+
# zpool + create pool + mirror sda sdb spare + sdc sdd
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again, if another device + fails.
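For example, a spare might be added to, and later removed from, an existing pool (the pool and device names are illustrative):
# zpool add pool spare sde
# zpool remove pool sde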

+

If a pool has a shared spare that is currently being used, the + pool cannot be exported, since other pools may use this shared spare, which + may lead to potential data corruption.

+

Shared spares add some risk. If the pools are imported on + different hosts, and both pools suffer a device failure at the same time, + both could attempt to use the spare at the same time. This may not be + detected, resulting in data corruption.

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

The draid vdev type provides distributed hot + spares. These hot spares are named after the dRAID vdev they're a part of + (draid1-2-3 + specifies spare 3 + of vdev 2, + which is a single parity dRAID) and may only be used + by that dRAID vdev. Otherwise, they behave the same as normal hot + spares.

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
# zpool + create pool sda sdb + log sdc
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.
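For example, a mirrored log might be specified at pool creation (the device names are illustrative):
# zpool create pool sda sdb log mirror sdc sdd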

+

Log devices can be added, replaced, attached, detached, and + removed. In addition, log devices are imported and exported as part of the + pool that contains them. Mirrored devices can be removed by specifying the + top-level mirror vdev.

+
+
+

+

Devices can be added to a storage pool as "cache + devices". These devices provide an additional layer of caching between + main memory and disk. For read-heavy workloads, where the working set size + is much larger than what can be cached in main memory, using cache devices + allows much more of this working set to be served from low latency media. + Using cache devices provides the greatest performance improvement for random + read-workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
# zpool + create pool sda sdb + cache sdc sdd
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.

+

The content of the cache devices is + persistent across reboots and restored asynchronously when importing the + pool in L2ARC (persistent L2ARC). This can be disabled by setting + =0. + For cache devices smaller than + , ZFS does + not write the metadata structures required for rebuilding the L2ARC, to + conserve space. This can be changed with + . + The cache device header + () is + updated even if no metadata structures are written. Setting + =0 + will result in scanning the full-length ARC lists for cacheable content to + be written in L2ARC (persistent ARC). If a cache device is added with + zpool add, its label and + header will be overwritten and its contents will not be restored in L2ARC, + even if the device was previously part of the pool. If a cache device is + onlined with zpool online, + its contents will be restored in L2ARC. This is useful in case of memory + pressure, where the contents of the cache device are not fully restored in + L2ARC. The user can off- and online the cache device when there is less + memory pressure, to fully restore its contents to L2ARC.

+
+
+

+

Before starting critical procedures that include destructive + actions (like zfs destroy), + an administrator can checkpoint the pool's state and, in the case of a + mistake or failure, rewind the entire pool back to the checkpoint. + Otherwise, the checkpoint can be discarded when the procedure has completed + successfully.

+

A pool checkpoint can be thought of as a pool-wide snapshot and + should be used with care as it contains every part of the pool's state, from + properties to vdev configuration. Thus, certain operations are not allowed + while a pool has a checkpoint. Specifically, vdev removal/attach/detach, + mirror splitting, and changing the pool's GUID. Adding a new vdev is + supported, but in the case of a rewind it will have to be added again. + Finally, users of this feature should keep in mind that scrubs in a pool + that has a checkpoint do not repair checkpointed data.

+

To create a checkpoint for a pool:

+
# zpool + checkpoint pool
+

To later rewind to its checkpointed state, you need to first + export it and then rewind it during import:

+
# zpool + export pool
+
# zpool + import --rewind-to-checkpoint + pool
+

To discard the checkpoint from a pool:

+
# zpool + checkpoint -d + pool
+

Dataset reservations (controlled by the + + and + + properties) may be unenforceable while a checkpoint exists, because the + checkpoint is allowed to consume the dataset's reservation. Finally, data + that is part of the checkpoint but has been freed in the current state of + the pool won't be scanned during a scrub.

+
+
+

+

Allocations in the special class are dedicated to specific block + types. By default, this includes all metadata, the indirect blocks of user + data, and any deduplication tables. The class can also be provisioned to + accept small file blocks.

+

A pool must always have at least one normal + (non-dedup/-special) vdev before other + devices can be assigned to the special class. If the + special class becomes full, then allocations intended for + it will spill back into the normal class.

+

Deduplication tables can be excluded + from the special class by unsetting the + + ZFS module parameter.

+

Inclusion of small file blocks in the special class is opt-in. Each dataset can control the size of small file blocks allowed in the special class by setting the special_small_blocks property to a nonzero value. See zfsprops(7) for more info on this property.
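For example, a mirrored special vdev might be added to an existing pool and small file blocks of up to 32 KiB opted in for one dataset (the pool, device, and dataset names are illustrative):
# zpool add pool special mirror sdc sdd
# zfs set special_small_blocks=32K pool/fs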

+
+
+
+ + + + + +
April 7, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/zpoolprops.7.html b/man/v2.2/7/zpoolprops.7.html new file mode 100644 index 000000000..c412db0ef --- /dev/null +++ b/man/v2.2/7/zpoolprops.7.html @@ -0,0 +1,511 @@ + + + + + + + zpoolprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolprops.7

+
+ + + + + +
ZPOOLPROPS(7)Miscellaneous Information ManualZPOOLPROPS(7)
+
+
+

+

zpoolprops — + properties of ZFS storage pools

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

User properties have no effect on ZFS behavior. Use them to + annotate pools in a way that is meaningful in your environment. For more + information about user properties, see the + User Properties section.

+

The following are read-only properties:

+
+
+
Amount of storage used within the pool. See + fragmentation and free for more + information.
+
+
The ratio of the total amount of storage that would be required to store + all the cloned blocks without cloning to the actual storage used. The + bcloneratio property is calculated as: +

(bclonesaved + bcloneused) / bcloneused

+
+
+
The amount of additional storage that would be required if block cloning + was not used.
+
+
The amount of storage used by cloned blocks.
+
+
Percentage of pool space used. This property can also be referred to by its shortened column name, cap.
+
+
Amount of uninitialized space within the pool or device that can be used to increase the total capacity of the pool. On whole-disk vdevs, this is the space beyond the end of the GPT – typically occurring when a LUN is dynamically expanded or a disk replaced with a larger one. On partition vdevs, this is the space appended to the partition after it was added to the pool – most likely by resizing it in-place. The space can be claimed for the pool by bringing it online with autoexpand=on or using zpool online -e.
+
+
The amount of fragmentation in the pool. As the amount of space + allocated increases, it becomes more difficult to locate + free space. This may result in lower write performance + compared to pools with more unfragmented free space.
+
+
The amount of free space available in the pool. By contrast, the + zfs(8) available property describes + how much new data can be written to ZFS filesystems/volumes. The zpool + free property is not generally useful for this purpose, + and can be substantially more than the zfs available + space. This discrepancy is due to several factors, including raidz parity; + zfs reservation, quota, refreservation, and refquota properties; and space + set aside by + + (see zfs(4) for more information).
+
+
After a file system or snapshot is destroyed, the space it was using is + returned to the pool asynchronously. freeing is the + amount of space remaining to be reclaimed. Over time + freeing will decrease while free + increases.
+
+
A unique identifier for the pool.
+
+
The current health of the pool. Health can be one of + , + , + , + , + .
+
+
Space not released while freeing due to corruption, now + permanently leaked into the pool.
+
+
A unique identifier for the pool. Unlike the guid + property, this identifier is generated every time we load the pool (i.e. + does not persist across imports/exports) and never changes while the pool + is loaded (even if a + + operation takes place).
+
+
Total size of the storage pool.
+
guid
+
Information about unsupported features that are enabled on the pool. See + zpool-features(7) for details.
+
+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of the + data being written. In addition, ZFS reserves some space for internal + accounting that the zfs(8) command takes into account, but + the zpoolprops command does not. For non-full pools + of a reasonable size, these effects should be invisible. For small pools, or + pools that are close to being completely full, these discrepancies may + become more noticeable.

+
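For example, the read-only space and health properties of a hypothetical pool named tank can be inspected with:
# zpool get size,allocated,free,capacity,fragmentation,health tank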

The following property can be set at creation time and import + time:

+
+
+
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+
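For example, an unknown pool could be imported under a scratch root so that none of its mount points overlays the running system (pool name and path are illustrative):
# zpool import -o altroot=/mnt/recovery tank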

The following property can be set only at import time:

+
+
=on|off
+
If set to on, the pool will be imported in read-only + mode. This property can also be referred to by its shortened column name, + .
+
+
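For example, a suspect pool could be imported without permitting any writes (pool name is illustrative):
# zpool import -o readonly=on tank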

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
+
=ashift
+
Pool sector size exponent, to the power of + (internally + referred to as ashift). Values from 9 to 16, inclusive, + are valid; also, the value 0 (the default) means to auto-detect using the + kernel's block layer and a ZFS internal exception list. I/O operations + will be aligned to the specified size boundaries. Additionally, the + minimum (disk) write size will be set to the specified size, so this + represents a space/performance trade-off. For optimal performance, the + pool sector size should be greater than or equal to the sector size of the + underlying disks. The typical case for setting this property is when + performance is important and the underlying disks use 4KiB sectors but + report 512B sectors to the OS (for compatibility reasons); in that case, + set + ashift= + (which is + + = + ). + When set, this property is used as the default hint value in subsequent + vdev operations (add, attach and replace). Changing this value will not + modify any existing vdev, not even on disk replacement; however it can be + used, for instance, to replace a dying 512B sectors disk with a newer 4KiB + sectors device: this will probably result in bad performance but at the + same time could prevent loss of data.
+
=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set + to on, the pool will be resized according to the size of + the expanded device. If the device is part of a mirror or raidz then all + devices within that mirror/raidz group must be expanded before the new + space is made available to the pool. The default behavior is + off. This property can also be referred to by its + shortened column name, + .
+
=on|off
+
Controls automatic device replacement. If set to off, + device replacement must be initiated by the administrator by using the + zpool replace command. If + set to on, any new device, found in the same physical + location as a device that previously belonged to the pool, is + automatically formatted and replaced. The default behavior is + off. This property can also be referred to by its + shortened column name, + . + Autoreplace can also be used with virtual disks (like device mapper) + provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf. + See the vdev_id(8) manual page for more details. + Autoreplace and autoonline require the ZFS Event Daemon be configured and + running. See the zed(8) manual page for more + details.
+
=on|off
+
When set to on space which has been recently freed, and + is no longer allocated by the pool, will be periodically trimmed. This + allows block device vdevs which support BLKDISCARD, such as SSDs, or file + vdevs on which the underlying file system supports hole-punching, to + reclaim unused blocks. The default value for this property is + off. +

Automatic TRIM does not immediately + reclaim blocks after a free. Instead, it will optimistically delay + allowing smaller ranges to be aggregated into a few larger ones. These + can then be issued more efficiently to the storage. TRIM on L2ARC + devices is enabled by setting + .

+

Be aware that automatic trimming of recently freed data blocks + can put significant stress on the underlying storage devices. This will + vary depending of how well the specific device handles these commands. + For lower-end devices it is often possible to achieve most of the + benefits of automatic trimming by running an on-demand (manual) TRIM + periodically using the zpool + trim command.

+
+
=|pool[/dataset]
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the value none + creates a temporary pool that is never cached, and the "" (empty + string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.

+
+
=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
+
=off|legacy|file[,file]…
+
Specifies that the pool maintain compatibility with specific feature sets. When set to off (or unset), compatibility is disabled (all features may be enabled); when set to legacy, no features may be enabled. When set to a comma-separated list of filenames (each filename may either be an absolute path, or relative to /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d) the lists of requested features are read from those files, separated by whitespace and/or commas. Only features present in all files may be enabled.

See zpool-features(7), + zpool-create(8) and zpool-upgrade(8) + for more information on the operation of compatibility feature sets.

+
+
=number
+
This property is deprecated and no longer has any effect.
+
=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
+
+
Blocks all I/O access until the device connectivity is recovered and + the errors are cleared with zpool + clear. This is the default behavior.
+
+
Returns EIO to any new write I/O requests but + allows reads to any of the remaining healthy devices. Any write + requests that have yet to be committed to disk would be blocked.
+
+
Prints out a message to the console and generates a system crash + dump.
+
+
+
feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(7) for details on feature states.
+
=on|off
+
Controls whether information about snapshots associated with this pool is + output when zfs list is + run without the -t option. The default value is + off. This property can also be referred to by its + shortened name, + .
+
=on|off
+
Controls whether a pool activity check should be performed during + zpool import. When a pool + is determined to be active it cannot be imported, even with the + -f option. This property is intended to be used in + failover configurations where multiple hosts have access to a pool on + shared storage. +

Multihost provides protection on import only. It does not + protect against an individual device being used in multiple pools, + regardless of the type of vdev. See the discussion under + zpool create.

+

When this property is on, periodic writes to storage occur to show the pool is in use. See zfs_multihost_interval in the zfs(4) manual page. In order to enable this property, each host must set a unique hostid. See zgenhostid(8) and spl(4) for additional details. The default value is off.

+
+
=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
+

+
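As a combined sketch of the settable properties above (pool name, device names, and values are hypothetical), a pool might be created with an explicit sector size and tuned afterwards:
# zpool create -o ashift=12 -o autotrim=on tank mirror sda sdb
# zpool set comment="lab scratch pool" tank
# zpool set failmode=continue tank
# zpool get ashift,autotrim,comment,failmode tank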

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate pools.

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings and are never + validated. All of the commands that operate on properties + (zpool list, + zpool get, + zpool set, and so forth) can + be used to manipulate both native properties and user properties. Use + zpool set + name= to clear a user property. Property values are + limited to 8192 bytes.

+
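For example (the property name and pool are hypothetical, following the reversed-DNS convention suggested above):
# zpool set com.example:backup-policy=nightly tank
# zpool get com.example:backup-policy tank
# zpool set com.example:backup-policy= tank
The last command clears the user property again, as described above.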
+
+
+ + + + + +
April 18, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/fsck.zfs.8.html b/man/v2.2/8/fsck.zfs.8.html new file mode 100644 index 000000000..134979784 --- /dev/null +++ b/man/v2.2/8/fsck.zfs.8.html @@ -0,0 +1,292 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
FSCK.ZFS(8)System Manager's ManualFSCK.ZFS(8)
+
+
+

+

fsck.zfsdummy + ZFS filesystem checker

+
+
+

+ + + + + +
fsck.zfs[options] + dataset
+
+
+

+

fsck.zfs is a thin shell wrapper that at + most checks the status of a dataset's container pool. It is installed by + OpenZFS because some Linux distributions expect a fsck helper for all + filesystems.

+

If more than one dataset is specified, each is checked in turn and the results are binary OR-ed.

+
+
+

+

Ignored.

+
+
+

+

ZFS datasets are checked by running zpool + scrub on the containing pool. An individual ZFS + dataset is never checked independently of its pool, which is unlike a + regular filesystem.

+

However, the fsck(8) interface still + allows it to communicate some errors: if the dataset + is in a degraded pool, then fsck.zfs will return + exit code to indicate + an uncorrected filesystem error.

+

Similarly, if the dataset is in a + faulted pool and has a legacy /etc/fstab record, + then fsck.zfs will return exit code + to indicate a fatal + operational error.

+
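Because the real checking happens at the pool level, an equivalent manual check of a dataset's container pool (pool name hypothetical) is:
# zpool scrub tank
# zpool status -v tank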
+
+

+

fstab(5), fsck(8), + zpool-scrub(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/index.html b/man/v2.2/8/index.html new file mode 100644 index 000000000..ca5c90980 --- /dev/null +++ b/man/v2.2/8/index.html @@ -0,0 +1,313 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/mount.zfs.8.html b/man/v2.2/8/mount.zfs.8.html new file mode 100644 index 000000000..17c9a888a --- /dev/null +++ b/man/v2.2/8/mount.zfs.8.html @@ -0,0 +1,299 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
MOUNT.ZFS(8)System Manager's ManualMOUNT.ZFS(8)
+
+
+

+

mount.zfsmount + ZFS filesystem

+
+
+

+ + + + + +
mount.zfs[-sfnvh] [-o + options] dataset + mountpoint
+
+
+

+

The mount.zfs helper is used by mount(8) to mount filesystem snapshots and legacy ZFS filesystems, as well as by zfs(8) when the ZFS_MOUNT_HELPER environment variable is not set. Users should invoke zfs(8) directly in most cases.

+

options are handled according + to the section in zfsprops(7), except + for those described below.

+

If /etc/mtab is a regular file and + -n was not specified, it will be updated via + libmount.

+
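For example, a dataset with mountpoint=legacy (names hypothetical) is normally mounted through mount(8), which in turn invokes this helper:
# mount -t zfs tank/legacyfs /mnt/legacy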
+
+

+
+
+
Ignore unknown (sloppy) mount options.
+
+
Do everything except actually executing the system call.
+
+
Never update /etc/mtab.
+
+
Print resolved mount options and parser state.
+
+
Print the usage message.
+
+ zfsutil
+
This private flag indicates that mount(8) is being + called by the zfs(8) command.
+
+
+
+

+

fstab(5), mount(8), + zfs-mount(8)

+
+
+ + + + + +
May 24, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/vdev_id.8.html b/man/v2.2/8/vdev_id.8.html new file mode 100644 index 000000000..6c54c2861 --- /dev/null +++ b/man/v2.2/8/vdev_id.8.html @@ -0,0 +1,324 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
VDEV_ID(8)System Manager's ManualVDEV_ID(8)
+
+
+

+

vdev_idgenerate + user-friendly names for JBOD disks

+
+
+

+ + + + + +
vdev_id-d dev + -c config_file + -g + sas_direct|sas_switch|scsi + -m -p + phys_per_port
+
+
+

+

vdev_id is a udev helper which parses vdev_id.conf(5) to map a physical path in a storage topology to a channel name. The channel name is combined with a disk enclosure slot number to create an alias that reflects the physical location of the drive. This is particularly helpful when it comes to tasks like replacing failed drives. Slot numbers may also be remapped in case the default numbering is unsatisfactory. The drive aliases will be created as symbolic links in /dev/disk/by-vdev.

+

The currently supported topologies are + sas_direct, sas_switch, and + scsi. A multipath mode is supported in which dm-mpath + devices are handled by examining the first running component disk as + reported by the driver. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating + aliases based on existing udev links in the /dev hierarchy using the + configuration + file keyword. See vdev_id.conf(5) for details.

+
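As a rough sketch of a sas_direct configuration (PCI addresses, port numbers, and channel names are hypothetical; see vdev_id.conf(5) for the authoritative format):
# /etc/zfs/vdev_id.conf
topology      sas_direct
phys_per_port 4
#       PCI_SLOT  HBA PORT  CHANNEL NAME
channel 85:00.0   1         A
channel 85:00.0   0         B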
+
+

+
+
+ device
+
The device node to classify, like /dev/sda.
+
+ config_file
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
channels are uniquely identified by a SAS switch port number
+
+
+
+
Only handle dm-multipath devices. If specified, examine the first running + component disk of a dm-multipath device as provided by the driver to + determine the physical path.
+
+ phys_per_port
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id internally uses this value to + determine which HBA or switch port a device is connected to. The default + is .
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zdb.8.html b/man/v2.2/8/zdb.8.html new file mode 100644 index 000000000..103c1407b --- /dev/null +++ b/man/v2.2/8/zdb.8.html @@ -0,0 +1,806 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's ManualZDB(8)
+
+
+

+

zdbdisplay ZFS + storage pool debugging and consistency information

+
+
+

+ + + + + +
zdb[-AbcdDFGhikLMNPsTvXYy] + [-e [-V] + [-p path]…] + [-I inflight-I/O-ops] + [-o + var=value]… + [-t txg] + [-U cache] + [-x dumpdir] + [-K key] + [poolname[/dataset|objset-ID]] + [object|range…]
+
+ + + + + +
zdb[-AdiPv] [-e + [-V] [-p + path]…] [-U + cache] [-K + key] + poolname[/dataset|objset-ID] + [object|range…]
+
+ + + + + +
zdb-B [-e + [-V] [-p + path]…] [-U + cache] [-K + key] + poolname/objset-ID + [backup-flags]
+
+ + + + + +
zdb-C [-A] + [-U cache] + [poolname]
+
+ + + + + +
zdb-E [-A] + word0:word1:…:word15
+
+ + + + + +
zdb-l [-Aqu] + device
+
+ + + + + +
zdb-m [-AFLPXY] + [-e [-V] + [-p path]…] + [-t txg] + [-U cache] + poolname [vdev + [metaslab]…]
+
+ + + + + +
zdb-O [-K + key] dataset path
+
+ + + + + +
zdb-r [-K + key] dataset path + destination
+
+ + + + + +
zdb-R [-A] + [-e [-V] + [-p path]…] + [-U cache] + poolname + vdev:offset:[lsize/]psize[:flags]
+
+ + + + + +
zdb-S [-AP] + [-e [-V] + [-p path]…] + [-U cache] + poolname
+
+
+

+

The zdb utility displays information about a ZFS pool useful for debugging and performs some amount of consistency checking. It is not a general-purpose tool, and its options (and facilities) may change. It is not an fsck(8) utility.

+

The output of this command in general reflects the on-disk structure of a ZFS pool, and is inherently unstable. The precise output of most invocations is not documented; a knowledge of ZFS internals is assumed.

+

If the dataset argument does not + contain any + "" or + "" + characters, it is interpreted as a pool name. The root dataset can be + specified as "pool/".

+

zdb is an "offline" tool; it + accesses the block devices underneath the pools directly from userspace and + does not care if the pool is imported or datasets are mounted (or even if + the system understands ZFS at all). When operating on an imported and active + pool it is possible, though unlikely, that zdb may interpret inconsistent + pool data and behave erratically.

+
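For example, an exported pool can be examined directly from its member devices (pool name and device directory are hypothetical):
# zdb -e -p /dev/disk/by-id tank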
+
+

+

Display options:

+
+
, + --block-stats
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
, + --backup
+
Generate a backup stream, similar to zfs + send, but for the numeric objset ID, and without + opening the dataset. This can be useful in recovery scenarios if dataset + metadata has become corrupted but the dataset itself is readable. The + optional flags argument is a string of one or more + of the letters e, L, + c, and + , which + correspond to the same flags in zfs-send(8).
+
, + --checksum
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
, + --config
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
, + --datasets
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. See + -N for determining if + poolname[/dataset|objset-ID] + is to use the specified + dataset|objset-ID as a string + (dataset name) or a number (objset ID) when datasets have numeric names. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs or object ID ranges are specified, display + information about those specific objects or ranges only.

+

An object ID range is specified in terms of a colon-separated + tuple of the form + ⟨start⟩:⟨end⟩[:⟨flags⟩]. The + fields start and end are + integer object identifiers that denote the upper and lower bounds of the + range. An end value of -1 specifies a range with + no upper bound. The flags field optionally + specifies a set of flags, described below, that control which object + types are dumped. By default, all object types are dumped. A minus sign + (-) negates the effect of the flag that follows it and has no effect + unless preceded by the A flag. For example, the + range 0:-1:A-d will dump all object types except for directories.

+

+
+
+
Dump all objects (this is the default)
+
+
Dump ZFS directory objects
+
+
Dump ZFS plain file objects
+
+
Dump SPA space map objects
+
+
Dump ZAP objects
+
-
+
Negate the effect of next flag
+
+
+
, + --dedup-stats
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + × compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
, + --embedded-block-pointer=word0:word1:…:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
, + --history
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
, + --intent-logs
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
, + --checkpointed-state
+
Examine the checkpointed state of the pool. Note, the on disk format of + the pool is not reverted to the checkpointed state.
+
, + --label=device
+
Read the vdev labels and L2ARC header from the specified device. + zdb -l will return 0 if + valid label was found, 1 if error occurred, and 2 if no valid labels were + found. The presence of L2ARC header is indicated by a specific sequence + (L2ARC_DEV_HDR_MAGIC). If there is an accounting error in the size or the + number of L2ARC log blocks zdb + -l will return 1. Each unique configuration is + displayed only once.
+
+ device
+
In addition display label space usage stats. If a valid L2ARC header was + found also display the properties of log blocks used for restoring L2ARC + contents (persistent L2ARC).
+
+ device
+
Display every configuration, unique or not. If a valid L2ARC header was + found also display the properties of log entries in log blocks used for + restoring L2ARC contents (persistent L2ARC). +

If the -q option is also specified, + don't print the labels or the L2ARC header.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
, + --disable-leak-tracking
+
Disable leak detection and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
, + --metaslabs
+
Display the offset, spacemap, free space of each metaslab, all the log + spacemaps and their obsolete entry statistics.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
, + --metaslab-groups
+
Display all "normal" vdev metaslab group information - per-vdev + metaslab count, fragmentation, and free space histogram, as well as + overall pool fragmentation and histogram.
+
+
"Special" vdevs are added to -M's normal output.
+
, + --object-lookups=dataset + path
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Same as -d but force zdb to interpret the + [dataset|objset-ID] in + [poolname[/dataset|objset-ID]] + as a numeric objset ID.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
, + --copy-object=dataset path + destination
+
Copy the specified path inside of the + dataset to the specified destination. Specified + path must be relative to the root of + dataset. This option can be combined with + -v for increasing verbosity.
+
, + --read-block=poolname + vdev:offset:[lsize/]psize[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) + offset (the offset within the vdev) + size (the physical size, or logical size / + physical size) of the block to read and, optionally, + flags (a set of flags, described below).

+

+
+
+ offset
+
Print block pointer at hex offset
+
+
Calculate and display checksums
+
+
Decompress the block. Set environment variable + ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
Verbose output for guessing compression algorithm
+
+
+
, + --io-stats
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
, + --simulate-dedup
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
, + --brt-stats
+
Display block reference table (BRT) statistics, including the size of + uniques blocks cloned, the space saving as a result of cloning, and the + saving ratio.
+
+
Display the per-vdev BRT statistics, including total references.
+
+
Dump the contents of the block reference tables.
+
, + --uberblock
+
Display the current uberblock.
+
+

Other options:

+
+
, + --ignore-assertions
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
, + --exported=[-p + path]…
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
, + --dump-blocks=dumpdir
+
All blocks accessed will be copied to files in the specified directory. The blocks will be placed in sparse files whose name is the same as that of the file or device read. zdb can then be run on the generated files. Note that the -bbc flags are sufficient to access (and thus copy) all metadata on the pool.
+
, + --automatic-rewind
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
, + --dump-debug-msg
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
, + --inflight=inflight-I/O-ops
+
Limit the number of outstanding checksum I/O operations to the specified + value. The default value is 200. This option affects the performance of + the -c option.
+
, + --key=key
+
Decryption key needed to access an encrypted dataset. This will cause + zdb to attempt to unlock the dataset using the + encryption root, key format and other encryption parameters on the given + dataset. zdb can still inspect pool and dataset + structures on encrypted datasets without unlocking them, but will not be + able to access file names and attributes and object contents. + WARNING: The raw decryption key and any decrypted data will be in + user memory while zdb is running. Other user + programs may be able to extract it by inspecting + zdb as it runs. Exercise extreme caution when + using this option in shared or uncontrolled environments.
+
, + --option=var=value
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
, + --parseable
+
Print numbers in an unscaled form more amenable to parsing, e.g. + + rather than + .
+
, + --txg=transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
, + --cachefile=cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
, + --verbose
+
Enable verbosity. Specify multiple times for increased verbosity.
+
, + --verbatim
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
, + --extreme-rewind
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
, + --all-reconstruction
+
Attempt all possible combinations when reconstructing indirect split + blocks. This flag disables the individual I/O deadman timer in order to + allow as much time as required for the attempted reconstruction.
+
, + --livelist
+
Perform validation for livelists that are being deleted. Scans through the + livelist and metaslabs, checking for duplicate entries and compares the + two, checking for potential double frees. If it encounters issues, + warnings will be printed, but the command will not necessarily fail.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+

+
+
# zdb -C rpool
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ …
+
+
+
+

+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ …
+
+
+
+

+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
+

+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ …
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
November 18, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zed.8.html b/man/v2.2/8/zed.8.html new file mode 100644 index 000000000..51736d222 --- /dev/null +++ b/man/v2.2/8/zed.8.html @@ -0,0 +1,474 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Manager's ManualZED(8)
+
+
+

+

ZEDZFS Event + Daemon

+
+
+

+ + + + + +
ZED[-fFhILMvVZ] [-d + zedletdir] [-p + pidfile] [-P + path] [-s + statefile] [-j + jobs] [-b + buflen]
+
+
+

+

The ZED (ZFS Event Daemon) monitors events + generated by the ZFS kernel module. When a zevent (ZFS Event) is posted, the + ZED will run any ZEDLETs (ZFS Event Daemon Linkage + for Executable Tasks) that have been enabled for the corresponding zevent + class.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Don't daemonise: remain attached to the controlling terminal, log to the + standard I/O streams.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Request that the daemon idle rather than exit when the kernel modules are + not loaded. Processing of events will start, or resume, when the kernel + modules are (re)loaded. Under Linux the kernel modules cannot be unloaded + while the daemon is running.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+ zedletdir
+
Read the enabled ZEDLETs from the specified directory.
+
+ pidfile
+
Write the daemon's process ID to the specified file.
+
+ path
+
Custom $PATH for zedlets to use. Normally zedlets + run in a locked-down environment, with hardcoded paths to the ZFS commands + ($ZFS, $ZPOOL, + $ZED, ), and a + hard-coded $PATH. This is done for security + reasons. However, the ZFS test suite uses a custom PATH for its ZFS + commands, and passes it to ZED with + -P. In short, -P is only + to be used by the ZFS test suite; never use it in production!
+
+ statefile
+
Write the daemon's state to the specified file.
+
+ jobs
+
Allow at most jobs ZEDLETs to run concurrently, + delaying execution of new ones until they finish. Defaults to + .
+
+ buflen
+
Cap kernel event buffer growth to buflen entries. + This buffer is grown when the daemon misses an event, but results in + unreclaimable memory use in the kernel. A value of + removes the + cap. Defaults to + .
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the + zpool events + -v command.

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory + (zedletdir). These can be symlinked or copied from the + + directory; symlinks allow for automatic updates from the installed ZEDLETs, + whereas copies preserve local modifications. As a security measure, since + ownership change is a privileged operation, ZEDLETs must be owned by root. + They must have execute permissions for the user, but they must not have + write permissions for group or other. Dotfiles are ignored.

+

ZEDLETs are named after the zevent class for which they + should be invoked. In particular, a ZEDLET will be invoked for a given + zevent if either its class or subclass string is a prefix of its filename + (and is followed by a non-alphabetic character). As a special case, the + prefix matches + all zevents. Multiple ZEDLETs may be invoked for a given zevent.

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + .

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner:

+
    +
1. it is prefixed with ZEVENT_,
2. it is converted to uppercase, and
3. each non-alphanumeric character is converted to an underscore.
+

Some additional environment variables have been defined to present + certain nvpair values in a more convenient form. An incomplete list of + zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as “seconds + nanoseconds” since the Epoch.
+
+
The seconds component of + ZEVENT_TIME.
+
+
The + + component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The alias + (“--”) + string of the ZFS distribution the daemon is part of.
+
+
The ZFS version the daemon is part of.
+
+
The ZFS release the daemon is part of.
+
+

ZEDLETs may need to call other ZFS commands. The + installation paths of the following executables are defined as environment + variables: , + , + , + , + and + . + These variables may be overridden in the rc file.

+
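As a minimal sketch of a ZEDLET (the filename and log path are hypothetical; real deployments should normally start from the installed ZEDLETs), a script that records every zevent might look like:
#!/bin/sh
# all-log-custom.sh -- runs for every zevent because of the "all" prefix.
# The ZED exports each zevent nvpair as a ZEVENT_* environment variable.
echo "eid=${ZEVENT_EID} class=${ZEVENT_CLASS} at ${ZEVENT_TIME_STRING}" \
    >> /var/log/zed-custom.log
Placed root-owned and executable (with no group or other write permission) in the enabled-zedlets directory, it would be invoked by the ZED for every posted zevent.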
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@zfsexecdir@/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state.
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
, +
+
Terminate the daemon.
+
+
+
+

+

zfs(8), zpool(8), + zpool-events(8)

+
+
+

+

The ZED requires root privileges.

+

Do not taunt the ZED.

+
+
+

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Internationalization support via gettext has not been added.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-allow.8.html b/man/v2.2/8/zfs-allow.8.html new file mode 100644 index 000000000..5cb2c484f --- /dev/null +++ b/man/v2.2/8/zfs-allow.8.html @@ -0,0 +1,956 @@ + + + + + + + zfs-allow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-allow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the + exception of mount, + , + , + , + , + and + . + These permissions cannot be delegated because the Linux + mount(8) command restricts modifications of the global + namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAMETYPENOTES



allowsubcommandMust also have the permission that is being allowed
bookmarksubcommand
clonesubcommandMust also have the create ability and mount ability in + the origin file system
createsubcommandMust also have the mount ability. Must also have the + refreservation ability to create a non-sparse volume.
destroysubcommandMust also have the mount ability
diffsubcommandAllows lookup of paths within a dataset given an object number, and + the ability to create snapshots necessary to zfs diff.
holdsubcommandAllows adding a user hold to a snapshot
load-keysubcommandAllows loading and unloading of encryption key (see zfs + load-key and zfs unload-key).
change-keysubcommandAllows changing an encryption key via zfs change-key.
mountsubcommandAllows mounting/umounting ZFS datasets
promotesubcommandMust also have the mount and promote ability in the + origin file system
receivesubcommandMust also have the mount and create ability
releasesubcommandAllows releasing a user hold which might destroy the snapshot
renamesubcommandMust also have the mount and create ability in the new + parent
rollbacksubcommandMust also have the mount ability
sendsubcommand
sharesubcommandAllows sharing file systems over NFS or SMB protocols
snapshotsubcommandMust also have the mount ability
groupquotaotherAllows accessing any groupquota@ property
groupobjquotaotherAllows accessing any groupobjquota@ + property
groupusedotherAllows reading any groupused@ property
groupobjusedotherAllows reading any groupobjused@ property
userpropotherAllows changing any user property
userquotaotherAllows accessing any userquota@ property
userobjquotaotherAllows accessing any userobjquota@ + property
userusedotherAllows reading any userused@ property
userobjusedotherAllows reading any userobjused@ property
projectobjquotaotherAllows accessing any projectobjquota@ + property
projectquotaotherAllows accessing any projectquota@ + property
projectobjusedotherAllows reading any projectobjused@ + property
projectusedotherAllows reading any projectused@ property
aclinheritproperty
aclmodeproperty
acltypeproperty
atimeproperty
canmountproperty
casesensitivityproperty
checksumproperty
compressionproperty
contextproperty
copiesproperty
dedupproperty
defcontextproperty
devicesproperty
dnodesizeproperty
encryptionproperty
execproperty
filesystem_limitproperty
fscontextproperty
keyformatproperty
keylocationproperty
logbiasproperty
mlslabelproperty
mountpointproperty
nbmandproperty
normalizationproperty
overlayproperty
pbkdf2itersproperty
primarycacheproperty
quotaproperty
readonlyproperty
recordsizeproperty
redundant_metadataproperty
refquotaproperty
refreservationproperty
relatimeproperty
reservationproperty
rootcontextproperty
secondarycacheproperty
setuidproperty
sharenfsproperty
sharesmbproperty
snapdevproperty
snapdirproperty
snapshot_limitproperty
special_small_blocksproperty
syncproperty
utf8onlyproperty
versionproperty
volblocksizeproperty
volmodeproperty
volsizeproperty
vscanproperty
xattrproperty
zonedproperty
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect; for example, a permission granted on an ancestor remains in effect. If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
+
+

+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-bookmark.8.html b/man/v2.2/8/zfs-bookmark.8.html new file mode 100644 index 000000000..6381d01cc --- /dev/null +++ b/man/v2.2/8/zfs-bookmark.8.html @@ -0,0 +1,291 @@ + + + + + + + zfs-bookmark.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-bookmark.8

+
+ + + + + +
ZFS-BOOKMARK(8)System Manager's ManualZFS-BOOKMARK(8)
+
+
+

+

zfs-bookmark — + create bookmark of ZFS snapshot

+
+
+

+ + + + + +
zfsbookmark + snapshot|bookmark + newbookmark
+
+
+

+

Creates a new bookmark of the given snapshot or bookmark. + Bookmarks mark the point in time when the snapshot was created, and can be + used as the incremental source for a zfs + send.

+

When creating a bookmark from an existing redaction + bookmark, the resulting bookmark is + a redaction + bookmark.

+

This feature must be enabled to be used. See + zpool-features(7) for details on ZFS feature flags and the + + feature.

+
+
+

+
+

+

The following example creates a bookmark to a snapshot. This + bookmark can then be used instead of a snapshot in send streams.

+
# zfs + bookmark + rpool@snapshot + rpool#bookmark
+
+
+
+

+

zfs-destroy(8), zfs-send(8), + zfs-snapshot(8)

+
+
+ + + + + +
May 12, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-change-key.8.html b/man/v2.2/8/zfs-change-key.8.html new file mode 100644 index 000000000..3bc60e41a --- /dev/null +++ b/man/v2.2/8/zfs-change-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-change-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-change-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all + children that inherit the keylocation property to be + accessed. The key will be expected in the format specified by the + keyformat and location specified by the + keylocation property. Note that if the + keylocation is set to prompt the + terminal will interactively wait for the key to be entered. Loading a key + will not automatically mount the dataset. If that functionality is + desired, zfs mount + -l will ask for the key and mount the dataset (see + zfs-mount(8)). Once the key is loaded the + keystatus property will become + . +
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all + of its children that inherit the keylocation property. + This requires that the dataset is not currently open or mounted. Once the + key is unloaded the keystatus property will become + . +
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root. Any descendant datasets will inherit their encryption key from the encryption root by default, meaning that loading, unloading, or changing the key for the encryption root will implicitly do the same for all inheriting datasets. If this inheritance is not desired, simply supply a keyformat when creating the child dataset or use zfs change-key to break an existing relationship, creating a new encryption root on the child. Note that the child's keyformat may match that of the parent while still creating a new encryption root, and that changing the encryption property alone does not create a new encryption root; this would simply use a different cipher suite with the same key as its encryption root. The one exception is that clones will always use their origin's encryption key. As a result of this exception, some encryption-related properties (namely keystatus, keyformat, keylocation, and pbkdf2iters) do not inherit like other ZFS properties and instead use the value determined by their encryption root. Encryption root inheritance can be tracked via the read-only encryptionroot property.
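As a sketch of the workflow above (pool and dataset names are hypothetical), one might create an encryption root, later rotate its passphrase while raising the PBKDF2 iteration count, and convert a child that had become its own encryption root back to inheriting the parent's key:
# zfs create -o encryption=on -o keyformat=passphrase pool/secure
# zfs change-key -o pbkdf2iters=1000000 pool/secure
# zfs change-key -i -l pool/secure/child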

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-clone.8.html b/man/v2.2/8/zfs-clone.8.html new file mode 100644 index 000000000..90a436804 --- /dev/null +++ b/man/v2.2/8/zfs-clone.8.html @@ -0,0 +1,315 @@ + + + + + + + zfs-clone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-clone.8

+
+ + + + + +
ZFS-CLONE(8)System Manager's ManualZFS-CLONE(8)
+
+
+

+

zfs-cloneclone + snapshot of ZFS dataset

+
+
+

+ + + + + +
zfsclone [-p] + [-o + property=value]… + snapshot + filesystem|volume
+
+
+

+

See the Clones section of + zfsconcepts(7) for details. The target dataset can be + located anywhere in the ZFS hierarchy, and is created as the same type as + the original.

+
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + + property inherited from their parent. If the target filesystem or volume + already exists, the operation completes successfully.
+
+
+
+

+
+

+

The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday.

+
# zfs + clone pool/home/bob@yesterday + pool/clone
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+
+

+

zfs-promote(8), + zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-create.8.html b/man/v2.2/8/zfs-create.8.html new file mode 100644 index 000000000..a88fd67a7 --- /dev/null +++ b/man/v2.2/8/zfs-create.8.html @@ -0,0 +1,452 @@ + + + + + + + zfs-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-create.8

+
+ + + + + +
ZFS-CREATE(8)System Manager's ManualZFS-CREATE(8)
+
+
+

+

zfs-create — + create ZFS dataset

+
+
+

+ + + + + +
zfscreate [-Pnpuv] + [-o + property=value]… + filesystem
+
+ + + + + +
zfscreate [-ps] + [-b blocksize] + [-o + property=value]… + -V size + volume
+
+
+

+
+
zfs create + [-Pnpuv] [-o + property=value]… + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent, unless the -u option is used. +
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have filesystem as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to filesystem due to the use of the -o option.
+
+
Do not mount the newly created file system.
+
+
Print verbose information about the created dataset.
+
+
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]… + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block device in /dev/zvol/path, where path is the name of the volume in the ZFS namespace. The size represents the logical size as exported by the device. By default, a reservation of equal size is created.

size is automatically rounded up to the nearest multiple of the blocksize.

+
+
+ blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
+ property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See volsize in the Native Properties section of zfsprops(7) for more information about sparse volumes.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have volume as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to volume due to the use of the -b or -o options, as well as refreservation if the volume is not sparse.
+
+
Print verbose information about the created dataset.
+
+
+
+
+

+

Swapping to a ZFS volume is prone to deadlock and not recommended. + See OpenZFS FAQ.

+

Swapping to a file on a ZFS filesystem is not supported.

+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs + set + mountpoint=/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
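The following additional sketch (volume name hypothetical, not part of the original examples) first validates and then creates a sparse 10 GiB volume with an 8 KiB block size:
# zfs create -n -P -s -b 8K -V 10G pool/vol
# zfs create -s -b 8K -V 10G pool/vol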
+
+
+

+

zfs-destroy(8), zfs-list(8), + zpool-create(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-destroy.8.html b/man/v2.2/8/zfs-destroy.8.html new file mode 100644 index 000000000..8f1840e2e --- /dev/null +++ b/man/v2.2/8/zfs-destroy.8.html @@ -0,0 +1,424 @@ + + + + + + + zfs-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-destroy.8

+
+ + + + + +
ZFS-DESTROY(8)System Manager's ManualZFS-DESTROY(8)
+
+
+

+

zfs-destroy — + destroy ZFS dataset, snapshots, or bookmark

+
+
+

+ + + + + +
zfsdestroy [-Rfnprv] + filesystem|volume
+
+ + + + + +
zfsdestroy [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
+ + + + + +
zfsdestroy + filesystem|volume#bookmark
+
+
+

+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Forcibly unmount file systems. This option has no effect on non-file + systems or unmounted file systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
The given snapshots are destroyed immediately if and only if the zfs destroy command without the -d option would have destroyed them. Such immediate destruction would occur, for example, if the snapshot had no clones and the user-initiated reference count were zero.

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same filesystem or volume may be specified in a comma-separated list of snapshots. Only the snapshot's short name (the part after the @) should be specified when using a range or comma-separated list to identify multiple snapshots.

+
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Destroy immediately. If a snapshot cannot be destroyed now, mark it + for deferred destruction.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.

+
+
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
+
+
+

+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
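As a further sketch (snapshot names hypothetical), a dry run can preview the destruction of an inclusive range of snapshots before committing to it:
# zfs destroy -nv pool/users@7daysago%today
# zfs destroy -v pool/users@7daysago%today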
+
+

+

zfs-create(8), zfs-hold(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-diff.8.html b/man/v2.2/8/zfs-diff.8.html new file mode 100644 index 000000000..8ad639ce9 --- /dev/null +++ b/man/v2.2/8/zfs-diff.8.html @@ -0,0 +1,341 @@ + + + + + + + zfs-diff.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-diff.8

+
+ + + + + +
ZFS-DIFF(8)System Manager's ManualZFS-DIFF(8)
+
+
+

+

zfs-diffshow + difference between ZFS snapshots

+
+
+

+ + + + + +
zfsdiff [-FHth] + snapshot + snapshot|filesystem
+
+
+

+

Display the difference between a snapshot of a given filesystem + and another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are:

+
+
+
-	The path has been removed
+	The path has been created
M	The path has been modified
R	The path has been renamed
+
+
+
+
+
Display an indication of the type of file, in a manner similar to the + -F option of ls(1). +
+
+
+
B	Block device
C	Character device
/	Directory
>	Door
|	Named pipe
@	Symbolic link
P	Event port
=	Socket
F	Regular file
+
+
+
+
+
Give more parsable tab-separated output, without header lines and without + arrows.
+
+
Display the path's inode change time as the first column of output.
+
+
Do not \0ooo-escape non-ASCII paths.
+
+
+
+

+
+

+

The following example shows how to see what has changed between a + prior snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected.

+
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
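For scripted consumption, the same comparison can be requested in parsable, tab-separated form with change times (a sketch using the dataset names from the example above):
# zfs diff -FHt tank/test@before tank/test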
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-get.8.html b/man/v2.2/8/zfs-get.8.html new file mode 100644 index 000000000..b7d496c78 --- /dev/null +++ b/man/v2.2/8/zfs-get.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-get.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
+
Update mountpoint, sharenfs, sharesmb property but do not mount or + share the dataset.
+
+
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs set mountpoint=/export/home pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs set compression=off pool/home
+
# zfs set compression=on pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs set quota=50G pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
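A sketch of the -u option described above (dataset and path hypothetical): the new mountpoint is recorded without the dataset being remounted immediately:
# zfs set -u mountpoint=/newhome pool/home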
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-groupspace.8.html b/man/v2.2/8/zfs-groupspace.8.html new file mode 100644 index 000000000..a2f1505f4 --- /dev/null +++ b/man/v2.2/8/zfs-groupspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-groupspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-groupspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral rather than a name, so neither the -i option (SID to POSIX ID) nor -n (numeric ID) nor -t (types) is needed.
+
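A sketch of typical invocations (filesystem name hypothetical): parsable per-user accounting, and the equivalent per-group view:
# zfs userspace -Hp -o name,used,quota pool/home
# zfs groupspace -t posixgroup pool/home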
+
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-hold.8.html b/man/v2.2/8/zfs-hold.8.html new file mode 100644 index 000000000..3bc9c8a84 --- /dev/null +++ b/man/v2.2/8/zfs-hold.8.html @@ -0,0 +1,325 @@ + + + + + + + zfs-hold.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-hold.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot
+
+ + + + + +
zfsholds [-rHp] + snapshot
+
+ + + + + +
zfsrelease [-r] + tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rHp] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
Prints holds timestamps as unix epoch timestamps.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
+
+
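A sketch of the hold lifecycle (tag and snapshot names hypothetical): place a recursive hold, list it, and release it so the snapshot can be destroyed again:
# zfs hold -r keep pool/home@backup
# zfs holds -r pool/home@backup
# zfs release -r keep pool/home@backup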
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-inherit.8.html b/man/v2.2/8/zfs-inherit.8.html new file mode 100644 index 000000000..de01723ff --- /dev/null +++ b/man/v2.2/8/zfs-inherit.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-inherit.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-inherit.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
+
Update mountpoint, sharenfs, sharesmb property but do not mount or + share the dataset.
+
+
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs set mountpoint=/export/home pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs set compression=off pool/home
+
# zfs set compression=on pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs set quota=50G pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-jail.8.html b/man/v2.2/8/zfs-jail.8.html new file mode 100644 index 000000000..634102a4a --- /dev/null +++ b/man/v2.2/8/zfs-jail.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-jail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-jail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail identified by JID jailid or name jailname. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

You cannot attach a jailed dataset's children to another jail. + You can also not attach the root file system of the jail or any dataset + which needs to be mounted before the zfs rc script is run inside the + jail, as it would be attached unmounted until it is mounted from the rc + script inside the jail.

+

To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The quota property cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
+
+
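A sketch of attaching and detaching a dataset (jail and dataset names hypothetical); the jailed property is set first so the dataset can be administered from inside the jail:
# zfs set jailed=on pool/jails/www
# zfs jail www pool/jails/www
# zfs unjail www pool/jails/www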
+
+

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-list.8.html b/man/v2.2/8/zfs-list.8.html new file mode 100644 index 000000000..a557167e7 --- /dev/null +++ b/man/v2.2/8/zfs-list.8.html @@ -0,0 +1,371 @@ + + + + + + + zfs-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-list.8

+
+ + + + + +
ZFS-LIST(8)System Manager's ManualZFS-LIST(8)
+
+
+

+

zfs-listlist + properties of ZFS datasets

+
+
+

+ + + + + +
zfslist + [-r|-d + depth] [-Hp] + [-o + property[,property]…] + [-s property]… + [-S property]… + [-t + type[,type]…] + [filesystem|volume|snapshot]…
+
+
+

+

If specified, you can list property information by the absolute pathname or the relative pathname. By default, all file systems and volumes are displayed. Snapshots are displayed if the listsnapshots pool property is on (the default is off), or if the -t snapshot or -t all options are specified. The following fields are displayed: name, used, available, referenced, mountpoint.

+
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ property
+
A comma-separated list of properties to display. The property must be:
  • One of the properties described in the Native Properties section of zfsprops(7)
  • A user property
  • The value name to display the dataset name
  • The value space to display space usage properties on file systems and volumes.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command line.
+
+ property
+
A property for sorting the output by column in ascending order based on + the value of the property. The property must be one of the properties + described in the Properties section + of zfsprops(7) or the value name to + sort by the dataset name. Multiple properties can be specified at one time + using multiple -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
    +
  • Numeric types sort in numeric order.
  • String types sort in alphabetical order.
  • Types inappropriate for a row sort that row to the literal bottom, regardless of the specified ordering.
+

If no sorting options are specified the existing behavior of + zfs list is + preserved.

+
+
+ property
+
Same as -s, but sorts by property in descending + order.
+
+ type
+
A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, bookmark, or all. For example, specifying -t snapshot displays only snapshots.
+
+
+
+

+
+

+

The following command lists all active file systems and volumes in the system. Snapshots are displayed if listsnapshots=on. The default is off. See zpoolprops(7) for more information on pool properties.

+
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
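A further sketch (dataset name hypothetical): listing only the snapshots directly under a given filesystem, ordered by creation time:
# zfs list -t snapshot -o name,creation -s creation -d 1 pool/home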
+
+
+

+

zfsprops(7), zfs-get(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-load-key.8.html b/man/v2.2/8/zfs-load-key.8.html new file mode 100644 index 000000000..398eb71fe --- /dev/null +++ b/man/v2.2/8/zfs-load-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-load-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-load-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root. Any descendant datasets will inherit their encryption key from the encryption root by default, meaning that loading, unloading, or changing the key for the encryption root will implicitly do the same for all inheriting datasets. If this inheritance is not desired, simply supply a keyformat when creating the child dataset or use zfs change-key to break an existing relationship, creating a new encryption root on the child. Note that the child's keyformat may match that of the parent while still creating a new encryption root, and that changing the encryption property alone does not create a new encryption root; this would simply use a different cipher suite with the same key as its encryption root. The one exception is that clones will always use their origin's encryption key. As a result of this exception, some encryption-related properties (namely keystatus, keyformat, keylocation, and pbkdf2iters) do not inherit like other ZFS properties and instead use the value determined by their encryption root. Encryption root inheritance can be tracked via the read-only encryptionroot property.
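As a sketch of routine key handling (dataset name hypothetical): verify a key without loading it, load the keys for every encryption root in all imported pools, and later unload them again recursively:
# zfs load-key -n pool/secure
# zfs load-key -a
# zfs unload-key -r pool/secure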

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-mount-generator.8.html b/man/v2.2/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..23b9cba4c --- /dev/null +++ b/man/v2.2/8/zfs-mount-generator.8.html @@ -0,0 +1,439 @@ + + + + + + + zfs-mount-generator.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-mount-generator.8

+
+ + + + + +
ZFS-MOUNT-GENERATOR(8)System Manager's ManualZFS-MOUNT-GENERATOR(8)
+
+
+

+

zfs-mount-generator — + generate systemd mount units for ZFS filesystems

+
+
+

+

@systemdgeneratordir@/zfs-mount-generator

+
+
+

+

zfs-mount-generator is a + systemd.generator(7) that generates native + systemd.mount(5) units for configured ZFS datasets.

+
+

+
+
=
+
+ + or none.
+
=
+
off. Skipped if + only noauto datasets exist for a given mountpoint + and there's more than one. Datasets with + + take precedence over ones with + noauto for the same mountpoint. + Sets logical noauto + flag if noauto. Encryption roots + always generate + zfs-load-key@root.service, + even if off.
+
=, + relatime=, + =, + =, + =, + =, + =
+
Used to generate mount options equivalent to zfs + mount.
+
=, + keylocation=
+
If the dataset is an encryption root, its mount unit will bind to + zfs-load-key@root.service, + with additional dependencies as follows: +
+
+
=
+
None, uses systemd-ask-password(1)
+
=URL + (et al.)
+
=, + After=: + network-online.target
+
=<path>
+
=path
+
+
+ The service also uses the same Wants=, + After=, Requires=, + and RequiresMountsFor=, as the + mount unit.
+
=path[ + path]…
+
+ Requires= for the mount- and key-loading unit.
+
=path[ + path]…
+
+ RequiresMountsFor= for the mount- and key-loading + unit.
+
=unit[ + unit]…
+
+ Before= for the mount unit.
+
=unit[ + unit]…
+
+ After= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + WantedBy= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + RequiredBy= for the mount unit.
+
=(unset)|on|off
+
Waxes or wanes strength of default reverse dependencies of the mount unit, + see below.
+
=on|off
+
on. Defaults to + off.
+
+
+
+

+

Additionally, unless the pool the dataset resides on is imported + at generation time, both units gain + Wants=zfs-import.target and + After=zfs-import.target.

+

Additionally, unless the logical noauto flag is + set, the mount unit gains a reverse-dependency for + local-fs.target of strength

+
+
+
(unset)
+
= + + Before=
+
+
=
+
+
= + + Before=
+
+
+
+
+

+

Because ZFS pools may not be available very early in the boot + process, information on ZFS mountpoints must be stored separately. The + output of

+
zfs + list -Ho + name,⟨every property above in + order⟩
+for datasets that should be mounted by systemd should be kept at + @sysconfdir@/zfs/zfs-list.cache/poolname, + and, if writeable, will be kept synchronized for the entire pool by the + history_event-zfs-list-cacher.sh ZEDLET, if enabled + (see zed(8)). +
+
+
+

+

If the + + environment variable is nonzero (or unset and + /proc/cmdline contains + ""), + print summary accounting information at the end.

+
+
+

+

To begin, enable tracking for the pool:

+
# touch + @sysconfdir@/zfs/zfs-list.cache/poolname
+Then enable the tracking ZEDLET: +
# ln + -s + @zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh + @sysconfdir@/zfs/zed.d
+
# systemctl + enable + zfs-zed.service
+
# systemctl + restart + zfs-zed.service
+

If no history event is in the queue, inject one to ensure the + ZEDLET runs to refresh the cache file by setting a monitored property + somewhere on the pool:

+
# zfs + set relatime=off + poolname/dset
+
# zfs + inherit relatime + poolname/dset
+

To test the generator output:

+
$ mkdir + /tmp/zfs-mount-generator
+
$ + @systemdgeneratordir@/zfs-mount-generator + /tmp/zfs-mount-generator
+If the generated units are satisfactory, instruct + systemd to re-run all generators: +
# systemctl + daemon-reload
+
+
+

+

systemd.mount(5), + zfs(5), + systemd.generator(7), + zed(8), + zpool-events(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-mount.8.html b/man/v2.2/8/zfs-mount.8.html new file mode 100644 index 000000000..efee116a4 --- /dev/null +++ b/man/v2.2/8/zfs-mount.8.html @@ -0,0 +1,338 @@ + + + + + + + zfs-mount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-mount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its + mountpoint property, if the path exists and is empty. If + mountpoint is set to + legacy, the + filesystem should instead be mounted using mount(8). A combined + mount/unmount usage example follows this option list. +
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is + equivalent to executing zfs + load-key on each encryption root before + mounting it. Note that if a filesystem has + keylocation=prompt, + this will cause the terminal to interactively block after asking for + the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
+
+
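The following sketch illustrates the subcommands above with a hypothetical pool named tank:
# zfs mount
  list all currently mounted ZFS file systems
# zfs mount -a
  mount every available ZFS file system
# zfs mount -l -o ro tank/secure
  load the encryption key, then mount tank/secure read-only for this mount only
# zfs unmount -u tank/secure
  unmount tank/secure and unload its encryption key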
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-program.8.html b/man/v2.2/8/zfs-program.8.html new file mode 100644 index 000000000..f3dfebbb5 --- /dev/null +++ b/man/v2.2/8/zfs-program.8.html @@ -0,0 +1,1007 @@ + + + + + + + zfs-program.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-program.8

+
+ + + + + +
ZFS-PROGRAM(8)System Manager's ManualZFS-PROGRAM(8)
+
+
+

+

zfs-program — + execute ZFS channel programs

+
+
+

+ + + + + +
zfsprogram [-jn] + [-t instruction-limit] + [-m memory-limit] + pool script + [script arguments]
+
+
+

+

The ZFS channel program interface allows ZFS administrative + operations to be run programmatically as a Lua script. The entire script is + executed atomically, with no other administrative operations taking effect + concurrently. A library of ZFS calls is made available to channel program + scripts. Channel programs may only be run with root privileges.

+
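For example (the pool name and script path are hypothetical), a channel program is run by giving zfs program a pool and a Lua script; -n performs a read-only dry run and -j requests JSON output:
# zfs program rpool /root/cleanup.zcp
  run the script against rpool with the default limits
# zfs program -n -j rpool /root/cleanup.zcp
  read-only dry run, with the result printed as JSON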

A modified version of the Lua 5.2 interpreter is used to run + channel program scripts. The Lua 5.2 manual can be found at + http://www.lua.org/manual/5.2/

+

The channel program given by script will be + run on pool, and any attempts to access or modify + other pools will cause an error.

+
+
+

+
+
+
Display channel program output in JSON format. When this flag is specified + and standard output is empty, the channel program has encountered an error. The + details of such an error will be printed to standard error in plain + text.
+
+
Executes a read-only channel program, which runs faster. The program + cannot change on-disk state by calling functions from the zfs.sync + submodule. The program can be used to gather information such as + properties and to determine whether changes would succeed (zfs.check.*). Without + this flag, all pending changes must be synced to disk before a channel + program can complete.
+
+ instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. The + default memory limit is 10 MiB, and can be set to a maximum of 100 + MiB.
+
+

All remaining argument strings will be passed directly to the Lua + script as described in the LUA + INTERFACE section below.

+
+
+

+

A channel program can be invoked either from the command line, or + via a library call to + ().

+
+

+

Arguments passed to the channel program are converted to a Lua + table. If invoked from the command line, extra arguments to the Lua script + will be accessible as an array stored in the argument table with the key + 'argv':

+
+
args = ...
+argv = args["argv"]
+-- argv == {1="arg1", 2="arg2", ...}
+
+
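The extra arguments come from the command line; for instance (hypothetical names), the following invocation makes "arg1" and "arg2" available as argv[1] and argv[2] inside the script:
# zfs program rpool /root/script.zcp arg1 arg2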

If invoked from the libzfs interface, an arbitrary argument list + can be passed to the channel program, which is accessible via the same + "..." syntax in Lua:

+
+
args = ...
+-- args == {"foo"="bar", "baz"={...}, ...}
+
+

Note that because Lua arrays are 1-indexed, arrays passed to Lua + from the libzfs interface will have their indices incremented by 1. That is, + the element in arr[0] in a C array passed to a channel + program will be stored in arr[1] when accessed from + Lua.

+
+
+

+

Lua return statements take the form:

+
return ret0, ret1, ret2, + ...
+

Return statements returning multiple values are permitted + internally in a channel program script, but attempting to return more than + one value from the top level of the channel program is not permitted and + will throw an error. However, tables containing multiple values can still be + returned. If invoked from the command line, a return statement:

+
+
a = {foo="bar", baz=2}
+return a
+
+

Will be output formatted as:

+
+
Channel program fully executed with return value:
+    return:
+        baz: 2
+        foo: 'bar'
+
+
+
+

+

If the channel program encounters a fatal error while running, a + non-zero exit status will be returned. If more information about the error + is available, a singleton list will be returned detailing the error:

+
error: "error string, including + Lua stack trace"
+

If a fatal error is returned, the channel program may have not + executed at all, may have partially executed, or may have fully executed but + failed to pass a return value back to userland.

+

If the channel program exhausts an instruction or memory limit, a + fatal error will be generated and the program will be stopped, leaving the + program partially executed. No attempt is made to reverse or undo any + operations already performed. Note that because both the instruction count + and amount of memory used by a channel program are deterministic when run + against the same inputs and filesystem state, as long as a channel program + has run successfully once, you can guarantee that it will finish + successfully against a similar size system.

+
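If a program exhausts the default limits, they can be raised up to the documented maximums; a hypothetical invocation doubling both defaults:
# zfs program -t 20000000 -m 20971520 rpool /root/script.zcp
  allow up to 20 million Lua instructions and 20 MiB of memory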

If a channel program attempts to return too large a value, the + program will fully execute but exit with a nonzero status code and no return + value.

+

: + ZFS API functions do not generate Fatal Errors when correctly invoked, they + return an error code and the channel program continues executing. See the + ZFS API section below for + function-specific details on error return codes.

+
+
+

+

When invoking a channel program via the libzfs interface, it is + necessary to translate arguments and return values from Lua values to their + C equivalents, and vice-versa.

+

There is a correspondence between nvlist values in C and Lua + tables. A Lua table which is returned from the channel program will be + recursively converted to an nvlist, with table values converted to their + natural equivalents:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
string->string
number->int64
boolean->boolean_value
nil->boolean (no value)
table->nvlist
+

Likewise, table keys are replaced by string equivalents as + follows:

+ + + + + + + + + + + + + + + + + + + +
string->no change
number->signed decimal string ("%lld")
boolean->"true" | "false"
+

Any collision of table key strings (for example, the string + "true" and a true boolean value) will cause a fatal error.

+

Lua numbers are represented internally as signed 64-bit + integers.

+
+
+
+

+

The following Lua built-in base library functions are + available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
assertrawlencollectgarbagerawget
errorrawsetgetmetatableselect
ipairssetmetatablenexttonumber
pairstostringrawequaltype
+

All functions in the + , + , + and + + built-in submodules are also available. A complete list and documentation of + these modules is available in the Lua manual.

+

The following base library functions have been disabled + and are not available for use in channel programs:

+ + + + + + + + + + +
dofileloadfileloadpcallprintxpcall
+
+
+

+
+

+

Each API function takes a fixed set of required positional + arguments and optional keyword arguments. For example, the destroy function + takes a single positional string argument (the name of the dataset to + destroy) and an optional "defer" keyword boolean argument. When + using parentheses to specify the arguments to a Lua function, only + positional arguments can be used:

+
zfs.sync.destroy("rpool@snap")
+

To use keyword arguments, functions must be called with a single + argument that is a Lua table containing entries mapping integers to + positional arguments and strings to keyword arguments:

+
zfs.sync.destroy({1="rpool@snap", + defer=true})
+

The Lua language allows curly braces to be used in place of + parentheses as syntactic sugar for this calling convention:

+
zfs.sync.destroy{"rpool@snap", + defer=true}
+
+
+

+

If an API function succeeds, it returns 0. If it fails, it returns + an error code and the channel program continues executing. API functions do + not generate Fatal Errors except in the case of an unrecoverable internal + file system error.

+

In addition to returning an error code, some functions also return + extra details describing what caused the error. This extra description is + given as a second return value, and will always be a Lua table, or Nil if no + error details were returned. Different keys will exist in the error details + table depending on the function and error case. Any such function may be + called expecting a single return value:

+
errno = + zfs.sync.promote(dataset)
+

Or, the error details can be retrieved:

+
+
errno, details = zfs.sync.promote(dataset)
+if (errno == EEXIST) then
+    assert(details ~= Nil)
+    list_of_conflicting_snapshots = details
+end
+
+

The following global aliases for API function error return codes + are defined for use in channel programs:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
EPERMECHILDENODEVENOSPCENOENTEAGAINENOTDIR
ESPIPEESRCHENOMEMEISDIREROFSEINTREACCES
EINVALEMLINKEIOEFAULTENFILEEPIPEENXIO
ENOTBLKEMFILEEDOME2BIGEBUSYENOTTYERANGE
ENOEXECEEXISTETXTBSYEDQUOTEBADFEXDEVEFBIG
+
+
+

+

For detailed descriptions of the exact behavior of any ZFS + administrative operations, see the main zfs(8) manual + page.

+
+
(msg)
+
Record a debug message in the zfs_dbgmsg log. A log of these messages can + be printed via mdb's "::zfs_dbgmsg" command, or can be monitored + live by running +
dtrace -n + 'zfs-dbgmsg{trace(stringof(arg0))}'
+

+
+
msg (string)
+
Debug message to be printed.
+
+
+
(dataset)
+
Returns true if the given dataset exists, or false if it doesn't. A fatal + error will be thrown if the dataset is not in the target pool. That is, in + a channel program running on rpool, + zfs.exists("rpool/nonexistent_fs") returns + false, but + zfs.exists("somepool/fs_that_may_exist") will + error. +

+
+
dataset (string)
+
Dataset to check for existence. Must be in the target pool.
+
+
+
(dataset, + property)
+
Returns two values. First, a string, number or table containing the + property value for the given dataset. Second, a string containing the + source of the property (i.e. the name of the dataset in which it was set + or nil if it is readonly). Throws a Lua error if the dataset is invalid or + the property doesn't exist. Note that Lua only supports int64 number types + whereas ZFS number properties are uint64. This means very large values + (like GUIDs) may wrap around and appear negative. +

+
+
dataset (string)
+
Filesystem or snapshot path to retrieve properties from.
+
property (string)
+
Name of property to retrieve. All filesystem, snapshot and volume + properties are supported except for + and + . + Also supports the + snap + and + bookmark + properties and the + ⟨|⟩⟨|id + properties, though the id must be in numeric form.
+
+
+
+
+
+
The sync submodule contains functions that modify the on-disk state. They + are executed in "syncing context". +

The available sync submodule functions are as follows:

+
+
(dataset, + [defer=true|false])
+
Destroy the given dataset. Returns 0 on successful destroy, or a + nonzero error code if the dataset could not be destroyed (for example, + if the dataset has any active children or clones). +

+
+
dataset (string)
+
Filesystem or snapshot to be destroyed.
+
[defer (boolean)]
+
Valid only for destroying snapshots. If set to true, and the + snapshot has holds or clones, allows the snapshot to be marked for + deferred deletion rather than failing.
+
+
+
(dataset, + property)
+
Clears the specified property in the given dataset, causing it to be + inherited from an ancestor, or restored to the default if no ancestor + property is set. The zfs + inherit -S option has + not been implemented. Returns 0 on success, or a nonzero error code if + the property could not be cleared. +

+
+
dataset (string)
+
Filesystem or snapshot containing the property to clear.
+
property (string)
+
The property to clear. Allowed properties are the same as those + for the zfs + inherit command.
+
+
+
(dataset)
+
Promote the given clone to a filesystem. Returns 0 on successful + promotion, or a nonzero error code otherwise. If EEXIST is returned, + the second return value will be an array of the clone's snapshots + whose names collide with snapshots of the parent filesystem. +

+
+
dataset (string)
+
Clone to be promoted.
+
+
+
(filesystem)
+
Rollback to the previous snapshot for a dataset. Returns 0 on + successful rollback, or a nonzero error code otherwise. Rollbacks can + be performed on filesystems or zvols, but not on snapshots or mounted + datasets. EBUSY is returned in the case where the filesystem is + mounted. +

+
+
filesystem (string)
+
Filesystem to rollback.
+
+
+
(dataset, + property, value)
+
Sets the given property on a dataset. Currently only user properties + are supported. Returns 0 if the property was set, or a nonzero error + code otherwise. +

+
+
dataset (string)
+
The dataset where the property will be set.
+
property (string)
+
The property to set.
+
value (string)
+
The value of the property to be set.
+
+
+
(dataset)
+
Create a snapshot of a filesystem. Returns 0 if the snapshot was + successfully created, and a nonzero error code otherwise. +

Note: Taking a snapshot will fail on any pool older than + legacy version 27. To enable taking snapshots from ZCP scripts, the + pool must be upgraded.

+

+
+
dataset (string)
+
Name of snapshot to create.
+
+
+
(dataset, + oldsnapname, + newsnapname)
+
Rename a snapshot of a filesystem or a volume. Returns 0 if the + snapshot was successfully renamed, and a nonzero error code otherwise. +

+
+
dataset (string)
+
Name of the snapshot's parent dataset.
+
oldsnapname (string)
+
Original name of the snapshot.
+
newsnapname (string)
+
New name of the snapshot.
+
+
+
(source, + newbookmark)
+
Create a bookmark of an existing source snapshot or bookmark. Returns + 0 if the new bookmark was successfully created, and a nonzero error + code otherwise. +

Note: Bookmarking requires the corresponding pool feature + to be enabled.

+

+
+
source (string)
+
Full name of the existing snapshot or bookmark.
+
newbookmark (string)
+
Full name of the new bookmark.
+
+
+
+
+
+
For each function in the zfs.sync submodule, there is a + corresponding zfs.check function which performs a + "dry run" of the same operation. Each takes the same arguments + as its zfs.sync counterpart and returns 0 if the + operation would succeed, or a non-zero error code if it would fail, along + with any other error details. That is, each has the same behavior as the + corresponding sync function except for actually executing the requested + change. For example, + ("fs") + returns 0 if + zfs.sync.destroy("fs") + would successfully destroy the dataset. +

The available zfs.check functions are:

+
+
(dataset, + [defer=true|false])
+
 
+
(dataset)
+
 
+
(filesystem)
+
 
+
(dataset, + property, value)
+
 
+
(dataset)
+
 
+
+
+
+
The zfs.list submodule provides functions for iterating over datasets and + properties. Rather than returning tables, these functions act as Lua + iterators, and are generally used as follows: +
+
for child in zfs.list.children("rpool") do
+    ...
+end
+
+

The available zfs.list functions are:

+
+
(snapshot)
+
Iterate through all clones of the given snapshot. +

+
+
snapshot (string)
+
Must be a valid snapshot path in the current pool.
+
+
+
(dataset)
+
Iterate through all snapshots of the given dataset. Each snapshot is + returned as a string containing the full dataset name, e.g. + "pool/fs@snap". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all direct children of the given dataset. Each child + is returned as a string containing the full dataset name, e.g. + "pool/fs/child". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all bookmarks of the given dataset. Each bookmark is + returned as a string containing the full dataset name, e.g. + "pool/fs#bookmark". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(snapshot)
+
Iterate through all user holds on the given snapshot. Each hold is + returned as a pair of the hold's tag and the timestamp (in seconds + since the epoch) at which it was created. +

+
+
snapshot (string)
+
Must be a valid snapshot.
+
+
+
(dataset)
+
An alias for zfs.list.user_properties (see relevant entry). +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Iterate through all user properties for the given dataset. For each + step of the iteration, output the property name, its value, and its + source. Throws a Lua error if the dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Returns an array of strings, the names of the valid system (non-user + defined) properties for the given dataset. Throws a Lua error if the + dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot or volume.
+
+
+
+
+
+
+
+
+

+
+

+

The following channel program recursively destroys a filesystem + and all its snapshots and children in a naive manner. Note that this does + not involve any error handling or reporting.

+
+
function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        zfs.sync.destroy(snap)
+    end
+    zfs.sync.destroy(root)
+end
+destroy_recursive("pool/somefs")
+
+
+
+

+

A more verbose and robust version of the same channel program, + which properly detects and reports errors, and also takes the dataset to + destroy as a command line argument, would be as follows:

+
+
succeeded = {}
+failed = {}
+
+function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        err = zfs.sync.destroy(snap)
+        if (err ~= 0) then
+            failed[snap] = err
+        else
+            succeeded[snap] = err
+        end
+    end
+    err = zfs.sync.destroy(root)
+    if (err ~= 0) then
+        failed[root] = err
+    else
+        succeeded[root] = err
+    end
+end
+
+args = ...
+argv = args["argv"]
+
+destroy_recursive(argv[1])
+
+results = {}
+results["succeeded"] = succeeded
+results["failed"] = failed
+return results
+
+
+
+

+

The following function performs a forced promote operation by + attempting to promote the given clone and destroying any conflicting + snapshots.

+
+
function force_promote(ds)
+   errno, details = zfs.check.promote(ds)
+   if (errno == EEXIST) then
+       assert(details ~= Nil)
+       for i, snap in ipairs(details) do
+           zfs.sync.destroy(ds .. "@" .. snap)
+       end
+   elseif (errno ~= 0) then
+       return errno
+   end
+   return zfs.sync.promote(ds)
+end
+
+
+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-project.8.html b/man/v2.2/8/zfs-project.8.html new file mode 100644 index 000000000..0730cd49d --- /dev/null +++ b/man/v2.2/8/zfs-project.8.html @@ -0,0 +1,362 @@ + + + + + + + zfs-project.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-project.8

+
+ + + + + +
ZFS-PROJECT(8)System Manager's ManualZFS-PROJECT(8)
+
+
+

+

zfs-project — + manage projects in ZFS filesystem

+
+
+

+ + + + + +
zfsproject + [-d|-r] + file|directory
+
+ + + + + +
zfsproject -C + [-kr] + file|directory
+
+ + + + + +
zfsproject -c + [-0] + [-d|-r] + [-p id] + file|directory
+
+ + + + + +
zfsproject [-p + id] [-rs] + file|directory
+
+
+

+
+
zfs project + [-d|-r] + file|directory
+
List project identifier (ID) and inherit flag of files and directories. +
+
+
Show the directory project ID and inherit flag, not its children.
+
+
List subdirectories recursively.
+
+
+
zfs project + -C [-kr] + file|directory
+
Clear project inherit flag and/or ID on the files and directories. +
+
+
Keep the project ID unchanged. If not specified, the project ID will + be reset to zero.
+
+
Clear subdirectories' flags recursively.
+
+
+
zfs project + -c [-0] + [-d|-r] + [-p id] + file|directory
+
Check project ID and inherit flag on the files and directories: report + entries without the project inherit flag, or with project IDs different + from the target directory's project ID or the one specified with + -p. +
+
+
Delimit filenames with a NUL byte instead of newline, don't output + diagnoses.
+
+
Check the directory project ID and inherit flag, not its + children.
+
+ id
+
Compare to id instead of the target files and + directories' project IDs.
+
+
Check subdirectories recursively.
+
+
+
zfs project + -p id + [-rs] + file|directory
+
Set project ID and/or inherit flag on the files and directories. +
+
+ id
+
Set the project ID to the given value.
+
+
Set on subdirectories recursively.
+
+
Set project inherit flag on the given files and directories. This is + usually used for setting up tree quotas with + -r. In that case, the directory's project ID + will be set for all its descendants, unless specified explicitly with + -p. A usage example follows this option list.
+
+
+
+
+
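The usage example referenced above, with a hypothetical directory and project ID:
# zfs project -p 1001 -rs /tank/fs/projects/alpha
  set project ID 1001 and the inherit flag on the whole tree
# zfs project -d /tank/fs/projects/alpha
  show the directory's own project ID and inherit flag
# zfs project -c -p 1001 -r /tank/fs/projects/alpha
  report any entries that do not carry project ID 1001 or the inherit flag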
+

+

zfs-projectspace(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-projectspace.8.html b/man/v2.2/8/zfs-projectspace.8.html new file mode 100644 index 000000000..cc3303ad0 --- /dev/null +++ b/man/v2.2/8/zfs-projectspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-projectspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-projectspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + userused@user, + userobjused@user, + userquota@user, + and + userobjquota@user + properties. A usage example follows this option list. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: + type, name, + used, + quota. + The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: + all, + posixuser, smbuser, + posixgroup, smbgroup. The default + is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the project identifier is a + numeral, not a name, so neither the -i option + (SID to POSIX ID translation) nor -n (numeric ID), nor + -t (types) is needed.
+
+
+
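The usage example referenced above, with a hypothetical dataset:
# zfs userspace tank/home
  per-user space usage and quotas on tank/home
# zfs userspace -H -p -o name,used,quota -S used tank/home
  tab-delimited, parsable name/used/quota fields, largest consumers first
# zfs groupspace tank/home
# zfs projectspace tank/home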
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-promote.8.html b/man/v2.2/8/zfs-promote.8.html new file mode 100644 index 000000000..65ee40baf --- /dev/null +++ b/man/v2.2/8/zfs-promote.8.html @@ -0,0 +1,299 @@ + + + + + + + zfs-promote.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-promote.8

+
+ + + + + +
ZFS-PROMOTE(8)System Manager's ManualZFS-PROMOTE(8)
+
+
+

+

zfs-promote — + promote clone dataset to no longer depend on origin + snapshot

+
+
+

+ + + + + +
zfspromote clone
+
+
+

+

The zfs promote + command makes it possible to destroy the dataset that the clone was created + from. The clone parent-child dependency relationship is reversed, so that + the origin dataset becomes a clone of the specified dataset.

+

The snapshot that was cloned, and any snapshots previous to this + snapshot, are now owned by the promoted clone. The space they use moves from + the origin dataset to the promoted clone, so enough space must be available + to accommodate these snapshots. No new space is consumed by this operation, + but the space accounting is adjusted. The promoted clone must not have any + conflicting snapshot names of its own. The zfs + rename subcommand can be used to rename any + conflicting snapshots.

+
+
+

+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+
+

+

zfs-clone(8), + zfs-rename(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-receive.8.html b/man/v2.2/8/zfs-receive.8.html new file mode 100644 index 000000000..6dd3a01bf --- /dev/null +++ b/man/v2.2/8/zfs-receive.8.html @@ -0,0 +1,628 @@ + + + + + + + zfs-receive.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-receive.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+ + + + + +
zfsreceive -c + [-vn] + filesystem|snapshot
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the + destination file system must already exist, and its most recent snapshot + must match the incremental stream's source. For + zvols, the + destination device link is destroyed and recreated, which means the + zvol + cannot be accessed during the receive + operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o + property=value or + -x property is specified, it + applies to the effective value of the property throughout the entire + subtree of replicated datasets. Effective property values will be set + (-o) or inherited (-x) + on the topmost in the replicated subtree. In descendant datasets, if the + property is set by the send stream, it will be overridden by forcing the + property to be inherited from the top‐most file system. Received + properties are retained in spite of being overridden and may be restored + with zfs inherit + -S. Specifying -o + origin= + is a special case because, even if origin is a + read-only property and cannot be set, it's allowed to receive the send + stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + immediately before the receive. When receiving a stream from + zfs send + -R, causes the property to be inherited by all + descendant datasets, as through zfs + inherit property was run on + any descendant datasets that have this property set on the sending + system. +

If the send stream was sent with + -c then overriding the + compression property will have no effect on + received data but the compression property will be + set. To have the data recompressed on receive remove the + -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs + send tank/test@snap1 | + zfs recv + -o + encryption= + -o + = + -o + keylocation=file:///path/to/keyfile
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with + a stream generated by zfs + send -t + token, where the token + is the value of the + receive_resume_token + property of the filesystem or volume which is received into (see the + examples below).

+

To use this flag, the storage pool + must have the + + feature enabled. See zpool-features(7) for details + on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
zfs receive + -c [-vn] + filesystem|snapshot
+
Attempt to repair data corruption in the specified dataset, by using the + provided stream as the source of healthy data. This method of healing can + only heal data blocks present in the stream. Metadata can not be healed by + corrective receive. Running a scrub is recommended post-healing to ensure + all data corruption was repaired. +

It's important to consider why corruption has happened in the + first place. If you have slowly failing hardware - periodically + repairing the data is not going to save you from data loss later on when + the hardware fails completely.

+
+
+
+
+

+
+

+

The following commands send a full stream and then an incremental + stream to a remote machine, restoring them into + + and + , + respectively. + + must contain the file system + , + and must not initially contain + .

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
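As a further, hypothetical example building on the first one: if the transfer of pool/fs@a had been started with zfs receive -s and were interrupted, the saved state on the receiving side could be used to resume the send from where it stopped instead of starting over:
# ssh host zfs get -H -o value receive_resume_token poolB/received/fs
# zfs send -t <token printed by the previous command> |
    ssh host zfs receive -s poolB/received/fs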
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
March 12, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-recv.8.html b/man/v2.2/8/zfs-recv.8.html new file mode 100644 index 000000000..5a104f2a9 --- /dev/null +++ b/man/v2.2/8/zfs-recv.8.html @@ -0,0 +1,628 @@ + + + + + + + zfs-recv.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-recv.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+ + + + + +
zfsreceive -c + [-vn] + filesystem|snapshot
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the + destination file system must already exist, and its most recent snapshot + must match the incremental stream's source. For + zvols, the + destination device link is destroyed and recreated, which means the + zvol + cannot be accessed during the receive + operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o + property=value or + -x property is specified, it + applies to the effective value of the property throughout the entire + subtree of replicated datasets. Effective property values will be set + (-o) or inherited (-x) + on the topmost in the replicated subtree. In descendant datasets, if the + property is set by the send stream, it will be overridden by forcing the + property to be inherited from the top‐most file system. Received + properties are retained in spite of being overridden and may be restored + with zfs inherit + -S. Specifying -o + origin= + is a special case because, even if origin is a + read-only property and cannot be set, it's allowed to receive the send + stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + immediately before the receive. When receiving a stream from + zfs send + -R, causes the property to be inherited by all + descendant datasets, as through zfs + inherit property was run on + any descendant datasets that have this property set on the sending + system. +

If the send stream was sent with + -c then overriding the + compression property will have no effect on + received data but the compression property will be + set. To have the data recompressed on receive remove the + -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs + send tank/test@snap1 | + zfs recv + -o + encryption= + -o + = + -o + keylocation=file:///path/to/keyfile
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with a stream generated by zfs send -t token, where the token is the value of the receive_resume_token property of the filesystem or volume which is received into.

+

To use this flag, the storage pool must have the extensible_dataset feature enabled. See zpool-features(7) for details on ZFS feature flags.
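A minimal sketch of resuming an interrupted receive (names are hypothetical; the token value shown is truncated):
# zfs get -H -o value receive_resume_token poolB/received/fs
1-abc123…
# zfs send -t 1-abc123… | zfs receive -s poolB/received/fs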

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
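For example (hypothetical dataset), discarding the saved state of an interrupted receive:
# zfs receive -A poolB/received/fs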
+
zfs receive + -c [-vn] + filesystem|snapshot
+
Attempt to repair data corruption in the specified dataset by using the provided stream as the source of healthy data. This method of healing can only heal data blocks present in the stream. Metadata cannot be healed by corrective receive. Running a scrub is recommended post-healing to ensure all data corruption was repaired.

It is important to consider why the corruption happened in the first place. If the underlying hardware is slowly failing, periodically repairing the data will not save you from data loss later on, when the hardware fails completely.
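A sketch of a corrective receive, assuming /backup/fs-snap.zstream is a previously saved, healthy send stream of the same snapshot (names and paths are hypothetical):
# zfs receive -c pool/fs@snap < /backup/fs-snap.zstream
# zpool scrub pool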

+
+
+
+
+

+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
March 12, 2023    Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-redact.8.html b/man/v2.2/8/zfs-redact.8.html new file mode 100644 index 000000000..97623e961 --- /dev/null +++ b/man/v2.2/8/zfs-redact.8.html @@ -0,0 +1,836 @@ + + + + + + + zfs-redact.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-redact.8

+
+ + + + + +
ZFS-SEND(8)    System Manager's Manual    ZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfs send [-DLPVbcehnpsvw] [-R [-X dataset[,dataset]…]] [[-I|-i] snapshot] snapshot
+
+ + + + + +
zfs send [-DLPVcensvw] [-i snapshot|bookmark] filesystem|volume|snapshot
+
+ + + + + +
zfs send --redact redaction_bookmark [-DLPVcenpv] [-i snapshot|bookmark] snapshot
+
+ + + + + +
zfs send [-PVenv] -t receive_resume_token
+
+ + + + + +
zfs send [-PVnv] -S filesystem
+
+ + + + + +
zfs redact snapshot redaction_bookmark redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVbcehnpsvw] [-R + [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
, + --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
, + --exclude + dataset[,dataset]…
+
With -R, -X specifies + a set of datasets (and, hence, their descendants), to be excluded from + the send stream. The root dataset may not be excluded. + -X a + -X b is equivalent to + -X + a,b.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
, + --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for blocks which are compressed on disk and in memory (see the compression property for details). If the lz4_compress feature is active on the sending system, then the receiving system must have that feature enabled as well. If the large_blocks feature is enabled on the sending system but the -L option is not supplied in conjunction with -c, then the data will be decompressed before sending so it can be split into smaller block sizes. Streams sent with -c will not have their data recompressed on the receiver side using -o compress=value. The data will stay compressed as it was from the sender. The new compression property will be set for future data. Note that uncompressed data from the sender will still attempt to compress on the receiver, unless you specify -o compress=off.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --holds
+
Generate a stream package that includes any snapshot holds (created with the zfs hold command), indicating to zfs receive that the holds should be applied to the dataset on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
, + --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
  1. To receive, as a clone, an incremental send from the original snapshot to one of the snapshots it was redacted with respect to. In this case, the stream will produce a valid dataset when received because all blocks that were redacted in the parent are guaranteed to be present in the child's send stream. This use case will produce a normal snapshot, which can be used just like other snapshots.
  2. To receive an incremental send from the original snapshot to something redacted with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. In this case, each block that was redacted in the original is still redacted (redacting with respect to additional snapshots causes less data to be redacted (because the snapshots define what is permitted, and everything else is redacted)). This use case will produce a new redacted snapshot.
  3. To receive an incremental send from a redaction bookmark of the original snapshot that was created when redacting with respect to a subset of the set of snapshots the initial snapshot was created with respect to, to anything else. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  5. To receive a full send as a clone of the redacted snapshot. Since the stream is a full send, it definitionally contains all the data needed to create a new dataset. This use case will either produce a normal snapshot or a redacted one, depending on whether the full send stream was redacted.
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
+
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
, + --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
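For example (hypothetical datasets), creating a redaction bookmark named book1 on pool/data@base, using snapshots of two sanitized clones as redaction snapshots:
# zfs redact pool/data@base book1 pool/data_dev@clean pool/data_mr@clean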
+
+
+

+

ZFS has support for a limited version of data subsetting, in the form of redaction. Using the zfs redact command, a redaction bookmark can be created that stores a list of blocks containing sensitive information. When provided to zfs send, this causes a redacted send to occur. Redacted sends omit the blocks containing sensitive information, replacing them with REDACT records. When these send streams are received, a redacted dataset is created. A redacted dataset cannot be mounted by default, since it is incomplete. It can be used to receive other send streams. In this way datasets can be used for data backup and replication, with all the benefits that zfs send and receive have to offer, while protecting sensitive information from being stored on less-trusted machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.
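A condensed sketch of both steps, with hypothetical names (a clone is sanitized, a redaction bookmark is created, then a redacted full send and a normal incremental are received on the target, the incremental arriving as a clone of the redacted snapshot):
# zfs clone pool/data@base pool/data_dev
   remove or overwrite the sensitive files in /pool/data_dev
# zfs snapshot pool/data_dev@clean
# zfs redact pool/data@base book1 pool/data_dev@clean
# zfs send --redact book1 pool/data@base | ssh host zfs receive poolB/data
# zfs send -i pool/data@base pool/data_dev@clean | ssh host zfs receive poolB/data_dev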

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.

+
+
+
+

+

See -v.

+
+
+

+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
July 27, 2023    Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-release.8.html b/man/v2.2/8/zfs-release.8.html new file mode 100644 index 000000000..47430c5d6 --- /dev/null +++ b/man/v2.2/8/zfs-release.8.html @@ -0,0 +1,325 @@ + + + + + + + zfs-release.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-release.8

+
+ + + + + +
ZFS-HOLD(8)    System Manager's Manual    ZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfs hold [-r] tag snapshot
+
+ + + + + +
zfs holds [-rHp] snapshot
+
+ + + + + +
zfs release [-r] tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rHp] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
Prints holds timestamps as unix epoch timestamps.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
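A short sketch of the hold/holds/release cycle, with hypothetical tag and snapshot names:
# zfs hold -r keep pool/users@today
# zfs holds -r pool/users@today
# zfs release -r keep pool/users@today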
+
+
+
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019    Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-rename.8.html b/man/v2.2/8/zfs-rename.8.html new file mode 100644 index 000000000..750855416 --- /dev/null +++ b/man/v2.2/8/zfs-rename.8.html @@ -0,0 +1,375 @@ + + + + + + + zfs-rename.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rename.8

+
+ + + + + +
ZFS-RENAME(8)    System Manager's Manual    ZFS-RENAME(8)
+
+
+

+

zfs-rename — + rename ZFS dataset

+
+
+

+ + + + + +
zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
+
+ + + + + +
zfs rename -p [-f] filesystem|volume filesystem|volume
+
+ + + + + +
zfs rename -u [-f] filesystem filesystem
+
+ + + + + +
zfs rename -r snapshot snapshot
+
+
+

+
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + -p [-f] + filesystem|volume + filesystem|volume
+
 
+
zfs rename + -u [-f] + filesystem filesystem
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
+
+
Force unmount any file systems that need to be unmounted in the + process. This flag has no effect if used together with the + -u flag.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
Do not remount file systems during rename. If a file system's mountpoint property is set to legacy or none, the file system is not unmounted even if this option is not given.
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are + the only dataset that can be renamed recursively.
+
+
+
+

+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+
+ + + + + +
March 16, 2022    Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-rollback.8.html b/man/v2.2/8/zfs-rollback.8.html new file mode 100644 index 000000000..19a63bc04 --- /dev/null +++ b/man/v2.2/8/zfs-rollback.8.html @@ -0,0 +1,299 @@ + + + + + + + zfs-rollback.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rollback.8

+
+ + + + + +
ZFS-ROLLBACK(8)    System Manager's Manual    ZFS-ROLLBACK(8)
+
+
+

+

zfs-rollback — + roll ZFS dataset back to snapshot

+
+
+

+ + + + + +
zfs rollback [-Rfr] snapshot
+
+
+

+

When a dataset is rolled back, all data that has changed since the + snapshot is discarded, and the dataset reverts to the state at the time of + the snapshot. By default, the command refuses to roll back to a snapshot + other than the most recent one. In order to do so, all intermediate + snapshots and bookmarks must be destroyed by specifying the + -r option.

+

The -rR options do not recursively destroy + the child snapshots of a recursive snapshot. Only direct snapshots of the + specified filesystem are destroyed by either of these options. To completely + roll back a recursive snapshot, you must roll back the individual child + snapshots.

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones of + those snapshots.
+
+
Used with the -R option to force an unmount of any + clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
+
+
+
+

+
+

+

The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots:

+
# zfs + rollback -r + pool/home/anne@yesterday
+
+
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022    Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-send.8.html b/man/v2.2/8/zfs-send.8.html new file mode 100644 index 000000000..2bc1a2c8f --- /dev/null +++ b/man/v2.2/8/zfs-send.8.html @@ -0,0 +1,836 @@ + + + + + + + zfs-send.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-send.8

+
+ + + + + +
ZFS-SEND(8)    System Manager's Manual    ZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfs send [-DLPVbcehnpsvw] [-R [-X dataset[,dataset]…]] [[-I|-i] snapshot] snapshot
+
+ + + + + +
zfs send [-DLPVcensvw] [-i snapshot|bookmark] filesystem|volume|snapshot
+
+ + + + + +
zfs send --redact redaction_bookmark [-DLPVcenpv] [-i snapshot|bookmark] snapshot
+
+ + + + + +
zfs send [-PVenv] -t receive_resume_token
+
+ + + + + +
zfs send [-PVnv] -S filesystem
+
+ + + + + +
zfs redact snapshot redaction_bookmark redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVbcehnpsvw] [-R + [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.
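For instance (hypothetical names), an incremental replication of an entire hierarchy between two snapshots, destroying anything on the target that no longer exists on the source:
# zfs send -R -I @weekly-1 pool@weekly-2 | ssh host zfs receive -F poolB/backup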

+
+
, + --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
, + --exclude + dataset[,dataset]…
+
With -R, -X specifies + a set of datasets (and, hence, their descendants), to be excluded from + the send stream. The root dataset may not be excluded. + -X a + -X b is equivalent to + -X + a,b.
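A sketch (hypothetical datasets) of excluding one subtree from a replication stream:
# zfs send -R -X pool/home/private pool/home@backup | ssh host zfs receive -d poolB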
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
, + --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for blocks which are compressed on disk and in memory (see the compression property for details). If the lz4_compress feature is active on the sending system, then the receiving system must have that feature enabled as well. If the large_blocks feature is enabled on the sending system but the -L option is not supplied in conjunction with -c, then the data will be decompressed before sending so it can be split into smaller block sizes. Streams sent with -c will not have their data recompressed on the receiver side using -o compress=value. The data will stay compressed as it was from the sender. The new compression property will be set for future data. Note that uncompressed data from the sender will still attempt to compress on the receiver, unless you specify -o compress=off.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --holds
+
Generate a stream package that includes any snapshot holds (created with the zfs hold command), indicating to zfs receive that the holds should be applied to the dataset on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
, + --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
  1. To receive, as a clone, an incremental send from the original snapshot to one of the snapshots it was redacted with respect to. In this case, the stream will produce a valid dataset when received because all blocks that were redacted in the parent are guaranteed to be present in the child's send stream. This use case will produce a normal snapshot, which can be used just like other snapshots.
  2. To receive an incremental send from the original snapshot to something redacted with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. In this case, each block that was redacted in the original is still redacted (redacting with respect to additional snapshots causes less data to be redacted (because the snapshots define what is permitted, and everything else is redacted)). This use case will produce a new redacted snapshot.
  3. To receive an incremental send from a redaction bookmark of the original snapshot that was created when redacting with respect to a subset of the set of snapshots the initial snapshot was created with respect to, to anything else. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  5. To receive a full send as a clone of the redacted snapshot. Since the stream is a full send, it definitionally contains all the data needed to create a new dataset. This use case will either produce a normal snapshot or a redacted one, depending on whether the full send stream was redacted.
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
+
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
, + --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
+
+
+

+

ZFS has support for a limited version of data subsetting, in the form of redaction. Using the zfs redact command, a redaction bookmark can be created that stores a list of blocks containing sensitive information. When provided to zfs send, this causes a redacted send to occur. Redacted sends omit the blocks containing sensitive information, replacing them with REDACT records. When these send streams are received, a redacted dataset is created. A redacted dataset cannot be mounted by default, since it is incomplete. It can be used to receive other send streams. In this way datasets can be used for data backup and replication, with all the benefits that zfs send and receive have to offer, while protecting sensitive information from being stored on less-trusted machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.

+
+
+
+

+

See -v.

+
+
+

+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
July 27, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-set.8.html b/man/v2.2/8/zfs-set.8.html new file mode 100644 index 000000000..ef55f0e36 --- /dev/null +++ b/man/v2.2/8/zfs-set.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-set.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
+
Update the mountpoint, sharenfs, or sharesmb property without mounting or sharing the dataset (see the sketch below).
+
+
+
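As a brief illustration of -u mentioned above (the dataset name is hypothetical), the following updates where and how a filesystem will be mounted and shared without mounting or sharing it right away:

# zfs set -u mountpoint=/export/projects pool/projects
# zfs set -u sharenfs=on pool/projects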
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs set mountpoint=/export/home pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs set compression=off pool/home
+
# zfs set compression=on pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs set quota=50G pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-share.8.html b/man/v2.2/8/zfs-share.8.html new file mode 100644 index 000000000..ab28de312 --- /dev/null +++ b/man/v2.2/8/zfs-share.8.html @@ -0,0 +1,310 @@ + + + + + + + zfs-share.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-share.8

+
+ + + + + +
ZFS-SHARE(8)System Manager's ManualZFS-SHARE(8)
+
+
+

+

zfs-shareshare + and unshare ZFS filesystems

+
+
+

+ + + + + +
zfsshare [-l] + -a|filesystem
+
+ + + + + +
zfsunshare + -a|filesystem|mountpoint
+
+
+

+
+
zfs share + [-l] + -a|filesystem
+
Shares available ZFS file systems. +
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
+
+
+
zfs unshare + -a|filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
+
+
+
+
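A brief usage sketch with an illustrative dataset name; sharing is driven by the sharenfs and sharesmb properties described in zfsprops(7):

# zfs set sharenfs=on tank/home
# zfs share tank/home
# zfs unshare tank/home
# zfs share -a
  share every filesystem that has sharenfs or sharesmb set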
+
+

+

exports(5), smb.conf(5), + zfsprops(7)

+
+
+ + + + + +
May 17, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-snapshot.8.html b/man/v2.2/8/zfs-snapshot.8.html new file mode 100644 index 000000000..509f20343 --- /dev/null +++ b/man/v2.2/8/zfs-snapshot.8.html @@ -0,0 +1,352 @@ + + + + + + + zfs-snapshot.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-snapshot.8

+
+ + + + + +
ZFS-SNAPSHOT(8)System Manager's ManualZFS-SNAPSHOT(8)
+
+
+

+

zfs-snapshot — + create snapshots of ZFS datasets

+
+
+

+ + + + + +
zfssnapshot [-r] + [-o + property=value]… + dataset@snapname
+
+
+

+

All previous modifications by successful system calls to the file + system are part of the snapshots. Snapshots are taken atomically, so that + all snapshots correspond to the same moment in time. + zfs snap can be used as an + alias for zfs snapshot. See + the Snapshots section of + zfsconcepts(7) for details.

+
+
+ property=value
+
Set the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets
+
+
+
+

+
+

+

The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system.

+
# zfs + snapshot + pool/home/bob@yesterday
+
+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+
+

+

zfs-bookmark(8), zfs-clone(8), + zfs-destroy(8), zfs-diff(8), + zfs-hold(8), zfs-rename(8), + zfs-rollback(8), zfs-send(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-unallow.8.html b/man/v2.2/8/zfs-unallow.8.html new file mode 100644 index 000000000..74609565a --- /dev/null +++ b/man/v2.2/8/zfs-unallow.8.html @@ -0,0 +1,956 @@ + + + + + + + zfs-unallow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unallow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can be specified as a comma-separated list. If neither of the -gu options are specified, then the argument is interpreted preferentially as the keyword everyone, then as a user name, and lastly as a group name. To specify a user or group named "everyone", use the -g or -u options. To specify a group with the same name as a user, use the -g option.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAMETYPENOTES



allowsubcommandMust also have the permission that is being allowed
bookmarksubcommand
clonesubcommandMust also have the create ability and mount ability in + the origin file system
createsubcommandMust also have the mount ability. Must also have the + refreservation ability to create a non-sparse volume.
destroysubcommandMust also have the mount ability
diffsubcommandAllows lookup of paths within a dataset given an object number, and + the ability to create snapshots necessary to zfs diff.
holdsubcommandAllows adding a user hold to a snapshot
load-keysubcommandAllows loading and unloading of encryption key (see zfs + load-key and zfs unload-key).
change-keysubcommandAllows changing an encryption key via zfs change-key.
mountsubcommandAllows mounting/umounting ZFS datasets
promotesubcommandMust also have the mount and promote ability in the + origin file system
receivesubcommandMust also have the mount and create ability
releasesubcommandAllows releasing a user hold which might destroy the snapshot
renamesubcommandMust also have the mount and create ability in the new + parent
rollbacksubcommandMust also have the mount ability
sendsubcommand
sharesubcommandAllows sharing file systems over NFS or SMB protocols
snapshotsubcommandMust also have the mount ability
groupquotaotherAllows accessing any groupquota@ property
groupobjquotaotherAllows accessing any groupobjquota@ + property
groupusedotherAllows reading any groupused@ property
groupobjusedotherAllows reading any groupobjused@ property
userpropotherAllows changing any user property
userquotaotherAllows accessing any userquota@ property
userobjquotaotherAllows accessing any userobjquota@ + property
userusedotherAllows reading any userused@ property
userobjusedotherAllows reading any userobjused@ property
projectobjquotaotherAllows accessing any projectobjquota@ + property
projectquotaotherAllows accessing any projectquota@ + property
projectobjusedotherAllows reading any projectobjused@ + property
projectusedotherAllows reading any projectused@ property
aclinheritproperty
aclmodeproperty
acltypeproperty
atimeproperty
canmountproperty
casesensitivityproperty
checksumproperty
compressionproperty
contextproperty
copiesproperty
dedupproperty
defcontextproperty
devicesproperty
dnodesizeproperty
encryptionproperty
execproperty
filesystem_limitproperty
fscontextproperty
keyformatproperty
keylocationproperty
logbiasproperty
mlslabelproperty
mountpointproperty
nbmandproperty
normalizationproperty
overlayproperty
pbkdf2itersproperty
primarycacheproperty
quotaproperty
readonlyproperty
recordsizeproperty
redundant_metadataproperty
refquotaproperty
refreservationproperty
relatimeproperty
reservationproperty
rootcontextproperty
secondarycacheproperty
setuidproperty
sharenfsproperty
sharesmbproperty
snapdevproperty
snapdirproperty
snapshot_limitproperty
special_small_blocksproperty
syncproperty
utf8onlyproperty
versionproperty
volblocksizeproperty
volmodeproperty
volsizeproperty
vscanproperty
xattrproperty
zonedproperty
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect; for example, a permission granted by an ancestor remains in effect. If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
+
+

+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-unjail.8.html b/man/v2.2/8/zfs-unjail.8.html new file mode 100644 index 000000000..6ab9d12ed --- /dev/null +++ b/man/v2.2/8/zfs-unjail.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-unjail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unjail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail identified by JID jailid or name jailname. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

You cannot attach a jailed dataset's children to another jail. + You can also not attach the root file system of the jail or any dataset + which needs to be mounted before the zfs rc script is run inside the + jail, as it would be attached unmounted until it is mounted from the rc + script inside the jail.

+

To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The quota property cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
+
+
+
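A minimal sketch of delegating a dataset to a jail; the JID and dataset name are illustrative:

# zfs set jailed=on tank/jailed
# zfs jail 42 tank/jailed
  manage tank/jailed from within jail 42
# zfs unjail 42 tank/jailed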
+

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-unload-key.8.html b/man/v2.2/8/zfs-unload-key.8.html new file mode 100644 index 000000000..b626c2982 --- /dev/null +++ b/man/v2.2/8/zfs-unload-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-unload-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unload-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt, the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root. Any descendant datasets will inherit their encryption key from the encryption root by default, meaning that loading, unloading, or changing the key for the encryption root will implicitly do the same for all inheriting datasets. If this inheritance is not desired, simply supply a keyformat when creating the child dataset or use zfs change-key to break an existing relationship, creating a new encryption root on the child. Note that the child's keyformat may match that of the parent while still creating a new encryption root, and that changing the encryption property alone does not create a new encryption root; this would simply use a different cipher suite with the same key as its encryption root. The one exception is that clones will always use their origin's encryption key. As a result of this exception, some encryption-related properties (namely keystatus, keyformat, keylocation, and pbkdf2iters) do not inherit like other ZFS properties and instead use the value determined by their encryption root. Encryption root inheritance can be tracked via the read-only encryptionroot property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3, since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
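A usage sketch tying the subcommands above together; the dataset name and the choice of a passphrase key format are illustrative:

# zfs create -o encryption=on -o keyformat=passphrase pool/secure
# zfs unmount pool/secure
# zfs unload-key pool/secure
# zfs load-key -n pool/secure
  dry-run check that the supplied key is correct
# zfs load-key pool/secure
# zfs mount pool/secure
# zfs change-key -o pbkdf2iters=1000000 pool/secure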
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-unmount.8.html b/man/v2.2/8/zfs-unmount.8.html new file mode 100644 index 000000000..7d4e8fe95 --- /dev/null +++ b/man/v2.2/8/zfs-unmount.8.html @@ -0,0 +1,338 @@ + + + + + + + zfs-unmount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unmount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its mountpoint property, if the path exists and is empty. If mountpoint is set to legacy, the filesystem should instead be mounted using mount(8).
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily for the duration of the mount. See the Temporary Mount Point Properties section of zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
+
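A brief usage sketch with illustrative dataset names and mount points:

# zfs mount
  list all ZFS file systems currently mounted
# zfs mount tank/home
# zfs mount -l tank/secure
  load the encryption key, then mount
# zfs unmount -u tank/secure
  unmount and unload its encryption keys
# zfs unmount /export/home
  unmount by mount point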
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-unzone.8.html b/man/v2.2/8/zfs-unzone.8.html new file mode 100644 index 000000000..8103cc3c1 --- /dev/null +++ b/man/v2.2/8/zfs-unzone.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-unzone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unzone.8

+
+ + + + + +
ZFS-ZONE(8)System Manager's ManualZFS-ZONE(8)
+
+
+

+

zfs-zone, + zfs-unzoneattach and + detach ZFS filesystems to user namespaces

+
+
+

+ + + + + +
zfs zonensfile filesystem
+
+ + + + + +
zfs unzonensfile filesystem
+
+
+

+
+
zfs zone + nsfile filesystem
+
Attach the specified filesystem to the user + namespace identified by nsfile. From now on this + file system tree can be managed from within a user namespace if the + zoned property has been set. +

You cannot attach a zoned dataset's children to another user + namespace. You can also not attach the root file system of the user + namespace or any dataset which needs to be mounted before the zfs + service is run inside the user namespace, as it would be attached + unmounted until it is mounted from the service inside the user + namespace.

+

To allow management of the dataset from within a user namespace, the zoned property has to be set and the user namespace needs access to the /dev/zfs device. The quota property cannot be changed from within a user namespace.

+

After a dataset is attached to a user namespace and the + zoned property is set, a zoned file system cannot be + mounted outside the user namespace, since the user namespace + administrator might have set the mount point to an unacceptable + value.

+
+
zfs unzone + nsfile filesystem
+
Detach the specified filesystem from the user + namespace identified by nsfile.
+
+
+
+

+
+

+

The following example delegates the + tank/users dataset to a user namespace identified by + user namespace file /proc/1234/ns/user.

+
# zfs + zone /proc/1234/ns/user + tank/users
+
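To detach the dataset again, the corresponding command is:

# zfs unzone /proc/1234/ns/user tank/users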
+
+
+

+

zfsprops(7)

+
+
+ + + + + +
June 3, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-upgrade.8.html b/man/v2.2/8/zfs-upgrade.8.html new file mode 100644 index 000000000..ddcb1f4cc --- /dev/null +++ b/man/v2.2/8/zfs-upgrade.8.html @@ -0,0 +1,317 @@ + + + + + + + zfs-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-upgrade.8

+
+ + + + + +
ZFS-UPGRADE(8)System Manager's ManualZFS-UPGRADE(8)
+
+
+

+

zfs-upgrade — + manage on-disk version of ZFS filesystems

+
+
+

+ + + + + +
zfsupgrade
+
+ + + + + +
zfsupgrade -v
+
+ + + + + +
zfsupgrade [-r] + [-V version] + -a|filesystem
+
+
+

+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] + -a|filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of ZFS. zfs send + streams generated from new snapshots of these file systems cannot be + accessed on systems running older versions of ZFS. +

In general, the file system version is independent of the pool + version. See zpool-features(7) for information on + features of ZFS storage pools.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.

+
+
+ version
+
Upgrade to version. If not specified, upgrade to + the most recent version. This option can only be used to increase the + version number, and only up to the most recent version supported by + this version of ZFS.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
+
+
+
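A brief usage sketch with an illustrative file system name:

# zfs upgrade
  list file systems that are not at the most recent version
# zfs upgrade -v
  list the file system versions supported by this release
# zfs upgrade -r pool/home
  upgrade pool/home and all of its descendent file systems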
+
+
+

+

zpool-upgrade(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-userspace.8.html b/man/v2.2/8/zfs-userspace.8.html new file mode 100644 index 000000000..66d4b6ff9 --- /dev/null +++ b/man/v2.2/8/zfs-userspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-userspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-userspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral, not a name; therefore the -i option (SID to POSIX ID translation), the -n option (numeric IDs), and the -t option (types) are not needed.
+
+
+
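A brief usage sketch with an illustrative dataset name:

# zfs userspace tank/home
# zfs userspace -H -p -o name,used tank/home
  tab-delimited, exact numeric output for scripts
# zfs groupspace tank/home
# zfs projectspace tank/home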
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-wait.8.html b/man/v2.2/8/zfs-wait.8.html new file mode 100644 index 000000000..992fa9900 --- /dev/null +++ b/man/v2.2/8/zfs-wait.8.html @@ -0,0 +1,282 @@ + + + + + + + zfs-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-wait.8

+
+ + + + + +
ZFS-WAIT(8)System Manager's ManualZFS-WAIT(8)
+
+
+

+

zfs-waitwait + for activity in ZFS filesystem to stop

+
+
+

+ + + + + +
zfswait [-t + activity[,activity]…] + filesystem
+
+
+

+

Waits until all background activity of the given types has ceased + in the given filesystem. The activity could cease because it has completed + or because the filesystem has been destroyed or unmounted. If no activities + are specified, the command waits until background activity of every type + listed below has ceased. If there is no activity of the given types in + progress, the command returns immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
deleteq
The filesystem's internal delete queue to empty
+
+
+

Note that the internal delete queue does not finish draining until + all large files have had time to be fully destroyed and all open file + handles to unlinked files are closed.

+
+
+
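A brief usage sketch with an illustrative dataset name, waiting for the delete queue of tank/home to drain:

# zfs wait -t deleteq tank/home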

+

lsof(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-zone.8.html b/man/v2.2/8/zfs-zone.8.html new file mode 100644 index 000000000..466ce0b45 --- /dev/null +++ b/man/v2.2/8/zfs-zone.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-zone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-zone.8

+
+ + + + + +
ZFS-ZONE(8)System Manager's ManualZFS-ZONE(8)
+
+
+

+

zfs-zone, + zfs-unzoneattach and + detach ZFS filesystems to user namespaces

+
+
+

+ + + + + +
zfs zonensfile filesystem
+
+ + + + + +
zfs unzonensfile filesystem
+
+
+

+
+
zfs zone + nsfile filesystem
+
Attach the specified filesystem to the user + namespace identified by nsfile. From now on this + file system tree can be managed from within a user namespace if the + zoned property has been set. +

You cannot attach a zoned dataset's children to another user + namespace. You can also not attach the root file system of the user + namespace or any dataset which needs to be mounted before the zfs + service is run inside the user namespace, as it would be attached + unmounted until it is mounted from the service inside the user + namespace.

+

To allow management of the dataset from within a user namespace, the zoned property has to be set and the user namespace needs access to the /dev/zfs device. The quota property cannot be changed from within a user namespace.

+

After a dataset is attached to a user namespace and the + zoned property is set, a zoned file system cannot be + mounted outside the user namespace, since the user namespace + administrator might have set the mount point to an unacceptable + value.

+
+
zfs unzone + nsfile filesystem
+
Detach the specified filesystem from the user + namespace identified by nsfile.
+
+
+
+

+
+

+

The following example delegates the + tank/users dataset to a user namespace identified by + user namespace file /proc/1234/ns/user.

+
# zfs + zone /proc/1234/ns/user + tank/users
+
+
+
+

+

zfsprops(7)

+
+
+ + + + + +
June 3, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs.8.html b/man/v2.2/8/zfs.8.html new file mode 100644 index 000000000..13aba8108 --- /dev/null +++ b/man/v2.2/8/zfs.8.html @@ -0,0 +1,1033 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)System Manager's ManualZFS(8)
+
+
+

+

zfsconfigure + ZFS datasets

+
+
+

+ + + + + +
zfs-?V
+
+ + + + + +
zfsversion
+
+ + + + + +
zfssubcommand + [arguments]
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace:

+

+
pool[/component]/component
+

for example:

+

+
rpool/var/log
+

The maximum length of a dataset name is 255 ASCII characters. Additionally, snapshots are allowed to contain a single @ character, while bookmarks are allowed to contain a single # character. / is used as separator between components. The maximum amount of nesting allowed in a path is 50 levels deep. ZFS tunables (module parameters) are explained in zfs(4).

+

A dataset can be one of the following:

+
+
+
+
file system
Can be mounted within the standard system namespace and behaves like other file systems. While ZFS file systems are designed to be POSIX-compliant, known issues exist that prevent compliance in some cases. Applications that depend on standards conformance might fail due to non-standard behavior when checking file system free space.
+
+
volume
A logical volume exported as a raw or block device. This type of dataset should only be used when a block device is required. File systems are typically used in most environments.
+
+
snapshot
A read-only version of a file system or volume at a given point in time. It is specified as filesystem@name or volume@name.
+
+
bookmark
Much like a snapshot, but without the hold on on-disk data. It can be used as the source of a send (but not for a receive). It is specified as filesystem#name or volume#name.
+
+
+

See zfsconcepts(7) for details.

+
+

+

Properties are divided into two types: native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about properties, see + zfsprops(7).

+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and zvol data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused/projectused data. For an overview of encryption, see zfs-load-key(8).

+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs -V, + --version
+
 
+
zfs version
+
Displays the software version of the zfs userland + utility and the zfs kernel module.
+
+
+

+
+
zfs-list(8)
+
Lists the property information for the given datasets in tabular + form.
+
zfs-create(8)
+
Creates a new ZFS file system or volume.
+
zfs-destroy(8)
+
Destroys the given dataset(s), snapshot(s), or bookmark.
+
zfs-rename(8)
+
Renames the given dataset (filesystem or snapshot).
+
zfs-upgrade(8)
+
Manage upgrading the on-disk version of filesystems.
+
+
+
+

+
+
zfs-snapshot(8)
+
Creates snapshots with the given names.
+
zfs-rollback(8)
+
Roll back the given dataset to a previous snapshot.
+
zfs-hold(8)/zfs-release(8)
+
Add or remove a hold reference to the specified snapshot or snapshots. If a hold exists on a snapshot, attempts to destroy that snapshot by using the zfs destroy command return EBUSY.
+
zfs-diff(8)
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem.
+
+
+
+

+
+
zfs-clone(8)
+
Creates a clone of the given snapshot.
+
zfs-promote(8)
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot.
+
+
+
+

+
+
zfs-send(8)
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark.
+
zfs-receive(8)
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the + zfs-send(8) subcommand, which by default creates a full + stream.
+
zfs-bookmark(8)
+
Creates a new bookmark of the given snapshot or bookmark. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs + send command.
+
zfs-redact(8)
+
Generate a new redaction bookmark. This feature can be used to allow + clones of a filesystem to be made available on a remote system, in the + case where their parent need not (or needs to not) be usable.
+
+
+
+

+
+
zfs-get(8)
+
Displays properties for the given datasets.
+
zfs-set(8)
+
Sets the property or list of properties to the given value(s) for each + dataset.
+
zfs-inherit(8)
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists.
+
+
+
+

+
+
zfs-userspace(8)/zfs-groupspace(8)/zfs-projectspace(8)
+
Displays space consumed by, and quotas on, each user, group, or project in + the specified filesystem or snapshot.
+
zfs-project(8)
+
List, set, or clear project ID and/or inherit flag on the files or + directories.
+
+
+
+

+
+
zfs-mount(8)
+
Displays all ZFS file systems currently mounted, or mount ZFS filesystem + on a path described by its mountpoint property.
+
zfs-unmount(8)
+
Unmounts currently mounted ZFS file systems.
+
+
+
+

+
+
zfs-share(8)
+
Shares available ZFS file systems.
+
zfs-unshare(8)
+
Unshares currently shared ZFS file systems.
+
+
+
+

+
+
zfs-allow(8)
+
Delegate permissions on the specified filesystem or volume.
+
zfs-unallow(8)
+
Remove delegated permissions on the specified filesystem or volume.
+
+
+
+

+
+
zfs-change-key(8)
+
Add or change an encryption key on the specified dataset.
+
zfs-load-key(8)
+
Load the key for the specified encrypted dataset, enabling access.
+
zfs-unload-key(8)
+
Unload a key for the specified dataset, removing the ability to access the + dataset.
+
+
+
+

+
+
zfs-program(8)
+
Execute ZFS administrative operations programmatically via a Lua + script-language channel program.
+
+
+
+

+
+
zfs-jail(8)
+
Attaches a filesystem to a jail.
+
zfs-unjail(8)
+
Detaches a filesystem from a jail.
+
+
+
+

+
+
zfs-wait(8)
+
Wait for background activity in a filesystem to complete.
+
+
+
+
+

+

The zfs utility exits 0 on success, 1 if an error occurs, and 2 if invalid command line options were specified.

+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs + set + mountpoint=/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system.

+
# zfs + snapshot + pool/home/bob@yesterday
+
+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs + set + compression=off + pool/home
+
# zfs + set compression=on + pool/home/anne
+
+
+

+

The following command lists all active file systems and volumes in the system. Snapshots are displayed if the listsnapshots pool property is on. The default is off. See zpoolprops(7) for more information on pool properties.

+
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs + set quota=50G + pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots:

+
# zfs + rollback -r + pool/home/anne@yesterday
+
+
+

+

The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday.

+
# zfs + clone pool/home/bob@yesterday + pool/clone
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to see what has changed between a + prior snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected.

+
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
+

+

The following example creates a bookmark to a snapshot. This + bookmark can then be used instead of a snapshot in send streams.

+
# zfs + bookmark + rpool@snapshot + rpool#bookmark
+
+
+

Setting sharesmb Property Options on a ZFS File System

+

The following example shows how to share an SMB filesystem through ZFS. Note that a user and their password must be given.

+
# smbmount + //127.0.0.1/share_tmp /mnt/tmp + -o + user=workgroup/turbo,password=obrut,uid=1000
+

Minimal /etc/samba/smb.conf configuration + is required, as follows.

+

Samba will need to bind to the loopback interface for the ZFS + utilities to communicate with Samba. This is the default behavior for most + Linux distributions.

+

Samba must be able to authenticate a user. This can be done in a + number of ways (passwd(5), LDAP, + smbpasswd(5), &c.). How to do this is outside the + scope of this document – refer to smb.conf(5) for + more information.

+

See the USERSHARES section + for all configuration options, in case you need to modify any options of the + share afterwards. Do note that any changes done with the + net(8) command will be undone if the share is ever + unshared (like via a reboot).
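For completeness, the SMB share itself is enabled by setting the sharesmb property on the dataset that is to be exported; the dataset name below is only a placeholder:
# zfs set sharesmb=on pool/share_tmp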

+
+
+
+

+
+
+
Use ANSI color in zfs diff + and zfs list output.
+
+
Cause zfs mount to use + mount(8) to mount ZFS datasets. This option is provided + for backwards compatibility with older ZFS versions.
+
+
Tells zfs to set the maximum pipe size for sends/receives. Disabled by default on Linux due to an unfixed deadlock in Linux's pipe size handling code.
+
+
Time, in seconds, to wait for /dev/zfs to appear. Defaults to 10 seconds, max 600 (10 minutes). If <0, wait forever; if 0, don't wait.
+
+
+
+

+

Committed.

+
+
+

+

attr(1), gzip(1), + ssh(1), chmod(2), + fsync(2), stat(2), + write(2), acl(5), + attributes(5), exports(5), + zfsconcepts(7), zfsprops(7), + exportfs(8), mount(8), + net(8), selinux(8), + zfs-allow(8), zfs-bookmark(8), + zfs-change-key(8), zfs-clone(8), + zfs-create(8), zfs-destroy(8), + zfs-diff(8), zfs-get(8), + zfs-groupspace(8), zfs-hold(8), + zfs-inherit(8), zfs-jail(8), + zfs-list(8), zfs-load-key(8), + zfs-mount(8), zfs-program(8), + zfs-project(8), zfs-projectspace(8), + zfs-promote(8), zfs-receive(8), + zfs-redact(8), zfs-release(8), + zfs-rename(8), zfs-rollback(8), + zfs-send(8), zfs-set(8), + zfs-share(8), zfs-snapshot(8), + zfs-unallow(8), zfs-unjail(8), + zfs-unload-key(8), zfs-unmount(8), + zfs-upgrade(8), + zfs-userspace(8), zfs-wait(8), + zpool(8)

+
+
+ + + + + +
May 12, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs_ids_to_path.8.html b/man/v2.2/8/zfs_ids_to_path.8.html new file mode 100644 index 000000000..e8716218a --- /dev/null +++ b/man/v2.2/8/zfs_ids_to_path.8.html @@ -0,0 +1,274 @@ + + + + + + + zfs_ids_to_path.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_ids_to_path.8

+
+ + + + + +
ZFS_IDS_TO_PATH(8)System Manager's ManualZFS_IDS_TO_PATH(8)
+
+
+

+

zfs_ids_to_path — + convert objset and object ids to names and paths

+
+
+

+ + + + + +
zfs_ids_to_path[-v] pool + objset-id object-id
+
+
+

+

The zfs_ids_to_path utility converts the provided objset and object IDs into a path to the file they refer to.

+
+
+
Verbose. Print the dataset name and the file path within the dataset + separately. This will work correctly even if the dataset is not + mounted.
+
+
+
+
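As an illustration, a hypothetical invocation might look like the following; the pool name and the numeric IDs (typically taken from zpool status -v or zdb output) are placeholders:
# zfs_ids_to_path -v tank 84 705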

+

zdb(8), zfs(8)

+
+
+ + + + + +
April 17, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs_prepare_disk.8.html b/man/v2.2/8/zfs_prepare_disk.8.html new file mode 100644 index 000000000..f0881cfea --- /dev/null +++ b/man/v2.2/8/zfs_prepare_disk.8.html @@ -0,0 +1,302 @@ + + + + + + + zfs_prepare_disk.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_prepare_disk.8

+
+ + + + + +
ZFS_PREPARE_DISK(8)System Manager's ManualZFS_PREPARE_DISK(8)
+
+
+

+

zfs_prepare_disk — + special script that gets run before bringing a disk into a + pool

+
+
+

+

zfs_prepare_disk is an optional script + that gets called by libzfs before bringing a disk into a pool. It can be + modified by the user to run whatever commands are necessary to prepare a + disk for inclusion into the pool. For example, users can add lines to + zfs_prepare_disk to do things like update the + drive's firmware or check the drive's health. + zfs_prepare_disk is optional and can be removed if + not needed. libzfs will look for the script at + @zfsexecdir@/zfs_prepare_disk.

+
+

+

zfs_prepare_disk will be passed the + following environment variables:

+

+
+
POOL_NAME
+
+
VDEV_PATH
+
+
VDEV_PREPARE
+
The action being performed ('create', 'add', 'replace', or 'autoreplace'). This can be useful if you only want the script to be run under certain actions.
+
VDEV_UPATH
+
The underlying path to the disk. For multipath this would return one of the /dev/sd* paths to the disk. If the device is not a device mapper device, then VDEV_UPATH just returns the same value as VDEV_PATH.
+
VDEV_ENC_SYSFS_PATH
+
+
+

Note that some of these variables may have a blank value. + POOL_NAME is blank at pool creation time, for + example.

+
+
+
+

+

zfs_prepare_disk runs with a limited + $PATH.

+
+
+

+

zfs_prepare_disk should return 0 on + success, non-zero otherwise. If non-zero is returned, the disk will not be + included in the pool.
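For illustration, a minimal sketch of such a script is shown below; the health check is only an example and would be replaced by whatever preparation steps are appropriate for the site:
#!/bin/sh
# Only act when a disk is being added or used as a replacement.
case "$VDEV_PREPARE" in
    add|replace)
        # Example check; requires smartmontools and may not fit every setup.
        smartctl -H "$VDEV_UPATH" >/dev/null || exit 1
        ;;
esac
# Exit 0 so the disk is accepted into the pool.
exit 0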

+
+
+ + + + + +
August 30, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zgenhostid.8.html b/man/v2.2/8/zgenhostid.8.html new file mode 100644 index 000000000..ffda7cdfa --- /dev/null +++ b/man/v2.2/8/zgenhostid.8.html @@ -0,0 +1,332 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)System Manager's ManualZGENHOSTID(8)
+
+
+

+

zgenhostid — + generate host ID into /etc/hostid

+
+
+

+ + + + + +
zgenhostid[-f] [-o + filename] [hostid]
+
+
+

+

Creates the /etc/hostid file and stores the host ID in it. If a hostid was provided, it is validated and stored; otherwise, an ID is randomly generated.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Allow output overwrite.
+
+ filename
+
Write to filename instead of the default + /etc/hostid.
+
hostid
+
Specifies the value to be placed in /etc/hostid. It should be a number with a value between 1 and 2^32-1. If 0, a random ID is generated. This value must be unique among your systems. It must be an 8-digit-long hexadecimal number, optionally prefixed by "0x".
+
+
+
+

+

/etc/hostid

+
+
+

+
+
Generate a random hostid and store it
+
+
# + zgenhostid
+
+
Record the libc-generated hostid in + /etc/hostid
+
+
# + zgenhostid + "$(hostid)"
+
+
Record a custom hostid (0xdeadbeef) in + /etc/hostid
+
+
# + zgenhostid + deadbeef
+
+
Record a custom hostid (0x01234567) in + /tmp/hostid and overwrite the file + if it exists
+
+
# + zgenhostid -f + -o /tmp/hostid + 0x01234567
+
+
+
+
+

+

genhostid(1), hostid(1), + spl(4)

+
+
+

+

zgenhostid emulates the + genhostid(1) utility and is provided for use on systems + which do not include the utility or do not provide the + sethostid(3) function.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zinject.8.html b/man/v2.2/8/zinject.8.html new file mode 100644 index 000000000..7b812b530 --- /dev/null +++ b/man/v2.2/8/zinject.8.html @@ -0,0 +1,550 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
ZINJECT(8)System Manager's ManualZINJECT(8)
+
+
+

+

zinjectZFS + Fault Injector

+
+
+

+

zinject creates artificial problems in a + ZFS pool by simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+ + + + + +
zinject
+
+
List injection records.
+
+ + + + + +
zinject-b + objset:object:level:start:end + [-f frequency] + -amu [pool]
+
+
Force an error into the pool at a bookmark.
+
+ + + + + +
zinject-c + id|all
+
+
Cancel injection records.
+
+ + + + + +
zinject-d vdev + -A + | + pool
+
+
Force a vdev into the DEGRADED or FAULTED state.
+
+ + + + + +
zinject-d vdev + -D + latency:lanes + pool
+
+
Add an artificial delay to I/O requests on a particular device, such that + the requests take a minimum of latency milliseconds + to complete. Each delay has an associated number of + lanes which defines the number of concurrent I/O + requests that can be processed. +

For example, with a single lane delay of 10 ms + (-D + 10:1), the device will only + be able to service a single I/O request at a time with each request + taking 10 ms to complete. So, if only a single request is submitted + every 10 ms, the average latency will be 10 ms; but if more than one + request is submitted every 10 ms, the average latency will be more than + 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D + 10:2), then the device will + be able to service two requests at a time, each with a minimum latency + of 10 ms. So, if two requests are submitted every 10 ms, then the + average latency will be 10 ms; but if more than two requests are + submitted every 10 ms, the average latency will be more than 10 ms.

+

Also note, these delays are additive. So two invocations of + -D + 10:1 are roughly equivalent + to a single invocation of -D + 10:2. This also means, that + one can specify multiple lanes with differing target latencies. For + example, an invocation of -D + 10:1 followed by + -D + 25:2 will create 3 lanes on + the device: one lane with a latency of 10 ms and two lanes with a 25 ms + latency.

+
+
+ + + + + +
zinject-d vdev + [-e device_error] + [-L label_error] + [-T failure] + [-f frequency] + [-F] pool
+
+
Force a vdev error.
+
+ + + + + +
zinject-I [-s + seconds|-g + txgs] pool
+
+
Simulate a hardware failure that fails to honor a cache flush.
+
+ + + + + +
zinject-p function + pool
+
+
Panic inside the specified function.
+
+ + + + + +
zinject-t + + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-r range] + [-amq] path
+
+
Force an error into the contents of a file.
+
+ + + + + +
zinject-t + + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-amq] path
+
+
Force an error into the metadnode for a file or directory.
+
+ + + + + +
zinject-t mos_type + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-r range] + [-amqu] pool
+
+
Force an error into the MOS of a pool.
+
+
+
+
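As a hedged example of the delay form described above (device and pool names are placeholders), followed by removal of all injection handlers:
# zinject -d sda -D 25:2 tank
# zinject -c all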

+
+
+
Flush the ARC before injection.
+
+ objset:object:level:start:end
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+ dvas
+
Inject the given error only into specific DVAs. The mask should be + specified as a list of 0-indexed DVAs separated by commas + (e.g. + 0,2). This option is not + applicable to logical data errors such as decompress and + decrypt.
+
+ vdev
+
A vdev specified by path or GUID.
+
+ device_error
+
Specify +
+
+
for an ECKSUM error,
+
+
for a data decompression error,
+
+
for a data decryption error,
+
+
to flip a bit in the data after a read,
+
+
for an ECHILD error,
+
+
for an EIO error where reopening the device will succeed, or
+
+
for an ENXIO error where reopening the device will fail.
+
+

For EIO and ENXIO, the "failed" reads or writes + still occur. The probe simply sets the error value reported by the I/O + pipeline so it appears the read or write failed. Decryption errors only + currently work with file data.

+
+
+ frequency
+
Only inject errors a fraction of the time. Expressed as a real number percentage between 0.0001 and 100.
+
+
Fail faster. Do fewer checks.
+
+ txgs
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+ level
+
Inject an error at a particular block level. The default is 0.
+
+ label_error
+
Set the label error region to one of + , + , + , or + .
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+ range
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+ seconds
+
Run for this many seconds before reporting failure.
+
+ failure
+
Set the failure type to one of all, + , + , + , or + .
+
+ mos_type
+
Set this to +
+
+
for any data in the MOS,
+
+
for an object directory,
+
+
for the pool configuration,
+
+
for the block pointer list,
+
+
for the space map,
+
+
for the metaslab, or
+
+
for the persistent error log.
+
+
+
+
Unload the pool after injection.
+
+
+
+

+
+
+
Run zinject in debug mode.
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-add.8.html b/man/v2.2/8/zpool-add.8.html new file mode 100644 index 000000000..24fc01dc1 --- /dev/null +++ b/man/v2.2/8/zpool-add.8.html @@ -0,0 +1,336 @@ + + + + + + + zpool-add.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-add.8

+
+ + + + + +
ZPOOL-ADD(8)System Manager's ManualZPOOL-ADD(8)
+
+
+

+

zpool-addadd + vdevs to ZFS storage pool

+
+
+

+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev
+
+
+

+

Adds the specified virtual devices to the given pool. The vdev specification is described in the Virtual Devices section of zpoolconcepts(7). The behavior of the -f option and the device checks performed are described in the zpool create subcommand.

+
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name + regardless of the /dev/disk path used to open + it.
+
+
Displays the configuration that would be used without actually adding the + vdevs. The actual pool creation can still fail due + to insufficient privileges or device sharing.
+
+
Display real paths for vdevs instead of only the + last component of the path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) manual page for a list of valid properties that can be set. The only property supported at the moment is ashift.
+
+
+
+

+
+

+

The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool.

+
# zpool add tank mirror sda sdb
+
+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool add pool cache sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+
+

+

zpool-attach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-remove(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-attach.8.html b/man/v2.2/8/zpool-attach.8.html new file mode 100644 index 000000000..e40032288 --- /dev/null +++ b/man/v2.2/8/zpool-attach.8.html @@ -0,0 +1,299 @@ + + + + + + + zpool-attach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-attach.8

+
+ + + + + +
ZPOOL-ATTACH(8)System Manager's ManualZPOOL-ATTACH(8)
+
+
+

+

zpool-attach — + attach new device to existing ZFS vdev

+
+
+

+ + + + + +
zpoolattach [-fsw] + [-o + property=value] + pool device new_device
+
+
+

+

Attaches new_device to the existing + device. The existing device cannot be part of a raidz + configuration. If device is not currently part of a + mirrored configuration, device automatically + transforms into a two-way mirror of device and + new_device. If device is part of + a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately and any + running scrub is cancelled.

+
+
+
Forces use of new_device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) manual page for a list of valid properties that can be set. The only property supported at the moment is ashift.
+
+
The new_device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the resilver + completes. Sequential reconstruction is not supported for raidz + configurations.
+
+
Waits until new_device has finished resilvering + before returning.
+
+
+
+
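For example, assuming a pool named tank and placeholder device names, the following converts a single-disk vdev into a two-way mirror and waits for the resilver to finish:
# zpool attach -w tank sda sdb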

+

zpool-add(8), zpool-detach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-replace(8), + zpool-resilver(8)

+
+
+ + + + + +
May 15, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-checkpoint.8.html b/man/v2.2/8/zpool-checkpoint.8.html new file mode 100644 index 000000000..4f828e994 --- /dev/null +++ b/man/v2.2/8/zpool-checkpoint.8.html @@ -0,0 +1,290 @@ + + + + + + + zpool-checkpoint.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-checkpoint.8

+
+ + + + + +
ZPOOL-CHECKPOINT(8)System Manager's ManualZPOOL-CHECKPOINT(8)
+
+
+

+

zpool-checkpoint — + check-point current ZFS storage pool state

+
+
+

+ + + + + +
zpoolcheckpoint [-d + [-w]] pool
+
+
+

+

Checkpoints the current state of pool, which can be later restored by zpool import --rewind-to-checkpoint. The existence of a checkpoint in a pool prohibits the following zpool subcommands: remove, attach, detach, split, and reguid. In addition, it may break reservation boundaries if the pool lacks free space. The zpool status command indicates the existence of a checkpoint or the progress of discarding a checkpoint from a pool. zpool list can be used to check how much space the checkpoint takes from the pool.

+
+
+

+
+
, + --discard
+
Discards an existing checkpoint from pool.
+
, + --wait
+
Waits until the checkpoint has finished being discarded before + returning.
+
+
+
+
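For example, assuming a pool named tank, a checkpoint can be taken and later discarded (waiting for the discard to complete) as follows:
# zpool checkpoint tank
# zpool checkpoint -d -w tank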

+

zfs-snapshot(8), + zpool-import(8), zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-clear.8.html b/man/v2.2/8/zpool-clear.8.html new file mode 100644 index 000000000..37059e92c --- /dev/null +++ b/man/v2.2/8/zpool-clear.8.html @@ -0,0 +1,275 @@ + + + + + + + zpool-clear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-clear.8

+
+ + + + + +
ZPOOL-CLEAR(8)System Manager's ManualZPOOL-CLEAR(8)
+
+
+

+

zpool-clear — + clear device errors in ZFS storage pool

+
+
+

+ + + + + +
zpoolclear pool + [device]…
+
+
+

+

Clears device errors in a pool. If no arguments are specified, all + device errors within the pool are cleared. If one or more devices is + specified, only those errors associated with the specified device or devices + are cleared.

+

If the pool was suspended it will be brought back online provided the devices can be accessed. Pools with multihost enabled which have been suspended cannot be resumed. While the pool was suspended, it may have been imported on another host, and resuming I/O could result in pool damage.

+
+
+
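For example, assuming a pool named tank and a member device sda, errors can be cleared pool-wide or for a single device:
# zpool clear tank
# zpool clear tank sda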

+

zdb(8), zpool-reopen(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-create.8.html b/man/v2.2/8/zpool-create.8.html new file mode 100644 index 000000000..ff456ef26 --- /dev/null +++ b/man/v2.2/8/zpool-create.8.html @@ -0,0 +1,449 @@ + + + + + + + zpool-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-create.8

+
+ + + + + +
ZPOOL-CREATE(8)System Manager's ManualZPOOL-CREATE(8)
+
+
+

+

zpool-create — + create ZFS storage pool

+
+
+

+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]… + [-o + feature@feature=value] + [-o + compatibility=off|legacy|file[,file]…] + [-O + file-system-property=value]… + [-R root] + [-t tname] + pool vdev
+
+
+

+

Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as the underscore ("_"), dash ("-"), colon (":"), space (" "), and period ("."). The pool names mirror, raidz, draid, spare and log are reserved, as are names beginning with mirror, raidz, draid, and spare. The vdev specification is described in the Virtual Devices section of zpoolconcepts(7).

+

The command attempts to verify that each device specified is accessible and not currently in use by another subsystem. However this check is not robust enough to detect simultaneous attempts to use a new device in different pools, even if multihost=enabled. The administrator must ensure that simultaneous invocations of any combination of zpool replace, zpool create, zpool add, or zpool labelclear do not refer to the same device. Using the same device in two pools will result in pool corruption.

+

There are some uses, such as being currently mounted, or specified as the dedicated dump device, that prevent a device from ever being used by ZFS. Other uses, such as having a preexisting UFS file system, can be overridden with -f.

+

The command also checks that the replication strategy for the pool + is consistent. An attempt to combine redundant and non-redundant storage in + a single pool, or to mix disks and files, results in an error unless + -f is specified. The use of differently-sized + devices within a single raidz or mirror group is also flagged as an error + unless -f is specified.

+

Unless the -R option is specified, the default mount point is /pool. The mount point must not exist or must be empty, or else the root dataset will not be able to be mounted. This can be overridden with the -m option.

+

By default all supported features are enabled on the new pool. The -d option and the -o compatibility property (e.g. -o compatibility=2020) can be used to restrict the features that are enabled, so that the pool can be imported on other releases of ZFS.

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled with -o. See + zpool-features(7) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is + /pool or altroot/pool if + altroot is specified. The mount point must be an + absolute path, legacy, or none. For + more information on dataset mount points, see + zfsprops(7).
+
+
Displays the configuration that would be used without actually creating + the pool. The actual pool creation can still fail due to insufficient + privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See zpoolprops(7) for a + list of valid properties that can be set.
+
+ compatibility=off|legacy|file[,file]…
+
Specifies compatibility feature sets. See + zpool-features(7) for more information about + compatibility feature sets.
+
+ feature@feature=value
+
Sets the given pool feature. See the zpool-features(7) + section for a list of valid features that can be set. Value can be either + disabled or enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the pool. + See zfsprops(7) for a list of valid properties that can + be set.
+
+ root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ tname
+
Sets the in-core pool name to tname while the + on-disk name will be the name specified as pool. + This will set the default of the cachefile property to + none. This is intended to handle name space collisions + when creating pools for other systems, such as virtual machines or + physical machines whose pools live on network block devices.
+
+
+
+

+
+

+

The following command creates a pool with a single raidz root vdev + that consists of six disks:

+
# zpool + create tank + raidz sda sdb sdc sdd sde + sdf
+
+
+

+

The following command creates a pool with two mirrors, where each + mirror contains two disks:

+
# zpool + create tank + mirror sda sdb + mirror sdc sdd
+
+
+

+

The following command creates a non-redundant pool using two disk + partitions:

+
# zpool + create tank + sda1 sdb2
+
+
+

+

The following command creates a non-redundant pool using files. + While not recommended, a pool based on files can be useful for experimental + purposes.

+
# zpool + create tank + /path/to/file/a /path/to/file/b
+
+
+

+

The following command creates a new pool with an available hot + spare:

+
# zpool + create tank + mirror sda sdb + spare sdc
+
+
+

+

The following command creates a ZFS storage pool consisting of + two, two-way mirrors and mirrored log devices:

+
# zpool + create pool + mirror sda sdb + mirror sdc sdd log + mirror sde sdf
+
+
+
+

+

zpool-destroy(8), + zpool-export(8), zpool-import(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-destroy.8.html b/man/v2.2/8/zpool-destroy.8.html new file mode 100644 index 000000000..6b45baa42 --- /dev/null +++ b/man/v2.2/8/zpool-destroy.8.html @@ -0,0 +1,278 @@ + + + + + + + zpool-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-destroy.8

+
+ + + + + +
ZPOOL-DESTROY(8)System Manager's ManualZPOOL-DESTROY(8)
+
+
+

+

zpool-destroy — + destroy ZFS storage pool

+
+
+

+ + + + + +
zpooldestroy [-f] + pool
+
+
+

+

Destroys the given pool, freeing up any devices for other use. + This command tries to unmount any active datasets before destroying the + pool.

+
+
+
Forcefully unmount all active datasets.
+
+
+
+

+
+

+

The following command destroys the pool tank + and any datasets contained within:

+
# zpool + destroy -f + tank
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-detach.8.html b/man/v2.2/8/zpool-detach.8.html new file mode 100644 index 000000000..d6f3c8c5f --- /dev/null +++ b/man/v2.2/8/zpool-detach.8.html @@ -0,0 +1,271 @@ + + + + + + + zpool-detach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-detach.8

+
+ + + + + +
ZPOOL-DETACH(8)System Manager's ManualZPOOL-DETACH(8)
+
+
+

+

zpool-detach — + detach device from ZFS mirror

+
+
+

+ + + + + +
zpooldetach pool device
+
+
+

+

Detaches device from a mirror. The operation + is refused if there are no other valid replicas of the data. If + device may be re-added to the pool later on then + consider the zpool offline + command instead.

+
+
+
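For example, the following command (pool and device names are placeholders) detaches sdb from a mirror in the pool tank:
# zpool detach tank sdb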

+

zpool-attach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-remove(8), zpool-replace(8), + zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-events.8.html b/man/v2.2/8/zpool-events.8.html new file mode 100644 index 000000000..f199cf9d1 --- /dev/null +++ b/man/v2.2/8/zpool-events.8.html @@ -0,0 +1,872 @@ + + + + + + + zpool-events.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-events.8

+
+ + + + + +
ZPOOL-EVENTS(8)System Manager's ManualZPOOL-EVENTS(8)
+
+
+

+

zpool-events — + list recent events generated by kernel

+
+
+

+ + + + + +
zpoolevents [-vHf] + [pool]
+
+ + + + + +
zpoolevents -c
+
+
+

+

Lists all recent events generated by the ZFS kernel modules. These + events are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. For + more information about the subclasses and event payloads that can be + generated see EVENTS and the following + sections.

+
+
+

+
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Print the entire payload for each event.
+
+
+
+
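For example, to follow new events in scripted form, or to dump the full payload of recent events for a hypothetical pool named tank:
# zpool events -fH
# zpool events -v tank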

+

These are the different event subclasses. The full event name would be ereport.fs.zfs.SUBCLASS, but only the last part is listed here.

+

+
+
+
Issued when a checksum error has been detected.
+
+
Issued when there is an I/O error in a vdev in the pool.
+
+
Issued when there have been data errors in the pool.
+
+
Issued when an I/O request is determined to be "hung"; this can be caused by lost completion events due to flaky hardware or drivers. See zfs_deadman_failmode in zfs(4) for additional information regarding "hung" I/O detection and configuration.
+
+
Issued when a completed I/O request exceeds the maximum allowed time + specified by the + + module parameter. This can be an indicator of problems with the underlying + storage device. The number of delay events is ratelimited by the + + module parameter.
+
+
Issued every time a vdev change has been made to the pool.
+
+
Issued when a pool cannot be imported.
+
+
Issued when a pool is destroyed.
+
+
Issued when a pool is exported.
+
+
Issued when a pool is imported.
+
+
Issued when a REGUID (a new unique identifier for the pool has been regenerated) has been detected.
+
+
Issued when the vdev is unknown, such as when trying to clear device errors on a vdev that has failed or been kicked from the system/pool and is no longer available.
+
+
Issued when a vdev could not be opened (because it didn't exist for + example).
+
+
Issued when corrupt data have been detected on a vdev.
+
+
Issued when there are no more replicas to sustain the pool. This would + lead to the pool being + .
+
+
Issued when a missing device in the pool has been detected.
+
+
Issued when the system (kernel) has removed a device, and ZFS notices that the device isn't there anymore. This is usually followed by a probe_failure event.
+
+
Issued when the label is OK but invalid.
+
+
Issued when the ashift alignment requirement has increased.
+
+
Issued when a vdev is detached from a mirror (or a spare detached from a vdev where it has been used to replace a failed drive; this only works if the original drive has been re-added).
+
+
Issued when clearing device errors in a pool. Such as running + zpool clear on a device in + the pool.
+
+
Issued when a check to see if a given vdev could be opened is + started.
+
+
Issued when a spare has kicked in to replace a failed device.
+
+
Issued when a vdev can be automatically expanded.
+
+
Issued when there is an I/O failure in a vdev in the pool.
+
+
Issued when a probe fails on a vdev. This would occur if a vdev has been kicked from the system outside of ZFS (such as when the kernel has removed the device).
+
+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+
+
Issued when a resilver is started.
+
+
Issued when the running resilver has finished.
+
+
Issued when a scrub is started on a pool.
+
+
Issued when a pool has finished scrubbing.
+
+
Issued when a scrub is aborted on a pool.
+
+
Issued when a scrub is resumed on a pool.
+
+
Issued when a scrub is paused on a pool.
+
+
 
+
+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with ZEVENT_.

+

+
+
+
Pool name.
+
+
Failmode - wait, continue, or panic. See the failmode property in zpoolprops(7) for more information.
+
+
The GUID of the pool.
+
+
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
+
+
The GUID of the vdev in question (the vdev failing or operated upon with + zpool clear, etc.).
+
+
Type of vdev - disk, file, mirror, etc. See the Virtual Devices section of zpoolconcepts(7) for more information on possible values.
+
+
Full path of the vdev, including any -partX.
+
+
ID of vdev (if any).
+
+
Physical FRU location.
+
+
State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed + to open, 5=faulted, 6=degraded, 7=healthy).
+
+
The ashift value of the vdev.
+
+
The time the last I/O request completed for the specified vdev.
+
+
The time since the last I/O request completed for the specified vdev.
+
+
List of spares, including full path and any -partX.
+
+
GUID(s) of spares.
+
+
How many read errors have been detected on the vdev.
+
+
How many write errors have been detected on the vdev.
+
+
How many checksum errors have been detected on the vdev.
+
+
GUID of the vdev parent.
+
+
Type of parent. See vdev_type.
+
+
Path of the vdev parent (if any).
+
+
ID of the vdev parent (if any).
+
+
The object set number for a given I/O request.
+
+
The object number for a given I/O request.
+
+
The indirect level for the block. Level 0 is the lowest level and includes + data blocks. Values > 0 indicate metadata blocks at the appropriate + level.
+
+
The block ID for a given I/O request.
+
+
The error number for a failure when handling a given I/O request, compatible with errno(3), with the value of ECKSUM used to indicate a ZFS checksum error.
+
+
The offset in bytes of where to write the I/O request for the specified + vdev.
+
+
The size in bytes of the I/O request.
+
+
The current flags describing how the I/O request should be handled. See + the I/O FLAGS section for the full list of I/O + flags.
+
+
The current stage of the I/O in the pipeline. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The time elapsed (in nanoseconds) waiting for the block layer to complete + the I/O request. Unlike zio_delta, this does not include + any vdev queuing time and is therefore solely a measure of the block layer + performance.
+
+
The time when a given I/O request was submitted.
+
+
The time required to service a given I/O request.
+
+
The previous state of the vdev.
+
+
Checksum algorithm used. See zfsprops(7) for more + information on the available checksum algorithms.
+
+
Whether or not the data is byteswapped.
+
+
[start, end) pairs of corruption offsets. Offsets are always aligned on a 64-bit boundary, and can include some gaps of non-corruption. (See bad_ranges_min_gap)
+
+
In order to bound the size of the bad_ranges array, gaps + of non-corruption less than or equal to + bad_ranges_min_gap bytes have been merged with adjacent + corruption. Always at least 8 bytes, since corruption is detected on a + 64-bit word basis.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits in that range which were clear in the + good data and set in the bad data.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits for that range which were set in the + good data and clear in the bad data.
+
+
If this field exists, it is an array of (bad data & ~(good data)); that is, the bits set in the bad data which are cleared in the good data. Each element corresponds to a byte whose offset is in a range in bad_ranges, and the array is ordered by offset. Thus, the first element is the first byte in the first bad_ranges range, and the last element is the last byte in the last bad_ranges range.
+
+
Like bad_set_bits, but contains (good data & ~(bad data)); that is, the bits set in the good data which are cleared in the bad data.
+
+
+
+

+

The ZFS I/O pipeline is comprised of various stages which are + defined below. The individual stages are used to construct these basic I/O + operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on + an event to describe the life cycle of a given I/O request.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StageBit MaskOperations



ZIO_STAGE_OPEN0x00000001RWFCI
ZIO_STAGE_READ_BP_INIT0x00000002R----
ZIO_STAGE_WRITE_BP_INIT0x00000004-W---
ZIO_STAGE_FREE_BP_INIT0x00000008--F--
ZIO_STAGE_ISSUE_ASYNC0x00000010RWF--
ZIO_STAGE_WRITE_COMPRESS0x00000020-W---
ZIO_STAGE_ENCRYPT0x00000040-W---
ZIO_STAGE_CHECKSUM_GENERATE0x00000080-W---
ZIO_STAGE_NOP_WRITE0x00000100-W---
ZIO_STAGE_BRT_FREE0x00000200--F--
ZIO_STAGE_DDT_READ_START0x00000400R----
ZIO_STAGE_DDT_READ_DONE0x00000800R----
ZIO_STAGE_DDT_WRITE0x00001000-W---
ZIO_STAGE_DDT_FREE0x00002000--F--
ZIO_STAGE_GANG_ASSEMBLE0x00004000RWFC-
ZIO_STAGE_GANG_ISSUE0x00008000RWFC-
ZIO_STAGE_DVA_THROTTLE0x00010000-W---
ZIO_STAGE_DVA_ALLOCATE0x00020000-W---
ZIO_STAGE_DVA_FREE0x00040000--F--
ZIO_STAGE_DVA_CLAIM0x00080000---C-
ZIO_STAGE_READY0x00100000RWFCI
ZIO_STAGE_VDEV_IO_START0x00200000RW--I
ZIO_STAGE_VDEV_IO_DONE0x00400000RW--I
ZIO_STAGE_VDEV_IO_ASSESS0x00800000RW--I
ZIO_STAGE_CHECKSUM_VERIFY0x01000000R----
ZIO_STAGE_DONE0x02000000RWFCI
+
+
+

+

Every I/O request in the pipeline contains a set of flags which + describe its function and are used to govern its behavior. These flags will + be set in an event as a zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FlagBit Mask


ZIO_FLAG_DONT_AGGREGATE0x00000001
ZIO_FLAG_IO_REPAIR0x00000002
ZIO_FLAG_SELF_HEAL0x00000004
ZIO_FLAG_RESILVER0x00000008
ZIO_FLAG_SCRUB0x00000010
ZIO_FLAG_SCAN_THREAD0x00000020
ZIO_FLAG_PHYSICAL0x00000040
ZIO_FLAG_CANFAIL0x00000080
ZIO_FLAG_SPECULATIVE0x00000100
ZIO_FLAG_CONFIG_WRITER0x00000200
ZIO_FLAG_DONT_RETRY0x00000400
ZIO_FLAG_NODATA0x00001000
ZIO_FLAG_INDUCE_DAMAGE0x00002000
ZIO_FLAG_IO_ALLOCATING0x00004000
ZIO_FLAG_IO_RETRY0x00008000
ZIO_FLAG_PROBE0x00010000
ZIO_FLAG_TRYHARD0x00020000
ZIO_FLAG_OPTIONAL0x00040000
ZIO_FLAG_DONT_QUEUE0x00080000
ZIO_FLAG_DONT_PROPAGATE0x00100000
ZIO_FLAG_IO_BYPASS0x00200000
ZIO_FLAG_IO_REWRITE0x00400000
ZIO_FLAG_RAW_COMPRESS0x00800000
ZIO_FLAG_RAW_ENCRYPT0x01000000
ZIO_FLAG_GANG_CHILD0x02000000
ZIO_FLAG_DDT_CHILD0x04000000
ZIO_FLAG_GODFATHER0x08000000
ZIO_FLAG_NOPWRITE0x10000000
ZIO_FLAG_REEXECUTED0x20000000
ZIO_FLAG_DELEGATED0x40000000
ZIO_FLAG_FASTWRITE0x80000000
+
+
+

+

zfs(4), zed(8), + zpool-wait(8)

+
+
+ + + + + +
July 11, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-export.8.html b/man/v2.2/8/zpool-export.8.html new file mode 100644 index 000000000..7462208bb --- /dev/null +++ b/man/v2.2/8/zpool-export.8.html @@ -0,0 +1,299 @@ + + + + + + + zpool-export.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-export.8

+
+ + + + + +
ZPOOL-EXPORT(8)System Manager's ManualZPOOL-EXPORT(8)
+
+
+

+

zpool-export — + export ZFS storage pools

+
+
+

+ + + + + +
zpoolexport [-f] + -a|pool
+
+
+

+

Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present.

+

Before exporting the pool, all datasets within the pool are unmounted. A pool cannot be exported if it has a shared spare that is currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, so + that ZFS can label the disks with portable EFI labels. Otherwise, disk + drivers on platforms of different endianness will not recognize the + disks.

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, and allow export of pools with active + shared spares. +

This command will forcefully export the pool even if it has a + shared spare that is currently being used. This may lead to potential + data corruption.

+
+
+
+
+

+
+

+

The following command exports the devices in pool + tank so that they can be relocated or later + imported:

+
# zpool + export tank
+
+
+
+

+

zpool-import(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-get.8.html b/man/v2.2/8/zpool-get.8.html new file mode 100644 index 000000000..63eb380e2 --- /dev/null +++ b/man/v2.2/8/zpool-get.8.html @@ -0,0 +1,389 @@ + + + + + + + zpool-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-get.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolset + property=value + pool vdev
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified vdevs (or all vdevs if + all-vdevs is used) in the specified pool. These + properties are displayed with the following fields: +
+
+
+
Name of vdev.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the vdevprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
+
zpool set + property=value + pool vdev
+
Sets the given property on the specified vdev in the specified pool. See + the vdevprops(7) manual page for more information on + what properties can be set and acceptable values.
+
+
+
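For example, assuming a pool named tank, the following commands retrieve selected pool properties in script-friendly form and set a pool property:
# zpool get -H -o name,value size,health tank
# zpool set autotrim=on tank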
+

+

vdevprops(7), + zpool-features(7), zpoolprops(7), + zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-history.8.html b/man/v2.2/8/zpool-history.8.html new file mode 100644 index 000000000..073b5c177 --- /dev/null +++ b/man/v2.2/8/zpool-history.8.html @@ -0,0 +1,277 @@ + + + + + + + zpool-history.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-history.8

+
+ + + + + +
ZPOOL-HISTORY(8)System Manager's ManualZPOOL-HISTORY(8)
+
+
+

+

zpool-history — + inspect command history of ZFS storage pools

+
+
+

+ + + + + +
zpoolhistory [-il] + [pool]…
+
+
+

+

Displays the command history of the specified pool(s) or all pools + if no pool is specified.

+
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which in addition to standard format + includes, the user name, the hostname, and the zone in which the operation + was performed.
+
+
+
+
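For example, assuming a pool named tank, the full history including internally logged events and long-format records could be shown with:
# zpool history -il tank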

+

zpool-checkpoint(8), + zpool-events(8), zpool-status(8), + zpool-wait(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-import.8.html b/man/v2.2/8/zpool-import.8.html new file mode 100644 index 000000000..188d348d1 --- /dev/null +++ b/man/v2.2/8/zpool-import.8.html @@ -0,0 +1,575 @@ + + + + + + + zpool-import.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-import.8

+
+ + + + + +
ZPOOL-IMPORT(8)System Manager's ManualZPOOL-IMPORT(8)
+
+
+

+

zpool-import — + import ZFS storage pools or list available pools

+
+
+

+ + + + + +
zpoolimport [-D] + [-d + dir|device]…
+
+ + + + + +
zpoolimport -a + [-DflmN] [-F + [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root]
+
+ + + + + +
zpoolimport [-Dflmt] + [-F [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
+
+

+
+
zpool import + [-D] [-d + dir|device]…
+
Lists pools available to import. If the -d or + -c options are not specified, this command + searches for devices using libblkid on Linux and geom on + FreeBSD. The -d option can + be specified multiple times, and all directories are searched. If the + device appears to be part of an exported pool, this command displays a + summary of the pool with the name of the pool, a numeric identifier, as + well as the vdev layout and current health of the device for each device + or file. Destroyed pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified. +

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DflmN] + [-F [-nTX]] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Rewinds pool to the checkpointed state. Once the pool is imported with this flag there is no way to undo the rewind. All changes and data that were written after the checkpoint are lost! The only exception is when the readonly mounting option is enabled. In this case, the checkpointed state of the pool is opened and an administrator can see what the pool would look like if they were to fully rewind.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dflmt] [-F + [-nTX]] [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. + : + This option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set + -o + cachefile=none when not explicitly + specified.
+
+
+
+
+
+

+
+

+

The following command displays available pools, and then imports + the pool tank for use on the system. The results from + this command are similar to the following:

+
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
+
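As a further illustration (the pool name tank is assumed), the -F and -n options described above can be combined to test whether a damaged pool could be made importable again without actually performing the recovery:
# zpool import -F -n tank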
+

+

zpool-export(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-initialize.8.html b/man/v2.2/8/zpool-initialize.8.html new file mode 100644 index 000000000..683965b2d --- /dev/null +++ b/man/v2.2/8/zpool-initialize.8.html @@ -0,0 +1,298 @@ + + + + + + + zpool-initialize.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-initialize.8

+
+ + + + + +
ZPOOL-INITIALIZE(8)System Manager's ManualZPOOL-INITIALIZE(8)
+
+
+

+

zpool-initialize — + write to unallocated regions of ZFS storage pool

+
+
+

+ + + + + +
zpoolinitialize + [-c|-s + |-u] [-w] + pool [device]…
+
+
+

+

Begins initializing by writing to all unallocated regions on the + specified devices, or all eligible devices in the pool if no individual + devices are specified. Only leaf data or log devices may be initialized.

+
+
, + --cancel
+
Cancel initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no cancellation + will occur on any device.
+
, + --suspend
+
Suspend initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no suspension will + occur on any device. Initializing can then be resumed by running + zpool initialize with no + flags on the relevant target devices.
+
, + --uninit
+
Clears the initialization state on the specified devices, or all eligible devices if none are specified. If the devices are being actively initialized the command will fail. After being cleared, zpool initialize with no flags can be used to re-initialize all unallocated regions on the relevant target devices.
+
, + --wait
+
Wait until the devices have finished initializing before returning.
+
+
+
+
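For example, assuming a pool named tank containing a device sdb, initialization could be started on all eligible devices and later cancelled on that single device:
# zpool initialize tank
# zpool initialize -c tank sdb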

+

zpool-add(8), zpool-attach(8), + zpool-create(8), zpool-online(8), + zpool-replace(8), zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-iostat.8.html b/man/v2.2/8/zpool-iostat.8.html new file mode 100644 index 000000000..bf08e3628 --- /dev/null +++ b/man/v2.2/8/zpool-iostat.8.html @@ -0,0 +1,490 @@ + + + + + + + zpool-iostat.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-iostat.8

+
+ + + + + +
ZPOOL-IOSTAT(8)System Manager's ManualZPOOL-IOSTAT(8)
+
+
+

+

zpool-iostat — + display logical I/O statistics for ZFS storage + pools

+
+
+

+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [pool…|[pool + vdev…]|vdev…] + [interval [count]]
+
+
+

+

Displays logical I/O statistics for the given pools/vdevs. + Physical I/O statistics may be observed via iostat(1). If + writes are located nearby, they may be merged into a single larger + operation. Additional I/O may be generated depending on the level of vdev + redundancy. To filter output, you may pass in a list of pools, a pool and + list of vdevs in that pool, or a list of any vdevs from any pool. If no + items are specified, statistics for every pool in the system are shown. When + given an interval, the statistics are printed every + interval seconds until killed. If + -n flag is specified the headers are displayed only + once, otherwise they are displayed periodically. If + count is specified, the command exits after + count reports are printed. The first report printed is + always the statistics since boot regardless of whether + interval and count are passed. + However, this behavior can be suppressed with the -y + flag. Also note that the units of + , + , + … that + are printed in the report are in base 1024. To get the raw values, use the + -p flag.

+
+
+ [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool iostat + output. Users can run any script found in their + ~/.zpool.d directory or from the system + /etc/zfs/zpool.d directory. Script names + containing the slash + () character + are not allowed. The default search path can be overridden by setting the + + environment variable. A privileged user can only run + -c if they have the + + environment variable set. If a script requires the use of a privileged + command, like smartctl(8), then it's recommended you + allow the user access to it in /etc/sudoers or add + the user to the /etc/sudoers.d/zfs file. +

If -c is passed without a script name, + it prints a list of all scripts. -c also sets + verbose mode + (-v).

+

Script output should be in the form of "name=value". + The column name is set to "name" and the value is set to + "value". Multiple lines can be used to output multiple + columns. The first line of output not in the "name=value" + format is displayed without a column title, and no more output after + that is displayed. This can be useful for printing error messages. Blank + or NULL values are printed as a '-' to make output AWKable.

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
Underlying path to the vdev (/dev/sd*). For + use with device mapper, multipath, or partitioned vdevs.
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Print headers only once when passed
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Print request size histograms for the leaf vdev's I/O. This includes + histograms of individual I/O (ind) and aggregate I/O (agg). These stats + can be useful for observing how well I/O aggregation is working. Note that + TRIM I/O may exceed 16M, but will be counted as 16M.
+
+
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
+
+
Normally the first line of output reports the statistics since boot: + suppress it.
+
+
Display latency histograms: +
+
+
Total I/O time (queuing + disk I/O time).
+
+
Disk I/O time (time reading/writing the disk).
+
+
Amount of time I/O spent in synchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in asynchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in scrub queue. Does not include disk + time.
+
+
Amount of time I/O spent in rebuild queue. Does not include disk + time.
+
+
+
+
Include average latency statistics: +
+
+
Average total I/O time (queuing + disk I/O time).
+
+
Average disk I/O time (time reading/writing the disk).
+
+
Average amount of time I/O spent in synchronous priority queues. Does + not include disk time.
+
+
Average amount of time I/O spent in asynchronous priority queues. Does + not include disk time.
+
+
Average queuing time in scrub queue. Does not include disk time.
+
+
Average queuing time in trim queue. Does not include disk time.
+
+
Average queuing time in rebuild queue. Does not include disk + time.
+
+
+
+
Include active queue statistics. Each priority queue has both pending + () + and active + () + I/O requests. Pending requests are waiting to be issued to the disk, and + active requests have been issued to disk and are waiting for completion. + These stats are broken out by priority queue: +
+
+
Current number of entries in synchronous priority queues.
+
+
Current number of entries in asynchronous priority queues.
+
+
Current number of entries in scrub queue.
+
+
Current number of entries in trim queue.
+
+
Current number of entries in rebuild queue.
+
+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.

+
+
+
+
+

+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool + add pool + + sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+
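As an additional illustration (the pool name tank is assumed), the latency and queue statistics described above can be requested together and refreshed every five seconds:
# zpool iostat -lq tank 5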

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+

+

iostat(1), smartctl(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-labelclear.8.html b/man/v2.2/8/zpool-labelclear.8.html new file mode 100644 index 000000000..236093b79 --- /dev/null +++ b/man/v2.2/8/zpool-labelclear.8.html @@ -0,0 +1,275 @@ + + + + + + + zpool-labelclear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-labelclear.8

+
+ + + + + +
ZPOOL-LABELCLEAR(8)System Manager's ManualZPOOL-LABELCLEAR(8)
+
+
+

+

zpool-labelclear — + remove ZFS label information from device

+
+
+

+ + + + + +
zpoollabelclear [-f] + device
+
+
+

+

Removes ZFS label information from the specified + device. If the device is a cache + device, it also removes the L2ARC header (persistent L2ARC). The + device must not be part of an active pool + configuration.

+
+
+
Treat exported or foreign devices as inactive.
+
+
+
+
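For example, assuming /dev/sdc is a disk that previously belonged to an exported pool, its label could be cleared with:
# zpool labelclear -f /dev/sdc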

+

zpool-destroy(8), + zpool-detach(8), zpool-remove(8), + zpool-replace(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-list.8.html b/man/v2.2/8/zpool-list.8.html new file mode 100644 index 000000000..422cfa6f3 --- /dev/null +++ b/man/v2.2/8/zpool-list.8.html @@ -0,0 +1,354 @@ + + + + + + + zpool-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-list.8

+
+ + + + + +
ZPOOL-LIST(8)System Manager's ManualZPOOL-LIST(8)
+
+
+

+

zpool-list — list information about ZFS storage pools

+
+
+

+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]…] + [-T u|d] + [pool]… [interval + [count]]
+
+
+

+

Lists the given pools along with a health status and space usage. + If no pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until killed. If + count is specified, the command exits after + count reports are printed.

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+ property
+
Comma-separated list of properties to display. See the + zpoolprops(7) manual page for a list of valid + properties. The default list is + , + , + , + , + , + , + , + , + , + .
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs within + the pool, in addition to the pool-wide statistics.
+
+
+
+

+
+

+

The following command lists all available pools on the system. In + this case, the pool zion is faulted due to a missing + device. The results from this command are similar to the following:

+
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
+

+

The following command displays the detailed information for the + pool data. This pool is comprised of a single raidz + vdev where one of its devices increased its capacity by 10 GiB. In this + example, the pool will not be able to utilize this extra capacity until all + the devices under the raidz vdev have been expanded.

+
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
+
+

+

zpool-import(8), + zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-offline.8.html b/man/v2.2/8/zpool-offline.8.html new file mode 100644 index 000000000..dc9f84776 --- /dev/null +++ b/man/v2.2/8/zpool-offline.8.html @@ -0,0 +1,305 @@ + + + + + + + zpool-offline.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-offline.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline [-ft] + pool device
+
+ + + + + +
zpoolonline [-e] + pool device
+
+
+

+
+
zpool offline + [-ft] pool + device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
+
+
+
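For example, assuming a pool named tank containing a device sda, the device could be taken offline temporarily and later brought back online with expansion:
# zpool offline -t tank sda
# zpool online -e tank sda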

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-online.8.html b/man/v2.2/8/zpool-online.8.html new file mode 100644 index 000000000..3fb58a1d6 --- /dev/null +++ b/man/v2.2/8/zpool-online.8.html @@ -0,0 +1,305 @@ + + + + + + + zpool-online.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-online.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline [-ft] + pool device
+
+ + + + + +
zpoolonline [-e] + pool device
+
+
+

+
+
zpool offline + [-ft] pool + device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-reguid.8.html b/man/v2.2/8/zpool-reguid.8.html new file mode 100644 index 000000000..4b71b2610 --- /dev/null +++ b/man/v2.2/8/zpool-reguid.8.html @@ -0,0 +1,268 @@ + + + + + + + zpool-reguid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reguid.8

+
+ + + + + +
ZPOOL-REGUID(8)System Manager's ManualZPOOL-REGUID(8)
+
+
+

+

zpool-reguid — + generate new unique identifier for ZFS storage + pool

+
+
+

+ + + + + +
zpoolreguid pool
+
+
+

+

Generates a new unique identifier for the pool. You must ensure + that all devices in this pool are online and healthy before performing this + action.

+
+
+
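For example, assuming a healthy pool named tank:
# zpool reguid tank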

+

zpool-export(8), + zpool-import(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-remove.8.html b/man/v2.2/8/zpool-remove.8.html new file mode 100644 index 000000000..1db1d2658 --- /dev/null +++ b/man/v2.2/8/zpool-remove.8.html @@ -0,0 +1,363 @@ + + + + + + + zpool-remove.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-remove.8

+
+ + + + + +
ZPOOL-REMOVE(8)System Manager's ManualZPOOL-REMOVE(8)
+
+
+

+

zpool-remove — + remove devices from ZFS storage pool

+
+
+

+ + + + + +
zpoolremove [-npw] + pool device
+
+ + + + + +
zpoolremove -s + pool
+
+
+

+
+
zpool remove + [-npw] pool + device
+
Removes the specified device from the pool. This command supports removing + hot spare, cache, log, and both mirrored and non-redundant primary + top-level vdevs, including dedup and special vdevs. +

Top-level vdevs can only be removed if the primary pool + storage does not contain a top-level raidz vdev, all top-level vdevs + have the same sector size, and the keys for all encrypted datasets are + loaded.

+

Removing a top-level vdev reduces the + total amount of space in the storage pool. The specified device will be + evacuated by copying all allocated space from it to the other devices in + the pool. In this case, the zpool + remove command initiates the removal and + returns, while the evacuation continues in the background. The removal + progress can be monitored with zpool + status. If an I/O error is encountered during + the removal process it will be cancelled. The + + feature flag must be enabled to remove a top-level vdev, see + zpool-features(7).

+

A mirrored top-level device (log or data) can be removed by specifying the top-level mirror itself. Non-log devices or data devices that are part of a mirrored configuration can be removed using the zpool detach command.

+
+
+
Do not actually perform the removal ("No-op"). Instead, + print the estimated amount of memory that will be used by the mapping + table after the removal completes. This is nonzero only for top-level + vdevs.
+
+
+
+
Used in conjunction with the -n flag, displays + numbers as parsable (exact) values.
+
+
Waits until the removal has completed before returning.
+
+
+
zpool remove + -s pool
+
Stops and cancels an in-progress removal of a top-level vdev.
+
+
+
+

+
+

+

The following commands remove the mirrored log device + + and mirrored top-level data device + .

+

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
# zpool + remove tank + mirror-2
+

The command to remove the mirrored data + mirror-1 is:

+
# zpool + remove tank + mirror-1
+
+
+
+

+

zpool-add(8), zpool-detach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-replace(8), zpool-split(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-reopen.8.html b/man/v2.2/8/zpool-reopen.8.html new file mode 100644 index 000000000..0f5bf8c53 --- /dev/null +++ b/man/v2.2/8/zpool-reopen.8.html @@ -0,0 +1,270 @@ + + + + + + + zpool-reopen.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reopen.8

+
+ + + + + +
ZPOOL-REOPEN(8)System Manager's ManualZPOOL-REOPEN(8)
+
+
+

+

zpool-reopen — + reopen vdevs associated with ZFS storage pools

+
+
+

+ + + + + +
zpoolreopen [-n] + [pool]…
+
+
+

+

Reopen all vdevs associated with the specified pools, or all pools + if none specified.

+
+
+

+
+
+
Do not restart an in-progress scrub operation. This is not recommended and + can result in partially resilvered devices unless a second scrub is + performed.
+
+
+
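For example, assuming a pool named tank, its vdevs could be reopened without restarting an in-progress scrub:
# zpool reopen -n tank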
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-replace.8.html b/man/v2.2/8/zpool-replace.8.html new file mode 100644 index 000000000..3fac55fef --- /dev/null +++ b/man/v2.2/8/zpool-replace.8.html @@ -0,0 +1,304 @@ + + + + + + + zpool-replace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-replace.8

+
+ + + + + +
ZPOOL-REPLACE(8)System Manager's ManualZPOOL-REPLACE(8)
+
+
+

+

zpool-replace — + replace one device with another in ZFS storage + pool

+
+
+

+ + + + + +
zpoolreplace [-fsw] + [-o + property=value] + pool device + [new-device]
+
+
+

+

Replaces device with new-device. This is equivalent to attaching new-device, waiting for it to resilver, and then detaching device. Any in-progress scrub will be cancelled.

+

The size of new-device must be greater than + or equal to the minimum size of all the devices in a mirror or raidz + configuration.

+

new-device is required if the pool is not + redundant. If new-device is not specified, it defaults + to device. This form of replacement is useful after an + existing disk has failed and has been physically replaced. In this case, the + new disk may have the same /dev path as the old + device, even though it is actually a different disk. ZFS recognizes + this.

+
+
+
Forces use of new-device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) + manual page for a list of valid properties that can be set. The only + property supported at the moment is + .
+
+
The new-device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the resilver + completes. Sequential reconstruction is not supported for raidz + configurations.
+
+
Waits until the replacement has completed before returning.
+
+
+
+
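For example, assuming a pool named tank in which a failed disk sda is replaced by a new disk sdc, waiting for the resilver to finish before returning:
# zpool replace -w tank sda sdc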

+

zpool-detach(8), + zpool-initialize(8), zpool-online(8), + zpool-resilver(8)

+
+
+ + + + + +
May 29, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-resilver.8.html b/man/v2.2/8/zpool-resilver.8.html new file mode 100644 index 000000000..dd5ea8b10 --- /dev/null +++ b/man/v2.2/8/zpool-resilver.8.html @@ -0,0 +1,272 @@ + + + + + + + zpool-resilver.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-resilver.8

+
+ + + + + +
ZPOOL-RESILVER(8)System Manager's ManualZPOOL-RESILVER(8)
+
+
+

+

zpool-resilver — + resilver devices in ZFS storage pools

+
+
+

+ + + + + +
zpoolresilver pool
+
+
+

+

Starts a resilver of the specified pools. If an existing resilver + is already running it will be restarted from the beginning. Any drives that + were scheduled for a deferred resilver will be added to the new one. This + requires the + + pool feature.

+
+
+
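For example, assuming a pool named tank with a deferred resilver pending:
# zpool resilver tank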

+

zpool-iostat(8), + zpool-online(8), zpool-reopen(8), + zpool-replace(8), zpool-scrub(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-scrub.8.html b/man/v2.2/8/zpool-scrub.8.html new file mode 100644 index 000000000..258255e07 --- /dev/null +++ b/man/v2.2/8/zpool-scrub.8.html @@ -0,0 +1,362 @@ + + + + + + + zpool-scrub.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-scrub.8

+
+ + + + + +
ZPOOL-SCRUB(8)System Manager's ManualZPOOL-SCRUB(8)
+
+
+

+

zpool-scrub — + begin or resume scrub of ZFS storage pools

+
+
+

+ + + + + +
zpoolscrub + [-s|-p] + [-w] [-e] + pool
+
+
+

+

Begins a scrub or resumes a paused scrub. The scrub examines all + data in the specified pools to verify that it checksums correctly. For + replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any + damage discovered during the scrub. The zpool + status command reports the progress of the scrub and + summarizes the results of the scrub upon completion.

+

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be out + of date (for example, when attaching a new device to a mirror or replacing + an existing device), whereas scrubbing examines all data to discover silent + errors due to hardware faults or disk failure.

+

When scrubbing a pool with encrypted filesystems the keys do not + need to be loaded. However, if the keys are not loaded and an unrepairable + checksum error is detected the file name cannot be included in the + zpool status + -v verbose error report.

+

Because scrubbing and resilvering are I/O-intensive operations, + ZFS only allows one at a time.

+

A scrub is split into two parts: metadata scanning and block + scrubbing. The metadata scanning sorts blocks into large sequential ranges + which can then be read much more efficiently from disk when issuing the + scrub I/O.

+

If a scrub is paused, the zpool + scrub resumes it. If a resilver is in progress, ZFS + does not allow a scrub to be started until the resilver completes.

+

Note that, due to changes in pool data on a live system, it is + possible for scrubs to progress slightly beyond 100% completion. During this + period, no completion time estimate will be provided.

+
+
+

+
+
+
Stop scrubbing.
+
+
Pause scrubbing. Scrub pause state and progress are periodically synced to + disk. If the system is restarted or pool is exported during a paused + scrub, even after import, scrub will remain paused until it is resumed. + Once resumed the scrub will pick up from the place where it was last + checkpointed to disk. To resume a paused scrub issue + zpool scrub or + zpool scrub + -e again.
+
+
Wait until scrub has completed before returning.
+
+
Only scrub files with known data errors as reported by + zpool status + -v. The pool must have been scrubbed at least once + with the + + feature enabled to use this option. Error scrubbing cannot be run + simultaneously with regular scrubbing or resilvering, nor can it be run + when a regular scrub is paused.
+
+
+
+

+
+

+

Status of pool with ongoing scrub:

+

+
+
# zpool status
+  ...
+  scan: scrub in progress since Sun Jul 25 16:07:49 2021
+        403M / 405M scanned at 100M/s, 68.4M / 405M issued at 10.0M/s
+        0B repaired, 16.91% done, 00:00:04 to go
+  ...
+
+

Where metadata which references 403M of file data has been scanned + at 100M/s, and 68.4M of that file data has been scrubbed sequentially at + 10.0M/s.

+
+
+
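As a further illustration (the pool name tank is assumed), a scrub can be started, paused, and later resumed:
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank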
+

+

On machines using systemd, scrub timers can be enabled on a per-pool basis. weekly and monthly timer units are provided.

+
+
+
systemctl enable + zfs-scrub-weekly@rpool.timer + --now
+
+
systemctl + enable + zfs-scrub-monthly@otherpool.timer + --now
+
+
+
+

+

systemd.timer(5), + zpool-iostat(8), + zpool-resilver(8), + zpool-status(8)

+
+
+ + + + + +
June 22, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-set.8.html b/man/v2.2/8/zpool-set.8.html new file mode 100644 index 000000000..7c822206e --- /dev/null +++ b/man/v2.2/8/zpool-set.8.html @@ -0,0 +1,389 @@ + + + + + + + zpool-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-set.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolset + property=value + pool vdev
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified vdevs (or all vdevs if + all-vdevs is used) in the specified pool. These + properties are displayed with the following fields: +
+
+
+
Name of vdev.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the vdevprops(7) manual page for more information on the available vdev properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
+
zpool set + property=value + pool vdev
+
Sets the given property on the specified vdev in the specified pool. See + the vdevprops(7) manual page for more information on + what properties can be set and acceptable values.
+
+
+
+

+

vdevprops(7), + zpool-features(7), zpoolprops(7), + zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-split.8.html b/man/v2.2/8/zpool-split.8.html new file mode 100644 index 000000000..1a1aac007 --- /dev/null +++ b/man/v2.2/8/zpool-split.8.html @@ -0,0 +1,317 @@ + + + + + + + zpool-split.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-split.8

+
+ + + + + +
ZPOOL-SPLIT(8)System Manager's ManualZPOOL-SPLIT(8)
+
+
+

+

zpool-split — + split devices off ZFS storage pool, creating new + pool

+
+
+

+ + + + + +
zpoolsplit [-gLlnP] + [-o + property=value]… + [-R root] + pool newpool + [device]…
+
+
+

+

Splits devices off pool creating + newpool. All vdevs in pool must + be mirrors and the pool must not be in the process of resilvering. At the + time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool.

+

The optional device specification causes the specified device(s) to be included in the new pool and, should any devices remain unspecified, the last device in each mirror is used, as it would be by default.

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Indicates that this command will request encryption keys for all encrypted + datasets it attempts to mount as it is bringing the new pool online. Note + that if any datasets have + =, + this command will block waiting for the keys to be entered. Without this + flag, encrypted datasets will be left unavailable until the keys are + loaded.
+
+
Do a dry-run ("No-op") split: do not actually perform it. Print + out the expected configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ property=value
+
Sets the specified property for newpool. See the + zpoolprops(7) manual page for more information on the + available pool properties.
+
+ root
+
Set + + for newpool to root and + automatically import it.
+
+
+
+
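For example, assuming a mirrored pool named tank, the expected layout of a new pool named newtank could be previewed with a dry run and the split then performed:
# zpool split -n tank newtank
# zpool split tank newtank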

+

zpool-import(8), + zpool-list(8), zpool-remove(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-status.8.html b/man/v2.2/8/zpool-status.8.html new file mode 100644 index 000000000..b16800ae7 --- /dev/null +++ b/man/v2.2/8/zpool-status.8.html @@ -0,0 +1,369 @@ + + + + + + + zpool-status.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-status.8

+
+ + + + + +
ZPOOL-STATUS(8)System Manager's ManualZPOOL-STATUS(8)
+
+
+

+

zpool-status — + show detailed health status for ZFS storage + pools

+
+
+

+ + + + + +
zpoolstatus [-DigLpPstvx] + [-T u|d] + [-c + [SCRIPT1[,SCRIPT2]…]] + [pool]… [interval + [count]]
+
+
+

+

Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in the + system is displayed. For more information on pool and device health, see the + Device Failure and + Recovery section of zpoolconcepts(7).

+

If a scrub or resilver is in progress, this command reports the + percentage done and the estimated time to completion. Both of these are only + approximate, because the amount of data in the pool and the other workloads + on the system can change.

+
+
+ [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool status + output. See the -c option of + zpool iostat for complete + details.
+
+
Display vdev initialization status.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the number of leaf vdev slow I/O operations. This is the number of + I/O operations that didn't complete in + + milliseconds + ( + by default). This does not necessarily mean the + I/O operations failed to complete, just took an unreasonably long amount + of time. This may indicate a problem with the underlying storage.
+
+
Display vdev TRIM status.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Displays verbose data error information, printing out a complete list of + all data errors since the last complete pool scrub. If the head_errlog + feature is enabled and files containing errors have been removed then the + respective filenames will not be reported in subsequent runs of this + command.
+
+
Only display status for pools that are exhibiting errors or are otherwise + unavailable. Warnings about pools not using the latest on-disk format will + not be included.
+
+
+
+

+
+

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
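As a further illustration, the -x option limits output to pools exhibiting errors, and -t adds vdev TRIM status for a pool assumed to be named tank:
# zpool status -x
# zpool status -t tank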
+
+
+

+

zpool-events(8), + zpool-history(8), zpool-iostat(8), + zpool-list(8), zpool-resilver(8), + zpool-scrub(8), zpool-wait(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-sync.8.html b/man/v2.2/8/zpool-sync.8.html new file mode 100644 index 000000000..264273b8b --- /dev/null +++ b/man/v2.2/8/zpool-sync.8.html @@ -0,0 +1,269 @@ + + + + + + + zpool-sync.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-sync.8

+
+ + + + + +
ZPOOL-SYNC(8)System Manager's ManualZPOOL-SYNC(8)
+
+
+

+

zpool-sync — flush data to primary storage of ZFS storage pools

+
+
+

+ + + + + +
zpoolsync [pool]…
+
+
+

+

This command forces all in-core dirty data to be written to the + primary pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified pools.

+
+
+
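For example, to force all dirty data of a pool assumed to be named tank out to its primary storage:
# zpool sync tank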

+

zpoolconcepts(7), + zpool-export(8), zpool-iostat(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-trim.8.html b/man/v2.2/8/zpool-trim.8.html new file mode 100644 index 000000000..476a13075 --- /dev/null +++ b/man/v2.2/8/zpool-trim.8.html @@ -0,0 +1,326 @@ + + + + + + + zpool-trim.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-trim.8

+
+ + + + + +
ZPOOL-TRIM(8)System Manager's ManualZPOOL-TRIM(8)
+
+
+

+

zpool-trim — + initiate TRIM of free space in ZFS storage pool

+
+
+

+ + + + + +
zpooltrim [-dw] + [-r rate] + [-c|-s] + pool [device]…
+
+
+

+

Initiates an immediate on-demand TRIM operation for all of the + free space in a pool. This operation informs the underlying storage devices + of all blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.

+

A manual on-demand TRIM operation can be initiated irrespective of + the autotrim pool property setting. See the documentation + for the autotrim property above for the types of vdev + devices which can be trimmed.

+
+
, + --secure
+
Causes a secure TRIM to be initiated. When performing a secure TRIM, the + device guarantees that data stored on the trimmed blocks has been erased. + This requires support from the device and is not supported by all + SSDs.
+
, + --rate rate
+
Controls the rate at which the TRIM operation progresses. Without this + option TRIM is executed as quickly as possible. The rate, expressed in + bytes per second, is applied on a per-vdev basis and may be set + differently for each leaf vdev.
+
, + --cancel
+
Cancel trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no cancellation will + occur on any device.
+
, + --suspend
+
Suspend trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no suspension will + occur on any device. Trimming can then be resumed by running + zpool trim with no flags + on the relevant target devices.
+
, + --wait
+
Wait until the devices are done being trimmed before returning.
+
+
+
+
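For example, assuming a pool named tank on thinly provisioned or SSD storage, a TRIM could be started and waited on, and a later run suspended:
# zpool trim -w tank
# zpool trim -s tank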

+

On machines using systemd, trim timers can be enabled on a + per-pool basis. weekly and + monthly timer units are provided.

+
+
+
systemctl enable + zfs-trim-weekly@rpool.timer + --now
+
+
systemctl + enable + zfs-trim-monthly@otherpool.timer + --now
+
+
+
+

+

systemd.timer(5), + zpoolprops(7), + zpool-initialize(8), + zpool-wait(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-upgrade.8.html b/man/v2.2/8/zpool-upgrade.8.html new file mode 100644 index 000000000..3581408fd --- /dev/null +++ b/man/v2.2/8/zpool-upgrade.8.html @@ -0,0 +1,337 @@ + + + + + + + zpool-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-upgrade.8

+
+ + + + + +
ZPOOL-UPGRADE(8)System Manager's ManualZPOOL-UPGRADE(8)
+
+
+

+

zpool-upgrade — + manage version and feature flags of ZFS storage + pools

+
+
+

+ + + + + +
zpoolupgrade
+
+ + + + + +
zpoolupgrade -v
+
+ + + + + +
zpoolupgrade [-V + version] + -a|pool
+
+
+

+
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools (subject to + the -o compatibility + property).
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by this version of ZFS. See zpool-features(7) for a description of the feature flags supported by this version of ZFS.
+
zpool upgrade + [-V version] + -a|pool
+
Enables all supported features on the given pool. +

If the pool has specified compatibility feature sets using the + -o compatibility property, + only the features present in all requested compatibility sets will be + enabled. If this property is set to legacy then no + upgrade will take place.

+

Once this is done, the pool will no longer be accessible on + systems that do not support feature flags. See + zpool-features(7) for details on compatibility with + systems that support feature flags, but do not support all features + enabled on the pool.

+
+
+
Enables all supported features (from specified compatibility sets, if + any) on all pools.
+
+ version
+
Upgrade to the specified legacy version. If specified, no features + will be enabled on the pool. This option can only be used to increase + the version number up to the last supported legacy version + number.
+
+
+
+
+
+

+
+

+

The following command upgrades all ZFS Storage pools to the + current version of the software:

+
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
+
+

+

zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zpool-history(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-wait.8.html b/man/v2.2/8/zpool-wait.8.html new file mode 100644 index 000000000..f3bbbbbe3 --- /dev/null +++ b/man/v2.2/8/zpool-wait.8.html @@ -0,0 +1,318 @@ + + + + + + + zpool-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-wait.8

+
+ + + + + +
ZPOOL-WAIT(8)System Manager's ManualZPOOL-WAIT(8)
+
+
+

+

zpool-wait — wait for activity to stop in a ZFS storage pool

+
+
+

+ + + + + +
zpoolwait [-Hp] + [-T u|d] + [-t + activity[,activity]…] + pool [interval]
+
+
+

+

Waits until all background activity of the given types has ceased + in the given pool. The activity could cease because it has completed, or + because it has been paused or canceled by a user, or because the pool has + been exported or destroyed. If no activities are specified, the command + waits until background activity of every type listed below has ceased. If + there is no activity of the given types in progress, the command returns + immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
Checkpoint to be discarded
+
+
+ property to become +
+
+
All initializations to cease
+
+
All device replacements to cease
+
+
Device removal to cease
+
+
Resilver to cease
+
+
Scrub to cease
+
+
Manual trim to cease
+
+
+

If an interval is provided, the amount of + work remaining, in bytes, for each activity is printed every + interval seconds.
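For example (the pool name tank is illustrative; the activity names correspond to the list above), one might wait for a scrub to finish, or watch resilver progress every five seconds:
# zpool wait -t scrub tank
# zpool wait -t resilver tank 5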

+
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display numbers in parsable (exact) values.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
+
+

+

zpool-checkpoint(8), + zpool-initialize(8), zpool-remove(8), + zpool-replace(8), zpool-resilver(8), + zpool-scrub(8), zpool-status(8), + zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool.8.html b/man/v2.2/8/zpool.8.html new file mode 100644 index 000000000..2ad262807 --- /dev/null +++ b/man/v2.2/8/zpool.8.html @@ -0,0 +1,825 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)System Manager's ManualZPOOL(8)
+
+
+

+

zpoolconfigure + ZFS storage pools

+
+
+

+ + + + + +
zpool-?V
+
+ + + + + +
zpoolversion
+
+ + + + + +
zpoolsubcommand + [arguments]
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+

For an overview of creating and managing ZFS storage pools see the + zpoolconcepts(7) manual page.

+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool -V, + --version
+
 
+
zpool version
+
Displays the software version of the zpool + userland utility and the ZFS kernel module.
+
+
+

+
+
zpool-create(8)
+
Creates a new storage pool containing the virtual devices specified on the + command line.
+
zpool-initialize(8)
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified.
+
+
+
+

+
+
zpool-destroy(8)
+
Destroys the given pool, freeing up any devices for other use.
+
zpool-labelclear(8)
+
Removes ZFS label information from the specified + device.
+
+
+
+

+
+
zpool-attach(8)/zpool-detach(8)
+
Converts a non-redundant disk into a mirror, or increases the redundancy + level of an existing mirror (attach), or performs + the inverse operation (detach).
+
zpool-add(8)/zpool-remove(8)
+
Adds the specified virtual devices to the given pool, or removes the + specified device from the pool.
+
zpool-replace(8)
+
Replaces an existing device (which may be faulted) with a new one.
+
zpool-split(8)
+
Creates a new pool by splitting all mirrors in an existing pool (which + decreases its redundancy).
+
+
+
+

+

Available pool properties are listed in the + zpoolprops(7) manual page.

+
+
zpool-list(8)
+
Lists the given pools along with a health status and space usage.
+
zpool-get(8)/zpool-set(8)
+
Retrieves the given list of properties (or all properties if + is used) for + the specified storage pool(s).
+
+
+
+

+
+
zpool-status(8)
+
Displays the detailed health status for the given pools.
+
zpool-iostat(8)
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/O + operations may be observed via iostat(1).
+
zpool-events(8)
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + That manual page also describes the subclasses and event payloads that can + be generated.
+
zpool-history(8)
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified.
+
+
+
+

+
+
zpool-scrub(8)
+
Begins a scrub or resumes a paused scrub.
+
zpool-checkpoint(8)
+
Checkpoints the current state of pool, which can be + later restored by zpool + import + --rewind-to-checkpoint.
+
zpool-trim(8)
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.
+
zpool-sync(8)
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified + pool(s).
+
zpool-upgrade(8)
+
Manage the on-disk format version of storage pools.
+
zpool-wait(8)
+
Waits until all background activity of the given types has ceased in the + given pool.
+
+
+
+

+
+
zpool-offline(8)/zpool-online(8)
+
Takes the specified physical device offline or brings it online.
+
zpool-resilver(8)
+
Starts a resilver. If an existing resilver is already running it will be + restarted from the beginning.
+
zpool-reopen(8)
+
Reopens all the vdevs associated with the pool.
+
zpool-clear(8)
+
Clears device errors in a pool.
+
+
+
+

+
+
zpool-import(8)
+
Makes disks containing ZFS storage pools available for use on the + system.
+
zpool-export(8)
+
Exports the given pools from the system.
+
zpool-reguid(8)
+
Generates a new unique identifier for the pool.
+
+
+
+
+

+

The following exit values are returned:

+
+
+
+
Successful completion.
+
+
An error occurred.
+
+
Invalid command line options were specified.
+
+
+
+
+

+
+

+

The following command creates a pool with a single raidz root vdev + that consists of six disks:

+
# zpool + create tank + + sda sdb sdc sdd sde sdf
+
+
+

+

The following command creates a pool with two mirrors, where each + mirror contains two disks:

+
# zpool + create tank + mirror sda sdb + mirror sdc sdd
+
+
+

+

The following command creates a non-redundant pool using two disk + partitions:

+
# zpool + create tank + sda1 sdb2
+
+
+

+

The following command creates a non-redundant pool using files. + While not recommended, a pool based on files can be useful for experimental + purposes.

+
# zpool + create tank + /path/to/file/a /path/to/file/b
+
+
+

+

The following command converts an existing single device + sda into a mirror by attaching a second device to it, + sdb.

+
# zpool + attach tank sda + sdb
+
+
+

+

The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool.

+
# zpool + add tank + mirror sda sdb
+
+
+

+

The following command lists all available pools on the system. In + this case, the pool zion is faulted due to a missing + device. The results from this command are similar to the following:

+
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
+

+

The following command destroys the pool tank + and any datasets contained within:

+
# zpool + destroy -f + tank
+
+
+

+

The following command exports the devices in pool + tank so that they can be relocated or later + imported:

+
# zpool + export tank
+
+
+

+

The following command displays available pools, and then imports + the pool tank for use on the system. The results from + this command are similar to the following:

+
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
+

+

The following command upgrades all ZFS Storage pools to the + current version of the software:

+
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
+

+

The following command creates a new pool with an available hot + spare:

+
# zpool + create tank + mirror sda sdb + + sdc
+

If one of the disks were to fail, the pool would be reduced to the + degraded state. The failed device can be replaced using the following + command:

+
# zpool + replace tank + sda sdd
+

Once the data has been resilvered, the spare is automatically + removed and is made available for use should another device fail. The hot + spare can be permanently removed from the pool using the following + command:

+
# zpool + remove tank + sdc
+
+
+

+

The following command creates a ZFS storage pool consisting of + two, two-way mirrors and mirrored log devices:

+
# zpool + create pool + mirror sda sdb + mirror sdc sdd + + sde sdf
+
+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool + add pool + + sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+

+

The following commands remove the mirrored log device + + and mirrored top-level data device + .

+

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
# zpool + remove tank + mirror-2
+

The command to remove the mirrored data + mirror-1 is:

+
# zpool + remove tank + mirror-1
+
+
+

+

The following command displays the detailed information for the + pool data. This pool consists of a single raidz + vdev where one of its devices increased its capacity by 10 GiB. In this + example, the pool will not be able to utilize this extra capacity until all + the devices under the raidz vdev have been expanded.

+
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
+

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes + of running + .
+
+
Use ANSI color in zpool + status and zpool + iostat output.
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
+
+
The maximum time in milliseconds that zpool import + will wait for an expected device to be available.
+
+
If set, suppress warning about non-native vdev ashift in + zpool status. The value is + not used, only the presence or absence of the variable matters.
+
+
Cause zpool subcommands to output vdev guids by + default. This behavior is identical to the zpool + status -g command line + option.
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the + zpool status + -L command line option.
+
+
Cause zpool subcommands to output full vdev path + names by default. This behavior is identical to the + zpool status + -P command line option (see the example after this list).
+
+
Older OpenZFS implementations had issues when attempting to display pool + config vdev names if a devid NVP value is present in the + pool's config. +

For example, a pool that originated on the illumos platform would + have a devid value in the config and + zpool status would fail + when listing the config. This would also be true for future Linux-based + pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool + add by setting + ZFS_VDEV_DEVID_OPT_OUT.
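As an example of the ZPOOL_VDEV_NAME_* variables above (the pool name tank is illustrative), full vdev path names can be requested for a single invocation:
# ZPOOL_VDEV_NAME_PATH=1 zpool status tank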

+

+
+
+
Allow a privileged user to run zpool + status/iostat + -c. Normally, only unprivileged users are allowed + to run -c.
+
+
The search path for scripts when running zpool + status/iostat + -c. This is a colon-separated list of directories + and overrides the default ~/.zpool.d and + /etc/zfs/zpool.d search paths.
+
+
Allow a user to run zpool + status/iostat + -c. If ZPOOL_SCRIPTS_ENABLED is + not set, it is assumed that the user is allowed to run + zpool + status/iostat + -c.
+
+
Time, in seconds, to wait for /dev/zfs to appear. + Defaults to + , max + (10 + minutes). If <0, wait forever; if + 0, don't wait.
+
+
+
+

+

+
+
+

+

zfs(4), zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zed(8), zfs(8), + zpool-add(8), zpool-attach(8), + zpool-checkpoint(8), zpool-clear(8), + zpool-create(8), zpool-destroy(8), + zpool-detach(8), zpool-events(8), + zpool-export(8), zpool-get(8), + zpool-history(8), zpool-import(8), + zpool-initialize(8), zpool-iostat(8), + zpool-labelclear(8), zpool-list(8), + zpool-offline(8), zpool-online(8), + zpool-reguid(8), zpool-remove(8), + zpool-reopen(8), zpool-replace(8), + zpool-resilver(8), zpool-scrub(8), + zpool-set(8), zpool-split(8), + zpool-status(8), zpool-sync(8), + zpool-trim(8), zpool-upgrade(8), + zpool-wait(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool_influxdb.8.html b/man/v2.2/8/zpool_influxdb.8.html new file mode 100644 index 000000000..3ca6952bb --- /dev/null +++ b/man/v2.2/8/zpool_influxdb.8.html @@ -0,0 +1,319 @@ + + + + + + + zpool_influxdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool_influxdb.8

+
+ + + + + +
ZPOOL_INFLUXDB(8)System Manager's ManualZPOOL_INFLUXDB(8)
+
+
+

+

zpool_influxdb — + collect ZFS pool statistics in InfluxDB line protocol + format

+
+
+

+ + + + + +
zpool_influxdb[-e|--execd] + [-n|--no-histogram] + [-s|--sum-histogram-buckets] + [-t|--tags + key=value[,key=value]…] + [pool]
+
+
+

+

zpool_influxdb produces + InfluxDB-line-protocol-compatible metrics from zpools. Like the + zpool command, + zpool_influxdb reads the current pool status and + statistics. Unlike the zpool command which is + intended for humans, zpool_influxdb formats the + output in the InfluxDB line protocol. The expected use is as a plugin to a + metrics collector or aggregator, such as Telegraf.

+

By default, zpool_influxdb prints pool + metrics and status in the InfluxDB line protocol format. All pools are + printed, similar to the zpool + status command. Providing a pool name restricts the + output to the named pool.
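For example (the pool name tank is illustrative), metrics for a single pool can be generated and inspected directly before being wired into a collector:
# zpool_influxdb tank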

+
+
+

+
+
, + --execd
+
Run in daemon mode compatible with Telegraf's + execd plugin. In this mode, the pools are sampled + every time a newline appears on the standard input.
+
, + --no-histogram
+
Do not print latency and I/O size histograms. This can reduce the total + amount of data, but one should consider the value brought by the insights + that latency and I/O size distributions provide. The resulting values are + suitable for graphing with Grafana's heatmap plugin.
+
, + --sum-histogram-buckets
+
Accumulates bucket values. By default, the values are not accumulated and + the raw data appears as shown by zpool + iostat. This works well for Grafana's heatmap + plugin. Summing the buckets produces output similar to Prometheus + histograms.
+
, + --tags + key=value[,key=value]…
+
Adds specified tags to the tag set. No sanity checking is performed. See + the InfluxDB Line Protocol format documentation for details on escaping + special characters used in tags.
+
, + --help
+
Print a usage summary.
+
+
+
+

+

zpool-iostat(8), + zpool-status(8), + InfluxDB, + Telegraf, + Grafana, + Prometheus

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zstream.8.html b/man/v2.2/8/zstream.8.html new file mode 100644 index 000000000..b250086e5 --- /dev/null +++ b/man/v2.2/8/zstream.8.html @@ -0,0 +1,406 @@ + + + + + + + zstream.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zstream.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstreamdump [-Cvd] + [file]
+
+ + + + + +
zstreamdecompress [-v] + [object,offset[,type...]]
+
+ + + + + +
zstreamredup [-v] + file
+
+ + + + + +
zstreamtoken resume_token
+
+ + + + + +
zstreamrecompress [-l + level] algorithm
+
+
+

+

The + + utility manipulates ZFS send streams output by the + + command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream + decompress [-v] + [object,offset[,type...]]
+
Decompress selected records in a ZFS send stream provided on standard + input, when the compression type recorded in ZFS metadata may be + incorrect. Specify the object number and byte offset of each record that + you wish to decompress. Optionally specify the compression type. Valid + compression types include off, + , + lz4, + , + , + and . + The default is lz4. Every record for that object + beginning at that offset will be decompressed, if possible. It may not be + possible, because the record may be corrupted in some but not all of the + stream's snapshots. Specifying a compression type of off + will change the stream's metadata accordingly, without attempting + decompression. This can be useful if the record is already uncompressed + but the metadata insists otherwise. The repaired stream will be written to + standard output. +
+
+
Verbose. Print summary of decompressed records.
+
+
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
zstream recompress + [-l level] + algorithm
+
Recompresses a send stream, provided on standard input, using the provided + algorithm and optional level, and writes the modified stream to standard + output. All WRITE records in the send stream will be recompressed, unless + they fail to result in size reduction compared to being left uncompressed. + The provided algorithm can be any valid value to the + compress property. Note that encrypted send + streams cannot be recompressed. +
+
+ level
+
Specifies compression level. Only needed for algorithms where the + level is not implied as part of the name of the algorithm (e.g. gzip-3 + does not require it, while zstd does, if a non-default level is + desired).
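As a hedged sketch (the file names and the zstd level are illustrative), a send stream saved to disk could be recompressed with zstd as follows:
# zstream recompress -l 3 zstd < plain.stream > zstd.stream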
+
+
+
+
+
+

+

Heal a dataset that was corrupted due to OpenZFS bug #12762. + First, determine which records are corrupt. That cannot be done + automatically; it requires information beyond ZFS's metadata. If object + is + corrupted at offset + and is + compressed using lz4, then run this command:

+
+
# zfs send -c  | zstream decompress 128,0,lz4 | zfs recv 
+
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8), + https://github.com/openzfs/zfs/issues/12762

+
+
+ + + + + +
October 4, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zstreamdump.8.html b/man/v2.2/8/zstreamdump.8.html new file mode 100644 index 000000000..31f4a10b9 --- /dev/null +++ b/man/v2.2/8/zstreamdump.8.html @@ -0,0 +1,406 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstreamdump [-Cvd] + [file]
+
+ + + + + +
zstreamdecompress [-v] + [object,offset[,type...]]
+
+ + + + + +
zstreamredup [-v] + file
+
+ + + + + +
zstreamtoken resume_token
+
+ + + + + +
zstreamrecompress [-l + level] algorithm
+
+
+

+

The + + utility manipulates ZFS send streams output by the + + command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream + decompress [-v] + [object,offset[,type...]]
+
Decompress selected records in a ZFS send stream provided on standard + input, when the compression type recorded in ZFS metadata may be + incorrect. Specify the object number and byte offset of each record that + you wish to decompress. Optionally specify the compression type. Valid + compression types include off, + , + lz4, + , + , + and . + The default is lz4. Every record for that object + beginning at that offset will be decompressed, if possible. It may not be + possible, because the record may be corrupted in some but not all of the + stream's snapshots. Specifying a compression type of off + will change the stream's metadata accordingly, without attempting + decompression. This can be useful if the record is already uncompressed + but the metadata insists otherwise. The repaired stream will be written to + standard output. +
+
+
Verbose. Print summary of decompressed records.
+
+
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
zstream recompress + [-l level] + algorithm
+
Recompresses a send stream, provided on standard input, using the provided + algorithm and optional level, and writes the modified stream to standard + output. All WRITE records in the send stream will be recompressed, unless + they fail to result in size reduction compared to being left uncompressed. + The provided algorithm can be any valid value to the + compress property. Note that encrypted send + streams cannot be recompressed. +
+
+ level
+
Specifies compression level. Only needed for algorithms where the + level is not implied as part of the name of the algorithm (e.g. gzip-3 + does not require it, while zstd does, if a non-default level is + desired).
+
+
+
+
+
+

+

Heal a dataset that was corrupted due to OpenZFS bug #12762. + First, determine which records are corrupt. That cannot be done + automatically; it requires information beyond ZFS's metadata. If object + is + corrupted at offset + and is + compressed using lz4, then run this command:

+
+
# zfs send -c  | zstream decompress 128,0,lz4 | zfs recv 
+
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8), + https://github.com/openzfs/zfs/issues/12762

+
+
+ + + + + +
October 4, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/index.html b/man/v2.2/index.html new file mode 100644 index 000000000..cdfdef8bb --- /dev/null +++ b/man/v2.2/index.html @@ -0,0 +1,147 @@ + + + + + + + v2.2 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/msg/ZFS-8000-14/index.html b/msg/ZFS-8000-14/index.html new file mode 100644 index 000000000..b2207cc29 --- /dev/null +++ b/msg/ZFS-8000-14/index.html @@ -0,0 +1,195 @@ + + + + + + + Message ID: ZFS-8000-14 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-14

+
+

Corrupt ZFS cache

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

The ZFS cache file is corrupted.

Automated Response:

No automated response will be taken.

Impact:

ZFS filesystems are not available.

+

Suggested Action for System Administrator

+

ZFS keeps a list of active pools on the filesystem to avoid having to +scan all devices when the system is booted. If this file is corrupted, +then normally active pools will not be automatically opened. The pools +can be recovered using the zpool import command:

+
# zpool import
+  pool: test
+    id: 12743384782310107047
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        test              ONLINE
+          sda9            ONLINE
+
+
+

This will automatically scan /dev for any devices that are part of a pool. +If devices have been made available in an alternate location, use the +-d option to zpool import to search for devices in a different +directory.
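For example, to search an alternate device directory (the directory shown is only illustrative):
# zpool import -d /dev/disk/by-id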

+

Once you have determined which pools are available for import, you +can import the pool explicitly by specifying the name or numeric +identifier:

+
# zpool import test
+
+
+

Alternately, you can import all available pools by specifying the -a +option. Once a pool has been imported, the ZFS cache will be repaired +so that the pool will appear normally in the future.
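For example:
# zpool import -a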

+

Details

+

The Message ID: ZFS-8000-14 indicates a corrupted ZFS cache file. +Take the documented action to resolve the problem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-2Q/index.html b/msg/ZFS-8000-2Q/index.html new file mode 100644 index 000000000..55b01deb7 --- /dev/null +++ b/msg/ZFS-8000-2Q/index.html @@ -0,0 +1,238 @@ + + + + + + + Message ID: ZFS-8000-2Q — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-2Q

+
+

Missing device in replicated configuration

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

A device in a replicated configuration could not +be opened.

Automated Response:

A hot spare will be activated if available.

Impact:

The pool is no longer providing the configured +level of replication.

+

Suggested Action for System Administrator

+

For an active pool:

+

If this error was encountered while running zpool import, please +see the section below. Otherwise, run zpool status -x to determine +which pool has experienced a failure:

+
# zpool status -x
+  pool: test
+ state: DEGRADED
+status: One or more devices could not be opened.  Sufficient replicas exist for
+        the pool to continue functioning in a degraded state.
+action: Attach the missing device and online it using 'zpool online'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  DEGRADED     0     0     0
+          mirror              DEGRADED     0     0     0
+            c0t0d0            ONLINE       0     0     0
+            c0t0d1            FAULTED      0     0     0  cannot open
+
+errors: No known data errors
+
+
+

Determine which device failed to open by looking for a FAULTED device +with an additional ‘cannot open’ message. If this device has been +inadvertently removed from the system, attach the device and bring it +online with zpool online:

+
# zpool online test c0t0d1
+
+
+

If the device is no longer available, the device can be replaced +using the zpool replace command:

+
# zpool replace test c0t0d1 c0t0d2
+
+
+

If the device has been replaced by another disk in the same physical +slot, then the device can be replaced using a single argument to the +zpool replace command:

+
# zpool replace test c0t0d1
+
+
+

Existing data will be resilvered to the new device. Once the +resilvering completes, the device will be removed from the pool.

+

For an exported pool:

+

If this error is encountered during a zpool import, it means that +one of the devices is not attached to the system:

+
# zpool import
+  pool: test
+    id: 10121266328238932306
+ state: DEGRADED
+status: One or more devices are missing from the system.
+action: The pool can be imported despite missing or damaged devices.  The
+        fault tolerance of the pool may be compromised if imported.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
+config:
+
+        test              DEGRADED
+          mirror          DEGRADED
+            c0t0d0        ONLINE
+            c0t0d1        FAULTED   cannot open
+
+
+

Unlike when the pool is active on the system, the device cannot be +replaced while the pool is exported. If the device can be attached to +the system, attach the device and run zpool import again.

+

Alternatively, the pool can be imported as-is, though it will be +placed in the DEGRADED state due to a missing device. The device will +be marked as UNAVAIL. Once the pool has been imported, the missing +device can be replaced as described above.

+

Details

+

The Message ID: ZFS-8000-2Q indicates a device which was unable +to be opened by the ZFS subsystem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-3C/index.html b/msg/ZFS-8000-3C/index.html new file mode 100644 index 000000000..144ceccce --- /dev/null +++ b/msg/ZFS-8000-3C/index.html @@ -0,0 +1,220 @@ + + + + + + + Message ID: ZFS-8000-3C — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-3C

+
+

Missing device in non-replicated configuration

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

A device could not be opened and no replicas are +available.

Automated Response:

No automated response will be taken.

Impact:

The pool is no longer available.

+

Suggested Action for System Administrator

+

For an active pool:

+

If this error was encountered while running zpool import, please +see the section below. Otherwise, run zpool status -x to determine +which pool has experienced a failure:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: One or more devices could not be opened.  There are insufficient
+        replicas for the pool to continue functioning.
+action: Attach the missing device and online it using 'zpool online'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  FAULTED      0     0     0  insufficient replicas
+          c0t0d0              ONLINE       0     0     0
+          c0t0d1              FAULTED      0     0     0  cannot open
+
+errors: No known data errors
+
+
+

If the device has been temporarily detached from the system, attach +the device to the system and run zpool status again. The pool +should automatically detect the newly attached device and resume +functioning. You may have to mount the filesystems in the pool +explicitly using zfs mount -a.
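For example, using the pool name test from the output above, after reattaching the device:
# zpool status test
# zfs mount -a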

+

If the device is no longer available and cannot be reattached to the +system, then the pool must be destroyed and re-created from a backup +source.

+

For an exported pool:

+

If this error is encountered during a zpool import, it means that +one of the devices is not attached to the system:

+
# zpool import
+  pool: test
+    id: 10121266328238932306
+ state: FAULTED
+status: One or more devices are missing from the system.
+action: The pool cannot be imported.  Attach the missing devices and try again.
+        see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
+config:
+
+        test              FAULTED   insufficient replicas
+          c0t0d0          ONLINE
+          c0t0d1          FAULTED   cannot open
+
+
+

The pool cannot be imported until the missing device is attached to +the system. If the device has been made available in an alternate +location, use the -d option to zpool import to search for devices +in a different directory. If the missing device is unavailable, then +the pool cannot be imported.

+

Details

+

The Message ID: ZFS-8000-3C indicates a device which was unable +to be opened by the ZFS subsystem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-4J/index.html b/msg/ZFS-8000-4J/index.html new file mode 100644 index 000000000..7a2a00c3c --- /dev/null +++ b/msg/ZFS-8000-4J/index.html @@ -0,0 +1,237 @@ + + + + + + + Message ID: ZFS-8000-4J — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-4J

+
+

Corrupted device label in a replicated configuration

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

A device could not be opened due to a missing or +invalid device label.

Automated Response:

A hot spare will be activated if available.

Impact:

The pool is no longer providing the configured +level of replication.

+

Suggested Action for System Administrator

+

For an active pool:

+

If this error was encountered while running zpool import, please +see the section below. Otherwise, run zpool status -x to determine +which pool has experienced a failure:

+
# zpool status -x
+  pool: test
+ state: DEGRADED
+status: One or more devices could not be used because the label is missing or
+        invalid.  Sufficient replicas exist for the pool to continue
+        functioning in a degraded state.
+action: Replace the device using 'zpool replace'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  DEGRADED     0     0     0
+          mirror              DEGRADED     0     0     0
+            c0t0d0            ONLINE       0     0     0
+            c0t0d1            FAULTED      0     0     0  corrupted data
+
+errors: No known data errors
+
+
+

If the device has been temporarily detached from the system, attach +the device to the system and run zpool status again. The pool +should automatically detect the newly attached device and resume +functioning.

+

If the device is no longer available, it can be replaced using zpool +replace:

+
# zpool replace test c0t0d1 c0t0d2
+
+
+

If the device has been replaced by another disk in the same physical +slot, then the device can be replaced using a single argument to the +zpool replace command:

+
# zpool replace test c0t0d1
+
+
+

ZFS will begin migrating data to the new device as soon as the +replace is issued. Once the resilvering completes, the original +device (if different from the replacement) will be removed, and the +pool will be restored to the ONLINE state.

+

For an exported pool:

+

If this error is encountered while running zpool import, the pool +can still be imported despite the failure:

+
# zpool import
+  pool: test
+    id: 5187963178597328409
+ state: DEGRADED
+status: One or more devices contains corrupted data.  The fault tolerance of
+        the pool may be compromised if imported.
+action: The pool can be imported using its name or numeric identifier.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
+config:
+
+        test              DEGRADED
+          mirror          DEGRADED
+            c0t0d0        ONLINE
+            c0t0d1        FAULTED   corrupted data
+
+
+

To import the pool, run zpool import:

+
# zpool import test
+
+
+

Once the pool has been imported, the damaged device can be replaced +according to the above procedure.

+

Details

+

The Message ID: ZFS-8000-4J indicates a device which was unable +to be opened by the ZFS subsystem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-5E/index.html b/msg/ZFS-8000-5E/index.html new file mode 100644 index 000000000..24dc4f4fd --- /dev/null +++ b/msg/ZFS-8000-5E/index.html @@ -0,0 +1,201 @@ + + + + + + + Message ID: ZFS-8000-5E — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-5E

+
+

Corrupted device label in non-replicated configuration

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

A device could not be opened due to a missing or +invalid device label and no replicas are +available.

Automated Response:

No automated response will be taken.

Impact:

The pool is no longer available.

+

Suggested Action for System Administrator

+

For an active pool:

+

If this error was encountered while running zpool import, please see the +section below. Otherwise, run zpool status -x to determine which pool has +experienced a failure:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: One or more devices could not be used because the label is missing
+        or invalid.  There are insufficient replicas for the pool to continue
+        functioning.
+action: Destroy and re-create the pool from a backup source.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
+ scrub: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        test        FAULTED      0     0     0  insufficient replicas
+          c0t0d0    FAULTED      0     0     0  corrupted data
+          c0t0d1    ONLINE       0     0     0
+
+errors: No known data errors
+
+
+

The device listed as FAULTED with ‘corrupted data’ cannot be opened due to a +corrupt label. ZFS will be unable to use the pool, and all data within the +pool is irrevocably lost. The pool must be destroyed and recreated from an +appropriate backup source. Using replicated configurations will prevent this +from happening in the future.

+

For an exported pool:

+

If this error is encountered during zpool import, the action is the same. +The pool cannot be imported - all data is lost and must be restored from an +appropriate backup source.

+

Details

+

The Message ID: ZFS-8000-5E indicates a device which was unable to be +opened by the ZFS subsystem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-6X/index.html b/msg/ZFS-8000-6X/index.html new file mode 100644 index 000000000..2681abb65 --- /dev/null +++ b/msg/ZFS-8000-6X/index.html @@ -0,0 +1,195 @@ + + + + + + + Message ID: ZFS-8000-6X — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-6X

+
+

Missing top level device

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

One or more top level devices are missing.

Automated Response:

No automated response will be taken.

Impact:

The pool cannot be imported.

+

Suggested Action for System Administrator

+

Run zpool import to list which pool cannot be imported:

+
# zpool import
+  pool: test
+    id: 13783646421373024673
+ state: FAULTED
+status: One or more devices are missing from the system.
+action: The pool cannot be imported.  Attach the missing devices and try again.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-6X
+config:
+
+        test              FAULTED   missing device
+          c0t0d0          ONLINE
+
+Additional devices are known to be part of this pool, though their
+exact configuration cannot be determined.
+
+
+

ZFS attempts to store enough configuration data on the devices such +that the configuration is recoverable from any subset of devices. In +some cases, particularly when an entire top-level virtual device is +not attached to the system, ZFS will be unable to determine the +complete configuration. It will always detect that these devices are +missing, even if it cannot identify all of the devices.

+

The pool cannot be imported until the unknown missing device is +attached to the system. If the device has been made available in an +alternate location, use the -d option to zpool import to search +for devices in a different directory. If the missing device is +unavailable, then the pool cannot be imported.

+

Details

+

The Message ID: ZFS-8000-6X indicates one or more top level +devices are missing from the configuration.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-72/index.html b/msg/ZFS-8000-72/index.html new file mode 100644 index 000000000..d6fdf2391 --- /dev/null +++ b/msg/ZFS-8000-72/index.html @@ -0,0 +1,223 @@ + + + + + + + Message ID: ZFS-8000-72 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-72

+
+

Corrupted pool metadata

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

The metadata required to open the pool is +corrupt.

Automated Response:

No automated response will be taken.

Impact:

The pool is no longer available.

+

Suggested Action for System Administrator

+

Even though all the devices are available, the on-disk data has been +corrupted such that the pool cannot be opened. If a recovery action +is presented, the pool can be returned to a usable state. Otherwise, +all data within the pool is lost, and the pool must be destroyed and +restored from an appropriate backup source. ZFS includes built-in +metadata replication to prevent this from happening even for +unreplicated pools, but running in a replicated configuration will +decrease the chances of this happening in the future.

+

If this error is encountered during zpool import, see the section +below. Otherwise, run zpool status -x to determine which pool is +faulted and if a recovery option is available:

+
# zpool status -x
+  pool: test
+    id: 13783646421373024673
+ state: FAULTED
+status: The pool metadata is corrupted and cannot be opened.
+action: Recovery is possible, but will result in some data loss.
+        Returning the pool to its state as of Mon Sep 28 10:24:39 2009
+        should correct the problem.  Approximately 59 seconds of data
+        will have to be discarded, irreversibly.  Recovery can be
+        attempted by executing 'zpool clear -F test'.  A scrub of the pool
+        is strongly recommended following a successful recovery.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  FAULTED      0     0     2  corrupted data
+            c0t0d0            ONLINE       0     0     2
+            c0t0d1            ONLINE       0     0     2
+
+
+

If recovery is unavailable, the recommended action will be:

+
action: Destroy the pool and restore from backup.
+
+
+

If this error is encountered during zpool import, and if no recovery option +is mentioned, the pool is unrecoverable and cannot be imported. The pool must +be restored from an appropriate backup source. If a recovery option is +available, the output from zpool import will look something like the +following:

+
# zpool import share
+cannot import 'share': I/O error
+        Recovery is possible, but will result in some data loss.
+        Returning the pool to its state as of Sun Sep 27 12:31:07 2009
+        should correct the problem.  Approximately 53 seconds of data
+        will have to be discarded, irreversibly.  Recovery can be
+        attempted by executing 'zpool import -F share'.  A scrub of the pool
+        is strongly recommended following a successful recovery.
+
+
+

Recovery actions are requested with the -F option to either zpool +clear or zpool import. Recovery will result in some data loss, +because it reverts the pool to an earlier state. A dry-run recovery +check can be performed by adding the -n option, which reports whether recovery +is possible without actually reverting the pool to its earlier state.
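For example, using the pool name test from the output above and the recovery actions described there, a dry-run check followed by the actual recovery attempt might look like:
# zpool clear -nF test
# zpool clear -F test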

+

Details

+

The Message ID: ZFS-8000-72 indicates a pool was unable to be +opened due to a detected corruption in the pool metadata.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-8A/index.html b/msg/ZFS-8000-8A/index.html new file mode 100644 index 000000000..ed62fc13c --- /dev/null +++ b/msg/ZFS-8000-8A/index.html @@ -0,0 +1,224 @@ + + + + + + + Message ID: ZFS-8000-8A — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-8A

+
+

Corrupted data

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

A file or directory could not be read due to +corrupt data.

Automated Response:

No automated response will be taken.

Impact:

The file or directory is unavailable.

+

Suggested Action for System Administrator

+

Run zpool status -x to determine which pool is damaged:

+
# zpool status -x
+  pool: test
+ state: ONLINE
+status: One or more devices has experienced an error and no valid replicas
+        are available.  Some filesystem data is corrupt, and applications
+        may have been affected.
+action: Destroy the pool and restore from backup.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  ONLINE       0     0     2
+          c0t0d0              ONLINE       0     0     2
+          c0t0d1              ONLINE       0     0     0
+
+errors: 1 data errors, use '-v' for a list
+
+
+

Unfortunately, the data cannot be repaired, and the only choice to +repair the data is to restore the pool from backup. Applications +attempting to access the corrupted data will get an error (EIO), and +data may be permanently lost.

+

The list of affected files can be retrieved by using the -v option to +zpool status:

+
# zpool status -xv
+  pool: test
+ state: ONLINE
+status: One or more devices has experienced an error and no valid replicas
+        are available.  Some filesystem data is corrupt, and applications
+        may have been affected.
+action: Destroy the pool and restore from backup.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  ONLINE       0     0     2
+          c0t0d0              ONLINE       0     0     2
+          c0t0d1              ONLINE       0     0     0
+
+errors: Permanent errors have been detected in the following files:
+
+        /export/example/foo
+
+
+

Damaged files may or may not be removable, depending on the +type of corruption. If the corruption is within the plain data, the +file should be removable. If the corruption is in the file metadata, +then the file cannot be removed, though it can be moved to an +alternate location. In either case, the data should be restored from +a backup source. It is also possible for the corruption to be within +pool-wide metadata, resulting in entire datasets being unavailable. +If this is the case, the only option is to destroy the pool and +re-create the datasets from backup.

+

Details

+

The Message ID: ZFS-8000-8A indicates corrupted data exists in +the current pool.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-9P/index.html b/msg/ZFS-8000-9P/index.html new file mode 100644 index 000000000..82f57c1c3 --- /dev/null +++ b/msg/ZFS-8000-9P/index.html @@ -0,0 +1,264 @@ + + + + + + + Message ID: ZFS-8000-9P — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-9P

+
+

Failing device in replicated configuration

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Minor

Description:

A device has experienced uncorrectable errors in a +replicated configuration.

Automated Response:

ZFS has attempted to repair the affected data.

Impact:

The system is unaffected, though errors may +indicate future failure. Future errors may cause +ZFS to automatically fault the device.

+

Suggested Action for System Administrator

+

Run zpool status -x to determine which pool has experienced errors:

+
# zpool status
+  pool: test
+ state: ONLINE
+status: One or more devices has experienced an unrecoverable error.  An
+        attempt was made to correct the error.  Applications are unaffected.
+action: Determine if the device needs to be replaced, and clear the errors
+        using 'zpool online' or replace the device with 'zpool replace'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  ONLINE       0     0     0
+          mirror              ONLINE       0     0     0
+            c0t0d0            ONLINE       0     0     2
+            c0t0d1            ONLINE       0     0     0
+
+errors: No known data errors
+
+
+

Find the device with a non-zero error count for READ, WRITE, or +CKSUM. This indicates that the device has experienced a read I/O +error, write I/O error, or checksum validation error. Because the +device is part of a mirror or RAID-Z device, ZFS was able to recover +from the error and subsequently repair the damaged data.

+

If these errors persist over a period of time, ZFS may determine the +device is faulty and mark it as such. However, these error counts may +or may not indicate that the device is unusable. It depends on how +the errors were caused, which the administrator can determine in +advance of any ZFS diagnosis. For example, the following cases will +all produce errors that do not indicate potential device failure:

+
    +
  • A network attached device lost connectivity but has now recovered
  • A device suffered from a bit flip, an expected event over long periods of time
  • An administrator accidentally wrote over a portion of the disk using another program
+

In these cases, the presence of errors does not indicate that the +device is likely to fail in the future, and therefore does not need +to be replaced. If this is the case, then the device errors should be +cleared using zpool clear:

+
# zpool clear test c0t0d0
+
+
+

On the other hand, errors may very well indicate that the device has +failed or is about to fail. If there are continual I/O errors to a +device that is otherwise attached and functioning on the system, it +most likely needs to be replaced. The administrator should check the +system log for any driver messages that may indicate hardware +failure. If it is determined that the device needs to be replaced, +then the zpool replace command should be used:

+
# zpool replace test c0t0d0 c0t0d2
+
+
+

This will attach the new device to the pool and begin resilvering +data to it. Once the resilvering process is complete, the old device +will automatically be removed from the pool, at which point it can +safely be removed from the system. If the device needs to be replaced +in-place (because there are no available spare devices), the original +device can be removed and replaced with a new device, at which point +a different form of zpool replace can be used:

+
# zpool replace test c0t0d0
+
+
+

This assumes that the original device at ‘c0t0d0’ has been replaced +with a new device under the same path, and will be replaced +appropriately.

+

You can monitor the progress of the resilvering operation by using +the zpool status -x command:

+
# zpool status -x
+  pool: test
+ state: DEGRADED
+status: One or more devices is currently being replaced.  The pool may not be
+        providing the necessary level of replication.
+action: Wait for the resilvering operation to complete
+ scrub: resilver in progress, 0.14% done, 0h0m to go
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  ONLINE       0     0     0
+          mirror              ONLINE       0     0     0
+            replacing         ONLINE       0     0     0
+              c0t0d0          ONLINE       0     0     3
+              c0t0d2          ONLINE       0     0     0  58.5K resilvered
+            c0t0d1            ONLINE       0     0     0
+
+errors: No known data errors
+
+
+

Details

+

The Message ID: ZFS-8000-9P indicates a device has exceeded the +acceptable limit of errors allowed by the system. See document +203768 +for additional information.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-A5/index.html b/msg/ZFS-8000-A5/index.html new file mode 100644 index 000000000..ea4d77b4f --- /dev/null +++ b/msg/ZFS-8000-A5/index.html @@ -0,0 +1,197 @@ + + + + + + + Message ID: ZFS-8000-A5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-A5

+
+

Incompatible version

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

The on-disk version is not compatible with the +running system.

Automated Response:

No automated response will occur.

Impact:

The pool is unavailable.

+

Suggested Action for System Administrator

+

If this error is seen during zpool import, see the section below. +Otherwise, run zpool status -x to determine which pool is faulted:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: The ZFS version for the pool is incompatible with the software running
+        on this system.
+action: Destroy and re-create the pool.
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  FAULTED      0     0     0  incompatible version
+          mirror              ONLINE       0     0     0
+            sda9              ONLINE       0     0     0
+            sdb9              ONLINE       0     0     0
+
+errors: No known errors
+
+
+

The pool cannot be used on this system. Either move the storage to +the system where the pool was originally created, upgrade the current +system software to a more recent version, or destroy the pool and +re-create it from backup.

+

If this error is seen during import, the pool cannot be imported on +the current system. The disks must be attached to the system which +originally created the pool, and imported there.
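
+

Once the disks are attached to the system that created the pool, the pool can be imported there by name (test is the example pool name used on this page):

+
# zpool import test
+
+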

+

The list of currently supported versions can be displayed using +zpool upgrade -v.
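
+

Running this command on each system makes it easy to compare which on-disk versions and features they support; no pool name is required:

+
# zpool upgrade -v
+
+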

+

Details

+

The Message ID: ZFS-8000-A5 indicates a version mismatch exists +between the running system and the on-disk data.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-ER/index.html b/msg/ZFS-8000-ER/index.html new file mode 100644 index 000000000..cf6f46a91 --- /dev/null +++ b/msg/ZFS-8000-ER/index.html @@ -0,0 +1,440 @@ + + + + + + + Message ID: ZFS-8000-ER — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-ER

+
+

ZFS Errata #1

+ + + + + + + + + + + + + + + + + + +

Type:

Compatibility

Severity:

Moderate

Description:

The ZFS pool contains an on-disk format +incompatibility.

Automated Response:

No automated response will be taken.

Impact:

Until the pool is scrubbed using OpenZFS version +0.6.3 or newer the pool may not be imported by +older versions of OpenZFS or other ZFS +implementations.

+

Suggested Action for System Administrator

+

The pool contains an on-disk format incompatibility. Affected pools +must be imported and scrubbed using the current version of ZFS. This +will return the pool to a state in which it may be imported by other +implementations. This erratum only impacts compatibility between ZFS +versions; no user data is at risk as a result of this erratum.

+
# zpool status -x
+  pool: test
+ state: ONLINE
+status: Errata #1 detected.
+action: To correct the issue run 'zpool scrub'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER
+  scan: none requested
+config:
+
+    NAME            STATE     READ WRITE CKSUM
+    test            ONLINE    0    0     0
+      raidz1-0      ONLINE    0    0     0
+        vdev0       ONLINE    0    0     0
+        vdev1       ONLINE    0    0     0
+        vdev2       ONLINE    0    0     0
+        vdev3       ONLINE    0    0     0
+
+errors: No known data errors
+
+# zpool scrub test
+
+# zpool status -x
+all pools are healthy
+
+
+
+
+

ZFS Errata #2

+ + + + + + + + + + + + + + + + + + +

Type:

Compatibility

Severity:

Moderate

Description:

The ZFS packages were updated while an +asynchronous destroy was in progress and the pool +contains an on-disk format incompatibility.

Automated Response:

No automated response will be taken.

Impact:

The pool cannot be imported until the issue is +corrected.

+

Suggested Action for System Administrator

+

Affected pools must be reverted to the previous ZFS version where +they can be correctly imported. Once imported, all asynchronous +destroy operations must be allowed to complete. The ZFS packages may +then be updated and the pool can be imported cleanly by the newer +software.

+
# zpool import
+  pool: test
+    id: 1165955789558693437
+ state: ONLINE
+status: Errata #2 detected.
+action: The pool cannot be imported with this version of ZFS due to
+        an active asynchronous destroy.  Revert to an earlier version
+        and allow the destroy to complete before updating.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER
+config:
+
+    test           ONLINE
+      raidz1-0     ONLINE
+        vdev0      ONLINE
+        vdev1      ONLINE
+        vdev2      ONLINE
+        vdev3      ONLINE
+
+
+

Revert to the previous ZFS version, import the pool, then wait for the +freeing property to drop to zero. A value of zero indicates that all +outstanding asynchronous destroys have completed.

+
# zpool get freeing
+NAME  PROPERTY  VALUE    SOURCE
+test  freeing   0        default
+
+
+

The ZFS packages may now be updated and the pool imported. The +on-disk format incompatibility can then be corrected online as +described in Errata #1.
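
+

As a sketch, using the example pool test, the remaining steps after updating the packages are the same as those shown for Errata #1:

+
# zpool import test
+# zpool scrub test
+# zpool status -x
+all pools are healthy
+
+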

+
+
+

ZFS Errata #3

+ + + + + + + + + + + + + + + + + + +

Type:

Compatibility

Severity:

Moderate

Description:

An encrypted dataset contains an on-disk format +incompatibility.

Automated Response:

No automated response will be taken.

Impact:

Encrypted datasets created before the ZFS packages +were updated cannot be mounted or opened for +write. The errata impacts the ability of ZFS to +correctly perform raw sends, so this functionality +has been disabled for these datasets.

+

Suggested Action for System Administrator

+

System administrators with affected pools will need to recreate any +encrypted datasets created before the new version of ZFS was used. +This can be accomplished by using zfs send and zfs receive. +Note, however, that backups can NOT be done with a raw zfs send -w, +since this would preserve the on-disk incompatibility. +Alternatively, system administrators can use conventional tools to +back up data to new encrypted datasets. The new version of ZFS will +prevent new data from being written to the impacted datasets, but +they can still be mounted read-only.

+
# zpool status
+  pool: test
+    id: 1165955789558693437
+ state: ONLINE
+status: Errata #3 detected.
+action: To correct the issue backup existing encrypted datasets to new
+        encrypted datasets and destroy the old ones.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER
+config:
+
+    test           ONLINE
+      raidz1-0     ONLINE
+        vdev0      ONLINE
+        vdev1      ONLINE
+        vdev2      ONLINE
+        vdev3      ONLINE
+
+
+

Import the pool and back up any existing encrypted datasets to new +datasets. To ensure the new datasets are re-encrypted, be sure to +receive them below an encryption root or use zfs receive -o +encryption=on, then destroy the source dataset.

+
# zfs send test/crypt1@snap1 | zfs receive -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile test/newcrypt1
+# zfs send -I test/crypt1@snap1 test/crypt1@snap5 | zfs receive test/newcrypt1
+# zfs destroy -R test/crypt1
+
+
+

New datasets can be mounted read-write and used normally. The errata +will be cleared upon reimporting the pool, and the alert will only be +shown again if another dataset is found with the errata. To ensure +that all datasets are on the new version, reimport the pool, load all +keys, mount all encrypted datasets, and check zpool status.

+
# zpool export test
+# zpool import test
+# zfs load-key -a
+Enter passphrase for 'test/crypt1':
+1 / 1 key(s) successfully loaded
+# zfs mount -a
+# zpool status -x
+all pools are healthy
+
+
+
+
+

ZFS Errata #4

+ + + + + + + + + + + + + + + + + + +

Type:

Compatibility

Severity:

Moderate

Description:

An encrypted dataset contains an on-disk format +incompatibility.

Automated Response:

No automated response will be taken.

Impact:

Encrypted datasets created before the ZFS packages +were updated cannot be backed up via a raw send to +an updated system. These datasets also cannot +receive additional snapshots. New encrypted +datasets cannot be created until the +bookmark_v2 feature has been enabled.

+

Suggested Action for System Administrator

+

First, system administrators with affected pools will need to enable +the bookmark_v2 feature on their pools. Enabling this feature +will prevent this pool from being imported by previous versions of +the ZFS software after any new bookmarks are created (including +read-only imports). If the pool contains no encrypted datasets, this +is the only step required. If there are existing encrypted datasets, +administrators will then need to back these datasets up. This can be +done in several ways. Non-raw zfs send and zfs receive can be +used as usual, as can traditional backup tools. Raw receives of +existing encrypted datasets and raw receives into existing encrypted +datasets are currently disabled because ZFS is not able to guarantee +that the stream and the existing dataset came from a consistent +source. This check can be disabled, which will allow ZFS to receive +these streams anyway. Note that this can result in datasets with data +that cannot be accessed due to authentication errors if raw and +non-raw receives are mixed over the course of several incremental +backups. To disable this restriction, set the +zfs_disable_ivset_guid_check module parameter to 1. Streams +received this way (as well as any received before the upgrade) will +need to be manually checked by reading the data to ensure they are +not corrupted. Note that zpool scrub cannot be used for this +purpose because the scrub does not check the cryptographic +authentication codes. For more information on this issue, please +refer to the zfs man page section on zfs receive, which describes +the restrictions on raw sends.
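
+

One possible way to perform that manual check is simply to read back every file in the received dataset and discard the output; any authentication failure will surface as a read error. The mountpoint /test/newcrypt1 below is only an example:

+
# find /test/newcrypt1 -type f -exec cat {} + > /dev/null
+
+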

+
# zpool status
+  pool: test
+ state: ONLINE
+status: Errata #4 detected.
+        Existing encrypted datasets contain an on-disk incompatibility
+        which needs to be corrected.
+action: To correct the issue enable the bookmark_v2 feature and backup
+        any existing encrypted datasets to new encrypted datasets and
+        destroy the old ones. If this pool does not contain any
+        encrypted datasets, simply enable the bookmark_v2 feature.
+   see: http://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER
+  scan: none requested
+config:
+
+        NAME           STATE     READ WRITE CKSUM
+        test           ONLINE       0     0     0
+          /root/vdev0  ONLINE       0     0     0
+
+errors: No known data errors
+
+
+

Import the pool and enable the bookmark_v2 feature. Then back up +any existing encrypted datasets to new datasets. This can be done +with traditional tools or via zfs send. Raw sends will require +that zfs_disable_ivset_guid_check is set to 1 on the receive +side. Once this is done, the original datasets should be destroyed.

+
# zpool set feature@bookmark_v2=enabled test
+# echo 1 > /sys/module/zfs/parameters/zfs_disable_ivset_guid_check
+# zfs send -Rw test/crypt1@snap1 | zfs receive test/newcrypt1
+# zfs send -I test/crypt1@snap1 test/crypt1@snap5 | zfs receive test/newcrypt1
+# zfs destroy -R test/crypt1
+# echo 0 > /sys/module/zfs/parameters/zfs_disable_ivset_guid_check
+
+
+

The errata will be cleared upon reimporting the pool, and the alert +will only be shown again if another dataset is found with the errata. +To check that all datasets are fixed, perform a zfs list -t all +and check zpool status once it has completed.

+
# zpool export test
+# zpool import test
+# zpool scrub # wait for completion
+# zpool status -x
+all pools are healthy
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-EY/index.html b/msg/ZFS-8000-EY/index.html new file mode 100644 index 000000000..8ada7bb67 --- /dev/null +++ b/msg/ZFS-8000-EY/index.html @@ -0,0 +1,195 @@ + + + + + + + Message ID: ZFS-8000-EY — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-EY

+
+

ZFS label hostid mismatch

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

The ZFS pool was last accessed by another system.

Automated Response:

No automated response will be taken.

Impact:

ZFS filesystems are not available.

+

Suggested Action for System Administrator

+

The pool has been written to from another host, and was not cleanly +exported from the other system. Actively importing a pool on multiple +systems will corrupt the pool and leave it in an unrecoverable state. +To determine which system last accessed the pool, run the zpool +import command:

+
# zpool import
+  pool: test
+    id: 14702934086626715962
+ state: ONLINE
+status: The pool was last accessed by another system.
+action: The pool can be imported using its name or numeric identifier and
+        the '-f' flag.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
+config:
+
+        test              ONLINE
+          c0t0d0          ONLINE
+
+# zpool import test
+cannot import 'test': pool may be in use from other system, it was last
+accessed by 'tank' (hostid: 0x1435718c) on Fri Mar  9 15:42:47 2007
+use '-f' to import anyway
+
+
+

If you are certain that the pool is not being actively accessed by +another system, then you can use the -f option to zpool import to +forcibly import the pool.
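
+

For example, to forcibly import the pool from the output above once you have confirmed that no other host is using it:

+
# zpool import -f test
+
+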

+

Details

+

The Message ID: ZFS-8000-EY indicates that the pool cannot be +imported as it was last accessed by another system. Take the +documented action to resolve the problem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-HC/index.html b/msg/ZFS-8000-HC/index.html new file mode 100644 index 000000000..c1fa21eaa --- /dev/null +++ b/msg/ZFS-8000-HC/index.html @@ -0,0 +1,198 @@ + + + + + + + Message ID: ZFS-8000-HC — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-HC

+
+

ZFS pool I/O failures

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

The ZFS pool has experienced currently +unrecoverable I/O failures.

Automated Response:

No automated response will be taken.

Impact:

Read and write I/Os cannot be serviced.

+

Suggested Action for System Administrator

+

The pool has experienced I/O failures. Since the ZFS pool property +failmode is set to ‘wait’, all I/Os (reads and writes) are blocked. +See the zpoolprops(8) manpage for more information on the failmode +property. Manual intervention is required for I/Os to be serviced.
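
+

The current setting can be inspected, and changed if a different behavior is preferred, with zpool get and zpool set. The pool name test matches the example below; review zpoolprops(8) before changing the property:

+
# zpool get failmode test
+# zpool set failmode=continue test
+
+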

+

You can see which devices are affected by running zpool status -x:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: There are I/O failures.
+action: Make sure the affected devices are connected, then run 'zpool clear'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
+ scrub: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        test        FAULTED      0    13     0  insufficient replicas
+          c0t0d0    FAULTED      0     7     0  experienced I/O failures
+          c0t1d0    ONLINE       0     0     0
+
+errors: 1 data errors, use '-v' for a list
+
+
+

After you have made sure the affected devices are connected, run zpool +clear to allow I/O to the pool again:

+
# zpool clear test
+
+
+

If I/O failures continue to happen, then applications and commands for the pool +may hang. At this point, a reboot may be necessary to allow I/O to the pool +again.

+

Details

+

The Message ID: ZFS-8000-HC indicates that the pool has experienced I/O +failures. Take the documented action to resolve the problem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-JQ/index.html b/msg/ZFS-8000-JQ/index.html new file mode 100644 index 000000000..85ee56e8f --- /dev/null +++ b/msg/ZFS-8000-JQ/index.html @@ -0,0 +1,200 @@ + + + + + + + Message ID: ZFS-8000-JQ — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-JQ

+
+

ZFS pool I/O failures

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

The ZFS pool has experienced currently +unrecoverable I/O failures.

Automated Response:

No automated response will be taken.

Impact:

Write I/Os cannot be serviced.

+

Suggested Action for System Administrator

+

The pool has experienced I/O failures. Since the ZFS pool property +failmode is set to ‘continue’, read I/Os will continue to be +serviced, but write I/Os are blocked. See the zpoolprops(8) manpage for +more information on the failmode property. Manual intervention is +required for write I/Os to be serviced. You can see which devices are +affected by running zpool status -x:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: There are I/O failures.
+action: Make sure the affected devices are connected, then run 'zpool clear'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
+ scrub: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        test        FAULTED      0    13     0  insufficient replicas
+          sda9      FAULTED      0     7     0  experienced I/O failures
+          sdb9      ONLINE       0     0     0
+
+errors: 1 data errors, use '-v' for a list
+
+
+

After you have made sure the affected devices are connected, run +zpool clear to allow write I/O to the pool again:

+
# zpool clear test
+
+
+

If I/O failures continue to happen, then applications and commands +for the pool may hang. At this point, a reboot may be necessary to +allow I/O to the pool again.

+

Details

+

The Message ID: ZFS-8000-JQ indicates that the pool has +experienced I/O failures. Take the documented action to resolve the +problem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-K4/index.html b/msg/ZFS-8000-K4/index.html new file mode 100644 index 000000000..3a303fc5a --- /dev/null +++ b/msg/ZFS-8000-K4/index.html @@ -0,0 +1,244 @@ + + + + + + + Message ID: ZFS-8000-K4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-K4

+
+

ZFS intent log read failure

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

A ZFS intent log device could not be read.

Automated Response:

No automated response will be taken.

Impact:

The intent log(s) cannot be replayed.

+

Suggested Action for System Administrator

+

A ZFS intent log record could not be read due to an error. This may +be due to a missing or broken log device, or a device within the pool +may be experiencing I/O errors. The pool itself is not corrupt but is +missing some pool changes that happened shortly before a power loss +or system failure. These are pool changes that applications had +requested to be written synchronously but had not been committed in +the pool. This transaction group commit currently occurs every five +seconds, and so typically at most five seconds’ worth of synchronous +writes have been lost. ZFS itself cannot determine whether the lost +pool changes are critical to those applications running at the time +of the system failure. This is a decision the administrator must +make. You may want to consider mirroring log devices. First determine +which pool is in error:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: One or more of the intent logs could not be read.
+        Waiting for adminstrator intervention to fix the faulted pool.
+action: Either restore the affected device(s) and run 'zpool online',
+        or ignore the intent log records by running 'zpool clear'.
+ scrub: none requested
+config:
+
+        NAME              STATE     READ WRITE CKSUM
+        test              FAULTED      0     0     0  bad intent log
+          c3t2d0          ONLINE       0     0     0
+        logs              FAULTED      0     0     0  bad intent log
+          c5t3d0          UNAVAIL      0     0     0  cannot open
+
+
+

There are two courses of action to resolve this problem. +If the validity of the pool from an application perspective requires +the pool changes, then the log devices must be recovered. Make sure +power and cables are connected and that the affected device is +online. Then run zpool online followed by zpool clear:

+
# zpool online test c5t3d0
+# zpool clear test
+# zpool status test
+  pool: test
+ state: ONLINE
+ scrub: none requested
+config:
+
+        NAME              STATE     READ WRITE CKSUM
+        test              ONLINE       0     0     0
+          c3t2d0          ONLINE       0     0     0
+        logs              ONLINE       0     0     0
+          c5t3d0          ONLINE       0     0     0
+
+errors: No known data errors
+
+
+

The second alternative action is to ignore the most recent pool +changes that could not be read. To do this run zpool clear:

+
# zpool clear test
+# zpool status test
+  pool: test
+ state: DEGRADED
+status: One or more devices could not be opened.  Sufficient replicas exist for
+        the pool to continue functioning in a degraded state.
+action: Attach the missing device and online it using 'zpool online'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
+ scrub: none requested
+config:
+
+        NAME              STATE     READ WRITE CKSUM
+        test              DEGRADED     0     0     0
+          c3t2d0          ONLINE       0     0     0
+        logs              DEGRADED     0     0     0
+          c5t3d0          UNAVAIL      0     0     0  cannot open
+
+errors: No known data errors
+
+
+

Future log records will not use a failed log device but will be +written to the main pool. You should fix or replace any failed log +devices.
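
+

As a sketch (the replacement and mirror device names below are hypothetical), a failed log device can be replaced once a working device is available, and the log can then be mirrored by attaching a second device:

+
# zpool replace test c5t3d0 c5t4d0
+# zpool attach test c5t4d0 c5t5d0
+
+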

+

Details

+

The Message ID: ZFS-8000-K4 indicates that a log device is +missing or cannot be read.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/index.html b/msg/index.html new file mode 100644 index 000000000..3c5411454 --- /dev/null +++ b/msg/index.html @@ -0,0 +1,205 @@ + + + + + + + ZFS Messages — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/objects.inv b/objects.inv new file mode 100644 index 000000000..682ac6a22 Binary files /dev/null and b/objects.inv differ diff --git a/search.html b/search.html new file mode 100644 index 000000000..bb62c78d2 --- /dev/null +++ b/search.html @@ -0,0 +1,131 @@ + + + + + + Search — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ + + + +
+ +
+ +
+
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/searchindex.js b/searchindex.js new file mode 100644 index 000000000..c8deefa78 --- /dev/null +++ b/searchindex.js @@ -0,0 +1 @@ +Search.setIndex({"docnames": ["404", "Basic Concepts/Checksums", "Basic Concepts/Feature Flags", "Basic Concepts/RAIDZ", "Basic Concepts/Troubleshooting", "Basic Concepts/dRAID Howto", "Basic Concepts/index", "Developer Resources/Buildbot Options", "Developer Resources/Building ZFS", "Developer Resources/Custom Packages", "Developer Resources/Git and GitHub for beginners", "Developer Resources/OpenZFS Exceptions", "Developer Resources/OpenZFS Patches", "Developer Resources/index", "Getting Started/Alpine Linux/Root on ZFS", "Getting Started/Alpine Linux/index", "Getting Started/Arch Linux/Root on ZFS", "Getting Started/Arch Linux/index", "Getting Started/Debian/Debian Bookworm Root on ZFS", "Getting Started/Debian/Debian Bullseye Root on ZFS", "Getting Started/Debian/Debian Buster Root on ZFS", "Getting Started/Debian/Debian GNU Linux initrd documentation", "Getting Started/Debian/Debian Stretch Root on ZFS", "Getting Started/Debian/index", "Getting Started/Fedora", "Getting Started/Fedora/Root on ZFS", "Getting Started/Fedora/index", "Getting Started/FreeBSD", "Getting Started/NixOS/Root on ZFS", "Getting Started/NixOS/index", "Getting Started/RHEL and CentOS", "Getting Started/RHEL-based distro/Root on ZFS", "Getting Started/RHEL-based distro/index", "Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS", "Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS", "Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi", "Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS", "Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi", "Getting Started/Ubuntu/index", "Getting Started/index", "Getting Started/openSUSE/index", "Getting Started/openSUSE/openSUSE Leap Root on ZFS", "Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS", "Getting Started/zfs_root_maintenance", "License", "Performance and Tuning/Async Write", "Performance and Tuning/Hardware", "Performance and Tuning/Module Parameters", "Performance and Tuning/Workload Tuning", "Performance and Tuning/ZFS Transaction Delay", "Performance and Tuning/ZIO Scheduler", "Performance and Tuning/index", "Project and Community/Admin Documentation", "Project and Community/FAQ", "Project and Community/FAQ hole birth", "Project and Community/Mailing Lists", "Project and Community/Signing Keys", "Project and Community/index", "_TableOfContents", "index", "man/index", "man/master/1/arcstat.1", "man/master/1/cstyle.1", "man/master/1/index", "man/master/1/raidz_test.1", "man/master/1/test-runner.1", "man/master/1/zhack.1", "man/master/1/ztest.1", "man/master/1/zvol_wait.1", "man/master/4/index", "man/master/4/spl.4", "man/master/4/zfs.4", "man/master/5/index", "man/master/5/vdev_id.conf.5", "man/master/7/dracut.zfs.7", "man/master/7/index", "man/master/7/vdevprops.7", "man/master/7/zfsconcepts.7", "man/master/7/zfsprops.7", "man/master/7/zpool-features.7", "man/master/7/zpoolconcepts.7", "man/master/7/zpoolprops.7", "man/master/8/fsck.zfs.8", "man/master/8/index", "man/master/8/mount.zfs.8", "man/master/8/vdev_id.8", "man/master/8/zdb.8", "man/master/8/zed.8", "man/master/8/zfs-allow.8", "man/master/8/zfs-bookmark.8", "man/master/8/zfs-change-key.8", "man/master/8/zfs-clone.8", "man/master/8/zfs-create.8", "man/master/8/zfs-destroy.8", "man/master/8/zfs-diff.8", "man/master/8/zfs-get.8", "man/master/8/zfs-groupspace.8", "man/master/8/zfs-hold.8", 
"man/master/8/zfs-inherit.8", "man/master/8/zfs-jail.8", "man/master/8/zfs-list.8", "man/master/8/zfs-load-key.8", "man/master/8/zfs-mount-generator.8", "man/master/8/zfs-mount.8", "man/master/8/zfs-program.8", "man/master/8/zfs-project.8", "man/master/8/zfs-projectspace.8", "man/master/8/zfs-promote.8", "man/master/8/zfs-receive.8", "man/master/8/zfs-recv.8", "man/master/8/zfs-redact.8", "man/master/8/zfs-release.8", "man/master/8/zfs-rename.8", "man/master/8/zfs-rollback.8", "man/master/8/zfs-send.8", "man/master/8/zfs-set.8", "man/master/8/zfs-share.8", "man/master/8/zfs-snapshot.8", "man/master/8/zfs-unallow.8", "man/master/8/zfs-unjail.8", "man/master/8/zfs-unload-key.8", "man/master/8/zfs-unmount.8", "man/master/8/zfs-unzone.8", "man/master/8/zfs-upgrade.8", "man/master/8/zfs-userspace.8", "man/master/8/zfs-wait.8", "man/master/8/zfs-zone.8", "man/master/8/zfs.8", "man/master/8/zfs_ids_to_path.8", "man/master/8/zfs_prepare_disk.8", "man/master/8/zgenhostid.8", "man/master/8/zinject.8", "man/master/8/zpool-add.8", "man/master/8/zpool-attach.8", "man/master/8/zpool-checkpoint.8", "man/master/8/zpool-clear.8", "man/master/8/zpool-create.8", "man/master/8/zpool-destroy.8", "man/master/8/zpool-detach.8", "man/master/8/zpool-events.8", "man/master/8/zpool-export.8", "man/master/8/zpool-get.8", "man/master/8/zpool-history.8", "man/master/8/zpool-import.8", "man/master/8/zpool-initialize.8", "man/master/8/zpool-iostat.8", "man/master/8/zpool-labelclear.8", "man/master/8/zpool-list.8", "man/master/8/zpool-offline.8", "man/master/8/zpool-online.8", "man/master/8/zpool-reguid.8", "man/master/8/zpool-remove.8", "man/master/8/zpool-reopen.8", "man/master/8/zpool-replace.8", "man/master/8/zpool-resilver.8", "man/master/8/zpool-scrub.8", "man/master/8/zpool-set.8", "man/master/8/zpool-split.8", "man/master/8/zpool-status.8", "man/master/8/zpool-sync.8", "man/master/8/zpool-trim.8", "man/master/8/zpool-upgrade.8", "man/master/8/zpool-wait.8", "man/master/8/zpool.8", "man/master/8/zpool_influxdb.8", "man/master/8/zstream.8", "man/master/8/zstreamdump.8", "man/master/index", "man/v0.6/1/cstyle.1", "man/v0.6/1/index", "man/v0.6/1/zhack.1", "man/v0.6/1/zpios.1", "man/v0.6/1/ztest.1", "man/v0.6/5/index", "man/v0.6/5/vdev_id.conf.5", "man/v0.6/5/zfs-events.5", "man/v0.6/5/zfs-module-parameters.5", "man/v0.6/5/zpool-features.5", "man/v0.6/8/fsck.zfs.8", "man/v0.6/8/index", "man/v0.6/8/mount.zfs.8", "man/v0.6/8/vdev_id.8", "man/v0.6/8/zdb.8", "man/v0.6/8/zed.8", "man/v0.6/8/zfs.8", "man/v0.6/8/zinject.8", "man/v0.6/8/zpool.8", "man/v0.6/8/zstreamdump.8", "man/v0.6/index", "man/v0.7/1/cstyle.1", "man/v0.7/1/index", "man/v0.7/1/raidz_test.1", "man/v0.7/1/zhack.1", "man/v0.7/1/zpios.1", "man/v0.7/1/ztest.1", "man/v0.7/5/index", "man/v0.7/5/vdev_id.conf.5", "man/v0.7/5/zfs-events.5", "man/v0.7/5/zfs-module-parameters.5", "man/v0.7/5/zpool-features.5", "man/v0.7/8/fsck.zfs.8", "man/v0.7/8/index", "man/v0.7/8/mount.zfs.8", "man/v0.7/8/vdev_id.8", "man/v0.7/8/zdb.8", "man/v0.7/8/zed.8", "man/v0.7/8/zfs.8", "man/v0.7/8/zgenhostid.8", "man/v0.7/8/zinject.8", "man/v0.7/8/zpool.8", "man/v0.7/8/zstreamdump.8", "man/v0.7/index", "man/v0.8/1/cstyle.1", "man/v0.8/1/index", "man/v0.8/1/raidz_test.1", "man/v0.8/1/zhack.1", "man/v0.8/1/ztest.1", "man/v0.8/1/zvol_wait.1", "man/v0.8/5/index", "man/v0.8/5/spl-module-parameters.5", "man/v0.8/5/vdev_id.conf.5", "man/v0.8/5/zfs-events.5", "man/v0.8/5/zfs-module-parameters.5", "man/v0.8/5/zpool-features.5", "man/v0.8/8/fsck.zfs.8", "man/v0.8/8/index", "man/v0.8/8/mount.zfs.8", 
"man/v0.8/8/vdev_id.8", "man/v0.8/8/zdb.8", "man/v0.8/8/zed.8", "man/v0.8/8/zfs-mount-generator.8", "man/v0.8/8/zfs-program.8", "man/v0.8/8/zfs.8", "man/v0.8/8/zfsprops.8", "man/v0.8/8/zgenhostid.8", "man/v0.8/8/zinject.8", "man/v0.8/8/zpool.8", "man/v0.8/8/zstreamdump.8", "man/v0.8/index", "man/v2.0/1/arcstat.1", "man/v2.0/1/cstyle.1", "man/v2.0/1/index", "man/v2.0/1/raidz_test.1", "man/v2.0/1/zhack.1", "man/v2.0/1/ztest.1", "man/v2.0/1/zvol_wait.1", "man/v2.0/5/index", "man/v2.0/5/spl-module-parameters.5", "man/v2.0/5/vdev_id.conf.5", "man/v2.0/5/zfs-events.5", "man/v2.0/5/zfs-module-parameters.5", "man/v2.0/5/zpool-features.5", "man/v2.0/8/fsck.zfs.8", "man/v2.0/8/index", "man/v2.0/8/mount.zfs.8", "man/v2.0/8/vdev_id.8", "man/v2.0/8/zdb.8", "man/v2.0/8/zed.8", "man/v2.0/8/zfs-allow.8", "man/v2.0/8/zfs-bookmark.8", "man/v2.0/8/zfs-change-key.8", "man/v2.0/8/zfs-clone.8", "man/v2.0/8/zfs-create.8", "man/v2.0/8/zfs-destroy.8", "man/v2.0/8/zfs-diff.8", "man/v2.0/8/zfs-get.8", "man/v2.0/8/zfs-groupspace.8", "man/v2.0/8/zfs-hold.8", "man/v2.0/8/zfs-inherit.8", "man/v2.0/8/zfs-jail.8", "man/v2.0/8/zfs-list.8", "man/v2.0/8/zfs-load-key.8", "man/v2.0/8/zfs-mount-generator.8", "man/v2.0/8/zfs-mount.8", "man/v2.0/8/zfs-program.8", "man/v2.0/8/zfs-project.8", "man/v2.0/8/zfs-projectspace.8", "man/v2.0/8/zfs-promote.8", "man/v2.0/8/zfs-receive.8", "man/v2.0/8/zfs-recv.8", "man/v2.0/8/zfs-redact.8", "man/v2.0/8/zfs-release.8", "man/v2.0/8/zfs-rename.8", "man/v2.0/8/zfs-rollback.8", "man/v2.0/8/zfs-send.8", "man/v2.0/8/zfs-set.8", "man/v2.0/8/zfs-share.8", "man/v2.0/8/zfs-snapshot.8", "man/v2.0/8/zfs-unallow.8", "man/v2.0/8/zfs-unjail.8", "man/v2.0/8/zfs-unload-key.8", "man/v2.0/8/zfs-unmount.8", "man/v2.0/8/zfs-upgrade.8", "man/v2.0/8/zfs-userspace.8", "man/v2.0/8/zfs-wait.8", "man/v2.0/8/zfs.8", "man/v2.0/8/zfs_ids_to_path.8", "man/v2.0/8/zfsconcepts.8", "man/v2.0/8/zfsprops.8", "man/v2.0/8/zgenhostid.8", "man/v2.0/8/zinject.8", "man/v2.0/8/zpool-add.8", "man/v2.0/8/zpool-attach.8", "man/v2.0/8/zpool-checkpoint.8", "man/v2.0/8/zpool-clear.8", "man/v2.0/8/zpool-create.8", "man/v2.0/8/zpool-destroy.8", "man/v2.0/8/zpool-detach.8", "man/v2.0/8/zpool-events.8", "man/v2.0/8/zpool-export.8", "man/v2.0/8/zpool-get.8", "man/v2.0/8/zpool-history.8", "man/v2.0/8/zpool-import.8", "man/v2.0/8/zpool-initialize.8", "man/v2.0/8/zpool-iostat.8", "man/v2.0/8/zpool-labelclear.8", "man/v2.0/8/zpool-list.8", "man/v2.0/8/zpool-offline.8", "man/v2.0/8/zpool-online.8", "man/v2.0/8/zpool-reguid.8", "man/v2.0/8/zpool-remove.8", "man/v2.0/8/zpool-reopen.8", "man/v2.0/8/zpool-replace.8", "man/v2.0/8/zpool-resilver.8", "man/v2.0/8/zpool-scrub.8", "man/v2.0/8/zpool-set.8", "man/v2.0/8/zpool-split.8", "man/v2.0/8/zpool-status.8", "man/v2.0/8/zpool-sync.8", "man/v2.0/8/zpool-trim.8", "man/v2.0/8/zpool-upgrade.8", "man/v2.0/8/zpool-wait.8", "man/v2.0/8/zpool.8", "man/v2.0/8/zpoolconcepts.8", "man/v2.0/8/zpoolprops.8", "man/v2.0/8/zstream.8", "man/v2.0/8/zstreamdump.8", "man/v2.0/index", "man/v2.1/1/arcstat.1", "man/v2.1/1/cstyle.1", "man/v2.1/1/index", "man/v2.1/1/raidz_test.1", "man/v2.1/1/zhack.1", "man/v2.1/1/ztest.1", "man/v2.1/1/zvol_wait.1", "man/v2.1/4/index", "man/v2.1/4/spl.4", "man/v2.1/4/zfs.4", "man/v2.1/5/index", "man/v2.1/5/vdev_id.conf.5", "man/v2.1/7/dracut.zfs.7", "man/v2.1/7/index", "man/v2.1/7/zfsconcepts.7", "man/v2.1/7/zfsprops.7", "man/v2.1/7/zpool-features.7", "man/v2.1/7/zpoolconcepts.7", "man/v2.1/7/zpoolprops.7", "man/v2.1/8/fsck.zfs.8", "man/v2.1/8/index", "man/v2.1/8/mount.zfs.8", 
"man/v2.1/8/vdev_id.8", "man/v2.1/8/zdb.8", "man/v2.1/8/zed.8", "man/v2.1/8/zfs-allow.8", "man/v2.1/8/zfs-bookmark.8", "man/v2.1/8/zfs-change-key.8", "man/v2.1/8/zfs-clone.8", "man/v2.1/8/zfs-create.8", "man/v2.1/8/zfs-destroy.8", "man/v2.1/8/zfs-diff.8", "man/v2.1/8/zfs-get.8", "man/v2.1/8/zfs-groupspace.8", "man/v2.1/8/zfs-hold.8", "man/v2.1/8/zfs-inherit.8", "man/v2.1/8/zfs-jail.8", "man/v2.1/8/zfs-list.8", "man/v2.1/8/zfs-load-key.8", "man/v2.1/8/zfs-mount-generator.8", "man/v2.1/8/zfs-mount.8", "man/v2.1/8/zfs-program.8", "man/v2.1/8/zfs-project.8", "man/v2.1/8/zfs-projectspace.8", "man/v2.1/8/zfs-promote.8", "man/v2.1/8/zfs-receive.8", "man/v2.1/8/zfs-recv.8", "man/v2.1/8/zfs-redact.8", "man/v2.1/8/zfs-release.8", "man/v2.1/8/zfs-rename.8", "man/v2.1/8/zfs-rollback.8", "man/v2.1/8/zfs-send.8", "man/v2.1/8/zfs-set.8", "man/v2.1/8/zfs-share.8", "man/v2.1/8/zfs-snapshot.8", "man/v2.1/8/zfs-unallow.8", "man/v2.1/8/zfs-unjail.8", "man/v2.1/8/zfs-unload-key.8", "man/v2.1/8/zfs-unmount.8", "man/v2.1/8/zfs-upgrade.8", "man/v2.1/8/zfs-userspace.8", "man/v2.1/8/zfs-wait.8", "man/v2.1/8/zfs.8", "man/v2.1/8/zfs_ids_to_path.8", "man/v2.1/8/zgenhostid.8", "man/v2.1/8/zinject.8", "man/v2.1/8/zpool-add.8", "man/v2.1/8/zpool-attach.8", "man/v2.1/8/zpool-checkpoint.8", "man/v2.1/8/zpool-clear.8", "man/v2.1/8/zpool-create.8", "man/v2.1/8/zpool-destroy.8", "man/v2.1/8/zpool-detach.8", "man/v2.1/8/zpool-events.8", "man/v2.1/8/zpool-export.8", "man/v2.1/8/zpool-get.8", "man/v2.1/8/zpool-history.8", "man/v2.1/8/zpool-import.8", "man/v2.1/8/zpool-initialize.8", "man/v2.1/8/zpool-iostat.8", "man/v2.1/8/zpool-labelclear.8", "man/v2.1/8/zpool-list.8", "man/v2.1/8/zpool-offline.8", "man/v2.1/8/zpool-online.8", "man/v2.1/8/zpool-reguid.8", "man/v2.1/8/zpool-remove.8", "man/v2.1/8/zpool-reopen.8", "man/v2.1/8/zpool-replace.8", "man/v2.1/8/zpool-resilver.8", "man/v2.1/8/zpool-scrub.8", "man/v2.1/8/zpool-set.8", "man/v2.1/8/zpool-split.8", "man/v2.1/8/zpool-status.8", "man/v2.1/8/zpool-sync.8", "man/v2.1/8/zpool-trim.8", "man/v2.1/8/zpool-upgrade.8", "man/v2.1/8/zpool-wait.8", "man/v2.1/8/zpool.8", "man/v2.1/8/zpool_influxdb.8", "man/v2.1/8/zstream.8", "man/v2.1/8/zstreamdump.8", "man/v2.1/index", "man/v2.2/1/arcstat.1", "man/v2.2/1/cstyle.1", "man/v2.2/1/index", "man/v2.2/1/raidz_test.1", "man/v2.2/1/test-runner.1", "man/v2.2/1/zhack.1", "man/v2.2/1/ztest.1", "man/v2.2/1/zvol_wait.1", "man/v2.2/4/index", "man/v2.2/4/spl.4", "man/v2.2/4/zfs.4", "man/v2.2/5/index", "man/v2.2/5/vdev_id.conf.5", "man/v2.2/7/dracut.zfs.7", "man/v2.2/7/index", "man/v2.2/7/vdevprops.7", "man/v2.2/7/zfsconcepts.7", "man/v2.2/7/zfsprops.7", "man/v2.2/7/zpool-features.7", "man/v2.2/7/zpoolconcepts.7", "man/v2.2/7/zpoolprops.7", "man/v2.2/8/fsck.zfs.8", "man/v2.2/8/index", "man/v2.2/8/mount.zfs.8", "man/v2.2/8/vdev_id.8", "man/v2.2/8/zdb.8", "man/v2.2/8/zed.8", "man/v2.2/8/zfs-allow.8", "man/v2.2/8/zfs-bookmark.8", "man/v2.2/8/zfs-change-key.8", "man/v2.2/8/zfs-clone.8", "man/v2.2/8/zfs-create.8", "man/v2.2/8/zfs-destroy.8", "man/v2.2/8/zfs-diff.8", "man/v2.2/8/zfs-get.8", "man/v2.2/8/zfs-groupspace.8", "man/v2.2/8/zfs-hold.8", "man/v2.2/8/zfs-inherit.8", "man/v2.2/8/zfs-jail.8", "man/v2.2/8/zfs-list.8", "man/v2.2/8/zfs-load-key.8", "man/v2.2/8/zfs-mount-generator.8", "man/v2.2/8/zfs-mount.8", "man/v2.2/8/zfs-program.8", "man/v2.2/8/zfs-project.8", "man/v2.2/8/zfs-projectspace.8", "man/v2.2/8/zfs-promote.8", "man/v2.2/8/zfs-receive.8", "man/v2.2/8/zfs-recv.8", "man/v2.2/8/zfs-redact.8", "man/v2.2/8/zfs-release.8", "man/v2.2/8/zfs-rename.8", 
"man/v2.2/8/zfs-rollback.8", "man/v2.2/8/zfs-send.8", "man/v2.2/8/zfs-set.8", "man/v2.2/8/zfs-share.8", "man/v2.2/8/zfs-snapshot.8", "man/v2.2/8/zfs-unallow.8", "man/v2.2/8/zfs-unjail.8", "man/v2.2/8/zfs-unload-key.8", "man/v2.2/8/zfs-unmount.8", "man/v2.2/8/zfs-unzone.8", "man/v2.2/8/zfs-upgrade.8", "man/v2.2/8/zfs-userspace.8", "man/v2.2/8/zfs-wait.8", "man/v2.2/8/zfs-zone.8", "man/v2.2/8/zfs.8", "man/v2.2/8/zfs_ids_to_path.8", "man/v2.2/8/zfs_prepare_disk.8", "man/v2.2/8/zgenhostid.8", "man/v2.2/8/zinject.8", "man/v2.2/8/zpool-add.8", "man/v2.2/8/zpool-attach.8", "man/v2.2/8/zpool-checkpoint.8", "man/v2.2/8/zpool-clear.8", "man/v2.2/8/zpool-create.8", "man/v2.2/8/zpool-destroy.8", "man/v2.2/8/zpool-detach.8", "man/v2.2/8/zpool-events.8", "man/v2.2/8/zpool-export.8", "man/v2.2/8/zpool-get.8", "man/v2.2/8/zpool-history.8", "man/v2.2/8/zpool-import.8", "man/v2.2/8/zpool-initialize.8", "man/v2.2/8/zpool-iostat.8", "man/v2.2/8/zpool-labelclear.8", "man/v2.2/8/zpool-list.8", "man/v2.2/8/zpool-offline.8", "man/v2.2/8/zpool-online.8", "man/v2.2/8/zpool-reguid.8", "man/v2.2/8/zpool-remove.8", "man/v2.2/8/zpool-reopen.8", "man/v2.2/8/zpool-replace.8", "man/v2.2/8/zpool-resilver.8", "man/v2.2/8/zpool-scrub.8", "man/v2.2/8/zpool-set.8", "man/v2.2/8/zpool-split.8", "man/v2.2/8/zpool-status.8", "man/v2.2/8/zpool-sync.8", "man/v2.2/8/zpool-trim.8", "man/v2.2/8/zpool-upgrade.8", "man/v2.2/8/zpool-wait.8", "man/v2.2/8/zpool.8", "man/v2.2/8/zpool_influxdb.8", "man/v2.2/8/zstream.8", "man/v2.2/8/zstreamdump.8", "man/v2.2/index", "msg/ZFS-8000-14/index", "msg/ZFS-8000-2Q/index", "msg/ZFS-8000-3C/index", "msg/ZFS-8000-4J/index", "msg/ZFS-8000-5E/index", "msg/ZFS-8000-6X/index", "msg/ZFS-8000-72/index", "msg/ZFS-8000-8A/index", "msg/ZFS-8000-9P/index", "msg/ZFS-8000-A5/index", "msg/ZFS-8000-ER/index", "msg/ZFS-8000-EY/index", "msg/ZFS-8000-HC/index", "msg/ZFS-8000-JQ/index", "msg/ZFS-8000-K4/index", "msg/index"], "filenames": ["404.rst", "Basic Concepts/Checksums.rst", "Basic Concepts/Feature Flags.rst", "Basic Concepts/RAIDZ.rst", "Basic Concepts/Troubleshooting.rst", "Basic Concepts/dRAID Howto.rst", "Basic Concepts/index.rst", "Developer Resources/Buildbot Options.rst", "Developer Resources/Building ZFS.rst", "Developer Resources/Custom Packages.rst", "Developer Resources/Git and GitHub for beginners.rst", "Developer Resources/OpenZFS Exceptions.rst", "Developer Resources/OpenZFS Patches.rst", "Developer Resources/index.rst", "Getting Started/Alpine Linux/Root on ZFS.rst", "Getting Started/Alpine Linux/index.rst", "Getting Started/Arch Linux/Root on ZFS.rst", "Getting Started/Arch Linux/index.rst", "Getting Started/Debian/Debian Bookworm Root on ZFS.rst", "Getting Started/Debian/Debian Bullseye Root on ZFS.rst", "Getting Started/Debian/Debian Buster Root on ZFS.rst", "Getting Started/Debian/Debian GNU Linux initrd documentation.rst", "Getting Started/Debian/Debian Stretch Root on ZFS.rst", "Getting Started/Debian/index.rst", "Getting Started/Fedora.rst", "Getting Started/Fedora/Root on ZFS.rst", "Getting Started/Fedora/index.rst", "Getting Started/FreeBSD.rst", "Getting Started/NixOS/Root on ZFS.rst", "Getting Started/NixOS/index.rst", "Getting Started/RHEL and CentOS.rst", "Getting Started/RHEL-based distro/Root on ZFS.rst", "Getting Started/RHEL-based distro/index.rst", "Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst", "Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst", "Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.rst", "Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.rst", 
"Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.rst", "Getting Started/Ubuntu/index.rst", "Getting Started/index.rst", "Getting Started/openSUSE/index.rst", "Getting Started/openSUSE/openSUSE Leap Root on ZFS.rst", "Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.rst", "Getting Started/zfs_root_maintenance.rst", "License.rst", "Performance and Tuning/Async Write.rst", "Performance and Tuning/Hardware.rst", "Performance and Tuning/Module Parameters.rst", "Performance and Tuning/Workload Tuning.rst", "Performance and Tuning/ZFS Transaction Delay.rst", "Performance and Tuning/ZIO Scheduler.rst", "Performance and Tuning/index.rst", "Project and Community/Admin Documentation.rst", "Project and Community/FAQ.rst", "Project and Community/FAQ hole birth.rst", "Project and Community/Mailing Lists.rst", "Project and Community/Signing Keys.rst", "Project and Community/index.rst", "_TableOfContents.rst", "index.rst", "man/index.rst", "man/master/1/arcstat.1.rst", "man/master/1/cstyle.1.rst", "man/master/1/index.rst", "man/master/1/raidz_test.1.rst", "man/master/1/test-runner.1.rst", "man/master/1/zhack.1.rst", "man/master/1/ztest.1.rst", "man/master/1/zvol_wait.1.rst", "man/master/4/index.rst", "man/master/4/spl.4.rst", "man/master/4/zfs.4.rst", "man/master/5/index.rst", "man/master/5/vdev_id.conf.5.rst", "man/master/7/dracut.zfs.7.rst", "man/master/7/index.rst", "man/master/7/vdevprops.7.rst", "man/master/7/zfsconcepts.7.rst", "man/master/7/zfsprops.7.rst", "man/master/7/zpool-features.7.rst", "man/master/7/zpoolconcepts.7.rst", "man/master/7/zpoolprops.7.rst", "man/master/8/fsck.zfs.8.rst", "man/master/8/index.rst", "man/master/8/mount.zfs.8.rst", "man/master/8/vdev_id.8.rst", "man/master/8/zdb.8.rst", "man/master/8/zed.8.rst", "man/master/8/zfs-allow.8.rst", "man/master/8/zfs-bookmark.8.rst", "man/master/8/zfs-change-key.8.rst", "man/master/8/zfs-clone.8.rst", "man/master/8/zfs-create.8.rst", "man/master/8/zfs-destroy.8.rst", "man/master/8/zfs-diff.8.rst", "man/master/8/zfs-get.8.rst", "man/master/8/zfs-groupspace.8.rst", "man/master/8/zfs-hold.8.rst", "man/master/8/zfs-inherit.8.rst", "man/master/8/zfs-jail.8.rst", "man/master/8/zfs-list.8.rst", "man/master/8/zfs-load-key.8.rst", "man/master/8/zfs-mount-generator.8.rst", "man/master/8/zfs-mount.8.rst", "man/master/8/zfs-program.8.rst", "man/master/8/zfs-project.8.rst", "man/master/8/zfs-projectspace.8.rst", "man/master/8/zfs-promote.8.rst", "man/master/8/zfs-receive.8.rst", "man/master/8/zfs-recv.8.rst", "man/master/8/zfs-redact.8.rst", "man/master/8/zfs-release.8.rst", "man/master/8/zfs-rename.8.rst", "man/master/8/zfs-rollback.8.rst", "man/master/8/zfs-send.8.rst", "man/master/8/zfs-set.8.rst", "man/master/8/zfs-share.8.rst", "man/master/8/zfs-snapshot.8.rst", "man/master/8/zfs-unallow.8.rst", "man/master/8/zfs-unjail.8.rst", "man/master/8/zfs-unload-key.8.rst", "man/master/8/zfs-unmount.8.rst", "man/master/8/zfs-unzone.8.rst", "man/master/8/zfs-upgrade.8.rst", "man/master/8/zfs-userspace.8.rst", "man/master/8/zfs-wait.8.rst", "man/master/8/zfs-zone.8.rst", "man/master/8/zfs.8.rst", "man/master/8/zfs_ids_to_path.8.rst", "man/master/8/zfs_prepare_disk.8.rst", "man/master/8/zgenhostid.8.rst", "man/master/8/zinject.8.rst", "man/master/8/zpool-add.8.rst", "man/master/8/zpool-attach.8.rst", "man/master/8/zpool-checkpoint.8.rst", "man/master/8/zpool-clear.8.rst", "man/master/8/zpool-create.8.rst", "man/master/8/zpool-destroy.8.rst", "man/master/8/zpool-detach.8.rst", "man/master/8/zpool-events.8.rst", 
"man/master/8/zpool-export.8.rst", "man/master/8/zpool-get.8.rst", "man/master/8/zpool-history.8.rst", "man/master/8/zpool-import.8.rst", "man/master/8/zpool-initialize.8.rst", "man/master/8/zpool-iostat.8.rst", "man/master/8/zpool-labelclear.8.rst", "man/master/8/zpool-list.8.rst", "man/master/8/zpool-offline.8.rst", "man/master/8/zpool-online.8.rst", "man/master/8/zpool-reguid.8.rst", "man/master/8/zpool-remove.8.rst", "man/master/8/zpool-reopen.8.rst", "man/master/8/zpool-replace.8.rst", "man/master/8/zpool-resilver.8.rst", "man/master/8/zpool-scrub.8.rst", "man/master/8/zpool-set.8.rst", "man/master/8/zpool-split.8.rst", "man/master/8/zpool-status.8.rst", "man/master/8/zpool-sync.8.rst", "man/master/8/zpool-trim.8.rst", "man/master/8/zpool-upgrade.8.rst", "man/master/8/zpool-wait.8.rst", "man/master/8/zpool.8.rst", "man/master/8/zpool_influxdb.8.rst", "man/master/8/zstream.8.rst", "man/master/8/zstreamdump.8.rst", "man/master/index.rst", "man/v0.6/1/cstyle.1.rst", "man/v0.6/1/index.rst", "man/v0.6/1/zhack.1.rst", "man/v0.6/1/zpios.1.rst", "man/v0.6/1/ztest.1.rst", "man/v0.6/5/index.rst", "man/v0.6/5/vdev_id.conf.5.rst", "man/v0.6/5/zfs-events.5.rst", "man/v0.6/5/zfs-module-parameters.5.rst", "man/v0.6/5/zpool-features.5.rst", "man/v0.6/8/fsck.zfs.8.rst", "man/v0.6/8/index.rst", "man/v0.6/8/mount.zfs.8.rst", "man/v0.6/8/vdev_id.8.rst", "man/v0.6/8/zdb.8.rst", "man/v0.6/8/zed.8.rst", "man/v0.6/8/zfs.8.rst", "man/v0.6/8/zinject.8.rst", "man/v0.6/8/zpool.8.rst", "man/v0.6/8/zstreamdump.8.rst", "man/v0.6/index.rst", "man/v0.7/1/cstyle.1.rst", "man/v0.7/1/index.rst", "man/v0.7/1/raidz_test.1.rst", "man/v0.7/1/zhack.1.rst", "man/v0.7/1/zpios.1.rst", "man/v0.7/1/ztest.1.rst", "man/v0.7/5/index.rst", "man/v0.7/5/vdev_id.conf.5.rst", "man/v0.7/5/zfs-events.5.rst", "man/v0.7/5/zfs-module-parameters.5.rst", "man/v0.7/5/zpool-features.5.rst", "man/v0.7/8/fsck.zfs.8.rst", "man/v0.7/8/index.rst", "man/v0.7/8/mount.zfs.8.rst", "man/v0.7/8/vdev_id.8.rst", "man/v0.7/8/zdb.8.rst", "man/v0.7/8/zed.8.rst", "man/v0.7/8/zfs.8.rst", "man/v0.7/8/zgenhostid.8.rst", "man/v0.7/8/zinject.8.rst", "man/v0.7/8/zpool.8.rst", "man/v0.7/8/zstreamdump.8.rst", "man/v0.7/index.rst", "man/v0.8/1/cstyle.1.rst", "man/v0.8/1/index.rst", "man/v0.8/1/raidz_test.1.rst", "man/v0.8/1/zhack.1.rst", "man/v0.8/1/ztest.1.rst", "man/v0.8/1/zvol_wait.1.rst", "man/v0.8/5/index.rst", "man/v0.8/5/spl-module-parameters.5.rst", "man/v0.8/5/vdev_id.conf.5.rst", "man/v0.8/5/zfs-events.5.rst", "man/v0.8/5/zfs-module-parameters.5.rst", "man/v0.8/5/zpool-features.5.rst", "man/v0.8/8/fsck.zfs.8.rst", "man/v0.8/8/index.rst", "man/v0.8/8/mount.zfs.8.rst", "man/v0.8/8/vdev_id.8.rst", "man/v0.8/8/zdb.8.rst", "man/v0.8/8/zed.8.rst", "man/v0.8/8/zfs-mount-generator.8.rst", "man/v0.8/8/zfs-program.8.rst", "man/v0.8/8/zfs.8.rst", "man/v0.8/8/zfsprops.8.rst", "man/v0.8/8/zgenhostid.8.rst", "man/v0.8/8/zinject.8.rst", "man/v0.8/8/zpool.8.rst", "man/v0.8/8/zstreamdump.8.rst", "man/v0.8/index.rst", "man/v2.0/1/arcstat.1.rst", "man/v2.0/1/cstyle.1.rst", "man/v2.0/1/index.rst", "man/v2.0/1/raidz_test.1.rst", "man/v2.0/1/zhack.1.rst", "man/v2.0/1/ztest.1.rst", "man/v2.0/1/zvol_wait.1.rst", "man/v2.0/5/index.rst", "man/v2.0/5/spl-module-parameters.5.rst", "man/v2.0/5/vdev_id.conf.5.rst", "man/v2.0/5/zfs-events.5.rst", "man/v2.0/5/zfs-module-parameters.5.rst", "man/v2.0/5/zpool-features.5.rst", "man/v2.0/8/fsck.zfs.8.rst", "man/v2.0/8/index.rst", "man/v2.0/8/mount.zfs.8.rst", "man/v2.0/8/vdev_id.8.rst", "man/v2.0/8/zdb.8.rst", "man/v2.0/8/zed.8.rst", 
"man/v2.0/8/zfs-allow.8.rst", "man/v2.0/8/zfs-bookmark.8.rst", "man/v2.0/8/zfs-change-key.8.rst", "man/v2.0/8/zfs-clone.8.rst", "man/v2.0/8/zfs-create.8.rst", "man/v2.0/8/zfs-destroy.8.rst", "man/v2.0/8/zfs-diff.8.rst", "man/v2.0/8/zfs-get.8.rst", "man/v2.0/8/zfs-groupspace.8.rst", "man/v2.0/8/zfs-hold.8.rst", "man/v2.0/8/zfs-inherit.8.rst", "man/v2.0/8/zfs-jail.8.rst", "man/v2.0/8/zfs-list.8.rst", "man/v2.0/8/zfs-load-key.8.rst", "man/v2.0/8/zfs-mount-generator.8.rst", "man/v2.0/8/zfs-mount.8.rst", "man/v2.0/8/zfs-program.8.rst", "man/v2.0/8/zfs-project.8.rst", "man/v2.0/8/zfs-projectspace.8.rst", "man/v2.0/8/zfs-promote.8.rst", "man/v2.0/8/zfs-receive.8.rst", "man/v2.0/8/zfs-recv.8.rst", "man/v2.0/8/zfs-redact.8.rst", "man/v2.0/8/zfs-release.8.rst", "man/v2.0/8/zfs-rename.8.rst", "man/v2.0/8/zfs-rollback.8.rst", "man/v2.0/8/zfs-send.8.rst", "man/v2.0/8/zfs-set.8.rst", "man/v2.0/8/zfs-share.8.rst", "man/v2.0/8/zfs-snapshot.8.rst", "man/v2.0/8/zfs-unallow.8.rst", "man/v2.0/8/zfs-unjail.8.rst", "man/v2.0/8/zfs-unload-key.8.rst", "man/v2.0/8/zfs-unmount.8.rst", "man/v2.0/8/zfs-upgrade.8.rst", "man/v2.0/8/zfs-userspace.8.rst", "man/v2.0/8/zfs-wait.8.rst", "man/v2.0/8/zfs.8.rst", "man/v2.0/8/zfs_ids_to_path.8.rst", "man/v2.0/8/zfsconcepts.8.rst", "man/v2.0/8/zfsprops.8.rst", "man/v2.0/8/zgenhostid.8.rst", "man/v2.0/8/zinject.8.rst", "man/v2.0/8/zpool-add.8.rst", "man/v2.0/8/zpool-attach.8.rst", "man/v2.0/8/zpool-checkpoint.8.rst", "man/v2.0/8/zpool-clear.8.rst", "man/v2.0/8/zpool-create.8.rst", "man/v2.0/8/zpool-destroy.8.rst", "man/v2.0/8/zpool-detach.8.rst", "man/v2.0/8/zpool-events.8.rst", "man/v2.0/8/zpool-export.8.rst", "man/v2.0/8/zpool-get.8.rst", "man/v2.0/8/zpool-history.8.rst", "man/v2.0/8/zpool-import.8.rst", "man/v2.0/8/zpool-initialize.8.rst", "man/v2.0/8/zpool-iostat.8.rst", "man/v2.0/8/zpool-labelclear.8.rst", "man/v2.0/8/zpool-list.8.rst", "man/v2.0/8/zpool-offline.8.rst", "man/v2.0/8/zpool-online.8.rst", "man/v2.0/8/zpool-reguid.8.rst", "man/v2.0/8/zpool-remove.8.rst", "man/v2.0/8/zpool-reopen.8.rst", "man/v2.0/8/zpool-replace.8.rst", "man/v2.0/8/zpool-resilver.8.rst", "man/v2.0/8/zpool-scrub.8.rst", "man/v2.0/8/zpool-set.8.rst", "man/v2.0/8/zpool-split.8.rst", "man/v2.0/8/zpool-status.8.rst", "man/v2.0/8/zpool-sync.8.rst", "man/v2.0/8/zpool-trim.8.rst", "man/v2.0/8/zpool-upgrade.8.rst", "man/v2.0/8/zpool-wait.8.rst", "man/v2.0/8/zpool.8.rst", "man/v2.0/8/zpoolconcepts.8.rst", "man/v2.0/8/zpoolprops.8.rst", "man/v2.0/8/zstream.8.rst", "man/v2.0/8/zstreamdump.8.rst", "man/v2.0/index.rst", "man/v2.1/1/arcstat.1.rst", "man/v2.1/1/cstyle.1.rst", "man/v2.1/1/index.rst", "man/v2.1/1/raidz_test.1.rst", "man/v2.1/1/zhack.1.rst", "man/v2.1/1/ztest.1.rst", "man/v2.1/1/zvol_wait.1.rst", "man/v2.1/4/index.rst", "man/v2.1/4/spl.4.rst", "man/v2.1/4/zfs.4.rst", "man/v2.1/5/index.rst", "man/v2.1/5/vdev_id.conf.5.rst", "man/v2.1/7/dracut.zfs.7.rst", "man/v2.1/7/index.rst", "man/v2.1/7/zfsconcepts.7.rst", "man/v2.1/7/zfsprops.7.rst", "man/v2.1/7/zpool-features.7.rst", "man/v2.1/7/zpoolconcepts.7.rst", "man/v2.1/7/zpoolprops.7.rst", "man/v2.1/8/fsck.zfs.8.rst", "man/v2.1/8/index.rst", "man/v2.1/8/mount.zfs.8.rst", "man/v2.1/8/vdev_id.8.rst", "man/v2.1/8/zdb.8.rst", "man/v2.1/8/zed.8.rst", "man/v2.1/8/zfs-allow.8.rst", "man/v2.1/8/zfs-bookmark.8.rst", "man/v2.1/8/zfs-change-key.8.rst", "man/v2.1/8/zfs-clone.8.rst", "man/v2.1/8/zfs-create.8.rst", "man/v2.1/8/zfs-destroy.8.rst", "man/v2.1/8/zfs-diff.8.rst", "man/v2.1/8/zfs-get.8.rst", "man/v2.1/8/zfs-groupspace.8.rst", "man/v2.1/8/zfs-hold.8.rst", 
"man/v2.1/8/zfs-inherit.8.rst", "man/v2.1/8/zfs-jail.8.rst", "man/v2.1/8/zfs-list.8.rst", "man/v2.1/8/zfs-load-key.8.rst", "man/v2.1/8/zfs-mount-generator.8.rst", "man/v2.1/8/zfs-mount.8.rst", "man/v2.1/8/zfs-program.8.rst", "man/v2.1/8/zfs-project.8.rst", "man/v2.1/8/zfs-projectspace.8.rst", "man/v2.1/8/zfs-promote.8.rst", "man/v2.1/8/zfs-receive.8.rst", "man/v2.1/8/zfs-recv.8.rst", "man/v2.1/8/zfs-redact.8.rst", "man/v2.1/8/zfs-release.8.rst", "man/v2.1/8/zfs-rename.8.rst", "man/v2.1/8/zfs-rollback.8.rst", "man/v2.1/8/zfs-send.8.rst", "man/v2.1/8/zfs-set.8.rst", "man/v2.1/8/zfs-share.8.rst", "man/v2.1/8/zfs-snapshot.8.rst", "man/v2.1/8/zfs-unallow.8.rst", "man/v2.1/8/zfs-unjail.8.rst", "man/v2.1/8/zfs-unload-key.8.rst", "man/v2.1/8/zfs-unmount.8.rst", "man/v2.1/8/zfs-upgrade.8.rst", "man/v2.1/8/zfs-userspace.8.rst", "man/v2.1/8/zfs-wait.8.rst", "man/v2.1/8/zfs.8.rst", "man/v2.1/8/zfs_ids_to_path.8.rst", "man/v2.1/8/zgenhostid.8.rst", "man/v2.1/8/zinject.8.rst", "man/v2.1/8/zpool-add.8.rst", "man/v2.1/8/zpool-attach.8.rst", "man/v2.1/8/zpool-checkpoint.8.rst", "man/v2.1/8/zpool-clear.8.rst", "man/v2.1/8/zpool-create.8.rst", "man/v2.1/8/zpool-destroy.8.rst", "man/v2.1/8/zpool-detach.8.rst", "man/v2.1/8/zpool-events.8.rst", "man/v2.1/8/zpool-export.8.rst", "man/v2.1/8/zpool-get.8.rst", "man/v2.1/8/zpool-history.8.rst", "man/v2.1/8/zpool-import.8.rst", "man/v2.1/8/zpool-initialize.8.rst", "man/v2.1/8/zpool-iostat.8.rst", "man/v2.1/8/zpool-labelclear.8.rst", "man/v2.1/8/zpool-list.8.rst", "man/v2.1/8/zpool-offline.8.rst", "man/v2.1/8/zpool-online.8.rst", "man/v2.1/8/zpool-reguid.8.rst", "man/v2.1/8/zpool-remove.8.rst", "man/v2.1/8/zpool-reopen.8.rst", "man/v2.1/8/zpool-replace.8.rst", "man/v2.1/8/zpool-resilver.8.rst", "man/v2.1/8/zpool-scrub.8.rst", "man/v2.1/8/zpool-set.8.rst", "man/v2.1/8/zpool-split.8.rst", "man/v2.1/8/zpool-status.8.rst", "man/v2.1/8/zpool-sync.8.rst", "man/v2.1/8/zpool-trim.8.rst", "man/v2.1/8/zpool-upgrade.8.rst", "man/v2.1/8/zpool-wait.8.rst", "man/v2.1/8/zpool.8.rst", "man/v2.1/8/zpool_influxdb.8.rst", "man/v2.1/8/zstream.8.rst", "man/v2.1/8/zstreamdump.8.rst", "man/v2.1/index.rst", "man/v2.2/1/arcstat.1.rst", "man/v2.2/1/cstyle.1.rst", "man/v2.2/1/index.rst", "man/v2.2/1/raidz_test.1.rst", "man/v2.2/1/test-runner.1.rst", "man/v2.2/1/zhack.1.rst", "man/v2.2/1/ztest.1.rst", "man/v2.2/1/zvol_wait.1.rst", "man/v2.2/4/index.rst", "man/v2.2/4/spl.4.rst", "man/v2.2/4/zfs.4.rst", "man/v2.2/5/index.rst", "man/v2.2/5/vdev_id.conf.5.rst", "man/v2.2/7/dracut.zfs.7.rst", "man/v2.2/7/index.rst", "man/v2.2/7/vdevprops.7.rst", "man/v2.2/7/zfsconcepts.7.rst", "man/v2.2/7/zfsprops.7.rst", "man/v2.2/7/zpool-features.7.rst", "man/v2.2/7/zpoolconcepts.7.rst", "man/v2.2/7/zpoolprops.7.rst", "man/v2.2/8/fsck.zfs.8.rst", "man/v2.2/8/index.rst", "man/v2.2/8/mount.zfs.8.rst", "man/v2.2/8/vdev_id.8.rst", "man/v2.2/8/zdb.8.rst", "man/v2.2/8/zed.8.rst", "man/v2.2/8/zfs-allow.8.rst", "man/v2.2/8/zfs-bookmark.8.rst", "man/v2.2/8/zfs-change-key.8.rst", "man/v2.2/8/zfs-clone.8.rst", "man/v2.2/8/zfs-create.8.rst", "man/v2.2/8/zfs-destroy.8.rst", "man/v2.2/8/zfs-diff.8.rst", "man/v2.2/8/zfs-get.8.rst", "man/v2.2/8/zfs-groupspace.8.rst", "man/v2.2/8/zfs-hold.8.rst", "man/v2.2/8/zfs-inherit.8.rst", "man/v2.2/8/zfs-jail.8.rst", "man/v2.2/8/zfs-list.8.rst", "man/v2.2/8/zfs-load-key.8.rst", "man/v2.2/8/zfs-mount-generator.8.rst", "man/v2.2/8/zfs-mount.8.rst", "man/v2.2/8/zfs-program.8.rst", "man/v2.2/8/zfs-project.8.rst", "man/v2.2/8/zfs-projectspace.8.rst", "man/v2.2/8/zfs-promote.8.rst", 
"man/v2.2/8/zfs-receive.8.rst", "man/v2.2/8/zfs-recv.8.rst", "man/v2.2/8/zfs-redact.8.rst", "man/v2.2/8/zfs-release.8.rst", "man/v2.2/8/zfs-rename.8.rst", "man/v2.2/8/zfs-rollback.8.rst", "man/v2.2/8/zfs-send.8.rst", "man/v2.2/8/zfs-set.8.rst", "man/v2.2/8/zfs-share.8.rst", "man/v2.2/8/zfs-snapshot.8.rst", "man/v2.2/8/zfs-unallow.8.rst", "man/v2.2/8/zfs-unjail.8.rst", "man/v2.2/8/zfs-unload-key.8.rst", "man/v2.2/8/zfs-unmount.8.rst", "man/v2.2/8/zfs-unzone.8.rst", "man/v2.2/8/zfs-upgrade.8.rst", "man/v2.2/8/zfs-userspace.8.rst", "man/v2.2/8/zfs-wait.8.rst", "man/v2.2/8/zfs-zone.8.rst", "man/v2.2/8/zfs.8.rst", "man/v2.2/8/zfs_ids_to_path.8.rst", "man/v2.2/8/zfs_prepare_disk.8.rst", "man/v2.2/8/zgenhostid.8.rst", "man/v2.2/8/zinject.8.rst", "man/v2.2/8/zpool-add.8.rst", "man/v2.2/8/zpool-attach.8.rst", "man/v2.2/8/zpool-checkpoint.8.rst", "man/v2.2/8/zpool-clear.8.rst", "man/v2.2/8/zpool-create.8.rst", "man/v2.2/8/zpool-destroy.8.rst", "man/v2.2/8/zpool-detach.8.rst", "man/v2.2/8/zpool-events.8.rst", "man/v2.2/8/zpool-export.8.rst", "man/v2.2/8/zpool-get.8.rst", "man/v2.2/8/zpool-history.8.rst", "man/v2.2/8/zpool-import.8.rst", "man/v2.2/8/zpool-initialize.8.rst", "man/v2.2/8/zpool-iostat.8.rst", "man/v2.2/8/zpool-labelclear.8.rst", "man/v2.2/8/zpool-list.8.rst", "man/v2.2/8/zpool-offline.8.rst", "man/v2.2/8/zpool-online.8.rst", "man/v2.2/8/zpool-reguid.8.rst", "man/v2.2/8/zpool-remove.8.rst", "man/v2.2/8/zpool-reopen.8.rst", "man/v2.2/8/zpool-replace.8.rst", "man/v2.2/8/zpool-resilver.8.rst", "man/v2.2/8/zpool-scrub.8.rst", "man/v2.2/8/zpool-set.8.rst", "man/v2.2/8/zpool-split.8.rst", "man/v2.2/8/zpool-status.8.rst", "man/v2.2/8/zpool-sync.8.rst", "man/v2.2/8/zpool-trim.8.rst", "man/v2.2/8/zpool-upgrade.8.rst", "man/v2.2/8/zpool-wait.8.rst", "man/v2.2/8/zpool.8.rst", "man/v2.2/8/zpool_influxdb.8.rst", "man/v2.2/8/zstream.8.rst", "man/v2.2/8/zstreamdump.8.rst", "man/v2.2/index.rst", "msg/ZFS-8000-14/index.rst", "msg/ZFS-8000-2Q/index.rst", "msg/ZFS-8000-3C/index.rst", "msg/ZFS-8000-4J/index.rst", "msg/ZFS-8000-5E/index.rst", "msg/ZFS-8000-6X/index.rst", "msg/ZFS-8000-72/index.rst", "msg/ZFS-8000-8A/index.rst", "msg/ZFS-8000-9P/index.rst", "msg/ZFS-8000-A5/index.rst", "msg/ZFS-8000-ER/index.rst", "msg/ZFS-8000-EY/index.rst", "msg/ZFS-8000-HC/index.rst", "msg/ZFS-8000-JQ/index.rst", "msg/ZFS-8000-K4/index.rst", "msg/index.rst"], "titles": ["", "Checksums and Their Use in ZFS", "Feature Flags", "RAIDZ", "Troubleshooting", "dRAID", "Basic Concepts", "Buildbot Options", "Building ZFS", "Custom Packages", "Git and GitHub for beginners (ZoL edition)", "OpenZFS Exceptions", "OpenZFS Patches", "Developer Resources", "Alpine Linux Root on ZFS", "Alpine Linux", "Arch Linux Root on ZFS", "Arch Linux", "Debian Bookworm Root on ZFS", "Debian Bullseye Root on ZFS", "Debian Buster Root on ZFS", "Debian GNU Linux initrd documentation", "Debian Stretch Root on ZFS", "Debian", "Fedora", "Fedora Root on ZFS", "Fedora", "FreeBSD", "NixOS Root on ZFS", "NixOS", "RHEL and CentOS", "Rocky Linux Root on ZFS", "RHEL-based distro", "Ubuntu 18.04 Root on ZFS", "Ubuntu 20.04 Root on ZFS", "Ubuntu 20.04 Root on ZFS for Raspberry Pi", "Ubuntu 22.04 Root on ZFS", "Ubuntu 22.04 Root on ZFS for Raspberry Pi", "Ubuntu", "Getting Started", "openSUSE", "openSUSE Leap Root on ZFS", "openSUSE Tumbleweed Root on ZFS", "Root on ZFS maintenance", "License", "Async Writes", "Hardware", "Module Parameters", "Workload Tuning", "ZFS Transaction Delay", "ZFS I/O (ZIO) Scheduler", "Performance and Tuning", "Admin Documentation", "FAQ", 
"FAQ Hole birth", "Mailing Lists", "Signing Keys", "Project and Community", "<no title>", "OpenZFS Documentation", "Man Pages", "arcstat.1", "cstyle.1", "User Commands (1)", "raidz_test.1", "test-runner.1", "zhack.1", "ztest.1", "zvol_wait.1", "Devices and Special Files (4)", "spl.4", "zfs.4", "File Formats and Conventions (5)", "vdev_id.conf.5", "dracut.zfs.7", "Miscellaneous (7)", "vdevprops.7", "zfsconcepts.7", "zfsprops.7", "zpool-features.7", "zpoolconcepts.7", "zpoolprops.7", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs-allow.8", "zfs-bookmark.8", "zfs-change-key.8", "zfs-clone.8", "zfs-create.8", "zfs-destroy.8", "zfs-diff.8", "zfs-get.8", "zfs-groupspace.8", "zfs-hold.8", "zfs-inherit.8", "zfs-jail.8", "zfs-list.8", "zfs-load-key.8", "zfs-mount-generator.8", "zfs-mount.8", "zfs-program.8", "zfs-project.8", "zfs-projectspace.8", "zfs-promote.8", "zfs-receive.8", "zfs-recv.8", "zfs-redact.8", "zfs-release.8", "zfs-rename.8", "zfs-rollback.8", "zfs-send.8", "zfs-set.8", "zfs-share.8", "zfs-snapshot.8", "zfs-unallow.8", "zfs-unjail.8", "zfs-unload-key.8", "zfs-unmount.8", "zfs-unzone.8", "zfs-upgrade.8", "zfs-userspace.8", "zfs-wait.8", "zfs-zone.8", "zfs.8", "zfs_ids_to_path.8", "zfs_prepare_disk.8", "zgenhostid.8", "zinject.8", "zpool-add.8", "zpool-attach.8", "zpool-checkpoint.8", "zpool-clear.8", "zpool-create.8", "zpool-destroy.8", "zpool-detach.8", "zpool-events.8", "zpool-export.8", "zpool-get.8", "zpool-history.8", "zpool-import.8", "zpool-initialize.8", "zpool-iostat.8", "zpool-labelclear.8", "zpool-list.8", "zpool-offline.8", "zpool-online.8", "zpool-reguid.8", "zpool-remove.8", "zpool-reopen.8", "zpool-replace.8", "zpool-resilver.8", "zpool-scrub.8", "zpool-set.8", "zpool-split.8", "zpool-status.8", "zpool-sync.8", "zpool-trim.8", "zpool-upgrade.8", "zpool-wait.8", "zpool.8", "zpool_influxdb.8", "zstream.8", "zstreamdump.8", "master", "cstyle.1", "User Commands (1)", "zhack.1", "zpios.1", "ztest.1", "File Formats and Conventions (5)", "vdev_id.conf.5", "zfs-events.5", "zfs-module-parameters.5", "zpool-features.5", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs.8", "zinject.8", "zpool.8", "zstreamdump.8", "v0.6", "cstyle.1", "User Commands (1)", "raidz_test.1", "zhack.1", "zpios.1", "ztest.1", "File Formats and Conventions (5)", "vdev_id.conf.5", "zfs-events.5", "zfs-module-parameters.5", "zpool-features.5", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs.8", "zgenhostid.8", "zinject.8", "zpool.8", "zstreamdump.8", "v0.7", "cstyle.1", "User Commands (1)", "raidz_test.1", "zhack.1", "ztest.1", "zvol_wait.1", "File Formats and Conventions (5)", "spl-module-parameters.5", "vdev_id.conf.5", "zfs-events.5", "zfs-module-parameters.5", "zpool-features.5", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs-mount-generator.8", "zfs-program.8", "zfs.8", "zfsprops.8", "zgenhostid.8", "zinject.8", "zpool.8", "zstreamdump.8", "v0.8", "arcstat.1", "cstyle.1", "User Commands (1)", "raidz_test.1", "zhack.1", "ztest.1", "zvol_wait.1", "File Formats and Conventions (5)", "spl-module-parameters.5", "vdev_id.conf.5", "zfs-events.5", "zfs-module-parameters.5", "zpool-features.5", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs-allow.8", "zfs-bookmark.8", "zfs-change-key.8", "zfs-clone.8", "zfs-create.8", "zfs-destroy.8", 
"zfs-diff.8", "zfs-get.8", "zfs-groupspace.8", "zfs-hold.8", "zfs-inherit.8", "zfs-jail.8", "zfs-list.8", "zfs-load-key.8", "zfs-mount-generator.8", "zfs-mount.8", "zfs-program.8", "zfs-project.8", "zfs-projectspace.8", "zfs-promote.8", "zfs-receive.8", "zfs-recv.8", "zfs-redact.8", "zfs-release.8", "zfs-rename.8", "zfs-rollback.8", "zfs-send.8", "zfs-set.8", "zfs-share.8", "zfs-snapshot.8", "zfs-unallow.8", "zfs-unjail.8", "zfs-unload-key.8", "zfs-unmount.8", "zfs-upgrade.8", "zfs-userspace.8", "zfs-wait.8", "zfs.8", "zfs_ids_to_path.8", "zfsconcepts.8", "zfsprops.8", "zgenhostid.8", "zinject.8", "zpool-add.8", "zpool-attach.8", "zpool-checkpoint.8", "zpool-clear.8", "zpool-create.8", "zpool-destroy.8", "zpool-detach.8", "zpool-events.8", "zpool-export.8", "zpool-get.8", "zpool-history.8", "zpool-import.8", "zpool-initialize.8", "zpool-iostat.8", "zpool-labelclear.8", "zpool-list.8", "zpool-offline.8", "zpool-online.8", "zpool-reguid.8", "zpool-remove.8", "zpool-reopen.8", "zpool-replace.8", "zpool-resilver.8", "zpool-scrub.8", "zpool-set.8", "zpool-split.8", "zpool-status.8", "zpool-sync.8", "zpool-trim.8", "zpool-upgrade.8", "zpool-wait.8", "zpool.8", "zpoolconcepts.8", "zpoolprops.8", "zstream.8", "zstreamdump.8", "v2.0", "arcstat.1", "cstyle.1", "User Commands (1)", "raidz_test.1", "zhack.1", "ztest.1", "zvol_wait.1", "Devices and Special Files (4)", "spl.4", "zfs.4", "File Formats and Conventions (5)", "vdev_id.conf.5", "dracut.zfs.7", "Miscellaneous (7)", "zfsconcepts.7", "zfsprops.7", "zpool-features.7", "zpoolconcepts.7", "zpoolprops.7", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs-allow.8", "zfs-bookmark.8", "zfs-change-key.8", "zfs-clone.8", "zfs-create.8", "zfs-destroy.8", "zfs-diff.8", "zfs-get.8", "zfs-groupspace.8", "zfs-hold.8", "zfs-inherit.8", "zfs-jail.8", "zfs-list.8", "zfs-load-key.8", "zfs-mount-generator.8", "zfs-mount.8", "zfs-program.8", "zfs-project.8", "zfs-projectspace.8", "zfs-promote.8", "zfs-receive.8", "zfs-recv.8", "zfs-redact.8", "zfs-release.8", "zfs-rename.8", "zfs-rollback.8", "zfs-send.8", "zfs-set.8", "zfs-share.8", "zfs-snapshot.8", "zfs-unallow.8", "zfs-unjail.8", "zfs-unload-key.8", "zfs-unmount.8", "zfs-upgrade.8", "zfs-userspace.8", "zfs-wait.8", "zfs.8", "zfs_ids_to_path.8", "zgenhostid.8", "zinject.8", "zpool-add.8", "zpool-attach.8", "zpool-checkpoint.8", "zpool-clear.8", "zpool-create.8", "zpool-destroy.8", "zpool-detach.8", "zpool-events.8", "zpool-export.8", "zpool-get.8", "zpool-history.8", "zpool-import.8", "zpool-initialize.8", "zpool-iostat.8", "zpool-labelclear.8", "zpool-list.8", "zpool-offline.8", "zpool-online.8", "zpool-reguid.8", "zpool-remove.8", "zpool-reopen.8", "zpool-replace.8", "zpool-resilver.8", "zpool-scrub.8", "zpool-set.8", "zpool-split.8", "zpool-status.8", "zpool-sync.8", "zpool-trim.8", "zpool-upgrade.8", "zpool-wait.8", "zpool.8", "zpool_influxdb.8", "zstream.8", "zstreamdump.8", "v2.1", "arcstat.1", "cstyle.1", "User Commands (1)", "raidz_test.1", "test-runner.1", "zhack.1", "ztest.1", "zvol_wait.1", "Devices and Special Files (4)", "spl.4", "zfs.4", "File Formats and Conventions (5)", "vdev_id.conf.5", "dracut.zfs.7", "Miscellaneous (7)", "vdevprops.7", "zfsconcepts.7", "zfsprops.7", "zpool-features.7", "zpoolconcepts.7", "zpoolprops.7", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs-allow.8", "zfs-bookmark.8", "zfs-change-key.8", "zfs-clone.8", "zfs-create.8", "zfs-destroy.8", "zfs-diff.8", 
"zfs-get.8", "zfs-groupspace.8", "zfs-hold.8", "zfs-inherit.8", "zfs-jail.8", "zfs-list.8", "zfs-load-key.8", "zfs-mount-generator.8", "zfs-mount.8", "zfs-program.8", "zfs-project.8", "zfs-projectspace.8", "zfs-promote.8", "zfs-receive.8", "zfs-recv.8", "zfs-redact.8", "zfs-release.8", "zfs-rename.8", "zfs-rollback.8", "zfs-send.8", "zfs-set.8", "zfs-share.8", "zfs-snapshot.8", "zfs-unallow.8", "zfs-unjail.8", "zfs-unload-key.8", "zfs-unmount.8", "zfs-unzone.8", "zfs-upgrade.8", "zfs-userspace.8", "zfs-wait.8", "zfs-zone.8", "zfs.8", "zfs_ids_to_path.8", "zfs_prepare_disk.8", "zgenhostid.8", "zinject.8", "zpool-add.8", "zpool-attach.8", "zpool-checkpoint.8", "zpool-clear.8", "zpool-create.8", "zpool-destroy.8", "zpool-detach.8", "zpool-events.8", "zpool-export.8", "zpool-get.8", "zpool-history.8", "zpool-import.8", "zpool-initialize.8", "zpool-iostat.8", "zpool-labelclear.8", "zpool-list.8", "zpool-offline.8", "zpool-online.8", "zpool-reguid.8", "zpool-remove.8", "zpool-reopen.8", "zpool-replace.8", "zpool-resilver.8", "zpool-scrub.8", "zpool-set.8", "zpool-split.8", "zpool-status.8", "zpool-sync.8", "zpool-trim.8", "zpool-upgrade.8", "zpool-wait.8", "zpool.8", "zpool_influxdb.8", "zstream.8", "zstreamdump.8", "v2.2", "Message ID:\u00a0ZFS-8000-14", "Message ID:\u00a0ZFS-8000-2Q", "Message ID:\u00a0ZFS-8000-3C", "Message ID: ZFS-8000-4J", "Message ID: ZFS-8000-5E", "Message ID: ZFS-8000-6X", "Message ID:\u00a0ZFS-8000-72", "Message ID:\u00a0ZFS-8000-8A", "Message ID:\u00a0ZFS-8000-9P", "Message ID:\u00a0ZFS-8000-A5", "Message ID:\u00a0ZFS-8000-ER", "Message ID:\u00a0ZFS-8000-EY", "Message ID: ZFS-8000-HC", "Message ID:\u00a0ZFS-8000-JQ", "Message ID:\u00a0ZFS-8000-K4", "ZFS Messages"], "terms": {"end": [1, 14, 16, 25, 28, 31, 46, 47, 53, 54, 65, 71, 79, 80, 81, 86, 102, 104, 131, 133, 139, 145, 176, 185, 186, 187, 198, 208, 209, 210, 221, 222, 231, 235, 236, 237, 249, 250, 256, 274, 300, 314, 333, 334, 336, 347, 354, 355, 356, 361, 377, 379, 403, 411, 417, 444, 450, 458, 459, 460, 465, 481, 483, 510, 518, 524], "ar": [1, 2, 4, 5, 7, 8, 9, 10, 11, 12, 14, 16, 18, 19, 20, 21, 22, 23, 25, 26, 27, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 53, 56, 61, 62, 64, 65, 67, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 123, 124, 125, 127, 129, 131, 132, 133, 135, 136, 138, 139, 140, 141, 143, 144, 145, 147, 150, 151, 153, 155, 156, 157, 158, 160, 162, 163, 164, 165, 166, 168, 171, 174, 175, 176, 177, 178, 180, 181, 182, 183, 184, 186, 187, 189, 191, 193, 194, 196, 197, 198, 199, 200, 202, 203, 204, 205, 206, 208, 209, 210, 212, 214, 216, 217, 219, 220, 221, 222, 223, 224, 226, 227, 228, 229, 230, 231, 232, 235, 236, 237, 239, 240, 242, 244, 245, 247, 248, 249, 250, 251, 252, 254, 255, 256, 257, 258, 260, 261, 262, 263, 264, 265, 266, 267, 268, 270, 271, 272, 273, 274, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 290, 291, 292, 293, 294, 295, 297, 298, 300, 301, 302, 304, 305, 307, 308, 309, 310, 312, 313, 314, 316, 319, 320, 322, 324, 325, 326, 327, 329, 331, 332, 333, 334, 335, 336, 338, 339, 341, 343, 346, 347, 349, 350, 352, 353, 354, 355, 356, 357, 359, 360, 361, 362, 363, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 403, 404, 405, 407, 408, 410, 411, 412, 413, 
415, 416, 417, 419, 422, 423, 425, 427, 428, 429, 430, 432, 434, 435, 436, 440, 441, 443, 444, 446, 449, 450, 452, 453, 455, 456, 457, 458, 459, 460, 461, 463, 464, 465, 466, 467, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 502, 503, 504, 506, 508, 510, 511, 512, 514, 515, 517, 518, 519, 520, 522, 523, 524, 526, 529, 530, 532, 534, 535, 536, 537, 539, 541, 542, 543, 544, 545, 547, 548, 549, 551, 552, 553, 554, 555, 557, 558, 559, 560, 561], "kei": [1, 5, 9, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 57, 58, 59, 71, 74, 78, 79, 83, 86, 88, 92, 102, 103, 104, 110, 114, 116, 118, 121, 127, 143, 151, 155, 157, 164, 184, 199, 206, 223, 230, 231, 232, 236, 250, 251, 253, 258, 262, 272, 273, 274, 280, 284, 288, 291, 295, 298, 312, 320, 326, 347, 350, 353, 354, 358, 363, 367, 377, 378, 379, 385, 389, 391, 393, 396, 400, 415, 423, 429, 436, 450, 453, 457, 458, 462, 465, 467, 471, 481, 482, 483, 489, 493, 495, 497, 500, 506, 522, 530, 534, 536, 543, 557], "featur": [1, 6, 11, 12, 14, 16, 17, 18, 19, 20, 22, 25, 28, 29, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 54, 58, 59, 66, 67, 71, 75, 78, 80, 81, 89, 90, 99, 101, 104, 108, 109, 110, 114, 119, 120, 123, 127, 136, 141, 151, 154, 155, 156, 158, 161, 163, 170, 172, 173, 176, 184, 186, 192, 194, 195, 198, 206, 209, 215, 216, 218, 222, 232, 236, 243, 244, 246, 250, 259, 260, 271, 274, 278, 279, 280, 284, 290, 295, 298, 305, 310, 320, 323, 325, 330, 332, 333, 334, 342, 343, 347, 351, 353, 355, 356, 364, 365, 374, 376, 379, 383, 384, 385, 389, 394, 395, 397, 400, 408, 413, 423, 426, 428, 433, 435, 445, 446, 450, 454, 457, 459, 460, 468, 469, 478, 480, 483, 487, 488, 489, 493, 498, 499, 502, 506, 515, 520, 530, 533, 534, 535, 537, 540, 542, 557], "an": [1, 2, 3, 4, 5, 7, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19, 20, 22, 25, 26, 27, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 44, 46, 47, 48, 50, 57, 62, 65, 67, 70, 71, 73, 74, 77, 78, 79, 80, 81, 82, 85, 86, 87, 88, 89, 90, 92, 93, 94, 95, 96, 98, 99, 101, 102, 103, 104, 106, 108, 109, 110, 113, 114, 115, 117, 118, 119, 120, 121, 122, 124, 126, 127, 129, 130, 131, 132, 134, 136, 139, 143, 145, 146, 147, 151, 152, 153, 154, 155, 158, 160, 162, 163, 165, 166, 168, 171, 172, 174, 175, 176, 177, 178, 180, 181, 182, 183, 184, 185, 186, 189, 193, 194, 196, 197, 198, 199, 200, 202, 203, 204, 205, 206, 208, 209, 212, 216, 219, 220, 221, 222, 223, 224, 226, 227, 228, 229, 230, 231, 232, 235, 236, 240, 244, 247, 248, 249, 250, 251, 252, 254, 255, 256, 257, 258, 259, 260, 262, 263, 264, 265, 266, 268, 269, 271, 272, 273, 274, 276, 278, 279, 280, 283, 284, 285, 287, 288, 289, 290, 291, 293, 295, 297, 298, 300, 302, 303, 305, 312, 314, 315, 316, 320, 321, 322, 323, 324, 327, 329, 331, 332, 333, 334, 335, 339, 343, 346, 347, 349, 350, 352, 353, 354, 355, 356, 357, 360, 361, 362, 363, 364, 365, 367, 368, 369, 370, 371, 373, 374, 376, 377, 378, 379, 381, 383, 384, 385, 388, 389, 390, 392, 393, 394, 395, 396, 398, 400, 402, 403, 406, 408, 411, 415, 417, 418, 419, 423, 424, 425, 426, 427, 430, 432, 434, 435, 437, 438, 441, 444, 446, 449, 450, 452, 453, 456, 457, 458, 459, 460, 461, 464, 465, 466, 467, 468, 469, 471, 472, 473, 474, 475, 477, 478, 480, 481, 482, 483, 485, 487, 488, 489, 492, 493, 494, 496, 497, 498, 499, 500, 501, 503, 505, 506, 508, 509, 510, 511, 513, 515, 518, 522, 524, 525, 526, 530, 531, 532, 533, 534, 537, 539, 541, 542, 544, 545, 547, 548, 549, 550, 551, 552, 
553, 554, 555, 557, 558, 561], "import": [1, 4, 5, 8, 9, 12, 16, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 48, 50, 53, 56, 67, 68, 71, 74, 77, 78, 79, 80, 81, 83, 86, 90, 100, 101, 102, 108, 109, 120, 123, 132, 133, 134, 135, 136, 139, 140, 147, 148, 149, 150, 155, 157, 163, 175, 176, 177, 182, 184, 186, 197, 198, 199, 204, 206, 209, 217, 221, 222, 223, 228, 232, 236, 245, 249, 250, 251, 253, 256, 260, 270, 271, 290, 292, 297, 298, 301, 302, 303, 304, 305, 309, 316, 317, 318, 319, 324, 326, 332, 333, 334, 343, 344, 347, 350, 352, 353, 354, 355, 356, 358, 361, 365, 375, 376, 377, 395, 397, 404, 405, 406, 407, 408, 411, 412, 419, 420, 421, 422, 427, 429, 435, 446, 447, 450, 453, 456, 457, 458, 459, 460, 462, 465, 469, 479, 480, 481, 487, 488, 499, 502, 511, 512, 513, 514, 515, 518, 519, 526, 527, 528, 529, 534, 536, 542, 547, 548, 549, 550, 551, 552, 553, 556, 557, 558], "differenti": 1, "over": [1, 5, 10, 12, 18, 19, 20, 33, 34, 35, 36, 41, 42, 46, 47, 53, 62, 71, 73, 78, 79, 80, 81, 88, 102, 104, 108, 109, 110, 114, 118, 131, 132, 145, 163, 168, 174, 176, 184, 185, 186, 189, 196, 198, 199, 206, 208, 209, 212, 219, 220, 222, 223, 231, 232, 235, 236, 239, 240, 247, 248, 250, 251, 258, 274, 278, 279, 280, 284, 288, 298, 300, 332, 334, 339, 347, 349, 353, 354, 355, 356, 363, 377, 379, 383, 384, 385, 389, 393, 403, 435, 441, 450, 452, 457, 458, 459, 460, 467, 481, 483, 487, 488, 489, 493, 497, 510, 511, 524, 542, 555, 557], "other": [1, 2, 4, 5, 12, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 43, 44, 46, 47, 48, 50, 54, 65, 67, 71, 73, 74, 78, 79, 80, 81, 86, 87, 88, 90, 92, 94, 95, 98, 99, 101, 104, 110, 113, 114, 115, 118, 119, 120, 127, 136, 137, 138, 140, 151, 158, 163, 174, 176, 177, 182, 183, 184, 186, 194, 196, 198, 199, 204, 205, 206, 209, 216, 219, 220, 222, 223, 228, 229, 231, 232, 236, 244, 247, 248, 250, 251, 256, 257, 258, 260, 262, 264, 265, 268, 271, 274, 280, 283, 284, 285, 288, 290, 295, 298, 305, 306, 307, 309, 320, 327, 332, 333, 334, 343, 346, 347, 349, 350, 353, 354, 355, 356, 361, 362, 363, 365, 367, 369, 370, 373, 374, 376, 379, 385, 388, 389, 390, 393, 394, 395, 400, 408, 409, 410, 412, 423, 430, 435, 444, 446, 449, 450, 452, 453, 457, 458, 459, 460, 465, 466, 467, 469, 471, 473, 474, 477, 478, 480, 483, 489, 492, 493, 494, 497, 498, 499, 506, 515, 516, 517, 519, 530, 537, 542, 555, 557, 558], "raid": [1, 3, 5, 34, 36, 47, 57, 64, 67, 71, 78, 79, 80, 133, 136, 162, 163, 176, 184, 186, 198, 206, 209, 222, 232, 236, 250, 298, 332, 333, 343, 347, 353, 355, 435, 443, 446, 450, 457, 458, 459, 515, 542, 555], "implement": [1, 6, 7, 8, 11, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 57, 59, 64, 70, 71, 77, 78, 79, 90, 101, 104, 120, 163, 171, 177, 180, 183, 184, 191, 193, 198, 199, 202, 205, 206, 209, 214, 219, 222, 223, 226, 229, 230, 232, 236, 242, 247, 250, 251, 254, 257, 260, 271, 272, 274, 290, 297, 298, 332, 341, 346, 347, 352, 353, 354, 365, 376, 379, 395, 435, 443, 449, 450, 456, 457, 458, 469, 480, 483, 499, 542, 557], "filesystem": [1, 11, 14, 16, 18, 19, 20, 22, 23, 25, 28, 31, 33, 34, 35, 36, 37, 38, 40, 46, 48, 53, 57, 66, 71, 74, 77, 78, 79, 81, 82, 84, 88, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100, 101, 102, 103, 104, 105, 106, 108, 109, 110, 112, 113, 114, 115, 116, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 131, 155, 170, 175, 176, 177, 178, 180, 184, 185, 186, 187, 192, 197, 198, 199, 200, 202, 206, 208, 210, 215, 221, 222, 223, 224, 226, 231, 232, 235, 236, 237, 243, 249, 250, 251, 252, 254, 258, 260, 
261, 262, 263, 264, 265, 266, 268, 269, 270, 271, 273, 274, 276, 277, 278, 279, 280, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 297, 298, 300, 334, 342, 347, 350, 352, 353, 354, 356, 357, 359, 363, 365, 366, 367, 368, 369, 370, 371, 373, 374, 375, 376, 377, 378, 379, 380, 381, 383, 384, 385, 387, 388, 389, 390, 391, 393, 394, 395, 396, 397, 398, 399, 400, 403, 445, 450, 453, 456, 457, 458, 460, 461, 463, 467, 469, 470, 471, 472, 473, 474, 475, 477, 478, 479, 480, 481, 482, 483, 484, 485, 487, 488, 489, 491, 492, 493, 494, 495, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 510, 534, 547, 549, 554, 558], "advantag": [1, 5, 46, 53, 78, 79, 80, 110, 114, 177, 184, 186, 199, 206, 209, 223, 232, 236, 251, 280, 284, 298, 333, 353, 354, 355, 385, 389, 457, 458, 459, 489, 493], "includ": [1, 2, 4, 9, 11, 12, 16, 18, 19, 20, 22, 23, 32, 33, 34, 35, 36, 37, 38, 40, 41, 42, 46, 47, 50, 53, 54, 57, 61, 62, 65, 67, 71, 74, 77, 78, 79, 80, 81, 86, 90, 93, 96, 101, 104, 106, 110, 114, 120, 124, 129, 130, 133, 139, 142, 145, 151, 155, 157, 158, 159, 163, 165, 166, 168, 175, 176, 177, 182, 184, 186, 189, 194, 197, 198, 199, 204, 206, 207, 209, 212, 216, 221, 222, 223, 228, 231, 232, 234, 236, 239, 240, 244, 249, 250, 251, 256, 260, 263, 266, 271, 274, 276, 280, 284, 290, 293, 298, 299, 311, 314, 320, 326, 327, 328, 332, 333, 334, 335, 338, 339, 343, 347, 350, 353, 354, 355, 356, 361, 365, 368, 371, 376, 379, 381, 385, 389, 395, 398, 402, 411, 414, 417, 423, 429, 430, 431, 435, 437, 438, 440, 441, 444, 446, 450, 453, 456, 457, 458, 459, 460, 465, 469, 472, 475, 480, 483, 485, 489, 493, 499, 503, 508, 509, 518, 521, 524, 530, 534, 536, 537, 538, 542, 544, 545, 553, 557], "detect": [1, 5, 8, 12, 14, 16, 21, 25, 26, 28, 31, 32, 46, 47, 48, 53, 62, 67, 71, 78, 80, 81, 86, 104, 110, 114, 136, 139, 155, 168, 172, 175, 186, 189, 194, 197, 198, 209, 212, 216, 221, 222, 228, 231, 232, 236, 240, 244, 249, 250, 256, 274, 280, 284, 298, 305, 333, 334, 339, 343, 347, 353, 355, 356, 361, 379, 385, 389, 408, 411, 441, 446, 450, 457, 459, 460, 465, 483, 489, 493, 515, 518, 534, 549, 550, 552, 553, 554, 557], "data": [1, 3, 5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 45, 46, 47, 48, 49, 50, 53, 54, 57, 61, 62, 64, 66, 67, 71, 77, 78, 79, 80, 81, 86, 90, 91, 92, 93, 101, 107, 108, 109, 110, 112, 113, 114, 117, 120, 127, 131, 133, 138, 139, 140, 143, 144, 147, 151, 155, 158, 159, 160, 163, 164, 165, 166, 168, 170, 171, 175, 176, 177, 182, 184, 185, 186, 187, 189, 191, 192, 193, 197, 198, 199, 204, 206, 208, 209, 210, 212, 214, 215, 221, 222, 223, 228, 232, 235, 236, 237, 239, 240, 242, 243, 249, 250, 251, 256, 260, 263, 271, 278, 279, 280, 283, 284, 290, 295, 297, 298, 300, 307, 309, 312, 313, 320, 324, 327, 328, 329, 332, 333, 334, 335, 336, 338, 339, 341, 342, 343, 347, 352, 353, 354, 355, 356, 361, 365, 368, 376, 383, 384, 385, 388, 389, 395, 400, 403, 410, 411, 412, 415, 416, 423, 427, 430, 431, 432, 435, 436, 437, 438, 440, 441, 443, 445, 446, 450, 456, 457, 458, 459, 460, 465, 469, 470, 471, 472, 480, 486, 487, 488, 489, 491, 492, 493, 496, 499, 506, 510, 517, 518, 519, 522, 523, 526, 530, 534, 537, 538, 539, 542, 543, 544, 545, 548, 549, 550, 551, 552, 553, 555, 556, 557, 559, 560, 561, 562], "corrupt": [1, 46, 47, 48, 53, 57, 66, 71, 78, 80, 81, 86, 108, 109, 131, 136, 139, 140, 165, 166, 170, 175, 176, 184, 185, 186, 192, 197, 198, 206, 208, 209, 215, 221, 222, 232, 235, 236, 243, 249, 250, 298, 300, 305, 309, 333, 342, 347, 353, 355, 356, 403, 408, 411, 412, 445, 
450, 457, 459, 460, 465, 487, 488, 510, 515, 518, 519, 544, 545, 557, 558, 561, 562], "upon": [1, 26, 47, 53, 65, 71, 108, 109, 110, 114, 139, 143, 148, 149, 155, 175, 186, 197, 198, 209, 221, 222, 232, 236, 249, 250, 278, 279, 280, 284, 312, 317, 318, 324, 347, 383, 384, 385, 389, 411, 415, 420, 421, 427, 444, 450, 487, 488, 489, 493, 518, 522, 527, 528, 534, 557], "read": [1, 4, 5, 8, 10, 12, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 36, 41, 42, 43, 44, 46, 47, 48, 49, 50, 53, 61, 66, 70, 71, 73, 76, 77, 78, 79, 80, 81, 86, 87, 88, 90, 95, 98, 101, 104, 108, 109, 110, 114, 115, 118, 120, 127, 131, 132, 133, 139, 143, 145, 148, 149, 151, 155, 158, 163, 164, 170, 174, 175, 176, 177, 182, 183, 184, 185, 186, 187, 192, 196, 197, 198, 199, 204, 205, 206, 208, 209, 210, 215, 219, 220, 221, 222, 223, 228, 229, 231, 232, 235, 236, 237, 239, 243, 247, 248, 249, 250, 251, 256, 257, 258, 260, 271, 274, 278, 279, 280, 284, 288, 290, 295, 297, 298, 300, 312, 314, 317, 318, 332, 333, 334, 336, 338, 342, 346, 347, 349, 352, 353, 354, 355, 356, 361, 362, 363, 365, 376, 379, 383, 384, 385, 389, 393, 395, 400, 403, 411, 415, 417, 420, 421, 427, 435, 436, 440, 445, 449, 450, 452, 455, 456, 457, 458, 459, 460, 465, 466, 467, 469, 474, 477, 480, 483, 487, 488, 489, 493, 494, 497, 499, 506, 510, 511, 518, 522, 524, 527, 528, 530, 534, 537, 542, 543, 548, 549, 550, 551, 553, 554, 555, 556, 557, 559, 560, 562], "from": [1, 4, 5, 9, 10, 11, 12, 14, 16, 18, 19, 20, 22, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 44, 46, 47, 48, 50, 54, 55, 56, 57, 61, 62, 65, 66, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 86, 87, 88, 89, 90, 91, 92, 94, 95, 96, 97, 98, 99, 100, 101, 104, 105, 106, 107, 108, 109, 110, 111, 112, 114, 115, 118, 119, 120, 122, 123, 124, 126, 127, 132, 133, 134, 136, 138, 139, 140, 143, 145, 146, 147, 151, 154, 155, 157, 160, 161, 163, 164, 168, 170, 171, 172, 174, 175, 176, 177, 182, 183, 184, 186, 187, 189, 192, 193, 194, 196, 197, 198, 199, 204, 205, 206, 209, 210, 212, 215, 216, 219, 220, 221, 222, 223, 228, 229, 230, 231, 232, 236, 237, 239, 240, 243, 244, 247, 248, 249, 250, 251, 256, 257, 258, 259, 260, 261, 262, 264, 265, 266, 267, 268, 269, 270, 271, 272, 274, 275, 276, 277, 278, 279, 280, 281, 282, 284, 285, 288, 289, 290, 292, 293, 295, 297, 298, 303, 305, 307, 309, 312, 314, 315, 320, 323, 324, 326, 329, 332, 333, 334, 336, 338, 339, 342, 346, 347, 349, 350, 352, 353, 354, 355, 356, 361, 362, 363, 364, 365, 366, 367, 369, 370, 371, 372, 373, 374, 375, 376, 379, 380, 381, 382, 383, 384, 385, 386, 387, 389, 390, 393, 394, 395, 397, 398, 400, 406, 408, 410, 411, 412, 415, 417, 418, 423, 426, 427, 429, 432, 433, 435, 436, 440, 441, 444, 445, 449, 450, 452, 453, 455, 456, 457, 458, 459, 460, 465, 466, 467, 468, 469, 470, 471, 473, 474, 475, 476, 477, 478, 479, 480, 483, 484, 485, 486, 487, 488, 489, 490, 491, 493, 494, 497, 498, 499, 501, 502, 503, 505, 506, 511, 513, 515, 517, 518, 519, 522, 524, 525, 526, 530, 533, 534, 536, 539, 540, 542, 543, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 561], "media": [1, 14, 16, 18, 19, 20, 25, 31, 34, 36, 41, 42, 47, 48, 71, 80, 176, 186, 198, 209, 222, 236, 250, 333, 347, 355, 450, 459], "block": [1, 3, 5, 11, 18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 48, 49, 62, 64, 67, 68, 71, 77, 78, 79, 80, 81, 86, 90, 92, 94, 101, 103, 108, 109, 110, 114, 116, 120, 121, 127, 131, 132, 133, 136, 139, 143, 145, 147, 155, 157, 158, 160, 163, 168, 172, 175, 176, 177, 182, 184, 185, 186, 189, 191, 193, 194, 197, 198, 199, 204, 206, 208, 209, 212, 
214, 216, 221, 222, 223, 228, 232, 235, 236, 237, 240, 242, 244, 249, 250, 251, 256, 260, 262, 264, 271, 273, 278, 279, 280, 284, 290, 291, 295, 297, 298, 300, 301, 305, 312, 314, 316, 326, 327, 329, 332, 333, 334, 336, 339, 341, 343, 344, 347, 352, 353, 354, 355, 356, 361, 365, 367, 369, 376, 378, 383, 384, 385, 389, 391, 395, 396, 400, 403, 404, 408, 411, 415, 417, 419, 427, 429, 430, 432, 435, 441, 443, 446, 447, 450, 456, 457, 458, 459, 460, 465, 469, 471, 473, 480, 482, 487, 488, 489, 493, 495, 499, 500, 506, 510, 511, 515, 518, 522, 524, 526, 534, 536, 537, 539, 542, 559, 560], "automat": [1, 9, 12, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 56, 57, 70, 71, 74, 77, 78, 79, 80, 81, 86, 87, 90, 91, 92, 95, 98, 101, 103, 112, 115, 116, 120, 121, 127, 131, 133, 139, 155, 157, 163, 165, 166, 175, 177, 183, 184, 185, 186, 197, 199, 205, 206, 208, 209, 219, 221, 222, 223, 229, 230, 232, 235, 236, 247, 249, 250, 251, 257, 260, 261, 262, 271, 272, 273, 282, 286, 290, 291, 295, 297, 298, 300, 302, 324, 326, 332, 333, 334, 346, 347, 350, 352, 353, 354, 355, 356, 362, 365, 366, 367, 376, 378, 387, 391, 395, 396, 400, 403, 405, 411, 427, 429, 435, 449, 450, 453, 456, 457, 458, 459, 460, 465, 466, 469, 470, 471, 474, 477, 480, 482, 491, 494, 495, 499, 500, 506, 510, 512, 518, 534, 536, 542, 544, 545, 547, 549, 550, 555], "repair": [1, 5, 47, 57, 66, 71, 80, 108, 109, 155, 165, 166, 186, 198, 209, 222, 236, 250, 324, 333, 347, 355, 427, 445, 450, 459, 487, 488, 534, 544, 545, 547, 554, 555], "possibl": [1, 7, 8, 9, 11, 12, 21, 22, 37, 41, 46, 47, 48, 50, 53, 54, 61, 71, 77, 78, 79, 80, 81, 86, 87, 90, 101, 107, 108, 109, 110, 114, 120, 125, 133, 139, 153, 155, 160, 162, 165, 166, 175, 176, 177, 182, 183, 184, 186, 197, 198, 199, 204, 205, 206, 209, 219, 221, 222, 223, 228, 229, 232, 236, 239, 247, 249, 250, 251, 256, 257, 260, 271, 277, 278, 279, 280, 284, 290, 294, 297, 298, 302, 322, 324, 329, 331, 333, 334, 338, 347, 352, 353, 354, 355, 356, 361, 362, 365, 376, 382, 383, 384, 385, 389, 395, 399, 405, 411, 425, 427, 432, 434, 440, 450, 456, 457, 458, 459, 460, 465, 466, 469, 480, 486, 487, 488, 489, 493, 499, 504, 512, 518, 532, 534, 539, 541, 544, 545, 553, 554], "protect": [1, 4, 18, 19, 20, 22, 26, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 57, 78, 81, 90, 101, 110, 114, 120, 232, 236, 260, 271, 280, 284, 290, 298, 334, 353, 356, 365, 376, 385, 389, 395, 457, 460, 469, 480, 489, 493, 499], "suitabl": [1, 2, 32, 47, 48, 78, 164, 206, 232, 298, 353, 436, 457, 543], "configur": [1, 4, 5, 9, 10, 12, 13, 23, 27, 29, 32, 43, 46, 47, 48, 54, 64, 65, 66, 67, 70, 71, 73, 78, 79, 80, 81, 85, 86, 87, 102, 103, 121, 127, 131, 132, 133, 136, 139, 143, 146, 151, 153, 157, 163, 170, 172, 174, 176, 181, 182, 183, 184, 185, 186, 192, 194, 196, 198, 199, 203, 204, 205, 206, 208, 209, 215, 216, 219, 220, 221, 222, 223, 227, 228, 229, 232, 235, 236, 243, 244, 247, 248, 249, 250, 251, 255, 256, 257, 262, 269, 273, 289, 291, 295, 298, 300, 301, 302, 305, 312, 315, 320, 322, 326, 332, 333, 334, 341, 342, 343, 346, 347, 349, 353, 354, 355, 356, 360, 361, 362, 377, 378, 396, 400, 403, 404, 405, 408, 411, 415, 418, 423, 425, 429, 435, 443, 444, 445, 446, 449, 450, 452, 457, 458, 459, 460, 464, 465, 466, 481, 482, 500, 506, 510, 511, 512, 515, 518, 522, 525, 530, 532, 536, 542, 552, 553, 562], "pool": [1, 2, 4, 5, 7, 14, 16, 18, 19, 20, 22, 25, 26, 27, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 45, 46, 47, 50, 57, 66, 67, 68, 71, 74, 76, 77, 78, 79, 80, 81, 82, 86, 90, 91, 92, 93, 95, 98, 100, 101, 102, 
104, 107, 108, 109, 110, 112, 113, 114, 115, 117, 120, 123, 127, 128, 129, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 170, 171, 172, 175, 176, 177, 178, 182, 184, 185, 186, 187, 192, 193, 194, 197, 198, 199, 200, 204, 206, 208, 209, 210, 215, 216, 217, 221, 222, 223, 224, 228, 230, 231, 232, 235, 236, 237, 243, 244, 245, 249, 250, 251, 252, 256, 260, 263, 270, 271, 272, 274, 278, 279, 280, 284, 290, 292, 295, 296, 297, 298, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 342, 343, 344, 347, 350, 352, 353, 354, 355, 356, 357, 361, 365, 368, 375, 376, 377, 379, 383, 384, 385, 389, 395, 397, 400, 401, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 445, 446, 447, 450, 453, 455, 456, 457, 458, 459, 460, 461, 465, 469, 470, 471, 472, 474, 477, 479, 480, 481, 483, 486, 487, 488, 489, 491, 492, 493, 494, 496, 499, 502, 506, 507, 508, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 547, 548, 549, 550, 551, 552, 554, 555, 556, 557, 558, 561, 562], "redund": [1, 3, 5, 46, 48, 53, 67, 77, 78, 79, 80, 133, 136, 145, 151, 153, 163, 184, 186, 206, 209, 232, 236, 251, 297, 298, 302, 305, 314, 320, 322, 332, 333, 343, 352, 353, 354, 355, 405, 408, 417, 423, 425, 435, 446, 456, 457, 458, 459, 512, 515, 524, 530, 532, 542], "copi": [1, 8, 10, 14, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 57, 67, 71, 77, 78, 79, 80, 81, 86, 87, 88, 90, 95, 98, 101, 115, 118, 120, 127, 133, 151, 172, 182, 183, 184, 186, 194, 204, 205, 206, 209, 216, 222, 228, 229, 230, 232, 236, 244, 250, 256, 257, 258, 260, 271, 272, 288, 290, 295, 297, 298, 320, 333, 334, 343, 347, 352, 353, 355, 356, 361, 362, 363, 365, 376, 393, 395, 400, 423, 446, 450, 456, 457, 458, 459, 460, 465, 466, 467, 469, 474, 477, 480, 494, 497, 499, 506, 530], "see": [1, 4, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 40, 41, 42, 43, 47, 48, 53, 54, 61, 62, 64, 65, 66, 67, 68, 71, 73, 74, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 132, 133, 134, 135, 136, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 168, 170, 171, 172, 174, 175, 176, 177, 178, 180, 181, 182, 183, 184, 185, 186, 187, 189, 191, 192, 193, 194, 196, 197, 198, 199, 200, 202, 203, 204, 205, 206, 207, 208, 209, 210, 212, 214, 215, 216, 217, 220, 221, 222, 223, 224, 226, 227, 228, 229, 230, 231, 232, 234, 235, 236, 237, 239, 240, 242, 243, 244, 245, 248, 249, 250, 251, 252, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 
335, 336, 338, 339, 341, 342, 343, 344, 347, 349, 350, 352, 353, 354, 355, 356, 357, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 440, 441, 443, 444, 445, 446, 447, 450, 452, 453, 455, 456, 457, 458, 459, 460, 461, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 509, 510, 511, 512, 513, 514, 515, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "properti": [1, 2, 5, 18, 19, 20, 34, 35, 36, 37, 41, 42, 47, 48, 50, 53, 66, 71, 74, 76, 77, 78, 79, 80, 81, 84, 86, 88, 90, 91, 92, 95, 96, 98, 99, 100, 101, 102, 103, 104, 106, 108, 109, 110, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 124, 126, 127, 129, 131, 132, 133, 136, 139, 141, 143, 147, 153, 156, 157, 160, 161, 162, 163, 165, 166, 170, 175, 177, 182, 184, 185, 186, 192, 197, 198, 199, 204, 206, 208, 209, 215, 221, 222, 223, 228, 230, 231, 232, 235, 236, 243, 249, 250, 251, 256, 258, 260, 261, 262, 265, 266, 268, 269, 270, 271, 272, 273, 274, 276, 278, 279, 280, 282, 284, 285, 286, 287, 288, 289, 290, 291, 293, 295, 297, 298, 300, 301, 302, 305, 310, 312, 316, 322, 325, 326, 329, 331, 332, 333, 334, 342, 347, 350, 352, 353, 354, 355, 356, 359, 361, 363, 365, 366, 367, 370, 371, 373, 374, 375, 376, 377, 378, 379, 381, 383, 384, 385, 387, 389, 390, 391, 392, 393, 394, 395, 396, 398, 400, 403, 404, 405, 408, 411, 413, 415, 419, 425, 428, 429, 432, 433, 434, 435, 445, 450, 453, 455, 456, 457, 458, 459, 460, 463, 465, 467, 469, 470, 471, 474, 475, 477, 478, 479, 480, 481, 482, 483, 485, 487, 488, 489, 491, 493, 494, 495, 496, 497, 498, 499, 500, 501, 503, 505, 506, 508, 510, 511, 512, 515, 518, 520, 522, 526, 532, 535, 536, 539, 540, 541, 542, 544, 545, 557, 559, 560], "period": [1, 2, 47, 50, 71, 76, 78, 81, 108, 109, 136, 145, 155, 160, 176, 184, 186, 198, 206, 209, 219, 222, 232, 236, 247, 250, 298, 305, 314, 324, 334, 347, 353, 356, 408, 417, 427, 450, 455, 457, 460, 487, 488, 515, 524, 534, 539, 555], "scrub": [1, 5, 46, 50, 71, 79, 80, 82, 83, 90, 101, 108, 109, 120, 133, 139, 145, 151, 152, 153, 154, 158, 162, 163, 175, 176, 178, 186, 197, 198, 200, 209, 221, 222, 224, 232, 236, 249, 250, 251, 252, 253, 260, 271, 290, 302, 314, 321, 322, 323, 327, 331, 332, 333, 347, 354, 355, 357, 358, 365, 376, 395, 405, 411, 417, 424, 425, 426, 430, 434, 435, 450, 458, 459, 461, 462, 469, 480, 487, 488, 499, 512, 518, 524, 530, 531, 532, 533, 537, 541, 542, 548, 549, 550, 551, 553, 554, 555, 556, 557, 559, 560, 561], "can": [1, 3, 4, 5, 7, 8, 9, 10, 11, 12, 14, 16, 17, 18, 19, 20, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 45, 46, 47, 48, 49, 50, 53, 56, 62, 65, 66, 67, 70, 71, 73, 76, 77, 78, 79, 80, 81, 86, 87, 88, 89, 90, 91, 92, 93, 95, 96, 98, 99, 100, 101, 103, 104, 106, 107, 108, 109, 110, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 126, 127, 129, 131, 132, 133, 134, 135, 136, 139, 140, 141, 143, 144, 145, 147, 148, 149, 151, 152, 153, 
155, 156, 157, 158, 160, 161, 163, 164, 165, 166, 168, 170, 172, 174, 175, 176, 177, 182, 183, 184, 185, 186, 189, 192, 193, 194, 196, 197, 198, 199, 204, 205, 206, 208, 209, 212, 215, 216, 219, 220, 221, 222, 223, 228, 229, 230, 231, 232, 235, 236, 240, 243, 244, 247, 248, 249, 250, 251, 256, 257, 258, 259, 260, 261, 262, 263, 265, 266, 268, 269, 270, 271, 272, 273, 274, 276, 277, 278, 279, 280, 282, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 295, 297, 298, 300, 301, 302, 303, 305, 308, 309, 310, 312, 313, 314, 316, 320, 321, 322, 325, 326, 327, 329, 330, 332, 333, 334, 335, 339, 342, 343, 346, 347, 349, 352, 353, 354, 355, 356, 361, 362, 363, 364, 365, 366, 367, 368, 370, 371, 373, 374, 375, 376, 378, 379, 381, 382, 383, 384, 385, 387, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 400, 403, 404, 405, 406, 407, 408, 411, 412, 413, 415, 416, 417, 419, 423, 424, 425, 427, 428, 429, 430, 432, 433, 435, 436, 437, 438, 441, 444, 445, 446, 449, 450, 452, 455, 456, 457, 458, 459, 460, 465, 466, 467, 468, 469, 470, 471, 472, 474, 475, 477, 478, 479, 480, 482, 483, 485, 486, 487, 488, 489, 491, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 505, 506, 508, 510, 511, 512, 513, 514, 515, 518, 519, 520, 522, 523, 524, 526, 530, 531, 532, 534, 535, 536, 537, 539, 540, 542, 543, 544, 545, 547, 548, 550, 553, 554, 555, 556, 557, 558, 559, 560], "check": [1, 5, 7, 8, 10, 12, 18, 19, 20, 22, 25, 27, 32, 34, 35, 36, 37, 39, 41, 42, 43, 46, 47, 57, 62, 64, 71, 78, 79, 80, 81, 82, 86, 87, 90, 101, 104, 105, 108, 109, 120, 127, 129, 131, 132, 134, 136, 139, 164, 168, 175, 176, 178, 182, 183, 184, 185, 186, 189, 191, 197, 198, 200, 204, 205, 206, 208, 209, 212, 214, 221, 222, 224, 228, 229, 231, 232, 235, 236, 240, 242, 249, 250, 252, 256, 257, 260, 271, 274, 275, 278, 279, 290, 295, 298, 300, 301, 305, 334, 339, 341, 347, 353, 354, 355, 356, 357, 361, 362, 365, 376, 379, 380, 383, 384, 395, 400, 403, 404, 406, 408, 411, 436, 441, 443, 450, 457, 458, 459, 460, 461, 465, 466, 469, 480, 483, 484, 487, 488, 499, 506, 508, 510, 511, 513, 515, 518, 543, 553, 555, 557], "latent": 1, "degrad": [1, 5, 43, 46, 47, 48, 53, 71, 80, 81, 82, 131, 139, 163, 175, 178, 185, 186, 197, 200, 208, 209, 221, 222, 224, 235, 236, 249, 250, 252, 300, 332, 333, 334, 347, 355, 356, 357, 403, 411, 435, 450, 459, 460, 461, 510, 518, 542, 548, 550, 555, 561], "bit": [1, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 51, 57, 61, 64, 67, 70, 71, 78, 79, 80, 86, 90, 101, 104, 120, 127, 131, 139, 175, 176, 184, 186, 191, 197, 198, 199, 204, 206, 209, 214, 219, 221, 222, 223, 228, 231, 232, 235, 236, 239, 242, 247, 249, 250, 251, 256, 260, 271, 274, 290, 295, 298, 300, 333, 338, 341, 343, 346, 347, 353, 354, 355, 361, 365, 376, 379, 395, 400, 403, 411, 440, 443, 446, 449, 450, 457, 458, 459, 465, 469, 480, 483, 499, 506, 510, 518, 555], "rot": 1, "sourc": [1, 3, 7, 8, 10, 12, 13, 18, 19, 20, 22, 23, 27, 33, 34, 36, 38, 44, 46, 47, 53, 54, 57, 62, 71, 77, 79, 87, 88, 89, 95, 98, 104, 108, 109, 110, 114, 115, 118, 127, 141, 156, 168, 183, 184, 185, 186, 189, 205, 206, 208, 209, 212, 223, 229, 230, 231, 232, 235, 236, 240, 251, 257, 259, 265, 268, 272, 274, 278, 279, 280, 284, 285, 295, 300, 310, 325, 339, 354, 362, 364, 370, 373, 379, 383, 384, 385, 389, 390, 400, 413, 428, 441, 450, 456, 458, 466, 467, 468, 474, 477, 483, 487, 488, 489, 493, 494, 497, 506, 520, 535, 549, 551, 553, 554, 557], "replic": [1, 57, 78, 79, 80, 108, 109, 110, 114, 127, 132, 136, 155, 163, 177, 184, 186, 199, 206, 209, 223, 232, 236, 251, 278, 
279, 280, 284, 295, 298, 301, 305, 324, 332, 333, 353, 354, 355, 383, 384, 385, 389, 400, 404, 408, 427, 435, 457, 458, 459, 487, 488, 489, 493, 506, 511, 515, 534, 542, 553, 562], "stream": [1, 32, 47, 54, 57, 71, 78, 79, 86, 87, 89, 108, 109, 110, 114, 123, 127, 165, 166, 176, 177, 184, 187, 198, 199, 206, 210, 222, 223, 232, 237, 250, 251, 278, 279, 280, 284, 292, 295, 298, 335, 336, 347, 353, 354, 362, 383, 384, 385, 389, 397, 400, 437, 438, 450, 457, 458, 465, 466, 468, 487, 488, 489, 493, 502, 506, 544, 545, 557], "send": [1, 14, 16, 25, 28, 31, 48, 54, 57, 71, 78, 79, 83, 86, 88, 89, 90, 101, 108, 109, 110, 117, 118, 120, 123, 127, 165, 166, 176, 177, 184, 187, 198, 199, 206, 210, 222, 223, 232, 237, 250, 251, 253, 258, 259, 260, 271, 278, 279, 280, 287, 288, 290, 292, 295, 298, 335, 336, 347, 353, 354, 358, 363, 364, 365, 376, 383, 384, 385, 392, 393, 395, 397, 400, 437, 438, 450, 457, 458, 462, 465, 467, 468, 469, 480, 487, 488, 489, 496, 497, 499, 502, 506, 544, 545, 557], "receiv": [1, 9, 11, 19, 20, 33, 34, 35, 41, 42, 46, 48, 57, 64, 71, 78, 79, 83, 88, 95, 98, 109, 110, 114, 115, 118, 127, 165, 166, 177, 184, 191, 198, 199, 206, 214, 222, 223, 232, 242, 250, 251, 253, 258, 265, 268, 279, 280, 284, 285, 288, 295, 298, 335, 341, 347, 353, 354, 358, 363, 370, 373, 384, 385, 389, 390, 393, 400, 437, 438, 443, 450, 457, 458, 462, 467, 474, 477, 488, 489, 493, 494, 497, 506, 544, 545, 557], "ensur": [1, 5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 49, 53, 57, 70, 71, 77, 78, 79, 80, 90, 101, 102, 108, 109, 120, 136, 143, 150, 176, 177, 184, 186, 198, 199, 206, 209, 219, 222, 223, 232, 236, 247, 250, 251, 260, 271, 278, 279, 290, 297, 298, 305, 312, 319, 333, 346, 347, 352, 353, 354, 355, 365, 376, 377, 383, 384, 395, 408, 415, 422, 449, 450, 456, 457, 458, 459, 469, 480, 481, 487, 488, 499, 515, 522, 529, 557], "i": [1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 41, 42, 43, 44, 45, 46, 47, 49, 51, 56, 57, 58, 59, 61, 62, 64, 65, 66, 67, 68, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 135, 136, 138, 139, 140, 141, 142, 143, 145, 146, 147, 148, 149, 151, 152, 153, 154, 155, 156, 157, 158, 160, 161, 162, 163, 164, 165, 166, 168, 170, 171, 172, 174, 175, 176, 177, 178, 180, 181, 182, 183, 184, 185, 186, 189, 191, 192, 193, 194, 196, 197, 198, 199, 200, 202, 203, 204, 205, 206, 207, 208, 209, 212, 214, 215, 216, 217, 219, 220, 221, 222, 223, 224, 226, 227, 228, 229, 230, 231, 232, 234, 235, 236, 239, 240, 242, 243, 244, 245, 247, 248, 249, 250, 251, 252, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 304, 305, 307, 309, 310, 311, 312, 314, 315, 316, 317, 318, 320, 321, 322, 323, 324, 325, 326, 327, 329, 330, 331, 332, 333, 334, 335, 338, 339, 341, 342, 343, 344, 346, 347, 349, 350, 352, 353, 354, 355, 356, 357, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 407, 408, 410, 411, 412, 413, 
414, 415, 417, 418, 419, 420, 421, 423, 424, 425, 426, 427, 428, 429, 430, 432, 433, 434, 435, 436, 437, 438, 440, 441, 443, 444, 445, 446, 447, 449, 450, 452, 453, 455, 456, 457, 458, 459, 460, 461, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 514, 515, 517, 518, 519, 520, 521, 522, 524, 525, 526, 527, 528, 530, 531, 532, 533, 534, 535, 536, 537, 539, 540, 541, 542, 543, 544, 545, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 561, 562], "interven": [1, 47], "storag": [1, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 45, 46, 47, 48, 49, 50, 53, 57, 71, 77, 78, 80, 81, 85, 86, 108, 109, 123, 127, 132, 134, 135, 136, 137, 139, 140, 141, 142, 143, 144, 145, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 176, 181, 184, 186, 198, 203, 206, 209, 221, 222, 227, 232, 236, 249, 250, 255, 278, 279, 295, 297, 298, 301, 303, 304, 305, 306, 309, 310, 311, 312, 313, 314, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 347, 352, 353, 355, 356, 360, 361, 383, 384, 397, 400, 404, 406, 407, 408, 409, 411, 412, 413, 414, 415, 416, 417, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 450, 456, 457, 459, 460, 464, 465, 487, 488, 502, 506, 511, 513, 514, 515, 516, 518, 519, 520, 521, 522, 523, 524, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 556], "transport": [1, 71, 450], "mechan": [1, 4, 34, 36, 46, 47, 48, 78, 80, 184, 186, 206, 209, 232, 236, 298, 333, 353, 355, 457, 459], "The": [1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 14, 21, 23, 25, 27, 29, 32, 35, 37, 43, 44, 45, 46, 47, 48, 49, 50, 54, 57, 61, 62, 64, 65, 66, 67, 70, 71, 73, 76, 77, 78, 79, 80, 81, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 124, 125, 126, 127, 128, 131, 132, 133, 134, 136, 137, 138, 139, 140, 143, 145, 146, 147, 148, 149, 151, 153, 155, 157, 160, 161, 162, 163, 164, 165, 166, 168, 170, 171, 172, 174, 175, 176, 177, 180, 181, 182, 183, 184, 185, 186, 187, 189, 191, 192, 193, 194, 196, 197, 198, 199, 202, 203, 204, 205, 206, 208, 209, 210, 212, 214, 215, 216, 219, 220, 221, 222, 223, 226, 227, 228, 229, 230, 231, 232, 235, 236, 237, 239, 240, 242, 243, 244, 247, 248, 249, 250, 251, 254, 255, 256, 257, 258, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 288, 289, 290, 291, 293, 294, 295, 296, 297, 298, 300, 301, 302, 303, 305, 307, 309, 312, 314, 315, 316, 317, 318, 320, 322, 324, 326, 329, 331, 332, 333, 334, 335, 336, 338, 339, 341, 342, 343, 346, 347, 349, 352, 353, 354, 355, 356, 359, 360, 361, 362, 363, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 393, 394, 395, 396, 398, 399, 400, 401, 403, 404, 405, 406, 408, 410, 411, 412, 415, 417, 418, 419, 420, 421, 423, 425, 427, 429, 432, 434, 435, 436, 437, 438, 440, 441, 443, 444, 445, 446, 449, 450, 452, 455, 456, 457, 458, 459, 460, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 
499, 500, 501, 503, 504, 505, 506, 507, 510, 511, 512, 513, 515, 516, 517, 518, 519, 522, 524, 525, 526, 527, 528, 530, 532, 534, 536, 539, 540, 541, 542, 543, 544, 545, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "chang": [1, 2, 4, 8, 11, 13, 14, 16, 17, 18, 19, 20, 21, 22, 25, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 49, 50, 54, 55, 56, 57, 65, 66, 71, 76, 77, 78, 79, 80, 81, 83, 86, 87, 88, 91, 92, 93, 94, 96, 99, 101, 104, 106, 107, 108, 109, 110, 112, 113, 114, 117, 118, 119, 120, 122, 124, 126, 127, 133, 139, 143, 155, 158, 165, 166, 170, 175, 176, 177, 182, 184, 186, 192, 197, 198, 199, 204, 206, 209, 215, 219, 221, 222, 223, 228, 231, 232, 236, 243, 247, 249, 250, 251, 253, 256, 258, 264, 266, 269, 271, 274, 276, 278, 279, 280, 283, 284, 288, 289, 290, 293, 295, 297, 298, 312, 324, 327, 333, 334, 342, 347, 352, 353, 354, 355, 356, 358, 361, 362, 363, 369, 371, 374, 376, 379, 381, 383, 384, 385, 388, 389, 393, 394, 395, 398, 400, 411, 415, 427, 430, 444, 445, 450, 455, 456, 457, 458, 459, 460, 462, 465, 466, 467, 470, 471, 472, 473, 475, 478, 480, 483, 485, 486, 487, 488, 489, 491, 492, 493, 496, 497, 498, 499, 501, 503, 505, 506, 518, 522, 534, 537, 544, 545, 561], "dataset": [1, 7, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 53, 54, 67, 71, 74, 77, 78, 79, 80, 81, 82, 84, 86, 88, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100, 101, 102, 103, 104, 106, 107, 108, 109, 110, 112, 113, 114, 115, 117, 118, 119, 120, 121, 122, 124, 126, 127, 128, 132, 136, 137, 140, 143, 151, 157, 163, 165, 166, 172, 176, 177, 178, 180, 182, 184, 186, 194, 198, 199, 200, 202, 204, 206, 209, 216, 222, 223, 224, 226, 228, 230, 231, 232, 236, 244, 250, 251, 252, 254, 256, 258, 260, 261, 262, 263, 265, 268, 269, 270, 271, 272, 273, 274, 278, 279, 280, 282, 283, 284, 285, 287, 288, 289, 290, 291, 295, 296, 297, 298, 305, 306, 309, 312, 320, 326, 332, 333, 334, 343, 347, 350, 352, 353, 354, 355, 356, 357, 359, 361, 363, 365, 366, 367, 368, 370, 371, 373, 374, 375, 376, 377, 378, 379, 381, 382, 383, 384, 385, 387, 388, 389, 390, 392, 393, 394, 395, 396, 398, 400, 401, 408, 409, 412, 415, 423, 429, 435, 446, 450, 453, 456, 457, 458, 459, 460, 461, 463, 465, 467, 469, 470, 471, 472, 473, 474, 475, 477, 478, 479, 480, 481, 482, 483, 485, 486, 487, 488, 489, 491, 492, 493, 494, 496, 497, 498, 499, 500, 501, 503, 505, 506, 507, 511, 515, 516, 519, 522, 530, 536, 542, 544, 545, 554, 557], "volum": [1, 5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 48, 53, 57, 68, 71, 77, 78, 79, 80, 81, 88, 90, 91, 92, 93, 95, 98, 100, 101, 104, 108, 109, 110, 112, 114, 115, 118, 120, 127, 176, 177, 184, 198, 199, 206, 217, 222, 223, 231, 232, 236, 245, 250, 251, 258, 260, 261, 262, 263, 265, 268, 270, 271, 274, 278, 279, 280, 282, 284, 285, 287, 288, 290, 295, 297, 298, 334, 344, 347, 352, 353, 354, 355, 356, 363, 365, 366, 367, 368, 370, 373, 375, 376, 379, 383, 384, 385, 387, 389, 390, 393, 395, 400, 447, 450, 456, 457, 458, 459, 460, 467, 469, 470, 471, 472, 474, 477, 479, 480, 483, 487, 488, 489, 491, 493, 494, 497, 499, 506], "each": [1, 2, 3, 4, 5, 7, 12, 18, 19, 20, 21, 22, 32, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 50, 53, 54, 62, 64, 65, 67, 70, 71, 76, 78, 79, 80, 81, 82, 85, 86, 87, 90, 92, 93, 95, 96, 97, 98, 101, 103, 104, 106, 108, 109, 110, 111, 114, 115, 116, 117, 120, 121, 124, 125, 127, 131, 136, 139, 143, 145, 157, 158, 160, 162, 163, 165, 166, 168, 171, 172, 175, 176, 177, 181, 182, 183, 184, 185, 186, 189, 193, 194, 197, 198, 199, 203, 204, 
205, 206, 208, 209, 212, 216, 217, 219, 221, 222, 223, 227, 228, 229, 231, 232, 235, 236, 240, 244, 245, 247, 249, 250, 251, 255, 256, 257, 262, 265, 266, 267, 268, 273, 274, 276, 278, 279, 280, 281, 284, 285, 291, 293, 294, 295, 298, 300, 305, 308, 312, 314, 326, 327, 329, 331, 332, 333, 334, 335, 339, 341, 343, 346, 347, 353, 354, 355, 356, 357, 360, 361, 362, 365, 367, 370, 371, 372, 373, 376, 378, 379, 381, 383, 384, 385, 386, 389, 390, 391, 395, 396, 398, 399, 400, 403, 408, 411, 415, 417, 429, 430, 432, 434, 435, 437, 438, 441, 443, 444, 446, 449, 450, 455, 457, 458, 459, 460, 461, 464, 465, 466, 469, 471, 472, 474, 475, 476, 477, 480, 482, 483, 485, 487, 488, 489, 490, 493, 494, 495, 496, 499, 500, 503, 504, 506, 510, 515, 518, 522, 524, 536, 537, 539, 541, 542, 544, 545], "store": [1, 3, 4, 5, 7, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 53, 54, 65, 67, 71, 77, 78, 79, 80, 81, 87, 90, 101, 102, 104, 108, 109, 110, 114, 120, 130, 160, 172, 177, 183, 184, 186, 194, 198, 199, 205, 206, 207, 209, 216, 222, 223, 229, 230, 231, 232, 234, 236, 244, 250, 251, 257, 260, 271, 272, 274, 278, 279, 280, 284, 290, 297, 298, 299, 329, 333, 334, 343, 347, 352, 353, 354, 355, 356, 362, 365, 376, 377, 379, 383, 384, 385, 389, 395, 402, 432, 444, 446, 450, 456, 457, 458, 459, 460, 466, 469, 480, 481, 483, 487, 488, 489, 493, 499, 509, 539, 552], "pointer": [1, 47, 71, 79, 86, 131, 176, 177, 182, 185, 198, 199, 204, 208, 222, 223, 228, 235, 250, 251, 256, 300, 347, 354, 361, 403, 450, 458, 465, 510], "metadata": [1, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 41, 42, 46, 48, 53, 54, 61, 71, 78, 79, 80, 86, 90, 101, 108, 109, 110, 114, 120, 139, 155, 165, 166, 176, 177, 182, 184, 186, 198, 199, 204, 206, 209, 221, 222, 223, 228, 232, 236, 239, 249, 250, 251, 256, 260, 271, 278, 279, 280, 284, 290, 298, 333, 335, 338, 347, 353, 354, 355, 361, 365, 376, 383, 384, 385, 389, 395, 411, 427, 437, 438, 440, 450, 457, 458, 459, 465, 469, 480, 487, 488, 489, 493, 499, 518, 534, 544, 545, 554, 562], "calcul": [1, 46, 47, 48, 49, 54, 71, 77, 81, 86, 176, 184, 198, 206, 222, 228, 232, 250, 256, 297, 347, 352, 361, 450, 456, 460, 465], "when": [1, 2, 5, 7, 8, 9, 10, 11, 12, 14, 16, 18, 19, 20, 22, 23, 25, 27, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 49, 50, 54, 57, 64, 65, 67, 68, 70, 71, 76, 77, 78, 79, 80, 81, 84, 85, 86, 87, 89, 90, 93, 101, 104, 108, 109, 110, 112, 113, 114, 116, 120, 127, 133, 136, 139, 143, 145, 147, 153, 155, 160, 163, 165, 166, 172, 175, 176, 177, 181, 182, 183, 184, 186, 191, 194, 197, 198, 199, 203, 204, 205, 206, 209, 214, 216, 217, 219, 221, 222, 223, 227, 228, 229, 230, 231, 232, 236, 242, 244, 245, 247, 249, 250, 251, 255, 256, 257, 259, 260, 263, 271, 272, 274, 275, 278, 279, 280, 282, 283, 284, 286, 290, 295, 297, 298, 302, 305, 312, 314, 316, 320, 322, 324, 329, 332, 333, 334, 341, 343, 344, 346, 347, 352, 353, 354, 355, 356, 359, 360, 361, 362, 364, 365, 368, 376, 379, 383, 384, 385, 387, 388, 389, 391, 395, 400, 405, 408, 411, 415, 417, 419, 425, 427, 432, 435, 443, 444, 446, 447, 449, 450, 455, 456, 457, 458, 459, 460, 463, 464, 465, 466, 468, 469, 472, 480, 483, 487, 488, 489, 491, 492, 493, 495, 499, 506, 512, 515, 518, 522, 524, 526, 532, 534, 539, 542, 544, 545, 547, 548, 552], "written": [1, 3, 14, 16, 25, 31, 34, 41, 42, 46, 47, 48, 53, 67, 71, 77, 78, 79, 80, 81, 87, 90, 101, 104, 110, 114, 120, 133, 143, 159, 163, 165, 166, 170, 171, 172, 176, 177, 180, 183, 184, 185, 186, 192, 193, 194, 198, 199, 202, 205, 206, 208, 209, 215, 216, 222, 223, 
226, 229, 231, 232, 235, 236, 239, 243, 244, 250, 251, 254, 257, 260, 271, 274, 280, 284, 290, 298, 300, 312, 328, 332, 333, 334, 343, 347, 353, 354, 355, 356, 362, 365, 376, 379, 385, 389, 395, 415, 431, 435, 446, 450, 456, 457, 458, 459, 460, 466, 469, 480, 483, 489, 493, 499, 522, 538, 542, 544, 545, 557, 558, 561], "so": [1, 3, 4, 10, 18, 19, 20, 21, 22, 27, 32, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 49, 50, 53, 54, 67, 70, 71, 76, 77, 78, 79, 81, 88, 90, 96, 101, 106, 107, 110, 113, 114, 117, 118, 120, 124, 127, 131, 133, 136, 140, 153, 163, 172, 176, 177, 184, 186, 194, 198, 199, 206, 208, 209, 216, 219, 222, 223, 230, 232, 235, 236, 244, 247, 250, 251, 258, 260, 266, 271, 272, 276, 277, 280, 283, 284, 287, 288, 290, 293, 295, 297, 298, 300, 302, 309, 322, 332, 333, 334, 343, 346, 347, 352, 353, 354, 355, 356, 363, 365, 371, 376, 381, 382, 385, 388, 389, 392, 393, 395, 398, 400, 403, 405, 408, 412, 425, 435, 446, 449, 450, 455, 456, 457, 458, 460, 467, 469, 475, 480, 485, 486, 489, 492, 493, 496, 497, 499, 503, 506, 510, 512, 515, 519, 532, 542, 547, 557, 561], "onli": [1, 5, 9, 11, 12, 14, 16, 18, 19, 20, 22, 25, 26, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 49, 53, 54, 61, 62, 65, 66, 67, 70, 71, 73, 76, 77, 78, 79, 80, 81, 85, 86, 87, 88, 90, 92, 93, 95, 96, 98, 100, 101, 102, 104, 106, 108, 109, 110, 112, 113, 114, 115, 118, 120, 123, 124, 127, 129, 131, 132, 133, 135, 136, 139, 143, 144, 145, 147, 148, 149, 151, 153, 155, 157, 158, 159, 161, 163, 165, 166, 168, 170, 174, 175, 176, 177, 181, 182, 184, 185, 186, 187, 189, 192, 196, 197, 198, 199, 203, 204, 205, 206, 208, 209, 210, 212, 215, 219, 220, 221, 222, 223, 227, 228, 229, 230, 231, 232, 235, 236, 237, 239, 240, 243, 247, 248, 249, 250, 251, 255, 256, 257, 258, 260, 262, 263, 265, 266, 268, 270, 271, 272, 274, 276, 278, 279, 280, 282, 283, 284, 285, 288, 290, 292, 293, 295, 297, 298, 300, 301, 302, 304, 305, 312, 313, 314, 316, 320, 322, 324, 326, 327, 328, 330, 332, 333, 334, 336, 338, 339, 342, 343, 346, 347, 349, 352, 353, 354, 355, 356, 360, 361, 362, 363, 365, 367, 368, 370, 371, 373, 375, 376, 377, 379, 381, 383, 384, 385, 387, 388, 389, 390, 393, 395, 397, 398, 400, 403, 404, 405, 407, 408, 411, 415, 416, 417, 419, 423, 425, 427, 429, 430, 431, 433, 435, 440, 441, 444, 445, 446, 449, 450, 452, 455, 456, 457, 458, 459, 460, 464, 465, 466, 467, 469, 471, 472, 474, 475, 477, 479, 480, 481, 483, 485, 487, 488, 489, 491, 492, 493, 494, 497, 499, 502, 503, 506, 508, 510, 511, 512, 514, 515, 518, 522, 523, 524, 526, 530, 532, 534, 536, 537, 538, 540, 542, 544, 545, 554, 557], "affect": [1, 25, 31, 46, 47, 53, 64, 67, 71, 78, 79, 80, 86, 94, 108, 109, 127, 176, 177, 182, 184, 191, 194, 198, 199, 204, 206, 214, 216, 222, 223, 228, 232, 242, 244, 250, 251, 256, 278, 279, 295, 298, 341, 343, 347, 353, 354, 361, 383, 384, 400, 443, 446, 450, 457, 458, 459, 465, 473, 487, 488, 506, 554, 555, 557, 559, 560, 561], "write": [1, 4, 5, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 49, 50, 51, 53, 57, 58, 59, 65, 70, 71, 77, 78, 79, 80, 81, 87, 95, 98, 110, 114, 115, 127, 130, 131, 139, 144, 145, 148, 149, 151, 158, 163, 165, 166, 175, 176, 177, 183, 184, 185, 186, 197, 198, 199, 205, 206, 208, 209, 219, 221, 222, 223, 229, 232, 235, 236, 247, 249, 250, 251, 257, 280, 284, 295, 298, 299, 300, 313, 314, 317, 318, 332, 333, 334, 346, 347, 353, 354, 355, 356, 362, 385, 389, 400, 402, 403, 411, 416, 417, 420, 421, 435, 444, 449, 450, 456, 457, 458, 459, 460, 466, 474, 477, 489, 493, 494, 506, 509, 510, 518, 523, 524, 527, 528, 
136, 137, 139, 140, 145, 147, 151, 155, 158, 161, 162, 163, 167, 174, 175, 176, 182, 183, 184, 185, 186, 188, 196, 197, 198, 204, 205, 206, 207, 208, 209, 211, 219, 220, 221, 222, 228, 229, 230, 231, 232, 234, 235, 236, 238, 247, 248, 249, 250, 251, 256, 257, 264, 265, 268, 269, 270, 272, 274, 280, 282, 284, 285, 289, 295, 297, 298, 299, 300, 314, 316, 327, 331, 332, 333, 334, 337, 346, 347, 349, 352, 353, 354, 355, 356, 361, 362, 367, 369, 370, 373, 374, 375, 377, 379, 385, 387, 389, 390, 394, 400, 402, 403, 411, 417, 419, 427, 430, 434, 435, 449, 450, 452, 455, 456, 457, 458, 459, 460, 465, 466, 467, 468, 470, 471, 472, 473, 474, 477, 478, 479, 481, 483, 486, 487, 488, 489, 491, 493, 494, 496, 497, 498, 501, 505, 506, 509, 510, 511, 515, 516, 518, 519, 524, 526, 530, 534, 537, 540, 541, 542, 546, 554, 559, 560, 562], "28": [2, 34, 74, 79, 86, 133, 168, 171, 177, 180, 182, 184, 185, 186, 187, 189, 193, 198, 199, 202, 204, 208, 210, 212, 219, 223, 226, 228, 235, 237, 251, 256, 350, 354, 361, 453, 458, 465, 553], "zpool": [2, 4, 5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 57, 66, 67, 71, 74, 75, 76, 77, 78, 80, 81, 82, 83, 86, 87, 89, 90, 92, 101, 102, 108, 109, 110, 114, 120, 123, 127, 131, 164, 170, 171, 172, 173, 175, 176, 178, 179, 182, 183, 184, 185, 192, 193, 194, 195, 197, 198, 200, 201, 204, 205, 206, 208, 215, 216, 218, 221, 222, 224, 225, 228, 229, 230, 232, 235, 243, 244, 246, 249, 250, 252, 253, 256, 257, 259, 260, 262, 271, 272, 278, 279, 280, 284, 290, 292, 295, 297, 298, 300, 333, 334, 342, 343, 347, 350, 351, 352, 353, 355, 356, 357, 358, 361, 362, 364, 365, 367, 376, 377, 383, 384, 385, 389, 395, 397, 400, 403, 436, 445, 446, 450, 453, 454, 455, 456, 457, 459, 460, 461, 462, 465, 466, 468, 469, 471, 480, 481, 487, 488, 489, 493, 499, 502, 506, 510, 543, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "7": [2, 5, 8, 9, 12, 32, 46, 47, 48, 53, 54, 58, 59, 60, 66, 67, 68, 71, 73, 84, 86, 89, 90, 91, 92, 95, 96, 98, 99, 100, 101, 102, 103, 106, 108, 109, 110, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 126, 127, 132, 133, 136, 139, 141, 143, 145, 147, 151, 153, 156, 157, 158, 159, 160, 161, 163, 167, 172, 174, 175, 182, 184, 186, 194, 196, 197, 198, 204, 206, 209, 216, 217, 220, 221, 222, 228, 230, 232, 236, 244, 245, 248, 249, 250, 256, 272, 295, 298, 332, 342, 343, 344, 347, 349, 359, 361, 364, 365, 366, 367, 370, 371, 373, 374, 375, 376, 377, 378, 381, 383, 384, 385, 389, 390, 391, 392, 394, 395, 396, 397, 398, 400, 404, 405, 408, 411, 413, 415, 419, 423, 425, 428, 429, 430, 431, 432, 433, 435, 439, 445, 446, 447, 450, 452, 463, 465, 468, 469, 470, 471, 474, 475, 477, 478, 479, 480, 481, 482, 485, 487, 488, 489, 493, 494, 495, 496, 498, 499, 500, 501, 502, 503, 505, 506, 511, 512, 515, 518, 520, 522, 524, 526, 530, 532, 535, 536, 537, 538, 539, 540, 542, 546, 559, 560], "man": [2, 4, 10, 11, 18, 47, 53, 58, 59, 170, 171, 175, 180, 181, 184, 185, 192, 193, 197, 202, 203, 206, 208, 209, 215, 221, 226, 227, 232, 235, 236, 243, 249, 254, 295, 300, 308, 332, 334, 557], "page": [2, 4, 5, 7, 8, 10, 11, 12, 14, 16, 18, 19, 20, 22, 24, 25, 28, 29, 30, 31, 32, 33, 36, 40, 41, 42, 43, 48, 51, 53, 58, 59, 61, 70, 71, 81, 87, 104, 132, 133, 141, 143, 147, 153, 156, 157, 163, 170, 171, 172, 175, 180, 181, 183, 184, 185, 191, 192, 193, 194, 197, 202, 203, 205, 206, 208, 209, 214, 215, 216, 219, 221, 222, 226, 227, 229, 231, 232, 235, 236, 239, 242, 243, 244, 247, 249, 250, 254, 257, 274, 295, 300, 301, 302, 305, 308, 
310, 312, 316, 322, 325, 326, 332, 334, 338, 346, 347, 356, 362, 379, 404, 405, 413, 415, 419, 425, 428, 429, 435, 440, 449, 450, 460, 466, 483, 511, 512, 520, 522, 526, 532, 535, 536, 542, 557], "5": [2, 3, 14, 16, 21, 25, 27, 28, 31, 32, 46, 47, 48, 50, 53, 54, 56, 67, 70, 71, 77, 78, 80, 82, 84, 85, 88, 95, 98, 102, 104, 115, 116, 118, 127, 132, 133, 136, 139, 145, 155, 160, 163, 167, 170, 172, 178, 180, 181, 184, 186, 187, 188, 192, 194, 200, 202, 203, 206, 207, 209, 210, 211, 215, 216, 217, 224, 226, 227, 230, 231, 232, 234, 236, 237, 238, 243, 244, 245, 252, 254, 255, 259, 272, 274, 278, 279, 280, 284, 286, 295, 297, 298, 299, 305, 308, 310, 320, 325, 330, 332, 333, 334, 337, 343, 346, 347, 352, 353, 355, 357, 359, 360, 377, 379, 391, 400, 411, 427, 435, 439, 446, 449, 450, 456, 457, 459, 461, 463, 464, 467, 474, 477, 481, 483, 494, 495, 497, 506, 511, 515, 518, 524, 534, 539, 542, 546], "matrix": 2, "flagread": 2, "onlycompatibleopenzf": 2, "linux": [2, 4, 8, 9, 10, 11, 13, 18, 19, 20, 22, 23, 25, 28, 32, 33, 34, 35, 36, 37, 38, 39, 41, 42, 44, 46, 47, 52, 56, 57, 58, 59, 65, 67, 70, 71, 73, 77, 78, 79, 81, 82, 87, 88, 103, 108, 109, 118, 121, 127, 135, 143, 148, 149, 163, 170, 171, 172, 174, 175, 178, 180, 183, 184, 185, 191, 192, 193, 194, 196, 197, 198, 199, 200, 202, 204, 205, 206, 207, 208, 209, 214, 215, 216, 217, 219, 220, 223, 224, 226, 228, 229, 232, 234, 235, 236, 244, 247, 248, 250, 251, 252, 254, 257, 258, 273, 278, 279, 288, 291, 295, 298, 309, 312, 332, 334, 343, 346, 347, 349, 353, 354, 356, 357, 362, 363, 378, 383, 384, 393, 396, 400, 415, 435, 444, 446, 449, 450, 452, 456, 457, 458, 460, 461, 466, 467, 482, 487, 488, 497, 500, 506, 522, 542], "freebsd": [2, 8, 39, 46, 47, 48, 53, 57, 58, 59, 71, 78, 79, 99, 119, 143, 172, 250, 251, 269, 289, 298, 312, 347, 353, 354, 374, 394, 415, 450, 457, 458, 478, 498, 522], "13": [2, 5, 27, 32, 33, 46, 48, 53, 78, 90, 101, 120, 127, 145, 163, 184, 186, 206, 209, 232, 236, 260, 271, 290, 295, 298, 332, 353, 365, 376, 395, 400, 435, 457, 469, 480, 499, 506, 524, 542, 559, 560], "pre": [2, 8, 9, 27, 47, 65, 71, 74, 79, 199, 222, 223, 250, 251, 347, 350, 354, 444, 450, 453, 458], "openzfsillumosjoyentnetbsdnexentaomnio": 2, "ceopenzf": 2, "x": [2, 3, 8, 9, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 57, 61, 62, 65, 67, 71, 77, 78, 80, 86, 108, 109, 110, 114, 143, 158, 168, 171, 176, 182, 184, 186, 189, 193, 198, 204, 206, 209, 212, 222, 228, 232, 236, 239, 240, 250, 256, 278, 279, 297, 298, 312, 327, 333, 338, 339, 347, 352, 353, 355, 361, 383, 384, 415, 430, 440, 441, 444, 450, 456, 457, 459, 465, 487, 488, 489, 493, 522, 537, 548, 549, 550, 551, 553, 554, 555, 556, 557, 559, 560, 561], "0": [2, 5, 8, 9, 11, 18, 19, 20, 21, 22, 25, 27, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 44, 45, 46, 47, 48, 49, 53, 54, 56, 58, 59, 60, 62, 64, 65, 66, 67, 70, 71, 73, 78, 79, 80, 81, 86, 87, 95, 98, 104, 105, 115, 127, 129, 130, 131, 133, 139, 145, 151, 158, 162, 163, 165, 166, 168, 170, 172, 174, 175, 176, 182, 183, 184, 185, 186, 189, 191, 192, 194, 196, 197, 198, 204, 205, 206, 208, 209, 212, 214, 215, 216, 219, 220, 221, 222, 228, 229, 231, 232, 235, 236, 240, 242, 243, 244, 247, 248, 249, 250, 256, 257, 274, 275, 295, 298, 299, 300, 305, 331, 332, 333, 334, 339, 341, 342, 343, 346, 347, 349, 353, 355, 356, 361, 379, 380, 400, 402, 403, 411, 434, 435, 441, 443, 444, 445, 446, 449, 450, 452, 457, 458, 459, 460, 465, 466, 474, 477, 483, 484, 494, 506, 508, 509, 510, 518, 524, 530, 537, 541, 542, 544, 545, 548, 549, 550, 
551, 553, 554, 555, 556, 557, 559, 560, 561], "6": [2, 5, 21, 25, 32, 46, 47, 48, 53, 54, 56, 58, 59, 60, 71, 73, 77, 78, 80, 95, 98, 115, 127, 133, 136, 139, 163, 174, 175, 184, 186, 196, 197, 206, 209, 220, 221, 222, 232, 236, 248, 249, 250, 295, 298, 332, 347, 349, 353, 400, 411, 435, 450, 452, 456, 457, 459, 474, 477, 494, 506, 515, 518, 542, 557], "110": [2, 71, 250, 347, 450], "130": 2, "8": [2, 4, 5, 8, 9, 11, 14, 16, 25, 28, 31, 32, 35, 37, 47, 48, 53, 54, 58, 59, 60, 64, 66, 67, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 167, 170, 171, 174, 175, 176, 177, 188, 191, 192, 193, 194, 196, 197, 198, 199, 211, 214, 215, 216, 219, 220, 221, 222, 223, 242, 243, 244, 247, 248, 249, 250, 251, 337, 341, 342, 343, 346, 347, 349, 350, 352, 353, 354, 355, 356, 439, 443, 445, 446, 449, 450, 452, 453, 455, 456, 457, 458, 459, 460, 546, 559, 560], "62": 2, "72": [2, 10, 12, 58, 59, 562], "142": 2, "2": [2, 3, 5, 8, 9, 11, 12, 25, 26, 27, 31, 32, 44, 46, 47, 48, 53, 58, 59, 60, 61, 64, 65, 67, 70, 71, 73, 78, 79, 80, 81, 86, 88, 91, 92, 93, 95, 96, 98, 99, 104, 106, 108, 109, 110, 112, 114, 115, 117, 118, 119, 124, 127, 130, 131, 132, 133, 136, 139, 145, 147, 151, 152, 157, 158, 161, 162, 163, 168, 171, 172, 174, 175, 176, 182, 183, 184, 186, 189, 193, 194, 196, 197, 198, 204, 205, 206, 207, 208, 209, 212, 216, 219, 220, 221, 222, 228, 229, 230, 231, 232, 234, 235, 236, 239, 240, 244, 247, 248, 249, 250, 256, 257, 266, 269, 272, 274, 276, 280, 284, 289, 293, 295, 298, 299, 300, 314, 316, 327, 331, 332, 333, 334, 338, 341, 343, 346, 347, 349, 353, 355, 356, 361, 370, 371, 373, 374, 379, 381, 390, 394, 398, 400, 402, 403, 408, 411, 417, 419, 424, 429, 430, 434, 435, 440, 443, 444, 446, 449, 450, 452, 457, 458, 459, 460, 465, 467, 470, 471, 472, 474, 475, 477, 478, 483, 485, 487, 488, 489, 491, 493, 494, 496, 497, 498, 503, 506, 509, 510, 511, 515, 518, 524, 526, 530, 531, 536, 537, 540, 541, 542, 553, 554, 555, 562], "2master12": 2, "012": 2, "0mastermaster9": 2, "3main4": 2, "fpmasterr151046r151048master2": 2, "02": [2, 65, 184, 444], "2main": 2, "zfsonlinux": [2, 9, 10, 12, 17, 18, 19, 20, 22, 25, 26, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 55, 79, 199, 223, 251, 354, 458], "allocation_classesyesnonoyesyesyesyesyesnoyesyesyesnonononoyesyesyesyesyesyesy": 2, "com": [2, 7, 8, 9, 10, 12, 14, 16, 18, 19, 20, 22, 25, 27, 28, 29, 31, 33, 34, 35, 36, 37, 38, 41, 42, 46, 48, 53, 66, 79, 95, 98, 115, 127, 165, 166, 170, 171, 177, 178, 180, 184, 185, 191, 192, 193, 199, 200, 202, 206, 208, 214, 215, 223, 224, 226, 232, 235, 242, 243, 251, 252, 254, 295, 300, 342, 354, 400, 445, 458, 474, 477, 494, 506, 544, 545], "delphix": [2, 12, 66, 79, 170, 177, 192, 199, 215, 223, 243, 251, 342, 354, 445, 458], "async_destroyyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "blake3nonononononoyesyesnononononononononononoyesyesyesy": 2, "fudosecur": [2, 79, 458], "block_cloningyesnononononoyesyesnonononononononononononoyesyesy": 2, "datto": [2, 79, 223, 251, 354, 458], "bookmark_v2nononoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "bookmark_writtennonononoyesyesyesyesnononononononononononoyesyesyesy": 2, "bookmarksyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "nexenta": 2, "class_of_storageyesnononononononononononononoyesyesnonononononono": 2, "device_rebuildyesnononoyesyesyesyesnononononononononononoyesyesyesy": 2, "device_removalnononoyesyesyesyesyesyesyesyesyesnononoyesyesyesyesyesyesyesy": 2, "draidnononononoyesyesyesnononononononononononoyesyesyesy": 2, 
"edonrnoyes1yes1yes1yes1yes1yes1yesnonoyesyesnononoyesyesyesyesyesyesyesy": 2, "embedded_datanoyesyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesy": 2, "empty_bpobjyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "enabled_txgyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "encryptionnononoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "extensible_datasetnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "joyent": [2, 79, 177, 199, 223, 251, 354, 458], "filesystem_limitsyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "head_errlognonononononoyesyesnononononononononononoyesyesyesy": 2, "hole_birthnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "open": [2, 7, 8, 10, 12, 14, 16, 18, 19, 20, 22, 25, 28, 29, 31, 33, 34, 35, 36, 37, 41, 42, 44, 46, 47, 48, 53, 55, 57, 66, 71, 78, 79, 80, 86, 90, 101, 120, 125, 132, 139, 143, 145, 147, 157, 158, 170, 175, 177, 184, 186, 192, 197, 198, 199, 206, 209, 215, 221, 222, 223, 232, 236, 243, 249, 250, 251, 260, 271, 290, 294, 298, 301, 312, 314, 316, 326, 327, 333, 342, 347, 353, 354, 355, 365, 376, 395, 399, 404, 411, 415, 417, 419, 429, 430, 445, 450, 457, 458, 459, 465, 469, 480, 499, 504, 511, 518, 522, 524, 526, 536, 537, 547, 548, 549, 550, 551, 553, 557, 561], "large_blocksnoyesyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesy": 2, "large_dnodenonoyesyesyesyesyesyesnoyesyesyesnonononoyesyesyesyesyesyesy": 2, "livelistyesnononoyesyesyesyesnononononononononononoyesyesyesy": 2, "log_spacemapyesnononoyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "lz4_compressnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "meta_devicesyesnononononononononononononoyesyesnonononononono": 2, "multi_vdev_crash_dumpnonoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "obsolete_countsyesnonoyesyesyesyesyesyesyesyesyesnononoyesyesyesyesyesyesyesy": 2, "project_quotayesnonoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "raidz_expansionnononononononoyesnononononononononononononoyesy": 2, "redacted_datasetsnonononoyesyesyesyesnononononononononononoyesyesyesy": 2, "redaction_bookmarksnonononoyesyesyesyesnononononononononononoyesyesyesy": 2, "redaction_list_spillnononononononoyesnonononononononononononoyesyesy": 2, "resilver_deferyesnonoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "sha512nonoyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesy": 2, "skeinnonoyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesy": 2, "spacemap_histogramyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "spacemap_v2yesnonoyesyesyesyesyesyesyesyesyesnonononoyesyesyesyesyesyesy": 2, "userobj_accountingyesnoyesyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "vdev_propertiesyesnononononononononononononoyesyesnonononononono": 2, "klarasystem": [2, 79, 458], "vdev_zaps_v2nonononononoyesyesnonononononononononononoyesyesy": 2, "wbcnononononononononononononononoyesnonononononono": 2, "zilsaxattryesnononononoyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "zpool_checkpointyesnonoyesyesyesyesyesyesyesyesyesnonononoyesyesyesyesyesyesy": 2, "zstd_compressnonononoyesyesyesyesnononononononononononoyesyesyesy": 2, "up": [2, 3, 5, 8, 10, 12, 14, 16, 19, 20, 23, 25, 27, 28, 31, 34, 35, 36, 37, 41, 42, 47, 48, 49, 50, 54, 57, 67, 70, 71, 74, 76, 78, 80, 81, 86, 92, 105, 110, 114, 123, 132, 135, 137, 145, 147, 148, 149, 155, 157, 158, 161, 163, 172, 
176, 184, 186, 194, 198, 204, 206, 209, 216, 219, 222, 228, 232, 236, 244, 247, 250, 256, 262, 280, 284, 292, 298, 301, 306, 314, 316, 324, 326, 327, 330, 332, 334, 343, 346, 347, 350, 353, 355, 356, 361, 367, 380, 385, 389, 397, 404, 409, 417, 419, 427, 429, 430, 433, 435, 446, 449, 450, 453, 455, 457, 459, 460, 465, 471, 484, 489, 493, 502, 511, 516, 524, 526, 534, 536, 537, 540, 542, 557], "releas": [2, 5, 8, 12, 23, 25, 26, 27, 31, 37, 39, 41, 47, 48, 53, 55, 57, 58, 59, 70, 71, 74, 78, 81, 83, 87, 88, 97, 118, 127, 136, 183, 184, 205, 206, 219, 229, 232, 236, 247, 250, 253, 257, 258, 267, 288, 295, 298, 346, 347, 350, 353, 356, 358, 362, 363, 372, 393, 400, 408, 449, 450, 453, 457, 460, 462, 466, 467, 476, 497, 506, 515], "tabl": [2, 14, 16, 25, 28, 31, 43, 47, 71, 77, 80, 86, 90, 101, 104, 120, 151, 176, 180, 182, 198, 202, 204, 222, 226, 228, 231, 232, 236, 250, 254, 256, 260, 271, 274, 290, 320, 333, 347, 355, 361, 365, 376, 379, 395, 423, 450, 456, 459, 465, 469, 480, 483, 499, 530], "gener": [2, 5, 8, 9, 11, 12, 14, 16, 18, 19, 20, 22, 23, 25, 28, 29, 31, 33, 34, 35, 36, 37, 41, 42, 44, 46, 47, 50, 51, 57, 61, 62, 64, 65, 66, 67, 68, 70, 71, 73, 74, 77, 78, 79, 80, 81, 83, 85, 86, 87, 88, 104, 108, 109, 110, 114, 118, 123, 127, 130, 139, 145, 150, 163, 165, 166, 168, 174, 175, 176, 181, 182, 183, 184, 186, 189, 196, 197, 198, 199, 203, 204, 205, 206, 207, 209, 212, 217, 219, 220, 221, 222, 223, 225, 227, 228, 229, 231, 232, 234, 236, 239, 240, 242, 243, 244, 245, 247, 248, 249, 250, 251, 253, 255, 256, 257, 258, 274, 278, 279, 280, 284, 288, 292, 295, 297, 298, 299, 308, 314, 319, 332, 334, 335, 338, 339, 341, 342, 343, 344, 346, 347, 349, 350, 352, 353, 354, 355, 356, 358, 360, 361, 362, 363, 379, 383, 384, 385, 389, 393, 397, 400, 402, 411, 417, 422, 435, 437, 438, 440, 441, 443, 444, 445, 446, 447, 449, 450, 452, 453, 456, 457, 458, 459, 460, 462, 464, 465, 466, 467, 483, 487, 488, 489, 493, 497, 502, 506, 509, 518, 524, 529, 542, 544, 545], "pars": [2, 53, 65, 74, 78, 85, 86, 95, 98, 115, 181, 182, 184, 203, 204, 206, 227, 228, 232, 255, 256, 265, 268, 285, 350, 353, 360, 361, 370, 373, 390, 444, 453, 457, 464, 465, 474, 477, 494], "manpag": [2, 184, 559, 560], "entir": [2, 5, 8, 11, 12, 35, 37, 46, 47, 48, 53, 70, 71, 78, 79, 80, 88, 90, 101, 102, 104, 108, 109, 118, 120, 139, 176, 177, 184, 198, 199, 206, 209, 219, 222, 223, 231, 232, 236, 247, 250, 251, 258, 260, 271, 274, 278, 279, 288, 290, 298, 308, 333, 346, 347, 353, 354, 355, 363, 365, 376, 377, 379, 383, 384, 393, 395, 411, 449, 450, 457, 458, 459, 467, 469, 480, 481, 483, 487, 488, 497, 499, 518, 552, 554], "good": [2, 9, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 48, 53, 56, 78, 79, 80, 139, 186, 199, 209, 221, 223, 236, 249, 251, 298, 333, 353, 354, 355, 411, 457, 458, 459, 518], "accur": [2, 47, 71, 250, 347, 450], "document": [2, 10, 13, 18, 19, 20, 22, 23, 27, 29, 33, 34, 35, 36, 37, 39, 41, 42, 44, 46, 47, 48, 57, 58, 62, 77, 79, 86, 104, 110, 114, 127, 160, 164, 168, 172, 175, 177, 182, 189, 191, 194, 197, 199, 204, 206, 212, 214, 216, 221, 223, 228, 231, 232, 236, 240, 242, 244, 249, 251, 256, 274, 280, 284, 329, 339, 354, 361, 379, 385, 389, 400, 432, 436, 441, 456, 458, 465, 483, 489, 493, 506, 539, 543, 547, 555, 558, 559, 560], "last": [2, 10, 12, 32, 34, 35, 36, 37, 47, 71, 74, 78, 79, 81, 93, 108, 109, 110, 114, 132, 139, 143, 145, 147, 155, 157, 158, 161, 171, 175, 176, 177, 184, 186, 193, 197, 198, 199, 206, 209, 221, 222, 223, 232, 236, 249, 250, 251, 263, 278, 279, 280, 284, 301, 312, 314, 
316, 324, 326, 327, 330, 334, 347, 350, 354, 356, 368, 383, 384, 385, 389, 404, 411, 415, 417, 419, 427, 429, 430, 433, 450, 453, 457, 458, 460, 472, 487, 488, 489, 493, 511, 518, 522, 524, 526, 534, 536, 537, 540, 558], "updat": [2, 4, 9, 10, 11, 12, 14, 16, 18, 19, 20, 22, 23, 25, 26, 28, 31, 32, 33, 34, 35, 36, 37, 38, 41, 42, 43, 47, 51, 53, 71, 78, 80, 81, 84, 87, 95, 98, 108, 109, 115, 129, 143, 159, 163, 180, 183, 184, 186, 198, 202, 205, 206, 209, 222, 226, 229, 230, 232, 236, 239, 250, 254, 257, 272, 278, 279, 298, 312, 328, 332, 333, 334, 347, 353, 355, 356, 359, 362, 383, 384, 415, 431, 435, 450, 457, 459, 460, 463, 466, 474, 477, 487, 488, 494, 508, 522, 538, 542, 557], "2023": [2, 16, 32, 71, 74, 77, 78, 80, 81, 86, 108, 109, 110, 114, 129, 133, 139, 155, 347, 350, 385, 389, 450, 453, 456, 457, 459, 460, 465, 487, 488, 489, 493, 508, 518, 534], "12": [2, 3, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 33, 34, 35, 36, 37, 46, 48, 53, 71, 81, 89, 108, 109, 127, 163, 184, 186, 206, 209, 222, 232, 236, 250, 295, 332, 334, 347, 356, 385, 389, 400, 435, 450, 460, 468, 487, 488, 506, 542, 553], "25t19": 2, "17": [2, 5, 32, 46, 71, 116, 127, 128, 163, 184, 206, 232, 295, 296, 347, 391, 400, 401, 450, 495, 506, 507, 542], "15": [2, 41, 47, 71, 78, 86, 95, 98, 115, 127, 163, 182, 184, 186, 204, 206, 209, 219, 222, 228, 232, 236, 247, 256, 295, 298, 302, 322, 327, 332, 353, 361, 400, 405, 435, 450, 457, 465, 474, 477, 494, 506, 512, 542, 558], "361178z": 2, "compatibility_matrix": 2, "py": [2, 8, 27, 47], "tl": 3, "dr": 3, "effect": [3, 5, 46, 47, 48, 49, 50, 53, 70, 71, 76, 78, 79, 80, 81, 86, 88, 90, 93, 101, 104, 108, 109, 110, 112, 114, 118, 120, 127, 163, 176, 177, 182, 184, 186, 198, 199, 204, 206, 209, 219, 222, 223, 228, 231, 232, 236, 247, 250, 251, 256, 258, 260, 263, 271, 274, 278, 279, 280, 282, 284, 288, 290, 295, 298, 334, 346, 347, 353, 354, 355, 356, 361, 363, 365, 368, 376, 379, 383, 384, 385, 387, 389, 393, 395, 400, 449, 450, 455, 457, 458, 459, 460, 465, 467, 469, 472, 480, 483, 487, 488, 489, 491, 493, 497, 499, 506], "larg": [3, 5, 8, 9, 12, 34, 36, 46, 47, 48, 67, 70, 71, 78, 79, 80, 93, 104, 110, 114, 125, 155, 171, 172, 176, 177, 184, 193, 194, 198, 199, 206, 216, 219, 222, 223, 231, 232, 244, 247, 250, 251, 263, 274, 280, 284, 294, 298, 343, 346, 347, 353, 354, 355, 368, 379, 385, 389, 399, 427, 446, 449, 450, 457, 458, 459, 472, 483, 489, 493, 504, 534], "size": [3, 5, 7, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 53, 61, 64, 67, 70, 71, 76, 78, 79, 80, 81, 86, 90, 92, 101, 104, 110, 114, 120, 127, 132, 136, 139, 145, 147, 151, 153, 158, 163, 164, 165, 166, 171, 172, 175, 176, 177, 182, 184, 186, 191, 193, 194, 197, 198, 199, 204, 206, 209, 214, 216, 219, 221, 222, 223, 228, 231, 232, 236, 239, 242, 244, 247, 249, 250, 251, 256, 260, 262, 271, 274, 280, 284, 290, 298, 305, 314, 316, 322, 327, 332, 333, 334, 338, 341, 343, 346, 347, 353, 354, 355, 356, 361, 365, 367, 376, 379, 385, 389, 395, 400, 408, 411, 417, 419, 423, 425, 430, 435, 436, 440, 443, 446, 449, 450, 455, 457, 458, 459, 460, 465, 469, 471, 480, 483, 489, 493, 499, 506, 511, 515, 518, 524, 526, 530, 532, 537, 542, 543, 544, 545], "sequenti": [3, 5, 47, 51, 71, 79, 80, 133, 153, 155, 198, 222, 250, 251, 302, 322, 347, 354, 355, 405, 425, 427, 450, 458, 459, 512, 532, 534], "workload": [3, 5, 18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 51, 53, 58, 59, 70, 71, 78, 79, 80, 158, 177, 184, 186, 199, 206, 209, 219, 223, 232, 236, 247, 251, 298, 327, 333, 346, 347, 353, 354, 355, 
430, 449, 450, 457, 458, 459, 537], "variat": [3, 71, 186, 209, 222, 236, 250, 333, 347, 355, 450], "better": [3, 9, 11, 46, 47, 48, 53, 71, 78, 79, 80, 176, 177, 184, 186, 198, 199, 206, 209, 222, 223, 232, 236, 250, 251, 298, 333, 347, 353, 354, 355, 450, 457, 458, 459], "pariti": [3, 4, 5, 47, 48, 64, 67, 71, 76, 78, 79, 80, 81, 133, 172, 186, 191, 194, 198, 206, 209, 214, 216, 222, 232, 236, 242, 244, 250, 298, 333, 334, 341, 343, 347, 353, 354, 355, 356, 443, 446, 450, 455, 457, 458, 459, 460], "elimin": [3, 5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 79, 177, 186, 199, 209, 223, 236, 251, 333, 354, 355, 458], "hole": [3, 46, 47, 71, 78, 79, 80, 81, 90, 101, 120, 176, 177, 186, 198, 199, 209, 222, 223, 232, 236, 250, 251, 260, 271, 290, 298, 333, 334, 347, 353, 354, 355, 356, 365, 376, 395, 450, 457, 458, 459, 460, 469, 480, 499], "inconsist": [3, 46, 47, 53, 71, 80, 86, 143, 176, 180, 182, 186, 198, 202, 204, 209, 222, 226, 228, 236, 250, 254, 256, 312, 333, 347, 355, 361, 415, 450, 459, 465, 522], "power": [3, 14, 16, 25, 28, 31, 35, 37, 47, 48, 51, 53, 71, 76, 77, 78, 79, 80, 81, 135, 148, 149, 158, 163, 177, 184, 186, 193, 199, 206, 209, 222, 223, 232, 236, 250, 251, 297, 298, 333, 334, 347, 352, 353, 354, 355, 356, 450, 455, 456, 457, 458, 459, 460, 561], "loss": [3, 46, 47, 48, 53, 71, 80, 81, 108, 109, 133, 184, 186, 209, 222, 236, 250, 333, 334, 347, 355, 356, 450, 459, 460, 487, 488, 553, 561], "stripe": [3, 5, 46, 47, 71, 78, 80, 176, 186, 198, 206, 209, 222, 232, 236, 250, 298, 333, 347, 353, 355, 450, 457, 459], "within": [3, 46, 47, 62, 65, 68, 71, 77, 78, 79, 80, 81, 86, 87, 88, 97, 99, 108, 109, 111, 112, 118, 119, 122, 126, 127, 128, 132, 135, 136, 137, 140, 143, 145, 147, 163, 168, 176, 177, 182, 183, 184, 186, 189, 198, 199, 204, 205, 206, 209, 212, 222, 223, 228, 229, 232, 236, 240, 250, 251, 256, 257, 258, 267, 269, 278, 279, 281, 282, 288, 289, 295, 296, 297, 298, 304, 305, 306, 309, 312, 314, 316, 332, 333, 334, 339, 344, 347, 352, 353, 354, 355, 356, 361, 362, 363, 372, 374, 383, 384, 386, 387, 393, 394, 400, 401, 407, 408, 412, 415, 417, 419, 435, 441, 444, 447, 450, 456, 457, 458, 459, 460, 465, 466, 467, 476, 478, 487, 488, 490, 491, 497, 498, 501, 505, 506, 507, 511, 514, 515, 516, 519, 522, 524, 526, 542, 551, 553, 554, 561], "group": [3, 5, 8, 22, 25, 31, 33, 34, 35, 36, 37, 47, 48, 50, 53, 54, 65, 67, 71, 78, 79, 80, 81, 86, 87, 88, 96, 104, 106, 118, 124, 127, 131, 133, 136, 176, 177, 183, 184, 185, 186, 198, 199, 205, 206, 208, 209, 222, 223, 229, 231, 232, 235, 236, 250, 251, 257, 258, 266, 274, 276, 288, 293, 295, 298, 300, 305, 333, 334, 343, 347, 353, 354, 355, 356, 362, 363, 371, 379, 381, 393, 398, 400, 403, 408, 444, 446, 450, 457, 458, 459, 460, 465, 466, 467, 475, 483, 485, 497, 503, 506, 510, 515, 561], "A": [3, 4, 5, 7, 8, 12, 14, 16, 18, 19, 20, 22, 25, 28, 29, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 55, 62, 65, 70, 71, 73, 76, 77, 78, 79, 80, 81, 85, 86, 87, 88, 95, 98, 100, 104, 108, 109, 110, 114, 115, 118, 127, 131, 133, 140, 141, 143, 145, 151, 155, 156, 160, 163, 168, 174, 176, 177, 181, 182, 183, 184, 185, 186, 189, 193, 196, 198, 199, 203, 204, 205, 206, 208, 209, 212, 219, 220, 222, 223, 227, 228, 229, 231, 232, 235, 236, 240, 247, 248, 250, 251, 255, 256, 257, 262, 265, 268, 269, 270, 274, 278, 279, 280, 284, 285, 289, 295, 297, 298, 300, 309, 310, 312, 314, 320, 325, 329, 332, 333, 334, 339, 346, 347, 349, 352, 353, 354, 355, 356, 360, 361, 362, 370, 373, 375, 379, 383, 384, 385, 389, 390, 400, 403, 412, 413, 
415, 417, 423, 427, 428, 432, 435, 441, 444, 449, 450, 452, 455, 456, 457, 458, 459, 460, 464, 465, 466, 467, 474, 477, 479, 483, 487, 488, 489, 493, 494, 497, 506, 510, 519, 520, 522, 524, 530, 534, 535, 539, 542, 548, 549, 550, 551, 553, 554, 555, 561], "doubl": [3, 35, 37, 46, 48, 67, 71, 80, 86, 186, 194, 209, 216, 236, 244, 256, 333, 343, 347, 355, 361, 446, 450, 459, 465], "tripl": [3, 48, 80, 186, 209, 236, 333, 355, 459], "mean": [3, 5, 9, 12, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 53, 54, 62, 70, 71, 78, 79, 80, 81, 86, 90, 101, 104, 108, 109, 120, 131, 158, 168, 171, 176, 182, 184, 186, 189, 193, 198, 199, 204, 206, 208, 209, 212, 219, 223, 228, 231, 232, 235, 236, 240, 247, 250, 251, 256, 260, 271, 274, 278, 279, 290, 298, 300, 327, 333, 334, 339, 346, 347, 353, 354, 355, 356, 361, 365, 376, 379, 383, 384, 395, 403, 430, 441, 449, 450, 457, 458, 459, 460, 465, 469, 480, 483, 487, 488, 499, 510, 537, 548, 549], "sustain": [3, 46, 47, 80, 139, 175, 186, 197, 209, 221, 236, 249, 333, 355, 411, 459, 518], "one": [3, 4, 5, 8, 10, 12, 18, 19, 20, 21, 22, 25, 33, 34, 35, 36, 37, 41, 42, 44, 46, 47, 48, 49, 53, 65, 67, 70, 71, 73, 78, 79, 80, 81, 82, 86, 90, 91, 92, 93, 95, 96, 98, 100, 101, 102, 104, 105, 106, 107, 108, 109, 110, 112, 113, 114, 115, 117, 120, 124, 125, 127, 129, 131, 135, 139, 144, 147, 153, 154, 155, 160, 162, 163, 164, 168, 171, 172, 174, 176, 177, 184, 185, 186, 189, 193, 194, 196, 198, 199, 206, 208, 209, 212, 216, 219, 220, 221, 222, 223, 230, 231, 232, 235, 236, 240, 244, 247, 248, 249, 250, 251, 260, 262, 265, 266, 268, 269, 270, 271, 272, 274, 275, 276, 278, 279, 280, 283, 284, 285, 289, 290, 293, 294, 295, 298, 300, 304, 313, 322, 323, 324, 329, 331, 332, 333, 334, 343, 346, 347, 349, 353, 354, 355, 356, 357, 365, 367, 370, 371, 373, 375, 376, 377, 379, 380, 381, 383, 384, 385, 388, 389, 390, 395, 398, 399, 400, 403, 407, 411, 416, 425, 426, 427, 432, 434, 435, 436, 444, 446, 449, 450, 452, 457, 458, 459, 460, 461, 465, 469, 470, 471, 472, 474, 475, 477, 479, 480, 481, 483, 484, 485, 486, 487, 488, 489, 491, 492, 493, 494, 496, 499, 503, 504, 506, 508, 510, 514, 518, 523, 526, 532, 533, 534, 539, 541, 542, 543, 548, 549, 552], "two": [3, 8, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 47, 48, 49, 53, 71, 73, 76, 78, 79, 80, 81, 86, 92, 104, 110, 114, 127, 131, 132, 133, 136, 145, 155, 163, 174, 176, 184, 186, 193, 196, 198, 206, 208, 209, 220, 222, 223, 231, 232, 235, 236, 248, 250, 251, 256, 262, 264, 274, 280, 284, 295, 298, 300, 302, 305, 332, 333, 347, 349, 353, 354, 355, 361, 367, 379, 385, 389, 400, 403, 405, 408, 427, 435, 450, 452, 455, 457, 458, 459, 460, 465, 471, 483, 489, 493, 506, 510, 511, 512, 515, 524, 534, 542, 561], "three": [3, 8, 18, 19, 20, 34, 35, 36, 37, 40, 41, 42, 46, 47, 65, 71, 78, 79, 80, 133, 171, 177, 182, 184, 186, 193, 199, 206, 209, 223, 232, 236, 251, 298, 302, 333, 347, 353, 354, 355, 405, 444, 450, 457, 458, 459, 512], "failur": [3, 5, 12, 18, 19, 20, 22, 25, 33, 34, 35, 36, 41, 42, 47, 48, 66, 70, 71, 80, 81, 108, 109, 110, 114, 131, 133, 139, 155, 158, 170, 172, 175, 176, 185, 186, 192, 194, 197, 198, 206, 208, 209, 215, 216, 219, 221, 222, 232, 235, 236, 243, 244, 247, 249, 250, 278, 279, 280, 284, 300, 324, 327, 333, 334, 342, 346, 347, 355, 356, 383, 384, 385, 389, 403, 411, 427, 430, 445, 449, 450, 459, 460, 487, 488, 489, 493, 510, 518, 534, 537, 548, 549, 550, 551, 555, 562], "respect": [3, 21, 43, 46, 47, 48, 65, 71, 76, 78, 80, 95, 98, 108, 109, 110, 114, 115, 127, 158, 176, 184, 186, 
198, 206, 209, 222, 232, 236, 250, 265, 268, 280, 284, 285, 295, 298, 333, 347, 353, 355, 370, 373, 385, 389, 390, 400, 444, 450, 455, 457, 459, 474, 477, 487, 488, 489, 493, 494, 506, 537], "without": [3, 4, 8, 12, 14, 16, 18, 19, 20, 21, 22, 25, 27, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 65, 67, 70, 71, 77, 78, 79, 80, 81, 86, 90, 93, 94, 101, 104, 105, 110, 114, 120, 127, 132, 133, 136, 143, 145, 157, 159, 160, 163, 165, 166, 172, 177, 184, 186, 194, 198, 199, 206, 209, 216, 219, 222, 223, 231, 232, 236, 244, 247, 250, 251, 260, 263, 264, 271, 274, 275, 280, 284, 290, 295, 297, 298, 301, 305, 312, 314, 326, 328, 329, 332, 333, 334, 343, 346, 347, 352, 353, 354, 355, 356, 365, 368, 369, 376, 379, 380, 385, 389, 395, 400, 404, 408, 415, 417, 429, 431, 432, 435, 444, 446, 449, 450, 456, 457, 458, 459, 460, 465, 469, 472, 473, 480, 483, 484, 489, 493, 499, 506, 511, 515, 522, 524, 536, 538, 539, 542, 544, 545, 553], "lose": [3, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 80, 186, 209, 236, 333, 355, 459], "raidz1": [3, 80, 147, 163, 186, 209, 236, 332, 333, 355, 435, 459, 526, 542, 557], "vdev": [3, 6, 7, 8, 19, 20, 21, 34, 36, 41, 42, 46, 48, 50, 53, 67, 71, 73, 76, 78, 79, 80, 81, 85, 86, 131, 132, 133, 136, 139, 141, 143, 145, 147, 151, 152, 156, 157, 158, 160, 162, 163, 172, 174, 175, 176, 181, 182, 185, 186, 194, 196, 197, 198, 199, 203, 204, 206, 208, 209, 216, 220, 221, 222, 223, 227, 228, 232, 235, 236, 244, 248, 249, 250, 251, 255, 256, 298, 300, 301, 302, 305, 307, 312, 314, 316, 320, 321, 326, 327, 329, 332, 333, 334, 343, 347, 349, 353, 354, 355, 356, 360, 361, 403, 404, 405, 408, 411, 415, 417, 419, 423, 424, 429, 430, 432, 435, 446, 450, 452, 455, 457, 458, 459, 460, 464, 465, 510, 511, 512, 515, 518, 520, 522, 524, 526, 530, 531, 535, 536, 537, 539, 542], "type": [3, 4, 5, 7, 9, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 53, 56, 62, 71, 76, 77, 78, 79, 80, 81, 86, 87, 88, 91, 94, 95, 96, 98, 100, 104, 106, 108, 109, 115, 118, 124, 125, 127, 131, 139, 160, 162, 163, 165, 166, 168, 175, 182, 183, 184, 185, 186, 189, 197, 204, 205, 206, 208, 209, 212, 221, 222, 228, 229, 230, 231, 232, 235, 236, 240, 249, 250, 256, 257, 258, 261, 264, 265, 266, 268, 270, 272, 274, 276, 278, 279, 285, 288, 293, 294, 295, 297, 298, 300, 329, 331, 332, 333, 334, 339, 347, 352, 353, 354, 355, 356, 361, 362, 363, 366, 369, 370, 371, 373, 375, 379, 381, 383, 384, 390, 393, 398, 399, 400, 403, 411, 432, 434, 435, 441, 450, 455, 456, 457, 458, 459, 460, 465, 466, 467, 470, 473, 474, 475, 477, 479, 483, 485, 487, 488, 494, 497, 503, 504, 506, 510, 518, 539, 541, 542, 544, 545, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "specifi": [3, 5, 7, 8, 9, 18, 19, 20, 21, 22, 27, 32, 33, 34, 35, 36, 37, 41, 42, 45, 46, 47, 48, 50, 53, 61, 65, 66, 67, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 90, 91, 92, 93, 95, 96, 97, 98, 99, 100, 101, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 130, 131, 132, 133, 135, 136, 139, 141, 142, 143, 144, 145, 146, 147, 148, 149, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 170, 171, 172, 174, 175, 176, 177, 181, 182, 183, 184, 185, 186, 191, 192, 193, 194, 196, 197, 198, 199, 203, 204, 205, 206, 207, 208, 209, 214, 215, 216, 219, 220, 221, 222, 223, 227, 228, 229, 231, 232, 234, 235, 236, 239, 242, 243, 244, 247, 248, 249, 250, 251, 255, 256, 257, 258, 260, 261, 262, 263, 265, 
266, 267, 268, 269, 270, 271, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 297, 298, 299, 300, 301, 304, 305, 310, 311, 312, 313, 314, 315, 316, 317, 318, 320, 322, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 338, 342, 343, 346, 347, 349, 350, 352, 353, 354, 355, 356, 357, 359, 360, 361, 362, 363, 365, 366, 367, 368, 370, 371, 372, 373, 374, 375, 376, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 402, 403, 404, 407, 408, 411, 413, 414, 415, 416, 417, 418, 419, 420, 421, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 440, 444, 445, 446, 449, 450, 452, 453, 455, 456, 457, 458, 459, 460, 461, 463, 464, 465, 466, 467, 469, 470, 471, 472, 474, 475, 476, 477, 478, 479, 480, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 509, 510, 511, 514, 515, 518, 520, 521, 522, 523, 524, 525, 526, 527, 528, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 547], "raidz2": [3, 18, 19, 20, 22, 33, 34, 36, 41, 42, 80, 133, 186, 209, 236, 333, 355, 459], "raidz3": [3, 18, 19, 20, 22, 33, 34, 36, 41, 42, 80, 186, 209, 236, 333, 355, 459], "alia": [3, 47, 53, 61, 71, 73, 78, 79, 80, 85, 87, 104, 108, 109, 117, 165, 166, 174, 181, 183, 184, 186, 196, 203, 205, 206, 209, 220, 222, 227, 229, 232, 236, 239, 248, 250, 255, 257, 274, 278, 279, 287, 295, 298, 332, 333, 338, 347, 349, 353, 355, 360, 362, 379, 383, 384, 392, 437, 438, 440, 450, 452, 457, 458, 459, 464, 466, 483, 487, 488, 496, 544, 545], "n": [3, 5, 14, 16, 18, 19, 20, 22, 23, 25, 27, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 65, 76, 78, 80, 84, 86, 90, 92, 93, 96, 101, 104, 106, 108, 109, 110, 114, 120, 122, 124, 126, 132, 136, 143, 145, 151, 152, 157, 164, 171, 176, 180, 184, 186, 193, 198, 202, 206, 209, 222, 226, 231, 232, 236, 250, 254, 260, 262, 263, 266, 271, 274, 276, 278, 279, 280, 284, 290, 293, 298, 301, 305, 312, 314, 320, 321, 326, 333, 353, 355, 359, 361, 365, 367, 368, 371, 376, 379, 381, 383, 384, 385, 389, 395, 398, 404, 408, 415, 417, 423, 424, 429, 436, 444, 455, 457, 459, 463, 465, 469, 471, 472, 475, 480, 483, 485, 487, 488, 489, 493, 499, 501, 503, 505, 511, 515, 522, 524, 530, 531, 536, 543, 553], "p": [3, 5, 9, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 61, 62, 65, 67, 78, 79, 80, 85, 86, 87, 91, 92, 93, 94, 95, 96, 97, 98, 100, 105, 106, 110, 111, 112, 114, 115, 124, 131, 132, 141, 145, 147, 151, 155, 156, 157, 158, 162, 163, 168, 171, 172, 181, 182, 183, 184, 185, 186, 189, 193, 194, 203, 204, 205, 206, 208, 209, 212, 216, 223, 227, 228, 229, 232, 235, 236, 239, 240, 244, 251, 255, 256, 257, 261, 262, 263, 264, 265, 266, 268, 270, 275, 276, 280, 282, 284, 285, 293, 298, 300, 301, 310, 314, 316, 320, 324, 325, 326, 327, 331, 332, 333, 338, 339, 343, 353, 354, 355, 360, 361, 362, 366, 367, 368, 369, 370, 371, 373, 375, 380, 381, 385, 387, 389, 390, 398, 403, 404, 413, 417, 419, 423, 427, 428, 429, 430, 434, 435, 440, 441, 444, 446, 457, 458, 459, 464, 465, 466, 470, 471, 472, 473, 474, 475, 476, 477, 479, 484, 485, 489, 490, 491, 493, 494, 503, 510, 511, 520, 524, 526, 530, 534, 535, 536, 537, 541, 542], "hold": [3, 5, 44, 47, 65, 71, 78, 80, 83, 88, 93, 104, 108, 109, 110, 111, 114, 117, 118, 127, 184, 186, 206, 209, 231, 232, 236, 250, 253, 258, 263, 274, 278, 279, 280, 281, 284, 287, 288, 295, 298, 333, 
347, 353, 355, 358, 363, 368, 379, 383, 384, 385, 386, 389, 392, 393, 400, 444, 450, 457, 459, 462, 467, 472, 483, 487, 488, 489, 490, 493, 496, 497, 506], "approxim": [3, 5, 12, 47, 48, 71, 79, 80, 90, 101, 120, 158, 176, 177, 186, 198, 199, 209, 223, 236, 250, 251, 260, 271, 290, 327, 333, 347, 354, 355, 365, 376, 395, 430, 450, 458, 459, 469, 480, 499, 537, 553], "byte": [3, 46, 47, 48, 53, 61, 67, 71, 76, 78, 79, 80, 81, 86, 95, 98, 104, 105, 115, 139, 160, 162, 165, 166, 171, 172, 175, 176, 177, 182, 184, 186, 193, 194, 197, 198, 199, 204, 206, 209, 216, 221, 222, 223, 228, 231, 232, 236, 239, 244, 249, 250, 251, 256, 265, 268, 274, 285, 295, 298, 329, 331, 333, 338, 343, 347, 353, 354, 355, 361, 370, 373, 379, 380, 390, 411, 432, 434, 440, 446, 450, 455, 457, 458, 459, 460, 465, 474, 477, 483, 484, 494, 518, 539, 541, 544, 545], "withstand": [3, 80, 186, 209, 236, 333, 355, 459], "devic": [3, 5, 7, 8, 11, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 50, 57, 66, 68, 70, 71, 73, 76, 77, 78, 79, 80, 81, 85, 86, 88, 92, 94, 95, 98, 99, 102, 108, 109, 115, 118, 119, 122, 126, 127, 129, 131, 132, 133, 135, 136, 137, 138, 139, 140, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 157, 158, 160, 162, 163, 167, 174, 175, 176, 180, 181, 182, 184, 185, 186, 196, 197, 198, 199, 202, 203, 204, 206, 208, 209, 217, 220, 221, 222, 223, 226, 227, 228, 230, 232, 235, 236, 245, 248, 249, 250, 251, 254, 255, 256, 258, 262, 264, 269, 272, 278, 279, 288, 289, 295, 297, 298, 300, 301, 302, 304, 305, 306, 307, 308, 309, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 326, 327, 329, 331, 332, 333, 334, 344, 346, 347, 349, 352, 353, 354, 355, 356, 360, 361, 363, 367, 369, 374, 377, 383, 384, 393, 394, 400, 403, 404, 405, 407, 408, 409, 410, 411, 412, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 429, 430, 432, 434, 435, 439, 445, 447, 449, 450, 452, 455, 456, 457, 458, 459, 460, 464, 465, 467, 471, 473, 474, 477, 478, 481, 487, 488, 494, 497, 498, 501, 505, 506, 508, 510, 511, 512, 514, 515, 516, 517, 518, 519, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 536, 537, 539, 541, 542, 546, 547, 553, 554, 559, 560, 561, 562], "fail": [3, 5, 14, 16, 18, 19, 20, 21, 23, 25, 31, 33, 34, 36, 41, 42, 43, 46, 47, 49, 53, 64, 65, 70, 71, 78, 80, 85, 86, 92, 104, 108, 109, 110, 114, 127, 131, 132, 133, 136, 139, 143, 144, 153, 158, 160, 163, 165, 166, 171, 172, 175, 176, 181, 182, 184, 185, 186, 191, 193, 194, 197, 198, 203, 204, 206, 208, 209, 214, 216, 219, 221, 222, 227, 228, 231, 232, 235, 236, 242, 244, 247, 249, 250, 255, 256, 262, 274, 278, 279, 280, 284, 295, 298, 300, 301, 305, 308, 312, 313, 322, 327, 329, 332, 333, 341, 346, 347, 353, 355, 360, 361, 367, 379, 383, 384, 385, 389, 400, 403, 404, 408, 411, 415, 416, 425, 430, 432, 435, 443, 444, 449, 450, 457, 459, 464, 465, 471, 483, 487, 488, 489, 493, 506, 510, 511, 515, 518, 522, 523, 532, 537, 539, 542, 544, 545, 548, 561, 562], "minimum": [3, 5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 45, 47, 49, 50, 53, 67, 71, 78, 79, 80, 81, 131, 153, 172, 175, 176, 184, 186, 194, 197, 198, 199, 206, 208, 209, 216, 219, 222, 223, 232, 235, 236, 244, 247, 250, 251, 298, 300, 322, 333, 334, 343, 347, 353, 354, 355, 356, 403, 425, 446, 450, 457, 458, 459, 460, 510, 532], "more": [3, 4, 5, 7, 9, 10, 11, 14, 16, 18, 19, 20, 21, 22, 25, 31, 33, 34, 36, 41, 42, 46, 47, 48, 50, 53, 57, 61, 62, 66, 67, 70, 71, 73, 76, 77, 78, 79, 80, 81, 82, 86, 87, 88, 90, 92, 
94, 95, 98, 99, 100, 101, 102, 103, 104, 110, 113, 114, 115, 118, 119, 120, 121, 127, 131, 135, 136, 139, 141, 143, 144, 145, 155, 156, 157, 158, 160, 168, 170, 172, 174, 175, 176, 177, 182, 183, 184, 186, 189, 192, 194, 196, 197, 198, 199, 204, 205, 206, 208, 209, 212, 215, 216, 219, 220, 221, 222, 223, 228, 229, 231, 232, 235, 236, 239, 240, 243, 244, 247, 248, 249, 250, 251, 256, 257, 258, 260, 262, 264, 265, 268, 269, 271, 273, 274, 280, 283, 284, 285, 288, 289, 290, 291, 295, 297, 298, 300, 304, 305, 308, 310, 312, 313, 314, 325, 326, 327, 329, 332, 333, 334, 338, 339, 342, 343, 346, 347, 349, 352, 353, 354, 355, 356, 357, 361, 362, 363, 365, 367, 369, 370, 373, 374, 376, 377, 378, 379, 385, 388, 389, 390, 393, 394, 395, 396, 400, 403, 407, 408, 411, 413, 415, 416, 417, 427, 428, 429, 430, 432, 440, 441, 445, 446, 449, 450, 452, 455, 456, 457, 458, 459, 460, 461, 465, 466, 467, 469, 471, 473, 474, 477, 478, 479, 480, 481, 482, 483, 489, 492, 493, 494, 497, 498, 499, 500, 506, 510, 514, 515, 518, 520, 522, 523, 524, 534, 535, 536, 537, 539, 548, 549, 550, 551, 552, 554, 555, 556, 557, 559, 560, 561], "than": [3, 5, 10, 11, 12, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 41, 42, 43, 45, 46, 47, 48, 49, 50, 57, 61, 65, 66, 67, 70, 71, 77, 78, 79, 80, 81, 82, 86, 87, 88, 95, 98, 99, 102, 104, 108, 109, 110, 113, 114, 115, 118, 119, 131, 133, 139, 153, 170, 172, 176, 177, 182, 184, 186, 192, 194, 198, 199, 204, 206, 208, 209, 215, 216, 219, 221, 222, 223, 228, 231, 232, 235, 236, 239, 243, 244, 247, 249, 250, 251, 256, 257, 258, 265, 268, 269, 274, 278, 279, 280, 283, 284, 285, 288, 289, 297, 298, 300, 322, 333, 334, 338, 342, 343, 346, 347, 352, 353, 354, 355, 356, 357, 361, 362, 363, 370, 373, 374, 377, 379, 383, 384, 385, 388, 389, 390, 393, 394, 403, 411, 425, 440, 444, 445, 446, 449, 450, 456, 457, 458, 459, 460, 461, 465, 466, 467, 474, 477, 478, 481, 483, 487, 488, 489, 492, 493, 494, 497, 498, 510, 518, 532], "recommend": [3, 5, 8, 10, 14, 16, 18, 19, 20, 22, 25, 28, 31, 32, 33, 34, 35, 36, 37, 39, 41, 42, 46, 47, 51, 53, 71, 74, 77, 78, 80, 87, 92, 108, 109, 136, 145, 152, 163, 183, 184, 186, 205, 206, 209, 222, 229, 232, 236, 250, 257, 297, 298, 314, 321, 332, 333, 347, 350, 352, 353, 355, 362, 417, 424, 435, 450, 453, 456, 457, 459, 466, 471, 487, 488, 515, 524, 531, 542, 553], "between": [3, 5, 12, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 45, 46, 47, 48, 50, 53, 54, 67, 70, 71, 77, 78, 79, 80, 94, 104, 127, 130, 131, 140, 172, 176, 177, 184, 185, 186, 194, 198, 199, 206, 207, 208, 209, 216, 219, 222, 223, 230, 231, 232, 234, 235, 236, 244, 247, 250, 251, 264, 272, 274, 295, 297, 298, 299, 300, 309, 333, 343, 346, 347, 352, 353, 354, 355, 369, 379, 400, 402, 403, 412, 446, 449, 450, 456, 457, 458, 459, 473, 483, 506, 509, 510, 519, 556, 557], "3": [3, 5, 21, 25, 31, 32, 40, 43, 44, 46, 47, 48, 49, 53, 65, 71, 73, 78, 79, 80, 86, 88, 90, 93, 95, 98, 101, 104, 115, 117, 118, 120, 122, 126, 127, 130, 131, 133, 136, 139, 163, 165, 166, 171, 174, 175, 176, 182, 183, 184, 186, 193, 196, 197, 198, 199, 204, 205, 206, 208, 209, 220, 221, 222, 223, 228, 229, 230, 231, 232, 235, 236, 248, 249, 250, 251, 256, 257, 260, 271, 272, 274, 280, 284, 290, 295, 298, 299, 300, 332, 333, 347, 349, 353, 354, 355, 361, 365, 376, 379, 395, 400, 402, 403, 411, 435, 444, 450, 452, 457, 458, 459, 465, 467, 469, 472, 474, 477, 480, 483, 494, 496, 497, 499, 501, 505, 506, 509, 510, 515, 518, 542, 544, 545, 555, 562], "9": [3, 9, 21, 25, 31, 32, 34, 35, 36, 37, 46, 47, 48, 53, 64, 67, 71, 78, 80, 81, 
127, 138, 141, 142, 143, 147, 148, 149, 156, 159, 163, 172, 184, 186, 191, 194, 206, 209, 214, 216, 222, 232, 236, 242, 244, 250, 269, 289, 294, 295, 298, 301, 303, 304, 305, 306, 307, 308, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 323, 324, 325, 326, 328, 330, 332, 333, 334, 341, 343, 347, 353, 355, 356, 400, 410, 413, 414, 415, 419, 420, 421, 423, 428, 431, 433, 435, 443, 446, 450, 457, 459, 460, 506, 517, 520, 521, 522, 526, 527, 528, 535, 538, 542, 558], "help": [3, 5, 10, 12, 18, 19, 20, 22, 25, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 61, 64, 67, 70, 71, 80, 85, 87, 110, 114, 127, 163, 164, 171, 172, 181, 183, 184, 186, 191, 193, 194, 198, 203, 205, 206, 209, 214, 216, 222, 227, 229, 232, 236, 239, 242, 244, 250, 255, 257, 280, 284, 295, 332, 333, 338, 341, 343, 347, 355, 360, 362, 385, 389, 400, 435, 436, 440, 443, 446, 449, 450, 459, 464, 466, 489, 493, 506, 542, 543], "actual": [3, 18, 19, 20, 22, 33, 34, 35, 36, 37, 46, 47, 48, 53, 61, 64, 67, 71, 78, 79, 81, 84, 92, 104, 108, 109, 110, 114, 132, 136, 143, 151, 153, 157, 172, 175, 176, 184, 186, 191, 194, 197, 198, 206, 209, 214, 216, 221, 222, 231, 232, 236, 239, 242, 244, 249, 250, 262, 274, 278, 279, 280, 284, 298, 301, 305, 312, 320, 322, 326, 334, 338, 341, 343, 347, 353, 356, 359, 367, 379, 383, 384, 385, 389, 404, 408, 411, 415, 423, 425, 429, 440, 443, 446, 450, 457, 458, 460, 463, 471, 483, 487, 488, 489, 493, 511, 515, 522, 530, 532, 536, 553], "base": [3, 4, 5, 7, 8, 9, 11, 12, 14, 16, 18, 19, 25, 27, 28, 30, 31, 36, 39, 41, 42, 46, 47, 48, 53, 58, 59, 70, 71, 73, 78, 79, 81, 85, 100, 104, 136, 145, 163, 174, 176, 180, 181, 184, 186, 196, 198, 202, 203, 206, 209, 219, 220, 222, 223, 226, 227, 230, 231, 232, 236, 247, 248, 250, 251, 254, 255, 270, 272, 274, 298, 314, 332, 334, 346, 347, 349, 353, 354, 356, 360, 375, 379, 417, 435, 449, 450, 452, 457, 458, 460, 464, 479, 483, 515, 524, 542], "sever": [3, 9, 10, 34, 43, 46, 47, 48, 53, 71, 78, 81, 184, 186, 206, 209, 232, 236, 250, 298, 334, 347, 353, 356, 450, 457, 460, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "point": [3, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 34, 36, 41, 42, 45, 46, 48, 53, 54, 71, 74, 77, 78, 79, 80, 81, 84, 88, 89, 92, 93, 95, 98, 99, 103, 112, 115, 118, 119, 121, 122, 126, 127, 134, 136, 175, 176, 177, 184, 186, 197, 198, 199, 206, 209, 221, 222, 223, 232, 236, 249, 250, 251, 259, 263, 269, 273, 282, 289, 291, 295, 297, 298, 305, 333, 334, 347, 350, 352, 353, 354, 355, 356, 359, 364, 368, 374, 378, 387, 394, 396, 400, 406, 408, 450, 453, 456, 457, 458, 459, 460, 463, 467, 468, 471, 472, 474, 477, 478, 482, 491, 494, 497, 498, 500, 501, 505, 506, 513, 515, 555, 559, 560], "minim": [3, 11, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 47, 48, 70, 71, 77, 127, 184, 206, 219, 232, 247, 250, 295, 346, 347, 400, 449, 450, 456, 506], "sector": [3, 5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 71, 76, 78, 79, 80, 81, 151, 177, 186, 199, 209, 223, 232, 236, 250, 251, 298, 334, 347, 353, 354, 355, 356, 423, 450, 455, 457, 458, 459, 460, 530], "via": [3, 4, 7, 11, 34, 35, 36, 37, 46, 47, 48, 49, 50, 53, 65, 71, 74, 78, 79, 84, 87, 88, 90, 92, 101, 104, 118, 120, 127, 145, 163, 176, 177, 183, 184, 198, 199, 205, 206, 222, 223, 229, 231, 232, 236, 250, 251, 257, 258, 260, 262, 271, 274, 275, 288, 290, 295, 298, 314, 332, 347, 350, 353, 354, 359, 362, 363, 365, 367, 376, 379, 393, 395, 400, 417, 435, 444, 450, 453, 457, 458, 463, 466, 467, 469, 471, 480, 483, 497, 499, 506, 524, 542, 
557], "ashift": [3, 11, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 53, 64, 71, 76, 81, 132, 133, 139, 153, 163, 175, 186, 191, 197, 209, 214, 221, 222, 236, 242, 249, 250, 301, 302, 322, 332, 334, 341, 347, 356, 404, 405, 411, 425, 435, 443, 450, 455, 460, 511, 512, 518, 532, 542], "width": [3, 5, 80, 176, 198, 222, 250, 355, 459], "dynam": [3, 11, 47, 70, 71, 80, 81, 88, 118, 184, 186, 198, 206, 209, 219, 222, 232, 236, 247, 250, 258, 288, 333, 334, 346, 347, 355, 356, 363, 393, 449, 450, 459, 460, 467, 497], "start": [3, 5, 7, 8, 9, 10, 12, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 49, 53, 54, 58, 59, 62, 65, 71, 74, 79, 80, 86, 87, 131, 133, 139, 153, 154, 155, 163, 168, 171, 175, 176, 185, 186, 189, 191, 193, 197, 198, 199, 208, 209, 212, 214, 221, 222, 223, 230, 235, 236, 240, 242, 249, 250, 251, 256, 257, 272, 300, 302, 322, 323, 324, 332, 333, 339, 347, 350, 354, 355, 361, 362, 403, 405, 411, 425, 426, 427, 435, 441, 444, 450, 453, 458, 459, 465, 466, 510, 512, 518, 532, 533, 534, 542], "least": [3, 34, 36, 43, 46, 47, 48, 71, 74, 77, 79, 80, 110, 114, 139, 155, 171, 176, 184, 193, 198, 206, 221, 222, 230, 232, 236, 249, 250, 272, 280, 284, 297, 333, 347, 350, 352, 355, 385, 389, 411, 450, 453, 456, 458, 459, 489, 493, 518, 534], "part": [3, 5, 7, 8, 12, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 44, 45, 46, 47, 62, 71, 77, 78, 79, 80, 81, 87, 93, 103, 112, 116, 117, 121, 133, 139, 143, 146, 148, 149, 151, 155, 165, 166, 168, 175, 176, 177, 180, 184, 186, 189, 197, 198, 199, 202, 206, 209, 212, 219, 221, 222, 223, 226, 232, 236, 240, 247, 249, 250, 251, 254, 263, 273, 282, 286, 287, 291, 298, 302, 312, 315, 317, 318, 320, 333, 334, 339, 347, 353, 354, 355, 356, 362, 368, 378, 387, 391, 392, 396, 405, 411, 415, 418, 420, 421, 423, 427, 441, 450, 456, 457, 458, 459, 460, 466, 472, 482, 491, 495, 496, 500, 512, 518, 522, 525, 527, 528, 530, 534, 544, 545, 547, 552, 555], "count": [3, 4, 34, 35, 36, 37, 45, 47, 48, 49, 61, 66, 67, 71, 78, 86, 93, 94, 104, 139, 145, 147, 158, 165, 166, 170, 176, 182, 184, 186, 192, 198, 204, 206, 209, 215, 221, 222, 228, 231, 232, 236, 239, 243, 249, 250, 256, 263, 264, 274, 298, 314, 316, 327, 335, 338, 342, 343, 347, 353, 361, 368, 369, 379, 411, 417, 419, 430, 437, 438, 440, 445, 446, 450, 457, 465, 472, 473, 483, 518, 524, 526, 537, 544, 545, 555], "minu": [3, 86, 256, 361, 465], "records": [3, 18, 19, 20, 33, 34, 35, 36, 37, 41, 42, 47, 53, 71, 77, 78, 79, 88, 95, 98, 110, 114, 115, 118, 127, 176, 177, 184, 198, 199, 206, 222, 223, 232, 250, 251, 258, 280, 284, 288, 295, 298, 347, 353, 354, 363, 385, 389, 393, 400, 450, 456, 457, 458, 467, 474, 477, 489, 493, 494, 497, 506], "split": [3, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 80, 83, 86, 110, 114, 134, 138, 151, 155, 163, 186, 206, 209, 222, 228, 232, 236, 250, 253, 256, 280, 284, 303, 307, 320, 332, 333, 347, 355, 358, 361, 385, 389, 406, 410, 423, 427, 435, 450, 459, 462, 465, 489, 493, 513, 517, 530, 534, 542], "equal": [3, 47, 48, 70, 71, 76, 78, 79, 81, 92, 108, 109, 139, 153, 176, 184, 186, 198, 206, 209, 219, 221, 222, 230, 232, 236, 247, 249, 250, 262, 272, 278, 279, 298, 322, 334, 346, 347, 353, 356, 367, 383, 384, 411, 425, 449, 450, 455, 457, 458, 460, 471, 487, 488, 518, 532], "addit": [3, 5, 8, 10, 11, 12, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 41, 42, 44, 46, 47, 48, 50, 53, 62, 64, 67, 71, 73, 76, 77, 78, 79, 80, 81, 86, 87, 90, 97, 101, 102, 104, 110, 111, 114, 120, 127, 131, 132, 
133, 134, 139, 142, 145, 147, 158, 163, 168, 172, 176, 177, 183, 184, 186, 189, 191, 194, 196, 198, 199, 204, 205, 206, 208, 209, 212, 214, 216, 220, 221, 222, 223, 228, 229, 231, 232, 235, 236, 240, 242, 244, 248, 249, 250, 251, 256, 257, 260, 267, 271, 274, 280, 281, 284, 290, 295, 297, 298, 300, 303, 311, 314, 316, 332, 333, 334, 339, 341, 343, 347, 349, 352, 353, 354, 355, 356, 361, 362, 365, 372, 376, 377, 379, 385, 386, 389, 395, 400, 403, 406, 411, 414, 417, 419, 435, 441, 443, 446, 450, 452, 455, 456, 457, 458, 459, 460, 465, 466, 469, 476, 480, 481, 483, 489, 490, 493, 499, 506, 510, 511, 513, 518, 521, 524, 526, 537, 542, 548, 552, 555, 557], "per": [3, 5, 6, 7, 18, 19, 20, 22, 28, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 50, 53, 61, 64, 65, 67, 70, 71, 73, 77, 78, 79, 80, 86, 110, 114, 139, 155, 160, 171, 174, 176, 184, 191, 193, 196, 198, 206, 214, 219, 220, 221, 222, 232, 236, 239, 242, 247, 248, 249, 250, 251, 260, 271, 280, 284, 290, 297, 298, 329, 338, 341, 343, 346, 347, 349, 352, 353, 354, 355, 385, 389, 411, 427, 432, 440, 443, 444, 446, 449, 450, 452, 456, 457, 458, 459, 465, 489, 493, 518, 534, 539, 557], "due": [3, 11, 12, 14, 16, 17, 25, 31, 32, 34, 36, 43, 47, 48, 53, 67, 70, 71, 77, 78, 80, 81, 86, 92, 108, 109, 110, 114, 127, 132, 136, 139, 147, 155, 163, 165, 166, 176, 182, 184, 186, 198, 204, 206, 209, 219, 221, 222, 228, 232, 236, 247, 249, 250, 256, 262, 278, 279, 280, 284, 295, 297, 298, 301, 305, 324, 332, 333, 334, 343, 346, 347, 352, 353, 355, 356, 361, 367, 383, 384, 385, 389, 400, 404, 408, 411, 427, 435, 446, 449, 450, 456, 457, 459, 460, 465, 471, 487, 488, 489, 493, 506, 511, 515, 518, 526, 534, 542, 544, 545, 548, 550, 551, 553, 554, 557, 561], "input": [3, 14, 16, 25, 28, 31, 46, 78, 104, 108, 109, 127, 164, 165, 166, 184, 206, 231, 232, 274, 278, 279, 295, 335, 353, 379, 383, 384, 400, 436, 437, 438, 457, 483, 487, 488, 506, 543, 544, 545], "less": [3, 5, 10, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 61, 66, 70, 71, 77, 78, 80, 110, 114, 133, 139, 170, 176, 184, 192, 198, 206, 215, 219, 221, 222, 232, 239, 243, 247, 249, 250, 280, 284, 297, 298, 333, 338, 342, 346, 347, 352, 353, 355, 385, 389, 411, 440, 445, 449, 450, 456, 457, 459, 489, 493, 518], "": [3, 5, 7, 8, 9, 10, 11, 12, 14, 16, 17, 18, 19, 20, 22, 25, 28, 31, 32, 33, 34, 35, 36, 37, 39, 41, 42, 43, 46, 47, 48, 50, 52, 53, 61, 62, 64, 65, 67, 71, 74, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 168, 171, 172, 175, 176, 177, 180, 181, 182, 183, 184, 185, 186, 189, 191, 193, 194, 197, 198, 199, 202, 203, 204, 205, 206, 207, 208, 209, 212, 214, 216, 217, 219, 221, 222, 223, 226, 227, 228, 229, 230, 231, 232, 234, 235, 236, 239, 240, 242, 244, 245, 247, 249, 250, 251, 252, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 338, 339, 341, 343, 
346, 347, 350, 352, 353, 354, 355, 356, 357, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 440, 441, 443, 444, 446, 449, 450, 453, 456, 457, 458, 459, 460, 461, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 557, 561], "effict": 3, "mirror": [3, 5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 48, 53, 67, 71, 78, 79, 80, 81, 132, 133, 136, 138, 139, 143, 145, 148, 149, 151, 153, 155, 157, 158, 163, 172, 175, 176, 184, 186, 194, 197, 198, 199, 206, 209, 216, 221, 222, 223, 232, 236, 244, 249, 250, 251, 298, 302, 305, 307, 317, 318, 320, 322, 324, 326, 332, 333, 334, 343, 347, 353, 354, 355, 356, 405, 408, 410, 411, 420, 421, 423, 425, 427, 429, 435, 446, 450, 457, 458, 459, 460, 511, 512, 515, 517, 518, 522, 524, 527, 528, 530, 532, 534, 536, 537, 542, 548, 550, 555, 556, 561], "same": [3, 5, 9, 18, 19, 20, 21, 34, 36, 41, 42, 43, 44, 46, 47, 48, 53, 54, 61, 66, 71, 73, 76, 77, 78, 79, 80, 81, 85, 86, 88, 90, 91, 92, 93, 100, 101, 102, 104, 108, 109, 110, 114, 117, 118, 120, 127, 129, 136, 143, 151, 153, 163, 170, 174, 181, 182, 184, 186, 192, 196, 203, 204, 206, 209, 215, 220, 222, 227, 228, 231, 232, 236, 239, 243, 248, 250, 255, 256, 258, 260, 261, 262, 263, 269, 270, 271, 274, 278, 279, 280, 284, 287, 288, 289, 290, 295, 297, 298, 305, 312, 320, 322, 332, 333, 334, 338, 342, 347, 349, 352, 353, 354, 355, 356, 360, 361, 363, 365, 366, 367, 368, 375, 376, 377, 379, 383, 384, 385, 389, 392, 393, 395, 400, 408, 415, 423, 425, 435, 440, 445, 450, 452, 455, 456, 457, 458, 459, 460, 464, 465, 467, 469, 470, 471, 472, 479, 480, 481, 483, 487, 488, 489, 493, 496, 497, 499, 506, 508, 515, 522, 530, 532, 542, 548, 550, 551, 555], "exampl": [3, 4, 5, 7, 8, 12, 14, 16, 18, 19, 21, 25, 27, 28, 31, 32, 46, 47, 48, 49, 50, 53, 62, 65, 66, 67, 71, 73, 76, 77, 78, 79, 80, 86, 88, 89, 91, 92, 93, 94, 95, 98, 100, 102, 104, 107, 108, 109, 110, 112, 113, 114, 115, 117, 118, 122, 126, 127, 129, 130, 131, 132, 136, 137, 139, 140, 143, 145, 147, 151, 155, 158, 161, 163, 165, 166, 168, 170, 172, 174, 175, 177, 182, 184, 186, 189, 192, 194, 196, 197, 199, 204, 206, 207, 208, 209, 212, 215, 216, 220, 221, 223, 228, 230, 231, 232, 234, 235, 236, 240, 243, 244, 248, 249, 250, 251, 256, 258, 263, 266, 270, 272, 274, 276, 278, 279, 280, 284, 288, 293, 295, 297, 298, 299, 300, 324, 332, 333, 339, 342, 343, 347, 349, 352, 353, 354, 355, 361, 363, 368, 375, 377, 379, 383, 384, 385, 389, 393, 400, 402, 403, 411, 427, 435, 441, 444, 445, 446, 450, 452, 455, 456, 457, 458, 459, 465, 467, 468, 470, 471, 472, 473, 474, 477, 479, 481, 483, 486, 487, 488, 489, 491, 492, 493, 494, 496, 497, 501, 505, 506, 508, 509, 510, 511, 515, 516, 518, 519, 522, 524, 526, 530, 534, 537, 540, 542, 544, 545, 554, 555], "4k": [3, 5, 46, 48, 70, 78, 206, 209, 219, 232, 236, 247, 250, 298, 346, 353, 449, 457], 
"we": [3, 8, 10, 11, 12, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 45, 46, 47, 49, 53, 70, 71, 78, 79, 81, 110, 114, 175, 176, 197, 198, 219, 221, 222, 223, 236, 249, 250, 251, 280, 284, 333, 334, 347, 354, 355, 356, 385, 389, 449, 450, 457, 458, 460, 489, 493], "alloc": [3, 5, 53, 61, 64, 67, 70, 71, 76, 78, 79, 80, 81, 86, 104, 133, 145, 147, 151, 158, 160, 163, 172, 176, 182, 186, 194, 198, 204, 206, 209, 216, 219, 222, 223, 228, 231, 232, 236, 244, 247, 250, 251, 256, 274, 298, 316, 320, 327, 329, 332, 333, 334, 338, 341, 343, 346, 347, 353, 354, 355, 356, 361, 379, 419, 423, 430, 432, 435, 440, 443, 446, 449, 450, 455, 457, 458, 459, 460, 465, 483, 524, 526, 530, 537, 539, 542], "usabl": [3, 5, 47, 48, 55, 79, 80, 86, 93, 110, 114, 127, 184, 204, 206, 228, 232, 256, 263, 280, 284, 295, 355, 361, 368, 385, 389, 400, 459, 465, 472, 489, 493, 506, 553], "ratio": [3, 5, 47, 48, 50, 53, 71, 78, 79, 80, 81, 86, 90, 101, 120, 133, 176, 177, 182, 184, 198, 199, 204, 206, 222, 223, 228, 232, 250, 251, 256, 260, 271, 290, 298, 347, 353, 354, 355, 361, 365, 376, 395, 450, 457, 458, 459, 460, 465, 469, 480, 499], "50": [3, 12, 32, 35, 37, 47, 67, 71, 79, 95, 98, 115, 127, 176, 177, 184, 198, 199, 206, 222, 223, 232, 250, 251, 295, 343, 347, 354, 400, 446, 450, 458, 474, 477, 494, 506], "anoth": [3, 5, 10, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 71, 77, 78, 79, 80, 94, 96, 99, 106, 110, 114, 119, 122, 124, 126, 127, 135, 136, 143, 153, 163, 176, 177, 184, 186, 198, 199, 206, 209, 222, 223, 232, 236, 250, 251, 264, 266, 269, 276, 280, 284, 289, 293, 295, 297, 298, 304, 305, 312, 322, 332, 333, 347, 352, 353, 354, 355, 369, 371, 374, 381, 385, 389, 394, 398, 400, 407, 408, 415, 425, 435, 450, 456, 457, 458, 459, 473, 475, 478, 485, 489, 493, 498, 501, 503, 505, 506, 514, 515, 522, 532, 542, 548, 550, 555, 557, 558], "128k": [3, 48, 53, 71, 95, 98, 115, 127, 184, 193, 206, 209, 222, 232, 236, 250, 295, 347, 400, 450, 474, 477, 494, 506], "total": [3, 5, 47, 56, 61, 67, 71, 76, 77, 78, 79, 80, 81, 86, 96, 106, 124, 145, 151, 164, 172, 176, 184, 186, 194, 198, 206, 209, 216, 222, 223, 232, 236, 239, 244, 250, 251, 266, 276, 293, 297, 298, 314, 320, 334, 338, 343, 347, 352, 353, 354, 355, 356, 371, 381, 398, 417, 423, 427, 436, 440, 446, 450, 455, 456, 457, 458, 459, 460, 465, 475, 485, 503, 524, 530, 543], "becaus": [3, 9, 12, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 44, 46, 47, 48, 53, 62, 67, 70, 71, 77, 78, 79, 80, 81, 82, 88, 102, 104, 108, 109, 110, 114, 118, 125, 127, 139, 143, 155, 158, 162, 165, 166, 168, 172, 175, 176, 178, 184, 186, 189, 194, 197, 198, 199, 200, 206, 209, 212, 216, 219, 221, 222, 223, 224, 230, 231, 232, 236, 240, 244, 247, 249, 250, 251, 252, 258, 272, 274, 278, 279, 280, 284, 288, 294, 295, 297, 298, 312, 324, 327, 331, 333, 334, 339, 343, 346, 347, 352, 353, 354, 355, 356, 357, 363, 377, 379, 383, 384, 385, 389, 393, 399, 400, 411, 415, 427, 430, 434, 441, 446, 449, 450, 456, 457, 458, 459, 460, 461, 467, 481, 483, 487, 488, 489, 493, 497, 504, 506, 518, 522, 534, 537, 541, 544, 545, 550, 551, 553, 555, 557], "8k": [3, 48, 78, 176, 198, 206, 222, 232, 250, 298, 353, 457], "16": [3, 5, 32, 46, 47, 48, 53, 67, 71, 78, 79, 81, 87, 88, 91, 92, 93, 94, 95, 98, 100, 103, 107, 112, 113, 115, 117, 118, 121, 127, 132, 136, 137, 140, 143, 145, 147, 151, 155, 158, 161, 163, 170, 176, 178, 184, 192, 198, 200, 206, 209, 215, 219, 222, 224, 232, 234, 236, 247, 250, 273, 278, 279, 291, 295, 309, 332, 334, 343, 347, 
356, 362, 378, 383, 384, 396, 400, 412, 427, 435, 446, 450, 457, 458, 460, 466, 467, 470, 471, 472, 473, 474, 477, 479, 482, 486, 491, 492, 494, 496, 497, 500, 506, 511, 515, 516, 519, 522, 524, 526, 530, 534, 537, 540, 542], "12k": 3, "192k": 3, "case": [3, 4, 7, 8, 11, 12, 18, 19, 20, 21, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 44, 45, 46, 47, 48, 50, 53, 54, 61, 62, 65, 71, 76, 78, 79, 80, 81, 84, 85, 87, 93, 94, 96, 104, 105, 106, 108, 109, 110, 112, 114, 123, 124, 127, 133, 139, 143, 147, 151, 153, 163, 168, 175, 176, 177, 181, 183, 184, 186, 189, 197, 198, 199, 203, 205, 206, 209, 212, 219, 221, 222, 223, 227, 229, 231, 232, 236, 239, 240, 249, 250, 251, 255, 257, 263, 264, 266, 274, 276, 278, 279, 280, 282, 284, 292, 293, 295, 298, 302, 312, 320, 322, 332, 333, 334, 338, 339, 347, 353, 354, 355, 356, 359, 360, 362, 368, 369, 371, 379, 380, 381, 383, 384, 385, 387, 389, 397, 398, 400, 405, 411, 415, 423, 425, 435, 440, 441, 444, 450, 455, 457, 458, 459, 460, 463, 464, 466, 472, 473, 475, 483, 484, 485, 487, 488, 489, 491, 493, 502, 503, 506, 512, 518, 522, 526, 530, 532, 542, 552, 554, 555], "66": 3, "wider": 3, "greater": [3, 45, 47, 48, 53, 71, 78, 81, 86, 153, 176, 182, 184, 186, 198, 204, 206, 209, 222, 228, 232, 236, 250, 256, 298, 322, 334, 347, 353, 356, 361, 425, 450, 457, 460, 465, 532], "you": [3, 4, 7, 8, 9, 10, 12, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 40, 41, 42, 43, 46, 47, 48, 53, 54, 62, 67, 70, 71, 76, 77, 78, 79, 80, 81, 95, 98, 99, 100, 104, 108, 109, 110, 113, 114, 115, 119, 122, 126, 127, 129, 135, 140, 145, 148, 149, 150, 163, 165, 166, 168, 172, 176, 184, 186, 189, 194, 198, 206, 209, 212, 216, 219, 222, 223, 230, 231, 232, 236, 240, 244, 247, 250, 251, 269, 270, 272, 274, 275, 278, 279, 280, 283, 284, 289, 295, 297, 298, 309, 314, 319, 333, 334, 339, 343, 346, 347, 352, 353, 354, 355, 356, 374, 375, 379, 383, 384, 385, 388, 389, 394, 400, 412, 417, 422, 441, 446, 449, 450, 455, 456, 457, 458, 459, 460, 474, 477, 478, 479, 483, 487, 488, 489, 492, 493, 494, 498, 501, 505, 506, 508, 519, 524, 529, 544, 545, 547, 549, 555, 558, 559, 560, 561], "find": [3, 12, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 53, 62, 65, 70, 71, 143, 168, 186, 189, 198, 209, 212, 219, 222, 232, 236, 240, 247, 250, 275, 312, 339, 346, 347, 415, 441, 444, 449, 450, 522, 555], "cost": [3, 35, 37, 46, 47, 53, 70, 71, 78, 90, 101, 120, 176, 198, 219, 222, 232, 247, 250, 260, 271, 290, 298, 346, 347, 353, 365, 376, 395, 449, 450, 457, 469, 480, 499], "here": [3, 9, 10, 14, 18, 19, 20, 22, 24, 26, 28, 32, 33, 34, 35, 36, 37, 41, 42, 46, 49, 53, 54, 71, 78, 108, 109, 139, 175, 176, 197, 198, 221, 222, 232, 249, 250, 278, 279, 298, 347, 353, 383, 384, 411, 450, 457, 487, 488, 518], "full": [3, 5, 8, 9, 12, 46, 47, 48, 49, 71, 74, 78, 79, 80, 81, 86, 104, 108, 109, 110, 114, 127, 139, 145, 147, 157, 158, 163, 175, 176, 182, 184, 186, 197, 198, 204, 206, 209, 221, 222, 228, 231, 232, 236, 249, 250, 251, 256, 274, 278, 279, 280, 284, 295, 298, 314, 316, 326, 327, 332, 333, 334, 347, 350, 353, 354, 355, 356, 361, 379, 383, 384, 385, 389, 400, 411, 417, 419, 429, 430, 435, 450, 453, 457, 458, 459, 460, 465, 483, 487, 488, 489, 493, 506, 518, 524, 526, 536, 537, 542], "One": [3, 5, 10, 12, 46, 47, 48, 53, 78, 80, 100, 110, 114, 184, 186, 206, 209, 219, 232, 236, 270, 280, 284, 298, 333, 353, 355, 375, 385, 389, 457, 459, 479, 489, 493, 548, 549, 550, 551, 552, 554, 555, 561], "iop": [3, 5, 46, 47, 48, 49, 71, 80, 176, 
198, 222, 250, 347, 355, 450, 459], "slowest": [3, 48], "worst": [3, 46, 47, 71, 78, 176, 184, 198, 206, 222, 232, 250, 298, 347, 353, 450, 457], "draft": 4, "contain": [4, 7, 8, 9, 12, 14, 16, 18, 19, 20, 22, 23, 25, 26, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 64, 65, 71, 74, 76, 78, 79, 80, 81, 82, 85, 86, 87, 92, 96, 102, 104, 106, 108, 109, 110, 114, 124, 127, 136, 137, 139, 143, 145, 151, 158, 163, 165, 166, 175, 176, 177, 178, 181, 182, 183, 184, 186, 191, 197, 198, 199, 200, 203, 204, 205, 206, 209, 214, 219, 221, 222, 223, 224, 227, 228, 229, 231, 232, 236, 242, 247, 249, 250, 251, 252, 255, 256, 257, 262, 266, 274, 276, 280, 284, 293, 295, 298, 305, 306, 312, 314, 332, 333, 334, 335, 341, 347, 350, 353, 354, 355, 356, 357, 360, 361, 362, 367, 371, 377, 379, 381, 385, 389, 398, 400, 408, 411, 415, 417, 423, 435, 437, 438, 443, 444, 450, 453, 455, 457, 458, 459, 460, 461, 464, 465, 466, 471, 475, 481, 483, 485, 487, 488, 489, 493, 503, 506, 515, 516, 518, 522, 524, 530, 537, 542, 544, 545, 550, 557], "tip": [4, 8, 48], "what": [4, 5, 7, 9, 10, 11, 46, 47, 48, 50, 54, 57, 67, 71, 78, 80, 93, 94, 95, 98, 104, 110, 114, 115, 125, 127, 141, 156, 162, 172, 175, 176, 184, 186, 194, 197, 198, 206, 209, 216, 221, 222, 231, 232, 236, 244, 249, 250, 263, 265, 268, 274, 280, 284, 285, 294, 295, 298, 310, 325, 331, 333, 343, 347, 353, 355, 368, 370, 373, 379, 385, 389, 390, 399, 400, 413, 428, 434, 446, 450, 457, 459, 472, 473, 474, 477, 483, 489, 493, 494, 504, 506, 520, 535, 541], "info": [4, 14, 16, 25, 31, 47, 80, 90, 101, 120, 232, 236, 260, 271, 290, 333, 355, 365, 376, 395, 459, 469, 480, 499], "might": [4, 7, 12, 16, 25, 26, 31, 34, 36, 43, 46, 47, 53, 71, 73, 74, 78, 79, 80, 88, 99, 118, 119, 122, 126, 127, 174, 178, 184, 186, 196, 200, 206, 209, 220, 224, 232, 236, 248, 252, 258, 269, 288, 289, 295, 298, 333, 347, 349, 350, 353, 354, 355, 363, 374, 393, 394, 400, 450, 452, 453, 457, 458, 459, 467, 478, 497, 498, 501, 505, 506], "want": [4, 9, 10, 11, 12, 14, 16, 18, 19, 20, 21, 22, 23, 25, 28, 31, 33, 34, 35, 36, 37, 38, 40, 41, 42, 43, 46, 47, 48, 54, 102, 110, 114, 129, 184, 186, 230, 232, 272, 280, 284, 377, 385, 389, 481, 489, 493, 508, 561], "bug": [4, 12, 14, 16, 17, 18, 19, 20, 22, 25, 28, 29, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 54, 71, 79, 87, 165, 166, 176, 178, 183, 198, 200, 205, 222, 223, 224, 229, 250, 251, 252, 257, 347, 354, 362, 450, 458, 466, 544, 545], "triag": 4, "veri": [4, 10, 12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 46, 47, 48, 53, 71, 74, 77, 78, 79, 86, 102, 104, 155, 176, 182, 184, 186, 198, 199, 204, 206, 209, 219, 222, 223, 228, 230, 231, 232, 236, 247, 250, 251, 256, 272, 274, 297, 298, 324, 347, 350, 352, 353, 354, 361, 377, 379, 427, 450, 453, 456, 457, 458, 465, 481, 483, 534, 555], "interest": [4, 44], "inform": [4, 7, 8, 11, 12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 39, 41, 42, 46, 47, 48, 53, 71, 74, 76, 77, 78, 79, 80, 81, 86, 87, 88, 90, 92, 93, 94, 95, 98, 99, 100, 101, 102, 103, 104, 108, 109, 110, 114, 115, 118, 119, 120, 121, 123, 127, 136, 139, 141, 143, 146, 147, 156, 157, 158, 159, 160, 163, 165, 166, 175, 177, 182, 183, 184, 186, 197, 199, 204, 205, 206, 209, 221, 222, 223, 228, 229, 230, 231, 232, 236, 239, 249, 250, 251, 256, 257, 258, 260, 262, 263, 265, 268, 269, 270, 271, 272, 273, 274, 278, 279, 280, 284, 285, 288, 289, 290, 291, 292, 295, 297, 298, 305, 308, 310, 312, 315, 316, 325, 326, 327, 328, 329, 332, 333, 334, 335, 347, 350, 352, 353, 354, 355, 356, 361, 362, 363, 365, 367, 368, 370, 373, 374, 375, 376, 
377, 378, 379, 383, 384, 385, 389, 390, 393, 394, 395, 396, 397, 400, 408, 411, 413, 415, 418, 419, 428, 429, 430, 431, 432, 435, 437, 438, 450, 453, 455, 456, 457, 458, 459, 460, 465, 466, 467, 469, 471, 472, 473, 474, 477, 478, 479, 480, 481, 482, 483, 487, 488, 489, 493, 494, 497, 498, 499, 500, 502, 506, 515, 518, 520, 522, 525, 526, 535, 536, 537, 538, 539, 542, 544, 545, 555, 557, 559, 560], "correl": [4, 78, 184, 206, 232, 298, 353, 457], "system": [4, 7, 8, 9, 11, 15, 17, 26, 27, 29, 32, 43, 46, 47, 48, 49, 52, 57, 59, 66, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 170, 171, 174, 175, 176, 177, 178, 180, 181, 183, 184, 185, 186, 187, 188, 192, 193, 196, 197, 198, 199, 200, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 215, 217, 219, 220, 221, 222, 223, 224, 226, 227, 228, 229, 231, 232, 234, 235, 236, 237, 238, 243, 245, 247, 248, 249, 250, 251, 252, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 342, 346, 347, 349, 350, 352, 353, 354, 355, 356, 357, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 445, 449, 450, 452, 453, 455, 456, 457, 458, 459, 460, 461, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "pro": [4, 46], "infrastructur": [4, 175, 197, 221, 249], "tool": [4, 8, 9, 12, 14, 16, 17, 18, 19, 20, 22, 25, 27, 31, 33, 34, 35, 36, 37, 41, 42, 47, 48, 64, 66, 67, 77, 78, 86, 133, 170, 172, 182, 184, 191, 192, 194, 204, 206, 214, 215, 216, 228, 232, 242, 243, 244, 256, 297, 298, 341, 342, 343, 352, 353, 361, 443, 445, 446, 456, 457, 465, 557], "like": [4, 5, 9, 10, 12, 18, 19, 20, 21, 22, 23, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 61, 62, 64, 67, 71, 74, 77, 79, 80, 81, 85, 90, 96, 101, 104, 106, 110, 114, 120, 124, 127, 129, 139, 143, 145, 164, 168, 172, 176, 181, 184, 189, 191, 194, 198, 203, 206, 209, 212, 214, 216, 221, 222, 227, 231, 232, 236, 239, 240, 242, 244, 249, 250, 255, 260, 271, 274, 275, 280, 284, 290, 295, 297, 312, 314, 334, 338, 339, 341, 343, 347, 350, 352, 355, 356, 360, 365, 371, 376, 379, 381, 
385, 389, 395, 398, 400, 411, 415, 417, 436, 440, 441, 443, 446, 450, 453, 456, 458, 459, 460, 464, 469, 475, 480, 483, 485, 489, 493, 499, 503, 506, 508, 518, 522, 524, 543, 553, 555], "elasticsearch": 4, "fluentd": 4, "influxdb": [4, 164, 436, 543], "splunk": 4, "simplifi": [4, 5, 32, 53], "analysi": [4, 8, 47, 70, 90, 101, 110, 114, 120, 176, 198, 219, 222, 247, 250, 260, 271, 280, 284, 290, 346, 347, 365, 376, 385, 389, 395, 449, 469, 480, 489, 493, 499], "typic": [4, 32, 46, 47, 48, 49, 50, 53, 65, 71, 73, 78, 79, 80, 81, 110, 114, 127, 174, 176, 177, 184, 186, 196, 198, 199, 206, 209, 219, 220, 222, 223, 232, 236, 247, 248, 250, 251, 280, 284, 295, 298, 333, 334, 347, 349, 353, 354, 355, 356, 385, 389, 400, 444, 450, 452, 457, 458, 459, 460, 489, 493, 506, 561], "avail": [4, 5, 7, 8, 9, 11, 12, 14, 16, 17, 18, 25, 26, 27, 28, 29, 31, 32, 36, 37, 40, 41, 42, 44, 46, 47, 48, 53, 57, 61, 65, 70, 71, 78, 79, 80, 81, 86, 88, 90, 92, 95, 98, 100, 101, 102, 103, 104, 107, 110, 114, 115, 116, 118, 120, 121, 127, 129, 132, 136, 139, 141, 143, 147, 148, 149, 156, 157, 161, 163, 168, 175, 176, 177, 182, 184, 186, 189, 197, 198, 199, 204, 206, 209, 212, 219, 221, 222, 223, 228, 230, 231, 232, 236, 239, 240, 247, 249, 250, 251, 256, 258, 260, 262, 270, 271, 272, 273, 274, 277, 280, 284, 286, 288, 290, 291, 295, 298, 310, 312, 317, 318, 325, 326, 330, 332, 333, 334, 338, 339, 346, 347, 353, 354, 355, 356, 361, 363, 365, 367, 375, 376, 377, 378, 379, 382, 385, 389, 391, 393, 395, 396, 400, 411, 413, 415, 420, 421, 428, 429, 433, 435, 440, 444, 449, 450, 457, 458, 459, 460, 465, 467, 469, 471, 474, 477, 479, 480, 481, 482, 483, 486, 489, 493, 494, 495, 497, 499, 500, 506, 508, 511, 515, 518, 520, 522, 526, 527, 528, 535, 536, 540, 542, 547, 548, 549, 550, 551, 552, 553, 554, 555, 558], "dmesg": [4, 47, 53], "var": [4, 8, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 65, 86, 127, 204, 228, 256, 361, 444, 465, 506], "syslog": 4, "sent": [4, 46, 47, 48, 54, 71, 78, 108, 109, 110, 114, 127, 176, 184, 198, 206, 209, 222, 232, 250, 278, 279, 280, 284, 295, 298, 347, 353, 383, 384, 385, 389, 400, 450, 457, 487, 488, 489, 493, 506], "eg": [4, 47, 171, 182, 184, 193, 198, 204, 222, 228, 250, 256], "rsyslogd": 4, "intern": [4, 5, 46, 47, 48, 53, 71, 73, 76, 78, 79, 80, 81, 85, 86, 104, 125, 127, 142, 145, 147, 158, 162, 174, 176, 177, 181, 182, 184, 186, 196, 198, 199, 203, 204, 206, 209, 220, 222, 223, 227, 228, 231, 232, 236, 248, 250, 251, 255, 256, 274, 294, 295, 298, 311, 314, 316, 327, 331, 333, 334, 347, 349, 353, 354, 355, 356, 360, 361, 379, 399, 400, 414, 417, 419, 430, 434, 450, 452, 455, 457, 458, 459, 460, 464, 465, 483, 504, 506, 521, 524, 526, 537, 541], "buffer": [4, 47, 48, 67, 71, 79, 86, 87, 176, 198, 199, 204, 216, 222, 223, 228, 244, 250, 251, 256, 343, 347, 354, 361, 446, 450, 458, 465, 466], "detail": [4, 7, 8, 14, 16, 25, 28, 31, 43, 46, 47, 48, 53, 77, 78, 79, 81, 85, 87, 88, 89, 91, 95, 98, 103, 104, 108, 109, 110, 114, 115, 117, 118, 121, 127, 136, 143, 147, 158, 161, 163, 164, 175, 177, 181, 183, 184, 186, 197, 198, 199, 203, 205, 206, 209, 221, 223, 227, 229, 231, 232, 236, 249, 251, 255, 257, 258, 259, 261, 265, 268, 273, 274, 278, 279, 280, 284, 285, 287, 288, 291, 295, 297, 298, 305, 312, 327, 330, 332, 334, 352, 353, 354, 356, 360, 362, 363, 364, 366, 370, 373, 378, 379, 383, 384, 385, 389, 390, 392, 393, 396, 400, 408, 415, 430, 433, 435, 436, 456, 457, 458, 460, 464, 466, 467, 468, 470, 474, 477, 482, 483, 487, 488, 489, 493, 494, 496, 497, 500, 506, 515, 
522, 526, 537, 540, 542, 543, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 558, 559, 560, 561], "pseudo": [4, 80, 186, 209, 236, 333, 355, 459], "dbgmsg": [4, 47, 71, 104, 176, 198, 222, 231, 250, 274, 347, 379, 450, 483], "build": [4, 9, 10, 11, 12, 13, 25, 27, 29, 31, 32, 43, 47, 53, 58, 59, 67, 71, 172, 176, 183, 194, 198, 205, 216, 222, 229, 244, 250, 257, 343, 347, 446, 450], "zfs_dbgmsg_enabl": [4, 71, 176, 198, 222, 250, 347, 450], "symptom": [4, 47], "command": [4, 5, 7, 8, 9, 10, 12, 14, 16, 18, 19, 20, 22, 23, 25, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 61, 62, 64, 65, 66, 67, 68, 71, 73, 76, 77, 78, 79, 80, 81, 84, 86, 87, 88, 90, 91, 92, 93, 95, 97, 98, 100, 101, 103, 104, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 120, 121, 125, 127, 129, 130, 132, 134, 136, 137, 138, 140, 142, 143, 144, 145, 147, 148, 149, 151, 155, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 170, 171, 172, 174, 177, 178, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 191, 192, 193, 194, 196, 198, 199, 200, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 214, 215, 216, 217, 220, 222, 223, 224, 226, 227, 228, 229, 230, 231, 232, 234, 235, 236, 237, 238, 239, 240, 242, 243, 244, 245, 248, 250, 251, 254, 256, 257, 258, 259, 260, 262, 263, 265, 267, 268, 270, 271, 272, 273, 274, 277, 278, 279, 280, 281, 283, 284, 285, 286, 288, 290, 291, 292, 294, 295, 297, 298, 299, 301, 303, 305, 306, 307, 309, 311, 312, 313, 314, 316, 317, 318, 320, 324, 326, 327, 328, 329, 331, 332, 333, 334, 335, 336, 337, 338, 339, 341, 342, 343, 344, 347, 349, 352, 353, 354, 355, 356, 359, 361, 362, 363, 365, 367, 368, 370, 372, 373, 375, 376, 378, 379, 382, 383, 384, 385, 386, 388, 389, 390, 391, 393, 395, 396, 399, 400, 402, 404, 406, 408, 409, 410, 412, 414, 415, 416, 417, 419, 420, 421, 423, 427, 429, 430, 431, 432, 434, 435, 436, 437, 438, 439, 440, 441, 443, 444, 445, 446, 447, 450, 452, 455, 456, 457, 458, 459, 460, 463, 465, 466, 467, 469, 470, 471, 472, 474, 476, 477, 479, 480, 482, 483, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 499, 500, 504, 506, 508, 509, 511, 513, 515, 516, 517, 519, 521, 522, 523, 524, 526, 527, 528, 530, 534, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 550, 555, 558, 559, 560], "appear": [4, 18, 19, 20, 22, 23, 33, 34, 36, 41, 42, 46, 47, 68, 71, 73, 78, 92, 104, 127, 131, 132, 133, 136, 143, 153, 163, 164, 171, 174, 184, 186, 193, 196, 198, 206, 208, 209, 220, 222, 231, 232, 235, 236, 248, 250, 262, 274, 298, 300, 301, 302, 305, 312, 322, 344, 347, 349, 353, 367, 379, 403, 404, 405, 408, 415, 425, 436, 447, 450, 452, 457, 471, 483, 506, 510, 511, 512, 515, 522, 532, 542, 543, 547], "hung": [4, 47, 71, 139, 176, 198, 221, 222, 249, 250, 347, 411, 450, 518], "doe": [4, 10, 11, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 49, 54, 57, 62, 66, 71, 74, 78, 79, 80, 81, 86, 90, 93, 101, 104, 108, 109, 110, 112, 114, 120, 125, 127, 133, 139, 143, 145, 151, 155, 158, 165, 166, 168, 170, 175, 176, 177, 178, 180, 182, 184, 186, 189, 192, 197, 198, 199, 200, 202, 204, 206, 207, 209, 212, 215, 221, 222, 223, 224, 226, 228, 231, 232, 234, 236, 240, 243, 249, 250, 251, 252, 254, 256, 260, 263, 271, 274, 278, 279, 282, 290, 294, 295, 298, 312, 314, 324, 327, 334, 339, 342, 347, 350, 353, 354, 356, 361, 365, 368, 376, 379, 383, 384, 387, 395, 399, 400, 411, 415, 417, 423, 427, 430, 441, 445, 450, 453, 457, 458, 459, 460, 465, 469, 472, 480, 483, 487, 488, 489, 491, 493, 499, 504, 506, 518, 
522, 524, 530, 534, 537, 544, 545, 555, 557], "return": [4, 5, 10, 12, 41, 42, 43, 47, 48, 71, 78, 79, 80, 81, 82, 86, 87, 97, 104, 111, 125, 127, 129, 133, 134, 143, 144, 151, 153, 155, 160, 162, 163, 176, 177, 178, 183, 184, 186, 198, 199, 200, 204, 205, 206, 209, 217, 219, 222, 223, 224, 228, 229, 231, 232, 236, 245, 247, 250, 251, 252, 256, 257, 267, 274, 281, 294, 295, 298, 302, 303, 312, 313, 320, 322, 324, 329, 331, 332, 333, 334, 347, 353, 354, 355, 356, 357, 361, 362, 372, 379, 386, 399, 400, 405, 406, 415, 416, 423, 425, 427, 432, 434, 435, 450, 457, 458, 459, 460, 461, 465, 466, 476, 483, 490, 504, 506, 508, 512, 513, 522, 523, 530, 532, 534, 539, 541, 542, 553, 557], "killabl": 4, "caus": [4, 10, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 44, 46, 47, 48, 50, 53, 54, 62, 66, 70, 71, 77, 78, 79, 86, 90, 93, 95, 98, 101, 103, 104, 108, 109, 110, 114, 115, 116, 120, 121, 127, 139, 157, 160, 163, 168, 170, 176, 184, 186, 189, 192, 198, 206, 209, 212, 215, 219, 221, 222, 223, 231, 232, 236, 240, 243, 247, 249, 250, 251, 260, 263, 265, 268, 271, 273, 274, 278, 279, 280, 284, 285, 290, 291, 295, 297, 298, 326, 329, 332, 339, 342, 346, 347, 352, 353, 354, 365, 368, 370, 373, 376, 378, 379, 383, 384, 385, 389, 390, 391, 395, 396, 400, 411, 429, 432, 435, 441, 445, 449, 450, 456, 457, 458, 465, 469, 472, 474, 477, 480, 482, 483, 487, 488, 489, 493, 494, 495, 499, 500, 506, 518, 536, 539, 542, 555], "thread": [4, 47, 48, 49, 67, 70, 71, 78, 79, 171, 172, 176, 183, 193, 194, 198, 205, 216, 219, 222, 229, 244, 247, 250, 251, 257, 343, 346, 347, 354, 446, 449, 450, 458], "panic": [4, 70, 71, 81, 86, 131, 139, 175, 182, 185, 186, 197, 204, 208, 209, 219, 221, 222, 228, 235, 236, 247, 249, 250, 256, 300, 334, 346, 347, 356, 361, 403, 411, 449, 450, 460, 465, 510, 518], "stuck": [4, 70, 219, 221, 247, 249, 346, 411, 449], "backtrac": [4, 53], "until": [4, 12, 18, 19, 20, 22, 25, 29, 33, 34, 36, 41, 42, 45, 46, 47, 48, 71, 74, 79, 80, 81, 87, 93, 99, 119, 122, 125, 126, 133, 134, 143, 144, 145, 147, 151, 153, 155, 157, 160, 162, 163, 176, 177, 184, 186, 198, 199, 206, 209, 222, 223, 232, 236, 250, 251, 263, 269, 289, 294, 302, 303, 312, 313, 314, 316, 320, 322, 324, 326, 329, 331, 332, 333, 334, 347, 350, 354, 355, 356, 362, 368, 374, 394, 399, 405, 406, 415, 416, 417, 419, 423, 425, 427, 429, 432, 434, 435, 450, 453, 458, 459, 460, 466, 472, 478, 498, 501, 504, 505, 512, 513, 522, 523, 524, 526, 530, 532, 534, 536, 539, 541, 542, 549, 552, 557], "deadman": [4, 47, 71, 86, 139, 176, 198, 221, 222, 228, 249, 250, 256, 347, 361, 411, 450, 465, 518], "timer": [4, 47, 86, 155, 160, 176, 228, 256, 361, 427, 465, 534, 539], "expir": [4, 32, 41, 42, 47, 71, 176, 198, 219, 222, 247, 250, 347, 450], "tunabl": [4, 47, 48, 49, 50, 54, 70, 71, 78, 127, 176, 198, 206, 219, 222, 232, 247, 250, 298, 346, 347, 353, 449, 450, 457, 506], "interfac": [4, 8, 9, 11, 12, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 36, 47, 51, 53, 71, 82, 96, 104, 106, 124, 127, 163, 184, 206, 209, 231, 232, 236, 266, 274, 276, 293, 295, 332, 357, 371, 379, 381, 398, 400, 435, 450, 461, 475, 483, 485, 503, 506, 542], "consum": [4, 18, 19, 20, 33, 34, 36, 46, 47, 53, 65, 71, 74, 77, 78, 80, 96, 106, 107, 124, 127, 139, 163, 176, 184, 198, 206, 209, 222, 232, 236, 250, 266, 276, 277, 293, 295, 297, 298, 308, 332, 333, 347, 350, 352, 353, 355, 371, 381, 382, 398, 400, 411, 435, 444, 450, 453, 456, 457, 459, 475, 485, 486, 503, 506, 518, 542], "run": [4, 7, 9, 10, 12, 18, 19, 20, 22, 25, 27, 32, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 64, 
65, 67, 70, 71, 73, 78, 79, 80, 81, 82, 85, 86, 87, 90, 92, 93, 99, 101, 102, 104, 108, 109, 110, 114, 119, 120, 122, 123, 126, 129, 131, 133, 139, 143, 144, 145, 154, 155, 157, 158, 160, 161, 163, 164, 165, 166, 168, 171, 172, 174, 175, 176, 177, 178, 181, 183, 184, 185, 186, 189, 191, 193, 194, 196, 197, 198, 199, 200, 203, 204, 205, 206, 208, 209, 212, 214, 216, 219, 220, 221, 222, 223, 224, 227, 228, 229, 230, 231, 232, 235, 236, 240, 242, 244, 247, 248, 249, 250, 251, 252, 255, 256, 257, 260, 262, 263, 269, 271, 272, 274, 278, 279, 280, 284, 289, 290, 292, 298, 300, 302, 312, 313, 314, 323, 326, 327, 329, 332, 333, 334, 335, 341, 343, 346, 347, 349, 353, 354, 355, 356, 357, 360, 361, 362, 365, 367, 368, 374, 376, 377, 379, 383, 384, 385, 389, 394, 395, 397, 403, 405, 411, 415, 416, 417, 426, 429, 430, 432, 435, 436, 437, 438, 443, 444, 446, 449, 450, 452, 457, 458, 459, 460, 461, 464, 465, 466, 469, 471, 472, 478, 480, 481, 483, 487, 488, 489, 493, 498, 499, 501, 502, 505, 508, 510, 512, 518, 522, 523, 524, 533, 534, 536, 537, 539, 540, 542, 543, 544, 545, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "daemon": [4, 5, 22, 33, 34, 36, 47, 71, 76, 81, 87, 102, 164, 183, 205, 209, 229, 230, 236, 257, 272, 334, 347, 356, 362, 377, 436, 450, 455, 460, 466, 481, 543], "zed": [4, 5, 11, 18, 19, 20, 34, 35, 36, 37, 41, 42, 71, 81, 83, 102, 139, 163, 175, 179, 197, 198, 201, 209, 221, 222, 225, 230, 236, 249, 250, 253, 272, 308, 332, 334, 347, 356, 358, 377, 411, 435, 450, 460, 462, 481, 518, 542], "userland": [4, 48, 104, 127, 163, 231, 232, 236, 274, 295, 332, 379, 400, 435, 483, 506, 542], "listen": [4, 184, 206, 232, 295], "them": [4, 8, 10, 16, 18, 19, 20, 21, 22, 25, 27, 31, 33, 34, 35, 36, 37, 41, 42, 44, 46, 47, 48, 53, 54, 62, 65, 70, 71, 74, 76, 77, 78, 79, 80, 81, 86, 91, 92, 93, 107, 108, 109, 110, 112, 114, 117, 127, 132, 145, 163, 168, 176, 184, 186, 189, 198, 206, 209, 212, 219, 222, 223, 230, 232, 236, 240, 247, 250, 251, 272, 280, 284, 295, 297, 298, 332, 333, 339, 346, 347, 350, 352, 353, 354, 355, 385, 389, 400, 435, 441, 444, 449, 450, 453, 455, 456, 457, 458, 459, 460, 465, 470, 471, 472, 486, 487, 488, 489, 491, 493, 496, 506, 511, 524, 542, 557], "extens": [4, 8, 11, 47, 48, 71, 184, 198, 222, 250, 347, 450], "shell": [4, 25, 31, 65, 82, 171, 178, 193, 200, 224, 252, 357, 444, 461], "script": [4, 8, 9, 12, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 35, 37, 41, 42, 46, 65, 78, 87, 95, 98, 99, 100, 104, 115, 119, 127, 129, 139, 141, 145, 147, 156, 158, 162, 163, 183, 184, 186, 205, 206, 209, 229, 230, 231, 232, 236, 257, 265, 268, 269, 270, 272, 274, 285, 289, 295, 308, 310, 314, 316, 325, 327, 331, 332, 353, 362, 370, 373, 374, 375, 379, 390, 394, 400, 411, 413, 417, 419, 428, 430, 434, 435, 444, 457, 466, 474, 477, 478, 479, 483, 494, 498, 506, 508, 518, 520, 524, 526, 535, 537, 541, 542], "program": [4, 14, 28, 46, 47, 71, 77, 81, 83, 86, 127, 131, 171, 180, 185, 186, 193, 202, 208, 209, 222, 225, 226, 232, 235, 236, 250, 253, 254, 295, 300, 334, 347, 356, 358, 400, 403, 450, 456, 460, 462, 465, 506, 510, 555], "subscrib": 4, "take": [4, 5, 10, 12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 49, 53, 70, 71, 73, 74, 77, 78, 79, 80, 81, 88, 95, 98, 102, 104, 110, 114, 115, 118, 127, 131, 132, 134, 143, 145, 148, 149, 161, 163, 174, 176, 177, 184, 186, 196, 198, 199, 206, 208, 209, 219, 220, 222, 223, 231, 232, 235, 236, 247, 248, 250, 251, 265, 268, 274, 280, 284, 285, 295, 297, 298, 300, 303, 312, 317, 318, 
332, 333, 334, 346, 347, 349, 350, 352, 353, 354, 355, 356, 370, 373, 377, 379, 385, 389, 390, 400, 403, 406, 415, 420, 421, 433, 435, 449, 450, 452, 453, 456, 457, 458, 459, 460, 467, 474, 477, 481, 483, 489, 493, 494, 497, 506, 510, 511, 513, 522, 524, 527, 528, 540, 542, 547, 558, 559, 560], "action": [4, 5, 41, 42, 43, 47, 62, 71, 80, 129, 143, 150, 163, 186, 198, 209, 222, 236, 250, 319, 332, 333, 347, 355, 422, 435, 441, 450, 459, 508, 522, 529, 542, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "usual": [4, 5, 46, 47, 48, 50, 53, 61, 71, 76, 105, 108, 109, 139, 175, 180, 184, 197, 202, 206, 221, 226, 232, 239, 249, 254, 275, 278, 279, 338, 347, 380, 383, 384, 411, 440, 450, 455, 484, 487, 488, 518, 557], "instal": [4, 9, 10, 12, 13, 32, 39, 43, 47, 48, 57, 71, 79, 81, 82, 87, 178, 183, 186, 200, 205, 209, 224, 229, 236, 250, 252, 257, 334, 347, 354, 356, 357, 362, 450, 458, 460, 461, 466], "etc": [4, 8, 14, 16, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 41, 42, 43, 47, 57, 62, 64, 66, 70, 71, 73, 74, 77, 78, 79, 81, 82, 84, 85, 86, 110, 114, 127, 130, 139, 145, 163, 168, 170, 174, 175, 176, 178, 180, 181, 182, 184, 189, 191, 192, 196, 197, 198, 200, 202, 203, 204, 206, 207, 209, 212, 214, 215, 219, 220, 221, 222, 224, 226, 227, 228, 232, 234, 236, 240, 242, 243, 247, 248, 249, 250, 252, 254, 255, 256, 280, 284, 295, 297, 298, 299, 314, 332, 339, 341, 342, 346, 347, 349, 350, 352, 353, 354, 356, 357, 359, 360, 361, 385, 389, 400, 402, 411, 417, 435, 441, 443, 445, 449, 450, 452, 453, 456, 457, 458, 460, 461, 463, 464, 465, 489, 493, 506, 509, 518, 524, 542], "d": [4, 5, 8, 12, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 53, 64, 65, 66, 67, 73, 78, 79, 80, 81, 85, 86, 87, 88, 93, 95, 98, 100, 102, 104, 105, 108, 109, 110, 114, 115, 118, 127, 131, 134, 136, 143, 145, 147, 158, 160, 162, 163, 165, 166, 170, 172, 174, 176, 181, 182, 183, 184, 185, 186, 191, 192, 194, 196, 198, 203, 204, 205, 206, 208, 209, 214, 215, 216, 219, 220, 222, 227, 228, 229, 230, 231, 232, 235, 236, 237, 242, 243, 244, 248, 250, 255, 256, 257, 258, 263, 265, 268, 270, 272, 274, 275, 278, 279, 280, 284, 285, 288, 295, 298, 300, 303, 305, 312, 314, 316, 327, 329, 331, 332, 333, 335, 336, 341, 342, 343, 349, 353, 354, 355, 356, 360, 361, 362, 363, 368, 370, 373, 375, 377, 379, 380, 383, 384, 385, 389, 390, 393, 400, 403, 406, 408, 415, 417, 419, 430, 432, 434, 435, 437, 438, 443, 444, 445, 446, 452, 457, 458, 459, 460, 464, 465, 466, 467, 472, 474, 477, 479, 481, 483, 484, 487, 488, 489, 493, 494, 497, 506, 510, 513, 515, 522, 524, 526, 537, 539, 541, 542, 544, 545, 547, 549, 552], "sh": [4, 7, 9, 10, 12, 14, 16, 25, 27, 31, 34, 35, 37, 41, 42, 43, 74, 102, 230, 272, 350, 377, 453, 481], "histori": [4, 47, 71, 83, 86, 93, 102, 110, 112, 114, 117, 127, 130, 158, 161, 163, 176, 182, 184, 186, 198, 204, 206, 209, 222, 228, 232, 236, 250, 253, 256, 280, 284, 295, 299, 327, 330, 332, 347, 358, 361, 377, 385, 389, 400, 402, 430, 433, 435, 450, 462, 465, 472, 481, 489, 491, 493, 496, 506, 509, 537, 540, 542], "begin": [4, 8, 12, 46, 47, 73, 76, 78, 79, 80, 81, 88, 102, 118, 133, 136, 144, 154, 155, 163, 165, 166, 174, 184, 186, 187, 196, 206, 209, 210, 220, 223, 230, 232, 236, 237, 248, 251, 258, 272, 288, 298, 302, 305, 313, 323, 324, 332, 333, 336, 349, 353, 354, 355, 363, 377, 393, 405, 408, 416, 426, 427, 435, 452, 455, 457, 458, 459, 460, 467, 481, 497, 512, 515, 523, 533, 534, 542, 544, 545, 550, 
555], "These": [4, 5, 8, 9, 11, 12, 21, 26, 32, 35, 37, 46, 47, 48, 53, 56, 70, 71, 74, 76, 78, 79, 80, 81, 87, 88, 110, 114, 118, 125, 132, 139, 141, 145, 147, 156, 157, 158, 161, 162, 163, 171, 175, 176, 177, 183, 184, 186, 193, 197, 198, 199, 205, 206, 209, 219, 221, 222, 223, 229, 232, 236, 247, 249, 250, 251, 257, 258, 280, 284, 288, 294, 298, 301, 308, 310, 314, 316, 325, 326, 327, 330, 331, 332, 333, 334, 346, 347, 350, 353, 354, 355, 356, 362, 363, 385, 389, 393, 399, 404, 411, 413, 417, 419, 428, 429, 430, 433, 434, 435, 449, 450, 453, 455, 457, 458, 459, 460, 466, 467, 489, 493, 497, 504, 511, 518, 520, 524, 526, 535, 536, 537, 540, 541, 542, 557, 561], "ram": [4, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 71, 77, 176, 184, 198, 206, 222, 232, 250, 297, 347, 352, 450, 456], "limit": [4, 5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 45, 46, 47, 48, 49, 50, 53, 62, 67, 70, 71, 76, 77, 78, 79, 81, 86, 90, 95, 98, 100, 101, 104, 110, 114, 115, 120, 129, 168, 172, 176, 177, 182, 184, 189, 194, 198, 199, 204, 206, 212, 216, 219, 222, 223, 228, 231, 232, 240, 244, 247, 250, 251, 256, 260, 265, 268, 270, 271, 274, 280, 284, 285, 290, 298, 339, 343, 346, 347, 353, 354, 361, 365, 370, 373, 375, 376, 379, 385, 389, 390, 395, 441, 446, 449, 450, 455, 456, 457, 458, 460, 465, 469, 474, 477, 479, 480, 483, 489, 493, 494, 499, 508, 555], "valu": [4, 5, 11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 49, 53, 54, 61, 64, 65, 67, 70, 71, 73, 74, 76, 78, 79, 80, 81, 85, 86, 87, 88, 90, 91, 92, 95, 98, 99, 100, 101, 104, 105, 108, 109, 110, 114, 115, 117, 118, 119, 120, 122, 125, 126, 127, 129, 130, 131, 132, 133, 136, 139, 141, 143, 145, 147, 151, 153, 156, 157, 158, 162, 163, 164, 165, 166, 172, 174, 175, 176, 177, 181, 182, 183, 184, 186, 191, 194, 196, 197, 198, 199, 203, 204, 205, 206, 207, 208, 209, 214, 216, 219, 220, 221, 222, 223, 227, 228, 229, 231, 232, 234, 235, 236, 239, 242, 244, 247, 248, 249, 250, 251, 255, 256, 257, 260, 261, 262, 265, 268, 269, 270, 271, 274, 275, 278, 279, 280, 284, 285, 287, 289, 290, 294, 295, 298, 299, 300, 301, 302, 305, 310, 312, 314, 316, 320, 322, 325, 326, 327, 331, 332, 333, 334, 338, 341, 343, 346, 347, 349, 350, 353, 354, 355, 356, 360, 361, 362, 365, 366, 367, 370, 373, 374, 375, 376, 379, 380, 383, 384, 385, 389, 390, 392, 394, 395, 399, 400, 402, 403, 404, 405, 408, 411, 413, 415, 417, 419, 423, 425, 428, 429, 430, 434, 435, 436, 440, 443, 444, 446, 449, 450, 452, 453, 455, 457, 458, 459, 460, 464, 465, 466, 467, 469, 470, 471, 474, 477, 478, 479, 480, 483, 484, 487, 488, 489, 493, 494, 496, 497, 498, 499, 501, 504, 505, 506, 508, 509, 510, 511, 512, 515, 518, 520, 522, 524, 526, 530, 532, 535, 536, 537, 541, 542, 543, 544, 545, 557], "zfs_event_len_max": 4, "throttl": [4, 45, 46, 47, 49, 71, 176, 198, 222, 250, 347, 450], "prevent": [4, 26, 44, 46, 47, 48, 53, 70, 71, 77, 78, 79, 80, 81, 97, 111, 127, 136, 163, 176, 184, 186, 198, 199, 206, 209, 219, 222, 223, 232, 236, 247, 250, 251, 267, 281, 295, 297, 298, 305, 332, 333, 334, 346, 347, 352, 353, 354, 355, 356, 372, 386, 400, 408, 435, 449, 450, 456, 457, 458, 459, 460, 476, 490, 506, 515, 542, 551, 553, 557], "overconsumpt": 4, "resourc": [4, 10, 58, 59, 77, 78, 87, 183, 184, 205, 206, 229, 232, 257, 297, 298, 352, 353, 362, 456, 457, 466], "v": [4, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 48, 49, 57, 61, 62, 64, 67, 71, 78, 84, 86, 87, 92, 93, 103, 108, 109, 110, 114, 121, 123, 127, 128, 132, 139, 145, 147, 155, 158, 161, 
163, 165, 166, 168, 171, 172, 175, 176, 180, 182, 183, 184, 186, 187, 189, 191, 193, 194, 197, 198, 202, 204, 205, 206, 209, 210, 212, 214, 216, 221, 222, 226, 228, 229, 232, 236, 237, 239, 240, 242, 244, 249, 250, 254, 256, 257, 262, 263, 273, 278, 279, 280, 284, 291, 292, 295, 296, 298, 308, 314, 316, 327, 330, 332, 334, 335, 336, 338, 339, 341, 343, 347, 353, 356, 359, 361, 362, 367, 368, 378, 383, 384, 385, 389, 396, 397, 400, 401, 411, 417, 419, 430, 433, 435, 437, 438, 440, 441, 443, 446, 450, 457, 463, 465, 466, 471, 472, 482, 487, 488, 489, 493, 500, 502, 506, 507, 511, 518, 524, 526, 534, 537, 540, 542, 544, 545, 554, 556, 559, 560], "content": [4, 14, 16, 25, 31, 39, 44, 47, 50, 61, 71, 77, 78, 79, 80, 86, 91, 94, 108, 109, 113, 127, 131, 132, 145, 163, 176, 177, 180, 182, 184, 185, 186, 198, 199, 202, 204, 206, 208, 209, 222, 223, 226, 228, 232, 235, 236, 237, 250, 251, 254, 256, 264, 278, 279, 295, 297, 298, 300, 332, 333, 336, 338, 347, 352, 353, 354, 355, 361, 369, 383, 384, 400, 403, 435, 440, 450, 456, 457, 458, 459, 465, 470, 473, 487, 488, 492, 506, 510, 511, 524, 542], "verbos": [4, 25, 31, 62, 64, 67, 86, 87, 92, 93, 104, 108, 109, 110, 114, 128, 145, 147, 155, 158, 165, 166, 168, 171, 172, 180, 182, 183, 184, 186, 187, 189, 191, 193, 194, 202, 204, 205, 206, 209, 210, 212, 214, 216, 226, 228, 229, 231, 232, 236, 237, 240, 242, 244, 254, 256, 257, 262, 263, 274, 278, 279, 280, 284, 296, 314, 316, 327, 335, 336, 339, 341, 343, 361, 362, 367, 368, 379, 383, 384, 385, 389, 401, 417, 419, 430, 437, 438, 441, 443, 446, 465, 466, 471, 472, 483, 487, 488, 489, 493, 507, 524, 526, 534, 537, 544, 545], "subject": [4, 12, 46, 48, 71, 110, 114, 161, 176, 184, 198, 222, 250, 280, 284, 347, 385, 389, 433, 450, 489, 493, 540], "time": [4, 5, 7, 8, 9, 11, 12, 13, 14, 16, 18, 19, 20, 21, 22, 25, 31, 32, 33, 35, 37, 39, 41, 42, 47, 48, 49, 50, 53, 61, 64, 65, 67, 70, 71, 77, 78, 79, 80, 81, 86, 87, 88, 89, 90, 92, 94, 96, 100, 101, 102, 106, 108, 109, 110, 113, 114, 117, 118, 120, 124, 125, 127, 129, 131, 133, 139, 143, 145, 147, 155, 157, 158, 162, 163, 164, 171, 172, 175, 176, 177, 182, 183, 184, 185, 186, 191, 193, 194, 197, 198, 199, 204, 205, 206, 208, 209, 214, 216, 219, 221, 222, 223, 228, 229, 232, 235, 236, 239, 242, 244, 247, 249, 250, 251, 256, 257, 258, 259, 260, 262, 264, 266, 270, 271, 276, 278, 279, 280, 283, 284, 287, 288, 290, 293, 294, 295, 297, 298, 300, 312, 314, 316, 324, 326, 327, 331, 332, 333, 334, 338, 341, 343, 346, 347, 352, 353, 354, 355, 356, 361, 362, 363, 364, 365, 367, 369, 371, 375, 376, 377, 381, 383, 384, 385, 388, 389, 392, 393, 395, 398, 399, 400, 403, 411, 415, 417, 419, 427, 429, 430, 434, 435, 436, 440, 443, 444, 446, 449, 450, 456, 457, 458, 459, 460, 465, 466, 467, 468, 469, 471, 473, 475, 479, 480, 481, 485, 487, 488, 489, 492, 493, 496, 497, 499, 503, 504, 506, 508, 510, 518, 522, 524, 526, 534, 536, 537, 541, 542, 543, 555, 561], "class": [4, 45, 47, 50, 67, 71, 78, 79, 80, 87, 176, 183, 198, 205, 222, 223, 229, 232, 236, 250, 251, 257, 298, 333, 343, 347, 353, 354, 355, 362, 446, 450, 457, 458, 459, 466], "identifi": [4, 14, 43, 46, 47, 48, 53, 66, 67, 70, 73, 78, 79, 80, 81, 85, 86, 87, 93, 96, 99, 105, 106, 119, 122, 124, 126, 127, 139, 143, 150, 163, 170, 174, 175, 177, 181, 182, 183, 184, 186, 192, 194, 196, 197, 199, 203, 204, 205, 206, 209, 215, 216, 219, 220, 221, 223, 227, 228, 229, 232, 236, 243, 244, 247, 248, 249, 251, 255, 256, 257, 263, 266, 269, 275, 276, 289, 293, 295, 298, 312, 319, 332, 333, 334, 342, 343, 346, 349, 353, 
354, 355, 356, 360, 361, 362, 368, 371, 374, 380, 381, 394, 398, 400, 411, 415, 422, 435, 445, 446, 449, 452, 457, 458, 459, 460, 464, 465, 466, 472, 475, 478, 484, 485, 498, 501, 503, 505, 506, 518, 522, 529, 542, 547, 550, 552, 558], "filter": [4, 47, 70, 145, 187, 210, 236, 237, 314, 336, 417, 449, 524], "commonli": [4, 53, 79, 110, 114, 184, 206, 232, 280, 284, 354, 385, 389, 458, 489, 493], "seen": [4, 28, 41, 42, 71, 222, 250, 347, 450, 556], "relat": [4, 11, 17, 18, 19, 20, 22, 25, 29, 31, 33, 34, 35, 36, 37, 39, 41, 42, 47, 53, 55, 71, 86, 90, 101, 120, 182, 198, 204, 222, 228, 232, 250, 256, 260, 271, 290, 347, 361, 365, 376, 395, 450, 465, 469, 480, 499], "manag": [4, 9, 12, 14, 16, 18, 19, 20, 22, 25, 26, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 48, 53, 57, 76, 77, 78, 79, 82, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 181, 184, 186, 203, 204, 206, 207, 209, 223, 227, 228, 231, 232, 234, 236, 251, 252, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 352, 353, 354, 357, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 455, 456, 457, 458, 461, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545], "sysev": 4, "f": [4, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 61, 67, 74, 78, 79, 81, 84, 86, 87, 93, 94, 102, 103, 104, 108, 109, 110, 112, 113, 114, 121, 127, 130, 131, 132, 133, 136, 137, 139, 140, 143, 146, 148, 149, 153, 163, 171, 172, 175, 180, 182, 183, 184, 185, 186, 193, 194, 197, 199, 202, 204, 205, 206, 208, 209, 216, 221, 223, 226, 228, 229, 230, 231, 232, 235, 236, 239, 244, 249, 251, 254, 256, 257, 263, 264, 272, 273, 274, 278, 279, 280, 282, 283, 284, 291, 294, 295, 298, 299, 300, 301, 302, 305, 306, 308, 309, 312, 315, 317, 318, 322, 332, 334, 338, 343, 350, 353, 354, 356, 359, 361, 362, 368, 369, 377, 378, 379, 383, 384, 385, 387, 388, 389, 396, 400, 402, 403, 404, 405, 408, 409, 411, 412, 415, 418, 420, 421, 425, 435, 440, 446, 453, 457, 458, 460, 463, 465, 466, 472, 473, 481, 482, 483, 487, 488, 489, 491, 492, 493, 500, 506, 509, 510, 511, 512, 515, 516, 518, 519, 522, 525, 527, 528, 532, 542, 553, 558], "export": [4, 8, 9, 
14, 16, 17, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 47, 53, 71, 74, 76, 77, 78, 80, 81, 83, 86, 92, 95, 98, 100, 115, 116, 127, 136, 139, 143, 146, 150, 155, 159, 162, 163, 175, 176, 182, 184, 186, 197, 198, 204, 206, 209, 221, 222, 228, 232, 236, 249, 250, 253, 256, 262, 286, 295, 297, 298, 305, 312, 315, 319, 324, 328, 331, 332, 333, 334, 347, 350, 352, 353, 355, 356, 358, 361, 367, 391, 400, 408, 411, 415, 418, 422, 427, 431, 434, 435, 450, 453, 455, 456, 457, 459, 460, 462, 465, 471, 474, 477, 479, 494, 495, 506, 515, 518, 522, 525, 529, 534, 538, 541, 542, 548, 549, 550, 551, 554, 557, 558], "error": [4, 5, 11, 12, 14, 18, 19, 20, 22, 25, 29, 33, 34, 36, 41, 42, 43, 47, 53, 54, 62, 65, 67, 71, 76, 78, 79, 80, 81, 82, 86, 92, 104, 108, 109, 110, 114, 127, 131, 135, 136, 139, 143, 145, 151, 155, 158, 163, 168, 175, 176, 178, 182, 184, 185, 186, 189, 197, 198, 200, 204, 206, 208, 209, 212, 221, 222, 223, 224, 228, 231, 232, 235, 236, 240, 249, 250, 251, 252, 256, 262, 274, 278, 279, 280, 284, 295, 298, 300, 304, 305, 312, 314, 320, 324, 327, 332, 333, 334, 339, 343, 347, 353, 354, 355, 356, 357, 361, 367, 379, 383, 384, 385, 389, 400, 403, 407, 408, 411, 415, 417, 423, 427, 430, 435, 441, 444, 446, 450, 455, 457, 458, 459, 460, 461, 465, 471, 483, 487, 488, 489, 493, 506, 510, 514, 515, 518, 522, 524, 530, 534, 537, 542, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "ereport": [4, 71, 139, 175, 197, 221, 249, 250, 347, 411, 450, 518], "invalu": 4, "fault": [4, 46, 76, 80, 81, 82, 131, 139, 147, 148, 149, 155, 163, 175, 178, 185, 186, 197, 200, 208, 209, 221, 224, 235, 236, 249, 252, 300, 317, 318, 324, 332, 333, 334, 355, 356, 357, 403, 411, 420, 421, 427, 435, 455, 459, 460, 461, 510, 518, 526, 527, 528, 534, 542, 548, 549, 550, 551, 552, 553, 555, 556, 559, 560, 561], "variou": [4, 7, 18, 19, 20, 33, 34, 35, 36, 37, 41, 42, 48, 53, 61, 76, 78, 79, 80, 139, 175, 184, 197, 206, 221, 232, 236, 239, 249, 298, 333, 338, 353, 355, 411, 440, 455, 457, 459, 518], "layer": [4, 8, 11, 25, 46, 47, 48, 71, 80, 81, 139, 171, 175, 186, 193, 197, 209, 221, 236, 249, 250, 333, 334, 347, 355, 356, 411, 450, 459, 460, 518], "softwar": [4, 9, 41, 42, 44, 46, 48, 53, 57, 79, 108, 109, 127, 161, 163, 177, 184, 186, 199, 206, 209, 223, 232, 236, 251, 278, 279, 292, 295, 330, 332, 354, 383, 384, 400, 435, 458, 487, 488, 506, 540, 542, 556, 557], "deal": [4, 47, 48, 71, 176, 198, 222, 250, 347, 450], "simpl": [4, 47, 48, 53, 62, 73, 77, 79, 168, 174, 184, 189, 196, 206, 212, 220, 232, 240, 248, 297, 339, 349, 352, 354, 441, 452, 456, 458], "faulti": [4, 53, 54, 555], "could": [4, 11, 21, 22, 33, 46, 47, 48, 53, 54, 64, 70, 71, 78, 80, 81, 104, 125, 132, 135, 139, 145, 162, 163, 175, 183, 184, 186, 191, 197, 205, 206, 209, 214, 219, 221, 222, 229, 231, 232, 236, 242, 247, 249, 250, 257, 274, 294, 298, 304, 331, 332, 333, 334, 341, 346, 347, 353, 355, 356, 379, 399, 407, 411, 434, 435, 443, 449, 450, 457, 459, 460, 483, 504, 511, 514, 518, 524, 541, 542, 548, 549, 550, 551, 554, 561], "io": [4, 5, 46, 47, 48, 71, 76, 86, 131, 139, 175, 176, 184, 185, 197, 198, 206, 208, 209, 221, 222, 232, 235, 236, 249, 250, 297, 298, 300, 314, 320, 327, 347, 352, 353, 403, 411, 423, 430, 450, 455, 465, 510, 518, 548, 549, 550, 551, 552, 553, 554, 555, 557, 558, 559, 560, 561], "dure": [4, 7, 8, 9, 18, 19, 25, 41, 42, 43, 46, 47, 50, 53, 65, 67, 71, 78, 79, 80, 81, 108, 109, 112, 133, 151, 153, 155, 172, 176, 177, 184, 186, 191, 194, 198, 199, 206, 209, 214, 216, 222, 223, 230, 
232, 236, 242, 244, 250, 251, 272, 278, 279, 282, 298, 302, 320, 322, 324, 333, 334, 343, 347, 353, 354, 355, 356, 383, 384, 387, 405, 423, 425, 427, 444, 446, 450, 457, 458, 459, 460, 487, 488, 491, 512, 530, 532, 534, 548, 549, 551, 553, 556], "erport": 4, "checksum": [4, 5, 6, 14, 16, 25, 31, 34, 36, 46, 53, 54, 58, 59, 66, 71, 76, 78, 79, 80, 86, 88, 90, 95, 98, 101, 108, 109, 110, 114, 115, 118, 120, 127, 131, 133, 139, 143, 153, 155, 165, 166, 175, 176, 182, 184, 185, 186, 187, 197, 198, 199, 204, 206, 208, 209, 210, 221, 222, 223, 228, 232, 235, 236, 237, 249, 250, 251, 256, 258, 260, 271, 278, 279, 280, 284, 288, 290, 295, 298, 300, 302, 312, 322, 324, 333, 335, 336, 347, 353, 354, 355, 361, 363, 365, 376, 383, 384, 385, 389, 393, 395, 400, 403, 405, 411, 415, 425, 427, 437, 438, 445, 450, 455, 457, 458, 459, 465, 467, 469, 474, 477, 480, 487, 488, 489, 493, 494, 497, 499, 506, 510, 512, 518, 522, 532, 534, 544, 545, 555], "level": [4, 5, 7, 8, 34, 36, 47, 50, 53, 57, 71, 76, 77, 78, 79, 80, 104, 127, 131, 132, 136, 139, 145, 151, 163, 165, 166, 175, 176, 184, 185, 186, 197, 198, 206, 208, 209, 221, 222, 223, 231, 232, 235, 236, 249, 250, 251, 274, 295, 297, 298, 300, 301, 305, 314, 320, 332, 333, 347, 352, 353, 354, 355, 379, 400, 403, 404, 408, 411, 417, 423, 435, 450, 455, 456, 457, 458, 459, 483, 506, 510, 511, 515, 518, 524, 530, 542, 544, 545, 548, 550, 555, 562], "reflect": [4, 53, 78, 85, 86, 88, 118, 181, 182, 184, 203, 204, 206, 227, 228, 232, 255, 256, 258, 288, 298, 353, 360, 361, 363, 393, 457, 464, 465, 467, 497], "counter": [4, 47, 71, 221, 222, 249, 250, 347, 411, 450], "statu": [4, 5, 11, 12, 13, 14, 35, 37, 43, 46, 47, 53, 58, 59, 71, 79, 80, 82, 83, 87, 104, 127, 129, 133, 134, 135, 142, 143, 145, 147, 151, 154, 155, 162, 163, 164, 176, 183, 184, 186, 198, 205, 206, 209, 222, 229, 231, 232, 236, 250, 253, 257, 274, 295, 303, 304, 311, 312, 314, 316, 320, 323, 324, 331, 332, 333, 347, 354, 355, 357, 358, 362, 379, 400, 406, 407, 414, 415, 417, 419, 423, 426, 427, 434, 435, 436, 450, 458, 459, 461, 462, 466, 483, 506, 508, 513, 514, 521, 522, 524, 526, 530, 533, 534, 541, 542, 543, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "If": [4, 5, 8, 9, 10, 11, 12, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 40, 41, 42, 43, 45, 46, 47, 48, 49, 50, 53, 54, 62, 65, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 88, 90, 91, 92, 93, 95, 96, 97, 98, 100, 101, 102, 103, 104, 105, 106, 108, 109, 110, 111, 112, 114, 115, 118, 120, 121, 123, 124, 125, 127, 129, 130, 133, 135, 138, 139, 143, 144, 145, 146, 147, 148, 149, 151, 153, 154, 155, 158, 160, 161, 162, 163, 165, 166, 168, 174, 176, 177, 181, 182, 183, 184, 186, 189, 196, 198, 199, 203, 204, 205, 206, 207, 209, 212, 219, 220, 221, 222, 223, 227, 228, 229, 230, 231, 232, 234, 236, 240, 247, 248, 249, 250, 251, 255, 256, 257, 258, 260, 261, 262, 263, 265, 266, 267, 268, 270, 271, 272, 273, 274, 275, 276, 278, 279, 280, 281, 282, 284, 285, 288, 290, 291, 292, 293, 294, 295, 297, 298, 299, 302, 304, 307, 312, 313, 314, 315, 316, 317, 318, 320, 322, 323, 324, 327, 329, 330, 331, 332, 333, 334, 346, 347, 349, 350, 352, 353, 354, 355, 356, 357, 359, 360, 361, 363, 365, 366, 367, 368, 370, 371, 372, 373, 375, 376, 377, 378, 379, 380, 381, 383, 384, 385, 386, 387, 389, 390, 393, 395, 396, 397, 398, 399, 400, 402, 405, 407, 410, 411, 415, 416, 417, 418, 419, 420, 421, 423, 425, 426, 427, 430, 432, 433, 434, 435, 441, 444, 449, 450, 452, 453, 455, 456, 457, 
458, 459, 460, 461, 463, 464, 465, 467, 469, 470, 471, 472, 474, 475, 476, 477, 479, 480, 481, 482, 483, 484, 485, 487, 488, 489, 490, 491, 493, 494, 497, 499, 500, 502, 503, 504, 506, 508, 509, 512, 514, 517, 518, 522, 523, 524, 525, 526, 527, 528, 530, 532, 533, 534, 537, 539, 540, 541, 542, 544, 545, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "correspond": [4, 46, 47, 48, 86, 87, 90, 96, 101, 104, 106, 110, 114, 117, 120, 124, 136, 139, 183, 184, 186, 205, 206, 209, 221, 229, 231, 232, 236, 249, 257, 260, 266, 271, 274, 276, 280, 284, 287, 290, 293, 305, 362, 365, 371, 376, 379, 381, 385, 389, 392, 395, 398, 408, 411, 465, 466, 469, 475, 480, 483, 485, 489, 493, 496, 499, 503, 515, 518], "output": [4, 8, 12, 14, 16, 19, 21, 25, 28, 29, 31, 35, 37, 41, 42, 46, 47, 54, 61, 62, 65, 70, 71, 78, 79, 81, 86, 92, 94, 95, 96, 97, 98, 100, 102, 104, 105, 106, 110, 111, 114, 115, 124, 127, 130, 145, 158, 163, 164, 165, 166, 168, 171, 174, 180, 182, 184, 186, 187, 189, 193, 196, 202, 204, 206, 209, 210, 212, 219, 220, 226, 228, 230, 231, 232, 236, 237, 239, 240, 247, 250, 254, 256, 262, 264, 265, 266, 267, 268, 270, 272, 274, 276, 280, 281, 284, 285, 293, 298, 314, 327, 332, 334, 335, 336, 338, 339, 346, 347, 353, 356, 361, 367, 369, 370, 371, 372, 373, 375, 377, 379, 381, 385, 386, 389, 390, 398, 400, 402, 417, 427, 430, 435, 436, 437, 438, 440, 441, 444, 449, 450, 457, 458, 460, 465, 471, 473, 474, 475, 476, 477, 479, 481, 483, 484, 485, 489, 490, 493, 494, 503, 506, 509, 524, 537, 542, 543, 544, 545, 553], "describ": [5, 10, 11, 12, 27, 32, 43, 45, 47, 53, 65, 71, 74, 80, 81, 84, 86, 95, 98, 100, 103, 104, 108, 109, 115, 121, 127, 132, 136, 139, 163, 175, 176, 180, 182, 184, 186, 197, 198, 202, 204, 206, 209, 221, 222, 226, 228, 231, 232, 236, 249, 250, 254, 256, 265, 268, 270, 273, 274, 278, 279, 285, 291, 295, 301, 305, 333, 334, 347, 350, 353, 355, 356, 359, 361, 370, 373, 375, 378, 379, 383, 384, 390, 396, 400, 404, 408, 411, 435, 444, 450, 453, 459, 460, 463, 465, 474, 477, 479, 482, 483, 487, 488, 494, 500, 506, 511, 515, 518, 542, 548, 557], "function": [5, 8, 10, 11, 12, 18, 19, 20, 22, 28, 32, 33, 41, 42, 45, 46, 47, 48, 53, 55, 57, 62, 64, 66, 67, 71, 74, 78, 79, 80, 90, 99, 101, 104, 119, 120, 130, 131, 139, 168, 172, 175, 176, 177, 184, 185, 186, 189, 194, 197, 198, 199, 206, 208, 209, 212, 216, 219, 221, 222, 223, 231, 232, 235, 236, 240, 244, 249, 250, 251, 260, 271, 274, 290, 298, 300, 333, 339, 341, 343, 347, 350, 353, 354, 355, 365, 374, 376, 379, 394, 395, 402, 403, 411, 441, 443, 445, 446, 450, 453, 457, 458, 459, 469, 478, 480, 483, 498, 499, 509, 510, 518, 548, 549, 550, 551, 555, 557, 561], "been": [5, 8, 9, 11, 12, 18, 19, 20, 22, 24, 25, 26, 28, 30, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 50, 53, 65, 70, 71, 73, 78, 79, 80, 81, 87, 88, 90, 94, 99, 101, 104, 108, 109, 110, 114, 118, 119, 120, 122, 125, 126, 133, 135, 139, 145, 147, 153, 155, 158, 160, 162, 163, 174, 175, 176, 177, 183, 184, 186, 196, 197, 198, 199, 205, 206, 209, 219, 220, 221, 222, 223, 229, 231, 232, 236, 247, 248, 249, 250, 251, 257, 258, 260, 264, 269, 271, 274, 278, 279, 280, 284, 288, 289, 290, 294, 298, 304, 314, 322, 329, 331, 332, 333, 334, 346, 347, 349, 353, 354, 355, 356, 362, 363, 365, 369, 374, 376, 379, 383, 384, 385, 389, 393, 394, 395, 399, 407, 411, 417, 425, 427, 432, 434, 435, 444, 449, 450, 452, 457, 458, 459, 460, 466, 467, 469, 473, 478, 480, 483, 487, 488, 489, 493, 497, 498, 499, 501, 504, 505, 514, 518, 524, 526, 532, 534, 537, 539, 541, 542, 547, 
548, 549, 550, 552, 553, 554, 555, 557, 558, 561], "ad": [5, 7, 11, 17, 18, 19, 20, 22, 27, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 49, 53, 54, 65, 67, 71, 78, 79, 80, 81, 86, 87, 88, 108, 109, 118, 131, 132, 133, 138, 139, 145, 154, 158, 163, 172, 176, 183, 184, 185, 186, 194, 198, 205, 206, 208, 209, 216, 222, 229, 232, 235, 236, 244, 250, 257, 258, 278, 279, 288, 298, 300, 301, 307, 323, 332, 333, 334, 343, 347, 353, 354, 355, 356, 362, 363, 383, 384, 393, 403, 404, 410, 411, 426, 435, 444, 446, 450, 457, 458, 459, 460, 465, 466, 467, 487, 488, 497, 510, 511, 517, 518, 524, 533, 537, 542, 553], "variant": [5, 14, 16, 25, 31, 57, 71, 79, 80, 347, 354, 355, 450, 458, 459], "raidz": [5, 6, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 48, 53, 58, 59, 64, 67, 71, 79, 80, 81, 133, 136, 147, 148, 149, 151, 153, 155, 163, 172, 186, 191, 194, 198, 199, 209, 214, 216, 222, 223, 236, 242, 244, 250, 251, 302, 305, 317, 318, 320, 322, 324, 332, 333, 334, 341, 343, 347, 354, 355, 356, 405, 408, 420, 421, 423, 425, 427, 435, 443, 446, 450, 458, 459, 460, 512, 515, 526, 527, 528, 530, 532, 534, 542], "provid": [5, 7, 8, 9, 11, 12, 17, 18, 19, 20, 22, 23, 26, 32, 33, 34, 35, 36, 37, 39, 41, 42, 46, 47, 48, 53, 59, 65, 71, 73, 74, 77, 78, 79, 80, 81, 85, 86, 87, 88, 90, 101, 104, 108, 109, 110, 114, 118, 120, 127, 128, 130, 135, 155, 160, 162, 163, 164, 165, 166, 175, 182, 183, 184, 186, 197, 199, 204, 205, 206, 207, 209, 221, 222, 223, 228, 229, 231, 232, 234, 236, 248, 249, 250, 251, 255, 256, 257, 260, 271, 274, 278, 279, 280, 284, 290, 295, 296, 297, 298, 299, 324, 331, 332, 333, 334, 335, 347, 349, 350, 352, 353, 354, 355, 356, 360, 361, 362, 365, 376, 379, 383, 384, 385, 389, 395, 400, 401, 402, 407, 427, 434, 435, 436, 437, 438, 444, 450, 452, 453, 456, 457, 458, 459, 460, 464, 465, 466, 467, 469, 480, 483, 487, 488, 489, 493, 497, 499, 506, 507, 509, 514, 534, 539, 541, 542, 543, 544, 545, 548, 550, 555], "integr": [5, 8, 9, 11, 12, 14, 16, 22, 25, 28, 31, 46, 47, 48, 53, 57, 78, 79, 80, 184, 186, 206, 209, 232, 236, 298, 333, 353, 354, 355, 457, 458, 459], "hot": [5, 47, 79, 80, 136, 139, 151, 163, 186, 209, 236, 308, 320, 332, 333, 354, 355, 411, 423, 435, 458, 459, 515, 518, 530, 542, 548, 550], "faster": [5, 19, 20, 34, 35, 36, 37, 41, 42, 47, 48, 71, 78, 79, 80, 104, 131, 177, 184, 185, 199, 206, 208, 223, 231, 232, 235, 250, 251, 274, 298, 300, 347, 353, 354, 355, 379, 403, 450, 457, 458, 459, 483, 510], "resilv": [5, 43, 50, 71, 79, 80, 83, 90, 101, 120, 133, 139, 148, 149, 152, 153, 155, 157, 158, 162, 163, 175, 176, 186, 197, 198, 209, 221, 222, 223, 232, 236, 249, 250, 251, 253, 260, 271, 290, 302, 317, 318, 321, 322, 324, 326, 327, 331, 332, 347, 354, 355, 358, 365, 376, 395, 405, 411, 420, 421, 424, 425, 427, 429, 430, 434, 435, 450, 458, 459, 462, 469, 480, 499, 512, 518, 527, 528, 531, 532, 534, 536, 537, 541, 542, 548, 550, 555], "retain": [5, 71, 79, 80, 108, 109, 133, 143, 186, 206, 209, 232, 236, 250, 278, 279, 312, 347, 354, 355, 383, 384, 415, 450, 458, 459, 487, 488, 522], "benefit": [5, 34, 36, 46, 47, 48, 53, 71, 78, 79, 80, 81, 108, 109, 110, 114, 176, 184, 198, 206, 222, 232, 236, 250, 278, 279, 280, 284, 298, 334, 347, 353, 354, 355, 356, 383, 384, 385, 389, 450, 457, 458, 459, 460, 487, 488, 489, 493], "construct": [5, 47, 71, 78, 80, 86, 139, 168, 175, 182, 184, 189, 197, 204, 206, 212, 221, 222, 228, 232, 240, 249, 250, 256, 298, 339, 347, 353, 355, 361, 411, 450, 457, 459, 465, 518], "children": [5, 71, 74, 76, 78, 80, 90, 93, 95, 98, 99, 100, 101, 104, 105, 115, 119, 
120, 122, 126, 184, 206, 231, 232, 260, 263, 265, 268, 269, 270, 271, 274, 275, 285, 289, 290, 298, 347, 350, 353, 355, 365, 368, 370, 373, 374, 375, 376, 379, 380, 390, 394, 395, 450, 453, 455, 457, 459, 469, 472, 474, 477, 478, 479, 480, 483, 484, 494, 498, 499, 501, 505], "order": [5, 9, 10, 12, 18, 19, 20, 21, 22, 32, 33, 34, 35, 36, 37, 41, 42, 46, 47, 50, 53, 71, 74, 78, 79, 80, 81, 86, 96, 100, 102, 106, 108, 109, 110, 113, 114, 124, 133, 139, 176, 177, 184, 186, 198, 199, 206, 209, 221, 222, 223, 228, 230, 232, 236, 249, 250, 251, 256, 266, 270, 272, 276, 278, 279, 280, 283, 284, 293, 298, 333, 334, 347, 350, 353, 354, 355, 356, 361, 371, 375, 377, 381, 383, 384, 385, 388, 389, 398, 411, 450, 453, 457, 458, 459, 460, 465, 475, 479, 481, 485, 487, 488, 489, 492, 493, 503, 518], "fulli": [5, 9, 12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 73, 78, 79, 80, 95, 98, 104, 110, 114, 115, 125, 127, 143, 174, 177, 184, 196, 199, 206, 220, 223, 231, 232, 236, 248, 251, 274, 280, 284, 294, 295, 298, 312, 333, 349, 353, 354, 355, 379, 385, 389, 399, 400, 415, 452, 457, 458, 459, 474, 477, 483, 489, 493, 494, 504, 506, 522], "util": [5, 8, 9, 14, 16, 22, 25, 27, 31, 33, 35, 37, 38, 41, 42, 46, 47, 53, 57, 66, 67, 71, 78, 79, 80, 86, 96, 106, 108, 109, 124, 127, 128, 130, 147, 163, 165, 166, 170, 171, 176, 182, 184, 186, 187, 192, 193, 194, 198, 199, 204, 206, 207, 209, 210, 215, 216, 222, 223, 228, 232, 234, 236, 237, 239, 243, 244, 250, 251, 256, 266, 276, 278, 279, 293, 295, 296, 298, 299, 332, 335, 336, 342, 343, 347, 353, 354, 355, 361, 371, 381, 383, 384, 398, 400, 401, 402, 435, 437, 438, 445, 446, 450, 457, 458, 459, 465, 475, 485, 487, 488, 503, 506, 507, 509, 526, 542, 544, 545], "known": [5, 46, 47, 48, 49, 53, 54, 71, 74, 78, 80, 108, 109, 127, 155, 176, 184, 186, 198, 206, 209, 219, 222, 232, 236, 247, 250, 278, 279, 295, 298, 333, 347, 350, 353, 355, 383, 384, 400, 450, 453, 457, 459, 487, 488, 506, 534, 548, 549, 550, 551, 552, 555, 556, 557, 561], "declust": 5, "activ": [5, 18, 19, 20, 22, 26, 27, 32, 33, 34, 35, 36, 37, 41, 42, 45, 46, 47, 50, 71, 77, 78, 79, 80, 81, 86, 93, 100, 104, 110, 114, 125, 127, 137, 140, 143, 144, 145, 146, 162, 163, 176, 177, 182, 184, 186, 198, 199, 204, 206, 209, 222, 223, 228, 230, 231, 232, 236, 250, 251, 256, 263, 272, 274, 280, 284, 294, 295, 297, 298, 306, 312, 314, 315, 331, 332, 333, 334, 347, 352, 353, 354, 355, 356, 361, 368, 379, 385, 389, 399, 400, 409, 412, 415, 416, 417, 418, 434, 435, 450, 456, 457, 458, 459, 460, 465, 472, 479, 483, 489, 493, 504, 506, 516, 519, 522, 523, 524, 525, 541, 542, 547, 548, 549, 550, 551, 557, 558], "area": [5, 46, 48, 184, 206, 232, 262, 367], "research": [5, 46, 110, 114, 280, 284, 385, 389, 489, 493], "imag": [5, 14, 16, 18, 19, 20, 22, 23, 25, 28, 29, 31, 33, 34, 35, 36, 37, 41, 42, 43, 48], "below": [5, 7, 8, 9, 14, 15, 16, 18, 19, 20, 21, 22, 23, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 40, 41, 42, 43, 46, 47, 48, 62, 65, 71, 76, 78, 84, 86, 88, 102, 104, 118, 125, 139, 162, 168, 175, 176, 180, 182, 184, 189, 197, 198, 202, 204, 206, 212, 221, 222, 226, 228, 230, 231, 232, 240, 249, 250, 254, 256, 258, 272, 274, 288, 294, 298, 331, 339, 347, 353, 359, 361, 363, 377, 379, 393, 399, 411, 434, 441, 444, 450, 455, 457, 463, 465, 467, 481, 483, 497, 504, 518, 541, 548, 549, 550, 551, 553, 556, 557], "illustr": [5, 91, 92, 93, 107, 112, 117, 127, 184, 206, 232, 295, 400, 470, 471, 472, 486, 491, 496, 506], "differ": [5, 9, 10, 11, 12, 18, 19, 20, 21, 22, 33, 34, 35, 36, 37, 41, 42, 
46, 47, 48, 49, 53, 54, 64, 67, 71, 76, 77, 78, 79, 80, 81, 90, 94, 101, 104, 105, 108, 109, 110, 114, 120, 127, 131, 133, 136, 139, 140, 153, 155, 160, 172, 175, 176, 177, 184, 186, 194, 197, 198, 199, 206, 208, 209, 216, 219, 221, 222, 223, 231, 232, 235, 236, 244, 247, 249, 250, 251, 260, 264, 271, 274, 275, 278, 279, 280, 284, 290, 295, 298, 300, 305, 309, 322, 324, 329, 333, 334, 341, 343, 347, 353, 354, 355, 356, 365, 369, 376, 379, 380, 383, 384, 385, 389, 395, 400, 403, 408, 411, 412, 425, 427, 432, 443, 446, 450, 455, 456, 457, 458, 459, 460, 469, 473, 480, 483, 484, 487, 488, 489, 493, 499, 506, 510, 515, 518, 519, 532, 534, 539, 547, 549, 550, 552, 555], "addition": [5, 7, 9, 34, 36, 37, 47, 53, 65, 71, 73, 81, 87, 102, 108, 109, 127, 176, 183, 184, 186, 196, 198, 205, 206, 209, 220, 222, 229, 230, 232, 236, 248, 250, 257, 272, 278, 279, 334, 347, 349, 356, 362, 377, 383, 384, 444, 450, 452, 460, 466, 481, 487, 488, 506], "must": [5, 8, 9, 11, 12, 14, 16, 18, 19, 20, 22, 25, 26, 27, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 45, 46, 47, 48, 50, 53, 54, 62, 65, 70, 71, 73, 76, 77, 78, 79, 80, 81, 86, 87, 88, 89, 95, 97, 98, 100, 102, 104, 107, 108, 109, 110, 111, 113, 114, 115, 118, 123, 127, 130, 136, 140, 146, 148, 149, 150, 151, 153, 155, 157, 168, 171, 174, 176, 183, 184, 186, 189, 193, 196, 198, 204, 205, 206, 207, 209, 212, 219, 220, 222, 223, 228, 229, 230, 231, 232, 234, 236, 240, 247, 248, 250, 256, 257, 258, 259, 265, 267, 268, 270, 272, 274, 277, 278, 279, 280, 281, 283, 284, 285, 288, 292, 295, 297, 298, 299, 305, 309, 315, 317, 318, 319, 320, 322, 326, 333, 334, 339, 346, 347, 349, 352, 353, 354, 355, 356, 361, 362, 363, 364, 370, 372, 373, 375, 377, 379, 382, 383, 384, 385, 386, 388, 389, 390, 393, 397, 400, 402, 408, 412, 418, 420, 421, 422, 423, 425, 429, 441, 444, 449, 450, 452, 455, 456, 457, 458, 459, 460, 465, 466, 467, 468, 474, 476, 477, 479, 481, 483, 486, 487, 488, 489, 490, 492, 493, 494, 497, 502, 506, 509, 515, 519, 525, 527, 528, 529, 530, 532, 534, 536, 549, 551, 553, 556, 557, 561], "shuffl": 5, "its": [5, 7, 8, 12, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 49, 50, 53, 62, 65, 67, 71, 74, 78, 79, 80, 81, 82, 86, 87, 88, 90, 93, 94, 95, 97, 98, 100, 101, 102, 103, 104, 105, 107, 108, 109, 110, 111, 114, 115, 117, 118, 120, 121, 127, 139, 143, 147, 148, 149, 163, 168, 172, 175, 176, 177, 178, 183, 184, 186, 189, 194, 197, 198, 199, 200, 204, 205, 206, 209, 212, 216, 221, 222, 223, 224, 228, 229, 230, 231, 232, 236, 240, 244, 249, 250, 251, 252, 256, 257, 258, 260, 265, 267, 268, 270, 271, 272, 273, 274, 275, 277, 278, 279, 281, 285, 288, 290, 291, 295, 298, 312, 317, 318, 332, 333, 334, 339, 343, 347, 350, 353, 354, 355, 356, 357, 361, 362, 363, 365, 370, 372, 373, 375, 376, 377, 378, 379, 380, 382, 383, 384, 385, 386, 389, 390, 393, 395, 396, 400, 411, 415, 420, 421, 435, 441, 444, 446, 450, 453, 457, 458, 459, 460, 461, 465, 466, 467, 469, 472, 473, 474, 476, 477, 479, 480, 481, 482, 483, 484, 486, 487, 488, 489, 490, 493, 494, 496, 497, 499, 500, 506, 518, 522, 526, 527, 528, 542, 547, 550, 553, 558], "child": [5, 22, 33, 47, 48, 77, 78, 90, 92, 95, 98, 101, 104, 107, 108, 109, 110, 113, 114, 115, 120, 127, 184, 206, 231, 232, 260, 271, 274, 277, 278, 279, 280, 283, 284, 290, 295, 297, 298, 352, 353, 365, 376, 379, 382, 383, 384, 385, 388, 389, 395, 400, 456, 457, 469, 471, 474, 477, 480, 483, 486, 487, 488, 489, 492, 493, 494, 499, 506], "wai": [5, 7, 8, 10, 11, 12, 18, 19, 20, 21, 22, 25, 27, 33, 34, 35, 36, 37, 41, 
42, 46, 47, 48, 53, 71, 73, 76, 77, 78, 81, 90, 101, 110, 114, 120, 127, 132, 133, 136, 143, 163, 168, 174, 176, 184, 186, 189, 196, 198, 206, 209, 212, 219, 220, 222, 232, 236, 240, 248, 250, 260, 271, 280, 284, 290, 295, 297, 298, 302, 312, 332, 347, 349, 352, 353, 365, 376, 385, 389, 395, 400, 405, 415, 435, 450, 452, 455, 456, 457, 460, 469, 480, 489, 493, 499, 506, 511, 512, 515, 522, 542, 557], "regardless": [5, 18, 19, 20, 22, 33, 41, 42, 46, 47, 48, 50, 71, 78, 81, 100, 110, 114, 132, 145, 147, 157, 158, 176, 184, 186, 198, 206, 209, 222, 232, 236, 250, 270, 298, 301, 314, 316, 326, 327, 334, 347, 353, 356, 375, 404, 417, 419, 429, 430, 450, 457, 460, 479, 489, 493, 511, 524, 526, 536, 537], "drive": [5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 47, 48, 51, 53, 67, 71, 73, 85, 90, 101, 120, 129, 139, 154, 163, 174, 175, 176, 181, 186, 196, 197, 198, 203, 220, 221, 222, 227, 236, 248, 249, 250, 255, 260, 271, 290, 323, 347, 349, 360, 365, 376, 395, 411, 426, 446, 450, 452, 464, 469, 480, 499, 508, 518, 533], "both": [5, 8, 9, 12, 14, 16, 18, 19, 21, 22, 25, 31, 33, 34, 36, 44, 46, 47, 48, 50, 57, 71, 78, 79, 80, 81, 86, 88, 93, 96, 102, 104, 106, 108, 109, 118, 124, 145, 151, 158, 168, 176, 182, 184, 186, 189, 198, 204, 206, 209, 212, 219, 222, 228, 231, 232, 236, 240, 247, 250, 256, 258, 263, 266, 274, 276, 278, 279, 288, 293, 298, 314, 320, 327, 333, 347, 353, 355, 361, 363, 368, 371, 377, 379, 381, 383, 384, 393, 398, 417, 423, 430, 450, 457, 458, 459, 460, 465, 467, 472, 475, 481, 483, 485, 487, 488, 497, 503, 524, 530, 537], "evenli": [5, 47, 53, 70, 79, 219, 247, 346, 354, 449, 458], "among": [5, 77, 80, 130, 133, 184, 186, 206, 207, 209, 232, 234, 236, 297, 299, 333, 352, 355, 402, 456, 459, 509], "surviv": [5, 46, 77, 184, 206, 232, 297, 352, 456], "accomplish": [5, 10, 47, 78, 232, 298, 353, 457, 557], "carefulli": 5, "chosen": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 49, 70, 71, 78, 90, 101, 120, 176, 198, 222, 232, 250, 260, 271, 290, 298, 347, 353, 365, 376, 395, 449, 450, 457, 469, 480, 499], "precomput": 5, "permut": 5, "map": [5, 46, 47, 53, 64, 71, 73, 78, 79, 85, 86, 90, 96, 101, 104, 106, 120, 124, 127, 131, 151, 174, 177, 181, 182, 184, 185, 196, 199, 203, 204, 206, 208, 220, 222, 223, 227, 228, 231, 232, 235, 236, 248, 250, 251, 255, 256, 260, 266, 271, 274, 276, 290, 293, 295, 298, 300, 320, 341, 347, 349, 353, 354, 360, 361, 365, 371, 376, 379, 381, 395, 398, 400, 403, 423, 443, 450, 452, 457, 458, 464, 465, 469, 475, 480, 483, 485, 499, 503, 506, 510, 530], "keep": [5, 8, 10, 12, 14, 16, 18, 19, 20, 22, 25, 31, 32, 33, 34, 35, 36, 37, 41, 42, 47, 48, 49, 53, 61, 70, 71, 80, 93, 105, 112, 117, 127, 176, 184, 186, 198, 206, 209, 219, 222, 232, 236, 239, 247, 250, 275, 295, 333, 338, 346, 347, 355, 380, 400, 440, 449, 450, 459, 472, 484, 491, 496, 506, 547], "creation": [5, 46, 47, 48, 53, 54, 70, 71, 78, 79, 81, 90, 92, 95, 98, 101, 115, 120, 127, 129, 132, 136, 163, 176, 184, 186, 198, 206, 209, 219, 222, 223, 232, 236, 247, 250, 251, 260, 262, 271, 290, 295, 298, 301, 305, 332, 334, 346, 347, 353, 354, 356, 365, 367, 376, 395, 400, 404, 408, 435, 449, 450, 457, 458, 460, 469, 471, 474, 477, 480, 494, 499, 506, 508, 511, 515, 542], "fast": [5, 47, 48, 71, 78, 198, 222, 250, 298, 347, 353, 450, 457], "make": [5, 8, 9, 12, 13, 17, 18, 19, 20, 21, 22, 25, 27, 29, 32, 33, 34, 35, 36, 37, 38, 41, 42, 46, 47, 48, 53, 62, 70, 71, 76, 77, 78, 79, 80, 81, 86, 90, 91, 92, 93, 101, 107, 110, 112, 114, 117, 120, 127, 145, 163, 168, 175, 
176, 177, 182, 184, 186, 189, 197, 198, 199, 204, 206, 209, 212, 219, 222, 223, 228, 232, 236, 240, 247, 250, 251, 256, 260, 271, 277, 280, 284, 290, 295, 297, 298, 314, 332, 333, 339, 346, 347, 352, 353, 354, 355, 361, 365, 376, 382, 385, 389, 395, 400, 417, 435, 441, 449, 450, 455, 456, 457, 458, 459, 460, 465, 469, 470, 471, 472, 480, 486, 489, 491, 493, 496, 499, 506, 524, 542, 559, 560, 561], "imposs": [5, 46], "damag": [5, 14, 16, 25, 31, 46, 47, 53, 71, 79, 80, 135, 139, 143, 155, 175, 186, 197, 209, 221, 222, 236, 249, 250, 251, 304, 312, 324, 347, 354, 355, 407, 411, 415, 427, 450, 458, 459, 514, 518, 522, 534, 548, 550, 554, 555], "lost": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 78, 139, 143, 184, 186, 206, 209, 221, 232, 236, 249, 298, 312, 353, 411, 415, 457, 518, 522, 551, 553, 554, 555, 561], "fix": [5, 9, 11, 12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 50, 62, 66, 70, 71, 78, 80, 104, 168, 176, 184, 189, 198, 206, 212, 219, 222, 231, 232, 240, 247, 250, 274, 298, 339, 346, 347, 353, 355, 379, 441, 445, 449, 450, 457, 459, 483, 557, 561], "pad": [5, 80, 355, 459], "necessari": [5, 9, 18, 19, 20, 21, 22, 25, 32, 33, 34, 35, 36, 37, 41, 42, 43, 47, 71, 78, 79, 80, 88, 104, 110, 114, 118, 129, 184, 186, 198, 206, 209, 222, 231, 232, 236, 250, 258, 274, 280, 284, 288, 298, 333, 347, 353, 354, 355, 363, 379, 385, 389, 393, 450, 457, 458, 459, 467, 483, 489, 493, 497, 508, 555, 559, 560], "zero": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 50, 53, 61, 65, 70, 71, 78, 79, 80, 87, 92, 93, 104, 105, 129, 176, 177, 183, 184, 198, 199, 205, 206, 219, 222, 223, 229, 231, 232, 236, 239, 247, 250, 251, 257, 262, 263, 274, 275, 298, 333, 338, 346, 347, 353, 354, 355, 362, 367, 368, 379, 380, 440, 444, 449, 450, 457, 458, 459, 466, 471, 472, 483, 484, 508, 555, 557], "howev": [5, 7, 9, 10, 12, 14, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 43, 44, 46, 47, 48, 53, 67, 70, 71, 77, 78, 79, 80, 81, 82, 87, 96, 104, 106, 108, 109, 110, 114, 124, 133, 136, 145, 155, 165, 166, 172, 176, 177, 184, 186, 194, 198, 199, 205, 206, 209, 216, 219, 222, 223, 229, 231, 232, 236, 244, 247, 250, 251, 257, 266, 274, 276, 278, 279, 280, 284, 293, 297, 298, 305, 314, 333, 334, 335, 343, 346, 347, 352, 353, 354, 355, 356, 357, 362, 371, 379, 381, 383, 384, 385, 389, 398, 408, 417, 437, 438, 446, 449, 450, 456, 457, 458, 459, 460, 461, 466, 475, 483, 485, 487, 488, 489, 493, 503, 515, 524, 534, 544, 545, 555, 557], "significantli": [5, 11, 46, 47, 48, 71, 78, 79, 80, 177, 184, 198, 199, 206, 222, 223, 232, 250, 251, 298, 347, 353, 354, 355, 450, 457, 458, 459], "capac": [5, 46, 47, 53, 57, 71, 76, 80, 81, 92, 132, 145, 147, 158, 163, 176, 186, 198, 209, 222, 236, 250, 262, 316, 332, 334, 347, 355, 356, 367, 419, 435, 450, 455, 459, 460, 471, 511, 524, 526, 537, 542], "32k": [5, 47, 48, 67, 70, 86, 172, 182, 194, 204, 216, 219, 228, 244, 247, 256, 343, 346, 361, 446, 449, 465], "compress": [5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 53, 57, 61, 66, 71, 77, 78, 79, 80, 86, 88, 90, 95, 98, 101, 108, 109, 110, 114, 115, 118, 120, 127, 165, 166, 170, 176, 177, 182, 184, 192, 198, 199, 204, 206, 215, 222, 223, 228, 232, 239, 243, 250, 251, 256, 258, 260, 271, 278, 279, 280, 284, 288, 290, 295, 297, 298, 338, 342, 347, 352, 353, 354, 355, 361, 363, 365, 376, 383, 384, 385, 389, 393, 395, 400, 440, 445, 450, 456, 457, 458, 459, 465, 467, 469, 474, 477, 480, 487, 488, 489, 493, 494, 497, 499, 506, 544, 545], "rel": [5, 35, 37, 46, 47, 49, 70, 71, 78, 79, 80, 
81, 86, 100, 176, 184, 186, 198, 204, 206, 209, 219, 222, 228, 232, 236, 247, 250, 256, 270, 298, 333, 346, 347, 353, 354, 355, 356, 361, 375, 449, 450, 457, 458, 459, 460, 465, 479], "reduc": [5, 18, 19, 20, 22, 33, 41, 42, 46, 47, 48, 50, 61, 71, 76, 77, 78, 79, 80, 81, 151, 163, 164, 176, 177, 184, 186, 198, 199, 206, 209, 222, 223, 232, 236, 239, 250, 251, 297, 298, 320, 332, 338, 347, 352, 353, 354, 355, 423, 435, 436, 440, 450, 455, 456, 457, 458, 459, 460, 530, 542, 543], "volblocks": [5, 47, 71, 78, 80, 88, 92, 118, 184, 198, 206, 222, 232, 250, 258, 262, 288, 298, 347, 353, 355, 363, 367, 393, 450, 457, 459, 467, 471, 497], "account": [5, 10, 12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 71, 78, 79, 80, 81, 86, 95, 98, 102, 107, 115, 127, 184, 186, 198, 199, 206, 209, 222, 223, 232, 236, 250, 251, 256, 277, 295, 298, 334, 347, 353, 354, 355, 356, 361, 377, 382, 400, 450, 457, 458, 459, 460, 465, 474, 477, 481, 486, 494, 506], "signific": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 71, 77, 78, 80, 81, 184, 206, 219, 232, 236, 250, 297, 298, 334, 347, 352, 353, 355, 356, 450, 456, 457, 459, 460], "amount": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 45, 46, 47, 48, 49, 50, 53, 71, 76, 77, 78, 79, 80, 81, 86, 95, 98, 104, 115, 127, 145, 151, 158, 162, 164, 176, 177, 182, 184, 186, 198, 199, 204, 206, 209, 222, 223, 228, 231, 232, 236, 250, 251, 256, 265, 268, 274, 285, 295, 297, 298, 314, 320, 327, 331, 334, 347, 352, 353, 354, 355, 356, 361, 370, 373, 379, 390, 400, 417, 423, 430, 434, 436, 450, 455, 456, 457, 458, 459, 460, 465, 474, 477, 483, 494, 506, 524, 530, 537, 541, 543], "small": [5, 11, 12, 14, 16, 19, 20, 25, 28, 31, 34, 36, 41, 42, 46, 47, 48, 49, 53, 67, 70, 71, 78, 80, 81, 172, 176, 184, 186, 194, 198, 206, 209, 216, 219, 222, 232, 236, 244, 247, 250, 298, 333, 334, 343, 346, 347, 353, 355, 356, 446, 449, 450, 457, 459, 460], "add": [5, 7, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 22, 23, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 48, 53, 66, 67, 71, 77, 79, 80, 81, 83, 88, 97, 108, 109, 111, 118, 127, 129, 131, 133, 136, 144, 145, 151, 163, 164, 170, 171, 172, 180, 184, 186, 192, 193, 194, 202, 206, 208, 209, 215, 216, 223, 226, 230, 232, 235, 236, 243, 244, 250, 251, 253, 254, 258, 267, 272, 278, 279, 281, 288, 295, 300, 302, 305, 313, 314, 320, 332, 333, 334, 342, 343, 347, 354, 355, 356, 358, 363, 372, 383, 384, 386, 393, 400, 403, 405, 408, 416, 417, 423, 435, 436, 445, 446, 450, 456, 458, 459, 460, 462, 467, 476, 487, 488, 490, 497, 506, 508, 510, 512, 515, 523, 524, 530, 542, 543], "special": [5, 12, 19, 20, 34, 36, 41, 42, 46, 47, 48, 71, 77, 78, 79, 80, 86, 87, 108, 109, 129, 151, 164, 167, 183, 184, 186, 205, 206, 209, 222, 223, 229, 230, 232, 236, 250, 251, 257, 272, 278, 279, 298, 320, 333, 347, 353, 354, 355, 362, 383, 384, 423, 436, 439, 450, 456, 457, 458, 459, 465, 466, 487, 488, 508, 530, 543, 546], "regard": [5, 7, 12, 32, 46, 80, 86, 139, 182, 204, 221, 228, 249, 256, 355, 361, 411, 459, 465, 518], "similar": [5, 9, 11, 18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 48, 71, 78, 80, 86, 88, 94, 104, 110, 114, 118, 127, 133, 143, 147, 155, 163, 164, 176, 182, 184, 186, 198, 204, 206, 209, 222, 228, 231, 232, 236, 250, 256, 264, 274, 280, 284, 295, 298, 324, 332, 347, 353, 355, 361, 369, 379, 385, 389, 400, 427, 435, 436, 450, 457, 459, 465, 467, 473, 483, 489, 493, 497, 506, 522, 526, 534, 542, 543], "sinc": [5, 11, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 49, 50, 53, 54, 61, 67, 70, 71, 
74, 77, 78, 79, 80, 87, 90, 99, 101, 104, 108, 109, 110, 113, 114, 119, 120, 122, 126, 139, 145, 155, 158, 175, 176, 177, 183, 184, 186, 194, 197, 198, 199, 205, 206, 209, 216, 219, 221, 222, 223, 229, 232, 236, 239, 244, 247, 249, 250, 251, 257, 260, 269, 271, 274, 278, 279, 280, 283, 284, 289, 290, 297, 298, 314, 327, 333, 338, 343, 346, 347, 350, 352, 353, 354, 355, 362, 365, 374, 376, 379, 383, 384, 385, 388, 389, 394, 395, 411, 417, 427, 430, 440, 446, 449, 450, 453, 456, 457, 458, 459, 466, 469, 478, 480, 483, 487, 488, 489, 492, 493, 498, 499, 501, 505, 518, 524, 534, 537, 557, 559, 560], "access": [5, 18, 19, 20, 22, 33, 34, 36, 41, 42, 47, 48, 53, 61, 71, 77, 78, 79, 80, 81, 86, 87, 88, 90, 95, 98, 99, 101, 104, 108, 109, 110, 114, 115, 118, 119, 120, 122, 123, 126, 127, 135, 136, 145, 161, 171, 176, 183, 184, 186, 193, 198, 204, 205, 206, 209, 219, 222, 223, 228, 229, 231, 232, 236, 239, 247, 250, 256, 257, 258, 260, 269, 271, 274, 278, 279, 280, 284, 288, 289, 290, 292, 295, 297, 298, 305, 314, 330, 334, 338, 347, 352, 353, 355, 356, 361, 362, 363, 365, 374, 376, 379, 383, 384, 385, 389, 393, 394, 395, 397, 400, 407, 408, 417, 433, 440, 450, 456, 457, 458, 459, 460, 465, 466, 467, 469, 474, 477, 478, 480, 483, 487, 488, 489, 493, 494, 497, 498, 499, 501, 502, 505, 506, 514, 515, 524, 540, 554, 557, 558], "deliv": [5, 47, 80, 355, 459], "random": [5, 8, 14, 16, 25, 28, 31, 46, 47, 48, 53, 67, 71, 78, 79, 80, 130, 172, 184, 186, 194, 199, 206, 207, 209, 216, 223, 232, 234, 236, 244, 250, 251, 298, 299, 333, 343, 347, 353, 354, 355, 402, 446, 450, 457, 458, 459, 509], "reason": [5, 8, 9, 11, 12, 21, 34, 36, 46, 47, 48, 53, 62, 70, 71, 76, 78, 79, 80, 81, 87, 108, 109, 110, 114, 129, 176, 184, 186, 189, 198, 199, 205, 206, 209, 212, 219, 222, 223, 229, 232, 236, 240, 247, 250, 251, 257, 278, 279, 280, 284, 298, 334, 339, 346, 347, 353, 354, 355, 356, 362, 383, 384, 385, 389, 441, 449, 450, 455, 457, 458, 459, 460, 466, 487, 488, 489, 493, 508], "floor": [5, 47, 67, 80, 172, 176, 194, 198, 216, 222, 244, 250, 343, 355, 446, 459], "summari": [5, 7, 12, 53, 64, 65, 67, 71, 85, 87, 102, 130, 143, 164, 165, 166, 172, 181, 183, 186, 191, 194, 203, 205, 209, 214, 216, 227, 229, 236, 242, 244, 250, 255, 257, 299, 312, 335, 341, 343, 347, 360, 362, 377, 402, 415, 436, 437, 438, 443, 444, 446, 450, 464, 466, 481, 509, 522, 543, 544, 545], "enumer": 5, "immedi": [5, 46, 47, 53, 70, 71, 78, 79, 81, 88, 93, 108, 109, 118, 125, 132, 133, 160, 162, 163, 176, 177, 184, 186, 198, 199, 206, 209, 219, 222, 223, 232, 236, 247, 250, 251, 258, 263, 278, 279, 288, 294, 298, 302, 329, 331, 332, 334, 346, 347, 353, 354, 356, 363, 368, 383, 384, 393, 399, 405, 432, 434, 435, 449, 450, 457, 458, 460, 467, 472, 487, 488, 497, 504, 511, 512, 539, 541, 542], "unlik": [5, 18, 19, 20, 34, 35, 36, 37, 41, 42, 46, 47, 48, 71, 77, 78, 80, 81, 82, 86, 139, 164, 175, 178, 182, 184, 197, 198, 200, 204, 206, 221, 222, 224, 228, 232, 236, 249, 250, 252, 256, 297, 298, 334, 347, 352, 353, 355, 356, 357, 361, 411, 436, 450, 456, 457, 459, 460, 461, 465, 518, 543, 548], "colon": [5, 76, 78, 81, 86, 136, 163, 182, 184, 186, 204, 206, 209, 228, 232, 236, 256, 298, 305, 332, 353, 361, 408, 435, 455, 457, 460, 465, 515, 542], "separ": [5, 7, 8, 11, 14, 16, 18, 19, 20, 22, 25, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 46, 48, 53, 61, 67, 76, 78, 79, 80, 81, 86, 88, 92, 93, 94, 95, 98, 100, 102, 103, 115, 118, 121, 127, 128, 131, 136, 139, 141, 143, 145, 147, 156, 162, 163, 172, 176, 182, 184, 186, 189, 194, 204, 206, 209, 212, 216, 
223, 228, 230, 232, 235, 236, 239, 240, 244, 251, 256, 258, 262, 263, 264, 265, 268, 270, 272, 273, 285, 288, 291, 296, 298, 300, 308, 310, 312, 314, 316, 325, 331, 332, 333, 338, 343, 353, 354, 355, 356, 361, 363, 367, 368, 369, 370, 373, 375, 377, 378, 390, 393, 396, 401, 403, 411, 413, 415, 417, 419, 428, 434, 435, 440, 446, 455, 457, 458, 459, 460, 465, 467, 471, 472, 473, 474, 477, 479, 481, 482, 494, 497, 500, 506, 507, 510, 515, 518, 520, 522, 524, 526, 535, 541, 542], "option": [5, 9, 10, 11, 12, 13, 14, 16, 21, 25, 26, 27, 28, 29, 31, 34, 35, 36, 37, 46, 47, 48, 53, 58, 59, 61, 62, 64, 65, 66, 67, 71, 73, 77, 78, 80, 81, 82, 84, 85, 86, 87, 88, 90, 92, 93, 94, 95, 96, 98, 100, 101, 102, 103, 104, 106, 108, 109, 110, 112, 113, 114, 115, 118, 120, 121, 123, 124, 127, 129, 130, 131, 132, 134, 136, 139, 143, 152, 155, 157, 158, 160, 161, 163, 164, 165, 166, 168, 170, 171, 172, 174, 175, 176, 178, 180, 181, 182, 183, 184, 185, 186, 187, 189, 191, 192, 193, 194, 196, 197, 198, 200, 202, 203, 204, 205, 206, 207, 208, 209, 210, 212, 214, 215, 216, 220, 221, 222, 224, 226, 227, 228, 229, 230, 231, 232, 234, 235, 236, 237, 239, 240, 242, 243, 244, 248, 249, 250, 252, 254, 255, 256, 257, 258, 260, 262, 263, 264, 265, 266, 268, 270, 271, 272, 273, 274, 275, 276, 278, 279, 280, 282, 283, 284, 285, 288, 290, 291, 292, 293, 295, 297, 298, 299, 300, 301, 305, 309, 312, 314, 326, 327, 329, 330, 332, 333, 334, 336, 338, 339, 341, 342, 343, 347, 349, 352, 353, 355, 356, 357, 359, 360, 361, 362, 363, 365, 367, 368, 369, 370, 371, 373, 375, 376, 377, 378, 379, 381, 383, 384, 385, 387, 388, 389, 390, 393, 395, 396, 397, 398, 400, 402, 403, 404, 406, 408, 411, 415, 424, 427, 429, 430, 432, 433, 435, 436, 440, 441, 443, 444, 445, 446, 450, 452, 456, 457, 459, 460, 461, 463, 464, 465, 466, 467, 469, 471, 472, 473, 474, 475, 477, 479, 480, 481, 482, 483, 485, 487, 488, 489, 491, 492, 493, 494, 497, 499, 500, 502, 503, 506, 508, 509, 510, 511, 513, 515, 518, 522, 531, 534, 536, 537, 539, 540, 542, 543, 544, 545, 547, 549, 552, 553, 554, 558], "most": [5, 9, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 49, 50, 53, 64, 70, 71, 76, 77, 78, 79, 81, 82, 84, 86, 87, 108, 109, 113, 123, 127, 175, 176, 177, 182, 184, 191, 197, 198, 199, 204, 206, 214, 219, 221, 222, 223, 228, 232, 236, 242, 247, 249, 250, 251, 256, 278, 279, 283, 292, 295, 298, 334, 341, 346, 347, 353, 354, 356, 357, 359, 361, 362, 383, 384, 388, 397, 400, 443, 449, 450, 455, 456, 457, 458, 460, 461, 463, 465, 466, 487, 488, 492, 502, 506, 555, 561], "control": [5, 7, 11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 51, 53, 70, 71, 73, 76, 77, 78, 79, 80, 81, 86, 87, 95, 98, 115, 127, 160, 174, 176, 177, 184, 186, 196, 198, 199, 206, 209, 219, 220, 222, 223, 232, 236, 247, 248, 250, 251, 256, 265, 268, 285, 295, 297, 298, 329, 333, 334, 346, 347, 349, 352, 353, 354, 355, 356, 361, 362, 370, 373, 390, 400, 432, 449, 450, 452, 455, 456, 457, 458, 459, 460, 465, 466, 474, 477, 494, 506, 539], "By": [5, 7, 9, 10, 12, 26, 32, 47, 48, 53, 65, 67, 70, 71, 77, 78, 79, 80, 81, 86, 92, 93, 100, 110, 113, 114, 136, 157, 164, 172, 176, 177, 182, 184, 186, 194, 198, 199, 204, 206, 209, 216, 219, 222, 223, 228, 232, 236, 244, 247, 250, 251, 256, 262, 263, 270, 280, 283, 284, 297, 298, 305, 326, 333, 334, 343, 346, 347, 352, 353, 354, 355, 356, 361, 367, 368, 375, 385, 388, 389, 408, 429, 436, 444, 446, 449, 450, 456, 457, 458, 459, 460, 465, 471, 472, 479, 489, 492, 493, 515, 536, 543], "unspecifi": [5, 78, 157, 184, 186, 206, 209, 232, 
236, 298, 326, 353, 429, 457, 536], "c": [5, 10, 11, 12, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 44, 46, 47, 53, 61, 62, 65, 66, 67, 73, 74, 79, 81, 85, 86, 88, 94, 104, 105, 108, 109, 110, 114, 118, 127, 131, 139, 143, 144, 145, 158, 160, 163, 165, 166, 168, 170, 171, 174, 175, 181, 182, 184, 185, 186, 187, 189, 192, 193, 196, 197, 203, 204, 206, 208, 209, 210, 212, 215, 220, 221, 223, 227, 228, 231, 232, 235, 236, 237, 239, 240, 243, 248, 249, 251, 255, 256, 258, 264, 274, 275, 278, 279, 280, 284, 288, 295, 300, 305, 308, 312, 313, 314, 316, 327, 329, 332, 334, 335, 336, 338, 339, 342, 343, 349, 350, 354, 356, 360, 361, 363, 369, 379, 380, 383, 384, 385, 389, 393, 400, 403, 411, 415, 416, 417, 430, 432, 435, 437, 438, 440, 441, 444, 445, 446, 452, 453, 458, 460, 464, 465, 467, 473, 483, 484, 487, 488, 489, 493, 497, 506, 510, 518, 522, 523, 524, 537, 539, 542, 544, 545], "smaller": [5, 46, 47, 48, 50, 70, 71, 78, 79, 80, 81, 110, 114, 176, 177, 198, 199, 206, 219, 222, 223, 232, 236, 247, 250, 251, 280, 284, 298, 333, 334, 346, 347, 353, 354, 355, 356, 385, 389, 449, 450, 457, 458, 459, 460, 489, 493], "speed": [5, 8, 12, 19, 20, 34, 36, 41, 42, 47, 48, 49, 54, 71, 78, 79, 80, 176, 184, 198, 206, 222, 232, 250, 251, 298, 347, 353, 354, 355, 450, 457, 458, 459], "expens": [5, 34, 36, 46, 47, 48, 53, 70, 71, 78, 80, 86, 182, 198, 204, 219, 222, 228, 232, 247, 250, 256, 298, 346, 347, 353, 355, 361, 449, 450, 457, 459, 465], "unless": [5, 8, 10, 14, 16, 18, 19, 20, 25, 28, 31, 34, 35, 36, 37, 41, 42, 44, 46, 47, 48, 65, 71, 78, 79, 80, 86, 92, 102, 105, 110, 114, 136, 143, 148, 149, 152, 165, 166, 176, 177, 184, 186, 198, 199, 206, 209, 219, 222, 223, 232, 236, 247, 250, 251, 256, 275, 298, 305, 312, 317, 318, 321, 347, 353, 354, 355, 361, 367, 377, 380, 408, 415, 420, 421, 424, 444, 450, 457, 458, 459, 465, 471, 481, 484, 489, 493, 515, 522, 527, 528, 531, 544, 545], "expect": [5, 12, 34, 36, 46, 47, 48, 57, 70, 71, 76, 78, 80, 81, 82, 90, 101, 104, 120, 133, 157, 163, 164, 175, 178, 184, 186, 197, 200, 206, 209, 219, 221, 224, 231, 232, 236, 247, 249, 250, 252, 260, 271, 274, 290, 298, 326, 332, 333, 334, 346, 347, 353, 355, 356, 357, 365, 376, 379, 395, 411, 429, 435, 436, 449, 450, 455, 457, 459, 460, 461, 469, 480, 483, 499, 536, 542, 543, 555], "cross": [5, 45, 47, 48, 71, 80, 176, 198, 222, 250, 347, 355, 450, 459], "list": [5, 7, 12, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 41, 42, 47, 48, 53, 56, 57, 58, 59, 61, 65, 66, 70, 71, 76, 77, 78, 79, 80, 81, 83, 87, 88, 90, 92, 93, 95, 97, 98, 101, 102, 103, 104, 105, 110, 111, 114, 115, 118, 120, 121, 123, 125, 127, 131, 132, 133, 134, 136, 139, 141, 143, 145, 153, 156, 157, 158, 162, 163, 168, 170, 174, 175, 176, 177, 180, 181, 183, 184, 185, 186, 189, 192, 196, 197, 198, 199, 202, 203, 205, 206, 208, 209, 212, 215, 219, 220, 221, 222, 223, 226, 227, 229, 230, 231, 232, 235, 236, 239, 240, 243, 247, 249, 250, 251, 253, 254, 257, 258, 260, 262, 263, 265, 267, 268, 271, 272, 273, 274, 275, 280, 281, 284, 285, 288, 290, 291, 292, 294, 295, 297, 298, 300, 301, 302, 303, 305, 308, 310, 312, 314, 322, 325, 326, 327, 331, 332, 333, 334, 338, 342, 346, 347, 352, 353, 354, 355, 356, 358, 362, 363, 365, 367, 368, 370, 372, 373, 376, 377, 378, 379, 380, 385, 386, 389, 390, 393, 395, 396, 397, 399, 400, 403, 404, 405, 406, 408, 411, 413, 415, 417, 425, 428, 429, 430, 434, 435, 440, 444, 445, 449, 450, 455, 456, 457, 458, 459, 460, 462, 466, 467, 469, 471, 472, 474, 476, 
477, 480, 481, 482, 483, 484, 489, 490, 493, 494, 497, 499, 500, 502, 504, 506, 510, 511, 512, 513, 515, 518, 520, 522, 524, 532, 535, 536, 537, 541, 542, 547, 551, 552, 554, 556, 557, 559, 560], "11": [5, 22, 33, 56, 86, 127, 139, 147, 163, 182, 184, 186, 204, 206, 209, 228, 232, 236, 256, 295, 332, 361, 400, 435, 465, 506, 518, 526, 542], "4": [5, 14, 16, 21, 25, 26, 28, 31, 32, 43, 46, 47, 48, 49, 53, 67, 73, 74, 78, 79, 80, 81, 82, 85, 86, 88, 95, 98, 115, 117, 118, 127, 130, 133, 136, 139, 145, 158, 163, 165, 166, 167, 172, 174, 175, 176, 178, 181, 182, 184, 186, 194, 196, 197, 198, 199, 200, 203, 204, 206, 209, 216, 219, 220, 221, 222, 223, 224, 227, 228, 232, 236, 244, 247, 248, 249, 250, 251, 252, 255, 256, 280, 284, 295, 298, 332, 343, 349, 350, 353, 354, 356, 357, 360, 361, 400, 402, 411, 435, 439, 446, 452, 453, 457, 458, 459, 460, 461, 464, 465, 467, 474, 477, 494, 496, 497, 506, 509, 515, 518, 524, 537, 542, 544, 545, 546, 562], "tank": [5, 48, 53, 66, 88, 94, 95, 98, 108, 109, 115, 118, 122, 126, 127, 132, 136, 137, 140, 143, 145, 147, 151, 158, 163, 170, 184, 186, 192, 206, 209, 215, 232, 236, 243, 278, 279, 295, 332, 342, 383, 384, 400, 435, 445, 467, 473, 474, 477, 487, 488, 494, 497, 501, 505, 506, 511, 515, 516, 519, 522, 524, 526, 530, 537, 542, 558], "4d": 5, "11c": 5, "dev": [5, 8, 9, 14, 16, 18, 19, 20, 22, 23, 25, 27, 28, 29, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 57, 68, 71, 73, 74, 78, 80, 81, 85, 92, 99, 119, 122, 126, 127, 129, 132, 145, 147, 153, 157, 158, 163, 174, 181, 184, 186, 196, 198, 203, 206, 209, 217, 220, 222, 227, 232, 236, 245, 248, 250, 255, 262, 269, 289, 298, 301, 314, 316, 322, 326, 327, 333, 334, 344, 347, 349, 350, 353, 355, 356, 360, 367, 374, 394, 404, 417, 419, 425, 429, 430, 447, 450, 452, 453, 457, 459, 460, 464, 471, 478, 498, 501, 505, 506, 508, 511, 524, 526, 532, 536, 537, 542, 547], "sd": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 48, 129, 145, 209, 236, 314, 417, 508, 524], "k": [5, 14, 18, 19, 20, 22, 33, 34, 36, 41, 42, 67, 76, 78, 86, 95, 98, 105, 115, 145, 171, 172, 184, 193, 194, 206, 209, 216, 228, 232, 236, 244, 256, 265, 268, 275, 285, 298, 314, 343, 353, 361, 370, 373, 380, 390, 417, 446, 455, 457, 465, 474, 477, 484, 494, 524], "state": [5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 50, 53, 62, 67, 71, 76, 78, 79, 80, 81, 84, 86, 87, 93, 94, 103, 104, 108, 109, 110, 113, 114, 121, 127, 131, 134, 139, 143, 144, 145, 148, 149, 151, 155, 158, 163, 168, 175, 176, 177, 183, 184, 185, 186, 189, 197, 198, 199, 205, 206, 208, 209, 212, 221, 222, 223, 228, 229, 230, 231, 232, 235, 236, 240, 249, 250, 251, 256, 257, 263, 272, 273, 274, 278, 279, 280, 283, 284, 291, 295, 298, 300, 303, 312, 317, 318, 324, 332, 333, 334, 339, 343, 347, 353, 354, 355, 356, 359, 361, 362, 368, 378, 379, 383, 384, 385, 388, 389, 396, 400, 403, 406, 411, 415, 416, 420, 421, 427, 435, 441, 446, 450, 455, 457, 458, 459, 460, 463, 465, 466, 472, 473, 482, 483, 487, 488, 489, 492, 493, 500, 506, 510, 513, 518, 522, 523, 524, 527, 528, 530, 534, 537, 542, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "onlin": [5, 35, 37, 43, 47, 53, 57, 71, 74, 76, 80, 81, 83, 102, 110, 114, 132, 133, 135, 143, 144, 145, 147, 148, 150, 151, 153, 154, 157, 158, 163, 176, 186, 198, 209, 222, 236, 250, 253, 280, 284, 301, 302, 312, 313, 317, 319, 322, 323, 326, 332, 333, 334, 347, 350, 355, 356, 358, 377, 385, 389, 404, 405, 407, 415, 416, 420, 422, 425, 426, 429, 435, 450, 453, 455, 459, 460, 462, 
481, 489, 493, 511, 512, 514, 522, 523, 524, 526, 527, 529, 530, 532, 533, 536, 537, 542, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "config": [5, 9, 10, 14, 16, 25, 26, 28, 31, 32, 35, 37, 41, 42, 43, 53, 67, 71, 86, 131, 139, 143, 151, 163, 175, 176, 185, 186, 197, 198, 208, 209, 221, 222, 235, 236, 249, 250, 300, 332, 343, 347, 403, 411, 435, 446, 450, 465, 510, 518, 522, 530, 542, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "cksum": [5, 53, 145, 151, 158, 163, 186, 209, 236, 332, 435, 524, 530, 537, 542, 548, 549, 550, 551, 553, 554, 555, 556, 557, 559, 560, 561], "draid1": [5, 80, 355, 459], "sda": [5, 53, 80, 85, 129, 132, 133, 136, 143, 147, 151, 163, 181, 186, 203, 209, 227, 236, 255, 332, 333, 355, 360, 435, 459, 464, 508, 511, 515, 522, 526, 530, 542], "sdb": [5, 53, 80, 132, 136, 143, 147, 151, 163, 186, 209, 236, 332, 333, 355, 435, 459, 511, 515, 522, 526, 530, 542], "sdc": [5, 53, 80, 132, 136, 145, 147, 151, 163, 186, 209, 236, 332, 333, 355, 435, 459, 511, 515, 524, 526, 530, 542], "sdd": [5, 53, 80, 132, 136, 145, 151, 163, 186, 209, 236, 332, 333, 355, 435, 459, 511, 515, 524, 530, 542], "sde": [5, 136, 151, 163, 186, 209, 236, 332, 435, 515, 530, 542], "sdf": [5, 136, 151, 163, 186, 209, 236, 332, 435, 515, 530, 542], "sdg": [5, 209, 236], "sdh": 5, "sdi": 5, "sdj": 5, "sdk": 5, "furthermor": [5, 46, 48, 71, 347, 450], "logic": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 77, 78, 86, 92, 102, 127, 131, 145, 158, 163, 176, 182, 184, 185, 186, 204, 206, 208, 209, 228, 232, 235, 236, 250, 256, 262, 295, 297, 298, 300, 314, 327, 332, 347, 352, 353, 361, 367, 377, 400, 403, 417, 430, 435, 450, 456, 457, 465, 471, 481, 506, 510, 524, 537, 542], "shown": [5, 9, 10, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 47, 48, 56, 70, 71, 79, 145, 164, 176, 186, 198, 209, 219, 222, 236, 247, 250, 314, 346, 347, 354, 417, 436, 449, 450, 458, 524, 543, 557], "major": [5, 8, 18, 19, 20, 22, 32, 33, 41, 42, 46, 47, 48, 53, 70, 71, 176, 184, 198, 219, 222, 247, 250, 346, 347, 449, 450, 548, 550, 556, 558, 559, 560, 561], "heal": [5, 46, 71, 79, 108, 109, 165, 166, 251, 354, 450, 458, 487, 488, 544, 545], "scale": [5, 46, 47, 48, 49, 61, 71, 176, 198, 222, 239, 250, 338, 347, 440, 450], "divid": [5, 46, 47, 48, 50, 71, 76, 78, 81, 127, 176, 184, 198, 206, 222, 232, 250, 295, 298, 347, 353, 400, 450, 455, 457, 460, 506], "greatli": [5, 8, 47, 53, 219], "restor": [5, 34, 36, 46, 66, 71, 79, 80, 86, 95, 98, 104, 108, 109, 110, 114, 115, 127, 133, 134, 153, 163, 184, 206, 232, 236, 250, 251, 256, 265, 268, 274, 278, 279, 280, 284, 285, 295, 302, 303, 322, 332, 333, 347, 354, 355, 361, 370, 373, 379, 383, 384, 385, 389, 390, 400, 405, 406, 425, 435, 445, 450, 458, 459, 465, 474, 477, 483, 487, 488, 489, 493, 494, 506, 512, 513, 532, 542, 550, 551, 553, 554, 561], "fraction": [5, 47, 71, 131, 176, 185, 198, 208, 222, 235, 250, 300, 347, 403, 450, 510], "follow": [5, 8, 9, 10, 12, 14, 16, 18, 19, 20, 21, 22, 23, 25, 27, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 45, 46, 47, 48, 50, 53, 61, 62, 66, 71, 73, 76, 78, 79, 80, 81, 86, 87, 88, 89, 91, 92, 93, 94, 95, 96, 98, 100, 102, 104, 106, 107, 108, 109, 110, 112, 113, 114, 115, 117, 118, 122, 124, 126, 127, 129, 131, 132, 134, 136, 137, 139, 140, 141, 143, 145, 147, 151, 156, 161, 163, 168, 172, 174, 175, 176, 177, 182, 183, 184, 186, 187, 189, 193, 196, 197, 198, 199, 204, 205, 206, 208, 209, 210, 212, 219, 220, 221, 222, 223, 228, 229, 230, 231, 232, 235, 
236, 237, 239, 240, 247, 248, 249, 250, 251, 256, 257, 258, 265, 266, 268, 270, 272, 274, 276, 280, 284, 285, 288, 293, 295, 298, 300, 303, 308, 310, 314, 325, 332, 333, 334, 336, 338, 339, 342, 347, 349, 353, 354, 355, 356, 361, 362, 363, 370, 371, 373, 375, 377, 379, 381, 385, 389, 390, 393, 398, 400, 403, 406, 411, 413, 417, 428, 435, 440, 441, 445, 450, 452, 455, 457, 458, 459, 460, 465, 466, 467, 468, 470, 471, 472, 473, 474, 475, 477, 479, 481, 483, 485, 486, 487, 488, 489, 491, 492, 493, 494, 496, 497, 501, 503, 505, 506, 508, 510, 511, 513, 515, 516, 518, 519, 520, 522, 524, 526, 530, 535, 540, 542, 553, 554, 555], "graph": [5, 164, 436, 543], "show": [5, 12, 18, 19, 20, 22, 28, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 61, 71, 79, 81, 86, 88, 93, 94, 95, 98, 105, 112, 115, 117, 118, 127, 135, 148, 149, 158, 182, 184, 186, 198, 204, 206, 209, 222, 228, 232, 236, 239, 250, 256, 275, 295, 327, 334, 338, 347, 354, 356, 361, 369, 380, 400, 430, 440, 450, 458, 460, 465, 467, 472, 473, 474, 477, 484, 491, 494, 496, 497, 506, 537], "hour": [5, 12, 47, 71, 78, 132, 145, 163, 184, 186, 206, 209, 222, 232, 236, 250, 298, 332, 353, 435, 450, 457, 511, 524, 542], "90": [5, 43], "hdd": [5, 53, 71, 250, 347, 450], "fill": [5, 47, 71, 78, 79, 110, 114, 132, 145, 163, 176, 177, 186, 198, 199, 209, 222, 223, 232, 236, 250, 251, 280, 284, 298, 332, 347, 353, 354, 385, 389, 435, 450, 457, 458, 489, 493, 511, 524, 542], "process": [5, 6, 9, 10, 12, 18, 19, 20, 22, 28, 33, 34, 35, 36, 37, 41, 42, 47, 48, 49, 56, 65, 70, 71, 74, 77, 78, 79, 81, 87, 102, 103, 108, 109, 110, 112, 114, 116, 121, 131, 151, 157, 177, 183, 184, 186, 198, 199, 205, 206, 208, 209, 219, 222, 223, 229, 230, 232, 235, 236, 247, 250, 251, 257, 272, 273, 278, 279, 280, 282, 284, 286, 291, 297, 298, 300, 320, 326, 334, 346, 347, 350, 352, 353, 354, 356, 362, 377, 378, 383, 384, 385, 387, 389, 391, 396, 403, 423, 429, 444, 449, 450, 453, 456, 457, 458, 460, 466, 481, 482, 487, 488, 489, 491, 493, 495, 500, 510, 530, 536, 555], "handl": [5, 8, 11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 44, 45, 46, 47, 48, 53, 70, 71, 73, 78, 80, 81, 84, 85, 104, 110, 114, 125, 127, 136, 139, 174, 175, 176, 180, 181, 183, 184, 186, 196, 197, 198, 202, 203, 205, 206, 209, 219, 220, 221, 222, 226, 227, 229, 231, 232, 236, 247, 248, 249, 250, 254, 255, 257, 274, 280, 284, 294, 298, 305, 333, 334, 346, 347, 349, 353, 355, 356, 359, 360, 379, 385, 389, 399, 400, 408, 411, 449, 450, 452, 457, 459, 460, 463, 464, 483, 489, 493, 504, 506, 515, 518], "almost": [5, 9, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 87, 183, 205, 229, 257, 362, 466], "ident": [5, 9, 34, 78, 80, 96, 106, 110, 114, 124, 143, 163, 184, 186, 206, 209, 232, 236, 266, 276, 280, 284, 293, 298, 312, 332, 333, 353, 355, 371, 381, 385, 389, 398, 415, 435, 457, 459, 475, 485, 489, 493, 503, 522, 542], "event": [5, 6, 11, 34, 46, 47, 71, 80, 81, 83, 87, 90, 94, 101, 102, 120, 142, 158, 163, 173, 176, 183, 184, 186, 195, 198, 205, 206, 209, 218, 219, 222, 229, 230, 232, 236, 246, 247, 250, 253, 257, 260, 264, 271, 272, 290, 311, 327, 332, 334, 346, 347, 355, 356, 358, 362, 365, 369, 376, 377, 395, 414, 430, 435, 449, 450, 459, 460, 462, 466, 469, 473, 480, 481, 499, 521, 537, 542, 555], "echo": [5, 14, 16, 18, 19, 20, 22, 25, 26, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 47, 557], "offlin": [5, 43, 47, 71, 76, 80, 81, 83, 86, 132, 138, 139, 145, 147, 149, 151, 157, 158, 163, 175, 176, 186, 197, 198, 209, 221, 222, 236, 249, 250, 253, 301, 307, 314, 316, 318, 320, 326, 327, 332, 333, 
334, 347, 355, 356, 358, 404, 410, 411, 417, 419, 421, 423, 429, 430, 435, 450, 455, 459, 460, 462, 465, 511, 517, 518, 524, 526, 528, 530, 536, 537, 542], "sy": [5, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 47, 71, 78, 184, 198, 206, 222, 232, 250, 298, 347, 353, 450, 457, 557], "replac": [5, 11, 14, 18, 19, 20, 22, 27, 33, 34, 35, 36, 37, 39, 41, 42, 46, 47, 53, 71, 73, 74, 76, 78, 79, 80, 81, 83, 85, 91, 92, 93, 104, 107, 110, 112, 114, 117, 127, 129, 132, 133, 136, 138, 139, 144, 145, 146, 147, 151, 154, 155, 157, 158, 162, 163, 175, 177, 181, 184, 186, 196, 197, 198, 199, 203, 206, 209, 220, 221, 222, 223, 227, 231, 232, 236, 248, 249, 250, 251, 253, 255, 274, 280, 284, 295, 298, 301, 302, 305, 307, 308, 313, 314, 315, 316, 320, 323, 324, 326, 327, 331, 332, 333, 334, 347, 349, 350, 353, 354, 355, 356, 358, 360, 379, 385, 389, 400, 404, 405, 408, 410, 411, 416, 417, 418, 419, 423, 426, 427, 429, 430, 434, 435, 450, 452, 453, 455, 457, 458, 459, 460, 462, 464, 470, 471, 472, 483, 486, 489, 491, 493, 496, 506, 508, 511, 512, 515, 517, 518, 523, 524, 525, 526, 530, 533, 534, 536, 537, 541, 542, 548, 550, 555, 561], "being": [5, 8, 10, 11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 70, 71, 76, 77, 78, 79, 80, 81, 84, 86, 88, 90, 101, 103, 108, 109, 110, 114, 116, 118, 120, 121, 129, 134, 136, 139, 140, 144, 160, 165, 166, 174, 175, 176, 177, 180, 184, 186, 196, 197, 198, 199, 202, 206, 209, 219, 220, 221, 222, 223, 226, 232, 236, 247, 249, 250, 251, 254, 256, 258, 260, 267, 271, 273, 278, 279, 280, 281, 284, 288, 290, 291, 297, 298, 303, 305, 309, 313, 329, 333, 334, 346, 347, 352, 353, 354, 355, 356, 359, 361, 363, 365, 376, 378, 383, 384, 385, 389, 391, 393, 395, 396, 406, 408, 411, 412, 416, 432, 449, 450, 455, 456, 457, 458, 459, 460, 463, 465, 467, 469, 480, 482, 487, 488, 489, 493, 495, 497, 499, 500, 508, 513, 515, 518, 519, 523, 539, 544, 545, 554, 555, 557, 558], "continu": [5, 14, 16, 19, 20, 25, 28, 31, 33, 34, 35, 37, 41, 42, 46, 47, 53, 57, 62, 71, 79, 80, 81, 90, 101, 104, 110, 114, 120, 139, 151, 161, 168, 175, 176, 186, 189, 197, 198, 209, 212, 221, 222, 231, 236, 239, 240, 249, 250, 260, 271, 274, 280, 284, 290, 320, 330, 333, 334, 339, 347, 354, 355, 356, 365, 376, 379, 385, 389, 395, 411, 423, 433, 441, 450, 458, 459, 460, 469, 480, 483, 489, 493, 499, 518, 530, 540, 548, 549, 550, 551, 555, 559, 560, 561], "possibli": [5, 11, 71, 184, 450], "wait": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 49, 68, 71, 74, 81, 83, 90, 101, 120, 127, 133, 134, 135, 139, 142, 143, 144, 145, 148, 149, 151, 153, 155, 157, 158, 160, 163, 175, 176, 186, 197, 198, 209, 217, 221, 222, 232, 236, 245, 249, 250, 253, 260, 271, 290, 295, 302, 303, 308, 311, 312, 313, 314, 320, 322, 324, 326, 327, 329, 332, 334, 344, 347, 350, 356, 358, 365, 376, 395, 400, 405, 406, 411, 414, 415, 416, 417, 423, 425, 427, 429, 430, 432, 435, 447, 450, 453, 460, 462, 469, 480, 499, 506, 512, 513, 518, 521, 522, 523, 524, 530, 532, 534, 536, 537, 539, 542, 555, 557, 559, 561], "complet": [5, 12, 18, 19, 20, 22, 25, 28, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 50, 67, 70, 71, 77, 78, 79, 80, 81, 91, 92, 104, 108, 109, 110, 113, 114, 125, 127, 131, 133, 139, 145, 151, 153, 155, 158, 162, 163, 172, 175, 176, 177, 184, 186, 194, 197, 198, 199, 206, 208, 209, 216, 219, 221, 222, 223, 231, 232, 235, 236, 244, 247, 249, 250, 251, 261, 262, 274, 278, 279, 280, 283, 284, 294, 295, 297, 298, 300, 302, 314, 320, 322, 324, 327, 331, 332, 333, 334, 343, 346, 347, 352, 353, 354, 355, 
356, 366, 367, 379, 383, 384, 385, 388, 389, 399, 400, 403, 405, 411, 417, 423, 425, 427, 430, 434, 435, 446, 449, 450, 456, 457, 458, 459, 460, 470, 471, 483, 487, 488, 489, 492, 493, 504, 506, 510, 512, 518, 524, 530, 532, 534, 537, 541, 542, 548, 550, 552, 555, 557], "scan": [5, 18, 19, 20, 22, 33, 41, 42, 47, 48, 50, 53, 70, 71, 74, 78, 80, 86, 143, 155, 176, 184, 198, 206, 209, 219, 222, 232, 236, 247, 250, 256, 298, 312, 333, 346, 347, 350, 353, 355, 361, 415, 427, 449, 450, 453, 457, 459, 465, 522, 534, 547, 557], "progress": [5, 35, 37, 40, 47, 71, 79, 80, 86, 103, 121, 125, 133, 134, 151, 152, 153, 155, 158, 160, 162, 176, 182, 184, 186, 198, 204, 206, 209, 222, 223, 228, 232, 236, 250, 251, 256, 273, 291, 294, 303, 320, 321, 322, 324, 327, 329, 331, 333, 347, 354, 355, 361, 378, 396, 399, 406, 423, 424, 425, 427, 430, 432, 434, 450, 458, 459, 465, 482, 500, 504, 513, 530, 531, 532, 534, 537, 539, 541, 555, 557], "tue": [5, 95, 98, 115, 127, 184, 206, 232, 295, 400, 474, 477, 494, 506], "nov": [5, 172, 194, 216], "24": [5, 47, 70, 71, 78, 84, 127, 176, 184, 198, 206, 207, 221, 222, 232, 240, 242, 243, 244, 247, 249, 250, 251, 252, 254, 257, 272, 295, 298, 300, 336, 346, 347, 353, 359, 400, 449, 450, 457, 463, 506, 553], "14": [5, 46, 58, 59, 71, 127, 145, 147, 158, 163, 184, 186, 204, 206, 209, 228, 232, 236, 256, 295, 332, 347, 400, 435, 450, 506, 524, 526, 537, 542, 555, 562], "34": [5, 32, 71, 450], "25": [5, 47, 71, 77, 86, 131, 155, 176, 182, 184, 198, 204, 206, 208, 222, 228, 232, 235, 250, 256, 297, 300, 329, 331, 335, 347, 352, 361, 403, 427, 450, 456, 465, 510, 534], "2020": [5, 34, 35, 37, 46, 48, 70, 90, 101, 120, 128, 136, 230, 239, 240, 242, 243, 244, 247, 249, 251, 252, 254, 257, 260, 271, 272, 278, 279, 282, 290, 296, 300, 302, 309, 322, 327, 329, 331, 335, 336, 346, 361, 365, 367, 376, 383, 384, 387, 395, 401, 405, 408, 412, 449, 469, 480, 499, 507, 512, 515], "51t": 5, "4g": [5, 18, 19, 20, 22, 33, 41, 42, 53, 147, 163, 186, 209, 236, 332, 435, 526, 542], "59t": 5, "issu": [5, 8, 11, 12, 13, 14, 16, 17, 18, 19, 20, 22, 25, 28, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 45, 47, 50, 53, 54, 55, 57, 58, 59, 71, 77, 79, 81, 86, 127, 139, 145, 155, 163, 165, 166, 175, 176, 184, 197, 198, 206, 209, 221, 222, 223, 232, 236, 249, 250, 251, 256, 295, 297, 314, 324, 332, 334, 347, 352, 354, 356, 361, 400, 411, 417, 427, 435, 450, 456, 458, 460, 465, 506, 518, 524, 534, 542, 544, 545, 550, 557], "07g": 5, "13t": 5, "326g": 5, "57": 5, "done": [5, 8, 9, 10, 11, 14, 16, 18, 19, 20, 21, 22, 25, 27, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 50, 53, 65, 71, 78, 87, 110, 114, 123, 127, 135, 139, 155, 158, 160, 161, 165, 166, 175, 176, 184, 186, 197, 198, 205, 206, 209, 221, 222, 229, 232, 236, 249, 250, 257, 280, 284, 292, 295, 298, 327, 329, 330, 347, 353, 362, 385, 389, 397, 400, 411, 427, 430, 432, 433, 444, 450, 457, 466, 489, 493, 502, 506, 518, 534, 537, 539, 540, 544, 545, 555, 557], "00": [5, 32, 53, 65, 73, 86, 155, 174, 182, 196, 204, 220, 228, 248, 256, 349, 361, 427, 444, 452, 465, 534], "03": [5, 35, 37, 48, 65, 184, 444], "21": [5, 71, 95, 98, 115, 127, 184, 206, 209, 232, 236, 295, 353, 400, 450, 474, 477, 494, 506], "go": [5, 10, 12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 71, 108, 109, 155, 176, 198, 222, 250, 333, 347, 355, 427, 450, 487, 488, 534, 555], "unavail": [5, 23, 43, 78, 80, 81, 90, 101, 120, 143, 157, 158, 186, 209, 232, 236, 260, 271, 290, 298, 312, 326, 327, 333, 334, 353, 355, 356, 365, 376, 395, 415, 429, 430, 457, 459, 460, 469, 
480, 499, 522, 536, 537, 548, 549, 552, 553, 554, 556, 561], "inus": 5, "achiev": [5, 46, 47, 48, 53, 71, 78, 81, 184, 206, 232, 236, 250, 298, 334, 347, 353, 356, 450, 457, 460], "goal": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 49, 57, 71, 78, 176, 198, 222, 232, 250, 298, 347, 353, 450, 457], "worth": [5, 47, 71, 93, 112, 117, 127, 176, 184, 198, 206, 222, 230, 232, 250, 272, 295, 347, 400, 450, 472, 491, 496, 506, 561], "moment": [5, 36, 117, 132, 133, 153, 184, 186, 206, 209, 232, 236, 287, 301, 302, 322, 392, 404, 405, 425, 496, 511, 512, 532], "summar": [5, 155, 186, 209, 236, 324, 427, 534], "tree": [5, 7, 13, 27, 47, 48, 71, 78, 79, 99, 105, 119, 122, 126, 177, 184, 199, 206, 222, 223, 232, 250, 251, 269, 275, 289, 298, 347, 353, 354, 374, 380, 394, 450, 457, 458, 478, 484, 498, 501, 505], "downsid": [5, 8, 9, 18, 19, 53], "ideal": [5, 11, 45, 46, 47, 48, 53, 70, 71, 90, 101, 120, 176, 198, 219, 222, 247, 250, 260, 271, 290, 346, 347, 365, 376, 395, 449, 450, 469, 480, 499], "space": [5, 6, 7, 8, 11, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 47, 57, 61, 62, 64, 66, 67, 71, 74, 76, 77, 78, 79, 80, 81, 86, 87, 90, 95, 96, 97, 98, 100, 101, 106, 107, 111, 115, 120, 124, 127, 131, 132, 133, 134, 136, 139, 141, 145, 147, 148, 149, 151, 156, 160, 162, 163, 168, 170, 172, 176, 177, 182, 183, 184, 185, 186, 189, 191, 192, 194, 198, 199, 204, 205, 206, 208, 209, 212, 214, 215, 216, 222, 223, 228, 229, 230, 232, 235, 236, 239, 240, 242, 243, 244, 250, 251, 256, 257, 260, 265, 266, 267, 268, 270, 271, 272, 276, 277, 281, 285, 290, 293, 295, 297, 298, 300, 303, 305, 308, 310, 314, 316, 317, 318, 320, 325, 329, 331, 332, 333, 334, 338, 339, 341, 342, 343, 347, 350, 352, 353, 354, 355, 356, 361, 362, 365, 370, 371, 372, 373, 375, 376, 381, 382, 386, 390, 395, 398, 400, 403, 406, 408, 411, 413, 417, 419, 420, 421, 423, 428, 432, 434, 435, 440, 441, 443, 445, 446, 450, 453, 455, 456, 457, 458, 459, 460, 465, 466, 469, 474, 475, 476, 477, 479, 480, 485, 486, 490, 494, 499, 503, 506, 510, 511, 513, 515, 518, 520, 524, 526, 527, 528, 530, 535, 539, 541, 542], "boundari": [5, 62, 81, 134, 139, 186, 189, 209, 212, 221, 236, 240, 249, 303, 334, 339, 356, 406, 411, 441, 460, 513, 518], "larger": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 50, 57, 70, 71, 78, 79, 80, 81, 110, 114, 133, 145, 176, 177, 184, 186, 198, 199, 206, 209, 219, 222, 223, 232, 236, 247, 250, 251, 280, 284, 298, 314, 333, 334, 346, 347, 353, 354, 355, 356, 385, 389, 417, 449, 450, 457, 458, 459, 460, 489, 493, 524], "o": [5, 6, 11, 12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 45, 46, 47, 49, 51, 53, 57, 58, 59, 61, 64, 65, 67, 71, 74, 76, 77, 78, 79, 80, 81, 84, 86, 87, 90, 91, 92, 95, 96, 98, 100, 101, 103, 106, 108, 109, 110, 114, 115, 117, 120, 121, 124, 127, 130, 131, 132, 133, 135, 136, 139, 141, 143, 145, 147, 151, 153, 155, 156, 157, 158, 161, 163, 164, 168, 171, 172, 175, 176, 177, 180, 182, 184, 186, 189, 191, 193, 194, 197, 198, 199, 202, 204, 206, 208, 209, 212, 214, 216, 221, 222, 223, 226, 228, 230, 232, 235, 236, 239, 240, 242, 244, 249, 250, 251, 254, 256, 260, 261, 262, 265, 266, 268, 270, 271, 272, 273, 276, 278, 279, 280, 284, 285, 287, 290, 291, 293, 295, 298, 299, 300, 301, 302, 304, 305, 310, 312, 314, 316, 322, 324, 325, 326, 332, 333, 334, 338, 339, 341, 343, 347, 350, 353, 354, 355, 356, 359, 361, 362, 365, 366, 367, 370, 371, 373, 375, 376, 378, 381, 383, 384, 385, 389, 390, 392, 395, 396, 398, 400, 402, 403, 404, 405, 407, 408, 411, 
413, 415, 417, 419, 425, 427, 428, 429, 433, 435, 436, 440, 443, 444, 446, 450, 453, 455, 456, 457, 458, 459, 460, 463, 465, 466, 469, 470, 471, 474, 475, 477, 479, 480, 482, 485, 487, 488, 489, 493, 494, 496, 499, 500, 503, 506, 509, 510, 511, 512, 514, 515, 518, 520, 522, 524, 526, 530, 532, 534, 535, 536, 537, 540, 542, 543, 553, 555, 557, 561, 562], "price": [5, 46], "pai": [5, 8, 78, 232, 298, 353, 457], "cannot": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 54, 62, 71, 76, 77, 78, 79, 80, 81, 87, 88, 90, 93, 99, 101, 104, 108, 109, 110, 114, 118, 119, 120, 122, 123, 126, 135, 139, 143, 155, 165, 166, 168, 175, 177, 184, 186, 189, 197, 198, 199, 206, 209, 212, 219, 221, 222, 223, 231, 232, 236, 240, 249, 250, 251, 257, 258, 260, 263, 269, 271, 274, 278, 279, 280, 284, 288, 289, 290, 292, 297, 298, 302, 305, 312, 333, 334, 339, 347, 352, 353, 354, 355, 356, 362, 363, 365, 368, 374, 376, 379, 383, 384, 385, 389, 393, 394, 395, 397, 405, 407, 411, 415, 441, 450, 455, 456, 457, 458, 459, 460, 466, 467, 469, 472, 478, 480, 483, 487, 488, 489, 493, 497, 498, 499, 501, 502, 505, 512, 514, 518, 522, 534, 544, 545, 548, 549, 551, 552, 553, 554, 556, 557, 558, 559, 560, 561], "therefor": [5, 8, 46, 47, 48, 53, 70, 71, 73, 78, 79, 108, 109, 139, 143, 165, 166, 174, 175, 176, 177, 184, 186, 196, 197, 198, 199, 206, 209, 219, 220, 221, 222, 223, 232, 236, 247, 248, 249, 250, 251, 278, 279, 298, 312, 335, 346, 347, 349, 353, 354, 383, 384, 411, 415, 437, 438, 449, 450, 452, 457, 458, 487, 488, 518, 522, 544, 545, 555], "depth": [5, 28, 47, 71, 95, 98, 100, 115, 184, 198, 206, 222, 232, 250, 265, 268, 270, 285, 347, 370, 373, 375, 390, 450, 474, 477, 479, 494], "explan": [5, 62, 66, 168, 170, 189, 192, 212, 215, 240, 243, 339, 342, 441, 445], "out": [5, 8, 12, 17, 18, 19, 20, 21, 22, 25, 29, 33, 34, 35, 36, 37, 39, 41, 42, 46, 47, 49, 53, 62, 65, 70, 71, 78, 81, 91, 92, 93, 107, 112, 117, 127, 145, 155, 157, 158, 168, 176, 184, 186, 189, 198, 206, 209, 212, 219, 222, 232, 236, 240, 247, 250, 295, 298, 314, 324, 326, 327, 334, 339, 346, 347, 353, 356, 400, 417, 427, 429, 430, 441, 444, 449, 450, 457, 460, 470, 471, 472, 486, 491, 496, 506, 524, 534, 536, 537], "slide": 5, "present": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 54, 71, 78, 79, 81, 86, 87, 108, 109, 110, 114, 140, 158, 161, 163, 182, 183, 184, 186, 198, 204, 205, 206, 209, 222, 228, 229, 232, 236, 250, 256, 257, 278, 279, 280, 284, 298, 309, 327, 332, 347, 353, 354, 356, 361, 362, 383, 384, 385, 389, 412, 430, 433, 435, 450, 457, 458, 460, 465, 466, 487, 488, 489, 493, 519, 537, 540, 542, 553], "summit": [5, 57], "made": [5, 8, 10, 11, 12, 32, 46, 47, 48, 56, 71, 77, 78, 79, 81, 104, 110, 114, 127, 132, 143, 148, 149, 163, 176, 177, 184, 186, 198, 199, 209, 222, 223, 231, 232, 236, 250, 251, 274, 280, 284, 295, 298, 312, 317, 318, 332, 334, 347, 353, 354, 356, 379, 385, 389, 400, 415, 420, 421, 435, 450, 456, 457, 458, 460, 483, 489, 493, 506, 511, 522, 527, 528, 542, 547, 549, 552, 555, 559, 560], "again": [5, 18, 19, 20, 21, 25, 34, 35, 36, 37, 41, 42, 53, 71, 80, 143, 155, 182, 186, 209, 222, 236, 250, 312, 324, 333, 347, 355, 415, 427, 450, 459, 522, 534, 548, 549, 550, 552, 557, 559, 560], "simpli": [5, 34, 36, 47, 53, 54, 65, 67, 71, 78, 80, 90, 101, 120, 131, 172, 184, 186, 194, 198, 206, 208, 209, 216, 222, 232, 235, 236, 244, 250, 260, 271, 290, 298, 300, 333, 343, 347, 353, 355, 365, 376, 395, 403, 444, 446, 450, 457, 459, 469, 480, 499, 510, 557], "new": [5, 8, 9, 10, 12, 14, 16, 17, 18, 19, 20, 22, 
25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 50, 55, 57, 59, 65, 66, 67, 70, 71, 73, 76, 78, 79, 80, 81, 87, 88, 89, 90, 92, 93, 94, 101, 104, 107, 108, 109, 110, 112, 114, 117, 118, 120, 123, 127, 133, 136, 139, 145, 148, 149, 150, 153, 154, 155, 157, 158, 163, 168, 170, 172, 174, 175, 176, 177, 184, 186, 189, 192, 194, 196, 197, 198, 199, 206, 209, 212, 215, 216, 219, 220, 221, 222, 223, 232, 236, 240, 243, 244, 247, 248, 249, 250, 251, 258, 259, 260, 262, 264, 271, 274, 277, 278, 279, 280, 282, 284, 288, 290, 292, 295, 298, 302, 305, 314, 317, 318, 319, 322, 323, 324, 326, 327, 332, 333, 334, 342, 343, 346, 347, 349, 353, 354, 355, 356, 362, 363, 364, 365, 367, 369, 376, 379, 382, 383, 384, 385, 387, 389, 393, 395, 397, 400, 405, 408, 411, 417, 420, 421, 422, 425, 426, 427, 429, 430, 435, 444, 445, 446, 449, 450, 452, 455, 457, 458, 459, 460, 466, 467, 468, 469, 471, 472, 473, 480, 483, 486, 487, 488, 489, 491, 493, 496, 497, 499, 502, 506, 512, 515, 518, 524, 527, 528, 529, 532, 533, 534, 536, 537, 542, 548, 550, 555, 557], "call": [5, 12, 40, 46, 47, 48, 53, 65, 71, 77, 78, 79, 80, 84, 87, 104, 108, 109, 117, 129, 177, 180, 183, 184, 186, 198, 199, 202, 205, 206, 209, 222, 223, 226, 229, 230, 231, 232, 236, 250, 251, 254, 257, 272, 274, 278, 279, 287, 298, 299, 333, 347, 353, 354, 355, 359, 362, 379, 383, 384, 392, 444, 450, 456, 457, 458, 459, 463, 466, 483, 487, 488, 496, 508], "essenti": [5, 8, 9, 54, 74, 350, 453], "longer": [5, 19, 20, 33, 34, 35, 36, 46, 47, 64, 67, 70, 71, 79, 81, 91, 92, 93, 107, 110, 112, 114, 117, 123, 127, 139, 143, 160, 161, 163, 172, 175, 176, 184, 186, 191, 194, 197, 198, 206, 209, 214, 216, 219, 221, 222, 223, 232, 236, 242, 244, 247, 249, 250, 251, 277, 280, 284, 292, 295, 312, 329, 330, 332, 334, 341, 343, 346, 347, 354, 356, 382, 385, 389, 397, 400, 411, 415, 432, 433, 435, 443, 446, 449, 450, 458, 460, 470, 471, 472, 486, 489, 491, 493, 496, 502, 506, 518, 522, 539, 540, 542, 548, 549, 550, 551, 553], "need": [5, 7, 8, 9, 10, 11, 12, 14, 16, 17, 18, 19, 20, 22, 25, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 53, 54, 61, 62, 67, 70, 71, 77, 78, 79, 80, 81, 86, 87, 90, 91, 92, 93, 96, 99, 101, 106, 107, 108, 109, 110, 112, 114, 117, 119, 120, 122, 124, 126, 127, 129, 155, 165, 166, 168, 172, 176, 177, 183, 184, 186, 189, 194, 198, 199, 205, 206, 209, 212, 216, 219, 222, 223, 229, 232, 236, 239, 240, 244, 247, 250, 251, 257, 260, 266, 269, 271, 276, 278, 279, 280, 282, 284, 289, 290, 293, 295, 297, 298, 333, 334, 338, 339, 343, 346, 347, 352, 353, 354, 355, 356, 362, 365, 371, 374, 376, 381, 383, 384, 385, 387, 389, 394, 395, 398, 400, 440, 441, 446, 449, 450, 456, 457, 458, 459, 460, 465, 466, 469, 470, 471, 472, 475, 478, 480, 485, 486, 487, 488, 489, 491, 493, 496, 498, 499, 501, 503, 505, 506, 508, 534, 544, 545, 555, 557], "subsequ": [5, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 65, 81, 158, 171, 183, 193, 205, 209, 229, 236, 334, 356, 444, 460, 537, 555], "sdl": 5, "45": 5, "82g": 5, "10t": 5, "78g": 5, "565g": 5, "99": [5, 41, 42], "44": 5, "04": [5, 38, 39, 41, 42, 65, 155, 427, 444, 534], "onc": [5, 9, 10, 12, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 50, 65, 66, 67, 71, 78, 79, 80, 81, 86, 90, 91, 92, 93, 101, 104, 107, 108, 109, 112, 117, 120, 123, 127, 132, 133, 143, 145, 155, 161, 163, 170, 172, 176, 177, 182, 184, 186, 192, 194, 198, 199, 204, 206, 209, 215, 216, 222, 223, 228, 231, 232, 236, 243, 244, 250, 251, 256, 260, 262, 271, 274, 278, 279, 290, 292, 295, 
298, 312, 314, 324, 330, 332, 333, 334, 342, 343, 347, 353, 354, 355, 356, 361, 365, 367, 376, 379, 383, 384, 395, 397, 400, 415, 417, 427, 433, 435, 444, 445, 446, 450, 457, 458, 459, 460, 465, 469, 470, 471, 472, 480, 483, 486, 487, 488, 491, 496, 499, 502, 506, 511, 522, 524, 534, 540, 542, 547, 548, 550, 555, 557], "normal": [5, 9, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 50, 70, 71, 78, 80, 86, 87, 88, 90, 95, 96, 98, 101, 103, 106, 108, 109, 110, 114, 115, 118, 120, 121, 124, 127, 132, 145, 147, 157, 158, 163, 176, 184, 186, 198, 205, 206, 209, 219, 222, 229, 232, 236, 247, 250, 257, 258, 260, 266, 271, 273, 276, 278, 279, 280, 284, 288, 290, 291, 293, 295, 298, 301, 314, 316, 326, 327, 332, 333, 346, 347, 353, 355, 362, 363, 365, 371, 376, 378, 381, 383, 384, 385, 389, 393, 395, 396, 398, 400, 404, 417, 419, 429, 430, 435, 449, 450, 457, 459, 465, 466, 467, 469, 474, 475, 477, 480, 482, 485, 487, 488, 489, 493, 494, 497, 499, 500, 503, 506, 511, 524, 526, 536, 537, 542, 547, 557], "healthi": [5, 47, 71, 81, 108, 109, 139, 150, 186, 197, 209, 221, 222, 236, 249, 250, 319, 334, 347, 356, 411, 422, 450, 460, 487, 488, 518, 529, 557], "Their": [6, 58, 59], "algorithm": [6, 18, 19, 20, 22, 33, 41, 42, 47, 48, 66, 71, 78, 79, 86, 108, 109, 139, 165, 166, 170, 175, 177, 184, 192, 197, 198, 199, 206, 215, 221, 222, 223, 228, 232, 243, 249, 250, 251, 256, 278, 279, 298, 342, 347, 353, 354, 361, 383, 384, 411, 445, 450, 457, 458, 465, 487, 488, 518, 544, 545], "acceler": [6, 46, 47, 71, 198, 222, 250, 347, 450], "microbenchmark": [6, 47], "disabl": [6, 7, 11, 14, 16, 18, 19, 20, 22, 25, 26, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 53, 61, 70, 71, 76, 78, 79, 80, 81, 86, 87, 95, 98, 104, 110, 114, 115, 127, 136, 176, 177, 180, 182, 183, 184, 198, 199, 202, 204, 205, 206, 209, 219, 222, 223, 226, 228, 229, 231, 232, 236, 239, 247, 250, 251, 254, 256, 257, 274, 280, 284, 295, 298, 305, 333, 338, 346, 347, 353, 354, 355, 356, 361, 362, 379, 385, 389, 400, 408, 440, 449, 450, 455, 457, 458, 459, 460, 465, 466, 474, 477, 483, 489, 493, 494, 506, 515, 557], "flag": [6, 7, 46, 47, 48, 53, 58, 59, 66, 70, 71, 74, 78, 79, 81, 84, 86, 89, 90, 92, 93, 96, 101, 102, 104, 105, 106, 108, 109, 110, 112, 114, 120, 124, 127, 132, 135, 136, 139, 143, 144, 145, 147, 148, 149, 151, 157, 158, 160, 161, 170, 171, 175, 176, 180, 182, 184, 186, 192, 193, 197, 198, 202, 204, 206, 209, 215, 219, 221, 222, 223, 226, 228, 231, 232, 236, 243, 247, 249, 250, 251, 254, 256, 259, 260, 262, 263, 266, 271, 274, 275, 276, 278, 279, 280, 282, 284, 290, 292, 293, 295, 298, 301, 305, 312, 313, 314, 316, 317, 318, 320, 326, 327, 329, 330, 334, 342, 346, 347, 350, 353, 354, 356, 359, 361, 364, 365, 367, 368, 371, 376, 377, 379, 380, 381, 383, 384, 385, 387, 389, 395, 398, 400, 404, 408, 411, 415, 416, 417, 419, 420, 421, 423, 429, 430, 432, 433, 445, 449, 450, 453, 457, 458, 460, 463, 465, 468, 469, 471, 472, 475, 480, 481, 483, 484, 485, 487, 488, 489, 491, 493, 499, 503, 506, 511, 515, 518, 522, 523, 524, 526, 527, 528, 530, 536, 537, 539, 540, 558], "refer": [6, 10, 11, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 39, 41, 42, 43, 47, 48, 53, 59, 66, 71, 77, 78, 79, 81, 86, 93, 97, 100, 110, 111, 114, 127, 128, 136, 155, 158, 170, 176, 177, 182, 184, 186, 192, 198, 199, 204, 206, 209, 215, 222, 223, 228, 232, 236, 243, 250, 251, 256, 263, 267, 280, 281, 284, 295, 296, 297, 298, 305, 327, 334, 342, 347, 352, 353, 354, 356, 361, 368, 372, 385, 386, 389, 400, 401, 408, 427, 430, 445, 
450, 456, 457, 458, 460, 465, 472, 476, 479, 489, 490, 493, 506, 507, 515, 534, 537, 557], "materi": 6, "introduct": [6, 10, 51], "effici": [6, 46, 47, 48, 53, 57, 70, 71, 78, 79, 81, 155, 184, 206, 219, 222, 223, 232, 236, 247, 250, 251, 298, 334, 346, 347, 353, 354, 356, 427, 449, 450, 457, 458, 460, 534], "consider": [6, 46, 47, 48, 57], "troubleshoot": [6, 47, 58, 59, 70, 219, 247, 346, 449], "about": [6, 12, 14, 18, 19, 20, 28, 34, 36, 41, 42, 46, 47, 48, 53, 57, 59, 62, 71, 74, 76, 78, 79, 81, 86, 90, 92, 93, 101, 104, 108, 109, 110, 114, 120, 127, 136, 139, 143, 147, 158, 163, 165, 166, 168, 175, 176, 177, 182, 184, 186, 189, 197, 198, 199, 204, 206, 209, 212, 221, 222, 223, 228, 231, 232, 236, 240, 249, 250, 251, 256, 260, 262, 263, 271, 274, 278, 279, 280, 284, 290, 295, 298, 305, 308, 312, 327, 332, 334, 335, 339, 347, 350, 353, 354, 356, 361, 365, 367, 368, 376, 379, 383, 384, 385, 389, 395, 400, 408, 411, 415, 419, 430, 435, 437, 438, 441, 450, 453, 455, 457, 458, 460, 465, 469, 471, 472, 480, 483, 487, 488, 489, 493, 499, 506, 515, 518, 522, 526, 537, 542, 544, 545, 555], "log": [6, 8, 10, 12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 47, 48, 49, 53, 65, 70, 71, 74, 78, 79, 80, 86, 87, 104, 127, 131, 136, 139, 142, 143, 144, 151, 163, 171, 175, 176, 182, 184, 185, 186, 193, 197, 198, 204, 206, 208, 209, 219, 221, 222, 228, 231, 232, 235, 236, 247, 249, 250, 251, 256, 274, 295, 298, 300, 305, 311, 312, 313, 320, 332, 333, 346, 347, 350, 353, 354, 355, 361, 362, 379, 400, 403, 408, 411, 414, 415, 416, 423, 435, 444, 449, 450, 453, 457, 458, 459, 465, 466, 483, 506, 510, 515, 518, 521, 522, 523, 530, 542, 555, 562], "file": [6, 8, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 22, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 51, 54, 57, 61, 62, 65, 67, 70, 71, 73, 74, 77, 78, 79, 80, 81, 84, 85, 86, 87, 88, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 136, 139, 143, 145, 155, 158, 163, 165, 166, 167, 168, 171, 172, 174, 175, 176, 177, 180, 181, 182, 183, 184, 185, 186, 188, 189, 193, 194, 196, 197, 198, 199, 202, 203, 204, 205, 206, 208, 209, 211, 212, 216, 219, 220, 221, 222, 223, 226, 227, 228, 229, 230, 231, 232, 235, 236, 238, 239, 240, 244, 247, 248, 249, 250, 251, 254, 255, 256, 257, 258, 260, 262, 263, 264, 266, 267, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 305, 312, 314, 332, 333, 334, 335, 337, 338, 339, 343, 346, 347, 349, 350, 352, 353, 354, 355, 356, 359, 360, 361, 362, 363, 365, 367, 368, 369, 371, 372, 374, 375, 376, 377, 378, 379, 380, 381, 383, 384, 385, 386, 387, 388, 389, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 408, 411, 415, 417, 427, 435, 437, 438, 439, 440, 441, 444, 446, 449, 450, 452, 453, 456, 457, 458, 459, 460, 463, 464, 465, 466, 467, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 509, 510, 515, 518, 522, 524, 534, 537, 542, 544, 545, 546, 547, 554, 557], "unkil": 6, "draid": [6, 48, 58, 59, 67, 79, 80, 136, 155, 343, 354, 355, 408, 427, 446, 458, 459, 515, 534], "creat": [6, 7, 8, 9, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 25, 28, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 54, 
57, 65, 67, 68, 70, 71, 73, 77, 78, 79, 80, 81, 83, 85, 86, 88, 89, 90, 91, 93, 94, 95, 96, 98, 101, 104, 106, 107, 108, 109, 110, 112, 114, 115, 117, 118, 120, 124, 127, 129, 130, 131, 132, 133, 143, 144, 157, 163, 171, 172, 174, 175, 176, 177, 181, 182, 184, 185, 186, 191, 193, 194, 196, 197, 198, 199, 203, 204, 206, 207, 208, 209, 214, 216, 217, 219, 220, 221, 222, 223, 227, 228, 230, 231, 232, 234, 235, 236, 242, 244, 245, 247, 248, 249, 250, 251, 253, 255, 256, 258, 259, 260, 261, 263, 264, 266, 271, 272, 274, 276, 277, 278, 279, 280, 282, 284, 287, 288, 290, 293, 295, 297, 298, 299, 300, 301, 302, 312, 313, 326, 332, 333, 334, 343, 344, 346, 347, 349, 352, 353, 354, 355, 356, 358, 360, 361, 363, 364, 365, 366, 368, 369, 371, 376, 379, 381, 382, 383, 384, 385, 387, 389, 392, 393, 395, 398, 400, 402, 403, 404, 405, 415, 416, 429, 435, 444, 446, 447, 449, 450, 452, 456, 457, 458, 459, 460, 462, 464, 465, 467, 468, 469, 470, 472, 473, 474, 475, 477, 480, 483, 485, 486, 487, 488, 489, 491, 493, 494, 496, 497, 499, 503, 506, 508, 509, 510, 511, 512, 522, 523, 536, 542, 549, 551, 554, 556, 557], "rebuild": [6, 9, 18, 19, 25, 29, 32, 34, 36, 43, 47, 48, 71, 80, 145, 250, 333, 347, 355, 450, 459, 524], "spare": [6, 48, 67, 79, 80, 136, 139, 140, 148, 149, 151, 163, 175, 186, 197, 209, 221, 236, 249, 305, 308, 309, 317, 318, 320, 332, 333, 343, 354, 355, 408, 411, 412, 420, 421, 423, 435, 446, 458, 459, 515, 518, 519, 527, 528, 530, 542, 548, 550, 555], "rebalanc": [6, 47], "There": [7, 8, 10, 12, 14, 16, 18, 19, 20, 25, 31, 33, 34, 36, 40, 41, 42, 46, 53, 77, 90, 101, 104, 108, 109, 120, 136, 183, 186, 205, 209, 229, 231, 232, 236, 260, 271, 274, 278, 279, 290, 305, 365, 376, 379, 383, 384, 395, 408, 456, 469, 480, 483, 487, 488, 499, 515, 549, 551, 559, 560, 561], "how": [7, 8, 9, 10, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 46, 47, 48, 53, 59, 61, 62, 70, 71, 73, 76, 78, 79, 81, 85, 88, 91, 92, 93, 94, 95, 98, 107, 110, 112, 114, 115, 117, 118, 127, 134, 139, 143, 145, 168, 174, 175, 176, 177, 181, 184, 189, 196, 197, 198, 199, 203, 206, 209, 212, 219, 220, 221, 222, 223, 227, 232, 236, 239, 240, 247, 248, 249, 250, 251, 255, 269, 280, 284, 289, 295, 298, 303, 312, 314, 334, 338, 339, 346, 347, 349, 353, 354, 356, 360, 385, 389, 400, 406, 411, 415, 417, 440, 441, 449, 450, 452, 455, 457, 458, 460, 464, 467, 470, 471, 472, 473, 474, 477, 486, 489, 491, 493, 494, 496, 497, 506, 513, 518, 522, 524, 555], "impact": [7, 11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 79, 80, 176, 177, 186, 198, 199, 209, 219, 222, 223, 236, 247, 250, 251, 333, 347, 354, 355, 450, 458, 459, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "found": [7, 8, 18, 19, 22, 25, 34, 36, 41, 43, 46, 47, 53, 65, 71, 78, 79, 81, 86, 104, 110, 114, 143, 145, 174, 186, 196, 198, 204, 209, 220, 222, 228, 231, 236, 250, 256, 274, 280, 284, 298, 312, 314, 334, 347, 353, 354, 356, 361, 379, 385, 389, 415, 417, 444, 450, 457, 458, 460, 465, 483, 489, 493, 522, 524, 557], "github": [7, 9, 12, 13, 16, 18, 19, 20, 22, 27, 28, 29, 33, 34, 35, 36, 37, 41, 42, 48, 53, 56, 58, 59, 62, 165, 166, 441, 544, 545, 548, 549, 550, 551, 552, 553, 554, 555, 557, 558, 559, 560, 561], "your": [7, 8, 9, 12, 13, 14, 16, 17, 18, 19, 20, 22, 23, 25, 26, 27, 28, 29, 31, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 46, 47, 48, 53, 56, 59, 62, 67, 71, 76, 77, 78, 80, 81, 90, 101, 110, 114, 120, 127, 130, 132, 143, 145, 163, 168, 172, 176, 184, 186, 189, 194, 198, 206, 207, 209, 212, 216, 222, 232, 234, 236, 
240, 244, 250, 260, 271, 280, 284, 290, 295, 297, 298, 299, 312, 332, 333, 339, 343, 347, 352, 353, 355, 365, 376, 385, 389, 395, 400, 402, 415, 435, 441, 446, 450, 455, 456, 457, 459, 460, 469, 480, 489, 493, 499, 506, 509, 511, 522, 524, 542], "compil": [7, 8, 9, 12, 46, 47, 53, 71, 78, 198, 222, 250, 347, 353, 450, 457], "top": [7, 8, 10, 47, 48, 53, 71, 76, 78, 79, 80, 104, 108, 109, 151, 163, 176, 186, 198, 206, 209, 222, 223, 231, 232, 236, 250, 251, 274, 275, 278, 279, 298, 320, 332, 333, 347, 353, 354, 355, 379, 383, 384, 423, 435, 450, 455, 457, 458, 459, 483, 487, 488, 530, 542, 562], "basi": [7, 46, 48, 78, 139, 155, 160, 184, 206, 221, 232, 236, 249, 298, 329, 353, 411, 427, 432, 457, 518, 534, 539], "none": [7, 11, 14, 16, 18, 19, 20, 22, 25, 28, 29, 31, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 71, 77, 78, 79, 81, 95, 98, 102, 110, 112, 114, 115, 127, 136, 139, 143, 144, 151, 152, 160, 163, 175, 177, 184, 186, 197, 198, 199, 206, 209, 221, 222, 223, 232, 236, 249, 250, 251, 265, 268, 280, 282, 284, 285, 295, 297, 298, 305, 312, 313, 329, 332, 334, 347, 352, 353, 356, 370, 373, 377, 385, 387, 389, 390, 400, 408, 411, 415, 416, 424, 432, 435, 450, 456, 457, 460, 474, 477, 481, 489, 491, 493, 494, 506, 515, 518, 522, 523, 530, 531, 539, 542, 548, 549, 550, 551, 554, 555, 556, 557, 559, 560, 561], "arch": [7, 25, 31, 39, 43, 58, 59], "distro": [7, 18, 19, 20, 22, 30, 39, 43, 47, 51, 58, 59], "perf": [7, 8], "coverag": [7, 47], "unstabl": [7, 18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 73, 86, 182, 196, 204, 220, 228, 248, 256, 349, 361, 452, 465], "messag": [7, 10, 12, 14, 18, 19, 20, 22, 23, 33, 34, 36, 41, 42, 47, 53, 58, 59, 61, 62, 71, 78, 79, 81, 84, 104, 127, 131, 145, 163, 168, 171, 180, 184, 185, 186, 189, 193, 198, 202, 206, 208, 209, 212, 222, 226, 231, 232, 235, 236, 239, 240, 250, 254, 274, 295, 298, 300, 314, 332, 334, 338, 339, 347, 353, 354, 356, 359, 379, 400, 403, 417, 435, 440, 441, 450, 457, 458, 460, 463, 483, 506, 510, 524, 542], "comma": [7, 76, 78, 79, 81, 88, 93, 95, 98, 100, 103, 115, 118, 121, 131, 141, 143, 147, 156, 168, 171, 184, 186, 189, 193, 206, 209, 212, 232, 235, 236, 240, 258, 263, 265, 268, 270, 273, 285, 288, 291, 298, 300, 310, 312, 316, 325, 353, 354, 356, 363, 368, 370, 373, 375, 378, 390, 393, 396, 403, 413, 415, 419, 428, 455, 457, 458, 460, 467, 472, 474, 477, 479, 482, 494, 497, 500, 510, 520, 522, 526, 535], "tag": [7, 8, 12, 46, 51, 53, 57, 97, 104, 111, 164, 184, 206, 232, 267, 274, 281, 372, 379, 386, 436, 476, 483, 490, 543], "architectur": [7, 47, 55, 57, 70, 219, 247, 346, 449], "exclud": [7, 18, 19, 20, 22, 26, 33, 36, 37, 41, 42, 71, 80, 108, 109, 110, 114, 206, 232, 236, 278, 279, 333, 347, 355, 383, 384, 450, 459, 487, 488, 489, 493], "fedora": [7, 8, 13, 32, 39, 43, 58, 59], "rawhid": 7, "coupl": 7, "text": [7, 10, 34, 36, 47, 62, 76, 79, 81, 104, 168, 186, 189, 209, 212, 231, 232, 236, 240, 274, 334, 339, 354, 356, 379, 441, 455, 458, 460, 483], "bodi": [7, 12, 62, 168, 189, 212, 240, 339, 441], "sign": [7, 10, 12, 18, 19, 20, 32, 33, 34, 36, 41, 42, 46, 53, 57, 58, 59, 86, 93, 104, 184, 206, 231, 232, 256, 263, 274, 361, 368, 379, 465, 472, 483], "contributor": [7, 46], "email": [7, 10, 12, 18, 19, 20, 22, 28, 33, 36, 37, 41, 42], "attempt": [7, 23, 35, 37, 46, 47, 48, 62, 71, 77, 79, 80, 81, 86, 90, 97, 101, 103, 104, 108, 109, 110, 111, 114, 120, 121, 127, 135, 136, 143, 148, 149, 157, 163, 165, 166, 168, 176, 177, 182, 184, 186, 189, 198, 199, 204, 206, 209, 212, 222, 223, 228, 231, 232, 236, 240, 250, 251, 256, 260, 267, 
271, 273, 274, 278, 279, 281, 290, 291, 295, 297, 305, 312, 317, 318, 326, 332, 333, 334, 339, 347, 352, 354, 355, 356, 361, 365, 372, 376, 378, 379, 383, 384, 386, 395, 396, 400, 408, 415, 420, 421, 429, 435, 441, 450, 456, 458, 459, 460, 465, 469, 476, 480, 482, 483, 487, 488, 489, 490, 493, 499, 500, 506, 515, 522, 527, 528, 536, 542, 544, 545, 552, 553, 554, 555], "correct": [7, 8, 12, 13, 18, 19, 34, 35, 36, 37, 46, 47, 48, 53, 62, 71, 74, 80, 90, 101, 108, 109, 120, 163, 168, 186, 189, 209, 212, 232, 236, 240, 260, 271, 290, 333, 339, 350, 355, 365, 376, 395, 441, 450, 453, 459, 469, 480, 487, 488, 499, 553, 555, 557], "against": [7, 8, 9, 10, 34, 36, 46, 47, 48, 53, 57, 67, 71, 78, 79, 80, 81, 90, 101, 104, 108, 109, 110, 114, 120, 168, 172, 176, 184, 189, 194, 198, 206, 212, 216, 222, 223, 231, 232, 236, 240, 244, 250, 251, 260, 271, 274, 278, 279, 280, 284, 290, 298, 334, 343, 347, 353, 354, 355, 356, 365, 376, 379, 383, 384, 385, 389, 395, 446, 450, 457, 458, 459, 460, 469, 480, 483, 487, 488, 489, 493, 499], "instruct": [7, 9, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 47, 53, 61, 71, 102, 104, 198, 222, 230, 231, 232, 239, 250, 272, 274, 338, 347, 377, 379, 440, 450, 481, 483], "ref": [7, 66, 170, 192, 215, 243, 342, 445], "123": [7, 78, 95, 98, 115, 127, 184, 206, 232, 295, 298, 353, 400, 457, 474, 477, 494, 506], "head": [7, 10, 14, 28, 29, 43, 46, 47, 79, 110, 114, 184, 206, 232, 280, 284, 385, 389, 458, 489, 493], "clone": [7, 9, 12, 13, 17, 18, 19, 20, 21, 22, 27, 28, 29, 33, 34, 35, 36, 37, 41, 42, 43, 57, 71, 77, 78, 79, 81, 83, 86, 88, 90, 92, 93, 101, 104, 107, 108, 109, 110, 112, 113, 114, 117, 118, 120, 127, 184, 206, 231, 232, 250, 251, 253, 258, 260, 263, 271, 274, 277, 278, 279, 280, 283, 284, 287, 288, 290, 295, 297, 298, 347, 352, 353, 354, 358, 363, 365, 368, 376, 379, 382, 383, 384, 385, 388, 389, 392, 393, 395, 400, 450, 456, 457, 458, 460, 462, 465, 467, 469, 471, 472, 480, 483, 486, 487, 488, 489, 491, 492, 493, 496, 497, 499, 506], "master": [7, 8, 10, 11, 12, 27, 58, 59, 60, 79, 90, 101, 120, 260, 271, 290, 365, 376, 395, 469, 480, 499], "v4": 7, "execut": [7, 8, 47, 49, 53, 65, 67, 71, 74, 78, 84, 87, 103, 104, 116, 121, 127, 160, 171, 172, 176, 183, 184, 193, 194, 198, 205, 206, 216, 222, 229, 231, 232, 236, 244, 250, 257, 273, 274, 291, 295, 298, 329, 343, 347, 350, 353, 359, 362, 378, 379, 391, 396, 400, 432, 444, 446, 450, 453, 457, 463, 466, 482, 483, 495, 500, 506, 539, 553], "prefer": [7, 9, 18, 19, 20, 22, 23, 27, 32, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 70, 81, 110, 114, 186, 209, 219, 236, 247, 280, 284, 334, 346, 356, 385, 389, 449, 460, 489, 493], "scenario": [7, 46, 48, 86, 110, 114, 280, 284, 385, 389, 465, 489, 493], "No": [7, 22, 41, 46, 50, 53, 54, 71, 88, 90, 92, 93, 101, 104, 107, 110, 114, 118, 120, 151, 157, 164, 176, 184, 198, 206, 222, 231, 232, 250, 258, 260, 262, 263, 271, 274, 277, 280, 284, 288, 290, 347, 363, 365, 367, 368, 376, 379, 382, 385, 389, 393, 395, 423, 429, 436, 450, 467, 469, 471, 472, 480, 483, 486, 489, 493, 497, 499, 530, 536, 543, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "lint": [7, 11, 168, 189, 212, 240, 339], "At": [7, 9, 11, 36, 46, 47, 48, 71, 80, 157, 186, 209, 236, 250, 326, 333, 347, 355, 429, 450, 459, 536, 559, 560], "variabl": [7, 8, 9, 18, 19, 20, 21, 33, 34, 35, 36, 37, 41, 42, 43, 47, 49, 65, 67, 71, 78, 84, 86, 87, 102, 127, 129, 131, 135, 143, 145, 148, 149, 163, 172, 176, 182, 183, 184, 185, 186, 194, 198, 204, 205, 208, 209, 216, 
222, 228, 229, 235, 236, 244, 250, 256, 257, 295, 300, 312, 314, 332, 343, 347, 353, 359, 361, 362, 377, 400, 403, 415, 417, 435, 444, 446, 450, 457, 463, 465, 466, 481, 506, 508, 510, 522, 524, 542], "brief": [7, 47, 48], "descript": [7, 10, 12, 41, 47, 48, 50, 55, 61, 62, 64, 65, 66, 67, 68, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 168, 170, 171, 172, 174, 175, 176, 177, 178, 180, 181, 182, 183, 184, 185, 186, 187, 189, 191, 192, 193, 194, 196, 197, 198, 199, 200, 202, 203, 204, 205, 206, 207, 208, 209, 210, 212, 214, 215, 216, 217, 219, 220, 221, 222, 223, 224, 226, 227, 228, 229, 230, 231, 232, 234, 235, 236, 237, 239, 240, 242, 243, 244, 245, 247, 248, 249, 250, 251, 252, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 338, 339, 341, 342, 343, 344, 346, 347, 349, 350, 352, 353, 354, 355, 356, 357, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 440, 441, 443, 444, 445, 446, 447, 449, 450, 452, 453, 455, 456, 457, 458, 459, 460, 461, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "test_prepare_watchdog": 7, "watchdog": 7, "test_prepare_shar": 7, "nf": [7, 8, 18, 19, 20, 22, 33, 35, 36, 37, 41, 42, 47, 71, 78, 80, 88, 118, 176, 184, 186, 198, 206, 209, 222, 232, 236, 250, 258, 288, 298, 333, 347, 353, 355, 363, 393, 450, 457, 459, 467, 497], "samba": [7, 8, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 78, 127, 184, 206, 232, 295, 298, 353, 400, 457, 506], "server": [7, 8, 10, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 51, 53, 56, 78, 110, 114, 184, 206, 232, 280, 284, 298, 353, 385, 389, 457, 489, 493], "test_splat_skip": 7, "splat": [7, 170, 192], "test_splat_opt": 7, "line": [7, 10, 12, 18, 19, 20, 22, 25, 27, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 62, 65, 73, 78, 79, 80, 87, 92, 94, 100, 104, 127, 129, 130, 136, 145, 163, 164, 168, 174, 183, 184, 186, 189, 196, 205, 206, 207, 209, 212, 220, 229, 231, 232, 234, 236, 240, 248, 257, 262, 264, 270, 274, 295, 298, 299, 305, 314, 332, 333, 339, 349, 353, 354, 355, 362, 367, 369, 
375, 379, 400, 402, 408, 417, 435, 436, 441, 444, 452, 457, 458, 459, 466, 471, 473, 479, 483, 506, 508, 509, 515, 524, 542, 543], "test_ztest_skip": 7, "ztest": [7, 8, 63, 64, 66, 169, 170, 190, 191, 192, 213, 214, 215, 241, 242, 243, 340, 341, 342, 442, 443, 445], "test_ztest_timeout": 7, "length": [7, 10, 47, 48, 53, 71, 78, 80, 90, 101, 120, 127, 176, 184, 198, 206, 222, 232, 250, 260, 271, 290, 295, 298, 333, 347, 353, 355, 365, 376, 395, 400, 450, 457, 459, 469, 480, 499, 506], "test_ztest_dir": 7, "test_ztest_opt": 7, "pass": [7, 8, 9, 12, 46, 47, 48, 65, 66, 67, 71, 78, 87, 92, 104, 129, 145, 163, 171, 176, 183, 193, 198, 205, 209, 222, 229, 231, 232, 236, 250, 257, 262, 274, 298, 314, 342, 343, 347, 353, 362, 367, 379, 417, 444, 445, 446, 450, 457, 466, 471, 483, 508, 524], "test_ztest_core_dir": 7, "core": [7, 8, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 47, 86, 136, 159, 163, 176, 182, 184, 186, 198, 204, 209, 222, 228, 236, 250, 256, 305, 328, 332, 361, 408, 431, 435, 465, 515, 538, 542], "dump": [7, 11, 35, 37, 47, 67, 78, 79, 81, 86, 136, 163, 165, 166, 182, 184, 186, 187, 199, 204, 206, 209, 210, 216, 223, 228, 232, 236, 237, 244, 251, 256, 298, 305, 332, 334, 335, 336, 343, 353, 354, 356, 361, 408, 435, 437, 438, 446, 457, 458, 460, 465, 515, 542, 544, 545], "test_zimport_skip": 7, "zimport": 7, "test_zimport_dir": 7, "test_zimport_vers": 7, "test_zimport_pool": 7, "test_zimport_opt": 7, "test_xfstests_skip": 7, "xfstest": 7, "test_xfstests_url": 7, "url": [7, 32, 74, 78, 102, 350, 353, 377, 453, 457, 481], "download": [7, 9, 10, 12, 14, 16, 25, 28, 31, 35, 37, 41, 42, 43, 48, 56], "test_xfstests_v": 7, "tarbal": [7, 53], "test_xfstests_pool": 7, "test_xfstests_f": 7, "test_xfstests_vdev": 7, "test_xfstests_opt": 7, "test_zfstests_skip": 7, "test_zfstests_dir": 7, "loopback": [7, 8, 35, 37, 127, 400, 506], "test_zfstests_disk": 7, "delimit": [7, 96, 97, 105, 106, 111, 124, 171, 184, 193, 206, 232, 266, 267, 276, 281, 293, 371, 372, 380, 381, 386, 398, 475, 476, 484, 485, 490, 503], "test_zfstests_disks": 7, "test_zfstests_it": 7, "runner": [7, 63, 442], "test_zfstests_opt": 7, "test_zfstests_runfil": 7, "runfil": [7, 65, 444], "test_zfstests_tag": 7, "test_zfsstress_skip": 7, "zfsstress": 7, "test_zfsstress_url": 7, "test_zfsstress_v": 7, "test_zfsstress_runtim": 7, "durat": [7, 8, 47, 71, 103, 121, 184, 198, 206, 222, 232, 250, 273, 291, 347, 378, 396, 450, 482, 500], "runstress": 7, "test_zfsstress_pool": 7, "test_zfsstress_f": 7, "test_zfsstress_fsopt": 7, "test_zfsstress_vdev": 7, "test_zfsstress_opt": 7, "offici": [8, 9, 17, 26, 32, 35, 37, 40, 41, 42, 53, 56], "maintain": [8, 12, 13, 26, 46, 47, 48, 53, 57, 78, 79, 81, 87, 93, 112, 117, 127, 133, 177, 183, 184, 199, 205, 206, 223, 229, 232, 239, 251, 257, 295, 298, 353, 354, 356, 362, 400, 457, 458, 460, 466, 472, 491, 496, 506], "organ": [8, 44, 53, 79, 80, 177, 186, 199, 209, 223, 236, 251, 333, 354, 355, 458, 459], "primari": [8, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 41, 42, 47, 48, 78, 151, 159, 163, 180, 184, 202, 206, 209, 226, 232, 236, 254, 298, 320, 328, 332, 353, 423, 431, 435, 457, 530, 538, 542], "git": [8, 12, 13, 17, 18, 19, 20, 22, 27, 28, 29, 33, 34, 35, 36, 37, 41, 42, 43, 57, 58, 59], "project": [8, 10, 12, 40, 41, 42, 44, 48, 58, 59, 71, 78, 79, 83, 91, 92, 93, 96, 106, 107, 112, 117, 124, 127, 184, 206, 223, 232, 250, 251, 253, 266, 276, 293, 295, 298, 347, 353, 354, 358, 371, 381, 398, 400, 450, 457, 458, 462, 470, 471, 472, 475, 485, 486, 491, 496, 503, 506], "main": [8, 10, 18, 
19, 20, 22, 23, 28, 33, 34, 36, 38, 40, 41, 42, 47, 48, 53, 80, 104, 132, 145, 163, 186, 209, 231, 236, 274, 332, 333, 355, 379, 435, 459, 483, 511, 524, 542, 561], "compon": [8, 67, 73, 76, 77, 78, 80, 81, 85, 87, 110, 114, 127, 132, 145, 147, 157, 158, 172, 174, 181, 183, 184, 186, 194, 196, 203, 205, 206, 209, 216, 220, 227, 229, 232, 236, 244, 248, 255, 257, 280, 284, 297, 298, 301, 314, 316, 326, 327, 333, 343, 349, 352, 353, 355, 360, 362, 385, 389, 404, 417, 419, 429, 430, 446, 452, 455, 456, 457, 459, 460, 464, 466, 489, 493, 506, 511, 524, 526, 536, 537], "upstream": [8, 10, 11, 12, 18, 19, 20, 32, 34, 35, 36, 41, 42], "code": [8, 10, 11, 12, 13, 14, 16, 21, 25, 31, 44, 46, 47, 48, 55, 57, 62, 71, 78, 82, 87, 104, 127, 168, 178, 183, 184, 185, 189, 200, 205, 206, 208, 212, 222, 224, 229, 231, 232, 235, 240, 250, 252, 257, 274, 298, 300, 339, 347, 353, 357, 362, 379, 400, 441, 450, 457, 461, 466, 483, 506, 557], "extend": [8, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 47, 61, 71, 78, 79, 180, 184, 198, 202, 206, 222, 226, 232, 239, 250, 254, 298, 338, 347, 353, 440, 450, 457, 458], "vast": [8, 70, 449], "self": [8, 19, 20, 22, 33, 41, 42, 46], "modif": [8, 12, 48, 77, 78, 87, 88, 117, 118, 183, 184, 205, 206, 229, 232, 257, 258, 287, 288, 298, 353, 362, 363, 392, 393, 456, 457, 466, 467, 496, 497], "thin": [8, 78, 82, 184, 206, 232, 298, 353, 357, 457, 461], "shim": [8, 18, 19, 20, 22, 25, 31, 33, 34, 36, 48], "respons": [8, 12, 46, 47, 48, 50, 70, 71, 77, 87, 176, 183, 184, 198, 205, 206, 219, 222, 229, 232, 247, 250, 257, 297, 346, 347, 352, 362, 449, 450, 456, 466, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "fundament": [8, 11], "It": [8, 9, 10, 11, 12, 14, 18, 19, 20, 21, 22, 26, 33, 34, 35, 36, 37, 39, 41, 42, 43, 46, 47, 48, 53, 54, 62, 64, 70, 71, 73, 74, 77, 78, 79, 80, 81, 82, 86, 108, 109, 110, 114, 127, 129, 130, 143, 159, 163, 165, 166, 168, 174, 176, 177, 178, 180, 182, 184, 186, 189, 196, 198, 199, 200, 202, 204, 206, 207, 209, 212, 219, 220, 222, 223, 224, 226, 228, 230, 232, 234, 236, 240, 247, 248, 250, 251, 252, 254, 256, 272, 275, 280, 284, 295, 297, 298, 299, 312, 328, 332, 333, 334, 339, 341, 346, 347, 349, 350, 352, 353, 354, 355, 356, 357, 361, 385, 389, 400, 402, 415, 431, 435, 441, 443, 449, 450, 452, 453, 456, 457, 458, 459, 460, 461, 465, 487, 488, 489, 493, 506, 508, 509, 522, 538, 542, 544, 545, 552, 554, 555], "platform": [8, 9, 12, 22, 46, 48, 53, 55, 57, 59, 78, 80, 99, 119, 140, 163, 172, 184, 186, 206, 209, 232, 236, 298, 309, 332, 333, 353, 355, 374, 394, 412, 435, 457, 459, 478, 498, 519, 542], "merg": [8, 11, 47, 65, 139, 145, 221, 236, 249, 314, 411, 417, 444, 518, 524], "first": [8, 9, 12, 13, 21, 23, 25, 26, 32, 38, 39, 43, 46, 47, 48, 49, 50, 53, 56, 62, 71, 73, 74, 78, 79, 80, 85, 93, 94, 96, 104, 106, 108, 109, 110, 114, 124, 139, 143, 145, 165, 166, 168, 171, 174, 176, 177, 181, 184, 186, 189, 193, 196, 198, 199, 203, 206, 209, 212, 219, 220, 221, 222, 223, 227, 231, 232, 236, 240, 247, 248, 249, 250, 251, 255, 263, 264, 266, 274, 276, 278, 279, 280, 284, 293, 298, 312, 314, 333, 339, 346, 347, 349, 350, 353, 354, 355, 360, 368, 369, 371, 379, 381, 383, 384, 385, 389, 398, 411, 415, 417, 441, 449, 450, 452, 453, 457, 458, 459, 464, 472, 473, 475, 483, 485, 487, 488, 489, 493, 503, 518, 522, 524, 544, 545, 557, 561], "thing": [8, 9, 10, 12, 18, 19, 20, 34, 35, 36, 37, 41, 42, 46, 48, 70, 129, 449, 508], "ll": [8, 9, 10, 11, 12, 48, 53, 62, 86, 168, 189, 204, 212, 228, 240, 256, 339, 361, 
441, 465], "prepar": [8, 9, 12, 13, 47, 53, 129, 508], "environ": [8, 9, 14, 16, 23, 25, 27, 28, 31, 35, 37, 39, 46, 47, 53, 62, 65, 67, 74, 76, 78, 81, 84, 86, 87, 102, 127, 129, 131, 135, 143, 145, 148, 149, 163, 171, 172, 182, 183, 184, 185, 186, 193, 194, 204, 205, 206, 208, 209, 216, 228, 229, 232, 235, 236, 244, 256, 257, 295, 298, 300, 312, 314, 332, 334, 343, 350, 353, 356, 359, 361, 362, 377, 400, 403, 415, 417, 435, 441, 444, 446, 453, 455, 457, 460, 463, 465, 466, 481, 506, 508, 510, 522, 524, 542], "chain": 8, "header": [8, 9, 18, 19, 20, 22, 23, 25, 26, 33, 34, 35, 36, 37, 41, 42, 46, 47, 61, 71, 80, 86, 94, 95, 96, 97, 98, 100, 106, 111, 115, 124, 139, 141, 145, 146, 147, 156, 162, 165, 166, 168, 176, 182, 184, 186, 187, 189, 198, 204, 206, 209, 210, 212, 222, 228, 232, 236, 237, 239, 240, 250, 256, 264, 265, 266, 267, 268, 270, 276, 281, 285, 293, 308, 310, 314, 315, 316, 325, 331, 333, 335, 336, 338, 339, 347, 355, 361, 369, 370, 371, 372, 373, 375, 381, 386, 390, 398, 411, 413, 417, 418, 419, 428, 434, 437, 438, 440, 450, 459, 465, 473, 474, 475, 476, 477, 479, 485, 490, 494, 503, 518, 520, 524, 525, 526, 535, 541, 544, 545], "packag": [8, 13, 15, 16, 18, 19, 20, 22, 23, 25, 26, 27, 31, 32, 33, 34, 35, 36, 37, 38, 40, 41, 42, 43, 53, 58, 59, 76, 78, 81, 108, 109, 110, 114, 180, 184, 202, 206, 226, 232, 254, 278, 279, 280, 284, 298, 353, 383, 384, 385, 389, 455, 457, 460, 487, 488, 489, 493, 557], "aren": [8, 9, 47, 71, 110, 114, 176, 198, 222, 250, 280, 284, 347, 385, 389, 450, 489, 493], "won": [8, 9, 34, 35, 47, 48, 71, 80, 198, 222, 236, 250, 333, 347, 355, 450, 459], "properli": [8, 9, 11, 14, 16, 25, 31, 46, 47, 48, 49, 53, 64, 71, 104, 176, 177, 191, 198, 199, 214, 222, 223, 231, 242, 250, 251, 274, 341, 347, 379, 443, 450, 483], "latest": [8, 9, 14, 16, 25, 26, 28, 31, 32, 46, 47, 53, 71, 78, 158, 186, 206, 209, 232, 236, 298, 327, 347, 353, 430, 450, 457, 537], "rhel": [8, 13, 25, 31, 39, 43, 58, 59], "cento": [8, 13, 32], "sudo": [8, 9, 17, 18, 19, 20, 22, 27, 33, 34, 35, 36, 37, 41, 42, 65, 444], "yum": [8, 9, 32], "epel": [8, 9, 31, 32], "gcc": [8, 9], "autoconf": [8, 9, 27], "automak": [8, 9, 27], "libtool": [8, 9], "rpm": [8, 9, 25, 26, 31, 32, 41, 42], "libtirpc": [8, 9], "devel": [8, 9, 25, 26, 27, 32, 55], "libblkid": [8, 9, 143, 209, 236, 312, 415, 522], "libuuid": [8, 9], "libudev": [8, 9], "openssl": [8, 9], "zlib": [8, 9], "libaio": [8, 9], "libattr": [8, 9], "elfutil": [8, 9], "libelf": [8, 9], "unam": [8, 9, 28, 41, 42], "python": [8, 9, 17, 27, 239], "python2": [8, 9], "setuptool": [8, 9], "cffi": [8, 9], "libffi": [8, 9], "ncompress": [8, 9], "libcurl": [8, 78, 353, 457], "enablerepo": [8, 9], "dkm": [8, 18, 19, 20, 22, 23, 25, 39, 41], "dnf": [8, 9, 25, 26, 31, 32], "skip": [8, 9, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 53, 65, 71, 86, 102, 108, 109, 110, 114, 176, 198, 204, 222, 228, 232, 250, 256, 278, 279, 347, 361, 377, 383, 384, 385, 389, 444, 450, 465, 481, 487, 488, 489, 493], "broken": [8, 9, 47, 62, 145, 168, 189, 209, 212, 222, 236, 240, 250, 314, 339, 417, 441, 524, 561], "python3": [8, 9, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "powertool": [8, 9], "debian": [8, 13, 39, 41, 42, 43, 48, 53, 58, 59, 61, 62, 64, 65, 66, 67, 68, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 
137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 231, 245, 248, 255, 256, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 270, 271, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 338, 339, 341, 342, 343, 344, 346, 347, 349, 350, 352, 353, 354, 355, 356, 357, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 440, 441, 443, 444, 445, 446, 447, 449, 450, 452, 453, 455, 456, 457, 458, 459, 460, 461, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545], "ubuntu": [8, 13, 14, 16, 18, 19, 20, 25, 31, 39, 41, 42, 43, 48, 53, 58, 59], "apt": [8, 9, 18, 19, 20, 22, 23, 33, 34, 35, 36, 37, 38], "gawk": [8, 9], "alien": [8, 9], "fakeroot": [8, 9], "uuid": [8, 9, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 53], "libssl": [8, 9], "zlib1g": [8, 9], "libattr1": [8, 9], "libcurl4": 8, "debhelp": [8, 9], "dh": [8, 9, 61, 239, 338, 440], "po": [8, 9], "debconf": [8, 9], "sphinx": [8, 9], "pkg": [8, 16, 18, 19, 20, 22, 27, 33, 41, 42], "autotool": [8, 9, 27], "gmake": [8, 27], "sysctl": [8, 27], "often": [8, 12, 23, 46, 47, 48, 53, 71, 78, 80, 81, 186, 209, 236, 250, 333, 334, 347, 355, 356, 450, 457, 459, 460], "custom": [8, 13, 14, 16, 18, 19, 20, 22, 25, 28, 31, 32, 33, 34, 36, 37, 41, 42, 53, 58, 59, 73, 87, 130, 143, 174, 196, 205, 207, 209, 220, 229, 234, 236, 248, 257, 299, 312, 349, 362, 402, 415, 452, 466, 509, 522], "best": [8, 9, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 71, 78, 79, 108, 109, 176, 184, 198, 206, 219, 222, 232, 250, 278, 279, 298, 347, 353, 383, 384, 450, 457, 487, 488], "systemd": [8, 9, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 43, 74, 77, 102, 155, 160, 206, 230, 232, 272, 297, 350, 352, 377, 427, 453, 456, 481, 534, 539], "dracut": [8, 25, 31, 41, 42, 75, 351, 454], "udev": [8, 14, 16, 25, 31, 41, 42, 47, 53, 68, 71, 73, 85, 174, 181, 196, 203, 217, 220, 227, 245, 248, 255, 344, 347, 349, 360, 447, 450, 452, 464], "rapidli": [8, 47, 49, 71, 176, 198, 219, 222, 247, 250, 346, 347, 449, 450], "iter": [8, 12, 50, 71, 74, 78, 104, 110, 114, 176, 198, 222, 231, 232, 250, 274, 280, 284, 298, 347, 350, 353, 379, 385, 389, 450, 453, 457, 483, 489, 493], "patch": [8, 11, 13, 21, 34, 35, 46, 48, 58, 59], "work": [8, 10, 12, 18, 19, 20, 22, 27, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 53, 54, 59, 65, 67, 70, 71, 74, 78, 80, 108, 109, 128, 131, 135, 139, 145, 148, 149, 162, 164, 172, 175, 176, 184, 186, 194, 197, 198, 206, 209, 216, 221, 222, 232, 235, 236, 244, 249, 250, 278, 279, 296, 298, 300, 314, 
331, 333, 343, 347, 350, 353, 355, 383, 384, 401, 403, 411, 417, 434, 436, 444, 446, 449, 450, 453, 457, 459, 487, 488, 507, 510, 518, 524, 541, 543], "leverag": 8, "increment": [8, 12, 18, 19, 20, 22, 33, 34, 41, 42, 47, 53, 66, 71, 78, 79, 89, 104, 108, 109, 110, 114, 127, 170, 177, 184, 192, 198, 199, 206, 215, 222, 223, 231, 232, 243, 250, 251, 259, 274, 278, 279, 280, 284, 295, 298, 342, 347, 353, 354, 364, 379, 383, 384, 385, 389, 400, 445, 450, 457, 458, 468, 483, 487, 488, 489, 493, 506, 557], "unload": [8, 27, 47, 71, 78, 79, 83, 87, 88, 90, 101, 103, 118, 121, 127, 131, 176, 185, 198, 208, 222, 232, 235, 250, 253, 257, 258, 260, 271, 273, 288, 291, 295, 298, 300, 347, 353, 358, 362, 363, 365, 376, 378, 393, 396, 400, 403, 450, 457, 458, 462, 466, 467, 469, 480, 482, 497, 500, 506, 510], "suit": [8, 10, 47, 71, 78, 87, 90, 101, 120, 205, 222, 229, 232, 250, 257, 260, 271, 290, 298, 347, 353, 362, 365, 376, 395, 450, 457, 466, 469, 480, 499], "remaind": 8, "focus": [8, 53, 79, 458], "method": [8, 12, 25, 31, 39, 41, 42, 46, 47, 48, 53, 64, 71, 73, 74, 79, 81, 108, 109, 174, 186, 191, 196, 209, 214, 219, 220, 236, 242, 247, 248, 250, 251, 334, 341, 347, 349, 350, 354, 356, 443, 450, 452, 453, 458, 460, 487, 488], "branch": [8, 10, 12, 17, 18, 19, 20, 22, 27, 28, 29, 33, 34, 35, 36, 37, 40, 41, 42, 62, 168, 189, 212, 240, 339, 441], "seri": [8, 46, 47, 62, 65, 71, 168, 171, 176, 189, 193, 198, 212, 222, 240, 250, 339, 347, 441, 444, 450], "built": [8, 9, 12, 20, 25, 27, 32, 35, 37, 46, 71, 104, 184, 198, 222, 231, 250, 274, 347, 379, 450, 483, 553], "y": [8, 9, 18, 19, 20, 22, 25, 26, 31, 32, 33, 34, 35, 36, 37, 41, 43, 86, 145, 186, 209, 228, 236, 256, 314, 361, 417, 465, 524], "z": [8, 9, 46, 47, 57, 64, 67, 71, 76, 78, 79, 86, 87, 95, 98, 115, 133, 136, 162, 163, 171, 172, 183, 184, 186, 193, 194, 205, 206, 209, 216, 229, 232, 236, 244, 256, 257, 265, 268, 285, 298, 332, 343, 353, 361, 362, 370, 373, 390, 435, 443, 446, 455, 457, 458, 465, 466, 474, 477, 494, 515, 542, 555], "match": [8, 11, 18, 19, 20, 32, 33, 34, 36, 41, 46, 47, 48, 53, 54, 62, 71, 73, 74, 78, 79, 80, 87, 90, 101, 108, 109, 120, 168, 174, 183, 184, 186, 189, 196, 198, 199, 205, 206, 209, 212, 220, 222, 223, 229, 232, 236, 240, 248, 250, 251, 257, 260, 271, 278, 279, 290, 298, 333, 339, 347, 349, 350, 353, 354, 355, 362, 365, 376, 383, 384, 395, 441, 450, 452, 453, 457, 458, 459, 466, 469, 480, 487, 488, 499], "http": [8, 9, 10, 12, 16, 18, 19, 20, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 41, 42, 46, 48, 62, 74, 78, 102, 104, 165, 166, 168, 172, 189, 194, 212, 216, 231, 240, 244, 274, 339, 350, 353, 377, 379, 441, 453, 457, 481, 483, 544, 545, 548, 549, 550, 551, 552, 553, 554, 555, 557, 558, 559, 560, 561], "alwai": [8, 10, 12, 18, 19, 20, 22, 26, 32, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 54, 71, 78, 79, 80, 90, 101, 102, 104, 108, 109, 110, 114, 120, 135, 139, 145, 148, 149, 176, 178, 184, 198, 200, 206, 209, 221, 222, 224, 230, 231, 232, 236, 249, 250, 252, 260, 271, 272, 274, 278, 279, 280, 284, 290, 298, 314, 333, 347, 353, 355, 365, 376, 377, 379, 383, 384, 385, 389, 395, 411, 417, 450, 457, 458, 459, 469, 480, 481, 483, 487, 488, 489, 493, 499, 518, 524, 552], "topic": [8, 10, 39], "easi": [8, 12, 48, 53], "pull": [8, 12, 13, 17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42, 48], "request": [8, 9, 12, 13, 17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42, 46, 47, 53, 56, 70, 71, 77, 78, 79, 81, 87, 104, 110, 114, 131, 139, 143, 145, 151, 157, 161, 163, 184, 186, 198, 206, 208, 
209, 219, 222, 231, 232, 235, 236, 247, 250, 257, 274, 298, 300, 312, 314, 326, 332, 334, 346, 347, 353, 354, 356, 362, 379, 403, 411, 415, 417, 429, 433, 435, 449, 450, 456, 457, 458, 460, 466, 483, 489, 493, 510, 518, 522, 524, 530, 536, 540, 542, 548, 549, 550, 551, 553, 554, 555, 556, 557, 559, 560, 561], "latter": [8, 12, 32, 46, 48, 54, 71, 347, 450], "kept": [8, 12, 47, 78, 102, 184, 198, 206, 222, 230, 232, 250, 272, 298, 353, 377, 457, 481], "stabl": [8, 9, 12, 27, 46, 47, 50, 53, 71, 78, 80, 176, 184, 186, 198, 206, 209, 222, 232, 236, 250, 298, 333, 347, 353, 355, 450, 457, 459], "regress": [8, 12, 34, 35, 36, 37, 48, 67, 172, 194, 216, 244, 343, 446], "everi": [8, 12, 14, 16, 25, 31, 32, 46, 47, 48, 50, 53, 54, 71, 76, 78, 79, 80, 81, 86, 88, 102, 118, 125, 131, 139, 145, 147, 162, 164, 165, 166, 172, 175, 176, 177, 182, 184, 186, 194, 197, 198, 199, 204, 206, 208, 209, 216, 221, 222, 223, 228, 232, 235, 236, 244, 249, 250, 251, 256, 258, 288, 294, 298, 300, 314, 316, 331, 333, 334, 347, 353, 354, 355, 356, 361, 363, 377, 393, 399, 403, 411, 417, 419, 434, 436, 450, 455, 457, 458, 459, 460, 465, 467, 481, 497, 504, 510, 518, 524, 526, 541, 543, 544, 545, 561], "befor": [8, 12, 13, 14, 16, 18, 19, 20, 21, 22, 25, 26, 27, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 54, 65, 66, 67, 68, 70, 71, 73, 74, 77, 78, 79, 80, 81, 86, 90, 94, 96, 99, 101, 102, 103, 104, 106, 108, 109, 110, 114, 116, 119, 120, 121, 122, 123, 124, 126, 127, 129, 131, 133, 134, 135, 137, 140, 144, 145, 148, 149, 150, 151, 153, 155, 160, 171, 174, 176, 177, 184, 185, 186, 193, 196, 198, 199, 204, 206, 208, 209, 216, 217, 219, 220, 222, 223, 228, 230, 231, 232, 235, 236, 244, 245, 247, 248, 250, 251, 256, 260, 266, 269, 271, 272, 273, 274, 276, 278, 279, 280, 284, 289, 290, 291, 292, 293, 295, 297, 298, 300, 302, 303, 306, 309, 313, 314, 317, 318, 319, 320, 322, 324, 329, 333, 334, 342, 343, 344, 346, 347, 349, 350, 352, 353, 354, 355, 356, 361, 365, 371, 374, 376, 377, 378, 379, 381, 383, 384, 385, 389, 391, 394, 395, 396, 397, 398, 400, 403, 405, 406, 409, 412, 416, 417, 420, 421, 422, 423, 425, 427, 432, 444, 445, 446, 447, 449, 450, 452, 453, 456, 457, 458, 459, 460, 465, 469, 473, 475, 478, 480, 481, 482, 483, 485, 487, 488, 489, 493, 495, 498, 499, 500, 501, 502, 503, 505, 506, 508, 510, 512, 513, 516, 519, 523, 524, 527, 528, 529, 530, 532, 534, 539, 557, 561], "effort": [8, 47, 48, 57], "catch": [8, 35, 37, 47, 71, 176, 198, 222, 250, 347, 450], "defect": 8, "earli": [8, 16, 25, 31, 46, 47, 48, 71, 102, 230, 272, 377, 450, 481], "comfort": 8, "frequent": [8, 9, 47, 48], "rebas": [8, 10], "walk": [8, 47, 53, 70, 219, 247, 346, 449], "through": [8, 10, 12, 18, 19, 20, 22, 33, 34, 36, 40, 41, 42, 46, 47, 48, 50, 53, 71, 77, 78, 79, 86, 104, 108, 109, 110, 114, 127, 176, 177, 184, 193, 198, 199, 206, 222, 223, 230, 231, 232, 250, 251, 256, 272, 274, 278, 279, 280, 284, 295, 297, 298, 347, 352, 353, 354, 361, 379, 383, 384, 385, 389, 400, 450, 456, 457, 458, 465, 483, 487, 488, 489, 493, 506], "stock": [8, 27, 35, 37], "desir": [8, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 70, 71, 78, 90, 101, 120, 165, 166, 176, 198, 219, 222, 232, 247, 250, 260, 271, 290, 346, 347, 365, 376, 395, 449, 450, 457, 469, 480, 499, 544, 545], "fashion": [8, 61, 80, 186, 209, 236, 239, 333, 338, 355, 440, 459], "cd": [8, 9, 10, 12, 16, 17, 25, 27, 29, 31, 35, 37], "checkout": [8, 10, 12], "autogen": [8, 9, 10, 12, 27], "j": [8, 12, 27, 87, 104, 171, 193, 231, 232, 274, 362, 379, 466, 483], "nproc": 
[8, 12], "path": [8, 9, 12, 17, 18, 19, 20, 22, 27, 33, 34, 35, 36, 37, 41, 42, 46, 47, 53, 65, 67, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 85, 86, 87, 88, 92, 94, 96, 102, 103, 104, 106, 108, 109, 116, 118, 121, 124, 127, 128, 129, 131, 132, 136, 139, 143, 145, 147, 153, 157, 158, 163, 170, 171, 174, 175, 180, 181, 182, 183, 184, 185, 186, 192, 193, 196, 197, 202, 203, 204, 205, 206, 208, 209, 215, 219, 220, 221, 226, 227, 228, 229, 230, 231, 232, 235, 236, 243, 247, 248, 249, 250, 254, 255, 256, 257, 258, 262, 264, 266, 272, 273, 274, 276, 278, 279, 286, 288, 291, 293, 295, 296, 297, 298, 300, 301, 305, 312, 314, 316, 322, 326, 327, 332, 333, 334, 343, 346, 347, 349, 350, 352, 353, 354, 355, 356, 360, 361, 362, 363, 367, 369, 371, 377, 378, 379, 381, 383, 384, 391, 393, 396, 398, 400, 401, 403, 404, 408, 411, 415, 417, 419, 425, 429, 430, 435, 444, 446, 449, 450, 452, 453, 455, 456, 457, 458, 459, 460, 464, 465, 466, 467, 471, 473, 475, 481, 482, 483, 485, 487, 488, 495, 497, 500, 503, 506, 507, 508, 510, 511, 515, 518, 522, 524, 526, 532, 536, 537, 542, 555, 557], "obj": [8, 9], "locat": [8, 9, 12, 32, 47, 48, 53, 65, 67, 70, 78, 79, 80, 81, 85, 87, 90, 91, 101, 108, 109, 112, 120, 139, 145, 172, 175, 177, 181, 183, 184, 186, 194, 197, 199, 203, 205, 206, 209, 216, 219, 221, 223, 227, 229, 232, 236, 244, 247, 249, 251, 255, 257, 260, 261, 271, 278, 279, 282, 290, 298, 314, 333, 334, 343, 346, 353, 354, 355, 356, 360, 362, 365, 366, 376, 383, 384, 387, 395, 411, 417, 444, 446, 449, 457, 458, 459, 460, 464, 466, 469, 470, 480, 487, 488, 491, 499, 518, 524, 547, 549, 552, 554], "debug": [8, 11, 12, 18, 19, 20, 21, 22, 33, 34, 35, 36, 37, 41, 42, 64, 66, 67, 70, 71, 74, 86, 102, 104, 131, 170, 172, 176, 182, 185, 191, 192, 194, 198, 204, 208, 214, 215, 216, 219, 222, 228, 231, 235, 242, 243, 244, 247, 250, 256, 274, 300, 341, 342, 343, 346, 347, 350, 361, 377, 379, 403, 443, 445, 446, 449, 450, 453, 465, 481, 483, 510], "assert": [8, 47, 70, 86, 104, 182, 204, 219, 228, 231, 247, 256, 274, 346, 361, 379, 449, 465, 483], "deb": [8, 9, 18, 19, 20, 22, 23, 33, 34, 36, 38], "convert": [8, 9, 18, 19, 47, 71, 87, 104, 108, 109, 128, 163, 165, 166, 183, 205, 229, 231, 232, 257, 274, 278, 279, 296, 335, 362, 379, 383, 384, 401, 437, 438, 450, 466, 483, 487, 488, 507, 542, 544, 545], "nativ": [8, 9, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 48, 53, 76, 78, 79, 81, 92, 95, 98, 100, 102, 108, 109, 115, 127, 163, 184, 199, 206, 223, 232, 251, 262, 265, 268, 270, 278, 279, 285, 295, 298, 332, 353, 354, 367, 370, 373, 375, 377, 383, 384, 390, 400, 435, 455, 457, 458, 460, 471, 474, 477, 479, 481, 487, 488, 494, 506, 542], "overrid": [8, 9, 13, 47, 53, 67, 71, 78, 108, 109, 163, 172, 176, 182, 184, 194, 198, 206, 209, 216, 222, 232, 236, 244, 250, 278, 279, 298, 332, 343, 347, 353, 383, 384, 435, 446, 450, 457, 487, 488, 542], "debain": 8, "kver": [8, 9, 25, 31, 41, 42], "ksrc": [8, 9], "kobj": [8, 9], "attent": [8, 18, 19, 20, 22, 48], "On": [8, 10, 16, 18, 19, 20, 21, 22, 25, 33, 34, 35, 36, 37, 38, 41, 42, 46, 47, 48, 53, 71, 78, 81, 155, 160, 175, 178, 184, 197, 198, 200, 219, 222, 224, 247, 250, 252, 298, 334, 347, 353, 356, 427, 450, 457, 460, 534, 539, 555], "extra": [8, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 78, 90, 101, 104, 120, 147, 163, 176, 184, 186, 198, 206, 209, 222, 231, 232, 236, 250, 260, 271, 274, 290, 298, 332, 347, 353, 365, 376, 379, 395, 435, 450, 457, 469, 480, 483, 499, 526, 542], "standard": [8, 9, 10, 12, 33, 34, 35, 36, 37, 46, 47, 
48, 53, 54, 61, 65, 70, 74, 76, 78, 81, 87, 104, 108, 109, 110, 114, 127, 142, 145, 147, 158, 162, 164, 165, 166, 184, 186, 206, 209, 219, 231, 232, 236, 239, 247, 274, 278, 279, 280, 284, 295, 298, 311, 314, 316, 327, 331, 335, 338, 346, 350, 353, 362, 379, 383, 384, 385, 389, 400, 414, 417, 419, 430, 434, 436, 437, 438, 440, 444, 449, 453, 455, 457, 460, 466, 483, 487, 488, 489, 493, 506, 521, 524, 526, 537, 541, 543, 544, 545], "depmod": 8, "search": [8, 43, 47, 48, 53, 65, 66, 71, 74, 86, 143, 145, 163, 170, 182, 186, 192, 198, 204, 209, 215, 222, 228, 236, 243, 250, 256, 312, 314, 332, 342, 347, 350, 361, 415, 417, 435, 444, 445, 450, 453, 465, 522, 524, 542, 547, 549, 552], "edit": [8, 13, 14, 16, 18, 19, 20, 22, 25, 27, 28, 29, 31, 33, 34, 35, 36, 37, 58, 59, 65, 76, 77, 78, 92, 95, 98, 108, 109, 115, 127, 184, 206, 232, 262, 265, 268, 278, 279, 285, 295, 297, 298, 352, 353, 367, 370, 373, 383, 384, 390, 400, 444, 455, 456, 457, 471, 474, 477, 487, 488, 494, 506], "conf": [8, 16, 18, 19, 20, 22, 25, 26, 27, 28, 31, 32, 33, 34, 36, 41, 42, 47, 48, 57, 72, 74, 81, 85, 116, 127, 173, 181, 184, 195, 203, 206, 209, 218, 227, 232, 236, 246, 255, 286, 295, 334, 348, 350, 356, 360, 391, 400, 451, 453, 460, 464, 495, 506], "ldconfig": 8, "uninstal": [8, 32], "wish": [8, 10, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 41, 42, 46, 47, 71, 165, 166, 176, 198, 222, 250, 347, 450, 544, 545], "zt": [8, 47], "ksh": 8, "few": [8, 10, 35, 37, 45, 46, 47, 48, 53, 70, 71, 78, 81, 90, 101, 120, 143, 176, 184, 186, 198, 206, 209, 219, 222, 232, 236, 247, 250, 260, 271, 290, 298, 312, 334, 346, 347, 353, 356, 365, 376, 395, 415, 449, 450, 457, 460, 469, 480, 499, 522], "bc": 8, "bzip2": 8, "fio": [8, 27], "acl": [8, 11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 78, 90, 101, 120, 127, 184, 206, 232, 260, 271, 290, 295, 298, 353, 365, 376, 395, 400, 457, 469, 480, 499, 506], "sysstat": 8, "mdadm": [8, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "lsscsi": 8, "attr": [8, 127, 206, 232, 295, 400, 506], "rng": 8, "pax": 8, "dbench": 8, "selinux": [8, 25, 31, 47, 78, 127, 180, 184, 202, 206, 226, 232, 254, 295, 298, 353, 400, 457, 506], "quota": [8, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 78, 81, 88, 95, 96, 98, 99, 104, 105, 106, 115, 118, 119, 122, 124, 126, 127, 159, 163, 176, 184, 198, 206, 209, 222, 231, 232, 236, 250, 258, 266, 269, 274, 275, 276, 288, 289, 293, 295, 298, 328, 332, 334, 347, 353, 356, 363, 371, 374, 379, 380, 381, 393, 394, 398, 400, 431, 435, 450, 457, 460, 467, 474, 475, 477, 478, 483, 484, 485, 494, 497, 498, 501, 503, 505, 506, 538, 542], "common": [8, 18, 19, 20, 22, 33, 41, 42, 44, 47, 53, 62, 71, 77, 79, 87, 168, 177, 183, 184, 189, 199, 205, 206, 212, 222, 223, 229, 232, 240, 250, 251, 257, 297, 339, 347, 352, 354, 362, 441, 450, 456, 458, 466], "base64": [8, 27], "bash": [8, 16, 18, 19, 20, 22, 25, 27, 31, 33, 34, 35, 36, 37, 41, 42], "checkbash": [8, 27], "h": [8, 18, 19, 20, 22, 27, 33, 34, 35, 36, 37, 41, 42, 43, 61, 62, 64, 65, 67, 84, 85, 86, 87, 94, 95, 96, 97, 98, 100, 106, 108, 109, 110, 111, 114, 115, 124, 127, 130, 131, 139, 141, 145, 147, 156, 162, 164, 168, 171, 180, 181, 182, 183, 184, 185, 186, 189, 191, 193, 202, 203, 204, 205, 206, 208, 209, 212, 214, 226, 227, 228, 229, 230, 232, 235, 236, 239, 240, 242, 254, 255, 256, 257, 264, 265, 266, 267, 268, 270, 272, 276, 278, 279, 280, 281, 284, 285, 293, 295, 299, 300, 308, 310, 314, 316, 325, 331, 338, 339, 341, 343, 359, 360, 361, 362, 369, 370, 371, 372, 373, 375, 381, 383, 384, 385, 386, 389, 390, 398, 
400, 402, 403, 411, 413, 417, 419, 428, 434, 436, 440, 441, 443, 444, 446, 463, 464, 465, 466, 473, 474, 475, 476, 477, 479, 485, 487, 488, 489, 490, 493, 494, 503, 506, 509, 510, 518, 520, 524, 526, 535, 541, 543], "shellcheck": [8, 10, 14, 16, 25, 27, 28, 31, 43], "ksh93": [8, 27], "pamtest": [8, 27], "flake8": [8, 10, 27], "helper": [8, 82, 84, 85, 178, 180, 181, 200, 202, 203, 224, 226, 227, 252, 254, 255, 357, 359, 360, 461, 463, 464], "design": [8, 12, 43, 46, 47, 48, 53, 62, 70, 77, 78, 80, 110, 114, 127, 168, 184, 186, 189, 206, 209, 212, 219, 232, 236, 240, 247, 280, 284, 295, 297, 298, 333, 339, 346, 352, 353, 355, 385, 389, 400, 441, 449, 456, 457, 459, 489, 493, 506], "aid": [8, 70, 219, 247, 346, 449], "certain": [8, 22, 23, 33, 46, 47, 48, 71, 78, 80, 86, 87, 129, 182, 183, 184, 186, 204, 205, 206, 209, 222, 228, 229, 232, 236, 250, 256, 257, 298, 333, 347, 353, 355, 361, 362, 450, 457, 459, 465, 466, 508, 558], "e": [8, 16, 17, 18, 19, 20, 22, 23, 25, 26, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 53, 64, 67, 70, 71, 78, 79, 80, 81, 86, 88, 90, 95, 98, 101, 103, 104, 108, 109, 110, 114, 115, 118, 120, 121, 131, 133, 136, 148, 149, 155, 164, 165, 166, 168, 171, 172, 175, 176, 177, 181, 182, 184, 185, 186, 189, 193, 194, 197, 198, 199, 203, 204, 206, 208, 209, 212, 216, 219, 221, 222, 223, 227, 228, 230, 231, 232, 235, 236, 240, 244, 247, 249, 250, 251, 256, 258, 260, 265, 268, 271, 272, 273, 274, 278, 279, 280, 284, 285, 288, 290, 291, 298, 300, 317, 318, 333, 334, 339, 341, 343, 346, 347, 353, 354, 355, 356, 361, 363, 365, 370, 373, 376, 378, 379, 383, 384, 385, 389, 390, 393, 395, 396, 403, 408, 420, 421, 436, 443, 446, 449, 450, 457, 458, 459, 460, 465, 467, 469, 474, 477, 480, 482, 483, 487, 488, 489, 493, 494, 497, 499, 500, 510, 515, 527, 528, 534, 543, 544, 545], "zvol": [8, 18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 57, 68, 71, 78, 79, 80, 86, 92, 104, 108, 109, 127, 176, 177, 180, 182, 184, 198, 199, 202, 204, 206, 217, 222, 223, 226, 228, 231, 232, 245, 250, 251, 254, 256, 260, 262, 271, 274, 278, 279, 290, 295, 298, 344, 347, 353, 354, 361, 367, 379, 383, 384, 400, 447, 450, 457, 458, 459, 465, 471, 483, 487, 488, 506], "symlink": [8, 41, 42, 53, 68, 73, 87, 183, 196, 205, 217, 220, 229, 245, 248, 257, 344, 349, 362, 447, 452, 466], "link": [8, 12, 18, 19, 20, 21, 22, 23, 32, 33, 34, 35, 36, 37, 38, 39, 53, 68, 71, 73, 85, 94, 108, 109, 127, 132, 145, 147, 157, 158, 163, 174, 181, 184, 186, 196, 203, 206, 209, 217, 220, 227, 232, 236, 245, 248, 255, 264, 278, 279, 295, 301, 314, 316, 326, 327, 332, 344, 347, 349, 360, 369, 383, 384, 400, 404, 417, 419, 429, 430, 435, 447, 450, 452, 464, 473, 487, 488, 506, 511, 524, 526, 536, 537, 542], "place": [8, 46, 47, 48, 54, 70, 71, 78, 80, 81, 86, 90, 101, 104, 108, 109, 120, 130, 132, 143, 145, 147, 155, 157, 158, 161, 186, 204, 207, 209, 219, 222, 228, 231, 232, 234, 236, 247, 250, 256, 260, 271, 274, 290, 298, 299, 301, 312, 314, 316, 324, 326, 327, 333, 334, 346, 347, 353, 355, 356, 361, 365, 376, 379, 395, 402, 404, 415, 417, 419, 427, 429, 430, 433, 449, 450, 457, 459, 460, 465, 469, 480, 483, 487, 488, 499, 509, 511, 522, 524, 526, 534, 536, 537, 540, 548, 555], "successfulli": [8, 25, 43, 47, 53, 71, 80, 91, 92, 104, 184, 198, 206, 222, 231, 232, 236, 250, 261, 262, 274, 333, 347, 355, 366, 367, 379, 450, 459, 470, 471, 483, 557], "remov": [8, 11, 14, 16, 18, 19, 20, 22, 25, 26, 27, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 48, 53, 67, 70, 71, 76, 77, 78, 79, 80, 81, 83, 87, 88, 90, 94, 97, 101, 108, 
109, 110, 111, 114, 118, 120, 127, 129, 132, 134, 138, 139, 143, 145, 146, 147, 148, 149, 157, 158, 162, 163, 165, 166, 172, 175, 176, 177, 184, 186, 194, 197, 198, 199, 206, 209, 216, 219, 221, 222, 223, 232, 236, 244, 249, 250, 251, 253, 258, 260, 264, 267, 271, 278, 279, 280, 281, 284, 288, 290, 295, 297, 298, 301, 303, 307, 312, 314, 315, 316, 317, 318, 326, 327, 331, 332, 333, 334, 335, 343, 347, 352, 353, 354, 355, 356, 358, 363, 365, 369, 372, 376, 383, 384, 385, 386, 389, 393, 395, 400, 404, 406, 410, 411, 415, 417, 418, 419, 420, 421, 429, 430, 434, 435, 437, 438, 446, 449, 450, 455, 456, 457, 458, 459, 460, 462, 466, 467, 469, 473, 476, 480, 487, 488, 489, 490, 493, 497, 499, 506, 508, 511, 513, 517, 518, 522, 524, 525, 526, 527, 528, 536, 537, 541, 542, 544, 545, 548, 550, 554, 555], "freshli": 8, "later": [8, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 71, 76, 77, 78, 79, 80, 81, 94, 108, 109, 127, 134, 138, 140, 163, 176, 184, 186, 198, 206, 209, 222, 223, 232, 236, 250, 251, 264, 295, 298, 303, 307, 332, 333, 334, 347, 353, 354, 355, 356, 369, 400, 406, 410, 435, 450, 455, 456, 457, 458, 459, 460, 473, 487, 488, 506, 513, 517, 519, 542], "u": [8, 16, 18, 19, 22, 25, 31, 35, 37, 44, 46, 47, 65, 66, 71, 73, 78, 79, 86, 88, 92, 95, 98, 103, 108, 109, 112, 115, 118, 121, 131, 144, 145, 147, 158, 162, 177, 182, 184, 185, 186, 196, 199, 204, 206, 208, 209, 219, 220, 223, 228, 232, 235, 236, 248, 250, 251, 256, 258, 273, 278, 279, 282, 288, 291, 300, 314, 316, 327, 331, 347, 349, 354, 361, 363, 367, 378, 383, 384, 387, 393, 396, 403, 416, 417, 419, 430, 434, 444, 445, 450, 452, 457, 458, 465, 467, 471, 474, 477, 482, 487, 488, 491, 494, 497, 500, 510, 523, 524, 526, 537, 541], "wrapper": [8, 82, 357, 461], "repeatedli": 8, "argument": [8, 53, 65, 73, 74, 80, 86, 88, 97, 104, 108, 109, 111, 112, 118, 127, 135, 159, 163, 170, 174, 181, 182, 184, 186, 192, 196, 203, 204, 206, 209, 215, 220, 227, 228, 230, 231, 232, 236, 243, 248, 256, 258, 267, 272, 274, 278, 279, 281, 282, 288, 304, 328, 332, 349, 350, 355, 361, 363, 372, 379, 383, 384, 386, 387, 393, 400, 407, 431, 435, 444, 452, 453, 459, 465, 467, 476, 483, 487, 488, 490, 491, 497, 506, 514, 538, 542, 548, 550], "user": [8, 9, 10, 11, 12, 14, 16, 18, 19, 20, 21, 22, 25, 27, 28, 31, 32, 33, 34, 35, 36, 37, 39, 41, 42, 46, 47, 48, 52, 53, 55, 59, 65, 66, 67, 71, 76, 77, 78, 79, 80, 81, 84, 85, 86, 87, 88, 90, 93, 95, 96, 97, 98, 100, 101, 104, 106, 110, 111, 112, 114, 115, 117, 118, 120, 122, 124, 126, 127, 129, 142, 145, 162, 163, 167, 170, 171, 172, 177, 181, 183, 184, 186, 188, 191, 192, 193, 194, 199, 203, 205, 206, 207, 209, 211, 214, 215, 216, 222, 223, 227, 229, 231, 232, 234, 236, 238, 243, 244, 250, 251, 255, 257, 258, 260, 263, 265, 266, 267, 268, 270, 271, 274, 276, 280, 281, 284, 285, 288, 290, 293, 295, 297, 298, 299, 311, 314, 331, 332, 333, 334, 337, 342, 343, 347, 352, 353, 354, 355, 356, 359, 360, 362, 363, 365, 368, 370, 371, 372, 373, 375, 376, 379, 381, 385, 386, 389, 390, 393, 395, 398, 400, 414, 417, 434, 435, 439, 444, 445, 446, 450, 455, 456, 457, 458, 459, 460, 463, 464, 465, 466, 467, 469, 472, 474, 475, 476, 477, 479, 480, 483, 485, 489, 490, 491, 493, 494, 496, 497, 499, 501, 503, 505, 506, 508, 521, 524, 541, 542, 546, 557], "stress": [8, 12, 47, 67, 71, 81, 171, 172, 193, 194, 216, 236, 244, 250, 334, 343, 347, 356, 446, 450, 460], "concurr": [8, 12, 45, 47, 48, 49, 50, 71, 87, 104, 131, 176, 183, 198, 205, 208, 219, 222, 229, 231, 232, 235, 247, 250, 257, 274, 300, 347, 362, 
379, 403, 450, 466, 483, 510], "crash": [8, 11, 34, 35, 37, 46, 71, 79, 81, 186, 199, 209, 223, 236, 250, 251, 334, 347, 354, 356, 450, 458, 460], "encount": [8, 46, 47, 53, 71, 76, 79, 80, 86, 104, 151, 176, 177, 186, 198, 199, 209, 222, 223, 231, 232, 236, 250, 251, 256, 274, 320, 333, 347, 354, 355, 361, 379, 423, 450, 455, 458, 459, 465, 483, 530, 548, 549, 550, 551, 553], "associ": [8, 11, 44, 47, 71, 73, 78, 79, 80, 81, 85, 86, 87, 108, 109, 131, 135, 152, 163, 174, 176, 177, 181, 182, 183, 184, 186, 196, 198, 199, 203, 204, 205, 206, 208, 209, 220, 222, 223, 227, 228, 229, 230, 232, 235, 236, 248, 250, 251, 255, 256, 257, 272, 278, 279, 298, 300, 304, 321, 332, 333, 334, 347, 349, 353, 354, 355, 356, 360, 361, 362, 383, 384, 403, 407, 424, 435, 450, 452, 457, 458, 459, 460, 464, 465, 466, 487, 488, 510, 514, 531, 542], "collect": [8, 47, 77, 80, 108, 109, 163, 164, 184, 186, 206, 209, 232, 236, 278, 279, 297, 332, 333, 352, 355, 383, 384, 435, 436, 456, 459, 487, 488, 542, 543], "move": [8, 10, 18, 19, 20, 22, 24, 30, 33, 34, 35, 36, 37, 41, 42, 47, 48, 71, 79, 81, 107, 140, 176, 177, 184, 186, 198, 199, 206, 209, 219, 222, 223, 232, 236, 247, 250, 251, 277, 309, 334, 347, 354, 356, 382, 412, 450, 458, 460, 486, 519, 554, 556], "launch": 8, "spars": [8, 54, 71, 78, 79, 86, 88, 92, 118, 184, 204, 206, 228, 232, 250, 251, 256, 258, 262, 288, 298, 347, 353, 354, 361, 363, 367, 393, 450, 457, 458, 465, 467, 471, 497], "tmp": [8, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 65, 67, 102, 127, 130, 172, 184, 194, 206, 216, 230, 232, 244, 272, 295, 299, 343, 377, 400, 402, 444, 446, 481, 506, 509], "direct": [8, 32, 53, 73, 95, 98, 100, 104, 113, 115, 174, 184, 196, 206, 220, 231, 232, 248, 265, 268, 270, 274, 283, 285, 349, 370, 373, 375, 379, 388, 390, 452, 474, 477, 479, 483, 492, 494], "readm": [8, 10, 12], "vx": 8, "deleg": [8, 78, 81, 88, 118, 122, 126, 127, 184, 186, 206, 209, 232, 236, 258, 288, 295, 298, 334, 353, 356, 363, 393, 400, 457, 460, 467, 497, 501, 505, 506], "permiss": [8, 27, 41, 42, 47, 78, 81, 87, 88, 90, 101, 118, 120, 127, 183, 184, 186, 205, 206, 209, 229, 232, 236, 257, 258, 260, 271, 288, 290, 295, 298, 334, 353, 356, 362, 363, 365, 376, 393, 395, 400, 457, 460, 466, 467, 469, 480, 497, 499, 506], "parent": [8, 71, 76, 77, 78, 79, 88, 90, 91, 92, 95, 98, 101, 104, 107, 110, 112, 114, 115, 118, 120, 127, 139, 175, 183, 184, 197, 205, 206, 221, 223, 229, 230, 231, 232, 249, 251, 257, 258, 260, 261, 262, 271, 272, 274, 277, 280, 282, 284, 288, 290, 295, 297, 298, 347, 352, 353, 354, 363, 365, 366, 367, 376, 379, 382, 385, 387, 389, 393, 395, 400, 411, 450, 455, 456, 457, 458, 467, 469, 470, 471, 474, 477, 480, 483, 486, 489, 491, 493, 494, 497, 499, 506, 518], "assum": [9, 12, 18, 19, 20, 41, 42, 46, 71, 78, 80, 86, 110, 114, 132, 133, 163, 176, 182, 184, 186, 198, 204, 206, 209, 222, 228, 232, 236, 250, 256, 280, 284, 298, 332, 333, 347, 353, 355, 361, 385, 389, 435, 450, 457, 459, 465, 489, 493, 511, 542, 555], "newer": [9, 23, 32, 46, 47, 48, 53, 77, 79, 81, 209, 236, 334, 354, 356, 456, 458, 460, 557], "directli": [9, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 66, 71, 86, 96, 104, 106, 124, 170, 171, 184, 192, 193, 206, 215, 222, 230, 231, 232, 243, 250, 266, 272, 274, 276, 293, 342, 347, 371, 379, 381, 398, 445, 450, 465, 475, 483, 485, 503], "repositori": [9, 11, 12, 13, 16, 18, 19, 20, 22, 23, 25, 26, 31, 33, 34, 36, 39, 40, 41, 42], "preferenti": [9, 88, 118, 184, 206, 232, 258, 288, 363, 393, 467, 497], "As": [9, 12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 
41, 42, 45, 46, 47, 53, 67, 68, 70, 71, 77, 78, 80, 81, 87, 90, 101, 120, 172, 176, 183, 184, 186, 194, 198, 205, 206, 209, 216, 217, 219, 222, 229, 232, 236, 244, 245, 247, 250, 257, 260, 271, 290, 297, 298, 333, 334, 343, 344, 346, 347, 352, 353, 355, 356, 362, 365, 376, 395, 446, 447, 449, 450, 456, 457, 459, 460, 466, 469, 480, 499], "rule": [9, 41, 42, 46, 47, 53, 62, 70, 73, 168, 174, 189, 196, 212, 219, 220, 240, 247, 248, 339, 346, 349, 441, 449, 452], "tightli": 9, "test": [9, 11, 13, 14, 16, 17, 18, 19, 20, 22, 25, 27, 28, 29, 31, 33, 34, 35, 36, 37, 39, 41, 42, 43, 46, 47, 48, 53, 63, 64, 67, 70, 71, 78, 87, 91, 92, 93, 94, 102, 107, 108, 109, 110, 112, 114, 117, 127, 171, 172, 184, 191, 193, 194, 198, 205, 206, 214, 216, 219, 222, 229, 230, 232, 242, 244, 247, 250, 257, 272, 278, 279, 280, 284, 295, 298, 341, 343, 346, 347, 353, 362, 377, 383, 384, 385, 389, 400, 442, 443, 446, 449, 450, 457, 466, 470, 471, 472, 473, 481, 486, 487, 488, 489, 491, 493, 496, 506, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "choic": [9, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 554], "doesn": [9, 11, 18, 19, 41, 42, 47, 48, 53, 74, 78, 104, 206, 231, 232, 274, 298, 350, 353, 379, 453, 457, 483], "re": [9, 10, 12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 67, 68, 71, 73, 74, 78, 80, 87, 90, 101, 102, 108, 109, 110, 114, 120, 138, 139, 144, 172, 174, 181, 184, 186, 194, 196, 198, 203, 206, 209, 216, 220, 222, 227, 230, 232, 236, 244, 248, 250, 257, 260, 271, 272, 278, 279, 280, 284, 290, 298, 307, 333, 343, 344, 347, 349, 350, 353, 355, 362, 365, 376, 377, 383, 384, 385, 389, 395, 410, 411, 416, 446, 447, 450, 452, 453, 457, 459, 466, 469, 480, 481, 487, 488, 489, 493, 499, 517, 518, 523, 549, 551, 554, 556, 557], "roll": [9, 11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 77, 93, 112, 113, 117, 127, 143, 184, 186, 206, 209, 232, 236, 283, 295, 297, 312, 352, 388, 400, 415, 456, 472, 491, 492, 496, 506, 522], "own": [9, 10, 12, 18, 19, 20, 22, 26, 33, 36, 37, 41, 42, 46, 47, 48, 53, 62, 78, 87, 88, 96, 97, 106, 107, 111, 118, 124, 127, 168, 183, 184, 189, 205, 206, 212, 229, 232, 240, 257, 266, 267, 276, 277, 281, 293, 295, 298, 339, 353, 362, 371, 372, 381, 382, 386, 398, 400, 441, 457, 466, 467, 475, 476, 485, 486, 490, 497, 503, 506], "awar": [9, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 48, 57, 81, 110, 114, 236, 280, 284, 334, 356, 385, 389, 460, 489, 493], "capabl": [9, 46, 47, 71, 184, 198, 206, 222, 232, 250, 347, 450], "choos": [9, 13, 14, 18, 19, 20, 21, 22, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 53], "exactli": [9, 22, 47, 62, 108, 109, 110, 114, 168, 184, 189, 206, 207, 212, 222, 232, 234, 240, 250, 278, 279, 280, 284, 299, 339, 383, 384, 385, 389, 441, 487, 488, 489, 493], "upgrad": [9, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 41, 42, 71, 78, 79, 81, 83, 104, 127, 163, 177, 184, 186, 199, 206, 209, 223, 231, 232, 236, 251, 253, 274, 295, 298, 332, 334, 353, 354, 356, 358, 379, 400, 435, 450, 457, 458, 460, 462, 483, 506, 542, 556, 557], "particularli": [9, 47, 50, 53, 71, 78, 85, 176, 181, 184, 198, 203, 206, 222, 227, 232, 250, 255, 298, 347, 353, 360, 450, 457, 464, 552], "conveni": [9, 18, 19, 20, 22, 27, 33, 34, 36, 41, 42, 47, 48, 87, 183, 205, 229, 257, 362, 466], "desktop": [9, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42], "appropri": [9, 12, 18, 19, 20, 22, 33, 34, 36, 41, 42, 47, 48, 49, 53, 62, 71, 77, 78, 87, 131, 139, 168, 176, 178, 180, 183, 184, 185, 189, 198, 200, 202, 205, 206, 208, 212, 
221, 222, 224, 226, 229, 232, 235, 240, 249, 250, 252, 254, 257, 297, 298, 300, 339, 347, 352, 353, 362, 403, 411, 441, 450, 456, 457, 466, 510, 518, 551, 553, 555], "deploy": 9, "binari": [9, 27, 44, 79, 82, 354, 357, 458, 461], "specif": [9, 11, 12, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 39, 41, 42, 46, 47, 51, 53, 61, 70, 71, 73, 74, 76, 78, 79, 80, 81, 86, 104, 108, 109, 131, 132, 136, 143, 157, 174, 177, 182, 184, 186, 196, 198, 199, 204, 206, 209, 219, 220, 222, 223, 228, 230, 231, 232, 235, 236, 239, 247, 248, 250, 251, 256, 272, 274, 278, 279, 295, 298, 300, 301, 305, 312, 326, 333, 334, 338, 346, 347, 349, 350, 353, 354, 355, 356, 361, 379, 383, 384, 403, 404, 408, 415, 429, 440, 449, 450, 452, 453, 455, 457, 458, 459, 460, 465, 483, 487, 488, 510, 511, 515, 522, 536], "enterpris": [9, 46, 53], "red": 9, "hat": 9, "applic": [9, 11, 12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 50, 71, 76, 78, 80, 81, 90, 101, 120, 127, 131, 148, 149, 176, 184, 186, 198, 206, 209, 222, 232, 235, 236, 250, 260, 271, 290, 295, 298, 300, 317, 318, 333, 347, 353, 355, 365, 376, 395, 400, 403, 420, 421, 450, 455, 457, 459, 460, 469, 480, 499, 506, 510, 527, 528, 554, 555, 559, 560, 561], "style": [9, 10, 12, 32, 47, 62, 65, 71, 78, 168, 176, 184, 189, 198, 206, 212, 222, 232, 240, 250, 298, 339, 347, 353, 441, 444, 450, 457], "either": [9, 10, 18, 19, 20, 22, 27, 32, 33, 34, 35, 36, 37, 41, 42, 43, 44, 46, 47, 48, 54, 71, 74, 76, 78, 79, 80, 81, 84, 87, 90, 93, 101, 104, 108, 109, 110, 113, 114, 120, 127, 133, 136, 141, 156, 165, 166, 176, 177, 183, 184, 186, 198, 199, 205, 206, 209, 222, 223, 229, 230, 231, 232, 236, 250, 251, 257, 260, 263, 271, 272, 274, 278, 279, 280, 283, 284, 290, 295, 298, 302, 305, 310, 325, 333, 335, 347, 350, 353, 354, 355, 356, 359, 362, 365, 368, 376, 379, 383, 384, 385, 388, 389, 395, 400, 405, 408, 413, 428, 437, 438, 450, 453, 455, 457, 458, 459, 460, 463, 466, 469, 472, 480, 483, 487, 488, 489, 492, 493, 499, 506, 512, 515, 520, 535, 544, 545, 553, 554, 556, 561], "rebuilt": 9, "streamlin": 9, "Be": [9, 53, 74, 81, 87, 183, 205, 229, 236, 257, 334, 350, 356, 362, 453, 460, 466], "gnu": [9, 18, 19, 20, 22, 23, 39, 43, 44], "To": [9, 10, 15, 18, 19, 20, 22, 23, 26, 27, 29, 32, 33, 34, 35, 36, 37, 38, 39, 41, 42, 43, 46, 47, 48, 53, 62, 67, 71, 77, 78, 79, 80, 86, 88, 93, 99, 102, 104, 108, 109, 110, 112, 113, 114, 117, 118, 119, 122, 126, 127, 143, 145, 155, 168, 172, 175, 176, 177, 182, 184, 186, 189, 194, 197, 198, 199, 204, 206, 209, 212, 216, 221, 222, 223, 228, 230, 231, 232, 236, 240, 244, 249, 250, 251, 256, 258, 269, 272, 274, 278, 279, 280, 283, 284, 288, 289, 295, 297, 298, 312, 314, 324, 333, 339, 343, 347, 352, 353, 354, 355, 361, 363, 374, 377, 379, 383, 384, 385, 388, 389, 393, 394, 400, 415, 417, 427, 441, 446, 450, 456, 457, 458, 459, 465, 467, 472, 478, 481, 483, 487, 488, 489, 491, 492, 493, 496, 497, 498, 501, 505, 506, 522, 524, 534, 550, 557, 558, 561], "sure": [9, 10, 12, 18, 19, 20, 21, 22, 27, 32, 33, 34, 35, 36, 37, 38, 41, 42, 48, 53, 62, 78, 168, 189, 212, 232, 240, 298, 339, 353, 441, 457, 557, 559, 560, 561], "macro": [9, 62, 168, 189, 212, 240, 339, 441], "abi": 9, "stablelist": 9, "sed": [9, 12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43], "crb": 9, "j1": 9, "localinstal": 9, "noarch": [9, 25, 26, 31, 32], "know": [9, 32, 47, 62, 71, 78, 155, 168, 184, 186, 189, 209, 212, 232, 236, 240, 250, 298, 324, 339, 347, 353, 427, 441, 450, 457, 534], "educ": 9, "guess": [9, 86, 204, 228, 256, 361, 465], "unabl": 
[9, 11, 46, 47, 48, 53, 71, 79, 80, 87, 88, 118, 127, 176, 183, 184, 198, 205, 206, 222, 229, 232, 250, 257, 295, 347, 355, 362, 400, 450, 458, 459, 466, 467, 497, 506, 548, 549, 550, 551, 552, 553], "exact": [9, 48, 64, 77, 78, 95, 96, 98, 100, 104, 106, 115, 124, 141, 145, 147, 151, 156, 158, 162, 184, 186, 191, 206, 209, 214, 231, 232, 236, 242, 265, 266, 268, 270, 274, 276, 285, 293, 297, 298, 310, 314, 316, 320, 325, 327, 331, 341, 352, 353, 370, 371, 373, 375, 379, 381, 390, 398, 413, 417, 419, 423, 428, 430, 434, 443, 456, 457, 474, 475, 477, 479, 483, 485, 494, 503, 520, 524, 526, 530, 535, 537, 541, 552], "produc": [9, 34, 64, 71, 78, 79, 110, 114, 164, 184, 199, 206, 222, 223, 232, 250, 251, 280, 284, 298, 341, 347, 353, 354, 385, 389, 436, 443, 450, 457, 458, 489, 493, 543, 555], "spec": 9, "redhat": [9, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "miss": [9, 18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 48, 61, 71, 78, 79, 87, 110, 114, 139, 143, 147, 163, 175, 176, 186, 197, 198, 206, 209, 221, 222, 232, 236, 239, 249, 250, 251, 280, 284, 298, 312, 332, 338, 347, 353, 354, 385, 389, 411, 415, 435, 440, 450, 457, 458, 466, 489, 493, 518, 522, 526, 542, 550, 551, 561, 562], "rm": [9, 16, 18, 19, 25, 28, 31, 33, 34, 35, 36, 37, 41], "dkms_": 9, "product": [9, 26, 32, 46, 47, 48, 71, 78, 87, 91, 92, 93, 107, 112, 117, 127, 183, 184, 205, 206, 222, 229, 232, 250, 257, 295, 298, 347, 353, 362, 400, 450, 457, 466, 470, 471, 472, 486, 491, 496, 506], "fetch": [9, 12, 78, 353, 457], "wget": [9, 12, 41], "tar": [9, 16, 25, 31, 35, 37], "gz": [9, 16, 25, 31, 41], "xzf": 9, "probabl": [9, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 81, 209, 222, 236, 250, 334, 356, 460], "weren": 9, "who": [9, 32, 46, 47, 71, 78, 176, 184, 198, 206, 222, 232, 250, 298, 347, 353, 450, 457], "intend": [9, 43, 46, 47, 48, 67, 71, 73, 80, 81, 136, 164, 175, 186, 194, 196, 197, 198, 209, 216, 220, 221, 222, 236, 244, 248, 249, 250, 305, 333, 334, 343, 347, 349, 355, 356, 408, 436, 446, 450, 452, 459, 460, 515, 543], "modifi": [9, 10, 11, 12, 35, 37, 46, 47, 48, 53, 65, 71, 77, 78, 81, 94, 104, 110, 114, 127, 129, 163, 165, 166, 176, 184, 186, 198, 206, 209, 222, 231, 232, 236, 237, 250, 264, 274, 280, 284, 295, 298, 332, 334, 336, 347, 353, 356, 369, 379, 385, 389, 400, 435, 444, 450, 456, 457, 460, 473, 483, 489, 493, 506, 508, 542, 544, 545], "decid": [9, 41, 42, 71, 250, 347, 450], "kind": [9, 67, 80, 236, 333, 343, 355, 446, 459], "jump": 9, "section": [9, 10, 12, 18, 19, 20, 22, 25, 28, 31, 32, 33, 34, 35, 36, 37, 41, 42, 43, 45, 47, 48, 53, 65, 71, 76, 78, 79, 80, 81, 84, 91, 92, 95, 98, 100, 103, 104, 115, 117, 121, 127, 132, 136, 139, 158, 175, 176, 180, 184, 186, 197, 198, 202, 206, 209, 221, 222, 226, 231, 232, 236, 249, 250, 254, 261, 262, 265, 268, 270, 273, 274, 285, 287, 291, 295, 298, 301, 305, 327, 333, 347, 353, 354, 355, 359, 366, 367, 370, 373, 375, 378, 379, 390, 392, 396, 400, 404, 408, 411, 430, 444, 450, 455, 457, 458, 459, 460, 463, 470, 471, 474, 477, 479, 482, 483, 494, 496, 500, 506, 511, 515, 518, 537, 548, 549, 550, 551, 553, 556, 557], "abov": [9, 10, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 49, 50, 53, 54, 56, 61, 65, 67, 71, 74, 93, 102, 108, 109, 110, 114, 143, 160, 172, 176, 184, 186, 194, 198, 206, 209, 216, 222, 232, 236, 239, 244, 250, 263, 269, 278, 279, 280, 284, 289, 312, 329, 338, 343, 347, 350, 368, 377, 383, 384, 385, 389, 415, 432, 440, 444, 446, 450, 453, 472, 481, 487, 488, 489, 493, 522, 539, 548, 550], "basic": [10, 18, 19, 20, 22, 33, 
34, 35, 36, 37, 41, 42, 46, 51, 58, 59, 65, 74, 86, 139, 175, 182, 197, 204, 221, 228, 249, 256, 350, 361, 411, 444, 453, 465, 518], "rundown": 10, "contribut": [10, 12, 39, 47, 59, 133], "md": [10, 18, 19, 20, 22, 33, 34, 36, 41, 42, 46], "ve": [10, 12, 21, 62, 71, 168, 176, 189, 198, 212, 222, 240, 250, 339, 347, 441, 450], "never": [10, 11, 18, 19, 20, 22, 33, 34, 41, 42, 46, 47, 48, 70, 71, 76, 78, 79, 80, 81, 82, 84, 87, 110, 114, 175, 177, 178, 180, 184, 186, 197, 198, 199, 200, 202, 205, 206, 209, 219, 221, 222, 223, 224, 226, 229, 232, 236, 247, 249, 250, 251, 252, 254, 257, 280, 284, 298, 333, 334, 346, 347, 353, 354, 355, 356, 357, 359, 362, 385, 389, 449, 450, 455, 457, 458, 459, 460, 461, 463, 466, 489, 493], "littl": [10, 18, 19, 20, 22, 33, 34, 35, 36, 37, 47, 48, 67, 70, 77, 86, 184, 204, 206, 228, 232, 256, 297, 343, 352, 361, 446, 449, 456, 465], "global": [10, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 66, 67, 76, 78, 86, 88, 104, 118, 184, 186, 204, 206, 219, 228, 231, 232, 256, 258, 274, 288, 298, 342, 343, 353, 361, 363, 379, 393, 445, 446, 455, 457, 465, 467, 483, 497], "my": [10, 48, 65, 198, 222, 250, 347, 444], "myemail": 10, "norepli": 10, "easiest": 10, "get": [10, 11, 13, 18, 19, 20, 21, 22, 27, 32, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 53, 58, 59, 67, 71, 78, 79, 80, 81, 83, 88, 98, 100, 115, 118, 127, 129, 145, 156, 163, 172, 175, 184, 186, 194, 197, 206, 209, 216, 219, 221, 222, 232, 236, 244, 249, 250, 253, 268, 270, 285, 295, 298, 314, 325, 332, 333, 343, 347, 353, 355, 358, 373, 375, 390, 400, 417, 428, 435, 446, 450, 457, 458, 459, 460, 462, 467, 477, 479, 494, 497, 506, 508, 524, 535, 542, 554, 557], "click": [10, 41, 42, 48, 110, 114, 280, 284, 385, 389, 489, 493], "fork": [10, 12, 17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42], "icon": [10, 48], "comput": [10, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 71, 78, 176, 198, 222, 232, 250, 298, 347, 353, 450, 457], "come": [10, 20, 22, 47, 48, 71, 85, 95, 98, 110, 114, 115, 176, 181, 184, 198, 203, 206, 222, 227, 232, 250, 255, 265, 268, 280, 284, 285, 347, 360, 370, 373, 385, 389, 390, 450, 464, 474, 477, 489, 493, 494], "handi": 10, "establish": [10, 78, 96, 106, 124, 184, 206, 232, 266, 276, 293, 298, 353, 371, 381, 398, 457, 475, 485, 503], "remot": [10, 12, 18, 19, 47, 57, 67, 108, 109, 110, 114, 127, 184, 206, 232, 278, 279, 280, 284, 295, 343, 383, 384, 385, 389, 400, 446, 487, 488, 489, 493, 506], "let": [10, 43], "unrel": [10, 34], "b": [10, 12, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 53, 62, 64, 67, 71, 73, 78, 79, 80, 86, 87, 92, 94, 95, 98, 108, 109, 110, 114, 115, 127, 131, 136, 163, 168, 171, 174, 176, 177, 182, 184, 185, 186, 189, 191, 193, 196, 198, 199, 204, 206, 208, 209, 212, 214, 220, 222, 223, 228, 232, 235, 236, 240, 242, 248, 250, 251, 256, 262, 264, 265, 268, 280, 284, 285, 295, 298, 300, 332, 339, 341, 343, 347, 349, 353, 354, 361, 367, 369, 370, 373, 385, 389, 390, 400, 403, 435, 441, 443, 446, 450, 452, 457, 458, 459, 465, 466, 471, 473, 474, 477, 487, 488, 489, 493, 494, 506, 510, 515, 542], "next": [10, 34, 36, 45, 47, 48, 50, 62, 67, 71, 79, 86, 95, 98, 104, 115, 127, 168, 172, 176, 177, 184, 189, 194, 198, 199, 206, 212, 216, 222, 223, 231, 232, 240, 244, 250, 251, 256, 274, 295, 339, 343, 347, 354, 361, 379, 400, 441, 446, 450, 458, 465, 474, 477, 483, 494, 506], "step": [10, 12, 13, 14, 16, 25, 28, 31, 43, 47, 71, 104, 110, 114, 250, 274, 280, 284, 347, 379, 385, 389, 450, 483, 489, 493, 557], 
"suno": 10, "local": [10, 12, 14, 16, 17, 18, 19, 20, 22, 25, 27, 28, 29, 31, 32, 33, 34, 35, 36, 37, 41, 42, 47, 56, 71, 87, 88, 92, 95, 98, 102, 108, 109, 110, 114, 115, 118, 127, 141, 156, 176, 183, 184, 186, 198, 205, 206, 209, 222, 229, 230, 232, 236, 250, 257, 258, 262, 265, 268, 272, 278, 279, 280, 284, 285, 288, 295, 310, 325, 347, 362, 363, 367, 370, 373, 377, 383, 384, 385, 389, 390, 393, 400, 413, 428, 450, 466, 467, 471, 474, 477, 481, 487, 488, 489, 493, 494, 497, 506, 520, 535], "highli": [10, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 53, 70, 71, 79, 177, 199, 219, 222, 223, 247, 250, 251, 346, 347, 354, 449, 450, 458], "virtual": [10, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 51, 53, 80, 81, 87, 132, 136, 139, 163, 175, 183, 186, 197, 205, 209, 219, 221, 229, 236, 249, 257, 301, 302, 305, 307, 321, 332, 333, 334, 355, 356, 362, 404, 408, 411, 435, 459, 460, 466, 511, 515, 518, 542, 552], "host": [10, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 67, 71, 74, 80, 81, 95, 98, 108, 109, 110, 114, 115, 127, 130, 135, 143, 174, 184, 186, 194, 196, 206, 209, 216, 220, 222, 232, 236, 244, 250, 295, 304, 312, 333, 334, 343, 347, 350, 355, 356, 400, 402, 407, 415, 446, 450, 453, 459, 460, 474, 477, 487, 488, 489, 493, 494, 506, 509, 514, 522, 558], "checkstyl": 10, "correctli": [10, 18, 19, 20, 33, 34, 36, 41, 42, 46, 48, 53, 104, 110, 114, 128, 155, 186, 209, 231, 236, 274, 280, 284, 296, 324, 379, 385, 389, 401, 427, 483, 489, 493, 507, 534, 557], "signoff": [10, 17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42], "editor": [10, 34, 36], "unstag": 10, "pleas": [10, 12, 17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42, 53, 78, 79, 177, 184, 199, 206, 223, 232, 251, 295, 298, 353, 354, 457, 458, 548, 549, 550, 551, 557], "enter": [10, 14, 16, 18, 19, 20, 21, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 50, 67, 71, 90, 101, 120, 143, 157, 172, 176, 194, 198, 216, 222, 232, 236, 244, 250, 260, 271, 290, 312, 326, 343, 347, 365, 376, 395, 415, 429, 446, 450, 469, 480, 499, 522, 536, 557], "ignor": [10, 14, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 53, 54, 71, 73, 78, 79, 82, 84, 86, 87, 92, 95, 98, 102, 115, 143, 168, 174, 176, 178, 180, 183, 184, 186, 189, 196, 198, 199, 200, 202, 205, 206, 209, 212, 220, 222, 223, 224, 226, 229, 230, 232, 236, 240, 248, 250, 251, 252, 254, 257, 262, 265, 268, 272, 285, 298, 312, 339, 347, 349, 353, 354, 357, 359, 362, 367, 370, 373, 377, 390, 415, 450, 452, 457, 458, 461, 463, 465, 466, 471, 474, 477, 481, 494, 522, 561], "empti": [10, 18, 19, 20, 28, 34, 35, 36, 37, 41, 42, 47, 62, 70, 71, 78, 79, 81, 103, 104, 108, 109, 110, 114, 121, 125, 127, 136, 168, 177, 180, 184, 186, 189, 198, 199, 202, 206, 209, 212, 219, 222, 223, 226, 231, 232, 236, 240, 247, 250, 251, 254, 273, 274, 291, 294, 295, 298, 305, 334, 339, 346, 347, 353, 354, 356, 378, 379, 396, 399, 400, 408, 441, 449, 450, 457, 458, 460, 482, 483, 487, 488, 489, 493, 500, 504, 506, 515], "abort": [10, 43, 71, 86, 108, 109, 110, 114, 139, 182, 204, 206, 221, 228, 232, 249, 256, 278, 279, 361, 383, 384, 385, 389, 411, 450, 465, 487, 488, 489, 493, 518], "reset": [10, 87, 105, 183, 205, 229, 232, 257, 275, 362, 380, 466, 484], "hello": 10, "displai": [10, 47, 53, 61, 78, 86, 87, 88, 94, 95, 96, 98, 100, 103, 104, 106, 115, 118, 121, 123, 124, 127, 130, 132, 136, 139, 141, 142, 143, 145, 147, 151, 156, 157, 158, 161, 162, 163, 182, 183, 184, 186, 187, 204, 205, 206, 209, 210, 228, 229, 231, 232, 236, 237, 239, 256, 257, 258, 264, 265, 266, 268, 270, 
273, 274, 276, 285, 288, 291, 292, 293, 295, 298, 299, 301, 305, 308, 310, 311, 312, 314, 316, 320, 325, 326, 327, 330, 331, 332, 336, 338, 353, 361, 362, 363, 369, 370, 371, 373, 375, 378, 379, 381, 390, 393, 396, 397, 398, 400, 402, 404, 408, 411, 413, 414, 415, 417, 419, 423, 428, 429, 430, 433, 434, 435, 440, 457, 465, 466, 467, 473, 474, 475, 477, 479, 482, 483, 485, 494, 497, 500, 502, 503, 506, 509, 511, 515, 518, 520, 521, 522, 524, 526, 530, 535, 536, 537, 540, 541, 542, 556], "guidelin": [10, 12], "charact": [10, 12, 34, 47, 73, 76, 78, 79, 81, 86, 87, 88, 94, 110, 114, 118, 127, 136, 145, 164, 174, 180, 182, 183, 184, 186, 196, 202, 204, 205, 206, 209, 220, 226, 228, 229, 232, 236, 248, 254, 256, 257, 258, 264, 280, 284, 288, 298, 305, 314, 334, 349, 353, 354, 356, 361, 362, 363, 369, 385, 389, 393, 408, 417, 436, 452, 455, 457, 458, 460, 465, 466, 467, 473, 489, 493, 497, 506, 515, 524, 543], "underneath": [10, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 86, 465], "look": [10, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42, 46, 47, 50, 53, 65, 71, 77, 86, 129, 132, 143, 145, 147, 157, 158, 163, 176, 186, 198, 204, 209, 222, 228, 236, 250, 256, 301, 312, 314, 316, 326, 327, 332, 347, 361, 404, 415, 417, 419, 429, 430, 435, 444, 450, 456, 465, 508, 511, 522, 524, 526, 536, 537, 542, 548, 553], "close": [10, 44, 47, 48, 71, 78, 81, 125, 139, 175, 176, 184, 186, 197, 198, 206, 209, 221, 222, 232, 236, 249, 250, 294, 298, 334, 347, 353, 356, 399, 411, 450, 457, 460, 504, 518], "9998": 10, "9999": 10, "save": [10, 18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 48, 71, 78, 79, 86, 108, 109, 110, 114, 177, 199, 206, 222, 223, 232, 250, 251, 278, 279, 280, 284, 298, 347, 353, 354, 383, 384, 385, 389, 450, 457, 458, 465, 487, 488, 489, 493], "exit": [10, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 43, 65, 67, 68, 82, 86, 87, 104, 127, 129, 145, 147, 163, 178, 184, 186, 200, 204, 206, 209, 216, 224, 228, 231, 232, 236, 244, 252, 256, 257, 274, 295, 314, 316, 332, 343, 344, 357, 361, 362, 379, 400, 417, 419, 435, 444, 446, 447, 461, 465, 466, 483, 506, 508, 524, 526, 542], "home": [10, 14, 16, 17, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 53, 65, 77, 86, 88, 91, 92, 93, 95, 98, 100, 113, 115, 117, 118, 127, 171, 182, 184, 193, 204, 206, 228, 232, 256, 295, 297, 352, 361, 400, 444, 456, 465, 467, 470, 471, 472, 474, 477, 479, 492, 494, 496, 497, 506], "stretch": [10, 23, 39], "now": [10, 18, 19, 20, 21, 22, 28, 33, 34, 35, 36, 37, 41, 42, 46, 47, 53, 62, 66, 71, 79, 81, 93, 99, 107, 119, 122, 126, 155, 160, 168, 170, 184, 189, 192, 206, 212, 215, 232, 240, 243, 250, 263, 269, 277, 289, 339, 342, 347, 356, 368, 374, 382, 394, 427, 441, 445, 450, 458, 460, 472, 478, 486, 498, 501, 505, 534, 539, 555, 557], "ask": [10, 47, 53, 71, 74, 78, 90, 101, 102, 103, 116, 120, 121, 230, 232, 250, 260, 271, 272, 273, 290, 291, 298, 347, 350, 353, 365, 376, 377, 378, 391, 395, 396, 450, 453, 457, 469, 480, 481, 482, 495, 499, 500], "credenti": [10, 65, 444], "upload": 10, "button": 10, "recent": [10, 47, 48, 49, 70, 71, 81, 87, 108, 109, 113, 123, 139, 143, 163, 176, 183, 184, 198, 205, 206, 209, 219, 222, 229, 232, 236, 247, 250, 257, 278, 279, 283, 292, 308, 312, 332, 334, 346, 347, 356, 362, 383, 384, 388, 397, 411, 415, 435, 449, 450, 460, 466, 487, 488, 492, 502, 518, 522, 542, 556, 561], "sometim": [10, 41, 42, 43, 79, 168, 189, 212, 240, 339, 354, 458], "plan": [10, 11, 14, 16, 25, 31, 47, 48, 53, 77, 184, 206, 232, 297, 352, 456], "along": [10, 47, 87, 90, 101, 104, 
120, 125, 147, 162, 163, 183, 186, 205, 209, 229, 231, 232, 236, 257, 260, 271, 274, 290, 294, 316, 331, 332, 362, 365, 376, 379, 395, 399, 419, 434, 435, 466, 469, 480, 483, 499, 504, 526, 541, 542], "amend": [10, 12], "forc": [10, 12, 16, 18, 19, 20, 21, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 53, 71, 74, 78, 86, 87, 103, 104, 108, 109, 112, 113, 121, 131, 132, 133, 136, 143, 148, 149, 153, 159, 163, 183, 184, 185, 186, 198, 205, 206, 208, 209, 222, 229, 230, 231, 232, 235, 236, 250, 257, 263, 272, 273, 274, 278, 279, 282, 283, 291, 298, 299, 300, 301, 302, 305, 306, 312, 317, 318, 322, 328, 332, 347, 350, 353, 361, 362, 378, 379, 383, 384, 387, 388, 396, 403, 404, 405, 408, 415, 420, 421, 425, 431, 435, 450, 453, 457, 465, 466, 482, 483, 487, 488, 491, 492, 500, 510, 511, 512, 515, 522, 527, 528, 532, 538, 542], "screen": [10, 18, 19, 20, 22, 33, 41, 42], "old": [10, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 70, 71, 79, 86, 90, 101, 120, 133, 153, 168, 174, 176, 177, 182, 186, 189, 196, 198, 199, 204, 209, 212, 219, 220, 222, 223, 228, 236, 240, 247, 250, 251, 256, 260, 271, 290, 322, 346, 347, 354, 361, 365, 376, 395, 425, 449, 450, 458, 465, 469, 480, 499, 532, 555, 557], "ones": [10, 26, 48, 71, 79, 81, 87, 102, 110, 114, 236, 280, 284, 334, 356, 362, 377, 385, 389, 450, 460, 466, 481, 489, 493, 557], "restart": [10, 18, 19, 20, 22, 28, 33, 34, 36, 41, 42, 47, 71, 79, 102, 152, 154, 155, 163, 176, 198, 209, 222, 223, 230, 236, 250, 251, 272, 321, 323, 324, 332, 347, 354, 377, 424, 426, 427, 435, 450, 458, 481, 531, 533, 534, 542], "excess": [10, 47, 48, 71, 250, 347, 450], "delai": [10, 51, 58, 59, 71, 74, 78, 81, 87, 131, 139, 175, 176, 183, 184, 197, 198, 205, 206, 208, 221, 222, 229, 232, 235, 236, 249, 250, 257, 298, 300, 334, 347, 350, 353, 356, 362, 403, 411, 450, 453, 457, 460, 466, 510, 518], "futur": [10, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 65, 66, 71, 78, 87, 110, 114, 163, 165, 166, 170, 183, 184, 192, 205, 206, 209, 215, 229, 232, 236, 243, 257, 280, 284, 298, 332, 335, 342, 353, 362, 385, 389, 435, 437, 438, 444, 445, 450, 457, 466, 489, 493, 542, 544, 545, 547, 551, 553, 555, 561], "date": [10, 34, 35, 36, 37, 47, 65, 145, 147, 155, 158, 162, 186, 209, 236, 314, 316, 324, 327, 331, 417, 419, 427, 430, 434, 444, 524, 526, 534, 537, 541], "grab": [10, 47], "back": [10, 11, 12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 71, 76, 77, 79, 80, 90, 101, 104, 110, 113, 114, 120, 127, 135, 143, 163, 184, 186, 198, 206, 209, 219, 222, 223, 231, 232, 236, 247, 250, 251, 260, 271, 274, 280, 283, 284, 290, 295, 297, 312, 333, 347, 352, 354, 355, 365, 376, 379, 385, 388, 389, 395, 400, 407, 415, 450, 455, 456, 458, 459, 469, 480, 483, 489, 492, 493, 499, 506, 514, 522, 557], "mani": [10, 11, 12, 18, 19, 20, 21, 22, 25, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 50, 53, 67, 71, 77, 78, 79, 131, 139, 171, 172, 175, 176, 177, 185, 193, 194, 197, 198, 199, 208, 216, 221, 222, 223, 232, 235, 239, 244, 249, 250, 251, 298, 300, 343, 347, 353, 354, 403, 411, 446, 450, 456, 457, 458, 510, 518], "Not": [10, 11, 43, 46, 53, 81, 87, 132, 133, 136, 143, 153, 168, 183, 186, 189, 205, 209, 212, 229, 236, 240, 257, 301, 302, 305, 312, 322, 334, 339, 356, 362, 404, 405, 408, 415, 425, 460, 466, 511, 512, 515, 522, 532], "anyth": [10, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 48, 78, 110, 114, 280, 284, 385, 389, 489, 493], "touch": [10, 18, 19, 20, 34, 35, 36, 37, 41, 42, 53, 71, 102, 230, 250, 272, 347, 377, 450, 481], "advanc": [10, 46, 
47, 48, 57, 71, 78, 86, 204, 206, 228, 232, 256, 298, 353, 361, 457, 465, 555], "wiki": [10, 14, 16, 17, 22, 25, 31, 41, 42, 53], "articl": [10, 46, 53], "atlassian": 10, "tutori": [10, 14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 36, 41, 42], "commit": [11, 12, 13, 17, 18, 19, 20, 22, 28, 29, 33, 34, 35, 36, 37, 41, 42, 43, 47, 48, 50, 56, 71, 78, 81, 110, 114, 127, 176, 184, 186, 198, 206, 209, 222, 232, 236, 250, 280, 284, 295, 298, 334, 347, 353, 356, 385, 389, 400, 450, 457, 460, 489, 493, 506, 561], "explicitli": [11, 47, 53, 70, 74, 78, 80, 88, 95, 98, 105, 115, 118, 127, 143, 171, 184, 186, 193, 206, 209, 219, 232, 236, 247, 258, 265, 268, 275, 285, 288, 295, 298, 312, 333, 346, 350, 353, 355, 363, 370, 373, 380, 390, 393, 400, 415, 449, 453, 457, 459, 467, 474, 477, 484, 494, 497, 506, 522, 547, 549], "given": [11, 18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 48, 53, 54, 64, 67, 71, 73, 79, 80, 85, 86, 87, 88, 89, 90, 92, 93, 94, 95, 96, 97, 98, 101, 102, 103, 104, 105, 106, 108, 109, 110, 111, 112, 114, 115, 116, 118, 120, 121, 124, 125, 127, 131, 132, 133, 136, 137, 139, 140, 141, 143, 145, 147, 151, 153, 156, 158, 161, 162, 163, 174, 175, 176, 181, 183, 184, 186, 191, 196, 197, 198, 199, 203, 204, 205, 206, 209, 214, 220, 221, 222, 223, 227, 228, 229, 231, 232, 235, 236, 242, 248, 249, 250, 251, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 270, 271, 273, 274, 275, 276, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 290, 291, 293, 294, 295, 300, 301, 302, 305, 306, 309, 310, 312, 314, 316, 322, 325, 327, 330, 331, 332, 333, 341, 347, 349, 354, 355, 360, 361, 362, 363, 364, 365, 367, 368, 369, 370, 371, 372, 373, 376, 377, 378, 379, 380, 381, 383, 384, 385, 386, 387, 389, 390, 391, 393, 395, 396, 398, 399, 400, 403, 404, 405, 408, 409, 411, 412, 413, 415, 417, 419, 425, 428, 430, 433, 434, 435, 443, 446, 450, 452, 458, 459, 464, 465, 466, 467, 468, 469, 471, 472, 473, 474, 475, 476, 477, 480, 481, 482, 483, 484, 485, 487, 488, 489, 490, 491, 493, 494, 495, 497, 499, 500, 503, 504, 506, 510, 511, 512, 515, 516, 518, 519, 520, 522, 524, 526, 530, 532, 535, 537, 540, 541, 542], "varieti": [11, 53, 78, 184, 206, 232, 298, 353, 457], "track": [11, 12, 13, 28, 39, 46, 47, 54, 58, 59, 71, 77, 79, 80, 86, 90, 101, 102, 110, 114, 120, 177, 186, 199, 209, 223, 230, 232, 236, 250, 251, 260, 271, 272, 280, 284, 290, 333, 347, 354, 355, 365, 376, 377, 385, 389, 395, 450, 456, 458, 459, 465, 469, 480, 481, 489, 493, 499], "comment": [11, 12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 53, 73, 76, 79, 81, 168, 174, 186, 189, 196, 209, 212, 220, 236, 240, 248, 334, 339, 349, 354, 356, 452, 455, 458, 460], "isn": [11, 32, 47, 49, 53, 71, 78, 139, 175, 176, 197, 198, 206, 221, 222, 230, 232, 249, 250, 272, 298, 347, 353, 411, 450, 457, 518], "lack": [11, 46, 53, 71, 78, 134, 184, 198, 206, 222, 232, 236, 250, 298, 303, 347, 353, 406, 450, 457, 513], "denot": [11, 47, 86, 183, 205, 229, 256, 257, 361, 465], "prior": [11, 46, 47, 48, 49, 54, 65, 71, 73, 78, 94, 127, 174, 176, 184, 196, 198, 206, 220, 222, 232, 248, 250, 295, 298, 347, 349, 353, 400, 444, 450, 452, 457, 473, 506], "appli": [11, 12, 14, 16, 22, 25, 26, 28, 29, 31, 32, 33, 34, 35, 37, 43, 46, 47, 48, 53, 65, 70, 71, 73, 78, 79, 90, 93, 95, 97, 98, 101, 108, 109, 110, 111, 114, 115, 120, 160, 174, 184, 196, 198, 206, 219, 220, 222, 232, 236, 247, 248, 250, 260, 263, 265, 267, 268, 271, 278, 279, 280, 281, 284, 285, 290, 298, 329, 346, 347, 349, 353, 354, 365, 368, 370, 372, 373, 376, 383, 
384, 385, 386, 389, 390, 395, 432, 444, 449, 450, 452, 457, 458, 469, 472, 474, 476, 477, 480, 487, 488, 489, 490, 493, 494, 499, 539], "id": [11, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 53, 56, 58, 59, 67, 73, 74, 76, 78, 79, 86, 87, 96, 104, 105, 106, 124, 127, 128, 130, 131, 139, 143, 163, 174, 175, 182, 183, 184, 185, 186, 194, 196, 197, 204, 205, 206, 208, 209, 216, 220, 221, 223, 228, 229, 231, 232, 235, 236, 244, 248, 249, 251, 256, 257, 266, 274, 275, 276, 293, 295, 296, 298, 300, 312, 332, 343, 349, 350, 353, 354, 361, 362, 371, 379, 380, 381, 398, 400, 401, 402, 403, 411, 415, 435, 446, 452, 453, 455, 457, 458, 465, 466, 475, 483, 484, 485, 503, 506, 507, 509, 510, 518, 522, 542, 562], "11453": 11, "check_disk": 11, "zol": [11, 12, 13, 18, 19, 20, 22, 33, 34, 36, 41, 42, 53, 54, 58, 59, 178, 184, 200, 224, 252], "11276": 11, "da68988": 11, "11052": 11, "2efea7c": 11, "11051": 11, "3b61ca3": 11, "10853": 11, "8dc2197": 11, "10844": 11, "61c3391": 11, "10842": 11, "d10b2f1": 11, "10841": 11, "944a372": 11, "10809": 11, "ee36c70": 11, "10808": 11, "2ef0f8c": 11, "10701": 11, "0091d66": 11, "10601": 11, "cc99f27": 11, "10573": 11, "48d3eb4": 11, "10572": 11, "edc1e71": 11, "10566": 11, "ab7615d": 11, "10554": 11, "bec1067": 11, "10500": 11, "03916905": 11, "10449": 11, "379ca9c": 11, "10406": 11, "da2feb4": 11, "10154": 11, "10067": 11, "remap": [11, 79, 85, 223, 251, 255, 354, 360, 458, 464], "9884": 11, "9851": 11, "9691": 11, "d9b4bf0": 11, "9683": 11, "devid": [11, 76, 163, 209, 236, 332, 435, 455, 542], "9680": 11, "9672": 11, "29445fe3": 11, "9647": 11, "a448a25": 11, "9626": 11, "59e6e7ca": 11, "9635": 11, "9623": 11, "22448f08": 11, "9621": 11, "305bc4b3": 11, "9539": 11, "5228cf01": 11, "9512": 11, "b4555c77": 11, "9487": 11, "48fbb9dd": 11, "9466": 11, "272b5d73": 11, "9440": 11, "f664f1e": 11, "ticket": 11, "land": [11, 47], "9433": 11, "0873bb63": 11, "9421": 11, "64c1dcef": 11, "9237": 11, "introduc": [11, 46], "8567": 11, "9194": 11, "9077": 11, "9027": 11, "4a5d7f82": 11, "9018": 11, "3ec34e55": 11, "8984": 11, "wip": 11, "nfsv4": [11, 34, 35, 36, 37, 78, 184, 206, 232, 298, 353, 457], "8969": 11, "8942": 11, "650258d7": 11, "8941": 11, "390d679a": 11, "8862": 11, "3b9edd7": 11, "8858": 11, "8856": 11, "encrypt": [11, 14, 16, 25, 27, 28, 31, 53, 71, 74, 77, 78, 79, 86, 88, 90, 101, 102, 103, 108, 109, 110, 114, 116, 118, 120, 121, 127, 143, 151, 155, 157, 165, 166, 222, 223, 230, 232, 236, 250, 251, 258, 260, 271, 272, 273, 278, 279, 280, 284, 288, 290, 291, 295, 298, 312, 320, 326, 347, 350, 353, 354, 363, 365, 376, 377, 378, 383, 384, 385, 389, 391, 393, 395, 396, 400, 415, 423, 429, 450, 453, 456, 457, 458, 465, 467, 469, 480, 481, 482, 487, 488, 489, 493, 495, 497, 499, 500, 506, 522, 530, 534, 536, 544, 545, 557], "b525630": 11, "8809": 11, "libfakekernel": 11, "refactor": 11, "8727": 11, "8713": 11, "871e0732": 11, "8661": 11, "1ce23dca": 11, "8648": 11, "f763c3d1": 11, "8602": 11, "a032ac4": 11, "8601": [11, 65, 444], "d99a015": 11, "equival": [11, 46, 47, 48, 54, 62, 71, 74, 78, 80, 90, 92, 101, 102, 103, 104, 110, 114, 116, 120, 121, 131, 136, 153, 165, 166, 175, 180, 184, 186, 197, 202, 206, 208, 209, 226, 231, 232, 235, 236, 254, 260, 262, 271, 273, 274, 280, 284, 290, 291, 298, 300, 305, 322, 333, 335, 347, 350, 353, 355, 365, 367, 376, 377, 378, 379, 385, 389, 391, 395, 396, 403, 408, 425, 437, 438, 441, 450, 453, 457, 459, 469, 471, 480, 481, 482, 483, 489, 493, 495, 499, 500, 510, 515, 532, 544, 545], "initi": [11, 12, 
13, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 67, 71, 77, 78, 80, 81, 83, 90, 91, 93, 101, 108, 109, 110, 114, 120, 127, 132, 133, 142, 151, 153, 158, 160, 162, 163, 184, 186, 206, 209, 222, 232, 236, 250, 253, 260, 263, 271, 278, 279, 280, 284, 290, 295, 297, 298, 301, 302, 311, 320, 322, 327, 329, 331, 332, 333, 334, 343, 347, 352, 353, 355, 356, 358, 365, 368, 376, 383, 384, 385, 389, 395, 400, 404, 405, 414, 423, 425, 430, 432, 434, 435, 446, 450, 456, 457, 459, 460, 462, 469, 470, 472, 480, 487, 488, 489, 493, 499, 506, 511, 512, 521, 530, 532, 537, 539, 541, 542], "8590": 11, "935e2c2": 11, "8569": 11, "relev": [11, 12, 41, 47, 99, 104, 119, 144, 160, 184, 206, 232, 236, 274, 298, 313, 329, 353, 374, 379, 394, 416, 432, 478, 483, 498, 523, 539], "8552": 11, "8521": 11, "ee6370a7": 11, "8502": 11, "7955": 11, "9485": 11, "1258bd7": 11, "8477": 11, "92e43c1": 11, "8454": 11, "8423": 11, "50c957f": 11, "8408": 11, "5f1346c": 11, "8379": 11, "8376": 11, "8311": 11, "assess": 11, "8304": 11, "8300": [11, 22, 33], "44f09cd": 11, "8265": 11, "large_dnod": [11, 78, 79, 199, 206, 223, 232, 251, 298, 353, 354, 457, 458], "8168": 11, "78d95ea": 11, "8138": 11, "spell": 11, "came": [11, 48, 557], "mdoc": 11, "convers": [11, 46, 47, 53, 70, 104, 219, 231, 247, 274, 346, 379, 449, 483], "8108": 11, "8068": 11, "a1d477c24c": 11, "evacu": [11, 79, 151, 223, 236, 251, 320, 354, 423, 458, 530], "8064": 11, "8022": 11, "e55ebf6": 11, "8021": 11, "7657def": 11, "8013": 11, "7982": 11, "7970": 11, "c30e58c": 11, "7956": 11, "cda0317": 11, "7869": 11, "df7eecc": 11, "7816": 11, "7803": 11, "upda": 11, "te_vdev_config_dev_str": 11, "7801": 11, "0eef1bd": 11, "f25efb3": 11, "7779": 11, "zfs_ctldir": 11, "rewritten": [11, 54], "7740": 11, "32d41fb": 11, "7739": 11, "582cc014": 11, "7730": 11, "e24e62a": 11, "7710": 11, "under": [11, 21, 26, 32, 35, 37, 44, 46, 47, 48, 53, 68, 71, 73, 74, 77, 78, 79, 80, 81, 86, 87, 88, 95, 98, 115, 118, 127, 129, 147, 163, 175, 176, 180, 182, 183, 184, 186, 197, 198, 199, 202, 204, 205, 206, 209, 217, 219, 221, 222, 223, 226, 228, 229, 232, 236, 245, 247, 248, 249, 250, 251, 254, 256, 257, 258, 288, 295, 297, 298, 332, 333, 334, 344, 347, 349, 350, 352, 353, 354, 355, 356, 361, 362, 363, 393, 400, 435, 447, 450, 452, 453, 456, 457, 458, 459, 460, 465, 466, 467, 474, 477, 494, 497, 506, 508, 526, 542, 555], "7602": 11, "7591": 11, "541a090": 11, "7586": 11, "c443487": 11, "7570": 11, "discard": [11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 53, 71, 78, 79, 80, 95, 98, 108, 109, 113, 115, 127, 134, 143, 162, 176, 184, 186, 198, 206, 209, 222, 223, 232, 236, 250, 251, 278, 279, 283, 295, 298, 303, 312, 331, 333, 347, 353, 354, 355, 383, 384, 388, 400, 406, 415, 434, 450, 457, 458, 459, 474, 477, 487, 488, 492, 494, 506, 513, 522, 541, 553], "asynchron": [11, 18, 19, 20, 22, 33, 34, 36, 41, 42, 47, 50, 66, 68, 71, 80, 81, 145, 170, 176, 186, 192, 198, 209, 215, 217, 222, 236, 243, 245, 250, 314, 333, 334, 342, 344, 347, 355, 356, 417, 445, 447, 450, 459, 460, 524, 557], "unclear": 11, "purpos": [11, 43, 46, 47, 48, 64, 67, 71, 76, 78, 79, 80, 81, 86, 110, 114, 136, 163, 172, 182, 184, 186, 191, 194, 198, 199, 204, 206, 209, 214, 216, 222, 223, 228, 232, 236, 242, 244, 250, 251, 256, 280, 284, 298, 332, 333, 334, 341, 343, 347, 353, 354, 355, 356, 361, 385, 389, 435, 443, 446, 450, 455, 457, 458, 459, 460, 465, 489, 493, 515, 542, 557], "7542": 11, "libshar": 11, "address": [11, 12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 
42, 46, 47, 53, 78, 79, 87, 95, 98, 115, 127, 183, 184, 205, 206, 219, 223, 229, 232, 251, 257, 295, 353, 354, 362, 400, 457, 458, 466, 474, 477, 494, 506], "eventu": [11, 71, 250, 347, 450], "retir": [11, 71, 450], "flexibli": 11, "share": [11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 71, 77, 78, 79, 80, 81, 83, 86, 87, 88, 93, 95, 98, 110, 114, 115, 118, 127, 132, 136, 140, 163, 171, 183, 184, 186, 193, 205, 206, 209, 229, 232, 236, 250, 253, 257, 258, 263, 280, 284, 288, 295, 297, 298, 301, 305, 309, 332, 333, 334, 347, 352, 353, 354, 355, 356, 358, 362, 363, 368, 385, 389, 393, 400, 404, 408, 412, 435, 450, 456, 457, 458, 459, 460, 462, 465, 466, 467, 472, 474, 477, 489, 493, 494, 497, 506, 511, 515, 519, 542, 553], "7512": 11, "7497": 11, "dtrace": [11, 104, 231, 274, 379, 483], "readili": 11, "7446": 11, "7430": 11, "68cbd56": 11, "7402": 11, "690fe64": 11, "7345": 11, "058ac9b": 11, "7278": 11, "arc": [11, 46, 48, 53, 61, 71, 78, 80, 131, 176, 184, 185, 198, 206, 208, 222, 232, 235, 239, 250, 298, 300, 333, 338, 347, 353, 355, 403, 440, 450, 457, 459, 510], "tune": [11, 18, 19, 20, 33, 34, 35, 36, 37, 41, 42, 46, 47, 49, 53, 58, 59, 71, 76, 78, 176, 184, 198, 206, 222, 232, 250, 298, 347, 353, 450, 455, 457], "slightli": [11, 46, 47, 48, 67, 71, 79, 133, 155, 172, 194, 198, 216, 219, 222, 236, 244, 250, 324, 343, 347, 354, 427, 446, 450, 458, 534], "cover": [11, 39, 71, 347, 450], "arc_tuning_upd": 11, "7238": 11, "zvol_swap": 11, "alreadi": [11, 18, 19, 20, 22, 32, 33, 34, 35, 36, 37, 47, 48, 49, 53, 71, 73, 78, 79, 90, 91, 92, 97, 101, 104, 108, 109, 111, 120, 132, 143, 154, 163, 165, 166, 174, 176, 177, 184, 186, 196, 198, 199, 206, 209, 220, 222, 223, 231, 232, 236, 248, 250, 251, 260, 261, 262, 267, 271, 274, 278, 279, 281, 290, 298, 312, 323, 332, 347, 349, 353, 354, 365, 366, 367, 372, 376, 379, 383, 384, 386, 395, 415, 426, 435, 450, 452, 457, 458, 469, 470, 471, 476, 480, 483, 487, 488, 490, 499, 511, 522, 533, 542, 544, 545], "7194": 11, "d7958b4": 11, "7164": 11, "b1b85c87": 11, "7041": 11, "33c0819": 11, "7016": 11, "d3c2ae1": 11, "6914": 11, "arc_meta_limit": [11, 47], "zfs_arc_meta_limit_perc": [11, 198, 222, 250, 347], "6875": 11, "6843": 11, "f5f087e": 11, "6841": 11, "4254acb": 11, "6781": 11, "15313c5": 11, "6765": 11, "6764": 11, "6763": 11, "6762": 11, "6648": 11, "6bb24f4": 11, "6578": 11, "6577": 11, "6575": 11, "6568": 11, "6528": 11, "6494": 11, "vdev_disk": 11, "vdev_fil": 11, "rework": 11, "propos": 11, "6468": 11, "6465": 11, "6434": 11, "472e7c6": 11, "6421": 11, "ca0bf58": 11, "6418": 11, "131cc95": 11, "6391": 11, "ee06391": [11, 12], "6390": 11, "85802aa": 11, "6388": 11, "0de7c55": 11, "6386": 11, "485c581": 11, "6385": 11, "f3ad9cd": 11, "6369": 11, "6368": 11, "2024041": 11, "6346": 11, "6334": 11, "1a04bab": 11, "6290": 11, "017da6": 11, "6250": 11, "6249": 11, "6248": 11, "6220": 11, "b_thaw": 11, "unus": [11, 25, 31, 46, 47, 53, 71, 79, 81, 198, 222, 236, 250, 334, 347, 354, 356, 450, 458, 460, 555], "6209": 11, "mutex": [11, 47, 71, 222, 250, 347, 450], "phtread": 11, "primit": [11, 47, 71, 222, 250, 347, 450], "6095": 11, "f866a4ea": 11, "6091": 11, "c11f100": 11, "6037": 11, "a8bd6dc": 11, "5984": 11, "480f626": 11, "5966": 11, "5961": 11, "22872ff": 11, "5882": 11, "83e9986": 11, "5815": 11, "5770": 11, "c3275b5": 11, "5769": 11, "dd26aa5": 11, "5768": 11, "5766": 11, "4dd1893": 11, "5693": 11, "0f7d2a4": 11, "5692": 11, "filefrag": 11, "5684": 11, "5503": 11, "0f676dc": 11, "deploi": [11, 46, 53], "7072": 11, "5502": 11, 
"f0ed6c7": 11, "5410": 11, "0bf8501": 11, "5409": 11, "b23d543": 11, "5379": 11, "zfs_putpag": 11, "5316": 11, "idmap": 11, "facil": [11, 47, 71, 77, 86, 176, 182, 198, 204, 222, 228, 250, 256, 347, 361, 450, 456, 465], "delta": [11, 176], "have_idmap": 11, "chunk": [11, 47, 71, 78, 171, 176, 184, 193, 198, 206, 222, 232, 250, 298, 347, 353, 450, 457], "readabl": [11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 53, 66, 76, 78, 86, 90, 95, 98, 101, 115, 120, 170, 171, 182, 184, 192, 193, 204, 206, 215, 228, 232, 243, 256, 260, 265, 268, 271, 285, 290, 298, 342, 353, 361, 365, 370, 373, 376, 390, 395, 445, 455, 457, 465, 469, 474, 477, 480, 494, 499], "5313": 11, "ec8501": 11, "5312": 11, "fold": 11, "cleanup": [11, 12, 47, 71, 74, 171, 193, 250, 347, 350, 450, 453], "5219": 11, "ef56b07": 11, "5179": 11, "3f4058c": 11, "5154": 11, "9a49d3f": 11, "5149": 11, "zvol_max_discard_block": [11, 71, 176, 198, 222, 250, 347, 450], "5148": 11, "dkiocfre": 11, "ioctl": [11, 71, 139, 175, 197, 221, 249, 250, 347, 411, 450, 518], "5136": 11, "e8b96c6": 11, "4752": 11, "aa9af22": 11, "4745": 11, "411bf20": 11, "4698": 11, "4fcc437": 11, "4620": 11, "4573": 11, "10b7549": 11, "4571": 11, "6e1b9d0": 11, "4570": 11, "b1d13a6": 11, "4391": 11, "78e2739": 11, "4465": 11, "4263": 11, "4242": 11, "neither": [11, 65, 74, 78, 79, 88, 96, 106, 108, 109, 110, 114, 118, 124, 182, 184, 204, 206, 232, 258, 266, 276, 278, 279, 280, 284, 288, 293, 298, 350, 353, 363, 371, 381, 383, 384, 385, 389, 393, 398, 444, 453, 457, 458, 467, 475, 485, 487, 488, 489, 493, 497, 503], "vnode": 11, "4206": 11, "2820bc4": 11, "4188": 11, "2e7b765": 11, "4181": 11, "4161": 11, "reader": [11, 35, 37, 47], "writer": [11, 47, 71, 198, 222, 250, 347, 450], "4128": 11, "ldi_ev_register_callback": 11, "notif": 11, "scsi": [11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 53, 73, 85, 174, 196, 220, 221, 248, 249, 255, 349, 360, 411, 452, 464], "handler": [11, 131, 185, 208, 235, 300, 403, 510], "4072": 11, "3998": 11, "417104bd": 11, "3947": 11, "7f9d994": 11, "3928": 11, "3871": 11, "d1d7e268": 11, "3747": 11, "090ff09": 11, "3705": 11, "lz4": [11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 66, 71, 78, 79, 165, 166, 170, 177, 184, 192, 199, 206, 215, 223, 232, 243, 251, 298, 342, 353, 354, 445, 450, 457, 458, 544, 545], "workspac": 11, "kmem": [11, 47, 70, 219, 247, 346, 449], "cach": [11, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 57, 61, 66, 70, 71, 74, 78, 80, 81, 86, 102, 131, 132, 143, 145, 146, 151, 163, 170, 176, 182, 184, 185, 186, 192, 198, 204, 206, 208, 209, 215, 219, 222, 228, 230, 232, 235, 236, 239, 243, 247, 250, 256, 272, 298, 300, 312, 315, 320, 332, 333, 334, 338, 342, 346, 347, 350, 353, 355, 356, 361, 377, 403, 415, 418, 423, 435, 440, 445, 449, 450, 453, 457, 459, 460, 465, 481, 510, 511, 522, 524, 525, 530, 542, 562], "resolv": [11, 12, 16, 18, 19, 20, 25, 31, 32, 34, 36, 41, 42, 46, 53, 84, 132, 145, 147, 157, 158, 186, 209, 236, 301, 314, 316, 326, 327, 359, 404, 417, 419, 429, 430, 463, 511, 524, 526, 536, 537, 547, 558, 559, 560, 561], "stack": [11, 46, 47, 53, 67, 104, 172, 175, 194, 197, 216, 221, 231, 244, 249, 274, 343, 379, 446, 483], "3606": 11, "c5b247f": 11, "3580": 11, "3543": 11, "8dca0a9": 11, "3512": 11, "67629d0": 11, "3507": 11, "43a696": 11, "3444": 11, "3371": 11, "3311": 11, "3301": 11, "3258": 11, "9d81146": 11, "3254": 11, "3246": 11, "cc92e9d": 11, "2933": 11, "2897": 11, "fb82700": 11, "2665": 11, "32a9872": 11, "2130": 11, "460a021": 11, "1974": 11, "restructur": 11, "1898": 11, 
"vm": [11, 14, 16, 23, 25, 28, 31], "1700": 11, "1618": 11, "ca67b33": 11, "1337": 11, "2402458": 11, "1126": 11, "e43b290": 11, "763": 11, "3cee226": 11, "742": 11, "701": 11, "348": 11, "243": 11, "manual": [11, 14, 16, 18, 19, 20, 22, 25, 27, 31, 33, 34, 35, 36, 37, 41, 42, 46, 51, 53, 61, 62, 64, 65, 66, 67, 68, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 168, 172, 174, 175, 176, 177, 180, 181, 184, 189, 191, 194, 196, 197, 198, 199, 202, 203, 204, 206, 207, 209, 212, 214, 216, 217, 219, 220, 221, 222, 223, 226, 227, 228, 230, 231, 232, 234, 236, 239, 240, 242, 243, 244, 245, 247, 248, 249, 250, 251, 252, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 338, 339, 341, 342, 343, 344, 346, 347, 349, 350, 352, 353, 354, 355, 356, 357, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 440, 441, 443, 444, 445, 446, 447, 449, 450, 452, 453, 455, 456, 457, 458, 459, 460, 461, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 557, 559, 560], "184": 11, "act": [12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 65, 71, 78, 104, 184, 198, 206, 222, 231, 232, 250, 274, 298, 347, 353, 379, 444, 450, 457, 483], "regularli": [12, 46, 53], "outstand": [12, 46, 47, 49, 50, 53, 70, 71, 86, 176, 182, 198, 204, 219, 222, 228, 247, 250, 256, 346, 347, 361, 449, 450, 465, 557], "submit": [12, 47, 71, 131, 139, 175, 197, 198, 208, 221, 222, 235, 249, 250, 300, 347, 403, 411, 450, 510, 518], "inclus": [12, 71, 80, 81, 93, 129, 184, 206, 209, 232, 236, 263, 333, 334, 355, 356, 368, 450, 459, 460, 472, 508], "great": [12, 48, 53], "familiar": [12, 43], "yourself": [12, 18, 19, 20, 33, 34, 36, 43], "quickli": [12, 46, 47, 48, 50, 53, 70, 71, 77, 78, 133, 153, 160, 176, 184, 198, 206, 219, 222, 232, 236, 247, 250, 297, 302, 322, 329, 346, 347, 352, 405, 425, 432, 449, 450, 456, 457, 512, 532, 539], "valuabl": 12, "guid": [12, 14, 16, 17, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 52, 66, 71, 76, 78, 79, 80, 81, 104, 131, 132, 139, 145, 147, 157, 158, 163, 170, 175, 177, 184, 185, 186, 192, 197, 199, 206, 208, 209, 215, 221, 223, 231, 
232, 235, 236, 243, 249, 250, 251, 274, 298, 300, 301, 314, 316, 326, 327, 332, 333, 334, 342, 347, 353, 354, 355, 356, 379, 403, 404, 411, 417, 419, 429, 430, 435, 445, 450, 455, 457, 458, 459, 460, 483, 510, 511, 518, 524, 526, 536, 537, 542], "web": 12, "person": [12, 53], "slow": [12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 71, 77, 158, 175, 184, 197, 198, 206, 222, 232, 236, 250, 297, 327, 347, 352, 430, 450, 456, 537], "connect": [12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 73, 81, 85, 108, 109, 174, 181, 184, 186, 196, 203, 206, 209, 220, 227, 232, 236, 248, 255, 278, 279, 334, 349, 356, 360, 383, 384, 452, 460, 464, 487, 488, 555, 559, 560, 561], "consult": [12, 143, 209, 236, 312, 415, 522], "select": [12, 18, 19, 20, 22, 25, 26, 33, 34, 36, 41, 42, 43, 47, 48, 50, 57, 70, 71, 74, 78, 79, 104, 165, 166, 176, 184, 198, 199, 206, 219, 222, 223, 231, 232, 247, 250, 251, 274, 298, 346, 347, 350, 353, 354, 379, 449, 450, 453, 457, 458, 483, 544, 545], "yet": [12, 14, 16, 18, 19, 20, 22, 25, 26, 31, 33, 35, 37, 41, 42, 47, 71, 77, 78, 79, 81, 177, 183, 184, 186, 198, 199, 205, 209, 222, 223, 229, 236, 250, 251, 257, 298, 334, 347, 353, 354, 356, 450, 456, 457, 458, 460], "easier": [12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 49, 71, 78, 176, 198, 222, 250, 347, 353, 450, 457], "learn": 12, "whole": [12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 53, 71, 77, 80, 81, 140, 186, 198, 209, 236, 250, 309, 333, 334, 347, 355, 356, 412, 450, 456, 459, 460, 519], "tri": [12, 61, 62, 79, 137, 168, 186, 189, 209, 212, 236, 239, 240, 306, 338, 339, 409, 440, 441, 516], "mandatori": [12, 78, 171, 181, 184, 193, 203, 206, 227, 232, 298, 353, 457], "gitconfig": 12, "renamelimit": 12, "999999": 12, "mail": [12, 17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42, 46, 53, 57, 58, 59], "yourmail": 12, "raw": [12, 14, 16, 25, 28, 31, 47, 48, 53, 61, 71, 78, 86, 108, 109, 110, 114, 127, 145, 164, 182, 184, 204, 206, 209, 228, 232, 236, 239, 250, 256, 278, 279, 280, 284, 295, 298, 314, 338, 347, 353, 361, 383, 384, 385, 389, 400, 417, 436, 440, 450, 457, 465, 487, 488, 489, 493, 506, 524, 543, 557], "githubusercont": 12, "buildbot": [12, 13, 58, 59], "path_to_zfs_fold": 12, "openzfs_commit_hash": 12, "autoport": 12, "ozxxxx": 12, "xxxx": 12, "try": [12, 18, 19, 20, 21, 22, 27, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 67, 71, 77, 86, 139, 172, 175, 176, 182, 194, 197, 198, 204, 216, 221, 222, 228, 244, 249, 250, 256, 343, 347, 361, 411, 446, 450, 456, 465, 518, 549, 552], "cstyle": [12, 63, 169, 190, 213, 241, 340, 442], "success": [12, 25, 47, 71, 104, 108, 109, 117, 127, 129, 143, 163, 184, 186, 198, 206, 209, 222, 231, 232, 236, 250, 274, 278, 279, 287, 295, 312, 332, 347, 379, 383, 384, 392, 400, 415, 435, 450, 483, 487, 488, 496, 506, 508, 522, 542, 553], "succe": [12, 46, 47, 70, 77, 104, 131, 185, 206, 208, 219, 231, 232, 235, 247, 274, 297, 300, 346, 352, 379, 403, 449, 456, 483, 510], "conflict": [12, 79, 104, 107, 132, 136, 177, 184, 186, 199, 206, 209, 223, 231, 232, 236, 251, 274, 277, 301, 305, 354, 379, 382, 404, 408, 458, 483, 486, 511, 515], "readi": [12, 48, 110, 114, 280, 284, 385, 389, 489, 493], "congratul": 12, "otherwis": [12, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 33, 34, 35, 36, 37, 41, 42, 44, 46, 47, 70, 71, 73, 74, 77, 78, 79, 80, 86, 90, 95, 98, 101, 104, 110, 114, 115, 120, 129, 130, 140, 143, 145, 158, 159, 163, 165, 166, 174, 176, 182, 184, 186, 196, 198, 204, 206, 207, 209, 219, 220, 222, 223, 228, 231, 232, 234, 236, 247, 248, 
250, 251, 256, 260, 265, 268, 271, 274, 280, 284, 285, 290, 297, 298, 299, 309, 312, 314, 327, 328, 332, 333, 346, 347, 349, 350, 352, 353, 354, 355, 361, 365, 370, 373, 376, 379, 385, 389, 390, 395, 402, 412, 415, 417, 430, 431, 435, 449, 450, 452, 453, 456, 457, 458, 459, 465, 469, 474, 477, 480, 483, 489, 493, 494, 499, 508, 509, 519, 522, 524, 537, 538, 542, 544, 545, 548, 549, 550, 551, 553, 555, 556], "meld": 12, "diff": [12, 83, 88, 117, 118, 127, 184, 206, 232, 253, 258, 287, 288, 295, 358, 363, 392, 393, 400, 462, 467, 496, 497, 506], "mergetool": 12, "g": [12, 17, 18, 19, 20, 22, 23, 28, 32, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 53, 62, 65, 67, 71, 78, 80, 85, 86, 88, 90, 95, 98, 101, 103, 104, 108, 109, 115, 118, 120, 121, 131, 132, 133, 136, 145, 147, 157, 158, 163, 165, 166, 171, 172, 176, 177, 181, 182, 184, 185, 186, 193, 194, 198, 199, 203, 204, 206, 208, 209, 216, 222, 223, 227, 228, 230, 231, 232, 235, 236, 244, 250, 251, 255, 256, 258, 260, 265, 268, 271, 272, 273, 274, 278, 279, 285, 288, 290, 291, 298, 300, 301, 314, 316, 326, 327, 332, 333, 334, 343, 347, 353, 355, 360, 361, 363, 365, 370, 373, 376, 378, 379, 383, 384, 390, 393, 395, 396, 403, 404, 408, 417, 419, 429, 430, 435, 441, 444, 446, 450, 457, 459, 464, 465, 467, 469, 474, 477, 480, 482, 483, 487, 488, 494, 497, 499, 500, 510, 511, 515, 524, 526, 536, 537, 542, 544, 545], "someth": [12, 18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 62, 80, 110, 114, 168, 186, 189, 209, 212, 236, 240, 280, 284, 333, 339, 355, 385, 389, 441, 459, 489, 493, 553], "push": [12, 13, 17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42], "easili": [12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 43, 53, 95, 98, 115, 184, 206, 232, 265, 268, 285, 370, 373, 390, 474, 477, 494], "nr": [12, 90, 101, 120, 232, 260, 271, 290, 365, 376, 395, 469, 480, 499], "notic": [12, 23, 47, 67, 70, 78, 81, 139, 172, 175, 184, 186, 194, 197, 206, 209, 216, 219, 221, 232, 236, 244, 247, 249, 298, 334, 343, 346, 353, 356, 411, 446, 449, 457, 460, 518], "laid": [12, 62, 189, 212, 240, 339, 441], "organization": 12, "much": [12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 37, 41, 42, 46, 47, 48, 53, 61, 62, 71, 76, 77, 79, 80, 81, 86, 110, 114, 127, 134, 155, 168, 176, 184, 186, 189, 198, 206, 209, 212, 222, 228, 232, 236, 239, 240, 250, 251, 256, 280, 284, 295, 297, 303, 333, 334, 338, 339, 347, 352, 354, 355, 356, 361, 385, 389, 400, 406, 427, 440, 441, 450, 455, 456, 458, 459, 460, 465, 489, 493, 506, 513, 534], "flatter": 12, "That": [12, 18, 19, 20, 27, 34, 35, 36, 37, 41, 42, 48, 104, 163, 165, 166, 231, 274, 379, 435, 483, 542, 544, 545], "zfs2zol": 12, "translat": [12, 46, 49, 71, 96, 104, 106, 124, 131, 176, 184, 185, 198, 206, 208, 222, 231, 232, 235, 250, 266, 274, 276, 293, 300, 347, 371, 379, 381, 398, 403, 450, 475, 483, 485, 503, 510], "stdout": [12, 28], "hash": [12, 47, 48, 71, 73, 78, 79, 174, 176, 196, 198, 199, 220, 222, 223, 232, 248, 250, 251, 298, 347, 349, 353, 354, 450, 452, 457, 458], "cleanli": [12, 557, 558], "mind": [12, 32, 46, 80, 186, 236, 333, 355, 459], "why": [12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 48, 67, 71, 79, 108, 109, 129, 172, 177, 194, 199, 216, 223, 244, 250, 251, 343, 347, 354, 446, 450, 458, 487, 488, 508], "hunk": [12, 34, 35], "drop": [12, 18, 19, 20, 22, 33, 34, 36, 41, 42, 47, 71, 74, 176, 198, 222, 250, 347, 350, 450, 453, 557], "preserv": [12, 18, 19, 20, 34, 36, 41, 42, 47, 71, 87, 90, 100, 101, 110, 114, 120, 183, 184, 205, 206, 222, 229, 232, 250, 257, 260, 270, 271, 280, 284, 290, 347, 362, 
365, 375, 376, 385, 389, 395, 450, 466, 469, 479, 480, 489, 493, 499, 557], "intent": [12, 47, 71, 80, 86, 136, 139, 163, 175, 176, 182, 186, 197, 198, 204, 209, 221, 222, 228, 236, 249, 250, 256, 332, 333, 347, 355, 361, 411, 435, 450, 459, 465, 515, 518, 542, 562], "am": [12, 56], "authorship": 12, "squash": 12, "care": [12, 18, 19, 20, 22, 28, 33, 34, 35, 36, 37, 41, 42, 47, 71, 74, 78, 80, 81, 86, 93, 184, 186, 206, 209, 222, 232, 236, 250, 263, 298, 333, 334, 347, 350, 353, 355, 356, 368, 450, 453, 457, 459, 460, 465, 472], "long": [12, 18, 19, 20, 22, 25, 33, 34, 36, 41, 42, 43, 46, 47, 48, 53, 64, 70, 71, 76, 77, 78, 79, 88, 90, 101, 104, 108, 109, 118, 120, 130, 140, 142, 158, 176, 184, 186, 191, 198, 206, 207, 209, 214, 222, 231, 232, 234, 236, 242, 250, 258, 260, 271, 274, 278, 279, 288, 290, 297, 298, 299, 309, 311, 327, 341, 347, 352, 353, 363, 365, 376, 379, 383, 384, 393, 395, 402, 412, 414, 430, 443, 449, 450, 455, 456, 457, 467, 469, 480, 483, 487, 488, 497, 499, 509, 519, 521, 537, 555], "truncat": [12, 47, 70, 79, 199, 219, 223, 247, 251, 346, 354, 449, 458], "pretti": 12, "onelin": 12, "leav": [12, 18, 19, 20, 22, 25, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 71, 78, 104, 108, 109, 176, 198, 206, 222, 231, 232, 250, 274, 278, 279, 298, 347, 353, 379, 383, 384, 450, 457, 483, 487, 488, 558], "blank": [12, 76, 93, 129, 145, 184, 206, 209, 232, 236, 263, 314, 368, 417, 455, 472, 508, 524], "Then": [12, 18, 19, 20, 22, 33, 34, 36, 38, 41, 42, 43, 48, 102, 230, 272, 377, 481, 557, 561], "wrap": [12, 47, 90, 101, 104, 120, 198, 222, 231, 250, 260, 271, 274, 290, 365, 376, 379, 395, 469, 480, 483, 499], "exce": [12, 45, 46, 47, 48, 50, 65, 70, 71, 78, 80, 139, 145, 176, 184, 186, 198, 206, 209, 219, 221, 222, 232, 236, 247, 249, 250, 298, 314, 333, 346, 347, 353, 355, 411, 417, 444, 449, 450, 457, 459, 518, 524], "final": [12, 13, 43, 48, 80, 110, 114, 236, 280, 284, 333, 355, 385, 389, 459, 489, 493], "contact": [12, 46], "form": [12, 18, 19, 20, 22, 27, 33, 41, 42, 44, 46, 47, 66, 73, 78, 79, 80, 86, 87, 88, 95, 98, 104, 110, 114, 115, 118, 127, 145, 153, 163, 170, 177, 182, 183, 184, 186, 192, 196, 199, 204, 205, 206, 209, 215, 220, 223, 228, 229, 231, 232, 236, 243, 248, 251, 256, 257, 258, 265, 268, 270, 274, 280, 284, 285, 288, 295, 298, 314, 322, 332, 333, 342, 349, 353, 354, 355, 361, 362, 363, 370, 373, 379, 385, 389, 390, 393, 400, 417, 425, 435, 445, 452, 457, 458, 459, 465, 466, 467, 474, 477, 483, 489, 493, 494, 497, 506, 524, 532, 542, 555], "author": [12, 25, 37, 78, 170, 171, 172, 178, 180, 185, 191, 192, 193, 194, 200, 202, 208, 214, 215, 216, 224, 226, 235, 239, 242, 243, 244, 252, 254, 300, 353, 457], "review": [12, 46, 55], "approv": 12, "www": [12, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 48, 62, 104, 168, 189, 212, 231, 240, 274, 339, 379, 441, 483], "6873": 12, "zfs_destroy_snaps_nvl": 12, "leak": [12, 47, 48, 71, 81, 86, 90, 101, 120, 176, 182, 198, 204, 222, 228, 232, 250, 256, 260, 271, 290, 347, 356, 361, 365, 376, 395, 450, 460, 465, 469, 480, 499], "errlist": 12, "chri": 12, "williamson": 12, "matthew": 12, "ahren": 12, "mahren": 12, "paul": 12, "dagneli": 12, "pcd": 12, "deni": [12, 78, 79, 88, 118, 184, 206, 232, 258, 288, 298, 353, 363, 393, 457, 458, 467, 497], "rtveliashvili": 12, "lzc_destroy_snap": 12, "nvlist": [12, 47, 71, 104, 131, 185, 208, 231, 235, 250, 274, 300, 347, 379, 403, 450, 483, 510], "nvlist_fre": 12, "warn": [12, 18, 19, 20, 22, 25, 26, 29, 33, 34, 35, 36, 37, 41, 42, 47, 53, 70, 71, 79, 86, 110, 114, 143, 158, 163, 182, 
184, 186, 204, 209, 219, 228, 236, 247, 250, 256, 312, 327, 332, 346, 347, 354, 361, 385, 389, 415, 430, 435, 449, 450, 458, 465, 489, 493, 522, 537, 542], "checker": [12, 62, 82, 168, 178, 189, 200, 212, 224, 240, 252, 339, 357, 441, 461], "print": [12, 16, 18, 19, 20, 22, 28, 33, 34, 35, 36, 37, 41, 42, 47, 61, 62, 64, 65, 67, 70, 71, 81, 84, 85, 86, 92, 93, 96, 97, 100, 102, 104, 106, 108, 109, 110, 111, 114, 124, 128, 131, 139, 145, 147, 151, 157, 158, 162, 164, 165, 166, 171, 172, 180, 181, 182, 184, 185, 186, 191, 193, 194, 202, 203, 204, 206, 208, 209, 214, 216, 219, 222, 226, 227, 228, 231, 232, 235, 236, 239, 242, 244, 247, 250, 254, 255, 256, 262, 263, 266, 267, 270, 274, 275, 276, 278, 279, 280, 281, 284, 293, 296, 300, 308, 314, 316, 320, 326, 327, 331, 334, 335, 338, 341, 343, 346, 347, 356, 359, 360, 361, 367, 368, 371, 372, 375, 377, 379, 381, 383, 384, 385, 386, 389, 398, 401, 403, 411, 417, 419, 423, 429, 430, 434, 436, 437, 438, 440, 441, 443, 444, 446, 449, 450, 460, 463, 464, 465, 471, 472, 475, 476, 479, 481, 483, 485, 487, 488, 489, 490, 493, 503, 507, 510, 518, 524, 526, 530, 536, 537, 541, 543, 544, 545], "queu": [12, 47, 50, 71, 139, 145, 175, 176, 197, 198, 209, 221, 222, 236, 249, 250, 314, 347, 411, 417, 450, 518, 524], "autom": [12, 14, 16, 25, 31, 41, 47, 139, 163, 209, 236, 308, 332, 411, 435, 518, 542, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "rang": [12, 32, 47, 48, 49, 53, 57, 70, 71, 79, 81, 86, 93, 131, 139, 155, 175, 176, 184, 185, 193, 197, 198, 206, 208, 219, 221, 222, 232, 235, 236, 247, 249, 250, 251, 256, 263, 300, 334, 346, 347, 354, 356, 361, 368, 403, 411, 427, 449, 450, 458, 460, 465, 472, 510, 518, 534], "batteri": 12, "post": [12, 18, 19, 20, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 65, 71, 87, 108, 109, 176, 183, 198, 205, 222, 229, 250, 257, 347, 362, 444, 450, 466, 487, 488], "investig": [12, 22, 33], "reproduc": [12, 53], "trigger": [12, 47, 53, 62, 71, 79, 168, 189, 198, 212, 222, 223, 240, 250, 251, 339, 347, 354, 441, 450, 458], "round": [12, 47, 48, 67, 71, 92, 172, 176, 184, 194, 198, 206, 216, 222, 232, 244, 250, 262, 343, 347, 367, 446, 450, 471], "lastli": [12, 88, 118, 184, 206, 232, 258, 288, 363, 393, 467, 497], "happi": 12, "thei": [12, 18, 19, 20, 21, 22, 32, 33, 34, 35, 36, 37, 41, 42, 44, 46, 47, 48, 53, 54, 62, 65, 68, 70, 71, 76, 77, 78, 79, 80, 81, 87, 93, 103, 104, 107, 110, 112, 114, 116, 121, 128, 132, 136, 140, 143, 145, 163, 165, 166, 168, 176, 177, 183, 184, 186, 189, 198, 199, 205, 206, 209, 212, 219, 222, 223, 229, 231, 232, 236, 240, 247, 250, 251, 257, 263, 273, 274, 277, 280, 282, 284, 291, 297, 298, 301, 305, 312, 314, 332, 333, 339, 344, 346, 347, 352, 353, 354, 355, 362, 368, 378, 379, 382, 385, 387, 389, 391, 396, 401, 404, 408, 415, 417, 435, 441, 444, 447, 449, 450, 455, 456, 457, 458, 459, 460, 466, 472, 482, 483, 486, 489, 491, 493, 495, 500, 507, 511, 515, 519, 522, 524, 542, 544, 545, 557], "mark": [12, 22, 33, 35, 37, 46, 47, 48, 53, 65, 71, 78, 79, 80, 88, 89, 93, 104, 118, 127, 140, 177, 184, 186, 198, 199, 206, 209, 222, 223, 231, 232, 236, 250, 251, 259, 263, 274, 295, 298, 309, 333, 347, 353, 354, 355, 364, 368, 379, 400, 412, 444, 450, 457, 458, 459, 467, 468, 472, 483, 497, 506, 519, 548, 555], "thank": 12, "builder": 13, "except": [13, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 50, 58, 59, 65, 71, 76, 78, 81, 84, 86, 88, 90, 96, 101, 104, 106, 108, 109, 112, 118, 120, 124, 143, 176, 180, 184, 186, 198, 202, 206, 209, 222, 226, 231, 232, 236, 
250, 254, 256, 258, 260, 266, 271, 274, 276, 278, 279, 282, 288, 290, 293, 298, 312, 334, 347, 353, 356, 359, 361, 363, 365, 371, 376, 379, 381, 383, 384, 387, 393, 395, 398, 415, 444, 450, 455, 457, 460, 463, 465, 467, 469, 475, 480, 483, 485, 487, 488, 491, 497, 499, 503, 522], "beginn": [13, 41, 42, 58, 59], "setup": [13, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 43, 53, 65, 78, 81, 110, 114, 206, 209, 232, 236, 275, 280, 284, 298, 334, 353, 356, 385, 389, 444, 457, 460, 489, 493], "word": [13, 34, 36, 79, 80, 86, 110, 114, 139, 175, 197, 198, 204, 221, 222, 223, 228, 249, 250, 251, 256, 280, 284, 354, 355, 361, 385, 389, 411, 458, 459, 465, 489, 493, 518], "zfsbootmenu": [14, 16, 25, 31], "grub": [14, 16, 25, 28, 31, 37, 41, 42, 43, 48, 53, 206, 223, 232], "bootload": [14, 18, 19, 20, 22, 33, 34, 35, 36, 37, 39, 41, 42, 46], "driver": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 36, 41, 42, 46, 47, 48, 70, 71, 73, 85, 139, 140, 184, 186, 198, 209, 221, 222, 236, 248, 249, 250, 255, 309, 346, 347, 349, 360, 411, 412, 449, 450, 452, 464, 518, 519, 555], "subset": [14, 16, 25, 31, 78, 110, 114, 206, 232, 280, 284, 298, 353, 385, 389, 457, 489, 493, 552], "treat": [14, 16, 25, 31, 46, 47, 48, 53, 65, 71, 78, 79, 146, 184, 186, 198, 206, 209, 222, 232, 236, 250, 298, 315, 347, 353, 354, 418, 444, 450, 457, 458, 525], "risk": [14, 16, 25, 26, 31, 47, 48, 71, 78, 80, 184, 198, 206, 222, 232, 236, 250, 298, 333, 347, 353, 355, 450, 457, 459, 557], "free": [14, 16, 18, 25, 31, 44, 47, 53, 61, 70, 71, 76, 78, 79, 81, 86, 90, 101, 120, 127, 131, 134, 137, 139, 145, 147, 158, 160, 162, 163, 175, 176, 177, 182, 184, 185, 186, 197, 198, 199, 204, 206, 208, 209, 219, 221, 222, 223, 228, 232, 235, 236, 239, 247, 249, 250, 251, 256, 260, 271, 290, 295, 298, 300, 303, 306, 316, 329, 331, 332, 334, 338, 346, 347, 353, 354, 356, 361, 365, 376, 395, 400, 403, 406, 409, 411, 419, 432, 434, 435, 440, 449, 450, 455, 457, 458, 460, 465, 469, 480, 499, 506, 510, 513, 516, 518, 524, 526, 537, 539, 541, 542, 557], "zbm": [14, 16, 25, 31], "layout": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 53, 80, 143, 186, 209, 236, 312, 415, 459, 522], "site": [14, 16, 25, 31, 46], "reboot": [14, 16, 18, 19, 20, 22, 25, 26, 27, 28, 29, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 71, 79, 80, 127, 148, 149, 177, 184, 186, 199, 206, 209, 222, 223, 232, 236, 250, 251, 295, 317, 318, 333, 347, 354, 355, 400, 420, 421, 450, 458, 459, 506, 527, 528, 559, 560], "well": [14, 16, 25, 28, 31, 46, 47, 48, 53, 71, 74, 76, 77, 78, 79, 81, 84, 86, 92, 108, 109, 110, 113, 114, 127, 136, 143, 145, 164, 177, 184, 186, 199, 206, 209, 223, 232, 236, 250, 251, 262, 278, 279, 280, 283, 284, 295, 297, 298, 305, 312, 314, 334, 347, 350, 352, 353, 354, 356, 359, 367, 383, 384, 385, 388, 389, 400, 408, 415, 417, 436, 450, 453, 455, 456, 457, 458, 460, 463, 465, 471, 487, 488, 489, 492, 493, 506, 515, 522, 524, 543, 555, 557], "avoid": [14, 16, 18, 19, 20, 25, 27, 28, 31, 32, 33, 34, 36, 41, 42, 46, 47, 48, 53, 70, 71, 78, 79, 86, 110, 114, 176, 177, 184, 186, 198, 199, 204, 206, 219, 222, 223, 228, 232, 247, 250, 251, 256, 280, 284, 298, 346, 347, 353, 354, 361, 385, 389, 449, 450, 457, 458, 465, 489, 493, 547], "paramount": [14, 16, 25, 28, 31], "secur": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 46, 47, 79, 87, 90, 101, 108, 109, 110, 114, 120, 160, 183, 199, 205, 223, 229, 232, 236, 251, 257, 260, 271, 278, 279, 280, 284, 290, 329, 354, 362, 365, 376, 383, 384, 385, 389, 395, 432, 458, 466, 469, 480, 487, 488, 489, 
493, 499, 539], "live": [14, 16, 25, 28, 29, 31, 35, 37, 104, 136, 155, 186, 209, 231, 236, 274, 305, 324, 379, 408, 427, 483, 515, 534], "gpg": [14, 16, 25, 31, 32, 41, 42, 56], "auto": [14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 53, 61, 74, 78, 81, 206, 209, 230, 232, 236, 239, 272, 298, 334, 338, 350, 353, 356, 440, 453, 457, 460], "retriev": [14, 16, 25, 31, 79, 104, 141, 156, 163, 186, 209, 231, 236, 274, 310, 325, 332, 379, 413, 428, 435, 458, 483, 520, 535, 542, 554], "keyserv": [14, 16, 25, 31, 56], "hkp": [14, 16, 25, 31, 56], "asc": [14, 16, 25, 31], "dd": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 78, 86, 182, 204, 228, 232, 256, 298, 353, 361, 457, 465], "1m": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 48, 49, 53, 65, 71, 86, 171, 176, 177, 182, 184, 187, 193, 198, 199, 204, 210, 222, 223, 228, 232, 250, 251, 256, 298, 347, 353, 354, 361, 444, 450, 465], "login": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42], "password": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 65, 74, 102, 110, 114, 127, 184, 206, 230, 232, 272, 280, 284, 295, 350, 377, 385, 389, 400, 444, 453, 481, 489, 493, 506], "network": [14, 16, 18, 19, 20, 22, 25, 28, 29, 31, 33, 34, 35, 36, 37, 41, 42, 74, 102, 108, 109, 136, 186, 206, 209, 232, 236, 278, 279, 305, 350, 377, 383, 384, 408, 453, 481, 487, 488, 515, 555], "servic": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 50, 71, 73, 74, 78, 102, 110, 114, 122, 126, 131, 139, 175, 176, 184, 196, 197, 198, 206, 208, 220, 221, 222, 230, 232, 235, 248, 249, 250, 272, 280, 284, 298, 300, 347, 349, 350, 353, 377, 385, 389, 403, 411, 450, 452, 453, 457, 481, 489, 493, 501, 505, 510, 518, 559, 560], "wlan0": [14, 16, 25, 31], "wifi": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 36, 41, 42], "ssid": [14, 16, 25, 31], "ip": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 36, 41, 42, 95, 98, 115, 127, 184, 206, 232, 295, 400, 474, 477, 494, 506], "dhcp": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 36], "finish": [14, 16, 22, 25, 31, 35, 37, 46, 48, 49, 71, 77, 79, 87, 104, 110, 114, 125, 133, 134, 139, 144, 175, 176, 177, 197, 198, 199, 206, 221, 222, 223, 231, 232, 249, 250, 251, 274, 280, 284, 294, 297, 302, 303, 313, 347, 352, 354, 362, 379, 385, 389, 399, 405, 406, 411, 416, 450, 456, 458, 466, 483, 489, 493, 504, 512, 513, 518, 523], "netconfig": [14, 16, 25, 31], "wireless": [14, 16, 25, 28, 31], "further": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 45, 46, 47, 50, 53, 70, 71, 80, 176, 186, 198, 209, 219, 222, 236, 247, 250, 333, 346, 347, 355, 449, 450, 459], "wpa_supplic": [14, 16, 25, 31], "apk": [14, 15, 16, 25, 31], "ssh": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 108, 109, 110, 114, 127, 184, 206, 232, 280, 284, 295, 385, 389, 400, 487, 488, 489, 493, 506], "sshd": [14, 16, 25, 28, 31, 41, 42], "openssh": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 36, 41, 42], "prohibit": [14, 16, 25, 31, 134, 236, 303, 406, 513], "public": [14, 16, 25, 31, 32, 44, 46, 53, 56], "verbatim": [14, 46, 74, 86, 182, 204, 228, 256, 350, 361, 453, 465], "authorized_kei": [14, 16, 18, 19, 25, 28, 31], "strong": [14, 46], "192": [14, 16, 18, 19, 25, 28, 31, 47, 78, 232, 298, 353, 457], "168": [14, 16, 18, 19, 25, 28, 31], "91": [14, 16, 25, 28, 31, 155, 427, 534], "ntp": [14, 16, 18, 19, 25, 31], "client": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 36, 47, 48, 78, 184, 206, 232, 298, 353, 457], "synchron": [14, 16, 18, 19, 25, 31, 35, 37, 
46, 47, 50, 71, 77, 78, 79, 80, 102, 145, 176, 177, 183, 184, 186, 198, 199, 205, 206, 209, 222, 223, 229, 230, 232, 236, 250, 251, 257, 272, 297, 298, 314, 333, 347, 352, 353, 354, 355, 377, 417, 450, 456, 457, 458, 459, 481, 524, 561], "busybox": [14, 16, 25, 31, 41, 42], "repo": [14, 16, 17, 25, 28, 29, 31, 32, 39, 41, 42, 43], "press": [14, 16, 18, 19, 20, 25, 31, 33, 34, 35, 36, 37, 41, 42, 43, 186, 209, 236, 314, 316], "bar": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 48, 104, 231, 274, 379, 483], "apkrepo": [14, 16, 25, 31], "throughout": [14, 16, 25, 31, 87, 108, 109, 183, 205, 206, 229, 232, 257, 278, 279, 362, 383, 384, 466, 487, 488], "predict": [14, 16, 25, 31, 47, 66, 71, 86, 170, 182, 192, 198, 204, 215, 222, 228, 243, 250, 256, 342, 347, 361, 445, 450, 465], "eudev": [14, 16, 25, 31], "devd": [14, 16, 25, 31], "mdev": 14, "del": 14, "target": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 50, 61, 71, 74, 77, 91, 92, 93, 102, 104, 105, 108, 109, 110, 112, 114, 131, 144, 160, 176, 184, 198, 206, 208, 222, 230, 231, 232, 235, 236, 239, 250, 261, 262, 263, 272, 274, 275, 278, 279, 280, 282, 284, 297, 300, 313, 329, 338, 347, 350, 352, 366, 367, 368, 377, 379, 380, 383, 384, 385, 387, 389, 403, 416, 432, 440, 450, 453, 456, 470, 471, 472, 481, 483, 484, 487, 488, 489, 491, 493, 510, 523, 539], "virtio": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42], "bu": [14, 16, 25, 28, 31, 53], "serial": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 47, 48, 53, 65, 219, 444], "qemu": [14, 16, 25, 28, 31], "disk2": [14, 16, 18, 19, 20, 25, 28, 31, 33, 34, 36], "img": [14, 16, 22, 25, 28, 31, 35, 37], "aabb": [14, 16, 25, 28, 31], "libvirt": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42], "domain": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 76, 78, 81, 184, 206, 232, 298, 353, 455, 457, 460], "xml": [14, 16, 25, 28, 31], "declar": [14, 16, 25, 28, 31], "arrai": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 46, 47, 48, 71, 104, 139, 175, 176, 197, 198, 221, 222, 231, 249, 250, 274, 347, 379, 411, 450, 483, 518], "ata": [14, 16, 25, 28, 31, 43, 47], "foo": [14, 16, 25, 28, 31, 104, 230, 231, 272, 274, 379, 483, 554], "nvme": [14, 16, 18, 19, 25, 28, 31, 53], "disk1": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42], "mount": [14, 16, 18, 19, 20, 21, 22, 25, 27, 28, 31, 33, 35, 36, 37, 41, 42, 43, 47, 53, 71, 74, 77, 78, 79, 81, 83, 86, 88, 90, 91, 92, 93, 95, 98, 99, 101, 104, 108, 109, 110, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 126, 127, 128, 136, 143, 157, 179, 184, 186, 201, 206, 209, 225, 231, 232, 236, 250, 251, 253, 258, 260, 261, 262, 263, 269, 271, 274, 278, 279, 280, 282, 284, 288, 289, 290, 291, 295, 296, 297, 298, 305, 312, 326, 334, 347, 350, 352, 353, 354, 356, 358, 363, 365, 366, 367, 368, 374, 376, 379, 383, 384, 385, 387, 389, 391, 393, 394, 395, 396, 400, 401, 408, 415, 429, 450, 453, 456, 457, 458, 460, 462, 465, 467, 469, 470, 471, 472, 474, 477, 478, 480, 483, 487, 488, 489, 491, 493, 494, 495, 496, 497, 498, 499, 500, 501, 505, 506, 507, 515, 522, 536, 549, 557], "mnt": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 127, 184, 206, 232, 295, 400, 506], "mktemp": [14, 16, 25, 28, 31, 35, 37, 43], "partit": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 53, 76, 78, 80, 81, 136, 140, 145, 163, 176, 186, 198, 206, 209, 222, 232, 236, 250, 298, 309, 314, 332, 333, 334, 353, 355, 356, 412, 417, 435, 455, 457, 
459, 460, 515, 519, 524, 542], "swap": [14, 16, 25, 28, 31, 34, 35, 36, 37, 43, 57, 78, 86, 92, 175, 182, 184, 197, 204, 206, 228, 232, 256, 262, 298, 353, 361, 367, 457, 465, 471], "gb": [14, 16, 25, 28, 31, 46, 76, 78, 79, 184, 206, 232, 251, 298, 353, 354, 455, 457, 458], "too": [14, 16, 21, 25, 28, 31, 34, 35, 36, 37, 46, 47, 70, 71, 86, 104, 176, 182, 198, 204, 219, 222, 228, 231, 247, 250, 256, 274, 346, 347, 361, 379, 449, 450, 465, 483], "swapsiz": [14, 16, 25, 28, 31], "left": [14, 16, 21, 25, 28, 31, 47, 48, 78, 93, 100, 143, 157, 165, 166, 184, 206, 232, 236, 263, 270, 298, 312, 326, 353, 368, 375, 415, 429, 457, 472, 479, 522, 536, 544, 545], "1gb": [14, 16, 25, 28, 31, 47, 176, 198, 222, 250, 333, 347, 355], "reserv": [14, 16, 25, 28, 31, 47, 71, 76, 78, 80, 81, 88, 92, 95, 98, 115, 118, 127, 134, 136, 184, 186, 206, 209, 222, 232, 236, 250, 258, 262, 288, 295, 298, 303, 305, 333, 334, 347, 353, 355, 356, 363, 367, 393, 400, 406, 408, 450, 455, 457, 459, 460, 467, 471, 474, 477, 494, 497, 506, 513, 515], "bio": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 51, 76, 455], "efi": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 43, 76, 140, 186, 209, 236, 309, 412, 455, 519], "e2fsprog": [14, 16, 25, 31], "cryptsetup": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42], "clear": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 71, 76, 78, 81, 83, 90, 95, 98, 101, 104, 105, 110, 114, 115, 120, 127, 139, 144, 163, 175, 176, 184, 186, 197, 198, 206, 209, 221, 222, 232, 236, 249, 250, 253, 260, 265, 268, 271, 274, 275, 280, 284, 285, 290, 295, 298, 308, 332, 334, 347, 353, 356, 358, 365, 370, 373, 376, 379, 380, 385, 389, 390, 395, 400, 411, 416, 435, 450, 455, 457, 460, 462, 469, 474, 477, 480, 483, 484, 489, 493, 494, 499, 506, 518, 523, 542, 553, 555, 557, 559, 560, 561], "structur": [14, 16, 25, 28, 31, 46, 47, 71, 77, 79, 80, 86, 90, 101, 120, 177, 182, 198, 199, 204, 222, 223, 228, 232, 250, 251, 256, 260, 271, 290, 333, 347, 354, 355, 361, 365, 376, 395, 450, 456, 458, 459, 465, 469, 480, 499], "flash": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 48, 51], "blkdiscard": [14, 16, 18, 19, 25, 28, 31, 36, 81, 236, 334, 356, 460], "partition_disk": [14, 16, 25, 28, 31], "true": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 53, 65, 104, 163, 178, 200, 209, 224, 230, 231, 236, 252, 272, 274, 332, 379, 435, 444, 483, 542], "align": [14, 16, 25, 28, 31, 46, 53, 67, 71, 81, 139, 172, 175, 186, 194, 197, 209, 216, 221, 222, 236, 244, 249, 250, 334, 343, 347, 356, 411, 446, 450, 460, 518], "mklabel": [14, 16, 25, 28, 31], "gpt": [14, 16, 25, 28, 31, 34, 36, 48, 53, 74, 81, 334, 350, 356, 453, 460], "mkpart": [14, 16, 25, 28, 31], "2mib": [14, 16, 25, 28, 31], "1gib": [14, 16, 25, 28, 31, 47, 71, 450], "bpool": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 37, 41, 42, 43], "5gib": [14, 16, 25, 28, 31], "rpool": [14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 53, 86, 89, 104, 127, 145, 147, 155, 158, 160, 163, 182, 184, 186, 204, 206, 209, 228, 231, 232, 236, 256, 274, 295, 332, 361, 379, 400, 427, 435, 465, 468, 483, 506, 524, 526, 534, 537, 539, 542], "gib": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 47, 71, 77, 80, 147, 163, 206, 232, 297, 352, 450, 456, 459, 526, 542], "1mib": [14, 16, 25, 28, 31], "esp": [14, 16, 18, 19, 20, 25, 28, 31, 34, 36, 41, 42], "bios_grub": [14, 16, 25, 28, 31], "legacy_boot": [14, 16, 25, 28, 31, 
41, 42], "partprob": [14, 16, 25, 28, 31, 35, 37], "memori": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 48, 51, 57, 61, 70, 71, 77, 79, 80, 86, 87, 104, 110, 114, 132, 145, 151, 163, 176, 183, 184, 186, 198, 205, 206, 209, 219, 222, 223, 229, 231, 232, 236, 239, 247, 250, 251, 257, 274, 280, 284, 297, 320, 332, 333, 338, 346, 347, 352, 354, 355, 362, 379, 385, 389, 423, 435, 440, 449, 450, 456, 458, 459, 465, 466, 483, 489, 493, 511, 524, 530, 542], "plain": [14, 16, 25, 28, 31, 86, 104, 133, 231, 232, 256, 274, 361, 379, 465, 483, 554], "part4": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42], "mkswap": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 184, 206, 232, 262], "mapper": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 57, 81, 129, 145, 209, 236, 314, 334, 356, 417, 460, 508, 524], "swapon": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 184, 206, 232, 262, 367], "modprob": [14, 15, 16, 20, 22, 25, 26, 29, 31, 41, 42, 47, 53], "sc2046": [14, 16, 25, 28, 31], "autotrim": [14, 16, 18, 19, 25, 28, 31, 34, 36, 81, 160, 236, 329, 334, 356, 432, 460, 539], "acltyp": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 78, 88, 95, 98, 115, 118, 127, 184, 206, 232, 258, 288, 295, 298, 353, 363, 393, 400, 457, 467, 474, 477, 494, 497, 506], "posixacl": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 78, 184, 206, 232, 298, 353, 457], "canmount": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 74, 78, 88, 95, 98, 102, 115, 118, 127, 184, 206, 230, 232, 258, 272, 288, 295, 298, 350, 353, 363, 377, 393, 400, 453, 457, 467, 474, 477, 481, 494, 497, 506], "formd": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 78, 184, 206, 232, 298, 353, 457], "relatim": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 48, 78, 88, 102, 118, 184, 206, 230, 232, 272, 298, 353, 363, 377, 393, 457, 467, 481, 497], "xattr": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 47, 53, 71, 78, 79, 88, 95, 98, 115, 118, 127, 180, 184, 202, 206, 226, 232, 254, 258, 288, 295, 298, 353, 363, 393, 400, 450, 457, 458, 467, 474, 477, 494, 497, 506], "sa": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 44, 47, 53, 71, 73, 78, 79, 85, 174, 181, 184, 196, 203, 206, 220, 227, 232, 248, 255, 298, 349, 353, 360, 450, 452, 457, 458, 464], "mountpoint": [14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 74, 77, 78, 84, 88, 91, 92, 95, 98, 100, 102, 103, 112, 115, 116, 118, 121, 127, 136, 180, 184, 186, 202, 206, 209, 226, 230, 232, 236, 254, 258, 261, 262, 270, 272, 273, 282, 286, 288, 291, 295, 297, 298, 305, 350, 352, 353, 359, 363, 366, 367, 375, 377, 378, 387, 391, 393, 396, 400, 408, 453, 456, 457, 463, 467, 470, 471, 474, 477, 479, 481, 482, 491, 494, 495, 497, 500, 506, 515], "printf": [14, 16, 25, 28, 31, 43], "part2": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 43], "multi": [14, 16, 25, 28, 31, 62, 67, 71, 168, 189, 212, 240, 339, 343, 347, 441, 446, 450], "spa_feature_nam": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42], "zstd": [14, 16, 25, 28, 31, 48, 71, 78, 79, 165, 166, 251, 298, 353, 354, 450, 457, 458, 544, 545], "dnodes": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 53, 78, 79, 88, 118, 199, 206, 223, 232, 251, 298, 353, 354, 363, 393, 457, 458, 467, 497], "part3": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 43], "unencrypt": [14, 16, 18, 19, 
20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 108, 109, 110, 114, 232, 278, 279, 280, 284, 383, 384, 385, 389, 487, 488, 489, 493], "alpinelinux": 14, "recv": [14, 16, 25, 28, 31, 48, 53, 56, 83, 90, 101, 108, 120, 165, 166, 184, 206, 232, 253, 260, 271, 278, 290, 358, 365, 376, 383, 395, 462, 469, 480, 487, 499, 544, 545], "__": [14, 16, 22, 25, 28, 31, 47], "spreadsheet": [14, 16, 25, 28, 31], "luk": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42], "compromis": [14, 16, 25, 31, 47, 71, 90, 101, 120, 176, 186, 198, 209, 222, 236, 250, 260, 271, 290, 333, 347, 365, 376, 395, 450, 469, 480, 499, 548, 550], "safe": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 66, 67, 70, 79, 170, 172, 192, 194, 215, 216, 219, 243, 244, 247, 251, 342, 343, 346, 354, 445, 446, 449, 458, 555], "keyloc": [14, 16, 18, 19, 20, 25, 31, 34, 35, 36, 37, 41, 42, 74, 78, 88, 90, 101, 102, 103, 108, 109, 110, 114, 116, 118, 120, 121, 143, 157, 230, 232, 236, 260, 271, 272, 273, 278, 279, 280, 284, 290, 291, 298, 312, 326, 350, 353, 363, 365, 376, 377, 378, 383, 384, 385, 389, 391, 393, 395, 396, 415, 429, 453, 457, 467, 469, 480, 481, 482, 487, 488, 489, 493, 495, 497, 499, 500, 522, 536, 557], "prompt": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 53, 65, 74, 78, 90, 101, 102, 103, 108, 109, 110, 114, 116, 120, 121, 143, 157, 230, 232, 236, 260, 271, 272, 273, 278, 279, 280, 284, 290, 291, 298, 312, 326, 350, 353, 365, 376, 377, 378, 383, 384, 385, 389, 391, 395, 396, 415, 429, 444, 453, 457, 469, 480, 481, 482, 487, 488, 489, 493, 495, 499, 500, 522, 536], "keyformat": [14, 16, 18, 19, 20, 25, 31, 34, 35, 36, 37, 41, 42, 78, 88, 90, 101, 108, 109, 118, 120, 232, 260, 271, 278, 279, 290, 298, 353, 363, 365, 376, 383, 384, 393, 395, 457, 467, 469, 480, 487, 488, 497, 499, 557], "passphras": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 78, 90, 101, 108, 109, 120, 232, 260, 271, 278, 279, 290, 298, 353, 365, 376, 383, 384, 395, 457, 469, 480, 487, 488, 499, 557], "insecur": [14, 16, 25, 31], "poolpass": [14, 16, 25, 31], "noauto": [14, 16, 18, 19, 20, 22, 25, 31, 33, 35, 37, 41, 42, 43, 78, 102, 184, 206, 230, 232, 272, 298, 353, 377, 457, 481], "mkdir": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 102, 377, 481], "lib": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 67, 74, 350, 446, 453], "mkf": [14, 16, 25, 28, 31, 47], "vfat": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42], "part1": [14, 16, 25, 28, 31, 34, 36, 43], "iocharset": [14, 16, 25, 28, 31], "iso8859": [14, 16, 25, 28, 31], "cut": [14, 16, 25, 31, 34, 35, 36, 37, 43], "f1": [14, 16, 25, 31, 35, 37, 43], "workaround": [14, 16, 18, 19, 20, 22, 25, 31, 33, 42, 46, 47, 53], "recogn": [14, 18, 19, 20, 22, 33, 34, 36, 41, 42, 48, 73, 79, 140, 153, 174, 186, 196, 209, 220, 236, 248, 309, 322, 349, 354, 412, 425, 452, 458, 519, 532], "zpool_vdev_name_path": [14, 16, 25, 31, 41, 42, 43, 163, 186, 209, 236, 332, 435, 542], "lt": [14, 15, 26, 71, 76, 81, 102, 127, 163, 170, 171, 172, 174, 176, 178, 180, 181, 184, 185, 186, 191, 192, 193, 194, 196, 198, 199, 200, 202, 203, 206, 208, 209, 214, 215, 216, 220, 222, 223, 224, 226, 227, 228, 231, 232, 235, 236, 242, 243, 244, 250, 251, 252, 254, 256, 274, 295, 298, 300, 332, 334, 347, 353, 356, 377, 450, 455, 460, 481, 506, 542], "reinstal": [14, 25, 31, 32, 41, 42, 43], "rw": [14, 16, 25, 27, 28, 31, 78, 95, 98, 115, 127, 139, 145, 175, 184, 197, 206, 209, 221, 232, 236, 249, 295, 298, 314, 
353, 400, 411, 417, 457, 474, 477, 494, 506, 518, 524, 557], "nofail": [14, 16, 22, 25, 31, 33, 102, 230, 272, 377, 481], "fstab": [14, 16, 18, 19, 20, 22, 25, 27, 31, 33, 34, 35, 36, 37, 41, 42, 43, 53, 74, 77, 82, 84, 178, 180, 184, 200, 202, 206, 224, 226, 232, 252, 254, 297, 350, 352, 357, 359, 453, 456, 461, 463], "chroot": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 67, 446], "rbind": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42], "usr": [14, 16, 18, 19, 20, 21, 22, 25, 27, 31, 33, 34, 35, 36, 37, 41, 42, 67, 74, 79, 81, 171, 184, 193, 350, 354, 356, 446, 453, 458, 460], "bin": [14, 16, 17, 18, 19, 20, 22, 25, 27, 31, 33, 34, 35, 36, 37, 41, 42, 67, 74, 77, 171, 193, 295, 350, 446, 453, 456], "env": [14, 16, 18, 19, 20, 25, 27, 28, 31, 33, 34, 35, 36, 37, 41, 42, 53, 74, 350, 453], "profil": [14, 16, 25, 27, 31, 41, 42], "sc1091": [14, 16, 25, 31], "hard": [14, 16, 25, 31, 47, 48, 51, 71, 78, 87, 184, 206, 222, 232, 250, 298, 347, 353, 362, 450, 457, 466], "10_linux": [14, 16, 25, 31], "stat": [14, 61, 66, 86, 96, 106, 124, 127, 145, 170, 184, 192, 204, 206, 209, 215, 228, 232, 236, 239, 243, 256, 266, 276, 293, 295, 314, 338, 342, 361, 371, 381, 398, 400, 417, 440, 445, 465, 475, 485, 503, 506, 524], "sbin": [14, 18, 19, 20, 22, 27, 33, 41, 42, 46, 171, 184, 193], "mkconfig": [14, 16, 25, 31, 41, 42], "probe": [14, 18, 19, 20, 22, 33, 34, 36, 41, 42, 131, 139, 175, 197, 208, 221, 235, 249, 300, 403, 411, 510, 518], "boot_devic": 14, "grep": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 46, 47], "n1": [14, 18, 19, 20, 22, 28, 33, 34, 36, 41, 42, 43], "grub_device_boot": 14, "overwrit": [14, 16, 25, 31, 71, 78, 79, 90, 101, 110, 114, 120, 130, 184, 199, 206, 222, 223, 232, 250, 251, 260, 271, 275, 280, 284, 290, 298, 299, 347, 353, 354, 365, 376, 385, 389, 395, 402, 450, 457, 458, 469, 480, 489, 493, 499, 509], "bootdir": [14, 16, 25, 31], "i386": [14, 16, 22, 25, 31, 41, 42, 43], "pc": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 43], "x86_64": [14, 16, 18, 19, 20, 22, 25, 28, 31, 32, 33, 34, 36, 41, 42, 43, 53, 172], "firmwar": [14, 16, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 71, 129, 176, 198, 222, 250, 347, 450, 508], "efivar": [14, 16], "efibootmgr": [14, 16, 18, 19, 20, 22, 25, 31, 33, 41, 42], "fi": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 43], "menu": [14, 16, 25, 31, 34, 36, 43, 48], "cfg": [14, 16, 25, 31, 34, 41, 42, 43], "cp": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 48, 77, 456], "espdir": [14, 16, 25, 31], "maxdepth": [14, 16, 25, 31], "mindepth": [14, 16, 25, 31], "print0": [14, 16, 25, 31, 43, 232, 275], "xarg": [14, 16, 18, 19, 20, 22, 25, 31, 33, 34, 36, 41, 42, 43], "0i": [14, 16, 25, 31, 43], "vxc": [14, 16, 25, 31, 43], "unmount": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 47, 77, 78, 83, 88, 93, 99, 103, 108, 109, 112, 113, 118, 119, 122, 125, 126, 127, 137, 140, 184, 186, 206, 209, 232, 236, 253, 258, 263, 269, 273, 278, 279, 282, 283, 288, 289, 294, 295, 297, 298, 306, 309, 352, 353, 358, 363, 368, 374, 378, 383, 384, 387, 388, 393, 394, 399, 400, 409, 412, 456, 457, 462, 467, 472, 478, 482, 487, 488, 491, 492, 497, 498, 501, 504, 505, 506, 516, 519], "snapshot": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 53, 57, 66, 71, 74, 77, 78, 79, 80, 81, 83, 84, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 100, 101, 104, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 118, 120, 123, 124, 127, 134, 165, 
166, 170, 176, 177, 184, 186, 192, 198, 199, 206, 209, 215, 222, 223, 231, 232, 236, 243, 250, 251, 253, 258, 259, 260, 261, 263, 264, 265, 266, 267, 268, 270, 271, 274, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 288, 290, 292, 293, 295, 297, 298, 303, 333, 334, 342, 347, 350, 352, 353, 354, 355, 356, 358, 359, 363, 364, 365, 366, 368, 369, 370, 371, 372, 373, 375, 376, 379, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 393, 395, 397, 398, 400, 406, 445, 450, 453, 456, 457, 458, 459, 460, 462, 463, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 479, 480, 483, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 497, 499, 502, 503, 506, 513, 544, 545, 557], "mainten": [14, 16, 25, 28, 31, 39, 58, 59, 163, 332, 435, 542], "umount": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42, 43, 77, 78, 88, 118, 184, 206, 232, 258, 288, 297, 298, 352, 353, 363, 393, 456, 457, 467, 497], "rl": [14, 16, 25, 28, 31, 43], "incompat": [16, 17, 19, 25, 31, 36, 43, 79, 143, 186, 209, 236, 251, 312, 354, 415, 458, 522, 557, 562], "alpin": [16, 25, 31, 39, 43, 58, 59], "ship": [16, 25, 29, 31, 34, 36, 46, 48], "archlinux": [16, 17], "extract": [16, 25, 31, 43, 86, 465], "curl": [16, 25, 31, 34, 35, 37], "l": [16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 46, 48, 73, 78, 86, 87, 88, 90, 94, 96, 101, 103, 106, 110, 114, 116, 118, 120, 121, 124, 131, 132, 133, 142, 143, 145, 147, 157, 158, 163, 165, 166, 171, 182, 183, 184, 185, 186, 193, 196, 204, 205, 206, 208, 209, 220, 228, 229, 232, 235, 236, 248, 256, 257, 258, 260, 264, 266, 271, 273, 276, 280, 284, 288, 290, 291, 293, 298, 300, 301, 311, 312, 314, 316, 326, 327, 332, 349, 353, 361, 362, 363, 365, 369, 371, 376, 378, 381, 385, 389, 391, 393, 395, 396, 398, 403, 404, 414, 415, 417, 419, 429, 430, 435, 452, 457, 465, 466, 467, 469, 473, 475, 480, 482, 485, 489, 493, 495, 497, 499, 500, 503, 510, 511, 521, 522, 524, 526, 536, 537, 542, 544, 545], "america": 16, "pkgbuild": 16, "iso": [16, 18, 19, 20, 41, 42, 65, 444], "09": [16, 35, 37, 48], "bootstrap": [16, 18, 19], "rootf": [16, 21, 25, 31], "sig": 16, "gnupg": 16, "ln": [16, 19, 20, 22, 25, 33, 34, 35, 41, 42, 102, 230, 272, 377, 481], "af": [16, 25, 31, 53], "edg": [16, 25, 31, 54], "1commun": [16, 25, 31], "genfstab": [16, 25, 31], "partuuid": [16, 22, 25, 31, 33, 41, 42, 53], "idl": [16, 25, 31, 46, 70, 71, 87, 176, 198, 250, 257, 347, 362, 449, 450, 466], "timeout": [16, 22, 25, 31, 33, 41, 42, 43, 47, 65, 71, 183, 198, 205, 222, 229, 250, 347, 444, 450], "1min": [16, 25, 31, 347], "automount": [16, 18, 19, 20, 25, 31, 34, 36, 41, 42, 47], "archzf": [16, 17], "pacman": [16, 17], "init": [16, 28, 35, 37, 46, 67, 343, 446], "refresh": [16, 18, 19, 20, 22, 33, 34, 36, 41, 42, 102, 377, 481], "popul": [16, 53, 71, 91, 92, 93, 107, 112, 117, 127, 184, 206, 222, 232, 250, 295, 347, 400, 450, 470, 471, 472, 486, 491, 496, 506], "gpgdir": 16, "lsign": 16, "ddf7db817396a49b2a2723f7403bd972f75d9d76": 16, "tee": [16, 34, 41, 42], "mirrorlist": 16, "eof": [16, 35, 37, 41, 42], "franc": 16, "germani": 16, "sum7": 16, "eu": [16, 35, 37], "biocraft": 16, "net": [16, 34, 35, 46, 48, 78, 127, 168, 184, 189, 206, 212, 232, 240, 295, 298, 353, 400, 457, 506], "india": 16, "themindsmaz": 16, "unit": [16, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 67, 76, 102, 145, 155, 160, 171, 172, 193, 194, 209, 216, 230, 236, 244, 272, 314, 343, 377, 417, 427, 446, 455, 481, 524, 534, 539], "zxcvfdsa": 16, "prefix": [16, 23, 34, 71, 73, 87, 130, 139, 175, 183, 196, 197, 205, 220, 
221, 229, 248, 249, 257, 299, 349, 362, 402, 411, 450, 452, 466, 509, 518], "ci": [16, 62, 168, 189, 212, 240, 339, 441], "noconfirm": 16, "mg": 16, "mandoc": 16, "mkinitcpio": 16, "kernel_compatible_with_zf": 16, "si": 16, "awk": [16, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 209, 236, 314], "zst": 16, "physic": [16, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 50, 53, 67, 70, 71, 73, 76, 77, 78, 80, 81, 85, 86, 136, 139, 145, 148, 149, 153, 158, 163, 172, 174, 175, 176, 181, 182, 184, 186, 194, 196, 197, 198, 203, 204, 206, 209, 216, 219, 220, 221, 222, 227, 228, 232, 236, 244, 247, 248, 249, 250, 255, 256, 297, 298, 305, 314, 315, 317, 318, 322, 327, 332, 333, 334, 343, 346, 347, 349, 352, 353, 355, 356, 360, 361, 408, 411, 417, 420, 421, 425, 430, 435, 446, 449, 450, 452, 455, 456, 457, 459, 460, 464, 465, 515, 518, 524, 527, 528, 532, 537, 542, 548, 550], "ucod": 16, "amd": 16, "synchronis": [16, 25], "systemctl": [16, 18, 19, 20, 22, 25, 28, 33, 34, 35, 36, 37, 41, 42, 102, 155, 160, 230, 272, 377, 427, 481, 534, 539], "timesyncd": [16, 18, 19, 25], "zgenhostid": [16, 25, 31, 41, 74, 81, 83, 201, 209, 225, 236, 253, 334, 350, 356, 358, 453, 460, 462], "hostid": [16, 25, 29, 31, 41, 67, 70, 74, 81, 130, 194, 207, 209, 216, 219, 234, 236, 244, 247, 299, 334, 343, 346, 350, 356, 402, 446, 449, 453, 460, 509, 562], "en_u": [16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42], "utf": [16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 78, 184, 206, 232, 298, 353, 457], "gen": 16, "keymap": [16, 25, 31], "timezon": [16, 25, 31], "hostnam": [16, 18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42, 95, 98, 115, 127, 142, 184, 186, 206, 209, 232, 236, 295, 311, 400, 414, 474, 477, 494, 506, 521], "localtim": [16, 25, 31], "firstboot": [16, 25, 31], "utc": [16, 25, 31, 32], "testhost": [16, 25, 31], "passwd": [16, 18, 19, 20, 22, 25, 28, 31, 33, 34, 36, 41, 42, 78, 127, 184, 206, 232, 298, 353, 400, 457, 506], "yourpassword": [16, 25, 31], "chpasswd": [16, 25, 31], "grub_cmdline_linux": [16, 18, 19, 20, 22, 33, 41, 42], "zfs_import_dir": 16, "reach": [17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 49, 50, 53, 71, 176, 198, 222, 250, 347, 450], "irc": [17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42], "libera": [17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42], "chat": [17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42], "howto": [17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42], "mention": [17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42, 48, 53, 553], "ne9z": [17, 28, 29], "licens": [17, 57, 58, 59, 87, 183, 205, 229, 257, 362, 466], "third": [17, 25, 26, 46, 78, 90, 101, 110, 114, 120, 182, 230, 232, 260, 271, 272, 280, 284, 290, 298, 353, 365, 376, 385, 389, 395, 457, 469, 480, 489, 493, 499], "parti": [17, 25, 26, 46, 79, 236, 354, 458], "pip": [17, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "pip3": [17, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "doc": [17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42, 548, 549, 550, 551, 552, 553, 554, 555, 557, 558, 559, 560, 561], "txt": [17, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "bashrc": [17, 18, 19, 20, 22, 27, 33, 34, 35, 36, 37, 41, 42], "html": [17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42, 46, 48], "sensibl": [17, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "browser": [17, 18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "_build": [17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42], "index": [17, 18, 19, 20, 22, 29, 33, 34, 35, 36, 37, 41, 42, 48, 51, 104, 131, 176, 231, 235, 
274, 300, 379, 403, 483, 510], "dual": [18, 19, 20, 22, 33, 34, 36, 41, 42], "backup": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 77, 78, 86, 108, 109, 110, 114, 184, 206, 232, 280, 284, 297, 298, 352, 353, 383, 384, 385, 389, 456, 457, 465, 487, 488, 489, 493, 549, 551, 553, 554, 556, 557], "64": [18, 19, 20, 22, 35, 37, 41, 42, 47, 57, 70, 71, 78, 79, 86, 88, 104, 118, 139, 175, 176, 184, 197, 198, 199, 204, 206, 219, 221, 222, 223, 228, 231, 232, 247, 249, 250, 251, 256, 258, 274, 288, 298, 346, 347, 353, 354, 361, 363, 379, 393, 411, 449, 450, 457, 458, 465, 467, 483, 497, 518], "w": [18, 19, 20, 41, 42, 43, 47, 65, 86, 108, 109, 110, 114, 133, 134, 139, 144, 145, 151, 153, 155, 160, 175, 197, 209, 221, 232, 236, 249, 278, 279, 280, 284, 302, 303, 313, 314, 320, 322, 324, 329, 383, 384, 385, 389, 405, 406, 411, 416, 417, 423, 425, 427, 432, 444, 465, 487, 488, 489, 493, 512, 513, 518, 523, 524, 530, 532, 534, 539, 557], "gui": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "gnome": [18, 19, 20, 22, 25, 28, 31, 33, 34, 35, 36, 37, 41, 42], "strongli": [18, 19, 20, 22, 32, 41, 42, 47, 53, 71, 76, 78, 80, 81, 184, 186, 206, 209, 232, 236, 298, 333, 347, 353, 355, 450, 455, 457, 459, 460, 553], "encourag": [18, 19, 20, 22, 32, 39, 41, 42, 44, 47, 53, 57, 70, 78, 184, 206, 219, 232, 247, 298, 346, 353, 449, 457], "kib": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 78, 79, 80, 110, 114, 347, 450, 457, 458, 459, 489, 493], "4kn": [18, 19, 20, 22, 33, 34, 36, 41, 42], "uefi": [18, 19, 20, 22, 25, 31, 33, 34, 36, 41, 42, 48], "slowli": [18, 19, 20, 22, 33, 34, 36, 41, 42, 47, 108, 109, 487, 488], "dedupl": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 53, 77, 78, 80, 86, 90, 101, 108, 109, 110, 114, 120, 158, 165, 166, 182, 184, 186, 204, 206, 209, 228, 232, 236, 256, 260, 271, 278, 279, 280, 284, 290, 297, 298, 327, 333, 335, 352, 353, 355, 361, 365, 376, 383, 384, 385, 389, 395, 430, 437, 438, 456, 457, 459, 465, 469, 480, 487, 488, 489, 493, 499, 537, 544, 545], "massiv": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "perman": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 81, 163, 176, 186, 198, 209, 222, 236, 250, 332, 347, 356, 435, 450, 460, 542, 554], "revert": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 70, 71, 86, 95, 98, 113, 115, 127, 148, 149, 184, 186, 206, 209, 228, 232, 236, 250, 256, 265, 268, 283, 285, 295, 317, 318, 347, 361, 370, 373, 388, 390, 400, 420, 421, 449, 450, 465, 474, 477, 492, 494, 506, 527, 528, 553, 557], "rlaager": [18, 19, 20, 22, 33, 34, 35, 36, 37], "With": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 65, 67, 71, 79, 110, 114, 222, 223, 250, 251, 343, 347, 354, 444, 446, 450, 458, 489, 493], "cours": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 557, 561], "happen": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 53, 71, 108, 109, 198, 222, 250, 347, 450, 487, 488, 551, 553, 559, 560, 561], "natur": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 48, 104, 110, 114, 231, 274, 280, 284, 379, 385, 389, 483, 489, 493], "initrd": [18, 19, 20, 22, 23, 25, 31, 33, 34, 35, 36, 37, 39, 41, 42, 43, 74, 350, 453], "put": [18, 19, 20, 22, 27, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 71, 81, 148, 149, 171, 186, 193, 209, 236, 250, 317, 318, 333, 334, 347, 356, 420, 421, 450, 460, 527, 528], "sensit": [18, 19, 20, 34, 35, 36, 37, 41, 42, 48, 76, 78, 95, 98, 110, 114, 115, 127, 184, 206, 232, 280, 284, 295, 298, 353, 385, 389, 400, 455, 457, 474, 477, 489, 493, 494, 506], "consol": [18, 19, 20, 22, 23, 33, 34, 35, 36, 37, 41, 42, 47, 
70, 74, 81, 176, 186, 198, 209, 219, 222, 236, 247, 250, 334, 346, 350, 356, 449, 453, 460], "even": [18, 19, 20, 21, 22, 27, 29, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 71, 77, 78, 80, 81, 86, 90, 101, 102, 103, 108, 109, 110, 112, 114, 120, 121, 128, 132, 133, 136, 140, 143, 153, 155, 176, 184, 186, 198, 206, 209, 219, 222, 230, 232, 236, 247, 250, 260, 271, 272, 273, 278, 279, 280, 282, 284, 290, 291, 296, 297, 298, 301, 302, 305, 309, 312, 322, 324, 333, 334, 347, 352, 353, 355, 356, 365, 376, 377, 378, 383, 384, 385, 387, 389, 395, 396, 401, 404, 405, 408, 412, 415, 425, 427, 450, 456, 457, 459, 460, 465, 469, 480, 481, 482, 487, 488, 489, 491, 493, 499, 500, 507, 511, 512, 515, 519, 522, 532, 534, 552, 553], "topologi": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 53, 73, 85, 174, 181, 196, 203, 220, 227, 248, 255, 349, 360, 452, 464], "everyth": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 84, 110, 114, 176, 198, 222, 250, 280, 284, 347, 359, 385, 389, 450, 463, 489, 493], "sit": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "usernam": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 65, 110, 114, 280, 284, 385, 389, 444, 489, 493], "join": [18, 19, 20, 22, 33, 34, 36, 41, 42], "termin": [18, 19, 20, 33, 34, 36, 41, 42, 65, 71, 87, 90, 101, 103, 108, 109, 116, 120, 121, 183, 186, 205, 206, 229, 232, 257, 260, 271, 273, 278, 279, 290, 291, 362, 365, 376, 378, 383, 384, 391, 395, 396, 444, 450, 466, 469, 480, 482, 487, 488, 495, 499, 500], "vi": [18, 19, 20, 22, 23, 33, 34, 35, 36, 37, 41, 42], "contrib": [18, 19, 20, 22, 23], "second": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 50, 61, 64, 65, 67, 70, 71, 76, 78, 87, 93, 104, 110, 112, 114, 117, 127, 131, 145, 147, 152, 160, 162, 163, 172, 176, 183, 184, 185, 186, 191, 194, 198, 205, 206, 208, 209, 214, 216, 219, 222, 229, 230, 231, 232, 235, 236, 239, 242, 244, 247, 250, 257, 272, 274, 280, 282, 284, 295, 298, 300, 314, 316, 321, 327, 329, 331, 338, 341, 343, 346, 347, 353, 362, 379, 385, 387, 389, 400, 403, 417, 419, 424, 430, 432, 434, 440, 443, 444, 446, 449, 450, 455, 457, 466, 472, 483, 489, 491, 493, 496, 506, 510, 524, 526, 531, 539, 541, 542, 553, 561], "hint": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 78, 81, 184, 206, 209, 232, 236, 298, 334, 353, 356, 457, 460], "addr": [18, 19, 20, 22, 33, 34, 36, 41, 42], "scope": [18, 19, 20, 22, 33, 34, 36, 41, 42, 48, 127, 184, 206, 232, 295, 400, 506], "inet": [18, 19, 20, 22, 33, 34, 36, 41, 42], "offset": [18, 19, 20, 34, 36, 41, 42, 47, 64, 71, 79, 86, 139, 165, 166, 171, 175, 182, 191, 193, 197, 204, 214, 221, 222, 223, 228, 242, 249, 250, 251, 256, 341, 347, 354, 361, 411, 443, 450, 458, 465, 518, 544, 545], "previou": [18, 19, 20, 34, 35, 36, 37, 39, 41, 42, 43, 47, 70, 71, 78, 90, 101, 104, 107, 117, 120, 127, 139, 143, 148, 149, 175, 184, 186, 197, 198, 206, 209, 221, 222, 231, 232, 236, 249, 250, 260, 271, 274, 277, 283, 287, 290, 295, 298, 308, 312, 317, 318, 347, 353, 365, 376, 379, 382, 392, 395, 400, 411, 415, 420, 421, 449, 450, 457, 469, 480, 483, 486, 496, 499, 506, 518, 522, 527, 528, 557], "gset": [18, 19, 20, 34, 36, 41, 42], "fals": [18, 19, 20, 22, 25, 28, 29, 31, 33, 34, 36, 41, 42, 53, 65, 104, 231, 236, 274, 333, 379, 444, 483], "debootstrap": [18, 19, 20, 22, 33, 34, 36], "gdisk": [18, 19, 20, 22, 33, 34, 36, 41, 42], "zfsutil": [18, 19, 20, 23, 34, 35, 36, 37, 38, 43, 84, 180, 202, 226, 254, 359, 463], "sata_disk1": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "alias": [18, 19, 20, 22, 25, 31, 32, 33, 34, 36, 41, 42, 53, 73, 85, 
104, 174, 181, 196, 203, 220, 227, 231, 248, 255, 274, 349, 360, 379, 452, 464, 483], "node": [18, 19, 20, 22, 33, 34, 36, 41, 42, 47, 71, 78, 85, 163, 176, 184, 186, 198, 206, 209, 222, 232, 236, 250, 255, 298, 332, 347, 353, 360, 435, 450, 457, 464, 542], "sporad": [18, 19, 20, 22, 33, 34, 36, 41, 42], "especi": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 67, 71, 79, 172, 176, 194, 198, 216, 222, 244, 250, 251, 343, 347, 354, 446, 450, 458], "la": [18, 19, 20, 22, 33, 34, 36, 41, 42], "vda": [18, 19, 20, 22, 33, 34, 36, 41, 42], "around": [18, 19, 20, 21, 22, 33, 34, 35, 36, 37, 41, 42, 48, 67, 71, 104, 172, 194, 216, 231, 244, 274, 343, 379, 446, 450, 483], "100m": [18, 19, 20, 33, 34, 36, 49, 53, 71, 155, 176, 198, 222, 250, 347, 427, 450, 534], "low": [18, 19, 20, 33, 34, 36, 47, 53, 55, 70, 78, 80, 184, 186, 206, 209, 219, 221, 232, 236, 247, 249, 298, 333, 346, 353, 355, 411, 449, 457, 459], "regener": [18, 19, 20, 25, 31, 33, 34, 36, 47, 139, 175, 197, 221, 249, 411, 518], "initramf": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "85m": [18, 19, 20, 33, 34, 36], "swapoff": [18, 19, 20, 34, 35, 36, 37], "previous": [18, 19, 20, 21, 22, 32, 33, 34, 36, 41, 42, 46, 47, 70, 78, 80, 81, 90, 101, 110, 114, 120, 143, 184, 186, 206, 209, 232, 236, 260, 271, 280, 284, 290, 298, 312, 333, 334, 353, 355, 356, 365, 376, 385, 389, 395, 415, 449, 457, 459, 460, 469, 480, 489, 493, 499, 522], "cat": [18, 19, 20, 34, 35, 36, 37, 41, 42, 47, 65, 79, 354, 444, 458], "mdstat": [18, 19, 20, 34, 36, 41, 42], "stop": [18, 19, 20, 34, 35, 36, 37, 41, 42, 46, 47, 48, 62, 71, 79, 104, 125, 151, 155, 162, 168, 176, 186, 189, 198, 209, 212, 222, 231, 232, 236, 240, 250, 274, 294, 320, 324, 331, 339, 347, 354, 379, 399, 423, 427, 434, 441, 450, 458, 483, 504, 530, 534, 541], "md0": [18, 19, 20, 34, 36, 41, 42], "superblock": [18, 19, 20, 22, 33, 34, 36, 41, 42], "wipef": [18, 19, 35, 36, 37], "trim": [18, 19, 36, 53, 71, 81, 83, 90, 101, 120, 144, 145, 158, 162, 163, 198, 222, 236, 250, 253, 260, 271, 290, 313, 314, 327, 331, 332, 334, 347, 356, 358, 365, 376, 395, 416, 417, 430, 434, 435, 450, 460, 462, 469, 480, 499, 523, 524, 537, 541, 542], "unmap": [18, 19, 36, 47], "sgdisk": [18, 19, 20, 22, 33, 34, 36, 41, 42], "zap": [18, 19, 20, 22, 33, 34, 36, 41, 42, 48, 71, 79, 86, 222, 250, 256, 347, 361, 450, 458, 465], "a1": [18, 19, 20, 22, 33, 34, 36, 41, 42, 53], "24k": [18, 19, 20, 22, 33, 34, 36, 41, 42], "1000k": [18, 19, 20, 22, 33, 34, 36, 41, 42], "t1": [18, 19, 20, 22, 33, 34, 36, 41, 42], "ef02": [18, 19, 20, 22, 33, 34, 36, 41, 42], "n2": [18, 19, 20, 22, 33, 34, 36, 41, 42], "512m": [18, 19, 20, 22, 33, 34, 36, 41, 42, 53], "t2": [18, 19, 20, 22, 33, 34, 36, 41, 42], "ef00": [18, 19, 20, 22, 33, 34, 36, 41, 42], "n3": [18, 19, 20, 22, 33, 34, 36, 41, 42], "1g": [18, 19, 20, 22, 33, 41, 42, 86, 182, 204, 228, 256, 361, 465], "t3": [18, 19, 20, 22, 33, 34, 36, 41, 42], "bf01": [18, 19, 20, 22, 33, 41, 42], "n4": [18, 19, 20, 22, 33, 34, 36, 41, 42], "t4": [18, 19, 20, 22, 33, 34, 36, 41, 42], "bf00": [18, 19, 20, 34, 36, 41, 42], "8309": [18, 19, 20, 34, 36, 41, 42], "repeat": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 222, 250, 347, 450], "grub2": [18, 25, 28, 31, 36, 43, 79, 354, 458], "cachefil": [18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 53, 66, 81, 86, 136, 143, 170, 182, 186, 192, 204, 209, 215, 228, 236, 243, 256, 305, 312, 334, 342, 356, 361, 408, 415, 445, 460, 465, 515, 522], "restrict": [18, 33, 34, 36, 44, 47, 71, 77, 78, 88, 95, 98, 108, 109, 110, 114, 115, 118, 
127, 136, 164, 184, 206, 232, 258, 278, 279, 280, 284, 288, 295, 298, 353, 363, 383, 384, 385, 389, 393, 400, 408, 436, 450, 456, 457, 467, 474, 477, 487, 488, 489, 493, 494, 497, 506, 515, 543, 557], "sata_disk2": [18, 19, 20, 22, 33, 34, 36, 41, 42], "arbitrari": [18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 48, 53, 73, 76, 77, 78, 81, 95, 98, 100, 104, 115, 139, 141, 145, 147, 156, 162, 171, 174, 184, 186, 193, 196, 206, 209, 220, 231, 232, 236, 248, 265, 268, 270, 274, 285, 297, 298, 308, 310, 314, 316, 325, 331, 349, 352, 353, 370, 373, 375, 379, 390, 411, 413, 417, 419, 428, 434, 452, 455, 456, 457, 460, 474, 477, 479, 483, 494, 518, 520, 524, 526, 535, 541], "consist": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 57, 71, 73, 76, 78, 79, 80, 81, 86, 93, 108, 109, 112, 117, 127, 136, 143, 163, 174, 182, 184, 186, 196, 204, 206, 209, 220, 223, 228, 232, 236, 248, 250, 251, 256, 278, 279, 295, 298, 305, 312, 332, 334, 347, 349, 353, 354, 356, 361, 383, 384, 400, 408, 415, 435, 450, 452, 455, 457, 458, 459, 460, 465, 472, 487, 488, 491, 496, 506, 515, 522, 542, 557], "convent": [18, 19, 20, 22, 33, 41, 42, 76, 78, 79, 81, 104, 167, 177, 184, 188, 199, 206, 211, 223, 231, 232, 238, 251, 274, 298, 337, 353, 354, 379, 439, 455, 457, 458, 460, 483, 546, 557], "luksformat": [18, 19, 20, 22, 28, 33, 34, 35, 36, 37, 41, 42], "ae": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 78, 222, 232, 250, 298, 347, 353, 450, 457], "xt": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "plain64": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "512": [18, 19, 20, 22, 28, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 53, 70, 71, 78, 79, 80, 171, 176, 177, 184, 193, 198, 199, 206, 219, 222, 223, 232, 247, 250, 251, 298, 333, 346, 347, 353, 354, 449, 450, 457, 458, 459], "luksopen": [18, 19, 20, 22, 28, 33, 34, 35, 36, 37, 41, 42], "luks1": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "todai": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 48, 91, 92, 93, 107, 112, 117, 127, 184, 206, 232, 295, 400, 470, 471, 472, 486, 491, 496, 506], "though": [18, 19, 20, 22, 27, 33, 34, 35, 36, 37, 41, 42, 47, 71, 77, 78, 80, 81, 86, 90, 95, 98, 101, 104, 115, 120, 153, 182, 184, 186, 204, 206, 209, 228, 231, 232, 236, 250, 256, 260, 265, 268, 271, 274, 285, 290, 297, 298, 322, 333, 334, 347, 352, 353, 355, 356, 361, 365, 370, 373, 376, 379, 390, 395, 425, 450, 456, 457, 459, 460, 465, 469, 474, 477, 480, 483, 494, 499, 532, 548, 552, 553, 554, 555], "posix": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 48, 62, 78, 80, 96, 106, 124, 127, 168, 171, 184, 186, 189, 193, 206, 209, 212, 232, 236, 240, 266, 276, 293, 295, 298, 333, 339, 353, 355, 371, 381, 398, 400, 441, 457, 459, 475, 485, 503, 506], "lowercas": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 76, 78, 81, 184, 206, 232, 298, 353, 455, 457, 460], "journald": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "vastli": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "attribut": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 44, 46, 48, 78, 79, 86, 90, 101, 120, 127, 180, 184, 202, 206, 223, 226, 232, 251, 254, 260, 271, 290, 295, 298, 353, 354, 365, 376, 395, 400, 457, 458, 465, 469, 480, 499, 506], "insid": [18, 19, 20, 22, 28, 29, 33, 34, 35, 36, 37, 41, 42, 43, 62, 86, 99, 119, 122, 126, 131, 168, 185, 189, 204, 208, 212, 228, 235, 240, 256, 269, 289, 300, 339, 361, 374, 394, 403, 441, 465, 478, 498, 501, 505, 510], "window": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 48, 176, 198], "besid": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "omit": [18, 19, 20, 22, 
33, 34, 35, 36, 37, 41, 42, 43, 79, 80, 87, 95, 98, 110, 114, 115, 177, 183, 184, 186, 199, 205, 206, 209, 223, 229, 232, 236, 251, 257, 265, 268, 280, 284, 285, 314, 333, 354, 355, 362, 370, 373, 385, 389, 390, 458, 459, 466, 474, 477, 489, 493, 494], "fine": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 71, 176, 198, 222, 250, 347, 450], "corner": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "impli": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 53, 54, 92, 93, 143, 165, 166, 184, 186, 206, 209, 230, 232, 236, 237, 262, 263, 272, 312, 335, 336, 367, 368, 415, 437, 438, 471, 472, 522, 544, 545], "utf8onli": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 78, 88, 95, 98, 115, 118, 127, 184, 206, 232, 258, 288, 295, 298, 353, 363, 393, 400, 457, 467, 474, 477, 494, 497, 506], "discuss": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 48, 55, 59, 78, 81, 184, 206, 232, 236, 298, 334, 353, 356, 457, 460], "problem": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 44, 46, 47, 48, 57, 62, 71, 77, 79, 131, 139, 158, 168, 177, 185, 189, 199, 206, 208, 212, 221, 222, 223, 232, 235, 236, 240, 249, 250, 251, 297, 300, 327, 339, 347, 352, 354, 403, 411, 430, 441, 450, 456, 458, 510, 518, 537, 547, 553, 558, 559, 560, 561], "enforc": [18, 19, 20, 22, 25, 33, 34, 35, 36, 37, 41, 42, 47, 71, 76, 78, 81, 110, 114, 176, 180, 184, 198, 202, 206, 222, 226, 232, 250, 254, 280, 284, 298, 347, 353, 385, 389, 450, 455, 457, 460, 489, 493], "unset": [18, 19, 20, 21, 25, 31, 33, 34, 35, 36, 37, 41, 42, 71, 80, 81, 102, 209, 230, 236, 272, 334, 347, 355, 356, 377, 450, 459, 460, 481], "128": [18, 19, 20, 33, 34, 35, 36, 37, 41, 42, 46, 47, 71, 78, 79, 90, 101, 110, 114, 120, 165, 166, 176, 184, 198, 206, 222, 232, 250, 260, 271, 290, 298, 347, 353, 365, 376, 395, 450, 457, 458, 469, 480, 489, 493, 499, 544, 545], "blog": [18, 19, 20, 33, 34, 35, 36, 37, 41, 42, 46, 48], "middl": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 71, 222, 250, 347, 450], "ground": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46], "classic": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 48], "atim": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 78, 88, 95, 98, 102, 115, 118, 127, 184, 206, 230, 232, 258, 272, 288, 295, 298, 353, 363, 377, 393, 400, 457, 467, 474, 477, 481, 494, 497, 506], "behavior": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 65, 70, 71, 73, 76, 78, 81, 86, 90, 92, 93, 100, 101, 104, 120, 127, 132, 133, 135, 139, 145, 148, 149, 163, 174, 175, 176, 182, 184, 186, 196, 197, 198, 204, 206, 209, 219, 220, 221, 222, 228, 230, 231, 232, 236, 247, 248, 249, 250, 256, 260, 262, 263, 270, 271, 272, 274, 290, 295, 298, 301, 314, 332, 334, 346, 347, 349, 353, 356, 361, 365, 367, 368, 375, 376, 379, 395, 400, 404, 411, 417, 435, 444, 449, 450, 452, 455, 457, 460, 465, 469, 471, 472, 479, 480, 483, 499, 506, 511, 518, 524, 542], "30": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 53, 71, 76, 78, 96, 97, 106, 111, 123, 124, 129, 163, 176, 198, 222, 232, 236, 250, 258, 259, 261, 262, 263, 264, 265, 266, 267, 268, 270, 275, 276, 277, 280, 281, 283, 284, 285, 286, 287, 288, 292, 293, 295, 297, 298, 327, 347, 352, 353, 368, 371, 372, 381, 382, 386, 397, 398, 400, 430, 450, 455, 457, 475, 476, 485, 490, 502, 503, 508], "portion": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 76, 78, 79, 80, 81, 93, 108, 109, 110, 114, 127, 177, 184, 186, 199, 206, 209, 223, 232, 236, 251, 263, 278, 279, 295, 298, 333, 353, 354, 355, 368, 383, 384, 400, 455, 457, 458, 459, 460, 472, 487, 488, 489, 493, 506, 555], "forget": [18, 19, 20, 
22, 33, 34, 35, 36, 37, 41, 42, 54], "256": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 70, 71, 76, 78, 79, 81, 90, 101, 120, 176, 184, 198, 199, 206, 219, 222, 223, 232, 247, 250, 251, 260, 271, 290, 295, 298, 346, 347, 353, 354, 365, 376, 395, 449, 450, 455, 457, 458, 460, 469, 480, 499], "gcm": [18, 19, 20, 34, 35, 36, 37, 41, 42, 47, 71, 78, 222, 232, 250, 298, 347, 353, 450, 457], "mode": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 64, 65, 66, 71, 73, 78, 79, 80, 81, 85, 100, 131, 139, 141, 143, 145, 147, 156, 162, 164, 170, 174, 177, 181, 184, 185, 186, 191, 192, 196, 199, 203, 206, 208, 209, 214, 215, 220, 222, 223, 227, 232, 235, 236, 242, 243, 248, 250, 251, 255, 270, 298, 300, 308, 310, 312, 314, 316, 325, 331, 333, 334, 341, 342, 347, 349, 353, 354, 355, 356, 360, 375, 403, 411, 413, 415, 417, 419, 428, 434, 436, 443, 444, 445, 450, 452, 457, 458, 459, 460, 464, 479, 510, 518, 520, 522, 524, 526, 535, 541, 543], "half": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 47, 48, 67, 71, 198, 222, 250, 347, 446, 450], "thu": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 43, 46, 47, 48, 49, 50, 71, 78, 79, 80, 86, 88, 118, 139, 176, 177, 184, 198, 199, 204, 206, 221, 222, 223, 228, 232, 236, 249, 250, 251, 256, 258, 288, 298, 333, 347, 353, 354, 355, 361, 363, 393, 411, 450, 457, 458, 459, 465, 467, 497, 518], "weakest": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "wise": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 45, 71, 176, 186, 198, 209, 222, 236, 250, 347, 450], "faq": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 57, 58, 59, 92, 471], "guidanc": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46], "luks2": [18, 19, 20, 22, 28, 33, 34, 35, 36, 37, 41, 42], "solari": [18, 19, 20, 22, 33, 41, 42, 48, 52, 53, 78, 184, 206, 232, 298, 353, 457], "suffix": [18, 19, 20, 22, 33, 41, 42, 76, 78, 95, 98, 115, 171, 184, 193, 206, 232, 265, 268, 285, 298, 353, 370, 373, 390, 455, 457, 474, 477, 494], "beadm": [18, 19, 20, 22, 33, 41, 42], "zsy": [18, 19, 20, 33, 34, 35, 36, 37, 41, 42], "complic": [18, 19, 20, 33, 41, 42, 43, 46, 184], "life": [18, 19, 20, 54, 139, 175, 176, 197, 221, 249, 411, 518], "said": [18, 19, 20, 41, 42, 46, 78, 110, 114, 232, 280, 284, 298, 353, 385, 389, 457, 489, 493], "simplic": [18, 19, 20, 41, 42, 53], "situat": [18, 19, 20, 22, 33, 35, 37, 41, 42, 46, 47, 71, 198, 222, 250, 347, 450], "chmod": [18, 19, 20, 22, 28, 33, 34, 35, 36, 37, 41, 42, 78, 88, 118, 127, 184, 206, 232, 295, 298, 353, 400, 457, 467, 497, 506], "700": [18, 19, 20, 34, 36, 37, 41, 42], "spool": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "sun": [18, 19, 20, 22, 33, 41, 42, 47, 48, 53, 62, 155, 168, 184, 189, 212, 240, 339, 427, 441, 534, 553], "1777": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "srv": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "game": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 51], "accountsservic": [18, 19, 20, 22, 33, 35, 36, 37, 41, 42], "networkmanag": [18, 19, 28, 33, 34, 35, 36, 37], "docker": [18, 19, 20, 22, 33, 36, 37, 41, 42], "snap": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 93, 104, 108, 109, 110, 114, 117, 127, 184, 206, 231, 232, 263, 274, 287, 295, 368, 379, 392, 400, 472, 483, 487, 488, 489, 493, 496, 506], "tmpf": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "noth": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 44, 74, 78, 108, 109, 110, 114, 178, 200, 206, 224, 230, 232, 250, 252, 272, 278, 279, 280, 284, 350, 353, 383, 384, 385, 389, 453, 457, 487, 488, 489, 493], "maximum": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 45, 
46, 47, 48, 50, 53, 70, 71, 78, 79, 86, 104, 127, 139, 163, 176, 182, 183, 184, 198, 204, 205, 206, 219, 221, 222, 223, 228, 229, 231, 232, 236, 247, 249, 250, 251, 256, 257, 274, 295, 298, 332, 346, 347, 353, 354, 361, 379, 400, 411, 435, 449, 450, 457, 458, 465, 483, 506, 518, 542], "zfs_initrd_additional_dataset": [18, 19, 36, 37], "matter": [18, 19, 20, 33, 34, 36, 37, 41, 42, 46, 47, 48, 53, 71, 163, 176, 198, 222, 250, 332, 347, 435, 450, 542], "lock": [18, 19, 20, 22, 26, 28, 33, 34, 35, 36, 37, 41, 42, 47, 70, 71, 78, 87, 176, 183, 184, 198, 205, 206, 219, 222, 229, 232, 247, 250, 257, 298, 346, 347, 353, 362, 449, 450, 457, 466], "unconfigur": [18, 19, 20, 22, 33, 34, 36], "entireti": [18, 19, 20, 22, 33, 34, 36], "127": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 127, 184, 206, 232, 295, 400, 506], "real": [18, 19, 20, 22, 23, 33, 34, 35, 36, 37, 41, 42, 64, 74, 78, 79, 131, 132, 145, 147, 157, 158, 177, 184, 186, 199, 206, 208, 209, 223, 232, 235, 236, 251, 298, 300, 301, 314, 316, 326, 327, 341, 350, 353, 354, 403, 404, 417, 419, 429, 430, 443, 453, 457, 458, 510, 511, 524, 526, 536, 537], "dn": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 76, 78, 79, 81, 95, 98, 115, 127, 177, 184, 199, 206, 223, 232, 251, 295, 298, 353, 354, 400, 455, 457, 458, 460, 474, 477, 494, 506], "fqdn": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "nano": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "confus": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 62, 78, 96, 106, 124, 168, 184, 189, 206, 209, 212, 232, 240, 266, 276, 293, 298, 339, 353, 371, 381, 398, 441, 457, 475, 485, 503], "adjust": [18, 19, 20, 22, 27, 33, 34, 36, 41, 42, 45, 46, 47, 48, 49, 71, 78, 107, 176, 184, 198, 206, 222, 232, 250, 277, 298, 347, 353, 382, 450, 457, 486], "ifac": [18, 19, 20, 22], "src": [18, 19, 20, 22, 23, 25, 27], "bind": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 70, 102, 127, 219, 247, 346, 377, 400, 449, 481, 506], "livecd": [18, 19, 20, 22, 33, 34, 36, 41, 42], "privat": [18, 19, 34, 35, 36, 37, 41, 42, 84, 180, 202, 226, 254, 359, 463], "english": [18, 19, 20, 22, 25, 31, 33, 34, 35, 36, 37, 41, 42], "languag": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 104, 127, 231, 274, 295, 379, 400, 483, 506], "dpkg": [18, 19, 20, 22, 23, 33, 34, 35, 36, 37, 41, 42], "reconfigur": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 87, 183, 205, 229, 257, 362, 466], "tzdata": [18, 19, 20, 22, 33, 34, 35, 36, 37], "keyboard": [18, 19, 20, 34, 36], "remake_initrd": [18, 19, 20], "sai": [18, 19, 20, 34, 35, 36, 41, 42], "couldn": [18, 19, 20, 34, 36, 103, 121, 273, 291, 378, 396, 482, 500], "crypttab": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "blkid": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 131, 176, 185, 198, 208, 222, 235, 250, 300, 347, 403, 450, 510], "entri": [18, 19, 20, 22, 27, 33, 34, 36, 41, 42, 43, 47, 48, 57, 71, 78, 79, 86, 87, 104, 105, 110, 114, 139, 145, 175, 176, 180, 182, 184, 197, 198, 202, 204, 206, 209, 221, 222, 226, 228, 231, 232, 236, 249, 250, 254, 256, 274, 275, 280, 284, 298, 314, 347, 353, 361, 379, 380, 385, 389, 411, 417, 450, 457, 465, 466, 483, 484, 489, 493, 518, 524], "although": [18, 19, 21, 41, 42, 46, 48, 53, 78, 110, 114, 184, 206, 232, 280, 284, 298, 353, 385, 389, 457, 489, 493], "brows": [18, 19], "clock": [18, 19, 46, 47, 176, 198], "drift": [18, 19], "dosfstool": [18, 19, 20, 22, 33, 34, 36, 41, 42], "mkdosf": [18, 19, 20, 22, 33, 34, 36, 41, 42], "32": [18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 57, 67, 70, 71, 78, 80, 86, 130, 147, 163, 176, 186, 198, 
204, 207, 209, 219, 222, 228, 232, 234, 236, 247, 250, 256, 298, 299, 332, 343, 346, 347, 353, 361, 402, 435, 446, 449, 450, 457, 459, 465, 509, 526, 542], "amd64": [18, 19, 20, 21, 22, 33, 34, 36, 48], "meet": [18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 198, 222, 250, 347], "cluster": [18, 19, 20, 22, 33, 34, 36, 41, 42, 47, 81, 171, 186, 193, 209, 236, 334, 356, 460], "mib": [18, 19, 20, 22, 33, 34, 36, 41, 42, 47, 64, 70, 71, 78, 104, 191, 214, 219, 242, 247, 341, 346, 347, 443, 449, 450, 457, 483], "fat32": [18, 19, 20, 22, 33, 34, 36, 41, 42], "prober": [18, 19, 20, 33, 34, 36, 41, 42], "purg": [18, 19, 20, 33, 34, 35, 36, 37], "whether": [18, 19, 20, 22, 25, 33, 41, 42, 46, 47, 48, 50, 53, 54, 71, 73, 78, 81, 108, 109, 110, 114, 139, 143, 145, 174, 176, 184, 186, 196, 198, 206, 209, 220, 221, 222, 232, 236, 248, 249, 250, 278, 279, 280, 284, 298, 312, 314, 334, 347, 349, 353, 356, 383, 384, 385, 389, 411, 415, 417, 450, 452, 457, 460, 487, 488, 489, 493, 518, 522, 524], "defaultdepend": [18, 19, 20, 22, 33, 41, 42], "oneshot": [18, 19, 20, 22, 33, 41, 42], "remainafterexit": [18, 19, 20, 22, 33, 41, 42], "execstart": [18, 19, 20, 22, 33, 41, 42], "execstartpr": [18, 19, 20, 41, 42], "mv": [18, 19, 20, 34, 35, 37, 41, 42, 48], "preboot_zpool": [18, 19, 20, 41, 42], "execstartpost": [18, 19, 20, 41, 42], "wantedbi": [18, 19, 20, 22, 33, 41, 42, 102, 230, 272, 377, 481], "indic": [18, 19, 45, 46, 47, 65, 66, 70, 71, 78, 79, 80, 82, 84, 86, 90, 94, 101, 104, 110, 114, 120, 127, 134, 139, 143, 157, 158, 170, 176, 177, 178, 180, 184, 186, 192, 198, 199, 200, 202, 206, 209, 215, 219, 221, 222, 223, 224, 226, 231, 232, 236, 243, 247, 249, 250, 251, 252, 254, 256, 260, 264, 271, 274, 280, 284, 290, 295, 298, 303, 312, 326, 327, 333, 342, 346, 347, 353, 354, 355, 357, 359, 361, 365, 369, 376, 379, 385, 389, 395, 400, 406, 411, 415, 429, 430, 444, 445, 449, 450, 457, 458, 459, 461, 463, 465, 469, 473, 480, 483, 489, 493, 499, 506, 513, 518, 522, 536, 537, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "chose": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 71, 176, 198, 222, 250, 347, 450], "mutual": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 171, 193], "sshd_config": [18, 19, 20, 34, 36, 41, 42], "permitrootlogin": [18, 19, 20, 34, 36, 41, 42], "dropbear": [18, 19], "unlock": [18, 19, 20, 21, 22, 33, 34, 36, 41, 42, 47, 71, 86, 176, 198, 222, 250, 347, 450, 465], "ecdsa": [18, 19], "ed25519": [18, 19], "rsa": [18, 19], "ssh_host_": [18, 19], "_kei": [18, 19], "keygen": [18, 19], "m": [18, 19, 28, 34, 35, 36, 37, 46, 47, 62, 65, 66, 67, 71, 76, 78, 85, 86, 87, 94, 95, 98, 104, 108, 109, 115, 127, 131, 136, 143, 145, 168, 170, 171, 172, 181, 182, 183, 184, 185, 186, 189, 192, 193, 194, 203, 204, 205, 206, 208, 209, 212, 215, 216, 222, 227, 228, 229, 231, 232, 235, 236, 240, 243, 244, 250, 255, 256, 257, 264, 265, 268, 274, 278, 279, 285, 295, 298, 300, 305, 312, 314, 339, 342, 343, 353, 360, 361, 362, 369, 370, 373, 379, 383, 384, 390, 400, 403, 408, 415, 417, 441, 444, 445, 446, 450, 455, 457, 464, 465, 466, 473, 474, 477, 483, 487, 488, 494, 506, 510, 515, 522, 524], "pem": [18, 19], "dropbearconvert": [18, 19], "dropbear_": [18, 19], "_host_kei": [18, 19], "static": [18, 19, 80, 180, 186, 202, 209, 226, 236, 254, 333, 355, 459], "syntax": [18, 19, 48, 88, 104, 118, 127, 184, 206, 231, 232, 270, 274, 295, 379, 400, 467, 483, 497, 506], "gatewai": [18, 19], "mask": [18, 19, 34, 36, 41, 42, 47, 131, 139, 175, 197, 219, 221, 235, 247, 249, 300, 403, 411, 510, 518], 
"nic": [18, 19], "100": [18, 19, 25, 34, 35, 36, 37, 45, 47, 48, 49, 65, 71, 78, 81, 104, 131, 155, 176, 184, 185, 186, 198, 206, 208, 209, 222, 231, 232, 235, 236, 250, 274, 298, 300, 324, 347, 353, 379, 403, 427, 444, 450, 457, 460, 483, 510, 534], "255": [18, 19, 127, 506], "myhostnam": [18, 19], "ens3": [18, 19], "mismatch": [18, 19, 34, 36, 556, 562], "understand": [18, 19, 34, 47, 49, 66, 71, 86, 176, 198, 222, 250, 342, 347, 445, 450, 465], "zfsunlock": [18, 19], "cryptroot": [18, 19], "front": [18, 19], "kindli": [18, 19, 20, 22], "popcon": [18, 19, 20, 22], "popular": [18, 19, 20, 22, 46, 79, 354, 458], "contest": [18, 19, 20, 22], "term": [18, 19, 20, 22, 25, 46, 71, 86, 171, 182, 183, 193, 204, 205, 228, 229, 250, 256, 257, 347, 361, 450, 465], "quiet": [18, 19, 20, 22, 33, 34, 36, 41, 42, 65, 131, 185, 208, 235, 300, 403, 444, 510], "grub_cmdline_linux_default": [18, 19, 20, 22, 33, 34, 36, 41, 42], "uncom": [18, 19, 20, 22, 33, 34, 36, 41, 42], "grub_termin": [18, 19, 20, 22, 33, 34, 36, 41, 42], "quit": [18, 19, 20, 22, 33, 34, 36, 41, 42, 47, 53], "twice": [18, 19, 20, 22, 33, 34, 35, 36, 41, 42, 46, 47, 48, 71, 182, 198, 222, 250, 347, 450], "undo": [18, 19, 20, 22, 33, 34, 36, 41, 42, 104, 143, 231, 236, 274, 312, 379, 415, 483, 522], "osprob": [18, 19, 20, 22, 33, 34, 36, 41, 42], "loader": [18, 19, 20, 22, 25, 27, 31, 33, 34, 36, 41, 42], "mbr": [18, 19, 20, 22, 33, 34, 36, 41, 42], "recheck": [18, 19, 20, 22, 33, 34, 36, 41, 42], "floppi": [18, 19, 20, 22, 33, 34, 36, 41, 42], "turn": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 48, 70, 71, 74, 78, 79, 82, 163, 177, 184, 199, 206, 219, 222, 223, 232, 247, 250, 251, 298, 346, 347, 350, 353, 354, 357, 449, 450, 453, 457, 458, 461], "rsyslog": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "privatetmp": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "fg": [18, 19, 20, 34, 35, 36, 37, 41, 42], "ctrl": [18, 19, 20, 33, 34, 35, 36, 37, 41, 42, 186], "ei": [18, 19, 20, 34, 35, 36, 37, 41, 42, 58, 59, 562], "tac": [18, 19, 20, 22, 33, 34, 36, 41, 42], "lf": [18, 19, 20, 22, 33, 34, 36, 41, 42], "initamf": [18, 19], "newli": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 70, 71, 78, 79, 80, 88, 90, 92, 93, 101, 108, 109, 117, 118, 120, 127, 133, 177, 184, 186, 199, 206, 209, 219, 223, 232, 236, 247, 251, 258, 260, 271, 278, 279, 288, 290, 295, 298, 333, 346, 353, 354, 355, 363, 365, 367, 376, 383, 384, 393, 395, 400, 449, 450, 457, 458, 459, 467, 469, 471, 472, 480, 487, 488, 496, 497, 499, 506, 549, 550], "your_usernam": [18, 19, 20, 34, 35, 36, 37], "addus": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "skel": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "chown": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 48], "usermod": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "audio": [18, 19, 20, 22, 33, 41, 42], "cdrom": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "dip": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46], "netdev": [18, 19, 20, 22, 33, 41, 42], "plugdev": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "video": [18, 19, 20, 22, 33, 41, 42, 51], "hit": [18, 19, 20, 22, 33, 41, 42, 47, 48, 50, 61, 71, 176, 198, 222, 239, 250, 338, 347, 440, 450], "grubx64": [18, 19, 20, 22, 41, 42], "extrem": [18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 48, 71, 77, 78, 79, 86, 93, 143, 176, 182, 184, 186, 198, 199, 204, 206, 209, 222, 223, 228, 232, 236, 250, 251, 256, 263, 297, 298, 312, 347, 352, 353, 354, 361, 368, 415, 450, 456, 457, 458, 465, 472, 522], "high": [18, 19, 20, 22, 33, 41, 42, 46, 47, 48, 53, 57, 71, 78, 79, 86, 
176, 177, 184, 198, 199, 204, 206, 221, 222, 223, 228, 232, 249, 250, 251, 256, 298, 347, 353, 354, 361, 411, 450, 457, 458, 465], "pressur": [18, 19, 20, 22, 33, 41, 42, 47, 71, 80, 87, 183, 198, 205, 219, 222, 229, 247, 250, 257, 333, 347, 355, 362, 450, 459, 466], "lockup": [18, 19, 20, 22, 33, 41, 42], "getconf": [18, 19, 20, 22, 33, 41, 42, 53], "pages": [18, 19, 20, 22, 33, 41, 42, 53], "zle": [18, 19, 20, 22, 33, 41, 42, 48, 78, 86, 165, 166, 184, 204, 206, 228, 232, 256, 298, 353, 361, 457, 465, 544, 545], "logbia": [18, 19, 20, 22, 33, 41, 42, 47, 48, 53, 71, 78, 88, 118, 184, 198, 206, 222, 232, 250, 298, 347, 353, 363, 393, 450, 457, 467, 497], "throughput": [18, 19, 20, 22, 33, 41, 42, 46, 47, 48, 49, 50, 53, 64, 71, 78, 176, 184, 191, 198, 206, 214, 222, 232, 242, 250, 298, 341, 347, 353, 443, 450, 457], "sync": [18, 19, 20, 22, 33, 35, 37, 41, 42, 47, 50, 53, 71, 78, 79, 83, 88, 104, 118, 155, 163, 175, 176, 184, 197, 198, 206, 209, 221, 222, 231, 232, 236, 249, 250, 253, 274, 298, 324, 332, 347, 353, 358, 363, 379, 393, 427, 435, 450, 457, 458, 462, 467, 483, 497, 534, 542], "primarycach": [18, 19, 20, 22, 33, 41, 42, 48, 53, 78, 88, 95, 98, 115, 118, 127, 184, 206, 232, 258, 288, 295, 298, 353, 363, 393, 400, 457, 467, 474, 477, 494, 497, 506], "secondarycach": [18, 19, 20, 22, 33, 41, 42, 48, 78, 88, 95, 98, 115, 118, 127, 184, 206, 232, 258, 288, 295, 298, 353, 363, 393, 400, 457, 467, 474, 477, 494, 497, 506], "cheapest": [18, 19, 20, 22, 33, 41, 42], "zdx": [18, 19, 20, 22, 33, 41, 42], "resum": [18, 19, 20, 22, 33, 41, 42, 71, 78, 79, 87, 108, 109, 110, 114, 135, 139, 144, 155, 160, 163, 165, 166, 177, 198, 199, 206, 209, 221, 222, 223, 232, 236, 249, 250, 251, 257, 278, 279, 280, 284, 298, 304, 313, 324, 329, 332, 335, 347, 353, 354, 362, 383, 384, 385, 389, 407, 411, 416, 427, 432, 435, 437, 438, 450, 457, 458, 466, 487, 488, 489, 493, 514, 518, 523, 534, 539, 542, 544, 545, 549, 550], "hibern": [18, 19, 20, 22, 33, 41, 42], "hang": [18, 19, 20, 22, 23, 33, 35, 37, 41, 42, 47, 71, 222, 250, 347, 450, 559, 560], "av": [18, 19, 20, 22, 33, 41, 42], "dist": [18, 19, 20, 22, 25, 26, 31, 32, 33, 34, 35, 36, 37], "regular": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 47, 71, 74, 77, 78, 80, 82, 84, 94, 108, 109, 110, 114, 155, 171, 178, 184, 186, 193, 200, 206, 209, 222, 224, 232, 236, 250, 252, 264, 278, 279, 280, 284, 297, 298, 333, 347, 350, 352, 353, 355, 357, 359, 369, 383, 384, 385, 389, 450, 453, 456, 457, 459, 461, 463, 473, 487, 488, 489, 493, 534], "tasksel": [18, 19, 20, 22], "unselect": [18, 19, 20, 22], "logrot": [18, 19, 20, 22, 33, 34, 35, 36, 37], "burn": [18, 19, 20, 22, 33, 34, 35, 36, 37], "gain": [18, 19, 20, 22, 33, 34, 35, 36, 37, 47, 48, 71, 78, 102, 184, 206, 222, 230, 232, 250, 272, 298, 347, 353, 377, 450, 457, 481], "wast": [18, 19, 20, 22, 33, 34, 35, 36, 37, 47, 71, 250, 333, 347, 355, 450], "uncompress": [18, 19, 20, 22, 33, 34, 35, 36, 37, 47, 48, 71, 78, 110, 114, 165, 166, 232, 250, 298, 347, 353, 450, 457, 489, 493, 544, 545], "loop": [18, 19, 20, 22, 33, 34, 35, 36, 37, 47, 67, 71, 222, 250, 343, 347, 446, 450], "past": [18, 19, 20, 22, 33, 34, 35, 36, 37, 46, 47, 71, 78, 176, 184, 198, 206, 232, 250, 298, 347, 353, 450, 457], "eq": [18, 19, 20, 22, 33, 34, 35, 36, 37], "delet": [18, 19, 20, 22, 33, 35, 37, 41, 42, 43, 48, 53, 71, 78, 79, 86, 90, 93, 94, 101, 104, 108, 109, 110, 113, 114, 120, 125, 127, 184, 198, 206, 222, 231, 232, 250, 251, 256, 260, 263, 271, 274, 278, 279, 280, 284, 290, 294, 295, 298, 347, 353, 354, 361, 365, 368, 
376, 379, 383, 384, 385, 389, 395, 399, 400, 450, 457, 458, 465, 469, 472, 473, 480, 483, 487, 488, 489, 492, 493, 499, 504, 506], "destroi": [18, 19, 20, 21, 22, 33, 35, 37, 41, 42, 43, 47, 66, 70, 71, 77, 78, 79, 80, 81, 83, 88, 89, 91, 92, 97, 104, 107, 108, 109, 110, 111, 112, 113, 114, 117, 118, 125, 127, 136, 139, 143, 146, 162, 163, 170, 175, 176, 177, 184, 186, 192, 197, 198, 199, 206, 209, 215, 219, 221, 222, 223, 231, 232, 236, 243, 247, 249, 250, 251, 253, 258, 259, 262, 267, 274, 277, 278, 279, 280, 281, 283, 284, 287, 288, 294, 295, 297, 298, 305, 312, 315, 331, 332, 333, 334, 342, 346, 347, 352, 353, 354, 355, 356, 358, 363, 364, 367, 372, 379, 382, 383, 384, 385, 386, 388, 389, 392, 393, 399, 400, 408, 411, 415, 418, 434, 435, 445, 449, 450, 456, 457, 458, 459, 460, 462, 467, 468, 470, 471, 476, 483, 486, 487, 488, 489, 490, 491, 492, 493, 496, 497, 504, 506, 515, 518, 522, 525, 541, 542, 549, 551, 553, 554, 556, 557], "earlier": [18, 19, 20, 34, 36, 41, 42, 46, 47, 78, 110, 114, 184, 206, 232, 280, 284, 298, 353, 385, 389, 457, 489, 493, 553, 557], "temporari": [18, 19, 20, 34, 36, 41, 42, 47, 48, 71, 74, 78, 81, 84, 95, 98, 103, 115, 121, 143, 148, 149, 176, 184, 186, 198, 206, 209, 222, 232, 236, 250, 265, 268, 273, 285, 291, 298, 312, 317, 318, 334, 347, 350, 353, 356, 359, 370, 373, 378, 390, 396, 415, 420, 421, 450, 453, 457, 460, 463, 474, 477, 482, 494, 500, 522, 527, 528], "graphic": [18, 19, 20, 22, 33, 34, 36, 41, 42], "nicer": [18, 19, 20, 22, 33, 34, 36, 41, 42], "luksheaderbackup": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "dat": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "somewher": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 46, 77, 78, 102, 184, 206, 232, 297, 298, 352, 353, 377, 456, 457, 481], "cloud": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "whatev": [18, 19, 20, 22, 33, 34, 36, 41, 42, 79, 129, 199, 223, 251, 354, 458, 508], "arcsa": [18, 19, 20, 22, 33, 34, 36, 41, 42], "blob": [18, 19, 20, 22, 33, 34, 36, 41, 42], "downgrad": [18, 19, 20, 22, 33, 34, 36, 41, 42], "rip": [18, 19, 20, 22, 33, 34, 36, 41, 42], "0010": [18, 19, 20, 22, 33, 34, 36, 41, 42], "ffffffff8101b316": [18, 19, 20, 22, 33, 34, 36, 41, 42], "native_read_tsc": [18, 19, 20, 22, 33, 34, 36, 41, 42], "0x6": [18, 19, 20, 22, 33, 34, 36, 41, 42], "0x20": [18, 19, 20, 22, 33, 34, 36, 41, 42, 47], "anywher": [18, 19, 20, 22, 33, 34, 36, 41, 42, 53, 91, 112, 184, 206, 232, 261, 282, 366, 387, 470, 491], "emit": [18, 19, 20, 22, 33, 34, 36, 41, 42], "involv": [18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 48, 71, 104, 176, 198, 219, 222, 231, 250, 274, 347, 379, 450, 483], "hardwar": [18, 19, 20, 22, 28, 33, 34, 36, 41, 42, 47, 48, 51, 57, 58, 59, 67, 71, 77, 79, 80, 90, 101, 108, 109, 120, 131, 139, 155, 172, 184, 185, 186, 194, 198, 199, 206, 208, 209, 216, 221, 222, 223, 232, 235, 236, 244, 249, 250, 251, 260, 271, 290, 297, 300, 324, 333, 343, 347, 352, 354, 355, 365, 376, 395, 403, 411, 427, 446, 450, 456, 458, 459, 469, 480, 487, 488, 499, 510, 518, 534, 555], "ibm": [18, 19, 20, 22, 33, 34, 36, 41, 42], "m1015": [18, 19, 20, 22, 33, 34, 36, 41, 42], "oem": [18, 19, 20, 22, 33, 34, 36, 41, 42], "brand": [18, 19, 20, 22, 33, 34, 36, 41, 42], "card": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42], "lsi": [18, 19, 20, 22, 33, 34, 36, 41, 42], "visibl": [18, 19, 20, 22, 33, 34, 36, 41, 42, 47, 62, 77, 78, 79, 93, 168, 177, 184, 189, 199, 206, 212, 223, 232, 240, 251, 263, 297, 298, 339, 352, 353, 354, 368, 441, 456, 457, 458, 472], "hotplug": [18, 19, 20, 22, 33, 34, 36, 41, 42], 
"member": [18, 19, 20, 22, 33, 34, 36, 41, 42, 47, 66, 71, 88, 118, 127, 170, 184, 192, 198, 206, 215, 222, 232, 243, 250, 295, 342, 347, 400, 445, 450, 467, 497, 506], "330": [18, 19, 20, 22, 33, 34, 36, 41, 42, 46], "perfectli": [18, 19, 20, 22, 33, 34, 36, 41, 42, 53], "glitch": [18, 19, 20, 22, 33, 34, 36, 41, 42], "zfs_initrd_pre_mountroot_sleep": [18, 19, 20, 22, 33, 34, 36, 41, 42], "qcow2": [18, 19, 20, 22, 33, 34, 36, 41, 42], "1234567890": [18, 19, 20, 22, 33, 34, 36, 41, 42], "abl": [18, 19, 20, 22, 33, 34, 36, 41, 42, 46, 47, 48, 49, 53, 71, 78, 79, 86, 110, 114, 127, 131, 136, 147, 163, 176, 177, 184, 186, 198, 199, 206, 208, 209, 219, 222, 223, 232, 235, 236, 247, 250, 251, 280, 284, 295, 298, 300, 314, 332, 346, 347, 353, 354, 385, 389, 400, 403, 408, 435, 449, 450, 457, 458, 465, 489, 493, 506, 510, 515, 526, 542, 554, 555, 557], "guest": [18, 19, 20, 22, 33, 34, 35, 36, 37, 41, 42, 48, 78, 184, 206, 232, 298, 353, 457], "ovmf": [18, 19, 20, 22, 33, 34, 36, 41, 42], "nvram": [18, 19, 20, 22, 33, 34, 36, 41, 42, 80, 186, 209, 236, 333, 355, 459], "ovmf_cod": [18, 19, 20, 22, 33, 34, 36, 41, 42], "fd": [18, 19, 20, 22, 27, 33, 34, 36, 41, 42], "ovmf_var": [18, 19, 20, 22, 33, 34, 36, 41, 42], "secboot": [18, 19, 20, 34, 36, 41, 42], "aavmf": [18, 19, 20, 22, 33, 34, 36, 41, 42], "aavmf_cod": [18, 19, 20, 22, 33, 34, 36, 41, 42], "aavmf_var": [18, 19, 20, 22, 33, 34, 36, 41, 42], "aavmf32_cod": [18, 19, 20, 34, 36, 41, 42], "aavmf32_var": [18, 19, 20, 34, 36, 41, 42], "libvirtd": [18, 19, 20, 33, 34, 36, 41, 42], "enableuuid": [18, 19, 20, 22, 33, 34, 36, 41, 42], "vmx": [18, 19, 20, 22, 33, 34, 36, 41, 42], "vsphere": [18, 19, 20, 22, 33, 34, 36, 41, 42], "bookworm": [19, 23, 39], "async_destroi": [19, 20, 22, 33, 34, 41, 42, 47, 66, 71, 79, 170, 177, 192, 198, 199, 215, 222, 223, 243, 250, 251, 342, 347, 354, 445, 450, 458], "bookmark": [19, 20, 22, 33, 34, 41, 42, 77, 78, 79, 83, 88, 93, 95, 98, 100, 104, 110, 113, 114, 115, 117, 118, 127, 131, 177, 184, 185, 199, 206, 208, 223, 231, 232, 235, 251, 253, 263, 265, 268, 270, 274, 280, 283, 284, 285, 287, 295, 297, 298, 300, 352, 353, 354, 358, 363, 368, 370, 373, 375, 379, 385, 388, 389, 390, 392, 393, 400, 403, 456, 457, 458, 462, 467, 472, 474, 477, 479, 483, 489, 492, 493, 494, 496, 497, 506, 510, 557], "embedded_data": [19, 20, 22, 33, 34, 41, 42, 79, 90, 101, 110, 114, 120, 177, 184, 199, 206, 223, 232, 251, 260, 271, 280, 284, 290, 354, 365, 376, 385, 389, 395, 458, 469, 480, 489, 493, 499], "empty_bpobj": [19, 20, 22, 33, 34, 41, 42, 66, 79, 170, 177, 192, 199, 215, 223, 243, 251, 342, 354, 445, 458], "enabled_txg": [19, 20, 22, 33, 34, 41, 42, 79, 177, 199, 223, 251, 354, 458], "extensible_dataset": [19, 20, 22, 33, 34, 41, 42, 78, 79, 108, 109, 177, 199, 206, 223, 232, 251, 278, 279, 354, 383, 384, 457, 458, 487, 488], "filesystem_limit": [19, 20, 22, 33, 34, 41, 42, 78, 79, 88, 118, 177, 184, 199, 206, 223, 232, 251, 258, 288, 298, 353, 354, 363, 393, 457, 458, 467, 497], "hole_birth": [19, 20, 22, 33, 34, 41, 42, 47, 71, 79, 176, 177, 198, 199, 222, 223, 250, 251, 347, 354, 450, 458], "large_block": [19, 20, 22, 33, 34, 41, 42, 47, 48, 78, 79, 110, 114, 177, 184, 199, 206, 223, 232, 251, 280, 284, 298, 353, 354, 385, 389, 457, 458, 489, 493], "livelist": [19, 36, 71, 79, 86, 250, 251, 256, 347, 354, 361, 450, 458, 465], "lz4_compress": [19, 20, 22, 33, 34, 41, 42, 66, 78, 79, 110, 114, 170, 177, 184, 192, 199, 206, 215, 223, 232, 243, 251, 280, 284, 298, 342, 353, 354, 385, 389, 445, 457, 458, 489, 493], 
"spacemap_histogram": [19, 20, 22, 33, 34, 41, 42, 79, 177, 199, 223, 251, 354, 458], "zpool_checkpoint": [19, 20, 36, 41, 42, 79, 223, 251, 354, 458], "allocation_class": [19, 20, 34, 36, 41, 42, 79, 223, 251, 354, 458], "someon": [19, 20, 34, 36, 41, 42], "sens": [19, 20, 34, 36, 41, 42, 48], "rather": [19, 20, 32, 34, 36, 41, 42, 46, 47, 48, 49, 50, 53, 65, 67, 71, 77, 78, 79, 86, 87, 104, 108, 109, 110, 114, 172, 176, 177, 182, 184, 194, 198, 199, 204, 206, 216, 222, 223, 228, 231, 232, 244, 250, 251, 256, 257, 274, 278, 279, 280, 284, 298, 343, 347, 353, 354, 361, 362, 379, 383, 384, 385, 389, 444, 446, 450, 456, 457, 458, 465, 466, 483, 487, 488, 489, 493], "device_rebuild": [19, 36, 79, 251, 354, 458], "practic": [19, 20, 22, 33, 34, 36, 41, 42, 47, 48, 53, 67, 70, 71, 77, 78, 108, 109, 172, 184, 194, 198, 206, 216, 222, 232, 244, 250, 278, 279, 297, 298, 343, 347, 352, 353, 383, 384, 446, 449, 450, 456, 457, 487, 488], "log_spacemap": [19, 36, 79, 251, 354, 458], "spacemap_v2": [19, 20, 34, 36, 41, 42, 79, 223, 251, 354, 458], "project_quota": [19, 20, 34, 36, 41, 42, 79, 223, 251, 354, 458], "resilver_def": [19, 20, 34, 36, 41, 42, 47, 71, 79, 154, 223, 236, 250, 251, 323, 347, 354, 426, 450, 458, 533], "enough": [19, 20, 34, 36, 41, 42, 46, 47, 48, 53, 70, 71, 78, 107, 136, 184, 206, 219, 232, 236, 247, 250, 277, 298, 305, 346, 347, 353, 382, 408, 449, 450, 457, 486, 515, 552], "userobj_account": [19, 20, 22, 33, 34, 36, 41, 42, 79, 199, 223, 251, 354, 458], "theori": [19, 20, 33, 34, 36, 41, 42], "invalid": [19, 20, 33, 34, 36, 41, 42, 71, 78, 104, 127, 139, 144, 160, 163, 175, 184, 186, 197, 206, 209, 221, 231, 232, 236, 249, 250, 274, 295, 298, 313, 329, 332, 347, 353, 379, 400, 411, 416, 432, 435, 450, 457, 483, 506, 518, 523, 539, 542, 550, 551], "dnode": [19, 20, 33, 34, 36, 41, 42, 47, 71, 78, 79, 86, 131, 176, 182, 185, 198, 199, 204, 206, 208, 222, 223, 228, 232, 235, 250, 251, 256, 298, 300, 347, 353, 354, 361, 403, 450, 457, 458, 465, 510], "anywai": [19, 20, 33, 34, 36, 41, 42, 557, 558], "mtab": [19, 20, 22, 33, 41, 42, 84, 180, 202, 226, 254, 359, 463], "timedatectl": 19, "bullsey": [20, 23, 39], "backport": [20, 22, 23, 33, 34, 36], "just": [20, 22, 32, 35, 37, 46, 47, 48, 49, 62, 70, 77, 78, 110, 114, 129, 140, 158, 168, 184, 186, 189, 206, 209, 212, 219, 230, 232, 236, 240, 247, 272, 280, 284, 297, 298, 309, 327, 339, 346, 352, 353, 385, 389, 412, 430, 441, 449, 456, 457, 489, 493, 508, 519, 537], "critic": [20, 22, 47, 78, 80, 236, 333, 353, 355, 457, 459, 547, 549, 551, 552, 553, 554, 561], "opt": [20, 22, 32, 33, 41, 42, 74, 78, 80, 184, 206, 232, 236, 298, 333, 350, 353, 355, 453, 457, 459], "90_zf": [20, 22, 23], "pin": [20, 22, 23, 47, 176, 198, 222, 250, 347], "prioriti": [20, 22, 23, 47, 50, 70, 71, 145, 198, 209, 219, 222, 236, 247, 250, 314, 346, 347, 417, 449, 450, 524], "990": [20, 22, 23], "zfs_debug": [21, 102, 377, 481], "zfs_forc": [21, 74, 350, 453], "root": [21, 27, 39, 48, 58, 59, 65, 74, 77, 78, 79, 80, 81, 86, 87, 88, 90, 93, 95, 98, 99, 101, 102, 103, 104, 108, 109, 110, 114, 115, 116, 117, 118, 119, 120, 121, 122, 126, 127, 136, 143, 157, 163, 177, 180, 182, 183, 184, 186, 199, 202, 204, 205, 206, 209, 223, 226, 228, 229, 230, 231, 232, 236, 254, 256, 257, 258, 260, 269, 271, 272, 273, 274, 278, 279, 288, 289, 290, 291, 295, 297, 298, 305, 312, 326, 332, 333, 334, 350, 352, 353, 355, 356, 361, 362, 363, 365, 374, 376, 377, 378, 379, 383, 384, 391, 393, 394, 395, 396, 400, 408, 415, 429, 435, 444, 453, 456, 457, 458, 459, 460, 465, 466, 467, 
469, 472, 474, 477, 478, 480, 481, 482, 483, 487, 488, 489, 493, 494, 495, 496, 497, 498, 499, 500, 501, 505, 506, 515, 522, 536, 542, 557], "bootf": [21, 34, 35, 36, 37, 74, 81, 139, 175, 186, 197, 206, 209, 221, 236, 249, 334, 350, 356, 411, 453, 460, 518], "lot": [21, 47, 53, 70, 79, 177, 199, 222, 223, 250, 251, 354, 449, 458], "use_disk_by_id": 21, "vmlinuz": [21, 22, 35, 37, 41, 42], "10": [21, 22, 32, 33, 35, 37, 46, 47, 48, 53, 71, 73, 74, 76, 78, 79, 104, 127, 131, 147, 155, 163, 174, 176, 177, 184, 186, 196, 198, 199, 206, 208, 209, 220, 222, 223, 231, 232, 235, 236, 248, 250, 251, 274, 295, 298, 300, 332, 347, 349, 350, 353, 354, 379, 400, 403, 427, 435, 450, 452, 453, 455, 457, 458, 483, 506, 510, 526, 534, 542, 553], "some_snapshot": 21, "ro": [21, 78, 184, 206, 232, 298, 353, 457], "debian_some_snapshot": 21, "alon": [21, 90, 101, 120, 232, 260, 271, 290, 365, 376, 395, 469, 480, 499], "bewar": [21, 27, 47, 198], "blindingli": 21, "undon": [21, 127, 184, 206, 232, 295, 400, 506], "destruct": [21, 47, 70, 71, 80, 93, 163, 176, 184, 198, 206, 219, 222, 232, 236, 247, 250, 263, 332, 333, 346, 347, 355, 368, 435, 449, 450, 459, 472, 542], "null": [21, 34, 35, 36, 37, 67, 71, 74, 145, 172, 194, 209, 216, 236, 244, 314, 343, 350, 417, 446, 453, 524], "discov": [21, 47, 71, 81, 155, 186, 209, 222, 236, 250, 324, 334, 347, 356, 427, 450, 460, 534], "san": 21, "float": 21, "usr_some_snapshot": 21, "Or": [21, 104, 231, 274, 379, 483], "buster": [22, 23, 39], "4kib": [22, 47, 81, 186, 209, 236, 334, 356, 460], "els": [22, 33, 43, 47, 62, 71, 77, 88, 104, 110, 114, 118, 127, 136, 168, 176, 184, 186, 189, 198, 206, 209, 212, 222, 231, 232, 236, 240, 250, 274, 280, 284, 295, 297, 305, 339, 347, 352, 379, 385, 389, 400, 408, 441, 450, 456, 467, 483, 489, 493, 497, 506, 515], "2a": [22, 33], "2b": [22, 33], "4a": 22, "4b": 22, "512b": [22, 47, 81, 186, 199, 209, 222, 223, 232, 236, 250, 251, 298, 334, 347, 353, 354, 355, 356, 460], "platform_code_differ": 22, "unimpl": [22, 171, 193], "transient": [22, 33, 71, 347, 450], "8a": [22, 33, 58, 59, 562], "8b": [22, 33], "6a": [22, 33], "6b": [22, 33], "mod": 22, "race": [22, 33, 71, 250, 347, 450], "5754": [22, 33], "seem": [22, 33, 34, 35, 36, 37, 70, 449], "guarante": [22, 33, 35, 37, 46, 47, 53, 71, 78, 79, 104, 110, 114, 143, 160, 184, 186, 198, 206, 209, 222, 231, 232, 236, 250, 274, 280, 284, 298, 312, 329, 347, 353, 379, 385, 389, 415, 432, 450, 457, 458, 483, 489, 493, 522, 539, 557], "nodev": [22, 33, 78, 206, 232, 298, 353, 457], "yourusernam": 22, "administr": [22, 46, 47, 48, 52, 53, 59, 71, 76, 77, 78, 79, 80, 81, 88, 99, 104, 118, 119, 122, 126, 127, 136, 139, 143, 159, 163, 167, 176, 177, 178, 180, 183, 184, 185, 186, 187, 188, 198, 199, 200, 202, 205, 206, 208, 209, 210, 211, 222, 223, 224, 226, 229, 231, 232, 235, 236, 237, 238, 250, 251, 258, 269, 274, 288, 289, 295, 297, 298, 305, 308, 312, 328, 332, 333, 334, 337, 347, 352, 353, 354, 355, 356, 363, 374, 379, 393, 394, 400, 408, 411, 415, 431, 435, 439, 450, 455, 456, 457, 458, 459, 460, 467, 478, 483, 497, 498, 501, 505, 506, 515, 518, 522, 538, 542, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "7734": [22, 33], "right": [22, 34, 36, 47, 48, 53, 100, 184, 206, 232, 270, 375, 479], "poorli": [23, 53], "pop": 23, "indefinit": [23, 47, 71, 222, 250, 347, 450], "circumv": 23, "debian_frontend": 23, "noninteract": 23, "dl": [25, 31, 88, 118, 206, 232, 258, 288, 363, 393, 467, 497], "fedoraproject": 25, "pub": [25, 31], "38": 25, "xz": [25, 31, 
35, 37], "sha256sum": [25, 28, 31], "z0": [25, 31, 34, 35, 36, 37], "sha256checksum": [25, 31], "rootfs_tar": 25, "rootfs_tar_dir": 25, "dirnam": 25, "unlink": [25, 47, 71, 125, 198, 222, 250, 294, 347, 399, 450, 504], "interfer": [25, 31, 79, 177, 199, 223, 251, 354, 458], "unalia": [25, 31], "x64": [25, 31], "eval": [25, 26, 31, 32], "tail": [25, 28, 43], "n10": 25, "modpost": 25, "gpl": 25, "symbol": [25, 47, 78, 85, 94, 132, 145, 147, 157, 158, 176, 181, 184, 186, 198, 203, 206, 209, 222, 227, 232, 236, 250, 255, 264, 298, 301, 314, 316, 326, 327, 347, 353, 360, 369, 404, 417, 419, 429, 430, 457, 464, 473, 511, 524, 526, 536, 537], "bio_start_io_acct": 25, "bio_end_io_acct_remap": 25, "makefil": 25, "138": 25, "symver": 25, "1977": 25, "fc36": 25, "55": [25, 145, 158, 163, 332, 435, 524, 537, 542], "933": 25, "recurs": [25, 78, 88, 90, 93, 95, 97, 98, 100, 101, 104, 105, 108, 109, 111, 112, 113, 115, 117, 118, 120, 184, 206, 231, 232, 258, 260, 263, 265, 267, 268, 270, 271, 274, 275, 278, 279, 281, 282, 283, 285, 287, 288, 290, 298, 353, 363, 365, 368, 370, 372, 373, 375, 376, 379, 380, 383, 384, 386, 387, 388, 390, 392, 393, 395, 457, 467, 469, 472, 474, 476, 477, 479, 480, 483, 484, 487, 488, 490, 491, 492, 494, 496, 497, 499], "794": 25, "copr": [25, 26], "fedorainfracloud": [25, 26], "kwizart": [25, 26], "longterm": [25, 26], "add_dracutmodul": [25, 31], "force_driv": [25, 31], "mpt3sa": [25, 31], "virtio_blk": [25, 31], "exec": [25, 31, 78, 88, 95, 98, 102, 115, 118, 127, 184, 206, 230, 232, 258, 272, 288, 295, 298, 353, 363, 377, 393, 400, 457, 467, 474, 477, 481, 494, 497, 506], "dep": [25, 31], "basenam": [25, 31, 65, 444], "relabel": [25, 31], "fixfil": [25, 31], "onboot": [25, 31], "glibc": [25, 31], "langpack": [25, 31], "en": [25, 31, 46], "grub_enable_blscfg": [25, 31], "recoveri": [25, 31, 39, 47, 71, 77, 80, 86, 143, 158, 182, 184, 186, 198, 204, 206, 209, 222, 228, 232, 236, 250, 256, 297, 312, 327, 333, 347, 352, 355, 361, 415, 430, 450, 456, 459, 465, 522, 537, 553], "polici": [25, 47, 180, 202, 219, 226, 247, 254], "albeit": [25, 43], "incomplet": [25, 71, 87, 110, 114, 183, 205, 229, 250, 257, 280, 284, 347, 362, 385, 389, 450, 466, 489, 493], "append": [25, 74, 78, 80, 81, 108, 109, 184, 206, 232, 278, 279, 298, 334, 350, 353, 355, 356, 383, 384, 453, 457, 459, 460, 487, 488], "solv": [25, 46, 47, 48], "appreci": 25, "hidden": [25, 31, 33, 34, 36, 78, 95, 98, 115, 127, 184, 206, 232, 295, 298, 353, 400, 457, 474, 477, 494, 506], "queri": [25, 31, 78, 457], "fuse": 26, "circumst": [26, 46, 47, 71, 184, 206, 219, 222, 247, 250, 346, 347, 449, 450], "nodep": 26, "pend": [26, 47, 70, 71, 78, 104, 145, 184, 198, 206, 209, 219, 222, 231, 232, 236, 247, 250, 274, 298, 314, 346, 347, 353, 379, 417, 449, 450, 457, 483, 524], "forward": [27, 34, 36, 47, 71, 176, 198, 222, 250, 347, 450], "sysutil": 27, "kmod": [27, 31, 39], "rest": [27, 48], "accordingli": [27, 165, 166, 544, 545], "openzfs_load": 27, "zfs_load": 27, "migrat": [27, 46, 550], "elsewher": 27, "sysdir": 27, "arcstat": [27, 47, 63, 71, 241, 340, 347, 442, 450], "arc_summari": [27, 47], "dbufstat": [27, 47, 71, 222, 250, 347, 450], "substitut": [27, 46], "nopasswd": 27, "hw": 27, "ncpu": 27, "cshrc": 27, "rapid": [27, 47], "uf": [27, 48, 136, 186, 209, 236, 305, 408, 515], "without_zf": 27, "fdescf": 27, "temporarili": [27, 41, 42, 47, 48, 61, 71, 103, 121, 176, 184, 198, 206, 222, 232, 239, 250, 273, 291, 338, 347, 378, 396, 440, 450, 482, 500, 549, 550], "arm64": [28, 35, 37], "immut": [28, 48], "nix": [28, 29, 
43], "flake": 28, "experiment": [28, 47, 71, 80, 136, 163, 186, 209, 236, 332, 333, 355, 435, 450, 459, 515, 542], "nixpkg": 28, "ia": 28, "udevadm": [28, 53, 74, 350, 453], "settl": [28, 74, 350, 453], "your_passwd": 28, "templat": 28, "dotfil": [28, 87, 183, 205, 229, 257, 362, 466], "rf": [28, 35, 37, 71, 74, 347, 350, 450, 453], "alic": 28, "q": [28, 47, 65, 86, 131, 145, 171, 185, 193, 204, 208, 209, 228, 235, 236, 256, 300, 314, 361, 403, 417, 444, 465, 510, 524], "nixer": 28, "asm": [28, 43], "examplehost": [28, 43], "break": [28, 34, 35, 36, 37, 41, 42, 47, 79, 90, 101, 120, 134, 232, 236, 260, 271, 290, 303, 354, 365, 376, 395, 406, 458, 469, 480, 499, 513], "disknam": 28, "bootdevices_placehold": 28, "abcd1234": 28, "c4": [28, 29], "urandom": [28, 29, 34, 35, 36, 37, 78, 232, 298, 353, 457], "od": [28, 29], "x4": [28, 29], "sc2016": [28, 43], "initrdavailablekernelmodul": 28, "kernelmodul": 28, "kernelmodules_placehold": 28, "rootpwd": 28, "mkpasswd": 28, "sha": [28, 79, 199, 223, 251, 354, 458], "roothash_placehold": 28, "task": [28, 47, 53, 70, 85, 87, 139, 163, 181, 183, 203, 205, 209, 219, 227, 229, 236, 247, 255, 257, 308, 332, 346, 360, 362, 411, 435, 449, 464, 466, 518, 542], "supportedfilesystem": 29, "forceimportroot": 29, "yourhostid": 29, "devshel": 29, "xdg": 29, "rockylinux": 31, "20230513": 31, "alloweras": 31, "rocki": [32, 39], "signatur": [32, 57, 78, 184, 232, 298, 353, 457], "authent": [32, 78, 127, 184, 206, 232, 295, 298, 353, 400, 457, 506, 557], "fingerprint": [32, 41, 42, 56], "pki": 32, "el7": 32, "el8": 32, "el9": 32, "key1": 32, "older": [32, 46, 47, 48, 67, 79, 86, 104, 108, 109, 123, 127, 163, 177, 182, 184, 199, 204, 206, 209, 223, 228, 231, 232, 236, 250, 251, 256, 274, 278, 279, 292, 295, 332, 354, 361, 379, 383, 384, 397, 400, 435, 446, 458, 465, 483, 487, 488, 502, 506, 542, 557], "36": [32, 71, 79, 450], "pgp": [32, 56], "mit": [32, 56], "edu": [32, 56, 62, 168, 189, 212, 240, 339, 441], "c93a": 32, "fffd": 32, "9f3f": 32, "7b03": 32, "c310": 32, "ceb6": 32, "a9d5": 32, "a1c0": 32, "f14a": 32, "b620": 32, "key2": 32, "37": [32, 41, 42], "7dc7": 32, "299d": 32, "cf7c": 32, "7fd9": 32, "cd87": 32, "701b": 32, "a599": 32, "fd5e": 32, "9db8": 32, "4141": 32, "el6": 32, "And": [32, 53, 102, 377, 481], "switch": [32, 34, 35, 37, 43, 47, 48, 53, 73, 85, 170, 174, 176, 181, 192, 196, 203, 215, 220, 227, 243, 248, 255, 349, 360, 452, 464], "releasev": 32, "did": [32, 35, 37, 46, 47, 54, 79, 80, 110, 114, 177, 186, 199, 209, 223, 232, 236, 251, 280, 284, 333, 354, 355, 385, 389, 458, 459, 489, 493], "vagrant": 32, "localhost": [32, 184, 206, 232, 295], "showdupl": 32, "08": 32, "ago": [32, 48], "tor": 32, "31": [32, 56, 102, 125, 146, 150, 209, 236, 250, 354, 377, 399, 409, 418, 422, 481, 504, 525, 529, 553], "jan": 32, "05": [32, 34, 73, 196, 220, 248, 349, 452], "former": [32, 48, 54, 71, 232, 275, 347, 450], "feedback": 32, "stabil": [32, 41, 42, 46, 50, 53, 71, 80, 127, 163, 176, 186, 198, 206, 209, 222, 232, 236, 250, 295, 332, 333, 347, 355, 400, 435, 450, 459, 506, 542], "upcom": [32, 47, 71, 250, 347, 450], "20": [33, 36, 37, 38, 39, 41, 42, 47, 48, 71, 78, 95, 98, 115, 127, 147, 163, 184, 186, 198, 206, 209, 222, 232, 236, 239, 250, 295, 298, 332, 347, 353, 400, 435, 450, 457, 474, 477, 494, 506, 526, 542], "bionic": 33, "alt": [33, 34, 36, 67, 343, 446], "univers": [33, 34, 36, 38], "3a": 33, "3b": 33, "5a": 33, "5b": 33, "netplan": [33, 34, 35, 36, 37], "netcfg": [33, 34, 35, 36, 37], "yaml": [33, 34, 35, 36, 37], "ethernet": [33, 34, 35, 36, 
37], "dhcp4": [33, 34, 35, 36, 37], "multivers": [33, 34, 36], "hwe": 33, "addgroup": [33, 34, 35, 36, 37], "lpadmin": [33, 34, 35, 36, 37], "sambashar": [33, 34, 35, 36, 37], "grub_timeout_styl": [33, 34, 36], "grub_timeout": [33, 34, 36], "grub_recordfail_timeout": [33, 34, 36], "splash": [33, 34, 36], "shimx64": 33, "gdm3": 33, "initialsetupen": 33, "render": [33, 34, 35, 36, 37, 43, 46, 47, 71, 80, 177, 186, 199, 209, 223, 236, 250, 251, 333, 347, 355, 450, 459], "22": [34, 35, 38, 39, 127, 155, 184, 206, 232, 295, 400, 506, 534], "grave": 34, "ubuntu_uuid": [34, 36], "lead": [34, 46, 47, 50, 53, 54, 67, 71, 77, 80, 87, 96, 106, 124, 139, 140, 172, 175, 176, 180, 183, 184, 186, 194, 197, 198, 202, 205, 206, 209, 216, 221, 222, 226, 229, 232, 236, 244, 249, 250, 254, 257, 266, 276, 293, 297, 309, 333, 343, 347, 352, 355, 362, 371, 381, 398, 411, 412, 446, 450, 456, 459, 466, 475, 485, 503, 518, 519], "underli": [34, 47, 50, 53, 71, 78, 80, 81, 129, 131, 139, 145, 158, 160, 163, 176, 177, 185, 186, 198, 199, 206, 208, 209, 221, 222, 223, 232, 235, 236, 249, 250, 251, 298, 300, 314, 327, 329, 332, 333, 334, 347, 353, 355, 356, 403, 411, 417, 430, 432, 435, 450, 457, 459, 460, 508, 510, 518, 524, 537, 539, 542], "efi2": 34, "renam": [34, 36, 46, 47, 53, 71, 83, 88, 90, 91, 92, 93, 94, 101, 104, 107, 117, 118, 120, 127, 176, 184, 198, 206, 222, 232, 250, 253, 258, 260, 264, 271, 277, 287, 288, 290, 295, 347, 358, 363, 365, 369, 376, 382, 392, 393, 395, 400, 450, 462, 467, 469, 470, 471, 472, 473, 480, 483, 486, 496, 497, 499, 506], "had": [34, 41, 42, 46, 47, 48, 53, 54, 70, 71, 79, 93, 108, 109, 125, 163, 177, 184, 198, 199, 206, 209, 222, 223, 232, 236, 250, 251, 263, 278, 279, 294, 332, 347, 354, 368, 383, 384, 399, 435, 449, 450, 458, 472, 487, 488, 504, 542, 561], "typo": 34, "plural": 34, "accountservic": 34, "harm": [34, 47, 71, 198, 222, 250, 347, 450], "rollback": [34, 74, 83, 88, 104, 108, 109, 117, 118, 127, 143, 184, 186, 206, 209, 231, 232, 236, 253, 258, 274, 278, 279, 287, 288, 295, 312, 350, 358, 363, 379, 383, 384, 392, 393, 400, 415, 453, 462, 467, 483, 487, 488, 496, 497, 506, 522], "rmdir": 34, "nearli": [34, 46, 48, 77, 184, 206, 232, 297, 352, 456], "bidirect": 34, "collabor": 34, "far": [34, 46, 47, 71, 198, 222, 250, 347, 450], "trivial": [34, 78, 171, 193, 298, 353, 457], "hack": [34, 46], "partn": [34, 43], "cipher": [34, 36, 78, 90, 101, 108, 109, 120, 232, 260, 271, 278, 279, 290, 298, 353, 365, 376, 383, 384, 395, 457, 469, 480, 487, 488, 499], "hopefulli": [34, 36], "focal": [34, 35], "vim": [34, 36], "tini": [34, 36], "n5": [34, 36], "t5": [34, 36], "label": [34, 35, 36, 37, 43, 47, 48, 66, 71, 80, 86, 131, 139, 140, 143, 146, 163, 175, 182, 184, 185, 186, 197, 204, 208, 209, 221, 228, 235, 236, 249, 250, 256, 300, 309, 312, 315, 332, 333, 347, 355, 361, 403, 411, 412, 415, 418, 435, 445, 450, 459, 465, 510, 518, 519, 522, 525, 542, 562], "simpler": [34, 36], "proof": [34, 36, 54], "motherboard": [34, 36], "deadlock": [34, 36, 47, 53, 70, 92, 127, 219, 247, 346, 400, 449, 471, 506], "give": [34, 36, 41, 42, 46, 48, 71, 78, 79, 94, 140, 168, 176, 177, 184, 186, 189, 198, 199, 206, 209, 212, 222, 223, 232, 236, 240, 250, 251, 264, 298, 309, 347, 353, 354, 369, 412, 450, 457, 458, 473, 519], "trade": [34, 36, 47, 48, 71, 81, 186, 209, 222, 236, 250, 334, 347, 356, 450, 460], "bother": [34, 36], "500m": [34, 36, 71, 347, 450], "8200": [34, 36], "fd00": [34, 36], "swize": [34, 36], "hiber": [34, 36], "2g": [34, 36, 184], "be00": [34, 36], "constrain": [34, 36, 47, 
219, 247, 346, 449], "500": [34, 36, 47, 49, 71, 78, 79, 176, 198, 222, 250, 251, 298, 347, 353, 354, 450, 457, 458], "inabl": [34, 36], "_must_": [34, 36], "realli": [34, 36, 47, 53, 54, 143, 186, 209, 236, 312, 415, 522], "10_linux_zf": [34, 36], "appar": [34, 35, 36, 37], "umask": [34, 35, 36, 37], "ccm": [34, 35, 78, 232, 298, 353, 457], "tr": [34, 35, 36, 37], "dc": [34, 35, 36, 37, 46, 53], "ubuntu_": [34, 35, 36, 37], "userdata": [34, 35, 36, 37], "root_": [34, 35, 36, 37], "fat": [34, 36, 48, 71, 450], "grubenv": [34, 36], "recordfail": [34, 36], "duplic": [34, 36, 46, 47, 48, 71, 77, 86, 176, 182, 184, 198, 204, 206, 228, 232, 250, 256, 297, 347, 352, 361, 450, 456, 465], "irrelev": [34, 36, 46, 48], "install_devic": [34, 36], "raid5": [34, 36], "raid6": [34, 36], "lxd": [34, 35, 36, 37], "launchpadlibrarian": [34, 35], "478315221": [34, 35], "2150": [34, 35], "p1": [34, 35, 37], "1875577": [34, 35], "init_on_alloc": [34, 35, 36, 37], "fallback": [34, 36], "ever": [34, 36, 48, 53, 67, 79, 110, 114, 127, 136, 172, 177, 186, 194, 199, 206, 209, 216, 223, 232, 236, 244, 251, 280, 284, 295, 305, 343, 354, 385, 389, 400, 408, 446, 458, 489, 493, 506, 515], "requiresmountsfor": [34, 36, 102, 230, 272, 377, 481], "history_ev": [34, 35, 41, 42, 102, 230, 272, 377, 481], "cacher": [34, 35, 41, 42, 102, 230, 272, 377, 481], "root_d": [34, 35, 36, 37], "_": [34, 35, 36, 37, 47, 76, 78, 81, 136, 184, 186, 206, 209, 232, 236, 298, 305, 353, 408, 455, 457, 460, 515], "adm": [34, 35, 36, 37], "microsd": [35, 37], "jeff": [35, 37], "geerl": [35, 37], "comparison": [35, 37, 78, 184, 206, 232, 298, 353, 457], "enclosur": [35, 37, 53, 73, 76, 85, 129, 135, 145, 148, 149, 158, 163, 174, 181, 196, 203, 209, 220, 227, 236, 248, 255, 314, 349, 360, 417, 452, 455, 464, 508, 524], "uasp": [35, 37], "solid": [35, 37, 46, 48], "ssd": [35, 37, 48, 51, 53, 81, 160, 236, 329, 334, 356, 432, 460, 539], "eeprom": [35, 37], "insert": [35, 37, 71, 183, 205, 222, 229, 250, 257, 347, 450], "attach": [35, 37, 47, 53, 64, 71, 73, 79, 80, 81, 83, 87, 99, 119, 122, 126, 127, 132, 134, 138, 139, 144, 153, 155, 162, 163, 174, 175, 186, 191, 196, 197, 209, 214, 220, 221, 236, 242, 248, 249, 250, 251, 253, 269, 289, 295, 301, 303, 307, 313, 322, 324, 332, 333, 334, 341, 347, 349, 354, 355, 356, 358, 362, 374, 394, 400, 404, 406, 410, 411, 416, 425, 427, 435, 443, 450, 452, 458, 459, 460, 462, 466, 478, 498, 501, 505, 506, 511, 513, 517, 518, 523, 532, 534, 542, 548, 549, 550, 552, 555, 556, 561], "rpi": [35, 37], "boot_ord": [35, 37], "0xf41": [35, 37], "misc": [35, 37], "folder": [35, 37, 48], "decompress": [35, 37, 47, 48, 78, 79, 86, 110, 114, 131, 165, 166, 177, 182, 184, 199, 204, 206, 222, 223, 228, 232, 235, 250, 251, 256, 280, 284, 298, 300, 353, 354, 361, 385, 389, 403, 457, 458, 465, 489, 493, 510, 544, 545], "postinst": [35, 37], "ext4": [35, 37, 48, 53], "unpack": [35, 37], "cdimag": [35, 37], "preinstal": [35, 37], "raspi": [35, 37], "combin": [35, 37, 44, 47, 48, 53, 57, 66, 71, 78, 79, 80, 85, 86, 108, 109, 110, 114, 136, 181, 184, 186, 203, 204, 206, 209, 222, 227, 228, 230, 232, 236, 250, 251, 255, 256, 272, 278, 279, 280, 284, 298, 305, 333, 347, 353, 354, 355, 360, 361, 383, 384, 385, 389, 408, 445, 450, 457, 458, 459, 464, 465, 487, 488, 489, 493, 515], "sfdisk": [35, 37], "0xddbefb06": 35, "img1": [35, 37], "2048": [35, 37, 71, 222, 250, 347, 450], "524288": [35, 37], "bootabl": [35, 37, 81, 186, 209, 236, 334, 356, 460], "img2": [35, 37], "526336": [35, 37], "6285628": 35, "83": [35, 37], "certainli": 
[35, 37], "mmcblk0": [35, 37], "sdx": [35, 37, 53], "letter": [35, 37, 62, 76, 78, 81, 86, 136, 168, 184, 186, 189, 206, 209, 212, 232, 236, 240, 298, 305, 339, 353, 408, 441, 455, 457, 460, 465, 515], "lsblk": [35, 37], "diskp": [35, 37], "proceed": [35, 37, 47, 71, 176, 198, 222, 250, 347, 450], "that_partit": [35, 37], "labelclear": [35, 37, 83, 136, 138, 151, 163, 186, 209, 236, 253, 305, 307, 320, 332, 358, 408, 410, 423, 435, 462, 515, 517, 530, 542], "expand": [35, 37, 46, 47, 64, 67, 71, 76, 79, 81, 133, 139, 147, 148, 149, 163, 175, 186, 197, 209, 221, 236, 249, 317, 318, 332, 334, 341, 356, 411, 420, 421, 435, 443, 455, 460, 518, 526, 527, 528, 542], "unboot": [35, 37], "itself": [35, 37, 46, 48, 53, 76, 77, 78, 79, 86, 90, 101, 108, 109, 120, 177, 184, 199, 206, 223, 232, 250, 251, 260, 271, 278, 279, 290, 297, 298, 352, 353, 354, 365, 376, 383, 384, 395, 455, 456, 457, 458, 465, 469, 480, 487, 488, 499, 561], "strictli": [35, 37, 47, 48], "losetup": [35, 37], "fp": [35, 37, 184, 206, 232], "writabl": [35, 37, 77, 91, 127, 184, 206, 232, 295, 297, 352, 400, 456, 470, 506], "destin": [35, 37, 54, 71, 77, 79, 86, 108, 109, 110, 114, 184, 206, 223, 232, 251, 278, 279, 280, 284, 354, 361, 383, 384, 385, 389, 450, 456, 458, 465, 487, 488, 489, 493], "p3": [35, 37], "p2": [35, 37], "conv": [35, 37], "fsync": [35, 37, 48, 78, 80, 127, 184, 186, 206, 209, 232, 236, 295, 298, 333, 353, 355, 400, 457, 459, 506], "se": [35, 37, 73, 196, 220, 248, 349, 452], "xxxxxxxx": [35, 37], "zcat": [35, 37], "qf": [35, 37], "vmlinux": [35, 37], "usercfg": [35, 37], "followkernel": [35, 37], "boot_delai": [35, 37], "zz": [35, 37], "bak": [35, 37], "vmlinuxtmp": [35, 37], "controlpersist": [35, 37], "kill": [35, 37, 53, 65, 67, 145, 147, 172, 194, 216, 244, 257, 343, 417, 419, 444, 446, 524, 526], "slot": [35, 37, 47, 53, 71, 73, 85, 135, 148, 149, 158, 163, 174, 181, 196, 203, 220, 227, 248, 250, 255, 347, 349, 360, 450, 452, 464, 548, 550], "pv": [35, 37], "unattend": [35, 37], "background": [35, 37, 47, 71, 79, 125, 127, 151, 162, 163, 176, 177, 198, 199, 222, 223, 236, 250, 251, 294, 295, 320, 331, 332, 347, 354, 399, 400, 423, 434, 435, 450, 458, 504, 506, 530, 541, 542], "safeti": [35, 37, 47, 48, 53, 79, 354, 458], "flush": [35, 37, 46, 47, 48, 53, 71, 78, 131, 159, 176, 184, 185, 198, 206, 208, 222, 232, 235, 250, 298, 300, 347, 353, 403, 431, 450, 457, 510, 538], "o_": [35, 37], "transact": [35, 37, 47, 48, 50, 51, 54, 58, 59, 71, 78, 79, 80, 86, 131, 143, 176, 177, 182, 184, 185, 186, 198, 199, 204, 206, 208, 209, 222, 223, 228, 232, 235, 236, 250, 251, 256, 298, 300, 312, 333, 347, 353, 354, 355, 361, 403, 415, 450, 457, 458, 459, 465, 510, 522, 561], "persist": [35, 37, 47, 53, 71, 76, 78, 80, 81, 86, 127, 131, 146, 148, 149, 163, 184, 185, 186, 206, 208, 209, 222, 232, 235, 236, 250, 256, 295, 300, 315, 317, 318, 332, 333, 334, 347, 355, 356, 361, 400, 403, 418, 420, 421, 435, 450, 455, 457, 459, 460, 465, 506, 510, 525, 527, 528, 542, 555], "databas": [35, 37, 51, 78, 80, 184, 186, 206, 209, 232, 236, 298, 333, 353, 355, 457, 459], "cf": [35, 37, 71, 74, 347, 350, 450, 453], "du": [35, 37, 78, 184, 206, 232, 298, 353, 457], "sxm": [35, 37], "cmdline": [35, 37, 74, 102, 350, 377, 453, 481], "rootfstyp": [35, 37, 74, 350, 453], "fixrtc": [35, 37], "180": [35, 37, 79, 199, 223, 251, 354, 458], "nosplash": [35, 37], "reread": [35, 37], "delus": [35, 37], "eth0": [35, 37], "autoremov": [35, 37], "bcach": [35, 37], "btrf": [35, 37, 53], "prog": [35, 37], "lvm2": [35, 37], "multipath": [35, 37, 
53, 73, 85, 129, 145, 174, 181, 196, 203, 209, 220, 227, 236, 248, 255, 314, 349, 360, 417, 452, 464, 508, 524], "iscsi": [35, 37, 47, 48, 184], "overlayroot": [35, 37], "xfsprog": [35, 37], "dtoverlai": [35, 37], "vc4": [35, 37], "fkm": [35, 37], "v3d": [35, 37], "jammi": [36, 37], "welcom": [36, 59], "beta": [37, 91, 92, 93, 107, 112, 117, 127, 184, 206, 232, 295, 400, 470, 471, 472, 486, 491, 496, 506], "0x638274e3": 37, "7193932": 37, "codenam": 38, "18": [38, 39, 41, 81, 86, 127, 184, 206, 232, 295, 299, 400, 460, 465, 506], "raspberri": [38, 39], "pi": [38, 39, 61, 440], "aaron": [39, 52], "toponc": [39, 52], "excel": [39, 46, 53], "overview": [39, 74, 77, 80, 127, 163, 295, 297, 332, 333, 350, 352, 355, 400, 435, 453, 456, 459, 506, 542], "gentoo": [39, 53, 58, 59], "nixo": [39, 43, 58, 59], "opensus": [39, 58, 59], "extern": 39, "leap": [39, 40], "tumblewe": [39, 40], "kabi": 39, "minor": [39, 41, 42, 555], "el": [39, 184], "sle": 40, "yast2": [41, 42], "zypper": [41, 42], "experi": [41, 42, 46, 48], "peopl": [41, 42, 46, 53], "zaryob": [41, 42], "unoffici": 41, "lroz": 41, "lsb_releas": 41, "addrepo": [41, 42], "kmp": [41, 42], "suse": [41, 42], "flatpak": [41, 42], "oss": [41, 42], "nonoss": 41, "trust": [41, 42, 46, 81, 110, 114, 184, 186, 209, 236, 280, 284, 334, 356, 385, 389, 460, 489, 493], "22c07ba5": [41, 42], "34178cd0": [41, 42], "2efe22aa": [41, 42], "b88b2fd4": [41, 42], "3dbdc284": [41, 42], "mon": [41, 42, 553], "40": [41, 42, 46, 53], "2014": [41, 42, 46, 48], "2024": [41, 42], "pubkei": [41, 42], "53674dd4": [41, 42], "reject": [41, 42, 78, 184, 206, 232, 298, 353, 457], "pattern": [41, 42, 47, 50, 71, 74, 78, 184, 186, 206, 209, 222, 232, 236, 250, 298, 305, 347, 350, 353, 450, 453, 457], "Thats": [41, 42], "enhanced_bas": [41, 42], "But": [41, 42, 47, 53, 70, 78, 80, 184, 206, 219, 232, 247, 298, 346, 353, 449, 457, 459], "bloat": [41, 42], "annoi": [41, 42], "enhanc": [41, 42, 79, 223, 251, 354, 458], "yast2_basi": 41, "utf8": [41, 42], "yout": [41, 42], "localectl": [41, 42], "lang": [41, 42], "iputil": [41, 42], "ca": [41, 42], "certif": [41, 42, 78, 353, 457], "mozilla": [41, 42], "pam": [41, 42], "shadow": [41, 42, 78, 184, 206, 232, 298, 353, 457], "dbu": [41, 42], "libutempter0": [41, 42], "deinstal": [41, 42], "lsb": 41, "dm_name": [41, 42], "dm": [41, 42, 61, 73, 85, 174, 181, 196, 203, 220, 227, 239, 248, 255, 338, 349, 360, 440, 452, 464], "crypt": [41, 42], "4537537": 41, "genhostid": [41, 130, 207, 234, 299, 402, 509], "gzip": [41, 47, 48, 71, 78, 79, 127, 165, 166, 184, 198, 206, 222, 232, 250, 251, 295, 298, 347, 353, 354, 400, 450, 457, 458, 506, 544, 545], "processor": [41, 47, 48], "32bit": 41, "kernel_vers": 41, "eo": 41, "digit": [41, 46, 130, 207, 234, 299, 402, 509], "mkinitrd": [41, 42], "volume_cryptsetup_fail": [41, 42], "troubl": [41, 42], "suggest": [41, 42, 46, 48, 53, 76, 78, 81, 184, 206, 232, 298, 353, 455, 457, 460, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "menuentri": [41, 43], "150300": 41, "59": [41, 86, 182, 204, 228, 256, 361, 465, 553], "60": [41, 47, 65, 67, 71, 176, 198, 222, 250, 343, 347, 444, 446, 450], "bootctl": [41, 42], "cmd": 41, "repli": 41, "medium": 41, "opensuse_leap": 41, "titl": [41, 42, 110, 114, 145, 209, 236, 314, 385, 389, 417, 489, 493, 524], "zenlinux": [41, 42], "opensuse_tumblewe": 42, "proce": [42, 43, 47], "nameserv": 42, "recov": [43, 47, 71, 81, 139, 143, 175, 176, 186, 197, 198, 209, 221, 222, 236, 249, 250, 312, 334, 347, 356, 411, 415, 450, 460, 518, 522, 547, 
555, 561], "my_boot_env": 43, "boot_dataset": 43, "df": [43, 78, 133, 206, 232, 298, 353, 457], "root_dataset": 43, "new_root_dataset": 43, "new_boot_dataset": 43, "is_grub2": 43, "elif": 43, "new_boot_env_entry_": 43, "drive1": 43, "configfil": 43, "esc": 43, "submenu": 43, "rm_boot_dataset": 43, "rm_root_dataset": 43, "rm_boot_dataset_origin": 43, "f3": 43, "rm_root_dataset_origin": 43, "new_entry_escap": 43, "procedur": [43, 67, 80, 172, 194, 216, 236, 244, 333, 343, 355, 446, 459, 550], "shutdown": [43, 46, 74, 103, 108, 109, 116, 121, 184, 206, 232, 273, 278, 279, 286, 291, 350, 378, 383, 384, 391, 396, 453, 482, 487, 488, 495, 500], "minut": [43, 46, 47, 67, 127, 163, 172, 194, 216, 244, 250, 343, 446, 506, 542], "slower": [43, 46, 47, 48, 70, 71, 79, 198, 219, 222, 247, 250, 251, 346, 347, 354, 449, 450, 458], "2387489723748": 43, "disk_known_good": 43, "disk_to_replac": 43, "part5": 43, "disk_new": 43, "accident": [43, 47, 79, 86, 204, 228, 256, 354, 361, 458, 465, 555], "overwritten": [43, 71, 80, 250, 333, 347, 355, 450, 459], "inaccess": 43, "resid": [43, 47, 78, 80, 102, 184, 206, 232, 298, 353, 355, 377, 457, 459, 481], "remain": [43, 46, 47, 48, 49, 65, 70, 71, 76, 78, 79, 80, 81, 87, 93, 104, 108, 109, 112, 117, 127, 155, 157, 162, 176, 177, 183, 184, 186, 198, 199, 205, 206, 209, 222, 223, 229, 231, 232, 236, 250, 251, 257, 274, 278, 279, 295, 298, 324, 326, 331, 333, 334, 347, 353, 354, 355, 356, 362, 379, 383, 384, 400, 427, 429, 434, 444, 449, 450, 455, 457, 458, 459, 460, 466, 472, 483, 487, 488, 491, 496, 506, 534, 536, 541], "untouch": [43, 53], "rescu": 43, "cddl": [44, 183, 205, 229, 257], "creativ": 44, "sharealik": 44, "cc": 44, "BY": 44, "spi": 44, "501": 44, "nonprofit": 44, "donat": 44, "financ": 44, "legal": [44, 65, 78, 184, 186, 206, 209, 232, 236, 298, 353, 444, 457], "gplv2": 44, "piec": [44, 45, 71, 79, 176, 177, 198, 199, 222, 223, 250, 251, 347, 354, 450, 458], "opinion": 44, "freedom": [44, 49, 71, 176, 198, 222, 250, 347, 450], "law": 44, "center": [44, 46], "conserv": [44, 47, 71, 80, 176, 198, 222, 250, 347, 450, 459], "foundat": 44, "linear": [45, 47, 71, 176, 198, 222, 250, 347, 450], "defin": [45, 47, 49, 50, 53, 71, 73, 76, 78, 81, 87, 88, 95, 98, 104, 110, 114, 115, 118, 127, 131, 139, 174, 175, 176, 183, 184, 186, 196, 197, 198, 205, 206, 208, 209, 220, 221, 222, 229, 231, 232, 235, 236, 248, 249, 250, 257, 258, 274, 280, 284, 288, 295, 298, 300, 334, 347, 349, 353, 356, 362, 363, 379, 385, 389, 393, 400, 403, 411, 450, 452, 455, 457, 460, 466, 467, 474, 477, 483, 489, 493, 494, 497, 506, 510, 518], "zfs_vdev_async_write_max_act": [45, 50, 71, 176, 198, 222, 250, 347, 450], "zfs_vdev_async_write_min_act": [45, 50, 71, 176, 198, 222, 250, 347, 450], "_______": [45, 71, 176, 198, 222, 250, 347, 450], "______": [45, 71, 86, 176, 182, 198, 204, 222, 228, 250, 256, 347, 361, 450, 465], "_________": [45, 71, 176, 198, 222, 250, 347, 450], "zfs_dirty_data_max": [45, 49, 71, 176, 198, 222, 250, 347, 450], "zfs_vdev_async_write_active_max_dirty_perc": [45, 49, 71, 176, 198, 222, 250, 347, 450], "zfs_vdev_async_write_active_min_dirty_perc": [45, 71, 176, 198, 222, 250, 347, 450], "dirti": [45, 47, 48, 49, 50, 71, 159, 163, 176, 198, 209, 222, 236, 250, 328, 332, 347, 431, 435, 450, 538, 542], "percentag": [45, 47, 48, 49, 61, 67, 71, 76, 81, 86, 131, 158, 172, 176, 182, 185, 186, 194, 198, 204, 208, 209, 216, 222, 228, 235, 236, 239, 244, 250, 256, 300, 327, 334, 338, 343, 347, 356, 361, 403, 430, 440, 446, 450, 455, 460, 465, 510, 537], "schedul": [45, 
47, 49, 51, 58, 59, 70, 71, 76, 154, 176, 198, 219, 222, 236, 247, 250, 323, 346, 347, 426, 449, 450, 455, 533], "threshold": [45, 47, 48, 67, 70, 71, 76, 78, 172, 176, 186, 194, 198, 209, 216, 219, 222, 232, 236, 244, 247, 250, 298, 343, 346, 347, 353, 446, 449, 450, 455, 457], "linearli": [45, 47, 48, 71, 176, 198, 222, 250, 347, 450], "busi": [45, 47, 71, 78, 176, 184, 198, 206, 222, 232, 250, 298, 347, 353, 450, 457], "stai": [45, 47, 71, 110, 114, 176, 198, 222, 250, 280, 284, 347, 385, 389, 450, 489, 493], "slope": [45, 71, 176, 198, 222, 250, 347, 450], "rate": [45, 46, 47, 48, 49, 53, 70, 71, 160, 172, 176, 194, 198, 216, 219, 222, 236, 244, 247, 250, 329, 346, 347, 432, 449, 450, 539], "incom": [45, 49, 71, 176, 198, 222, 250, 347, 450], "backend": [45, 49, 71, 176, 198, 222, 250, 347, 450], "silent": [46, 80, 155, 186, 209, 236, 324, 355, 427, 459, 534], "industri": 46, "superior": [46, 48], "bui": 46, "workstat": 46, "adher": 46, "potenti": [46, 47, 71, 77, 79, 80, 86, 140, 143, 176, 184, 186, 198, 199, 206, 209, 222, 223, 232, 236, 250, 251, 256, 297, 309, 312, 333, 347, 352, 354, 355, 361, 412, 415, 450, 456, 458, 459, 465, 519, 522, 555], "reliabl": [46, 53, 57], "serv": [46, 47, 48, 49, 71, 80, 176, 186, 198, 209, 222, 236, 250, 333, 347, 355, 450, 459], "handicap": 46, "compet": [46, 47, 48], "microprocessor": 46, "complex": [46, 53], "errata": [46, 562], "modern": [46, 47, 48, 71, 175, 176, 197, 198, 222, 250, 347, 450], "quasi": 46, "chip": 46, "bundl": [46, 48, 78, 353, 457], "interact": [46, 71, 90, 101, 103, 116, 120, 121, 232, 250, 260, 271, 273, 290, 291, 347, 365, 376, 378, 391, 395, 396, 450, 469, 480, 482, 495, 499, 500], "regist": [46, 68, 79, 199, 217, 223, 245, 251, 344, 354, 447, 458], "flip": [46, 47, 53, 71, 131, 176, 198, 222, 235, 250, 300, 347, 403, 450, 510, 555], "fairli": [46, 47, 71, 176, 198, 222, 250, 347, 450], "dramat": 46, "consequ": [46, 48, 78, 183, 184, 205, 206, 229, 232, 298, 353, 457], "techniqu": 46, "ordinari": 46, "radiat": 46, "randomli": [46, 47, 53, 71, 78, 108, 109, 110, 114, 130, 171, 193, 207, 222, 232, 234, 250, 278, 279, 280, 284, 298, 299, 347, 353, 383, 384, 385, 389, 402, 450, 457, 487, 488, 489, 493, 509], "undefin": [46, 67, 78, 92, 184, 194, 206, 216, 232, 244, 262, 298, 343, 353, 367, 446, 457, 471], "four": [46, 47, 48, 62, 71, 110, 114, 168, 171, 182, 189, 193, 212, 219, 240, 280, 284, 339, 347, 385, 389, 441, 450, 489, 493], "runtim": [46, 64, 71, 191, 198, 214, 222, 242, 250, 341, 347, 443, 450], "alter": [46, 48, 90, 101, 110, 114, 120, 232, 260, 271, 280, 284, 290, 365, 376, 385, 389, 395, 469, 480, 489, 493, 499], "routin": 46, "reload": [46, 47, 71, 102, 230, 250, 272, 347, 377, 450, 481], "realiz": [46, 53], "unimport": [46, 177, 184, 199, 223, 251], "poor": [46, 48], "absolut": [46, 65, 73, 78, 79, 81, 100, 136, 174, 184, 186, 196, 206, 209, 220, 232, 236, 248, 270, 298, 305, 349, 353, 354, 356, 375, 408, 444, 452, 457, 458, 460, 479, 515], "Such": [46, 47, 93, 139, 175, 184, 197, 206, 221, 232, 249, 263, 368, 411, 472, 518], "extraordinarili": 46, "rare": [46, 47, 48, 53, 70, 71, 79, 176, 177, 198, 199, 219, 222, 223, 247, 250, 251, 346, 347, 354, 449, 450, 458], "interpos": 46, "multipli": [46, 47, 48, 71, 78, 176, 184, 198, 206, 222, 232, 250, 298, 347, 353, 450, 457], "unrecogn": [46, 79, 354, 458], "smart": 46, "passthrough": [46, 78, 184, 206, 232, 298, 353, 457], "erc": 46, "unreli": [46, 53], "bandwidth": [46, 47, 71, 86, 145, 158, 163, 176, 182, 198, 204, 209, 222, 228, 236, 250, 256, 332, 347, 361, 435, 
450, 465, 524, 537, 542], "pci": [46, 53, 73, 85, 174, 181, 196, 203, 220, 227, 248, 255, 349, 360, 452, 464], "express": [46, 47, 71, 76, 78, 131, 160, 171, 176, 184, 185, 193, 198, 206, 207, 208, 222, 232, 234, 235, 236, 250, 298, 299, 300, 329, 347, 353, 403, 432, 450, 455, 457, 510, 539], "unnecessari": [46, 47, 48, 71, 79, 177, 199, 222, 223, 250, 251, 347, 354, 450, 458], "marc": 46, "bevand": 46, "he": 46, "opportun": [46, 48], "reconstruct": [46, 47, 71, 79, 80, 86, 133, 153, 222, 228, 250, 251, 256, 302, 322, 347, 354, 355, 361, 405, 425, 450, 458, 459, 465, 512, 532], "necessarili": [46, 78, 80, 86, 90, 101, 120, 158, 184, 206, 232, 236, 256, 260, 271, 290, 298, 327, 353, 361, 365, 376, 395, 430, 457, 459, 465, 469, 480, 499, 537], "overhead": [46, 47, 48, 71, 77, 219, 250, 347, 450, 456], "partial": [46, 47, 48, 71, 78, 104, 108, 109, 110, 114, 152, 176, 198, 206, 222, 231, 232, 236, 250, 274, 278, 279, 280, 284, 298, 321, 347, 353, 379, 383, 384, 385, 389, 424, 450, 457, 483, 487, 488, 489, 493, 531], "certainti": 46, "suffer": [46, 47, 48, 50, 71, 80, 176, 198, 222, 236, 250, 333, 347, 355, 450, 459, 555], "obtain": [46, 48, 78, 90, 101, 120, 184, 206, 232, 260, 271, 290, 298, 353, 365, 376, 395, 457, 469, 480, 499], "misreport": [46, 48], "transit": [46, 71, 79, 199, 223, 251, 347, 354, 450, 458], "xp": [46, 48], "eol": 46, "misalign": 46, "model": [46, 48, 53, 76, 145, 158, 163, 209, 236, 332, 435, 455, 524, 537, 542], "manufactur": 46, "mitig": [46, 183, 205, 229, 257], "manner": [46, 47, 65, 78, 87, 91, 92, 94, 104, 108, 109, 112, 132, 133, 136, 153, 183, 184, 186, 205, 206, 209, 229, 231, 232, 236, 257, 261, 262, 264, 274, 278, 279, 282, 298, 301, 302, 305, 322, 353, 362, 366, 367, 369, 379, 383, 384, 387, 404, 405, 408, 425, 444, 457, 466, 470, 471, 473, 483, 487, 488, 491, 511, 512, 515, 532], "ineffici": 46, "flight": 46, "weaker": 46, "embed": [46, 86, 90, 101, 120, 204, 228, 232, 256, 260, 271, 290, 361, 365, 376, 395, 465, 469, 480, 499], "lower": [46, 47, 48, 50, 53, 62, 70, 71, 78, 81, 86, 99, 119, 168, 176, 184, 189, 198, 212, 219, 222, 236, 240, 247, 250, 256, 269, 289, 298, 334, 339, 346, 347, 353, 356, 361, 374, 394, 441, 449, 450, 457, 460, 465, 478, 498], "inspect": [46, 62, 86, 142, 168, 189, 212, 230, 240, 272, 339, 414, 441, 465, 521], "anyon": [46, 53, 88, 118, 127, 184, 206, 232, 295, 400, 467, 497, 506], "expos": [46, 47, 71, 77, 78, 79, 177, 184, 199, 206, 223, 232, 250, 251, 297, 298, 347, 352, 353, 354, 450, 456, 457, 458], "di": [46, 61, 440], "behav": [46, 47, 48, 80, 86, 127, 182, 184, 204, 206, 228, 232, 256, 295, 355, 361, 400, 459, 465, 506], "vendor": [46, 48, 53, 145, 158, 163, 209, 236, 332, 435, 524, 537, 542], "inclin": 46, "hba": [46, 53, 73, 85, 174, 181, 196, 203, 220, 227, 248, 255, 349, 360, 452, 464], "histor": [46, 47, 62, 71, 168, 176, 189, 198, 212, 222, 240, 250, 339, 347, 441, 450], "2009": [46, 95, 98, 115, 127, 172, 184, 194, 206, 216, 232, 295, 400, 474, 477, 494, 506, 553], "4096": [46, 47, 48, 71, 81, 171, 176, 186, 193, 198, 209, 222, 236, 250, 334, 347, 356, 450, 460], "2tb": 46, "market": [46, 48, 110, 114, 280, 284, 385, 389, 489, 493], "2013": [46, 170, 171, 176, 177, 178, 180, 183, 184, 185, 192, 193, 200, 202, 205, 208, 215, 224, 226, 229, 235], "believ": 46, "jumper": 46, "proper": [46, 48, 53, 74, 350, 453], "63": [46, 47, 209, 236], "themselv": [46, 47, 90, 101, 110, 114, 120, 232, 260, 271, 280, 284, 290, 365, 376, 385, 389, 395, 469, 480, 489, 493, 499], "behind": 46, "advers": [46, 78, 184, 206, 232, 298, 353, 
457], "neg": [46, 47, 48, 61, 71, 78, 104, 219, 222, 231, 239, 247, 250, 274, 298, 338, 347, 353, 379, 440, 450, 457, 483], "cheap": [46, 48], "notabl": [46, 47], "western": 46, "polar": 46, "region": [46, 47, 54, 71, 79, 131, 144, 163, 171, 176, 185, 193, 198, 208, 222, 223, 235, 236, 250, 251, 300, 313, 332, 347, 354, 403, 416, 435, 450, 458, 510, 523, 542], "magnet": [46, 48], "surfac": 46, "pose": 46, "imperfect": 46, "vibrat": 46, "compos": 46, "respond": 46, "retri": [46, 47, 71, 450], "conclud": [46, 54], "substanti": [46, 81, 236, 334, 356, 460], "stall": [46, 47, 71, 176, 198, 222, 250, 347, 450], "tler": 46, "seagat": [46, 145, 158, 163, 209, 236, 332, 435, 524, 537, 542], "hitachi": 46, "samsung": [46, 53], "permit": [46, 47, 104, 110, 114, 231, 274, 280, 284, 379, 385, 389, 483, 489, 493], "willing": [46, 48], "spend": [46, 47, 71, 198, 222, 250, 347, 450], "arbitrarili": [46, 80, 459], "advis": [46, 48, 76, 455], "seek": [46, 47, 232], "sacrific": 46, "densiti": [46, 47], "factor": [46, 47, 48, 53, 71, 78, 81, 176, 184, 198, 206, 222, 232, 236, 250, 298, 334, 347, 353, 356, 450, 457, 460], "counterpart": [46, 104, 231, 274, 379, 483], "15k": 46, "millisecond": [46, 47, 49, 71, 131, 158, 163, 175, 176, 197, 198, 208, 222, 235, 236, 250, 300, 327, 332, 347, 403, 430, 435, 450, 510, 537, 542], "averag": [46, 47, 48, 71, 131, 145, 176, 198, 208, 209, 222, 235, 236, 250, 300, 314, 347, 403, 417, 450, 510, 524], "presum": 46, "awai": 46, "Being": 46, "7200": [46, 71, 222, 250, 347, 450], "empir": [46, 47], "measur": [46, 47, 64, 71, 79, 87, 139, 143, 145, 175, 176, 183, 186, 191, 197, 198, 205, 209, 214, 221, 222, 229, 236, 242, 249, 250, 257, 312, 314, 341, 347, 354, 362, 411, 415, 417, 443, 450, 458, 466, 518, 522, 524], "5400": 46, "zil": [46, 48, 53, 71, 79, 80, 86, 159, 163, 176, 182, 186, 198, 204, 209, 222, 228, 236, 250, 256, 328, 332, 333, 347, 355, 361, 431, 435, 450, 458, 459, 465, 538, 542], "l2arc": [46, 48, 61, 71, 73, 78, 80, 81, 86, 146, 174, 176, 184, 196, 198, 206, 220, 222, 232, 239, 248, 250, 256, 298, 315, 333, 334, 338, 347, 349, 353, 355, 356, 361, 418, 440, 450, 452, 457, 459, 460, 465, 525], "slog": [46, 48, 53, 71, 198, 222, 250, 347, 450], "higher": [46, 47, 48, 50, 67, 71, 78, 79, 172, 176, 177, 184, 194, 198, 199, 206, 216, 222, 223, 232, 244, 250, 251, 298, 343, 347, 353, 354, 446, 450, 457, 458], "queue": [46, 47, 50, 70, 71, 102, 125, 145, 176, 198, 209, 219, 222, 236, 247, 250, 294, 314, 346, 347, 377, 399, 417, 449, 450, 481, 504, 524], "reorder": 46, "pata": 46, "object": [46, 47, 56, 70, 71, 77, 78, 79, 86, 88, 118, 128, 131, 139, 165, 166, 175, 176, 177, 182, 184, 185, 197, 198, 199, 204, 206, 208, 219, 221, 222, 223, 228, 232, 235, 247, 249, 250, 251, 256, 258, 288, 296, 297, 298, 300, 346, 347, 352, 353, 354, 361, 363, 393, 401, 403, 411, 449, 450, 456, 457, 458, 465, 467, 497, 507, 510, 518, 544, 545], "metaslab": [46, 71, 79, 86, 131, 176, 182, 185, 198, 204, 208, 222, 228, 235, 250, 251, 256, 300, 347, 354, 361, 403, 450, 458, 465, 510], "metastab": 46, "year": [46, 79, 354, 458], "2003": 46, "2004": 46, "emul": [46, 53, 130, 207, 234, 299, 402, 509], "hdparm": [46, 48], "camcontrol": 46, "domin": 46, "focu": [46, 47], "2017": [46, 198, 204, 207, 219, 234], "predominantli": 46, "primarili": [46, 71, 74, 80, 186, 209, 236, 250, 333, 347, 350, 355, 450, 453, 459], "electr": 46, "buse": 46, "t10": 46, "dif": 46, "crc": 46, "rel_perf": 46, "lba": [46, 47, 48, 71, 79, 176, 198, 222, 250, 251, 347, 354, 450, 458], "smartctl": [46, 145, 209, 236, 
314, 417, 524], "device_namespac": 46, "nvme1n1": [46, 48], "plu": [46, 78, 232, 298, 353, 457], "fmt": 46, "field": [46, 47, 48, 61, 76, 79, 86, 95, 96, 98, 100, 106, 115, 124, 139, 141, 145, 147, 156, 162, 184, 186, 206, 209, 221, 232, 236, 239, 249, 251, 256, 265, 266, 268, 270, 276, 285, 293, 308, 310, 314, 316, 325, 331, 338, 354, 361, 370, 371, 373, 375, 381, 390, 398, 411, 413, 417, 419, 428, 434, 440, 455, 458, 465, 474, 475, 477, 479, 485, 494, 503, 518, 520, 524, 526, 535, 541], "tradition": [46, 78, 184, 206, 232, 298, 353, 457], "vulner": [46, 48, 90, 101, 120, 232, 260, 271, 290, 365, 376, 395, 469, 480, 499], "simultan": [46, 47, 66, 71, 136, 155, 183, 205, 229, 236, 257, 305, 408, 445, 450, 515, 534], "conclus": [46, 65, 444], "brick": 46, "vanish": 46, "literatur": 46, "2015": [46, 56, 175, 197], "claim": [46, 81, 131, 139, 175, 185, 197, 208, 221, 235, 249, 300, 334, 356, 403, 411, 460, 510, 518], "robust": [46, 104, 136, 231, 236, 274, 305, 379, 408, 483, 515], "kingston": 46, "concept": [46, 51, 54, 58, 59, 71, 77, 222, 250, 297, 347, 352, 450, 456], "sole": [46, 78, 80, 139, 175, 184, 197, 206, 221, 232, 236, 249, 298, 333, 353, 355, 411, 457, 459, 518], "unflush": [46, 71, 250, 347, 450], "beyond": [46, 47, 48, 53, 71, 81, 155, 165, 166, 176, 198, 222, 236, 250, 324, 334, 347, 356, 427, 450, 460, 534, 544, 545], "hurt": [46, 71, 198, 222, 250, 347, 450], "laptop": [46, 48], "datacent": 46, "ipmi": 46, "experienc": [46, 548, 549, 550, 551, 554, 555, 559, 560, 561], "exhaust": [46, 47, 64, 71, 77, 104, 184, 191, 198, 206, 214, 222, 231, 232, 242, 250, 274, 297, 341, 347, 352, 379, 443, 450, 456, 483], "750": 46, "p3500": 46, "p3600": [46, 53], "p3608": 46, "p3700": [46, 53], "micron": 46, "7300": 46, "7400": 46, "7450": 46, "max": [46, 47, 49, 50, 53, 64, 67, 71, 127, 163, 176, 198, 222, 250, 341, 343, 347, 443, 446, 450, 506, 542], "pm963": 46, "pm1725": 46, "pm1725a": 46, "xs1715": 46, "toshiba": 46, "zd6300": 46, "nytro": 46, "5000": [46, 48, 347, 450], "xp1920le30002": 46, "inexpens": [46, 47, 70, 219, 247, 346, 449], "22110": 46, "mlc": 46, "mostli": [46, 47, 48, 70, 71, 80, 186, 209, 222, 236, 250, 333, 347, 355, 449, 450, 459], "airflow": 46, "suffici": [46, 48, 71, 80, 86, 140, 143, 186, 204, 209, 228, 236, 250, 256, 309, 312, 333, 347, 355, 361, 412, 415, 450, 459, 465, 519, 522, 548, 550, 561], "fan": 46, "overheat": 46, "thermal": 46, "latenc": [46, 47, 50, 71, 78, 80, 131, 145, 164, 176, 184, 186, 198, 206, 208, 209, 222, 232, 235, 236, 250, 298, 300, 314, 333, 347, 353, 355, 403, 417, 436, 450, 457, 459, 510, 524, 543], "hundr": [46, 184], "hotter": 46, "namespac": [46, 48, 71, 76, 77, 78, 81, 88, 92, 97, 111, 118, 122, 126, 127, 184, 206, 232, 258, 262, 267, 281, 288, 295, 297, 298, 352, 353, 363, 367, 372, 386, 393, 400, 450, 455, 456, 457, 460, 467, 471, 476, 490, 497, 501, 505, 506], "eras": [46, 90, 101, 120, 160, 236, 260, 271, 290, 329, 365, 376, 395, 432, 469, 480, 499, 539], "passiv": 46, "heatsink": 46, "sticker": 46, "closest": 46, "capacitor": 46, "undesir": 46, "overh": 46, "allevi": 46, "gigabyt": [46, 47, 95, 98, 115, 184, 206, 232, 265, 268, 285, 370, 373, 390, 474, 477, 494], "cool": 46, "76": 46, "degre": [46, 49, 71, 176, 198, 222, 250, 347, 450], "celsiu": 46, "74": [46, 86, 182, 204, 228, 256, 361, 465], "evalu": [46, 88, 100, 118, 184, 206, 232, 258, 270, 288, 363, 375, 393, 467, 479, 497], "temperatur": 46, "overcool": 46, "pm1633": 46, "pm1633a": 46, "sm1625": 46, "pm853t": 46, "px05shb": 46, "px04shb": 46, "px04shq": 46, "px05slb": 
46, "px04slb": 46, "px04slq": 46, "px05smb": 46, "px04smb": 46, "px04smq": 46, "px05srb": 46, "px04srb": 46, "px04srq": 46, "px05svb": 46, "px04svb": 46, "px04svq": 46, "crucial": [46, 48], "mx100": 46, "mx200": 46, "mx300": 46, "m500": 46, "m550": 46, "m600": 46, "320": [46, 48], "335": [46, 222, 250], "710": 46, "730": 46, "s3500": 46, "s3510": 46, "s3610": [46, 53], "s3700": [46, 53], "s3710": [46, 53], "dc500r": 46, "dc500m": 46, "5210": 46, "ion": 46, "qlc": 46, "pm863": 46, "pm863a": 46, "sm843t": 46, "sm843": 46, "sm863": [46, 53], "sm863a": 46, "845dc": 46, "evo": 46, "hk4e": 46, "hk3e2": 46, "hk4r": 46, "hk3r2": 46, "hk3r": 46, "volunt": 46, "mainli": [46, 48, 81, 186, 209, 236, 334, 356, 460], "richard": 46, "yao": 46, "trustworthi": 46, "neutral": 46, "perceiv": 46, "bia": [46, 47], "toward": [46, 48, 53, 57, 71, 250, 347, 450], "confirm": [46, 108, 109, 232, 278, 279, 383, 384, 487, 488], "presenc": [46, 47, 65, 86, 163, 256, 332, 361, 435, 444, 465, 542, 555], "adequ": 46, "whose": [46, 47, 48, 50, 71, 77, 78, 79, 86, 91, 104, 108, 109, 127, 136, 139, 176, 177, 184, 186, 198, 199, 204, 206, 209, 221, 222, 223, 228, 231, 232, 236, 249, 250, 251, 256, 274, 278, 279, 295, 297, 298, 305, 347, 352, 353, 354, 361, 379, 383, 384, 400, 408, 411, 450, 456, 457, 458, 465, 470, 483, 487, 488, 506, 515, 518], "unlist": 46, "pictur": 46, "statement": [46, 62, 104, 168, 189, 212, 231, 240, 274, 339, 379, 441, 483], "anandtech": [46, 48], "sheet": 46, "accept": [46, 47, 53, 71, 80, 95, 98, 110, 114, 115, 141, 156, 176, 184, 186, 198, 206, 209, 222, 232, 236, 250, 265, 268, 280, 284, 285, 310, 325, 333, 347, 355, 370, 373, 385, 389, 390, 413, 428, 450, 459, 474, 477, 489, 493, 494, 520, 535, 555], "honor": [46, 47, 53, 131, 185, 208, 235, 300, 403, 510], "misstat": 46, "realiti": 46, "honest": 46, "smallest": [46, 48], "incorrectli": [46, 47, 53, 71, 222, 250, 347, 450], "8192": [46, 48, 76, 78, 81, 172, 176, 198, 206, 222, 232, 250, 298, 353, 455, 457, 460], "gbit": 46, "16384": [46, 70, 71, 172, 198, 222, 250, 346, 347, 449, 450], "punch": [46, 81, 236, 334, 356, 460], "conform": [46, 127, 184, 206, 232, 295, 400, 506], "drain": [46, 125, 294, 399, 504], "difficult": [46, 47, 53, 81, 236, 334, 356, 460], "distinguish": [46, 76, 78, 79, 80, 81, 177, 184, 186, 199, 206, 209, 223, 232, 236, 251, 298, 333, 353, 354, 355, 455, 457, 458, 459, 460], "endur": [46, 53], "circuitri": 46, "p4800x": 46, "p4801x": 46, "p1600x": 46, "4gb": [46, 48], "plug": [46, 47], "receptacl": 46, "wire": [46, 71, 250, 347, 450], "voltag": 46, "brownout": 46, "condition": 46, "outright": 46, "exhibit": [46, 47, 158, 186, 209, 236, 327, 430, 537], "undocu": 46, "suppos": [46, 191, 214, 242], "deassert": 46, "deviat": 46, "brown": 46, "strict": [46, 47, 71, 450], "toler": [46, 80, 133, 186, 209, 236, 333, 355, 459, 548, 550], "transfer": [46, 78, 108, 109, 216, 232, 244, 278, 279, 298, 353, 383, 384, 457, 487, 488], "taken": [46, 47, 53, 71, 73, 77, 78, 80, 81, 93, 110, 114, 117, 174, 184, 186, 196, 198, 206, 209, 220, 222, 232, 236, 248, 250, 263, 280, 284, 287, 298, 333, 334, 347, 349, 353, 355, 356, 368, 385, 389, 392, 450, 452, 456, 457, 459, 460, 472, 489, 493, 496, 547, 549, 551, 552, 553, 554, 557, 558, 559, 560, 561], "suppli": [46, 90, 101, 110, 114, 120, 206, 232, 260, 271, 280, 284, 290, 365, 376, 385, 389, 395, 469, 480, 489, 493, 499], "atx": 46, "invers": [46, 49, 71, 163, 176, 198, 222, 250, 347, 450, 542], "holdup": 46, "ag": [46, 47, 71, 219, 222, 247, 250, 347, 450], "equip": 46, "substandard": 46, 
"doubt": [46, 53], "hybrid": 46, "94": 46, "acid": 46, "outag": [46, 79, 177, 199, 223, 251, 354, 458], "vari": [46, 47, 71, 81, 171, 193, 198, 222, 236, 250, 334, 347, 356, 450, 460], "footnot": [46, 48], "lkcl": 46, "ssd_analysi": 46, "usenix": 46, "confer": 46, "fast13": 46, "final80": 46, "pdf": [46, 62, 168, 189, 212, 240, 339, 441], "engin": [46, 183, 205, 229, 257], "nordeu": 46, "apc": 46, "fa158934": 46, "sysf": [47, 129, 145, 163, 209, 236, 314, 417, 508, 524], "newvalu": 47, "xzy": 47, "problem_descript": 47, "your_nam": 47, "individu": [47, 65, 70, 71, 80, 81, 82, 86, 113, 136, 139, 144, 145, 147, 163, 175, 178, 184, 186, 197, 198, 200, 206, 209, 219, 221, 222, 224, 228, 232, 236, 239, 247, 249, 250, 252, 256, 283, 305, 313, 314, 316, 332, 333, 334, 346, 347, 355, 356, 357, 361, 388, 408, 411, 416, 417, 419, 435, 444, 449, 450, 459, 460, 461, 465, 492, 515, 518, 523, 524, 526, 542], "icp": 47, "ala": 47, "quick": [47, 53], "captur": 47, "wisdom": 47, "practition": 47, "synopsi": [47, 61, 62, 64, 65, 66, 67, 68, 74, 82, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 168, 170, 171, 172, 178, 180, 181, 182, 183, 184, 185, 186, 187, 189, 191, 192, 193, 194, 200, 202, 203, 204, 205, 206, 207, 208, 209, 210, 212, 214, 215, 216, 217, 224, 226, 227, 228, 229, 230, 231, 232, 234, 235, 236, 237, 239, 240, 242, 243, 244, 245, 252, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 335, 336, 338, 339, 341, 342, 343, 344, 350, 357, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 440, 441, 443, 444, 445, 446, 447, 453, 461, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545], "modinfo": 47, "resist": 47, "hierarch": 47, "represent": [47, 110, 114, 145, 147, 158, 162, 184, 186, 206, 209, 232, 236, 280, 284, 314, 316, 327, 331, 385, 389, 417, 419, 430, 434, 489, 493, 524, 526, 537, 541], "assist": [47, 48], "row": [47, 71, 100, 184, 206, 232, 250, 270, 347, 375, 450, 479], "keyword": [47, 53, 73, 80, 85, 88, 104, 118, 174, 181, 184, 186, 196, 203, 206, 209, 220, 227, 231, 232, 236, 248, 255, 258, 274, 288, 333, 349, 355, 360, 363, 379, 393, 452, 459, 464, 467, 483, 497], "suspect": [47, 71, 176, 198, 222, 250, 347, 450], 
"boolean": [47, 104, 231, 274, 379, 483], "birth": [47, 79, 177, 199, 223, 251, 354, 458], "tbd": 47, "elig": [47, 71, 144, 160, 163, 176, 198, 222, 236, 250, 313, 329, 332, 347, 416, 432, 435, 450, 523, 539, 542], "turbo": [47, 71, 127, 176, 184, 198, 206, 222, 232, 250, 295, 347, 400, 450, 506], "warm": [47, 71, 198, 222, 250, 347, 450], "cold": [47, 71, 198, 222, 250, 347, 450], "interv": [47, 50, 61, 71, 77, 145, 147, 158, 162, 176, 184, 186, 198, 206, 209, 222, 232, 236, 239, 250, 297, 314, 316, 327, 331, 338, 347, 352, 417, 419, 430, 434, 440, 450, 456, 524, 526, 537, 541], "aggress": [47, 70, 71, 176, 198, 219, 222, 247, 250, 346, 347, 449, 450], "feed": [47, 71, 176, 198, 222, 250, 347, 450], "wake": 47, "uint64": [47, 104, 221, 231, 249, 274, 379, 411, 483], "1000": [47, 71, 78, 127, 175, 176, 184, 197, 198, 206, 222, 232, 250, 295, 298, 347, 353, 400, 450, 457, 506], "200": [47, 71, 79, 86, 176, 177, 182, 198, 199, 204, 222, 223, 228, 250, 251, 256, 347, 354, 361, 450, 458, 465], "readonli": [47, 78, 79, 81, 88, 95, 98, 102, 104, 115, 118, 127, 143, 177, 184, 186, 199, 206, 209, 223, 230, 231, 232, 236, 251, 258, 272, 274, 288, 295, 298, 312, 334, 353, 354, 356, 363, 377, 379, 393, 400, 415, 457, 458, 460, 467, 474, 477, 481, 483, 494, 497, 506, 522], "onto": 47, "uint64_max": [47, 71, 450], "cacheabl": [47, 71, 80, 198, 222, 250, 333, 347, 355, 450, 459], "overal": [47, 48, 50, 71, 86, 176, 182, 198, 204, 222, 228, 250, 256, 347, 361, 450, 465], "l2a": 47, "rc_headroom_boost": 47, "percent": [47, 48, 65, 71, 76, 93, 176, 184, 198, 206, 222, 232, 250, 263, 347, 368, 444, 450, 455, 472], "headroom": 47, "boost": [47, 79, 199, 223, 251, 354, 458], "v0": [47, 58, 59, 60], "evict": [47, 48, 61, 71, 176, 198, 222, 239, 250, 338, 347, 440, 450], "irrationali": [47, 250], "enorm": 47, "int": [47, 70, 71, 176, 198, 219, 222, 247, 250, 346, 347, 449, 450], "33": [47, 71, 147, 163, 186, 209, 236, 250, 332, 347, 435, 450, 526, 542], "v2": [47, 48, 58, 59, 60, 79, 223, 251, 354, 458], "mfu": [47, 61, 71, 239, 250, 338, 347, 440, 450], "mru": [47, 61, 71, 239, 250, 338, 347, 440, 450], "dai": 47, "antiqu": 47, "073": [47, 222, 250], "741": [47, 222, 250], "824": [47, 222, 250], "ahead": [47, 53, 71, 222, 250, 347, 450], "characterist": [47, 77, 78, 80, 81, 184, 186, 206, 209, 232, 236, 297, 298, 333, 334, 352, 353, 355, 356, 456, 457, 459, 460], "accommod": [47, 49, 70, 71, 78, 107, 176, 184, 198, 206, 219, 222, 232, 247, 250, 277, 298, 346, 347, 353, 382, 449, 450, 457, 486], "64mb": [47, 250, 347], "effeci": 47, "ulong": [47, 70, 71, 176, 198, 219, 222, 247, 250, 346, 347, 449, 450], "388": [47, 176, 198, 222, 250], "608": [47, 176, 198, 222, 250], "sec": [47, 48, 78, 172, 184, 194, 198, 206, 216, 222, 232, 244, 250, 298, 353, 457], "granular": [47, 71, 176, 198, 222, 250, 347, 450], "nomin": 47, "roughli": [47, 48, 49, 71, 131, 176, 198, 208, 222, 235, 250, 300, 347, 403, 450, 510], "monitor": [47, 87, 102, 104, 132, 133, 145, 151, 163, 183, 186, 205, 209, 229, 230, 231, 236, 257, 272, 274, 320, 332, 362, 377, 379, 423, 435, 466, 481, 483, 511, 524, 530, 542, 555], "contigu": [47, 70, 71, 86, 182, 204, 219, 222, 228, 247, 250, 256, 346, 347, 361, 449, 450, 465], "decreas": [47, 50, 53, 70, 71, 78, 81, 100, 163, 176, 184, 186, 198, 206, 209, 219, 222, 232, 236, 247, 250, 270, 298, 332, 334, 346, 347, 353, 356, 375, 435, 449, 450, 457, 460, 479, 542, 553], "524": [47, 176, 198, 222, 250], "288": [47, 176, 198, 222, 250], "bias": [47, 48, 71, 176, 198, 222, 250, 347, 450], "spread": 47, 
"favor": [47, 48, 62, 71, 168, 189, 212, 240, 339, 441, 450], "largest": [47, 70, 71, 176, 198, 219, 222, 247, 250, 346, 347, 449, 450], "segment": [47, 71, 198, 222, 250, 347, 450], "_metaslab_segment_weight_en": 47, "bucket": [47, 71, 86, 164, 182, 198, 204, 222, 228, 250, 256, 347, 361, 436, 450, 465, 543], "plenti": 47, "freed": [47, 48, 70, 71, 78, 79, 80, 81, 176, 184, 198, 206, 219, 222, 223, 232, 236, 247, 250, 251, 298, 333, 334, 346, 347, 353, 354, 355, 356, 449, 450, 457, 458, 459, 460], "penalti": [47, 48, 78, 184, 206, 232, 298, 353, 457], "metric": [47, 48, 71, 78, 164, 176, 198, 206, 222, 232, 250, 298, 347, 353, 436, 450, 457, 543], "weight": [47, 71, 77, 176, 184, 198, 206, 222, 232, 250, 297, 347, 352, 450, 456], "meta": [47, 71, 86, 176, 182, 198, 204, 222, 228, 250, 256, 347, 361, 450, 465], "lab_fragmentation_factor_en": 47, "preload": [47, 71, 176, 198, 222, 250, 347, 450], "uniform": 47, "constant": [47, 71, 176, 198, 222, 250, 347, 450], "angular": [47, 71, 176, 198, 222, 250, 347, 450], "veloc": [47, 71, 176, 198, 222, 250, 347, 450], "outer": 47, "record": [47, 54, 71, 78, 79, 82, 86, 104, 110, 114, 130, 131, 142, 165, 166, 177, 178, 182, 184, 185, 186, 198, 199, 200, 204, 206, 207, 208, 209, 222, 223, 224, 228, 231, 232, 234, 235, 236, 250, 251, 252, 256, 274, 280, 284, 298, 299, 300, 311, 335, 347, 353, 354, 357, 361, 379, 385, 389, 402, 403, 414, 437, 438, 450, 457, 458, 461, 465, 483, 489, 493, 509, 510, 521, 544, 545, 561], "zone": [47, 78, 83, 88, 95, 98, 115, 118, 122, 127, 142, 184, 186, 206, 209, 232, 236, 258, 288, 295, 298, 311, 353, 363, 393, 400, 414, 457, 462, 467, 474, 477, 494, 497, 501, 506, 521], "inner": 47, "diamet": 47, "repres": [47, 49, 50, 71, 73, 78, 79, 80, 81, 92, 104, 110, 114, 174, 176, 184, 186, 196, 198, 206, 209, 220, 222, 223, 231, 232, 236, 248, 250, 251, 262, 274, 280, 284, 298, 334, 347, 349, 353, 354, 355, 356, 367, 379, 385, 389, 450, 452, 457, 458, 459, 460, 471, 483, 489, 493], "rotat": [47, 71, 90, 101, 120, 176, 198, 222, 232, 250, 260, 271, 290, 347, 365, 376, 395, 450, 469, 480, 499], "misrepres": 47, "disk_nam": 47, "inconveni": 47, "string": [47, 61, 71, 76, 78, 81, 86, 87, 100, 104, 171, 183, 184, 186, 193, 198, 205, 206, 209, 222, 229, 231, 232, 236, 239, 250, 257, 270, 274, 298, 334, 338, 347, 353, 356, 361, 362, 375, 379, 440, 450, 455, 457, 460, 465, 466, 479, 483], "invoc": [47, 65, 66, 74, 86, 131, 136, 168, 182, 189, 204, 208, 212, 228, 235, 236, 240, 256, 300, 305, 342, 350, 361, 403, 408, 444, 445, 453, 465, 510, 515], "estim": [47, 48, 71, 79, 151, 155, 158, 176, 186, 198, 209, 222, 236, 250, 251, 320, 324, 327, 347, 354, 423, 427, 430, 450, 458, 530, 534, 537], "consumpt": [47, 71, 78, 176, 184, 198, 206, 222, 232, 250, 298, 347, 353, 450, 457], "valid": [47, 64, 65, 71, 76, 78, 81, 86, 92, 104, 110, 114, 130, 132, 133, 136, 138, 139, 143, 147, 153, 165, 166, 175, 176, 184, 186, 187, 191, 197, 198, 204, 206, 209, 210, 214, 221, 222, 228, 231, 232, 236, 237, 242, 249, 250, 256, 262, 274, 280, 284, 298, 299, 301, 302, 305, 307, 312, 316, 322, 334, 335, 336, 341, 347, 353, 356, 361, 367, 379, 385, 389, 402, 404, 405, 408, 410, 411, 415, 419, 425, 437, 438, 443, 444, 450, 455, 457, 460, 465, 471, 483, 489, 493, 509, 511, 512, 515, 517, 518, 522, 526, 532, 544, 545, 554, 555, 561], "realist": [47, 71, 176, 198, 222, 250, 347, 450], "inflat": [47, 71, 86, 176, 182, 198, 204, 222, 228, 250, 256, 347, 361, 450, 465], "altogeth": [47, 71, 250, 347, 450], "condit": [47, 71, 79, 80, 81, 176, 186, 198, 209, 222, 236, 
250, 333, 334, 347, 355, 356, 450, 458, 459, 460], "optimist": [47, 81, 236, 334, 356, 460], "misbehav": [47, 183, 205, 229], "rewind": [47, 71, 79, 80, 86, 134, 143, 163, 176, 182, 198, 204, 222, 223, 228, 236, 250, 251, 256, 303, 312, 332, 333, 347, 354, 355, 361, 406, 415, 435, 450, 458, 459, 465, 513, 522, 542], "travers": [47, 71, 79, 176, 177, 198, 199, 222, 223, 250, 251, 347, 354, 450, 458], "toggl": [47, 71, 176, 198, 222, 230, 250, 272, 347, 450], "max_int": 47, "000": [47, 71, 176, 198, 222, 250, 347, 450], "unaccount": [47, 71, 176, 198, 222, 250, 347, 450], "mo": [47, 66, 71, 86, 131, 170, 176, 182, 185, 192, 198, 204, 208, 215, 222, 228, 235, 243, 250, 256, 300, 342, 347, 361, 403, 445, 450, 465, 510], "zpl": [47, 71, 86, 171, 176, 182, 193, 198, 204, 222, 228, 250, 256, 347, 361, 450, 465], "enospc": [47, 71, 78, 104, 176, 184, 198, 206, 222, 231, 232, 250, 274, 298, 347, 353, 379, 450, 457, 483], "slop": 47, "shift": [47, 53, 67, 70, 176, 198, 219, 222, 247, 250, 343, 346, 347, 446, 449], "upper": [47, 48, 71, 86, 222, 250, 256, 347, 361, 450, 465], "4tb": 47, "unsign": [47, 67, 86, 204, 222, 228, 256, 343, 361, 446, 465], "max_ulong": 47, "048": [47, 176, 198, 222, 250], "576": [47, 176, 198, 222, 250], "uint": [47, 64, 70, 71, 176, 198, 219, 222, 247, 250, 341, 346, 347, 443, 449, 450], "uint_max": 47, "overlap": 47, "max_uint": 47, "arc_prun": [47, 71, 347, 450], "arc_dnode_s": 47, "arc_dnode_limit": 47, "demand": [47, 53, 61, 70, 71, 77, 78, 79, 81, 93, 117, 127, 160, 163, 177, 184, 198, 199, 206, 219, 222, 223, 232, 236, 239, 247, 250, 251, 295, 297, 298, 329, 332, 334, 338, 346, 347, 352, 353, 354, 356, 400, 432, 435, 440, 449, 450, 456, 457, 458, 460, 472, 496, 506, 539, 542], "max_uint64": 47, "zfs_arc_dnode_lim": 47, "it_perc": 47, "zfs_arc_d": 47, "node_limit": 47, "assumpt": [47, 48, 71, 176, 198, 222, 250, 257, 347, 450], "blocksiz": [47, 71, 78, 92, 184, 193, 206, 232, 250, 262, 298, 347, 353, 367, 450, 457, 471], "usag": [47, 53, 67, 71, 78, 79, 81, 84, 85, 86, 96, 100, 106, 124, 131, 145, 147, 163, 164, 171, 172, 180, 181, 184, 185, 186, 193, 194, 199, 202, 203, 204, 206, 208, 209, 216, 222, 223, 226, 227, 228, 232, 235, 236, 244, 250, 251, 254, 255, 256, 266, 270, 276, 293, 298, 300, 314, 316, 332, 334, 343, 347, 353, 354, 356, 359, 360, 361, 371, 375, 381, 398, 403, 417, 419, 435, 436, 446, 450, 457, 458, 460, 463, 464, 465, 475, 479, 485, 503, 510, 524, 526, 542, 543], "777": [47, 198, 222, 250], "216": [47, 198, 222, 250], "sublist": [47, 71, 250, 347, 450], "batch": [47, 71, 176, 198, 222, 250, 347, 450], "multilist": [47, 71, 198, 222, 250, 347, 450], "int_max": 47, "shrunk": 47, "grow": [47, 61, 70, 71, 176, 219, 239, 247, 338, 346, 347, 440, 449, 450], "damper": 47, "oscil": 47, "shrink": [47, 71, 78, 184, 198, 206, 222, 232, 250, 298, 347, 353, 450, 457], "cycl": [47, 71, 139, 175, 197, 221, 249, 411, 450, 518], "arcstat_memory_throttle_count": 47, "all_system_memori": [47, 71, 347, 450], "caveat": [47, 48, 53, 62, 71, 78, 168, 189, 198, 212, 222, 232, 240, 250, 298, 339, 347, 353, 441, 450, 457], "induc": [47, 71, 198, 222, 250, 347, 450], "67": [47, 176, 198, 250], "108": [47, 176, 198, 250], "864": [47, 176, 198, 250], "column": [47, 78, 81, 94, 95, 98, 100, 115, 141, 145, 156, 158, 163, 168, 176, 184, 186, 189, 206, 209, 212, 232, 236, 240, 264, 265, 268, 270, 285, 298, 310, 314, 325, 327, 332, 334, 339, 353, 356, 369, 370, 373, 375, 390, 413, 417, 428, 430, 435, 457, 460, 473, 474, 477, 479, 494, 520, 524, 535, 537, 542], "c_max": 47, "096": 
[47, 53, 176, 198, 222, 250], "reclaim": [47, 61, 70, 71, 79, 81, 160, 163, 176, 177, 186, 198, 199, 209, 219, 222, 223, 236, 239, 247, 250, 251, 329, 332, 334, 338, 346, 347, 354, 356, 432, 435, 440, 449, 450, 458, 460, 539, 542], "explicit": [47, 74, 81, 108, 109, 186, 198, 206, 209, 222, 232, 236, 250, 278, 279, 334, 347, 350, 356, 383, 384, 453, 460, 487, 488], "75": [47, 71, 176, 198, 222, 250, 347, 450], "devot": [47, 48, 176, 198, 222, 250], "metadata_s": 47, "arc_meta_min": 47, "dentri": [47, 176, 198, 222, 250, 347], "znode": [47, 71, 250, 347, 450], "prune": [47, 176, 198, 222, 250, 347], "strategi": [47, 48, 71, 136, 186, 198, 209, 222, 236, 250, 305, 347, 408, 450, 515], "meta_onli": [47, 198, 222, 250, 347], "balanc": [47, 48, 50, 53, 62, 71, 78, 80, 168, 176, 184, 186, 189, 198, 206, 209, 212, 222, 232, 236, 240, 250, 298, 333, 339, 347, 353, 355, 441, 450, 457, 459], "enum": 47, "c_min": 47, "554": 47, "432": [47, 198, 222, 250], "prescient": [47, 71, 198, 222, 250, 347, 450], "meant": [47, 48, 54, 71, 222, 250, 347, 450], "fs_arc_min_prescient_prefetch_m": 47, "6000": [47, 222, 250], "grain": [47, 71, 176, 198, 222, 250, 347, 450], "overflow": [47, 67, 71, 172, 176, 194, 198, 216, 222, 244, 250, 343, 347, 446, 450], "formula": [47, 176, 198, 222, 250], "256th": [47, 176, 198, 222, 250], "arc_p_min_shift": [47, 198, 222, 250, 347], "ghost": [47, 61, 71, 239, 338, 440, 450], "cap": [47, 49, 70, 71, 81, 87, 147, 163, 176, 186, 198, 209, 219, 222, 236, 247, 250, 332, 334, 346, 347, 356, 435, 449, 450, 460, 466, 526, 542], "behaviour": [47, 71, 198, 222, 250, 347, 450], "arc_shrink_shift": [47, 71, 198, 222, 250, 347, 450], "reduct": [47, 48, 165, 166, 544, 545], "shortfal": 47, "shrinkag": 47, "plai": [47, 48, 67, 71, 172, 194, 198, 216, 222, 244, 250, 343, 347, 446, 450], "nice": [47, 53, 71, 198, 222, 250, 347, 450], "lru": [47, 48, 71, 198, 222, 250, 347, 450], "pagecach": [47, 71, 198, 222, 250, 347, 450], "collaps": [47, 71, 198, 222, 250, 347, 450], "down": [47, 50, 53, 70, 71, 87, 163, 176, 198, 205, 222, 229, 250, 257, 347, 362, 449, 450, 466], "nr_file_pag": [47, 71, 198, 222, 250, 347, 450], "scanner": 47, "512k": [47, 176, 193, 198, 222, 250], "margin": [47, 70, 79, 199, 219, 223, 247, 251, 346, 354, 449, 458], "ulong_max": [47, 250, 347], "lwb": [47, 71, 198, 222, 250, 347, 450], "itx": [47, 71, 198, 222, 250, 347, 450], "facilit": [47, 70, 71, 74, 79, 176, 198, 219, 222, 247, 250, 346, 347, 350, 354, 449, 450, 453, 458], "view": [47, 71, 87, 175, 183, 197, 198, 205, 221, 222, 229, 249, 250, 257, 347, 362, 450, 466], "dbuf": [47, 71, 176, 198, 222, 250, 347, 450], "spa_sync": [47, 176, 198], "haven": [47, 71, 198, 347, 450], "invok": [47, 71, 77, 78, 84, 87, 92, 103, 104, 108, 109, 116, 121, 171, 180, 183, 184, 193, 202, 205, 206, 222, 226, 229, 231, 232, 250, 254, 257, 262, 273, 274, 278, 279, 286, 291, 297, 298, 347, 352, 353, 359, 362, 367, 378, 379, 383, 384, 391, 396, 450, 456, 457, 463, 466, 471, 482, 483, 487, 488, 495, 500], "300": [47, 48, 67, 71, 172, 176, 194, 198, 216, 222, 244, 250, 343, 347, 446, 450], "spa_deadman": [47, 176], "fire": [47, 176], "txg": [47, 50, 54, 71, 78, 79, 86, 131, 143, 176, 182, 185, 186, 198, 204, 206, 208, 209, 222, 228, 232, 235, 236, 250, 251, 256, 298, 300, 312, 347, 353, 354, 361, 403, 415, 450, 457, 458, 465, 510, 522], "600": [47, 76, 127, 163, 222, 250, 455, 506, 542], "ddt": [47, 48, 71, 86, 182, 204, 222, 228, 250, 256, 347, 361, 450, 465], "spent": [47, 71, 79, 145, 177, 198, 199, 209, 222, 223, 236, 250, 251, 314, 
347, 354, 417, 450, 458, 524], "ultim": 47, "480": [47, 198, 222, 250], "infin": [47, 71, 176, 198, 222, 250, 347, 450], "smoothest": [47, 71, 176, 198, 222, 250, 347, 450], "billion": [47, 71, 176, 198, 222, 250, 347, 450], "smoothli": [47, 71, 176, 198, 222, 250, 347, 450], "10x": [47, 176, 198, 222, 250], "10th": [47, 176, 198, 222, 250], "scalar": [47, 71, 198, 222, 250, 347, 450], "nanosecond": [47, 87, 139, 145, 183, 205, 209, 221, 229, 236, 249, 257, 314, 347, 362, 411, 417, 450, 466, 518, 524], "exceed": [47, 70, 71, 176, 198, 219, 222, 247, 250, 346, 347, 449, 450, 555], "preced": [47, 71, 73, 74, 79, 86, 102, 174, 176, 196, 198, 220, 222, 248, 250, 256, 347, 349, 350, 354, 361, 377, 450, 452, 453, 458, 465, 481], "zfs_d": 47, "irty_data_max_max": 47, "physical_ram": [47, 71, 347, 450], "min": [47, 49, 50, 71, 176, 198, 222, 250, 347, 450], "4gib": [47, 71, 450], "zfs_vdev_async_write_ac": 47, "tive_min_dirty_perc": 47, "zfs_dirt": 47, "y_data_sync": 47, "selector": [47, 71, 198, 222, 250, 347, 450], "endian": [47, 67, 86, 140, 186, 204, 209, 221, 228, 236, 249, 256, 309, 343, 361, 411, 412, 446, 465, 519], "big": [47, 221, 249, 411], "transform": [47, 133, 186, 209, 236, 302, 405, 512], "superscalar": 47, "superscalar4": 47, "sse2": [47, 71, 198, 222, 250, 347, 450], "ssse3": [47, 71, 198, 222, 250, 347, 450], "avx2": [47, 71, 198, 222, 250, 347, 450], "avx512f": [47, 71, 198, 222, 250, 347, 450], "aarch64_neon": [47, 71, 198, 222, 250, 347, 450], "free_bpobj": [47, 71, 198, 222, 250, 347, 450], "uint32": 47, "zfs_vdev_ma": 47, "x_activ": 47, "async": [47, 50, 51, 58, 59, 71, 176, 198, 222, 250, 347, 450], "interpol": [47, 71, 176, 198, 222, 250, 347, 450], "zfs_vdev_asyn": 47, "c_write_active_max_dirty_perc": 47, "io_schedul": 47, "sch": 47, "edul": 47, "zfs_dirty_d": 47, "ata_max": 47, "c_write_active_min_dirty_perc": 47, "fs_vdev_async_write_active_max_d": 47, "irty_perc": 47, "zio": [47, 51, 58, 59, 64, 71, 176, 191, 198, 209, 214, 222, 242, 250, 341, 347, 443, 450], "chedul": 47, "zfs_vdev_max": 47, "_activ": 47, "poorer": [47, 71, 176, 198, 222, 250, 347, 450], "_vdev_async_write_max_act": 47, "sum": [47, 48, 50, 71, 78, 164, 176, 184, 198, 206, 222, 232, 250, 298, 347, 353, 436, 450, 457, 543], "max_act": [47, 50, 71, 176, 198, 222, 250, 347, 450], "priorit": [47, 50, 71, 176, 198, 222, 250, 347, 450], "min_act": [47, 176, 198, 222], "uint32_max": 47, "zfs_vd": 47, "ev_max_act": 47, "zfs_vdev_scrub_max": 47, "zfs_vdev_m": 47, "ax_act": 47, "imbalanc": [47, 71, 176, 198, 222, 250, 347, 450], "fuller": [47, 71, 198, 222, 250, 347, 450], "tend": [47, 71, 198, 222, 250, 347, 450], "subdirectori": [47, 105, 232, 275, 380, 484], "no_root_squash": [47, 71, 176, 184, 198, 206, 222, 250, 347, 450], "manipul": [47, 78, 81, 165, 166, 184, 206, 232, 298, 335, 353, 437, 438, 457, 460, 544, 545], "0x1": 47, "zfs_debug_dprintf": [47, 71, 176, 198, 222, 250, 347, 450], "dprintf": [47, 71, 176, 198, 222, 250, 347, 450], "0x2": 47, "zfs_debug_dbuf_verifi": [47, 71, 176, 198, 222, 250, 347, 450], "0x4": 47, "zfs_debug_dnode_verifi": [47, 71, 176, 198, 222, 250, 347, 450], "0x8": 47, "zfs_debug_snapnam": [47, 71, 176, 198, 222, 250, 347, 450], "0x10": 47, "zfs_debug_modifi": [47, 71, 176, 198, 222, 250, 347, 450], "illeg": [47, 71, 176, 184, 198, 222, 250, 347, 450], "zfs_debug_spa": [47, 176, 198], "spa_dbgmsg": [47, 176, 198], "0x40": 47, "zfs_debug_zio_fre": [47, 71, 176, 198, 222, 250, 347, 450], "0x80": 47, "fs_debug_histogram_verifi": 47, "spacemap": [47, 71, 79, 86, 131, 176, 182, 185, 
198, 204, 208, 222, 228, 235, 250, 251, 256, 300, 347, 354, 361, 403, 450, 458, 465, 510], "histogram": [47, 71, 86, 145, 158, 164, 175, 176, 182, 186, 197, 198, 204, 209, 222, 228, 236, 250, 256, 314, 327, 347, 361, 417, 430, 436, 450, 465, 524, 537, 543], "0x100": 47, "zfs_debug_metaslab_verifi": [47, 71, 198, 222, 250, 347, 450], "range_tre": [47, 71, 198, 222, 250, 347, 450], "0x200": 47, "zfs_debug_set_error": [47, 71, 198, 222, 250, 347, 450], "set_error": [47, 71, 198, 222, 250, 347, 450], "eio": [47, 71, 81, 104, 131, 176, 185, 186, 198, 208, 209, 222, 231, 235, 236, 250, 274, 300, 334, 347, 356, 379, 403, 450, 460, 483, 510, 554], "indirect": [47, 48, 49, 71, 79, 80, 86, 139, 176, 182, 198, 204, 221, 222, 223, 228, 236, 249, 250, 251, 256, 333, 347, 354, 355, 361, 411, 450, 458, 459, 465, 518], "referenc": [47, 53, 71, 77, 78, 79, 86, 95, 98, 100, 110, 114, 115, 127, 158, 176, 182, 184, 186, 198, 204, 206, 209, 222, 228, 232, 236, 250, 251, 256, 270, 275, 280, 284, 295, 298, 327, 347, 353, 354, 361, 375, 385, 389, 400, 430, 450, 456, 457, 458, 465, 474, 477, 479, 489, 493, 494, 506, 537], "perhap": [47, 53, 71, 176, 198, 222, 250, 347, 450], "wrong": [47, 71, 80, 168, 176, 186, 189, 198, 209, 212, 222, 236, 240, 250, 333, 339, 347, 355, 450, 459], "suspend": [47, 71, 135, 144, 160, 176, 198, 222, 236, 250, 304, 313, 329, 347, 407, 416, 432, 450, 514, 523, 539], "terminologi": 47, "768": [47, 176, 198, 219, 222, 247, 250], "zil_itx_indirect_count": 47, "weigh": [47, 71, 176, 198, 222, 250, 347, 450], "bound": [47, 48, 71, 86, 108, 109, 139, 206, 221, 232, 249, 250, 256, 278, 279, 347, 361, 383, 384, 411, 450, 465, 487, 488, 518], "pipelin": [47, 71, 131, 139, 175, 197, 198, 208, 221, 222, 235, 249, 250, 300, 347, 403, 411, 450, 510, 518], "zio_buf_": 47, "zio_data_buf_": 47, "zdb": [47, 48, 53, 54, 67, 83, 87, 128, 135, 172, 179, 183, 194, 201, 205, 216, 225, 229, 244, 253, 257, 296, 304, 343, 358, 362, 401, 407, 446, 462, 466, 507, 514], "mm": [47, 61, 86, 204, 228, 239, 256, 338, 361, 440, 465], "zfs_metaslab_fragmentation_thresh": 47, "fr": 47, "agment": 47, "70": [47, 67, 71, 172, 176, 194, 198, 216, 222, 244, 250, 343, 347, 446, 450], "85": [47, 53, 73, 174, 176, 196, 198, 220, 248, 349, 452], "heavili": [47, 53, 71, 77, 78, 79, 176, 184, 198, 206, 222, 232, 250, 251, 297, 298, 347, 352, 353, 354, 450, 456, 457, 458], "lesser": [47, 71, 176, 198, 222, 250, 347, 450], "acquir": [47, 71, 176, 184, 198, 222, 250, 347, 450], "zfs_mg_alloc_failur": [47, 71, 176, 198, 222, 250, 347, 450], "multihost": [47, 71, 81, 135, 136, 198, 209, 222, 236, 250, 304, 305, 334, 347, 356, 407, 408, 450, 460, 514, 515], "multimodifi": 47, "subsystem": [47, 136, 140, 186, 209, 219, 236, 247, 305, 309, 346, 408, 412, 449, 515, 519, 548, 549, 550, 551], "frequenc": [47, 71, 131, 185, 198, 208, 222, 235, 250, 300, 347, 403, 450, 510], "leaf": [47, 50, 71, 76, 144, 145, 158, 160, 176, 198, 209, 222, 236, 250, 313, 314, 327, 329, 347, 416, 417, 430, 432, 450, 455, 523, 524, 537, 539], "uberblock": [47, 71, 86, 182, 198, 204, 222, 228, 250, 256, 347, 361, 450, 465], "overwhelm": 47, "serd": 47, "checksum_n": [47, 76, 455], "checksum_t": [47, 76, 455], "crawl": [47, 71, 198, 222, 250, 347, 450], "barrier": 47, "volatil": [47, 71, 186, 209, 222, 236, 250, 347, 450], "nonvolatil": 47, "op": [47, 48, 86, 90, 92, 93, 101, 110, 114, 120, 151, 157, 184, 206, 232, 236, 260, 262, 263, 271, 280, 284, 290, 320, 365, 367, 368, 376, 385, 389, 395, 423, 429, 465, 469, 471, 472, 480, 489, 493, 499, 530, 536], "occasion": 
[47, 48], "nop": [47, 176, 198, 222, 250], "crytograph": 47, "seek_hol": [47, 71, 198, 222, 250, 347, 450], "seek_data": [47, 71, 198, 222, 250, 347, 450], "exchang": 47, "int32": 47, "int32_max": 47, "52": [47, 176, 198, 222, 250], "428": [47, 176, 198, 222, 250], "800": [47, 176, 198, 222, 250], "consecut": [47, 71, 250, 347, 450], "side": [47, 48, 53, 54, 79, 108, 109, 110, 114, 177, 184, 199, 206, 223, 232, 251, 278, 279, 280, 284, 354, 383, 384, 385, 389, 458, 487, 488, 489, 493, 557], "otim": 47, "poolnam": [47, 48, 79, 86, 102, 177, 182, 199, 204, 223, 228, 230, 251, 256, 272, 354, 361, 377, 458, 465, 481], "pipe": [47, 94, 127, 184, 206, 232, 264, 369, 400, 473, 506], "intact": [47, 71, 198, 222, 250, 347, 450], "efficaci": 47, "statist": [47, 61, 71, 76, 78, 81, 86, 127, 145, 147, 158, 163, 164, 176, 182, 184, 186, 187, 198, 204, 206, 209, 210, 222, 228, 232, 236, 237, 239, 250, 256, 295, 298, 314, 316, 327, 332, 334, 336, 338, 347, 353, 356, 361, 400, 417, 419, 430, 435, 436, 440, 450, 455, 457, 460, 465, 506, 524, 526, 537, 542, 543], "fatal": [47, 71, 82, 86, 104, 176, 178, 182, 198, 200, 204, 222, 224, 228, 231, 250, 252, 256, 274, 347, 357, 361, 379, 450, 461, 465, 483], "zfs_panic_recov": 47, "resort": [47, 71, 143, 176, 186, 198, 209, 222, 236, 250, 312, 347, 415, 450, 522], "wors": [47, 48, 71, 176, 198, 222, 250, 347, 450], "context": [47, 48, 71, 78, 88, 104, 118, 180, 184, 202, 206, 226, 231, 232, 250, 254, 274, 298, 347, 353, 363, 379, 393, 450, 457, 467, 483, 497], "extent": [47, 71, 183, 205, 222, 229, 250, 257, 347, 450], "gap": [47, 71, 139, 175, 176, 197, 198, 221, 222, 249, 250, 347, 411, 450, 518], "defer": [47, 71, 78, 79, 93, 104, 154, 176, 184, 198, 206, 222, 223, 231, 232, 236, 250, 251, 263, 274, 298, 323, 347, 353, 354, 368, 379, 426, 450, 457, 458, 472, 483, 533], "adjac": [47, 71, 78, 139, 206, 221, 222, 232, 249, 250, 298, 347, 353, 411, 450, 457, 518], "coalesc": [47, 71, 222, 250, 347, 450], "sort": [47, 71, 96, 100, 106, 124, 155, 184, 206, 222, 232, 250, 266, 270, 276, 293, 347, 371, 375, 381, 398, 427, 450, 475, 479, 485, 503, 534], "gather": [47, 71, 104, 222, 231, 232, 250, 274, 347, 379, 450, 483], "soon": [47, 70, 71, 79, 177, 199, 222, 223, 250, 251, 347, 354, 449, 450, 458, 550], "097": 47, "152": [47, 209, 236], "sio_cach": 47, "procf": 47, "slabinfo": 47, "slab": [47, 70, 219, 247, 346, 449], "slabtop": 47, "divisor": 47, "soft": [47, 71, 222, 250, 347, 450], "zfs_scan_mem": 47, "_lim_fact": 47, "strike": 47, "194": 47, "304": 47, "unread": [47, 86, 182, 204, 228, 256, 361, 465], "0x2f5baddb10c": 47, "cooki": 47, "gang": [47, 67, 71, 86, 172, 182, 194, 204, 216, 222, 228, 244, 250, 256, 343, 347, 361, 446, 450, 465], "dsl": 47, "dp_sync_taskq": [47, 222, 250, 347, 450], "shorter": 47, "intens": [47, 48, 77, 155, 184, 186, 206, 209, 232, 236, 297, 324, 352, 427, 456, 534], "aggreg": [47, 50, 71, 81, 145, 164, 176, 198, 209, 222, 236, 250, 314, 334, 347, 356, 417, 436, 450, 460, 524, 543], "131": [47, 176, 198, 222, 250], "072": [47, 176, 198, 222, 250], "iostat": [47, 83, 132, 154, 155, 158, 159, 163, 164, 186, 209, 236, 253, 323, 324, 327, 328, 332, 358, 426, 427, 430, 431, 435, 436, 462, 511, 533, 534, 537, 538, 542, 543], "thusit": 47, "vdev_cache_stat": 47, "inop": 47, "65": 47, "536": 47, "384": [47, 176, 198, 219, 222, 247, 250], "nonrot": 47, "distanc": [47, 71, 198, 222, 250, 347, 450], "fs_vdev_mirror_rotating_seek_inc": 47, "zfs_vdev_mirror_rotating_seek_off": 47, "fewer": [47, 48, 71, 131, 185, 208, 235, 250, 300, 347, 403, 450, 
510], "zfs_v": 47, "dev_mirror_non_rotating_seek_inc": 47, "noop": [47, 176, 198], "cfq": [47, 198], "bfq": [47, 198], "deadlin": [47, 198], "changeabl": 47, "scsi_mq": 47, "unchang": [47, 56, 105, 108, 109, 206, 232, 275, 278, 279, 380, 383, 384, 484, 487, 488], "clearli": 47, "enclos": [47, 71, 198, 222, 250, 347, 450], "vdev_raidz_bench": 47, "x86": [47, 48, 53, 71, 198, 222, 250, 347, 450], "avx512bw": [47, 71, 198, 222, 250, 347, 450], "aarch64": [47, 53, 71, 198, 222, 250, 347, 450], "armv8": [47, 71, 198, 222, 250, 347, 450], "neon": [47, 71, 198, 222, 250, 347, 450], "aarch64_neonx2": [47, 71, 198, 222, 250, 347, 450], "unrol": [47, 71, 198, 222, 250, 347, 450], "80": [47, 53, 71, 79, 86, 176, 177, 182, 198, 199, 204, 222, 223, 228, 250, 251, 256, 347, 354, 361, 450, 458, 465], "itxg": 47, "clean": [47, 48, 62, 71, 168, 189, 212, 222, 240, 250, 339, 347, 441, 450], "dispatch": [47, 71, 222, 250, 347, 450], "dp_zil_clean_taskq": [47, 71, 222, 250, 347, 450], "zil_clean": 47, "zfs_zil_clean_taskq_minallo": 47, "zfs_zil_clean_taskq_maxallo": 47, "024": 47, "brought": [47, 53, 135, 164, 186, 209, 236, 407, 436, 514, 543], "replai": [47, 71, 139, 175, 176, 197, 198, 221, 222, 249, 250, 347, 411, 450, 518, 561], "abus": [47, 71, 198, 222, 250, 347, 450], "786": [47, 198, 222, 250], "worker": [47, 70, 71, 176, 198, 219, 222, 247, 250, 346, 347, 449, 450], "z_wr_iss": 47, "instanc": [47, 48, 71, 80, 81, 108, 109, 186, 209, 222, 232, 236, 250, 278, 279, 333, 334, 355, 356, 383, 384, 459, 460, 487, 488], "recompil": 47, "parallel": [47, 67, 221, 249, 411, 446], "multiprocessor": 47, "inhibit": 47, "startup": [47, 71, 81, 186, 198, 209, 222, 236, 250, 334, 347, 356, 450, 460], "230": [47, 71, 176, 198, 222, 250, 347, 450], "aka": [47, 56, 198, 222, 250], "uncommon": [47, 53], "unfortun": [47, 48, 53, 54, 554], "8kb": [47, 48, 250, 347, 353], "heavi": [47, 53, 78, 79, 80, 87, 183, 186, 205, 206, 209, 229, 232, 236, 251, 257, 298, 333, 353, 354, 355, 362, 457, 458, 459, 466], "discard_max_byt": 47, "discard_max_hw_byt": 47, "volume_inst": 47, "submitt": [47, 71, 198, 222, 250, 347, 450], "similarli": [47, 53, 82, 108, 109, 131, 178, 200, 208, 224, 232, 235, 252, 278, 279, 300, 357, 383, 384, 403, 461, 487, 488, 510], "avgqu": 47, "sz": 47, "aqu": [47, 86, 204, 228, 256, 361, 465], "volmod": [47, 71, 78, 88, 118, 198, 206, 222, 232, 250, 298, 347, 353, 363, 393, 450, 457, 467, 497], "bsd": [47, 53], "geom": [47, 78, 143, 206, 232, 298, 312, 353, 415, 457, 522], "synonym": 47, "hide": [47, 78, 206, 232, 298, 353, 457], "outsid": [47, 78, 93, 99, 119, 122, 126, 127, 139, 175, 184, 197, 206, 221, 232, 249, 263, 269, 289, 295, 298, 353, 368, 374, 394, 400, 411, 457, 472, 478, 498, 501, 505, 506, 518], "zfs_qat_": 47, "compress_dis": 47, "hiwat": 47, "lowat": 47, "dbug": 47, "fall": [47, 53], "held": [47, 70, 71, 219, 247, 250, 346, 347, 449, 450], "104": [47, 222, 250], "857": [47, 222, 250], "lowest": [47, 48, 50, 78, 139, 221, 249, 298, 353, 411, 457, 518], "scatter": [47, 71, 79, 222, 250, 251, 347, 354, 450, 458], "zio_": 47, "data_": 47, "buf_": 47, "abd_chunk_cach": 47, "kmem_cach": 47, "abdstat": 47, "buddi": 47, "incres": 47, "collis": [47, 71, 79, 104, 136, 186, 199, 209, 223, 231, 236, 250, 251, 274, 305, 347, 354, 379, 408, 450, 458, 483, 515], "birthdai": 47, "400": [47, 71, 250, 347, 450], "trillion": 47, "resiz": [47, 81, 186, 209, 236, 334, 356, 460], "therein": 47, "finer": [47, 250], "arc_min_prefetch_lifespan": 47, "tick": [47, 175, 176, 197, 198], "dtl": [47, 71, 131, 185, 198, 
208, 222, 235, 250, 300, 347, 403, 450, 510], "treatment": 47, "highest": [47, 50, 78, 86, 182, 184, 204, 206, 228, 232, 256, 298, 353, 361, 457, 465], "bracket": [47, 71, 347, 450], "sub": [47, 71, 176, 198, 222, 250, 347, 450], "2kb": [47, 250, 347], "1kb": [47, 222, 250, 347], "buf": [47, 61, 338, 440], "spill": [47, 71, 79, 80, 199, 222, 223, 236, 250, 251, 333, 347, 354, 355, 450, 458, 459], "5kb": [47, 347], "1536": [47, 222, 250], "remount": [47, 71, 74, 78, 79, 112, 131, 184, 185, 199, 206, 208, 222, 223, 232, 235, 250, 251, 282, 298, 300, 347, 350, 353, 354, 387, 403, 450, 453, 457, 458, 491, 510], "inflight": [47, 86, 182, 204, 228, 256, 361, 465], "maxinflight": 47, "inevit": 47, "failmod": [47, 81, 139, 175, 186, 197, 209, 221, 236, 249, 334, 356, 411, 460, 518, 559, 560], "recover": [47, 71, 222, 250, 347, 450, 552], "chanc": [47, 76, 78, 81, 184, 206, 232, 298, 353, 455, 457, 460, 553], "inadvert": [47, 548], "dbuf_metadata_cache_sh": 47, "ift": 47, "node_export": 47, "prometheu": [47, 164, 436, 543], "telegraf": [47, 164, 436, 543], "plugin": [47, 164, 436, 543], "channel": [47, 53, 71, 73, 85, 104, 127, 174, 181, 196, 203, 220, 222, 227, 231, 232, 248, 250, 255, 274, 295, 347, 349, 360, 379, 400, 450, 452, 464, 483, 506], "spa_minblocks": 47, "spa_maxblocks": 47, "217": [47, 222, 250], "span": [47, 71, 222, 250, 347, 450], "cancel": [47, 71, 80, 131, 133, 144, 151, 153, 160, 162, 185, 186, 208, 209, 222, 235, 236, 250, 300, 302, 313, 320, 322, 329, 331, 333, 347, 355, 403, 405, 416, 423, 425, 432, 434, 450, 459, 510, 512, 523, 530, 532, 539, 541], "inceas": 47, "sleep": [47, 71, 250, 347, 450], "zfs_conden": 47, "e_indirect_commit_entry_delay_m": 47, "condens": [47, 71, 222, 250, 347, 450], "obsolet": [47, 71, 79, 86, 222, 223, 250, 251, 256, 347, 354, 361, 450, 458, 465], "s_condense_indirect_vdevs_en": 47, "zfs_vdev_max_": 47, "zfs_vde": 47, "v_initializing_max_act": 47, "zfs_vdev": 47, "_max_act": [47, 71, 250, 347, 450], "iv": [47, 108, 109, 232, 278, 279, 383, 384, 487, 488], "dev_max_act": 47, "zfs_vdev_trim_m": 47, "0xdeadbeef": [47, 130, 207, 234, 299, 402, 509], "0xdeadbeefdeadbee": [47, 71, 222, 250, 347, 450], "lua": [47, 71, 104, 127, 222, 231, 232, 250, 274, 295, 347, 379, 400, 450, 483, 506], "nest": [47, 71, 80, 127, 186, 209, 222, 232, 236, 250, 295, 333, 347, 355, 400, 450, 459, 506], "deepli": 47, "impract": 47, "predefin": [47, 71, 222, 250, 347, 450], "computation": [47, 71, 78, 222, 232, 250, 298, 347, 353, 450, 457], "particip": [47, 71, 222, 250, 347, 450], "zfs_recon": 47, "struct_indirect_combinations_max": 47, "unmodifi": [47, 71, 78, 184, 206, 222, 232, 250, 298, 347, 353, 450, 457], "backward": [47, 48, 71, 81, 110, 114, 127, 186, 209, 222, 236, 250, 280, 284, 295, 334, 347, 356, 385, 389, 400, 450, 460, 489, 493, 506], "recreat": [47, 48, 71, 81, 108, 109, 184, 186, 206, 209, 222, 232, 236, 250, 278, 279, 334, 347, 356, 383, 384, 450, 460, 487, 488, 551, 557], "zfs_trim_extent_bi": 47, "tes_min": 47, "134": [47, 222, 250], "728": [47, 222, 250], "unalloc": [47, 144, 163, 222, 236, 250, 313, 332, 416, 435, 523, 542], "max_": 47, "uniniti": [47, 48, 71, 81, 139, 175, 186, 197, 209, 221, 222, 236, 249, 250, 334, 347, 356, 411, 450, 460, 518], "thinli": [47, 71, 160, 163, 222, 236, 250, 329, 332, 347, 432, 435, 450, 539, 542], "provis": [47, 71, 78, 80, 160, 163, 184, 206, 222, 232, 236, 250, 298, 329, 332, 333, 347, 353, 355, 432, 435, 450, 457, 459, 539, 542], "opposit": [47, 71, 222, 250, 347, 450], "stride": 47, "blkdev": 47, 
"v_aggregation_limit_non_rot": 47, "diagnost": [47, 71, 222, 250, 347, 450], "denomin": [47, 222, 250], "zevent": [47, 71, 87, 176, 183, 198, 205, 222, 229, 250, 257, 347, 362, 450, 466], "inappropri": [47, 100, 184, 206, 232, 270, 375, 479], "ivset": [47, 71, 250, 347, 450], "crypt_keydata": 47, "to_ivset_guid": 47, "heurist": [47, 71, 168, 189, 212, 240, 250, 339, 347, 450], "16mb": [47, 176, 198, 222, 250, 347], "postpon": [47, 79, 223, 251, 354, 458], "constraint": 47, "freez": [47, 67, 71, 250, 343, 347, 446, 450], "paus": [47, 71, 133, 139, 155, 162, 163, 209, 221, 222, 236, 249, 250, 324, 331, 332, 347, 411, 427, 434, 435, 450, 518, 534, 541, 542], "s_count_limit": 47, "_min_ms_count": 47, "assign": [47, 53, 65, 71, 78, 79, 80, 184, 206, 223, 232, 236, 251, 298, 333, 353, 354, 355, 444, 450, 457, 458, 459], "factori": 47, "294": 47, "967": 47, "295": 47, "0xffffffff": 47, "zfs_hostid": [47, 67, 131, 194, 216, 244, 343, 403, 446, 510], "kmem_alloc": [47, 70, 219, 247, 346, 449], "kmalloc_max_s": [47, 70, 219, 247, 346, 449], "4x": [47, 70, 219, 247, 250, 346, 449], "vmem_alloc": [47, 70, 219, 247, 346, 449], "kmalloc": [47, 70, 219, 247, 346, 449], "vmalloc": [47, 53, 70, 219, 247, 346, 449], "eight": [47, 70, 219, 247, 346, 449], "seriou": [47, 70, 219, 247, 346, 449], "concern": [47, 70, 219, 247, 346, 449], "largish": [47, 70, 219, 247, 346, 449], "caught": [47, 70, 219, 247, 346, 449], "magazin": [47, 70, 219, 247, 346, 449], "notifi": [47, 219, 247], "bitmask": 47, "0x01": [47, 219, 247], "0x02": [47, 219, 247], "increasingli": [47, 219], "cutoff": [47, 70, 219, 247, 346, 449], "quarter": [47, 71, 219, 347, 450], "page_s": [47, 219], "footprint": [47, 70, 71, 176, 198, 219, 222, 247, 250, 346, 347, 449, 450], "likelihood": [47, 219, 247, 346, 449], "halt": [47, 70, 71, 176, 198, 219, 222, 247, 250, 346, 347, 449, 450], "spawn": [47, 70, 219, 247, 346, 449], "flexibl": [47, 67, 70, 79, 108, 109, 172, 177, 194, 199, 216, 219, 223, 232, 244, 247, 251, 278, 279, 343, 346, 354, 383, 384, 446, 449, 458, 487, 488], "taskq_dynam": [47, 70, 219, 247, 346, 449], "promptli": [47, 70, 219, 247, 346, 449], "item": [47, 70, 145, 209, 219, 236, 247, 314, 346, 417, 449, 524], "interrupt": [47, 70, 79, 108, 109, 110, 114, 177, 199, 206, 219, 223, 232, 247, 251, 278, 279, 280, 284, 346, 354, 383, 384, 385, 389, 449, 458, 487, 488, 489, 493], "ramp": [47, 70, 219, 247, 346, 449], "spl_kmem_cach": [47, 70, 219, 247, 346, 449], "realloc": [47, 70, 219, 247, 346, 449], "contend": [47, 70, 219, 247, 346, 449], "decad": 48, "necess": [48, 74, 350, 453], "evicit": 48, "outperform": 48, "dedic": [48, 53, 67, 71, 79, 80, 136, 186, 209, 223, 236, 251, 305, 333, 347, 354, 355, 408, 450, 458, 459, 515], "devicenam": 48, "oracl": [48, 52, 184], "contrast": [48, 81, 110, 114, 184, 206, 232, 236, 280, 284, 334, 356, 385, 389, 460, 489, 493], "stand": [48, 78, 184, 206, 232, 298, 353, 457], "logarithm": [48, 71, 347, 450], "accord": [48, 50, 65, 66, 71, 78, 80, 81, 84, 91, 92, 112, 116, 131, 133, 176, 180, 184, 185, 186, 198, 202, 206, 208, 209, 222, 226, 232, 235, 236, 250, 254, 261, 262, 282, 286, 298, 300, 333, 334, 347, 353, 355, 356, 359, 366, 367, 387, 391, 403, 444, 445, 450, 457, 459, 460, 463, 470, 471, 491, 495, 510, 550], "incur": [48, 90, 101, 120, 232, 260, 271, 290, 365, 376, 395, 469, 480, 499], "implicit": [48, 77, 78, 110, 114, 184, 206, 232, 280, 284, 297, 298, 352, 353, 385, 389, 456, 457, 489, 493], "world": 48, "2007": [48, 558], "nand": [48, 51], "gnop": 48, "2011": 48, "maczf": 48, "osx": 
48, "flaw": 48, "reli": [48, 53, 78, 184, 206, 232, 298, 353, 457], "compens": 48, "ambigu": 48, "lun": [48, 73, 81, 174, 186, 196, 209, 220, 236, 248, 334, 349, 356, 452, 460], "speak": [48, 49, 71, 176, 198, 222, 250, 347, 450], "belong": [48, 76, 81, 110, 114, 186, 209, 236, 334, 356, 385, 389, 455, 460, 489, 493], "difficulti": [48, 78, 232, 298, 353, 457], "necessit": 48, "4kb": [48, 177, 199, 223, 251, 347, 353, 354, 355], "128kb": [48, 177, 184, 199, 206, 222, 223, 232, 250, 251, 280, 284, 347, 353, 354, 385, 389], "lzjb": [48, 78, 79, 165, 166, 177, 184, 199, 206, 223, 232, 251, 298, 353, 354, 457, 458, 544, 545], "satisfi": [48, 50, 71, 80, 127, 176, 186, 198, 209, 222, 230, 236, 250, 272, 333, 347, 355, 450, 459, 506], "fair": [48, 53], "incompress": [48, 71, 79, 177, 199, 223, 251, 354, 450, 458], "lempel": 48, "ziv": 48, "encod": [48, 53, 78, 79, 206, 223, 232, 251, 298, 353, 354, 457, 458], "zstandard": 48, "offer": [48, 79, 110, 114, 251, 280, 284, 354, 385, 389, 458, 489, 493], "decod": [48, 86, 204, 228, 256, 361, 465], "uncertain": 48, "figur": 48, "silesia": 48, "corpu": 48, "worthwhil": [48, 78, 232, 298, 353, 457], "megabyt": [48, 95, 98, 115, 171, 184, 193, 198, 206, 222, 232, 250, 265, 268, 285, 370, 373, 390, 474, 477, 494], "16m": [48, 145, 193, 236, 314, 417, 524], "zfs_max_records": [48, 71, 176, 198, 222, 250, 347, 450], "analog": 48, "16kb": [48, 347, 354], "decent": [48, 78, 184, 206, 232, 298, 353, 457], "amplif": 48, "fse": 48, "meaningless": 48, "fragment": [48, 53, 71, 76, 79, 81, 86, 147, 176, 186, 198, 209, 222, 236, 250, 251, 316, 334, 347, 354, 356, 419, 450, 455, 458, 460, 465, 526], "insuffici": [48, 71, 80, 92, 132, 136, 186, 209, 236, 262, 301, 305, 333, 347, 355, 367, 404, 408, 450, 459, 471, 511, 515, 549, 551, 559, 560], "7200rpm": 48, "uncach": [48, 61, 440], "400kb": 48, "simul": [48, 67, 86, 131, 172, 182, 185, 204, 208, 228, 235, 256, 300, 343, 361, 403, 446, 465, 510], "mac": [48, 90, 101, 120, 232, 260, 271, 290, 365, 376, 395, 469, 480, 499], "spin": 48, "metaslab_lba_weighting_en": [48, 71, 176, 198, 222, 250, 347, 450], "tuanbl": 48, "fit": 48, "tell": [48, 127, 400, 506], "mmm": [48, 86, 204, 228, 256, 361, 465], "compani": [48, 53, 57], "elev": 48, "whole_disk": 48, "precaut": 48, "flow": 48, "determinist": [48, 104, 231, 274, 379, 483], "cope": [48, 77, 184, 206, 232, 297, 352, 456], "ephemer": [48, 96, 106, 124, 184, 206, 232, 266, 276, 293, 371, 381, 398, 475, 485, 503], "amazon": 48, "ec2": 48, "stamp": [48, 145, 147, 158, 162, 186, 209, 236, 314, 316, 327, 331, 417, 419, 430, 434, 524, 526, 537, 541], "inherit": [48, 74, 76, 77, 78, 79, 83, 90, 91, 92, 95, 101, 102, 104, 105, 108, 109, 112, 115, 120, 127, 184, 186, 206, 223, 232, 251, 253, 260, 261, 262, 265, 271, 274, 275, 278, 279, 282, 285, 290, 295, 297, 298, 350, 352, 353, 354, 358, 365, 366, 367, 370, 376, 377, 379, 380, 383, 384, 387, 390, 395, 400, 453, 455, 456, 457, 458, 462, 469, 470, 471, 474, 480, 481, 483, 484, 487, 488, 491, 494, 499, 506], "10gb": [48, 186, 209, 236, 332, 435], "bottleneck": 48, "o_sync": [48, 78, 184, 206, 232, 298, 353, 457], "optan": [48, 51], "3d": [48, 51], "xpoint": [48, 51], "overprovison": 48, "somewhat": [48, 184], "alright": 48, "mix": [48, 50, 53, 78, 108, 109, 136, 184, 186, 206, 209, 232, 236, 278, 279, 298, 305, 353, 383, 384, 408, 457, 487, 488, 515, 557], "unpartit": [48, 53], "sanit": 48, "explain": [48, 127, 506], "rewrit": [48, 71, 133, 176, 198, 222, 250, 347, 450], "defrag": 48, "redundant_metadata": [48, 78, 88, 118, 184, 206, 
232, 298, 353, 363, 393, 457, 467, 497], "16k": [48, 53, 67, 70, 78, 86, 182, 194, 204, 206, 216, 219, 228, 232, 244, 247, 256, 298, 343, 346, 353, 361, 446, 449, 457, 465], "innodb_doublewrit": 48, "cnf": 48, "percona": 48, "advoc": 48, "recant": 48, "advic": 48, "aio": 48, "bare": 48, "codepath": 48, "innodb_use_native_aio": 48, "innodb_use_atomic_writ": 48, "wal": 48, "64k": [48, 78, 184, 198, 206, 222, 232, 250, 298, 353, 457], "full_page_writ": 48, "65536": [48, 53, 171, 193, 198, 222, 250], "exercis": [48, 86, 465], "merit": 48, "casesensit": [48, 78, 88, 95, 98, 108, 109, 115, 118, 127, 184, 206, 232, 258, 278, 279, 288, 295, 298, 353, 363, 383, 384, 393, 400, 457, 467, 474, 477, 487, 488, 494, 497, 506], "insensit": [48, 78, 184, 206, 232, 298, 353, 457], "smb": [48, 78, 88, 96, 106, 116, 118, 124, 127, 184, 206, 232, 258, 266, 276, 286, 288, 293, 295, 298, 353, 363, 371, 381, 391, 393, 398, 400, 457, 467, 475, 485, 495, 497, 503, 506], "despit": [48, 79, 177, 199, 223, 251, 354, 458, 548, 550], "librari": [48, 104, 231, 232, 274, 379, 483], "humbl": 48, "tweak": 48, "saw": 48, "asset": 48, "tab": [48, 62, 92, 94, 95, 96, 97, 98, 100, 106, 111, 115, 124, 139, 141, 145, 147, 156, 162, 168, 184, 186, 189, 206, 209, 212, 232, 236, 240, 262, 264, 265, 266, 267, 268, 270, 276, 281, 285, 293, 308, 310, 314, 316, 325, 331, 339, 367, 369, 370, 371, 372, 373, 375, 381, 386, 390, 398, 411, 413, 417, 419, 428, 434, 441, 471, 473, 474, 475, 476, 477, 479, 485, 490, 494, 503, 518, 520, 524, 526, 535, 541], "dialogu": 48, "proton": 48, "maxim": [48, 78, 206, 232, 298, 353, 457], "6489": 48, "patpro": 48, "php": 48, "2617": 48, "pragma": 48, "pragma_page_s": 48, "pgszchng2016": 48, "13790": 48, "patchwork": 48, "20190626121943": 48, "131390": 48, "glider": 48, "googl": 48, "22731857": 48, "12406": 48, "waiter": [49, 71, 176, 198, 222, 250, 347, 450], "credit": [49, 71, 176, 198, 222, 250, 347, 450], "min_tim": [49, 71, 176, 198, 222, 250, 347, 450], "zfs_delay_scal": [49, 71, 176, 198, 222, 250, 347, 450], "zfs_delay_min_dirty_perc": [49, 71, 176, 198, 222, 250, 347, 450], "curv": [49, 71, 176, 198, 222, 250, 347, 450], "midpoint": [49, 71, 176, 198, 222, 250, 347, 450], "10m": [49, 71, 176, 198, 222, 250, 347, 450], "9m": [49, 71, 86, 176, 182, 198, 204, 222, 228, 250, 256, 347, 361, 450, 465], "8m": [49, 71, 176, 198, 222, 250, 347, 450], "7m": [49, 71, 176, 198, 209, 222, 236, 250, 347, 450], "6m": [49, 71, 176, 198, 209, 222, 236, 250, 347, 450], "5m": [49, 71, 176, 198, 222, 250, 347, 450], "4m": [49, 71, 155, 171, 176, 193, 198, 222, 250, 347, 427, 450, 534], "3m": [49, 71, 176, 198, 222, 250, 347, 450], "2m": [49, 71, 176, 198, 222, 250, 347, 450], "microsecond": 49, "2000": [49, 71, 176, 198, 222, 250, 347, 450], "shape": [49, 71, 176, 198, 222, 250, 347, 450], "accumul": [49, 71, 164, 176, 198, 222, 250, 347, 436, 450, 543], "yield": [49, 71, 176, 198, 222, 250, 347, 450], "100u": [49, 71, 176, 198, 222, 250, 347, 450], "10u": [49, 71, 176, 198, 222, 250, 347, 450], "steep": [49, 71, 176, 198, 222, 250, 347, 450], "five": [50, 71, 176, 198, 222, 250, 347, 450, 561], "prefetch": [50, 61, 71, 78, 176, 198, 222, 239, 250, 338, 347, 440, 450], "zfs_vdev_max_act": [50, 71, 176, 198, 222, 250, 347, 450], "met": [50, 71, 93, 176, 184, 198, 206, 222, 232, 250, 263, 347, 368, 450, 472], "zfs_vdev_sync_read_min_act": [50, 71, 176, 198, 222, 250, 347, 450], "zfs_vdev_sync_read_max_act": [50, 71, 176, 198, 222, 250, 347, 450], "zfs_vdev_sync_write_min_act": [50, 71, 176, 198, 222, 250, 347, 450], 
"zfs_vdev_sync_write_max_act": [50, 71, 176, 198, 222, 250, 347, 450], "zfs_vdev_async_read_min_act": [50, 71, 176, 198, 222, 250, 347, 450], "zfs_vdev_async_read_max_act": [50, 71, 176, 198, 222, 250, 347, 450], "zfs_vdev_scrub_min_act": [50, 71, 176, 198, 222, 250, 347, 450], "zfs_vdev_scrub_max_act": [50, 71, 176, 198, 222, 250, 347, 450], "stage": [50, 71, 139, 175, 176, 197, 198, 221, 222, 249, 250, 347, 411, 450, 518], "burst": [50, 71, 176, 198, 222, 250, 347, 450], "zfs_txg_timeout": [50, 71, 176, 198, 222, 250, 347, 450], "bursti": [50, 71, 176, 198, 222, 250, 347, 450], "broad": [50, 71, 176, 198, 222, 250, 347, 450], "stroke": [50, 71, 176, 198, 222, 250, 347, 450], "microcod": 51, "ecc": [51, 57], "torrent": 51, "wine": 51, "encompass": 53, "wikipedia": 53, "afford": [53, 67, 172, 194, 216, 244, 343, 446], "numer": [53, 61, 76, 77, 78, 86, 95, 96, 98, 100, 104, 106, 115, 124, 143, 163, 174, 184, 186, 196, 206, 209, 220, 231, 232, 236, 239, 265, 266, 268, 270, 274, 276, 285, 293, 297, 298, 312, 332, 338, 352, 353, 361, 370, 371, 373, 375, 379, 381, 390, 398, 415, 435, 440, 455, 456, 457, 465, 474, 475, 477, 479, 483, 485, 494, 503, 522, 542, 547, 550, 558], "opensourc": 53, "umbrella": 53, "8gb": 53, "2gb": 53, "strongest": 53, "cosmic": 53, "rai": 53, "undetect": 53, "grade": 53, "justifi": 53, "arm": 53, "ppc": 53, "ppc64": 53, "oldest": [53, 93, 112, 117, 127, 184, 206, 232, 263, 295, 368, 400, 472, 491, 496, 506], "promin": 53, "importantli": 53, "discourag": [53, 78, 80, 184, 186, 206, 209, 232, 236, 298, 333, 353, 355, 457, 459], "bump": 53, "vmap": 53, "4198400": 53, "conting": 53, "wean": 53, "tighter": 53, "box": 53, "drawback": 53, "hdx": 53, "human": [53, 66, 76, 78, 95, 98, 115, 164, 170, 171, 184, 192, 193, 206, 215, 232, 243, 265, 268, 285, 298, 342, 353, 370, 373, 390, 436, 445, 455, 457, 474, 477, 494, 543], "friendli": [53, 85, 181, 203, 227, 255, 360, 464], "desk": 53, "prone": [53, 92, 471], "sata_hitachi_hts7220071201dp1d10dgg6hmrp": 53, "cabl": [53, 561], "ti": [53, 77, 78, 184, 206, 232, 297, 298, 352, 353, 456, 457], "cumbersom": 53, "0000": 53, "1f": 53, "jbod": [53, 73, 85, 174, 181, 196, 203, 220, 227, 248, 255, 349, 360, 452, 464], "pick": [53, 155, 236, 324, 427, 534], "meaning": [53, 76, 78, 81, 127, 184, 206, 232, 295, 298, 353, 400, 455, 457, 460, 506], "clarifi": 53, "emploi": 53, "deriv": [53, 73, 174, 196, 220, 248, 349, 452], "wwn": [53, 73, 174, 196, 220, 248, 349, 452], "b1": 53, "a2": 53, "b2": 53, "think": [53, 78, 206, 232, 298, 353, 457], "partlabel": 53, "sas_direct": [53, 73, 85, 174, 181, 196, 203, 220, 227, 248, 255, 349, 360, 452, 464], "phys_per_port": [53, 73, 85, 174, 181, 196, 203, 220, 227, 248, 255, 349, 360, 452, 464], "pci_slot": [53, 73, 174, 196, 220, 248, 349, 452], "sas_switch": [53, 73, 85, 174, 181, 196, 203, 220, 227, 248, 255, 349, 360, 452, 464], "definit": [53, 61, 73, 85, 174, 181, 196, 203, 220, 227, 239, 248, 255, 338, 349, 360, 440, 452, 464], "86": [53, 73, 174, 196, 220, 248, 349, 452], "qualifi": [53, 73, 93, 95, 98, 115, 127, 174, 184, 196, 206, 220, 232, 248, 263, 295, 349, 368, 400, 452, 472, 474, 477, 494, 506], "d1": [53, 73, 174, 196, 220, 248, 349, 452], "0x5000c5002de3b9ca": [53, 73, 174, 196, 220, 248, 349, 452], "d2": [53, 73, 174, 196, 220, 248, 349, 452], "0x5000c5002def789": [53, 73, 174, 196, 220, 248, 349, 452], "a0": 53, "b0": 53, "a3": 53, "b3": 53, "a4": 53, "b4": 53, "a5": [53, 58, 59, 562], "b5": 53, "a6": 53, "b6": 53, "a7": 53, "b7": 53, "stale": [53, 230, 272], "failov": [53, 81, 209, 
236, 334, 356, 460], "rc1": [53, 54], "sender": [53, 54, 110, 114, 280, 284, 385, 389, 489, 493], "unaffect": [53, 78, 108, 109, 184, 206, 232, 278, 279, 298, 353, 383, 384, 457, 487, 488, 555], "6224": 53, "filestor": 53, "rbd": 53, "cephf": 53, "objectstor": 53, "s3": 53, "rado": 53, "xf": 53, "osd": 53, "gear": 53, "filestore_max_inline_xattr": 53, "filestore_max_inline_xattr_s": 53, "filestore_max_xattr_value_s": 53, "journal": [53, 74, 350, 453], "colloc": 53, "terribl": 53, "upfront": 53, "dsync": 53, "qualiti": [53, 57], "WILL": 53, "NOT": [53, 78, 184, 206, 232, 298, 353, 457, 557], "830": 53, "840": 53, "850": 53, "sm853": 53, "200gb": 53, "rememb": [53, 78, 206, 232, 298, 353, 457], "4x10gb": 53, "4x20gb": 53, "disappoint": 53, "interoper": 53, "wholedisk": 53, "volsiz": [53, 78, 88, 92, 108, 109, 118, 184, 206, 232, 258, 262, 278, 279, 288, 298, 353, 363, 367, 383, 384, 393, 457, 467, 471, 487, 488, 497], "reus": [53, 71, 78, 232, 250, 298, 347, 353, 450, 457], "aris": 53, "ex": [53, 235, 300, 403], "fstrim": 53, "dom0_mem": 53, "16384m": 53, "zfs_arc_max": [53, 71, 176, 198, 222, 250, 347, 450], "6442450944": 53, "balloon": 53, "xl": 53, "watch": 53, "id_part_entry_schem": 53, "id_fs_typ": 53, "zfs_member": 53, "id_part_entry_typ": 53, "6a898cc3": 53, "1dd2": 53, "11b2": 53, "99a6": 53, "080020736631": 53, "udisks_ignor": 53, "tracker": [53, 57, 58, 59], "quicker": [53, 71, 250, 347, 450], "exception": 53, "vmm": 53, "trace": [53, 104, 182, 204, 231, 274, 379, 483], "technic": [54, 55, 59, 110, 114, 280, 284, 385, 389, 489, 493], "scrape": 54, "combinator": 54, "nightmar": 54, "feasibli": 54, "birth_tim": 54, "wonder": 54, "knowledg": [54, 86, 182, 204, 228, 256, 361, 465], "surround": 54, "oh": 54, "ignore_hole_birth": [54, 71, 176, 198, 222, 250, 347, 450], "send_holes_without_birth_tim": [54, 71, 79, 222, 223, 250, 251, 347, 354, 450, 458], "announc": 55, "traffic": [55, 78, 184, 206, 232, 298, 353, 457], "ned": 56, "bass": 56, "c77b9667": 56, "29d5": 56, "610e": 56, "ae29": 56, "41e3": 56, "55a2": 56, "fe8a": 56, "b974": 56, "67aa": 56, "c77b": 56, "9667": 56, "toni": 56, "hutter": 56, "d4598027": 56, "4f3b": 56, "a9ab": 56, "6d1f": 56, "8d68": 56, "3dc2": 56, "dfb5": 56, "6ad8": 56, "60ee": 56, "d459": 56, "8027": 56, "brian": [56, 171, 180, 193, 202, 226, 254], "behlendorf": [56, 171, 180, 193, 202, 226, 254], "c6af658b": 56, "c33d": 56, "f142": 56, "657e": 56, "d1f7": 56, "c328": 56, "a296": 56, "0ab9": 56, "e991": 56, "c6af": 56, "658b": 56, "ring": 56, "behlendorf1": [56, 171, 180, 193, 202, 226, 254], "llnl": [56, 171, 180, 183, 193, 202, 205, 226, 229, 254, 257], "gov": [56, 171, 180, 193, 202, 226, 254], "7a27ad00ae142b38d4aef8cc0af7a72b4c0e44f": 56, "tagger": 56, "1441996302": 56, "0700": 56, "fri": [56, 558], "sep": [56, 553], "42": [56, 147, 163, 186, 209, 236, 332, 435, 526, 542, 558], "pdt": 56, "dsa": 56, "bring": [57, 80, 81, 129, 143, 148, 149, 157, 163, 186, 209, 236, 312, 317, 318, 326, 332, 334, 356, 415, 420, 421, 429, 435, 459, 460, 508, 522, 527, 528, 536, 542, 548], "togeth": [57, 71, 79, 108, 109, 110, 112, 114, 171, 176, 193, 198, 222, 232, 250, 275, 278, 279, 280, 282, 284, 347, 354, 383, 384, 385, 387, 389, 450, 458, 487, 488, 489, 491, 493], "annual": 57, "rais": [57, 78, 232, 298, 353, 457], "ongo": [57, 155, 427, 534], "admin": [57, 58, 59], "vdev_id": [57, 72, 74, 81, 83, 173, 179, 195, 201, 209, 218, 225, 236, 246, 253, 334, 348, 350, 356, 358, 451, 453, 460, 462], "ceph": 57, "xen": 57, "hypervisor": 57, "dom0": 57, "udisks2": 57, "conduct": 
57, "roadmap": [57, 58, 59], "8000": [58, 59, 562], "2q": [58, 59, 561, 562], "3c": [58, 59, 562], "4j": [58, 59, 562], "5e": [58, 59, 562], "6x": [58, 59, 562], "9p": [58, 59, 562], "er": [58, 59, 562], "hc": [58, 59, 560, 562], "jq": [58, 59, 562], "k4": [58, 59, 562], "favorit": 59, "x2014": [61, 62, 64, 65, 66, 67, 68, 70, 71, 73, 74, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 204, 206, 207, 209, 217, 228, 231, 232, 234, 236, 245, 248, 255, 256, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 338, 339, 341, 342, 343, 344, 346, 347, 349, 350, 352, 353, 354, 355, 356, 357, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 440, 441, 443, 444, 445, 446, 447, 449, 450, 452, 453, 455, 456, 457, 458, 459, 460, 461, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545], "havxp": [61, 239, 338, 440], "x2026": [61, 62, 65, 74, 78, 79, 81, 82, 86, 87, 88, 91, 92, 93, 95, 96, 97, 98, 100, 102, 105, 106, 110, 111, 114, 115, 117, 118, 124, 125, 132, 135, 136, 140, 141, 142, 143, 144, 145, 147, 148, 149, 151, 152, 154, 155, 156, 157, 158, 159, 160, 161, 162, 164, 165, 166, 338, 339, 350, 354, 356, 357, 361, 363, 366, 367, 368, 370, 371, 372, 373, 375, 377, 380, 381, 385, 386, 389, 390, 392, 393, 398, 399, 404, 407, 408, 412, 413, 414, 415, 416, 417, 419, 420, 421, 423, 424, 426, 427, 428, 429, 430, 431, 432, 433, 434, 436, 437, 438, 440, 441, 444, 453, 457, 458, 460, 461, 465, 466, 467, 470, 471, 472, 474, 475, 476, 477, 479, 481, 484, 485, 489, 490, 493, 494, 496, 497, 503, 504, 511, 514, 515, 519, 520, 521, 522, 523, 524, 526, 527, 528, 530, 531, 533, 534, 535, 536, 537, 538, 539, 540, 541, 543, 544, 545], "vmstat": [61, 239, 338, 440], "ddh": [61, 440], "ddi": [61, 440], "ddm": [61, 440], "dmh": [61, 440], "dmi": [61, 239, 338, 440], "dmm": [61, 440], "mh": [61, 239, 338, 440], "mi": [61, 440], "ph": [61, 186, 239, 338, 440], "pm": [61, 239, 338, 440], "pdh": [61, 440], "pdi": [61, 440], "pdm": [61, 440], "pmh": [61, 440], "pmi": [61, 239, 338, 440], "pmm": [61, 440], "dhit": [61, 239, 338, 440], "dioh": [61, 440], "ddhit": [61, 440], "ddioh": [61, 440], "ddmi": [61, 440], "dmhit": [61, 440], 
"dmioh": [61, 440], "dmmi": [61, 440], "ioh": [61, 440], "mfug": [61, 239, 338, 440], "mhit": [61, 239, 338, 440], "mioh": [61, 440], "mmi": [61, 239, 338, 440], "mrug": [61, 239, 338, 440], "phit": [61, 239, 338, 440], "pioh": [61, 440], "pdhit": [61, 440], "pdioh": [61, 440], "pdmi": [61, 440], "pmhit": [61, 440], "pmioh": [61, 440], "pmmi": [61, 440], "arcsz": [61, 239, 338, 440], "unc": [61, 440], "dread": [61, 239, 338, 440], "ddread": [61, 440], "dmread": [61, 440], "eskip": [61, 239, 338, 440], "evict_skip": [61, 239, 338, 440], "mread": [61, 239, 338, 440], "pread": [61, 239, 338, 440], "pdread": [61, 440], "pmread": [61, 440], "l2hit": [61, 239, 338, 440], "l2miss": [61, 239, 338, 440], "l2read": [61, 239, 338, 440], "l2pref": [61, 338, 440], "l2mfu": [61, 338, 440], "l2mru": [61, 338, 440], "l2data": [61, 338, 440], "l2meta": [61, 338, 440], "l2size": [61, 239, 338, 440], "mtxmi": [61, 239, 338, 440], "mutex_miss": [61, 239, 338, 440], "l2byte": [61, 239, 338, 440], "l2asiz": [61, 239, 338, 440], "parsabl": [61, 92, 93, 94, 95, 96, 98, 100, 106, 110, 114, 115, 124, 141, 145, 147, 151, 156, 158, 162, 184, 206, 209, 232, 236, 239, 262, 263, 264, 265, 266, 268, 270, 276, 280, 284, 285, 293, 310, 314, 316, 320, 325, 327, 331, 338, 367, 368, 369, 370, 371, 373, 375, 381, 385, 389, 390, 398, 413, 417, 419, 423, 428, 430, 434, 440, 471, 472, 473, 474, 475, 477, 479, 485, 489, 493, 494, 503, 520, 524, 526, 530, 535, 537, 541], "operand": [61, 239, 338, 440], "sampl": [61, 145, 164, 209, 236, 239, 314, 338, 417, 436, 440, 524, 543], "decemb": [61, 186, 269, 289, 367, 440], "23": [61, 67, 79, 127, 147, 163, 172, 184, 186, 194, 206, 209, 216, 232, 236, 244, 295, 332, 343, 400, 435, 440, 446, 458, 506, 526, 542], "2022": [61, 76, 79, 88, 89, 91, 92, 93, 94, 95, 98, 100, 107, 112, 113, 115, 117, 118, 122, 126, 127, 132, 136, 137, 140, 143, 145, 147, 151, 158, 161, 163, 165, 166, 353, 440, 455, 458, 467, 468, 470, 471, 472, 473, 474, 477, 479, 486, 491, 492, 494, 496, 497, 501, 505, 506, 511, 515, 516, 519, 522, 524, 526, 530, 537, 540, 542, 544, 545], "stylist": [62, 168, 189, 212, 240, 339, 441], "chpvcp": [62, 168, 189, 212, 240, 339, 441], "upenn": [62, 168, 189, 212, 240, 339, 441], "lee": [62, 168, 189, 212, 240, 339, 441], "06cse480": [62, 168, 189, 212, 240, 339, 441], "emptor": [62, 168, 189, 212, 240, 339, 441], "indent": [62, 168, 189, 212, 240, 339, 441], "picki": [62, 168, 189, 212, 240, 339, 441], "ansi": [62, 127, 163, 168, 189, 212, 240, 332, 339, 400, 435, 441, 506, 542], "endif": [62, 168, 189, 212, 240, 339, 441], "cast": [62, 168, 189, 212, 240, 339, 441], "putback": [62, 168, 189, 212, 240, 339, 441], "u_int": [62, 168, 189, 212, 240, 339, 441], "u_long": [62, 168, 189, 212, 240, 339, 441], "uint_t": [62, 168, 189, 212, 240, 339, 441], "ulong_t": [62, 168, 189, 212, 240, 339, 441], "nonempti": [62, 441], "parenthesi": [62, 104, 168, 189, 212, 231, 240, 274, 339, 379, 441, 483], "preprocessor": [62, 168, 189, 212, 240, 339, 441], "unmatch": [62, 168, 189, 212, 240, 339, 441], "cpp": [62, 168, 189, 212, 240, 339, 441], "all_cap": [62, 168, 189, 212, 240, 339, 441], "deserv": [62, 168, 189, 212, 240, 339, 441], "this_is_a_long_vari": [62, 168, 189, 212, 240, 339, 441], "another_vari": [62, 168, 189, 212, 240, 339, 441], "Will": [62, 104, 143, 168, 186, 189, 209, 212, 231, 236, 240, 274, 312, 339, 379, 415, 441, 483, 522], "do_someth": [62, 168, 189, 212, 240, 339, 441], "amp": [62, 67, 71, 74, 87, 127, 139, 163, 168, 183, 189, 198, 205, 212, 221, 222, 229, 240, 249, 250, 257, 
295, 332, 339, 343, 347, 350, 362, 400, 411, 435, 441, 446, 450, 453, 466, 506, 518, 542], "26": [62, 64, 65, 66, 67, 73, 82, 85, 86, 87, 130, 131, 164, 182, 204, 209, 228, 231, 236, 248, 255, 256, 274, 338, 339, 341, 342, 343, 349, 357, 360, 361, 362, 402, 403, 436, 441, 443, 444, 445, 446, 452, 461, 464, 465, 466, 509, 510, 543], "2021": [62, 64, 65, 66, 67, 68, 73, 82, 84, 85, 87, 99, 102, 104, 105, 116, 119, 125, 130, 131, 134, 135, 144, 146, 150, 152, 153, 154, 155, 157, 160, 162, 164, 248, 250, 255, 274, 298, 299, 338, 339, 341, 342, 343, 344, 349, 354, 355, 356, 357, 359, 360, 362, 363, 364, 366, 369, 370, 373, 374, 375, 377, 379, 380, 388, 390, 391, 392, 393, 394, 399, 402, 403, 404, 406, 407, 408, 409, 411, 416, 417, 418, 422, 424, 425, 426, 427, 429, 430, 432, 434, 435, 436, 437, 438, 441, 443, 444, 445, 446, 447, 452, 461, 463, 464, 466, 478, 481, 483, 484, 495, 498, 504, 509, 510, 513, 514, 523, 525, 529, 531, 532, 533, 534, 536, 539, 541, 543], "raidz_test": [63, 190, 213, 241, 340, 442], "zhack": [63, 169, 190, 213, 241, 340, 442], "zvol_wait": [63, 213, 241, 340, 442], "benchmark": [64, 71, 191, 198, 214, 222, 242, 250, 341, 347, 443, 450], "stbevtd": [64, 341, 443], "zio_off_shift": [64, 191, 214, 242, 341, 443], "raidz_data_disk": [64, 191, 214, 242, 341, 443], "zio_size_shift": [64, 191, 214, 242, 341, 443], "reflow_offset": [64, 341, 443], "sweep": [64, 191, 214, 242, 341, 443], "19": [64, 78, 127, 147, 163, 184, 186, 191, 206, 209, 214, 230, 232, 236, 242, 295, 298, 332, 341, 353, 400, 435, 443, 457, 506, 526, 542], "expans": [64, 67, 71, 81, 133, 171, 186, 193, 209, 236, 334, 341, 356, 443, 460], "weep": [64, 191, 214, 242, 341, 443], "aod": [64, 341, 443], "imeout": [64, 191, 214, 242, 341, 443], "wall": [64, 191, 214, 242, 341, 443], "enchmark": [64, 191, 214, 242, 341, 443], "xpansion": [64, 341, 443], "erbos": [64, 172, 191, 194, 214, 216, 242, 244, 341, 443], "est": [64, 191, 214, 242, 341, 443], "ebug": [64, 191, 214, 242, 341, 443], "gdb": [64, 191, 214, 242, 341, 443], "sigsegv": [64, 191, 214, 242, 341, 443], "sigabrt": [64, 191, 214, 242, 341, 443], "dgq": [65, 444], "outputdir": [65, 444], "pp": [65, 168, 189, 212, 240, 444], "uxx": [65, 444], "pathnam": [65, 94, 100, 184, 206, 232, 264, 270, 369, 375, 444, 473, 479], "gq": [65, 444], "dq": [65, 444], "nor": [65, 74, 78, 79, 96, 106, 124, 155, 182, 184, 204, 206, 232, 266, 276, 293, 298, 350, 353, 371, 381, 398, 444, 453, 457, 458, 475, 485, 503, 534], "descend": [65, 74, 78, 88, 90, 93, 97, 100, 101, 105, 108, 109, 110, 111, 112, 114, 117, 118, 120, 123, 127, 184, 206, 232, 258, 260, 263, 267, 270, 271, 275, 278, 279, 280, 281, 282, 284, 287, 288, 290, 292, 295, 298, 350, 353, 363, 365, 368, 372, 375, 376, 380, 383, 384, 385, 386, 387, 389, 392, 393, 395, 397, 400, 444, 453, 457, 467, 469, 472, 476, 479, 480, 484, 487, 488, 489, 490, 491, 493, 496, 497, 499, 502, 506], "compris": [65, 79, 87, 139, 147, 163, 175, 183, 186, 197, 199, 205, 209, 221, 223, 229, 236, 249, 251, 257, 332, 354, 362, 411, 435, 444, 458, 466, 518, 526, 542], "pertain": [65, 444], "elaps": [65, 139, 221, 249, 411, 444, 518], "timestamp": [65, 97, 104, 111, 274, 379, 444, 476, 483, 490], "test_result": [65, 444], "ini": [65, 444], "pre_us": [65, 444], "post_us": [65, 444], "quot": [65, 67, 70, 71, 74, 76, 77, 78, 79, 80, 81, 86, 88, 90, 92, 93, 95, 98, 101, 102, 104, 108, 109, 110, 114, 115, 118, 120, 127, 130, 131, 133, 136, 139, 145, 151, 157, 163, 168, 171, 174, 175, 176, 177, 181, 183, 184, 186, 189, 193, 196, 197, 198, 199, 203, 
204, 205, 206, 208, 209, 212, 219, 220, 221, 222, 223, 227, 228, 229, 230, 231, 232, 235, 236, 240, 247, 249, 250, 251, 256, 257, 258, 260, 262, 263, 269, 271, 272, 274, 275, 278, 279, 280, 284, 288, 289, 290, 295, 297, 298, 299, 300, 305, 314, 320, 333, 334, 346, 347, 350, 352, 353, 354, 355, 356, 361, 363, 365, 367, 368, 376, 377, 379, 383, 384, 385, 389, 393, 395, 400, 402, 403, 408, 411, 417, 423, 429, 444, 446, 449, 450, 453, 455, 456, 457, 458, 459, 460, 465, 467, 469, 471, 472, 474, 477, 480, 481, 483, 487, 488, 489, 493, 494, 497, 499, 506, 509, 510, 515, 518, 524, 530, 536], "dry": [65, 90, 92, 93, 101, 104, 110, 114, 120, 157, 184, 186, 206, 209, 231, 232, 236, 260, 262, 263, 271, 274, 280, 284, 290, 326, 365, 367, 368, 376, 379, 385, 389, 395, 429, 444, 469, 471, 472, 480, 483, 489, 493, 499, 536, 553], "kmemleak": [65, 444], "failsaf": [65, 444], "hoc": [65, 444], "demonstr": [65, 444], "simplest": [65, 444], "jkennedi": [65, 444], "07": [65, 155, 427, 444, 534, 553], "20120923t180654": [65, 444], "libzpool": [66, 67, 86, 170, 192, 194, 204, 215, 216, 228, 243, 244, 256, 342, 343, 361, 445, 446, 465], "poke": [66, 170, 192, 215, 243, 342, 445], "danger": [66, 78, 131, 170, 184, 185, 192, 206, 208, 215, 232, 235, 243, 298, 300, 342, 353, 403, 445, 457, 510], "decrement": [66, 170, 192, 215, 243, 342, 445], "cu": [66, 445], "detach": [66, 80, 83, 99, 119, 122, 126, 127, 132, 133, 134, 139, 145, 146, 147, 148, 149, 151, 153, 157, 158, 163, 175, 186, 197, 209, 221, 236, 249, 253, 269, 289, 295, 301, 302, 303, 314, 315, 316, 317, 318, 320, 322, 326, 327, 332, 333, 355, 358, 374, 394, 400, 404, 405, 406, 411, 417, 418, 419, 420, 421, 423, 425, 429, 430, 435, 445, 459, 462, 478, 498, 501, 505, 506, 511, 512, 513, 518, 524, 525, 526, 527, 528, 530, 532, 536, 537, 542, 549, 550], "undetach": [66, 445], "subcommand": [66, 77, 79, 88, 90, 96, 101, 106, 107, 108, 109, 118, 120, 124, 127, 132, 134, 145, 163, 170, 177, 184, 186, 192, 199, 206, 209, 215, 223, 232, 236, 243, 251, 258, 260, 266, 269, 271, 276, 277, 278, 279, 288, 289, 290, 293, 295, 297, 298, 301, 332, 342, 352, 363, 365, 371, 376, 381, 382, 383, 384, 393, 395, 398, 400, 404, 406, 435, 445, 456, 467, 469, 475, 480, 485, 486, 487, 488, 497, 499, 503, 506, 511, 513, 524, 542], "dir": [66, 79, 143, 170, 186, 192, 209, 215, 236, 243, 312, 342, 415, 445, 458, 522], "for_read_obj": [66, 170, 192, 215, 243, 342, 445], "for_write_obj": [66, 170, 192, 215, 243, 342, 445], "descriptions_obj": [66, 170, 192, 215, 243, 342, 445], "clairvoy": [66, 170, 192, 215, 243, 342, 445], "veg": [67, 343, 446], "size_of_each_vdev": [67, 172, 194, 216, 244, 343, 446], "alignment_shift": [67, 172, 194, 216, 244, 343, 446], "mirror_copi": [67, 172, 194, 216, 244, 343, 446], "raidz_disk": [67, 172, 194, 216, 244, 343, 446], "draid_disk": [67, 343, 446], "raid_par": [67, 343, 446], "raid_kind": [67, 343, 446], "draid_data": [67, 343, 446], "draid_spar": [67, 343, 446], "vdev_class_st": [67, 343, 446], "gang_block_threshold": [67, 172, 194, 216, 244, 343, 446], "initialize_pool_i_tim": [67, 172, 194, 216, 244, 343, 446], "kill_percentag": [67, 172, 194, 216, 244, 343, 446], "zil_failure_r": [67, 172, 194, 216, 244, 343, 446], "vg": 67, "tandem": [67, 172, 194, 216, 244, 343, 446], "nightli": [67, 172, 194, 216, 244, 343, 446], "daili": [67, 172, 194, 216, 244, 343, 446], "team": [67, 110, 114, 172, 194, 216, 244, 280, 284, 343, 385, 389, 446, 489, 493], "wrote": [67, 172, 194, 216, 244, 343, 446, 555], "ten": [67, 71, 172, 194, 216, 244, 250, 257, 343, 347, 
446, 450], "quietli": [67, 172, 194, 216, 244, 343, 446], "chatti": [67, 172, 194, 216, 244, 343, 446], "ly": [67, 172, 194, 216, 244, 343, 446], "shouldn": [67, 172, 194, 216, 244, 343, 446], "64m": [67, 172, 194, 216, 244, 343, 446], "eraidz": 67, "spa_freez": [67, 343, 446], "initialis": [67, 172, 194, 216, 244, 446], "stochast": [67, 446], "prepend": [67, 81, 186, 209, 236, 334, 356, 446, 460], "ld_library_path": [67, 446], "lenni": [67, 446], "integ": [67, 78, 86, 87, 104, 182, 183, 184, 185, 204, 205, 206, 228, 229, 231, 232, 256, 257, 274, 298, 343, 353, 361, 362, 379, 446, 457, 465, 466, 483], "zfs_dbgmsg": [67, 71, 86, 104, 204, 216, 228, 231, 244, 250, 256, 274, 343, 347, 361, 379, 446, 450, 465, 483], "vvv": [67, 172, 194, 216, 244, 343, 446], "mayb": [67, 172, 194, 216, 244, 343, 446], "runlength": [67, 172, 194, 216, 244, 343, 446], "120": [67, 172, 194, 216, 244, 343, 446], "unawar": [67, 194, 216, 244, 343, 446], "zfs_stack_siz": [67, 172, 194, 216, 244, 343, 446], "stacksiz": [67, 172, 194, 216, 244, 343, 446], "spuriou": [67, 172, 194, 216, 244, 343, 446], "pthread_stack_min": [67, 172, 194, 216, 244, 343, 446], "256k": [67, 172, 193, 194, 216, 244, 250, 343, 446], "27": [68, 86, 99, 104, 105, 110, 114, 119, 134, 135, 144, 154, 160, 162, 177, 182, 204, 209, 228, 231, 256, 274, 344, 356, 361, 363, 364, 366, 374, 375, 379, 380, 388, 392, 393, 394, 404, 406, 407, 411, 416, 417, 426, 432, 434, 447, 465, 478, 483, 484, 489, 493, 498, 513, 514, 523, 533, 539, 541, 553], "spl_kmem_cache_kmem_thread": [70, 219, 247, 346, 449], "spl_kmem_cache_obj_per_slab": [70, 219, 247, 346, 449], "spl_kmem_cache_max_s": [70, 219, 247, 346, 449], "spl_kmem_cache_slab_limit": [70, 219, 247, 346, 449], "spl_kmem_alloc_warn": [70, 219, 247, 346, 449], "32768": [70, 346, 449], "spl_kmem_alloc_max": [70, 219, 247, 346, 449], "spl_kmem_cache_magazine_s": [70, 219, 247, 346, 449], "spl_hostid": [70, 74, 219, 247, 346, 350, 449, 453], "spl_hostid_path": [70, 219, 247, 346, 449], "charp": [70, 71, 176, 198, 219, 222, 247, 250, 346, 347, 449, 450], "spl_panic_halt": [70, 219, 247, 346, 449], "spl_taskq_kick": [70, 219, 247, 346, 449], "kick": [70, 139, 175, 197, 219, 221, 247, 249, 346, 411, 449, 518], "taskq": [70, 71, 219, 222, 247, 250, 346, 347, 449, 450], "didn": [70, 71, 139, 158, 175, 197, 219, 221, 236, 247, 249, 327, 346, 347, 411, 430, 449, 450, 518, 537], "spl_taskq_thread_bind": [70, 219, 247, 346, 449], "spl_taskq_thread_dynam": [70, 219, 247, 346, 449], "spl_taskq_thread_prior": [70, 219, 247, 346, 449], "spl_taskq_thread_sequenti": [70, 219, 247, 346, 449], "spl_max_show_task": [70, 219, 247, 346, 449], "spl_taskq_thread_timeout_m": [70, 449], "10000": [70, 71, 176, 198, 347, 449, 450], "tear": [70, 449], "led": [70, 449], "churn": [70, 449], "anew": [70, 449], "nonzero": [70, 71, 80, 102, 104, 151, 186, 198, 209, 222, 231, 236, 250, 274, 320, 347, 355, 377, 379, 423, 449, 450, 459, 481, 483, 530], "nontrivi": [70, 79, 449], "august": [70, 78, 129, 138, 141, 142, 148, 149, 156, 159, 177, 240, 242, 243, 244, 247, 249, 251, 252, 254, 257, 272, 294, 300, 301, 303, 304, 305, 306, 307, 308, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 323, 324, 325, 326, 328, 330, 332, 333, 334, 336, 346, 410, 413, 414, 415, 419, 420, 421, 423, 428, 431, 433, 449, 457, 508, 517, 520, 521, 527, 528, 535, 538], "dbuf_cache_max_byt": [71, 222, 250, 347, 450], "uint64_maxb": [71, 450], "u64": [71, 347, 450], "versu": [71, 250, 347, 450], "dbuf_cache_shift": [71, 222, 250, 347, 450], "32nd": [71, 347, 
450], "dbuf_metadata_cache_max_byt": [71, 222, 250, 347, 450], "dbuf_metadata_cache_shift": [71, 222, 250, 347, 450], "64th": [71, 347, 450], "dbuf_cache_hiwater_pct": [71, 222, 250, 347, 450], "dbuf_cache_lowater_pct": [71, 222, 250, 347, 450], "log2": [71, 176, 222, 250, 347, 450], "dbuf_mutex_cache_shift": [71, 450], "dmu_object_alloc_chunk_shift": [71, 250, 347, 450], "bulk": [71, 250, 347, 450], "dmu_prefetch_max": [71, 222, 250, 347, 450], "134217728b": [71, 347, 450], "l2arc_feed_again": [71, 176, 198, 222, 250, 347, 450], "l2arc_feed_min_m": [71, 176, 198, 222, 250, 347, 450], "l2arc_feed_sec": [71, 176, 198, 222, 250, 347, 450], "l2arc_headroom": [71, 80, 176, 198, 222, 250, 333, 347, 355, 450, 459], "l2arc_write_max": [71, 176, 198, 222, 250, 347, 450], "l2arc_headroom_boost": [71, 176, 198, 222, 250, 347, 450], "l2arc_exclude_speci": [71, 347, 450], "l2arc_mfuonli": [71, 250, 347, 450], "l2arc_noprefetch": [71, 176, 198, 222, 250, 347, 450], "l2arc_mru_as": [71, 347, 450], "l2arc_mfu_as": [71, 347, 450], "l2arc_prefetch_as": [71, 347, 450], "evict_l2_eligible_mru": [71, 347, 450], "evict_l2_eligible_m": [71, 347, 450], "l2arc_meta_perc": [71, 250, 347, 450], "irration": [71, 347, 450], "l2arc_trim_ahead": [71, 81, 250, 334, 347, 356, 450, 460], "benefici": [71, 347, 450], "l2arc_norw": [71, 176, 198, 222, 250, 347, 450], "l2arc_write_boost": [71, 176, 198, 222, 250, 347, 450], "33554432b": 71, "l2arc_rebuild_en": [71, 80, 250, 333, 347, 355, 450, 459], "somehow": [71, 250, 347, 450], "l2arc_rebuild_blocks_min_l2s": [71, 80, 250, 333, 347, 355, 450, 459], "1073741824b": [71, 347, 450], "mininum": [71, 347, 450], "l2arc_evict": [71, 250, 347, 450], "compar": [71, 74, 77, 78, 79, 81, 86, 96, 105, 106, 124, 165, 166, 184, 206, 232, 236, 250, 251, 256, 266, 275, 276, 293, 297, 298, 334, 347, 350, 352, 353, 354, 356, 361, 371, 380, 381, 398, 450, 453, 456, 457, 458, 460, 465, 475, 484, 485, 503, 544, 545], "metaslab_aliquot": [71, 176, 198, 222, 250, 347, 450], "1048576b": [71, 347, 450], "metaslab_bias_en": [71, 176, 198, 222, 250, 347, 450], "metaslab_force_gang": [71, 222, 250, 347, 450], "16777217b": [71, 450], "metaslab_force_ganging_pct": [71, 450], "zfs_ddt_zap_default_b": [71, 450], "zfs_ddt_zap_default_ib": [71, 450], "zfs_default_b": [71, 347, 450], "zfs_default_ib": [71, 347, 450], "zfs_history_output_max": [71, 250, 347, 450], "dmu_max_access": [71, 250, 347, 450], "zfs_ioc_channel_program": [71, 250, 347, 450], "zfs_keep_log_spacemaps_at_export": [71, 250, 347, 450], "zfs_metaslab_segment_weight_en": [71, 198, 222, 250, 347, 450], "zfs_metaslab_switch_threshold": [71, 198, 222, 250, 347, 450], "metaslab_debug_load": [71, 176, 198, 222, 250, 347, 450], "metaslab_debug_unload": [71, 176, 198, 222, 250, 347, 450], "metaslab_fragmentation_factor_en": [71, 176, 198, 222, 250, 347, 450], "metaslab_df_max_search": [71, 222, 250, 347, 450], "16777216b": [71, 347, 450], "gt": [71, 76, 81, 94, 102, 104, 127, 139, 170, 171, 172, 174, 176, 178, 180, 181, 184, 185, 191, 192, 193, 194, 196, 198, 199, 200, 202, 203, 206, 208, 214, 215, 216, 220, 221, 222, 223, 224, 226, 227, 228, 230, 231, 232, 235, 242, 243, 244, 249, 250, 251, 252, 254, 256, 264, 272, 274, 295, 298, 300, 332, 334, 347, 353, 356, 369, 377, 379, 400, 411, 450, 455, 460, 473, 481, 483, 506, 518], "metaslab_block_pick": [71, 222, 250, 347, 450], "1024": [71, 145, 184, 209, 222, 236, 250, 314, 347, 417, 450, 524], "metaslab_df_use_largest_seg": [71, 222, 250, 347, 450], "metaslab_df_free_pct": [71, 222, 250, 347, 450], 
"metaslab_df_alloc_threshold": [71, 222, 250, 347, 450], "zfs_metaslab_max_size_cache_sec": [71, 250, 347, 450], "3600": [71, 250, 347, 450], "zfs_metaslab_mem_limit": [71, 250, 347, 450], "clog": [71, 250, 347, 450], "zfs_metaslab_try_hard_before_gang": [71, 347, 450], "zfs_metaslab_find_max_tri": [71, 347, 450], "zfs_vdev_default_ms_count": [71, 222, 250, 347, 450], "zfs_vdev_default_ms_shift": [71, 250, 347, 450], "29": [71, 153, 187, 210, 237, 250, 347, 369, 425, 450, 532], "zfs_vdev_max_ms_shift": [71, 450], "zfs_vdev_max_auto_ashift": [71, 250, 347, 450], "x2192": [71, 74, 350, 450, 453], "ashift_max": [71, 250, 347, 450], "zfs_vdev_min_auto_ashift": [71, 250, 347, 450], "ashift_min": [71, 250, 347, 450], "zfs_vdev_min_ms_count": [71, 222, 250, 347, 450], "vdev_validate_skip": [71, 250, 347, 450], "zfs_vdev_ms_count_limit": [71, 250, 347, 450], "131072": [71, 222, 250, 347, 450], "metaslab_preload_en": [71, 176, 198, 222, 250, 347, 450], "metaslab_preload_limit": [71, 450], "metaslab_preload_pct": [71, 450], "metaslab_unload_delai": [71, 250, 347, 450], "metaslab_unload_delay_m": [71, 250, 347, 450], "600000m": [71, 347, 450], "reference_histori": [71, 347, 450], "holder": [71, 347, 450], "reference_tracking_en": [71, 347, 450], "raidz_expand_max_copy_byt": 71, "160mb": [71, 250, 347], "raidz_expand_max_reflow_byt": 71, "reflow": 71, "raidz_io_aggregate_row": 71, "refcount_t": [71, 347, 450], "spa_config_path": [71, 176, 182, 198, 222, 250, 347, 450], "spa": [71, 86, 176, 198, 222, 250, 256, 347, 361, 450, 465], "spa_asize_infl": [71, 176, 198, 222, 250, 347, 450], "spa_load_print_vdev_tre": [71, 222, 250, 347, 450], "spa_load_verify_data": [71, 176, 198, 222, 250, 347, 450], "spa_load_verify_metadata": [71, 176, 198, 222, 250, 347, 450], "spa_load_verify_shift": [71, 222, 250, 347, 450], "16th": [71, 221, 249, 347, 411, 450], "spa_slop_shift": [71, 81, 176, 198, 222, 236, 250, 334, 347, 356, 450, 460], "spa_num_alloc": 71, "alloct": 71, "degred": 71, "spa_upgrade_errlog_limit": [71, 450], "head_errlog": [71, 79, 155, 158, 450, 458, 534, 537], "vdev_removal_max_span": [71, 222, 250, 347, 450], "32768b": [71, 347, 450], "zfs_vdev_read_gap_limit": [71, 176, 198, 222, 250, 347, 450], "vdev_file_logical_ashift": [71, 250, 347, 450], "vdev_file_physical_ashift": [71, 250, 347, 450], "zap_iterate_prefetch": [71, 222, 250, 347, 450], "zap_micro_max_s": [71, 450], "131072b": [71, 347, 450], "micro": [71, 198, 222, 250, 347, 450], "zfetch_min_dist": [71, 347, 450], "4194304b": [71, 347, 450], "got": [71, 347, 450], "satur": [71, 347, 450], "zfetch_max_dist": [71, 198, 222, 250, 347, 450], "67108864b": [71, 347, 450], "zfetch_max_idist": [71, 250, 347, 450], "zfetch_max_stream": [71, 176, 198, 222, 250, 347, 450], "zfetch": [71, 176, 198, 222, 250, 347, 450], "zfetch_min_sec_reap": [71, 176, 198, 222, 250, 347, 450], "inact": [71, 79, 146, 177, 186, 199, 209, 223, 236, 251, 315, 347, 354, 418, 450, 458, 525], "zfetch_max_sec_reap": [71, 347, 450], "zfs_abd_scatter_en": [71, 250, 347, 450], "zfs_abd_scatter_max_ord": [71, 250, 347, 450], "max_ord": [71, 347, 450], "zfs_abd_scatter_min_s": [71, 222, 250, 347, 450], "1536b": [71, 347, 450], "abd": [71, 222, 250, 347, 450], "zfs_arc_dnode_limit": [71, 198, 222, 250, 347, 450], "0b": [71, 155, 347, 427, 450, 534], "unpin": [71, 198, 222, 250, 347, 450], "ceil": [71, 198, 222, 250, 347, 450], "zfs_arc_dnode_limit_perc": [71, 198, 222, 250, 347, 450], "zfs_arc_dnode_reduce_perc": [71, 198, 222, 250, 347, 450], "zfs_arc_average_blocks": [71, 176, 198, 
222, 250, 347, 450], "8192b": [71, 347, 450], "zfs_arc_eviction_pct": [71, 250, 347, 450], "arc_is_overflow": [71, 250, 347, 450], "arc_get_data_impl": [71, 250, 347, 450], "arc_siz": [71, 250, 347, 450], "arc_c": [71, 176, 198, 222, 250, 347, 450], "finit": [71, 250, 347, 450], "zfs_arc_evict_batch_limit": [71, 176, 198, 222, 250, 347, 450], "zfs_arc_grow_retri": [71, 176, 198, 222, 250, 347, 450], "arc_grow_retri": [71, 198, 222, 250, 347, 450], "growth": [71, 87, 176, 198, 222, 250, 347, 450, 466], "zfs_arc_lotsfree_perc": [71, 176, 198, 222, 250, 347, 450], "x00d7": [71, 86, 450, 465], "zfs_arc_meta_bal": [71, 450], "proportion": [71, 450], "zfs_arc_min": [71, 176, 198, 222, 250, 347, 450], "arc_c_min": [71, 198, 222, 250, 347, 450], "zfs_arc_min_prefetch_m": [71, 222, 250, 347, 450], "0m": [71, 155, 347, 427, 450, 534], "x2261": [71, 347, 450], "zfs_arc_min_prescient_prefetch_m": [71, 222, 250, 347, 450], "zfs_arc_prune_task_thread": [71, 347, 450], "theoret": [71, 347, 450], "proven": [71, 347, 450], "zfs_max_missing_tvd": [71, 222, 250, 347, 450], "zfs_max_nvlist_src_s": [71, 250, 347, 450], "zc_nvlist_src_siz": [71, 250, 347, 450], "einval": [71, 104, 231, 250, 274, 347, 379, 450, 483], "zfs_multilist_num_sublist": [71, 198, 222, 250, 347, 450], "zfs_arc_overflow_shift": [71, 176, 198, 222, 250, 347, 450], "reclam": [71, 347, 450], "till": [71, 347, 450], "zfs_arc_shrink_shift": [71, 176, 198, 222, 250, 347, 450], "zfs_arc_pc_perc": [71, 198, 222, 250, 347, 450], "zfs_arc_shrinker_limit": [71, 250, 347, 450], "shrinker": [71, 250, 347, 450], "160": [71, 450], "zfs_arc_sys_fre": [71, 176, 198, 222, 250, 347, 450], "bigger": [71, 347, 450], "zfs_autoimport_dis": [71, 176, 198, 222, 250, 347, 450], "zfs_checksum_events_per_second": [71, 250, 347, 450], "zfs_commit_timeout_pct": [71, 198, 222, 250, 347, 450], "zfs_condense_indirect_commit_entry_delay_m": [71, 250, 347, 450], "zfs_condense_indirect_obsolete_pct": [71, 250, 347, 450], "zfs_condense_indirect_vdevs_en": [71, 222, 250, 347, 450], "zfs_condense_min_mapping_byt": [71, 222, 250, 347, 450], "zfs_condense_max_obsolete_byt": [71, 222, 250, 347, 450], "influenc": [71, 230, 272, 347, 450], "zfs_flag": [71, 176, 198, 222, 250, 347, 450], "zfs_dbgmsg_maxsiz": [71, 176, 198, 222, 250, 347, 450], "zfs_dbuf_state_index": [71, 176, 198, 222, 250, 347, 450], "zfs_deadman_en": [71, 176, 198, 222, 250, 347, 450], "zfs_deadman_synctime_m": [71, 176, 198, 222, 250, 347, 450], "zfs_deadman_ziotime_m": [71, 222, 250, 347, 450], "zfs_deadman_failmod": [71, 139, 221, 222, 249, 250, 347, 411, 450, 518], "partner": [71, 222, 250, 347, 450], "zfs_deadman_checktime_m": [71, 198, 222, 250, 347, 450], "60000m": [71, 347, 450], "300000m": [71, 347, 450], "zfs_dedup_prefetch": [71, 176, 198, 222, 250, 347, 450], "ed": [71, 176, 198, 222, 250, 347, 450], "500000": [71, 347, 450], "tenth": [71, 347, 450], "zfs_disable_ivset_guid_check": [71, 250, 347, 450, 557], "zfs_key_max_salt_us": [71, 250, 347, 450], "400000000": [71, 347, 450], "zfs_object_mutex_s": [71, 250, 347, 450], "hashtabl": [71, 250, 347, 450], "zfs_slow_io_events_per_second": [71, 139, 221, 222, 249, 250, 347, 411, 450, 518], "zfs_unflushed_max_mem_amt": [71, 250, 347, 450], "zfs_unflushed_max_mem_ppm": [71, 250, 347, 450], "1000ppm": [71, 347, 450], "millionth": [71, 347, 450], "zfs_unflushed_log_block_max": [71, 250, 347, 450], "ditto": [71, 186, 209, 236, 250, 347, 450], "unclean": [71, 108, 109, 206, 232, 250, 278, 279, 347, 383, 384, 450, 487, 488], "zfs_unflushed_log_block_min": [71, 
250, 347, 450], "our": [71, 250, 347, 450], "zfs_unflushed_log_block_pct": [71, 250, 347, 450], "zfs_unflushed_log_txg_max": [71, 347, 450], "zfs_unlink_suspend_progress": [71, 222, 250, 347, 450], "zfs_delete_block": [71, 198, 222, 250, 347, 450], "20480": [71, 172, 347, 450], "zfs_dirty_data_max_perc": [71, 176, 198, 222, 250, 347, 450], "zfs_dirty_data_max_max": [71, 176, 198, 222, 250, 347, 450], "zfs_dirty_data_max_max_perc": [71, 176, 198, 222, 250, 347, 450], "zfs_dirty_data_sync_perc": [71, 222, 250, 347, 450], "zfs_wrlog_data_max": [71, 347, 450], "zfs_fallocate_reserve_perc": [71, 250, 347, 450], "prealloc": [71, 250, 347, 450], "falloc": [71, 250, 347, 450], "eopnotsupp": [71, 250, 347, 450], "vector": [71, 108, 109, 198, 222, 232, 250, 278, 279, 347, 383, 384, 450, 487, 488], "zfs_bclone_en": [71, 450], "block_clon": [71, 79, 450, 458], "sse41": [71, 450], "avx512": [71, 450], "zfs_free_bpobj_en": [71, 198, 222, 250, 347, 450], "zfs_async_block_max_block": [71, 222, 250, 347, 450], "unlimit": [71, 250, 347, 450], "zfs_max_async_dedup_fre": [71, 250, 347, 450], "100000": [71, 78, 232, 298, 347, 353, 450, 457], "zfs_vdev_initializing_max_act": [71, 222, 250, 347, 450], "zfs_vdev_initializing_min_act": [71, 222, 250, 347, 450], "zfs_vdev_open_timeout_m": [71, 347, 450], "briefli": [71, 172, 191, 194, 214, 216, 242, 244, 347, 450], "zfs_vdev_rebuild_max_act": [71, 250, 347, 450], "zfs_vdev_rebuild_min_act": [71, 250, 347, 450], "zfs_vdev_removal_max_act": [71, 222, 250, 347, 450], "zfs_vdev_removal_min_act": [71, 222, 250, 347, 450], "zfs_vdev_trim_max_act": [71, 222, 250, 347, 450], "zfs_vdev_trim_min_act": [71, 222, 250, 347, 450], "zfs_vdev_nia_delai": [71, 250, 347, 450], "zfs_": [71, 127, 347, 450, 506], "_min_act": [71, 250, 347, 450], "zfs_vdev_nia_credit": [71, 250, 347, 450], "monopol": [71, 250, 347, 450], "zfs_vdev_queue_depth_pct": [71, 198, 222, 250, 347, 450], "zio_dva_throttle_en": [71, 198, 222, 250, 347, 450], "zfs_vdev_def_queue_depth": [71, 450], "zfs_vdev_failfast_mask": [71, 450], "bitwis": [71, 176, 198, 222, 250, 347, 450], "ored": [71, 82, 347, 357, 450, 461], "zfs_expire_snapshot": [71, 176, 198, 222, 250, 347, 450], "zfs_admin_snapshot": [71, 176, 198, 222, 250, 347, 450], "zfs_debug_histogram_verifi": [71, 176, 198, 222, 250, 347, 450], "zfs_debug_indirect_remap": [71, 222, 250, 347, 450], "zfs_debug_trim": [71, 222, 250, 347, 450], "allocat": [71, 222, 250, 347, 450], "zfs_debug_log_spacemap": [71, 250, 347, 450], "zfs_btree_verify_intens": [71, 347, 450], "btree": [71, 347, 450], "culmin": [71, 347, 450], "height": [71, 347, 450], "element": [71, 73, 104, 108, 109, 139, 174, 184, 196, 206, 220, 221, 231, 232, 248, 249, 274, 278, 279, 347, 349, 379, 383, 384, 411, 450, 452, 483, 487, 488, 518], "poison": [71, 347, 450], "zfs_free_leak_on_eio": [71, 176, 198, 222, 250, 347, 450], "zfs_free_min_time_m": [71, 176, 198, 222, 250, 347, 450], "1000m": [71, 347, 450], "zfs_obsolete_min_time_m": [71, 250, 347, 450], "zfs_immediate_write_sz": [71, 176, 198, 222, 250, 347, 450], "s64": [71, 450], "zfs_initialize_valu": [71, 222, 250, 347, 450], "16045690984833335022": [71, 347, 450], "zfs_initialize_chunk_s": [71, 250, 347, 450], "zfs_livelist_max_entri": [71, 250, 347, 450], "costli": [71, 250, 347, 450], "perspect": [71, 250, 347, 450, 561], "zfs_livelist_min_percent_shar": [71, 250, 347, 450], "zfs_livelist_condense_new_alloc": [71, 250, 347, 450], "blkptr": [71, 250, 347, 450], "zfs_livelist_condense_sync_cancel": [71, 250, 347, 450], 
"spa_livelist_condense_sync": [71, 250, 347, 450], "zfs_livelist_condense_sync_paus": [71, 250, 347, 450], "synctask": [71, 250, 347, 450], "zfs_livelist_condense_zthr_cancel": [71, 250, 347, 450], "spa_livelist_condense_cb": [71, 250, 347, 450], "zfs_livelist_condense_zthr_paus": [71, 250, 347, 450], "zfs_lua_max_instrlimit": [71, 222, 250, 347, 450], "100000000": [71, 347, 450], "zfs_lua_max_memlimit": [71, 222, 250, 347, 450], "104857600": [71, 347, 450], "zfs_max_dataset_nest": [71, 127, 222, 250, 347, 450, 506], "zfs_max_log_walk": [71, 250, 347, 450], "zfs_max_logsm_summary_length": [71, 250, 347, 450], "16777216": [71, 450], "cow": [71, 176, 198, 222, 250, 347, 450], "giant": [71, 176, 198, 222, 250, 347, 450], "formerli": [71, 450], "forbad": [71, 450], "zfs_allow_redacted_dataset_mount": [71, 250, 347, 450], "redact": [71, 78, 79, 83, 89, 103, 114, 121, 127, 250, 251, 253, 259, 273, 284, 291, 295, 298, 347, 353, 354, 358, 364, 378, 389, 396, 400, 450, 457, 458, 462, 468, 482, 493, 500, 506], "zfs_min_metaslabs_to_flush": [71, 250, 347, 450], "zfs_metaslab_fragmentation_threshold": [71, 176, 198, 222, 250, 347, 450], "zfs_mg_fragmentation_threshold": [71, 176, 198, 222, 250, 347, 450], "95": [71, 222, 250, 347, 450], "zfs_mg_noalloc_threshold": [71, 176, 198, 222, 250, 347, 450], "zfs_ddt_data_is_speci": [71, 80, 222, 236, 250, 333, 347, 355, 450, 459], "zfs_user_indirect_is_speci": [71, 222, 250, 347, 450], "zfs_multihost_histori": [71, 198, 222, 250, 347, 450], "x27e8": [71, 73, 78, 86, 102, 104, 248, 256, 347, 349, 353, 361, 377, 379, 450, 452, 457, 465, 481, 483], "x27e9": [71, 73, 78, 86, 102, 104, 248, 256, 347, 349, 353, 361, 377, 379, 450, 452, 457, 465, 481, 483], "zfs_multihost_interv": [71, 81, 198, 209, 222, 236, 250, 334, 347, 356, 450, 460], "zfs_multihost_import_interv": [71, 198, 222, 250, 347, 450], "whichev": [71, 176, 198, 222, 250, 347, 450], "mmp": [71, 198, 222, 250, 347, 450], "zfs_multihost_fail_interv": [71, 198, 222, 250, 347, 450], "zfs_no_scrub_io": [71, 176, 198, 222, 250, 347, 450], "zfs_no_scrub_prefetch": [71, 176, 198, 222, 250, 347, 450], "zfs_nocacheflush": [71, 176, 198, 222, 250, 347, 450], "zfs_nopwrite_en": [71, 176, 198, 222, 250, 347, 450], "occurr": [71, 86, 182, 204, 228, 256, 347, 361, 450, 465], "zfs_dmu_offset_next_sync": [71, 198, 222, 250, 347, 450], "zfs_pd_bytes_max": [71, 176, 198, 222, 250, 347, 450], "52428800b": [71, 347, 450], "zfs_traverse_indirect_prefetch_limit": [71, 347, 450], "l0": [71, 73, 196, 220, 248, 347, 349, 450, 452], "zfs_per_txg_dirty_frees_perc": [71, 198, 222, 250, 347, 450], "zfs_prefetch_dis": [71, 176, 198, 222, 250, 347, 450], "zfs_qat_checksum_dis": [71, 222, 250, 347, 450], "zfs_qat_compress_dis": [71, 222, 250, 347, 450], "zfs_qat_encrypt_dis": [71, 222, 250, 347, 450], "zfs_vnops_read_chunk_s": [71, 347, 450], "zfs_read_histori": [71, 176, 198, 222, 250, 347, 450], "zfs_read_history_hit": [71, 176, 198, 222, 250, 347, 450], "zfs_rebuild_max_seg": [71, 250, 347, 450], "zfs_rebuild_scrub_en": [71, 347, 450], "zfs_rebuild_vdev_limit": [71, 347, 450], "zfs_reconstruct_indirect_combinations_max": [71, 222, 250, 347, 450], "zfs_recov": [71, 176, 198, 222, 250, 347, 450], "zfs_removal_ignore_error": [71, 222, 250, 347, 450], "henc": [71, 110, 114, 347, 450, 489, 493], "zfs_removal_suspend_progress": [71, 222, 250, 347, 450], "zfs_remove_max_seg": [71, 222, 250, 347, 450], "zfs_resilver_disable_def": [71, 250, 347, 450], "zfs_resilver_min_time_m": [71, 176, 198, 222, 250, 347, 450], "3000m": [71, 347, 450], 
"zfs_scan_ignore_error": [71, 198, 222, 250, 347, 450], "unrepair": [71, 155, 198, 222, 250, 347, 450, 534], "zfs_scrub_after_expand": 71, "zfs_scrub_min_time_m": [71, 222, 250, 347, 450], "zfs_scrub_error_blocks_per_txg": [71, 450], "zfs_scan_checkpoint_intv": [71, 222, 250, 347, 450], "zfs_scan_fill_weight": [71, 222, 250, 347, 450], "afterward": [71, 74, 78, 127, 184, 206, 222, 232, 250, 295, 298, 347, 350, 353, 400, 450, 453, 457, 506], "zfs_scan_issue_strategi": [71, 222, 250, 347, 450], "zfs_scan_mem_lim_fact": [71, 222, 250, 347, 450], "checkpoint": [71, 79, 80, 83, 86, 142, 143, 147, 155, 162, 163, 209, 222, 223, 228, 236, 250, 251, 253, 256, 311, 312, 316, 324, 331, 332, 333, 347, 354, 355, 358, 361, 414, 415, 419, 427, 434, 435, 450, 458, 459, 462, 465, 521, 522, 526, 534, 541, 542], "zfs_scan_legaci": [71, 222, 250, 347, 450], "zfs_scan_max_ext_gap": [71, 222, 250, 347, 450], "2097152b": [71, 347, 450], "zfs_scan_mem_lim_soft_fact": [71, 222, 250, 347, 450], "zfs_scan_report_txg": [71, 347, 450], "zfs_scan_strict_mem_lim": [71, 250, 347, 450], "tight": [71, 250, 347, 450], "zfs_scan_suspend_progress": [71, 250, 347, 450], "zfs_scan_vdev_limit": [71, 222, 250, 347, 450], "zfs_send_corrupt_data": [71, 176, 198, 222, 250, 347, 450], "zfs_send_unmodified_spill_block": [71, 222, 250, 347, 450], "zfs_send_no_prefetch_queue_ff": [71, 250, 347, 450], "woken": [71, 250, 347, 450], "zfs_send_no_prefetch_queue_length": [71, 250, 347, 450], "zfs_send_queue_ff": [71, 250, 347, 450], "zfs_send_queue_length": [71, 198, 222, 250, 347, 450], "zfs_recv_queue_ff": [71, 250, 347, 450], "zfs_recv_queue_length": [71, 198, 222, 250, 347, 450], "zfs_recv_write_batch_s": [71, 250, 347, 450], "dmu": [71, 86, 171, 182, 193, 204, 228, 250, 256, 347, 361, 450, 465], "zfs_recv_best_effort_correct": [71, 450], "zfs_override_estimate_records": [71, 222, 250, 347, 450], "zfs_sync_pass_deferred_fre": [71, 176, 198, 222, 250, 347, 450], "zfs_spa_discard_memory_limit": [71, 222, 250, 347, 450], "zfs_special_class_metadata_reserve_pct": [71, 222, 250, 347, 450], "zfs_sync_pass_dont_compress": [71, 176, 198, 222, 250, 347, 450], "converg": [71, 222, 250, 347, 450], "detriment": [71, 222, 250, 347, 450], "zfs_sync_pass_rewrit": [71, 176, 198, 222, 250, 347, 450], "zfs_trim_extent_bytes_max": [71, 222, 250, 347, 450], "zfs_trim_extent_bytes_min": [71, 222, 250, 347, 450], "zfs_trim_metaslab_skip": [71, 222, 250, 347, 450], "zfs_trim_queue_limit": [71, 222, 250, 347, 450], "zfs_trim_txg_batch": [71, 222, 250, 347, 450], "zfs_txg_histori": [71, 176, 198, 222, 250, 347, 450], "zfs_vdev_aggregation_limit": [71, 176, 198, 222, 250, 347, 450], "zfs_vdev_aggregation_limit_non_rot": [71, 222, 250, 347, 450], "zfs_vdev_mirror_rotating_inc": [71, 198, 222, 250, 347, 450], "predecessor": [71, 198, 222, 250, 347, 450], "decis": [71, 198, 222, 250, 347, 450, 561], "zfs_vdev_mirror_rotating_seek_inc": [71, 198, 222, 250, 347, 450], "zfs_vdev_mirror_rotating_seek_offset": [71, 198, 222, 250, 347, 450], "zfs_vdev_mirror_non_rotating_inc": [71, 198, 222, 250, 347, 450], "zfs_vdev_mirror_non_rotating_seek_inc": [71, 198, 222, 250, 347, 450], "zfs_vdev_write_gap_limit": [71, 176, 198, 222, 250, 347, 450], "4096b": [71, 347, 450], "zfs_vdev_raidz_impl": [71, 198, 222, 250, 347, 450], "squar": [71, 347, 450], "powerpc_altivec": [71, 250, 347, 450], "altivec": [71, 250, 347, 450], "powerpc": [71, 250, 347, 450], "zfs_vdev_schedul": [71, 176, 198, 250, 347, 450], "zfs_zevent_len_max": [71, 176, 198, 222, 250, 347, 450], 
"zfs_zevent_retain_max": [71, 250, 347, 450], "zfs_zevent_retain_expire_sec": [71, 250, 347, 450], "900": [71, 250, 347, 450], "lifespan": [71, 250, 347, 450], "zfs_zil_clean_taskq_maxalloc": [71, 222, 250, 347, 450], "1048576": [71, 78, 87, 198, 222, 250, 347, 450, 457, 466], "zfs_zil_clean_taskq_minalloc": [71, 222, 250, 347, 450], "zfs_zil_clean_taskq_nthr_pct": [71, 222, 250, 347, 450], "zil_maxblocks": [71, 222, 250, 347, 450], "zil_maxcopi": [71, 450], "7680b": [71, 450], "wr_copi": [71, 450], "tradeoff": [71, 450], "zil_nocacheflush": [71, 222, 250, 347, 450], "zil_replay_dis": [71, 176, 198, 222, 250, 347, 450], "zil_slog_bulk": [71, 198, 222, 250, 347, 450], "zfs_zil_saxattr": [71, 450], "zilsaxattr": [71, 79, 450, 458], "zfs_embedded_slog_min_m": [71, 347, 450], "asid": [71, 81, 236, 334, 347, 356, 450, 460], "unreason": [71, 158, 236, 327, 347, 430, 450, 537], "zstd_earlyabort_pass": [71, 450], "zstd_abort_s": [71, 450], "zio_deadman_log_al": [71, 222, 250, 347, 450], "possess": [71, 222, 250, 347, 450], "zio_slow_io_m": [71, 139, 158, 221, 222, 236, 249, 250, 327, 347, 411, 430, 450, 518, 537], "30000m": [71, 163, 347, 450], "zfs_xattr_compat": [71, 450], "scheme": [71, 73, 93, 112, 117, 127, 184, 196, 206, 220, 232, 248, 295, 349, 400, 450, 452, 472, 491, 496, 506], "zio_requeue_io_start_cut_in_lin": [71, 176, 198, 222, 250, 347, 450], "requeu": [71, 176, 198, 222, 250, 347, 450], "zio_taskq_batch_pct": [71, 176, 198, 222, 250, 347, 450], "zio_taskq_batch_tpq": [71, 347, 450], "zio_taskq_wr_iss_ncpu": 71, "zio_taskq_read": 71, "zio_taskq_writ": 71, "zvol_inhibit_dev": [71, 176, 198, 222, 250, 347, 450], "zvol_major": [71, 176, 198, 222, 250, 347, 450], "zvol_prefetch_byt": [71, 176, 198, 222, 250, 347, 450], "partition": [71, 347, 450], "zvol_request_sync": [71, 78, 198, 222, 250, 347, 450], "zvol_thread": [71, 198, 222, 250, 347, 450], "multiqueu": [71, 450], "blk": [71, 176, 198, 222, 250, 450], "mq": [71, 450], "zvol_blk_mq_thread": [71, 450], "zvol_use_blk_mq": [71, 450], "api": [71, 104, 231, 274, 379, 450, 483], "zvol_blk_mq_blocks_per_thread": [71, 450], "zvol_blk_mq_queue_depth": [71, 450], "queue_depth": [71, 450], "clamp": [71, 450], "blkdev_min_rq": [71, 450], "blkdev_max_rq": [71, 450], "blkdev_default_rq": [71, 450], "zvol_volmod": [71, 78, 198, 206, 222, 232, 250, 298, 347, 353, 450, 457], "zvol_enforce_quota": [71, 450], "minima": [71, 347, 450], "maxima": [71, 347, 450], "bleed": [71, 176, 198, 222, 250, 347, 450], "juli": [71, 110, 114, 139, 207, 217, 245, 353, 427, 450, 489, 493, 518], "devlink": [73, 174, 196, 220, 248, 349, 452], "hierarchi": [73, 77, 85, 90, 91, 92, 93, 95, 98, 101, 110, 112, 114, 115, 120, 127, 174, 181, 184, 196, 203, 206, 220, 227, 232, 248, 255, 260, 261, 263, 271, 282, 290, 295, 297, 349, 352, 360, 365, 366, 368, 376, 385, 387, 389, 395, 400, 452, 456, 464, 469, 470, 471, 472, 474, 477, 480, 489, 491, 493, 494, 499, 506], "coexist": [73, 174, 196, 220, 248, 349, 452], "enclosure_symlink": [73, 196, 220, 248, 349, 452], "sg": [73, 196, 220, 248, 349, 452], "enclosure_symlinks_prefix": [73, 196, 220, 248, 349, 452], "num": [73, 174, 196, 220, 248, 349, 452], "x201c": [73, 79, 87, 186, 248, 349, 354, 362, 452, 458, 466], "enc": [73, 196, 220, 248, 349, 452], "x201d": [73, 79, 87, 186, 248, 349, 354, 362, 452, 458, 466], "examin": [73, 81, 85, 86, 155, 174, 181, 186, 196, 203, 209, 220, 227, 228, 236, 248, 255, 256, 324, 334, 349, 356, 360, 361, 427, 452, 460, 464, 465, 534], "govern": [73, 85, 139, 174, 175, 181, 196, 197, 203, 220, 221, 
227, 248, 249, 255, 349, 360, 411, 452, 464, 518], "phy": [73, 85, 174, 181, 196, 203, 220, 227, 248, 255, 349, 360, 452, 464], "bai": [73, 174, 196, 220, 248, 349, 452], "sg_se": [73, 196, 220, 248, 349, 452], "unsupport": [73, 79, 81, 177, 186, 196, 199, 209, 220, 223, 236, 248, 251, 334, 349, 354, 356, 452, 458, 460], "pci_id": [73, 196, 220, 248, 349, 452], "06": [73, 196, 220, 248, 349, 452], "l1": [73, 196, 220, 248, 349, 452], "u0": [73, 196, 220, 248, 349, 452], "u1": [73, 145, 158, 163, 196, 209, 220, 236, 248, 332, 349, 435, 452, 524, 537, 542], "miscellan": [74, 76, 77, 78, 79, 80, 81, 167, 350, 352, 353, 354, 355, 356, 439, 453, 455, 456, 457, 458, 459, 460, 546], "hook": [74, 350, 453], "x2193": [74, 350, 453], "initqueu": [74, 350, 453], "sysinit": [74, 350, 453], "__________________": [74, 350, 453], "x2191": [74, 350, 453], "_____________________": [74, 350, 453], "sysroot": [74, 350, 453], "x2190": [74, 350, 453], "nonroot": [74, 350, 453], "______________________": [74, 350, 453], "needshutdown": [74, 350, 453], "bootup": [74, 350, 453], "flowchart": [74, 350, 453], "90zf": [74, 350, 453], "henceforth": [74, 350, 453], "libx32": [74, 350, 453], "glob": [74, 350, 453], "deem": [74, 86, 182, 204, 228, 256, 350, 361, 453, 465], "pluse": [74, 350, 453], "x2018": [74, 79, 263, 274, 350, 354, 453, 458], "x2019": [74, 79, 263, 274, 350, 354, 453, 458], "x00a0": [74, 88, 90, 100, 101, 104, 108, 109, 112, 118, 120, 127, 136, 139, 163, 183, 205, 206, 229, 231, 232, 236, 257, 258, 260, 271, 274, 278, 279, 282, 288, 290, 305, 350, 363, 365, 375, 376, 379, 383, 384, 387, 393, 395, 400, 408, 411, 435, 453, 467, 469, 479, 480, 483, 487, 488, 491, 497, 499, 506, 515, 518, 542], "rootflag": [74, 350, 453], "zfsprop": [74, 75, 80, 84, 90, 92, 95, 96, 98, 99, 100, 101, 103, 106, 115, 116, 119, 120, 121, 122, 124, 126, 127, 136, 139, 225, 253, 260, 262, 265, 266, 268, 269, 270, 271, 273, 276, 285, 286, 289, 290, 291, 293, 295, 305, 350, 351, 355, 359, 365, 367, 370, 371, 373, 374, 375, 376, 378, 381, 390, 391, 394, 395, 396, 398, 400, 408, 411, 453, 454, 459, 463, 469, 471, 474, 475, 477, 478, 479, 480, 482, 485, 494, 495, 498, 499, 500, 501, 503, 505, 506, 515, 518], "pivot": [74, 350, 453], "zfsforc": [74, 350, 453], "conjunct": [74, 92, 93, 108, 109, 110, 114, 132, 145, 147, 151, 157, 158, 184, 186, 206, 209, 232, 236, 262, 263, 278, 279, 280, 284, 301, 314, 316, 320, 326, 327, 350, 367, 368, 383, 384, 385, 389, 404, 417, 419, 423, 429, 430, 453, 471, 472, 487, 488, 489, 493, 511, 524, 526, 530, 536, 537], "zpool_import_opt": [74, 350, 453], "thrice": [74, 350, 453], "signal": [74, 87, 110, 114, 183, 205, 229, 257, 350, 362, 453, 466, 489, 493], "forcibli": [74, 93, 350, 368, 453, 472, 558], "hostonli": [74, 350, 453], "succeed": [74, 104, 231, 274, 350, 379, 453, 483], "plymouth": [74, 350, 453], "zpoolprop": [74, 75, 76, 100, 127, 132, 133, 136, 139, 141, 143, 147, 153, 156, 157, 160, 161, 163, 253, 301, 302, 305, 310, 312, 316, 322, 325, 326, 329, 330, 332, 350, 351, 400, 404, 405, 408, 411, 413, 415, 419, 425, 428, 429, 432, 433, 435, 453, 454, 455, 479, 506, 511, 512, 515, 518, 520, 522, 526, 532, 535, 536, 539, 540, 542, 559, 560], "march": [74, 88, 91, 92, 93, 94, 95, 98, 100, 107, 108, 109, 112, 113, 115, 117, 118, 132, 136, 137, 140, 143, 145, 147, 151, 158, 161, 163, 168, 189, 212, 250, 299, 335, 350, 453, 467, 470, 471, 472, 473, 474, 477, 479, 486, 487, 488, 491, 492, 494, 496, 497, 511, 515, 516, 519, 522, 524, 526, 530, 537, 540, 542], "vdevprop": [75, 141, 156, 454, 520, 
535], "zfsconcept": [75, 78, 91, 117, 127, 253, 261, 287, 295, 298, 351, 353, 366, 392, 400, 454, 457, 470, 496, 506], "zpoolconcept": [75, 78, 132, 136, 139, 158, 159, 161, 163, 253, 298, 301, 305, 327, 328, 330, 332, 351, 353, 404, 408, 411, 430, 431, 433, 435, 454, 457, 511, 515, 518, 537, 538, 540, 542], "annot": [76, 78, 81, 127, 184, 206, 232, 295, 298, 353, 400, 455, 457, 460, 506], "io_n": [76, 455], "io_t": [76, 455], "kb": [76, 78, 184, 206, 232, 298, 353, 455, 457], "forth": [76, 78, 81, 184, 206, 232, 298, 353, 455, 457, 460], "zettabyt": [76, 78, 95, 98, 115, 184, 206, 232, 265, 268, 285, 298, 353, 370, 373, 390, 455, 457, 474, 477, 494], "1536m": [76, 78, 184, 206, 232, 298, 353, 455, 457], "5g": [76, 78, 147, 163, 184, 186, 206, 209, 232, 236, 298, 332, 353, 435, 455, 457, 526, 542], "50gb": [76, 78, 184, 206, 232, 298, 353, 455, 457], "asiz": [76, 455], "psize": [76, 86, 182, 204, 228, 256, 361, 455, 465], "expands": [76, 81, 147, 186, 209, 236, 316, 334, 356, 419, 455, 460, 526], "physpath": [76, 455], "encpath": [76, 455], "fru": [76, 139, 175, 197, 221, 249, 411, 455, 518], "numchildren": [76, 455], "read_error": [76, 455], "write_error": [76, 455], "checksum_error": [76, 455], "initialize_error": [76, 455], "null_op": [76, 455], "read_op": [76, 455], "write_op": [76, 455], "free_op": [76, 455], "claim_op": [76, 455], "trim_op": [76, 455], "null_byt": [76, 455], "read_byt": [76, 455], "write_byt": [76, 455], "free_byt": [76, 455], "claim_byt": [76, 455], "trim_byt": [76, 455], "cumul": [76, 455], "bootsiz": [76, 455], "failfast": [76, 455], "propag": [76, 455], "punctuat": [76, 78, 81, 184, 206, 232, 298, 353, 455, 457, 460], "dash": [76, 78, 81, 136, 184, 186, 206, 209, 232, 236, 298, 305, 353, 408, 455, 457, 460, 515], "underscor": [76, 78, 81, 87, 136, 183, 184, 186, 205, 206, 209, 229, 232, 236, 257, 298, 305, 353, 362, 408, 455, 457, 460, 466, 515], "programmat": [76, 78, 81, 104, 127, 184, 206, 231, 232, 274, 295, 298, 353, 379, 400, 455, 457, 460, 483, 506], "revers": [76, 77, 78, 79, 81, 96, 102, 104, 106, 107, 124, 177, 184, 199, 206, 223, 231, 232, 251, 266, 274, 276, 277, 293, 297, 298, 352, 353, 354, 371, 377, 379, 381, 382, 398, 455, 456, 457, 458, 460, 475, 481, 483, 485, 486, 503], "octob": [76, 77, 165, 166, 198, 219, 221, 239, 361, 455, 456, 544, 545], "administ": [77, 184, 206, 232, 297, 352, 456], "snapdev": [77, 78, 88, 118, 184, 206, 232, 297, 298, 352, 353, 363, 393, 456, 457, 467, 497], "snapdir": [77, 78, 88, 95, 98, 115, 118, 127, 184, 206, 232, 258, 288, 295, 297, 298, 352, 353, 363, 393, 400, 456, 457, 467, 474, 477, 494, 497, 506], "standpoint": [77, 184, 206, 232, 297, 352, 456], "distinct": [77, 184, 206, 232, 297, 352, 456], "light": [77, 184, 206, 232, 297, 352, 456], "incent": [77, 184, 206, 232, 297, 352, 456], "instantan": [77, 145, 184, 206, 209, 232, 236, 297, 314, 352, 417, 456, 524], "relationship": [77, 90, 101, 107, 120, 184, 206, 232, 260, 271, 277, 290, 297, 352, 365, 376, 382, 395, 456, 469, 480, 486, 499], "promot": [77, 78, 83, 88, 91, 92, 93, 104, 112, 117, 118, 127, 184, 206, 231, 232, 253, 258, 261, 274, 288, 295, 297, 298, 352, 353, 358, 363, 366, 379, 393, 400, 456, 457, 462, 467, 470, 471, 472, 483, 491, 496, 497, 506], "stuff": [77, 184, 206, 232, 297, 352, 456], "tib": [77, 206, 232, 297, 352, 456], "improperli": [77, 183, 184, 205, 206, 229, 232, 257, 297, 352, 456], "shallow": [77, 456], "reflink": [77, 456], "brt": [77, 86, 456, 465], "sharenf": [78, 88, 95, 98, 115, 116, 118, 127, 184, 206, 232, 258, 286, 288, 
295, 298, 353, 363, 391, 393, 400, 457, 467, 474, 477, 494, 495, 497, 506], "sharesmb": [78, 88, 95, 98, 115, 116, 118, 127, 184, 206, 232, 258, 286, 288, 295, 298, 353, 363, 391, 393, 400, 457, 467, 474, 477, 494, 495, 497, 506], "shorten": [78, 81, 184, 186, 206, 209, 232, 236, 298, 334, 353, 356, 457, 460], "compressratio": [78, 95, 98, 115, 127, 184, 206, 232, 295, 298, 353, 400, 457, 474, 477, 494, 506], "refcompressratio": [78, 184, 206, 232, 298, 353, 457], "createtxg": [78, 206, 232, 298, 353, 457], "role": [78, 206, 232, 298, 353, 457], "defer_destroi": [78, 184, 206, 232, 298, 353, 457], "encryptionroot": [78, 90, 101, 120, 232, 260, 271, 290, 298, 353, 365, 376, 395, 457, 469, 480, 499], "implicitli": [78, 90, 101, 120, 186, 232, 260, 271, 290, 298, 353, 365, 376, 395, 457, 469, 480, 499], "filesystem_count": [78, 184, 206, 232, 298, 353, 457], "keystatu": [78, 90, 101, 120, 232, 260, 271, 290, 298, 353, 365, 376, 395, 457, 469, 480, 499], "lifetim": [78, 87, 183, 205, 206, 229, 232, 257, 298, 353, 362, 457, 466], "logicalreferenc": [78, 184, 206, 232, 298, 353, 457], "quantiti": [78, 184, 206, 232, 298, 353, 457], "closer": [78, 184, 206, 232, 298, 353, 457], "lrefer": [78, 184, 206, 232, 298, 353, 457], "logicalus": [78, 184, 206, 232, 298, 353, 457], "luse": [78, 184, 206, 232, 298, 353, 457], "objsetid": [78, 232, 298, 353, 457], "receive_resume_token": [78, 108, 109, 110, 114, 206, 232, 278, 279, 280, 284, 298, 353, 383, 384, 385, 389, 457, 487, 488, 489, 493], "opaqu": [78, 206, 232, 298, 353, 457], "token": [78, 108, 109, 165, 166, 206, 232, 278, 279, 298, 335, 353, 383, 384, 437, 438, 457, 487, 488, 544, 545], "redact_snap": [78, 298, 353, 457], "snapshot_count": [78, 184, 206, 232, 298, 353, 457], "snapshot_limit": [78, 88, 118, 184, 206, 232, 258, 288, 298, 353, 363, 393, 457, 467, 497], "usedbi": [78, 184, 206, 232, 298, 353, 457], "decompos": [78, 184, 206, 232, 298, 353, 457], "usedbychildren": [78, 95, 98, 115, 127, 184, 206, 232, 295, 298, 353, 400, 457, 474, 477, 494, 506], "usedbydataset": [78, 95, 98, 115, 127, 184, 206, 232, 295, 298, 353, 400, 457, 474, 477, 494, 506], "usedbyrefreserv": [78, 95, 98, 115, 127, 184, 206, 232, 295, 298, 353, 400, 457, 474, 477, 494, 506], "usedbysnapshot": [78, 95, 98, 115, 127, 184, 206, 232, 295, 298, 353, 400, 457, 474, 477, 494, 506], "refreserv": [78, 80, 81, 88, 92, 95, 98, 115, 118, 127, 184, 206, 232, 236, 258, 262, 288, 295, 298, 333, 334, 353, 355, 356, 363, 367, 393, 400, 457, 459, 460, 467, 471, 474, 477, 494, 497, 506], "userus": [78, 88, 90, 96, 101, 106, 118, 120, 124, 127, 184, 206, 232, 258, 260, 266, 271, 276, 288, 290, 293, 295, 298, 353, 363, 365, 371, 376, 381, 393, 395, 398, 400, 457, 467, 469, 475, 480, 485, 497, 499, 503, 506], "charg": [78, 184, 206, 232, 298, 353, 457], "owner": [78, 79, 110, 114, 184, 206, 223, 232, 251, 280, 284, 298, 353, 354, 385, 389, 457, 458, 489, 493], "userspac": [78, 83, 86, 96, 106, 127, 184, 206, 232, 253, 266, 276, 295, 298, 353, 358, 371, 381, 400, 457, 462, 465, 475, 485, 506], "unprivileg": [78, 88, 118, 163, 183, 184, 205, 206, 209, 229, 232, 236, 257, 298, 332, 353, 363, 393, 435, 457, 467, 497, 542], "grant": [78, 81, 88, 118, 127, 184, 186, 206, 209, 232, 236, 258, 288, 295, 298, 334, 353, 356, 363, 393, 400, 457, 460, 467, 497, 506], "privileg": [78, 79, 81, 87, 88, 92, 104, 118, 132, 136, 145, 163, 183, 184, 186, 205, 206, 209, 223, 229, 231, 232, 236, 251, 257, 258, 262, 274, 288, 298, 301, 305, 314, 332, 334, 353, 354, 356, 362, 363, 367, 379, 393, 404, 408, 
417, 435, 457, 458, 460, 466, 467, 471, 483, 497, 511, 515, 524, 542], "everyon": [78, 88, 118, 184, 206, 232, 258, 288, 298, 353, 363, 393, 457, 467, 497], "joe": [78, 184, 206, 232, 298, 353, 457], "789": [78, 184, 206, 232, 298, 353, 457], "sid": [78, 96, 106, 124, 184, 206, 232, 266, 276, 293, 298, 353, 371, 381, 398, 457, 475, 485, 503], "smith": [78, 184, 206, 232, 298, 353, 457], "mydomain": [78, 184, 206, 232, 298, 353, 457], "456": [78, 184, 206, 232, 298, 353, 457], "userobjus": [78, 88, 96, 106, 118, 124, 206, 232, 266, 276, 293, 298, 353, 363, 371, 381, 393, 398, 457, 467, 475, 485, 497, 503], "behalf": [78, 206, 232, 298, 353, 457], "userobjquota": [78, 88, 96, 106, 118, 124, 206, 232, 266, 276, 293, 298, 353, 363, 371, 381, 393, 398, 457, 467, 475, 485, 497, 503], "userref": [78, 184, 206, 232, 298, 353, 457], "groupus": [78, 88, 90, 101, 118, 120, 127, 184, 206, 232, 258, 260, 271, 288, 290, 295, 298, 353, 363, 365, 376, 393, 395, 400, 457, 467, 469, 480, 497, 499, 506], "groupobjus": [78, 88, 118, 206, 232, 298, 353, 363, 393, 457, 467, 497], "projectus": [78, 88, 118, 127, 232, 258, 288, 298, 353, 363, 393, 400, 457, 467, 497, 506], "chattr": [78, 79, 223, 232, 251, 298, 353, 354, 457, 458], "anytim": [78, 223, 232, 251, 298, 353, 457], "lsattr": [78, 232, 298, 353, 457], "projectobjus": [78, 88, 118, 232, 258, 288, 298, 353, 363, 393, 457, 467, 497], "fileset": [78, 232, 298, 353, 457], "projectobjquota": [78, 88, 118, 232, 258, 288, 298, 353, 363, 393, 457, 467, 497], "snapshots_chang": [78, 457], "kbyte": [78, 184, 206, 232, 298, 353, 457], "volblock": [78, 184, 206, 232, 298, 353, 457], "interpret": [78, 86, 88, 104, 118, 182, 184, 204, 206, 228, 231, 232, 256, 258, 274, 288, 298, 353, 361, 363, 379, 393, 457, 465, 467, 483, 497], "aclinherit": [78, 88, 95, 98, 115, 118, 127, 184, 206, 232, 258, 288, 295, 298, 353, 363, 393, 400, 457, 467, 474, 477, 494, 497, 506], "noallow": [78, 184, 206, 232, 298, 353, 457], "ac": [78, 88, 118, 127, 184, 206, 232, 295, 298, 353, 400, 457, 467, 497, 506], "write_acl": [78, 184, 206, 232, 298, 353, 457], "write_own": [78, 184, 206, 232, 298, 353, 457], "aclmod": [78, 88, 95, 98, 115, 118, 127, 295, 298, 353, 363, 393, 400, 457, 467, 474, 477, 494, 497, 506], "groupmask": [78, 298, 353, 457], "sticki": [78, 298, 353, 457], "noacl": [78, 184, 206, 232, 298, 353, 457], "getfacl": [78, 298, 353, 457], "setfacl": [78, 298, 353, 457], "mailer": [78, 184, 206, 232, 298, 353, 457], "noatim": [78, 184, 206, 232, 298, 353, 457], "moder": [78, 184, 206, 232, 298, 353, 457, 557], "nul": [78, 105, 232, 275, 298, 353, 380, 457, 484], "fscontext": [78, 88, 118, 180, 184, 202, 206, 226, 232, 254, 298, 353, 363, 393, 457, 467, 497], "defcontext": [78, 88, 118, 180, 202, 206, 226, 232, 254, 298, 353, 363, 393, 457, 467, 497], "unlabel": [78, 180, 184, 202, 206, 226, 232, 254, 298, 353, 457], "rootcontext": [78, 88, 118, 180, 184, 202, 206, 226, 232, 254, 298, 353, 363, 393, 457, 467, 497], "inod": [78, 94, 176, 180, 184, 198, 202, 206, 222, 226, 232, 250, 254, 264, 298, 347, 353, 369, 457, 473], "1k": [78, 206, 232, 298, 353, 457], "2k": [78, 206, 209, 232, 236, 298, 353, 457], "liter": [78, 100, 184, 206, 232, 270, 298, 353, 375, 457, 479], "lustr": [78, 206, 232, 298, 353, 457], "dnsize": [78, 206, 232, 298, 353, 457], "hex": [78, 86, 182, 204, 228, 232, 256, 298, 353, 361, 457, 465], "pbkdf2": [78, 232, 298, 353, 457], "pbkdf2iter": [78, 88, 90, 101, 118, 120, 232, 260, 271, 290, 298, 353, 363, 365, 376, 393, 395, 457, 467, 469, 480, 497, 499], 
"secret": [78, 79, 199, 223, 232, 251, 298, 353, 354, 457, 458], "uri": [78, 232, 298, 353, 457], "ssl_ca_cert_fil": [78, 353, 457], "concaten": [78, 353, 457], "ssl_ca_cert_path": [78, 353, 457], "ssl_client_cert_fil": [78, 353, 457], "ssl_client_key_fil": [78, 353, 457], "brute": [78, 232, 298, 353, 457], "attack": [78, 79, 90, 101, 120, 199, 223, 232, 251, 260, 271, 290, 298, 353, 354, 365, 376, 395, 457, 458, 469, 480, 499], "arriv": [78, 232, 298, 353, 457], "350000": [78, 232, 298, 353, 457], "noexec": [78, 184, 206, 232, 298, 353, 457], "volthread": 78, "ancestor": [78, 88, 95, 98, 104, 115, 118, 127, 184, 206, 232, 258, 265, 268, 274, 285, 288, 295, 298, 353, 363, 370, 373, 379, 390, 393, 400, 457, 467, 474, 477, 483, 494, 497, 506], "impos": [78, 184, 206, 232, 298, 353, 457], "special_small_block": [78, 80, 88, 118, 232, 236, 298, 333, 353, 355, 363, 393, 457, 459, 467, 497], "unshar": [78, 93, 116, 127, 184, 206, 232, 263, 286, 295, 298, 353, 368, 391, 400, 457, 472, 495, 506], "nbmand": [78, 88, 95, 98, 102, 115, 118, 127, 184, 206, 230, 232, 258, 272, 288, 295, 298, 353, 363, 377, 393, 400, 457, 467, 474, 477, 481, 494, 497, 506], "buggi": [78, 457], "overlai": [78, 88, 103, 118, 121, 184, 206, 232, 273, 291, 298, 353, 363, 378, 393, 396, 457, 467, 482, 497, 500], "userquota": [78, 88, 96, 106, 118, 124, 184, 206, 232, 258, 266, 276, 288, 293, 298, 353, 363, 371, 381, 393, 398, 457, 467, 475, 485, 497, 503], "refus": [78, 93, 113, 138, 184, 186, 206, 209, 232, 236, 263, 283, 298, 307, 353, 368, 388, 410, 457, 472, 492, 517], "edquot": [78, 104, 184, 206, 231, 232, 274, 298, 353, 379, 457, 483], "groupquota": [78, 88, 118, 184, 206, 232, 258, 288, 298, 353, 363, 393, 457, 467, 497], "groupobjquota": [78, 88, 118, 206, 232, 298, 353, 363, 393, 457, 467, 497], "projectquota": [78, 88, 118, 232, 258, 288, 298, 353, 363, 393, 457, 467, 497], "rdonli": [78, 81, 184, 186, 206, 209, 232, 236, 298, 334, 353, 356, 457, 460], "suboptim": [78, 184, 206, 232, 298, 353, 457], "recsiz": [78, 184, 206, 232, 298, 353, 457], "redundantli": [78, 80, 184, 206, 232, 298, 353, 355, 457, 459], "refquota": [78, 81, 88, 95, 98, 115, 118, 127, 184, 206, 232, 236, 258, 288, 295, 298, 334, 353, 356, 363, 393, 400, 457, 460, 467, 474, 477, 494, 497, 506], "thick": [78, 232, 298, 353, 457], "hasn": [78, 184, 206, 232, 298, 353, 457], "norelatim": [78, 184, 206, 232, 298, 353, 457], "secondari": [78, 184, 206, 232, 298, 353, 457], "specul": 78, "zfs_disable_prefetch": 78, "bypass": 78, "setuid": [78, 88, 95, 98, 102, 115, 118, 127, 184, 206, 230, 232, 258, 272, 288, 295, 298, 353, 363, 377, 393, 400, 457, 467, 474, 477, 481, 494, 497, 506], "suid": [78, 206, 232, 298, 353, 457], "nosuid": [78, 184, 206, 232, 298, 353, 457], "usershar": [78, 127, 184, 206, 232, 295, 298, 353, 400, 457, 506], "ldap": [78, 127, 184, 206, 232, 295, 298, 353, 400, 457, 506], "smbpasswd": [78, 127, 184, 206, 232, 295, 298, 353, 400, 457, 506], "disallow": [78, 206, 232, 298, 353, 457], "reshar": [78, 457], "exportf": [78, 127, 184, 206, 232, 295, 298, 353, 400, 457, 506], "crossmnt": [78, 206, 232, 298, 353, 457], "no_subtree_check": [78, 184, 206, 232, 298, 353, 457], "negat": [78, 86, 256, 353, 361, 457, 465], "o_dsync": [78, 184, 206, 232, 298, 353, 457], "understood": [78, 184, 206, 232, 298, 353, 457], "unexpect": [78, 93, 184, 206, 232, 263, 298, 353, 368, 457, 472], "fledg": [78, 206, 232, 298, 353, 457], "exposit": [78, 206, 232, 298, 353, 457], "vscan": [78, 88, 95, 98, 115, 118, 127, 184, 206, 232, 258, 288, 295, 298, 
353, 363, 393, 400, 457, 467, 474, 477, 494, 497, 506], "virus": [78, 184, 206, 232, 298, 353, 457], "viru": [78, 184, 206, 232, 298, 353, 457], "getxattr": [78, 184, 206, 232, 298, 353, 457], "setxattr": [78, 184, 206, 232, 298, 353, 457], "noxattr": [78, 180, 184, 202, 206, 226, 232, 254, 298, 353, 457], "jail": [78, 83, 119, 127, 253, 289, 295, 298, 353, 358, 394, 400, 457, 462, 498, 506], "unix": [78, 97, 111, 184, 206, 232, 298, 353, 457, 476, 490], "formc": [78, 184, 206, 232, 298, 353, 457], "formkc": [78, 184, 206, 232, 298, 353, 457], "formkd": [78, 184, 206, 232, 298, 353, 457], "unicod": [78, 184, 206, 232, 298, 353, 457], "mand": [78, 353, 457], "nomand": [78, 353, 457], "nodevic": [78, 184, 206, 232, 298, 353, 457], "nosetuid": [78, 184, 206, 232, 298, 353, 457], "whitespac": [79, 80, 81, 186, 209, 236, 333, 354, 355, 356, 458, 459, 460], "saniti": [79, 164, 354, 436, 458, 543], "newlin": [79, 105, 164, 232, 275, 354, 380, 436, 458, 484, 543], "bootpool": [79, 354, 458], "dset": [79, 102, 354, 377, 458, 481], "copy_file_rang": [79, 458], "goe": [79, 110, 114, 184, 206, 232, 280, 284, 385, 389, 458, 489, 493], "bookmark_v2": [79, 223, 251, 354, 458, 557], "bookmark_written": [79, 251, 354, 458], "phase": [79, 251, 354, 458], "device_remov": [79, 151, 223, 236, 251, 320, 354, 423, 458, 530], "nist": [79, 199, 223, 251, 354, 458], "competit": [79, 199, 223, 251, 354, 458], "350": [79, 199, 223, 251, 354, 458], "seed": [79, 199, 223, 251, 354, 458], "fed": [79, 199, 223, 251, 354, 458], "112": [79, 177, 199, 223, 251, 354, 458], "misnom": [79, 177, 199, 223, 251, 354, 458], "bpobj": [79, 131, 177, 185, 199, 208, 223, 235, 251, 300, 354, 403, 458, 510], "errlog": [79, 131, 185, 208, 235, 300, 403, 458, 510], "Its": [79, 133, 223, 251, 354, 458], "bonu": [79, 199, 223, 251, 354, 458], "multi_vdev_crash_dump": [79, 199, 223, 251, 354, 458], "arrang": [79, 199, 223, 251, 354, 458], "dumpadm": [79, 199, 223, 251, 354, 458], "obsolete_count": [79, 223, 251, 354, 458], "x2013": [79, 81, 127, 334, 347, 354, 356, 400, 458, 460, 506], "prjid": [79, 223, 251, 354, 458], "raidz_expans": 79, "redaction_bookmark": [79, 110, 114, 251, 280, 284, 354, 385, 389, 458, 489, 493], "redacted_dataset": [79, 251, 354, 458], "redaction_list_spil": 79, "fip": [79, 199, 223, 251, 354, 458], "arithmet": [79, 199, 223, 251, 354, 458], "candid": [79, 199, 223, 251, 354, 458], "finalist": [79, 199, 223, 251, 354, 458], "vdev_zaps_v2": [79, 458], "reguid": [79, 81, 83, 134, 139, 163, 175, 186, 197, 209, 221, 236, 249, 253, 303, 332, 334, 356, 358, 406, 411, 435, 458, 460, 462, 513, 518, 542], "xattrdir": [79, 458], "durabl": [79, 458], "rewound": [79, 223, 251, 354, 458], "zstd_compress": [79, 251, 354, 458], "modestli": [79, 251, 354, 458], "250": [79, 251, 354, 458], "mb": [79, 222, 231, 232, 250, 251, 274, 354, 379, 458], "june": [79, 96, 97, 106, 111, 122, 123, 124, 126, 133, 152, 155, 157, 175, 197, 199, 223, 258, 259, 261, 262, 263, 264, 265, 266, 267, 268, 270, 275, 276, 277, 280, 281, 283, 284, 285, 286, 287, 288, 292, 293, 295, 297, 352, 355, 368, 370, 371, 372, 373, 381, 382, 386, 390, 397, 398, 400, 408, 424, 429, 430, 435, 458, 475, 476, 485, 490, 501, 502, 503, 505, 531, 534, 536], "slice": [80, 209, 236, 333, 355, 459], "shorthand": [80, 186, 209, 236, 333, 355, 459], "draid2": [80, 355, 459], "draid3": [80, 355, 459], "single_drive_iop": [80, 355, 459], "datad": [80, 355, 459], "childrenc": [80, 355, 459], "sparess": [80, 355, 459], "mypool": [80, 186, 209, 236, 333, 355, 459], "rich": [80, 186, 
"spl_hostid_path": 47, "spl_kmem_alloc_max": 47, "spl_kmem_alloc_warn": 47, "spl_kmem_cache_expir": 47, "spl_kmem_cache_kmem_limit": 47, "spl_kmem_cache_max_s": 47, "spl_kmem_cache_obj_per_slab": 47, "spl_kmem_cache_obj_per_slab_min": 47, "spl_kmem_cache_reclaim": 47, "spl_kmem_cache_slab_limit": 47, "spl_max_show_task": 47, "spl_panic_halt": 47, "spl_taskq_kick": 47, "spl_taskq_thread_bind": 47, "spl_taskq_thread_dynam": 47, "spl_taskq_thread_prior": 47, "spl_taskq_thread_sequenti": 47, "spl_kmem_cache_kmem_thread": 47, "spl_kmem_cache_magazine_s": 47, "workload": 48, "tune": [48, 51], "align": 48, "shift": 48, "ashift": 48, "z": 48, "stripe": 48, "width": 48, "records": 48, "larger": [48, 53], "record": 48, "zvol": [48, 53], "volblocks": 48, "dedupl": 48, "geometri": 48, "whole": 48, "partit": 48, "recommend": 48, "init_on_alloc": 48, "atim": 48, "free": 48, "lz4": 48, "synchron": 48, "i": [48, 50, 53, 54, 559, 560], "overprovis": 48, "secur": 48, "eras": 48, "trick": 48, "bit": [48, 53], "torrent": 48, "databas": 48, "mysql": 48, "innodb": 48, "postgresql": 48, "sqlite": 48, "server": 48, "samba": 48, "sequenti": 48, "video": 48, "game": 48, "directori": 48, "lutri": 48, "steam": 48, "wine": 48, "virtual": 48, "machin": 48, "transact": 49, "zio": 50, "schedul": 50, "admin": 52, "faq": [53, 54], "what": 53, "do": [53, 54], "have": [53, 54], "architectur": 53, "32": 53, "v": 53, "64": 53, "when": 53, "set": [53, 115, 156, 285, 325, 390, 428, 494, 535], "etc": 53, "vdev_id": [53, 73, 85, 174, 181, 196, 203, 220, 227, 248, 255, 349, 360, 452, 464], "conf": [53, 73, 174, 196, 220, 248, 349, 452], "an": [53, 54], "exist": 53, "zpool": [53, 79, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 177, 186, 199, 209, 223, 236, 251, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 354, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 458, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542], "new": 53, "stream": 53, "hole_birth": [53, 54], "bug": 53, "larg": 53, "block": 53, "ceph": 53, "other": 53, "guidelin": 53, "advanc": 53, "than": 53, "expect": 53, "devic": [53, 69, 345, 448, 548, 549, 550, 551, 552, 555], "hypervisor": 53, "dom0": 53, "udisks2": 53, "mapper": 53, "entri": 53, "report": 53, "problem": 53, "doe": 53, "conduct": 53, "hole": 54, "birth": 54, "short": 54, "explan": 54, "enabl": 54, "how": 54, "know": 54, "am": 54, "affect": 54, "ani": 54, "less": 54, "pain": 54, "wai": 54, "fix": 54, "thi": 54, "we": 54, "alreadi": 54, "long": 54, "mail": 55, "sign": 56, "kei": [56, 90, 101, 120, 260, 271, 290, 365, 376, 395, 469, 480, 499], "check": 56, "signatur": 56, "project": [57, 105, 275, 380, 484], "commun": 57, "man": 60, "arcstat": [61, 239, 338, 440], "cstyle": [62, 168, 189, 212, 240, 339, 441], "user": [63, 169, 190, 213, 241, 340, 442], "raidz_test": [64, 191, 214, 242, 341, 443], "runner": [65, 444], "zhack": [66, 170, 192, 215, 243, 342, 445], "ztest": [67, 172, 194, 216, 244, 343, 446], "zvol_wait": [68, 217, 245, 344, 447], "special": [69, 345, 448], "convent": [72, 173, 195, 218, 246, 348, 451], "dracut": [74, 350, 453], "miscellan": [75, 351, 
454], "vdevprop": [76, 455], "zfsconcept": [77, 297, 352, 456], "zfsprop": [78, 233, 298, 353, 457], "zpoolconcept": [80, 333, 355, 459], "zpoolprop": [81, 334, 356, 460], "fsck": [82, 178, 200, 224, 252, 357, 461], "administr": [83, 179, 201, 225, 253, 358, 462], "zdb": [86, 182, 204, 228, 256, 361, 465], "allow": [88, 258, 363, 467], "bookmark": [89, 259, 364, 468], "destroi": [93, 137, 263, 306, 368, 409, 472, 516], "diff": [94, 264, 369, 473], "groupspac": [96, 266, 371, 475], "inherit": [98, 268, 373, 477], "jail": [99, 269, 374, 478], "load": [101, 271, 376, 480], "program": [104, 231, 274, 379, 483], "projectspac": [106, 276, 381, 485], "promot": [107, 277, 382, 486], "recv": [109, 279, 384, 488], "redact": [110, 280, 385, 489], "renam": [112, 282, 387, 491], "share": [116, 286, 391, 495], "unallow": [118, 288, 393, 497], "unjail": [119, 289, 394, 498], "unload": [120, 290, 395, 499], "unmount": [121, 291, 396, 500], "unzon": [122, 501], "upgrad": [123, 161, 292, 330, 397, 433, 502, 540], "userspac": [124, 293, 398, 503], "wait": [125, 162, 294, 331, 399, 434, 504, 541], "zone": [126, 505], "zfs_ids_to_path": [128, 296, 401, 507], "zfs_prepare_disk": [129, 508], "zgenhostid": [130, 207, 234, 299, 402, 509], "zinject": [131, 185, 208, 235, 300, 403, 510], "add": [132, 301, 404, 511], "attach": [133, 302, 405, 512], "clear": [135, 304, 407, 514], "detach": [138, 307, 410, 517], "export": [140, 309, 412, 519], "histori": [142, 311, 414, 521], "iostat": [145, 314, 417, 524], "labelclear": [146, 315, 418, 525], "offlin": [148, 317, 420, 527], "onlin": [149, 318, 421, 528], "reguid": [150, 319, 422, 529], "reopen": [152, 321, 424, 531], "split": [157, 326, 429, 536], "statu": [158, 327, 430, 537], "sync": [159, 328, 431, 538], "zpool_influxdb": [164, 436, 543], "zstream": [165, 335, 437, 544], "zstreamdump": [166, 187, 210, 237, 336, 438, 545], "zpio": [171, 193], "v0": [188, 211, 238], "v2": [337, 439, 546], "0": 337, "id": [547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "8000": [547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561], "corrupt": [547, 550, 551, 553, 554], "2q": 548, "miss": [548, 549, 552], "replic": [548, 549, 550, 551, 555], "3c": 549, "non": [549, 551], "4j": 550, "label": [550, 551, 558], "5e": 551, "6x": 552, "top": 552, "72": 553, "8a": 554, "data": 554, "9p": 555, "fail": 555, "a5": 556, "incompat": 556, "er": 557, "ei": 558, "mismatch": 558, "hc": 559, "jq": 560, "k4": 561, "intent": 561, "read": 561}, "envversion": {"sphinx.domains.c": 3, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 9, "sphinx.domains.index": 1, "sphinx.domains.javascript": 3, "sphinx.domains.math": 2, "sphinx.domains.python": 4, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "sphinx.ext.todo": 2, "sphinx.ext.intersphinx": 1, "sphinx": 58}, "alltitles": {"Checksums and Their Use in ZFS": [[1, "checksums-and-their-use-in-zfs"]], "Checksum Algorithms": [[1, "checksum-algorithms"]], "Checksum Accelerators": [[1, "checksum-accelerators"]], "Checksum Microbenchmarks": [[1, "checksum-microbenchmarks"]], "Disabling Checksums": [[1, "disabling-checksums"]], "Feature Flags": [[2, "feature-flags"]], "Compatibility": [[2, "compatibility"]], "Reference materials": [[2, "reference-materials"]], "Feature flags implementation per OS": [[2, "feature-flags-implementation-per-os"]], "RAIDZ": [[3, "raidz"]], "Introduction": [[3, "introduction"], [5, "introduction"], [46, "introduction"]], "Space efficiency": [[3, 
"space-efficiency"]], "Performance considerations": [[3, "performance-considerations"]], "Write": [[3, "write"]], "Troubleshooting": [[4, "troubleshooting"], [18, "troubleshooting"], [19, "troubleshooting"], [20, "troubleshooting"], [22, "troubleshooting"], [33, "troubleshooting"], [34, "troubleshooting"], [36, "troubleshooting"], [41, "troubleshooting"], [42, "troubleshooting"]], "Todo": [[4, "id1"]], "About Log Files": [[4, "about-log-files"]], "Generic Kernel Log": [[4, "generic-kernel-log"]], "ZFS Kernel Module Debug Messages": [[4, "zfs-kernel-module-debug-messages"]], "Unkillable Process": [[4, "unkillable-process"]], "ZFS Events": [[4, "zfs-events"]], "dRAID": [[5, "draid"]], "Create a dRAID vdev": [[5, "create-a-draid-vdev"]], "Rebuilding to a Distributed Spare": [[5, "rebuilding-to-a-distributed-spare"]], "Rebalancing": [[5, "rebalancing"]], "Basic Concepts": [[6, "basic-concepts"]], "Contents:": [[6, null], [13, null], [51, null], [57, null], [562, null]], "Buildbot Options": [[7, "buildbot-options"]], "Choosing Builders": [[7, "choosing-builders"]], "Preventing a commit from being built and tested.": [[7, "preventing-a-commit-from-being-built-and-tested"]], "Submitting a commit to STYLE and TEST builders only.": [[7, "submitting-a-commit-to-style-and-test-builders-only"]], "Requiring SPL Versions": [[7, "requiring-spl-versions"]], "Build SPL from a specific pull request": [[7, "build-spl-from-a-specific-pull-request"]], "Build SPL branch spl-branch-name from zfsonlinux/spl repository": [[7, "build-spl-branch-spl-branch-name-from-zfsonlinux-spl-repository"]], "Requiring Kernel Version": [[7, "requiring-kernel-version"]], "Build Linux Kernel Version 4.14": [[7, "build-linux-kernel-version-4-14"]], "Build Steps Overrides": [[7, "build-steps-overrides"]], "Skip building the SPL and build Lustre without ldiskfs": [[7, "skip-building-the-spl-and-build-lustre-without-ldiskfs"]], "Build ZFS Only": [[7, "build-zfs-only"]], "Configuring Tests with the TEST File": [[7, "configuring-tests-with-the-test-file"]], "Building ZFS": [[8, "building-zfs"]], "GitHub Repositories": [[8, "github-repositories"]], "Installing Dependencies": [[8, "installing-dependencies"]], "Build Options": [[8, "build-options"]], "Developing In-Tree": [[8, "developing-in-tree"]], "Clone from GitHub": [[8, "clone-from-github"]], "Configure and Build": [[8, "configure-and-build"]], "Install": [[8, "install"]], "Running zloop.sh and zfs-tests.sh": [[8, "running-zloop-sh-and-zfs-tests-sh"]], "Custom Packages": [[9, "custom-packages"]], "RHEL, CentOS and Fedora": [[9, "rhel-centos-and-fedora"]], "DKMS": [[9, "dkms"], [9, "dkms-1"], [32, "dkms"]], "kmod": [[9, "kmod"], [9, "kmod-1"]], "kABI-tracking kmod": [[9, "kabi-tracking-kmod"], [32, "kabi-tracking-kmod"]], "Debian and Ubuntu": [[9, "debian-and-ubuntu"]], "Get the Source Code": [[9, "get-the-source-code"]], "Released Tarball": [[9, "released-tarball"]], "Git Master Branch": [[9, "git-master-branch"]], "Git and GitHub for beginners (ZoL edition)": [[10, "git-and-github-for-beginners-zol-edition"]], "First time setup": [[10, "first-time-setup"]], "Cloning the initial repository": [[10, "cloning-the-initial-repository"]], "Preparing and making changes": [[10, "preparing-and-making-changes"]], "Testing your patches before pushing": [[10, "testing-your-patches-before-pushing"]], "Committing your changes to be pushed": [[10, "committing-your-changes-to-be-pushed"]], "Pushing and creating the pull request": [[10, "pushing-and-creating-the-pull-request"]], "Correcting issues 
with your pull request": [[10, "correcting-issues-with-your-pull-request"]], "Maintaining your repository": [[10, "maintaining-your-repository"]], "Final words": [[10, "final-words"]], "OpenZFS Exceptions": [[11, "openzfs-exceptions"]], "Format:": [[11, "format"]], "OpenZFS Patches": [[12, "openzfs-patches"]], "Porting OpenZFS changes to ZFS on Linux": [[12, "porting-openzfs-changes-to-zfs-on-linux"]], "Setup the Environment": [[12, "setup-the-environment"]], "Pick a patch": [[12, "pick-a-patch"]], "Porting a Patch": [[12, "porting-a-patch"]], "Cherry-pick": [[12, "cherry-pick"]], "Manual merge": [[12, "manual-merge"]], "Testing a Patch": [[12, "testing-a-patch"]], "Merging the Patch": [[12, "merging-the-patch"]], "Porting ZFS on Linux changes to OpenZFS": [[12, "porting-zfs-on-linux-changes-to-openzfs"]], "Developer Resources": [[13, "developer-resources"]], "Alpine Linux Root on ZFS": [[14, "alpine-linux-root-on-zfs"]], "Preparation": [[14, "preparation"], [16, "preparation"], [25, "preparation"], [28, "preparation"], [31, "preparation"]], "System Installation": [[14, "system-installation"], [16, "system-installation"], [25, "system-installation"], [28, "system-installation"], [31, "system-installation"]], "System Configuration": [[14, "system-configuration"], [16, "system-configuration"], [25, "system-configuration"], [28, "system-configuration"], [31, "system-configuration"]], "Alpine Linux": [[15, "alpine-linux"]], "Contents": [[15, "contents"], [17, "contents"], [26, "contents"], [29, "contents"], [32, "contents"]], "Installation": [[15, "installation"], [17, "installation"], [23, "installation"], [26, "installation"], [29, "installation"], [38, "installation"], [40, "installation"], [53, "installation"]], "Root on ZFS": [[15, "root-on-zfs"], [17, "root-on-zfs"], [23, "root-on-zfs"], [26, "root-on-zfs"], [29, "root-on-zfs"], [32, "root-on-zfs"], [38, "root-on-zfs"], [40, "root-on-zfs"]], "Arch Linux Root on ZFS": [[16, "arch-linux-root-on-zfs"]], "Bootloader": [[16, "bootloader"], [25, "bootloader"], [31, "bootloader"]], "Arch Linux": [[17, "arch-linux"]], "Support": [[17, "support"], [18, "support"], [19, "support"], [20, "support"], [22, "support"], [29, "support"], [33, "support"], [34, "support"], [35, "support"], [36, "support"], [37, "support"], [41, "support"], [42, "support"]], "Overview": [[17, "overview"], [18, "overview"], [19, "overview"], [20, "overview"], [22, "overview"], [33, "overview"], [34, "overview"], [35, "overview"], [36, "overview"], [37, "overview"], [41, "overview"], [42, "overview"]], "Contribute": [[17, "contribute"], [29, "contribute"]], "Debian Bookworm Root on ZFS": [[18, "debian-bookworm-root-on-zfs"]], "Table of Contents": [[18, "table-of-contents"], [19, "table-of-contents"], [20, "table-of-contents"], [22, "table-of-contents"], [23, "table-of-contents"], [33, "table-of-contents"], [34, "table-of-contents"], [35, "table-of-contents"], [36, "table-of-contents"], [37, "table-of-contents"], [38, "table-of-contents"], [40, "table-of-contents"], [41, "table-of-contents"], [42, "table-of-contents"], [46, "table-of-contents"], [48, "table-of-contents"], [53, "table-of-contents"]], "Caution": [[18, "caution"], [19, "caution"], [20, "caution"], [22, "caution"], [33, "caution"], [34, "caution"], [35, "caution"], [36, "caution"], [37, "caution"], [41, "caution"], [42, "caution"]], "System Requirements": [[18, "system-requirements"], [19, "system-requirements"], [20, "system-requirements"], [22, "system-requirements"], [33, "system-requirements"], [34, 
"system-requirements"], [35, "system-requirements"], [36, "system-requirements"], [37, "system-requirements"], [41, "system-requirements"], [42, "system-requirements"]], "Contributing": [[18, "contributing"], [19, "contributing"], [20, "contributing"], [22, "contributing"], [33, "contributing"], [34, "contributing"], [35, "contributing"], [36, "contributing"], [37, "contributing"], [41, "contributing"], [42, "contributing"]], "Encryption": [[18, "encryption"], [19, "encryption"], [20, "encryption"], [22, "encryption"], [33, "encryption"], [34, "encryption"], [35, "encryption"], [36, "encryption"], [37, "encryption"], [41, "encryption"], [42, "encryption"]], "Step 1: Prepare The Install Environment": [[18, "step-1-prepare-the-install-environment"], [19, "step-1-prepare-the-install-environment"], [20, "step-1-prepare-the-install-environment"], [22, "step-1-prepare-the-install-environment"], [33, "step-1-prepare-the-install-environment"], [34, "step-1-prepare-the-install-environment"], [36, "step-1-prepare-the-install-environment"], [41, "step-1-prepare-the-install-environment"], [42, "step-1-prepare-the-install-environment"]], "Step 2: Disk Formatting": [[18, "step-2-disk-formatting"], [19, "step-2-disk-formatting"], [20, "step-2-disk-formatting"], [22, "step-2-disk-formatting"], [33, "step-2-disk-formatting"], [34, "step-2-disk-formatting"], [36, "step-2-disk-formatting"], [41, "step-2-disk-formatting"], [42, "step-2-disk-formatting"]], "Step 3: System Installation": [[18, "step-3-system-installation"], [19, "step-3-system-installation"], [20, "step-3-system-installation"], [22, "step-3-system-installation"], [33, "step-3-system-installation"], [34, "step-3-system-installation"], [35, "step-3-system-installation"], [36, "step-3-system-installation"], [37, "step-3-system-installation"], [41, "step-3-system-installation"], [42, "step-3-system-installation"]], "Step 4: System Configuration": [[18, "step-4-system-configuration"], [19, "step-4-system-configuration"], [20, "step-4-system-configuration"], [22, "step-4-system-configuration"], [33, "step-4-system-configuration"], [34, "step-4-system-configuration"], [35, "step-4-system-configuration"], [36, "step-4-system-configuration"], [37, "step-4-system-configuration"]], "Step 5: GRUB Installation": [[18, "step-5-grub-installation"], [19, "step-5-grub-installation"], [20, "step-5-grub-installation"], [22, "step-5-grub-installation"], [33, "step-5-grub-installation"], [34, "step-5-grub-installation"], [36, "step-5-grub-installation"]], "Step 6: First Boot": [[18, "step-6-first-boot"], [19, "step-6-first-boot"], [20, "step-6-first-boot"], [22, "step-6-first-boot"], [33, "step-6-first-boot"], [34, "step-6-first-boot"], [36, "step-6-first-boot"]], "Step 7: Optional: Configure Swap": [[18, "step-7-optional-configure-swap"], [19, "step-7-optional-configure-swap"], [20, "step-7-optional-configure-swap"]], "Step 8: Full Software Installation": [[18, "step-8-full-software-installation"], [19, "step-8-full-software-installation"], [20, "step-8-full-software-installation"], [22, "step-8-full-software-installation"], [33, "step-8-full-software-installation"]], "Step 9: Final Cleanup": [[18, "step-9-final-cleanup"], [19, "step-9-final-cleanup"], [20, "step-9-final-cleanup"], [22, "step-9-final-cleanup"], [33, "step-9-final-cleanup"]], "Rescuing using a Live CD": [[18, "rescuing-using-a-live-cd"], [19, "rescuing-using-a-live-cd"], [20, "rescuing-using-a-live-cd"], [22, "rescuing-using-a-live-cd"], [33, "rescuing-using-a-live-cd"], [34, 
"rescuing-using-a-live-cd"], [36, "rescuing-using-a-live-cd"], [41, "rescuing-using-a-live-cd"], [42, "rescuing-using-a-live-cd"]], "Areca": [[18, "areca"], [19, "areca"], [20, "areca"], [22, "areca"], [33, "areca"], [34, "areca"], [36, "areca"], [41, "areca"], [42, "areca"]], "MPT2SAS": [[18, "mpt2sas"], [19, "mpt2sas"], [20, "mpt2sas"], [22, "mpt2sas"], [33, "mpt2sas"], [34, "mpt2sas"], [36, "mpt2sas"], [41, "mpt2sas"], [42, "mpt2sas"]], "QEMU/KVM/XEN": [[18, "qemu-kvm-xen"], [19, "qemu-kvm-xen"], [20, "qemu-kvm-xen"], [22, "qemu-kvm-xen"], [33, "qemu-kvm-xen"], [34, "qemu-kvm-xen"], [36, "qemu-kvm-xen"], [41, "qemu-kvm-xen"], [42, "qemu-kvm-xen"]], "VMware": [[18, "vmware"], [19, "vmware"], [20, "vmware"], [22, "vmware"], [33, "vmware"], [34, "vmware"], [36, "vmware"], [41, "vmware"], [42, "vmware"]], "Debian Bullseye Root on ZFS": [[19, "debian-bullseye-root-on-zfs"]], "Newer release available": [[19, "newer-release-available"], [20, "newer-release-available"], [22, "newer-release-available"], [33, "newer-release-available"], [34, "newer-release-available"], [35, "newer-release-available"]], "Debian Buster Root on ZFS": [[20, "debian-buster-root-on-zfs"]], "Debian GNU Linux initrd documentation": [[21, "debian-gnu-linux-initrd-documentation"]], "Supported boot parameters": [[21, "supported-boot-parameters"]], "Pool imports": [[21, "pool-imports"]], "Import using /dev/disk/by-*": [[21, "import-using-dev-disk-by"]], "Import using cache file": [[21, "import-using-cache-file"]], "Last ditch attempt at importing": [[21, "last-ditch-attempt-at-importing"]], "Booting": [[21, "booting"]], "Booting from snapshot:": [[21, "booting-from-snapshot"]], "Snapshot rollback": [[21, "snapshot-rollback"]], "Select snapshot dynamically": [[21, "select-snapshot-dynamically"]], "Booting from native encrypted filesystem": [[21, "booting-from-native-encrypted-filesystem"]], "Separated filesystems": [[21, "separated-filesystems"]], "Descended filesystems": [[21, "descended-filesystems"]], "Debian Stretch Root on ZFS": [[22, "debian-stretch-root-on-zfs"]], "Step 7: (Optional) Configure Swap": [[22, "step-7-optional-configure-swap"], [33, "step-7-optional-configure-swap"]], "Debian": [[23, "debian"]], "Related topics": [[23, "related-topics"]], "Fedora": [[24, "fedora"], [26, "fedora"]], "Fedora Root on ZFS": [[25, "fedora-root-on-zfs"]], "Post installaion": [[25, "post-installaion"], [31, "post-installaion"]], "Testing Repo": [[26, "testing-repo"]], "FreeBSD": [[27, "freebsd"]], "Installation on FreeBSD": [[27, "installation-on-freebsd"]], "Development on FreeBSD": [[27, "development-on-freebsd"]], "NixOS Root on ZFS": [[28, "nixos-root-on-zfs"]], "NixOS": [[29, "nixos"]], "RHEL and CentOS": [[30, "rhel-and-centos"]], "Rocky Linux Root on ZFS": [[31, "rocky-linux-root-on-zfs"]], "RHEL-based distro": [[32, "rhel-based-distro"]], "Previous minor EL releases": [[32, "previous-minor-el-releases"]], "Testing Repositories": [[32, "testing-repositories"]], "Ubuntu 18.04 Root on ZFS": [[33, "ubuntu-18-04-root-on-zfs"]], "Ubuntu 20.04 Root on ZFS": [[34, "ubuntu-20-04-root-on-zfs"]], "Errata": [[34, "errata"]], "/boot/grub Not Mounted": [[34, "boot-grub-not-mounted"]], "AccountsService Not Mounted": [[34, "accountsservice-not-mounted"]], "Ubuntu Installer": [[34, "ubuntu-installer"], [36, "ubuntu-installer"]], "Raspberry Pi": [[34, "raspberry-pi"], [36, "raspberry-pi"]], "Step 7: Full Software Installation": [[34, "step-7-full-software-installation"], [36, "step-7-full-software-installation"]], "Step 8: Final Cleanup": 
[[34, "step-8-final-cleanup"], [36, "step-8-final-cleanup"]], "Ubuntu 20.04 Root on ZFS for Raspberry Pi": [[35, "ubuntu-20-04-root-on-zfs-for-raspberry-pi"]], "USB Disks": [[35, "usb-disks"], [37, "usb-disks"]], "Step 1: Disk Formatting": [[35, "step-1-disk-formatting"], [37, "step-1-disk-formatting"]], "Step 2: Setup ZFS": [[35, "step-2-setup-zfs"], [37, "step-2-setup-zfs"]], "Step 5: First Boot": [[35, "step-5-first-boot"], [37, "step-5-first-boot"]], "Step 6: Full Software Installation": [[35, "step-6-full-software-installation"], [37, "step-6-full-software-installation"]], "Step 7: Final Cleanup": [[35, "step-7-final-cleanup"], [37, "step-7-final-cleanup"]], "Ubuntu 22.04 Root on ZFS": [[36, "ubuntu-22-04-root-on-zfs"]], "Ubuntu 22.04 Root on ZFS for Raspberry Pi": [[37, "ubuntu-22-04-root-on-zfs-for-raspberry-pi"]], "Ubuntu": [[38, "ubuntu"]], "Getting Started": [[39, "getting-started"]], "openSUSE": [[40, "opensuse"]], "External Links": [[40, "external-links"], [41, "external-links"], [42, "external-links"]], "openSUSE Leap Root on ZFS": [[41, "opensuse-leap-root-on-zfs"]], "Notes": [[41, "notes"]], "Step 4. Install System": [[41, "step-4-install-system"], [42, "step-4-install-system"]], "Step 5: System Configuration": [[41, "step-5-system-configuration"], [42, "step-5-system-configuration"]], "Step 6: Kernel Installation": [[41, "step-6-kernel-installation"], [42, "step-6-kernel-installation"]], "Step 7: Grub2 Installation": [[41, "step-7-grub2-installation"], [42, "step-7-grub2-installation"]], "Step 8: Systemd-Boot Installation": [[41, "step-8-systemd-boot-installation"], [42, "step-8-systemd-boot-installation"]], "Step 9: Filesystem Configuration": [[41, "step-9-filesystem-configuration"], [42, "step-9-filesystem-configuration"]], "Step 10: First Boot": [[41, "step-10-first-boot"], [42, "step-10-first-boot"]], "Step 11: Optional: Configure Swap": [[41, "step-11-optional-configure-swap"], [42, "step-11-optional-configure-swap"]], "Step 12: Final Cleanup": [[41, "step-12-final-cleanup"], [42, "step-12-final-cleanup"]], "openSUSE Tumbleweed Root on ZFS": [[42, "opensuse-tumbleweed-root-on-zfs"]], "Root on ZFS maintenance": [[43, "root-on-zfs-maintenance"]], "Boot Environment": [[43, "boot-environment"]], "Disk replacement": [[43, "disk-replacement"]], "Bootloader Recovery": [[43, "bootloader-recovery"]], "License": [[44, "license"]], "Async Writes": [[45, "async-writes"]], "Hardware": [[46, "hardware"]], "BIOS / CPU microcode updates": [[46, "bios-cpu-microcode-updates"]], "Background": [[46, "background"], [46, "background-1"], [46, "background-2"], [46, "background-3"]], "ECC Memory": [[46, "ecc-memory"]], "Drive Interfaces": [[46, "drive-interfaces"]], "SAS versus SATA": [[46, "sas-versus-sata"]], "USB Hard Drives and/or Adapters": [[46, "usb-hard-drives-and-or-adapters"]], "Controllers": [[46, "controllers"]], "Hardware RAID controllers": [[46, "hardware-raid-controllers"]], "Hard drives": [[46, "hard-drives"]], "Sector Size": [[46, "sector-size"]], "Error recovery control": [[46, "error-recovery-control"]], "RPM Speeds": [[46, "rpm-speeds"]], "Command Queuing": [[46, "command-queuing"]], "NAND Flash SSDs": [[46, "nand-flash-ssds"]], "NVMe low level formatting": [[46, "nvme-low-level-formatting"], [48, "nvme-low-level-formatting"]], "Power Failure Protection": [[46, "power-failure-protection"]], "NVMe drives with power failure protection": [[46, "nvme-drives-with-power-failure-protection"]], "SAS drives with power failure protection": [[46, 
"sas-drives-with-power-failure-protection"]], "SATA drives with power failure protection": [[46, "sata-drives-with-power-failure-protection"]], "Criteria/process for inclusion into these lists": [[46, "criteria-process-for-inclusion-into-these-lists"]], "Flash pages": [[46, "flash-pages"]], "ATA TRIM / SCSI UNMAP": [[46, "ata-trim-scsi-unmap"]], "ATA TRIM Performance Issues": [[46, "ata-trim-performance-issues"]], "Optane / 3D XPoint SSDs": [[46, "optane-3d-xpoint-ssds"]], "Power": [[46, "power"]], "PWR_OK signal": [[46, "pwr-ok-signal"]], "PSU Hold-up Times": [[46, "psu-hold-up-times"]], "UPS batteries": [[46, "ups-batteries"]], "Module Parameters": [[47, "module-parameters"], [47, "zfs-module-parameters-1"]], "Manual Pages": [[47, "manual-pages"]], "ZFS Module Parameters": [[47, "zfs-module-parameters"]], "Tags": [[47, "tags"]], "ABD": [[47, "abd"]], "allocation": [[47, "allocation"]], "ARC": [[47, "arc"]], "channel_programs": [[47, "channel-programs"]], "checkpoint": [[47, "checkpoint"]], "checksum": [[47, "checksum"]], "compression": [[47, "compression"]], "CPU": [[47, "cpu"]], "dataset": [[47, "dataset"]], "dbuf_cache": [[47, "dbuf-cache"]], "debug": [[47, "debug"]], "dedup": [[47, "dedup"]], "delay": [[47, "delay"]], "delete": [[47, "delete"]], "discard": [[47, "discard"]], "disks": [[47, "disks"]], "DMU": [[47, "dmu"]], "encryption": [[47, "encryption"]], "filesystem": [[47, "filesystem"]], "fragmentation": [[47, "fragmentation"]], "HDD": [[47, "hdd"]], "hostid": [[47, "hostid"]], "import": [[47, "import"]], "L2ARC": [[47, "l2arc"]], "memory": [[47, "memory"]], "metadata": [[47, "metadata"]], "metaslab": [[47, "metaslab"]], "mirror": [[47, "mirror"]], "MMP": [[47, "mmp"]], "panic": [[47, "panic"]], "prefetch": [[47, "prefetch"]], "QAT": [[47, "qat"]], "raidz": [[47, "raidz"]], "receive": [[47, "receive"]], "remove": [[47, "remove"]], "resilver": [[47, "resilver"]], "scrub": [[47, "scrub"]], "send": [[47, "send"]], "snapshot": [[47, "snapshot"]], "SPA": [[47, "spa"]], "special_vdev": [[47, "special-vdev"]], "SSD": [[47, "ssd"]], "taskq": [[47, "taskq"]], "trim": [[47, "trim"]], "vdev": [[47, "vdev"]], "vdev_cache": [[47, "vdev-cache"]], "vdev_initialize": [[47, "vdev-initialize"]], "vdev_removal": [[47, "vdev-removal"]], "volume": [[47, "volume"]], "write_throttle": [[47, "write-throttle"]], "zed": [[47, "zed"]], "ZIL": [[47, "zil"]], "ZIO_scheduler": [[47, "zio-scheduler"]], "Index": [[47, "index"]], "ignore_hole_birth": [[47, "ignore-hole-birth"]], "l2arc_exclude_special": [[47, "l2arc-exclude-special"]], "l2arc_feed_again": [[47, "l2arc-feed-again"]], "l2arc_feed_min_ms": [[47, "l2arc-feed-min-ms"]], "l2arc_feed_secs": [[47, "l2arc-feed-secs"]], "l2arc_headroom": [[47, "l2arc-headroom"]], "l2arc_headroom_boost": [[47, "l2arc-headroom-boost"]], "l2arc_nocompress": [[47, "l2arc-nocompress"]], "l2arc_meta_percent": [[47, "l2arc-meta-percent"]], "l2arc_mfuonly": [[47, "l2arc-mfuonly"]], "l2arc_noprefetch": [[47, "l2arc-noprefetch"]], "l2arc_norw": [[47, "l2arc-norw"]], "l2arc_rebuild_blocks_min_l2size": [[47, "l2arc-rebuild-blocks-min-l2size"]], "l2arc_rebuild_enabled": [[47, "l2arc-rebuild-enabled"]], "l2arc_trim_ahead": [[47, "l2arc-trim-ahead"]], "l2arc_write_boost": [[47, "l2arc-write-boost"]], "l2arc_write_max": [[47, "l2arc-write-max"]], "metaslab_aliquot": [[47, "metaslab-aliquot"]], "metaslab_bias_enabled": [[47, "metaslab-bias-enabled"]], "zfs_metaslab_segment_weight_enabled": [[47, "zfs-metaslab-segment-weight-enabled"]], "zfs_metaslab_switch_threshold": [[47, 
"zfs-metaslab-switch-threshold"]], "metaslab_debug_load": [[47, "metaslab-debug-load"]], "metaslab_debug_unload": [[47, "metaslab-debug-unload"]], "metaslab_fragmentation_factor_enabled": [[47, "metaslab-fragmentation-factor-enabled"]], "metaslabs_per_vdev": [[47, "metaslabs-per-vdev"]], "metaslab_preload_enabled": [[47, "metaslab-preload-enabled"]], "metaslab_lba_weighting_enabled": [[47, "metaslab-lba-weighting-enabled"]], "spa_config_path": [[47, "spa-config-path"]], "spa_asize_inflation": [[47, "spa-asize-inflation"]], "spa_load_verify_data": [[47, "spa-load-verify-data"]], "spa_load_verify_metadata": [[47, "spa-load-verify-metadata"]], "spa_load_verify_maxinflight": [[47, "spa-load-verify-maxinflight"]], "spa_slop_shift": [[47, "spa-slop-shift"]], "zfetch_array_rd_sz": [[47, "zfetch-array-rd-sz"]], "zfetch_max_distance": [[47, "zfetch-max-distance"]], "zfetch_max_streams": [[47, "zfetch-max-streams"]], "zfetch_min_sec_reap": [[47, "zfetch-min-sec-reap"]], "zfs_arc_dnode_limit_percent": [[47, "zfs-arc-dnode-limit-percent"]], "zfs_arc_dnode_limit": [[47, "zfs-arc-dnode-limit"]], "zfs_arc_dnode_reduce_percent": [[47, "zfs-arc-dnode-reduce-percent"]], "zfs_arc_average_blocksize": [[47, "zfs-arc-average-blocksize"]], "zfs_arc_evict_batch_limit": [[47, "zfs-arc-evict-batch-limit"]], "zfs_arc_grow_retry": [[47, "zfs-arc-grow-retry"]], "zfs_arc_lotsfree_percent": [[47, "zfs-arc-lotsfree-percent"]], "zfs_arc_max": [[47, "zfs-arc-max"]], "zfs_arc_meta_adjust_restarts": [[47, "zfs-arc-meta-adjust-restarts"]], "zfs_arc_meta_limit": [[47, "zfs-arc-meta-limit"]], "zfs_arc_meta_limit_percent": [[47, "zfs-arc-meta-limit-percent"]], "zfs_arc_meta_min": [[47, "zfs-arc-meta-min"]], "zfs_arc_meta_prune": [[47, "zfs-arc-meta-prune"]], "zfs_arc_meta_strategy": [[47, "zfs-arc-meta-strategy"]], "zfs_arc_min": [[47, "zfs-arc-min"]], "zfs_arc_min_prefetch_ms": [[47, "zfs-arc-min-prefetch-ms"]], "zfs_arc_min_prescient_prefetch_ms": [[47, "zfs-arc-min-prescient-prefetch-ms"]], "zfs_multilist_num_sublists": [[47, "zfs-multilist-num-sublists"]], "zfs_arc_overflow_shift": [[47, "zfs-arc-overflow-shift"]], "zfs_arc_p_min_shift": [[47, "zfs-arc-p-min-shift"]], "zfs_arc_p_dampener_disable": [[47, "zfs-arc-p-dampener-disable"]], "zfs_arc_shrink_shift": [[47, "zfs-arc-shrink-shift"]], "zfs_arc_pc_percent": [[47, "zfs-arc-pc-percent"]], "zfs_arc_sys_free": [[47, "zfs-arc-sys-free"]], "zfs_autoimport_disable": [[47, "zfs-autoimport-disable"]], "zfs_commit_timeout_pct": [[47, "zfs-commit-timeout-pct"]], "zfs_dbgmsg_enable": [[47, "zfs-dbgmsg-enable"]], "zfs_dbgmsg_maxsize": [[47, "zfs-dbgmsg-maxsize"]], "zfs_dbuf_state_index": [[47, "zfs-dbuf-state-index"]], "zfs_deadman_enabled": [[47, "zfs-deadman-enabled"]], "zfs_deadman_checktime_ms": [[47, "zfs-deadman-checktime-ms"]], "zfs_deadman_ziotime_ms": [[47, "zfs-deadman-ziotime-ms"]], "zfs_deadman_synctime_ms": [[47, "zfs-deadman-synctime-ms"]], "zfs_deadman_failmode": [[47, "zfs-deadman-failmode"]], "zfs_dedup_prefetch": [[47, "zfs-dedup-prefetch"]], "zfs_delete_blocks": [[47, "zfs-delete-blocks"]], "zfs_delay_min_dirty_percent": [[47, "zfs-delay-min-dirty-percent"]], "zfs_delay_scale": [[47, "zfs-delay-scale"]], "zfs_dirty_data_max": [[47, "zfs-dirty-data-max"]], "zfs_dirty_data_max_percent": [[47, "zfs-dirty-data-max-percent"]], "zfs_dirty_data_max_max": [[47, "zfs-dirty-data-max-max"]], "zfs_dirty_data_max_max_percent": [[47, "zfs-dirty-data-max-max-percent"]], "zfs_dirty_data_sync": [[47, "zfs-dirty-data-sync"]], "zfs_dirty_data_sync_percent": [[47, 
"zfs-dirty-data-sync-percent"]], "zfs_fletcher_4_impl": [[47, "zfs-fletcher-4-impl"]], "zfs_free_bpobj_enabled": [[47, "zfs-free-bpobj-enabled"]], "zfs_free_max_blocks": [[47, "zfs-free-max-blocks"]], "zfs_vdev_async_read_max_active": [[47, "zfs-vdev-async-read-max-active"]], "zfs_vdev_async_read_min_active": [[47, "zfs-vdev-async-read-min-active"]], "zfs_vdev_async_write_active_max_dirty_percent": [[47, "zfs-vdev-async-write-active-max-dirty-percent"]], "zfs_vdev_async_write_active_min_dirty_percent": [[47, "zfs-vdev-async-write-active-min-dirty-percent"]], "zfs_vdev_async_write_max_active": [[47, "zfs-vdev-async-write-max-active"]], "zfs_vdev_async_write_min_active": [[47, "zfs-vdev-async-write-min-active"]], "zfs_vdev_max_active": [[47, "zfs-vdev-max-active"]], "zfs_vdev_scrub_max_active": [[47, "zfs-vdev-scrub-max-active"]], "zfs_vdev_scrub_min_active": [[47, "zfs-vdev-scrub-min-active"]], "zfs_vdev_sync_read_max_active": [[47, "zfs-vdev-sync-read-max-active"]], "zfs_vdev_sync_read_min_active": [[47, "zfs-vdev-sync-read-min-active"]], "zfs_vdev_sync_write_max_active": [[47, "zfs-vdev-sync-write-max-active"]], "zfs_vdev_sync_write_min_active": [[47, "zfs-vdev-sync-write-min-active"]], "zfs_vdev_queue_depth_pct": [[47, "zfs-vdev-queue-depth-pct"]], "zfs_disable_dup_eviction": [[47, "zfs-disable-dup-eviction"]], "zfs_expire_snapshot": [[47, "zfs-expire-snapshot"]], "zfs_admin_snapshot": [[47, "zfs-admin-snapshot"]], "zfs_flags": [[47, "zfs-flags"]], "zfs_free_leak_on_eio": [[47, "zfs-free-leak-on-eio"]], "zfs_free_min_time_ms": [[47, "zfs-free-min-time-ms"]], "zfs_immediate_write_sz": [[47, "zfs-immediate-write-sz"]], "zfs_max_recordsize": [[47, "zfs-max-recordsize"]], "zfs_mdcomp_disable": [[47, "zfs-mdcomp-disable"]], "zfs_metaslab_fragmentation_threshold": [[47, "zfs-metaslab-fragmentation-threshold"]], "zfs_mg_fragmentation_threshold": [[47, "zfs-mg-fragmentation-threshold"]], "zfs_mg_noalloc_threshold": [[47, "zfs-mg-noalloc-threshold"]], "zfs_multihost_history": [[47, "zfs-multihost-history"]], "zfs_multihost_interval": [[47, "zfs-multihost-interval"]], "zfs_multihost_import_intervals": [[47, "zfs-multihost-import-intervals"]], "zfs_multihost_fail_intervals": [[47, "zfs-multihost-fail-intervals"]], "zfs_delays_per_second": [[47, "zfs-delays-per-second"]], "zfs_checksums_per_second": [[47, "zfs-checksums-per-second"]], "zfs_no_scrub_io": [[47, "zfs-no-scrub-io"]], "zfs_no_scrub_prefetch": [[47, "zfs-no-scrub-prefetch"]], "zfs_nocacheflush": [[47, "zfs-nocacheflush"]], "zfs_nopwrite_enabled": [[47, "zfs-nopwrite-enabled"]], "zfs_dmu_offset_next_sync": [[47, "zfs-dmu-offset-next-sync"]], "zfs_pd_bytes_max": [[47, "zfs-pd-bytes-max"]], "zfs_per_txg_dirty_frees_percent": [[47, "zfs-per-txg-dirty-frees-percent"]], "zfs_prefetch_disable": [[47, "zfs-prefetch-disable"]], "zfs_read_chunk_size": [[47, "zfs-read-chunk-size"]], "zfs_read_history": [[47, "zfs-read-history"]], "zfs_read_history_hits": [[47, "zfs-read-history-hits"]], "zfs_recover": [[47, "zfs-recover"]], "zfs_resilver_min_time_ms": [[47, "zfs-resilver-min-time-ms"]], "zfs_scan_min_time_ms": [[47, "zfs-scan-min-time-ms"]], "zfs_scan_checkpoint_intval": [[47, "zfs-scan-checkpoint-intval"]], "zfs_scan_fill_weight": [[47, "zfs-scan-fill-weight"]], "zfs_scan_issue_strategy": [[47, "zfs-scan-issue-strategy"]], "zfs_scan_legacy": [[47, "zfs-scan-legacy"]], "zfs_scan_max_ext_gap": [[47, "zfs-scan-max-ext-gap"]], "zfs_scan_mem_lim_fact": [[47, "zfs-scan-mem-lim-fact"]], "zfs_scan_mem_lim_soft_fact": [[47, "zfs-scan-mem-lim-soft-fact"]], 
"zfs_scan_vdev_limit": [[47, "zfs-scan-vdev-limit"]], "zfs_send_corrupt_data": [[47, "zfs-send-corrupt-data"]], "zfs_sync_pass_deferred_free": [[47, "zfs-sync-pass-deferred-free"]], "zfs_sync_pass_dont_compress": [[47, "zfs-sync-pass-dont-compress"]], "zfs_sync_pass_rewrite": [[47, "zfs-sync-pass-rewrite"]], "zfs_sync_taskq_batch_pct": [[47, "zfs-sync-taskq-batch-pct"]], "zfs_txg_history": [[47, "zfs-txg-history"]], "zfs_txg_timeout": [[47, "zfs-txg-timeout"]], "zfs_vdev_aggregation_limit": [[47, "zfs-vdev-aggregation-limit"]], "zfs_vdev_cache_size": [[47, "zfs-vdev-cache-size"]], "zfs_vdev_cache_bshift": [[47, "zfs-vdev-cache-bshift"]], "zfs_vdev_cache_max": [[47, "zfs-vdev-cache-max"]], "zfs_vdev_mirror_rotating_inc": [[47, "zfs-vdev-mirror-rotating-inc"]], "zfs_vdev_mirror_non_rotating_inc": [[47, "zfs-vdev-mirror-non-rotating-inc"]], "zfs_vdev_mirror_rotating_seek_inc": [[47, "zfs-vdev-mirror-rotating-seek-inc"]], "zfs_vdev_mirror_rotating_seek_offset": [[47, "zfs-vdev-mirror-rotating-seek-offset"]], "zfs_vdev_mirror_non_rotating_seek_inc": [[47, "zfs-vdev-mirror-non-rotating-seek-inc"]], "zfs_vdev_read_gap_limit": [[47, "zfs-vdev-read-gap-limit"]], "zfs_vdev_write_gap_limit": [[47, "zfs-vdev-write-gap-limit"]], "zfs_vdev_scheduler": [[47, "zfs-vdev-scheduler"]], "zfs_vdev_raidz_impl": [[47, "zfs-vdev-raidz-impl"]], "zfs_zevent_cols": [[47, "zfs-zevent-cols"]], "zfs_zevent_console": [[47, "zfs-zevent-console"]], "zfs_zevent_len_max": [[47, "zfs-zevent-len-max"]], "zfs_zil_clean_taskq_maxalloc": [[47, "zfs-zil-clean-taskq-maxalloc"]], "zfs_zil_clean_taskq_minalloc": [[47, "zfs-zil-clean-taskq-minalloc"]], "zfs_zil_clean_taskq_nthr_pct": [[47, "zfs-zil-clean-taskq-nthr-pct"]], "zil_replay_disable": [[47, "zil-replay-disable"]], "zil_slog_bulk": [[47, "zil-slog-bulk"]], "zio_delay_max": [[47, "zio-delay-max"]], "zio_dva_throttle_enabled": [[47, "zio-dva-throttle-enabled"]], "zio_requeue_io_start_cut_in_line": [[47, "zio-requeue-io-start-cut-in-line"]], "zio_taskq_batch_pct": [[47, "zio-taskq-batch-pct"]], "zvol_inhibit_dev": [[47, "zvol-inhibit-dev"]], "zvol_major": [[47, "zvol-major"]], "zvol_max_discard_blocks": [[47, "zvol-max-discard-blocks"]], "zvol_prefetch_bytes": [[47, "zvol-prefetch-bytes"]], "zvol_request_sync": [[47, "zvol-request-sync"]], "zvol_threads": [[47, "zvol-threads"]], "zvol_volmode": [[47, "zvol-volmode"]], "zfs_qat_disable": [[47, "zfs-qat-disable"]], "zfs_qat_checksum_disable": [[47, "zfs-qat-checksum-disable"]], "zfs_qat_compress_disable": [[47, "zfs-qat-compress-disable"]], "zfs_qat_encrypt_disable": [[47, "zfs-qat-encrypt-disable"]], "dbuf_cache_hiwater_pct": [[47, "dbuf-cache-hiwater-pct"]], "dbuf_cache_lowater_pct": [[47, "dbuf-cache-lowater-pct"]], "dbuf_cache_max_bytes": [[47, "dbuf-cache-max-bytes"], [47, "dbuf-cache-max-bytes-1"]], "dbuf_cache_max_shift": [[47, "dbuf-cache-max-shift"]], "dmu_object_alloc_chunk_shift": [[47, "dmu-object-alloc-chunk-shift"]], "send_holes_without_birth_time": [[47, "send-holes-without-birth-time"]], "zfs_abd_scatter_enabled": [[47, "zfs-abd-scatter-enabled"]], "zfs_abd_scatter_max_order": [[47, "zfs-abd-scatter-max-order"]], "zfs_compressed_arc_enabled": [[47, "zfs-compressed-arc-enabled"]], "zfs_key_max_salt_uses": [[47, "zfs-key-max-salt-uses"]], "zfs_object_mutex_size": [[47, "zfs-object-mutex-size"]], "zfs_scan_strict_mem_lim": [[47, "zfs-scan-strict-mem-lim"]], "zfs_send_queue_length": [[47, "zfs-send-queue-length"]], "zfs_recv_queue_length": [[47, "zfs-recv-queue-length"]], "zfs_arc_min_prefetch_lifespan": [[47, 
"zfs-arc-min-prefetch-lifespan"]], "zfs_scan_ignore_errors": [[47, "zfs-scan-ignore-errors"]], "zfs_top_maxinflight": [[47, "zfs-top-maxinflight"]], "zfs_resilver_delay": [[47, "zfs-resilver-delay"]], "zfs_scrub_delay": [[47, "zfs-scrub-delay"]], "zfs_scan_idle": [[47, "zfs-scan-idle"]], "icp_aes_impl": [[47, "icp-aes-impl"]], "icp_gcm_impl": [[47, "icp-gcm-impl"]], "zfs_abd_scatter_min_size": [[47, "zfs-abd-scatter-min-size"]], "zfs_unlink_suspend_progress": [[47, "zfs-unlink-suspend-progress"]], "spa_load_verify_shift": [[47, "spa-load-verify-shift"]], "spa_load_print_vdev_tree": [[47, "spa-load-print-vdev-tree"]], "zfs_max_missing_tvds": [[47, "zfs-max-missing-tvds"]], "dbuf_metadata_cache_shift": [[47, "dbuf-metadata-cache-shift"]], "dbuf_metadata_cache_max_bytes": [[47, "dbuf-metadata-cache-max-bytes"]], "dbuf_cache_shift": [[47, "dbuf-cache-shift"]], "metaslab_force_ganging": [[47, "metaslab-force-ganging"]], "zfs_vdev_default_ms_count": [[47, "zfs-vdev-default-ms-count"]], "vdev_removal_max_span": [[47, "vdev-removal-max-span"]], "zfs_removal_ignore_errors": [[47, "zfs-removal-ignore-errors"]], "zfs_removal_suspend_progress": [[47, "zfs-removal-suspend-progress"]], "zfs_condense_indirect_commit_entry_delay_ms": [[47, "zfs-condense-indirect-commit-entry-delay-ms"]], "zfs_condense_indirect_vdevs_enable": [[47, "zfs-condense-indirect-vdevs-enable"]], "zfs_condense_max_obsolete_bytes": [[47, "zfs-condense-max-obsolete-bytes"]], "zfs_condense_min_mapping_bytes": [[47, "zfs-condense-min-mapping-bytes"]], "zfs_vdev_initializing_max_active": [[47, "zfs-vdev-initializing-max-active"]], "zfs_vdev_initializing_min_active": [[47, "zfs-vdev-initializing-min-active"]], "zfs_vdev_removal_max_active": [[47, "zfs-vdev-removal-max-active"]], "zfs_vdev_removal_min_active": [[47, "zfs-vdev-removal-min-active"]], "zfs_vdev_trim_max_active": [[47, "zfs-vdev-trim-max-active"]], "zfs_vdev_trim_min_active": [[47, "zfs-vdev-trim-min-active"]], "zfs_initialize_value": [[47, "zfs-initialize-value"]], "zfs_lua_max_instrlimit": [[47, "zfs-lua-max-instrlimit"]], "zfs_lua_max_memlimit": [[47, "zfs-lua-max-memlimit"]], "zfs_max_dataset_nesting": [[47, "zfs-max-dataset-nesting"]], "zfs_ddt_data_is_special": [[47, "zfs-ddt-data-is-special"]], "zfs_user_indirect_is_special": [[47, "zfs-user-indirect-is-special"]], "zfs_reconstruct_indirect_combinations_max": [[47, "zfs-reconstruct-indirect-combinations-max"]], "zfs_send_unmodified_spill_blocks": [[47, "zfs-send-unmodified-spill-blocks"]], "zfs_spa_discard_memory_limit": [[47, "zfs-spa-discard-memory-limit"]], "zfs_special_class_metadata_reserve_pct": [[47, "zfs-special-class-metadata-reserve-pct"]], "zfs_trim_extent_bytes_max": [[47, "zfs-trim-extent-bytes-max"]], "zfs_trim_extent_bytes_min": [[47, "zfs-trim-extent-bytes-min"]], "zfs_trim_metaslab_skip": [[47, "zfs-trim-metaslab-skip"]], "zfs_trim_queue_limit": [[47, "zfs-trim-queue-limit"]], "zfs_trim_txg_batch": [[47, "zfs-trim-txg-batch"]], "zfs_vdev_aggregate_trim": [[47, "zfs-vdev-aggregate-trim"]], "zfs_vdev_aggregation_limit_non_rotating": [[47, "zfs-vdev-aggregation-limit-non-rotating"]], "zil_nocacheflush": [[47, "zil-nocacheflush"]], "zio_deadman_log_all": [[47, "zio-deadman-log-all"]], "zio_decompress_fail_fraction": [[47, "zio-decompress-fail-fraction"]], "zio_slow_io_ms": [[47, "zio-slow-io-ms"]], "vdev_validate_skip": [[47, "vdev-validate-skip"]], "zfs_async_block_max_blocks": [[47, "zfs-async-block-max-blocks"]], "zfs_checksum_events_per_second": [[47, "zfs-checksum-events-per-second"]], 
"zfs_disable_ivset_guid_check": [[47, "zfs-disable-ivset-guid-check"]], "zfs_obsolete_min_time_ms": [[47, "zfs-obsolete-min-time-ms"]], "zfs_override_estimate_recordsize": [[47, "zfs-override-estimate-recordsize"]], "zfs_remove_max_segment": [[47, "zfs-remove-max-segment"]], "zfs_resilver_disable_defer": [[47, "zfs-resilver-disable-defer"]], "zfs_scan_suspend_progress": [[47, "zfs-scan-suspend-progress"]], "zfs_scrub_min_time_ms": [[47, "zfs-scrub-min-time-ms"]], "zfs_slow_io_events_per_second": [[47, "zfs-slow-io-events-per-second"]], "zfs_vdev_min_ms_count": [[47, "zfs-vdev-min-ms-count"]], "zfs_vdev_ms_count_limit": [[47, "zfs-vdev-ms-count-limit"]], "spl_hostid": [[47, "spl-hostid"]], "spl_hostid_path": [[47, "spl-hostid-path"]], "spl_kmem_alloc_max": [[47, "spl-kmem-alloc-max"]], "spl_kmem_alloc_warn": [[47, "spl-kmem-alloc-warn"]], "spl_kmem_cache_expire": [[47, "spl-kmem-cache-expire"]], "spl_kmem_cache_kmem_limit": [[47, "spl-kmem-cache-kmem-limit"]], "spl_kmem_cache_max_size": [[47, "spl-kmem-cache-max-size"]], "spl_kmem_cache_obj_per_slab": [[47, "spl-kmem-cache-obj-per-slab"]], "spl_kmem_cache_obj_per_slab_min": [[47, "spl-kmem-cache-obj-per-slab-min"]], "spl_kmem_cache_reclaim": [[47, "spl-kmem-cache-reclaim"]], "spl_kmem_cache_slab_limit": [[47, "spl-kmem-cache-slab-limit"]], "spl_max_show_tasks": [[47, "spl-max-show-tasks"]], "spl_panic_halt": [[47, "spl-panic-halt"]], "spl_taskq_kick": [[47, "spl-taskq-kick"]], "spl_taskq_thread_bind": [[47, "spl-taskq-thread-bind"]], "spl_taskq_thread_dynamic": [[47, "spl-taskq-thread-dynamic"]], "spl_taskq_thread_priority": [[47, "spl-taskq-thread-priority"]], "spl_taskq_thread_sequential": [[47, "spl-taskq-thread-sequential"]], "spl_kmem_cache_kmem_threads": [[47, "spl-kmem-cache-kmem-threads"]], "spl_kmem_cache_magazine_size": [[47, "spl-kmem-cache-magazine-size"]], "Workload Tuning": [[48, "workload-tuning"]], "Basic concepts": [[48, "basic-concepts"]], "Adaptive Replacement Cache": [[48, "adaptive-replacement-cache"]], "Alignment Shift (ashift)": [[48, "alignment-shift-ashift"]], "Compression": [[48, "compression"]], "RAID-Z stripe width": [[48, "raid-z-stripe-width"]], "Dataset recordsize": [[48, "dataset-recordsize"]], "Larger record sizes": [[48, "larger-record-sizes"]], "zvol volblocksize": [[48, "zvol-volblocksize"]], "Deduplication": [[48, "deduplication"]], "Metaslab Allocator": [[48, "metaslab-allocator"]], "Pool Geometry": [[48, "pool-geometry"], [48, "pool-geometry-1"]], "Whole Disks versus Partitions": [[48, "whole-disks-versus-partitions"]], "OS/distro-specific recommendations": [[48, "os-distro-specific-recommendations"]], "Linux": [[48, "linux"]], "init_on_alloc": [[48, "init-on-alloc"]], "General recommendations": [[48, "general-recommendations"]], "Alignment shift": [[48, "alignment-shift"]], "Atime Updates": [[48, "atime-updates"]], "Free Space": [[48, "free-space"]], "LZ4 compression": [[48, "lz4-compression"]], "Synchronous I/O": [[48, "synchronous-i-o"]], "Overprovisioning by secure erase and partition table trick": [[48, "overprovisioning-by-secure-erase-and-partition-table-trick"]], "NVMe overprovisioning": [[48, "nvme-overprovisioning"]], "Whole disks": [[48, "whole-disks"]], "Bit Torrent": [[48, "bit-torrent"]], "Database workloads": [[48, "database-workloads"]], "MySQL": [[48, "mysql"]], "InnoDB": [[48, "innodb"]], "PostgreSQL": [[48, "postgresql"]], "SQLite": [[48, "sqlite"]], "File servers": [[48, "file-servers"]], "Samba": [[48, "samba"]], "Sequential workloads": [[48, "sequential-workloads"]], "Video games 
directories": [[48, "video-games-directories"]], "Lutris": [[48, "lutris"]], "Steam": [[48, "steam"]], "Wine": [[48, "wine"]], "Virtual machines": [[48, "virtual-machines"]], "QEMU / KVM / Xen": [[48, "qemu-kvm-xen"]], "ZFS Transaction Delay": [[49, "zfs-transaction-delay"]], "ZFS I/O (ZIO) Scheduler": [[50, "zfs-i-o-zio-scheduler"]], "Performance and Tuning": [[51, "performance-and-tuning"]], "Admin Documentation": [[52, "admin-documentation"]], "FAQ": [[53, "faq"], [54, "faq"]], "What is OpenZFS": [[53, "what-is-openzfs"]], "Hardware Requirements": [[53, "hardware-requirements"]], "Do I have to use ECC memory for ZFS?": [[53, "do-i-have-to-use-ecc-memory-for-zfs"]], "Supported Architectures": [[53, "supported-architectures"]], "Supported Linux Kernels": [[53, "supported-linux-kernels"]], "32-bit vs 64-bit Systems": [[53, "bit-vs-64-bit-systems"]], "Booting from ZFS": [[53, "booting-from-zfs"]], "Selecting /dev/ names when creating a pool (Linux)": [[53, "selecting-dev-names-when-creating-a-pool-linux"]], "Setting up the /etc/zfs/vdev_id.conf file": [[53, "setting-up-the-etc-zfs-vdev-id-conf-file"]], "Changing /dev/ names on an existing pool": [[53, "changing-dev-names-on-an-existing-pool"]], "The /etc/zfs/zpool.cache file": [[53, "the-etc-zfs-zpool-cache-file"]], "Generating a new /etc/zfs/zpool.cache file": [[53, "generating-a-new-etc-zfs-zpool-cache-file"]], "Sending and Receiving Streams": [[53, "sending-and-receiving-streams"]], "hole_birth Bugs": [[53, "hole-birth-bugs"]], "Sending Large Blocks": [[53, "sending-large-blocks"]], "CEPH/ZFS": [[53, "ceph-zfs"]], "ZFS Configuration": [[53, "zfs-configuration"]], "CEPH Configuration (ceph.conf)": [[53, "ceph-configuration-ceph-conf"]], "Other General Guidelines": [[53, "other-general-guidelines"]], "Performance Considerations": [[53, "performance-considerations"]], "Advanced Format Disks": [[53, "advanced-format-disks"]], "ZVOL used space larger than expected": [[53, "zvol-used-space-larger-than-expected"]], "Using a zvol for a swap device on Linux": [[53, "using-a-zvol-for-a-swap-device-on-linux"]], "Using ZFS on Xen Hypervisor or Xen Dom0 (Linux)": [[53, "using-zfs-on-xen-hypervisor-or-xen-dom0-linux"]], "udisks2 creating /dev/mapper/ entries for zvol (Linux)": [[53, "udisks2-creating-dev-mapper-entries-for-zvol-linux"]], "Licensing": [[53, "licensing"]], "Reporting a problem": [[53, "reporting-a-problem"]], "Does OpenZFS have a Code of Conduct?": [[53, "does-openzfs-have-a-code-of-conduct"]], "FAQ Hole birth": [[54, "faq-hole-birth"]], "Short explanation": [[54, "short-explanation"]], "I have a pool with hole_birth enabled, how do I know if I am affected?": [[54, "i-have-a-pool-with-hole-birth-enabled-how-do-i-know-if-i-am-affected"]], "Is there any less painful way to fix this if we have already received an affected snapshot?": [[54, "is-there-any-less-painful-way-to-fix-this-if-we-have-already-received-an-affected-snapshot"]], "Long explanation": [[54, "long-explanation"]], "Mailing Lists": [[55, "mailing-lists"]], "Signing Keys": [[56, "signing-keys"]], "Maintainers": [[56, "maintainers"]], "Release branch (spl/zfs-*-release)": [[56, "release-branch-spl-zfs-release"]], "Master branch (master)": [[56, "master-branch-master"]], "Checking the Signature of a Git Tag": [[56, "checking-the-signature-of-a-git-tag"]], "Project and Community": [[57, "project-and-community"]], "OpenZFS Documentation": [[59, "openzfs-documentation"]], "Table of Contents:": [[59, "table-of-contents"]], "Man Pages": [[60, "man-pages"]], "arcstat.1": [[61, 
"arcstat-1"], [239, "arcstat-1"], [338, "arcstat-1"], [440, "arcstat-1"]], "cstyle.1": [[62, "cstyle-1"], [168, "cstyle-1"], [189, "cstyle-1"], [212, "cstyle-1"], [240, "cstyle-1"], [339, "cstyle-1"], [441, "cstyle-1"]], "User Commands (1)": [[63, "user-commands-1"], [169, "user-commands-1"], [190, "user-commands-1"], [213, "user-commands-1"], [241, "user-commands-1"], [340, "user-commands-1"], [442, "user-commands-1"]], "raidz_test.1": [[64, "raidz-test-1"], [191, "raidz-test-1"], [214, "raidz-test-1"], [242, "raidz-test-1"], [341, "raidz-test-1"], [443, "raidz-test-1"]], "test-runner.1": [[65, "test-runner-1"], [444, "test-runner-1"]], "zhack.1": [[66, "zhack-1"], [170, "zhack-1"], [192, "zhack-1"], [215, "zhack-1"], [243, "zhack-1"], [342, "zhack-1"], [445, "zhack-1"]], "ztest.1": [[67, "ztest-1"], [172, "ztest-1"], [194, "ztest-1"], [216, "ztest-1"], [244, "ztest-1"], [343, "ztest-1"], [446, "ztest-1"]], "zvol_wait.1": [[68, "zvol-wait-1"], [217, "zvol-wait-1"], [245, "zvol-wait-1"], [344, "zvol-wait-1"], [447, "zvol-wait-1"]], "Devices and Special Files (4)": [[69, "devices-and-special-files-4"], [345, "devices-and-special-files-4"], [448, "devices-and-special-files-4"]], "spl.4": [[70, "spl-4"], [346, "spl-4"], [449, "spl-4"]], "zfs.4": [[71, "zfs-4"], [347, "zfs-4"], [450, "zfs-4"]], "File Formats and Conventions (5)": [[72, "file-formats-and-conventions-5"], [173, "file-formats-and-conventions-5"], [195, "file-formats-and-conventions-5"], [218, "file-formats-and-conventions-5"], [246, "file-formats-and-conventions-5"], [348, "file-formats-and-conventions-5"], [451, "file-formats-and-conventions-5"]], "vdev_id.conf.5": [[73, "vdev-id-conf-5"], [174, "vdev-id-conf-5"], [196, "vdev-id-conf-5"], [220, "vdev-id-conf-5"], [248, "vdev-id-conf-5"], [349, "vdev-id-conf-5"], [452, "vdev-id-conf-5"]], "dracut.zfs.7": [[74, "dracut-zfs-7"], [350, "dracut-zfs-7"], [453, "dracut-zfs-7"]], "Miscellaneous (7)": [[75, "miscellaneous-7"], [351, "miscellaneous-7"], [454, "miscellaneous-7"]], "vdevprops.7": [[76, "vdevprops-7"], [455, "vdevprops-7"]], "zfsconcepts.7": [[77, "zfsconcepts-7"], [352, "zfsconcepts-7"], [456, "zfsconcepts-7"]], "zfsprops.7": [[78, "zfsprops-7"], [353, "zfsprops-7"], [457, "zfsprops-7"]], "zpool-features.7": [[79, "zpool-features-7"], [354, "zpool-features-7"], [458, "zpool-features-7"]], "zpoolconcepts.7": [[80, "zpoolconcepts-7"], [355, "zpoolconcepts-7"], [459, "zpoolconcepts-7"]], "zpoolprops.7": [[81, "zpoolprops-7"], [356, "zpoolprops-7"], [460, "zpoolprops-7"]], "fsck.zfs.8": [[82, "fsck-zfs-8"], [178, "fsck-zfs-8"], [200, "fsck-zfs-8"], [224, "fsck-zfs-8"], [252, "fsck-zfs-8"], [357, "fsck-zfs-8"], [461, "fsck-zfs-8"]], "System Administration Commands (8)": [[83, "system-administration-commands-8"], [179, "system-administration-commands-8"], [201, "system-administration-commands-8"], [225, "system-administration-commands-8"], [253, "system-administration-commands-8"], [358, "system-administration-commands-8"], [462, "system-administration-commands-8"]], "mount.zfs.8": [[84, "mount-zfs-8"], [180, "mount-zfs-8"], [202, "mount-zfs-8"], [226, "mount-zfs-8"], [254, "mount-zfs-8"], [359, "mount-zfs-8"], [463, "mount-zfs-8"]], "vdev_id.8": [[85, "vdev-id-8"], [181, "vdev-id-8"], [203, "vdev-id-8"], [227, "vdev-id-8"], [255, "vdev-id-8"], [360, "vdev-id-8"], [464, "vdev-id-8"]], "zdb.8": [[86, "zdb-8"], [182, "zdb-8"], [204, "zdb-8"], [228, "zdb-8"], [256, "zdb-8"], [361, "zdb-8"], [465, "zdb-8"]], "zed.8": [[87, "zed-8"], [183, "zed-8"], [205, "zed-8"], [229, "zed-8"], 
[257, "zed-8"], [362, "zed-8"], [466, "zed-8"]], "zfs-allow.8": [[88, "zfs-allow-8"], [258, "zfs-allow-8"], [363, "zfs-allow-8"], [467, "zfs-allow-8"]], "zfs-bookmark.8": [[89, "zfs-bookmark-8"], [259, "zfs-bookmark-8"], [364, "zfs-bookmark-8"], [468, "zfs-bookmark-8"]], "zfs-change-key.8": [[90, "zfs-change-key-8"], [260, "zfs-change-key-8"], [365, "zfs-change-key-8"], [469, "zfs-change-key-8"]], "zfs-clone.8": [[91, "zfs-clone-8"], [261, "zfs-clone-8"], [366, "zfs-clone-8"], [470, "zfs-clone-8"]], "zfs-create.8": [[92, "zfs-create-8"], [262, "zfs-create-8"], [367, "zfs-create-8"], [471, "zfs-create-8"]], "zfs-destroy.8": [[93, "zfs-destroy-8"], [263, "zfs-destroy-8"], [368, "zfs-destroy-8"], [472, "zfs-destroy-8"]], "zfs-diff.8": [[94, "zfs-diff-8"], [264, "zfs-diff-8"], [369, "zfs-diff-8"], [473, "zfs-diff-8"]], "zfs-get.8": [[95, "zfs-get-8"], [265, "zfs-get-8"], [370, "zfs-get-8"], [474, "zfs-get-8"]], "zfs-groupspace.8": [[96, "zfs-groupspace-8"], [266, "zfs-groupspace-8"], [371, "zfs-groupspace-8"], [475, "zfs-groupspace-8"]], "zfs-hold.8": [[97, "zfs-hold-8"], [267, "zfs-hold-8"], [372, "zfs-hold-8"], [476, "zfs-hold-8"]], "zfs-inherit.8": [[98, "zfs-inherit-8"], [268, "zfs-inherit-8"], [373, "zfs-inherit-8"], [477, "zfs-inherit-8"]], "zfs-jail.8": [[99, "zfs-jail-8"], [269, "zfs-jail-8"], [374, "zfs-jail-8"], [478, "zfs-jail-8"]], "zfs-list.8": [[100, "zfs-list-8"], [270, "zfs-list-8"], [375, "zfs-list-8"], [479, "zfs-list-8"]], "zfs-load-key.8": [[101, "zfs-load-key-8"], [271, "zfs-load-key-8"], [376, "zfs-load-key-8"], [480, "zfs-load-key-8"]], "zfs-mount-generator.8": [[102, "zfs-mount-generator-8"], [230, "zfs-mount-generator-8"], [272, "zfs-mount-generator-8"], [377, "zfs-mount-generator-8"], [481, "zfs-mount-generator-8"]], "zfs-mount.8": [[103, "zfs-mount-8"], [273, "zfs-mount-8"], [378, "zfs-mount-8"], [482, "zfs-mount-8"]], "zfs-program.8": [[104, "zfs-program-8"], [231, "zfs-program-8"], [274, "zfs-program-8"], [379, "zfs-program-8"], [483, "zfs-program-8"]], "zfs-project.8": [[105, "zfs-project-8"], [275, "zfs-project-8"], [380, "zfs-project-8"], [484, "zfs-project-8"]], "zfs-projectspace.8": [[106, "zfs-projectspace-8"], [276, "zfs-projectspace-8"], [381, "zfs-projectspace-8"], [485, "zfs-projectspace-8"]], "zfs-promote.8": [[107, "zfs-promote-8"], [277, "zfs-promote-8"], [382, "zfs-promote-8"], [486, "zfs-promote-8"]], "zfs-receive.8": [[108, "zfs-receive-8"], [278, "zfs-receive-8"], [383, "zfs-receive-8"], [487, "zfs-receive-8"]], "zfs-recv.8": [[109, "zfs-recv-8"], [279, "zfs-recv-8"], [384, "zfs-recv-8"], [488, "zfs-recv-8"]], "zfs-redact.8": [[110, "zfs-redact-8"], [280, "zfs-redact-8"], [385, "zfs-redact-8"], [489, "zfs-redact-8"]], "zfs-release.8": [[111, "zfs-release-8"], [281, "zfs-release-8"], [386, "zfs-release-8"], [490, "zfs-release-8"]], "zfs-rename.8": [[112, "zfs-rename-8"], [282, "zfs-rename-8"], [387, "zfs-rename-8"], [491, "zfs-rename-8"]], "zfs-rollback.8": [[113, "zfs-rollback-8"], [283, "zfs-rollback-8"], [388, "zfs-rollback-8"], [492, "zfs-rollback-8"]], "zfs-send.8": [[114, "zfs-send-8"], [284, "zfs-send-8"], [389, "zfs-send-8"], [493, "zfs-send-8"]], "zfs-set.8": [[115, "zfs-set-8"], [285, "zfs-set-8"], [390, "zfs-set-8"], [494, "zfs-set-8"]], "zfs-share.8": [[116, "zfs-share-8"], [286, "zfs-share-8"], [391, "zfs-share-8"], [495, "zfs-share-8"]], "zfs-snapshot.8": [[117, "zfs-snapshot-8"], [287, "zfs-snapshot-8"], [392, "zfs-snapshot-8"], [496, "zfs-snapshot-8"]], "zfs-unallow.8": [[118, "zfs-unallow-8"], [288, "zfs-unallow-8"], [393, 
"zfs-unallow-8"], [497, "zfs-unallow-8"]], "zfs-unjail.8": [[119, "zfs-unjail-8"], [289, "zfs-unjail-8"], [394, "zfs-unjail-8"], [498, "zfs-unjail-8"]], "zfs-unload-key.8": [[120, "zfs-unload-key-8"], [290, "zfs-unload-key-8"], [395, "zfs-unload-key-8"], [499, "zfs-unload-key-8"]], "zfs-unmount.8": [[121, "zfs-unmount-8"], [291, "zfs-unmount-8"], [396, "zfs-unmount-8"], [500, "zfs-unmount-8"]], "zfs-unzone.8": [[122, "zfs-unzone-8"], [501, "zfs-unzone-8"]], "zfs-upgrade.8": [[123, "zfs-upgrade-8"], [292, "zfs-upgrade-8"], [397, "zfs-upgrade-8"], [502, "zfs-upgrade-8"]], "zfs-userspace.8": [[124, "zfs-userspace-8"], [293, "zfs-userspace-8"], [398, "zfs-userspace-8"], [503, "zfs-userspace-8"]], "zfs-wait.8": [[125, "zfs-wait-8"], [294, "zfs-wait-8"], [399, "zfs-wait-8"], [504, "zfs-wait-8"]], "zfs-zone.8": [[126, "zfs-zone-8"], [505, "zfs-zone-8"]], "zfs.8": [[127, "zfs-8"], [184, "zfs-8"], [206, "zfs-8"], [232, "zfs-8"], [295, "zfs-8"], [400, "zfs-8"], [506, "zfs-8"]], "zfs_ids_to_path.8": [[128, "zfs-ids-to-path-8"], [296, "zfs-ids-to-path-8"], [401, "zfs-ids-to-path-8"], [507, "zfs-ids-to-path-8"]], "zfs_prepare_disk.8": [[129, "zfs-prepare-disk-8"], [508, "zfs-prepare-disk-8"]], "zgenhostid.8": [[130, "zgenhostid-8"], [207, "zgenhostid-8"], [234, "zgenhostid-8"], [299, "zgenhostid-8"], [402, "zgenhostid-8"], [509, "zgenhostid-8"]], "zinject.8": [[131, "zinject-8"], [185, "zinject-8"], [208, "zinject-8"], [235, "zinject-8"], [300, "zinject-8"], [403, "zinject-8"], [510, "zinject-8"]], "zpool-add.8": [[132, "zpool-add-8"], [301, "zpool-add-8"], [404, "zpool-add-8"], [511, "zpool-add-8"]], "zpool-attach.8": [[133, "zpool-attach-8"], [302, "zpool-attach-8"], [405, "zpool-attach-8"], [512, "zpool-attach-8"]], "zpool-checkpoint.8": [[134, "zpool-checkpoint-8"], [303, "zpool-checkpoint-8"], [406, "zpool-checkpoint-8"], [513, "zpool-checkpoint-8"]], "zpool-clear.8": [[135, "zpool-clear-8"], [304, "zpool-clear-8"], [407, "zpool-clear-8"], [514, "zpool-clear-8"]], "zpool-create.8": [[136, "zpool-create-8"], [305, "zpool-create-8"], [408, "zpool-create-8"], [515, "zpool-create-8"]], "zpool-destroy.8": [[137, "zpool-destroy-8"], [306, "zpool-destroy-8"], [409, "zpool-destroy-8"], [516, "zpool-destroy-8"]], "zpool-detach.8": [[138, "zpool-detach-8"], [307, "zpool-detach-8"], [410, "zpool-detach-8"], [517, "zpool-detach-8"]], "zpool-events.8": [[139, "zpool-events-8"], [308, "zpool-events-8"], [411, "zpool-events-8"], [518, "zpool-events-8"]], "zpool-export.8": [[140, "zpool-export-8"], [309, "zpool-export-8"], [412, "zpool-export-8"], [519, "zpool-export-8"]], "zpool-get.8": [[141, "zpool-get-8"], [310, "zpool-get-8"], [413, "zpool-get-8"], [520, "zpool-get-8"]], "zpool-history.8": [[142, "zpool-history-8"], [311, "zpool-history-8"], [414, "zpool-history-8"], [521, "zpool-history-8"]], "zpool-import.8": [[143, "zpool-import-8"], [312, "zpool-import-8"], [415, "zpool-import-8"], [522, "zpool-import-8"]], "zpool-initialize.8": [[144, "zpool-initialize-8"], [313, "zpool-initialize-8"], [416, "zpool-initialize-8"], [523, "zpool-initialize-8"]], "zpool-iostat.8": [[145, "zpool-iostat-8"], [314, "zpool-iostat-8"], [417, "zpool-iostat-8"], [524, "zpool-iostat-8"]], "zpool-labelclear.8": [[146, "zpool-labelclear-8"], [315, "zpool-labelclear-8"], [418, "zpool-labelclear-8"], [525, "zpool-labelclear-8"]], "zpool-list.8": [[147, "zpool-list-8"], [316, "zpool-list-8"], [419, "zpool-list-8"], [526, "zpool-list-8"]], "zpool-offline.8": [[148, "zpool-offline-8"], [317, "zpool-offline-8"], [420, "zpool-offline-8"], 
[527, "zpool-offline-8"]], "zpool-online.8": [[149, "zpool-online-8"], [318, "zpool-online-8"], [421, "zpool-online-8"], [528, "zpool-online-8"]], "zpool-reguid.8": [[150, "zpool-reguid-8"], [319, "zpool-reguid-8"], [422, "zpool-reguid-8"], [529, "zpool-reguid-8"]], "zpool-remove.8": [[151, "zpool-remove-8"], [320, "zpool-remove-8"], [423, "zpool-remove-8"], [530, "zpool-remove-8"]], "zpool-reopen.8": [[152, "zpool-reopen-8"], [321, "zpool-reopen-8"], [424, "zpool-reopen-8"], [531, "zpool-reopen-8"]], "zpool-replace.8": [[153, "zpool-replace-8"], [322, "zpool-replace-8"], [425, "zpool-replace-8"], [532, "zpool-replace-8"]], "zpool-resilver.8": [[154, "zpool-resilver-8"], [323, "zpool-resilver-8"], [426, "zpool-resilver-8"], [533, "zpool-resilver-8"]], "zpool-scrub.8": [[155, "zpool-scrub-8"], [324, "zpool-scrub-8"], [427, "zpool-scrub-8"], [534, "zpool-scrub-8"]], "zpool-set.8": [[156, "zpool-set-8"], [325, "zpool-set-8"], [428, "zpool-set-8"], [535, "zpool-set-8"]], "zpool-split.8": [[157, "zpool-split-8"], [326, "zpool-split-8"], [429, "zpool-split-8"], [536, "zpool-split-8"]], "zpool-status.8": [[158, "zpool-status-8"], [327, "zpool-status-8"], [430, "zpool-status-8"], [537, "zpool-status-8"]], "zpool-sync.8": [[159, "zpool-sync-8"], [328, "zpool-sync-8"], [431, "zpool-sync-8"], [538, "zpool-sync-8"]], "zpool-trim.8": [[160, "zpool-trim-8"], [329, "zpool-trim-8"], [432, "zpool-trim-8"], [539, "zpool-trim-8"]], "zpool-upgrade.8": [[161, "zpool-upgrade-8"], [330, "zpool-upgrade-8"], [433, "zpool-upgrade-8"], [540, "zpool-upgrade-8"]], "zpool-wait.8": [[162, "zpool-wait-8"], [331, "zpool-wait-8"], [434, "zpool-wait-8"], [541, "zpool-wait-8"]], "zpool.8": [[163, "zpool-8"], [186, "zpool-8"], [209, "zpool-8"], [236, "zpool-8"], [332, "zpool-8"], [435, "zpool-8"], [542, "zpool-8"]], "zpool_influxdb.8": [[164, "zpool-influxdb-8"], [436, "zpool-influxdb-8"], [543, "zpool-influxdb-8"]], "zstream.8": [[165, "zstream-8"], [335, "zstream-8"], [437, "zstream-8"], [544, "zstream-8"]], "zstreamdump.8": [[166, "zstreamdump-8"], [187, "zstreamdump-8"], [210, "zstreamdump-8"], [237, "zstreamdump-8"], [336, "zstreamdump-8"], [438, "zstreamdump-8"], [545, "zstreamdump-8"]], "master": [[167, "master"]], "zpios.1": [[171, "zpios-1"], [193, "zpios-1"]], "zfs-events.5": [[175, "zfs-events-5"], [197, "zfs-events-5"], [221, "zfs-events-5"], [249, "zfs-events-5"]], "zfs-module-parameters.5": [[176, "zfs-module-parameters-5"], [198, "zfs-module-parameters-5"], [222, "zfs-module-parameters-5"], [250, "zfs-module-parameters-5"]], "zpool-features.5": [[177, "zpool-features-5"], [199, "zpool-features-5"], [223, "zpool-features-5"], [251, "zpool-features-5"]], "v0.6": [[188, "v0-6"]], "v0.7": [[211, "v0-7"]], "spl-module-parameters.5": [[219, "spl-module-parameters-5"], [247, "spl-module-parameters-5"]], "zfsprops.8": [[233, "zfsprops-8"], [298, "zfsprops-8"]], "v0.8": [[238, "v0-8"]], "zfsconcepts.8": [[297, "zfsconcepts-8"]], "zpoolconcepts.8": [[333, "zpoolconcepts-8"]], "zpoolprops.8": [[334, "zpoolprops-8"]], "v2.0": [[337, "v2-0"]], "v2.1": [[439, "v2-1"]], "v2.2": [[546, "v2-2"]], "Message ID:\u00a0ZFS-8000-14": [[547, "message-id-zfs-8000-14"]], "Corrupt ZFS cache": [[547, "corrupt-zfs-cache"]], "Message ID:\u00a0ZFS-8000-2Q": [[548, "message-id-zfs-8000-2q"]], "Missing device in replicated configuration": [[548, "missing-device-in-replicated-configuration"]], "Message ID:\u00a0ZFS-8000-3C": [[549, "message-id-zfs-8000-3c"]], "Missing device in non-replicated configuration": [[549, 
"missing-device-in-non-replicated-configuration"]], "Message ID: ZFS-8000-4J": [[550, "message-id-zfs-8000-4j"]], "Corrupted device label in a replicated configuration": [[550, "corrupted-device-label-in-a-replicated-configuration"]], "Message ID: ZFS-8000-5E": [[551, "message-id-zfs-8000-5e"]], "Corrupted device label in non-replicated configuration": [[551, "corrupted-device-label-in-non-replicated-configuration"]], "Message ID: ZFS-8000-6X": [[552, "message-id-zfs-8000-6x"]], "Missing top level device": [[552, "missing-top-level-device"]], "Message ID:\u00a0ZFS-8000-72": [[553, "message-id-zfs-8000-72"]], "Corrupted pool metadata": [[553, "corrupted-pool-metadata"]], "Message ID:\u00a0ZFS-8000-8A": [[554, "message-id-zfs-8000-8a"]], "Corrupted data": [[554, "corrupted-data"]], "Message ID:\u00a0ZFS-8000-9P": [[555, "message-id-zfs-8000-9p"]], "Failing device in replicated configuration": [[555, "failing-device-in-replicated-configuration"]], "Message ID:\u00a0ZFS-8000-A5": [[556, "message-id-zfs-8000-a5"]], "Incompatible version": [[556, "incompatible-version"]], "Message ID:\u00a0ZFS-8000-ER": [[557, "message-id-zfs-8000-er"]], "ZFS Errata #1": [[557, "zfs-errata-1"]], "ZFS Errata #2": [[557, "zfs-errata-2"]], "ZFS Errata #3": [[557, "zfs-errata-3"]], "ZFS Errata #4": [[557, "zfs-errata-4"]], "Message ID:\u00a0ZFS-8000-EY": [[558, "message-id-zfs-8000-ey"]], "ZFS label hostid mismatch": [[558, "zfs-label-hostid-mismatch"]], "Message ID: ZFS-8000-HC": [[559, "message-id-zfs-8000-hc"]], "ZFS pool I/O failures": [[559, "zfs-pool-i-o-failures"], [560, "zfs-pool-i-o-failures"]], "Message ID:\u00a0ZFS-8000-JQ": [[560, "message-id-zfs-8000-jq"]], "Message ID:\u00a0ZFS-8000-K4": [[561, "message-id-zfs-8000-k4"]], "ZFS intent log read failure": [[561, "zfs-intent-log-read-failure"]], "ZFS Messages": [[562, "zfs-messages"]]}, "indexentries": {}}) \ No newline at end of file