https://medium.com/@cq94/zfs-vous-connaissez-vous-devriez-1d2611e7dad6
TOC
Chapter | Designation
---|---
ADD | Adds the specified virtual devices to the given pool |
ATTACH | Attaches new_device to the existing device |
CLEAR | Clears device errors in a pool |
CREATE | Creates a new storage pool containing the virtual devices specified on the command line |
DESTROY | Destroys the given pool, freeing up any devices for other use |
DETACH | Detaches device from a mirror |
EVENTS | Lists all recent events generated by the ZFS kernel modules |
EXPORT | Exports the given pools from the system |
GET | Retrieves the given list of properties (or all properties if all is used) for the specified storage pool(s) |
HISTORY | Displays the command history of the specified pool(s) or all pools if no pool is specified |
IMPORT-LIST | Lists pools available to import |
IMPORT-ALL | Imports all pools found in the search directories |
IMPORT | Imports a specific pool |
IOSTAT | Displays I/O statistics for the given pools/vdevs |
LABELCLEAR | Removes ZFS label information from the specified device |
LIST | Lists the given pools along with a health status and space usage |
OFFLINE | Takes the specified physical device offline |
ONLINE | Brings the specified physical device online |
REGUID | Generates a new unique identifier for the pool |
REOPEN | Reopen all the vdevs associated with the pool |
REMOVE | Removes the specified device from the pool |
REPLACE | Replaces old_device with new_device |
SCRUB | Begins a scrub or resumes a paused scrub |
SET | Sets the given property on the specified pool |
SPLIT | Splits devices off pool creating newpool |
STATUS | Displays the detailed health status for the given pools |
UPGRADE-DISPLAY-NOT | Displays pools which do not have all supported features enabled and pools formatted using a legacy ZFS version number |
UPGRADE-DISPLAY | Displays legacy ZFS versions supported by the current software |
UPGRADE | Enables all supported features on the given pool |
PROPERTIES | Available properties |
ADD
Adds the specified virtual devices to the given pool
The vdev specification is described in the Virtual Devices section. The behavior of the -f option, and the device checks performed are described in the zpool create subcommand
zpool add [-fgLnP] [-o property=value] pool vdev...
-f # Forces use of vdevs, even if they appear in use or specify a conflicting replication level
-g # Display vdev GUIDs instead of the normal device names
-L # Display real paths for vdevs resolving all symbolic links
-n # Displays the configuration that would be used without actually adding the vdevs
-P # Display real paths for vdevs instead of only the last component of the path
-o property=value # Sets the given pool properties
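A quick sketch of the dry-run workflow (pool name `tank` and the device paths are placeholders):

```shell
# Dry run: show the configuration that would result, without modifying the pool
zpool add -n tank mirror /dev/sdc /dev/sdd

# Actually add the new mirror vdev, growing the pool's capacity
zpool add tank mirror /dev/sdc /dev/sdd
```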
ATTACH
Attaches new_device to the existing device
The existing device cannot be part of a raidz configuration. If device is not currently part of a mirrored configuration, device automatically transforms into a two-way mirror of device and new_device. If device is part of a two-way mirror, attaching new_device creates a three-way mirror, and so on. In either case, new_device begins to resilver immediately
zpool attach [-f] [-o property=value] pool device new_device
-f # Forces use of new_device, even if it appears to be in use
-o property=value # Sets the given pool properties
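For instance, turning a single-disk pool into a two-way mirror (pool and device names are illustrative):

```shell
# tank currently has the single device /dev/sda; attaching /dev/sdb
# transforms it into a two-way mirror. Resilvering starts immediately.
zpool attach tank /dev/sda /dev/sdb

# Watch resilver progress
zpool status tank
```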
CLEAR
Clears device errors in a pool
If no arguments are specified, all device errors within the pool are cleared. If one or more devices is specified, only those errors associated with the specified device or devices are cleared
zpool clear pool [device]
CREATE
Creates a new storage pool containing the virtual devices specified on the command line
The pool name must begin with a letter, and can only contain alphanumeric characters as well as underscore ("_"), dash ("-"), colon (":"), space (" "), and period ("."). The pool names mirror, raidz, spare and log are reserved, as are names beginning with the pattern c[0-9]. The vdev specification is described in the Virtual Devices section
zpool create [-dfn] [-m mountpoint] [-o property=value]... [-o feature@feature=value]... [-O file-system-property=value]... [-R root] [-t tname] pool vdev...
-d # Do not enable any features on the new pool
-f # Forces use of vdevs, even if they appear in use or specify a conflicting replication level
-m mountpoint # Sets the mount point for the root dataset
-n # Displays the configuration that would be used without actually creating the pool
-o property=value # Sets the given pool properties
-o feature@feature=value # Sets the given pool feature
-O file-system-property=value # Sets the given file system properties in the root file system of the pool
-R root # Equivalent to -o cachefile=none -o altroot=root
-t tname # Sets the in-core pool name to tname while the on-disk name will be the name specified as the pool name pool
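A hedged example combining several of these options (pool name, devices, and mount point are placeholders):

```shell
# Preview a raidz pool spanning three disks without creating anything
zpool create -n tank raidz /dev/sda /dev/sdb /dev/sdc

# Create it for real, with a custom mount point and a file system
# property applied to the root dataset
zpool create -m /mnt/tank -O compression=lz4 tank raidz /dev/sda /dev/sdb /dev/sdc
```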
DESTROY
Destroys the given pool, freeing up any devices for other use
This command tries to unmount any active datasets before destroying the pool
zpool destroy [-f] pool
-f # Forces any active datasets contained within the pool to be unmounted
DETACH
Detaches device from a mirror
The operation is refused if there are no other valid replicas of the data
zpool detach pool device
EVENTS
Lists all recent events generated by the ZFS kernel modules
These events are consumed by the zed(8) and used to automate administrative tasks such as replacing a failed device with a hot spare. For more information about the subclasses and event payloads that can be generated see the zfs-events(5) man page
zpool events [-cfHv] [pool]
-c # Clear all previous events
-f # Follow mode
-H # Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space
-v # Print the entire payload for each event
EXPORT
Exports the given pools from the system
All devices are marked as exported, but are still considered in use by other subsystems. The devices can be moved between systems (even those of different endianness) and imported as long as a sufficient number of devices are present
zpool export [-a] [-f] pool...
-a # Exports all pools imported on the system
-f # Forcefully unmount all datasets, using the unmount -f command
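The export/import pair is how pools are migrated between machines. A minimal sketch (pool name assumed):

```shell
# On the old host: unmount datasets and mark the pool as exported
zpool export tank

# On the new host: import it; this works even across systems
# of different endianness
zpool import tank
```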
GET
Retrieves the given list of properties (or all properties if all is used) for the specified storage pool(s)
These properties are displayed with the following fields:
- name Name of storage pool
- property Property name
- value Property value
- source Property source, either 'default' or 'local'
zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
-H # Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space
-o field # A comma-separated list of columns to display. name,property,value,source is the default value
-p # Display numbers in parsable (exact) values
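Two common invocations (pool name is a placeholder); the scripted form is handy in monitoring scripts:

```shell
# All properties, human-readable, with name/property/value/source columns
zpool get all tank

# Scripted mode: just the health value, no headers, easy to parse
zpool get -H -o value health tank
```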
HISTORY
Displays the command history of the specified pool(s) or all pools if no pool is specified
zpool history [-il] [pool]...
-i # Displays internally logged ZFS events in addition to user initiated events
-l # Displays log records in long format, which also includes the user name, the hostname, and the zone in which the operation was performed
IMPORT-LIST
Lists pools available to import
zpool import [-D] [-c cachefile|-d dir]
-c cachefile # Reads configuration from the given cachefile that was created with the cachefile pool property
-d dir # Searches for devices or files in dir
-D # Lists destroyed pools only
IMPORT-ALL
Imports all pools found in the search directories
Identical to the previous command, except that all pools with a sufficient number of devices available are imported. Destroyed pools (pools that were previously destroyed with the zpool destroy command) will not be imported unless the -D option is specified
zpool import -a [-DfmN] [-F [-n] [-T] [-X]] [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root] [-s]
-a # Searches for and imports all pools found
-c cachefile # Reads configuration from the given cachefile that was created with the cachefile pool property
-d dir # Searches for devices or files in dir
-D # Imports destroyed pools only
-f # Forces import, even if the pool appears to be potentially active
-F # Recovery mode for a non-importable pool
-m # Allows a pool to import when there is a missing log device
-n # Used with the -F recovery option
-N # Import the pool without mounting any file systems
-o mntopts # Comma-separated list of mount options to use when mounting datasets within the pool
-o property=value # Sets the specified property on the imported pool
-R root # Sets the cachefile property to none and the altroot property to root
-s # Scan using the default search path, the libblkid cache will not be consulted
-X # Used with the -F recovery option
-T # Specify the txg to use for rollback
IMPORT
Imports a specific pool
A pool can be identified by its name or the numeric identifier. If newpool is specified, the pool is imported using the name newpool. Otherwise, it is imported with the same name as its exported name
zpool import [-Dfm] [-F [-n] [-t] [-T] [-X]] [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root] [-s] pool|id [newpool]
-c cachefile # Reads configuration from the given cachefile that was created with the cachefile pool property
-d dir # Searches for devices or files in dir. The -d option can be specified multiple times. This option is incompatible with the -c option
-D # Imports destroyed pool. The -f option is also required
-f # Forces import, even if the pool appears to be potentially active
-F # Recovery mode for a non-importable pool
-m # Allows a pool to import when there is a missing log device
-n # Used with the -F recovery option
-o mntopts # Comma-separated list of mount options to use when mounting datasets within the pool
-o property=value # Sets the specified property on the imported pool
-R root # Sets the cachefile property to none and the altroot property to root
-s # Scan using the default search path, the libblkid cache will not be consulted
-X # Used with the -F recovery option
-T # Specify the txg to use for rollback
-t # Used with newpool. Specifies that newpool is a temporary name
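Two illustrative invocations (pool names and the numeric id are placeholders):

```shell
# Import the pool named "tank" under the new name "backup"
zpool import tank backup

# Import a destroyed pool by its numeric identifier
# (-D selects destroyed pools; -f is also required)
zpool import -Df 1234567890123456789
```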
IOSTAT
Displays I/O statistics for the given pools/vdevs
You can pass in a list of pools, a pool and list of vdevs in that pool, or a list of any vdevs from any pool. If no items are specified, statistics for every pool in the system are shown. When given an interval, the statistics are printed every interval seconds until ^C is pressed. If count is specified, the command exits after count reports are printed. The first report printed is always the statistics since boot regardless of whether interval and count are passed. However, this behavior can be suppressed with the -y flag. Also note that the units of K, M, ... that are printed in the report are in base 1024. To get the raw values, use the -p flag
zpool iostat [[[-c SCRIPT] [-lq]]|-rw] [-T u|d] [-ghHLpPvy] [[pool...]|[pool vdev...]|[vdev...]] [interval [count]]
-c [SCRIPT1[,SCRIPT2]...] # Run a script (or scripts) on each vdev and include the output as a new column in the zpool iostat output
-T u|d # Display a time stamp
-g # Display vdev GUIDs instead of the normal device names
-H # Scripted mode
-L # Display real paths for vdevs resolving all symbolic links
-p # Display numbers in parsable (exact) values
-P # Display full paths for vdevs instead of only the last component of the path
-r # Print request size histograms for the leaf ZIOs
-v # Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics
-l # Include average latency statistics:
- total_wait: Average total IO time (queuing + disk IO time)
- disk_wait: Average disk IO time (time reading/writing the disk)
- syncq_wait: Average amount of time IO spent in synchronous priority queues. Does not include disk time
- asyncq_wait: Average amount of time IO spent in asynchronous priority queues. Does not include disk time
- scrub: Average queuing time in scrub queue. Does not include disk time
-q # Include active queue statistics
- syncq_read/write: Current number of entries in synchronous priority queues
- asyncq_read/write: Current number of entries in asynchronous priority queues
- scrubq_read: Current number of entries in scrub queue
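Typical invocations combining interval, count, and the verbose/latency columns described above (pool name assumed):

```shell
# Pool-wide statistics every 5 seconds, 3 reports, then exit
zpool iostat tank 5 3

# Per-vdev statistics including the average latency columns
zpool iostat -vl tank
```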
LABELCLEAR
Removes ZFS label information from the specified device
The device must not be part of an active pool configuration
zpool labelclear [-f] device
-f # Treat exported or foreign devices as inactive
LIST
Lists the given pools along with a health status and space usage
If no pools are specified, all pools in the system are listed. When given an interval, the information is printed every interval seconds until ^C is pressed. If count is specified, the command exits after count reports are printed
zpool list [-HgLpPv] [-o property[,property]...] [-T u|d] [pool]... [interval [count]]
-g # Display vdev GUIDs instead of the normal device names
-H # Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space
-o property,... # Print only specified properties. Default list is name, size, alloc, free, fragmentation, expandsize, capacity, dedupratio, health, altroot
-L # Display real paths for vdevs resolving all symbolic links
-p # Display numbers in parsable (exact) values
-P # Display full paths for vdevs instead of only the last component of the path
-T u|d # Display a time stamp
-v # Verbose statistics
OFFLINE
Takes the specified physical device offline
While the device is offline, no attempt is made to read or write to the device. This command is not applicable to spares
zpool offline [-f] [-t] pool device...
-f # Force fault. Instead of offlining the disk, put it into a faulted state
-t # Temporary. Upon reboot, the specified physical device reverts to its previous state
ONLINE
Brings the specified physical device online
This command is not applicable to spares or cache devices
zpool online [-e] pool device...
-e # Expand the device to use all available space
REGUID
Generates a new unique identifier for the pool
You must ensure that all devices in this pool are online and healthy before performing this action
zpool reguid pool
REOPEN
Reopen all the vdevs associated with the pool
zpool reopen pool
REMOVE
Removes the specified device from the pool
This command currently only supports removing hot spares, cache, and log devices. A mirrored log device can be removed by specifying the top-level mirror for the log. Non-log devices that are part of a mirrored configuration can be removed using the zpool detach command. Non-redundant and raidz devices cannot be removed from a pool
zpool remove pool device...
REPLACE
Replaces old_device with new_device
This is equivalent to attaching new_device, waiting for it to resilver, and then detaching old_device
The size of new_device must be greater than or equal to the minimum size of all the devices in a mirror or raidz configuration
zpool replace [-f] [-o property=value] pool device [new_device]
-f # Forces use of new_device, even if it appears to be in use
-o property=value # Sets the given pool properties. See the Properties section for a list of valid properties that can be set
SCRUB
Begins a scrub or resumes a paused scrub
The scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror or raidz) devices, ZFS automatically repairs any damage discovered during the scrub. The zpool status command reports the progress of the scrub and summarizes the results of the scrub upon completion
zpool scrub [-s | -p] pool...
-s # Stop scrubbing
-p # Pause scrubbing
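The pause/resume cycle sketched with the two flags above (pool name is a placeholder):

```shell
# Start (or resume) a scrub
zpool scrub tank

# Pause it, e.g. during peak hours; running scrub again resumes it
zpool scrub -p tank
zpool scrub tank
```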
SET
Sets the given property on the specified pool
zpool set property=value pool
SPLIT
Splits devices off pool creating newpool
All vdevs in pool must be mirrors and the pool must not be in the process of resilvering. At the time of the split, newpool will be a replica of pool. By default, the last device in each mirror is split from pool to create newpool
zpool split [-gLnP] [-o property=value]... [-R root] pool newpool [device ...]
-g # Display vdev GUIDs instead of the normal device names
-L # Display real paths for vdevs resolving all symbolic links
-n # Do dry run, do not actually perform the split
-P # Display full paths for vdevs instead of only the last component of the path
-o property=value # Sets the specified property for newpool
-R root # Set altroot for newpool to root and automatically import it
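A hedged walk-through (pool names and alternate root are placeholders): split a mirrored pool into a clone, dry run first:

```shell
# tank is a pool of mirrors; preview splitting the last device of
# each mirror into a new, independent pool named tank2
zpool split -n tank tank2

# Perform the split for real
zpool split tank tank2

# Import the new pool, e.g. under an alternate root
zpool import -R /mnt/tank2 tank2
```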
STATUS
Displays the detailed health status for the given pools
If no pool is specified, then the status of each pool in the system is displayed. For more information on pool and device health, see the Device Failure and Recovery section.
If a scrub or resilver is in progress, this command reports the percentage done and the estimated time to completion. Both of these are only approximate, because the amount of data in the pool and the other workloads on the system can change
zpool status [-c [SCRIPT1[,SCRIPT2]...]] [-gLPvxD] [-T u|d] [pool]... [interval [count]]
-c [SCRIPT1[,SCRIPT2]...] # Run a script (or scripts) on each vdev and include the output as a new column in the zpool status output
-g # Display vdev GUIDs instead of the normal device names
-L # Display real paths for vdevs resolving all symbolic links
-P # Display full paths for vdevs instead of only the last component of the path
-D # Display a histogram of deduplication statistics
-T u|d # Display a time stamp. -u for the internal representation of time, -d for standard date format
-v # Displays verbose data error information, printing out a complete list of all data errors since the last complete pool scrub
-x # Only display status for pools that are exhibiting errors or are otherwise unavailable
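Two everyday uses (pool name assumed):

```shell
# Quick health check across all pools; only problematic pools are shown
zpool status -x

# Full detail for one pool, refreshed every 10 seconds, e.g. to
# follow a scrub or resilver in progress
zpool status tank 10
```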
UPGRADE-DISPLAY-NOT
Displays pools which do not have all supported features enabled and pools formatted using a legacy ZFS version number
These pools can continue to be used, but some features may not be available. Use zpool upgrade -a to enable all features on all pools
zpool upgrade
UPGRADE-DISPLAY
Displays legacy ZFS versions supported by the current software
See zpool-features(5) for a description of the feature flags supported by the current software
zpool upgrade -v
UPGRADE
Enables all supported features on the given pool
Once this is done, the pool will no longer be accessible on systems that do not support feature flags. See zpool-features(5) for details on compatibility with systems that support feature flags, but do not support all features enabled on the pool
zpool upgrade [-V version] -a|pool...
-a # Enables all supported features on all pools.
-V version # Upgrade to the specified legacy version. If the -V flag is specified, no features will be enabled on the pool
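The three upgrade forms above, in the usual order of use:

```shell
# Show pools that are not running with all supported features enabled
zpool upgrade

# Show which legacy versions and feature flags this software supports
zpool upgrade -v

# Enable all supported features on every pool (irreversible; see above)
zpool upgrade -a
```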
PROPERTIES
available # Amount of storage available within the pool
capacity # Percentage of pool space used
expandsize # Amount of uninitialized space within the pool or device that can be used to increase the total capacity of the pool
fragmentation # The amount of fragmentation in the pool
free # The amount of free space available in the pool
freeing # After a file system or snapshot is destroyed, the space it was using is returned to the pool asynchronously. freeing is the amount of space remaining to be reclaimed. Over time freeing will decrease while free increases
health # The current health of the pool. Health can be one of ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL
guid # A unique identifier for the pool
size # Total size of the storage pool
unsupported@feature_guid # Information about unsupported features that are enabled on the pool. See zpool-features(5) for details
used # Amount of storage space used within the pool
The following property can be set at creation time and import time:
altroot # Alternate root directory. If set, this directory is prepended to any mount points within the pool
The following property can be set only at import time:
readonly=on|off # If set to on, the pool will be imported in read-only mode
The following properties can be set at creation time and import time, and later changed with the zpool set command:
ashift=ashift # Pool sector size exponent, to the power of 2 (internally referred to as ashift )
autoexpand=on|off # Controls automatic pool expansion when the underlying LUN is grown. If set to on, the pool will be resized according to the size of the expanded device
autoreplace=on|off # Controls automatic device replacement. If set to off, device replacement must be initiated by the administrator by using the zpool replace command. If set to on, any new device, found in the same physical location as a device that previously belonged to the pool, is automatically formatted and replaced. The default behavior is off
bootfs=(unset)|pool/dataset # Identifies the default bootable dataset for the root pool
cachefile=path|none # Controls the location of where the pool configuration is cached. Discovering all pools on system startup requires a cached copy of the configuration data that is stored on the root file system
comment=text # A text string consisting of printable ASCII characters that will be stored such that it is available even if the pool becomes faulted. An administrator can provide additional information about a pool using this property
dedupditto=number # Threshold for the number of block ditto copies
delegation=on|off # Controls whether a non-privileged user is granted access based on the dataset permissions defined on the dataset
failmode=wait|continue|panic # Controls the system behavior in the event of catastrophic pool failure
wait # Blocks all I/O access until the device connectivity is recovered and the errors are cleared
continue # Returns EIO to any new write I/O requests but allows reads to any of the remaining healthy devices. Any write requests that have yet to be committed to disk would be blocked
panic # Prints out a message to the console and generates a system crash dump
feature@feature_name=enabled # The value of this property is the current state of feature_name. The only valid value when setting this property is enabled which moves feature_name to the enabled state
listsnapshots=on|off # Controls whether information about snapshots associated with this pool is output when zfs list is run without the -t option. The default value is off
version=version # The current on-disk version of the pool. This can be increased, but never decreased
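Properties are read with zpool get and changed with zpool set, as in this sketch (pool name is a placeholder):

```shell
# Read one property along with its source (default or local)
zpool get autoexpand tank

# Enable automatic pool expansion when the underlying devices grow
zpool set autoexpand=on tank
```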