HAMMER(8)		  BSD System Manager's Manual		     HAMMER(8)

NAME
     hammer — HAMMER file system utility

SYNOPSIS
     hammer -h
     hammer [-2BFqrvXy] [-b bandwidth] [-C cachesize[:readahead]]
	    [-c cyclefile] [-f blkdevs] [-i delay] [-p ssh-port]
	    [-S splitsize] [-t seconds] command [argument ...]

DESCRIPTION
     This manual page documents the hammer utility which provides miscella‐
     neous functions related to managing a HAMMER file system.	For a general
     introduction to the HAMMER file system, its features, and examples on how
     to set up and maintain one, see HAMMER(5).

     The options are as follows:

     -2	     Tell the mirror commands to use a 2-way protocol, which allows
	     automatic negotiation of transaction id ranges.  This option is
	     automatically enabled by the mirror-copy command.

     -B	     Bulk transfer.  Mirror-stream will not attempt to break up large
	     initial bulk transfers into smaller pieces.  This can save time
	     but if the link is lost in the middle of the initial bulk trans‐
	     fer you will have to start over from scratch.  For more informa‐
	     tion see the -S option.

     -b bandwidth
	     Specify a bandwidth limit in bytes per second for mirroring
	     streams.  This option is typically used to prevent batch mirror‐
	     ing operations from loading down the machine.  The bandwidth may
	     be suffixed with k, m, or g to specify values in kilobytes,
	     megabytes, and gigabytes per second.  If no suffix is specified,
	     bytes per second is assumed.

	     Unfortunately this is only applicable to the pre-compression
	     bandwidth when compression is used, so a better solution would
	     probably be to use an ipfw(8) pipe or a pf(4) queue.

     -C cachesize[:readahead]
	     Set the memory cache size for any raw I/O.	 The default is 16MB.
	     A suffix of k for kilobytes and m for megabytes is allowed, else
	     the cache size is specified in bytes.

	     The read-behind/read-ahead defaults to 4 HAMMER blocks.

	     This option is typically only used with diagnostic commands as
	     kernel-supported commands will use the kernel's buffer cache.

     -c cyclefile
	     When pruning, rebalancing or reblocking you can tell the utility
	     to start at the object id stored in the specified file.  If the
	     file does not exist hammer will start at the beginning.  If
	     hammer is told to run for a specific period of time (-t) and is
	     unable to complete the operation it will write out the current
	     object id so the next run can pick up where it left off.  If
	     hammer runs to completion it will delete cyclefile.

     -F	     Force operation.  E.g. cleanup will not check that the time
	     period has elapsed if this option is given.

     -f blkdevs
	     Specify the volumes making up a HAMMER file system.  Blkdevs is a
	     colon-separated list of devices, each specifying a HAMMER volume.

     -h	     Show usage.

     -i delay
	     Specify the delay in seconds for mirror-read-stream.  When
	     maintaining a streaming mirror this option specifies the minimum delay
	     after a batch ends before the next batch is allowed to start.
	     The default is five seconds.

     -p ssh-port
	     Pass the -p ssh-port option to ssh(1) when using a remote speci‐
	     fication for the source and/or destination.

     -q	     Decrease verbosity.  May be specified multiple times.

     -r	     Specify recursion for those commands which support it.

     -S splitsize
	     Specify the bulk splitup size in bytes for mirroring streams.
	     When a mirror-stream is first started hammer will do an initial
	     run-through of the data to calculate good transaction ids to cut
	     up the bulk transfers, creating restart points in case the stream
	     is interrupted.  Without these restart points an interrupted
	     stream might have to start all over again.  The default is a
	     splitsize of 4GB.

	     At the moment the run-through is disk-bandwidth-heavy but some
	     future version will limit the run-through to just the B-Tree
	     records and not the record data.

	     The splitsize may be suffixed with k, m, or g to specify values
	     in kilobytes, megabytes, or gigabytes.  If no suffix is speci‐
	     fied, bytes is assumed.

	     When mirroring very large filesystems the minimum recommended
	     split size is 4GB.	 A small split size may wind up generating a
	     great deal of overhead but very little actual incremental data
	     and is not recommended.

     -t seconds
	     Specify timeout in seconds.  When pruning, rebalancing, reblock‐
	     ing or mirror-reading you can tell the utility to stop after a
	     certain period of time.  A value of 0 means unlimited.  This
	     option is used along with the -c cyclefile option to prune,
	     rebalance or reblock incrementally.

     -v	     Increase verbosity.  May be specified multiple times.

     -X	     Enable compression for any remote ssh specifications.  This
	     option is typically used with the mirroring directives.

     -y	     Force “yes” for interactive questions.

     The commands are as follows:

     synctid filesystem [quick]
	     Generate a guaranteed, formal 64-bit transaction id representing
	     the current state of the specified HAMMER file system.  The file
	     system will be synced to the media.

	     If the quick keyword is specified the file system will be soft-
	     synced, meaning that a crash might still undo the state of the
	     file system as of the transaction id returned but any new modifi‐
	     cations will occur after the returned transaction id as expected.

	     This operation does not create a snapshot.	 It is meant to be
	     used to track temporary fine-grained changes to a subset of files
	     and will only remain valid for ‘@@’ access purposes for the
	     prune-min period configured for the PFS.  If you desire a real
	     snapshot then the snapq directive may be what you are looking
	     for.
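
	     For example, a backup or build script might capture a transaction
	     id and use it for temporary ‘@@’ access, following the softlink
	     form shown under the snapshot directive.  This is a minimal
	     sketch; the mount point is illustrative:

		   # generate a formal transaction id for the current state
		   TID=$(hammer synctid /home)
		   # temporary access to that state; valid only until pruned
		   # past the PFS's prune-min period
		   ls /home/@@"$TID"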

     bstats [interval]
	     Output HAMMER B-Tree statistics until interrupted.	 Pause
	     interval seconds between each display.  The default interval is
	     one second.

     iostats [interval]
	     Output HAMMER I/O statistics until interrupted.  Pause interval
	     seconds between each display.  The default interval is one sec‐
	     ond.

     history[@offset[,length]] path ...
	     Show the modification history for inode and data of HAMMER files.
	     If offset is given, history is shown for the data block at the
	     given offset, otherwise history is shown for the inode.  If -v is
	     specified, length data bytes at the given offset are dumped for
	     each version; the default is 32.

	     For each path this directive shows the object id and sync status,
	     and for each object version it shows the transaction id and time
	     stamp.  Files have to exist for this directive to be applicable;
	     to track inodes which have been deleted or renamed see undo(1).
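
	     For example, to list the versions of a file and then dump 64 data
	     bytes at offset 0 for each version (the path is illustrative):

		   # show transaction ids and time stamps for the inode
		   hammer history /usr/src/Makefile
		   # dump 64 bytes of data at offset 0 for each version
		   hammer -v history@0,64 /usr/src/Makefile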

     blockmap
	     Dump the blockmap for the file system.  The HAMMER blockmap is
	     a two-layer blockmap representing the maximum possible file system
	     size of 1 Exabyte.	 Needless to say the second layer is only
	     present for blocks which exist.  HAMMER's blockmap represents
	     8-Megabyte blocks, called big-blocks.  Each big-block has an
	     append point, a free byte count, and a typed zone id which allows
	     content to be reverse engineered to some degree.

	     In HAMMER allocations are essentially appended to a selected big-
	     block using the append offset and deducted from the free byte
	     count.  When space is freed the free byte count is adjusted but
	     HAMMER does not track holes in big-blocks for reallocation.  A
	     big-block must be completely freed, either through normal file
	     system operations or through reblocking, before it can be reused.

	     Data blocks can be shared by deducting the space used from the
	     free byte count for each shared reference.  This means the free
	     byte count can legally go negative.

	     This command needs the -f blkdevs option.

     checkmap
	     Check the blockmap allocation count.  hammer will scan the B-
	     Tree, collect allocation information, and construct a blockmap
	     in-memory.	 It will then check that blockmap against the on-disk
	     blockmap.

	     This command needs the -f blkdevs option.

     show [localization[:object_id]]
	     Dump the B-Tree.  By default this command will validate all B-
	     Tree linkages and CRCs, including data CRCs, and will report the
	     most verbose information it can dig up.  Any errors will show up
	     with a ‘B’ in column 1 along with various other error flags.

	     If you specify localization or localization:object_id the dump
	     will search for the key printing nodes as it recurses down, and
	     then will iterate forwards.  These fields are specified in HEX.
	     Note that the pfsid is the top 16 bits of the 32-bit localization
	     field so PFS #1 would be 00010000.

	     If you use -q the command will report less information about the
	     inode contents.

	     If you use -qq the command will not report the content of the
	     inode or other typed data at all.

	     If you use -qqq the command will not report volume header infor‐
	     mation, big-block fill ratios, mirror transaction ids, or report
	     or check data CRCs.  B-Tree CRCs and linkages are still checked.

	     This command needs the -f blkdevs option.

     show-undo
	     (HAMMER VERSION 4+) Dump the UNDO/REDO map.

	     This command needs the -f blkdevs option.

     recover targetdir
	     Recover data from a corrupted HAMMER filesystem.  This is a low
	     level command which operates on the filesystem image and attempts
	     to locate and recover files from a corrupted filesystem.  The
	     entire image is scanned linearly looking for B-Tree nodes.	 Any
	     node found which passes its CRC test is scanned for file, inode,
	     and directory fragments and the target directory is populated
	     with the resulting data.  Files and directories in the target
	     directory are initially named after the object id and are renamed
	     as fragmentary information is processed.

	     This command keeps track of filename/object_id translations and
	     may eat a considerable amount of memory while operating.

	     This command is literally the last line of defense when it comes
	     to recovering data from a dead filesystem.

	     This command needs the -f blkdevs option.

     namekey1 filename
	     Generate a HAMMER 64-bit directory hash for the specified file
	     name, using the original directory hash algorithm in version 1 of
	     the file system.  The low 32 bits are used as an iterator for
	     hash collisions and will be output as 0.

     namekey2 filename
	     Generate a HAMMER 64-bit directory hash for the specified file
	     name, using the new directory hash algorithm in version 2 of the
	     file system.  The low 32 bits are still used as an iterator but
	     will start out containing part of the hash key.

     namekey32 filename
	     Generate the top 32 bits of a HAMMER 64-bit directory hash for
	     the specified file name.

     info    Show extended information about HAMMER file systems.  The infor‐
	     mation is divided into sections:

	     Volume identification
		     General information, like the label of the HAMMER
		     filesystem, the number of volumes it contains, the FSID,
		     and the HAMMER version being used.

	     Big block information
		     Big block statistics, such as total, used, reserved and
		     free big blocks.

	     Space information
		     Information about space used on the filesystem.  Cur‐
		     rently total size, used, reserved and free space are dis‐
		     played.

	     PFS information
		     Basic information about the PFSs currently present on a
		     HAMMER filesystem.

		     “PFS ID” is the ID of the PFS, with 0 being the root PFS.
		     “Snaps” is the current snapshot count on the PFS.
		     “Mounted on” displays the mount point the PFS is
		     currently mounted on (if any).

     cleanup [filesystem ...]
	     This is a meta-command which executes snapshot, prune, rebalance,
	     dedup and reblock commands on the specified HAMMER file systems.
	     If no filesystem is specified this command will clean up all
	     HAMMER file systems in use, including PFS's.  To do this it will
	     scan all HAMMER and null mounts, extract PFS id's, and clean up
	     each PFS found.

	     This command will access a snapshots directory and a configura‐
	     tion file for each filesystem, creating them if necessary.

	     HAMMER version 2-
		     The configuration file is config in the snapshots direc‐
		     tory which defaults to <pfs>/snapshots.

	     HAMMER version 3+
		     The configuration file is saved in file system meta-data,
		     see hammer config.	 The snapshots directory defaults to
		     /var/hammer/<pfs> (/var/hammer/root for root mount).

	     The format of the configuration file is:

		   snapshots  <period> <retention-time> [any]
		   prune      <period> <max-runtime>
		   rebalance  <period> <max-runtime>
		   dedup      <period> <max-runtime>
		   reblock    <period> <max-runtime>
		   recopy     <period> <max-runtime>

	     Defaults are:

		   snapshots  1d 60d  # 0d 0d  for PFS /tmp, /var/tmp, /usr/obj
		   prune      1d 5m
		   rebalance  1d 5m
		   dedup      1d 5m
		   reblock    1d 5m
		   recopy     30d 10m

	     Time is given with a suffix of d, h, m or s meaning day, hour,
	     minute and second.

	     If the snapshots directive has a period of 0 and a retention time
	     of 0 then snapshot generation is disabled, removal of old
	     snapshots is disabled, and prunes will use prune-everything.

	     If the snapshots directive has a period of 0 but a non-zero
	     retention time then this command will not create any new snap‐
	     shots but will remove old snapshots it finds based on the reten‐
	     tion time.	 This form should be used on PFS masters where you are
	     generating your own snapshot softlinks manually and on PFS slaves
	     when all you wish to do is prune away existing snapshots inher‐
	     ited via the mirroring stream.

	     By default only snapshots in the form ‘snap-yyyymmdd[-HHMM]’ are
	     processed.	 If the any directive is specified as a third argument
	     on the snapshots config line then any softlink of the form
	     ‘*-yyyymmdd[-HHMM]’ or ‘*.yyyymmdd[-HHMM]’ will be processed.

	     A period of 0 for prune, rebalance, reblock or recopy disables
	     the directive.  A max-runtime of 0 means unlimited.

	     If the period hasn't passed since the previous cleanup run,
	     nothing is done.  For example, a day is considered to have passed
	     once midnight (localtime) has passed.  If the -F flag is given
	     the period is ignored.  By
	     default, DragonFly is set up to run hammer cleanup nightly via
	     periodic(8).
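
	     For example, to force an immediate cleanup run regardless of the
	     configured periods, or to clean up only specific file systems
	     (the mount points are illustrative):

		   # clean up all mounted HAMMER file systems and PFSs now
		   hammer -F cleanup
		   # clean up only the listed file systems
		   hammer cleanup /home /usr/obj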

	     The default configuration file will create a daily snapshot, do a
	     daily pruning, rebalancing, deduping and reblocking run and a
	     monthly recopy run.  Reblocking is defragmentation with a level
	     of 95%, and recopy is full defragmentation.

	     By default prune, dedup and rebalance operations are time limited
	     to 5 minutes, reblock operations to a bit over 5 minutes, and
	     recopy operations to a bit over 10 minutes.  Reblocking and
	     recopy runs are each broken down into four separate functions:
	     btree, inodes, dirs and data.  Each function is time limited to
	     the time given in the configuration file, but the btree, inodes
	     and dirs functions usually do not take very long; full
	     defragmentation is always used for these three functions.  Also
	     note that this directive will by default disable snapshots on the
	     following PFS's: /tmp, /var/tmp and /usr/obj.

	     The defaults may be adjusted by modifying the configuration file.
	     The pruning and reblocking commands automatically maintain a
	     cyclefile for incremental operation.  If you interrupt (^C) the
	     program the cyclefile will be updated, but a sub-command may con‐
	     tinue to run in the background for a few seconds until the HAMMER
	     ioctl detects the interrupt.  The snapshots PFS option can be set
	     to use another location for the snapshots directory.

	     Work on this command is still in progress.	 Expected additions:
	     An ability to remove snapshots dynamically as the file system
	     becomes full.

     config [filesystem [configfile]]
	     (HAMMER VERSION 3+) Show or change configuration for filesystem.
	     If zero or one argument is specified this function dumps the
	     current configuration file to stdout.  Zero arguments specifies
	     the PFS containing the current directory.	This configuration
	     file is stored in file system meta-data.  If two arguments are
	     specified this function installs a new config file.

	     In HAMMER versions less than 3 the configuration file is by
	     default stored in <pfs>/snapshots/config, but in all later ver‐
	     sions the configuration file is stored in file system meta-data.
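
	     For example, a configuration can be dumped, edited offline and
	     reinstalled.  This is a minimal sketch; the PFS path and
	     temporary file name are illustrative:

		   # dump the current configuration
		   hammer config /hammer/pfs/data > /tmp/data.conf
		   # edit /tmp/data.conf, then install the modified copy
		   hammer config /hammer/pfs/data /tmp/data.conf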

     viconfig [filesystem]
	     (HAMMER VERSION 3+) Edit the configuration file and reinstall
	     into file system meta-data when done.  Zero arguments specifies
	     the PFS containing the current directory.

     volume-add device filesystem
	     Add volume device to filesystem.  This will format device and add
	     all of its space to filesystem.  A HAMMER file system can use up
	     to 256 volumes.

	     NOTE! All existing data contained on device will be destroyed by
	     this operation!  If device contains a valid HAMMER file system,
	     formatting will be denied.	 You can overcome this sanity check by
	     using dd(1) to erase the beginning sectors of the device.

	     Remember that you have to specify device, together with any other
	     devices that make up the file system, as a colon-separated list
	     in /etc/fstab and to mount_hammer(8).  If filesystem is the root
	     file system, also remember to add device to vfs.root.mountfrom in
	     /boot/loader.conf, see loader(8).
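
	     For example, to grow an existing file system by one volume.  The
	     device names and mount point are illustrative, and the fstab line
	     is only a sketch of the colon-separated form described above:

		   # format the new device and add its space to /home
		   hammer volume-add /dev/ad1s1a /home
		   # /etc/fstab then lists all volumes, colon-separated:
		   #   /dev/ad0s1a:/dev/ad1s1a  /home  hammer  rw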

     volume-del device filesystem
	     Remove volume device from filesystem.

	     Remember that you have to remove device from the colon-separated
	     list in /etc/fstab and mount_hammer(8).  If filesystem is the
	     root file system, also remember to remove device from
	     vfs.root.mountfrom in /boot/loader.conf, see loader(8).

     volume-list filesystem
	     List the volumes that make up filesystem.

     snapshot [filesystem] snapshot-dir

     snapshot filesystem snapshot-dir [note]
	     Take a snapshot of the file system either explicitly given by
	     filesystem or implicitly derived from the snapshot-dir argument
	     and create a symlink in the directory provided by snapshot-dir
	     pointing to the snapshot.	If snapshot-dir is not a directory, it
	     is assumed to be a format string passed to strftime(3) with the
	     current time as parameter.	 If snapshot-dir refers to an existing
	     directory, a default format string of ‘snap-%Y%m%d-%H%M’ is
	     assumed and used as name for the newly created symlink.

	     Snapshot is a per PFS operation, so each PFS in a HAMMER file
	     system has to be snapshot separately.

	     For example, assuming that /mysnapshots is on file system / and
	     that /obj and /usr are file systems on their own, the following
	     invocations:

		   hammer snapshot /mysnapshots

		   hammer snapshot /mysnapshots/%Y-%m-%d

		   hammer snapshot /obj /mysnapshots/obj-%Y-%m-%d

		   hammer snapshot /usr /my/snaps/usr "note"

	     would create symlinks similar to:

		   /mysnapshots/snap-20080627-1210 -> /@@0x10d2cd05b7270d16

		   /mysnapshots/2008-06-27 -> /@@0x10d2cd05b7270d16

		   /mysnapshots/obj-2008-06-27 -> /obj@@0x10d2cd05b7270d16

		   /my/snaps/usr/snap-20080627-1210 -> /usr@@0x10d2cd05b7270d16

	     When run on a HAMMER version 3+ file system the snapshot is also
	     recorded in file system meta-data along with the optional note.
	     See the snapls directive.

     snap path [note]
	     (HAMMER VERSION 3+) Create a snapshot for the PFS containing path
	     and create a snapshot softlink.  If the path specified is a
	     directory a standard snapshot softlink will be created in the
	     directory.	 The snapshot softlink points to the base of the
	     mounted PFS.

     snaplo path [note]
	     (HAMMER VERSION 3+) Create a snapshot for the PFS containing path
	     and create a snapshot softlink.  If the path specified is a
	     directory a standard snapshot softlink will be created in the
	     directory.	 The snapshot softlink points into the directory it is
	     contained in.

     snapq dir [note]
	     (HAMMER VERSION 3+) Create a snapshot for the PFS containing the
	     specified directory but do not create a softlink.	Instead output
	     a path which can be used to access the directory via the snap‐
	     shot.

	     An absolute or relative path may be specified.  The path will be
	     used as-is as a prefix in the path output to stdout.  As with the
	     other snap and snapshot directives the snapshot transaction id
	     will be registered in the file system meta-data.
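
	     For example, a backup script might use snapq to obtain a stable
	     path into a snapshot of the directory it is about to copy.  This
	     is a minimal sketch; the paths are illustrative:

		   # register a snapshot and print a path for ‘@@’ access
		   SNAP=$(hammer snapq /home/projects)
		   # copy from the frozen snapshot view, not the live tree
		   cpdup "$SNAP" /backup/projects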

     snaprm path ...

     snaprm transaction_id ...

     snaprm filesystem transaction_id ...
	     (HAMMER VERSION 3+) Remove a snapshot given its softlink or
	     transaction id.  If specifying a transaction id the snapshot is
	     removed from file system meta-data but you are responsible for
	     removing any related softlinks.

	     If a softlink path is specified the filesystem and transaction id
	     are derived from the contents of the softlink.  If just a transac‐
	     tion id is specified it is assumed to be a snapshot in the HAMMER
	     filesystem you are currently chdir'd into.	 You can also specify
	     the filesystem and transaction id explicitly.

     snapls [path ...]
	     (HAMMER VERSION 3+) Dump the snapshot meta-data for PFSs contain‐
	     ing each path listing all available snapshots and their notes.
	     If no arguments are specified snapshots for the PFS containing
	     the current directory are listed.	This is the definitive list of
	     snapshots for the file system.

     prune softlink-dir
	     Prune the file system based on previously created snapshot soft‐
	     links.  Pruning is the act of deleting file system history.  The
	     prune command will delete file system history such that the file
	     system state is retained for the given snapshots, and all history
	     after the latest snapshot.	 By setting the per PFS parameter
	     prune-min, history is guaranteed to be retained for at least this
	     time interval.  All other history is deleted.

	     The target directory is expected to contain softlinks pointing to
	     snapshots of the file systems you wish to retain.	The directory
	     is scanned non-recursively and the mount points and transaction
	     ids stored in the softlinks are extracted and sorted.  The file
	     system is then explicitly pruned according to what is found.
	     Cleaning out portions of the file system is as simple as removing
	     a snapshot softlink and then running the prune command.

	     As a safety measure pruning only occurs if one or more softlinks
	     are found containing the ‘@@’ snapshot id extension.  Currently
	     the scanned softlink directory must contain softlinks pointing to
	     a single HAMMER mount.  The softlinks may specify absolute or
	     relative paths.  Softlinks must use 20-character ‘@@0x%016llx’
	     transaction ids, as might be returned from hammer synctid
	     filesystem.

	     Pruning is a per PFS operation, so each PFS in a HAMMER file
	     system has to be pruned separately.

	     Note that pruning a file system may not immediately free-up
	     space, though typically some space will be freed if a large num‐
	     ber of records are pruned out.  The file system must be reblocked
	     to completely recover all available space.

	     For example, let's say that you didn't set prune-min, and the
	     snapshot directory contains the following links:

		   lrwxr-xr-x  1 root  wheel  29 May 31 17:57 snap1 ->
		   /usr/obj/@@0x10d2cd05b7270d16

		   lrwxr-xr-x  1 root  wheel  29 May 31 17:58 snap2 ->
		   /usr/obj/@@0x10d2cd13f3fde98f

		   lrwxr-xr-x  1 root  wheel  29 May 31 17:59 snap3 ->
		   /usr/obj/@@0x10d2cd222adee364

	     If you were to run the prune command on this directory, the
	     HAMMER /usr/obj mount would be pruned to retain the above three
	     snapshots.	 In addition, history for modifications made to the
	     file system before the oldest snapshot would be destroyed, and
	     history for potentially fine-grained modifications made to the
	     file system after the most recent snapshot would be retained.

	     If you then delete the snap2 softlink and rerun the prune com‐
	     mand, history for modifications pertaining to that snapshot would
	     be destroyed.

	     In HAMMER file system versions 3+ this command also scans the
	     snapshots stored in the file system meta-data and includes them
	     in the prune.

     prune-everything filesystem
	     Remove all historical records from filesystem.  Use this direc‐
	     tive with caution on PFSs where you intend to use history.

	     This command does not remove snapshot softlinks but will delete
	     all snapshots recorded in file system meta-data (for file system
	     version 3+).  The user is responsible for deleting any softlinks.

	     Pruning is a per PFS operation, so each PFS in a HAMMER file
	     system has to be pruned separately.

     rebalance filesystem [saturation_percentage]
	     Rebalance the B-Tree.  Nodes with a small number of elements will
	     be combined and element counts will be smoothed out between
	     nodes.

	     The saturation percentage is between 50% and 100%.	 The default
	     is 85% (the ‘%’ suffix is not needed).

	     Rebalancing is a per PFS operation, so each PFS in a HAMMER file
	     system has to be rebalanced separately.

     dedup filesystem
	     (HAMMER VERSION 5+) Perform offline (post-process) deduplication.
	     Deduplication occurs at the block level; currently only data
	     blocks of the same size can be deduped, metadata blocks cannot.
	     The hash function used for comparing data blocks is CRC-32 (CRCs
	     are computed anyway as part of HAMMER's data integrity features,
	     so there is no additional overhead).  Since CRC is a weak hash
	     function a byte-by-byte comparison is done before actual dedup‐
	     ing.  In case of a CRC collision (two data blocks have the same
	     CRC but different contents) the checksum is upgraded to SHA-256.

	     Currently the HAMMER reblocker may partially undo (re-expand)
	     dedup, since the reblocker's normal operation is to reallocate
	     every record; it is therefore possible for deduped blocks to be
	     re-expanded.

	     Deduplication is a per PFS operation, so each PFS in a HAMMER
	     file system has to be deduped separately.  This also means that
	     if you have duplicated data in two different PFSs that data won't
	     be deduped; however, the addition of such a feature is planned.

     dedup-simulate filesystem
	     Show the potential space savings (simulated dedup ratio) one can
	     get from running the dedup command.  If the estimated dedup ratio
	     is greater than 1.00 you will see dedup space savings.  Remember
	     that this is an estimate; in practice the real dedup ratio will
	     be slightly smaller because of HAMMER big-block underflows,
	     B-Tree locking issues and other factors.

	     Note that deduplication currently works only on bulk data so if
	     you try to run dedup-simulate or dedup commands on a PFS that
	     contains metadata only (directory entries, softlinks) you will
	     get a 0.00 dedup ratio.
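
	     For example, to estimate the savings on a PFS and then run the
	     actual deduplication pass (the mount point is illustrative):

		   # report the estimated dedup ratio without changing data
		   hammer dedup-simulate /home
		   # perform the offline deduplication pass
		   hammer dedup /home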

     reblock filesystem [fill_percentage]

     reblock-btree filesystem [fill_percentage]

     reblock-inodes filesystem [fill_percentage]

     reblock-dirs filesystem [fill_percentage]

     reblock-data filesystem [fill_percentage]
	     Attempt to defragment and free space for reuse by reblocking a
	     live HAMMER file system.  Big-blocks cannot be reused by HAMMER
	     until they are completely free.  This command also has the effect
	     of reordering all elements, effectively defragmenting the file
	     system.

	     The default fill percentage is 100% and will cause the file sys‐
	     tem to be completely defragmented.	 All specified element types
	     will be reallocated and rewritten.	 If you wish to quickly free
	     up space instead try specifying a smaller fill percentage, such
	     as 90% or 80% (the ‘%’ suffix is not needed).

	     Since this command may rewrite the entire contents of the disk it
	     is best to do it incrementally from a cron(8) job along with the
	     -c cyclefile and -t seconds options to limit the run time.	 The
	     file system would thus be defragmented over a long period of time.
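
	     For example, a nightly cron(8) job might reblock incrementally,
	     limiting each run to five minutes and resuming where the previous
	     run stopped.  This is a sketch only; the cycle file path and
	     mount point are illustrative:

		   # crontab entry: resume reblocking /home at 03:00 each
		   # night, stopping after 300 seconds and recording the
		   # restart point in the cycle file
		   0 3 * * * hammer -c /var/run/reblock.cycle -t 300 reblock /home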

	     It is recommended that separate invocations be used for each data
	     type.  B-Tree nodes, inodes, and directories are typically the
	     most important elements needing defragmentation.  Data can be
	     defragmented over a longer period of time.

	     Reblocking is a per PFS operation, so each PFS in a HAMMER file
	     system has to be reblocked separately.

     pfs-status dirpath ...
	     Retrieve the mirroring configuration parameters for the specified
	     HAMMER file systems or pseudo-filesystems (PFS's).

     pfs-master dirpath [options]
	     Create a pseudo-filesystem (PFS) inside a HAMMER file system.  Up
	     to 65536 PFSs can be created.  Each PFS uses an independent inode
	     numbering space making it suitable for replication.

	     The pfs-master directive creates a PFS that you can read, write,
	     and use as a mirroring source.

	     A PFS can only be truly destroyed with the pfs-destroy directive.
	     Removing the softlink will not destroy the underlying PFS.

	     A PFS can only be created in the root PFS (PFS# 0), not in a PFS
	     created by pfs-master or pfs-slave (PFS# >0).

	     It is recommended that dirpath is of the form <fs>/pfs/<name>
	     (i.e. located in pfs directory at root of HAMMER file system).

	     It is recommended to use a null mount to access a PFS, except for
	     root PFS, for more information see HAMMER(5).
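
	     For example, to create a master PFS under the recommended pfs
	     directory and access it via a null mount (the paths are
	     illustrative):

		   # create the PFS in the root PFS of the HAMMER mount
		   hammer pfs-master /hammer/pfs/data
		   # access it through a null mount rather than the softlink
		   mount_null /hammer/pfs/data /data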

     pfs-slave dirpath [options]
	     Create a pseudo-filesystem (PFS) inside a HAMMER file system.  Up
	     to 65536 PFSs can be created.  Each PFS uses an independent inode
	     numbering space making it suitable for replication.

	     The pfs-slave directive creates a PFS that you can use as a mir‐
	     roring source or target.  You will not be able to access a slave
	     PFS until you have completed the first mirroring operation with
	     it as the target (its root directory will not exist until then).

	     Access to the pfs-slave via the special softlink, as described in
	     the PFS NOTES below, allows HAMMER to dynamically modify the
	     snapshot transaction id by returning a dynamic result from
	     readlink(2) calls.

	     A PFS can only be truly destroyed with the pfs-destroy directive.
	     Removing the softlink will not destroy the underlying PFS.

	     A PFS can only be created in the root PFS (PFS# 0), not in a PFS
	     created by pfs-master or pfs-slave (PFS# >0).

	     It is recommended that dirpath is of the form <fs>/pfs/<name>
	     (i.e. located in pfs directory at root of HAMMER file system).

	     It is recommended to use a null mount to access a PFS, except for
	     root PFS, for more information see HAMMER(5).
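
	     For example, a slave intended as a mirroring target is typically
	     created with a shared-uuid matching the master (see pfs-update
	     below).  This is a sketch only; the path is illustrative and the
	     UUID is a placeholder for the master's shared UUID:

		   # create a slave whose shared UUID matches the master
		   hammer pfs-slave /hammer/pfs/data.slave \
		       shared-uuid=11111111-2222-3333-4444-555555555555

	     Alternatively, mirror-copy and mirror-write can create a
	     compatible slave PFS for you when the target does not exist.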

     pfs-update dirpath [options]
	     Update the configuration parameters for an existing HAMMER file
	     system or pseudo-filesystem.  Options that may be specified:

	     sync-beg-tid=0x16llx
		     This is the automatic snapshot access starting transac‐
		     tion id for mirroring slaves.  This parameter is normally
		     updated automatically by the mirror-write directive.

		     It is important to note that accessing a mirroring slave
		     with a transaction id greater than the last fully syn‐
		     chronized transaction id can result in an unreliable
		     snapshot since you will be accessing data that is still
		     undergoing synchronization.

		     Manually modifying this field is dangerous and can result
		     in a broken mirror.

	     sync-end-tid=0x16llx
		     This is the current synchronization point for mirroring
		     slaves.  This parameter is normally updated automatically
		     by the mirror-write directive.

		     Manually modifying this field is dangerous and can result
		     in a broken mirror.

	     shared-uuid=uuid
		     Set the shared UUID for this file system.	All mirrors
		     must have the same shared UUID.  For safety purposes the
		     mirror-write directives will refuse to operate on a tar‐
		     get with a different shared UUID.

		     Changing the shared UUID on an existing, non-empty mir‐
		     roring target, including an empty but not completely
		     pruned target, can lead to corruption of the mirroring
		     target.

	     unique-uuid=uuid
		     Set the unique UUID for this file system.	This UUID
		     should not be used anywhere else, even on exact copies of
		     the file system.

	     label=string
		     Set a descriptive label for this file system.

	     snapshots=string
		     Specify the snapshots directory which hammer cleanup will
		     use to manage this PFS.

		     HAMMER version 2-
			     The snapshots directory does not need to be con‐
			     figured for PFS masters and will default to
			     <pfs>/snapshots.

			     PFS slaves are mirroring slaves so you cannot
			     configure a snapshots directory on the slave
			     itself to be managed by the slave's machine.  In
			     fact, the slave will likely have a snapshots sub-
			     directory mirrored from the master, but that
			     directory contains the configuration the master
			     is using for its copy of the file system, not the
			     configuration that we want to use for our slave.

			     It is recommended that <fs>/var/slaves/<name> be
			     configured for a PFS slave, where <fs> is the
			     base HAMMER file system, and <name> is an appro‐
			     priate label.

		     HAMMER version 3+
			     The snapshots directory does not need to be con‐
			     figured for PFS masters or slaves.	 The snapshots
			     directory defaults to /var/hammer/<pfs>
			     (/var/hammer/root for root mount).

		     You can control snapshot retention on your slave indepen‐
		     dent of the master.

	     snapshots-clear
		     Zero out the snapshots directory path for this PFS.

	     prune-min=Nd

	     prune-min=[Nd/]hh[:mm[:ss]]
		     Set the minimum fine-grained data retention period.
		     HAMMER always retains fine-grained history up to the most
		     recent snapshot.  You can extend the retention period
		     further by specifying a non-zero pruning minimum.	Any
		     snapshot softlinks within the retention period are
		     ignored for the purposes of pruning (i.e. the fine
		     grained history is retained).  Number of days, hours,
		     minutes and seconds are given as N, hh, mm and ss.

		     Because the transaction id in the snapshot softlink can‐
		     not be used to calculate a timestamp, HAMMER uses the
		     earlier of the st_ctime or st_mtime field of the softlink
		     to determine which snapshots fall within the retention
		     period.  Users must be sure to retain one of these two
		     fields when manipulating the softlink.
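
	     For example, to extend fine-grained history retention on a PFS to
	     30 days (the PFS path is illustrative):

		   # keep fine-grained history for at least 30 days
		   hammer pfs-update /hammer/pfs/data prune-min=30d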

     pfs-upgrade dirpath
	     Upgrade a PFS from slave to master operation.  The PFS will be
	     rolled back to the current end synchronization transaction id
	     (removing any partial synchronizations), and will then become
	     writable.

	     WARNING! HAMMER currently supports only single masters and using
	     this command can easily result in file system corruption if you
	     don't know what you are doing.

	     This directive will refuse to run if any programs have open
	     descriptors in the PFS, including programs chdir'd into the PFS.

     pfs-downgrade dirpath
	     Downgrade a master PFS from master to slave operation.  The PFS
	     becomes read-only and access will be locked to its sync-end-tid.

	     This directive will refuse to run if any programs have open
	     descriptors in the PFS, including programs chdir'd into the PFS.

     pfs-destroy dirpath
	     This permanently destroys a PFS.

	     This directive will refuse to run if any programs have open
	     descriptors in the PFS, including programs chdir'd into the PFS.
	     As a safety measure the -y flag has no effect on this directive.

     mirror-read filesystem [begin-tid]
	     Generate a mirroring stream to stdout.  The stream ends when the
	     transaction id space has been exhausted.  filesystem may be a
	     master or slave PFS.

     mirror-read-stream filesystem [begin-tid]
	     Generate a mirroring stream to stdout.  Upon completion the
	     stream is paused until new data is synced to the filesystem, then
	     resumed.  Operation continues until the pipe is broken.  See the
	     mirror-stream command for more details.

     mirror-write filesystem
	     Take a mirroring stream on stdin.	filesystem must be a slave
	     PFS.

	     This command will fail if the shared-uuid configuration fields for
	     the two file systems do not match.	 See the mirror-copy command
	     for more details.

	     If the target PFS does not exist this command will ask you
	     whether you want to create a compatible PFS slave for the target
	     or not.

     mirror-dump
	     A mirror-read can be piped into a mirror-dump to dump an ASCII
	     representation of the mirroring stream.

     mirror-copy [[user@]host:]filesystem [[user@]host:]filesystem
	     This is a shortcut which pipes a mirror-read command to a
	     mirror-write command.  If a remote host specification is made the
	     program forks an ssh(1) and execs the mirror-read and/or
	     mirror-write on the appropriate host.  The source may be a master
	     or slave PFS, and the target must be a slave PFS.

	     This command also establishes full duplex communication and turns
	     on the 2-way protocol feature (-2) which automatically negotiates
	     transaction id ranges without having to use a cyclefile.  If the
	     operation completes successfully the target PFS's sync-end-tid
	     will be updated.  Note that you must re-chdir into the target PFS
	     to see the updated information.  If you do not you will still be
	     in the previous snapshot.

	     If the target PFS does not exist this command will ask you
	     whether you want to create a compatible PFS slave for the target
	     or not.
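
	     For example, to perform the initial copy of a local master PFS
	     into a slave on a remote machine over ssh(1).  This is a sketch
	     only; the user, host and PFS paths are illustrative:

		   # copy the local master into a slave PFS on backuphost;
		   # -X enables ssh compression, -v reports progress
		   hammer -v -X mirror-copy /hammer/pfs/data \
		       backup@backuphost:/hammer/pfs/data.slave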

     mirror-stream [[user@]host:]filesystem [[user@]host:]filesystem
	     This is a shortcut which pipes a mirror-read-stream command to a
	     mirror-write command.  This command works similarly to
	     mirror-copy but does not exit after the initial mirroring com‐
	     pletes.  The mirroring operation will resume as changes continue
	     to be made to the source.	The command is commonly used with -i
	     delay and -b bandwidth options to keep the mirroring target in
	     sync with the source on a continuing basis.

	     If the pipe is broken the command will automatically retry after
	     sleeping for a short while.  The time slept will be 15 seconds
	     plus the time given in the -i option.

	     This command also detects the initial-mirroring case and spends
	     some time scanning the B-Tree to find good break points, allowing
	     the initial bulk mirroring operation to be broken down into 4GB
	     pieces.  This means that the user can kill and restart the opera‐
	     tion and it will not have to start from scratch once it has got‐
	     ten past the first chunk.	The -S option may be used to change
	     the size of pieces and the -B option may be used to disable this
	     feature and perform an initial bulk transfer instead.
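
	     For example, to keep a remote slave continuously in sync while
	     limiting bandwidth and batch rate.  This is a sketch only; the
	     user, host and PFS paths are illustrative:

		   # continuous mirroring: at most 10 MB/s, at least 60
		   # seconds between batches
		   hammer -b 10m -i 60 mirror-stream /hammer/pfs/data \
		       backup@backuphost:/hammer/pfs/data.slave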

     version filesystem
	     This command returns the HAMMER file system version for the spec‐
	     ified filesystem as well as the range of versions supported in
	     the kernel.  The -q option may be used to remove the summary at
	     the end.

     version-upgrade filesystem version [force]
	     Upgrade the HAMMER filesystem to the specified version.  Once
	     upgraded a file system may not be downgraded.  If you wish to
	     upgrade a file system to a version greater or equal to the work-
	     in-progress (WIP) version number you must specify the force
	     directive.	 Use of WIP versions should be relegated to testing
	     and may require wiping the file system as development progresses,
	     even though the WIP version might not change.

	     NOTE! This command operates on the entire HAMMER file system and
	     is not a per PFS operation.  All PFS's will be affected.

	     1	     DragonFly 2.0 default version, first HAMMER release.

	     2	     DragonFly 2.3.  New directory entry layout.  This version
		     is using a new directory hash key.

	     3	     DragonFly 2.5.  New snapshot management, using file sys‐
		     tem meta-data for saving configuration file and snapshots
		     (transaction ids etc.).  Also default snapshots directory
		     has changed.

	     4	     DragonFly 2.6 default version.  New undo/redo/flush, giv‐
		     ing HAMMER a much faster sync and fsync.

	     5	     DragonFly 2.9.  Deduplication support.

	     6	     DragonFly 2.9.  Directory hash ALG1.  Tends to maintain
		     inode number / directory name entry ordering better for
		     files after minor renaming.

PSEUDO-FILESYSTEM (PFS) NOTES
     The root of a PFS is not hooked into the primary HAMMER file system as a
     directory.	 Instead, HAMMER creates a special softlink called ‘@@PFS%05d’
     (exactly 10 characters long) in the primary HAMMER file system.  HAMMER
     then modifies the contents of the softlink as read by readlink(2), and
     thus what you see with an ls command or if you were to cd into the link.
     If the PFS is a master the link reflects the current state of the PFS.
     If the PFS is a slave the link reflects the last completed snapshot, and
     the contents of the link will change when the next snapshot is completed,
     and so forth.

     The hammer utility employs numerous safeties to reduce user foot-shoot‐
     ing.  The mirror-copy directive requires that the target be configured as
     a slave and that the shared-uuid field of the mirroring source and target
     match.

DOUBLE_BUFFER MODE
     There is a limit to the number of vnodes the kernel can cache, and
     because file buffers are associated with a vnode the related data cache
     can get blown away when operating on large numbers of files even if the
     system has sufficient memory to hold the file data.

     If you turn on HAMMER's double buffer mode by setting the sysctl(8) node
     vfs.hammer.double_buffer to 1 HAMMER will cache file data via the block
     device and copy it into the per-file buffers as needed.  The data will be
     double-cached at least until the buffer cache throws away the file buf‐
     fer.  This mode is typically used in conjunction with swapcache(8) when
     vm.swapcache.data_enable is turned on in order to prevent unnecessary re-
     caching of file data due to vnode recycling.  The swapcache will save the
     cached VM pages related to HAMMER's block device (which doesn't recycle
     unless you umount the filesystem) instead of the cached VM pages backing
     the file vnodes.

     Double buffering should also be turned on if live dedup is enabled via
     vfs.hammer.live_dedup.  This is because the live dedup must validate the
     contents of a potential duplicate file block and it must run through the
     block device to do that and not the file vnode.  If double buffering is
     not enabled then live dedup will create extra disk reads to validate
     potential data duplicates.
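
     A minimal sketch of enabling double buffering, assuming the usual
     sysctl(8) mechanism and /etc/sysctl.conf for persistence across reboots:

	   # enable HAMMER double buffer mode immediately
	   sysctl vfs.hammer.double_buffer=1
	   # keep it enabled across reboots
	   echo 'vfs.hammer.double_buffer=1' >> /etc/sysctl.conf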

UPGRADE INSTRUCTIONS HAMMER V1 TO V2
     This upgrade changes the way directory entries are stored.	 It is possi‐
     ble to upgrade a V1 file system to V2 in place, but directories created
     prior to the upgrade will continue to use the old layout.

     Note that the slave mirroring code in the target kernel had bugs in V1
     which can create an incompatible root directory on the slave.  Do not mix
     a HAMMER master created after the upgrade with a HAMMER slave created
     prior to the upgrade.

     Any directories created after upgrading will use a new layout.

UPGRADE INSTRUCTIONS HAMMER V2 TO V3
     This upgrade adds meta-data elements to the B-Tree.  It is possible to
     upgrade a V2 file system to V3 in place.  After issuing the upgrade be
     sure to run a hammer cleanup to perform post-upgrade tasks.
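
     A minimal sketch of the upgrade sequence (the mount point is
     illustrative):

	   # upgrade the file system version in place
	   hammer version-upgrade / 3
	   # perform the post-upgrade migration tasks
	   hammer cleanup /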

     After making this upgrade running a hammer cleanup will move the
     <pfs>/snapshots directory for each PFS mount into /var/hammer/<pfs>.  A
     HAMMER root mount will migrate /snapshots into /var/hammer/root.  Migra‐
     tion occurs only once and only if you have not specified a snapshots
     directory in the PFS configuration.  If you have specified a snapshots
     directory in the PFS configuration no automatic migration will occur.

     For slaves, if you desire, you can migrate your snapshots config to the
     new location manually and then clear the snapshot directory configuration
     in the slave PFS.	The new snapshots hierarchy is designed to work with
     both master and slave PFSs equally well.

     In addition, the old config file will be moved to file system meta-data,
     editable via the new hammer viconfig directive.  The old config file will
     be deleted.  Migration occurs only once.

     The V3 file system has new snap* directives for creating snapshots.  All
     snapshot directives, including the original, will create meta-data
     entries for the snapshots and the pruning code will automatically incor‐
     porate these entries into its list and expire them the same way it
     expires softlinks.	 If you by accident blow away your snapshot softlinks
     you can use the snapls directive to get a definitive list from the file
     system meta-data and regenerate them from that list.

     WARNING! If you are using hammer to backup file systems your scripts may
     be using the synctid directive to generate transaction ids.  This direc‐
     tive does not create a snapshot.  You will have to modify your scripts to
     use the snapq directive to generate the linkbuf for the softlink you cre‐
     ate, or use one of the other snap* directives.  The older snapshot direc‐
     tive will continue to work as expected and in V3 it will also record the
     snapshot transaction id in file system meta-data.	You may also want to
     make use of the new note tag for the meta-data.

     WARNING! If you used to remove snapshot softlinks with rm you should
     probably start using the snaprm directive instead to also remove the
     related meta-data.	 The pruning code scans the meta-data so just removing
     the softlink is not sufficient.

UPGRADE INSTRUCTIONS HAMMER V3 TO V4
     This upgrade changes undo/flush, giving faster sync.  It is possible to
     upgrade a V3 file system to V4 in place.  This upgrade reformats the
     UNDO/REDO FIFO (typically 1GB), so the upgrade might take a minute or
     two.

     Version 4 allows the UNDO/REDO FIFO to be flushed without also having to
     flush the volume header, removing 2 of the 4 disk syncs typically
     required for an fsync() and removing 1 of the 2 disk syncs typically
     required for a flush sequence.  Version 4 also implements the REDO log
     (see FSYNC FLUSH MODES below) which is capable of fsync()ing with either
     one disk flush or zero disk flushes.

UPGRADE INSTRUCTIONS HAMMER V4 TO V5
     This upgrade brings in deduplication support.  It is possible to upgrade
     a V4 file system to V5 in place.  Technically it makes the layer2
     bytes_free field a signed value instead of unsigned, allowing it to go
     negative.	A version 5 filesystem is required for dedup operation.

UPGRADE INSTRUCTIONS HAMMER V5 TO V6
     It is possible to upgrade a V5 file system to V6 in place.

FSYNC FLUSH MODES
     HAMMER implements five different fsync flush modes via the
     vfs.hammer.fsync_mode sysctl, for HAMMER version 4+ file systems.

     As of DragonFly 2.6 fsync mode 3 is set by default.  REDO operation and
     recovery are enabled by default.
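
     The mode is selected with sysctl(8); for example, to select mode 2 (shown
     only for illustration):

	   # full synchronous fsync semantics using REDO
	   sysctl vfs.hammer.fsync_mode=2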

     mode 0  Full synchronous fsync semantics without REDO.

	     HAMMER will not generate REDOs.  A fsync() will completely sync
	     the data and meta-data and double-flush the FIFO, including issu‐
	     ing two disk synchronization commands.  The data is guaranteed to
	     be on the media as of when fsync() returns.  Needless to say,
	     this is slow.

     mode 1  Relaxed asynchronous fsync semantics without REDO.

	     This mode works the same as mode 0 except the last disk synchro‐
	     nization command is not issued.  It is faster than mode 0 but not
	     even remotely close to the speed you get with mode 2 or mode 3.

	     Note that there is no chance of meta-data corruption when using
	     this mode; it simply means that the data you wrote and then
	     fsync()'d might not have made it to the media if the storage sys‐
	     tem crashes at a bad time.

     mode 2  Full synchronous fsync semantics using REDO.  NOTE: If not run‐
	     ning a HAMMER version 4 filesystem or later mode 0 is silently
	     used.

	     HAMMER will generate REDOs in the UNDO/REDO FIFO based on a
	     heuristic.	 If this is sufficient to satisfy the fsync() opera‐
	     tion the blocks will be written out and HAMMER will wait for the
	     I/Os to complete, and then followup with a disk sync command to
	     guarantee the data is on the media before returning.  This is
	     slower than mode 3 and can result in significant disk or SSD
	     overhead, though not as bad as mode 0 or mode 1.

     mode 3  Relaxed asynchronous fsync semantics using REDO.  NOTE: If not
	     running a HAMMER version 4 filesystem or later mode 1 is silently
	     used.

	     HAMMER will generate REDOs in the UNDO/REDO FIFO based on a
	     heuristic.	 If this is sufficient to satisfy the fsync() opera‐
	     tion the blocks will be written out and HAMMER will wait for the
	     I/Os to complete, but will NOT issue a disk synchronization com‐
	     mand.

	     Note that there is no chance of meta-data corruption when using
	     this mode; it simply means that the data you wrote and then
	     fsync()'d might not have made it to the media if the storage sys‐
	     tem crashes at a bad time.

	     This mode is the fastest production fsyncing mode available.
	     This mode is equivalent to how the UFS fsync in the BSDs oper‐
	     ates.

     mode 4  fsync is ignored.

	     Calls to fsync() will be ignored.	This mode is primarily
	     designed for testing and should not be used on a production sys‐
	     tem.

RESTORING FROM A SNAPSHOT BACKUP
     You restore a snapshot by copying it over to live, but there is a caveat.
     The mtime and atime fields for files accessed via a snapshot are locked
     to the ctime in order to keep the snapshot consistent, because neither
     mtime nor atime changes roll any history.

     In order to avoid unnecessary copying it is recommended that you use
     cpdup -VV -v when doing the copyback.  Also make sure you traverse the
     snapshot softlink by appending a ".", as in "<snapshotpath>/.", and that
     you match up the directory properly.
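
     For example, reusing the softlink from the snapshot examples above (the
     paths are illustrative):

	   # traverse the softlink with a trailing "/." and copy back to live
	   cpdup -VV -v /mysnapshots/obj-2008-06-27/. /obj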

RESTORING A PFS FROM A MIRROR
     A PFS can be restored from a mirror with mirror-copy.  The config data
     must be copied separately.  Finally, the PFS can be upgraded to master
     using pfs-upgrade.
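
     A minimal sketch of the sequence, assuming the backup slave lives on a
     remote host (the user, host and PFS paths are illustrative):

	   # recreate the PFS locally as a slave and mirror the backup into it
	   hammer mirror-copy backup@backuphost:/hammer/pfs/data.slave \
	       /hammer/pfs/data
	   # restore the cleanup configuration separately (e.g. via viconfig)
	   hammer viconfig /hammer/pfs/data
	   # promote the restored slave to master
	   hammer pfs-upgrade /hammer/pfs/data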

     It is not possible to restore the root PFS (PFS# 0) by using mirroring,
     as the root PFS is always a master PFS.  A normal copy (e.g. using
     cpdup(1)) must be done, ignoring history.	If history is important, the
     old root PFS can be restored to a new PFS, and important
     directories/files can be null mounted to the new PFS.

EXIT STATUS
     The hammer utility exits 0 on success, and >0 if an error occurs.

ENVIRONMENT
     If the following environment variables exist, they will be used by
     hammer:

     EDITOR  The editor program specified in the variable EDITOR will be
	     invoked instead of the default editor, which is vi(1).

     VISUAL  Same effect as EDITOR variable.

FILES
     <pfs>/snapshots	     default per PFS snapshots directory (HAMMER VER‐
			     SION 2-)
     /var/hammer/<pfs>	     default per PFS snapshots directory (not root)
			     (HAMMER VERSION 3+)
     /var/hammer/root	     default snapshots directory for root directory
			     (HAMMER VERSION 3+)
     <snapshots>/config	     per PFS hammer cleanup configuration file (HAMMER
			     VERSION 2-)
     <fs>/var/slaves/<name>  recommended slave PFS snapshots directory (HAMMER
			     VERSION 2-)
     <fs>/pfs		     recommended PFS directory

SEE ALSO
     ssh(1), undo(1), HAMMER(5), periodic.conf(5), loader(8), mount_hammer(8),
     mount_null(8), newfs_hammer(8), swapcache(8), sysctl(8)

HISTORY
     The hammer utility first appeared in DragonFly 1.11.

AUTHORS
     Matthew Dillon ⟨dillon@backplane.com⟩

BSD				April 19, 2011				   BSD