vxtunefs man page on HP-UX

vxtunefs(1M)							  vxtunefs(1M)

NAME
       vxtunefs - tune a VxFS File System

SYNOPSIS
       vxtunefs [-ps] [-f tunefstab] [-o parameter=value]
	    [{mount_point|block_special}]...

DESCRIPTION
       vxtunefs sets or prints tuneable I/O parameters of mounted file
       systems.  vxtunefs can set parameters describing the I/O properties of
       the underlying device, parameters to indicate when to treat an I/O as
       direct I/O, or parameters to control the extent allocation policy for
       the specified file system.

       With no options specified, vxtunefs prints the existing VxFS
       parameters for the specified file systems.

       vxtunefs works on a list of mount points specified on the command
       line, or all the mounted file systems listed in the tunefstab file.
       The default tunefstab file is /etc/vx/tunefstab.  You can change the
       default using the -f option.

       vxtunefs can be run at any time on a mounted file system, and all
       parameter changes take immediate effect.  Parameters specified on the
       command line override parameters listed in the tunefstab file.

       If /etc/vx/tunefstab exists, the VxFS-specific mount command invokes
       vxtunefs to set device parameters from /etc/vx/tunefstab.
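
       As an illustrative sketch of such a tuning file (the device names and
       values below are invented assumptions, not recommendations;
       tunefstab(4) documents the authoritative format):

```
# Hypothetical /etc/vx/tunefstab entries (illustrative values only).
# Each entry names a block special device and the parameters to apply
# to the VxFS file system on it; mount applies them automatically.
/dev/vx/dsk/datadg/vol01    read_pref_io=65536,read_nstream=4
/dev/vx/dsk/datadg/vol02    discovered_direct_iosz=262144
```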

       If the file system is built on a VERITAS Volume Manager (VxVM) volume,
       the VxFS-specific mount command interacts with VxVM to obtain default
       values for the tunables, so you need to specify tunables for VxVM
       devices only to change the defaults.

       Only a privileged user can run vxtunefs.

   Options
       vxtunefs recognizes the following options:

              -f filename    Use filename instead of /etc/vx/tunefstab as
                             the file containing tuning parameters.

              -o parameter=value
                             Specify parameters for the file systems listed
                             on the command line.  See the "VxFS Tuning
                             Parameters and Guidelines" topic in this
                             section.

              -p             Print the tuning parameters for all the file
                             systems specified on the command line.

              -s             Set the new tuning parameters for the VxFS file
                             systems specified on the command line or in the
                             tunefstab file.

   Operands
       vxtunefs recognizes the following operands:

	      mount_point    Name of directory for a mounted VxFS file system.

	      block_special  Name  of  the block_special device which contains
			     the VxFS file system.

   Notes
       vxtunefs works with Storage Checkpoints; however, VxFS tunables apply
       to an entire file system.  Therefore, tunables affect not only the
       primary fileset, but also any Storage Checkpoint filesets within that
       file system.

       The tunables and are not supported on HP-UX.

   VxFS Tuning Parameters and Guidelines
       The values for the size parameters that follow can be specified in
       bytes, kilobytes, megabytes, or sectors (1024 bytes) by appending the
       suffix k, K, m, M, s, or S to the value.  A value given in bytes needs
       no suffix.

       If the file system is being used with a hardware disk array or another
       volume manager (such as VxVM), align the parameters to match the
       geometry of the logical disk.  For disk striping and RAID-5
       configurations, set read_pref_io to the stripe unit size or interleave
       factor and set read_nstream to the number of columns.  For disk
       striping configurations, set write_pref_io and write_nstream to the
       same values as read_pref_io and read_nstream; for RAID-5
       configurations, set write_pref_io to the full stripe size and set
       write_nstream to 1.
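
       The alignment rules above amount to simple arithmetic.  The following
       sketch derives values for a hypothetical stripe geometry (the stripe
       unit, column count, and the data-columns definition of a full stripe
       are all assumptions for illustration):

```shell
# Derive VxFS I/O tunables from an assumed stripe geometry, following
# the alignment guidance above.  All values are illustrative.
stripe_unit=65536   # stripe unit size / interleave factor in bytes (assumed)
ncols=8             # number of stripe columns (assumed)

# Striping: preferred I/O size = stripe unit, nstream = number of columns
read_pref_io=$stripe_unit
read_nstream=$ncols
write_pref_io=$stripe_unit
write_nstream=$ncols

# RAID-5: write_pref_io = full stripe size, write_nstream = 1.  The full
# stripe is taken here as stripe unit times the data columns, assuming
# one column's worth of parity per stripe.
raid5_write_pref_io=$((stripe_unit * (ncols - 1)))
raid5_write_nstream=1

echo "striping: pref_io=$read_pref_io nstream=$read_nstream"
echo "raid5 writes: pref_io=$raid5_write_pref_io nstream=$raid5_write_nstream"
```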

       For an application to do efficient direct I/O or discovered direct
       I/O, it should issue read requests that are equal to the product of
       read_nstream and read_pref_io.  In general, any multiple or factor of
       read_nstream multiplied by read_pref_io is a good size for
       performance.  For writing, the same general rule applies to the
       write_pref_io and write_nstream parameters.  When tuning a file
       system, the best thing to do is use the tuning parameters under a real
       workload.

       If an application is doing sequential I/O to large files, it should
       issue requests larger than the discovered_direct_iosz.  This performs
       the I/O requests as discovered direct I/O requests, which are
       unbuffered like direct I/O, but which do not require synchronous inode
       updates when extending the file.  If the file is too large to fit in
       the cache, using unbuffered I/O avoids losing useful data out of the
       cache, and lowers CPU overhead.
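
       The sizing rules above can be checked with simple arithmetic.  In this
       sketch the 64K and 256K figures are the defaults quoted on this page,
       while the read_nstream value and the request size are assumptions:

```shell
# Check a request size against the product rule and the discovered
# direct I/O threshold described above.
read_pref_io=65536             # preferred read size, default 64K
read_nstream=4                 # parallel read streams (assumed)
discovered_direct_iosz=262144  # default 256K

good_read=$((read_pref_io * read_nstream))  # product rule from above
echo "efficient read request size: $good_read bytes"

request=524288   # a 512K sequential read (assumed)
if [ "$request" -gt "$discovered_direct_iosz" ]; then
    echo "$request-byte request is handled as discovered direct I/O"
fi
```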

       The VxFS tuneable parameters are:

       default_indir_size
	    On VxFS, files can have up to 10 variable
	    sized  extents stored in the inode.	 After these extents are used,
	    the file must use indirect extents which are a fixed size that  is
	    set	 when  the  file  first uses indirect extents.	These indirect
	    extents are 8K by default.	The file system does  not  use	larger
	    indirect extents because it must fail a write and return ENOSPC if
	    there are no extents available that are the indirect extent	 size.
	    For	 file  systems	with  many large files, the 8K indirect extent
	    size is too small.	The files that get into indirect extents use a
	    lot	 of  smaller  extents  instead of a few larger ones.  By using
	    this parameter, the default indirect extent size can be  increased
	    so that large files in indirects use fewer larger extents.

	    Be careful using this tuneable.  If it is too large, then writes
	    fail when they are unable to allocate extents of the indirect
	    extent size to a file.  In general, the fewer and the larger the
	    files on a file system, the larger default_indir_size can be.
	    The value of this parameter is generally a multiple of the
	    read_pref_io parameter.

	    This tuneable does not apply to disk layout Version 4.

       discovered_direct_iosz
	    Any file I/O requests larger than the discovered_direct_iosz
	    are	 handled as discovered direct I/O.  A discovered direct I/O is
	    unbuffered like direct I/O, but it does not require a  synchronous
	    commit  of the inode when the file is extended or blocks are allo‐
	    cated.  For larger I/O requests, the CPU time for copying the data
	    into  the  buffer cache and the cost of using memory to buffer the
	    I/O becomes more expensive than the cost of doing  the  disk  I/O.
	    For	 these I/O requests, using discovered direct I/O is more effi‐
	    cient than regular I/O.  The default value of  this	 parameter  is
	    256K.

       fcl_keeptime
	    Specifies the minimum amount of time,
	    in	seconds,  that the VxFS File Change Log (FCL) keeps records in
	    the log.  When the oldest 8K block of FCL records has been kept
	    longer than the value of fcl_keeptime, it is purged from the FCL
	    and the extents nearest to the beginning of the FCL file are
	    freed.  This
	    process is referred to as "punching a hole."  Holes are punched in
	    the FCL file in 8K chunks.

	    If the fcl_maxalloc parameter is set, records are purged from the
	    FCL if the amount of space allocated to the FCL exceeds
	    fcl_maxalloc, even if the elapsed time the records have been in
	    the log is less than the value of fcl_keeptime.  If the file
	    system runs out of space before fcl_keeptime is reached, the FCL
	    is deactivated.

	    Either or both of the fcl_keeptime or fcl_maxalloc parameters
	    must be set before the File Change Log can be activated.
	    fcl_keeptime operates only on Version 6 or higher disk layout
	    file systems.

       fcl_maxalloc
	    Specifies the maximum amount of space that can be allocated to the
	    VxFS File Change Log.  The FCL file is a sparse file that grows as
	    changes occur in the file system.  When the space allocated to the
	    FCL file reaches the fcl_maxalloc value, the oldest FCL records
	    are purged from
	    the	 FCL  and the extents nearest to the beginning of the FCL file
	    are freed.	This process is referred  to  as  "punching  a	hole."
	    Holes are punched in the FCL file in 8K chunks.  If the file
	    system runs out of space before fcl_maxalloc is reached, the FCL
	    is deactivated.

	    Either or both of the fcl_maxalloc or fcl_keeptime parameters
	    must be set before the File Change Log can be activated.
	    fcl_maxalloc operates only on Version 6 or higher disk layout
	    file systems.

       fcl_winterval
	    Specifies the time,
	    in seconds, that must elapse  before  the  VxFS  File  Change  Log
	    records  a	data overwrite, data extending write, or data truncate
	    for a file.	 The ability to limit the  number  of  repetitive  FCL
	    records  for  continuous  writes to the same file is important for
	    file system performance and for applications processing the FCL.
	    fcl_winterval is best set to an interval less than the shortest
	    interval between reads of the FCL by any application.  This way
	    all applications
	    using  the	FCL  can be assured of finding at least one FCL record
	    for any file experiencing continuous data changes.

	    fcl_winterval is enforced for all files in the file system.  Each
	    file maintains
	    its	 own  time stamps, and the elapsed time between FCL records is
	    per file.  This elapsed time can be overridden using the VxFS FCL
	    sync public API (see the vxfs_fcl_sync(3) manual page).

	    fcl_winterval operates only on Version 6 or higher disk layout
	    file systems.

       hsm_write_prealloc
	    For a file managed by a hierarchical storage management (HSM)
	    application, hsm_write_prealloc preallocates disk blocks before
	    data is
	    migrated back into the file system.	 An  HSM  application  usually
	    migrates  the  data	 back  through a series of writes to the file,
	    each of which allocates a few blocks.  By setting
	    hsm_write_prealloc, a sufficient number of disk blocks is
	    allocated on the first write to the empty file so that no disk
	    block allocation is required for subsequent writes, which
	    improves the write performance during migration.

	    The hsm_write_prealloc parameter is implemented outside of the
	    DMAPI specification, and its usage has limitations depending on
	    how the space within an HSM-controlled file is managed.  It is
	    advisable to use hsm_write_prealloc only when recommended by the
	    HSM application controlling the file system.

       initial_extent_size
	    Changes the default size of the initial extent.

	    VxFS  determines, based on the first write to a new file, the size
	    of the first extent to allocate to the file. Typically  the	 first
	    extent  is the smallest power of 2 that is larger than the size of
	    the first write. If that power of 2 is less	 than  8K,  the	 first
	    extent allocated is 8K.  After the initial extent, the file
	    system increases the size of subsequent extents (see
	    max_seqio_extent_size) with each allocation.

	    Because most applications write to files using a buffer size of
	    8K or less, the increasing extents start doubling from a small
	    initial extent.  initial_extent_size changes the default initial
	    extent size to a larger value, so the doubling policy starts from
	    a much larger initial size, and the file system won't allocate a
	    set of small extents at the start of a file.

	    Use	 this  parameter  only	on file systems that have a very large
	    average file size. On such file systems, there are	fewer  extents
	    per file and less fragmentation.

	    initial_extent_size is measured in file system blocks.

       inode_aging_count
	    Specifies the maximum number of inodes to
	    place  on an inode aging list.  Inode aging is used in conjunction
	    with file system Storage Checkpoints to allow quick restoration of
	    large,  recently  deleted  files.  The aging list is maintained in
	    first-in-first-out (FIFO) order up to the maximum number of
	    inodes specified by inode_aging_count.  As newer inodes are
	    placed on the list, older inodes are removed to complete their
	    aging process.  For best performance, it is advisable to age only
	    a limited number of larger files before completion of the removal
	    process.  The default maximum number of inodes to age is 2048.

       inode_aging_size
	    Specifies the minimum size to qualify a deleted inode for inode
	    aging.
	    Inode aging is used in conjunction with file system Storage Check‐
	    points to allow  quick  restoration	 of  large,  recently  deleted
	    files.   For  best performance, it is advisable to age only a lim‐
	    ited number of larger  files  before  completion  of  the  removal
	    process.  Setting the size too low can push larger file inodes out
	    of the aging queue to make room for	 newly	removed	 smaller  file
	    inodes.

       max_buf_data_size
	    Determines the maximum buffer size allocated for file data.
	    The	 two  accepted	values are 8K bytes and 64K bytes.  The larger
	    value can be beneficial for workloads where large reads/writes are
	    performed  sequentially.  The smaller value is preferable on work‐
	    loads where the I/O is random or is done  in  small	 chunks.   The
	    default value is 8K bytes.

       max_direct_iosz
	    Maximum size of a direct I/O
	    request  issued  by	 the  file  system.   If there is a larger I/O
	    request, it is broken up into chunks.  This parameter defines  how
	    much memory an I/O request can lock at once; do not set it to more
	    than 20% of memory.

       max_diskq
	    Limits the maximum disk queue generated by a single file.
	    When the file system is flushing data for a file and the number
	    of pages being flushed exceeds max_diskq, processes block until
	    the amount of data being flushed decreases.  Although this does
	    not limit the actual disk queue, it prevents processes that flush
	    data to disk from making the system unresponsive.  The default
	    value is 1 megabyte.

	    See the write_throttle description for more information on pages
	    and system memory.

       max_seqio_extent_size
	    Increases or decreases the maximum size of an extent.
	    When  the  file  system is following its default allocation policy
	    for sequential writes to a file, it allocates  an  initial	extent
	    that  is  large enough for the first write to the file. When addi‐
	    tional extents are allocated, they are progressively  larger  (the
	    algorithm  tries  to  double  the  size  of the file with each new
	    extent), so each extent can hold several  writes  worth  of	 data.
	    This  reduces  the total number of extents in anticipation of con‐
	    tinued sequential writes. When there are no	 more  writes  to  the
	    file, unused space is freed for other files to use.

	    In	general,  this allocation stops increasing the size of extents
	    at 2048 blocks, which prevents one	file  from  holding  too  much
	    unused space.

	    max_seqio_extent_size is measured in file system blocks.  The
	    default value for this tunable is 2048 blocks.  Setting
	    max_seqio_extent_size to a value less than 2048 automatically
	    resets it to the default value of 2048 blocks.
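
	    The doubling policy described above can be sketched numerically.
	    The 2048-block cap is the default from this page; the block size
	    and the 8-block initial extent are assumptions:

```shell
# Simulate successive extent sizes under the doubling policy, capped
# at max_seqio_extent_size (in file system blocks).  Illustrative only.
max_blocks=2048
ext=8          # initial extent: 8K at an assumed 1K block size
sizes=""
i=0
while [ "$i" -lt 10 ]; do
    sizes="$sizes $ext"
    ext=$((ext * 2))
    if [ "$ext" -gt "$max_blocks" ]; then ext=$max_blocks; fi
    i=$((i + 1))
done
echo "extent sizes (blocks):$sizes"
```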

       qio_cache_enable
	    Enables or disables caching on Quick I/O for Databases files.
	    The default behavior is to disable caching.  To enable caching,
	    set qio_cache_enable to 1.

	    On systems with large  amounts  of	memory,	 the  database	cannot
	    always  use all of the memory as a cache.  By enabling file system
	    caching as a second level cache, performance can improve.

	    If the database is performing  sequential  scans  of  tables,  the
	    scans  can	run faster by enabling file system caching so the file
	    system performs aggressive read aheads on the files.

       read_ahead
	    In the absence of a specific caching advisory,
	    the default for all VxFS read operations is to perform  sequential
	    read  ahead.   The enhanced read ahead functionality implements an
	    algorithm that allows read aheads to detect	 more  elaborate  pat‐
	    terns  (such  as  increasing or decreasing read offsets, or multi‐
	    threaded file accesses) in addition to  simple  sequential	reads.
	    You can specify the following values for read_ahead:

	    0    Disables read ahead functionality
	    1    Retains traditional sequential read ahead behavior
	    2    Enables enhanced read ahead for all reads

	    By default, read_ahead is set to 1, that is, VxFS detects only
	    sequential patterns.

	    read_ahead detects patterns on a per-thread basis, up to a
	    maximum of vx_era_nthreads.  The default number of threads is 5;
	    however, you can change the default value by setting the
	    vx_era_nthreads parameter in the system configuration file,
	    /stand/system.

       read_nstream
	    The number of parallel read
	    requests of size read_pref_io to have outstanding at one time.
	    The file system uses the product of read_nstream and read_pref_io
	    to determine its read ahead size.  The default value for
	    read_nstream is 1.

       read_pref_io
	    The preferred read request size.  The file system uses this in
	    conjunction with the read_nstream value to determine how much
	    data to read ahead.  The default value is 64K.

       write_nstream
	    The number of parallel write
	    requests of size write_pref_io to have outstanding at one time.
	    The file system uses the product of write_nstream and
	    write_pref_io to determine when to do flush behind on writes.
	    The default value for write_nstream is 1.

       write_pref_io
	    The preferred write request size.  The file system uses this in
	    conjunction with the write_nstream value to determine how to do
	    flush behind on writes.  The default value is 64K.

       write_throttle
	    When data is written to a file through buffered writes,
	    the file system updates only the in-memory image of the file, cre‐
	    ating what are referred to as dirty buffers.   Dirty  buffers  are
	    cleaned when the file system later writes the data in these
	    buffers to disk.  (Note that  data	can  be	 lost  if  the	system
	    crashes before dirty buffers are written to disk.)

	    Newer model computer systems typically have more memory.  The more
	    physical memory a system has, the more dirty buffers the file sys‐
	    tem	 can  generate	before	having to write the buffers to disk to
	    free up memory.  So more dirty buffers can potentially lead to
	    longer return times for operations that write dirty buffers to
	    disk, such as sync.  If your system has a combination of a slow
	    storage device and a large amount of memory, the sync operations
	    may take long enough to complete that they give the appearance
	    of a hung system.

	    If your system is exhibiting this behavior, you can lower the
	    value of write_throttle, which limits the number of dirty buffers
	    per file that the file system will generate before writing them
	    to disk.  After the number of dirty buffers for a file reaches
	    the write_throttle threshold, the file system starts flushing
	    buffers to disk even if free memory is still available.
	    Depending on the speed of the storage device, user write
	    performance may suffer, but the number of dirty buffers is
	    limited, so sync operations will complete much faster.

	    The default value of write_throttle is zero, which places no
	    limit on the number of dirty buffers per file.  This typically
	    generates a large number of dirty buffers, but maintains fast
	    user writes.  If write_throttle is non-zero, VxFS limits the
	    number of dirty buffers per file to write_throttle buffers.  In
	    some cases, this may delay write requests.  For example, lowering
	    the value of write_throttle may increase the file disk queue to
	    the max_diskq value, delaying user writes until the disk queue
	    decreases.  So unless the system has a combination of large
	    physical memory and slow storage devices, it is advisable not to
	    change the value of write_throttle.

FILES
       /etc/vx/tunefstab	 VxFS file system tuning parameters table.

SEE ALSO
       mkfs_vxfs(1M), mount(1M), mount_vxfs(1M), sync(2), tunefstab(4),
       vxfsio(7).

								  vxtunefs(1M)