NSR_STORAGE_NODE(5)					   NSR_STORAGE_NODE(5)

NAME
       nsr_storage_node - description of the storage node feature

SYNOPSIS
       The storage node feature provides central server control of distributed
       devices for saving and recovering client data.

DESCRIPTION
       A storage node is a host with directly attached devices that are used
       and controlled by a NetWorker server.  These devices are called remote
       devices, because they are remote from the server.  Clients may save to
       and recover from these remote devices by altering their "storage
       nodes" attribute (see nsr_client(5)).  A storage node may also be a
       client of the server, and may save to its own devices.

       The  main  advantages  provided	by this feature are central control of
       remote devices, reduction of network traffic, use of faster local saves
       and recovers on a storage node, and support of heterogeneous server and
       storage node architectures.

       Several attributes affect this feature.  Within the NSR resource (see
       nsr_service(5)) there are the "nsrmmd polling interval", "nsrmmd
       restart interval" and "nsrmmd control timeout" attributes.  These
       attributes control how often the remote media daemons (see nsrmmd(8))
       are polled, how long to wait between restart attempts, and how long to
       wait for remote requests to complete.
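
       For example, these attributes can be inspected from the server with
       the NetWorker administration program nsradmin(8); the excerpt and
       values below are illustrative only, not documented defaults:

              nsradmin> print type: NSR
              ...
                      nsrmmd polling interval: 2;
                      nsrmmd restart interval: 2;
                      nsrmmd control timeout: 5;
              ...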

       Within the "NSR device" resource (see nsr_device(5)) the resource's
       name will accept the "rd=hostname:dev_path" format when defining a
       remote device.  The "hostname" is the hostname of the storage node and
       "dev_path" is the device path of the device attached to that host.
       There are also hidden attributes called "save mount timeout" and "save
       lockout," which allow a pending save mount request to time out, and a
       storage node to be locked out from upcoming save requests.
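
       For example, a tape drive attached to a storage node named "nodeA"
       (the hostname and device path below are illustrative) could be defined
       as a device whose name is:

              rd=nodeA:/dev/ntape/tape0_d1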

       Within  the "NSR client" resource (see nsr_client(5)), there are "stor‐
       age  nodes",  "clone  storage  nodes",  and  "recover  storage	nodes"
       attributes:
	  The  "storage	 nodes" attribute is used by the server in selecting a
	  storage node when the client is saving data.

	  During a cloning operation (which is essentially a recover whose
	  output data is directed straight into another save operation), the
	  "clone storage nodes" attribute of the (first) client whose data is
	  being cloned is consulted to determine where to direct the data for
	  the save side of the operation.

	  The "recover storage nodes" attribute	 is  used  by  the  server  in
	  selecting  a	storage	 node  to  be  used when the client performs a
	  recover (or the recover side of a clone operation). Note that if the
	  volume in question is already mounted, it will be used from its cur‐
	  rent location rather than being unmounted and remounted on a	system
	  that	is  in the "recover storage node" list. If the volume in ques‐
	  tion is in a jukebox, and the jukebox has a value set for its	 "read
	  hostname" attribute then that designated system will be used instead
	  of consulting the "recover storage node" list, unless	 the  environ‐
	  ment variable FORCE_REC_AFFINITY is set to "yes".
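
       An illustrative excerpt of a client resource with these attributes set
       (all hostnames below are hypothetical; "server1" is the NetWorker
       server):

              type: NSR client;
              name: mars;
              storage nodes: nodeA, nodeB, server1;
              clone storage nodes: nodeB;
              recover storage nodes: nodeA, server1;

       With such a configuration, saves from "mars" are directed to nodeA if
       it is available, then nodeB, then the server's local devices, while
       recovers prefer nodeA unless the volume is already mounted elsewhere
       or a jukebox "read hostname" applies (see above).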

       The  "NSR  jukebox"  resource  (see nsr_jukebox(5)), contains the "read
       hostname" attribute.  When all of a jukebox's devices are not  attached
       to the same host, this attribute specifies the hostname that is used in
       selecting a storage node for recover and read-side clone requests.  For
       recover	requests,  if  the  required  volume  is  not mounted, and the
       client's "storage nodes" attribute does not match  one  of  the	owning
       hosts in the jukebox, then this attribute is used.  For clone requests,
       if the required volume is not mounted, then this attribute is used.
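
       An illustrative excerpt of a jukebox resource whose drives are split
       between two storage nodes and which designates one of them, "nodeB"
       (a hypothetical hostname), for read requests:

              type: NSR jukebox;
              name: autochanger1;
              read hostname: nodeB;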

INSTALL AND CONFIGURE
       To install a storage node, select both the client and storage node
       packages on platforms that offer them separately.  On platforms that
       do not, the storage node binaries are included in the client package.
       In addition, install any appropriate device driver packages.  If not
       running in evaluation mode, a storage node enabler must be configured
       on the server for each node.

       As with a client, ensure that the nsrexecd(8) daemon is started on the
       storage node.  To define a device on a storage node, define a device
       from the controlling server using the "rd=" syntax described above.
       For a remote jukebox (on a storage node), run jbconfig(8) from the
       node after adding root@storage_node to the server's administrator list
       (where root is the user running jbconfig(8) and storage_node is the
       hostname of the storage node).  This administrator list entry may be
       removed after jbconfig(8) completes.
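
       A typical sequence, run as root on the storage node once the
       administrator list entry is in place (the server name "server1" is
       illustrative):

              # ensure the client execution daemon is running on the node
              nsrexecd
              # configure the jukebox attached to this node, naming the
              # controlling server
              jbconfig -s server1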

       As with jbconfig(8), root@storage_node must be on the administrator
       list when running scanner(8) on a storage node.

       When a device is defined (or enabled) on a storage node, the server
       attempts to start a media daemon (see nsrmmd(8)) on the node.  To
       determine whether the node is alive, the server polls the node every
       "nsrmmd polling interval" minutes.  When the server detects a problem
       with the node's daemon or with the node itself, it attempts to restart
       the daemon every "nsrmmd restart interval" minutes, until either the
       daemon is restarted or the device is disabled (by setting the device's
       "enabled" attribute to "no").

       In addition to needing a storage node enabler for  each	storage	 node,
       each jukebox will need its own jukebox enabler.

OPERATION
       A storage node is assignable for work when it is considered functional
       by the server - nsrexecd(8) running, the device enabled, nsrmmd(8)
       running, and the node responding to the server's polls.  When a client
       save starts, the client's "storage nodes" attribute is used to select
       a storage node.  This attribute is a list of storage node hostnames,
       which are considered in order for assignment to the request.

       The exception to this node assignment approach is when the server's
       index or bootstrap is being saved - these save sets are always
       directed to the server's local devices, regardless of the server's
       "storage nodes" attribute.  Hence, the server always needs at least
       one local device to back up such data.  These save sets can later be
       cloned to a storage node, as can any save set.

       If a storage node is created first (by defining a device on the host),
       and a client resource for that host is then added, the hostname is
       automatically added to the new client's "storage nodes" attribute.
       This addition means the client will back up to its own devices.
       However, if a client resource already exists, and a device is later
       defined on that host, then the client's hostname must be added
       manually to the client's "storage nodes" attribute.  This attribute is
       an ordered list of hostnames; add the client's own name as the first
       entry.
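
       A minimal sketch of this manual step using nsradmin(8), assuming a
       storage node and client both named "nodeA" and a server named
       "server1" (both hostnames are hypothetical):

              nsradmin> . type: NSR client; name: nodeA
              nsradmin> update storage nodes: nodeA, server1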

       The volume's location field is used to determine the host  location  of
       an  unmounted volume.  The server looks for a device or jukebox name in
       this field, as would be added when a volume resides in a jukebox.  Vol‐
       umes in a jukebox are considered to be located on the host to which the
       jukebox is connected.  The location field can be used to bind a	stand-
       alone volume to a particular node by manually setting this field to any
       device on that node (using the "rd=" syntax).  For jukeboxes  which  do
       not have all of their devices attached to the same host, see the previ‐
       ous description of the "read hostname" attribute.
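
       For example, to bind a stand-alone volume to a storage node "nodeA"
       (the hostname and device path are illustrative), its location field
       would be set to the name of a device on that node:

              rd=nodeA:/dev/ntape/tape1_d1

       The location field itself can be listed and maintained with
       mmlocate(8).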

       There are several commands that interact directly with a device, and so
       must  run  on  a storage node.  These include jbconfig(8), nsrjb(8) and
       scanner(8), in addition to those in the device driver package.	Invoke
       these  commands directly on the storage node rather than on the server,
       and use the server option ("-s server_host", where server_host  is  the
       controlling server's hostname).
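
       For example, run on the storage node (the device path and server name
       are illustrative):

              scanner -s server1 /dev/ntape/tape0_d1
              nsrjb -s server1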

CLONING FUNCTION
       A  single  clone request may be divided into multiple sub-requests, one
       for each different source machine (the host from which save  sets  will
       be  read).   For	 example,  suppose a clone request must read data from
       volumeA and volumeB, which are  located	on  storage  nodes  A  and  B,
       respectively.   Such  a request would be divided into two sub-requests,
       one to read volumeA from storage node A and  another  to	 read  volumeB
       from storage node B.

       A clone request involves two sides: the source, which reads data, and
       the target, which writes data.  These two sides may be on the same
       host or on different hosts, depending on the configuration.  The
       source host is determined first and then the target host.  If the
       volume is mounted, the source host is determined by the volume's
       current mount location.  If the volume is not mounted at the time of
       the clone request and it resides in a jukebox, then the source host is
       determined by the value of the jukebox's "read hostname" attribute.

       Once the source host is known, the target host is determined by examin‐
       ing  the	 "clone storage nodes" attribute of the client resource of the
       source host.  If this attribute has no value, the "clone storage nodes"
       attribute  of  the  server's  client  resource  is  consulted.  If this
       attribute has no value, the "storage nodes" attribute of	 the  server's
       client resource is used.
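
       For example (hostnames hypothetical): if a clone must read a volume
       mounted on storage node nodeA, and nodeA's client resource has no
       "clone storage nodes" value, but the server's own client resource has
       "clone storage nodes: nodeB", then the save side of the clone is
       directed to nodeB.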

LIMITATIONS
       A server cannot be a storage node of another server.

SEE ALSO
       jbconfig(8), mmlocate(8), nsr_client(5), nsr_device(5), nsr_jukebox(5),
       nsr_service(5), nsrclone(8), nsrexecd(8), nsrjb(8), nsrmmd(8),
       nsrmon(8), scanner(8).

NetWorker 7.3.2			  Aug 23, 06		   NSR_STORAGE_NODE(5)