cmquerycl man page on HP-UX


cmquerycl(1m)							 cmquerycl(1m)

       cmquerycl - query cluster or node configuration information

       cmquerycl  [-k]	[-v]  [-f  format]  [-l	 limit]	 [-w  probe_type]  [-h
       ipv4|ipv6] [-c cluster_name] [-C cluster_ascii_file] [-q	 quorum_server
       [qs_ip2] | -L lock_lun_device] [-n node_name [-L lock_lun_device]]...

       cmquerycl  searches  all	 specified nodes for cluster configuration and
       Logical Volume Manager (LVM) information.  Cluster configuration infor‐
       mation	includes   network  information	 such  as  LAN	interface,  IP
       addresses, bridged  networks  and  possible  heartbeat  networks.   LVM
       information  includes volume group (VG) interconnection and file system
       mount point information.	 This command does not perform automatic  dis‐
       covery  of  lock	 LUN  devices.	(HP-UX only) To prevent cmquerycl from
       probing hardware devices, list them  in	/etc/cmcluster/cmnotdisk.conf,
       giving  the device file name in the DEVICE_FILE section. These could be
       CD-ROM, DVD-ROM, CD-RW or other peripheral devices that should  not  be
       probed,	or  whose description string does not match the possible TYPEs
       listed in /etc/cmcluster/cmignoretypes.conf.

       This command should be run as the first step in preparing  for  cluster
       configuration.	It may also be used as a troubleshooting tool to iden‐
       tify the current configuration of a cluster.

       If neither node_name nor cluster_name is specified, this	 command  will
       search  all  nodes  that	 are accessible over the system's networks and
       return a list of machines running Serviceguard products.	 Also in  this
       situation, the -l and -C options are ignored.

       If  node_name  is specified, it cannot contain the full domain name but
       should match only the system's hostname.

       The -C option may be used to create a cluster ASCII file which  can  be
       customized  for the desired configuration.  This file can then be veri‐
       fied by using the cmcheckconf command and distributed to all the	 clus‐
       ter  nodes  by  using the cmapplyconf command.  If the -C option is not
       specified, output will be directed to stdout.
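       For example, a typical first step on a new cluster (the node names and
       file path here are hypothetical) is:

       ```shell
       # Query two nodes and write an editable cluster ASCII file.
       cmquerycl -v -C /etc/cmcluster/clust1.ascii -n ftsys9 -n ftsys10

       # Run with no arguments to list reachable nodes running Serviceguard.
       cmquerycl
       ```

       The generated file can then be checked with cmcheckconf(1m) and
       distributed with cmapplyconf(1m).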

       cmquerycl supports the following options:

	      -C cluster_ascii_file
			    Cluster configuration information will be saved in
			    cluster_ascii_file,	 which can then be verified by
			    using the cmcheckconf(1m) command and  distributed
			    to	all  other  cluster  nodes  with  the cmapply‐
			    conf(1m) command.  The -C option  is  only	valid,
			    and	 cluster_ascii_file will only be created, when
			    -C is used together with the -c and/or -n options.

	      -c cluster_name
			    Cluster information will be queried	 from  cluster
			    cluster_name.  This option may be used in conjunc‐
			    tion with the -n and -C options to generate a  new
			    ASCII  file for use in adding nodes to or removing
			    nodes from an existing cluster. This  option  also
			    creates  commented-out  entries  in	 the new ASCII
			    file for any  additional  networks	configured  on
			    existing cluster nodes; you can add these networks
			    to a running cluster by uncommenting the  entries.
			    When  -n  is  used,	 the  heartbeat network(s) and
			    cluster lock device(s) that are currently  config‐
			    ured  for  the cluster will be included in the new
			    ASCII file. Please refer to the Managing Service‐
			    guard manual for more information.

	      -f format	    Select  the	 output format to display.  The format
			    parameter may be one of the following values:

			    table This option displays a human readable	 tabu‐
				  lar output format.  This is the default out‐
				  put format if no -f is specified on the com‐
				  mand line.

			    line  This option displays a machine-parsable out‐
				  put format. Data items are displayed, one
				  per line, in a way that makes them easy to
				  manipulate with standard text-processing
				  tools such as grep(1).

	      -k	    To	speed  up  the disk query process, this option
			    eliminates some disk probing, and does not	return
			    information	 about	potential  cluster lock volume
			    groups and lock physical volumes.

			    When the -k option is used with the -C  option,  a
			    list of cluster-aware volume groups is provided in
			    the ASCII file, but a suggested cluster lock  vol‐
			    ume	 group	and  physical volume are not provided.
			    The ASCII file can be edited to include lock  disk
			    information if necessary.

	      -l limit	    Limit  the	information  included  to  type limit.
			    Legal values for limit are:

			    lvm	 Include logical volume information only.

				 When -l lvm is used with -w local or -w full,
				 -l  lvm  takes	 precedence,  and only logical
				 volume information is provided. When  -l  lvm
				 is used with both -w local or -w full and -C,
				 the effect is to provide local (or full) net‐
				 work  information in addition to logical vol‐
				 ume information in the ASCII file.

				 When -l lvm is used with -k, the effect is to
				 provide  logical  volume  information, but no
				 cluster lock information.

			    net	 Include network information only

				 When -l net is used with -k, the effect is to
				 provide  local network information only, with
				 no disk probing.

	      -L lock_lun_device
			    Specifies the block device file name for the  lock
			    LUN	 device to be used as the cluster lock. The -L
			    option can be used in one of two  forms.   In  the
			    first  form, when the device file name is the same
			    across all nodes, the -L option can	 be  specified
			    once  before any -n options. Alternatively, in the
			    second form, multiple -L options can  be  used  to
			    specify  the  node	specific device file names. In
			    this form, there must be one -L option for each -n
			    option  used.   -L	and -q are mutually exclusive.
			    Without -C, this option has no effect.
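			    Both forms can be sketched as follows (device
			    file names and node names are hypothetical):

			    ```shell
			    # First form: the same lock LUN device file on every node.
			    cmquerycl -C clust.ascii -L /dev/dsk/c0t1d0 -n node1 -n node2

			    # Second form: node-specific device files, one -L per -n.
			    cmquerycl -C clust.ascii -n node1 -L /dev/dsk/c0t1d0 \
			              -n node2 -L /dev/dsk/c1t2d0
			    ```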

	      -n node_name  Specifies node_name should be included in the  set
			    of	nodes  to  query.  node_name must be valid and
			    must match the node name  returned	by  the	 host‐
			    name(1)  command.  cmquerycl may be executed with‐
			    out any arguments to return a list of  valid  node
			    names that can be used with this option.

	      -q quorum_server [qs_ip2]
			    Specifies  the  host name or the IP address of the
			    quorum server. Two IP addresses or	hostnames  can
			    be	specified,  using space as a separator. The IP
			    addresses must be IPv4 addresses or hostnames that
			    resolve  to IPv4 addresses. Follow instructions in
			    the Managing Serviceguard manual to configure IPv6
			    quorum server addresses. The quorum server must be
			    running and	 reachable  from  all  the  configured
			    nodes  through  all the quorum server IP addresses
			    when the cmquerycl command is executed.  A maximum
			    of	two  IP addresses are supported for the quorum
			    server.  The -q option may only be	used  in  con‐
			    junction  with  the	 -c  or -n options. All of the
			    nodes being queried, as specified  by  -c  and  -n
			    options,  must  be authorized to access the speci‐
			    fied quorum server.	 -L and -q are mutually exclu‐
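			    For example (hypothetical quorum server host,
			    addresses, and node names):

			    ```shell
			    # Quorum server reachable through two IPv4 addresses.
			    cmquerycl -C clust.ascii -q qshost 192.168.2.10 -n node1 -n node2
			    ```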

	      -v	    Verbose output will be displayed.

	      -h address_family
			    Specifies the heartbeat IP address family. If both
			    IPv4 and IPv6 addresses are configured on the LAN,
			    this option specifies which address family to use
			    for cluster heartbeat communication.  Without -C
			    and -n, this option has no effect.  Options -h and
			    -c are mutually exclusive.  The best available
			    configuration is chosen when cmquerycl is used
			    without the -h option.  IPv4 takes precedence over
			    IPv6 if both address families are available.  The
			    legal values of address_family are:

			    ipv4  Only IPv4 addresses will be chosen for
				  heartbeat if available; otherwise an error
				  message will be shown.

			    ipv6  Only IPv6 addresses will be chosen for
				  heartbeat if available; otherwise an error
				  message will be shown.

       -w probe_type
	      Specifies the type of network probing performed.	The legal val‐
	      ues of probe_type are:

	      none  No network probing is done.

	      local LAN connectivity is	 verified  between  interfaces	within
		    each  node	only. Bridged networks information is not com‐
		    plete when the local option is used.  This is the  default
		    behavior  if  the -w option is not specified and -C option
		    is used.

	      full  Actual connectivity is verified among all  LAN  interfaces
		    on	all  nodes in the cluster. Note that you must use this
		    option to discover cross-subnet connectivity or route con‐
		    nectivity  between the nodes in the cluster. You must also
		    use this option to discover gateways for possible  polling
		    targets for IP monitored subnets.
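	      For example, to discover cross-subnet connectivity and possible
	      polling targets (node names are hypothetical):

	      ```shell
	      cmquerycl -w full -C clust.ascii -n node1 -n node2
	      ```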

   Tabular output format
       cmquerycl writes information to stdout that can be used in both config‐
       uration and troubleshooting.

	      Bridged networks
			     Groups of network	interfaces.   These  groupings
			     represent link level network connections indicat‐
			     ing that the interfaces are connected either by a
			     network segment or via a bridge.

	      IP subnets     IP	 subnet	 information based on the bridged net‐
			     works.  The subnet is a masking of the IP address
			     by	 the subnet mask, which is specified by ifcon‐
			     fig(1).  The netstat(1) command  will  also  show
			     this  information.	  This subnet name can be used
			     as a parameter in the package configuration  file
			     created via cmmakepkg(1m).

	      Possible heartbeat IPs
			     List  of  IP subnets and addresses which are con‐
			     nected to all nodes  specified.   Both  IPv4  and
			     IPv6 addresses are supported for heartbeat.

	      Route Connectivity
			     Groups  of	 IP subnets. These groupings represent
			     IP-level connectivity indicating that these  sub‐
			     nets are routed to each other (potentially by a
			     router).

	      IP Monitor Subnets
			     List of IP subnets and possible  polling  targets
			     for  each	subnet.	  The possible polling targets
			     are  gateways  of	each  subnet  that   cmquerycl
			     detected.	 Note  that gateways are only detected
			     with the -w full option.

	      LVM volume groups
			     Names of volume groups, listed by node.  If a
			     volume group has been imported to one or more
			     systems, all systems which are connected to that
			     volume group will be displayed.

	      LVM physical volumes
			     Names of the  physical  volumes  in  each	volume
			     group  including  block  device file and hardware
			     path.  This information is node specific since  a
			     physical  volume  may  have  a different hardware
			     path or device file name on each node.

	      LVM logical volumes
			     Names of logical volumes  in  each	 volume	 group
			     including	information on use, such as filesystem
			     and mount	point  location	 if  it	 is  currently
			     mounted.  This information is node specific.

	  cluster_ascii_file values
	      cmquerycl optionally creates a cluster_ascii_file which contains
	      configuration information for the specified  nodes  or  cluster.
	      This file can be verified by the cmcheckconf command.  The clus‐
	      ter_ascii_file contains the following fields with default values
	      which can be customized:

	      CLUSTER_NAME   Name  of  the cluster.  This name will be used to
			     identify the cluster when viewing or manipulating
			     it.   This	 name must be unique across other Ser‐
			     viceguard clusters.

	      HOSTNAME_ADDRESS_FAMILY
			     This parameter determines the Internet Protocol
			     address family to which Serviceguard will attempt
			     to resolve cluster node names and	quorum	server
			     host  names.  The default value is IPV4.  Setting
			     this parameter to IPV4 will  cause	 cluster  node
			     names  and quorum server host names to resolve to
			     IPv4 address only even if IPv6 addresses are con‐
			     figured  for the node names or quorum server host
			     names in addition to the IPv4 addresses.

			     Setting this parameter to ANY will cause  cluster
			     node  names  and  quorum  server  host  names  to
			     resolve to both IPv4  and	IPv6  addresses.   The
			     /etc/hosts file on each node must contain entries
			     for all IPv4 and IPv6 addresses  used  throughout
			     the   cluster  including  all  STATIONARY_IP  and
			     HEARTBEAT_IP  addresses  as  well	as  any	 other
			     addresses	before ANY can be used.	 There must be
			     at least one IPv4 address in this file.

	      FIRST_CLUSTER_LOCK_VG
			     Volume group that holds the cluster lock.  The
			     cluster lock is used to break a cluster formation
			     tie where two separate groups of nodes are trying
			     to	 form  a cluster and each group would have 50%
			     of the nodes.  To break the tie, the cluster lock
			     disk  is  used.  The winning cluster remains run‐
			     ning; the losing nodes will be halted.

	      QS_HOST	     The quorum server that acts as a cluster  member‐
			     ship  tie-breaker,	 alternative  to  volume group
			     (disk device) cluster lock.

	      QS_ADDR	     Specifies an alternate IP address for the quorum
			     server.  The two quorum server parameters QS_HOST
			     and QS_ADDR must not resolve to the same IP
			     address; otherwise the cmquerycl command will
			     fail.

	      QS_POLLING_INTERVAL
			     The interval, specified in microseconds, at which
			     quorum  server  health  is	 checked.   Default is
			     300000000 (5 minutes). Minimum value is  10000000
			     (10   seconds).	Maximum	 value	is  2147483647
			     (approx. 35 minutes).

	      QS_TIMEOUT_EXTENSION
			     This is an optional parameter (in microseconds)
			     that is used to increase the time interval for
			     quorum server response.  The default quorum
			     server timeout is calculated from the Service‐
			     guard cluster parameter MEMBER_TIMEOUT.  For
			     clusters of 2 nodes, it is 0.1 * MEMBER_TIMEOUT,
			     and for more than 2 nodes it is
			     0.2 * MEMBER_TIMEOUT.

			     If	 you  are  experiencing	 quorum server polling
			     timeouts (see the system  log),  if  your	quorum
			     server  is	 on  a busy network, or if your quorum
			     server is serving many clusters, the default quo‐
			     rum  server  timeout  may not be sufficient.  You
			     can use  QS_TIMEOUT_EXTENSION  to	allocate  more
			     time for quorum server requests.  You should also
			     consider using this parameter if you want to  use
			     small MEMBER_TIMEOUT values (under 14 seconds).

			     The   value   of  QS_TIMEOUT_EXTENSION  is	 added
			     directly to the quorum server timeout.  This,  in
			     turn,  directly  increases	 the amount of time it
			     takes for cluster reformation in the event	 of  a
			     node  failure.  For example, if QS_TIMEOUT_EXTEN‐
			     SION is set to 10 seconds, the  cluster  reforma‐
			     tion  will	 take  10  seconds  longer than if the
			     QS_TIMEOUT_EXTENSION was set  to  0.  This	 delay
			     applies even when there is no delay in contacting
			     the Quorum Server.	  The  recommended  value  for
			     QS_TIMEOUT_EXTENSION  is  0, which is used as the
			     default.	The   maximum	supported   value   is
			     300000000 (5 minutes).
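			     The timeout calculation above can be sketched
			     with integer microsecond arithmetic (the values
			     below are illustrative defaults, not output of
			     any Serviceguard command):

			     ```shell
			     member_timeout=14000000   # MEMBER_TIMEOUT default, microseconds
			     qs_timeout_extension=0    # QS_TIMEOUT_EXTENSION default
			     nodes=2

			     # Base quorum server timeout: 0.1 * MEMBER_TIMEOUT for 2 nodes,
			     # 0.2 * MEMBER_TIMEOUT for larger clusters.
			     if [ "$nodes" -eq 2 ]; then
			         qs_timeout=$(( member_timeout / 10 ))
			     else
			         qs_timeout=$(( member_timeout / 5 ))
			     fi

			     # QS_TIMEOUT_EXTENSION is added directly to the base timeout.
			     qs_timeout=$(( qs_timeout + qs_timeout_extension ))
			     echo "$qs_timeout"    # 1400000 microseconds (1.4 s) with the defaults
			     ```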

	      NODE_NAME	     Name of a node participating in the cluster.  The
			     variables are associated with this node until the
			     next NODE_NAME entry occurs.

	      NETWORK_INTERFACE
			     LAN interface (HP-UX examples are lan0, lan1;
			     Linux examples are eth0, eth1).  This may be
			     specified repeatedly for all applicable LAN
			     interfaces.

	      HEARTBEAT_IP   The heartbeat IP address.	This is either an IPv4
			     or an IPv6 address to be used for sending	heart‐
			     beat messages.

	      STATIONARY_IP  This  is  the  IP	address dedicated to the node.
			     This IP address will stay with the node and  will
			     not be moved.  This IP address can be either an
			     IPv4 or an IPv6 address.


	      CAPACITY_NAME, CAPACITY_VALUE
			     These parameters are used to define a capacity
			     for the node.  Node
			     capacities correspond to  package	weights;  node
			     capacity  is  checked  against  the corresponding
			     package weight to determine if  the  package  can
			     run on that node.

			     CAPACITY_NAME  specifies a name for the capacity.
			     The capacity name can be any string  that	starts
			     and ends with an alphanumeric character, and oth‐
			     erwise contains only alphanumeric characters, dot
			     (.),  dash (-), or underscore (_). Maximum string
			     length is 39 characters. Duplicate capacity names
			     are not allowed.

			     CAPACITY_VALUE  specifies	a value for the CAPAC‐
			     ITY_NAME that precedes it.	 This  is  a  floating
			     point  value between 0 and 1000000. Capacity val‐
			     ues are arbitrary as far as Serviceguard is  con‐
			     cerned; they have meaning only in relation to the
			     corresponding package weights.

			     Node capacity  definition	is  optional,  but  if
			     CAPACITY_NAME  is	specified, CAPACITY_VALUE must
			     also be specified; CAPACITY_NAME must come first.
			     To	 specify  more	than one capacity, repeat this
			     process for each  capacity.   NOTE:  If  a	 given
			     capacity  is not defined for a node, Serviceguard
			     assumes that capacity is infinite on  that	 node.
			     For example, if pkgA, pkgB, and pkgC each specify
			     a weight of 1000000 for WEIGHT_NAME "memory", and
			     CAPACITY_NAME  "memory" is not defined for node1,
			     then all three packages are eligible  to  run  at
			     the  same	time  on  node1,  assuming  all	 other
			     requirements are met.

			     cmapplyconf will fail if any node defines a
			     capacity and any package has min_package_node as
			     the failover policy or has automatic as the fail‐
			     back policy.

			     You can define a maximum of four capacities.

			     NOTE:  Serviceguard  supports a capacity with the
			     reserved name "package_limit". This can  be  used
			     to limit the number of packages that can run on a
			     node. If  you  use	 "package_limit",  you	cannot
			     define any other capacities for this cluster, and
			     the default weight for all packages is one.

			     For all capacities	 other	than  "package_limit",
			     the default weight for all packages is zero.
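			     A hypothetical capacity definition in the
			     cluster ASCII file (node name and values are
			     illustrative only) might look like:

			     ```
			     NODE_NAME node1
			       CAPACITY_NAME processor
			       CAPACITY_VALUE 100
			       CAPACITY_NAME memory
			       CAPACITY_VALUE 1000
			     ```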

	      CLUSTER_LOCK_LUN
			     The lock LUN acts as a cluster membership tie-
			     breaker.  It is an alternative to the quorum
			     server and to the cluster lock volume group.

	      FIRST_CLUSTER_LOCK_PV
			     The physical volume path to the disk holding the
			     cluster lock for this node.  This	disk  must  be
			     part of the FIRST_CLUSTER_LOCK_VG on all nodes in
			     the cluster, but the device path to this disk may
			     be different on all nodes in the cluster.

	      MEMBER_TIMEOUT Number  of	 microseconds  to wait for a heartbeat
			     message before declaring  a  node	failure.   The
			     cluster  will  reform  to	remove the failed node
			     from the cluster.

			     Heartbeat messages are sent at a regular interval
			     which  is	0.25 * MEMBER_TIMEOUT, up to a maximum
			     of 1 second.  Quorum server  timeout  is  also  a
			     factor  of	 MEMBER_TIMEOUT (see QS_TIMEOUT_EXTEN‐
			     SION, above, for details).	 Timeouts for  Cluster
			     Lock  and Lock Lun are both 0.2 * MEMBER_TIMEOUT,
			     with Dual Cluster Lock set to a fixed 13 seconds.
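			     The heartbeat interval rule above can be
			     sketched as follows (illustrative values, not
			     Serviceguard output):

			     ```shell
			     member_timeout=14000000                 # MEMBER_TIMEOUT in microseconds
			     hb_interval=$(( member_timeout / 4 ))   # 0.25 * MEMBER_TIMEOUT

			     # Cap the interval at 1 second (1000000 microseconds).
			     if [ "$hb_interval" -gt 1000000 ]; then
			         hb_interval=1000000
			     fi
			     echo "$hb_interval"    # 1000000 with the 14-second default
			     ```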

			     MEMBER_TIMEOUT defaults to 14000000 (14 seconds).
			     A	value  of  10 to 25 seconds is appropriate for
			     most installations.  For installations  in	 which
			     the  highest priority is to reform the cluster as
			     fast as possible in case of  node	failure,  this
			     value  can	 be  set  as low as 3 seconds.	When a
			     single heartbeat network with standby  interfaces
			     is	 configured, this value cannot be set below 14
			     seconds if the network interface type  is	Ether‐
			     net,  or 22 seconds if the network interface type
			     is InfiniBand (HP-UX only).  Note that  a	system
			     hang  or  a  network  load	 spike	whose duration
			     exceeds MEMBER_TIMEOUT will result in one or more
			     node  failures.   A  system  hang or network load
			     spike whose duration exceeds 0.1  *  MEMBER_TIME‐
			     OUT,  and	which  happens to occur during cluster
			     reformation, can also result in one or more  node
			     failures.	 Hangs	of this duration are logged in
			     the system log, which should be monitored so MEM‐
			     BER_TIMEOUT can be increased when necessary.  See
			     the Managing Serviceguard manual for  more	 guid‐
			     ance on setting MEMBER_TIMEOUT.

			     The  maximum value recommended for MEMBER_TIMEOUT
			     is 60000000 (60 seconds).

	      AUTO_START_TIMEOUT
			     Number of microseconds to wait for a new  cluster
			     to	 form  after  a	 failure,  before  giving  up.
			     Default is 600000000 (10 minutes).

	      NETWORK_POLLING_INTERVAL
			     Number of microseconds  between  network  polling
			     messages.	Default is 2000000 (2 seconds).

	      CONFIGURED_IO_TIMEOUT_EXTENSION
			     This parameter (microseconds) is used to increase
			     the
			     time  interval after the detection of one or more
			     node  failures  that  all	application   activity
			     including pending I/O on the failed node is guar‐
			     anteed  to	 have  ceased.	 This	parameter   is
			     required to be set for Extended Distance Clusters
			     using iFCP interconnects between sites.  See  the
			     manual  "Understanding and Designing Serviceguard
			     Disaster Tolerant Architectures" for more	infor‐
			     mation.  Default is 0.

	      NETWORK_FAILURE_DETECTION
			     This parameter determines which network failure
			     detection method will be used for the cluster.
			     The default value is INOUT.  When this value is
			     applied, a network interface will be marked as
			     down only when both inbound and outbound traffic
			     to and from the interface have stopped.

			     By setting the value to INONLY_OR_INOUT, the
			     enhanced inbound failure detection method will be
			     applied throughout the cluster. With this method,
			     when inbound traffic to a network interface
			     stops, the Serviceguard Network Manager starts a
			     mechanism to determine whether inbound traffic to
			     the card has actually stopped. If it has, the
			     card will be marked as down. The card will also
			     be marked down when both inbound and outbound
			     traffic to and from the interface stop.

			     Thorough testing should be done when setting the
			     parameter value to INONLY_OR_INOUT, before
			     running applications on the cluster, to ensure
			     the cluster network configuration is suitable for
			     this option.  The network environment has
			     considerable impact on Serviceguard when the
			     enhanced inbound failure detection method is
			     applied. Make sure that the following conditions
			     are met:

			     ·	All bridged nets in the cluster have more than
				two interfaces each.
			     ·	Each primary interface needs to have at	 least
				a  standby  interface  which is connected to a
				standby switch.
			     ·	Primary switch should be directly connected to
				its standby.
			     ·	No  single  point  of  failure anywhere in all
				bridged nets.
	      Please refer  to	the  Managing  Serviceguard  Manual  for  more
	      details and examples.

	      NETWORK_AUTO_FAILBACK
			     This parameter determines how the cluster will
			     handle the recovery of the primary LAN  interface
			     after it has failed over to the standby interface
			     because of a link	level  failure.	  The  default
			     value is YES.  Setting this parameter to YES will
			     cause the IP address(es) to failback to the  pri‐
			     mary LAN interface from the standby when the pri‐
			     mary LAN interface recovers at link level.

			     Setting this  value  to  NO  will	cause  the  IP
			     address(es) to failback to the primary LAN inter‐
			     face only when a user uses	 cmmodnet(1m)  to  re-
			     enable the interface.

			     This feature does not affect how failback is han‐
			     dled if IP level failures are  detected  with  IP
			     monitoring.   However,  if	 a  link level failure
			     happens after an IP-level failure,	 this  parame‐
			     ter's setting will be ignored.

	      SUBNET	     Name  of  a  subnet  in the cluster that is to be
			     configured with or without	 being	IP  Monitored.
			     All entries for IP_MONITOR and POLLING_TARGET are
			     associated with this subnet until the next SUBNET
			     entry occurs.

	      IP_MONITOR     This parameter specifies whether or not the
			     subnet specified in the SUBNET entry will be
			     monitored at the IP layer.  To enable IP
			     monitoring for the subnet, set IP_MONITOR to ON;
			     to disable it, set the value to OFF.  When IP
			     monitoring is enabled and a network interface in
			     that subnet fails at the IP level, the interface
			     will be marked down.  If IP monitoring is
			     disabled, failures that occur only at the IP
			     level will not be detected.

	      POLLING_TARGET The IP address to which polling messages are sent
			     from all network interfaces in this SUBNET, if
			     IP_MONITOR is set to ON, to determine the health
			     of network interfaces at the IP level.  The
			     POLLING_TARGET entry can be repeated as needed,
			     as each subnet can have multiple polling targets.
			     When IP_MONITOR is set to ON but no
			     POLLING_TARGET is specified, polling messages are
			     sent between network interfaces in the same
			     subnet.
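			     A hypothetical IP-monitoring stanza in the
			     cluster ASCII file (the subnet and gateway
			     addresses are illustrative only):

			     ```
			     SUBNET 192.168.1.0
			       IP_MONITOR ON
			       POLLING_TARGET 192.168.1.254
			     ```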

	      MAX_CONFIGURED_PACKAGES
			     Maximum number of packages which can  be  config‐
			     ured in the cluster.  The legal values are from 0
			     to 300. Default is 300.   The  parameter  can  be
			     changed online.  This parameter may be deprecated
			     in a future release.


	      WEIGHT_NAME, WEIGHT_DEFAULT
			     The WEIGHT_NAME and WEIGHT_DEFAULT parame‐
			     ters are used to define a default value for this
			     weight for all packages (except system multi-node
			     packages).	  Package  weights  correspond to node
			     capacities; node capacity is checked against  the
			     corresponding  package weight to determine if the
			     package can run on that node.

			     WEIGHT_NAME specifies a name for  a  weight  that
			     corresponds  to  a	 capacity specified earlier in
			     this file.	 Weight	 is  defined  for  a  package,
			     whereas  capacity	is defined for a node. For any
			     given weight/capacity pair,  WEIGHT_NAME,	CAPAC‐
			     ITY_NAME (and weight_name in the package configu‐
			     ration file) must be  the	same.  The  rules  for
			     forming all three are the same. See the discus‐
			     sion of the capacity parameters earlier in this
			     man page.

			     NOTE:  A  weight (WEIGHT_NAME/WEIGHT_DEFAULT) has
			     no meaning	 on  a	node  unless  a	 corresponding
			     capacity	  (CAPACITY_NAME/CAPACITY_VALUE)    is
			     defined for that node.  For  example,  if	CAPAC‐
			     ITY_NAME  "memory" is not defined for node1, then
			     node1's "memory" capacity is assumed to be	 infi‐
			     nite.   Now  even	if  pkgA,  pkgB, and pkgC each
			     specify  the  maximum  weight  of	 1000000   for
			     WEIGHT_NAME "memory", all three packages are eli‐
			     gible to run at the same time on node1,  assuming
			     all other requirements are met.

			     WEIGHT_DEFAULT  specifies	a  default  weight for
			     this WEIGHT_NAME for all  packages.   This	 is  a
			     floating  point  value  between  0	 and  1000000.
			     Package weight default values  are	 arbitrary  as
			     far as Serviceguard is concerned; they have mean‐
			     ing only in relation to the corresponding node
			     capacities.

			     The   package   weight   default  parameters  are
			     optional. If they are not	specified,  a  default
			     value  of	zero  will  be	assumed.  If  defined,
			     WEIGHT_DEFAULT must follow WEIGHT_NAME. To	 spec‐
			     ify   more	 than  one package weight, repeat this
			     process for each weight.

			     Note: for the  reserved  weight  "package_limit",
			     the  default  weight is always one.  This default
			     cannot be changed in  the	cluster	 configuration
			     file, but it can be overridden in the package con‐
			     figuration file.

			     For any given package and	WEIGHT_NAME,  you  can
			     override  the  WEIGHT_DEFAULT set here by setting
			     weight_value to a different value for the	corre‐
			     sponding weight_name in the package configuration
			     file.

			     cmapplyconf will fail if you define a default for
			     a	weight	and no node in the cluster specifies a
			     capacity of the same name.	 You can define a max‐
			     imum of four weight defaults.
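			     A hypothetical weight default in the cluster
			     ASCII file (name and value are illustrative
			     only):

			     ```
			     WEIGHT_NAME memory
			     WEIGHT_DEFAULT 100
			     ```

			     This assumes some node defines a capacity named
			     memory; otherwise cmapplyconf fails, as noted
			     above.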

	      The  next	 three entries are used to set access control policies
	      for the cluster.	Access policies control non-root  user	access
	      to  the  cluster.	  The  first  line  of	each  policy  must  be
	      USER_NAME, second USER_HOST, and third USER_ROLE.	  NOTE:	  When
	      this  command  is run by an authorized user who is not the supe‐
	      ruser(UID=0), only those entries that match the user running the
	      command and the node where the command is run are displayed.

	      USER_NAME	     Specifies the name of the user to whom access is
			     to be granted.  The user name value can either be
			     ANY_USER, or a maximum of 8 login names from the
			     /etc/passwd file on USER_HOST.  If you misspell
			     the keyword ANY_USER, it will be interpreted as
			     the name of a specific user, and the resultant
			     access policy will not be what you expect.

	      USER_HOST	     Specifies the hostname from which the user can
			     issue Serviceguard commands.  If using Service‐
			     guard Manager, it is the COM server hostname.  It
			     can be set to ANY_SERVICEGUARD_NODE,
			     CLUSTER_MEMBER_NODE, or a specific hostname.  If
			     a hostname is specified, it cannot contain the
			     full domain name or an IP address.  If you
			     misspell the keywords ANY_SERVICEGUARD_NODE or
			     CLUSTER_MEMBER_NODE, it will be interpreted as
			     the name of a specific node, and the resultant
			     access policy will not be what you expect.

	      USER_ROLE	     Specifies	the  access  granted  to the user.  It
			     must be one of these three values:

			     MONITOR
				    This role grants permission to view
				    information about the entire cluster.

			     The MONITOR role is granted by default to any
			     Serviceguard user when a new cluster is created;
			     a corresponding entry is written to the output
			     cluster configuration file the first time it is
			     generated.


			     NOTE: If you remove or change this entry, you may
			     not  be able to manage the cluster with some ver‐
			     sions of HP SIM or HP VSE Manager.	  Consult  the
			     documentation  for	 those	products before making
			     such a change.

			     PACKAGE_ADMIN
				    This role grants MONITOR permission, plus
				    permission to issue administrative
				    commands for all packages in the cluster.

			     FULL_ADMIN
				    This role grants MONITOR and
				    PACKAGE_ADMIN permission, plus permission
				    to issue administrative commands for the
				    cluster.
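			     A hypothetical access policy entry (user and
			     host names are illustrative only):

			     ```
			     USER_NAME lee
			     USER_HOST node1
			     USER_ROLE PACKAGE_ADMIN
			     ```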

	      VOLUME_GROUP (HP-UX Only)
			     Name of volume group to be marked	cluster-aware.
			     The volume group will be used by clustered appli‐
			     cations via the vgchange -a e command that	 marks
			     the  volume group for exclusive access.  Multiple
			     VOLUME_GROUP  keywords  may  be  specified.    By
			     default, cmquerycl will specify each VOLUME_GROUP
			     that is accessible by two or  more	 nodes	within
			     the  cluster.  This volume group will be initial‐
			     ized to be part of the cluster such that the vol‐
			     ume  group can only be activated via the vgchange
			     -a e option.

	      The following values may also be used in the cluster_ascii_file
	      for accessing a second cluster disk lock: SECOND_CLUSTER_LOCK_VG,
	      SECOND_CLUSTER_LOCK_PV.  These values are recommended only for
	      two-node configurations using only internal disks, where there
	      is a possibility of the first cluster lock PV failing at the
	      same instant that a node fails.
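
	      As an illustration only, a second cluster lock might be sketched
	      in the cluster_ascii_file as follows; the volume group and
	      device file names shown are hypothetical, not taken from a real
	      configuration:

```
FIRST_CLUSTER_LOCK_VG     /dev/vglock
FIRST_CLUSTER_LOCK_PV     /dev/dsk/c0t1d0
SECOND_CLUSTER_LOCK_VG    /dev/vglock
SECOND_CLUSTER_LOCK_PV    /dev/dsk/c1t1d0
```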

	      In  addition  to the fields already listed, the following fields
	      are used in the cluster_ascii_file when the Serviceguard	Exten‐
	      sion for RAC (HP-UX only) product is installed:

	      OPS_VOLUME_GROUP
		     Name of volume group to be marked OPS or RAC cluster-
		     aware.  The volume group will be used by OPS or RAC
		     cluster applications via the vgchange -a s command,
		     which marks the volume group for shared access.
		     Multiple OPS_VOLUME_GROUP keywords may be specified.

	      In  addition,  the  following  fields  can  be used in the clus‐
	      ter_ascii_file as part of an HP approved,	 site  aware  disaster
	      tolerant	solution  (HP-UX  only).   In  the cluster_ascii_file,
	      SITE_NAME must precede any NODE_NAME entries and SITE must  fol‐
	      low the NODE_NAME it is being associated with.

	      SITE_NAME	  Defines a logical site name to which nodes may be
			  associated.  Each SITE_NAME must be associated
			  with at least one NODE_NAME (see SITE).

	      SITE	  Associates  a	 node  with a site name. Modifying the
			  SITE entry requires the node	to  be	offline.   All
			  nodes	 in the cluster must be site-aware, or none; a
			  cluster must not consist  of	some  nodes  that  are
			  associated  with sites and some that are not associ‐
			  ated with any.
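
	      For example, the required ordering of these entries in the
	      cluster_ascii_file can be sketched as follows; the site and
	      node names are illustrative:

```
SITE_NAME    siteA
SITE_NAME    siteB

NODE_NAME    node1
SITE         siteA

NODE_NAME    node2
SITE         siteB
```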

   Line output format
       The line output format is designed for  simple  machine	parsing	 using
       tools such as grep(1), cut(1) or awk(1).

       Each line of output is of the form:

	      name=value

       The value field is a string that describes a single piece of configura‐
       tion or status data related to name.

       The name field uniquely identifies a single configuration item and is
       composed of one or more elements, separated by a pipe character.  In
       cases where the output contains multiple lines with the same type of
       name element, the name element is further qualified by an element
       identifier.

       The element identifier is appended to the name element and separated by
       a colon. The element identifier is used to distinguish which object the
       configuration information corresponds to.

       In the following example, the name element path for IP address informa‐
       tion  is node.interface.ip_address.  Since multiple IP address configu‐
       ration elements are being displayed, the node  and  interface  elements
       are  further qualified with the element identifiers cnode1 (the name of
       the node containing the interface) and lan1  or	lan3  (the  interfaces
       containing the IP address).

	      # cmquerycl -f line -c cluster | grep ip_address
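
       For illustration, such output can be parsed with standard tools as
       sketched below; the node names, interface names, and addresses are
       hypothetical, not output from a real cluster:

```shell
# Hypothetical `cmquerycl -f line` output captured in a variable;
# a real cluster would produce different names and addresses.
sample='node:cnode1|interface:lan1|ip_address=192.0.2.1
node:cnode1|interface:lan3|ip_address=192.0.2.2'

# Keep only ip_address lines and extract the value after the = sign.
printf '%s\n' "$sample" | grep ip_address | cut -d= -f2
```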

       cmquerycl returns the following values:

	       0   Successful completion.
	       1   Command failed.

       To poll the configurations for node1 and node2 and to save the results
       in file clusterA.config:

	      cmquerycl -v -n node1 -n node2 -C clusterA.config

       To poll the configurations for node1 and node2, to use IPv6 addresses
       for the heartbeat, and to save the results in file clusterA.config:

	      cmquerycl -h ipv6 -C clusterA.config -n node1 -n node2

       To poll the configurations for node1 and node2, to use a physical lock
       LUN device (which shares the same file name across nodes) as a cluster
       lock, and to save the results in file clusterA.config:

	      cmquerycl -C clusterA.config -L lock_lun_device -n node1 -n node2

       To display a list of monitored subnets in clusterA:

	      cmquerycl -f line -c clusterA |
		grep '^node.*subnet=' | cut -d= -f2 | sort -u

       To poll the configurations for node1 and node2 when the device name is
       different on different nodes, to use a physical lock LUN device (which
       does not share the same file name across nodes) as a cluster lock, and
       to save the results in file config:

	      cmquerycl -C config -n node1 -L lock_dev1 -n node2 -L lock_dev2

       To check node1 access and authorization to quorum server qs_host, with
       the additional IP address qs_ip2:

	      cmquerycl -n node1 -q qs_host qs_ip2

       To poll the configuration of clusterA, to use qs_host (with the
       additional IP address qs_ip2) as the cluster lock, and to save the
       results in file clusterA.config:

	      cmquerycl -c clusterA -q qs_host qs_ip2 -C clusterA.config

       To create a configuration file that can be used to change or add
       qs_host as the cluster lock to clusterA:

	      cmquerycl -c clusterA -q qs_host -C clusterA.config

       To query using the -k option:

	      cmquerycl -v -k -C clusterA.config

       To query using the -w option:

	      cmquerycl -v -w local -C clusterA.config

       To query using the -k and -w options:

	      cmquerycl -v -k -w local -C clusterA.config

       To query the configuration for clusterA:

	      cmquerycl -v -c clusterA

       To display the LVM configuration for clusterA:

	      cmquerycl -v -c clusterA -l lvm

       To create an ASCII file that can be used to remove node2 and add node3
       to clusterA (assuming clusterA contains node1 and node2 to begin with):

	      cmquerycl -c clusterA -n node1 -n node3 -C clusterA.config

       To create an ASCII file that can be used to add new networks to, or
       remove existing ones from, nodes configured in clusterA (assuming
       clusterA contains node1 and node2):

	      cmquerycl -c clusterA -C clusterA.config

	      Note: clusterA.config will contain commented-out entries of  the
	      new  networks,  if  any, for node1 and node2.  You can uncomment
	      the entries for networks you want to include in the new configu‐
	      ration  or  comment-out entries for networks you want to remove.
	      When you use the -n option, commented-out	 entries  are  created
	      (in  the	ASCII file) only for existing cluster node(s); in this
	      case, given the previous example, entries would be created  only
	      for node1.

       This  command  is part of the cluster configuration process.  Following
       is an example of configuring a cluster with two nodes and two packages:

	      cmquerycl -C clusterA.config -n node1 -n node2
	      cmmakepkg -p pkg1.config
	      cmmakepkg -p pkg2.config
	      cmmakepkg -s pkg1.control.script
	      cmmakepkg -s pkg2.control.script
	      < customize clusterA.config >
	      < customize pkg1.config >
	      < customize pkg2.config >
	      < customize pkg1.control.script >
	      < customize pkg2.control.script >
	      cmcheckconf -C clusterA.config -P pkg1.config -P pkg2.config
	      cmapplyconf -C clusterA.config -P pkg1.config -P pkg2.config

       cmquerycl was developed by HP.

       cmapplyconf(1m),	 cmcheckconf(1m),  cmmakepkg(1m),  cmruncl(1m),	  net‐
       stat(1), lanscan(1) (HP-UX only), vgimport(1), vgchange(1).

		    Requires Optional Serviceguard Software	 cmquerycl(1m)
