PCS(8)			System Administration Utilities			PCS(8)

NAME
       pcs - pacemaker/corosync configuration system

SYNOPSIS
       pcs [-f file] [-h] [commands]...

DESCRIPTION
       Control and configure pacemaker and corosync.

OPTIONS
       -h, --help
	      Display usage and exit

       -f file
	      Perform actions on file instead of active CIB

       --debug
	      Print all network traffic and external commands run

       --version
	      Print pcs version information

   Commands:
       cluster
	      Configure cluster options and nodes

       resource
	      Manage cluster resources

       stonith
	      Configure fence devices

       constraint
	      Set resource constraints

       property
	      Set pacemaker properties

       status View cluster status

       config Print full cluster configuration

   resource
       show [resource id] [--full] [--groups]
	      Show  all	 currently  configured	resources  or if a resource is
	      specified show the options  for  the  configured	resource.   If
	      --full is specified all configured resource options will be dis‐
	      played.  If --groups is specified, only show groups  (and	 their
	      resources).

       list [<standard|provider|type>] [--nodesc]
	      Show  list  of  all  available resources, optionally filtered by
	      specified type, standard or provider. If --nodesc is  used  then
	      descriptions of resources are not printed.

       describe <standard:provider:type|type>
	      Show options for the specified resource

       create  <resource  id> <standard:provider:type|type> [resource options]
       [op <operation action> <operation options> [<operation action>  <opera‐
       tion  options>]...] [meta <meta options>...] [--clone <clone options> |
       --master <master options> | --group <group name>]
	      Create specified resource.  If --clone is used a clone resource
	      is created.  If --master is specified a master/slave resource
	      is created.  If --group is specified the resource is added to
	      the named group.
	      Example: pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
	                  ip=192.168.0.99 cidr_netmask=32 nic=eth2 \
	                  op monitor interval=30s
	      This creates a new resource called 'VirtualIP' with IP address
	      192.168.0.99, a netmask of 32, on eth2, monitored every 30
	      seconds.

       delete <resource id|group id|master id|clone id>
	      Deletes  the resource, group, master or clone (and all resources
	      within the group/master/clone).

       enable <resource id> [--wait[=n]]
	      Allow the cluster to start the resource. Depending on  the  rest
	      of  the configuration (constraints, options, failures, etc), the
	      resource may remain stopped.  If --wait is specified,  pcs  will
	      wait up to 30 seconds (or 'n' seconds) for the resource to start
	      and then return 0 if the	resource  is  started,	or  1  if  the
	      resource has not yet started.
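
              For example, to allow the cluster to start a resource named
              'WebSite' (an illustrative name) and wait up to 60 seconds for
              it to start:
              pcs resource enable WebSite --wait=60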

       disable <resource id> [--wait[=n]]
	      Attempt  to  stop	 the  resource if it is running and forbid the
	      cluster from starting it again.  Depending on the	 rest  of  the
	      configuration   (constraints,   options,	 failures,  etc),  the
	      resource may remain started.  If --wait is specified,  pcs  will
	      wait  up to 30 seconds (or 'n' seconds) for the resource to stop
	      and then return 0 if  the	 resource  is  stopped	or  1  if  the
	      resource has not stopped.

       debug-start <resource id> [--full]
	      This  command will force the specified resource to start on this
	      node ignoring the cluster recommendations and print  the	output
	      from starting the resource. Using --full will give more detailed
	      output. This is mainly used for debugging resources that fail to
	      start.

       move <resource id> [destination node] [--master]
	      Move  resource off current node (and optionally onto destination
	      node). If --master is used the scope of the command  is  limited
	      to  the  master  role and you must use the master id (instead of
	      the resource id).
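
              For example, to move a resource named 'WebSite' (an
              illustrative name) off its current node and onto node2:
              pcs resource move WebSite node2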

       ban <resource id> [node] [--master]
	      Prevent the resource id specified from running on the  node  (or
	      on  the  current node it is running on if no node is specified).
	      If --master is used the scope of the command is limited  to  the
	      master  role  and	 you  must  use	 the master id (instead of the
	      resource id).

       clear <resource id> [node] [--master]
	      Remove constraints created by move and/or ban on	the  specified
	      resource	(and node if specified). If --master is used the scope
	      of the command is limited to the master role and	you  must  use
	      the master id (instead of the resource id).

       standards
	      List  available  resource	 agent	standards  supported  by  this
	      installation. (OCF, LSB, etc.)

       providers
	      List available OCF resource agent providers

       agents [standard[:provider]]
	      List  available  agents  optionally  filtered  by	 standard  and
	      provider

       update <resource id> [resource options] [op [<operation action>
       <operation options>]...] [meta <meta options>...]
	      Add/Change options to specified resource, clone or multi-state
	      resource.
	      If an operation (op) is specified it will update the first found
	      operation with the same action on the specified resource, if  no
	      operation	 with  that action exists then a new operation will be
	      created (WARNING: all current options on the update op  will  be
	      reset  if not specified). If you want to create multiple monitor
	      operations you should use the add_operation  &  remove_operation
	      commands.
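
              For example, to change the monitor interval of 'VirtualIP'
              (names and values are illustrative; note that any other
              options of the updated op must be restated or they are reset):
              pcs resource update VirtualIP op monitor interval=60s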

       op add <resource id> <operation action> [operation properties]
	      Add operation for specified resource
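
              For example, to add a second monitor operation to 'VirtualIP'
              (names and values are illustrative):
              pcs resource op add VirtualIP monitor interval=60s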

       op remove <resource id> <operation action> <operation properties>
	      Remove  specified	 operation  (note:  you must specify the exact
	      operation properties to properly remove an existing operation).

       op remove <operation id>
	      Remove the specified operation id

       op defaults [options]
	      Set default values for operations.  If no options are passed,
	      currently configured defaults are listed.

       meta <resource id | group id | master id | clone id> <meta options>
	      Add  specified  options  to  the specified resource, group, mas‐
	      ter/slave or clone.  Meta options should be  in  the  format  of
	      name=value,  options may be removed by setting an option without
	      a value. Example: pcs resource meta TestResource
	      failure-timeout=50 stickiness=

       group add <group name> <resource id> [resource id] ... [resource id]
	      Add  the	specified resource to the group, creating the group if
	      it does not exist.  If the resource is present in another	 group
	      it is moved to the new group.

       group remove <group name> <resource id> [resource id] ... [resource id]
	      Remove the specified resource(s) from the group, removing the
	      group if no resources remain in it.

       ungroup <group name> [resource id] ... [resource id]
	      Remove the group (Note: this does not remove any resources  from
	      the cluster) or if resources are specified, remove the specified
	      resources from the group

       clone <resource id | group id> [clone options]...
	      Set up the specified resource or group as a clone

       unclone <resource id | group name>
	      Remove the clone which contains the specified group or  resource
	      (the resource or group will not be removed)

       master [<master/slave name>] <resource id | group name> [options]
	      Configure	 a  resource  or group as a multi-state (master/slave)
	      resource.	 Note:	to  remove  a  master  you  must  remove   the
	      resource/group it contains.

       manage <resource id> ... [resource n]
	      Set resources listed to managed mode (default)

       unmanage <resource id> ... [resource n]
	      Set resources listed to unmanaged mode

       defaults [options]
	      Set default values for resources.  If no options are passed,
	      currently configured defaults are listed.

       cleanup <resource id>
	      Cleans up the resource in the lrmd (useful to reset the resource
	      status  and  failcount).	This  tells  the cluster to forget the
	      operation history of a resource and re-detect its current state.
	      This can be useful to purge knowledge of past failures that have
	      since been resolved.

       failcount show <resource id> [node]
	      Show current failcount for specified resource from all nodes  or
	      only on specified node

       failcount reset <resource id> [node]
	      Reset  failcount	for specified resource on all nodes or only on
	      specified node. This tells the cluster to forget how many	 times
	      a	 resource has failed in the past.  This may allow the resource
	      to be started or moved to a more preferred location.
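
              For example, to clear the failcount of 'VirtualIP' on node1
              only (resource and node names are illustrative):
              pcs resource failcount reset VirtualIP node1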

   cluster
       auth [node] [...] [-u username] [-p password] [--local] [--force]
	      Authenticate pcs to pcsd on nodes specified,  or	on  all	 nodes
	      configured  in  corosync.conf  if no nodes are specified (autho‐
	      rization	  tokens    are	   stored    in	   ~/.pcs/tokens    or
	      /var/lib/pcsd/tokens for root).  By default all nodes are also
	      authenticated to each other; using --local only authenticates
	      the local node (and does not authenticate the remote nodes with
	      each other).  Using --force forces re-authentication to occur.
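
              For example, to authenticate against two nodes (node names are
              illustrative; 'hacluster' is the account commonly used by
              pcsd):
              pcs cluster auth node1 node2 -u hacluster -p <password>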

       setup   [--start]   [--local]   [--enable]   --name   <cluster	 name>
       <node1[,node1-altaddr]>	 [node2[,node2-altaddr]]   [..]	  [--transport
       <udpu|udp>] [--rrpmode active|passive] [--addr0 <addr/net>  [[[--mcast0
       <address>]  [--mcastport0  <port>]  [--ttl0  <ttl>]]  | [--broadcast0]]
       [--addr1	 <addr/net>  [[[--mcast1  <address>]   [--mcastport1   <port>]
       [--ttl1	   <ttl>]]    |	   [--broadcast1]]]]	[--wait_for_all=<0|1>]
       [--auto_tie_breaker=<0|1>]		    [--last_man_standing=<0|1>
       [--last_man_standing_window=<time  in ms>]] [--token <timeout>] [--join
       <timeout>]   [--consensus   <timeout>]	[--miss_count_const   <count>]
       [--fail_recv_const <failures>]
	      Configure	 corosync  and sync configuration out to listed nodes.
	      --local will only perform changes on  the	 local	node,  --start
	      will  also  start	 the  cluster on the specified nodes, --enable
	      will enable corosync and pacemaker on node startup,  --transport
	      allows  specification of corosync transport (default: udpu). The
	      --wait_for_all,	  --auto_tie_breaker,	  --last_man_standing,
	      --last_man_standing_window   options   are   all	documented  in
	      corosync's votequorum(5) man page.

	      --ipv6 will configure corosync to use ipv6 (instead of ipv4)

	      --token  <timeout>  sets time in milliseconds until a token loss
	      is declared after not receiving a token (default 1000 ms)

	      --join <timeout> sets time in  milliseconds  to  wait  for  join
	      messages (default 50 ms)

	      --consensus <timeout> sets time in milliseconds to wait for con‐
	      sensus to be achieved before starting a new round of  membership
	      configuration (default 1200 ms)

	      --miss_count_const  <count>  sets the maximum number of times on
	      receipt of a token  a  message  is  checked  for	retransmission
	      before a retransmission occurs (default 5 messages)

	      --fail_recv_const <failures> specifies how many rotations of the
	      token without receiving any messages  when  messages  should  be
	      received may occur before a new configuration is formed (default
	      2500 failures)
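
	      Example (cluster and node names are illustrative): configure a
	      two node cluster with the default transport and start it on
	      both nodes:

	      pcs cluster setup --start --name mycluster node1 node2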

	      Configuring Redundant Ring Protocol (RRP)

	      When using udpu (the default) specifying nodes, specify the ring
	      0 address first followed by a ',' and then the ring 1 address.

	      Example:	 pcs   cluster	 setup	--name	cname  nodeA-0,nodeA-1
	      nodeB-0,nodeB-1

	      When using udp, using --addr0 and --addr1 will allow you to con‐
	      figure rrp mode for corosync.  It's recommended to use a network
	      (instead of IP address) for --addr0  and	--addr1	 so  the  same
	      corosync.conf  file  can	be  used around the cluster.  --mcast0
	      defaults to 239.255.1.1 and --mcast1  defaults  to  239.255.2.1,
	      --mcastport0/1  default  to  5405	 and  ttl  defaults  to	 1. If
	      --broadcast is specified, --mcast0/1, --mcastport0/1 &  --ttl0/1
	      are ignored.

       start [--all] [node] [...]
	      Start  corosync  &  pacemaker on specified node(s), if a node is
	      not specified then corosync & pacemaker are started on the local
	      node.  If	 --all	is  specified  then  corosync  & pacemaker are
	      started on all nodes.

       stop [--all] [node] [...]
	      Stop corosync & pacemaker on specified node(s), if a node is not
	      specified	 then  corosync	 &  pacemaker are stopped on the local
	      node. If --all  is  specified  then  corosync  &	pacemaker  are
	      stopped on all nodes.

       kill   Force  corosync  and pacemaker daemons to stop on the local node
	      (performs kill -9).

       enable [--all] [node] [...]
	      Configure corosync & pacemaker to run on node boot on  specified
	      node(s),	if node is not specified then corosync & pacemaker are
	      enabled on the local node. If --all is specified then corosync &
	      pacemaker are enabled on all nodes.

       disable [--all] [node] [...]
	      Configure corosync & pacemaker to not run on node boot on speci‐
	      fied node(s), if node is not specified then corosync & pacemaker
	      are  disabled  on	 the  local  node.  If --all is specified then
	      corosync & pacemaker are disabled on all nodes. (Note:  this  is
	      the default after installation)

       standby [<node>] | --all
	      Put specified node into standby mode (the node specified will no
	      longer be able to host resources), if no	node  or  options  are
	      specified	 the  current  node  will be put into standby mode, if
	      --all is specified all nodes will be put into standby mode.
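
              For example, to put node1 (an illustrative node name) into
              standby mode:
              pcs cluster standby node1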

       unstandby [<node>] | --all
	      Remove node from standby mode (the node specified	 will  now  be
	      able to host resources), if no node or options are specified the
	      current node will be removed from	 standby  mode,	 if  --all  is
	      specified all nodes will be removed from standby mode.

       remote-node add <hostname> <resource id> [options]
	      Enables  the specified resource as a remote-node resource on the
	      specified hostname (hostname should be the same as 'uname -n')

       remote-node remove <hostname>
	      Disables any resources configured to be remote-node resource  on
	      the  specified  hostname	(hostname should be the same as 'uname
	      -n')

       status View current cluster status (an alias of 'pcs status cluster')

       pcsd-status [node] [...]
	      Get current status of pcsd on nodes specified, or on  all	 nodes
	      configured in corosync.conf if no nodes are specified

       certkey <certificate file> <key file>
	      Load custom certificate and key files for use in pcsd

       sync   Sync  corosync  configuration  to	 all  nodes found from current
	      corosync.conf file

       cib [filename]
	      Get the raw xml from the CIB (Cluster Information Base).	 If  a
	      filename	is  provided,  we save the cib to that file, otherwise
	      the cib is printed

       cib-push <filename>
	      Push the raw xml from <filename> to the CIB (Cluster Information
	      Base)
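
              A common workflow (the file name and resource values are
              illustrative) is to save the CIB to a file, make changes
              against that file with -f, and push the result back:

              pcs cluster cib my_config.xml
              pcs -f my_config.xml resource create VirtualIP \
                     ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=32
              pcs cluster cib-push my_config.xml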

       edit   Edit  the cib in the editor specified by the $EDITOR environment
	      variable and push out any changes upon saving

       node add <node> [--start] [--enable]
	      Add the node to corosync.conf and corosync on all nodes  in  the
	      cluster  and  sync  the  new  corosync.conf to the new node.  If
	      --start is specified also start corosync/pacemaker on the new
	      node; if --enable is specified enable corosync/pacemaker on the
	      new node.

       node remove <node>
	      Shutdown	specified  node	 and  remove  it  from	pacemaker  and
	      corosync on all other nodes in the cluster

       uidgid List the currently configured uids and gids of users allowed to
	      connect to corosync

       uidgid add [uid=<uid>] [gid=<gid>]
	      Add the specified uid and/or gid to  the	list  of  users/groups
	      allowed to connect to corosync

       uidgid rm [uid=<uid>] [gid=<gid>]
	      Remove   the   specified	 uid  and/or  gid  from	 the  list  of
	      users/groups allowed to connect to corosync

       corosync <node>
	      Get the corosync.conf from the specified node

       reload corosync
	      Reload the corosync configuration on the current node

       destroy [--all]
	      Permanently destroy the cluster on the current node, killing all
	      corosync/pacemaker processes and removing all cib files and the
	      corosync.conf file.  Using '--all' will attempt to destroy the
	      cluster on all nodes configured in the corosync.conf file.
	      WARNING: This command permanently removes any cluster
	      configuration that has been created.  It is recommended to run
	      'pcs cluster stop' before destroying the cluster.

       verify [-V] [filename]
	      Checks the pacemaker configuration (cib) for syntax and common
	      conceptual errors.  If no filename is specified the check is
	      performed on the currently running cluster.  If '-V' is used
	      more verbose output will be printed.

       report [--from "YYYY-M-D H:M:S" [--to "YYYY-M-D H:M:S"]] dest
	      Create  a	 tarball  containing  everything needed when reporting
	      cluster problems.	 If '--from' and  '--to'  are  not  used,  the
	      report will include the past 24 hours

   stonith
       show [stonith id] [--full]
	      Show all currently configured stonith devices or if a stonith id
	      is specified show the options for the configured stonith device.
	      If  --full  is  specified all configured stonith options will be
	      displayed

       list [filter] [--nodesc]
	      Show list of all available stonith agents (if filter is provided
	      then  only stonith agents matching the filter will be shown). If
	      --nodesc is used then descriptions of stonith agents are not
	      printed.

       describe <stonith agent>
	      Show options for specified stonith agent

       create <stonith id> <stonith device type> [stonith device options]
	      Create stonith device with specified type and options

       update <stonith id> [stonith device options]
	      Add/Change options to specified stonith id

       delete <stonith id>
	      Remove stonith id from configuration

       cleanup <stonith id>
	      Cleans  up  the  stonith device in the lrmd (useful to reset the
	      status and failcount).  This tells the  cluster  to  forget  the
	      operation	 history of a stonith device and re-detect its current
	      state.  This can be useful to purge knowledge of	past  failures
	      that have since been resolved.

       level  Lists all of the fencing levels currently configured

       level add <level> <node> <devices>
	      Add  the fencing level for the specified node with a comma sepa‐
	      rated list of devices (stonith ids) to attempt for that node  at
	      that level.  Fence levels are attempted in numerical order
	      (starting with 1); if a level succeeds (meaning all devices are
	      successfully fenced in that level) then no other levels are
	      tried, and the node is considered fenced.
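
              For example, to try the stonith device 'dev_a' first and fall
              back to 'dev_b' when fencing node1 (device and node names are
              illustrative):
              pcs stonith level add 1 node1 dev_a
              pcs stonith level add 2 node1 dev_b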

       level remove <level> [node id] [stonith id] ... [stonith id]
	      Removes the fence level for the level, node and/or devices
	      specified.  If no nodes or devices are specified then the fence
	      level is removed.

       level clear [node|stonith id(s)]
	      Clears the fence levels on the node (or stonith id) specified or
	      clears  all  fence levels if a node/stonith id is not specified.
	      If more than one stonith id is specified they must be  separated
	      by  a  comma  and	 no  spaces.  Example: pcs stonith level clear
	      dev_a,dev_b

       level verify
	      Verifies all fence devices and nodes specified in	 fence	levels
	      exist

       fence <node> [--off]
	      Fence  the  node specified (if --off is specified, use the 'off'
	      API call to stonith which will turn  the	node  off  instead  of
	      rebooting it)

       confirm <node>
	      Confirm that the host specified is currently down.  WARNING: if
	      this node is not actually down data corruption/cluster failure
	      can occur.

   property
       list|show [<property> | --all | --defaults]
	      List property settings (default: lists configured properties).
	      If --defaults is specified all property defaults will be shown.
	      If --all is specified, currently configured properties will be
	      shown along with unset properties and their defaults.

       set [--force] [--node <nodename>] <property>=[<value>]
	      Set specific pacemaker properties (if the value is blank then
	      the property is removed from the configuration).  If a property
	      is not recognized by pcs the property will not be created
	      unless '--force' is used.  If --node is used a node attribute
	      is set on the specified node.

       unset [--node <nodename>] <property>
	      Remove property from configuration  (or  remove  attribute  from
	      specified node if --node is used).

   constraint
       [list|show] [--full]
	      List all current location, order and colocation constraints.
	      If --full is specified also list the constraint ids.

       location <resource id> prefers <node[=score]>...
	      Create a location constraint on a resource to prefer the	speci‐
	      fied node and score (default score: INFINITY)

       location <resource id> avoids <node[=score]>...
	      Create  a	 location constraint on a resource to avoid the speci‐
	      fied node and score (default score: INFINITY)
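
              For example, to prefer node1 with a score of 50 and to avoid
              node3 entirely (resource and node names are illustrative):
              pcs constraint location WebSite prefers node1=50
              pcs constraint location WebSite avoids node3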

       location	 <resource  id>	  rule	 [role=master|slave]   [score=<score>]
       <expression>
	      Creates  a  location  rule  on  the specified resource where the
	      expression looks like one of the following:
		defined|not_defined <attribute>
		<attribute> lt|gt|lte|gte|eq|ne <value>
		date [start=<start>] [end=<end>] operation=gt|lt|in-range
		date-spec <date spec options>...
		<expression> and|or <expression>
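
              For example, assuming a node attribute named 'datacenter' has
              been set on the nodes (the attribute, value and resource name
              are illustrative):
              pcs constraint location WebSite rule score=-INFINITY \
                     datacenter eq dc2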

       location show [resources|nodes [node id|resource id]...] [--full]
	      List all the current location  constraints,  if  'resources'  is
	      specified	  location  constraints	 are  displayed	 per  resource
	      (default), if 'nodes' is specified location constraints are dis‐
	      played  per  node.  If specific nodes or resources are specified
	      then we only show information about them

       location add <id> <resource name> <node> <score>
	      Add a location constraint	 with  the  appropriate	 id,  resource
	      name, node name and score. (For more advanced pacemaker usage)

       location remove <id> [<resource name> <node> <score>]
	      Remove  a	 location constraint with the appropriate id, resource
	      name, node name and score. (For more advanced pacemaker usage)

       order show [--full]
	      List all current ordering constraints (if '--full' is specified
	      show the internal constraint ids as well).

       order [action] <resource id> then [action] <resource id> [options]
	      Add an ordering constraint specifying actions (start, stop,
	      promote, demote); if no action is specified the default action
	      will be start.  Available options are
	      kind=Optional/Mandatory/Serialize and symmetrical=true/false.
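
              For example, to start VirtualIP before WebSite (resource names
              are illustrative):
              pcs constraint order start VirtualIP then start WebSite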

       order set <resource1> <resource2> [resourceN]... [options] [set
       <resourceX> <resourceY> ...]
	      Create an ordered set of resources.

       order remove <resource1> [resourceN]...
	      Remove resource from any ordering constraint

       colocation show [--full]
	      List all current colocation constraints (if '--full' is
	      specified show the internal constraint ids as well).

       colocation add [master|slave] <source resource id> with [master|slave]
       <target resource id> [score] [options]
	      Request <source resource> to run on the same node where
	      pacemaker has determined <target resource> should run.
	      Positive values of score mean the resources should be run on
	      the same node, negative values mean the resources should not be
	      run on the same node.  Specifying 'INFINITY' (or '-INFINITY')
	      for the score forces <source resource> to run (or not run) with
	      <target resource>.  (score defaults to "INFINITY")  A role can
	      be master or slave (if no role is specified, it defaults to
	      'started').
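
              For example, to keep WebSite on the same node as VirtualIP
              (resource names are illustrative):
              pcs constraint colocation add WebSite with VirtualIP INFINITY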

       colocation set <resource1> <resource2> [resourceN]... [setoptions] ...
       [set <resourceX> <resourceY> ...] [setoptions <name>=<value>...]
	      Create a colocation constraint with a resource set.

       colocation remove <source resource id> <target resource id>
	      Remove colocation constraints with <source resource>

       remove [constraint id]...
	      Remove constraint(s) or  constraint  rules  with	the  specified
	      id(s)

       ref <resource>...
	      List constraints referencing specified resource

       rule add <constraint id> [<rule type>] [score=<score>] [id=<rule id>]
       <expression|date_expression|date_spec>...
	      Add a rule to a constraint; if score is omitted it defaults to
	      INFINITY, if id is omitted one is generated from the
	      constraint id.  The <rule type> should be 'expression' or
	      'date_expression'.

       rule remove <rule id>
	      Remove a rule with the specified rule id.  If the rule is the
	      last rule in its constraint, the constraint will be removed.

   status
       [status]
	      View all information about the cluster and resources

       resources
	      View current status of cluster resources

       groups View currently configured groups and their resources

       cluster
	      View current cluster status

       corosync
	      View current membership information as seen by corosync

       nodes [corosync|both|config]
	      View  current  status  of nodes from pacemaker. If 'corosync' is
	      specified, print nodes  currently	 configured  in	 corosync,  if
	      'both' is specified, print nodes from both corosync & pacemaker.
	      If 'config' is specified, print nodes from corosync &  pacemaker
	      configuration.

       pcsd <node> ...
	      Show the current status of pcsd on the specified nodes

       xml    View xml version of status (output from crm_mon -r -1 -X)

EXAMPLES
       Show all resources
	      # pcs resource show

       Show options specific to the 'VirtualIP' resource
	      # pcs resource show VirtualIP

       Create a new resource called 'VirtualIP' with options
	      #	   pcs	 resource   create   VirtualIP	 ocf:heartbeat:IPaddr2
	      ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s

       Create a new resource called 'VirtualIP' with options
	      #	 pcs  resource	create	 VirtualIP   IPaddr2   ip=192.168.0.99
	      cidr_netmask=32 nic=eth2 op monitor interval=30s

       Change the ip address of VirtualIP and remove the nic option
	      # pcs resource update VirtualIP ip=192.168.0.98 nic=

       Delete the VirtualIP resource
	      # pcs resource delete VirtualIP

       Create  the  MyStonith  stonith	fence_virt device which can fence host
       'f1'
	      # pcs stonith create MyStonith fence_virt pcmk_host_list=f1

       Set the stonith-enabled property to false on the	 cluster  (which  dis‐
       ables stonith)
	      # pcs property set stonith-enabled=false

pcs 0.9.115			  August 2013				PCS(8)