wlmpard man page on HP-UX

wlmpard(1M)							   wlmpard(1M)

NAME
       wlmpard - activate the HP-UX Workload Manager Global Arbiter

SYNOPSIS
       wlmpard -A [-p] [-s] [-t] [-n] [-l par[=n]]

       wlmpard -a configfile [-p] [-s] [-t] [-n] [-l par[=n]]

       wlmpard -c configfile

       wlmpard -C

       wlmpard -k [-n]

       wlmpard -h

       wlmpard -V

DESCRIPTION
       Use wlmpard to:

	 · Migrate  CPU resources (cores) across virtual partitions (a core is
	   the actual data-processing engine within a processor, where a  pro‐
	   cessor might have multiple cores)

	 · Simulate  core migration between nPartitions using Instant Capacity
	   (formerly known as iCOD)

	 · Optimize use of Temporary Instant Capacity (v6 or  later)  and  Pay
	   per use (v4, v7, or later) resources to minimize costs

	 · Manage  CPU	allocation  for	 a combination of nPartitions, virtual
	   partitions, PSET workload groups, and FSS workload groups

       Every global arbiter interval (120 seconds by default), the WLM	global
       arbiter checks for CPU requests.

       When managing partitions, the arbiter moves cores across partitions, if
       needed, to better achieve the SLOs specified in the  WLM	 configuration
       files  that are active in the partitions. (Given the physical nature of
       nPartitions, WLM only simulates core movement--as described in the wlm‐
       parconf(4)  manpage.) The WLM daemon must be running in each partition.
       Also, the WLM configuration in each partition must use the
       primary_host keyword to reference the name of the system where the
       global arbiter is running.
       (This system can be one of  the	partitions  or	another	 system.   The
       global  arbiter can run on any HP-UX system that has network connectiv‐
       ity to the partitions being managed by WLM.)

       WLM allows you to manage a combination of nPars containing  vPars  con‐
       taining FSS or PSET-based groups.

       In addition to partition management, wlmpard optimizes Temporary
       Instant Capacity (v6 or later) and Pay per use (v4, v7, or later)
       resources.

       A global arbiter configuration file is required.

       wlmpard logs messages to the file /var/opt/wlm/msglog in the partition
       in which it is running.

       To start wlmpard automatically at system boot, edit the file
       /etc/rc.config.d/wlm.
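
       The relevant fragment of /etc/rc.config.d/wlm looks roughly like the
       sketch below. The variable names shown are illustrative assumptions,
       not confirmed; check the comments in the file shipped on your system
       for the actual names:

       ```
       # Hypothetical fragment of /etc/rc.config.d/wlm -- variable names
       # are illustrative; consult the comments in the shipped file.
       WLM_ENABLE=1        # start the WLM daemon (wlmd) at boot
       WLMPARD_ENABLE=1    # start the WLM global arbiter (wlmpard) at boot
       ```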

OPTIONS
       -h
	    Displays usage information and exits. This option overrides all
	    other options.

       -V
	    Displays version information and exits. This option overrides all
	    options other than -h.

       -C
	    Displays the most recent global arbiter configuration, appending
	    two commented lines that indicate the origin of the
	    configuration.

       -n
	    Prevents the global arbiter from running in daemon mode (that is,
	    forces it to run in the foreground).

       -p
	    Causes the global arbiter to run in passive mode. In this mode,
	    you can see approximately how WLM will respond to a particular
	    global arbiter configuration--without the configuration actually
	    taking control of your system. Using this mode allows you to
	    verify various items in your global arbiter configuration.

	    You	 can  run in passive mode with each partition's daemon running
	    in regular mode. Thus, you can run	experiments  on	 a  production
	    system without consequence.

	    To see approximately how WLM responds to the configuration, use
	    the WLM utility wlminfo.

	    For more information on passive mode, including  its  limitations,
	    see the wlm(5) manpage.

       -s
	    Causes the global arbiter to run in secure mode if you have
	    distributed	 security certificates to the systems in question. For
	    more information on using  security	 certificates,	see  the  wlm‐
	    cert(1M) manpage.

	    The global arbiter runs in secure mode by default when you use the
	    /sbin/init.d/wlm script to start WLM. If you upgraded WLM,	secure
	    mode might not be the default. Ensure that the secure-mode
	    variable in the file /etc/rc.config.d/wlm is enabled. You can
	    change the default by editing this variable.

       -t
	    Generates comma-separated audit data files. These files are
	    placed in the
	    directory /var/opt/wlm/audit/ and are named wlmpard.monyyyy,  with
	    monyyyy representing the month and year the data was gathered. You
	    can access these files directly or through the wlmaudit command.
	    For information on wlmaudit or on the format of the data files,
	    see the wlmaudit(1M) manpage.

	    Be sure to set the global arbiter interval in your WLM global
	    arbiter configuration file as indicated in the wlmparconf(4)
	    manpage when you use this option.
	    The interval for the global arbiter daemon should be  larger  than
	    the largest WLM interval being used on the system.
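
	    As a quick sanity check of the naming scheme, the month-and-year
	    suffix described above can be reproduced with standard shell
	    tools (the directory path is taken from this manpage; no wlmpard
	    option is invoked here):

	    ```shell
	    # Build this month's audit file name in the documented
	    # wlmpard.monyyyy form, then look for it under the audit
	    # directory (the ls simply finds nothing if -t was never used).
	    suffix=$(LC_ALL=C date '+%b%Y' | tr '[:upper:]' '[:lower:]')
	    echo "wlmpard.${suffix}"
	    ls -l "/var/opt/wlm/audit/wlmpard.${suffix}" 2>/dev/null || true
	    ```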

       -A
	    Activates a copy of the most recent global arbiter configuration.

	    WLM	 activation  may  take longer than usual when managing nParti‐
	    tions.

       -a configfile
	    Activates the configuration specified in the file configfile. If
	    configfile is not valid, an error message is displayed, and
	    wlmpard exits.

	    WLM	 activation  may  take longer than usual when managing nParti‐
	    tions.

       -c configfile
	    Checks the configuration specified in the file configfile for
	    syntax errors. The current configuration is not affected.

       -l par[=n]
	    Causes wlmpard to log either virtual partition or nPartition
	    statistics to the file /var/opt/wlm/wlmpardstats every global
	    arbiter interval or every n intervals.

	    You can use the wlminfo command to review statistics from
	    /var/opt/wlm/wlmpardstats:

	    % wlminfo par -o

	    For more information on wlminfo, see the wlminfo(1M) manpage.

	    You can enable automatic trimming of the wlmpardstats file by
	    setting the appropriate keyword in your WLM global arbiter
	    configuration. For more information, see the wlmparconf(4)
	    manpage.

       -k
	    Kills any running instance of wlmpard. Use this option to shut
	    down the HP-UX Workload Manager global arbiter.

	    WLM shutdown may take longer than usual when managing nPartitions.

HOW TO USE wlmpard TO MIGRATE CORES ACROSS PARTITIONS
       HP recommends running WLM global arbitration in secure mode; otherwise,
       a rogue user could manipulate the communications, resulting in  one  or
       more  partitions	 being	granted	 an  incorrect number of CPU resources
       (cores). To enable secure communications, you must set up security cer‐
       tificates  and  distribute  them	 to  all systems in question. For more
       information, refer to the section HOW TO SECURE COMMUNICATIONS  in  the
       wlmcert(1M) manpage.

       Assuming	 you have completed the required steps for setting up and dis‐
       tributing security certificates, WLM global arbitration runs in	secure
       mode  by default when you use the /sbin/init.d/wlm script to start WLM.
       (If you upgraded WLM, secure mode might not be the default. Ensure
       that the secure-mode variable in /etc/rc.config.d/wlm is enabled.)
       You can also activate global arbitration in secure mode by using the
       -s option. HP recommends using secure communications. If you must
       disable secure communications, use global arbitration only on trusted
       local area networks.
       For  information on disabling secure communications, refer to the HP-UX
       Workload Manager User's Guide.

       Do not use cell-specific core bindings or user-assigned	core  bindings
       on virtual partitions you are going to manage with WLM.

       With Instant Capacity v6 or earlier, do not include spaces in partition
       names. Also, if the partition status tools truncate the name of an
       nPartition, shorten the name so that it is not truncated.

       Do not adjust any partition WLM is managing while wlmpard is running.
       This includes using vparmodify or parmodify to change the name,
       configuration, or resources (CPU and memory) associated with the
       virtual partition or nPartition. This also includes performing online
       cell (cell OL*) operations on a cell in a WLM-managed partition. (To
       check the status of online cell operations, use the parolrad command.)
       To adjust a partition or cell, first shut down WLM (including wlmpard)
       on all partitions that will be affected by the modification, then
       modify the partition. Restart WLM after modifying the partition.
       (Changes to Instant Capacity affect the entire complex;
       changes to a virtual  partition	affect	the  nPartition	 only,	unless
       Instant Capacity is configured on the nPartition.)  For example, if WLM
       is managing two virtual partitions vParA and vParB,  and	 you  need  to
       migrate memory resources from vParA to vParB, you must shut down WLM in
       both virtual partitions. As another example, to change the name	of  an
       nPartition,  you	 must  first  shut  down WLM in every operating system
       instance across the entire complex, because  the	 name  change  affects
       Instant	Capacity, and Instant Capacity changes affect every nPartition
       across the complex.

       You can configure WLM to manage partitions and workload groups  (either
       FSS  or PSET-based groups) at the same time.  For restrictions pertain‐
       ing to HP Integrity Virtual Machines, see  the  section	"Compatibility
       with HP Integrity Virtual Machines".

       The following steps explain how to use WLM's global arbiter:

       1. (Optional) Set up secure WLM communications

	  Follow the procedure HOW TO SECURE COMMUNICATIONS in the wlmcert(1M)
	  manpage--skipping the step about starting/restarting	the  WLM  dae‐
	  mons. You will do that later in this procedure.

       2. Create a WLM configuration file for each partition

	  Each partition on the system must have the WLM daemon running.
	  Create a WLM configuration file for each partition, ensuring each
	  configuration uses the primary_host keyword to reference the system
	  where the global arbiter is running.

	  For example WLM configurations for virtual  partitions  and  nParti‐
	  tions,    see	   /opt/wlm/examples/wlmconf/par_usage_goal.wlm	   and
	  /opt/wlm/examples/wlmconf/par_manual_allocation.wlm.

	  WLM allocates cores to a partition based on the CPU resource	limits
	  of  the  partition  (physical limits for nPartitions; logical limits
	  for virtual partitions). For example, WLM adjusts the number of CPU
	  resources (cores) assigned to a virtual partition within the limits
	  of the given virtual partition's minimum and maximum number of
	  cores, which you set when creating or modifying the virtual
	  partition.
	  The way WLM uses group weighting to determine CPU allocations across
	  partitions is equivalent to the  way	it  uses  group	 weighting  to
	  determine  allocations  within  a  partition.	 For more information,
	  refer to the wlmconf(4) manpage and the HP-UX Workload Manager
	  User's Guide (/opt/wlm/share/doc/WLMug.pdf).

	  If the relevant tunable is set to 0 in a WLM configuration that
	  contains goal-based SLOs, those SLOs may not release CPU resources
	  properly when the CPU resources are no longer needed. The tunable
	  is set to 1 by default. For more information, see the wlmconf(4)
	  manpage.
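
	  To make the shape of such a file concrete, here is a minimal sketch
	  of a per-partition WLM configuration. The hostname, group name, and
	  values are illustrative assumptions; the shipped par_usage_goal.wlm
	  example shows complete, verified syntax:

	  ```
	  # Minimal per-partition WLM configuration sketch (illustrative).
	  # "wlmhost" is an assumed name for the system running wlmpard.
	  primary_host = wlmhost;

	  prm {
	      groups = g_work : 2;      # g_work is a hypothetical group
	  }

	  slo s_work {
	      pri = 1;                  # priority of this SLO
	      entity = PRM group g_work;
	      mincpu = 100;             # CPU bounds, in shares
	      maxcpu = 400;
	  }
	  ```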

       3. Activate each partition's WLM configuration file in passive mode
	    if desired

	  WLM operates in "passive mode" when you include the -p option in
	  the command you use to activate a configuration. With passive mode,
	  you can see approximately how WLM will respond to a particular
	  configuration--without the configuration actually taking control of
	  your system.

	  Activate each partition's WLM configuration file configfile in  pas‐
	  sive mode as follows:

	  % wlmd -p -a configfile

	  WLM activation may take longer than usual when managing nPartitions.

	  To see approximately how WLM will respond to the configuration, use
	  the WLM utility wlminfo.

       4. Activate each partition's WLM configuration file

	  After verifying each partition's WLM configuration  file  configfile
	  in passive mode, activate it as follows:

	  % wlmd -a configfile

	  To use secure communications, activate the file using the -s
	  option:

	  % wlmd -s -a configfile

	  The daemon runs in secure mode by default when you use the
	  /sbin/init.d/wlm script to start WLM. (If you upgraded WLM, secure
	  mode might not be the default. Ensure that the secure-mode variable
	  in /etc/rc.config.d/wlm is enabled.)

       5. Create a configuration file for the global arbiter

	  On the system running the global  arbiter,  create  a	 configuration
	  file	for  the  global  arbiter. (If this system is being managed by
	  WLM, it will have both a WLM configuration file  and	a  WLM	global
	  arbiter  configuration  file.	 You  can  set	up  and run the global
	  arbiter configuration on a system that is  not  managed  by  WLM  if
	  needed  for  the  creation of a fault-tolerant environment or a Ser‐
	  viceguard environment.)

	  This global arbiter configuration file is required.  In  this	 file,
	  you can specify the:

	      · Global arbiter interval

	      · Port used for communications between the partitions

	      · Size at which to truncate the wlmpardstats log file

	      · Lowest	priority  at  which  Temporary Instant Capacity (v6 or
		later) or Pay per use (v4, v7, or later) resources are used

		Specifying this priority ensures WLM maintains compliance with
		your  usage  rights  for Temporary Instant Capacity. When your
		prepaid amount of temporary capacity expires,  WLM  no	longer
		attempts to use the temporary resources.

	  If you specify an interval, it should be larger than the largest WLM
	  interval you use on the system. For the syntax of this file--as
	  well as default values for the items above--see the wlmparconf(4)
	  manpage.
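
	  The items above can be sketched in a minimal wlmpard configuration
	  file. The structure and values below are illustrative assumptions;
	  wlmparconf(4) is the authority on the exact keyword names, syntax,
	  and defaults:

	  ```
	  # Minimal global arbiter configuration sketch (illustrative).
	  par {
	      interval = 120;     # global arbiter interval, in seconds
	      utilitypri = 1;     # lowest priority allowed to use
	                          # TiCAP/PPU cores (assumed keyword)
	  }
	  ```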

	  For example WLM global arbiter configurations for  managing  virtual
	  partitions	 and	 nPartitions,	 see	/opt/wlm/examples/wlm‐
	  conf/par_usage_goal.wlmpar  and   /opt/wlm/examples/wlmconf/par_man‐
	  ual_allocation.wlmpar.

       6. Activate the global arbiter in passive mode if desired

	  Similar to WLM's passive mode, the WLM global arbiter has a passive
	  mode (also enabled with the -p option) that allows you to verify
	  settings before you let it control your system.

	  Activate the global arbiter configuration file configfile in passive
	  mode as follows:

	  % wlmpard -p -a configfile

	  WLM global arbiter activation may take longer than usual when manag‐
	  ing nPartitions.

	  Again, to see approximately how WLM responds, use the WLM utility
	  wlminfo.

       7. Activate the global arbiter

	  Activate the global arbiter configuration file as follows:

	  % wlmpard -a configfile

	  To use secure communications, activate the file using the -s
	  option:

	  % wlmpard -s -a configfile

	  The global arbiter runs in secure mode by default when you use the
	  /sbin/init.d/wlm script to start WLM. (If you upgraded WLM, secure
	  mode might not be the default. Ensure that the secure-mode variable
	  in /etc/rc.config.d/wlm is enabled.)

	  In a complex (a system divided into either  nPartitions  or  virtual
	  partitions):

	      · If the complex is an Instant Capacity system, you must use
		exactly one wlmpard process to manage all the partitions in
		the complex that are under WLM control

	      · If the complex is not an Instant Capacity system, you must
		use a separate wlmpard process for each nPartition--with each
		managing only the virtual partitions within its nPartition.

	  A  complex  is  an  Instant  Capacity	 system if Instant Capacity is
	  installed and Instant Capacity usage rights are applied on the  sys‐
	  tem.

MANAGEMENT OF NESTED nPARTITIONS / VIRTUAL PARTITIONS /
WORKLOAD GROUPS
       You  can	 manage	 any  combination of FSS or PSET-based workload groups
       inside virtual partitions inside nPartitions if desired.

       Certain software restrictions apply to using PSET-based groups with HP-
       UX  Virtual  Partitions (vPars), Instant Capacity, and Pay per use. For
       more    information,    refer	to    the    WLM     Release	 Notes
       (/opt/wlm/share/doc/Rel_Notes).

       For  information on setting up this type of management, see the chapter
       in the WLM user's guide (/opt/wlm/share/doc/WLMug.pdf) titled  "Manage‐
       ment of nested nPars / vPars / workload groups."

HOW TO USE wlmpard TO OPTIMIZE TEMPORARY INSTANT CAPACITY
AND PAY PER USE SYSTEMS
       If  you	have  Temporary	 Instant Capacity (v6 or later) or Pay per use
       (v4, v7, or later) resources available on a system, WLM	can  help  you
       optimize	 the use of those resources--using them only as needed to meet
       the service-level objectives for your workloads.

       While wlmpard has always managed migration of CPU resources across
       partitions for WLM, it now also provides management of Temporary
       Instant Capacity and Pay per use resources for WLM. This management is
       available on standalone systems, as well as across a collection of
       partitions. With the keyword mentioned below, wlmpard optimizes the
       Temporary Instant Capacity or Pay per use resources. It determines the
       amount of resources needed to meet your workloads' SLOs, then keeps
       the total number of active cores on the system to the minimum needed
       to meet that resource demand. By minimizing the number of active
       cores, WLM reduces your costs.

       To use wlmpard to optimize the use of these resources, follow the
       steps given above in the section "HOW TO USE wlmpard TO MIGRATE CORES
       ACROSS PARTITIONS"--with one exception: Specify the utilitypri
       keyword, which works with Temporary Instant Capacity v6 or later, and
       with PPU v4, v7, or later. For more information on this keyword, see
       the wlmparconf(4) manpage.

RETURN VALUE
       wlmpard returns exit status 0 if no errors occur, or nonzero if there
       are errors.

       With -a or -A, wlmpard returns exit status 0 if no errors occur, or
       nonzero if the configuration file is invalid or there were errors
       while activating the configuration.

       With -k, wlmpard returns exit status 0 if no errors occur, or nonzero
       if the running wlmpard could not be confirmed as killed.

EXAMPLES
       Check the configuration file configfile for syntax errors:

	      % wlmpard -c configfile

       Activate the specified configuration file:

	      % wlmpard -a configfile

COMPATIBILITY NOTES
   Compatibility with Instant Capacity
       Use  WLM's  virtual  partition  management  in combination with Instant
       Capacity only when using	 vPars	version	 A.03.01  or  later.  (Instant
       Capacity was formerly known as iCOD.)

       Use  WLM's  nPartition  management  with	 the Instant Capacity versions
       specified in the WLM Release Notes (/opt/wlm/share/doc/Rel_Notes).

   Compatibility with Temporary Instant Capacity
       WLM manages any present Temporary Instant Capacity (TiCAP) whenever
       wlmpard is running. Use the utilitypri keyword in your configuration
       file for wlmpard to ensure WLM maintains compliance with your usage
       rights for Temporary Instant Capacity. By default, WLM will not use
       Temporary Instant Capacity when 15 or fewer processing days of
       temporary capacity are available; in this case, you must purchase
       extra capacity. Before adding the capacity, be sure to stop wlmpard
       (using the -k option).

       You can change the 15-day default by setting the appropriate WLM
       global arbiter keyword. For more information on that keyword and on
       utilitypri, see the wlmparconf(4) manpage.

   Compatibility with PSETs
       As of WLM A.03.01, you can configure WLM to manage PSET-based  workload
       groups  and  partitions (virtual partitions or nPartitions) at the same
       time.  Certain software restrictions apply to using  PSET-based	groups
       with  HP-UX  Virtual  Partitions (vPars), Instant Capacity, and Pay per
       use.  For  more	information,  refer   to   the	 WLM   Release	 Notes
       (/opt/wlm/share/doc/Rel_Notes).

   Compatibility with HP Integrity Virtual Machines
       WLM  can	 run  on  an Integrity Virtual Machines (Integrity VM) Host or
       within an Integrity VM (guest).	You can run WLM both on the  Integrity
       VM  Host	 and  in  an Integrity VM, but each WLM runs as an independent
       instance. To run WLM on the VM Host, you must use  strictly  host-based
       configurations--WLM  configurations  designed  exclusively  for	moving
       cores across nPartitions or for activating Temporary  Instant  Capacity
       or  Pay	per  use cores.	 (WLM will not run with FSS groups or PSETs on
       systems where Integrity VM is running.) In addition,  ensure  that  the
       minimum	number	of  cores  allocated  to a WLM host is greater than or
       equal to the maximum number of virtual CPUs (vCPU  count)  assigned  to
       each VM guest. Otherwise, VM guests with a vCPU count greater than or
       equal to WLM's minimum allocation could receive insufficient resources
       and eventually crash. For example, if an Integrity VM host has 8 cores
       and three guests with 1, 2, and 4 virtual CPUs, respectively, your WLM
       host should maintain an allocation of at least 4 cores at all times.
       You can achieve this by using the appropriate WLM keyword; for more
       information, see the wlmconf(4) manpage.
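
       The sizing rule in this example reduces to taking the largest guest
       vCPU count, which can be sketched in shell (the guest counts are the
       ones from the example above):

       ```shell
       # Compute the minimum WLM host allocation for the example guests:
       # the host must keep at least max(vCPU counts) cores allocated.
       guest_vcpus="1 2 4"
       max=0
       for v in $guest_vcpus; do
           if [ "$v" -gt "$max" ]; then max=$v; fi
       done
       echo "minimum WLM host allocation: $max cores"
       ```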

       To run WLM within an Integrity VM (guest), you cannot use Instant
       Capacity, Pay per use, or vPar integration. (However, a guest will
       take  advantage of CPU resources added to the VM Host by Instant Capac‐
       ity, Temporary Instant Capacity, and Pay per  use.)   As	 noted	previ‐
       ously, WLM must continue allocating at least as many cores as the maxi‐
       mum number of virtual CPUs in any VM guest on the system.  In addition,
       specify	a  WLM	interval  greater than 60 seconds. This helps ensure a
       fair allocation of CPU resources for FSS groups.

       For more information on HP Integrity VM, refer  to  the	following  web
       site and navigate to the "Solution components" page:
       www.hp.com/go/vse

AUTHOR
       wlmpard was developed by HP.

FEEDBACK
       If  you would like to comment on the current HP-UX WLM functionality or
       make suggestions for future releases, please send email to:

       wlmfeedback@rsn.hp.com

FILES
       /opt/wlm/examples/wlmconf/par_usage_goal.wlm
	    example WLM configuration file

       /opt/wlm/examples/wlmconf/par_usage_goal.wlmpar
	    example WLM global arbiter configuration file

       /opt/wlm/examples/wlmconf/par_manual_allocation.wlm
	    example WLM configuration file

       /opt/wlm/examples/wlmconf/par_manual_allocation.wlmpar
	    example WLM global arbiter configuration file

       /etc/rc.config.d/wlm
	    system initialization directives

       /var/opt/wlm/msglog
	    default message log

       /var/opt/wlm/wlmpardstats
	    optional global arbiter statistics log

       copy of most recent configuration file

SEE ALSO
       wlmd(1M), wlmaudit(1M), wlminfo(1M), wlmcert(1M),  wlmconf(4),  wlmpar‐
       conf(4), wlm(5)

       HP-UX Workload Manager User's Guide (/opt/wlm/share/doc/WLMug.pdf)

       HP-UX Workload Manager homepage (http://www.hp.com/go/wlm)

								   wlmpard(1M)