gwlmxml man page on HP-UX

gwlmxml(4)							    gwlmxml(4)

NAME
       gwlmxml - Overview of XML file structure for Global Workload Manager

DESCRIPTION
       You  can	 create	 and  modify  definitions  for Global Workload Manager
       (gWLM) using XML files as input to the gwlm command, which is the  com‐
       mand-line  interface  for  gWLM.	 The  gwlm  command also allows you to
       export existing definitions.

       Use XML to define or modify:

	      + Policies

	      + Conditional policies

	      + Workloads

	      + Shared Resource Domains (SRDs)

XML OVERVIEW
       An XML file imported to the gWLM configuration repository  must	follow
       the form:

       <?xml version="1.0" encoding="UTF-8"?>
       <!DOCTYPE dataStore SYSTEM "file:/opt/gwlm/dtd/config.dtd">

       <dataStore>

	   <policyDefinition name="policy_name" type="policy_type"
	       ticapByPolicy="ticapPolicySetting">
	    <!-- definition XML goes here -->
	   </policyDefinition>

	   <compositePolicyDefinition name="policy_name" type="Conditional">
	    <!-- definition XML goes here -->
	   </compositePolicyDefinition>

	   <workloadDefinition name="workload_name">
	    <!-- definition XML goes here -->
	   </workloadDefinition>

	   <sharedResourceDomain name="SRD_name">
	    <compartment name="compartment_name" id="compartment_ID"
		type="hpvm_npar_vpar_pset_or_fss">
		<!-- definition XML goes here -->
	    </compartment>
	   </sharedResourceDomain>

       </dataStore>

       NOTE:  When  you create your XML file, the first line must start in the
       first column. Do not leave any spaces  before  the  "<?xml  version..."
       text.

       policyDefinition, compositePolicyDefinition, workloadDefinition, and
       sharedResourceDomain are all optional: each can appear zero or more
       times. When present, however, they must appear in the order shown
       above.

       The  syntax  and	 structure  of	the XML you use in your definitions is
       described in the sections below. Policy definitions are covered	first,
       as  they	 are self-contained. Then, composite policy definitions, which
       enable you to form conditional policies using  existing	policies,  are
       explained.   Next, workload definitions, which reference policy defini‐
       tions, are discussed. Lastly, SRD definitions, which  require  workload
       definitions, are described.

PLACING XML DEFINITIONS IN ONE FILE OR IN MULTIPLE FILES
       You  can combine definitions for all your policies, workloads, and SRDs
       in a single file or place them in  separate  files.  (Within  a	single
       file, however, a given workload cannot be referenced by multiple SRDs.)
       Use the gwlm command to import these definitions into the gWLM configu‐
       ration  repository. When importing a definition that references another
       definition, the referenced definition must already be in the repository
       or  be in the file being imported (above the definition that references
       it).

DEFAULT VALUES
       When you export definitions from the configuration repository, the
       definitions may include items you did not specify. These items are
       optional in the definitions and are exported with their default
       values.

NESTED PARTITIONS
       Using gwlm discover --nested, you can create  nested  partitions,  with
       gWLM  managing resources for the various levels of partitions. When you
       nest partitions in an SRD, the different compartments in the  SRD  will
       have  different parent IDs.  The compartments can be of different types
       as well.

DEFINING POLICIES
       A policy is a collection of settings that instruct gWLM how to manage a
       workload's resources. A single policy can be associated with, or
       applied to, multiple workloads.

       You can create conditional policies, which take effect only in  certain
       circumstances,  as explained in the section "DEFINING CONDITIONAL POLI‐
       CIES" below.

       gWLM supports the following types of policies:

	      + Fixed

		Allocates a fixed (constant) amount  of	 CPU  resources	 to  a
		workload's compartment.

	      + OwnBorrow

		Allows	you  to specify the amount of CPU resources an associ‐
		ated workload:

		- Owns (is allocated whenever needed)

		- Borrows (when its CPU demand increases)

		- Lends (when not needed) to other workloads

	      + Utilization

		Attempts to keep a workload's utilization of its  CPU  alloca‐
		tion below a target percentage by:

		 - Adding CPU resources when the workload is using too much of
		   its current CPU allocation

		 - Removing CPU resources when the workload is using too  lit‐
		   tle of its current CPU allocation

	      + Custom (user-defined)

		Allows	you  to	 provide your own metric. gWLM then manages an
		associated workload, adjusting allocation as needed, based  on
		the  value  of the metric. (Update values for the metric using
		the gwlmsend command on the HP-UX instance where the  workload
		is running.)

       The XML tags, or elements, for each policy type are explained below.

       Each  workload  in  an  SRD  must  have	a  policy.  Starting with gWLM
       A.02.00.00.07, you can use any combination of the policy	 types	within
       an SRD.

   Working with earlier gWLM versions
       As  new features are introduced, backward compatibility can be affected
       as described in the following sections.

				 gWLM A.04.00.07
       Changes introduced with gWLM A.04.00.07 enable you to:

	      + Create conditional policies that depend on  time-based	condi‐
		tions

	      + Create	conditional  policies that depend on file-based condi‐
		tions

	      + Create custom policies that scale metrics you provide to  gen‐
		erate requests for CPU resources for your workloads

	      + Create	SRDs from members of a Global Instant Capacity (GiCAP)
		group.	 (For  information  on	the  supported	 versions   of
		iCAP/GiCAP,  see  the VSE Management Software Installation and
		Update Guide.)

       You cannot deploy SRDs that use these features on  managed  nodes  with
       earlier agents.

			       gWLM A.03.00.00.05
       Changes introduced with gWLM A.03.00.00.05 enable you to:

	      + Create conditional policies based on Serviceguard conditions

	      + Enable use of TiCAP resources at the policy level

	      + Define	workloads  based  on  process  maps  (executables that
		return a set of process IDs)

       You cannot deploy SRDs that use these features on  managed  nodes  with
       earlier agents.

			       gWLM A.02.50.00.04
       Changes introduced with gWLM A.02.50.00.04 enable you to:

	      + Nest partitions

       You cannot deploy SRDs that use this feature on managed nodes with ear‐
       lier agents.

			       gWLM A.02.00.00.07
       Changes made in gWLM A.02.00.00.07 enable you to combine all the policy
       types in a single SRD.

   Choosing a policy type
       How do you decide which policy type to use? The list below answers this
       question for several common use cases. The section following  the  list
       helps  you  choose  between  using an OwnBorrow policy or a utilization
       policy.

       Use a fixed policy
	      If you want gWLM to allocate a constant amount of CPU  resources
	      to a workload.

       Use a custom policy
	      If  you want gWLM to manage a workload according to a metric you
	      supply.

       Use an OwnBorrow policy
	      If IT acts as a service provider to business units.

	      This policy type allows you to set an owned amount of resources,
	      while also giving you control over how workloads borrow and lend
	      resources.

	      gWLM provides a  "topborrowers"  report  and  a  "resourceaudit"
	      report to help you manage your data center using this model. For
	      more information, see the gwlmreport(1M) manpage.

       Use an OwnBorrow policy
	      If you have static vpars, but you want to move to a model	 where
	      cores, formerly known as CPUs, migrate among vpars.

	      For  each vpar, set its number of owned cores to its static num‐
	      ber of cores. The vpar gets those owned cores whenever needed.

       Use an OwnBorrow policy
	      If you have npars, but you want to move to a model  where	 cores
	      migrate among npars.

	      Install  the  HP	product	 Instant  Capacity on each npar. (This
	      software allows gWLM to simulate core movement among npars  with
	      spare capacity.)

	      For  each	 npar,	set its number of owned cores to the number of
	      cores you want the npar to have whenever needed.

       Use a utilization policy
	      If you want to tap into a pool of resources, taking or giving
	      CPU resources as needed--with no guaranteed access to resources
	      beyond a minimum request.

       Use a conditional policy
	      If you have a policy that should be in effect only at a  certain
	      time,  when  a particular file exists, or for a certain Service‐
	      guard condition.

	      Select an existing policy and a default policy and then  specify
	      a time period, a file, or a Serviceguard condition.

	  Choosing between an OwnBorrow policy and a utilization policy
       OwnBorrow  and  utilization policies both allocate resources to a work‐
       load based on the workload's use of its current allocation. Both policy
       types  also  specify minimum and maximum amounts of resources the work‐
       load should get. A workload with either type of policy can  lend	 other
       workloads  its  unused resources--down to its minimum. (If the workload
       does not consume its entire minimum allocation, those unused  resources
       are  not	 available  to	other workloads.) OwnBorrow policies, however,
       provide greater control in lending resources because they also have  an
       owned  amount  of resources. A workload always gets its owned resources
       back whenever needed. So, with an OwnBorrow policy, you can set a lower
       minimum	allocation  (increasing	 the amount of resources available for
       sharing among workloads), knowing the associated	 workloads  get	 their
       owned  resources	 whenever  needed.  Thus, an OwnBorrow policy provides
       greater flexibility in attempting to  allocate  a  workload  a  certain
       amount  of  resources when needed while also lending those resources to
       other workloads when not needed.

   Defining a fixed policy
       A fixed policy allocates a fixed (constant) amount of CPU resources  to
       a workload's compartment.

       Fixed policies do not have a priority you can set. gWLM satisfies
       these policies before attempting to satisfy any other type of policy.

       NOTE: Any XML file you use with gwlm must  contain  certain  additional
       tags  that  are not shown below. See the EXAMPLES section for the addi‐
       tional tags.

       Define a fixed policy by specifying the following XML tags in the order
       shown:

	   <policyDefinition name="policy_name" type="Fixed">
	    <cpu>
		<own> Cores_owned </own>
	    </cpu>
	   </policyDefinition>

       where:

	      policy_name
		   Is  the name for the policy. You can use A-Z, a-z, 0-9, the
		   dash character ( - ), the period ( . ), the colon  (	 :  ),
		   the space ( ), and the underscore ( _ ) in the name.

	      Cores_owned
		   Is  a  decimal  value  greater  than 0.0. A workload with a
		   fixed policy is allocated its owned	cores  at  all	times.
		   (Cores were formerly known as CPUs.)
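       For illustration, a complete importable file containing a single
       fixed policy might look as follows (the policy name and core count
       are hypothetical; see the EXAMPLES section for the authoritative
       form):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dataStore SYSTEM "file:/opt/gwlm/dtd/config.dtd">
<dataStore>
   <!-- Allocates a constant 2 cores to any associated workload -->
   <policyDefinition name="fixed-2-cores" type="Fixed">
    <cpu>
        <own> 2.0 </own>
    </cpu>
   </policyDefinition>
</dataStore>
```

       Note that the "<?xml version..." line starts in the first column, as
       required.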

   Defining an OwnBorrow policy
       An  OwnBorrow  policy allows you to specify the amount of CPU resources
       an associated workload:

	 + Owns (is allocated whenever needed)

	 + Borrows (when its CPU resource demand increases)

	 + Lends (when not needed) to other workloads

       NOTE: Any XML file you use with gwlm must  contain  certain  additional
       tags  that  are not shown below. See the EXAMPLES section for the addi‐
       tional tags.

       Define an OwnBorrow policy by specifying the following XML tags in  the
       order shown (brackets, '[' and ']', designate optional items):

	   <policyDefinition name="policy_name" type="OwnBorrow"
	       ticapByPolicy="ticapPolicySetting">
	    <cpu>
		<minimum> minimum_request </minimum>
		<maximum> maximum_request </maximum>
		<own> Cores_owned </own>
       [	<target> target </target>		  ]
       [	<weight> weight </weight>		  ]
       [	<convergenceRate> rate </convergenceRate> ]
	    </cpu>
       [    <priority> priority </priority>		  ]
	   </policyDefinition>

       where:

	      policy_name
		   Is  the name for the policy. You can use A-Z, a-z, 0-9, the
		   dash character ( - ), the period ( . ), the colon  (	 :  ),
		   the space ( ), and the underscore ( _ ) in the name.

	      ticapPolicySetting
		   Is  either true or false, indicating whether gWLM should be
		   allowed to  activate	 Temporary  Instant  Capacity  (TiCAP)
		   resources to satisfy the policy, if needed.

		   NOTE:  You  can  enable use of TiCAP on an SRD level (using
		   the ticapMode attribute in an SRD definition) or on a  pol‐
		   icy	level (using the ticapByPolicy attribute). Assigning a
		   value to ticapByPolicy is effective only when the policy is
		   used in an SRD that has its ticapMode attribute set to off.
		   When ticapMode is set to none or all, it  takes  precedence
		   over	 ticapByPolicy.	  Setting ticapMode to all enables all
		   policies in the SRD to take advantage of TiCAP.

		   Setting this attribute to true only enables	an  associated
		   workload  to	 activate  TiCAP resources if resources cannot
		   otherwise be borrowed to meet a request. It does  not  give
		   the	workload  a  higher priority in an attempt to minimize
		   use of TiCAP resources. (You can manually increase the pri‐
		   ority  so  the  workload is favored over other workloads in
		   borrowing resources,	 reducing  the	chance	that  it  will
		   require the activation of TiCAP resources.)

		   For additional information and warnings on using TiCAP, see
		   the section "NOTES ON  USING	 TEMPORARY  INSTANT  CAPACITY"
		   below.

	      minimum_request
		   Is  the minimum amount of CPU resources (in cores) the pol‐
		   icy will request for	 its  associated  workloads.   When  a
		   workload  associated with this policy does not need all the
		   CPU resources it owns, it can lend out to  other  workloads
		   the following amount of CPU resources:

		   Cores_owned minus minimum_request

	      maximum_request
		   Is  the maximum amount of CPU resources (in cores) the pol‐
		   icy will request for	 its  associated  workloads.   When  a
		   workload  associated	 with  this  policy  is busy and needs
		   additional CPU resources, it can borrow up to the following
		   amount of CPU resources from workloads that are not busy:

		   maximum_request minus Cores_owned

	      Cores_owned
		   Is	a  decimal  value  greater  than  or  equal  to	 mini‐
		   mum_request.	 A workload is allocated its owned cores when‐
		   ever needed. (Cores were formerly known as CPUs.)

		   NOTE: To ensure CPU allocations behave as expected for Own‐
		   Borrow policies, the sum of the CPU resources owned	cannot
		   exceed  the	SRD's size.  (However, if the sum is less than
		   the SRD's size, the excess is distributed to	 all  compart‐
		   ments  in  proportion to the amounts owned. Thus, workloads
		   will routinely get more than they are due.)

	      target
		   (Optional) Is a value that drives a policy, influencing its
		   resource requests to gWLM.

		   An OwnBorrow policy is based on a utilization policy with a
		   default target of 75%. Change the default by	 specifying  a
		   new	target	value. gWLM will then adjust the CPU resources
		   the compartment gets so that the "CPU consumption"  /  "CPU
		   allocation"	percentage  is less than the percentage repre‐
		   sented by target.

	      weight
		   (Optional) Is a value greater than or equal to 0, assigned
		   to a policy, that determines how resources are distributed
		   in the following two scenarios:

		   + gWLM addresses priority levels from highest to lowest,
		     allocating resources to all requests at a given priority
		     level before considering lower-priority requests. If all
		     requests at some priority level cannot be satisfied, the
		     remaining resources are distributed so that each
		     workload's total allocation is as close as possible to
		     the proportion of its weight to the sum of all the
		     weights.

		   + If gWLM has satisfied all resource requests at all
		     priorities and resources remain to be allocated, it
		     distributes the remainder by weight, again so that each
		     workload's total allocation is as close as possible to
		     the proportion of its weight to the sum of all the
		     weights.

		   If you do not specify a weight, the Cores_owned value is
		   used as the weight.

	      rate (Optional) Indicates how sensitive a workload is to changes
		   in CPU allocation. The default rate is 1.0.	Larger	values
		   produce  larger  changes  in the allocation, resulting in a
		   faster  convergence	on  the	 policy's  target.  Similarly,
		   smaller values slow down convergence on the target.

		   Being  able	to specify this rate is important because each
		   workload behaves differently when allocated CPU  resources.
		   Small  changes  in  the  allocation	may have a significant
		   effect on one workload's performance while those same small
		   changes produce no effect at all in another workload's per‐
		   formance.

	      priority
		   (Optional) Indicates	 a  policy's  importance  relative  to
		   other  policies.   Priority	determines  the order in which
		   resource requests above the policy minimum amounts and  the
		   owned  amounts  are	satisfied.  The highest priority is 1.
		   Lower priorities are 2, 3, and so on--up to	and  including
		   1000.
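       As a sketch, an OwnBorrow policy that owns 2 cores, lends down to a
       minimum of 1 core, and borrows up to a maximum of 4 cores could be
       written as follows (the name and values are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dataStore SYSTEM "file:/opt/gwlm/dtd/config.dtd">
<dataStore>
   <policyDefinition name="ob-2-of-4" type="OwnBorrow"
       ticapByPolicy="false">
    <cpu>
        <minimum> 1.0 </minimum>
        <maximum> 4.0 </maximum>
        <own> 2.0 </own>
    </cpu>
    <priority> 1 </priority>
   </policyDefinition>
</dataStore>
```

       With these values, an associated workload can lend out up to 1.0 core
       (Cores_owned minus minimum_request) when idle and borrow up to 2.0
       cores (maximum_request minus Cores_owned) when busy.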

   Defining a utilization policy
       With  a utilization policy, gWLM attempts to keep a workload's utiliza‐
       tion of its CPU allocation below a target percentage by:

	  + Adding CPU resources when the workload is using too	 much  of  its
	    current CPU allocation

	  + Removing  CPU  resources  when the workload is using too little of
	    its current CPU allocation

       NOTE: Any XML file you use with gwlm must  contain  certain  additional
       tags  that  are not shown below. See the EXAMPLES section for the addi‐
       tional tags.

       Define a utilization policy by specifying the following XML tags in the
       order shown (brackets, '[' and ']', designate optional items):

	   <policyDefinition name="policy_name" type="Utilization"
	       ticapByPolicy="ticapPolicySetting">
	    <cpu>
       [	<minimum> minimum_request </minimum>	  ]
       [	<maximum> maximum_request </maximum>	  ]
       [	<target> target </target>		  ]
       [	<weight> weight </weight>		  ]
       [	<convergenceRate> rate </convergenceRate> ]
	    </cpu>
       [    <priority> priority </priority>		  ]
	   </policyDefinition>

       where:

	      policy_name
		   Is  the name for the policy. You can use A-Z, a-z, 0-9, the
		   dash character ( - ), the period ( . ), the colon  (	 :  ),
		   the space ( ), and the underscore ( _ ) in the name.

	      ticapPolicySetting
		   Is  either true or false, indicating whether gWLM should be
		   allowed to  activate	 Temporary  Instant  Capacity  (TiCAP)
		   resources to satisfy the policy, if needed.

		   NOTE:  You  can  enable use of TiCAP on an SRD level (using
		   the ticapMode attribute in an SRD definition) or on a  pol‐
		   icy	level (using the ticapByPolicy attribute). Assigning a
		   value to ticapByPolicy is effective only when the policy is
		   used in an SRD that has its ticapMode attribute set to off.
		   When ticapMode is set to none or all, it  takes  precedence
		   over	 ticapByPolicy.	  Setting ticapMode to all enables all
		   policies in the SRD to take advantage of TiCAP.

		   Setting this attribute to true only enables	an  associated
		   workload  to	 activate  TiCAP resources if resources cannot
		   otherwise be borrowed to meet a request. It does  not  give
		   the	workload  a  higher priority in an attempt to minimize
		   use of TiCAP resources. (You can manually increase the pri‐
		   ority  so  the  workload is favored over other workloads in
		   borrowing resources,	 reducing  the	chance	that  it  will
		   require the activation of TiCAP resources.)

		   For additional information and warnings on using TiCAP, see
		   the section "NOTES ON  USING	 TEMPORARY  INSTANT  CAPACITY"
		   below.

	      minimum_request
		   (Optional)  Is  the	minimum	 amount	 of  CPU resources, in
		   cores, the policy will request  for	its  associated	 work‐
		   loads.

	      maximum_request
		   (Optional)  Is  the	maximum	 amount	 of  CPU resources, in
		   cores, the policy will request  for	its  associated	 work‐
		   loads.

	      target
		   (Optional) Is a value that drives a policy, influencing its
		   resource requests to gWLM.

		   With a utilization policy, gWLM attempts to adjust the  CPU
		   resources  a compartment gets so that the "CPU consumption"
		   / "CPU allocation" percentage is less than  the  percentage
		   represented	by  target.   By default, gWLM uses 75% as the
		   target. Change the  default	by  specifying	a  new	target
		   value.

	      weight
		   (Optional) Is a value greater than or equal to 0, assigned
		   to a policy, that determines how resources are distributed
		   in the following two scenarios:

		   + gWLM addresses priority levels from highest to lowest,
		     allocating resources to all requests at a given priority
		     level before considering lower-priority requests. If all
		     requests at some priority level cannot be satisfied, the
		     remaining resources are distributed so that each
		     workload's total allocation is as close as possible to
		     the proportion of its weight to the sum of all the
		     weights.

		   + If gWLM has satisfied all resource requests at all
		     priorities and resources remain to be allocated, it
		     distributes the remainder by weight, again so that each
		     workload's total allocation is as close as possible to
		     the proportion of its weight to the sum of all the
		     weights.

		   The default weight is 1.

	      rate (Optional) Indicates how sensitive a workload is to changes
		   in CPU allocation. The default rate is 1.0.	Larger	values
		   produce  larger  changes  in the allocation, resulting in a
		   faster  convergence	on  the	 policy's  target.  Similarly,
		   smaller values slow down convergence on the target.

		   Being  able	to specify this rate is important because each
		   workload behaves  differently  when	allocated  CPU.	 Small
		   changes  in the allocation may have a significant effect on
		   one workload's performance while those same	small  changes
		   produce no effect at all in another workload's performance.

	      priority
		   (Optional)  Indicates  a  policy's  importance  relative to
		   other policies.  Priority determines	 the  order  in	 which
		   resource  requests above the policy minimum amounts and the
		   owned amounts are satisfied. The  highest  priority	is  1.
		   Lower  priorities  are 2, 3, and so on--up to and including
		   1000.
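       The weight-proportional distribution described above can be sketched
       in a few lines of Python (an illustrative model only; it ignores the
       minimums, owned amounts, and priorities that gWLM honors first):

```python
def distribute_by_weight(total_cores, weights):
    """Split total_cores so each workload's share is proportional
    to its weight relative to the sum of all the weights."""
    weight_sum = sum(weights.values())
    return {name: total_cores * w / weight_sum
            for name, w in weights.items()}

# Two workloads competing for 8 spare cores with weights 3 and 1:
print(distribute_by_weight(8.0, {"db": 3, "app": 1}))
# {'db': 6.0, 'app': 2.0}
```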

   Defining a custom policy
       A custom policy enables you to provide a metric	and  a	target	value.
       With these values:

	      * gWLM  can adjust CPU requests for an associated workload based
		on how the value of the metric (which could be	response  time
		or number of transactions per second, for example) compares to
		the target value

	      * You can directly specify a CPU request (based  on  the	metric
		and the target value) for an associated workload

       Update  values  for  the metric using the gwlmsend command on the HP-UX
       instance where the workload is running.

       NOTE: Any XML file you use with gwlm must  contain  certain  additional
       tags  that  are not shown below. See the EXAMPLES section for the addi‐
       tional tags.

       Define a custom policy by specifying the	 following  XML	 tags  in  the
       order shown (brackets, '[' and ']', designate optional items):

	   <policyDefinition name="policy_name" type="Custom"
	       ticapByPolicy="ticapPolicySetting">
	    <cpu>
       [	<minimum> minimum_request </minimum>	  ]
       [	<maximum> maximum_request </maximum>	  ]
       [	<own> Cores_owned </own>		  ]
		<target> target </target>
       [	<weight> weight </weight>		  ]
       [	<convergenceRate> rate </convergenceRate> ]
		<metric name="name" response="response"/>
	    </cpu>
       [    <priority> priority </priority>		  ]
	   </policyDefinition>

       where:

	      policy_name
		   Is  the name for the policy. You can use A-Z, a-z, 0-9, the
		   dash character ( - ), the period ( . ), the colon  (	 :  ),
		   the space ( ), and the underscore ( _ ) in the name.

	      ticapPolicySetting
		   Is  either true or false, indicating whether gWLM should be
		   allowed to  activate	 Temporary  Instant  Capacity  (TiCAP)
		   resources to satisfy the policy, if needed.

		   NOTE:  You  can  enable use of TiCAP on an SRD level (using
		   the ticapMode attribute in an SRD definition) or on a  pol‐
		   icy level (using the ticapByPolicy attribute).  Assigning a
		   value to ticapByPolicy is effective only when the policy is
		   used in an SRD that has its ticapMode attribute set to off.
		   When ticapMode is set to none or all, it  takes  precedence
		   over	 ticapByPolicy.	  Setting ticapMode to all enables all
		   policies in the SRD to take advantage of TiCAP.

		   Setting this attribute to true only enables	an  associated
		   workload  to	 activate  TiCAP resources if resources cannot
		   otherwise be borrowed to meet a request. It does  not  give
		   the	workload  a  higher priority in an attempt to minimize
		   use of TiCAP resources. (You can manually increase the pri‐
		   ority  so  the  workload is favored over other workloads in
		   borrowing resources,	 reducing  the	chance	that  it  will
		   require the activation of TiCAP resources.)

		   For additional information and warnings on using TiCAP, see
		   the section "NOTES ON  USING	 TEMPORARY  INSTANT  CAPACITY"
		   below.

	      minimum_request
		   (Optional)  Is  the	minimum	 amount	 of  CPU resources, in
		   cores, the policy will request  for	its  associated	 work‐
		   loads.

	      maximum_request
		   (Optional)  Is  the	maximum	 amount	 of  CPU resources, in
		   cores, the policy will request  for	its  associated	 work‐
		   loads.

	      Cores_owned
		   (Optional) Is a decimal value greater than or equal to min‐
		   imum_request.  A workload  is  allocated  its  owned	 cores
		   whenever  needed--regardless	 of priority. (Cores were for‐
		   merly known as CPUs.)

	      target
		   In a custom	policy	with  a	 Direct	 response  or  Inverse
		   response:  target is a value that drives the policy, influ‐
		   encing its resource requests to  gWLM.   gWLM  attempts  to
		   adjust  the CPU resources a workload gets so that the named
		   metric can meet the specified target.

		   In a custom policy with a Scale response: target is a scal‐
		   ing	factor.	 gWLM  scales the value of the named metric to
		   form a request for CPU resources. Scaling  enables  you  to
		   directly  specify a CPU resource request (based on the met‐
		   ric and the target value) for an associated workload.

		   The formula for the request is:

		   request = metric value * (1/target)

		   A request of 1.0 represents one core, formerly known	 as  a
		   CPU.

		   One	use  for  scaling  is  to give each user logged into a
		   database system 0.5% of a core. You	would  use  the	 total
		   number  of  database	 users	as the metric value. You would
		   then set target to 200. (1/200 = 0.005 = 0.5%)

		   For users of HP-UX Workload Manager (WLM) who are  convert‐
		   ing WLM configurations to gWLM configurations, you can sim‐
		   ulate WLM cpushares behavior by setting target to 100.

	      weight
		   (Optional) Is a value greater than or equal to 0, assigned
		   to a policy, that determines how resources are distributed
		   in the following two scenarios:

		   + gWLM addresses priority levels from highest to lowest,
		     allocating resources to all requests at a given priority
		     level before considering lower-priority requests. If all
		     requests at some priority level cannot be satisfied, the
		     remaining resources are distributed so that each
		     workload's total allocation is as close as possible to
		     the proportion of its weight to the sum of all the
		     weights.

		   + If gWLM has satisfied all resource requests at all
		     priorities and resources remain to be allocated, it
		     distributes the remainder by weight, again so that each
		     workload's total allocation is as close as possible to
		     the proportion of its weight to the sum of all the
		     weights.

		   The default weight is 1.

	      rate (Optional) Indicates how sensitive a workload is to changes
		   in  CPU  allocation. The default rate is 1.0. Larger values
		   produce larger changes in the allocation,  resulting	 in  a
		   faster  convergence	on  the	 policy's  target.  Similarly,
		   smaller values slow down convergence on the target.

		   Being able to specify this rate is important	 because  each
		   workload  behaves  differently  when	 allocated  CPU. Small
		   changes in the allocation may have a significant effect  on
		   one	workload's  performance while those same small changes
		   produce no effect at all in another workload's performance.

	      name Is the name for the metric. You can use A-Z, a-z, 0-9,  the
		   dash	 character  (  - ), the period ( . ), the colon ( : ),
		   the space ( ), and the underscore ( _ ) in the name.

		   Update values for the metric using  the  gwlmsend  command,
		   using name as an argument.

	      response
		   Valid values for response are Direct, Inverse, and Scale:

		   Direct A  metric  value  less  than target results in a CPU
			  request larger than the current CPU resource alloca‐
			  tion.	  If  the request produces a larger allocation
			  for the workload, the metric value increases.	 Exam‐
			  ple: number of transactions per second.

		   Inverse
			  A  metric value greater than target results in a CPU
			  request larger than the current CPU resource alloca‐
			  tion.	 If  the  request produces a larger allocation
			  for the workload, the metric value decreases.	 Exam‐
			  ple: response time.

		   Scale  The metric value should be scaled by gWLM to compute
			  a request for CPU resources. Scaling	is  useful  in
			  giving  a workload a certain amount of CPU resources
			  per the metric. For an example as well  as  informa‐
			  tion	on how the request is formed, see the descrip‐
			  tion of target above.

	      priority
		   (Optional) Indicates	 a  policy's  importance  relative  to
		   other  policies.   Priority	determines  the order in which
		   resource requests above the policy minimum amounts and  the
		   owned  amounts  are	satisfied.  The highest priority is 1.
		   Lower priorities are 2, 3, and so on--up to	and  including
		   1000.

DEFINING CONDITIONAL POLICIES
       A  conditional  policy  is a policy that gWLM should use when a certain
       condition occurs. You create conditional policies by specifying one  or
       more  existing  policies, along with conditions, in a composite policy.
       You must also specify a default policy to use in case none of the  con‐
       ditions are met.

       NOTE:

       The types of conditional policies that are  available  depend  on  the
       gWLM agent version. The following table indicates the supported	types
       by agent version.

       gWLM Agent Version      |       Conditional Policies Supported
       --------------------------------------------------------------
       A.03.00.00.05 or later  |       Serviceguard-based
       A.04.00.07 or later     |       Time-based, file-based

       Use  gwlmstatus	on  a  managed	node  to determine the version of that
       node's agent.

       NOTE: Any XML file you use with gwlm must  contain  certain  additional
       tags  that  are not shown below. See the EXAMPLES section for the addi‐
       tional tags.

       Define a composite policy by specifying the following XML tags  in  the
       order shown:

	   <compositePolicyDefinition name="policy_name" type="Conditional">
	    <conditionList>

		<conditionItem order="order_value">
		 condition_clause
		 <policyReference name="some_existing_policy"/>
		</conditionItem>

       [	<!-- Additional conditionItem elements --> ]

		<conditionItem order="some_other_order_value">
		 <default/>
		 <policyReference name="some_other_existing_policy"/>
		</conditionItem>

	    </conditionList>
	   </compositePolicyDefinition>

       where:

	      policy_name
		   Is the name for the composite policy. You can use A-Z, a-z,
		   0-9, the dash character ( - ), the period ( . ), the	 colon
		   ( : ), the space ( ), and the underscore ( _ ) in the name.

	      order_value
		   Is  the  order  in  which  the given conditionItem is to be
		   evaluated relative to the other conditionItem  elements  in
		   the	given compositePolicyDefinition element. This order is
		   important because the first conditionItem with a  condition
		   that	 evaluates to true determines the policy that is asso‐
		   ciated with the workload.

	      condition_clause
		   A condition clause is based on a time  period,  on  a  file
		   existing,  or on an HP Serviceguard condition. (When an SRD
		   uses a Serviceguard-based condition, the deployment of that
		   SRD	causes	the  Serviceguard version to be validated. The
		   version must be A.11.16 or later.)

		   When a condition clause is true, the policy	referenced  in
		   its	conditionItem element might be--depending on the order
		   of evaluation--the policy used for the associated workload.

		   NOTE: In most cases, gWLM  validates	 an  SRD  when	it  is
		   deployed  to ensure all the policies used in the SRD can be
		   met. When an SRD includes composite policies, however, only
		   the	composite  policy's  default policy is included in the
		   validation with the SRD's other policies. (The default pol‐
		   icy	is  the	 policy	 in the default condition clause, dis‐
		   cussed below.) As a result, it is possible for the  SRD  to
		   have	 policies  that	 cannot be met. (For example, based on
		   conditions, the SRD could use a fixed policy that  requests
		   more	 resources  than  are  available  on the system, or it
		   might use an OwnBorrow policy that has an Owned Size	 that,
		   when	 combined  with	 other policies, exceeds the available
		   resources.) When policies  cannot  be  met,	resources  are
		   allocated based on the priority and weight values specified
		   in the policies.

		   The possible clauses are:

		   <default/>
			Always evaluates true. This clause is required.

			Typically, the order_value for the  default  condition
			clause should be high enough to ensure it is evaluated
			after all other condition clauses.  Any	 conditionItem
			element with a higher order value is not evaluated.

		   <sgReducedClusterCapacity/>
			True  if  gWLM	detects	 that the Serviceguard cluster
			associated with the host of the	 workload  is  missing
			any cluster members.

		   <sgNonPrimaryPackagePresent/>
			True  if  gWLM	detects	 that any Serviceguard package
			active on the host of the workload does not  have  the
			host configured as its primary node.

		   <sgPkgActive package="package_name"/>
			True if gWLM detects that the named Serviceguard pack‐
			age is active on the host of the workload.

		   <timePeriod>
			True for the period starting at the  specified	begin‐
			Time (discussed below) through--but not including--the
			specified endTime.

			The syntax of this clause is:

			<timePeriod>
			    <beginTime>
			[     <date> date_format </date>	  ]
			[     <weekday> weekday_format </weekday> ]
			      <time> time_format </time>
			    </beginTime>
			    <endTime>
			[     <date> date_format </date>	  ]
			[     <weekday> weekday_format </weekday> ]
			      <time> time_format </time>
			    </endTime>
			</timePeriod>

			where

			date_format
				  Is the date in either of the following  for‐
				  mats.

				  --mm-dd
				       (month/day  format)  mm is a digit from
				       01 to 12 representing  the  month.  (01
				       indicates  January; 12 indicates Decem‐
				       ber.)   dd  goes	 from  01  up  to  31,
				       depending upon month (mm).

				  ---dd
				       (day format) dd goes from 01 to 31.

				  A beginTime element can optionally contain a
				  date element. (If you specify	 a  date  ele‐
				  ment, you cannot specify a weekday element.)

				  You  can  specify  a	beginTime date element
				  without specifying an endTime date  element.
				  However, an endTime element can only contain
				  a date element if the beginTime element  has
				  one.	Also, an endTime date element can only
				  contain a month if the beginTime  date  ele‐
				  ment	contains a month. (HP recommends using
				  the same format, either  month/day  or  day,
				  for  both  the  beginTime  and  endTime ele‐
				  ments.)

				  If you do specify a beginTime	 date  element
				  without  specifying an endTime date element,
				  the time period ends when the	 system	 clock
				  next	crosses the time specified in the end‐
				  Time element.

			weekday_format
				  Is a digit from 1 to 7 representing the  day
				  of  the  week.  (1 indicates Monday; 7 indi‐
				  cates Sunday.)  To  specify  multiple	 days,
				  repeat the "<weekday> weekday_format </week‐
				  day>" line, substituting new days for	 week‐
				  day_format.

				  A  beginTime	element can optionally contain
				  multiple weekday elements.  (If you  specify
				  a weekday element, you cannot specify a date
				  element.)

				  You can specify a beginTime weekday  element
				  without  specifying  an endTime weekday ele‐
				  ment. An endTime  element  can  contain  one
				  weekday element if the beginTime element has
				  exactly one.

				  If you do specify a beginTime	 weekday  ele‐
				  ment	without	 specifying an endTime weekday
				  element, the time period ends when the  sys‐
				  tem clock next crosses the time specified in
				  the endTime element.

			time_format
				  Is the time in hours (hh) and minutes	 (mm):
				  hh:mm:00.

				  NOTE:	 The  time  you	 specify  is  compared
				  against the system  clocks  on  the  managed
				  nodes. Be aware of time-zone differences for
				  your managed nodes.

				  You must specify a time element  within  the
				  beginTime  and  endTime elements. Valid hour
				  values are 00 to 23. Valid minute values are
				  00  to  59. The seconds value must always be
				  00.

				  If you omit the date and  weekday  elements,
				  the time period applies to every day.
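
			For instance, the following timePeriod	elements  are
			sketches  built	 from the formats above (the enclosing
			conditionItem tags are omitted):

			```xml
			<!-- Monday through Friday, 09:00 up to but not
			     including 17:00 -->
			<timePeriod>
			    <beginTime>
				<weekday>1</weekday>
				<weekday>2</weekday>
				<weekday>3</weekday>
				<weekday>4</weekday>
				<weekday>5</weekday>
				<time>09:00:00</time>
			    </beginTime>
			    <endTime>
				<time>17:00:00</time>
			    </endTime>
			</timePeriod>

			<!-- Every December 25, 00:00 up to but not
			     including 23:59 -->
			<timePeriod>
			    <beginTime>
				<date>--12-25</date>
				<time>00:00:00</time>
			    </beginTime>
			    <endTime>
				<date>--12-25</date>
				<time>23:59:00</time>
			    </endTime>
			</timePeriod>
			```

			In the first sketch, the endTime element  contains  no
			weekday	 element,  so  each  period  ends when the sys-
			tem clock next crosses 17:00.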

		   <file filename="filename"/>
			True  if  filename  exists in the /var/opt/gwlm/condi‐
			tions/ directory on the host of the workload. You must
			be root to create files in this directory.

			This  type  of	condition  is  useful when you want to
			ensure a certain policy is in effect at	 a  particular
			time.  For  example, you could use a file-based condi‐
			tion when an npar managed by gWLM boots and an	Oracle
			database  workload  is starting in the npar. You could
			then write a script to touch the  file	named  in  the
			condition.  This  script would run as part of the boot
			process and then remove the file  ten  minutes	later,
			leaving	 gWLM  to  manage the workload based on normal
			arbitration of resource requests.

			You could also use a file-based condition, with multi‐
			ple  conditions	 in a single policy, to switch between
			effective policies by creating and deleting the	 files
			as desired. For an example of this usage, see the sec‐
			tion "Composite (conditional) policy example based  on
			a file" below.

	      some_existing_policy
		   Is  the  name  of an existing user-defined policy, a policy
		   that will exist by the time this XML is imported, or one of
		   the	policies provided with gWLM. It cannot be another con‐
		   ditional policy.

	      some_other_order_value
		   Is an order value different	from  any  other  order	 value
		   specified in the given compositePolicyDefinition element.

	      some_other_existing_policy
		   Is  the  name  of an existing user-defined policy, a policy
		   that will exist by the time this XML is imported, or one of
		   the	policies provided with gWLM. This policy can be unique
		   within the compositePolicyDefinition	 element  or  match  a
		   previously  specified  policy.  It cannot be another condi‐
		   tional policy.
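
       Putting the pieces together, a file-based composite policy  might  be
       sketched	 as  follows  (the  policy  names  and	the  filename  are
       hypothetical; the dataStore wrapper tags shown in  XML  OVERVIEW	 are
       omitted):

       ```xml
       <compositePolicyDefinition name="db.policy" type="Conditional">
	   <conditionList>
	       <conditionItem order="1">
		   <file filename="dbstartup"/>
		   <policyReference name="Startup_Policy"/>
	       </conditionItem>
	       <conditionItem order="2">
		   <default/>
		   <policyReference name="Normal_Policy"/>
	       </conditionItem>
	   </conditionList>
       </compositePolicyDefinition>
       ```

       While /var/opt/gwlm/conditions/dbstartup exists	on  the	 workload's
       host,  Startup_Policy  is in effect; otherwise the default condition
       applies Normal_Policy.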

       For examples of conditional policies, see the EXAMPLES section below.

DEFINING WORKLOADS
       A workload is a collection of processes executing within a single  com‐
       partment.  gWLM	manages	 a  workload  by adjusting the system resource
       allocations for its compartment.

       For information on what types  of  workloads  to	 combine  for  optimal
       resource	 utilization,  see the online help topic "Getting the Most Out
       of gWLM," available in gWLM's graphical interface in HP Systems Insight
       Manager.

       NOTE:  You can place processes in a workload that is based on a pset or
       an fss group using one or more of the following methods:

	      + User records (created with the "<user>" tag)

	      + Application records (created with the "<application>" tag)

	      + Process maps (executables that generate a list of PIDs)

	      + gwlmplace command

       For precedence rules and additional information, see the section	 "TIPS
       FOR MANAGING PSETS AND FSS GROUPS" below.

       NOTE:  Any  XML	file you use with gwlm must contain certain additional
       tags that are not shown below. See the EXAMPLES section for  the	 addi‐
       tional tags.

       Define  a  workload  by	specifying the following XML tags in the order
       shown (brackets, '[' and ']', designate optional items):

	   <workloadDefinition name="workload_name">
       [    <user> user_name </user>					]
       [    <application>						]
       [       <path>application_name</path>				]
       [       <alternateName>altName_or_script_name</alternateName>	]
       [    </application>						]
       [    <script> script_name </script>				]
       [    <policyReference name="policy_name"/>			]
	   </workloadDefinition>

       where:

	      workload_name
		   Is the name for the workload. You can use  A-Z,  a-z,  0-9,
		   the	dash  character ( - ), the period ( . ), the colon ( :
		   ), the space ( ), and the underscore ( _ ) in the name.

		   You will associate the workload  with  a  compartment  type
		   (hpvm,  npar,  vpar, pset, or fss group) when you define an
		   SRD.

	      user_name
		   Is the name of a user whose processes should run  in	 work‐
		   load_name.	 To   specify	multiple   users,  repeat  the
		   "<user>user_name</user>" line, substituting new  users  for
		   user_name.

		   The	<user> tag is useful only for workloads based on psets
		   or fss groups.  user_name must be a valid user on the HP-UX
		   instance where the workload runs.

	      application_name
		   Is  the  full path to an application whose processes should
		   run in workload_name.  gWLM does not support whitespace  in
		   application_name.

		   You	can  use  wildcard patterns in the file name part (but
		   not in the directory part of the path). The following wild‐
		   card patterns are allowed:

		   [xyz]  Match	 any of the characters xyz that are within the
			  brackets.

		   ?	  Match any single character.

		   *	  Match zero or more occurrences of any character.

		   For a shell script, use the full path to  the  shell	 being
		   used	 in the script as the application_name.	 Then, be sure
		   to specify an entry in the <alternateName>  tag,  discussed
		   next.

		   To specify multiple applications, repeat the "<application>
		   ... </application>" section, substituting new applications.

		   The <application> section  is  useful  only	for  workloads
		   based on psets or fss groups.

	      altName_or_script_name
		   Is  the  name a process takes once it begins executing. For
		   example, Oracle processes rename themselves. gWLM does  not
		   support whitespace in altName_or_script_name.

		   You	can  use  wildcard  patterns,  the same as in applica‐
		   tion_name, to match multiple alternate names.

		   For a shell script, specify the name of the script--exclud‐
		   ing its path.

		   To  specify	multiple  alternate names, repeat the "<alter‐
		   nateName>   altName_or_script_name</alternateName>"	 line,
		   substituting new names for altName_or_script_name.

	      script_name
		   Is the name for a script or other executable that returns a
		   set of process IDs (PIDs) separated	by  white  space.  The
		   identified processes are placed in the workload.

		   The	executable must be located in the /etc/opt/vse/scripts
		   directory on the managed node. It must have executable  and
		   root user permissions.

		   This executable is also known as a process map.

		   When created using the interface in HP Systems Insight Man‐
		   ager, the script element has the attribute isAppMap.

	      policy_name
		   Is the name for a policy that already exists	 in  the  gWLM
		   configuration  repository  or  will	exist  by the time the
		   workload's XML definition is imported. The  policy  can  be
		   specified  in  a  policyDefinition  element or in a compos‐
		   itePolicyDefinition element.

		   The <policyReference> tag is optional until an SRD contain‐
		   ing the defined workload is deployed or imported. This fea‐
		   ture allows you to create all your workloads then  go  back
		   and	associate policies to the workloads when the workloads
		   are needed in an SRD.
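
       As a sketch combining the tags above (the user name, paths, and	pol-
       icy  name  are  illustrative; the dataStore wrapper tags shown in XML
       OVERVIEW are omitted):

       ```xml
       <workloadDefinition name="oracle.workload">
	   <user>oracle</user>
	   <application>
	       <path>/opt/oracle/bin/ora*</path>
	   </application>
	   <application>
	       <path>/usr/bin/sh</path>
	       <alternateName>dbstart.sh</alternateName>
	   </application>
	   <policyReference name="Oracle_Policy"/>
       </workloadDefinition>
       ```

       The second application record follows the shell-script  rule  above:
       the  path names the shell that runs the script, and the alternateName
       element names the script itself.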

DEFINING SHARED RESOURCE DOMAINS (SRDs)
       An SRD is a collection of compartments that share system resources. The
       compartments  can  be virtual machines provided by HP Integrity Virtual
       Machines (hpvm), hard partitions (nPartitions, or npar for short), vir‐
       tual  partitions (vpar), processor sets (pset), or fair share scheduler
       (fss) groups. For example, a server containing npars can be an  SRD--as
       long  as the requirements immediately below are met. Also, an npar or a
       server divided into vpars can be an SRD for its vpar compartments. Sim‐
       ilarly,	an  npar or server divided into virtual machines can be an SRD
       for its virtual machine compartments.

       You do not manually define SRDs. Instead, you  use  the	gwlm  discover
       command	to  create  an XML file that describes the possible SRDs for a
       given host. You import the file into the	 configuration	repository  to
       define an SRD.

       NOTE:  gWLM does not support changing the configuration of the vCPUs of
       a virtual machine that is part of a deployed SRD. To have  gWLM	recog‐
       nize  such  a  change, first undeploy and delete the existing SRD, make
       the configuration change to the virtual machine, and then create a  new
       SRD containing that virtual machine.

       NOTE:  When  an	SRD is deployed, you are prevented from using hpvmmod‐
       ify -P vm-name -e percent to change the	entitlement  for  any  virtual
       machine	in  the SRD. However, you can still change a virtual machine's
       entitlement for the next time it is rebooted by adding -A to the previ‐
       ous command: hpvmmodify -A -P vm-name -e percent.

       NOTE: When managing vpars, gWLM can only migrate the following types of
       cores (subject to a vpar's configured minimum):

	      + unbound, or floating, cores (vPars A.03.x)

	      + dynamic cores (cores other than the boot processor) that have
		not been specified either by  hardware	path  or  by  cell  ID
		(vPars A.04.x)

       Ensure  there  are  sufficient  numbers of these types of cores so that
       gWLM has cores available for migrating among vpars.

       gWLM computes minimum and maximum values for the vpars when an  SRD  is
       discovered,  created, or modified. In addition, gWLM attempts to detect
       changes in vpar minimum and maximum values when a vpar is in a deployed
       SRD.  If	 such  changes	are detected, gWLM adjusts the vpar's resource
       allocation to be consistent with the new minimum and maximum values. To
       ensure  the  resource  allocation  continues  to be in accord with your
       business priorities, consider changing the policies in the SRD to  take
       advantage  of  the  change  in available resources, as described in the
       section "MANUALLY ADJUSTING CPU RESOURCES" in gwlm(5).

       Similarly, you should also consider changing policies in the SRD if you
       change  core bindings in any vpar in the complex--regardless of whether
       gWLM manages it.

       NOTE: Although it is possible to edit SRD definitions manually, if  you
       need the XML to reflect changes you have made to your system (for exam‐
       ple, because of a change in the number of usable cores), you should use
       the command gwlm discover again to generate the new XML and import that
       into the configuration repository. (When you use gwlm discover, several
       items are set to default values:	 the  mode,  interval,	and  ticapMode
       attributes  for	the  sharedResourceDomain element. Update their values
       accordingly. Also, verify the workloadReference entries in the compart-
       ment  definitions  are  correct and adjust the names in the workload
       definitions themselves. For example, you	 might	have  host.OTHER.2
       instead of host.OTHER.)

       See  the section "Modifying SRDs" below for information on what you can
       edit in an SRD definition.

   Using npars as compartments in an SRD
       Using the HP product Instant Capacity, gWLM can simulate	 the  movement
       of  cores  among	 npars	by turning off an active core in one npar then
       turning on a deactivated core in another	 npar  in  the	same  complex.
       Thus,  the  first  npar has one less active core, while the second npar
       has one additional active core. (gWLM maintains the  number  of	active
       cores,  honoring the Instant Capacity licensing limits. As a result, no
       additional costs are incurred.)

       When setting up npars to be managed by gWLM:

       + Each npar must have the HP product Instant Capacity (iCAP)  installed
	 and running

       + Each  npar must have the following property set to 'on' (the default)
	 in the file /etc/opt/gwlm/conf/gwlmagent.properties:

	 com.hp.gwlm.platform.icap.manageWithIcap

       + Each npar must not be divided into virtual partitions
	 (Otherwise, each virtual partition must run gwlmagent, in which case
	 gWLM would be managing nested partitions, with usage	rights	shared
	 across	 the npars in the SRD and cores shared across the virtual
	 partitions.)
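
       A sketch of the corresponding line in  /etc/opt/gwlm/conf/gwlmagent.
       properties  (the	 exact spacing is illustrative; 'on' is the default
       value):

       ```
       com.hp.gwlm.platform.icap.manageWithIcap = on
       ```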

       gWLM can manage iCAP resources in the following configurations:

       + One or more npars in the same complex with gWLM enabled to use Tempo‐
	 rary iCAP (TiCAP)

       + Two or more npars in the same complex with gWLM not  enabled  to  use
	 TiCAP

       + Two  or  more	npars in the same or different complexes with the com‐
	 plexes in the same Global iCAP (GiCAP) group

       gWLM can adjust an npar's number of cores:

       Up     By as many inactive cores as it has--assuming there are just  as
	      many  active  cores  in  the  other npars in the SRD that can be
	      deactivated

       Down   To no lower than its compartment minimum

		     npars serving as virtual machine hosts
       When managing npars, for any npar that is serving as a VM Host, set the
       policies	 in  the SRD so that the VM Host always keeps enough resources
       to meet at least the minimum cores required for	its  configuration  of
       virtual	machines.  For information on this VM Host minimum, see the HP
       Integrity Virtual Machines documentation.

   Modifying SRDs
       There are many elements in the definition of an SRD that you should not
       edit, as indicated below.

       For  information	 on removing a GiCAP group member from an SRD, see the
       online help topic "Getting the Most Out of gWLM" (available  in	gWLM's
       graphical  interface  in	 HP  Systems  Insight Manager) and its section
       "Taking Advantage of Global Instant Capacity."

       NOTE: Any XML file you use with gwlm must contain certain tags that are
       not shown below. See the EXAMPLES section for the additional tags.

       An SRD definition has the following form (brackets, '[' and ']', desig‐
       nate optional items):

	   <sharedResourceDomain name="SRD_name"
	   [ mode="Advisory_or_Managed" ] time="time_value"
	   [ ticapMode="mode_value" ] [ interval="length" ]>
	    <compartment name="compartment_name" id="compartment_ID"
	     type="hpvm_npar_vpar_pset_or_fss"
	     [ icap="icap_value" [ ticap="ticap_value" ]]>
		<cpu>
		 <minimum> minimum_size </minimum>
		 <maximum> maximum_size </maximum>
		 <size> Core_size </size>
		</cpu>
		<hostName> hostname </hostName>
		<nativeId> nativeId_value </nativeId>
		<parentId> parentId_value </parentId>

		<workloadReference name="workload_name"/>
	    </compartment>

	      <!-- 0 or more additional compartment definitions -->

	   </sharedResourceDomain>

       where:

	      SRD_name
		   Is the name of the SRD. You can use A-Z, a-z, 0-9, the dash
		   character  (	 -  ),	the period ( . ), the colon ( : ), the
		   space ( ), and the underscore ( _ ) in the name.

	      Advisory_or_Managed
		   (Optional--the  mode	 attribute is optional; its default is
		   Managed) Indicates the mode in which	 gWLM  should  operate
		   for the SRD. Valid values are:

		   + Advisory

		     Allows  you to see what resource requests gWLM would make
		     for a workload--without actually affecting resource allo‐
		     cation.

		     Advisory  mode  is not available for SRDs containing vir‐
		     tual machines, psets, or fss groups due to the nature  of
		     these compartments.

		   + Managed

		     gWLM  actively adjusts resource allocations for the work‐
		     loads.

		   Monitor gWLM's resource allocations--regardless of mode--on
		   the	command	 line  using  the  gwlm	 monitor command or by
		   accessing the graphs (reports) available through  the  gWLM
		   menu in HP Systems Insight Manager.

	      time_value
		   Do not edit this value.

		   Is a value for gWLM use only.

	      mode_value
		   Indicates  the  Temporary  Instant  Capacity	 (ticap) mode.
		   Valid values are:

		   + none
		     Default setting--or the system does not support Temporary
		     Instant Capacity.

		   + off
		     Temporary Instant Capacity is supported on the system but
		     is only being used to satisfy policy requests  for	 poli‐
		     cies  in  the SRD with the ticapByPolicy attribute set to
		     true.

		   + all
		     Temporary Instant Capacity is supported on the system and
		     can be used to satisfy policy requests by all policies in
		     the SRD.

		   For additional information and warnings on using TiCAP, see
		   the	section	 "NOTES	 ON  USING TEMPORARY INSTANT CAPACITY"
		   below.

	      length
		   (Optional--the interval attribute is optional; its  default
		   is  15) Is the number of seconds gWLM waits between setting
		   CPU allocations.

		   NOTE: If you are managing GiCAP group members, nPartitions,
		   or virtual partitions, a longer interval may be needed. For
		   managing members of a GiCAP group,  increase	 the  interval
		   manually.  For  nPartitions	and  virtual  partitions, gWLM
		   automatically estimates a longer  interval  for  you.  This
		   estimated value is not guaranteed to be the best value: You
		   may need to adjust it manually. A manual  adjustment	 might
		   be  needed  if,  in managed mode, values for Allocation and
		   Size are different--indicating the change in size is taking
		   too	long. (These values appear in real-time reports avail‐
		   able through gWLM's SIM interface and in  output  from  the
		   gwlm	 monitor  command.) An "SRD Resource Interval Warning"
		   logged to /var/opt/gwlm/gwlmcmsd.log.0 on HP-UX is also  an
		   indication  that a manual adjustment may be needed. (On
		   Microsoft Windows, the file is by default  C:\Program
		   Files\HP\Virtual Server Environment\logs\gwlmcmsd.log.0;
		   however, a different path may have  been  selected  at
		   installation.)

		   NOTE: If you are managing virtual machines, specify	length
		   as  a multiple of the vm_fssagt interval, which defaults to
		   10 seconds. If you need a length of fewer than 10  seconds,
		   set	the interval for the VM agents (using vm_fssagt -i) to
		   length.

	      compartment_name
		   Do not edit this value.

		   Is the name for the compartment.

	      compartment_ID
		   Do not edit this value.

		   Is gWLM's ID for the compartment.

	      hpvm_npar_vpar_pset_or_fss
		   Do not edit this value.

		   Indicates the compartment type. Valid values are:

		   + hpvm

		     Indicates a compartment based on a virtual	 machine  pro‐
		     vided by HP Integrity Virtual Machines.

		   + npar

		     Indicates a compartment based on an nPartition.

		   + vpar

		     Indicates a compartment based on a virtual partition.

		   + pset

		     Indicates a compartment based on a processor set.

		   + fss

		     Indicates	a  compartment based on a Fair Share Scheduler
		     group.

	      icap_value
		   Do not edit this value.

		   Indicates whether Instant  Capacity	(icap)	is  supported.
		   Valid value: Supported.

	      ticap_value
		   Do not edit this value.

		   Indicates  whether  Temporary  Instant  Capacity (ticap) is
		   supported. Valid value: Supported. The icap attribute  must
		   be specified to specify the ticap attribute.

	      minimum_size
		   Do not edit this value.

		   Is  the  minimum  amount of CPU resources, in cores, that a
		   compartment can have. This value is	the  minimum  resource
		   allocation required by the underlying compartment.

	      maximum_size
		   Do not edit this value.

		   Is  the  maximum  amount of CPU resources, in cores, that a
		   compartment can have. This value is	the  maximum  resource
		   allocation  allowed by the underlying compartment. However,
		   gWLM may reduce this number at times because an SRD	has  a
		   large  number  of  compartments  and	 each compartment must
		   receive a minimum portion of the resources.

	      Core_size
		   Do not edit this value.

		   Is the amount of CPU resources, in cores, given to the com‐
		   partment.

	      hostname
		   Do not edit this value.

		   Is the full name of the host (including domain) of the com‐
		   partment.

	      nativeId_value
		   Do not edit this value.

		   Is an ID value used by gWLM.

	      parentId_value
		   Do not edit this value.

		   Is an ID value used by gWLM.

	      workload_name
		   Is the name of the workload running in the compartment.
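
       For orientation only, a discovered SRD definition resembles the	fol-
       lowing  sketch.	All  values here are placeholders: generate real SRD
       definitions with gwlm discover, and do not hand-edit  the  values
       marked "Do not edit" above.

       ```xml
       <sharedResourceDomain name="mypar.srd" mode="Managed"
	   time="1234567890" interval="60">
	   <compartment name="vpar1" id="1" type="vpar">
	       <cpu>
		   <minimum>1</minimum>
		   <maximum>4</maximum>
		   <size>2</size>
	       </cpu>
	       <hostName>vpar1.example.com</hostName>
	       <nativeId>0</nativeId>
	       <parentId>0</parentId>
	       <workloadReference name="vpar1.workload"/>
	   </compartment>
       </sharedResourceDomain>
       ```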

TIPS FOR MANAGING PSETS AND FSS GROUPS
       This section presents various items to be aware of when managing	 work‐
       loads based on psets or fss groups.

   Precedence of placement techniques
       You  can	 place	processes  in workloads with user records, application
       records, and the gwlmplace command. gWLM checks that processes  are  in
       their  appropriate  workloads  every 30 seconds, based on the following
       rules of precedence.

       For each process:

       1. Has the process been placed using the gwlmplace command?

	  Yes: Leave process where it is. (If you use gwlmplace --recurse  to
	  move	a  process  tree,  all processes in the tree are considered to
	  have been placed using gwlmplace.)

	  No: Continue to next step

       2. Has the process been placed using a process map?

	  Yes: Leave process where it is

	  No: Continue to next step

       3. Is there a matching application record?

	  Yes: Then assign the process to that workload

	  No: Continue to next step

       4. Is there a matching user record?

	  Yes: Then assign the process to that workload

	  No: Continue to next step

       5. Is the parent of the process assigned to a workload?

	  Yes: Then inherit the parent's workload

	  No: Continue to next step

       6. Place nonroot processes in the default workload. (gWLM  leaves  root
	  processes where they are.)

   Management of short-lived processes
       gWLM  polls  the system every 30 seconds to determine whether processes
       are running in their assigned workloads. Consequently, you  should  not
       assign  short-lived  processes  to  workloads.  (By  default, a process
       inherits the workload of its parent process.)

   Use of the default pset or default fss group
       If you let processes run in the default pset or the default fss	group,
       they  will  be  competing  against all the other processes that are not
       explicitly placed in workloads. To ensure appropriate resource  alloca‐
       tions  for your processes, place them in workloads by specifying <user>
       tags or <application> tags when defining	 workloads  or	by  using  the
       gwlmplace command.

NOTES ON USING TEMPORARY INSTANT CAPACITY
       Several	items  you  should  be	aware  of when using Temporary Instant
       Capacity (TiCAP):

	      + Undeployment takes longer

		When an SRD that is using TiCAP is  undeployed,	 gWLM  deacti‐
		vates its borrowed TiCAP resources. This operation adds to the
		overall time required to complete the undeployment.

	      + Partial undeployment can leave TiCAP resources unmanaged

		When an SRD is undeployed, gWLM stops managing its  resources.
		If  a  failure occurs that prevents the undeployment from com‐
		pleting, the SRD could remain deployed, but with some  portion
		of its resources no longer actively managed. In particular, if
		TiCAP is being used by the SRD,	 it  is	 possible  that	 those
		resources  are	no  longer  managed by gWLM after an attempted
		undeploy  has  aborted.	 Consequently,	if  the	 SRD   remains
		deployed,  it  will  no	 longer	 activate  TiCAP  resources in
		response to high demand.

	      + Re-deploy failure may leave SRD undeployed with active TiCAP

		When an SRD or any of its policies or workloads	 is  modified,
		the  change  causes a re-deployment of that SRD. If there is a
		failure during re-deployment, the SRD  may  become  undeployed
		without cleaning up, potentially leaving TiCAP active. If this
		happens, you  should  either  manually	deactivate  the	 TiCAP
		resources  or  deploy  the  SRD again and allow gWLM to resume
		TiCAP management.

	      + Time to detect an increase or decrease in TiCAP balance

		Allow several minutes for gWLM to detect an increase or
		decrease in a system's TiCAP balance. (A re-deployment of the
		SRD also causes gWLM to check the TiCAP balance.)

	      + Minimum balance topics

		- On managed nodes, the file /etc/opt/gwlm/conf/gwlmagent.properties
		  provides the property com.hp.gwlm.node.ticap.minimumBalance,
		  which lets you specify the minimum number of TiCAP minutes
		  that must be available before gWLM activates TiCAP
		  resources. (gWLM honors usage limits determined by the TiCAP
		  software: if the value of com.hp.gwlm.node.ticap.minimumBalance
		  is below the TiCAP limit, the TiCAP limit is used instead.)

		- When	a  system's TiCAP balance runs out, you may see resize
		  errors for several minutes or until the SRD is re-deployed.
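
		  For reference, a sketch of the relevant line in
		  /etc/opt/gwlm/conf/gwlmagent.properties (the property name
		  is from this page; the value of 120 minutes is an arbitrary
		  illustration, not a recommended setting):

		      # Do not activate TiCAP unless at least
		      # 120 TiCAP minutes remain on this node.
		      com.hp.gwlm.node.ticap.minimumBalance=120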

   Messages related to TiCAP resources
       The following messages inform you of gWLM's  behavior  regarding	 TiCAP
       resources. An explanation and a suggested action follow each message.

       Sufficient TiCAP time is available.  All TiCAP cores are	 eligible  for
       use.

	      Indicates	 a change occurred that allows gWLM to allocate TiCAP.
	      The change could be that the  com.hp.gwlm.node.ticap.minimumBal‐
	      ance value was reduced or the TiCAP balance increased.

	      Suggested action: No action is needed.

       Insufficient TiCAP time remaining.  Disabling use of TiCAP.

	      Indicates	 a  change occurred that prevents gWLM from allocating
	      TiCAP. The change could be that the com.hp.gwlm.node.ticap.mini‐
	      mumBalance value was increased or the TiCAP balance decreased.

	      Suggested	 action:  Increase  the TiCAP balance, as explained in
	      the HP Instant Capacity User's Guide.

       TiCAP time is running out.  Limiting use of temporary  capacity	to  at
       most <x> out of <y> TiCAP cores.

	      The TiCAP time is approaching 0.

	      Suggested	 action:  Increase  the TiCAP balance, as explained in
	      the HP Instant Capacity User's Guide.

EXAMPLES
   Additional tags
       The examples above omit certain text so that each can focus on a single
       topic. The omitted text is given below.

       XML  files  you use with gwlm must start with the following lines. (The
       first line must start in the first column.  Do  not  leave  any	spaces
       before the "<?xml version..." text.)

	  <?xml version="1.0" encoding="UTF-8"?>
	  <!DOCTYPE dataStore SYSTEM "file:/opt/gwlm/dtd/config.dtd">
	  <dataStore>

       The following tag must be at the end of your XML files:

	  </dataStore>

       The examples below illustrate complete XML files.

   Policy examples
       These examples illustrate each of the available policy types.

       Because policy definitions are self-contained, you can run a gwlm
       import at any time on a file containing only policy definitions.

       <?xml version="1.0" encoding="UTF-8"?>
       <!DOCTYPE dataStore SYSTEM "file:/opt/gwlm/dtd/config.dtd">
       <dataStore>

	   <policyDefinition name="fixed_3_policy" type="Fixed">
	    <cpu>
		<own> 3.0 </own>
	    </cpu>
	   </policyDefinition>

	   <policyDefinition name="util_85_policy" type="Utilization"
	       ticapByPolicy="false">
	    <cpu>
		<target> 85.0 </target>
	    </cpu>
	    <priority> 12 </priority>
	   </policyDefinition>

	   <policyDefinition name="own_3_policy" type="OwnBorrow"
	       ticapByPolicy="false">
	    <cpu>
		<minimum> 1.0 </minimum>
		<maximum> 5.0 </maximum>
		<own> 3.0 </own>
	    </cpu>
	   </policyDefinition>

	   <policyDefinition name="db_policy" type="Custom"
	       ticapByPolicy="false">
	    <cpu>
		<target>3.5</target>
		<metric name="db_tx_time" response="Inverse" />
	    </cpu>
	   </policyDefinition>

       </dataStore>

   Composite (conditional) policy example based on time
       This example illustrates a policy, Fixed_3, that takes  effect  several
       mornings every week.

       <?xml version="1.0" encoding="UTF-8"?>
       <!DOCTYPE dataStore SYSTEM "file:/opt/gwlm/dtd/config.dtd">
       <dataStore>

	   <compositePolicyDefinition name="Backup" type="Conditional">

	    <conditionList>

		<conditionItem order="0">
		 <timePeriod>

		   <beginTime> <!-- Begin Tue-Sat at 1 A.M. -->
		       <weekday> 2 </weekday>
		       <weekday> 3 </weekday>
		       <weekday> 4 </weekday>
		       <weekday> 5 </weekday>
		       <weekday> 6 </weekday>
		       <time> 01:00:00 </time>
		   </beginTime>

		   <endTime> <!-- End Tue-Sat by 5 A.M. -->
		       <time> 05:00:00 </time>
		   </endTime>

		 </timePeriod>
		 <policyReference name="Fixed_3"/>
		</conditionItem>

		<conditionItem order="1">
		 <default/>
		 <policyReference name="Owns_1-Max_2"/>
		</conditionItem>

	    </conditionList>

	   </compositePolicyDefinition>

       </dataStore>

   Composite (conditional) policy example based on a file
       This example shows how to set multiple file-based conditions in a  sin‐
       gle  policy.  gWLM  changes the effective policy based on which files
       exist in /var/opt/gwlm/conditions/ on the workload's host.

       <?xml version="1.0" encoding="UTF-8"?>
       <!DOCTYPE dataStore SYSTEM "file:/opt/gwlm/dtd/config.dtd">
       <dataStore>

	   <compositePolicyDefinition name="SwitchOnFile" type="Conditional">

	       <conditionList>

		   <conditionItem order="0">
		       <file filename="Owns_0.25"/>
		       <policyReference name="Owns_0.25-Max_1"/>
		   </conditionItem>

		   <conditionItem order="1">
		       <file filename="Owns_0.5"/>
		       <policyReference name="Owns_0.5-Max_1"/>
		   </conditionItem>

		   <conditionItem order="2">
		       <file filename="Owns_1"/>
		       <policyReference name="Owns_1-Max_2"/>
		   </conditionItem>

		   <conditionItem order="3">
		       <file filename="Owns_2"/>
		       <policyReference name="Owns_2-Max_4"/>
		   </conditionItem>

		   <conditionItem order="4">
		       <default/>
		       <policyReference name="CPU_Utilization"/>
		   </conditionItem>

	       </conditionList>

	   </compositePolicyDefinition>

       </dataStore>

       You can then select an effective policy as follows:

	      # touch /var/opt/gwlm/conditions/Owns_1

       Creating the Owns_1 file causes the Owns_1-Max_2 policy to take effect,
       assuming no condition with a lower order value is met.

	      # touch /var/opt/gwlm/conditions/Owns_2

       Creating Owns_2 has no effect because the Owns_1 condition, which has a
       lower order value and therefore takes precedence, is still met.

	      # rm /var/opt/gwlm/conditions/Owns_1

       With  the Owns_1 file removed, the Owns_2 condition is now met, and the
       Owns_2-Max_4 policy is in effect.
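
       The first-match rule the walkthrough relies on can be sketched with
       standard shell commands. The script below only mimics gWLM's
       evaluation order: the scratch directory stands in for
       /var/opt/gwlm/conditions/, and the file names and their precedence are
       taken from the SwitchOnFile example above.

```shell
# Illustration only: select the first condition file that exists,
# checking files in ascending <conditionItem> order.
dir=$(mktemp -d)
touch "$dir/Owns_1" "$dir/Owns_2"

effective=CPU_Utilization            # the <default/> policy
for f in Owns_0.25 Owns_0.5 Owns_1 Owns_2; do
    if [ -e "$dir/$f" ]; then
        effective=$f                 # first match wins
        break
    fi
done
echo "$effective"                    # prints Owns_1: it outranks Owns_2
```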

   Composite (conditional) policy example for Serviceguard
       This example illustrates a composite policy  based  on  a  Serviceguard
       condition. It assumes a two-node Serviceguard cluster running  gWLM  on
       both nodes. If gWLM finds that the cluster capacity  is	reduced	 (that
       is, one node is not active), workloads on the remaining node that
       reference this composite policy can then use TiCAP resources: the
       condition selects Owns_3-Max_6-with-TiCAP, which is simply the
       gWLM-provided Owns_3-Max_6 policy with TiCAP usage enabled.  With  both
       nodes active, workloads referencing the composite policy are allocated
       resources based on the policy Owns_1-Max_2.

       <?xml version="1.0" encoding="UTF-8"?>
       <!DOCTYPE dataStore SYSTEM "file:/opt/gwlm/dtd/config.dtd">
       <dataStore>

	   <compositePolicyDefinition name="ReducedCapacity" type="Conditional">

	    <conditionList>

		<conditionItem order="0">
		 <sgReducedClusterCapacity/>
		 <policyReference name="Owns_3-Max_6-with-TiCAP"/>
		</conditionItem>

		<conditionItem order="1">
		 <default/>
		 <policyReference name="Owns_1-Max_2"/>
		</conditionItem>

	    </conditionList>

	   </compositePolicyDefinition>

       </dataStore>

   Workload examples
       Any policy definition you reference in a workload definition must:

	  + Be a policy in the library of policies  that  gWLM	provides  (use
	    gwlm list to see the policies already in the configuration reposi‐
	    tory)

       or

	  + Be defined in the XML file that includes the workload  definition,
	    with the policy definition appearing above the workload definition

       or

	  + Have  already  been	 placed	 in  the gWLM configuration repository
	    (through a gwlm import)
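
       For instance, the second option can be satisfied by combining two of
       the earlier examples into a single file (the policy and workload names
       are taken from the examples above; note the policy definition appears
       before the workload that references it):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dataStore SYSTEM "file:/opt/gwlm/dtd/config.dtd">
<dataStore>

    <policyDefinition name="fixed_3_policy" type="Fixed">
     <cpu>
         <own> 3.0 </own>
     </cpu>
    </policyDefinition>

    <workloadDefinition name="mysystem2_wkld">
     <policyReference name="fixed_3_policy"/>
    </workloadDefinition>

</dataStore>
```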

       The following workloads use the policies defined in the examples above.

       <?xml version="1.0" encoding="UTF-8"?>
       <!DOCTYPE dataStore SYSTEM "file:/opt/gwlm/dtd/config.dtd">
       <dataStore>

	   <workloadDefinition name="mysystem_wkld">
	    <policyReference name="own_3_policy"/>
	   </workloadDefinition>

	   <workloadDefinition name="mysystem1_wkld">
	    <user>jdoe</user>
	    <user>msmith</user>
	    <application>
	       <path>/opt/perl/bin/perl</path>
	       <alternateName>generate_report.pl</alternateName>
	    </application>
	    <policyReference name="util_85_policy"/>
	   </workloadDefinition>

	   <workloadDefinition name="mysystem2_wkld">
	    <policyReference name="fixed_3_policy"/>
	   </workloadDefinition>

       </dataStore>

   SRD examples
       Any workload definition you reference in an SRD definition must:

	  + Be defined in the XML file that includes the SRD definition,  with
	    the workload definition appearing above the SRD definition

       or

	  + Have  already  been	 placed	 in  the gWLM configuration repository
	    (through a gwlm import)

       or

	  + Have  been	created	 by  HP	 Virtualization	  Manager,   such   as
	    host.OTHER.

       Here,  we define an SRD, named mySRD, that gWLM will manage in advisory
       mode. The compartments are based on virtual partitions.

       <?xml version="1.0" encoding="UTF-8"?>
       <!DOCTYPE dataStore SYSTEM "file:/opt/gwlm/dtd/config.dtd">
       <dataStore>

	   <sharedResourceDomain name="mySRD" mode="Advisory"
	    time="1092925994945" interval="15">
	    <compartment name="mysystem" id="34" type="vpar">
		<cpu>
		 <minimum>1.0</minimum>
		 <maximum>14.0</maximum>
		 <size>6.0</size>
		</cpu>
		<hostName>mysystem.mydomain.com</hostName>
		<nativeId>696399316_V00</nativeId>
		<parentId>696399316</parentId>
		<workloadReference name="mysystem_wkld"/>
	    </compartment>
	    <compartment name="mysystem1" id="35" type="vpar">
		<cpu>
		 <minimum>1.0</minimum>
		 <maximum>14.0</maximum>
		 <size>5.0</size>
		</cpu>
		<hostName>mysystem1.mydomain.com</hostName>
		<nativeId>696399316_V01</nativeId>
		<parentId>696399316</parentId>
		<workloadReference name="mysystem1_wkld"/>
	    </compartment>
	    <compartment name="mysystem2" id="36" type="vpar">
		<cpu>
		 <minimum>1.0</minimum>
		 <maximum>14.0</maximum>
		 <size>5.0</size>
		</cpu>
		<hostName>mysystem2.mydomain.com</hostName>
		<nativeId>696399316_V02</nativeId>
		<parentId>696399316</parentId>
		<workloadReference name="mysystem2_wkld"/>
	    </compartment>
	   </sharedResourceDomain>

       </dataStore>

FEEDBACK
       If you would like to comment on the current HP  gWLM  functionality  or
       make suggestions for future releases, please send email to:

	      gwlmfeedback@rsn.hp.com

FILES
       /opt/gwlm/dtd/config.dtd
	      DTD  file	 on  HP-UX that defines the XML combinations that gwlm
	      import accepts

       C:\Program Files\HP\Virtual Server Environment\dtd\config.dtd
	      DTD file on Microsoft Windows that defines the XML  combinations
	      that  gwlm  import accepts (This path is the default; however, a
	      different path may have been selected at installation.)

SEE ALSO
       gwlm(1M), gwlmplace(1M), gwlmsend(1M), gwlm(5)

								    gwlmxml(4)