doc: Update RDT/vCAT tutorial

* Merge RDT and vCAT tutorials
* Update overview, dependencies and constraints
* Update to match Configurator UI instead of manually editing XML files
* Remove architectural details and instead point to high-level design documentation

Tracked-On: #6081
Signed-off-by: Reyes, Amy <amy.reyes@intel.com>
Reyes, Amy 2022-06-28 14:00:06 -07:00 committed by David Kinder
parent a42997e9e0
commit b594e1fbc7
11 changed files with 218 additions and 332 deletions


@ -61,7 +61,6 @@ Advanced Features
tutorials/vuart_configuration
tutorials/rdt_configuration
tutorials/waag-secure-boot
tutorials/enable_s5
tutorials/cpu_sharing


@ -1,7 +1,7 @@
.. _hv_vcat:
Virtual Cache Allocation Technology (vCAT)
###########################################
vCAT refers to the virtualization of Cache Allocation Technology (CAT), one of the
RDT (Resource Director Technology) technologies.
@ -26,7 +26,7 @@ When assigning cache ways, however, the VM can be given exclusive, shared, or mi
ways depending on particular performance needs. For example, use dedicated cache ways for RTVM, and use
shared cache ways between low priority VMs.
In ACRN, the CAT resources allocated for vCAT VMs are determined in :ref:`rdt_configuration`.
For further details on the RDT, refer to the ACRN RDT high-level design :ref:`hv_rdt`.

(Seven binary image files added, 9.0 KiB to 57 KiB each; not shown.)


@ -1,254 +1,229 @@
.. _rdt_configuration:
Enable Intel Resource Director Technology (RDT) Configurations
###############################################################
About Intel Resource Director Technology (RDT)
**********************************************
On x86 platforms that support Intel Resource Director Technology (RDT)
allocation features, the ACRN hypervisor can partition the shared cache among
VMs to minimize performance impacts on higher-priority VMs, such as real-time
VMs (RTVMs). “Shared cache” refers to cache that is shared among multiple CPU
cores. By default, VMs running on these cores are configured to use the entire
cache, effectively sharing the cache among all VMs without any partitions. This
design may cause too many cache misses for applications running in
higher-priority VMs, negatively affecting their performance. The ACRN hypervisor
can help minimize cache misses by isolating a portion of the shared cache for
a specific VM.
ACRN supports the following features:
* Cache Allocation Technology (CAT)
* Code and Data Prioritization (CDP)
* Virtual Cache Allocation Technology (vCAT)
The CAT support in the hypervisor isolates a portion of the cache for a VM from
other VMs. Generally, certain cache resources are allocated for the RTVMs to
reduce performance interference by other VMs attempting to use the same cache.
The CDP feature in RDT is an extension of CAT that enables separate control over
code and data placement in the cache. The CDP support in the hypervisor isolates
a portion of the cache for code and another portion for data for the same VM.
ACRN also supports the virtualization of CAT, referred to as vCAT. With
vCAT enabled, the hypervisor presents CAT to a selected set of VMs to allow the
guest OSes to further isolate the cache used by higher-priority processes in
those VMs.
Dependencies and Constraints
*****************************
Consider the following dependencies and constraints:
* The hardware must support RDT in order for ACRN to enable RDT support in the
hypervisor.
* The cache must be shared cache (cache shared across multiple CPU cores), as
opposed to private cache (cache that is owned by only one CPU core). If the
cache is private, CAT, CDP, and vCAT have no benefit because the cache is
already exclusively used by one core. For this reason, the ACRN Configurator
will not allow you to configure private cache.
* The ACRN Configurator relies on the board configuration file to provide CAT
information that it can use to display configuration parameters. On Tiger Lake
systems, L3 CAT, also known as LLC CAT, is model specific and
non-architectural. For these reasons, the Board Inspector doesn't detect LLC
CAT, and therefore doesn't provide LLC CAT information in the board
configuration file even if the board has LLC CAT capabilities. The Board
Inspector offers a way to manually add LLC CAT information to the board
configuration file via a command-line option described in
:ref:`board_inspector_tool`. Run the Board Inspector with the command-line
option, then import the board configuration file into the ACRN Configurator.
* The guest OS in a VM with vCAT enabled requires utilities in that OS for
further cache allocation configurations. An example is the `resctrl
<https://docs.kernel.org/x86/resctrl.html>`__ framework in Linux.
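With resctrl, a guest administrator writes a ``schemata`` line to a resource
group to cap its cache ways. As an illustrative sketch only (the resource name,
domain IDs, and masks below are hypothetical, not taken from this tutorial):

```python
# Sketch: compose a resctrl "schemata" line that a guest Linux admin could
# write to /sys/fs/resctrl/<group>/schemata to limit a group's L3 ways.

def schemata_line(resource, masks):
    """Format per-cache-domain capacity bitmasks, e.g. 'L3:0=f;1=ff0'."""
    domains = ";".join(f"{dom}={mask:x}" for dom, mask in sorted(masks.items()))
    return f"{resource}:{domains}"

# Give cache domain 0 the low 4 ways and domain 1 the next 8 ways.
print(schemata_line("L3", {0: 0xF, 1: 0xFF0}))  # L3:0=f;1=ff0
```

The actual resources and domains available depend on the platform and on what
the hypervisor exposes to the vCAT VM.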
Configuration Overview
**********************
You can allocate cache to each VM at the virtual CPU (vCPU) level. For example,
you can create a post-launched real-time VM and assign three physical CPU
cores to it. ACRN assigns a vCPU ID to each physical CPU. Furthermore, you can
specify a vCPU as a real-time vCPU. Then you can allocate a portion of the cache
to the real-time vCPU and allocate the rest of the cache to be shared among the
other vCPUs. This type of configuration allows the real-time vCPU to use its
assigned cache without interference from the other vCPUs, thus improving the
performance of applications running on the real-time vCPU. The following
documentation is a general overview of the configuration process.
The :ref:`acrn_configurator_tool` provides a user interface to help you allocate
cache to vCPUs. The configuration process requires setting VM parameters, then
allocating cache to the VMs via an interface in the hypervisor parameters. This
documentation presents the configuration process as a linear flow, but in
reality you may find yourself moving back and forth between setting the
hypervisor parameters and the VM parameters until you are satisfied with the
entire configuration.
For a real-time VM, you must set the following parameters in the VM's **Basic
Parameters**:
* **VM type**: Select **Real-time**.
* **pCPU ID**: Select the physical CPU affinity for the VM.
* **Virtual CPU ID**: Note the vCPU ID that the tool assigns to each physical
CPU. You will need to know the ID when you are ready to allocate cache.
* **Real-time vCPU**: Select the Real-time vCPU check box next to each real-time
vCPU. The ACRN Configurator uses this information to create default cache
configurations, as you will see later in this documentation. If you change the
VM type from Real-time to Standard, the ACRN Configurator disables the
Real-time vCPU check box.
.. image:: images/configurator-rt01.png
:align: center
:class: drop-shadow
To use vCAT for the VM, you must also set the following parameters in the VM's
**Advanced Parameters**:
* **Maximum virtual CLOS**: Select the maximum number of virtual CLOS masks.
This parameter defines the number of cache chunks that you will see in the
hypervisor parameters.
* Select **VM Virtual Cache Allocation Tech**.
.. image:: images/configurator-vcat01.png
:align: center
:class: drop-shadow
Next, you can enable Intel RDT features in **Hypervisor Global Settings >
Advanced Parameters > Memory Isolation for Performance**. You can enable one of
the following combinations of features:
* Cache Allocation Technology (CAT) alone
* Cache Allocation Technology plus Code and Data Prioritization (CDP)
* Cache Allocation Technology plus Virtual Cache Allocation Technology (vCAT)
The following figure shows Cache Allocation Technology enabled:
.. image:: images/configurator-cache01.png
:align: center
:class: drop-shadow
When CDP or vCAT is enabled, CAT must be enabled too. The tool selects CAT if it's not already selected.
.. image:: images/configurator-cache02.png
:align: center
:class: drop-shadow
CDP and vCAT can't be enabled at the same time, so the tool clears the vCAT check box when CDP is selected and vice versa.
Based on your selection, the tool displays the available cache in tables.
Example:
.. image:: images/configurator-cache03.png
:align: center
:class: drop-shadow
The table title shows important information:
* Cache level, such as Level 3 (L3) or Level 2 (L2)
* Physical CPU cores that can access the cache
The above example shows an L2 cache table. VMs assigned to any of the CPU cores
2-6 can have cache allocated to them.
The table's y-axis shows the names of all VMs that are assigned to the CPU cores
noted in the table title, as well as their vCPU IDs. The table categorizes the
vCPUs as either standard or real-time. The real-time vCPUs are those that are
set as real-time in the VM's parameters. All other vCPUs are considered
standard. The above example shows one real-time vCPU (VM1 vCPU 2) and two
standard vCPUs (VM0 vCPU 2 and 6).
.. note::
The Service VM is automatically assigned to all CPUs, so it appears in the standard category in all cache tables.
The table's x-axis shows the number of available cache chunks. You can see the
size of each cache chunk in the note below the table. In the above example, 20
cache chunks are available to allocate to the VMs, and each cache chunk is 64KB.
All cache chunks are yellow, which means all of them are allocated to all VMs.
All VMs share the entire cache.
The **Apply basic real-time defaults** button creates a basic real-time
configuration if real-time vCPUs exist. If there are no real-time vCPUs, the
button has no effect.
If you select Cache Allocation Technology (CAT) alone, the **Apply basic
real-time defaults** button allocates a different cache chunk to each real-time
vCPU, making sure it doesn't overlap the cache of any other vCPU. The rest of
the cache is shared among the standard vCPUs. In the following example, only VM1
vCPU 2 can use cache chunk19, while all other vCPUs share the rest of the cache.
.. image:: images/configurator-cache04.png
:align: center
:class: drop-shadow
If you select CAT with Code and Data Prioritization, you can allocate different
cache chunks to code or data on the same vCPU. The **Apply basic real-time
defaults** button allocates one cache chunk to code on the real-time vCPU and a
different cache chunk to data on the same vCPU, making sure the cache chunks
don't overlap any others. In the following example, VM1 vCPU 2 can use cache
chunk19 for code and chunk18 for data, while all other vCPUs share the rest of
the cache.
.. image:: images/configurator-cache05.png
:align: center
:class: drop-shadow
To further customize the cache allocation, you can drag the right or left edges
of the yellow boxes to cover the cache chunks that you want to allocate to
specific VMs.
.. note::
If you have a real-time VM, ensure its cache chunks do not overlap with any
other VM's cache chunks.
The tool helps you create valid configurations based on the underlying platform
architecture. For example, it is only possible to assign consecutive cache
chunks to a vCPU; there can be no gaps. Also, a vCPU must have access to at
least one cache chunk.
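These validity rules can be sketched as simple bitmask checks, assuming each
vCPU's allocation is represented as an integer mask over the available cache
chunks (the function names are illustrative, not part of the ACRN Configurator):

```python
def is_contiguous(mask: int) -> bool:
    """A valid allocation is one unbroken run of 1 bits (no gaps)."""
    if mask == 0:
        return False  # a vCPU must have access to at least one cache chunk
    mask >>= (mask & -mask).bit_length() - 1  # strip trailing zero bits
    return (mask & (mask + 1)) == 0           # 0b0..01..1 form means no gaps

def rt_isolated(rt_mask: int, other_masks) -> bool:
    """A real-time vCPU's chunks must not overlap any other vCPU's chunks."""
    return all((rt_mask & m) == 0 for m in other_masks)

assert is_contiguous(0b0111000)      # chunks 3-5: contiguous, valid
assert not is_contiguous(0b0101000)  # has a gap: invalid
assert rt_isolated(0b1000, [0b0111, 0b0011])  # RT chunk is exclusive
```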
Learn More
**********
For details on the ACRN RDT high-level design, see :ref:`hv_rdt`.
For details about RDT, see
`Intel 64 and IA-32 Architectures Software Developer's Manual (SDM), Volume 3,
(Section 17.19 Intel Resource Director Technology Allocation Features)
<https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html>`_.
.. _rdt_detection_capabilities:
RDT Detection and Resource Capabilities
***************************************
From the ACRN HV debug shell, use ``cpuid`` to detect and identify the
resource capabilities. Use the platform's serial port for the HV shell.
Check if the platform supports RDT with ``cpuid``. First, run
``cpuid 0x7 0x0``; the return value EBX [bit 15] is set to 1 if the
platform supports RDT. Next, run ``cpuid 0x10 0x0`` and check the EBX
[3-1] bits. EBX [bit 1] indicates that L3 CAT is supported. EBX [bit 2]
indicates that L2 CAT is supported. EBX [bit 3] indicates that MBA is
supported. To query the capabilities of the supported resources, use the
bit position as a subleaf index. For example, run ``cpuid 0x10 0x2`` to
query the L2 CAT capability.
.. code-block:: none
ACRN:\>cpuid 0x7 0x0
cpuid leaf: 0x7, subleaf: 0x0, 0x0:0xd39ffffb:0x00000818:0xbc000400
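As a quick sketch of how that output decodes (assuming the shell prints the
registers in EAX:EBX:ECX:EDX order; that ordering is an assumption here):

```python
# Decode the RDT support bit from the example 'cpuid 0x7 0x0' output above.
ebx = 0xD39FFFFB                       # second printed field, taken as EBX
rdt_supported = bool((ebx >> 15) & 1)  # EBX [bit 15]: RDT allocation support
print(rdt_supported)  # True
```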
L3/L2 bit encoding:
* EAX [bit 4:0] reports the length of the cache mask minus one. For
example, a value 0xa means the cache mask is 0x7ff.
* EBX [bit 31:0] reports a bit mask. Each set bit indicates the
corresponding unit of the cache allocation that can be used by other
entities in the platform (e.g. integrated graphics engine).
* ECX [bit 2] if set, indicates that cache Code and Data Prioritization
Technology is supported.
* EDX [bit 15:0] reports the maximum CLOS supported for the resource
minus one. For example, a value of 0xf means the max CLOS supported
is 0x10.
.. code-block:: none
ACRN:\>cpuid 0x10 0x0
cpuid leaf: 0x10, subleaf: 0x0, 0x0:0xa:0x0:0x0
ACRN:\>cpuid 0x10 0x1
cpuid leaf: 0x10, subleaf: 0x1, 0xa:0x600:0x4:0xf
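Decoding the subleaf 0x1 example per the bit encoding above (again assuming
EAX:EBX:ECX:EDX print order):

```python
# Fields from the example 'cpuid 0x10 0x1' output: 0xa:0x600:0x4:0xf
eax, ebx, ecx, edx = 0xA, 0x600, 0x4, 0xF

cbm_len   = (eax & 0x1F) + 1    # capacity bitmask length
full_mask = (1 << cbm_len) - 1  # widest possible cache mask
cdp       = bool((ecx >> 2) & 1)  # Code and Data Prioritization supported?
max_clos  = (edx & 0xFFFF) + 1  # number of CLOS supported

print(f"mask=0x{full_mask:x} cdp={cdp} max_clos={max_clos}")
# mask=0x7ff cdp=True max_clos=16
```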
MBA bit encoding:
* EAX [bit 11:0] reports the maximum MBA throttling value minus one. For example, a value 0x59 means the max delay value is 0x5a.
* EBX [bit 31:0] reserved.
* ECX [bit 2] reports whether the response of the delay values is linear.
* EDX [bit 15:0] reports the maximum CLOS supported for the resource minus one. For example, a value of 0x7 means the max CLOS supported is 0x8.
.. code-block:: none
ACRN:\>cpuid 0x10 0x0
cpuid leaf: 0x10, subleaf: 0x0, 0x0:0xa:0x0:0x0
ACRN:\>cpuid 0x10 0x3
cpuid leaf: 0x10, subleaf: 0x3, 0x59:0x0:0x4:0x7
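Decoding the MBA subleaf 0x3 example the same way (EAX:EBX:ECX:EDX order
assumed):

```python
# Fields from the example 'cpuid 0x10 0x3' output: 0x59:0x0:0x4:0x7
eax, ecx, edx = 0x59, 0x4, 0x7

max_delay = (eax & 0xFFF) + 1     # maximum MBA throttling value
linear    = bool((ecx >> 2) & 1)  # delay values respond linearly
max_clos  = (edx & 0xFFFF) + 1    # number of CLOS supported

print(f"max_delay={max_delay} linear={linear} max_clos={max_clos}")
# max_delay=90 linear=True max_clos=8
```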
.. note::
ACRN takes the lowest common CLOS max value among the supported resources
as the maximum supported CLOS ID. For example, if the max CLOS supported
by L3 is 16 and by MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM to 8.
Keeping capabilities consistent across all RDT resources by using a common
subset of CLOS IDs is recommended to minimize misconfiguration errors.
Tuning RDT Resources in HV Debug Shell
**************************************
This section explains how to configure the RDT resources from the HV debug
shell.
#. Check the pCPU IDs of each VM; the ``vcpu_list`` below shows that VM0 is
running on pCPU0, and VM1 is running on pCPU1:
.. code-block:: none
ACRN:\>vcpu_list
VM ID pCPU ID VCPU ID VCPU ROLE VCPU STATE
===== ======= ======= ========= ==========
0 0 0 PRIMARY Running
1 1 0 PRIMARY Running
#. Set the resource mask array MSRs for each CLOS with a ``wrmsr <reg_num> <value>``.
For example, if you want to restrict VM1 to the lower 4 ways of the LLC
and allocate the upper 7 ways of the LLC to VM0, first assign a CLOS to
each VM (e.g., VM0 is assigned CLOS0 and VM1 is assigned CLOS1). Next,
program the resource mask MSR that corresponds to CLOS0; in our example,
IA32_L3_MASK_BASE + 0 is programmed to 0x7f0. Finally, program the
resource mask MSR that corresponds to CLOS1; IA32_L3_MASK_BASE + 1 is set
to 0xf.
.. code-block:: none
ACRN:\>wrmsr -p1 0xc90 0x7f0
ACRN:\>wrmsr -p1 0xc91 0xf
#. Assign CLOS1 to pCPU1 by programming MSR IA32_PQR_ASSOC [bit 63:32]
(0xc8f) to 0x100000000, and assign CLOS0 to pCPU0 by programming MSR
IA32_PQR_ASSOC [bit 63:32] to 0x0. Note that IA32_PQR_ASSOC is a per-LP
MSR, so the CLOS must be programmed on each LP.
.. code-block:: none
ACRN:\>wrmsr -p0 0xc8f 0x000000000 (this is default and can be skipped)
ACRN:\>wrmsr -p1 0xc8f 0x100000000
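The ``wrmsr`` values used in these steps can be derived with a little bit
arithmetic; a minimal sketch, assuming the 11-way LLC enumerated earlier (the
helper name is illustrative):

```python
def way_mask(low: int, high: int) -> int:
    """Contiguous cache-way mask covering ways low..high inclusive."""
    return ((1 << (high - low + 1)) - 1) << low

clos1_mask = way_mask(0, 3)   # lower 4 LLC ways for VM1 (CLOS1)
clos0_mask = way_mask(4, 10)  # upper 7 LLC ways for VM0 (CLOS0)
pqr_clos1  = 1 << 32          # IA32_PQR_ASSOC [bit 63:32] = CLOS ID 1

print(hex(clos0_mask), hex(clos1_mask), hex(pqr_clos1))
# 0x7f0 0xf 0x100000000
```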
.. _rdt_vm_configuration:
Configure RDT for VM Using VM Configuration
*******************************************
#. The RDT hardware feature is enabled by default on supported platforms.
This information can be found using an offline tool that generates a
platform-specific XML file that helps ACRN identify RDT-supported
platforms. RDT on ACRN is enabled by configuring the ``FEATURES``
sub-section of the scenario XML file as in the example below. For
details on building ACRN with a scenario, refer to :ref:`gsg`.
.. code-block:: none
:emphasize-lines: 6
<FEATURES>
<RELOC>y</RELOC>
<SCHEDULER>SCHED_BVT</SCHEDULER>
<MULTIBOOT2>y</MULTIBOOT2>
<RDT>
<RDT_ENABLED>y</RDT_ENABLED>
<CDP_ENABLED>n</CDP_ENABLED>
<CLOS_MASK></CLOS_MASK>
<MBA_DELAY></MBA_DELAY>
</RDT>
</FEATURES>
#. Once RDT is enabled in the scenario XML file, the next step is to program
the desired cache mask or/and the MBA delay value as needed in the
scenario file. Each cache mask or MBA delay configuration corresponds
to a CLOS ID. For example, if the maximum supported CLOS ID is 4, then 4
cache mask settings need to be in place, where each setting corresponds
to a CLOS ID starting from 0. The example below sets the cache masks for
4 CLOS IDs and uses the default delay value for MBA.
.. code-block:: none
:emphasize-lines: 8,9,10,11,12
<FEATURES>
<RELOC>y</RELOC>
<SCHEDULER>SCHED_BVT</SCHEDULER>
<MULTIBOOT2>y</MULTIBOOT2>
<RDT>
<RDT_ENABLED>y</RDT_ENABLED>
<CDP_ENABLED>n</CDP_ENABLED>
<CLOS_MASK>0xff</CLOS_MASK>
<CLOS_MASK>0x3f</CLOS_MASK>
<CLOS_MASK>0xf</CLOS_MASK>
<CLOS_MASK>0x3</CLOS_MASK>
<MBA_DELAY>0</MBA_DELAY>
</RDT>
</FEATURES>
.. note::
Users can change the mask values, but the cache mask must have
**contiguous bits** or a #GP fault can be triggered. Similarly, when
programming an MBA delay value, be sure to set the value to less than or
equal to the MAX delay value.
#. Configure each CPU in VMs to a desired CLOS ID in the ``VM`` section of the
scenario file. Follow `RDT detection and resource capabilities`_
to identify the maximum supported CLOS ID that can be used. ACRN uses
**the lowest common MAX CLOS** value among all RDT resources to avoid
resource misconfigurations.
.. code-block:: none
:emphasize-lines: 5,6,7,8
<vm id="0">
<vm_type readonly="true">PRE_STD_VM</vm_type>
<name>ACRN PRE-LAUNCHED VM0</name>
<clos>
<vcpu_clos>0</vcpu_clos>
<vcpu_clos>1</vcpu_clos>
</clos>
</vm>
.. note::
In ACRN, a lower CLOS always means higher priority (CLOS 0 > CLOS 1 > CLOS 2 > ... > CLOS n),
so carefully program each VM's CLOS accordingly.
#. Careful consideration should be made when assigning vCPU affinity. In
a cache isolation configuration, in addition to isolating CAT-capable
caches, you must also isolate lower-level caches. In the following
example, logical processor #0 and #2 share L1 and L2 caches. In this
case, do not assign LP #0 and LP #2 to different VMs that need to do
cache isolation. Assign LP #1 and LP #3 with similar consideration:
.. code-block:: none
:emphasize-lines: 3
# lstopo-no-graphics -v
Package L#0 (P#0 CPUVendor=GenuineIntel CPUFamilyNumber=6 CPUModelNumber=142)
L3Cache L#0 (size=3072KB linesize=64 ways=12 Inclusive=1)
L2Cache L#0 (size=256KB linesize=64 ways=4 Inclusive=0)
L1dCache L#0 (size=32KB linesize=64 ways=8 Inclusive=0)
L1iCache L#0 (size=32KB linesize=64 ways=8 Inclusive=0)
Core L#0 (P#0)
PU L#0 (P#0)
PU L#1 (P#2)
L2Cache L#1 (size=256KB linesize=64 ways=4 Inclusive=0)
L1dCache L#1 (size=32KB linesize=64 ways=8 Inclusive=0)
L1iCache L#1 (size=32KB linesize=64 ways=8 Inclusive=0)
Core L#1 (P#1)
PU L#2 (P#1)
PU L#3 (P#3)
#. Bandwidth control is per-core (not per LP), so the max delay value of the
per-LP CLOS is applied to the whole core. If HT is turned on, don't place
high-priority threads on sibling LPs running lower-priority threads.
#. Based on our scenario, build and install ACRN. See :ref:`gsg`
for building and installing instructions.
#. Restart the platform.
For details on the ACRN vCAT high-level design, see :ref:`hv_vcat`.


@ -1,88 +0,0 @@
.. _vcat_configuration:
Enable vCAT Configuration
#########################
vCAT is built on top of RDT, so to use vCAT we must first enable RDT.
For details on enabling RDT configuration on ACRN, see :ref:`rdt_configuration`.
For details on ACRN vCAT high-level design, see :ref:`hv_vcat`.
The vCAT feature is disabled by default in ACRN. You can enable vCAT via the
ACRN Configurator UI; the steps listed below show how those settings are
translated into XML in the scenario file:
#. Configure system level features:
- Edit ``hv.FEATURES.RDT.RDT_ENABLED`` to ``y`` to enable RDT.
- Edit ``hv.FEATURES.RDT.CDP_ENABLED`` to ``n`` to disable CDP.
vCAT requires CDP to be disabled.
- Edit ``hv.FEATURES.RDT.VCAT_ENABLED`` to ``y`` to enable vCAT.
.. code-block:: xml
:emphasize-lines: 3,4,5
<FEATURES>
<RDT>
<RDT_ENABLED>y</RDT_ENABLED>
<CDP_ENABLED>n</CDP_ENABLED>
<VCAT_ENABLED>y</VCAT_ENABLED>
<CLOS_MASK></CLOS_MASK>
</RDT>
</FEATURES>
#. In each Guest VM configuration:
- Edit ``vm.virtual_cat_support`` to ``y`` to enable the vCAT feature on the VM.
- Edit ``vm.clos.vcpu_clos`` to assign COS IDs to the VM.
If ``GUEST_FLAG_VCAT_ENABLED`` is not specified for a VM (abbreviated as RDT VM):
``vcpu_clos`` is per CPU in a VM and configures each CPU in a VM to a desired COS ID,
so the number of ``vcpu_clos`` entries equals the number of vCPUs assigned.
If ``GUEST_FLAG_VCAT_ENABLED`` is specified for a VM (abbreviated as vCAT VM):
``vcpu_clos`` is no longer per CPU; instead, it specifies a list of physical COS IDs
(minimum 2) that are assigned to a vCAT VM. The number of ``vcpu_clos`` entries is not
necessarily equal to the number of vCPUs assigned; it may be greater or less than that
number. Each ``vcpu_clos`` is mapped to a virtual COS ID: the first is mapped to
virtual COS ID 0, the second to virtual COS ID 1, and so on.
.. code-block:: xml
:emphasize-lines: 3,10,11,12,13
<vm id="1">
<guest_flags>
<guest_flag>GUEST_FLAG_VCAT_ENABLED</guest_flag>
</guest_flags>
<cpu_affinity>
<pcpu_id>1</pcpu_id>
<pcpu_id>2</pcpu_id>
</cpu_affinity>
<clos>
<vcpu_clos>2</vcpu_clos>
<vcpu_clos>4</vcpu_clos>
<vcpu_clos>5</vcpu_clos>
<vcpu_clos>7</vcpu_clos>
</clos>
</vm>
.. note::
CLOS_MASK defined in the scenario file is a capacity bitmask (CBM) starting
at bit position low (the lowest assigned physical cache way) and ending at position
high (the highest assigned physical cache way, inclusive). Because a CBM only allows
contiguous '1' bits, CLOS_MASK is essentially the maximum CBM that covers
all the physical cache ways assigned to a vCAT VM.
The config tool performs validation checks to prevent invalid configuration data for vCAT VMs:

* For a vCAT VM, its vcpu_clos values cannot be set to 0; COS ID 0 is reserved for use only by the hypervisor.
* There should not be any COS ID overlap between a vCAT VM and any other VM; i.e., the vCAT VM has exclusive use of the assigned COS IDs.
* For a vCAT VM, each vcpu_clos must be less than the L2/L3 COS_MAX.
* For a vCAT VM, its vcpu_clos values cannot contain duplicates.
#. Follow instructions in :ref:`gsg` and build with this XML configuration.
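The vcpu_clos validation rules can be sketched in a few lines (the constant,
function name, and example values here are hypothetical, not taken from the
config tool's actual implementation):

```python
L3_COS_MAX = 8  # assumed platform limit from CPUID enumeration

def validate_vcat_clos(vcpu_clos, used_by_others):
    """Check a vCAT VM's physical COS ID list against the rules above."""
    return (
        0 not in vcpu_clos                         # COS 0 reserved for the HV
        and len(set(vcpu_clos)) == len(vcpu_clos)  # no duplicate COS IDs
        and all(c < L3_COS_MAX for c in vcpu_clos) # each below COS_MAX
        and not (set(vcpu_clos) & used_by_others)  # exclusive to this vCAT VM
        and len(vcpu_clos) >= 2                    # a vCAT VM needs at least 2
    )

print(validate_vcat_clos([2, 4, 5, 7], used_by_others={1}))  # True
print(validate_vcat_clos([0, 2], used_by_others=set()))      # False
```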