Doc: content edits for GSG for ACRN Ind Scenario

Signed-off-by: Deb Taylor <deb.taylor@intel.com>
Author: Deb Taylor 2019-09-28 11:06:10 -04:00, committed by deb-intel
Parent: 018fed2c21
Commit: f489312e67

Prerequisites
*************

The example below is based on the Intel Kaby Lake NUC platform with two
disks, a SATA disk for the Clear Linux-based Service VM and an NVMe disk
for the RTVM.

- Intel Kaby Lake (aka KBL) NUC platform with two disks inside
  (refer to :ref:`the tables <hardware_setup>` for detailed information).
- Clear Linux OS (Ver: 31080) installation onto both disks on the KBL NUC.

.. _installation guide:
   https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html

Hardware Setup
**************

+----------------------+-------------------+----------------------+-----------------------------------------------------------+
| Kaby Lake            | NUC7i7DNH         | Processor            | - Intel |reg| Core |trade| i7-8650U CPU @ 1.90GHz         |
|                      |                   +----------------------+-----------------------------------------------------------+
|                      |                   | Graphics             | - UHD Graphics 620                                        |
|                      |                   |                      | - Two HDMI 2.0a ports supporting 4K at 60 Hz              |
|                      |                   +----------------------+-----------------------------------------------------------+
|                      |                   | System memory        | - 8GiB SODIMM DDR4 2400 MHz                               |
|                      |                   +----------------------+-----------------------------------------------------------+
|                      |                   |                      | - NVMe: 256G Intel Corporation SSD Pro 7600p/760p/E 6100p |
+----------------------+-------------------+----------------------+-----------------------------------------------------------+

Set up the ACRN Hypervisor for the industry scenario
****************************************************

The ACRN industry scenario environment can be set up in several ways. The
two listed below are recommended:

- :ref:`Using the pre-installed industry ACRN hypervisor <use pre-installed industry efi>`
- :ref:`Using the ACRN industry out-of-the-box image <use industry ootb image>`

.. _use pre-installed industry efi:

Use the pre-installed industry ACRN hypervisor
==============================================

.. note:: Skip this section if you choose :ref:`Using the ACRN industry out-of-the-box image <use industry ootb image>`.

Follow the :ref:`ACRN quick setup guide <quick-setup-guide>` to set up the
ACRN Service VM. The industry hypervisor image is installed in the
``/usr/lib/acrn/`` directory once the Service VM boots. Follow the steps
below to use ``acrn.kbl-nuc-i7.industry.efi`` instead of the original SDC
hypervisor:

.. code-block:: none

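   # A minimal sketch, assuming the EFI System Partition is /dev/sda1 and
   # that ACRN was installed by the quick setup guide:
   $ sudo mount /dev/sda1 /mnt
   $ sudo cp /usr/lib/acrn/acrn.kbl-nuc-i7.industry.efi /mnt/EFI/acrn/acrn.efi
   $ sudo umount /mnt
   $ sudo reboot
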
.. _use industry ootb image:

Use the ACRN industry out-of-the-box image
==========================================

#. Download the
   `sos-industry-31080.img.xz <https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2019w39.1-140000p/sos-industry-31080.img.xz>`_
   to your development machine.

#. Decompress the xz image:

   .. code-block:: none

      $ xz -d sos-industry-31080.img.xz

#. Follow the instructions at :ref:`Deploy the Service VM image <deploy_ootb_service_vm>`
   to deploy the Service VM image on the SATA disk.

Install and launch the Preempt-RT VM
************************************

#. Download
   `preempt-rt-31080.img.xz <https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2019w39.1-140000p/preempt-rt-31080.img.xz>`_
   to your development machine.

#. Decompress the xz image:

   .. code-block:: none

      $ xz -d preempt-rt-31080.img.xz

#. Follow the instructions at :ref:`Deploy the User VM Preempt-RT image <deploy_ootb_rtvm>`
   to deploy the Preempt-RT VM image on the NVMe disk.

#. Upon deployment completion, launch the RTVM directly on your KBL NUC::

      $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh

.. note:: Use the ``lspci`` command to ensure that the correct NVMe device IDs
   will be used for the passthru before launching the script::

      $ sudo lspci -v | grep -iE 'nvm|ssd'
      02:00.0 Non-Volatile memory controller: Intel Corporation Device f1a6 (rev 03) (prog-if 02 [NVM Express])

      $ sudo lspci -nn | grep "Non-Volatile memory controller"
      02:00.0 Non-Volatile memory controller [0108]: Intel Corporation Device [8086:f1a6] (rev 03)

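If ``lspci`` reports a device other than the one shown above, the passthru
settings in the launch script must be updated to match. As a quick check (a
minimal sketch; the variable names inside the script may differ), search the
script for the vendor and device IDs reported by ``lspci``::

   $ grep -n "8086" /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
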
RT Performance Test
*******************

.. _cyclictest:

Cyclictest introduction
=======================

Cyclictest is one of the most frequently used tools for benchmarking RT
systems and evaluating their relative performance. It accurately and
repeatedly measures the difference between a thread's intended wake-up time
and the time at which it actually wakes up in order to provide statistics
about the system's latencies. It can measure latencies in real-time systems
that are caused by hardware, firmware, and the operating system. Cyclictest
is currently maintained by the Linux Foundation and is part of the rt-tests
test suite.

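The Preempt-RT User VM image should already include cyclictest; if it does
not, the tool can be built from the rt-tests sources (a minimal sketch,
assuming a toolchain and network access inside the VM)::

   $ git clone git://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git
   $ cd rt-tests
   $ make
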
Pre-Configurations
==================

Recommended BIOS settings
-------------------------

.. csv-table::

   "ACPI S3 Support", "Intel Advanced Menu -> ACPI Settings", "Disabled"
   "Native ASPM", "Intel Advanced Menu -> ACPI Settings", "Disabled"

.. note:: The BIOS settings depend on the platform and BIOS version; some may not be applicable.

Configure CAT
-------------

With the ACRN Hypervisor shell, we can use the ``cpuid`` and ``wrmsr``/``rdmsr``
debug commands to enumerate the CAT capability and set the CAT configuration
without rebuilding binaries. Because the ``lapic`` is passed through to the
RTVM, the CAT configuration must be set before launching the RTVM.

Check CAT capability with cpuid
```````````````````````````````

First, run ``cpuid 0x10 0x0``; the return value ``ebx[bit 2]`` reports that L2
CAT is supported. Next, run ``cpuid 0x10 0x2`` to query the L2 CAT capability;
the return value ``eax[bit 4:0]`` reports that the cache mask has 8 bits, and
``edx[bit 15:0]`` reports that 4 CLOS are supported, as shown below. The
reported data is in the format of ``[ eax:ebx:ecx:edx ]``::

   ACRN:\>cpuid 0x10 0x0

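The SDM encodes both fields as "value minus one," so the raw register values
decode as follows (a sketch with assumed example values, not output captured
from this platform):

.. code-block:: bash

   # Decode the cpuid 0x10 0x2 results (example values assumed for illustration).
   eax=0x7    # eax[bit 4:0] = 7  -> capacity bit mask is (7 + 1) = 8 bits
   edx=0x3    # edx[bit 15:0] = 3 -> (3 + 1) = 4 CLOS are supported
   echo "cache mask bits: $(( (eax & 0x1f) + 1 ))"
   echo "supported CLOS:  $(( (edx & 0xffff) + 1 ))"
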
Set CLOS (QOS MASK) and PQR_ASSOC MSRs to configure the CAT
```````````````````````````````````````````````````````````

Apollo Lake doesn't have an L3 cache, but it supports L2 CAT. The CLOS MSRs
are per L2 cache and start from 0x00000D10. With 4 CLOS MSRs, the addresses
are as follows::

   MSR_IA32_L2_QOS_MASK_0    0x00000D10
   MSR_IA32_L2_QOS_MASK_1    0x00000D11
   MSR_IA32_L2_QOS_MASK_2    0x00000D12
   MSR_IA32_L2_QOS_MASK_3    0x00000D13

The PQR_ASSOC MSR is per CPU core; each core has its own PQR_ASSOC::

   MSR_IA32_PQR_ASSOC    0x00000C8F

To set the CAT, first set the CLOS MSRs. Next, set the PQR_ASSOC of each CPU
so that the RTVM's CPU uses dedicated cache ways while the other CPUs use the
remaining cache ways. Taking a quad-core Apollo Lake platform as an example,
CPU0 and CPU1 share one L2 cache while CPU2 and CPU3 share the other L2 cache.

- If we allocate CPU2 and CPU3, no extra action is required.
- If we allocate only CPU1 to the RTVM, we need to set the CAT as follows.
  These commands actually set the CAT configuration for the L2 cache shared
  by CPU0 and CPU1.

a. Set CLOS with ``wrmsr <reg_num> <value>``. We want VM1 to use the lower
   4 ways of the cache, so set CLOS0 to 0xf0 (the upper 4 ways) and CLOS1
   to 0x0f (the lower 4 ways)::

      ACRN:\>wrmsr -p1 0xd10 0xf0
      ACRN:\>wrmsr -p1 0xd11 0x0f

#. Attach COS1 to PCPU1. Because the COS ID is carried in MSR IA32_PQR_ASSOC
   [bit 63:32], we'll write 0x100000000 to it to use CLOS1::

      ACRN:\>wrmsr -p0 0xc8f 0x000000000
      ACRN:\>wrmsr -p1 0xc8f 0x100000000

In addition to setting the CAT configuration via HV commands, we allow
developers to add the CAT configurations to the VM config and apply them
automatically at the time of RTVM creation. Refer to :ref:`configure_cat_vm`
for details.

Set up the core allocation for the RTVM
---------------------------------------

In our recommended configuration, two cores are allocated to the RTVM:
core 0 for housekeeping and core 1 for RT tasks. In order to achieve
this, follow the steps below to allocate all housekeeping tasks to core 0:

.. code-block:: bash

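   # A sketch of a housekeeping script, assuming a Preempt-RT guest with
   # core 1 reserved for RT tasks; run it inside the RTVM as root.

   # Pin all IRQs to core 0 (CPU bitmask 0x1). Some IRQs cannot be moved
   # and will report an error, which is harmless.
   for i in $(grep '^ *[0-9]*:' /proc/interrupts | awk '{print $1}' | sed 's/:$//'); do
       echo "setting IRQ $i affinity to core 0"
       echo 1 > /proc/irq/$i/smp_affinity
   done

   # Pin the RCU housekeeping kernel threads to core 0.
   for i in $(pgrep rcu); do
       taskset -pc 0 $i
   done

   # Keep timers on the core that armed them instead of migrating them.
   echo 0 > /proc/sys/kernel/timer_migration
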
Run cyclictest
==============

Use the following command to start cyclictest::

   $ cyclictest -a 1 -p 80 -m -N -D 1h -q -H 30000 --histfile=test.log

- Usage:

  :-a 1: to bind the RT task to core 1
  :-p 80: to set the priority of the highest-prio thread
  :-m: to lock current and future memory allocations
  :-N: to print results in ns instead of us (default us)
  :-D 1h: to run for 1 hour; change this to another value as needed
  :-q: quiet mode; print a summary only on exit
  :-H 30000 --histfile=test.log: dump the latency histogram to a local file

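When the run completes, the summary that cyclictest appends to the histogram
file gives a quick view of the results (a minimal sketch; the exact labels
may vary across rt-tests versions)::

   $ grep -E "^# (Min|Avg|Max) Latencies" test.log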