mirror of
https://github.com/projectacrn/acrn-hypervisor.git
synced 2025-06-21 21:19:35 +00:00
Doc: content edits for GSG for ACRN Ind Scenario
Signed-off-by: Deb Taylor <deb.taylor@intel.com>
This commit is contained in:
parent
018fed2c21
commit
f489312e67
@@ -13,12 +13,14 @@ Verified version

Prerequisites
*************

The example below is based on the Intel Kaby Lake NUC platform with two
disks, a SATA disk for the Clear Linux-based Service VM and an NVMe disk
for the RTVM.

- Intel Kaby Lake (aka KBL) NUC platform with two disks inside
  (refer to :ref:`the tables <hardware_setup>` for detailed information).
- Clear Linux OS (Ver: 31080) installation onto both disks on the KBL NUC.

.. _installation guide:
   https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html

@@ -40,7 +42,7 @@ Hardware Setup
| Kaby Lake            | NUC7i7DNH         | Processor            | - Intel |reg| Core |trade| i7-8650U CPU @ 1.90GHz        |
|                      |                   +----------------------+-----------------------------------------------------------+
|                      |                   | Graphics             | - UHD Graphics 620                                        |
|                      |                   |                      | - Two HDMI 2.0a ports supporting 4K at 60 Hz              |
|                      |                   +----------------------+-----------------------------------------------------------+
|                      |                   | System memory        | - 8GiB SODIMM DDR4 2400 MHz                               |
|                      |                   +----------------------+-----------------------------------------------------------+
@@ -48,23 +50,25 @@ Hardware Setup
|                      |                   |                      | - NVMe: 256G Intel Corporation SSD Pro 7600p/760p/E 6100p |
+----------------------+-------------------+----------------------+-----------------------------------------------------------+

Set up the ACRN Hypervisor for industry scenario
************************************************

The ACRN industry scenario environment can be set up in several ways. The
two listed below are recommended:

- :ref:`Using the pre-installed industry ACRN hypervisor <use pre-installed industry efi>`
- :ref:`Using the ACRN industry out-of-the-box image <use industry ootb image>`

.. _use pre-installed industry efi:

Use the pre-installed industry ACRN hypervisor
==============================================

.. note:: Skip this section if you choose :ref:`Using the ACRN industry out-of-the-box image <use industry ootb image>`.

Follow :ref:`ACRN quick setup guide <quick-setup-guide>` to set up the
ACRN Service VM. The industry hypervisor image is installed in the ``/usr/lib/acrn/``
directory once the Service VM boots. Follow the steps below to use
``acrn.kbl-nuc-i7.industry.efi`` instead of the original SDC hypervisor:

.. code-block:: none
@@ -77,58 +81,62 @@ directory once the Service VM boots. You may simply follow these steps to use

.. _use industry ootb image:

Use the ACRN industry out-of-the-box image
==========================================

#. Download the
   `sos-industry-31080.img.xz <https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2019w39.1-140000p/sos-industry-31080.img.xz>`_
   to your development machine.

#. Decompress the xz image:

   .. code-block:: none

      $ xz -d sos-industry-31080.img.xz

#. Follow the instructions at :ref:`Deploy the Service VM image <deploy_ootb_service_vm>`
   to deploy the Service VM image on the SATA disk.
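
If you want to sanity-check the compressed image before deploying it, the standard ``xz`` flags cover it. Below is a self-contained sketch on a scratch file; substitute ``sos-industry-31080.img.xz`` in the real workflow:

```shell
# Round trip on a scratch file to illustrate the xz flags; the real image
# name (sos-industry-31080.img.xz) is substituted in practice.
printf 'example payload' > sample.img
xz -k sample.img        # compress, keeping the original (-k)
xz -t sample.img.xz     # exit status 0 means the archive is intact
rm sample.img
xz -d sample.img.xz     # decompress (the step used in this guide)
cat sample.img          # prints the original payload
```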

Install and launch the Preempt-RT VM
************************************

#. Download
   `preempt-rt-31080.img.xz <https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2019w39.1-140000p/preempt-rt-31080.img.xz>`_
   to your development machine.

#. Decompress the xz image:

   .. code-block:: none

      $ xz -d preempt-rt-31080.img.xz

#. Follow the instructions at :ref:`Deploy the User VM Preempt-RT image <deploy_ootb_rtvm>`
   to deploy the Preempt-RT VM image on the NVMe disk.

#. Upon deployment completion, launch the RTVM directly on your KBL NUC::

      $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh

.. note:: Use the ``lspci`` command to ensure that the correct NVMe device IDs will be used for the passthru before launching the script::

      $ sudo lspci -v | grep -iE 'nvm|ssd'
      02:00.0 Non-Volatile memory controller: Intel Corporation Device f1a6 (rev 03) (prog-if 02 [NVM Express])
      $ sudo lspci -nn | grep "Non-Volatile memory controller"
      02:00.0 Non-Volatile memory controller [0108]: Intel Corporation Device [8086:f1a6] (rev 03)
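
To capture the IDs from a script rather than by eye, a small ``sed`` filter works. The sample line below is the ``lspci -nn`` output shown above, standing in for live output:

```shell
# Extract the [vendor:device] pair from an `lspci -nn` line. The sample
# line mirrors the output above; in practice, pipe lspci into the filter.
line='02:00.0 Non-Volatile memory controller [0108]: Intel Corporation Device [8086:f1a6] (rev 03)'
id=$(printf '%s\n' "$line" | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p')
echo "$id"   # → 8086:f1a6
```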

RT Performance Test
*******************

.. _cyclictest:
Cyclictest introduction
=======================

Cyclictest is most commonly used for benchmarking RT systems. It is one of the
most frequently used tools for evaluating the relative performance of real-time
systems. Cyclictest accurately and repeatedly measures the difference between a
thread's intended wake-up time and the time at which it actually wakes up in order
to provide statistics about the system's latencies. It can measure latencies in
real-time systems that are caused by hardware, firmware, and the operating system.
Cyclictest is currently maintained by the Linux Foundation and is part of the test
suite rt-tests.

Pre-Configurations
@@ -161,20 +169,22 @@ Recommended BIOS settings

"ACPI S3 Support", "Intel Advanced Menu -> ACPI Settings", "Disabled"
"Native ASPM", "Intel Advanced Menu -> ACPI Settings", "Disabled"

.. note:: The BIOS settings depend on the platform and BIOS version; some may not be applicable.

Configure CAT
-------------

With the ACRN Hypervisor shell, we can use the ``cpuid`` and ``wrmsr``/``rdmsr`` debug
commands to enumerate the CAT capability and set the CAT configuration without rebuilding binaries.
Because the ``lapic`` is passed through to the RTVM, the CAT configuration must be
set before launching the RTVM.

Check CAT capability with cpuid
```````````````````````````````

First run ``cpuid 0x10 0x0``. The return value of ``ebx[bit 2]`` reports that L2 CAT is supported.
Next, run ``cpuid 0x10 0x2`` to query the L2 CAT capability; the return value of ``eax[bit 4:0]``
reports that the cache mask has 8 bits, and ``edx[bit 15:0]`` reports that 4 CLOS are supported,
as shown below. The reported data is in the format of ``[ eax:ebx:ecx:edx ]``::

   ACRN:\>cpuid 0x10 0x0
@@ -185,25 +195,24 @@ as shown below. The reported data is in the format of ``[ eax:ebx:ecx:edx ]``::

Set CLOS (QOS MASK) and PQR_ASSOC MSRs to configure the CAT
```````````````````````````````````````````````````````````

Apollo Lake doesn't have L3 cache and it supports L2 CAT. The CLOS MSRs are
per L2 cache and start from 0x00000D10. In the case of 4 CLOS MSRs, the
addresses are as follows::

   MSR_IA32_L2_QOS_MASK_0 0x00000D10
   MSR_IA32_L2_QOS_MASK_1 0x00000D11
   MSR_IA32_L2_QOS_MASK_2 0x00000D12
   MSR_IA32_L2_QOS_MASK_3 0x00000D13

The PQR_ASSOC MSR is per CPU core; each core has its own PQR_ASSOC::

   MSR_IA32_PQR_ASSOC 0x00000C8F

To set the CAT, first set the CLOS MSRs. Next, set the PQR_ASSOC of each CPU
so that the CPU of the RTVM uses dedicated cache ways while the other CPUs use
the other cache ways.
Taking a quad-core Apollo Lake platform as an example, CPU0 and CPU1 share one
L2 cache while CPU2 and CPU3 share the other L2 cache.

- If we allocate CPU2 and CPU3, no extra action is required.
- If we allocate only CPU1 to the RTVM, we need to set the CAT as follows.
  These commands actually set the CAT configuration for the L2 cache shared by CPU0 and CPU1.

a. Set the CLOS with ``wrmsr <reg_num> <value>``; we want VM1 to use the lower 6 ways of the cache,
@@ -212,21 +221,22 @@ a. Set CLOS with ``wrmsr <reg_num> <value>``, we want VM1 to use the lower 6 way
   ACRN:\>wrmsr -p1 0xd10 0xf0
   ACRN:\>wrmsr -p1 0xd11 0x0f

#. Attach COS1 to PCPU1. Because the CLOS field of MSR IA32_PQR_ASSOC is [bit 63:32], we'll write
   0x100000000 to it to use CLOS1::

   ACRN:\>wrmsr -p0 0xc8f 0x000000000
   ACRN:\>wrmsr -p1 0xc8f 0x100000000
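
The values written in the commands above are plain bit arithmetic. The sketch below (generic shell arithmetic, not an ACRN command) mirrors the 0xf0/0x0f masks and the PQR_ASSOC value:

```shell
# Two CLOS masks splitting an 8-way cache in half: CLOS0 gets the upper
# 4 ways (0xf0), CLOS1 the lower 4 ways (0x0f).
ways=4
low_mask=$(( (1 << ways) - 1 ))      # 0x0f
high_mask=$(( low_mask << ways ))    # 0xf0
printf 'CLOS0 mask: 0x%02x\n' "$high_mask"
printf 'CLOS1 mask: 0x%02x\n' "$low_mask"

# PQR_ASSOC: the CLOS number sits in bits [63:32], so selecting CLOS1
# means writing 1 << 32 = 0x100000000.
clos=1
printf 'PQR_ASSOC: 0x%x\n' $(( clos << 32 ))
```

Adjust the way split to your own allocation; this only reproduces the example values.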

In addition to setting the CAT configuration via HV commands, we allow developers to add
the CAT configurations to the VM config and apply the configuration automatically at the
time of RTVM creation. Refer to :ref:`configure_cat_vm` for details.

Set up the core allocation for the RTVM
---------------------------------------

In our recommended configuration, two cores are allocated to the RTVM:
core 0 for housekeeping and core 1 for RT tasks. In order to achieve
this, follow the steps below to allocate all housekeeping tasks to core 0:

.. code-block:: bash
@@ -255,15 +265,16 @@ this, follow below steps to allocate all housekeeping tasks to core 0:

Run cyclictest
==============

Use the following command to start cyclictest::

   $ cyclictest -a 1 -p 80 -m -N -D 1h -q -H 30000 --histfile=test.log

- Usage:

  :-a 1: to bind the RT task to core 1
  :-p 80: to set the priority of the highest prio thread
  :-N: print results in ns instead of us (default us)
  :-D 1h: to run for 1 hour; you can change it to other values
  :-q: quiet mode; print a summary only on exit
  :-H 30000 --histfile=test.log: dump the latency histogram to a local file
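
After a run, the histogram file can be post-processed with standard tools. A sketch, assuming the common cyclictest histfile layout of ``latency count`` pairs with ``#``-prefixed comment lines (the sample histogram below is fabricated for illustration):

```shell
# Find the largest latency bucket that actually received samples.
printf '# Histogram\n000010 41820\n000020 177\n000030 0\n' > test.log
max=$(awk '!/^#/ && $2 > 0 { m = $1 + 0 } END { print m }' test.log)
echo "worst observed latency bucket: ${max}"
```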