mirror of https://github.com/projectacrn/acrn-hypervisor.git
synced 2025-06-25 06:51:49 +00:00

doc: clean up utf8 characters

Stray non-ASCII characters can creep in when pasting from Word or Google
Docs, particularly for "smart" single and double quotes and non-breaking
spaces. Change these to their ASCII equivalents. Also fixed some very
long lines of text to wrap at 80-ish characters.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>

This commit is contained in:
parent 138c3aeadd
commit f5f16f4e64
@ -2916,12 +2916,12 @@ Compliant example::

Non-compliant example::

   /*
    * The example here uses the char ~ to stand for the space at the end
    * of the line in order to highlight the non-compliant part.
    */
   uint32_t a;~~~~
   uint32_t b;~~~~
   uint32_t c;~~~~

C-CS-06: A single space shall exist between non-function-like keywords and opening brackets
@ -32,12 +32,12 @@ Usage:

***-s <slot>,ahci,<type:><filepath>***

Type: 'hd' and 'cd' are available.

Filepath: the path for the backend file, could be a partition or a
regular file.

For example,

SOS: -s 20,ahci,hd:/dev/mmcblk0p1
@ -3,7 +3,12 @@

Virtio-gpio
###########

virtio-gpio provides a virtual GPIO controller that maps part of the
native GPIOs to the UOS. The UOS can perform GPIO operations through it,
including set/get value, set/get direction, and set configuration (only
Open Source and Open Drain types are currently supported). GPIOs are
quite often used as IRQs, typically for wakeup events; virtio-gpio
supports level and edge interrupt trigger modes.

The virtio-gpio architecture is shown below.

@ -13,11 +18,20 @@ The virtio-gpio architecture is shown below

Virtio-gpio Architecture

Virtio-gpio is implemented as a virtio legacy device in the ACRN device
model (DM), and is registered as a PCI virtio device to the guest OS. No
changes are required in the frontend Linux virtio-gpio except that the
guest (UOS) kernel should be built with ``CONFIG_VIRTIO_GPIO=y``.

There are three virtqueues used between FE and BE: one for GPIO
operations, one for IRQ requests, and one for IRQ event notifications.

The virtio-gpio FE driver registers a gpiochip and an irqchip when it is
probed; the base and number of GPIOs are generated by the BE. Each
gpiochip or irqchip operation (e.g. get_direction of gpiochip or
irq_set_type of irqchip) triggers a virtqueue_kick on its own virtqueue.
If a GPIO has been set to interrupt mode, the interrupt events are
handled within the IRQ virtqueue callback.

GPIO mapping
************
@ -28,9 +42,10 @@ GPIO mapping

GPIO mapping

- Each UOS has only one GPIO chip instance; its number of GPIOs is based
  on the acrn-dm command line, and the GPIO base always starts from 0.

- Each GPIO is exclusive; UOSes can't map the same native GPIO.

- Each acrn-dm supports a maximum of 64 GPIOs.

@ -39,22 +54,36 @@ Usage

Add the following parameters to the command line::

   -s <slot>,virtio-gpio,<@controller_name{offset|name[=mapping_name]:offset|name[=mapping_name]:...}@controller_name{...}...]>

- **controller_name**: Input ``ls /sys/bus/gpio/devices`` to check the
  native GPIO controller information. Usually, the devices represent the
  controller_name, and you can use it as the controller_name directly.
  You can also input ``cat /sys/bus/gpio/device/XXX/dev`` to get the
  device id that can be used to match /dev/XXX, then use XXX as the
  controller_name. On MRB and NUC platforms, the controller_names are
  gpiochip0, gpiochip1, gpiochip2, and gpiochip3.

- **offset|name**: You can use the GPIO offset or its name to locate one
  native GPIO within the GPIO controller.

- **mapping_name**: This is optional; if you want to use a customized
  name for a FE GPIO, you can set a new name for a FE virtual GPIO.

Example
*******

- Map three native GPIOs to the UOS: native gpiochip0 with offsets 1
  and 6, and with the name ``reset``. In the UOS, the three GPIOs have
  no names, and the base starts from 0::

     -s 10,virtio-gpio,@gpiochip0{1:6:reset}

- Map four native GPIOs to the UOS: native gpiochip0's GPIOs with
  offset 1 and offset 6 map to FE virtual GPIOs with offset 0 and
  offset 1 without names; native gpiochip0's GPIO with name ``reset``
  maps to a FE virtual GPIO with offset 2 and the name ``shutdown``;
  native gpiochip1's GPIO with offset 0 maps to a FE virtual GPIO with
  offset 3 and the name ``reset``::

     -s 10,virtio-gpio,@gpiochip0{1:6:reset=shutdown}@gpiochip1{0=reset}
@ -3,7 +3,11 @@

Virtio-i2c
##########

Virtio-i2c provides a virtual I2C adapter that supports mapping multiple
slave devices under multiple native I2C adapters to one virtio I2C
adapter. The address for the slave device is not changed. Virtio-i2c
also provides an interface to add an acpi node for slave devices so that
the slave device driver in the guest OS does not need to change.

:numref:`virtio-i2c-1` below shows the virtio-i2c architecture.

@ -13,17 +17,27 @@ Virtio-i2c provides a virtual I2C adapter that supports mapping multiple slave d

Virtio-i2c Architecture

Virtio-i2c is implemented as a virtio legacy device in the ACRN device
model (DM) and is registered as a PCI virtio device to the guest OS. The
Device ID of virtio-i2c is 0x860A and the Sub Device ID is 0xFFF6.

Virtio-i2c uses one **virtqueue** to transfer the I2C msg that is
received from the I2C core layer. Each I2C msg is translated into three
parts:

- Header: includes addr, flags, and len.
- Data buffer: includes the pointer to msg data.
- Status: includes the process results at the backend.

In the backend kick handler, data is obtained from the virtqueue, which
reformats the data to a standard I2C message and then sends it to a
message queue that is maintained in the backend. A worker thread is
created during the initiate phase; it receives the I2C message from the
queue and then calls the I2C APIs to send it to the native I2C adapter.

When the request is done, the backend driver updates the results and
notifies the frontend. The msg process flow is shown in
:numref:`virtio-process-flow` below.

.. figure:: images/virtio-i2c-1a.png
   :align: center

@ -35,23 +49,29 @@ When the request is done, the backend driver updates the results and notifies th

   -s <slot>,virtio-i2c,<bus>[:<slave_addr>[@<node>]][:<slave_addr>[@<node>]][,<bus>[:<slave_addr>[@<node>]][:<slave_addr>][@<node>]]

bus:
   The bus number for the native I2C adapter; ``2`` means ``/dev/i2c-2``.

slave_addr:
   The address for the native slave devices such as ``1C``, ``2F``...

@:
   The prefix for the acpi node.

node:
   The acpi node name supported in the current code. You can find the
   supported name in the ``acpi_node_table[]`` from the source code. Currently,
   only ``cam1``, ``cam2``, and ``hdac`` are supported for MRB. These nodes are
   platform-specific.

**Example:**

   -s 19,virtio-i2c,0:70@cam1:2F,4:1C

This adds slave devices 0x70 and 0x2F under the native adapter
/dev/i2c-0, and 0x1C under /dev/i2c-4 to the virtio-i2c adapter. Since
0x70 includes '@cam1', acpi info is also added to it. Since 0x2F and
0x1C have no '@<node>', no acpi info is added to them.

**Simple use case:**

@ -111,8 +131,5 @@ You can dump the i2c device if it is supported:

   e0: 00 ff 06 00 03 fa 00 ff ff ff ff ff ff ff ff ff ..?.??..........
   f0: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................

Note that the virtual I2C bus number has no relationship with the native
I2C bus number; it is auto-generated by the guest OS.
@ -221,7 +221,7 @@ Use the ACRN industry out-of-the-box image

It ensures that the end of the string is properly detected.

#. Reboot the test machine. After the Clear Linux OS boots,
   log in as ``root`` for the first time.

.. _install_rtvm:

@ -287,7 +287,15 @@ RT Performance Test

Cyclictest introduction
=======================

Cyclictest is one of the most frequently used tools for benchmarking RT
systems and for evaluating their relative performance. It accurately and
repeatedly measures the difference between a thread's intended wake-up
time and the time at which it actually wakes up, in order to provide
statistics about the system's latencies. It can measure latencies in
real-time systems that are caused by hardware, firmware, and the
operating system. Cyclictest is currently maintained by the Linux
Foundation and is part of the rt-tests test suite.

Pre-Configurations
==================
@ -555,5 +563,3 @@ Passthrough a hard disk to the RTVM

.. code-block:: none

   # /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
@ -6,7 +6,7 @@ What is ACRN

Introduction to Project ACRN
****************************

ACRN |trade| is a flexible, lightweight reference hypervisor, built with
real-time and safety-criticality in mind, and optimized to streamline
embedded development through an open source platform. ACRN defines a
device hypervisor reference stack and an architecture for running
@ -59,7 +59,7 @@ actions when system critical failures occur.

Shown on the right of :numref:`V2-hl-arch`, the remaining hardware
resources are shared among the service VM and user VMs. The service VM
is similar to Xen's Dom0, and a user VM is similar to Xen's DomU. The
service VM is the first VM launched by ACRN, if there is no pre-launched
VM. The service VM can access hardware resources directly by running
native drivers and it provides device sharing services to the user VMs
@ -117,7 +117,7 @@ information about the vehicle, such as:

fuel or tire pressure;
- showing rear-view and surround-view cameras for parking assistance.

An **In-Vehicle Infotainment (IVI)** system's capabilities can include:

- navigation systems, radios, and other entertainment systems;
- connection to mobile devices for phone calls, music, and applications
@ -197,7 +197,7 @@ real-time OS needs, such as VxWorks* or RT-Linux*.

ACRN Industrial Usage Architecture Overview

:numref:`V2-industry-usage-arch` shows ACRN's block diagram for an
Industrial usage scenario:

- ACRN boots from the SoC platform, and supports firmware such as the
@ -12,7 +12,7 @@ Minimum System Requirements for Installing ACRN

+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
| Hardware               | Minimum Requirements              | Recommended                                                                     |
+========================+===================================+=================================================================================+
| Processor              | Compatible x86 64-bit processor   | 2 core with Intel Hyper Threading Technology enabled in the BIOS or more cores  |
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
| System memory          | 4GB RAM                           | 8GB or more (< 32G)                                                             |
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
@ -109,7 +109,7 @@ Verified Hardware Specifications Detail

| | | System memory | - Two DDR3L SO-DIMM sockets |
| | | | (up to 8 GB, 1866 MHz), 1.35V |
| | +------------------------+-----------------------------------------------------------+
| | | Storage capabilities | - SDXC slot with UHS-I support on the side |
| | | | - One SATA3 port for connection to 2.5" HDD or SSD |
| | | | (up to 9.5 mm thickness) |
| | +------------------------+-----------------------------------------------------------+
|
@ -94,7 +94,7 @@ Fixed Issues Details
|
||||
- :acrn-issue:`2857` - FAQs for ACRN's memory usage need to be updated
|
||||
- :acrn-issue:`2971` - PCIE ECFG support for AcrnGT
|
||||
- :acrn-issue:`2976` - [GVT]don't register memory for gvt in acrn-dm
|
||||
- :acrn-issue:`2984` - HV will crash if launch two UOS with same UUID
|
||||
- :acrn-issue:`2984` - HV will crash if launch two UOS with same UUID
|
||||
- :acrn-issue:`2991` - Failed to boot normal vm on the pcpu which ever run lapic_pt vm
|
||||
- :acrn-issue:`3009` - When running new workload on weston, the last workload animation not disappeared and screen flashed badly.
|
||||
- :acrn-issue:`3028` - virtio gpio line fd not release
|
||||
@ -129,14 +129,14 @@ Known Issues

After booting UOS with multiple USB devices plugged in,
there's a 60% chance that one or more devices are not discovered.

**Impact:** Cannot use multiple USB devices at same time.

**Workaround:** Unplug and plug-in the unrecognized device after booting.

-----

:acrn-issue:`1991` - Input not accepted in UART Console for corner case
Input is useless in UART Console for a corner case, demonstrated with these steps:

1) Boot to SOS
2) ssh into the SOS.
@ -144,18 +144,18 @@ Known Issues

4) On the host, use ``minicom -D /dev/ttyUSB0``.
5) Use ``sos_console 0`` to launch SOS.

**Impact:** Fails to use UART for input.

**Workaround:** Enter other keys before typing :kbd:`Enter`.

-----

:acrn-issue:`2267` - [APLUP2][LaaG] LaaG can't detect 4k monitor
After launching UOS on APL UP2, a 4k monitor cannot be detected.

**Impact:** UOS can't display on a 4k monitor.

**Workaround:** Use a monitor with less than 4k resolution.

-----

@ -173,18 +173,18 @@ Known Issues

4) Exit UOS.
5) SOS tries to access USB keyboard and mouse, and fails.

**Impact:** SOS cannot use USB keyboard and mouse in such case.

**Workaround:** Unplug and plug-in the USB keyboard and mouse after exiting UOS.

-----

:acrn-issue:`2753` - UOS cannot resume after suspend by pressing power key
UOS cannot resume after suspend by pressing power key

**Impact:** UOS may fail to resume after suspend by pressing the power key.

**Workaround:** None

-----

@ -203,7 +203,7 @@ Known Issues

**Impact:** Launching Zephyr RTOS as a real-time UOS takes too long

**Workaround:** A different version of Grub is known to work correctly

-----

@ -239,11 +239,11 @@ Known Issues

:acrn-issue:`3279` - AcrnGT causes display flicker in some situations.
In the current scaler ownership assignment logic, there's an issue that when the SOS disables a plane,
it will disable the corresponding plane scalers; however, there's no scaler ownership checking there,
so the scalers owned by the UOS may be disabled by the SOS by accident.

**Impact:** AcrnGT causes display flicker in some situations

**Workaround:** None

-----

@ -82,7 +82,7 @@ Fixed Issues Details

- :acrn-issue:`3281` - AcrnGT emulation thread causes high cpu usage when shadowing ppgtt
- :acrn-issue:`3283` - New scenario-based configurations lack documentation
- :acrn-issue:`3341` - Documentation on how to run Windows as a Guest (WaaG)
- :acrn-issue:`3370` - vm_console 2 cannot switch to VM2's console in hybrid mode
- :acrn-issue:`3374` - Potential interrupt info overwrite in acrn_handle_pending_request
- :acrn-issue:`3379` - DM: Increase hugetlbfs MAX_PATH_LEN from 128 to 256
- :acrn-issue:`3392` - During run UnigenHeaven 3D gfx benchmark in WaaG, RTVM latency is much long
@ -102,22 +102,22 @@ Known Issues

with vpci bar emulation, vpci needs to reinit the physical bar base address to a
valid address if a device reset is detected.

**Impact:** Fail to launch Clear Linux Preempt_RT VM with ``reset`` passthru parameter

**Workaround:** Issue resolved on ACRN tag: ``acrn-2019w33.1-140000p``

-----

:acrn-issue:`3520` - bundle of "VGPU unconformance guest" messages observed for "gvt" in SOS console while using UOS
After the need_force_wake is not removed in course of submitting VGPU workload,
it will print a bundle of below messages while the User VM is started.

| gvt: vgpu1 unconformance guest detected
| gvt: vgpu1 unconformance mmio 0x2098:0xffffffff,0x0

**Impact:** Messy and repetitive output from the monitor

**Workaround:** Need to rebuild and apply the latest Service VM kernel from the ``acrn-kernel`` source code.

-----

@ -131,35 +131,35 @@ Known Issues

#) Reboot RTVM and then will restart the whole system
#) After Service VM boot up, return to step 3

**Impact:** Cold boot operation is not stable for NUC platform

**Workaround:** Need to rebuild and apply the latest Service VM kernel from the ``acrn-kernel`` source code.

-----

:acrn-issue:`3576` - Expand default memory from 2G to 4G for WaaG

**Impact:** More memory size is required from Windows VM

**Workaround:** Issue resolved on ACRN tag: ``acrn-2019w33.1-140000p``

-----

:acrn-issue:`3609` - Sometimes fail to boot os while repeating the cold boot operation

**Workaround:** Please refer to the PR information in this git issue

-----

:acrn-issue:`3610` - LaaG hang while run some workloads loop with zephyr idle

**Workaround:** Revert commit ``bbb891728d82834ec450f6a61792f715f4ec3013`` from the kernel

-----

:acrn-issue:`3611` - OVMF launch UOS fail for Hybrid and industry scenario

**Workaround:** Please refer to the PR information in this git issue

-----

@ -152,7 +152,7 @@ Fixed Issues Details

- :acrn-issue:`3853` - [acrn-configuration-tool] Generated Launch script is incorrect when select audio&audio_codec for nuc7i7dnb with Scenario:SDC
- :acrn-issue:`3859` - VM-Manager: the return value of "strtol" is not validated properly
- :acrn-issue:`3863` - [acrn-configuration-tool]WebUI do not select audio&wifi devices by default for apl-mrb with LaunchSetting: sdc_launch_1uos_aaag
- :acrn-issue:`3879` - [acrn-configuration-tool]The "-k" parameter is unnecessary in launch_uos_id2.sh for RTVM.
- :acrn-issue:`3880` - [acrn-configuration-tool]"--windows \" missing in launch_uos_id1.sh for waag.
- :acrn-issue:`3900` - [WHL][acrn-configuration-tool]Same bdf in generated whl-ipc-i5.xml.
- :acrn-issue:`3913` - [acrn-configuration-tool]WebUI do not give any prompt when generate launch_script for a new imported board
@ -42,7 +42,7 @@ Many new `reference documents <https://projectacrn.github.io>`_ are available, i

* :ref:`run-kata-containers`
* :ref:`hardware` (Addition of Whiskey Lake information)
* :ref:`cpu_sharing`
* :ref:`using_windows_as_uos` (Update to use ACRNGT GOP to install Windows)

Fixed Issues Details
********************
@ -90,13 +90,13 @@ Fixed Issues Details

- :acrn-issue:`4135` - [Community][External]Invalid guest vCPUs (0) Ubuntu as SOS.
- :acrn-issue:`4139` - [Community][External]mngr_client_new: Failed to accept from fd 38
- :acrn-issue:`4143` - [acrn-configuration-tool] bus of DRHD scope devices is parsed incorrectly
- :acrn-issue:`4163` - [acrn-configuration-tool] not support: -s n,virtio-input
- :acrn-issue:`4164` - [acrn-configuration-tool] not support: -s n,xhci,1-1:1-2:2-1:2-2
- :acrn-issue:`4165` - [WHL][acrn-configuration-tool]Configure epc_section is incorrect
- :acrn-issue:`4172` - [acrn-configuration-tool] not support: -s n,virtio-blk, (/root/part.img---dd if=/dev/zero of=/root/part.img bs=1M count=10 all/part of img, one u-disk device, u-disk as rootfs and the n is special)
- :acrn-issue:`4173` - [acrn-configuration-tool]acrn-config tool not support parse default pci mmcfg base
- :acrn-issue:`4175` - acrntrace fixes and improvement
- :acrn-issue:`4185` - [acrn-configuration-tool] not support: -s n,virtio-net, (not set,error net, set 1 net, set multi-net, vhost net)
- :acrn-issue:`4211` - [kbl nuc] acrn failed to boot when generate hypervisor config source from config app with HT enabled in BIOS
- :acrn-issue:`4212` - [KBL][acrn-configuration-tool][WaaG+RTVM]Need support pm_channel&pm_by_vuart setting for Board:nuc7i7dnb+WaaG&RTVM
- :acrn-issue:`4227` - [ISD][Stability][WaaG][Regression] "Passmark8.0-Graphics3D-DirectX9Complex" test failed on WaaG due to driver error
@ -6,7 +6,8 @@ Platform S5 Enable Guide

Introduction
************

S5 is one of the `ACPI sleep states <http://acpi.sourceforge.net/documentation/sleep.html>`_
that refers to the system being shut down (although some power may still be supplied to
certain devices). In this document, S5 means the function to shut down the
**User VMs**, **the Service VM**, the hypervisor, and the hardware. In most cases,
directly shutting down the power of a computer system is not advisable because it can
@ -30,14 +31,16 @@ The diagram below shows the overall architecture:
|
||||
|
||||
- **Scenario I**:
|
||||
|
||||
The User VM's serial port device (``ttySn``) is emulated in the Device Model, the channel from the Service VM to the User VM:
|
||||
The User VM's serial port device (``ttySn``) is emulated in the
|
||||
Device Model, the channel from the Service VM to the User VM:
|
||||
|
||||
.. graphviz:: images/s5-scenario-1.dot
|
||||
:name: s5-scenario-1
|
||||
|
||||
- **Scenario II**:
|
||||
|
||||
The User VM's (like RT-Linux or other RT-VMs) serial port device (``ttySn``) is emulated in the Hypervisor,
|
||||
The User VM's (like RT-Linux or other RT-VMs) serial port device
|
||||
(``ttySn``) is emulated in the Hypervisor,
|
||||
the channel from the Service OS to the User VM:
|
||||
|
||||
.. graphviz:: images/s5-scenario-2.dot
|
||||
@ -186,7 +189,7 @@ How to test
|
||||
Active: active (running) since Tue 2019-09-10 07:15:06 UTC; 1min 11s ago
|
||||
Main PID: 840 (life_mngr)
|
||||
|
||||
.. note:: For WaaG, we need to close ``windbg`` by using the ``"bcdedit /set debug off`` command
|
||||
.. note:: For WaaG, we need to close ``windbg`` by using the ``bcdedit /set debug off`` command
|
||||
IF you executed the ``bcdedit /set debug on`` when you set up the WaaG, because it occupies the ``COM2``.
|
||||
|
||||
#. Use the``acrnctl stop`` command on the Service VM to trigger S5 to the User VMs:
|
||||
|
@ -12,9 +12,13 @@ higher priorities VMs (such as RTVMs) are not impacted.

Using RDT includes three steps:

1. Detect and enumerate RDT allocation capabilities on supported resources such as cache and memory bandwidth.
#. Set up resource mask array MSRs (Model-Specific Registers) for each CLOS (Class of Service, which is a resource allocation), basically to limit or allow access to resource usage.
#. Select the CLOS for the CPU associated with the VM that will apply the resource mask on the CP.
1. Detect and enumerate RDT allocation capabilities on supported
resources such as cache and memory bandwidth.
#. Set up resource mask array MSRs (Model-Specific Registers) for each
CLOS (Class of Service, which is a resource allocation), basically to
limit or allow access to resource usage.
#. Select the CLOS for the CPU associated with the VM that will apply
the resource mask on the CP.

Steps #2 and #3 configure RDT resources for a VM and can be done in two ways:

@ -24,7 +28,10 @@ Steps #2 and #3 configure RDT resources for a VM and can be done in two ways:
The following sections discuss how to detect, enumerate capabilities, and
configure RDT resources for VMs in the ACRN hypervisor.

For further details, refer to the ACRN RDT high-level design :ref:`hv_rdt` and `Intel 64 and IA-32 Architectures Software Developer's Manual, (Section 17.19 Intel Resource Director Technology Allocation Features) <https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-3a-3b-3c-and-3d-system-programming-guide>`_
For further details, refer to the ACRN RDT high-level design
:ref:`hv_rdt` and `Intel 64 and IA-32 Architectures Software Developer's
Manual, (Section 17.19 Intel Resource Director Technology Allocation Features)
<https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-3a-3b-3c-and-3d-system-programming-guide>`_

.. _rdt_detection_capabilities:

@ -48,10 +55,16 @@ index. For example, run ``cpuid 0x10 0x2`` to query the L2 CAT capability.

L3/L2 bit encoding:

* EAX [bit 4:0] reports the length of the cache mask minus one. For example, a value 0xa means the cache mask is 0x7ff.
* EBX [bit 31:0] reports a bit mask. Each set bit indicates the corresponding unit of the cache allocation that can be used by other entities in the platform (e.g. integrated graphics engine).
* ECX [bit 2] if set, indicates that cache Code and Data Prioritization Technology is supported.
* EDX [bit 15:0] reports the maximum CLOS supported for the resource minus one. For example, a value of 0xf means the max CLOS supported is 0x10.
* EAX [bit 4:0] reports the length of the cache mask minus one. For
example, a value 0xa means the cache mask is 0x7ff.
* EBX [bit 31:0] reports a bit mask. Each set bit indicates the
corresponding unit of the cache allocation that can be used by other
entities in the platform (e.g. integrated graphics engine).
* ECX [bit 2] if set, indicates that cache Code and Data Prioritization
Technology is supported.
* EDX [bit 15:0] reports the maximum CLOS supported for the resource
minus one. For example, a value of 0xf means the max CLOS supported
is 0x10.
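The field decoding described in the bit-encoding list can be sketched in a few lines. This is illustrative only: the register values below are sample inputs rather than values read from hardware, and the helper name is ours, not part of ACRN or its tooling.

```python
# Sketch: decode the CPUID leaf 0x10 subleaf fields described above.

def decode_cat_capability(eax: int, ebx: int, ecx: int, edx: int) -> dict:
    """Decode L3/L2 CAT capability registers per the bit layout above."""
    mask_len = (eax & 0x1F) + 1  # EAX[4:0] holds the mask length minus one
    return {
        "cache_mask": (1 << mask_len) - 1,  # full-length cache way bitmask
        "shared_units": ebx,                # EBX: units shared with other agents
        "cdp_supported": bool(ecx & 0x4),   # ECX[2]: CDP support
        "max_clos": (edx & 0xFFFF) + 1,     # EDX[15:0] holds max CLOS minus one
    }

# Sample inputs matching the examples in the text: EAX=0xa -> mask 0x7ff,
# EDX=0xf -> 0x10 CLOS supported.
caps = decode_cat_capability(eax=0xA, ebx=0x600, ecx=0x4, edx=0xF)
print(hex(caps["cache_mask"]), hex(caps["max_clos"]))  # 0x7ff 0x10
```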

.. code-block:: none

@ -82,7 +95,8 @@ Tuning RDT resources in HV debug shell
This section explains how to configure the RDT resources from the HV debug
shell.

#. Check the PCPU IDs of each VM; the ``vcpu_list`` below shows that VM0 is running on PCPU0, and VM1 is running on PCPU1:
#. Check the PCPU IDs of each VM; the ``vcpu_list`` below shows that VM0 is
running on PCPU0, and VM1 is running on PCPU1:

.. code-block:: none

@ -93,14 +107,24 @@ shell.
0 0 0 PRIMARY Running
1 1 0 PRIMARY Running

#. Set the resource mask array MSRs for each CLOS with a ``wrmsr <reg_num> <value>``. For example, if you want to restrict VM1 to use the lower 4 ways of LLC cache and you want to allocate the upper 7 ways of LLC to access to VM0, you must first assign a CLOS for each VM (e.g. VM0 is assigned CLOS0 and VM1 CLOS1). Next, resource mask the MSR that corresponds to the CLOS0. In our example, IA32_L3_MASK_BASE + 0 is programmed to 0x7f0. Finally, resource mask the MSR that corresponds to CLOS1. In our example, IA32_L3_MASK_BASE + 1 is set to 0xf.
#. Set the resource mask array MSRs for each CLOS with a ``wrmsr <reg_num> <value>``.
For example, if you want to restrict VM1 to use the
lower 4 ways of LLC cache and you want to allocate the upper 7 ways of
LLC to access to VM0, you must first assign a CLOS for each VM (e.g. VM0
is assigned CLOS0 and VM1 CLOS1). Next, resource mask the MSR that
corresponds to the CLOS0. In our example, IA32_L3_MASK_BASE + 0 is
programmed to 0x7f0. Finally, resource mask the MSR that corresponds to
CLOS1. In our example, IA32_L3_MASK_BASE + 1 is set to 0xf.

.. code-block:: none

ACRN:\>wrmsr -p1 0xc90 0x7f0
ACRN:\>wrmsr -p1 0xc91 0xf

#. Assign CLOS1 to PCPU1 by programming the MSR IA32_PQR_ASSOC [bit 63:32] (0xc8f) to 0x100000000 to use CLOS1 and assign CLOS0 to PCPU 0 by programming MSR IA32_PQR_ASSOC [bit 63:32] to 0x0. Note that IA32_PQR_ASSOC is per LP MSR and CLOS must be programmed on each LP.
#. Assign CLOS1 to PCPU1 by programming the MSR IA32_PQR_ASSOC [bit 63:32]
(0xc8f) to 0x100000000 to use CLOS1 and assign CLOS0 to PCPU 0 by
programming MSR IA32_PQR_ASSOC [bit 63:32] to 0x0. Note that
IA32_PQR_ASSOC is per LP MSR and CLOS must be programmed on each LP.
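The arithmetic behind the two ``wrmsr`` steps can be sketched as follows. IA32_L3_MASK_BASE (0xc90) and IA32_PQR_ASSOC (0xc8f) are the architectural MSRs named in the text; the helper functions are our own illustration, not ACRN APIs.

```python
# Sketch: compute the MSR addresses and values used in the steps above.
IA32_L3_MASK_BASE = 0xC90  # CLOS n mask lives at IA32_L3_MASK_BASE + n
IA32_PQR_ASSOC = 0xC8F     # per-LP register selecting the active CLOS

def l3_mask_msr(clos: int) -> int:
    """MSR address holding the L3 cache-way mask for a given CLOS."""
    return IA32_L3_MASK_BASE + clos

def pqr_assoc_value(clos: int) -> int:
    """IA32_PQR_ASSOC value selecting a CLOS (CLOS sits in bits 63:32)."""
    return clos << 32

# Matches the examples: CLOS1's mask MSR is 0xc91, and selecting CLOS1
# writes 0x100000000 to IA32_PQR_ASSOC.
print(hex(l3_mask_msr(1)), hex(pqr_assoc_value(1)))  # 0xc91 0x100000000
```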

.. code-block:: none

@ -112,7 +136,12 @@ shell.
Configure RDT for VM using VM Configuration
*******************************************

#. RDT on ACRN is enabled by default on supported platforms. This information can be found using an offline tool that generates a platform-specific xml file that helps ACRN identify RDT-supported platforms. This feature can be also be toggled using the CONFIG_RDT_ENABLED flag with the ``make menuconfig`` command. The first step is to clone the ACRN source code (if you haven't already done so):
#. RDT on ACRN is enabled by default on supported platforms. This
information can be found using an offline tool that generates a
platform-specific xml file that helps ACRN identify RDT-supported
platforms. This feature can be also be toggled using the
CONFIG_RDT_ENABLED flag with the ``make menuconfig`` command. The first
step is to clone the ACRN source code (if you haven't already done so):

.. code-block:: none

@ -122,7 +151,9 @@ Configure RDT for VM using VM Configuration
.. figure:: images/menuconfig-rdt.png
:align: center

#. The predefined cache masks can be found at ``hypervisor/arch/x86/configs/$(CONFIG_BOARD)/board.c`` for respective boards. For example, apl-up2 can found at ``hypervisor/arch/x86/configs/apl-up2/board.c``.
#. The predefined cache masks can be found at
``hypervisor/arch/x86/configs/$(CONFIG_BOARD)/board.c`` for respective boards.
For example, apl-up2 can found at ``hypervisor/arch/x86/configs/apl-up2/board.c``.

.. code-block:: none
:emphasize-lines: 3,7,11,15
@ -147,9 +178,17 @@ Configure RDT for VM using VM Configuration
};

.. note::
Users can change the mask values, but the cache mask must have **continuous bits** or a #GP fault can be triggered. Similary, when programming an MBA delay value, be sure to set the value to less than or equal to the MAX delay value.
Users can change the mask values, but the cache mask must have
**continuous bits** or a #GP fault can be triggered. Similary, when
programming an MBA delay value, be sure to set the value to less than or
equal to the MAX delay value.
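The "continuous bits" rule in the note above can be checked with a quick bit trick. This helper is purely illustrative and is not part of the ACRN configuration tooling.

```python
# Sketch: validity check for a CAT cache mask, which must be one
# contiguous run of set bits (otherwise wrmsr raises a #GP fault).

def is_valid_cache_mask(mask: int) -> bool:
    """True if mask is non-zero and its set bits are contiguous."""
    if mask == 0:
        return False
    # Filling the low-order gap below the run and adding 1 overflows past
    # the run only when the set bits are contiguous.
    return ((mask | (mask - 1)) + 1) & mask == 0

# 0x7f0 and 0xf (the masks used earlier) are contiguous; 0xf0f is not.
print(is_valid_cache_mask(0x7F0), is_valid_cache_mask(0xF), is_valid_cache_mask(0xF0F))
```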

#. Set up the CLOS in the VM config. Follow `RDT detection and resource capabilities`_ to identify the MAX CLOS that can be used. ACRN uses the **the lowest common MAX CLOS** value among all RDT resources to avoid resource misconfigurations. For example, configuration data for the Service VM sharing mode can be found at ``hypervisor/arch/x86/configs/vm_config.c``
#. Set up the CLOS in the VM config. Follow `RDT detection and resource capabilities`_
to identify the MAX CLOS that can be used. ACRN uses the
**the lowest common MAX CLOS** value among all RDT resources to avoid
resource misconfigurations. For example, configuration data for the
Service VM sharing mode can be found at
``hypervisor/arch/x86/configs/vm_config.c``

.. code-block:: none
:emphasize-lines: 6
@ -171,9 +210,15 @@ Configure RDT for VM using VM Configuration
};

.. note::
In ACRN, Lower CLOS always means higher priority (clos 0 > clos 1 > clos 2>...clos n). So, carefully program each VM's CLOS accordingly.
In ACRN, Lower CLOS always means higher priority (clos 0 > clos 1 > clos 2> ...clos n).
So, carefully program each VM's CLOS accordingly.

#. Careful consideration should be made when assigning vCPU affinity. In a cache isolation configuration, in addition to isolating CAT-capable caches, you must also isolate lower-level caches. In the following example, logical processor #0 and #2 share L1 and L2 caches. In this case, do not assign LP #0 and LP #2 to different VMs that need to do cache isolation. Assign LP #1 and LP #3 with similar consideration:
#. Careful consideration should be made when assigning vCPU affinity. In
a cache isolation configuration, in addition to isolating CAT-capable
caches, you must also isolate lower-level caches. In the following
example, logical processor #0 and #2 share L1 and L2 caches. In this
case, do not assign LP #0 and LP #2 to different VMs that need to do
cache isolation. Assign LP #1 and LP #3 with similar consideration:

.. code-block:: none
:emphasize-lines: 3
@ -194,10 +239,15 @@ Configure RDT for VM using VM Configuration
PU L#2 (P#1)
PU L#3 (P#3)

#. Bandwidth control is per-core (not per LP), so max delay values of per-LP CLOS is applied to the core. If HT is turned on, don’t place high priority threads on sibling LPs running lower priority threads.
#. Bandwidth control is per-core (not per LP), so max delay values of
per-LP CLOS is applied to the core. If HT is turned on, don't place high
priority threads on sibling LPs running lower priority threads.

#. Based on our scenario, build the ACRN hypervisor and copy the artifact ``acrn.efi`` to the
``/boot/EFI/acrn`` directory. If needed, update the devicemodel ``acrn-dm`` as well in ``/usr/bin`` directory. see :ref:`getting-started-building` for building instructions.
#. Based on our scenario, build the ACRN hypervisor and copy the
artifact ``acrn.efi`` to the
``/boot/EFI/acrn`` directory. If needed, update the devicemodel
``acrn-dm`` as well in ``/usr/bin`` directory. see
:ref:`getting-started-building` for building instructions.

.. code-block:: none


@ -38,11 +38,11 @@ Here is example pseudocode of a cyclictest implementation.
.. code-block:: none

while (!shutdown) {
…
...
clock_nanosleep(&next)
clock_gettime(&now)
latency = calcdiff(now, next)
…
...
next += interval
}
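The pseudocode above can be approximated with a minimal runnable sketch. Python is used here purely for illustration; real cyclictest is a C program built on ``clock_nanosleep`` with a high-resolution timer, so the absolute numbers below are not representative of RT latencies.

```python
# Sketch: measure wakeup jitter against an absolute sleep schedule,
# mirroring the cyclictest loop structure above.
import time

def measure_latencies(interval_s: float = 0.001, iterations: int = 50):
    """Sleep on an absolute schedule and record wakeup latency per cycle."""
    latencies = []
    next_wake = time.monotonic() + interval_s
    for _ in range(iterations):
        time.sleep(max(0.0, next_wake - time.monotonic()))
        now = time.monotonic()
        latencies.append(now - next_wake)  # wakeup jitter for this cycle
        next_wake += interval_s            # advance the absolute deadline
    return latencies

lat = measure_latencies()
print(f"max latency: {max(lat) * 1e6:.1f} us")
```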

@ -161,7 +161,9 @@ CPU hardware differences in Linux performance measurements and presents a
simple command line interface. Perf is based on the ``perf_events`` interface
exported by recent versions of the Linux kernel.

**PMU** tools is a collection of tools for profile collection and performance analysis on Intel CPUs on top of Linux Perf. Refer to the following links for perf usage:
**PMU** tools is a collection of tools for profile collection and
performance analysis on Intel CPUs on top of Linux Perf. Refer to the
following links for perf usage:

- https://perf.wiki.kernel.org/index.php/Main_Page
- https://perf.wiki.kernel.org/index.php/Tutorial
@ -174,7 +176,8 @@ Top-down Micro-Architecture Analysis Method (TMAM)
The Top-down Micro-Architecture Analysis Method (TMAM), based on Top-Down
Characterization methodology, aims to provide an insight into whether you
have made wise choices with your algorithms and data structures. See the
Intel |reg| 64 and IA-32 `Architectures Optimization Reference Manual <http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf>`_,
Intel |reg| 64 and IA-32 `Architectures Optimization Reference Manual
<http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf>`_,
Appendix B.1 for more details on TMAM. Refer to this `technical paper
<https://fd.io/docs/whitepapers/performance_analysis_sw_data_planes_dec21_2017.pdf>`_
which adopts TMAM for systematic performance benchmarking and analysis
@ -197,4 +200,3 @@ Example: Using Perf to analyze TMAM level 1 on CPU core 1
S0-C1 1 10.6% 1.5% 3.9% 84.0%

0.006737123 seconds time elapsed


@ -35,7 +35,9 @@ Install Kata Containers

The Kata Containers installation from Clear Linux's official repository does
not work with ACRN at the moment. Therefore, you must install Kata
Containers using the `manual installation <https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_ instructions (using a ``rootfs`` image).
Containers using the `manual installation
<https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_
instructions (using a ``rootfs`` image).

#. Install the build dependencies.

@ -45,7 +47,8 @@ Containers using the `manual installation <https://github.com/kata-containers/do

#. Install Kata Containers.

At a high level, the `manual installation <https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_
At a high level, the `manual installation
<https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_
steps are:

#. Build and install the Kata runtime.
@ -89,7 +92,7 @@ outputs:
$ kata-runtime kata-env | awk -v RS= '/\[Hypervisor\]/'
[Hypervisor]
MachineType = ""
Version = "DM version is: 1.5-unstable-”2020w02.5.140000p_261” (daily tag:”2020w02.5.140000p”), build by mockbuild@2020-01-12 08:44:52"
Version = "DM version is: 1.5-unstable-"2020w02.5.140000p_261" (daily tag:"2020w02.5.140000p"), build by mockbuild@2020-01-12 08:44:52"
Path = "/usr/bin/acrn-dm"
BlockDeviceDriver = "virtio-blk"
EntropySource = "/dev/urandom"

@ -10,7 +10,9 @@ extended capability and manages entire physical devices; and VF (Virtual
Function), a "lightweight" PCIe function which is a passthrough device for
VMs.

For details, refer to Chapter 9 of PCI-SIG's `PCI Express Base SpecificationRevision 4.0, Version 1.0 <https://pcisig.com/pci-express-architecture-configuration-space-test-specification-revision-40-version-10>`_.
For details, refer to Chapter 9 of PCI-SIG's
`PCI Express Base SpecificationRevision 4.0, Version 1.0
<https://pcisig.com/pci-express-architecture-configuration-space-test-specification-revision-40-version-10>`_.

SR-IOV Architectural Overview
-----------------------------
@ -31,7 +33,7 @@ SR-IOV Architectural Overview
- **PF** - A PCIe Function that supports the SR-IOV capability
and is accessible to an SR-PCIM, a VI, or an SI.

- **VF** - A “light-weight” PCIe Function that is directly accessible by an
- **VF** - A "light-weight" PCIe Function that is directly accessible by an
SI.

SR-IOV Extended Capability
@ -39,7 +41,7 @@ SR-IOV Extended Capability

The SR-IOV Extended Capability defined here is a PCIe extended
capability that must be implemented in each PF device that supports the
SR-IOV feature. This capability is used to describe and control a PF’s
SR-IOV feature. This capability is used to describe and control a PF's
SR-IOV Capabilities.

.. figure:: images/sriov-image2.png
@ -84,17 +86,17 @@ SR-IOV Capabilities.
supported by the PF.

- **System Page Size** - The field that defines the page size the system
will use to map the VFs’ memory addresses. Software must set the
will use to map the VFs' memory addresses. Software must set the
value of the *System Page Size* to one of the page sizes set in the
*Supported Page Sizes* field.

- **VF BARs** - Fields that must define the VF’s Base Address
- **VF BARs** - Fields that must define the VF's Base Address
Registers (BARs). These fields behave as normal PCI BARs.

- **VF Migration State Array Offset** - Register that contains a
PF BAR relative pointer to the VF Migration State Array.

- **VF Migration State Array** – Located using the VF Migration
- **VF Migration State Array** - Located using the VF Migration
State Array Offset register of the SR-IOV Capability block.

For details, refer to the *PCI Express Base Specification Revision 4.0, Version 1.0 Chapter 9.3.3*.
@ -111,7 +113,7 @@ SR-IOV Architecture In ACRN
1. A hypervisor detects a SR-IOV capable PCIe device in the physical PCI
device enumeration phase.

2. The hypervisor intercepts the PF’s SR-IOV capability and accesses whether
2. The hypervisor intercepts the PF's SR-IOV capability and accesses whether
to enable/disable VF devices based on the *VF\_ENABLE* state. All
read/write requests for a PF device passthrough to the PF physical
device.
@ -122,9 +124,9 @@ SR-IOV Architecture In ACRN
initialization. The hypervisor uses *Subsystem Vendor ID* to detect the
SR-IOV VF physical device instead of *Vendor ID* since no valid
*Vendor ID* exists for the SR-IOV VF physical device. The VF BARs are
initialized by its associated PF’s SR-IOV capabilities, not PCI
initialized by its associated PF's SR-IOV capabilities, not PCI
standard BAR registers. The MSIx mapping base address is also from the
PF’s SR-IOV capabilities, not PCI standard BAR registers.
PF's SR-IOV capabilities, not PCI standard BAR registers.

SR-IOV Passthrough VF Architecture In ACRN
------------------------------------------
@ -144,8 +146,8 @@ SR-IOV Passthrough VF Architecture In ACRN

3. The hypervisor emulates *Device ID/Vendor ID* and *Memory Space Enable
(MSE)* in the configuration space for an assigned SR-IOV VF device. The
assigned VF *Device ID* comes from its associated PF’s capability. The
*Vendor ID* is the same as the PF’s *Vendor ID* and the *MSE* is always
assigned VF *Device ID* comes from its associated PF's capability. The
*Vendor ID* is the same as the PF's *Vendor ID* and the *MSE* is always
set when reading the SR-IOV VF device's *CONTROL* register.

4. The vendor-specific VF driver in the target VM probes the assigned SR-IOV
@ -180,7 +182,7 @@ The hypervisor intercepts all SR-IOV capability access and checks the
*VF\_ENABLE* state. If *VF\_ENABLE* is set, the hypervisor creates n
virtual devices after 100ms so that VF physical devices have enough time to
be created. The Service VM waits 100ms and then only accesses the first VF
device’s configuration space including *Class Code, Reversion ID, Subsystem
device's configuration space including *Class Code, Reversion ID, Subsystem
Vendor ID, Subsystem ID*. The Service VM uses the first VF device
information to initialize subsequent VF devices.

@ -238,8 +240,10 @@ only support LaaG (Linux as a Guest).

#. Input the ``\ *echo n > /sys/class/net/enp109s0f0/device/sriov\_numvfs*\``
command in the Service VM to enable n VF devices for the first PF
device (\ *enp109s0f0)*. The number *n* can’t be more than *TotalVFs*
which comes from the return value of command ``cat /sys/class/net/enp109s0f0/device/sriov\_totalvfs``. Here we use *n = 2* as an example.
device (\ *enp109s0f0)*. The number *n* can't be more than *TotalVFs*
which comes from the return value of command
``cat /sys/class/net/enp109s0f0/device/sriov\_totalvfs``. Here we
use *n = 2* as an example.

.. figure:: images/sriov-image10.png
:align: center
@ -267,7 +271,7 @@ only support LaaG (Linux as a Guest).
iv. *echo "0000:6d:10.0" >
/sys/bus/pci/drivers/pci-stub/bind*

b. Add the SR-IOV VF device parameter (“*-s X, passthru,6d/10/0*\ ”) in
b. Add the SR-IOV VF device parameter ("*-s X, passthru,6d/10/0*\ ") in
the launch User VM script

.. figure:: images/sriov-image12.png

@ -47,12 +47,13 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo


.. note:: The module ``/boot/zephyr.bin`` is the VM0 (Zephyr) kernel file.
The param ``xxxxxx`` is VM0’s kernel file tag and must exactly match the
The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the
``kernel_mod_tag`` of VM0 which is configured in the ``hypervisor/scenarios/hybrid/vm_configurations.c``
file. The multiboot module ``/boot/bzImage`` is the Service VM kernel
file. The param ``yyyyyy`` is the bzImage tag and must exactly match the
``kernel_mod_tag`` of VM1 in the ``hypervisor/scenarios/hybrid/vm_configurations.c``
file. The kernel command line arguments used to boot the Service VM are located in the header file ``hypervisor/scenarios/hybrid/vm_configurations.h``
file. The kernel command line arguments used to boot the Service VM are
located in the header file ``hypervisor/scenarios/hybrid/vm_configurations.h``
and are configured by the `SOS_VM_BOOTARGS` macro.

#. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu
@ -68,7 +69,7 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
$ sudo update-grub

#. Reboot the NUC. Select the **ACRN hypervisor Hybrid Scenario** entry to boot
the ACRN hypervisor on the NUC’s display. The GRUB loader will boot the
the ACRN hypervisor on the NUC's display. The GRUB loader will boot the
hypervisor, and the hypervisor will start the VMs automatically.

Hybrid Scenario Startup Checking
@ -83,7 +84,7 @@ Hybrid Scenario Startup Checking
a. Use the ``vm_console 0`` to switch to VM0 (Zephyr) console. It will display **Hello world! acrn**.
#. Enter :kbd:`Ctrl+Spacebar` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console.
#. Verify that the VM1’s Service VM can boot up and you can log in.
#. Verify that the VM1's Service VM can boot up and you can log in.
#. ssh to VM1 and launch the post-launched VM2 using the ACRN device model launch script.
#. Go to the Service VM console, and enter :kbd:`Ctrl+Spacebar` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 2`` command to switch to the VM2 (User VM) console.

@ -161,7 +161,7 @@ Update ACRN hypervisor Image
* Set ACRN Scenario as "Logical Partition VMs";
* Set Maximum number of VCPUs per VM as "2";
* Set Maximum number of PCPU as "4";
* Clear/Disable “Enable hypervisor relocation”.
* Clear/Disable "Enable hypervisor relocation".

We recommend keeping the default values of items not mentioned above.


@ -72,7 +72,7 @@ Build the Service VM Kernel

$ WORKDIR=`pwd`;
$ JOBS=`nproc`
$ git clone -b master https://github.com/projectacrn/acrn-kernel.git
$ git clone -b master https://github.com/projectacrn/acrn-kernel.git
$ cd acrn-kernel && mkdir -p ${WORKDIR}/{build,build-rootfs}
$ cp kernel_config_uefi_sos ${WORKDIR}/build/.config
$ make olddefconfig O=${WORKDIR}/build && make -j${JOBS} O=${WORKDIR}/build
@ -256,7 +256,7 @@ ACRN Windows verified feature list
, "Virtio input - keyboard", "Working"
, "GOP & VNC remote display", "Working"
"GVT-g", "GVT-g without local display", "Working with 3D benchmark"
, "GVT-g with local display", "Working with 3D benchmark"
, "GVT-g with local display", "Working with 3D benchmark"
"Tools", "WinDbg", "Working"
"Test cases", "Install Windows 10 from scratch", "OK"
, "Windows reboot", "OK"