doc: push doc updates for v2.5 release

Cumulative changes to docs since the release_2.5 branch was made

Tracked-On: #5692

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder 2021-06-24 20:58:54 -07:00 committed by David Kinder
parent 7e9d625425
commit cd4dc73ca5
47 changed files with 1697 additions and 734 deletions


@ -3,6 +3,53 @@
Security Advisory
#################
Addressed in ACRN v2.5
************************
We recommend that all developers upgrade to this v2.5 release (or later), which
addresses the following security issues that were discovered in previous releases:
-----
- NULL Pointer Dereference in ``devicemodel/hw/pci/virtio/virtio_net.c``

  The ``virtio_net_ping_rxq()`` function tries to set ``vq->used->flags`` without
  validating the pointer ``vq->used``, which may be NULL and cause a NULL pointer
  dereference.

  **Affected Release:** v2.4 and earlier.

- NULL Pointer Dereference in ``hw/pci/virtio/virtio.c``

  The ``vq_endchains`` function tries to read ``vq->used->idx`` without
  validating the pointer ``vq->used``, which may be NULL and cause a NULL pointer
  dereference.

  **Affected Release:** v2.4 and earlier.

- NULL Pointer Dereference in ``devicemodel/hw/pci/xhci.c``

  The ``trb`` pointer in the ``pci_xhci_complete_commands`` function may come from
  user space and may be NULL. Accessing it without validation may cause a NULL
  pointer dereference.

  **Affected Release:** v2.4 and earlier.

- Buffer overflow in ``hypervisor/arch/x86/vtd.c``

  A malicious ``index`` input to the ``dmar_free_irte`` function may trigger a
  buffer overflow on the ``irte_alloc_bitmap[]`` array.

  **Affected Release:** v2.4 and earlier.

- Page Fault in ``devicemodel/core/mem.c``

  The ``unregister_mem_int()`` function frees any valid entry, which is not
  expected (only entries removed from the RB tree should be freed). This causes
  a page fault on the next RB tree iteration.

  **Affected Release:** v2.4 and earlier.

- Heap use-after-free in the VIRTIO ``timer_handler``

  With virtio polling mode enabled, a timer runs in the virtio backend service.
  The timer is also triggered if the frontend driver did not reset the device on
  shutdown, so a freed virtio device could be accessed in the polling timer
  handler.

  **Affected Release:** v2.4 and earlier.
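
These fixes share a common defensive pattern: validate guest-controlled pointers
and indexes before using them. The following sketch is illustrative only and is
not the actual ACRN patch; the structures and function names are simplified
stand-ins for the device model and hypervisor code referenced above:

.. code-block:: c

   #include <stddef.h>
   #include <stdint.h>

   /* Simplified stand-ins for the device model's virtqueue structures. */
   struct vring_used {
       uint16_t flags;
       uint16_t idx;
   };

   struct virtio_vq_info {
       struct vring_used *used;   /* set up by the guest; may still be NULL */
   };

   /* Read the used-ring index only after checking that the guest actually
    * provided a used ring.  Returns 0 on success, -1 if the ring is absent. */
   static int vq_read_used_idx(const struct virtio_vq_info *vq, uint16_t *idx)
   {
       if (vq == NULL || vq->used == NULL)
           return -1;    /* refuse to dereference an uninitialized ring */

       *idx = vq->used->idx;
       return 0;
   }

   #define IRTE_NUM 256u   /* illustrative table size, not the real value */
   static uint64_t irte_alloc_bitmap[IRTE_NUM / 64u];

   /* Bounds-check a caller-supplied index before touching the bitmap. */
   static int irte_clear(uint32_t index)
   {
       if (index >= IRTE_NUM)
           return -1;    /* reject out-of-range input */

       irte_alloc_bitmap[index / 64u] &= ~(1ULL << (index % 64u));
       return 0;
   }
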
Addressed in ACRN v2.3
************************


@ -389,22 +389,6 @@ html_redirect_pages = [
('user-guides/index', 'develop'),
('hardware', 'reference/hardware'),
('release_notes', 'release_notes/index'),
('getting-started/rt_industry', 'getting-started/getting-started'),
('getting-started/rt_industry_ubuntu', 'getting-started/getting-started'),
('tutorials/acrn_ootb', 'nocl'),
('tutorials/agl-vms', 'nocl'),
('tutorials/building_acrn_in_docker', 'nocl'),
('tutorials/building_uos_from_clearlinux', 'nocl'),
('tutorials/cl_servicevm', 'nocl'),
('tutorials/enable_laag_secure_boot', 'nocl'),
('tutorials/increase_uos_disk_size', 'nocl'),
('tutorials/kbl-nuc-sdc', 'nocl'),
('tutorials/open_vswitch', 'nocl'),
('tutorials/running_deb_as_serv_vm', 'nocl'),
('tutorials/sign_clear_linux_image', 'nocl'),
('tutorials/static-ip', 'nocl'),
('tutorials/up2', 'nocl'),
('tutorials/using_celadon_as_uos', 'nocl'),
('tutorials/using_sbl_on_up2', 'nocl'),
('tutorials/using_ubuntu_as_sos', 'nocl'),
]


@ -59,6 +59,7 @@ Enable ACRN Features
:maxdepth: 1

tutorials/sgx_virtualization
tutorials/nvmx_virtualization
tutorials/vuart_configuration
tutorials/rdt_configuration
tutorials/waag-secure-boot
@ -73,6 +74,7 @@ Enable ACRN Features
tutorials/acrn_on_qemu
tutorials/using_grub
tutorials/acrn-secure-boot-with-grub
tutorials/acrn-secure-boot-with-efi-stub
tutorials/pre-launched-rt
tutorials/enable_ivshmem


@ -294,11 +294,21 @@ For example, there are roles for marking :file:`filenames`
(``:command:`make```). You can also use the \`\`inline code\`\`
markup (double backticks) to indicate a ``filename``.

Don't use items within a single backtick, for example ```word```. Instead
use double backticks: ````word````.

Branch-Specific File Links
**************************
Links in the documentation to specific files in the GitHub tree should also point
to the branch for that version of the documentation (e.g., links in the v2.5
release of the documentation should be to files in the v2.5 branch). Do not
link to files in the master branch because files in that branch could change
or even be deleted after the release is made.
To make this kind of file linking possible, use a special role that
creates a hyperlink to that file in the current branch. For example, a GitHub
link to the reST file used to create this document can be generated
using ``:acrn_file:`doc/developer-guides/doc_guidelines.rst```, which will
appear as :acrn_file:`doc/developer-guides/doc_guidelines.rst`, a link to
the “blob” file in the GitHub repo as displayed by GitHub. There's also an
@ -307,6 +317,11 @@ to the “raw” uninterpreted file,
:acrn_raw:`doc/developer-guides/doc_guidelines.rst` file. (Click these links
to see the difference.)
If you don't want the whole path to the file name to
appear in the text, use the usual linking notation to define the link text
that is shown, for example ``:acrn_file:`Guidelines <doc/developer-guides/doc_guidelines.rst>```
would show up as simply :acrn_file:`Guidelines <doc/developer-guides/doc_guidelines.rst>`.
.. _internal-linking:

Internal Cross-Reference Linking
@ -351,8 +366,12 @@ This is the same directive used to define a label that's a reference to a URL:
To enable easy cross-page linking within the site, each file should have a
reference label before its title so that it can be referenced from another
file.

.. note:: These reference labels must be unique across the whole site, so generic
   names such as "samples" should be avoided.

For example, the top of this
document's ``.rst`` file is:

.. code-block:: rst
@ -418,15 +437,15 @@ spaces (to the first non-blank space of the directive name).
This would be rendered as:

.. code-block:: c

   struct _k_object {
      char *name;
      u8_t perms[CONFIG_MAX_THREAD_BYTES];
      u8_t type;
      u8_t flags;
      u32_t data;
   } __packed;
You can specify other languages for the ``code-block`` directive, including You can specify other languages for the ``code-block`` directive, including
@ -442,10 +461,10 @@ If you want no syntax highlighting, specify ``none``. For example:
Would display as:

.. code-block:: none

   This would be a block of text styled with a background
   and box, but with no syntax highlighting.
There's a shorthand for writing code blocks, too: end the introductory
paragraph with a double colon (``::``) and indent the code block content
@ -517,7 +536,7 @@ This results in the image being placed in the document:
.. image:: ../images/ACRNlogo.png
   :align: center
The preferred alternative is to use the ``.. figure`` directive to include a picture with
a caption and automatic figure numbering for your image (so that you can say
see :numref:`acrn-logo-figure`, by using the notation
``:numref:`acrn-logo-figure``` and specifying the name of the figure)::
@ -534,6 +553,7 @@ see :numref:`acrn-logo-figure`, by using the notation
   Caption for the figure

All figures should have a figure caption.

We've also included the ``graphviz`` Sphinx extension to enable you to use a
text description language to render drawings. For more information, see
@ -613,8 +633,7 @@ a ``.. tab::`` directive. Under the hood, we're using the `sphinx-tabs
<https://github.com/djungelorm/sphinx-tabs>`_ extension that's included
in the ACRN (requirements.txt) setup. Within a tab, you can have almost
any content *other than a heading* (code-blocks, ordered and unordered
lists, pictures, paragraphs, and such).
Instruction Steps
*****************
@ -653,6 +672,45 @@ This is the second instruction step.
only one set of numbered steps is intended per document and the steps
must be level 2 headings.
Configuration Option Documentation
**********************************
Most of the ACRN documentation is maintained in ``.rst`` files found in the
``doc/`` folder. API documentation is maintained as Doxygen comments in the C
header files (or as kerneldoc comments in the ``acrn-kernel`` repo headers),
along with some prose documentation in ``.rst`` files. The ACRN configuration
option documentation is created based on details maintained in schema definition
files (``.xsd``) in the ``misc/config_tools/schema`` folder. These schema
definition files are used by the configuration tool to validate the XML scenario
configuration files as well as to hold documentation about each option. For
example:
.. code-block:: xml

   <xs:element name="RELEASE" type="Boolean" default="n">
     <xs:annotation>
       <xs:documentation>Build an image for release (``y``) or debug (``n``).
       In a **release** image, assertions are not enforced and debugging
       features are disabled, including logs, serial console, and the
       hypervisor shell.</xs:documentation>
     </xs:annotation>
   </xs:element>
During the documentation ``make html`` processing, the documentation annotations
in the ``.xsd`` files are extracted and transformed into reStructuredText using
an XSLT transformation found in ``doc/scripts/configdoc.xsl``. The generated
option documentation is organized and formatted to make it easy to create links
to specific option descriptions using an ``:option:`` role, for example
``:option:`hv.DEBUG_OPTIONS.RELEASE``` would link to
:option:`hv.DEBUG_OPTIONS.RELEASE`.
The transformed option documentation is
created in the ``_build/rst/reference/configdoc.txt`` file and included by
``doc/reference/config-options.rst`` to create the final published
:ref:`scenario-config-options` document. You make changes to the option
descriptions by editing the documentation found in one of the ``.xsd`` files.
Documentation Generation
************************


@ -4,11 +4,11 @@ AHCI Virtualization in Device Model
###################################

AHCI (Advanced Host Controller Interface) is a hardware mechanism
that enables software to communicate with Serial ATA devices. AHCI HBA
(host bus adapters) is a PCI class device that acts as a data movement
engine between system memory and Serial ATA devices. The AHCI HBA in
ACRN supports both ATA and ATAPI devices. The architecture is shown in
the diagram below:
.. figure:: images/ahci-image1.png
   :align: center
@ -16,17 +16,17 @@ the below diagram.
   :name: achi-device

HBA is registered to the PCI system with device id 0x2821 and vendor id
0x8086. Its memory registers are mapped in BAR 5. It supports only six
ports (refer to ICH8 AHCI). The AHCI driver in the User VM can access HBA in
DM through the PCI BAR, and HBA can inject MSI interrupts through the PCI
framework.

When the application in the User VM reads data from /dev/sda, the request will
be sent through the AHCI driver and then the PCI driver. The Hypervisor will
trap the request from the User VM and dispatch it to the DM. According to the
offset in the BAR, the request will be dispatched to the port control handler.
Then the request is parsed into a block I/O request, which can be processed by
the Block backend model.
Usage:
@ -34,7 +34,7 @@ Usage:
Type: 'hd' and 'cd' are available.

Filepath: the path for the backend file; could be a partition or a
regular file.

For example,


@ -19,7 +19,7 @@ The PS2 port is a 6-pin mini-Din connector used for connecting keyboards and mic
PS2 Keyboard Emulation
**********************

ACRN supports the AT keyboard controller for the PS2 keyboard, which can be accessed through I/O ports (0x60 and 0x64). 0x60 is used to access the AT keyboard controller data register; 0x64 is used to access the AT keyboard controller address register.

The PS2 keyboard ACPI description is as below::
@ -48,8 +48,8 @@ The PS2 keyboard ACPI description as below::
PS2 Mouse Emulation
*******************

ACRN supports the AT keyboard controller for the PS2 mouse, which can be accessed through I/O ports (0x60 and 0x64).
0x60 is used to access the AT keyboard controller data register; 0x64 is used to access the AT keyboard controller address register.

The PS2 mouse ACPI description is as below::


@ -10,7 +10,7 @@ Purpose of This Document
========================

This high-level design (HLD) document describes the usage requirements
and high-level design for Intel |reg| Graphics Virtualization Technology for
shared virtual :term:`GPU` technology (:term:`GVT-g`) on Apollo Lake-I
SoCs.
@ -18,14 +18,14 @@ This document describes:
- The different GPU virtualization techniques
- GVT-g mediated passthrough
- High-level design
- Key components
- GVT-g new architecture differentiation

Audience
========

This document is for developers, validation teams, architects, and
maintainers of Intel |reg| GVT-g for the Apollo Lake SoCs.

The reader should have some familiarity with the basic concepts of
@ -47,24 +47,24 @@ Background
Intel GVT-g is an enabling technology in emerging graphics
virtualization scenarios. It adopts a full GPU virtualization approach
based on mediated passthrough technology to achieve good performance,
scalability, and secure isolation among Virtual Machines (VMs). A virtual
GPU (vGPU), with full GPU features, is presented to each VM so that a
native graphics driver can run directly inside a VM.

Intel GVT-g technology for Apollo Lake (APL) has been implemented in
open-source hypervisors or Virtual Machine Monitors (VMMs):

- Intel GVT-g for ACRN, also known as “AcrnGT”
- Intel GVT-g for KVM, also known as “KVMGT”
- Intel GVT-g for Xen, also known as “XenGT”

The core vGPU device model is released under the BSD/MIT dual license, so it
can be reused in other proprietary hypervisors.

Intel has a portfolio of graphics virtualization technologies
(:term:`GVT-g`, :term:`GVT-d`, and :term:`GVT-s`). GVT-d and GVT-s are
outside the scope of this document.

This HLD applies to the Apollo Lake platform only. Support of other
hardware is outside the scope of this HLD.
@ -89,11 +89,11 @@ The main targeted usage of GVT-g is in automotive applications, such as:
Existing Techniques
===================

A graphics device is no different from any other I/O device with
respect to how the device I/O interface is virtualized. Therefore,
existing I/O virtualization techniques can be applied to graphics
virtualization. However, none of the existing techniques can meet the
general requirements of performance, scalability, and secure isolation
simultaneously. In this section, we review the pros and cons of each
technique in detail, enabling the audience to understand the rationale
behind the entire GVT-g effort.
@ -102,12 +102,12 @@ Emulation
---------

A device can be emulated fully in software, including its I/O registers
and internal functional blocks. Because there is no dependency on the
underlying hardware capability, compatibility can be achieved
across platforms. However, due to the CPU emulation cost, this technique
is usually used only for legacy devices such as a keyboard, mouse, and VGA
card. Fully emulating a modern accelerator such as a GPU would involve great
complexity and extremely low performance. It may be acceptable
for use in a simulation environment, but it is definitely not suitable
for production usage.
@ -116,9 +116,9 @@ API Forwarding
API forwarding, or a split driver model, is another widely-used I/O
virtualization technology. It has been used in commercial virtualization
products such as VMware*, PCoIP*, and Microsoft* RemoteFx*.
It is a natural path when researchers study a new type of
I/O virtualization usage, for example, when GPGPU computing in a VM was
initially proposed. Intel GVT-s is based on this approach.

The architecture of API forwarding is shown in :numref:`api-forwarding`:
@ -131,10 +131,10 @@ The architecture of API forwarding is shown in :numref:`api-forwarding`:
   API Forwarding

A frontend driver is employed to forward high-level API calls (OpenGL,
DirectX, and so on) inside a VM to a backend driver in the Hypervisor
for acceleration. The backend may be using a different graphics stack,
so API translation between different graphics protocols may be required.
The backend driver allocates a physical GPU resource for each VM,
behaving like a normal graphics application in a Hypervisor. Shared
memory may be used to reduce memory copying between the host and guest
graphics stacks.
@ -143,16 +143,16 @@ API forwarding can bring hardware acceleration capability into a VM,
with other merits such as vendor independence and high density. However, it
also suffers from the following intrinsic limitations:

- Lagging features - Every new API version must be specifically
  handled, which means slow time-to-market (TTM) to support new standards.
  For example,
  only DirectX9 is supported while DirectX11 is already in the market.
  Also, there is a big gap in supporting media and compute usages.

- Compatibility issues - A GPU is very complex, and consequently so are
  high-level graphics APIs. Different protocols are not 100% compatible
  on every subtly different API, so the customer can observe feature/quality
  loss for specific applications.

- Maintenance burden - Occurs when supported protocols and specific
  versions are incremented.
@ -165,10 +165,10 @@ Direct Passthrough
-------------------

“Direct passthrough” dedicates the GPU to a single VM, providing full
features and good performance at the cost of device sharing
capability among VMs. Only one VM at a time can use the hardware
acceleration capability of the GPU, which is a major limitation of this
technique. However, it is still a good approach for enabling graphics
virtualization usages on Intel server platforms, as an intermediate
solution. Intel GVT-d uses this mechanism.
@ -197,7 +197,7 @@ passthrough" technique.
Concept
=======

Mediated passthrough enables a VM to access performance-critical I/O
resources (usually partitioned) directly, without intervention from the
hypervisor in most cases. Privileged operations from this VM are
trapped-and-emulated to provide secure isolation among VMs.
@ -212,7 +212,7 @@ trapped-and-emulated to provide secure isolation among VMs.
The Hypervisor must ensure that no vulnerability is exposed when
assigning performance-critical resources to each VM. When a
performance-critical resource cannot be partitioned, a scheduler must be
implemented (either in software or hardware) to enable time-based sharing
among multiple VMs. In this case, the device must allow the hypervisor
to save and restore the hardware state associated with the shared resource,
either through direct I/O register reads and writes (when there is no software
@ -255,7 +255,7 @@ multiple virtual address spaces by GPU page tables. A 4 GB global
virtual address space called “global graphics memory”, accessible from
both the GPU and CPU, is mapped through a global page table. Local
graphics memory spaces are supported in the form of multiple 4 GB local
virtual address spaces but are limited to access by the Render
Engine through local page tables. Global graphics memory is mostly used
for the Frame Buffer and also serves as the Command Buffer. Massive data
accesses are made to local graphics memory when hardware acceleration is
@ -265,24 +265,24 @@ the on-die memory.
The CPU programs the GPU through GPU-specific commands, shown in
:numref:`graphics-arch`, using a producer-consumer model. The graphics
driver programs GPU commands into the Command Buffer, including primary
buffer and batch buffer, according to the high-level programming APIs
such as OpenGL* and DirectX*. Then, the GPU fetches and executes the
commands. The primary buffer (called a ring buffer) may chain other
batch buffers together. The primary buffer and ring buffer are used
interchangeably thereafter. The batch buffer is used to convey the
majority of the commands (up to ~98% of them) per programming model. A
register tuple (head, tail) is used to control the ring buffer. The CPU
submits the commands to the GPU by updating the tail, while the GPU
fetches commands from the head and then notifies the CPU by updating
the head after the commands have finished execution. Therefore, when
the GPU has executed all commands from the ring buffer, the head and
tail pointers are the same.
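
As a concrete illustration of this producer-consumer model, here is a minimal
sketch (illustrative only; register accesses are reduced to plain variables and
the ring size is an assumption) of how the CPU publishes commands by advancing
the tail, and how an idle ring is detected when head equals tail:

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   #define RING_SIZE 4096u       /* illustrative ring buffer size in bytes */

   /* Simplified model of the (head, tail) register tuple controlling the ring. */
   struct ring_regs {
       uint32_t head;            /* advanced by the GPU as it fetches commands */
       uint32_t tail;            /* advanced by the CPU as it submits commands */
   };

   /* CPU side: after copying 'bytes' of commands into the ring, publish them
    * to the GPU by moving the tail forward (wrapping at the ring size). */
   static void submit_commands(struct ring_regs *regs, uint32_t bytes)
   {
       regs->tail = (regs->tail + bytes) % RING_SIZE;
   }

   /* The ring is idle when the GPU's head has caught up with the CPU's tail. */
   static bool ring_is_idle(const struct ring_regs *regs)
   {
       return regs->head == regs->tail;
   }
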
Having introduced the GPU architecture abstraction, it is important for
us to understand how real-world graphics applications use the GPU
hardware so that we can virtualize it in VMs efficiently. To do so, we
characterized the usages of the four critical interfaces for some
representative GPU-intensive 3D workloads (the Phoronix Test Suite):
1) the Frame Buffer,
2) the Command Buffer,
@ -299,9 +299,9 @@ performance-critical resources, as shown in :numref:`access-patterns`.
When the applications are being loaded, lots of source vertices and
pixels are written by the CPU, so the Frame Buffer accesses occur in the
range of hundreds of thousands per second. Then at run-time, the CPU
programs the GPU through the commands to render the Frame Buffer, so
the Command Buffer accesses become the largest group (also in the
hundreds of thousands per second). PTE and I/O accesses are minor in both
load and run-time phases, ranging in the tens of thousands per second.
.. figure:: images/APL_GVT-g-access-patterns.png
@ -311,18 +311,18 @@ load and run-time phases ranging in tens of thousands per second.
   Access Patterns of Running 3D Workloads

High-Level Architecture
***********************

:numref:`gvt-arch` shows the overall architecture of GVT-g, based on the
ACRN hypervisor, with Service VM as the privileged VM, and multiple user
guests. A GVT-g device model working with the ACRN hypervisor
implements the policies of trap and passthrough. Each guest runs the
native graphics driver and can directly access performance-critical
resources: the Frame Buffer and Command Buffer, with resource
partitioning (as presented later). To protect privileged resources (that
is, the I/O registers and PTEs), corresponding accesses from the graphics
driver in user VMs are trapped and forwarded to the GVT device model in the
Service VM for emulation. The device model leverages i915 interfaces to access
the physical GPU.
@ -366,7 +366,7 @@ and gives the corresponding result back to the guest.
The vGPU Device Model provides the basic framework to do
trap-and-emulation, including MMIO virtualization, interrupt
virtualization, and display virtualization. It also handles and
processes all the requests internally (such as command scan and shadow),
schedules them in the proper manner, and finally submits them to
the Service VM i915 driver.
@ -384,9 +384,9 @@ Intel Processor Graphics implements two PCI MMIO BARs:
- **GTTMMADR BAR**: Combines both :term:`GGTT` modification range and Memory
  Mapped IO range. It is 16 MB on :term:`BDW`, with 2 MB used by MMIO, 6 MB
  reserved, and 8 MB allocated to GGTT. GGTT starts from
  :term:`GTTMMADR` + 8 MB. In this section, we focus on virtualization of
  the MMIO range, leaving discussion of GGTT virtualization for later.

- **GMADR BAR**: As the PCI aperture is used by the CPU to access tiled
  graphics memory, GVT-g partitions this aperture range among VMs for
@ -395,11 +395,11 @@ Intel Processor Graphics implements two PCI MMIO BARs:
A 2 MB virtual MMIO structure is allocated per vGPU instance.
All the virtual MMIO registers are emulated as simple in-memory
read-write; that is, the guest driver will read back the same value that was
programmed earlier. A common emulation handler (for example,
intel_gvt_emulate_read/write) is enough to handle such general
emulation requirements. However, some registers must be emulated with
specific logic; for example, they may be affected by changes of other states,
or need additional audit or translation when the virtual register is updated.
Therefore, a specific emulation handler must be installed for those
special registers.
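
The split between generic in-memory emulation and per-register handlers can be
pictured with a small sketch. This is illustrative only; the type, table, and
function names are assumptions and not the actual GVT code:

.. code-block:: c

   #include <stddef.h>
   #include <stdint.h>
   #include <string.h>

   #define VGPU_MMIO_SIZE (2u * 1024u * 1024u)    /* 2 MB virtual MMIO per vGPU */

   struct vgpu;                                   /* opaque vGPU instance */

   /* Optional per-register hook for registers that need specific logic. */
   typedef void (*mmio_write_hook)(struct vgpu *vgpu, uint32_t offset, uint32_t val);

   struct vgpu_mmio {
       uint8_t state[VGPU_MMIO_SIZE];              /* in-memory register file */
       mmio_write_hook hooks[VGPU_MMIO_SIZE / 4u]; /* NULL = plain read-write */
   };

   /* Generic handler: most registers are simple in-memory read-write, so the
    * guest reads back whatever it programmed earlier. */
   static uint32_t vgpu_mmio_read(struct vgpu_mmio *mmio, uint32_t offset)
   {
       uint32_t val;

       memcpy(&val, &mmio->state[offset], sizeof(val));
       return val;
   }

   static void vgpu_mmio_write(struct vgpu *vgpu, struct vgpu_mmio *mmio,
                               uint32_t offset, uint32_t val)
   {
       memcpy(&mmio->state[offset], &val, sizeof(val));

       /* Registers with side effects get a register-specific handler as well. */
       if (mmio->hooks[offset / 4u] != NULL)
           mmio->hooks[offset / 4u](vgpu, offset, val);
   }
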
@ -408,19 +408,19 @@ The graphics driver may have assumptions about the initial device state,
which is the state at the point when the BIOS transitions to the OS. To meet
the driver expectation, we need to provide an initial state of vGPU that
a driver may observe on a pGPU. So the host graphics driver is expected
to generate a snapshot of physical GPU state, which it does before the guest
driver's initialization. This snapshot is used as the initial vGPU state
by the device model.
PCI Configuration Space Virtualization
--------------------------------------

The PCI configuration space also must be virtualized in the device
model. Different implementations may choose to implement the logic
within the vGPU device model or in the default system device model (for
example, ACRN-DM). GVT-g emulates the logic in the device model.

Some information is vital for the vGPU device model, including
Guest PCI BAR, Guest PCI MSI, and Base of ACPI OpRegion.

Legacy VGA Port I/O Virtualization
@ -443,17 +443,17 @@ handle the GPU interrupt virtualization by itself. Virtual GPU
interrupts are categorized into three types:

- Periodic GPU interrupts are emulated by timers. However, a notable
  exception to this is the VBlank interrupt. Due to the demands of user space
  compositors such as Wayland, which requires a flip done event to be
  synchronized with a VBlank, this interrupt is forwarded from the Service VM
  to the User VM when the Service VM receives it from the hardware.

- Event-based GPU interrupts are emulated by the emulation logic (for
  example, AUX Channel Interrupt).

- GPU command interrupts are emulated by a command parser and workload
  dispatcher. The command parser marks out which GPU command interrupts
  are generated during the command execution, and the workload
  dispatcher injects those interrupts into the VM after the workload is
  finished.
@ -468,27 +468,27 @@ Workload Scheduler
------------------

The scheduling policy and workload scheduler are decoupled for
scalability reasons. For example, a future QoS enhancement will impact
only the scheduling policy, and any i915 interface change or hardware submission
interface change (from execlist to :term:`GuC`) will need only workload
scheduler updates.

The scheduling policy framework is the core of the vGPU workload
scheduling system. It controls all of the scheduling actions and
provides the developer with a generic framework for easy development of
scheduling policies. The scheduling policy framework controls the work
scheduling process without regard for how the workload is dispatched
or completed. All the detailed workload dispatching is hidden in the
workload scheduler, which is the actual executor of a vGPU workload.

The workload scheduler handles everything about one vGPU workload. Each
hardware ring is backed by one workload scheduler kernel thread. The
workload scheduler picks the workload from the current vGPU workload queue
and communicates with the virtual hardware submission interface to emulate the
“schedule-in” status for the vGPU. It performs context shadow, Command
Buffer scan and shadow, and PPGTT page table pin/unpin/out-of-sync before
submitting this workload to the host i915 driver. When the vGPU workload
is completed, the workload scheduler asks the virtual hardware submission
interface to emulate the “schedule-out” status for the vGPU. The VM
graphics driver then knows that a GPU workload is finished.
@ -504,11 +504,11 @@ Workload Submission Path
Software submits the workload using the legacy ring buffer mode on Intel
Processor Graphics before Broadwell, which is no longer supported by the
GVT-g virtual device model. A new hardware submission interface named
“Execlist” was introduced with Broadwell. With the new hardware submission
interface, software can achieve better programmability and easier
context management. In Intel GVT-g, the vGPU submits the workload
through the virtual hardware submission interface. Each workload in submission
will be represented as an ``intel_vgpu_workload`` data structure, a vGPU
workload, which will be put on a per-vGPU and per-engine workload queue
later after performing a few basic checks and verifications.
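
A heavily simplified picture of that flow, with hypothetical type and field
names (the real ``intel_vgpu_workload`` structure in the i915 GVT code carries
much more state), might look like this:

.. code-block:: c

   #include <stddef.h>

   /* Hypothetical, heavily simplified stand-in for intel_vgpu_workload. */
   struct vgpu_workload {
       int engine_id;                 /* which hardware ring this targets */
       unsigned long ring_tail;       /* guest tail value captured at submission */
       struct vgpu_workload *next;    /* per-vGPU, per-engine queue linkage */
   };

   struct workload_queue {
       struct vgpu_workload *head;
       struct vgpu_workload **tail;   /* points at the last 'next' pointer */
   };

   static void queue_init(struct workload_queue *q)
   {
       q->head = NULL;
       q->tail = &q->head;
   }

   /* After basic checks, the virtual submission interface appends the workload
    * for the per-engine workload scheduler thread to pick up later. */
   static void queue_workload(struct workload_queue *q, struct vgpu_workload *w)
   {
       w->next = NULL;
       *q->tail = w;
       q->tail = &w->next;
   }
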
@ -546,15 +546,15 @@ Direct Display Model
   Direct Display Model

In a typical automotive use case, there are two displays in the car
and each one must show one domain's content, with the two domains
being the Instrument cluster and the In Vehicle Infotainment (IVI). As
shown in :numref:`direct-display`, this can be accomplished through the direct
display model of GVT-g, where the Service VM and User VM are each assigned all hardware
planes of two different pipes. GVT-g has a concept of display owner on a
per hardware plane basis. If it determines that a particular domain is the
owner of a hardware plane, then it allows the domain's MMIO register write to
flip a frame buffer to that plane to go through to the hardware. Otherwise,
such writes are blocked by the GVT-g.

Indirect Display Model
@ -568,23 +568,23 @@ Indirect Display Model
   Indirect Display Model

For security or fastboot reasons, it may be determined that the User VM is
either not allowed to display its content directly on the hardware or it may
be too late before it boots up and displays its content. In such a
scenario, the responsibility of displaying content on all displays lies
with the Service VM. One of the use cases that can be realized is to display the
entire frame buffer of the User VM on a secondary display. GVT-g allows for this
model by first trapping all MMIO writes by the User VM to the hardware. A proxy
application can then capture the address in GGTT where the User VM has written
its frame buffer and, with the help of the Hypervisor and the Service VM's i915
driver, can convert the Guest Physical Addresses (GPAs) into Host
Physical Addresses (HPAs) before making a texture source or EGL image
out of the frame buffer and then either post-processing it further or
simply displaying it on a hardware plane of the secondary display.

GGTT-Based Surface Sharing
--------------------------
One of the major automotive use cases is called “surface sharing”. This
use case requires that the Service VM accesses an individual surface or a set of
surfaces from the User VM without having to access the entire frame buffer of
the User VM. Unlike the previous two models, where the User VM did not have to do
@ -608,13 +608,13 @@ compositor, Mesa, and i915 driver had to be modified.
This model has a major benefit and a major limitation. The
benefit is that since it builds on top of the indirect display model,
there are no special drivers necessary for it on either Service VM or User VM.
Therefore, any Real Time Operating System (RTOS) that uses
this model can simply do so without having to implement a driver, the
infrastructure for which may not be present in their operating system.

The limitation of this model is that video memory dedicated for a User VM is
generally limited to a couple of hundred MBs. This can easily be
exhausted by a few application buffers, so the number and size of buffers
are limited. Since it is not a highly-scalable model in general, Intel
recommends the Hyper DMA buffer sharing model, described next.

Hyper DMA Buffer Sharing
@ -628,12 +628,12 @@ Hyper DMA Buffer Sharing
   Hyper DMA Buffer Design

Another approach to surface sharing is Hyper DMA Buffer sharing. This
model extends the Linux DMA buffer sharing mechanism in which one driver is
able to share its pages with another driver within one domain.

Application buffers are backed by i915 Graphics Execution Manager
Buffer Objects (GEM BOs). As in GGTT surface
sharing, this model also requires compositor changes. The compositor of the
User VM requests i915 to export these application GEM BOs and then passes
them on to a special driver called the Hyper DMA Buf exporter whose job
is to create a scatter gather list of pages mapped by PDEs and PTEs and
@ -643,13 +643,13 @@ The compositor then shares this Hyper DMA Buf ID with the Service VM's Hyper DMA
Buf importer driver which then maps the memory represented by this ID in
the Service VM. A proxy application in the Service VM can then provide the ID of this driver
to the Service VM i915, which can create its own GEM BO. Finally, the application
can use it as an EGL image and do any post-processing required before
either providing it to the Service VM compositor or directly flipping it on a
hardware plane in the compositor's absence.

This model is highly scalable and can be used to share up to 4 GB worth
of pages. It is also not limited to sharing graphics buffers. Other
buffers for the IPU and others can also be shared with it. However, it
does require that the Service VM port the Hyper DMA Buffer importer driver. Also,
the Service VM must comprehend and implement the DMA buffer sharing model.
@ -671,8 +671,8 @@ Plane-Based Domain Ownership
Yet another mechanism for showing content of both the Service VM and User VM on the
same physical display is called plane-based domain ownership. Under this
model, both the Service VM and User VM are provided a set of hardware planes that they can
flip their contents onto. Since each domain provides its content, there
is no need for any extra composition to be done through the Service VM. The display
controller handles alpha blending contents of different domains on a
single pipe. This avoids additional complexity in either the Service VM or the User VM
@ -680,9 +680,9 @@ SW stack.
It is important to provide only specific planes and have them statically It is important to provide only specific planes and have them statically
assigned to different Domains. To achieve this, the i915 driver of both assigned to different Domains. To achieve this, the i915 driver of both
domains is provided a command line parameter that specifies the exact domains is provided a command-line parameter that specifies the exact
planes that this domain has access to. The i915 driver then enumerates planes that this domain has access to. The i915 driver then enumerates
only those HW planes and exposes them to its compositor. It is then left only those hardware planes and exposes them to its compositor. It is then left
to the compositor configuration to use these planes appropriately and to the compositor configuration to use these planes appropriately and
show the correct content on them. No other changes are necessary. show the correct content on them. No other changes are necessary.
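
A minimal sketch of what such a static plane assignment can look like on the
kernel command line. The parameter names (``i915.avail_planes_per_pipe`` and
``i915.domain_plane_owners``) and the mask values are illustrative assumptions
drawn from typical GVT-g setups; verify the exact names and bitmask encoding
against the i915 module documentation of your Service VM and User VM kernels.

.. code-block:: none

   # Service VM i915: owns only the planes enabled in its mask (example values)
   i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000

   # User VM i915: a complementary, non-overlapping plane mask (example value)
   i915.avail_planes_per_pipe=0x070F00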

While this model is quick to implement, it also has some drawbacks. First, since each domain
is responsible for showing the content on the screen, there is no
control of the User VM by the Service VM. If the User VM is untrusted, this could
potentially cause some unwanted content to be displayed. Also, there is
no post-processing capability, except that provided by the display
controller (for example, scaling, rotation, and so on). So each domain
must provide finished buffers with the expectation that alpha blending
with another domain will not cause any corruption or unwanted artifacts.

For the global graphics memory space, GVT-g uses graphics
memory resource partitioning and an address space ballooning mechanism.
For local graphics memory spaces, GVT-g implements per-VM local graphics
memory through a render context switch because local graphics memory is
accessible only by the GPU.

Global Graphics Memory
----------------------

GVT-g partitions the global graphics memory among VMs. Splitting the
CPU/GPU scheduling mechanism requires that the global graphics memory of
different VMs can be accessed by the CPU and the GPU simultaneously.
Consequently, GVT-g must, at any time, present each VM with its own
resource, leading to the resource partitioning approach for global
graphics memory, as shown in :numref:`mem-part`.

.. figure:: images/APL_GVT-g-mem-part.png
   :name: mem-part

   Memory Partition and Ballooning

The performance impact of reduced global graphics memory resources
due to memory partitioning is very limited according to various test
results.

GVT-g exposes the partitioning information to the VM graphics driver through the PVINFO
MMIO window. The graphics driver marks the other VMs' regions as
'ballooned', and reserves them as not being used from its graphics
memory allocator. Under this design, the guest view of global graphics
memory space is exactly the same as the host view, and the addresses
programmed by the driver, using guest physical addresses, can be used
directly by the hardware. Address space ballooning is different from
traditional memory ballooning techniques. Memory ballooning is for memory
usage control, whereas address space ballooning reserves special address ranges.

Per-VM Local Graphics Memory
----------------------------

GVT-g allows each VM to use the full local graphics memory spaces of its
own, similar to the virtual address spaces on the CPU. The local
graphics memory spaces are visible only to the Render Engine in the GPU.
Therefore, any valid local graphics memory address, programmed by a VM,
can be used directly by the GPU. The GVT-g device model switches the
local graphics memory spaces, between VMs, when switching render
ownership.

Per-VM Shadow PPGTT
-------------------

To support local graphics memory access passthrough, GVT-g implements
per-VM shadow local page tables. The local graphics memory is accessible
only from the Render Engine. The local page tables have two-level
paging structures, as shown in :numref:`per-vm-shadow`.

The first level, Page Directory Entries (PDEs), located in the global
page table, points to the second level, Page Table Entries (PTEs) in
system memory, so guest accesses to the PDE are trapped and emulated
through the implementation of the shared shadow global page table.

GVT-g also write-protects a list of guest PTE pages for each VM.

In the system, there are three different schedulers for the GPU:

- i915 User VM scheduler
- Mediator GVT scheduler
- i915 Service VM scheduler

Because the User VM always uses the host-based command submission (ELSP) model
and it never accesses the GPU or the Graphic Micro Controller (:term:`GuC`)
directly, its scheduler cannot do any preemption by itself.
The i915 scheduler does ensure that batch buffers are
submitted in dependency order; that is, if a compositor has to wait for
an application buffer to finish before its workload can be submitted to
the GPU, then the i915 scheduler of the User VM ensures that this happens.

The Service VM i915 scheduler, in contrast, can submit a high-priority
context to preempt the current running context and then wait for the GPU
engine to be idle.

While the identification of workloads to be preempted is decided by
customizable scheduling policies, the i915 scheduler simply submits a
preemption request to the :term:`GuC` high-priority queue once a candidate for
preemption is identified. Based on the hardware's ability to preempt (on an
Apollo Lake SoC, 3D workload is preemptible on a 3D primitive level with
some exceptions), the currently executing workload is saved and
preempted. The :term:`GuC` informs the driver of the preemption event with an
interrupt. After handling the interrupt, the driver submits the
high-priority workload through the normal priority :term:`GuC` queue. As such,
the normal priority :term:`GuC` queue is used for actual execbuf submission most
of the time, with the high-priority :term:`GuC` queue being used only for the
preemption of lower-priority workloads.

Scheduling policies are customizable and left to customers to change if
they are not satisfied with the built-in i915 driver policy, where all
workloads of the Service VM are considered higher priority than those of the
User VM. This policy can be enforced through a Service VM i915 kernel command-line
parameter and can replace the default in-order command submission (no
preemption) policy.

AcrnGT
******

ACRN is a flexible, lightweight reference hypervisor, built with
real-time and safety-criticality in mind, optimized to streamline
embedded development through an open-source platform.

AcrnGT is the GVT-g implementation on the ACRN hypervisor. It adapts
the MPT interface of GVT-g onto ACRN by using the kernel APIs provided
by ACRN.

- It communicates with the ACRN hypervisor through hyper-calls.

- It provides user space interfaces through ``sysfs`` to the user space
  ACRN-DM so that DM can manage the lifecycle of the virtual GPUs.
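
A quick way to confirm that this ``sysfs`` interface is present in your
Service VM is to search for it from a shell. The node layout is kernel-version
dependent, so treat the ``/sys/kernel/gvt/`` path below as an assumption to
verify rather than a guaranteed location:

.. code-block:: none

   # Locate the AcrnGT/GVT sysfs entries exposed by the Service VM kernel
   find /sys/kernel -maxdepth 2 -iname '*gvt*' 2>/dev/null

   # List the nodes the ACRN-DM uses to manage virtual GPU lifecycles
   ls -l /sys/kernel/gvt/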

AcrnGT in DM
============


options:

   [--acpidev_pt HID] [--mmiodev_pt MMIO_regions]
   [--vtpm2 sock_path] [--virtio_poll interval] [--mac_seed seed_string]
   [--cpu_affinity pCPUs] [--lapic_pt] [--rtvm] [--windows]
   [--debugexit] [--logger-setting param_setting] [--pm_notify_channel channel]
   [--pm_by_vuart vuart_node] [--ssram] <vm>
   -A: create ACPI tables
   -B: bootargs for kernel

   00:04.0 Ethernet controller: Red Hat, Inc. Virtio network device
   00:05.0 Serial controller: Red Hat, Inc. Virtio console

ACPI Virtualization
*******************


A typical industry usage would include one Windows HMI + one RT VM:

- Windows VM that runs the Human Machine Interface (HMI) applications
- RT VM that runs a specific RTOS on it to handle
  real-time workloads such as PLC control

ACRN supports a Windows* Guest OS for such HMI capability. ACRN continues to add
features to enhance its real-time performance to meet hard-RT key performance
indicators for its RT VM:

- Cache Allocation Technology (CAT)
- Memory Bandwidth Allocation (MBA)
- LAPIC passthrough
- Polling mode driver
- Always Running Timer (ART)
- Intel Time Coordinated Computing (TCC) features, such as split lock
  detection and cache locking

Hardware Requirements
*********************


emulation is discussed in :ref:`hld-io-emulation`, para-virtualization
is discussed in :ref:`hld-virtio-devices`, and device passthrough is
discussed here.

.. rst-class:: rst-columns2

.. contents::
   :depth: 1
   :local:

--------

In the ACRN project, device emulation means emulating all existing
hardware resources through a software component device model running in
the Service OS (SOS). Device emulation must maintain the same SW

the current VM; otherwise, none of them should be assigned to the
current VM. A device that violates the rule will be rejected for
passthrough. The checking logic is implemented in the Device Model and is not
in the scope of the hypervisor.

The platform-specific GSI information shall be filled in
``devicemodel/hw/pci/platform_gsi_info.c`` for the target platform to activate
the checking of GSI sharing violations.
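
Before assigning a device to a post-launched VM, it can also help to check from
the Service VM which devices currently share the same interrupt line. This is
only an illustrative check with standard Linux tools; the authoritative
validation remains the Device Model logic described above:

.. code-block:: none

   # Show the legacy interrupt pin/line reported by each PCI function
   lspci -vv | grep -E '^[0-9a-f]{2}:|Interrupt:'

   # See which devices ended up sharing the same IRQ number at runtime
   cat /proc/interrupts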

.. _PCIe PTM implementation:

PCIe Precision Time Measurement (PTM)
*************************************

The PCI Express (PCIe) specification defines a Precision Time Measurement (PTM)
mechanism that enables time coordination and synchronization of events across
multiple PCI components with independent local time clocks within the same
system. Intel supports PTM on several of its systems and devices, such as PTM
root capabilities support on Whiskey Lake and Tiger Lake PCIe root ports, and
PTM device support on an Intel I225-V/I225-LM family Ethernet controller. For
further details on PTM, please refer to the `PCIe specification
<https://pcisig.com/specifications>`_.
ACRN adds PCIe root port emulation in the hypervisor to support the PTM feature
and emulates a simple PTM hierarchy. ACRN enables PTM in a Guest VM if the user
sets the ``enable_ptm`` option when passing through a device to a post-launched
VM. When you enable PTM, the passthrough device is connected to a virtual
root port instead of the host bridge.
By default, the :ref:`vm.PTM` option is disabled in ACRN VMs. Use the
:ref:`ACRN configuration tool <acrn_configuration_tool>` to enable PTM
in the scenario XML file that configures the Guest VM.
Here is an example launch script that configures a supported Ethernet card for
passthrough and enables PTM on it:
.. code-block:: bash
:emphasize-lines: 9-11,17
declare -A passthru_vpid
declare -A passthru_bdf
passthru_vpid=(
["ethptm"]="8086 15f2"
)
passthru_bdf=(
["ethptm"]="0000:aa:00.0"
)
echo ${passthru_vpid["ethptm"]} > /sys/bus/pci/drivers/pci-stub/new_id
echo ${passthru_bdf["ethptm"]} > /sys/bus/pci/devices/${passthru_bdf["ethptm"]}/driver/unbind
echo ${passthru_bdf["ethptm"]} > /sys/bus/pci/drivers/pci-stub/bind
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 3,virtio-blk,uos-test.img \
-s 4,virtio-net,tap0 \
-s 5,virtio-console,@stdio:stdio_port \
-s 6,passthru,a9/00/0,enable_ptm \
--ovmf /usr/share/acrn/bios/OVMF.fd
And here is the bus hierarchy from the User VM (as shown by the ``lspci`` command)::
lspci -tv
-[0000:00]-+-00.0 Network Appliance Corporation Device 1275
+-03.0 Red Hat, Inc. Virtio block device
+-04.0 Red Hat, Inc. Virtio network device
+-05.0 Red Hat, Inc. Virtio console
\-06.0-[01]----00.0 Intel Corporation Device 15f2
PTM Implementation Notes
========================

To simplify the PTM implementation, the virtual root port supports only the
most basic PCIe configuration and operation, in addition to PTM capabilities.

In Guest VM post-launched scenarios, you enable PTM by setting the
``enable_ptm`` option for the passthrough device (as shown above).

.. figure:: images/PTM-hld-PTM-flow.png
   :align: center
   :width: 700
   :name: ptm-flow

   PTM-enabling workflow in post-launched VM

As shown in :numref:`ptm-flow`, PTM is enabled in the root port during the
hypervisor startup. The Device Model (DM) then checks whether the pass-through device
supports PTM requestor capabilities and whether the corresponding root port
supports PTM root capabilities, and performs some other sanity checks. If an
error is detected during these checks, the error is reported and ACRN does not
enable PTM in the Guest VM. This doesn't prevent the user from launching the Guest
VM and passing through the device to the Guest VM. If no error is detected,
the device model uses the ``add_vdev`` hypercall to add a virtual root port (VRP),
acting as the PTM root, to the Guest VM before passing through the device to the Guest VM.

.. figure:: images/PTM-hld-PTM-passthru.png
   :align: center
   :width: 700
   :name: ptm-vrp

   PTM-enabled PCI device pass-through to post-launched VM

:numref:`ptm-vrp` shows that, after enabling PTM, the passthrough device connects to
the virtual root port instead of the virtual host bridge.
To use PTM in a virtualized environment, you may want to first verify that PTM
is supported by the device and is enabled on the bare metal machine.
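
One way to do this check on the bare metal machine is to look for the PTM
extended capability in the ``lspci`` output. The BDF shown below is the one used
in the example earlier in this section; replace it with your device's address:

.. code-block:: none

   # List devices that expose the PTM extended capability
   sudo lspci -vv | grep -i 'Precision Time Measurement'

   # Inspect one specific device, including the PTM control bits
   sudo lspci -vv -s aa:00.0 | grep -A 4 'Precision Time Measurement'
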
If supported, follow these steps to enable PTM in the post-launched guest VM:
1. Make sure that PTM is enabled in the guest kernel. In the Linux kernel, for example,
set ``CONFIG_PCIE_PTM=y``.
2. Not every PCI device supports PTM. One example that does is the Intel I225-V
Ethernet controller. If you pass through this card to the guest VM, make sure the guest VM
uses a version of the IGC driver that supports PTM.
3. In the device model launch script, add the ``enable_ptm`` option to the
passthrough device. For example:
.. code-block:: bash
:emphasize-lines: 5
$ acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 3,virtio-blk,uos-test.img \
-s 4,virtio-net,tap0 \
-s 5,virtio-console,@stdio:stdio_port \
-s 6,passthru,a9/00/0,enable_ptm \
--ovmf /usr/share/acrn/bios/OVMF.fd \
4. You can check that PTM is correctly enabled on the guest by displaying the PCI
   bus hierarchy on the guest using the ``lspci`` command:
.. code-block:: bash
:emphasize-lines: 12,20
lspci -tv
-[0000:00]-+-00.0 Network Appliance Corporation Device 1275
+-03.0 Red Hat, Inc. Virtio block device
+-04.0 Red Hat, Inc. Virtio network device
+-05.0 Red Hat, Inc. Virtio console
\-06.0-[01]----00.0 Intel Corporation Device 15f2
sudo lspci -vv # (Only relevant output is shown)
00:00.0 Host bridge: Network Appliance Corporation Device 1275
00:06.0 PCI bridge: Intel Corporation Sunrise Point-LP PCI Express Root Port #5 (rev 02) (prog-if 00 [Normal decode])
. . .
Capabilities: [100 v1] Precision Time Measurement
PTMCap: Requester:- Responder:+ Root:+
PTMClockGranularity: 4ns
PTMControl: Enabled:+ RootSelected:+
PTMEffectiveGranularity: 4ns
Kernel driver in use: pcieport
01:00.0 Ethernet controller: Intel Corporation Device 15f2 (rev 01)
. . .
Capabilities: [1f0 v1] Precision Time Measurement
PTMCap: Requester:+ Responder:- Root:-
PTMClockGranularity: 4ns
PTMControl: Enabled:+ RootSelected:-
PTMEffectiveGranularity: 4ns
Kernel driver in use: igc
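
In addition to the ``lspci`` output above, you can confirm from inside the guest
that the kernel was built with PTM support (step 1). Whether the running kernel
exposes its configuration through ``/boot/config-*`` or ``/proc/config.gz``
depends on the distribution, so both checks are shown:

.. code-block:: none

   grep CONFIG_PCIE_PTM /boot/config-$(uname -r)

   # Alternative when the kernel config is exposed through procfs
   zcat /proc/config.gz | grep CONFIG_PCIE_PTM
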
API Data Structures and Interfaces
**********************************
The following are common APIs provided to initialize interrupt remapping for
VMs:

.. doxygenfunction:: ptirq_intx_pin_remap


.. doxygenfunction:: timer_expired
   :project: Project ACRN

.. doxygenfunction:: timer_is_started
   :project: Project ACRN

.. doxygenfunction:: add_timer
   :project: Project ACRN

.. doxygenfunction:: calibrate_tsc
   :project: Project ACRN

.. doxygenfunction:: cpu_ticks
   :project: Project ACRN

.. doxygenfunction:: cpu_tickrate
   :project: Project ACRN

.. doxygenfunction:: us_to_ticks
   :project: Project ACRN

.. doxygenfunction:: ticks_to_ms
   :project: Project ACRN

.. doxygenfunction:: udelay
   :project: Project ACRN


You can also build ACRN with your customized scenario:

* Build with your own scenario configuration on the ``nuc11tnbi5``, assuming the
  scenario is defined in ``/path/to/scenario.xml``:

  .. code-block:: none

     make BOARD=nuc11tnbi5 SCENARIO=/path/to/scenario.xml

* Build with your own board and scenario configuration, assuming the board and
  scenario XML files are ``/path/to/board.xml`` and ``/path/to/scenario.xml``:
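
  A sketch of the corresponding command, following the same pattern as the
  previous example (the paths are placeholders for your own XML files):

  .. code-block:: none

     make BOARD=/path/to/board.xml SCENARIO=/path/to/scenario.xml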


.. _gsg:
.. _rt_industry_ubuntu_setup:

Getting Started Guide
#####################

.. contents::
   :local:
   :depth: 1

Introduction
************

This document describes the various steps to set up a system based on the following components:

- ACRN: Industry scenario
- Service VM OS: Ubuntu (running off the NVMe storage device)
- Real-Time VM (RTVM) OS: Ubuntu modified to use a PREEMPT-RT kernel (running off the
  SATA storage device)
- Post-launched User VM OS: Windows

Verified Version
****************

- Ubuntu version: **18.04**
- GCC version: **7.5**
- ACRN-hypervisor branch: **release_2.5 (v2.5)**
- ACRN-Kernel (Service VM kernel): **release_2.5 (v2.5)**
- RT kernel for Ubuntu User OS: **4.19/preempt-rt (4.19.72-rt25)**
- HW: Intel NUC 11 Pro Kit NUC11TNHi5 (`NUC11TNHi5
  <https://ark.intel.com/content/www/us/en/ark/products/205594/intel-nuc-11-pro-kit-nuc11tnhi5.html>`_)

.. note:: This NUC is based on the
   `NUC11TNBi5 board <https://ark.intel.com/content/www/us/en/ark/products/205596/intel-nuc-11-pro-board-nuc11tnbi5.html>`_.
   The ``BOARD`` parameter that is used to build ACRN for this NUC is therefore ``nuc11tnbi5``.

Prerequisites
*************

- Monitors with HDMI interface (DP interface is optional)
- USB keyboard and mouse
- Ethernet cables
- A grub-2.04-7 bootloader with the following patch:
http://git.savannah.gnu.org/cgit/grub.git/commit/?id=0f3f5b7c13fa9b677a64cf11f20eca0f850a2b20:
multiboot2: Set min address for mbi allocation to 0x1000

.. rst-class:: numbered-step

Hardware Connection
*******************

Connect the NUC11TNHi5 with the appropriate external devices.

#. Connect the NUC11TNHi5 NUC to a monitor via an HDMI cable.

#. Connect the mouse, keyboard, Ethernet cable, and power supply cable to
   the NUC11TNHi5 board.

#. Insert the Ubuntu 18.04 USB boot disk into the USB port.

.. figure:: images/rt-ind-ubun-hw-1.png

.. figure:: images/rt-ind-ubun-hw-2.png

.. rst-class:: numbered-step

Install the Ubuntu User VM (RTVM) on the SATA Disk
**************************************************

.. note:: The NUC11TNHi5 NUC contains both an NVMe and SATA disk.
   Before you install the Ubuntu User VM on the SATA disk, either
   remove the NVMe disk or delete its blocks.

#. Insert the Ubuntu USB boot disk into the NUC11TNHi5 machine.

#. Power on the machine, then press F10 to select the USB disk as the boot
   device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
   label depends on the brand/make of the USB drive.

#. Install the Ubuntu OS.

#. Configure the ``/dev/sda`` partition. Refer to the diagram below:

   .. figure:: images/native-ubuntu-on-SATA-3.png

   a. Select the ``/dev/sda`` partition, not ``/dev/nvme0p1``.

   b. Select ``/dev/sda`` **ATA KINGSTON SA400S3** as the device for the
      bootloader installation. Note that the label depends on the SATA disk used.

#. Complete the Ubuntu installation on ``/dev/sda``.

Install the Ubuntu Service VM on the NVMe Disk
**********************************************

.. note:: Before you install the Ubuntu Service VM on the NVMe disk,
   please remove the SATA disk.

#. Insert the Ubuntu USB boot disk into the NUC11TNHi5 machine.

#. Power on the machine, then press F10 to select the USB disk as the boot
   device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
   label depends on the brand/make of the USB drive.

#. Install the Ubuntu OS.

#. Configure the ``/dev/nvme0n1`` partition. Refer to the diagram below:

   .. figure:: images/native-ubuntu-on-NVME-3.png

   a. Select the ``/dev/nvme0n1`` partition, not ``/dev/sda``.

   b. Select ``/dev/nvme0n1`` **Lenovo SL700 PCI-E M.2 256G** as the device for the
      bootloader installation. Note that the label depends on the NVMe disk used.

#. Complete the Ubuntu installation and reboot the system.

Build the ACRN Hypervisor on Ubuntu
===================================

.. code-block:: none

   $ sudo apt install gcc \
     git \
     make \
     libssl-dev \
     libsystemd-dev \
     libevent-dev \
     libxml2-dev \
     libxml2-utils \
     libusb-1.0-0-dev \
     python3 \
     python3-pip \
     liblz4-tool \
     flex \
     bison \
     xsltproc \
     clang-format

   $ sudo pip3 install lxml xmlschema

#. Clone the ACRN hypervisor repository:

   .. code-block:: none

      $ git clone https://github.com/projectacrn/acrn-hypervisor
      $ cd acrn-hypervisor

#. Switch to the v2.5 version:

   .. code-block:: none

      $ git checkout v2.5

#. Build ACRN:

   .. code-block:: none

      $ make BOARD=nuc11tnbi5 SCENARIO=industry
      $ sudo make install
      $ sudo mkdir -p /boot/acrn
      $ sudo cp build/hypervisor/acrn.bin /boot/acrn/

.. _build-and-install-ACRN-kernel:

Build and Install the ACRN Kernel
=================================

.. code-block:: none

   $ git checkout v2.5
   $ cp kernel_config_uefi_sos .config
   $ make olddefconfig
   $ make all

Install the Service VM Kernel and Modules
=========================================

.. code-block:: none

   $ sudo make modules_install
   $ sudo cp arch/x86/boot/bzImage /boot/bzImage

.. _gsg_update_grub:

Update Grub for the Ubuntu Service VM
=====================================
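
A minimal sketch of a GRUB menu entry that boots the ACRN hypervisor with the
Service VM kernel as its first module. The filesystem UUID and root
``PARTUUID`` are placeholders that must be replaced with the values from your
own NVMe installation; the entry is typically added to ``/etc/grub.d/40_custom``
before running ``sudo update-grub``:

.. code-block:: none

   menuentry "ACRN multiboot2" {
      load_video
      insmod gzio
      insmod part_gpt
      insmod ext2
      search --no-floppy --fs-uuid --set <UUID-of-boot-filesystem>
      echo 'Loading ACRN hypervisor ...'
      multiboot2 /boot/acrn/acrn.bin root=PARTUUID="<PARTUUID-of-rootfs>"
      module2 /boot/bzImage Linux_bzImage
   }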

Additional Settings in the Service VM
=====================================

BIOS Settings of GVT-d for WaaG
-------------------------------
.. note::
Skip this step if you are using a Kaby Lake (KBL) Intel NUC.
Go to **Chipset** -> **System Agent (SA) Configuration** -> **Graphics
Configuration** and make the following settings:
Set **DVMT Pre-Allocated** to **64MB**:
.. figure:: images/DVMT-reallocated-64mb.png
Set **PM Support** to **Enabled**:
.. figure:: images/PM-support-enabled.png
Use OVMF to Launch the User VM
------------------------------
The User VM will be launched by OVMF, so copy it to the specific folder:
.. code-block:: none
$ sudo mkdir -p /usr/share/acrn/bios
$ sudo cp /home/acrn/work/acrn-hypervisor/devicemodel/bios/OVMF.fd /usr/share/acrn/bios

Build and Install the RT Kernel for the Ubuntu User VM
------------------------------------------------------

Follow these instructions to build the RT kernel.

.. code-block:: none

   $ git clone https://github.com/projectacrn/acrn-kernel
   $ cd acrn-kernel
   $ git checkout origin/4.19/preempt-rt
   $ make mrproper

.. note::

.. code-block:: none

   $ sudo mount /dev/sda2 /mnt
   $ sudo cp arch/x86/boot/bzImage /mnt/boot/
   $ sudo tar -zxvf linux-4.19.72-rt25-x86.tar.gz -C /mnt/
   $ sudo cd ~ && sudo umount /mnt && sync

.. rst-class:: numbered-step

Launch the RTVM
***************

.. code-block:: none

   $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh

.. note::
   If using a KBL NUC, the script must be adapted to match the BDF on the actual HW platform

Recommended BIOS Settings for RTVM
----------------------------------
.. csv-table::
:widths: 15, 30, 10
"Hyper-threading", "Intel Advanced Menu -> CPU Configuration", "Disabled"
"Intel VMX", "Intel Advanced Menu -> CPU Configuration", "Enable"
"Speed Step", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
"Speed Shift", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
"C States", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
"RC6", "Intel Advanced Menu -> Power & Performance -> GT - Power Management", "Disabled"
"GT freq", "Intel Advanced Menu -> Power & Performance -> GT - Power Management", "Lowest"
"SA GV", "Intel Advanced Menu -> Memory Configuration", "Fixed High"
"VT-d", "Intel Advanced Menu -> System Agent Configuration", "Enable"
"Gfx Low Power Mode", "Intel Advanced Menu -> System Agent Configuration -> Graphics Configuration", "Disabled"
"DMI spine clock gating", "Intel Advanced Menu -> System Agent Configuration -> DMI/OPI Configuration", "Disabled"
"PCH Cross Throttling", "Intel Advanced Menu -> PCH-IO Configuration", "Disabled"
"Legacy IO Low Latency", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Enabled"
"PCI Express Clock Gating", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
"Delay Enable DMI ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
"DMI Link ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
"Aggressive LPM Support", "Intel Advanced Menu -> PCH-IO Configuration -> SATA And RST Configuration", "Disabled"
"USB Periodic SMI", "Intel Advanced Menu -> LEGACY USB Configuration", "Disabled"
"ACPI S3 Support", "Intel Advanced Menu -> ACPI Settings", "Disabled"
"Native ASPM", "Intel Advanced Menu -> ACPI Settings", "Disabled"
.. note:: BIOS settings depend on the platform and BIOS version; some may
not be applicable.

Recommended Kernel Cmdline for RTVM
-----------------------------------
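
As an illustration of the kind of settings this section recommends, the sketch
below shows a common PREEMPT-RT oriented kernel command line for the RTVM. The
exact set of options, CPU numbers, and the root device depend on your platform
and on the recommendations in this guide, so treat these values as assumptions
to adapt rather than the verified list:

.. code-block:: none

   root=/dev/sda2 rw rootwait nohpet console=hvc0 console=ttyS0 \
   no_timer_check ignore_loglevel log_buf_len=16M \
   clocksource=tsc tsc=reliable processor.max_cstate=0 \
   intel_idle.max_cstate=0 intel_pstate=disable mce=ignore_ce audit=0 \
   isolcpus=nohz,domain,1 nohz_full=1 rcu_nocbs=1 irqaffinity=0 idle=poll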

Follow the steps below to allocate all housekeeping tasks to core 0:

#. Prepare the RTVM launch script

   Follow the `Passthrough a hard disk to RTVM`_ section to make adjustments to
   the ``/usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh`` launch script.

#. Launch the RTVM:

   .. code-block:: none

      $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh

#. Log in to the RTVM as root and run the script as below:

Run Cyclictest
--------------

#. Install the ``rt-tests`` tool:

   .. code-block:: none

      sudo apt install rt-tests

#. Use the following command to start cyclictest:

   .. code-block:: none

      sudo cyclictest -a 1 -p 80 -m -N -D 1h -q -H 30000 --histfile=test.log

   Parameter descriptions:

Launch the Windows VM
*********************

Follow this :ref:`guide <using_windows_as_uos>` to prepare the Windows
image file and then reboot.

#. Modify the ``launch_uos_id1.sh`` script as follows and then launch
the Windows VM as one of the post-launched standard VMs:
.. code-block:: none
:emphasize-lines: 2
acrn-dm -A -m $mem_size -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio \
-s 2,passthru,0/2/0,gpu \
-s 3,virtio-blk,./win10-ltsc.img \
-s 4,virtio-net,tap0 \
--ovmf /usr/share/acrn/bios/OVMF.fd \
--windows \
$vm_name

Troubleshooting
***************

Add the following command line to the ``launch_hard_rt_vm.sh`` script before launching it:

.. code-block:: none

   --rtvm \
   --virtio_poll 1000000 \
   -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
   -s 2,passthru,00/17/0 \
   -s 3,virtio-console,@stdio:stdio_port \
   -s 8,virtio-net,tap0 \
   --ovmf /usr/share/acrn/bios/OVMF.fd \

Passthrough a Hard Disk to RTVM
-------------------------------

#. Use the ``lspci`` command to determine the BDF and device ID of the SATA
   controller on the Service VM:

   .. code-block:: none

      # lspci -nn | grep -i sata
      00:17.0 SATA controller [0106]: Intel Corporation Device [8086:a0d3] (rev 20)

#. Modify the script to use the correct SATA device IDs and bus number:

   .. code-block:: none

      # vim /usr/share/acrn/launch_hard_rt_vm.sh

      passthru_vpid=(
      ["eth"]="8086 15f2"
      ["sata"]="8086 a0d3"
      ["nvme"]="126f 2263"
      )
      passthru_bdf=(
      ["eth"]="0000:58:00.0"
      ["sata"]="0000:00:17.0"
      ["nvme"]="0000:01:00.0"
      )

      # SATA pass-through

      --ovmf /usr/share/acrn/bios/OVMF.fd \
      hard_rtvm

#. Upon deployment completion, launch the RTVM directly onto your NUC11TNHi5:

   .. code-block:: none

      $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh


Known Limitations
*****************

Platforms with multiple PCI segments are not supported.

ACRN assumes the following conditions are satisfied from the Platform BIOS:

* All the PCI device BARs must be assigned resources, including SR-IOV VF BARs if a device supports it.

* Bridge windows for PCI bridge devices and the resources for root bus must be programmed with values
  that enclose resources used by all the downstream devices.

* There should be no conflict in resources among the PCI devices or with other platform devices.

Tested Platforms by ACRN Release
********************************

These platforms have been tested by the development team with the noted ACRN
release version and may not work as expected on later ACRN releases.

.. _NUC11TNHi5:
   https://ark.intel.com/content/www/us/en/ark/products/205594/intel-nuc-11-pro-kit-nuc11tnhi5.html

.. _NUC6CAYH:
   https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc6cayh.html

.. _NUC7i7DNH:
   https://ark.intel.com/content/www/us/en/ark/products/130393/intel-nuc-kit-nuc7i7dnhe.html

.. _WHL-IPC-I7:
   http://www.maxtangpc.com/industrialmotherboards/142.html#parameters

.. _UP2-N3350:
.. _UP2-N4200:
.. _UP2-x5-E3940:
.. _UP2 Shop:
   https://up-shop.org/home/270-up-squared.html

For general instructions setting up ACRN on supported hardware platforms, visit the :ref:`gsg` page.

.. list-table:: Supported Target Platforms
   :widths: 20 20 12 5 5
   :header-rows: 1

   * - Intel x86 Platform Family
     - Product / Kit Name
     - Board configuration
     - ACRN Release
     - Graphics

   * - **Tiger Lake**
     - `NUC11TNHi5`_ |br| (Board: NUC11TNBi5)
     - :acrn_file:`nuc11tnbi5.xml <misc/config_tools/data/nuc11tnbi5/nuc11tnbi5.xml>`
     - v2.5
     - GVT-d

   * - **Whiskey Lake**
     - `WHL-IPC-I7`_ |br| (Board: WHL-IPC-I7)
     - :acrn_file:`whl-ipc-i7.xml <misc/config_tools/data/whl-ipc-i7/whl-ipc-i7.xml>`
     - v2.0
     - GVT-g

   * - **Kaby Lake** |br| (Dawson Canyon)
     - `NUC7i7DNH`_ |br| (board: NUC7i7DNB)
     - :acrn_file:`nuc7i7dnb.xml <misc/config_tools/data/nuc7i7dnb/nuc7i7dnb.xml>`
     - v1.6.1
     - GVT-g

   * - **Apollo Lake**
     - `NUC6CAYH`_, |br| `UP2-N3350`_, `UP2-N4200`_, |br| `UP2-x5-E3940`_
     -
     - v1.0
     - GVT-g

If an XML file is not provided by project ACRN for your board, we recommend you
use the board inspector tool to generate an XML file specifically for your board.
Refer to the :ref:`acrn_configuration_tool` for more details on using the board inspector
tool.
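
A sketch of how a board XML is typically generated with the board inspector on
the target machine. The tool location and entry point shown here
(``board_inspector.py`` under ``misc/config_tools/board_inspector``) and the
argument form are assumptions that may differ between releases; confirm them
against :ref:`acrn_configuration_tool` before use:

.. code-block:: none

   cd acrn-hypervisor/misc/config_tools/board_inspector
   sudo python3 board_inspector.py my_board

   # The generated my_board.xml can then be passed to the build as BOARD=
   make BOARD=/path/to/my_board.xml SCENARIO=/path/to/scenario.xml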

Tested Hardware Specifications Detail
*************************************

.. list-table::
   :widths: 15 20 65
   :header-rows: 1

   * - Platform (Intel x86)
     - Product/Kit Name
     - Hardware Description
   * - **Tiger Lake**
     - NUC11TNHi5 |br| (Board: NUC11TNBi5)
     - - Processor: Intel |copy| Core |trade| i5-1135G7 CPU (8M Cache, up to 4.2 GHz)
       - Graphics: Dual HDMI 2.0b w/HDMI CEC, Dual DP 1.4a via Type C; supports 4 displays
       - System memory: Two DDR4 SO-DIMM sockets (up to 64 GB, 3200 MHz), 1.2V
       - Storage capabilities: One M.2 connector for storage, 22x80 NVMe (M), 22x42 SATA (B)
       - Serial port: Yes
   * - **Whiskey Lake**
     - WHL-IPC-I7 |br| (Board: WHL-IPC-I7)
     - - Processor: Intel |copy| Core |trade| i7-8565U CPU @ 1.80GHz (4C8T)
       - Graphics: HD Graphics 610/620; one HDMI* 1.4a port supporting 4K at 60 Hz
       - System memory: Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V
       - Storage capabilities: One M.2 connector for Wi-Fi; one M.2 connector for a
         3G/4G module supporting LTE Category 6 and above; one M.2 connector for
         2242 SSD; two SATA3 ports (only one if Celeron onboard)
       - Serial port: Yes
   * - **Kaby Lake** |br| (Dawson Canyon)
     - NUC7i7DNH |br| (Board: NUC7i7DNB)
     - - Processor: Intel |copy| Core |trade| i7-8650U Processor (8M Cache, up to 4.2 GHz)
       - Graphics: Dual HDMI 2.0a, 4-lane eDP 1.4; supports 2 displays
       - System memory: Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V
       - Storage capabilities: One M.2 connector supporting 22x80 M.2 SSD; one M.2
         connector supporting 22x30 M.2 card; one SATA3 port for connection to
         2.5" HDD or SSD
       - Serial port: Yes
   * - **Apollo Lake** |br| (Arches Canyon)
     - NUC6CAYH |br| (Board: NUC6CAYB)
     - - Processor: Intel |copy| Celeron |trade| CPU J3455 @ 1.50GHz (4C4T)
       - Graphics: Intel |copy| HD Graphics 500; VGA (HDB15); HDMI 2.0
       - System memory: Two DDR3L SO-DIMM sockets (up to 8 GB, 1866 MHz), 1.35V
       - Storage capabilities: SDXC slot with UHS-I support on the side; one SATA3
         port for connection to 2.5" HDD or SSD (up to 9.5 mm thickness)
       - Serial port: No
   * - **Apollo Lake**
     - UP2 - N3350, |br| UP2 - N4200, |br| UP2 - x5-E3940
     - - Processor: Intel |copy| Celeron |trade| N3350 (2C2T, up to 2.4 GHz);
         Intel |copy| Pentium |trade| N4200 (4C4T, up to 2.5 GHz);
         Intel |copy| Atom |trade| x5-E3940 (4C4T, up to 1.8 GHz) / x7-E3950 (4C4T, up to 2.0 GHz)
       - Graphics: 2GB (single channel) LPDDR4; 4GB/8GB (dual channel) LPDDR4
       - System memory: Intel |copy| Gen 9 HD, supporting 4K Codec Decode and Encode
         for HEVC4, H.264, VP8
       - Storage capabilities: 32 GB / 64 GB / 128 GB eMMC
       - Serial port: Yes
| | | Graphics | - Intel® HD Graphics 620 | +---------------------------+------------------------+------------------------+------------------------------------------------------------+
| | | | - Two HDMI\* 2.0a ports supporting 4K at 60 Hz |
| | +------------------------+-----------------------------------------------------------+
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2133 MHz), 1.2V |
| | +------------------------+-----------------------------------------------------------+
| | | Storage capabilities | - One M.2 connector supporting 22x80 M.2 SSD |
| | | | - One M.2 connector supporting 22x30 M.2 card |
| | | | (NUC7i5DNBE only) |
| | | | - One SATA3 port for connection to 2.5" HDD or SSD |
| | | | (up to 9.5 mm thickness) (NUC7i5DNHE only) |
| | +------------------------+-----------------------------------------------------------+
| | | Serial Port | - Yes |
+--------------------------------+------------------------+------------------------+-----------------------------------------------------------+
| | **Whiskey Lake** | | WHL-IPC-I5 | Processor | - Intel® Core™ i5-8265U CPU @ 1.60GHz (4C8T) |
| | | | (Board: WHL-IPC-I5) | | |
| | +------------------------+-----------------------------------------------------------+
| | | Graphics | - HD Graphics 610/620 |
| | | | - ONE HDMI\* 1.4a ports supporting 4K at 60 Hz |
| | +------------------------+-----------------------------------------------------------+
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V |
| | +------------------------+-----------------------------------------------------------+
| | | Storage capabilities | - One M.2 connector for Wi-Fi |
| | | | - One M.2 connector for 3G/4G module, supporting |
| | | | LTE Category 6 and above |
| | | | - One M.2 connector for 2242 SSD |
| | | | - TWO SATA3 port (only one if Celeron onboard) |
| | +------------------------+-----------------------------------------------------------+
| | | Serial Port | - Yes |
+--------------------------------+------------------------+------------------------+-----------------------------------------------------------+
| | **Whiskey Lake** | | WHL-IPC-I7 | Processor | - Intel® Core™ i5-8265U CPU @ 1.80GHz (4C8T) |
| | | | (Board: WHL-IPC-I7) | | |
| | +------------------------+-----------------------------------------------------------+
| | | Graphics | - HD Graphics 610/620 |
| | | | - ONE HDMI\* 1.4a ports supporting 4K at 60 Hz |
| | +------------------------+-----------------------------------------------------------+
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V |
| | +------------------------+-----------------------------------------------------------+
| | | Storage capabilities | - One M.2 connector for Wi-Fi |
| | | | - One M.2 connector for 3G/4G module, supporting |
| | | | LTE Category 6 and above |
| | | | - One M.2 connector for 2242 SSD |
| | | | - TWO SATA3 port (only one if Celeron onboard) |
| | +------------------------+-----------------------------------------------------------+
| | | Serial Port | - Yes |
+--------------------------------+------------------------+------------------------+-----------------------------------------------------------+
.. # vim: tw=200 .. # vim: tw=200

View File

@ -76,6 +76,8 @@ Upgrading to v2.4 From Previous Releases
We highly recommend that you follow the instructions below to
upgrade to v2.4 from previous ACRN releases.
.. _upgrade_python:
Additional Dependencies
=======================

View File

@ -0,0 +1,209 @@
.. _release_notes_2.5:
ACRN v2.5 (Jun 2021)
####################
We are pleased to announce the release of the Project ACRN hypervisor
version 2.5.
ACRN is a flexible, lightweight reference hypervisor that is built with
real-time and safety-criticality in mind. It is optimized to streamline
embedded development through an open-source platform. See the
:ref:`introduction` introduction for more information.
All project ACRN source code is maintained in the
https://github.com/projectacrn/acrn-hypervisor repository and includes
folders for the ACRN hypervisor, the ACRN device model, tools, and
documentation. You can either download this source code as a zip or
tar.gz file (see the `ACRN v2.5 GitHub release page
<https://github.com/projectacrn/acrn-hypervisor/releases/tag/v2.5>`_) or
use Git ``clone`` and ``checkout`` commands::
git clone https://github.com/projectacrn/acrn-hypervisor
cd acrn-hypervisor
git checkout v2.5
The project's online technical documentation is also tagged to
correspond with a specific release: generated v2.5 documents can be
found at https://projectacrn.github.io/2.5/. Documentation for the
latest under-development branch is found at
https://projectacrn.github.io/latest/.
ACRN v2.5 requires Ubuntu 18.04. Follow the instructions in the
:ref:`gsg` to get started with ACRN.
What's New in v2.5
******************
Nested Virtualization Technology Preview
Nested virtualization is introduced as a technology preview in this v2.5
release. It lets you run virtual machine instances inside a guest VM that is
running on the ACRN hypervisor. It's designed to
leverage the KVM/QEMU community's rich feature set while keeping ACRN's unique
advantages in partition mode and hybrid mode. Read more in the
:ref:`nested_virt` advanced guide.
Secure Boot Using EFI Stub
The EFI stub, previously retired in favor of direct boot, returns as an
alternative to GRUB for end-to-end secure boot. The hypervisor, Service VM
kernel, and prelaunched VM kernel are packaged into a single ``acrn.efi`` blob
as an EFI application that can then be verified by the EFI BIOS. Read more in
the :ref:`how-to-enable-acrn-secure-boot-with-efi-stub` and
:ref:`how-to-enable-acrn-secure-boot-with-grub` advanced guides.
Modularization Improvements
:ref:`ACRN hypervisor modularization <modularity>` has been improved to be more
scalable, including changes to multiboot, interrupt handling, paging and memory
management, and timers, with more to come in future releases.
Configuration and Build Process Improvements
The ACRN configuration and build process continues to evolve from the changes
made in the previous releases. For instructions using the build system, refer
to :ref:`getting-started-building`. For an introduction on the concepts and
workflow of the configuration tools and processes, refer to
:ref:`acrn_configuration_tool`.
Upgrading to v2.5 From Previous Releases
****************************************
We highly recommend that you follow these instructions to
upgrade to v2.5 from previous ACRN releases.
Generate New Board XML
======================
Board XML files, generated by ACRN board inspector, contain board information
that is essential to build ACRN. Compared to previous versions, ACRN v2.5
extends the schema of board XMLs to summarize board information more
systematically. You must regenerate your board XML file using the new
board inspector when you upgrade to ACRN v2.5 to get the additional information
needed for configuration.
Before using the new board inspector, ensure you have Python >= 3.6 on the target
board and install the ``lxml`` PyPI package. Refer to :ref:`upgrade_python` for
detailed steps to check and upgrade your Python version. The ``lxml`` package can be
installed by executing the following command:
.. code-block:: bash
sudo pip3 install lxml
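You can optionally confirm the Python version and that ``lxml`` is importable before
running the board inspector:
.. code-block:: bash
python3 --version
python3 -c "import lxml; print(lxml.__version__)"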
.. note::
Refer to :ref:`acrn_config_workflow` for a complete list of tools required to
run the board inspector.
With the prerequisites done, copy the entire board inspector folder from
``misc/config_tools/board_inspector`` to the target board, ``cd`` to that
directory on the target, and run the board inspector tool using::
sudo python3 cli.py <my_board_name>
This will generate ``<my_board_name>.xml`` in the current working directory.
You'll need to copy that XML file back to the host system to continue
development.
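For example, assuming the development host is reachable over SSH (the user name, host
name, and destination directory below are placeholders), you could copy the file back
with a command like::
scp <my_board_name>.xml <user>@<dev-host>:~/acrn-work/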
Add New Configuration Options
=============================
In v2.5, the following elements are added to scenario XML files:
- :option:`hv.FEATURES.NVMX_ENABLED`
- :option:`vm.PTM`
The following element is renamed:
- :option:`hv.FEATURES.SSRAM.SSRAM_ENABLED` (was ``hv.FEATURES.PSRAM.PSRAM_ENABLED`` in v2.4)
Constraints on values of the following element have changed:
- :option:`vm.guest_flags.guest_flag` no longer accepts an empty text. For VMs
with no guest flag set, set the value to ``0``.
Document Updates
****************
With the changes to ACRN configuration, we made updates
to the ACRN documentation around configuration, options, and parameters:
.. rst-class:: rst-columns2
* :ref:`acrn_configuration_tool`
* :ref:`scenario-config-options`
* :ref:`acrn-dm_parameters`
* :ref:`kernel-parameters`
New capabilities are documented here:
* :ref:`nested_virt`
We've also made edits throughout the documentation to improve clarity,
formatting, and presentation:
.. rst-class:: rst-columns2
* :ref:`contribute_guidelines`
* :ref:`doc_guidelines`
* :ref:`ahci-hld`
* :ref:`hv-device-passthrough`
* :ref:`hv-hypercall`
* :ref:`timer-hld`
* :ref:`l1tf`
* :ref:`modularity`
* :ref:`sw_design_guidelines`
* :ref:`trusty_tee`
* :ref:`getting-started-building`
* :ref:`gsg`
* :ref:`hardware`
* :ref:`acrn_on_qemu`
* :ref:`acrn_doc`
* :ref:`enable_ivshmem`
* :ref:`running_deb_as_serv_vm`
* :ref:`trusty-security-services`
* :ref:`using_hybrid_mode_on_nuc`
* :ref:`connect_serial_port`
Fixed Issues Details
********************
.. comment example item
- :acrn-issue:`5626` - [CFL][industry] Host Call Trace once detected
- :acrn-issue:`5879` - hybrid_rt scenario does not work with large initrd in pre-launched VM
- :acrn-issue:`6015` - HV and DM: Obsolete terms cleanup for SSRAM
- :acrn-issue:`6024` - config-tools: generate board_info.h and pci_dev.c using xslt
- :acrn-issue:`6034` - dm: add allow_trigger_s5 mode to pm_notify_channel uart
- :acrn-issue:`6038` - [REG][RAMDISK] Fail to launch pre RTVM while config ramdisk
- :acrn-issue:`6056` - dm: a minor bug fix of unregister_mem_int
- :acrn-issue:`6072` - [WHL][WAAG]use config tool to passthru Audio,will not display GOP
- :acrn-issue:`6075` - [config_tools][regression][v2.5_rc1] config tool failed to save industry.xml with GuestFlagsOptionsType check
- :acrn-issue:`6078` - Make ACRN HV with hybrid_rt bootable without GRUB on UEFI BIOS
- :acrn-issue:`6100` - virtio_net_ping_rxq SEGV on read from NULL
- :acrn-issue:`6102` - Build failure for BOARD=qemu SCENARIO=sdc on release_2.5
- :acrn-issue:`6104` - [acrn-configuration-tool] Need update tgl-rvp.xml to the latest BIOS info
- :acrn-issue:`6113` - [config_tools][ADL-S]generated board xml parse error on ADL-S
- :acrn-issue:`6120` - [acrn-configuration-tool] shall we add CLOS_MASK elements into tgl scenario files as default configuration
- :acrn-issue:`6126` - TPM do not support dynamic GPA
- :acrn-issue:`6129` - virtio: NULL deref in hw/pci/virtio/virtio.c:664 in vq_endchains
- :acrn-issue:`6131` - guest/vlapic fatal assertion reachable from guest - DoS
- :acrn-issue:`6134` - [acrn-configuration-tool] lxml module not found when get board xml following doc
- :acrn-issue:`6138` - config-tools: support of launch script to generate the "allow_trigger_s5" automatically
- :acrn-issue:`6147` - ASAN reports UAF + SEGV when fuzzing exposed PIO with Hypercube guest VM.
- :acrn-issue:`6157` - coding style fix on v2.5 branch
- :acrn-issue:`6162` - [REG][EHL][SBL] Fail to boot sos
- :acrn-issue:`6168` - SOS failed to boot with nest enabled
- :acrn-issue:`6172` - member access within null pointer of type 'struct xhci_trb'
- :acrn-issue:`6178` - config-tools: adding an empty node <pt_intx> for a pre-launched VM causing check_pt_intx throw out an error
- :acrn-issue:`6185` - [TGL][Industry]yaag can't get ip after SRIVO VF passthru
- :acrn-issue:`6186` - [acrn-configuration-tool] CONFIG_MAX_MSIX_TABLE_NUM value is auto set as 64 when generate an new scenario xml
- :acrn-issue:`6199` - [doc][buildSource] can not pass SCENARIO parameter into hypervisor/build/.config with "make defconfig"
Known Issues
************
- :acrn-issue:`6256` - [TGL][qemu] Cannot launch qemu on TGL
- :acrn-issue:`6256` - [S5]S5 fails on post-launched RTVM

View File

@ -84,6 +84,15 @@
</xsl:if>
</xsl:when>
<xsl:otherwise>
<!-- Write a section header for elements with a simple type -->
<xsl:if test="$level = 3">
<xsl:call-template name="section-header">
<xsl:with-param name="title" select="concat($prefix, @name)"/>
<xsl:with-param name="label" select="concat($prefix, @name)"/>
<xsl:with-param name="level" select="$level"/>
</xsl:call-template>
</xsl:if>
<xsl:call-template name="option-header"> <xsl:call-template name="option-header">
<xsl:with-param name="label" select="concat($prefix, @name)"/> <xsl:with-param name="label" select="concat($prefix, @name)"/>
</xsl:call-template> </xsl:call-template>

View File

@ -8,8 +8,7 @@ using ACRN in a reference setup. We'll show how to set up your
development and target hardware, and then how to boot the ACRN
hypervisor, the Service VM, and a User VM on the Intel platform.
ACRN is supported on platforms listed in :ref:`hardware`.
Follow these getting started guides to give ACRN a try:
@ -17,8 +16,8 @@ Follow these getting started guides to give ACRN a try:
:maxdepth: 1
reference/hardware
getting-started/getting-started
getting-started/building-from-source
getting-started/roscube/roscube-gsg
tutorials/using_hybrid_mode_on_nuc
tutorials/using_partition_mode_on_nuc

View File

@ -0,0 +1,146 @@
.. _how-to-enable-acrn-secure-boot-with-efi-stub:
Enable ACRN Secure Boot With EFI-Stub
#####################################
Introduction
************
``ACRN EFI-Stub`` is an EFI application that supports booting the ACRN hypervisor on
UEFI systems with Secure Boot. ACRN already supports
:ref:`how-to-enable-acrn-secure-boot-with-grub`,
which relies on the GRUB multiboot2 module by default. However, on certain platforms
GRUB multiboot2 is intentionally disabled when Secure Boot is enabled due
to `CVE-2015-5281 <https://www.cvedetails.com/cve/CVE-2015-5281/>`_.
As an alternative boot method, ``ACRN EFI-Stub`` can boot the ACRN hypervisor on
UEFI systems without using GRUB. Although it is based on the legacy EFI-Stub
that was retired in ACRN v2.3, the new EFI-Stub boots ACRN in direct
mode rather than the former deprivileged mode.
In order to boot the ACRN hypervisor with the new EFI-Stub, you need to create a container blob
that contains the hypervisor image and the Service VM kernel image (and optionally a pre-launched
VM kernel image and ACPI table). That blob file is stitched to the
EFI-Stub to form a single EFI application (``acrn.efi``). The overall boot flow is shown below.
.. graphviz::
digraph G {
rankdir=LR;
bgcolor="transparent";
UEFI -> "acrn.efi" ->
"ACRN\nHypervisor" -> "pre-launched RTVM\nKernel";
"ACRN\nHypervisor" -> "Service VM\nKernel";
}
- UEFI firmware verifies ``acrn.efi``
- ``acrn.efi`` unpacks ACRN Hypervisor image and VM Kernels from a stitched container blob
- ``acrn.efi`` loads ACRN Hypervisor to memory
- ``acrn.efi`` prepares MBI to store Service VM & pre-launched RTVM Kernel info
- ``acrn.efi`` hands over control to ACRN Hypervisor with MBI
- ACRN Hypervisor boots Service VM and pre-launched RTVM in parallel
For the container blob format, ``ACRN EFI-Stub`` uses the `Slim Bootloader Container
Boot Image <https://slimbootloader.github.io/how-tos/create-container-boot-image.html>`_.
Verified Configurations
***********************
- ACRN Hypervisor Release Version 2.5
- hybrid_rt scenario
- TGL platform
- CONFIG_MULTIBOOT2=y (as default)
- CONFIG_RELOC=y (as default)
Building
********
Build Dependencies
==================
- Build Tools and Dependencies described in the :ref:`getting-started-building` guide
- ``gnu-efi`` package
- Service VM Kernel ``bzImage``
- pre-launched RTVM Kernel ``bzImage``
- `Slim Bootloader Container Tool <https://slimbootloader.github.io/how-tos/create-container-boot-image.html>`_
The Slim Bootloader Tools can be downloaded from its `GitHub project <https://github.com/slimbootloader/slimbootloader>`_.
The verified version is the commit `9f146af <https://github.com/slimbootloader/slimbootloader/tree/9f146af>`_.
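For example, you might fetch the container tool as sketched below; the exact in-repo
location of ``GenContainer.py`` can vary between Slim Bootloader revisions, so treat
the path noted in the comment as an assumption to verify in your checkout:
.. code-block:: none
$ git clone https://github.com/slimbootloader/slimbootloader.git
$ cd slimbootloader
$ git checkout 9f146af
$ # GenContainer.py is expected under BootloaderCorePkg/Tools/ in this tree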
You may use the `meta-acrn Yocto Project integration layer
<https://github.com/intel/meta-acrn>`_ to build Service VM Kernel and
pre-launched VM.
Build EFI-Stub for TGL hybrid_rt
======================================
.. code-block:: none
$ TOPDIR=`pwd`
$ cd acrn-hypervisor
$ make BOARD=tgl-rvp SCENARIO=hybrid_rt hypervisor
$ make BOARD=tgl-rvp SCENARIO=hybrid_rt -C misc/efi-stub/ \
HV_OBJDIR=`pwd`/build/hypervisor/ \
EFI_OBJDIR=`pwd`/build/hypervisor/misc/efi-stub `pwd`/build/hypervisor/misc/efi-stub/boot.efi
Create Container
================
.. code-block:: none
$ mkdir -p $TOPDIR/acrn-efi; cd $TOPDIR/acrn-efi
$ echo > hv_cmdline.txt
$ echo RT_bzImage > vm0_tag.txt
$ echo Linux_bzImage > vm1_tag.txt
$ echo ACPI_VM0 > acpi_vm0.txt
$ python3 GenContainer.py create -cl \
CMDL:./hv_cmdline.txt \
ACRN:$TOPDIR/acrn-hypervisor/build/hypervisor/acrn.32.out \
MOD0:./vm0_tag.txt \
MOD1:./vm0_kernel \
MOD2:./vm1_tag.txt \
MOD3:./vm1_kernel \
MOD4:./acpi_vm0.txt \
MOD5:$TOPDIR/acrn-hypervisor/build/hypervisor/acpi/ACPI_VM0.bin \
-o sbl_os \
-t MULTIBOOT \
-a NONE
You may optionally put HV boot options in the ``hv_cmdline.txt`` file. This file
must contain at least one character even if you don't need additional boot options.
.. code-block:: none
# Acceptable Examples
$ echo > hv_cmdline.txt # end-of-line
$ echo " " > hv_cmdline.txt # space + end-of-line
# Not Acceptable Example
$ touch hv_cmdline.txt # empty file
In the example above, ``vm0_kernel`` is the pre-launched RTVM kernel ``bzImage`` and
``vm1_kernel`` is the Service VM kernel image.
Stitch Container to EFI-Stub
============================
.. code-block:: none
$ objcopy --add-section .hv=sbl_os --change-section-vma .hv=0x6e000 \
--set-section-flags .hv=alloc,data,contents,load \
--section-alignment 0x1000 $TOPDIR/acrn-hypervisor/build/hypervisor/misc/efi-stub/boot.efi acrn.efi
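Optionally, you can verify that the container was embedded by listing the sections of
the resulting image and checking for the ``.hv`` section (``objdump`` ships with the
same binutils package as ``objcopy``):
.. code-block:: none
$ objdump -h acrn.efi | grep "\.hv"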
Installing (Without Secure Boot, for Testing)
*********************************************
For example:
.. code-block:: none
$ sudo mkdir -p /boot/EFI/BOOT/
$ sudo cp acrn.efi /boot/EFI/BOOT/
$ sudo efibootmgr -c -l "\EFI\BOOT\acrn.efi" -d /dev/nvme0n1 -p 1 -L "ACRN Hypervisor"
$ sudo reboot
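Before rebooting, you can optionally confirm that the boot entry was created and check
the boot order:
.. code-block:: none
$ sudo efibootmgr -v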
Signing
*******
See :ref:`how-to-enable-acrn-secure-boot-with-grub` for how to sign your ``acrn.efi`` file.
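As a rough illustration only (the authoritative steps are in the guide referenced
above), signing with ``sbsign`` from the ``sbsigntool`` package typically looks like
this, where the key and certificate file names are placeholders for your own db key pair:
.. code-block:: none
$ sbsign --key db.key --cert db.crt --output acrn.efi.signed acrn.efi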

View File

@ -115,13 +115,13 @@ toolset.
| **Native Linux requirement:**
| **Release:** Ubuntu 18.04+
| **Tools:** cpuid, rdmsr, lspci, lxml, dmidecode (optional)
| **Kernel cmdline:** "idle=nomwait intel_idle.max_cstate=0 intel_pstate=disable"
#. Copy the ``board_inspector`` directory into the target file system and then run the
``sudo python3 cli.py $(BOARD)`` command.
#. A ``$(BOARD).xml`` that includes all needed hardware-specific information
is generated under the current working directory. Here, ``$(BOARD)`` is the
specified board name.
#. Customize your needs. #. Customize your needs.
@ -322,6 +322,13 @@ current scenario has:
Specify whether the User VM power off channel is through the IOC,
power button, or vUART.
``allow_trigger_s5``:
Allow the VM to trigger the S5 shutdown flow. This flag works with
``poweroff_channel`` ``vuart1(pty)`` and ``vuart1(tty)`` only.
``enable_ptm``:
Enable the Precision Time Measurement (PTM) feature.
``usb_xhci``:
USB xHCI mediator configuration. Input format:
``bus#-port#[:bus#-port#: ...]``, e.g.: ``1-2:2-4``.
@ -332,7 +339,16 @@ current scenario has:
``shm_region`` (a child node of ``shm_regions``):
Configure the shared memory regions for the current VM. Input format:
``hv:/<shm name> (or dm:/<shm name>), <shm size in MB>``. Refer to :ref:`ivshmem-hld` for details.
``console_vuart``:
Enable a PCI-based console vUART. Refer to :ref:`vuart_config` for details.
``communication_vuarts``:
List of PCI-based communication vUARTs. Refer to :ref:`vuart_config` for details.
``communication_vuart`` (a child node of ``communication_vuarts``):
Enable a PCI-based communication vUART with its ID. Refer to :ref:`vuart_config` for details.
``passthrough_devices``:
Select the passthrough device from the lspci list. Currently we support:
@ -353,12 +369,15 @@ current scenario has:
Input format:
``[@]stdio|tty|pty|sock:portname[=portpath][,[@]stdio|tty|pty:portname[=portpath]]``.
``cpu_affinity``:
List of pCPUs that this VM's vCPUs are pinned to.
.. note::
The ``configurable`` and ``readonly`` attributes are used to mark
whether the item is configurable for users. When ``configurable="n"``
and ``readonly="y"``, the item is not configurable from the web
interface. When ``configurable="n"``, the item does not appear on the
interface.
.. _acrn_config_tool_ui: .. _acrn_config_tool_ui:

View File

@ -11,38 +11,36 @@ with basic functionality such as running Service VM (SOS) and User VM (UOS) for
This setup was tested with the following configuration:
- ACRN Hypervisor: ``v2.5`` tag
- ACRN Kernel: ``v2.5`` tag
- QEMU emulator version 4.2.1
- Service VM/User VM is Ubuntu 20.04
- Platforms Tested: Kaby Lake, Skylake
Prerequisites
*************
1. Make sure the platform supports Intel VMX as well as VT-d
technologies. On Ubuntu 20.04, this
can be checked by installing the ``cpu-checker`` tool. If the
output displays **KVM acceleration can be used**,
the platform supports it.
.. code-block:: none
kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
2. The host kernel version must be **at least 5.3.0** or above.
Ubuntu 20.04 uses a 5.8.0 kernel (or later),
so no changes are needed if you are using it.
3. Make sure KVM and the following utilities are installed.
.. code-block:: none
sudo apt update && sudo apt upgrade -y
sudo apt install qemu-kvm virtinst libvirt-daemon-system -y
Prepare Service VM (L1 Guest)
@ -51,7 +49,7 @@ Prepare Service VM (L1 Guest)
.. code-block:: none
virt-install \
--connect qemu:///system \
--name ACRNSOS \
--machine q35 \
@ -68,35 +66,40 @@ Prepare Service VM (L1 Guest)
--location 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/' \
--extra-args "console=tty0 console=ttyS0,115200n8"
#. Walk through the installation steps as prompted. Here are a few things to note:
a. Make sure to install an OpenSSH server so that once the installation is complete, we can SSH into the system.
.. figure:: images/acrn_qemu_1.png
:align: center
b. We use Grub to boot ACRN, so make sure you install it when prompted.
.. figure:: images/acrn_qemu_2.png
:align: center
c. The Service VM (guest) will be restarted once the installation is complete.
#. Log in to the Service VM guest. Find the IP address of the guest and use it to connect
via SSH. The IP address can be retrieved using the ``virsh`` command as shown below.
.. code-block:: console
virsh domifaddr ACRNSOS
Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
vnet0      52:54:00:72:4e:71    ipv4         192.168.122.31/24
#. Once logged into the Service VM, enable the serial console. Once ACRN is enabled,
the ``virsh`` command will no longer show the IP.
.. code-block:: none
sudo systemctl enable serial-getty@ttyS0.service
sudo systemctl start serial-getty@ttyS0.service
#. Enable the Grub menu to choose between Ubuntu and the ACRN hypervisor.
Modify :file:`/etc/default/grub` and edit the entries below:
.. code-block:: none
@ -105,62 +108,60 @@ Prepare Service VM (L1 Guest)
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_GFXMODE=text
#. The Service VM guest can also be launched again later using ``virsh start ACRNSOS --console``.
Make sure to use the domain name you used while creating the VM in case it is different than ``ACRNSOS``.
This concludes the initial configuration of the Service VM; the next steps will install ACRN in it.
.. _install_acrn_hypervisor:
Install ACRN Hypervisor
***********************
1. Launch the ``ACRNSOS`` Service VM guest and log onto it (SSH is recommended but the console is
available too).
.. important:: All the steps below are performed **inside** the Service VM guest that we built in the
previous section.
#. Install the ACRN build tools and dependencies following the :ref:`install-build-tools-dependencies`.
#. Clone the ACRN repo and check out the ``v2.5`` tag.
.. code-block:: none
cd ~
git clone https://github.com/projectacrn/acrn-hypervisor.git
cd acrn-hypervisor
git checkout v2.5
#. Build ACRN for QEMU:
.. code-block:: none
make BOARD=qemu SCENARIO=sdc
For more details, refer to :ref:`getting-started-building`.
#. Install the ACRN Device Model and tools:
.. code-block::
sudo make install
#. Copy ``acrn.32.out`` to the Service VM guest ``/boot`` directory.
.. code-block:: none
sudo cp build/hypervisor/acrn.32.out /boot
#. Clone and configure the Service VM kernel repository following the instructions at
:ref:`build-and-install-ACRN-kernel` and using the ``v2.5`` tag. The User VM (L2 guest)
uses the ``virtio-blk`` driver to mount the rootfs. This driver is included in the default
kernel configuration as of the ``v2.5`` tag.
#. Update Grub to boot the ACRN hypervisor and load the Service VM kernel. Append the following
configuration to :file:`/etc/grub.d/40_custom`.
.. code-block:: none
@ -174,107 +175,73 @@ Install ACRN Hypervisor
echo 'Loading ACRN hypervisor with SDC scenario ...'
multiboot --quirk-modules-after-kernel /boot/acrn.32.out
module /boot/bzImage Linux_bzImage
}
#. Update Grub: ``sudo update-grub``.
#. Enable networking for the User VMs:
.. code-block:: none
sudo systemctl enable systemd-networkd
sudo systemctl start systemd-networkd
#. Shut down the guest and relaunch it using ``virsh start ACRNSOS --console``.
Select the ``ACRN hypervisor`` entry from the Grub menu.
.. note::
You may occasionally run into the following error: ``Assertion failed in file
arch/x86/vtd.c,line 256 : fatal error``. This is a transient issue;
restart the VM when that happens. If you need a more stable setup, you
can work around the problem by switching your native host to a non-graphical
environment (``sudo systemctl set-default multi-user.target``).
#. Verify that you are now running ACRN using ``dmesg``.
.. code-block:: console
dmesg | grep ACRN
[    0.000000] Hypervisor detected: ACRN
[    2.337176] ACRNTrace: Initialized acrn trace module with 4 cpu
[    2.368358] ACRN HVLog: Initialized hvlog module with 4 cpu
[    2.727905] systemd[1]: Set hostname to <ACRNSOS>.
.. note::
When shutting down the Service VM, make sure to cleanly destroy it with these commands
to prevent crashes in subsequent boots.
.. code-block:: none
virsh destroy ACRNSOS # where ACRNSOS is the virsh domain name.
Bring-Up User VM (L2 Guest)
***************************
1. Build the ACRN User VM kernel.
.. code-block:: none
cd ~/acrn-kernel
cp kernel_config_uos .config
make olddefconfig
make
#. Copy the User VM kernel to your home folder; we will use it to launch the User VM (L2 guest).
.. code-block:: none
cp arch/x86/boot/bzImage ~/bzImage_uos
#. Build the User VM disk image (``UOS.img``) following :ref:`build-the-ubuntu-kvm-image` and copy it to the ACRNSOS (L1 Guest).
Alternatively you can also use ``virt-install`` **in the host environment** to create a User VM image similarly to how we built ACRNSOS previously.
.. code-block:: none
virt-install \
--name UOS \
--ram 1024 \
--disk path=/var/lib/libvirt/images/UOS.img,size=8,format=raw \
--vcpus 2 \
--virt-type kvm \
--os-type linux \
@ -283,18 +250,29 @@ Bring-Up User VM (L2 Guest)
--location 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/' \
--extra-args "console=tty0 console=ttyS0,115200n8"
#. Transfer the ``UOS.img`` User VM disk image to the Service VM (L1 guest).
.. code-block::
sudo scp /var/lib/libvirt/images/UOS.img <username>@<IP address>:~/
Where ``<username>`` is your username in the Service VM and ``<IP address>`` is its IP address.
#. Launch the User VM using the ``launch_ubuntu.sh`` script.
.. code-block:: none
cp ~/acrn-hypervisor/misc/config_tools/data/samples_launch_scripts/launch_ubuntu.sh ~/
#. Update the script to use your disk image and kernel:
.. code-block:: none
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 3,virtio-blk,~/UOS.img \
-s 4,virtio-net,tap0 \
-s 5,virtio-console,@stdio:stdio_port \
-k ~/bzImage_uos \
-B "earlyprintk=serial,ttyS0,115200n8 consoleblank=0 root=/dev/vda1 rw rootwait maxcpus=1 nohpet console=tty0 console=hvc0 console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M tsc=reliable" \
$logger_setting \
$vm_name
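With the image and kernel paths updated, the script can then be run from the Service VM;
running it with ``sudo`` is assumed here because ``acrn-dm`` needs privileged access to
the ACRN device nodes:
.. code-block:: none
chmod +x ~/launch_ubuntu.sh
sudo ~/launch_ubuntu.sh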

View File

@ -8,6 +8,10 @@ documentation and publishing it to https://projectacrn.github.io.
You can also use these instructions to generate the ACRN documentation
on your local system.
.. contents::
:local:
:depth: 1
Documentation Overview
**********************
@ -67,14 +71,15 @@ recommended folder setup for documentation contributions and generation:
misc/
acrn-kernel/
The parent ``projectacrn`` folder is there because, if you have repo publishing
rights, we'll also be creating a publishing area later in these steps. For API
documentation generation, we'll also need the ``acrn-kernel`` repo contents in a
sibling folder to the acrn-hypervisor repo contents.
It's best if the ``acrn-hypervisor`` folder is an ssh clone of your personal
fork of the upstream project repos (though ``https`` clones work too and won't
require you to
`register your public SSH key with GitHub <https://github.com/settings/keys>`_):
#. Use your browser to visit https://github.com/projectacrn and do a
fork of the **acrn-hypervisor** repo to your personal GitHub account.
@ -100,8 +105,11 @@ repos (though ``https`` clones work too):
cd acrn-hypervisor
git remote add upstream git@github.com:projectacrn/acrn-hypervisor.git
After that, you'll have ``origin`` pointing to your cloned personal repo and
``upstream`` pointing to the project repo.
#. For API documentation generation we'll also need the ``acrn-kernel`` repo available
locally, as a sibling of the ``acrn-hypervisor`` folder:
.. code-block:: bash
@ -151,7 +159,7 @@ Then use ``pip3`` to install the remaining Python-based tools:
cd ~/projectacrn/acrn-hypervisor/doc
pip3 install --user -r scripts/requirements.txt
Use this command to add ``$HOME/.local/bin`` to the front of your ``PATH`` so the system will
find expected versions of these Python utilities such as ``sphinx-build`` and
``breathe``:
@ -159,7 +167,7 @@ find expected versions of these Python utilities such as ``sphinx-build`` and
printf "\nexport PATH=\$HOME/.local/bin:\$PATH" >> ~/.bashrc
.. important::
You will need to open a new terminal for this change to take effect.
Adding this to your ``~/.bashrc`` file ensures it is set by default.
@ -197,7 +205,7 @@ another ``make html`` and the output layout and style is changed. The
sphinx build system creates document cache information that attempts to
expedite documentation rebuilds, but occasionally can cause an unexpected error or
warning to be generated. Doing a ``make clean`` to create a clean
generation environment and a ``make html`` again generally fixes these issues.
The ``read-the-docs`` theme is installed as part of the
``requirements.txt`` list above. Tweaks to the standard

View File

@ -9,50 +9,24 @@ solution or hv-land solution, according to the usage scenario needs.
While both solutions can be used at the same time, VMs using different
solutions cannot communicate with each other.
Enable Ivshmem Support
**********************
The ``ivshmem`` solution is disabled by default in ACRN. You can enable
it using the :ref:`ACRN configuration toolset <acrn_config_workflow>` with these
steps:
- Enable ``ivshmem`` via the ACRN configuration tool GUI.
- Set :option:`hv.FEATURES.IVSHMEM.IVSHMEM_ENABLED` to ``y``
- Edit :option:`hv.FEATURES.IVSHMEM.IVSHMEM_REGION` to specify the shared memory name, size, and
communication VMs. The ``IVSHMEM_REGION`` format is ``shm_name,shm_size,VM IDs``:
- ``shm_name`` - Specify a shared memory name. The name needs to start
with the ``hv:/`` prefix for hv-land, or ``dm:/`` for dm-land.
For example, ``hv:/shm_region_0`` for hv-land and ``dm:/shm_region_0``
for dm-land.
- ``shm_size`` - Specify a shared memory size. The unit is megabyte. The
size ranges from 2 megabytes to 512 megabytes and must be a power of 2 megabytes.
@ -63,10 +37,54 @@ steps:
communication and separate it with ``:``. For example, the
communication between VM0 and VM2 can be written as ``0:2``
(see the combined example after this list).
.. note:: You can define up to eight ``ivshmem`` hv-land shared regions.
- Build with the XML configuration; refer to :ref:`getting-started-building`.
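Putting those fields together, a 2-megabyte hv-land region named ``hv:/shm_region_0``
shared between VM0 and VM2 would be written as:
.. code-block:: none
hv:/shm_region_0,2,0:2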
Ivshmem DM-Land Usage
*********************
Follow `Enable Ivshmem Support`_ and
add the line below as an ``acrn-dm`` boot parameter (a filled-in example follows the parameter descriptions)::
-s slot,ivshmem,shm_name,shm_size
where
- ``-s slot`` - Specify the virtual PCI slot number
- ``ivshmem`` - Virtual PCI device emulating the Shared Memory
- ``shm_name`` - Specify a shared memory name. This ``shm_name`` must be listed
in :option:`hv.FEATURES.IVSHMEM.IVSHMEM_REGION` in the `Enable Ivshmem Support`_ section and needs to start
with the ``dm:/`` prefix.
- ``shm_size`` - Shared memory size of selected ``shm_name``.
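For example, to attach the dm-land region ``dm:/shm_region_0`` of 2 megabytes at
virtual PCI slot 9 (the slot number here is only an illustration), the parameter would be:
.. code-block:: none
-s 9,ivshmem,dm:/shm_region_0,2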
There are two ways to insert the above boot parameter for ``acrn-dm``:
- Manually edit the launch script file. In this case, ensure that both
``shm_name`` and ``shm_size`` match what is defined via the configuration tool GUI.
- Use a command in the following format to create a launch script, when IVSHMEM is enabled
and :option:`hv.FEATURES.IVSHMEM.IVSHMEM_REGION` is properly configured via the configuration tool GUI.
.. code-block:: none
:emphasize-lines: 5
python3 misc/config_tools/launch_config/launch_cfg_gen.py \
--board <path_to_your_boardxml> \
--scenario <path_to_your_scenarioxml> \
--launch <path_to_your_launched_script_xml> \
--uosid <desired_single_vmid_or_0_for_all_vmids>
.. note:: This device can be used with real-time VM (RTVM) as well.
.. _ivshmem-hv:
Ivshmem HV-Land Usage
*********************
Follow `Enable Ivshmem Support`_ to set up hv-land Ivshmem support.
Ivshmem Notification Mechanism Ivshmem Notification Mechanism
****************************** ******************************
@ -188,7 +206,7 @@ Linux-based VMs (VM0 is a pre-launched VM and VM2 is a post-launched VM).
2. Build ACRN based on the XML configuration for hybrid_rt scenario on whl-ipc-i5 board::
make BOARD=whl-ipc-i5 SCENARIO=<path/to/edited/scenario.xml> TARGET_DIR=xxx
3. Add a new virtual PCI device for VM2 (post-launched VM): the device type is
``ivshmem``, shared memory name is ``hv:/shm_region_0``, and shared memory

View File

@ -58,9 +58,10 @@ the request via vUART to the lifecycle manager in the Service VM which in turn a
the request and trigger the following flow.
.. note:: The User VM needs to be authorized to request a shutdown. This is achieved by adding
"``--pm_notify_channel uart,allow_trigger_s5``" in the launch script of that VM.
Also, only one VM in the system can be configured to request a shutdown. If a second User
VM is launched with "``--pm_notify_channel uart,allow_trigger_s5``", ACRN will stop launching it and throw
out the error message below:
``initiate a connection on a socket error``
``create socket to connect life-cycle manager failed``

View File

@ -28,7 +28,7 @@ Verified Version
Prerequisites
*************
Follow :ref:`these instructions <gsg>` to set up
Ubuntu as the ACRN Service VM.
Supported Hardware Platform

View File

@ -0,0 +1,352 @@
.. _nested_virt:
Enable Nested Virtualization
############################
With nested virtualization enabled in ACRN, you can run virtual machine
instances inside of a guest VM (also called a user VM) running on the ACRN hypervisor.
Although both "level 1" guest VMs and nested guest VMs can be launched
from the Service VM, the following distinction is worth noting:
* The VMX feature (``CPUID01.01H:ECX[5]``) does not need to be visible to the Service VM
in order to launch guest VMs. A guest VM not running on top of the
Service VM is considered a level 1 (L1) guest.
* The VMX feature must be visible to an L1 guest to launch a nested VM. An instance
of a guest hypervisor (KVM) runs on the L1 guest and works with the
L0 ACRN hypervisor to run the nested VM.
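A quick way to confirm, from inside the L1 guest, that the VMX feature is visible and
usable by KVM is a generic Linux check (assuming the ``cpu-checker`` package is installed
for ``kvm-ok``):
.. code-block:: none
grep -c vmx /proc/cpuinfo
kvm-ok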
Conventional single-level virtualization has two levels: the L0 host
(ACRN hypervisor) and the L1 guest VMs. With nested virtualization enabled,
ACRN can run guest VMs with their associated virtual machines that define a
third level:
* The host (ACRN hypervisor), which we call the L0 hypervisor
* The guest hypervisor (KVM), which we call the L1 hypervisor
* The nested guest VMs, which we call the L2 guest VMs
.. figure:: images/nvmx_1.png
:width: 700px
:align: center
Generic Nested Virtualization
High Level ACRN Nested Virtualization Design
********************************************
The high-level design of nested virtualization in ACRN is shown in :numref:`nested_virt_hld`.
Nested VMX is enabled by allowing a guest VM to use VMX instructions,
and emulating them using the single level of VMX available in the hardware.
In x86, a logical processor uses VMCSs to manage VM entries and VM exits as
well as processor behavior in VMX non-root operation. The trick of nVMX
emulation is that ACRN builds a VMCS02 out of the VMCS01 (the VMCS
ACRN uses to run the L1 VM) and the VMCS12 (built by the L1 hypervisor to
actually run the L2 guest).
.. figure:: images/nvmx_arch_1.png
:width: 400px
:align: center
:name: nested_virt_hld
Nested Virtualization in ACRN
#. L0 hypervisor (ACRN) runs L1 guest with VMCS01
#. L1 hypervisor (KVM) creates VMCS12 to run an L2 guest
#. VMX instructions from L1 hypervisor trigger VMExits to L0 hypervisor:

   a. L0 caches VMCS12 in host memory
   #. L0 merges VMCS01 and VMCS12 to create VMCS02

#. L0 hypervisor runs the L2 guest with VMCS02
#. L2 guest runs until triggering VMExits to L0

   a. L0 reflects most VMExits to L1 hypervisor

#. L0 runs L1 guest with VMCS01 and VMCS02 as the shadow VMCS
Restrictions and Constraints
****************************
Nested virtualization is considered an experimental feature and has only been
tested on Tiger Lake and Kaby Lake platforms (see :ref:`hardware`).
L1 VMs have the following restrictions:
* KVM is the only L1 hypervisor supported by ACRN
* KVM runs in 64-bit mode
* KVM enables EPT for L2 guests
* QEMU is used to launch L2 guests
Constraints on L1 guest configuration:
* Local APIC passthrough must be enabled
* Only the ``SCHED_NOOP`` scheduler is supported. ACRN can't receive timer interrupts
on LAPIC passthrough pCPUs
Service OS VM configuration
***************************
ACRN only supports enabling the nested virtualization feature on the Service VM, not on pre-launched
VMs.
The nested virtualization feature is disabled by default in ACRN. You can
enable it using the :ref:`ACRN configuration editor <acrn_config_tool_ui>`
with these settings:
.. note:: Normally you'd use the configuration tool GUI to edit the scenario XML file.
The tool wasn't updated in time for the v2.5 release, so you'll need to manually edit
the ACRN scenario XML configuration file to edit the ``SCHEDULER``, ``NVMX_ENABLED``,
``pcpu_id``, ``guest_flags``, ``legacy_vuart``, and ``console_vuart`` settings for
the Service VM (SOS), as shown below:
#. Configure system level features:
- Edit :option:`hv.FEATURES.NVMX_ENABLED` to ``y`` to enable nested virtualization
- Edit :option:`hv.FEATURES.SCHEDULER` to ``SCHED_NOOP`` to disable CPU sharing
.. code-block:: xml
:emphasize-lines: 3,18
<FEATURES>
<RELOC>y</RELOC>
<SCHEDULER>SCHED_NOOP</SCHEDULER>
<MULTIBOOT2>y</MULTIBOOT2>
<ENFORCE_TURNOFF_AC>y</ENFORCE_TURNOFF_AC>
<RDT>
<RDT_ENABLED>n</RDT_ENABLED>
<CDP_ENABLED>y</CDP_ENABLED>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
</RDT>
<NVMX_ENABLED>y</NVMX_ENABLED>
<HYPERV_ENABLED>y</HYPERV_ENABLED>
#. In each guest VM configuration:
- Edit :option:`vm.guest_flags.guest_flag` on the SOS VM section and add ``GUEST_FLAG_NVMX_ENABLED``
to enable the nested virtualization feature on the Service VM.
- Edit :option:`vm.guest_flags.guest_flag` and add ``GUEST_FLAG_LAPIC_PASSTHROUGH`` to enable local
APIC passthrough on the Service VM.
- Edit :option:`vm.cpu_affinity.pcpu_id` to assign ``pCPU`` IDs to run the Service VM. If you are
using a debug build and need the hypervisor console, don't assign
``pCPU0`` to the Service VM.
.. code-block:: xml
:emphasize-lines: 5,6,7,10,11
<vm id="1">
<vm_type>SOS_VM</vm_type>
<name>ACRN SOS VM</name>
<cpu_affinity>
<pcpu_id>1</pcpu_id>
<pcpu_id>2</pcpu_id>
<pcpu_id>3</pcpu_id>
</cpu_affinity>
<guest_flags>
<guest_flag>GUEST_FLAG_NVMX_ENABLED</guest_flag>
<guest_flag>GUEST_FLAG_LAPIC_PASSTHROUGH</guest_flag>
</guest_flags>
The Service VM's virtual legacy UART interrupt doesn't work with LAPIC
passthrough, which may prevent the Service VM from booting. Instead, we need to use
the PCI-vUART for the Service VM. Refer to :ref:`Enable vUART Configurations <vuart_config>`
for more details about VUART configuration.
- Edit :option:`vm.legacy_vuart.base` in ``legacy_vuart 0`` and set it to ``INVALID_COM_BASE``
- Edit :option:`vm.console_vuart.base` in ``console_vuart 0`` and set it to ``PCI_VUART``
.. code-block:: xml
:emphasize-lines: 3, 14
<legacy_vuart id="0">
<type>VUART_LEGACY_PIO</type>
<base>INVALID_COM_BASE</base>
<irq>COM1_IRQ</irq>
</legacy_vuart>
<legacy_vuart id="1">
<type>VUART_LEGACY_PIO</type>
<base>INVALID_COM_BASE</base>
<irq>COM2_IRQ</irq>
<target_vm_id>1</target_vm_id>
<target_uart_id>1</target_uart_id>
</legacy_vuart>
<console_vuart id="0">
<base>PCI_VUART</base>
</console_vuart>
#. Remove CPU sharing VMs
Since CPU sharing is disabled, you may need to delete all ``POST_STD_VM`` and ``KATA_VM`` VMs
from the scenario configuration file, as they may share pCPUs with the Service OS VM.
#. Follow instructions in :ref:`getting-started-building` and build with this XML configuration.
Prepare the Service VM Kernel and rootfs
****************************************
The Service VM can run Ubuntu or other Linux distributions.
Instructions on how to boot Ubuntu as the Service VM can be found in
:ref:`gsg`.
The Service VM kernel needs to be built from the ``acrn-kernel`` repo, and some changes
to the kernel ``.config`` are needed.
Instructions on how to build and install the Service VM kernel can be found
in :ref:`Build and Install the ACRN Kernel <build-and-install-ACRN-kernel>`.
Here is a summary of how to modify and build the kernel:
.. code-block:: none
git clone https://github.com/projectacrn/acrn-kernel
cd acrn-kernel
cp kernel_config_uefi_sos .config
make olddefconfig
The following configuration entries are needed to launch nested
guests on the Service VM:
.. code-block:: none
CONFIG_KVM=y
CONFIG_KVM_INTEL=y
CONFIG_ACRN_GUEST=y
After you make these configuration modifications, build and install the kernel
as described in :ref:`gsg`.
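As an illustration only, here is a minimal sketch of enabling these options and
rebuilding, assuming you are still in the ``acrn-kernel`` directory and using the
``scripts/config`` helper that ships with the kernel sources:

.. code-block:: none

   # Enable the options needed for nested guests (a sketch; verify the resulting .config)
   scripts/config --enable CONFIG_KVM --enable CONFIG_KVM_INTEL --enable CONFIG_ACRN_GUEST
   make olddefconfig
   make -j $(nproc)
   sudo make modules_install
   sudo cp arch/x86/boot/bzImage /boot/bzImage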
Launch a Nested Guest VM
************************
Create an Ubuntu KVM Image
==========================
Refer to :ref:`Build the Ubuntu KVM Image <build-the-ubuntu-kvm-image>`
on how to create an Ubuntu KVM image as the nested guest VM's root filesystem.
There is no particular requirement for this image, e.g., it could be of either
qcow2 or raw format.
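For example, if you already have a raw image, a possible way to convert it to qcow2
with ``qemu-img`` is shown below (a sketch; both file names are placeholders):

.. code-block:: none

   # Convert a raw disk image to qcow2 format
   qemu-img convert -f raw -O qcow2 ubuntu-20.04.img ubuntu-20.04.qcow2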
Prepare for Launch Scripts
==========================
Install QEMU on the Service VM that will launch the nested guest VM:
.. code-block:: none
sudo apt-get install qemu-kvm qemu virt-manager virt-viewer libvirt-bin
.. important:: The QEMU ``-cpu host`` option is needed to launch a nested guest VM, and ``-nographic``
is required to run nested guest VMs reliably.
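Before writing the launch script, you may also want to confirm that the VMX feature
is visible inside the Service VM and that the KVM modules are loaded; a quick sanity
check (a sketch, not part of the official setup steps) is:

.. code-block:: none

   # A non-zero count means VMX is exposed to this VM
   grep -c vmx /proc/cpuinfo
   # kvm and kvm_intel should be listed if KVM is available
   lsmod | grep kvm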
You can prepare the script just like the one you use to launch a VM
on native Linux. For example, instead of ``-hda``, you can use the following option to launch
a virtio block based RAW image::
-drive format=raw,file=/root/ubuntu-20.04.img,if=virtio
Use the following option to enable Ethernet on the guest VM::
-netdev tap,id=net0 -device virtio-net-pci,netdev=net0,mac=a6:cd:47:5f:20:dc
The following is a simple example for the script to launch a nested guest VM.
.. code-block:: bash
:emphasize-lines: 2-4
sudo qemu-system-x86_64 \
-enable-kvm \
-cpu host \
-nographic \
-m 2G -smp 2 -hda /root/ubuntu-20.04.qcow2 \
-net nic,macaddr=00:16:3d:60:0a:80 -net tap,script=/etc/qemu-ifup
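Putting the earlier options together, a variant of the same script that uses a virtio
block device and a virtio network device instead (a sketch; adjust the image path and
MAC address to your setup) could look like this:

.. code-block:: bash

   sudo qemu-system-x86_64 \
      -enable-kvm \
      -cpu host \
      -nographic \
      -m 2G -smp 2 \
      -drive format=raw,file=/root/ubuntu-20.04.img,if=virtio \
      -netdev tap,id=net0 -device virtio-net-pci,netdev=net0,mac=a6:cd:47:5f:20:dc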
Launch the Guest VM
===================
You can launch the nested guest VM from the Service VM's virtual serial console
or from an SSH remote login.
If the nested VM is launched successfully, you should see the nested
VM's login prompt:
.. code-block:: console
[ OK ] Started Terminate Plymouth Boot Screen.
[ OK ] Started Hold until boot process finishes up.
[ OK ] Starting Set console scheme...
[ OK ] Started Serial Getty on ttyS0.
[ OK ] Started LXD - container startup/shutdown.
[ OK ] Started Set console scheme.
[ OK ] Started Getty on tty1.
[ OK ] Reached target Login Prompts.
[ OK ] Reached target Multi-User System.
[ OK ] Started Update UTMP about System Runlevel Changes.
Ubuntu 20.04 LTS ubuntu_vm ttyS0
ubuntu_vm login:
You won't see the nested guest from a ``vcpu_list`` or ``vm_list`` command
on the ACRN hypervisor console because these commands only show level 1 VMs.
.. code-block:: console
ACRN:\>vm_list
VM_UUID VM_ID VM_NAME VM_STATE
================================ ===== ==========================
dbbbd4347a574216a12c2201f1ab0240 0 ACRN SOS VM Running
ACRN:\>vcpu_list
VM ID PCPU ID VCPU ID VCPU ROLE VCPU STATE THREAD STATE
===== ======= ======= ========= ========== ============
0 1 0 PRIMARY Running RUNNING
0 2 1 SECONDARY Running RUNNING
0 3 2 SECONDARY Running RUNNING
On the nested guest VM console, run an ``lshw`` or ``dmidecode`` command
and you'll see that this is a QEMU-managed virtual machine:
.. code-block:: console
:emphasize-lines: 4,5
$ sudo lshw -c system
ubuntu_vm
description: Computer
product: Standard PC (i440FX + PIIX, 1996)
vendor: QEMU
version: pc-i440fx-5.2
width: 64 bits
capabilities: smbios-2.8 dmi-2.8 smp vsyscall32
configuration: boot=normal
For example, compare this to the same command run on the L1 guest (Service VM):
.. code-block:: console
:emphasize-lines: 4,5
$ sudo lshw -c system
localhost.localdomain
description: Computer
product: NUC7i5DNHE
vendor: Intel Corporation
version: J57828-507
serial: DW1710099900081
width: 64 bits
capabilities: smbios-3.1 dmi-3.1 smp vsyscall32
configuration: boot=normal family=Intel NUC uuid=36711CA2-A784-AD49-B0DC-54B2030B16AB

View File

@ -44,7 +44,7 @@ kernels are loaded as multiboot modules. The ACRN hypervisor, Service
VM, and Pre-Launched RT kernel images are all located on the NVMe drive.
We recommend installing Ubuntu on the NVMe drive as the Service VM OS,
which also has the required GRUB image to launch Pre-Launched RT mode.
Refer to :ref:`gsg` to
install Ubuntu on the NVMe drive, and use grub to launch the Service VM.
Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe
@ -83,7 +83,7 @@ Add Pre-Launched RT Kernel Image to GRUB Config
The last step is to modify the GRUB configuration file to load the Pre-Launched
kernel. (For more information about this, see the :ref:`Update Grub for the Ubuntu Service VM
<gsg_update_grub>` section in the :ref:`gsg`.) The grub config file will look something
like this:
.. code-block:: none

View File

@ -249,19 +249,7 @@ Configure RDT for VM Using VM Configuration
per-LP CLOS is applied to the core. If HT is turned on, don't place high
priority threads on sibling LPs running lower priority threads.
#. Based on our scenario, build and install ACRN. See :ref:`build-with-acrn-scenario`
for building and installing instructions.
#. Restart the platform.

View File

@ -18,7 +18,7 @@ Prerequisites
#. Refer to the :ref:`ACRN supported hardware <hardware>`.
#. For a default prebuilt ACRN binary in the end-to-end (E2E) package, you must have 4
CPU cores or enable "CPU Hyper-threading" in order to have 4 CPU threads for 2 CPU cores.
#. Follow the :ref:`gsg` to set up the ACRN Service VM
based on Ubuntu.
#. This tutorial is validated on the following configurations:
@ -75,7 +75,7 @@ to automate the Kata Containers installation procedure.
$ sudo cp build/misc/tools/acrnctl /usr/bin/
.. note:: This assumes you have built ACRN on this machine following the
instructions in the :ref:`gsg`.
#. Modify the :ref:`daemon.json` file in order to:

View File

@ -74,21 +74,21 @@ Install ACRN on the Debian VM
#. Build and Install the Service VM kernel:
.. code-block:: bash
$ mkdir ~/sos-kernel && cd ~/sos-kernel
$ git clone https://github.com/projectacrn/acrn-kernel
$ cd acrn-kernel
$ git checkout release_2.2
$ cp kernel_config_uefi_sos .config
$ make olddefconfig
$ make all
$ sudo make modules_install
$ sudo cp arch/x86/boot/bzImage /boot/bzImage
#. Update Grub for the Debian Service VM:
Update the ``/etc/grub.d/40_custom`` file as shown below.
.. note::
Enter the command line for the kernel in ``/etc/grub.d/40_custom`` as
@ -146,10 +146,11 @@ Install ACRN on the Debian VM
[ 0.982837] ACRN HVLog: Failed to init last hvlog devs, errno -19
[ 0.983023] ACRN HVLog: Initialized hvlog module with 4 cp
Enable Network Sharing to Give Network Access to the User VM
************************************************************
.. code-block:: bash
$ sudo systemctl enable systemd-networkd
$ sudo systemctl start systemd-networkd

View File

@ -7,7 +7,7 @@ ACRN hypervisor supports a hybrid scenario where the User VM (such as Zephyr
or Ubuntu) runs in a pre-launched VM or in a post-launched VM that is
launched by a Device model in the Service VM.
.. figure:: images/ACRN-Hybrid.png
:align: center
:width: 600px
:name: hybrid_scenario_on_nuc
@ -18,12 +18,20 @@ The following guidelines
describe how to set up the ACRN hypervisor hybrid scenario on the Intel NUC,
as shown in :numref:`hybrid_scenario_on_nuc`.
.. note::
All build operations are done directly on the target. Building the artifacts (ACRN hypervisor, kernel, tools and Zephyr)
on a separate development machine can be done but is not described in this document.
.. contents::
:local:
:depth: 1
.. rst-class:: numbered-step
Set-up base installation
************************
- Use the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_.
- Connect to the serial port as described in :ref:`Connecting to the serial port <connect_serial_port>`.
- Install Ubuntu 18.04 on your SATA device or on the NVME disk of your
@ -31,6 +39,51 @@ Prerequisites
.. rst-class:: numbered-step
Prepare the Zephyr image
************************
Prepare the Zephyr kernel that you will run in VM0 later.
- Follow step 1 from the :ref:`using_zephyr_as_uos` instructions
.. note:: We only need the binary Zephyr kernel, not the entire ``zephyr.img``
- Copy the :file:`zephyr/zephyr.bin` to the ``/boot`` folder::
sudo cp zephyr/zephyr.bin /boot
.. rst-class:: numbered-step
Set-up ACRN on your device
**************************
- Follow the instructions in :ref:`getting-started-building` to build ACRN using the
``hybrid`` scenario. Here is the build command line for the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_::
make BOARD=nuc7i7dnb SCENARIO=hybrid
- Install the ACRN hypervisor and tools
.. code-block:: none
cd ~/acrn-hypervisor # Or wherever your sources are
sudo make install
sudo cp build/hypervisor/acrn.bin /boot
sudo cp build/hypervisor/acpi/ACPI_VM0.bin /boot
- Build and install the ACRN kernel
.. code-block:: none
cd ~/acrn-kernel # Or where your ACRN kernel sources are
cp kernel_config_uefi_sos .config
make olddefconfig
make
sudo make modules_install
sudo cp arch/x86/boot/bzImage /boot/bzImage
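Before updating GRUB, you can quickly confirm that the boot artifacts copied in the
previous steps are all in place; a minimal sanity check (a sketch) is:

.. code-block:: none

   ls -l /boot/acrn.bin /boot/ACPI_VM0.bin /boot/zephyr.bin /boot/bzImage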
.. rst-class:: numbered-step
Update Ubuntu GRUB
******************

View File

@ -11,7 +11,7 @@ ACRN hypervisor.
ACRN Service VM Setup
*********************
Follow the steps in this :ref:`gsg` to set up ACRN
based on Ubuntu and launch the Service VM.
Setup for Using Windows as the Guest VM

View File

@ -417,13 +417,17 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
This option is used to define which channel is used by the DM to
communicate with the VM about power management events.
ACRN supports three channels: ``ioc``, ``power_button``, and ``uart``.
For ``uart``, an additional option, ``,allow_trigger_s5``, can be added.
A user can use this option to indicate that the User VM is allowed to trigger
system S5.
usage::
--pm_notify_channel ioc
Use ``ioc`` as the power management event notify channel.
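For example, to use the ``uart`` channel and allow the User VM to trigger system S5
(a sketch; the other ``acrn-dm`` options in the launch script are omitted)::

   --pm_notify_channel uart,allow_trigger_s5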
----

View File

@ -317,6 +317,28 @@ relevant for configuring or debugging ACRN-based systems.
intel_iommu=off
* - ``hugepages``
``hugepagesz``
- Service VM, User VM
- ``hugepages``:
HugeTLB pages to allocate at boot.
``hugepagesz``:
The size of the HugeTLB pages. On x86-64 and PowerPC,
this option can be specified multiple times interleaved
with ``hugepages`` to reserve huge pages of different sizes.
Valid page sizes on x86-64 are 2M (when the CPU supports Page Size Extension (PSE))
and 1G (when the CPU supports the ``pdpe1gb`` cpuinfo flag).
- ::
hugepages=10
hugepagesz=1G
.. note:: The ``hugepages`` and ``hugepagesz`` parameters are automatically
taken care of by the ACRN configuration tool. If you have customized hugepage
settings to satisfy particular workloads in the Service VM, you can redefine the
``hugepages`` and ``hugepagesz`` parameters in the GRUB menu to override
the settings from the ACRN configuration tool.
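For example, an override that interleaves the two parameters to reserve both 1G and
2M pages, appended to the Service VM kernel command line in the GRUB entry, could look
like this (a sketch; the page counts are placeholders to adapt to your workload):

.. code-block:: none

   hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024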
Intel GVT-g (AcrnGT) Parameters
*******************************

View File

@ -273,7 +273,8 @@ devices.</xs:documentation>
</xs:element>
<xs:element name="MAX_MSIX_TABLE_NUM" default="64">
<xs:annotation>
<xs:documentation>Maximum number of MSI-X tables per device.
If this value is empty, then the default value will be calculated from the board XML file.</xs:documentation>
</xs:annotation>
<xs:simpleType>
<xs:annotation>