doc: update release_1.6 docs with master docs
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
doc/404.rst (new file)
@@ -0,0 +1,28 @@
+:orphan:
+
+.. _page-not-found:
+
+Page Not Found
+##############
+
+.. rst-class:: rst-columns
+
+.. image:: images/ACRN-fall-from-tree-small.png
+   :align: left
+   :width: 320px
+
+Sorry. The page you requested was not found on this site.
+
+Check the address for misspellings.
+It's also possible we've removed or renamed the page you're looking for.
+
+Try using the navigation links on the left of this page to navigate
+the major sections of our site, or use the document search box.
+
+If you got this error by following a link, please let us know by sending
+us a message to `info@projectacrn.org
+<mailto:info@projectacrn.org?subject=projectacrn.github.io%20broken%20link>`_.
+
+.. raw:: html
+
+   <div style='clear:both'></div>
@@ -82,9 +82,12 @@ publish:
	cd $(PUBLISHDIR)/..; git pull origin master
	rm -fr $(PUBLISHDIR)/*
	cp -r $(BUILDDIR)/html/* $(PUBLISHDIR)
ifeq ($(RELEASE),latest)
	cp scripts/publish-README.md $(PUBLISHDIR)/../README.md
	cp scripts/publish-index.html $(PUBLISHDIR)/../index.html
	cp scripts/publish-robots.txt $(PUBLISHDIR)/../robots.txt
	sed 's/<head>/<head>\n <base href="https:\/\/projectacrn.github.io\/latest\/">/' $(BUILDDIR)/html/404.html > $(PUBLISHDIR)/../404.html
endif
	cd $(PUBLISHDIR)/..; git add -A; git commit -s -m "publish $(RELEASE)"; git push origin master;
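
The ``sed`` rule above injects a ``<base>`` element into the copied 404
page so that its relative links resolve against the published ``latest``
tree. A minimal sketch of the effect, assuming GNU sed (which expands
``\n`` in the replacement to a newline) and a trivial stand-in for
``$(BUILDDIR)/html/404.html``:

.. code-block:: sh

   # Stand-in input file; the real build uses the Sphinx-generated 404.html.
   echo '<html><head></head><body>Not found</body></html>' > 404.html

   # Same substitution as the Makefile rule:
   sed 's/<head>/<head>\n <base href="https:\/\/projectacrn.github.io\/latest\/">/' 404.html

   # Output:
   # <html><head>
   #  <base href="https://projectacrn.github.io/latest/"></head><body>Not found</body></html>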

@@ -3,7 +3,7 @@
Device Model APIs
#################

-This section contains APIs for the SOS Device Model services. Sources
+This section contains APIs for the Service VM Device Model services. Sources
for the Device Model are found in the devicemodel folder of the `ACRN
hypervisor GitHub repo`_

doc/asa.rst
@@ -1,66 +1,74 @@
.. _asa:

Security Advisory
-*****************
+#################

-We recommend that all developers upgrade to this v1.6 release, which
+Addressed in ACRN v1.6
+**********************
+
+We recommend that all developers upgrade to this v1.6 release (or later), which
addresses the following security issues that were discovered in previous releases:

-Hypervisor Crashes When Fuzzing HC_DESTROY_VM
-------

+- Hypervisor Crashes When Fuzzing HC_DESTROY_VM
The input 'vdev->pdev' should be validated properly when handling
HC_SET_PTDEV_INTR_INFO to ensure that the physical device is linked to
'vdev'; otherwise, the hypervisor crashes when fuzzing the
hypercall HC_DESTROY_VM with crafted input.

-| **Affected Release:** v1.5 and earlier.
-| Upgrade to ACRN release v1.6.
+**Affected Release:** v1.5 and earlier.

-Hypervisor Crashes When Fuzzing HC_VM_WRITE_PROTECT_PAGE
+- Hypervisor Crashes When Fuzzing HC_VM_WRITE_PROTECT_PAGE
The input GPA is not validated when handling this hypercall; an "Invalid
GPA" that is not in the scope of the target VM's EPT address space results
in the hypervisor crashing when handling this hypercall.

-| **Affected Release:** v1.4 and earlier.
-| Upgrade to ACRN release v1.6.
+**Affected Release:** v1.4 and earlier.

-Hypervisor Crashes When Fuzzing HC_NOTIFY_REQUEST_FINISH
+- Hypervisor Crashes When Fuzzing HC_NOTIFY_REQUEST_FINISH
The input is not validated properly when handling this hypercall;
'vcpu_id' should be less than 'vm->hw.created_vcpus' instead of
'MAX_VCPUS_PER_VM'. When the software fails to validate input properly,
the hypervisor crashes when handling crafted inputs.

-| **Affected Release:** v1.4 and earlier.
-| Upgrade to ACRN release v1.6.
+**Affected Release:** v1.4 and earlier.

-Mitigation for Machine Check Error on Page Size Change

+Addressed in ACRN v1.4
+**********************
+
+We recommend that all developers upgrade to this v1.4 release (or later), which
+addresses the following security issues that were discovered in previous releases:

-------

+- Mitigation for Machine Check Error on Page Size Change
Improper invalidation for page table updates by a virtual guest operating
system for multiple Intel(R) Processors may allow an authenticated user
to potentially enable denial of service of the host system via local
access. A malicious guest kernel could trigger this issue, CVE-2018-12207.

-| **Affected Release:** v1.3 and earlier.
-| Upgrade to ACRN release v1.4.
+**Affected Release:** v1.3 and earlier.

-AP Trampoline Is Accessible to the Service VM
+- AP Trampoline Is Accessible to the Service VM
This vulnerability is triggered when validating the memory isolation
between the VM and the hypervisor. The AP Trampoline code exists in the
LOW_RAM region of the hypervisor but is potentially accessible to the
Service VM. This could be used by an attacker to mount DoS attacks on the
hypervisor if the Service VM is compromised.

-| **Affected Release:** v1.3 and earlier.
-| Upgrade to ACRN release v1.4.
+**Affected Release:** v1.3 and earlier.

-Improper Usage Of the ``LIST_FOREACH()`` Macro
+- Improper Usage Of the ``LIST_FOREACH()`` Macro
Testing discovered that the MACRO ``LIST_FOREACH()`` was incorrectly used
in some cases which could induce a "wild pointer" and cause the ACRN
Device Model to crash. Attackers can potentially use this issue to cause
denial of service (DoS) attacks.

-| **Affected Release:** v1.3 and earlier.
-| Upgrade to ACRN release v1.4.
+**Affected Release:** v1.3 and earlier.

-Hypervisor Crashes When Fuzzing HC_SET_CALLBACK_VECTOR
+- Hypervisor Crashes When Fuzzing HC_SET_CALLBACK_VECTOR
This vulnerability was reported by the Fuzzing tool for the debug version
of ACRN. When the software fails to validate input properly, an attacker
is able to craft the input in a form that is not expected by the rest of

@@ -68,31 +76,27 @@ Hypervisor Crashes When Fuzzing HC_SET_CALLBACK_VECTOR
unintended inputs, which may result in an altered control flow, arbitrary
control of a resource, or arbitrary code execution.

-| **Affected Release:** v1.3 and earlier.
-| Upgrade to ACRN release v1.4.
+**Affected Release:** v1.3 and earlier.

-FILE Pointer Is Not Closed After Using
+- FILE Pointer Is Not Closed After Using
This vulnerability was reported by the Fuzzing tool. Leaving the file
unclosed will cause a leaking file descriptor and may cause unexpected
errors in the Device Model program.

-| **Affected Release:** v1.3 and earlier.
-| Upgrade to ACRN release v1.4.
+**Affected Release:** v1.3 and earlier.

-Descriptor of Directory Stream Is Referenced After Release
+- Descriptor of Directory Stream Is Referenced After Release
This vulnerability was reported by the Fuzzing tool. A successful call to
``closedir(DIR *dirp)`` also closes the underlying file descriptor
associated with ``dirp``. Access to the released descriptor may point to
some arbitrary memory location or cause undefined behavior.

-| **Affected Release:** v1.3 and earlier.
-| Upgrade to ACRN release v1.4.
+**Affected Release:** v1.3 and earlier.

-Mutex Is Potentially Kept in a Locked State Forever
+- Mutex Is Potentially Kept in a Locked State Forever
This vulnerability was reported by the Fuzzing tool. Here,
pthread_mutex_lock/unlock pairing was not always done. Leaving a mutex in
a locked state forever can cause program deadlock, depending on the usage
scenario.

-| **Affected Release:** v1.3 and earlier.
-| Upgrade to ACRN release v1.4.
+**Affected Release:** v1.3 and earlier.

doc/conf.py
@@ -35,8 +35,11 @@ if "RELEASE" in os.environ:
# ones.

sys.path.insert(0, os.path.join(os.path.abspath('.'), 'extensions'))
-extensions = ['breathe', 'sphinx.ext.graphviz', 'sphinx.ext.extlinks',
-              'kerneldoc', 'eager_only', 'html_redirects']
+extensions = [
+    'breathe', 'sphinx.ext.graphviz', 'sphinx.ext.extlinks',
+    'kerneldoc', 'eager_only', 'html_redirects',
+    'sphinx_tabs.tabs'
+]

# extlinks provides a macro template
@@ -186,6 +189,7 @@ else:
html_context = {
    'current_version': current_version,
    'versions': ( ("latest", "/latest/"),
+                 ("1.6", "/1.6/"),
                  ("1.5", "/1.5/"),
                  ("1.4", "/1.4/"),
                  ("1.3", "/1.3/"),
@@ -216,6 +220,8 @@ html_static_path = ['static']

def setup(app):
    app.add_stylesheet("acrn-custom.css")
+   app.add_javascript("https://www.googletagmanager.com/gtag/js?id=UA-831873-64")
+   # note more GA tag manager calls are in acrn-custom.js
    app.add_javascript("acrn-custom.js")

# Custom sidebar templates, must be a dictionary that maps document names
@@ -12,7 +12,7 @@ Design Guides
*************

Read about ACRN's high-level design and architecture principles that led
-to the develoment of the ACRN hypervisor and its components. You'll
+to the development of the ACRN hypervisor and its components. You'll
also find details about specific architecture topics.

.. toctree::
@@ -4,8 +4,8 @@ Advanced Guides
###############


-Tools
-*****
+Configuration and Tools
+***********************

.. toctree::
   :glob:
@@ -13,6 +13,7 @@ Tools

   tutorials/acrn_configuration_tool
   reference/kconfig/index
+  user-guides/kernel-parameters
   user-guides/acrn-shell
   user-guides/acrn-dm-parameters
   misc/tools/acrn-crashlog/README
@@ -34,16 +35,15 @@ User VM Tutorials
.. toctree::
   :maxdepth: 1

   tutorials/using_agl_as_uos
   tutorials/agl-vms
   tutorials/using_celadon_as_uos
   tutorials/building_uos_from_clearlinux
   tutorials/using_vxworks_as_uos
   tutorials/using_windows_as_uos
   tutorials/using_zephyr_as_uos
   tutorials/running_deb_as_user_vm
   tutorials/running_ubun_as_user_vm
   tutorials/running_deb_as_user_vm
   tutorials/using_xenomai_as_uos
   tutorials/using_celadon_as_uos
   tutorials/using_vxworks_as_uos
   tutorials/using_zephyr_as_uos
   tutorials/agl-vms

Enable ACRN Features
********************
@@ -53,17 +53,18 @@ Enable ACRN Features

   tutorials/acrn-dm_QoS
   tutorials/open_vswitch
   tutorials/rtvm_workload_design_guideline
   tutorials/sgx_virtualization
   tutorials/vuart_configuration
   tutorials/skl-nuc
   tutorials/rdt_configuration
   tutorials/using_sbl_on_up2
   tutorials/trustyACRN
   tutorials/waag-secure-boot
   tutorials/enable_s5
   tutorials/cpu_sharing
   tutorials/sriov_virtualization
   tutorials/skl-nuc
   tutorials/run_kata_containers
   tutorials/trustyACRN
   tutorials/rtvm_workload_design_guideline

Debug
*****
@@ -73,6 +74,7 @@ Debug

   tutorials/debug
+  tutorials/realtime_performance_tuning
   tutorials/rtvm_performance_tips

Additional Tutorials
********************
@@ -81,16 +83,10 @@ Additional Tutorials
   :maxdepth: 1

   tutorials/up2
   tutorials/increase-uos-disk-size
   tutorials/sign_clear_linux_image
   tutorials/static-ip
   tutorials/using_partition_mode_on_nuc
   tutorials/using_partition_mode_on_up2
   tutorials/using_sdc2_mode_on_nuc
   tutorials/using_hybrid_mode_on_nuc
   tutorials/kbl-nuc-sdc
   tutorials/enable_laag_secure_boot
   tutorials/building_acrn_in_docker
   tutorials/acrn_ootb
   tutorials/run_kata_containers
   user-guides/kernel-parameters
   tutorials/static-ip
   tutorials/increase-uos-disk-size
   tutorials/sign_clear_linux_image
   tutorials/enable_laag_secure_boot
   tutorials/kbl-nuc-sdc
@@ -35,11 +35,11 @@ background introduction, please refer to:

virtio-echo is implemented as a virtio legacy device in the ACRN device
model (DM), and is registered as a PCI virtio device to the guest OS
-(UOS). The virtio-echo software has three parts:
+(User VM). The virtio-echo software has three parts:

-- **virtio-echo Frontend Driver**: This driver runs in the UOS. It prepares
+- **virtio-echo Frontend Driver**: This driver runs in the User VM. It prepares
  the RXQ and notifies the backend for receiving incoming data when the
-  UOS starts. Second, it copies the received data from the RXQ to TXQ
+  User VM starts. Second, it copies the received data from the RXQ to TXQ
  and sends them to the backend. After receiving the message that the
  transmission is completed, it starts again another round of reception
  and transmission, and keeps running until a specified number of cycle
@@ -71,30 +71,30 @@ Virtualization Overhead Analysis
********************************

Let's analyze the overhead of the VBS-K framework. As we know, the VBS-K
-handles notifications in the SOS kernel instead of in the SOS user space
+handles notifications in the Service VM kernel instead of in the Service VM user space
DM. This can avoid overhead from switching between kernel space and user
-space. Virtqueues are allocated by UOS, and virtqueue information is
+space. Virtqueues are allocated by the User VM, and virtqueue information is
configured to the VBS-K backend by the virtio-echo driver in the DM, thus
-virtqueues can be shared between UOS and SOS. There is no copy overhead
+virtqueues can be shared between the User VM and Service VM. There is no copy overhead
in this sense. The overhead of the VBS-K framework mainly contains two
parts: kick overhead and notify overhead.

-- **Kick Overhead**: The UOS gets trapped when it executes sensitive
+- **Kick Overhead**: The User VM gets trapped when it executes sensitive
  instructions that notify the hypervisor first. The notification is
  assembled into an IOREQ, saved in a shared IO page, and then
  forwarded to the VHM module by the hypervisor. The VHM notifies its
  client for this IOREQ; in this case, the client is the vbs-echo
  backend driver. Kick overhead is defined as the interval from the
-  beginning of UOS trap to a specific VBS-K driver e.g. when
+  beginning of the User VM trap to when a specific VBS-K driver, e.g.,
  virtio-echo, gets notified.
- **Notify Overhead**: After the data in the virtqueue is processed by the
  backend driver, vbs-echo calls the VHM module to inject an interrupt
  into the frontend. The VHM then uses the hypercall provided by the
-  hypervisor, which causes a UOS VMEXIT. The hypervisor finally injects
-  an interrupt into the vLAPIC of the UOS and resumes it. The UOS
+  hypervisor, which causes a User VM VMEXIT. The hypervisor finally injects
+  an interrupt into the vLAPIC of the User VM and resumes it. The User VM
  therefore receives the interrupt notification. Notify overhead is
  defined as the interval from the beginning of the interrupt injection
-  to when the UOS starts interrupt processing.
+  to when the User VM starts interrupt processing.

The overhead of a specific application based on VBS-K includes two
parts: VBS-K framework overhead and application-specific overhead.
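
Since both intervals are defined as start/end timestamps on the same
platform, they can be sketched as simple timestamp deltas. The hook
points below (``t_trap``, ``t_backend_notified``, ``t_inject``,
``t_guest_isr``) are hypothetical instrumentation variables, not part of
the actual virtio-echo sources, and a synchronized TSC is assumed:

.. code-block:: c

   #include <stdint.h>

   /* Read the x86 timestamp counter. */
   static inline uint64_t rdtsc(void)
   {
           uint32_t lo, hi;
           __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
           return ((uint64_t)hi << 32) | lo;
   }

   /* Hypothetical timestamps captured at each boundary. */
   uint64_t t_trap, t_backend_notified, t_inject, t_guest_isr;

   /* Kick overhead: User VM trap -> VBS-K backend notified. */
   uint64_t kick_cycles(void)   { return t_backend_notified - t_trap; }

   /* Notify overhead: interrupt injection -> User VM ISR entry. */
   uint64_t notify_cycles(void) { return t_guest_isr - t_inject; }
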
@@ -2916,12 +2916,12 @@ Compliant example::
Non-compliant example::

   /*
-   * The example here uses the char ␣ to stand for the space at the end of the line
+   * The example here uses the char ~ to stand for the space at the end of the line
    * in order to highlight the non-compliant part.
    */
-  uint32_t a;␣␣␣␣
-  uint32_t b;␣␣␣␣
-  uint32_t c;␣␣␣␣
+  uint32_t a;~~~~
+  uint32_t b;~~~~
+  uint32_t c;~~~~


C-CS-06: A single space shall exist between non-function-like keywords and opening brackets
@@ -3364,8 +3364,8 @@ The data structure types include struct, union, and enum.
This rule applies to the data structure with all the following properties:

a) The data structure is used by multiple modules;
-b) The corresponding resource is exposed to external components, such as SOS or
-   UOS;
+b) The corresponding resource is exposed to external components, such as
+   the Service VM or a User VM;
c) The name meaning is simplistic or common, such as vcpu or vm.

Compliant example::
@@ -18,7 +18,7 @@ and about `Sphinx extensions`_ from their respective websites.

.. _Sphinx extensions: https://www.sphinx-doc.org/en/stable/contents.html
.. _reStructuredText: http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html
.. _Sphinx Inline Markup: https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html
.. _Project ACRN documentation: https://projectacrn.github.io

This document provides a quick reference for commonly used reST and
@@ -165,29 +165,13 @@ Would be rendered as:
Multi-column lists
******************

-If you have a long bullet list of items, where each item is short,
-you can indicate the list items should be rendered in multiple columns
-with a special ``hlist`` directive:
+If you have a long bullet list of items, where each item is short, you
+can indicate the list items should be rendered in multiple columns with
+a special ``.. rst-class:: rst-columns`` directive. The directive will
+apply to the next non-comment element (e.g., paragraph), or to content
+indented under the directive. For example, this unordered list::

-.. code-block:: rest
-
-   .. hlist::
-      :columns: 3
-
-      * A list of
-      * short items
-      * that should be
-      * displayed
-      * horizontally
-      * so it doesn't
-      * use up so much
-      * space on
-      * the page
-
-This would be rendered as:
-
-.. hlist::
-   :columns: 3
+   .. rst-class:: rst-columns

   * A list of
   * short items
@@ -199,8 +183,102 @@ This would be rendered as:
   * space on
   * the page

-Note the optional ``:columns:`` parameter (default is two columns), and
-all the list items are indented by three spaces.

would be rendered as:

.. rst-class:: rst-columns

* A list of
* short items
* that should be
* displayed
* horizontally
* so it doesn't
* use up so much
* space on
* the page

A maximum of three columns will be displayed, and change based on the
available width of the display window, reducing to one column on narrow
(phone) screens if necessary. We've deprecated use of the ``hlist``
directive because it misbehaves on smaller screens.

Tables
******

There are a few ways to create tables, each with their limitations or
quirks. `Grid tables
<http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#grid-tables>`_
offer the most capability for defining merged rows and columns, but are
hard to maintain::

   +------------------------+------------+----------+----------+
   | Header row, column 1   | Header 2   | Header 3 | Header 4 |
   | (header rows optional) |            |          |          |
   +========================+============+==========+==========+
   | body row 1, column 1   | column 2   | column 3 | column 4 |
   +------------------------+------------+----------+----------+
   | body row 2             | ...        | ...      | you can  |
   +------------------------+------------+----------+ easily   +
   | body row 3 with a two column span   | ...      | span     |
   +------------------------+------------+----------+ rows     +
   | body row 4             | ...        | ...      | too      |
   +------------------------+------------+----------+----------+

This example would render as:

+------------------------+------------+----------+----------+
| Header row, column 1   | Header 2   | Header 3 | Header 4 |
| (header rows optional) |            |          |          |
+========================+============+==========+==========+
| body row 1, column 1   | column 2   | column 3 | column 4 |
+------------------------+------------+----------+----------+
| body row 2             | ...        | ...      | you can  |
+------------------------+------------+----------+ easily   +
| body row 3 with a two column span   | ...      | span     |
+------------------------+------------+----------+ rows     +
| body row 4             | ...        | ...      | too      |
+------------------------+------------+----------+----------+

`List tables
<http://docutils.sourceforge.net/docs/ref/rst/directives.html#list-table>`_
are much easier to maintain, but don't support row or column spans::

   .. list-table:: Table title
      :widths: 15 20 40
      :header-rows: 1

      * - Heading 1
        - Heading 2
        - Heading 3
      * - body row 1, column 1
        - body row 1, column 2
        - body row 1, column 3
      * - body row 2, column 1
        - body row 2, column 2
        - body row 2, column 3

This example would render as:

.. list-table:: Table title
   :widths: 15 20 40
   :header-rows: 1

   * - Heading 1
     - Heading 2
     - Heading 3
   * - body row 1, column 1
     - body row 1, column 2
     - body row 1, column 3
   * - body row 2, column 1
     - body row 2, column 2
     - body row 2, column 3

The ``:widths:`` parameter lets you define relative column widths. The
default is equal column widths. If you have a three-column table and you
want the first column to be half as wide as the other two equal-width
columns, you can specify ``:widths: 1 2 2``. If you'd like the browser
to set the column widths automatically based on the column contents, you
can use ``:widths: auto``.

File names and Commands
***********************
@@ -222,7 +300,7 @@ Don't use items within a single backtick, for example ```word```.
Internal Cross-Reference Linking
********************************

-ReST links are only supported within the current file using the
+Traditional ReST links are only supported within the current file using the
notation:

.. code-block:: rest
@@ -251,7 +329,7 @@ Note the leading underscore indicating an inbound link.
The content immediately following
this label is the target for a ``:ref:`my label name```
reference from anywhere within the documentation set.
-The label should be added immediately before a heading so there's a
+The label **must** be added immediately before a heading so there's a
natural phrase to show when referencing this label (e.g., the heading
text).
@@ -369,6 +447,30 @@ highlighting package makes a best guess at the type of content in the
block for highlighting purposes. This can lead to some odd
highlighting in the generated output.

Images
******

Images are included in documentation by using an image directive::

   .. image:: ../../images/doc-gen-flow.png
      :align: center
      :alt: alt text for the image

or if you'd like to add an image caption, use::

   .. figure:: ../../images/doc-gen-flow.png
      :alt: image description

      Caption for the figure

The file name specified is relative to the document source file,
and we recommend putting images into an ``images`` folder where the document
source is found. The usual image formats handled by a web browser are
supported: JPEG, PNG, GIF, and SVG. Keep the image size only as large
as needed, generally at least 500 px wide but no more than 1000 px, and
no more than 250 KB unless a particularly large image is needed for
clarity.

Tabs, spaces, and indenting
***************************
@@ -431,4 +533,121 @@ We've also included the ``graphviz`` Sphinx extension to let you use a text
description language to render drawings. See :ref:`graphviz-examples` for more
information.

Alternative Tabbed Content
**************************

Instead of creating multiple documents with common material except for
some specific sections, you can write one document and provide alternative
content to the reader via a tabbed interface. When the reader clicks on
a tab, the content for that tab is displayed, for example::

   .. tabs::

      .. tab:: Apples

         Apples are green, or sometimes red.

      .. tab:: Pears

         Pears are green.

      .. tab:: Oranges

         Oranges are orange.

will display as:

.. tabs::

   .. tab:: Apples

      Apples are green, or sometimes red.

   .. tab:: Pears

      Pears are green.

   .. tab:: Oranges

      Oranges are orange.

Tabs can also be grouped, so that changing the current tab in one area
changes all tabs with the same name throughout the page. For example:

.. tabs::

   .. group-tab:: Linux

      Linux Line 1

   .. group-tab:: macOS

      macOS Line 1

   .. group-tab:: Windows

      Windows Line 1

.. tabs::

   .. group-tab:: Linux

      Linux Line 2

   .. group-tab:: macOS

      macOS Line 2

   .. group-tab:: Windows

      Windows Line 2

In this latter case, we're using a ``.. group-tab::`` directive instead of
a ``.. tab::`` directive. Under the hood, we're using the `sphinx-tabs
<https://github.com/djungelorm/sphinx-tabs>`_ extension that's included
in the ACRN (requirements.txt) setup. Within a tab, you can have most
any content *other than a heading* (code-blocks, ordered and unordered
lists, pictures, paragraphs, and such). You can read more about
sphinx-tabs from the link above.

Instruction Steps
*****************

Numbered instruction steps is a style that makes it
easy to create tutorial guides with clearly identified steps. Add
the ``.. rst-class:: numbered-step`` directive immediately before a
second-level heading (by project convention, a heading underlined with
asterisks ``******``), and it will be displayed as a numbered step,
sequentially numbered within the document. (Second-level headings
without this ``rst-class`` directive will not be numbered.) For example::

   .. rst-class:: numbered-step

   Put your right hand in
   **********************

.. rst-class:: numbered-step

First instruction step
**********************

This is the first instruction step material. You can do the usual paragraphs and
pictures as you'd use in normal document writing. Write the heading to
be a summary of what the step is (the step numbering is automated so you
can move steps around easily if needed).

.. rst-class:: numbered-step

Second instruction step
***********************

This is the second instruction step.

.. note:: As implemented,
   only one set of numbered steps is intended per document and the steps
   must be level 2 headings.

Documentation Generation
************************

For instructions on building the documentation, see :ref:`acrn_doc`.
@@ -17,12 +17,12 @@ the below diagram.

HBA is registered to the PCI system with device id 0x2821 and vendor id
0x8086. Its memory registers are mapped in BAR 5. It only supports 6
-ports (refer to ICH8 AHCI). AHCI driver in the Guest OS can access HBA in DM
+ports (refer to ICH8 AHCI). AHCI driver in the User VM can access HBA in DM
through the PCI BAR. And HBA can inject MSI interrupts through the PCI
framework.

-When the application in the Guest OS reads data from /dev/sda, the request will
-send through the AHCI driver and then the PCI driver. The Guest VM will trap to
+When the application in the User VM reads data from /dev/sda, the request will
+be sent through the AHCI driver and then the PCI driver. The User VM will trap to
the hypervisor, and the hypervisor dispatches the request to the DM. According to the
offset in the BAR, the request will be dispatched to the port control handler.
Then the request is parsed into a block I/O request which can be processed
@@ -32,13 +32,13 @@ Usage:

***-s <slot>,ahci,<type:><filepath>***

-Type: ‘hd’ and ‘cd’ are available.
+Type: 'hd' and 'cd' are available.

Filepath: the path for the backend file, could be a partition or a
regular file.

-E.g.
+For example,

-SOS: -s 20,ahci,hd:/dev/mmcblk0p1
+Service VM: -s 20,ahci,hd:/dev/mmcblk0p1

-UOS: /dev/sda
+User VM: /dev/sda
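
For orientation, the PCI identity described above (vendor 0x8086, device
0x2821, HBA registers in BAR 5, 6 ports) could be summarized in C as
below. This is an illustrative sketch with hypothetical type names, not
the actual devicemodel source:

.. code-block:: c

   #include <stdint.h>

   /* Virtual AHCI HBA identity as described above (ICH8-like). */
   #define AHCI_VENDOR_ID  0x8086u  /* Intel */
   #define AHCI_DEVICE_ID  0x2821u
   #define AHCI_BAR_IDX    5        /* ABAR: memory-mapped HBA registers */
   #define AHCI_MAX_PORTS  6

   struct vahci_port {
           uint64_t clb;  /* command list base (guest physical) */
           uint64_t fb;   /* FIS receive area */
           /* ... per-port state ... */
   };

   struct vahci_hba {
           uint32_t cap;  /* HBA capabilities register */
           struct vahci_port port[AHCI_MAX_PORTS];
   };
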
@@ -315,7 +315,7 @@ High Level Architecture
***********************

:numref:`gvt-arch` shows the overall architecture of GVT-g, based on the
-ACRN hypervisor, with SOS as the privileged VM, and multiple user
+ACRN hypervisor, with Service VM as the privileged VM, and multiple user
guests. A GVT-g device model working with the ACRN hypervisor,
implements the policies of trap and pass-through. Each guest runs the
native graphics driver and can directly access performance-critical
@@ -323,7 +323,7 @@ resources: the Frame Buffer and Command Buffer, with resource
partitioning (as presented later). To protect privileged resources, that
is, the I/O registers and PTEs, corresponding accesses from the graphics
driver in user VMs are trapped and forwarded to the GVT device model in
-SOS for emulation. The device model leverages i915 interfaces to access
+Service VM for emulation. The device model leverages i915 interfaces to access
the physical GPU.

In addition, the device model implements a GPU scheduler that runs
@@ -338,7 +338,7 @@ direct GPU execution. With that, GVT-g can achieve near-native
performance for a VM workload.

In :numref:`gvt-arch`, the yellow GVT device model works as a client on
-top of an i915 driver in the SOS. It has a generic Mediated Pass-Through
+top of an i915 driver in the Service VM. It has a generic Mediated Pass-Through
(MPT) interface, compatible with all types of hypervisors. For ACRN,
some extra development work is needed for such MPT interfaces. For
example, we need some changes in ACRN-DM to make ACRN compatible with
@@ -368,7 +368,7 @@ trap-and-emulation, including MMIO virtualization, interrupt
virtualization, and display virtualization. It also handles and
processes all the requests internally, such as command scan and shadow,
schedules them in the proper manner, and finally submits to
-the SOS i915 driver.
+the Service VM i915 driver.

.. figure:: images/APL_GVT-g-DM.png
   :width: 800px
@@ -446,7 +446,7 @@ interrupts are categorized into three types:
exception to this is the VBlank interrupt. Due to the demands of user
space compositors, such as Wayland, which requires a flip done event
to be synchronized with a VBlank, this interrupt is forwarded from
-SOS to UOS when SOS receives it from the hardware.
+Service VM to User VM when Service VM receives it from the hardware.

- Event-based GPU interrupts are emulated by the emulation logic. For
  example, AUX Channel Interrupt.
@@ -524,7 +524,7 @@ later after performing a few basic checks and verifications.
Display Virtualization
----------------------

-GVT-g reuses the i915 graphics driver in the SOS to initialize the Display
+GVT-g reuses the i915 graphics driver in the Service VM to initialize the Display
Engine, and then manages the Display Engine to show different VM frame
buffers. When two vGPUs have the same resolution, only the frame buffer
locations are switched.
@@ -550,7 +550,7 @@ A typical automotive use case is where there are two displays in the car
and each one needs to show one domain's content, with the two domains
being the Instrument cluster and the In Vehicle Infotainment (IVI). As
shown in :numref:`direct-display`, this can be accomplished through the direct
-display model of GVT-g, where the SOS and UOS are each assigned all HW
+display model of GVT-g, where the Service VM and User VM are each assigned all HW
planes of two different pipes. GVT-g has a concept of display owner on a
per HW plane basis. If it determines that a particular domain is the
owner of a HW plane, then it allows the domain's MMIO register write to
@@ -567,15 +567,15 @@ Indirect Display Model

   Indirect Display Model

-For security or fastboot reasons, it may be determined that the UOS is
+For security or fastboot reasons, it may be determined that the User VM is
either not allowed to display its content directly on the HW or it may
be too late before it boots up and displays its content. In such a
scenario, the responsibility of displaying content on all displays lies
-with the SOS. One of the use cases that can be realized is to display the
-entire frame buffer of the UOS on a secondary display. GVT-g allows for this
-model by first trapping all MMIO writes by the UOS to the HW. A proxy
-application can then capture the address in GGTT where the UOS has written
-its frame buffer and using the help of the Hypervisor and the SOS's i915
+with the Service VM. One of the use cases that can be realized is to display the
+entire frame buffer of the User VM on a secondary display. GVT-g allows for this
+model by first trapping all MMIO writes by the User VM to the HW. A proxy
+application can then capture the address in GGTT where the User VM has written
+its frame buffer and using the help of the Hypervisor and the Service VM's i915
driver, can convert the Guest Physical Addresses (GPAs) into Host
Physical Addresses (HPAs) before making a texture source or EGL image
out of the frame buffer and then either post processing it further or
@@ -585,33 +585,33 @@ GGTT-Based Surface Sharing
--------------------------

One of the major automotive use cases is called "surface sharing". This
-use case requires that the SOS accesses an individual surface or a set of
-surfaces from the UOS without having to access the entire frame buffer of
-the UOS. Unlike the previous two models, where the UOS did not have to do
-anything to show its content and therefore a completely unmodified UOS
-could continue to run, this model requires changes to the UOS.
+use case requires that the Service VM accesses an individual surface or a set of
+surfaces from the User VM without having to access the entire frame buffer of
+the User VM. Unlike the previous two models, where the User VM did not have to do
+anything to show its content and therefore a completely unmodified User VM
+could continue to run, this model requires changes to the User VM.

This model can be considered an extension of the indirect display model.
-Under the indirect display model, the UOS's frame buffer was temporarily
+Under the indirect display model, the User VM's frame buffer was temporarily
pinned by it in the video memory access through the Global graphics
translation table. This GGTT-based surface sharing model takes this a
-step further by having a compositor of the UOS to temporarily pin all
+step further by having a compositor of the User VM temporarily pin all
application buffers into GGTT. It then also requires the compositor to
create a metadata table with relevant surface information such as width,
height, and GGTT offset, and flip that in lieu of the frame buffer.
-In the SOS, the proxy application knows that the GGTT offset has been
+In the Service VM, the proxy application knows that the GGTT offset has been
flipped, maps it, and through it can access the GGTT offset of an
application that it wants to access. It is worth mentioning that in this
-model, UOS applications did not require any changes, and only the
+model, User VM applications did not require any changes, and only the
compositor, Mesa, and i915 driver had to be modified.

This model has a major benefit and a major limitation. The
benefit is that since it builds on top of the indirect display model,
-there are no special drivers necessary for it on either SOS or UOS.
+there are no special drivers necessary for it on either Service VM or User VM.
Therefore, any Real Time Operating System (RTOS) that uses
this model can simply do so without having to implement a driver, the
infrastructure for which may not be present in their operating system.
-The limitation of this model is that video memory dedicated for a UOS is
+The limitation of this model is that video memory dedicated for a User VM is
generally limited to a couple of hundred MBs. This can easily be
exhausted by a few application buffers so the number and size of buffers
is limited. Since it is not a highly-scalable model, in general, Intel
@@ -634,24 +634,24 @@ able to share its pages with another driver within one domain.
Application buffers are backed by i915 Graphics Execution Manager
Buffer Objects (GEM BOs). As in GGTT surface
sharing, this model also requires compositor changes. The compositor of
-UOS requests i915 to export these application GEM BOs and then passes
+User VM requests i915 to export these application GEM BOs and then passes
them on to a special driver called the Hyper DMA Buf exporter whose job
is to create a scatter gather list of pages mapped by PDEs and PTEs and
export a Hyper DMA Buf ID back to the compositor.

-The compositor then shares this Hyper DMA Buf ID with the SOS's Hyper DMA
+The compositor then shares this Hyper DMA Buf ID with the Service VM's Hyper DMA
Buf importer driver which then maps the memory represented by this ID in
-the SOS. A proxy application in the SOS can then provide the ID of this driver
-to the SOS i915, which can create its own GEM BO. Finally, the application
+the Service VM. A proxy application in the Service VM can then provide the ID of this driver
+to the Service VM i915, which can create its own GEM BO. Finally, the application
can use it as an EGL image and do any post processing required before
-either providing it to the SOS compositor or directly flipping it on a
+either providing it to the Service VM compositor or directly flipping it on a
HW plane in the compositor's absence.

This model is highly scalable and can be used to share up to 4 GB worth
of pages. It is also not limited to only sharing graphics buffers. Other
buffers for the IPU and others, can also be shared with it. However, it
-does require that the SOS port the Hyper DMA Buffer importer driver. Also,
-the SOS OS must comprehend and implement the DMA buffer sharing model.
+does require that the Service VM port the Hyper DMA Buffer importer driver. Also,
+the Service VM must comprehend and implement the DMA buffer sharing model.

For detailed information about this model, please refer to the `Linux
HYPER_DMABUF Driver High Level Design
@@ -669,13 +669,13 @@ Plane-Based Domain Ownership

   Plane-Based Domain Ownership

-Yet another mechanism for showing content of both the SOS and UOS on the
+Yet another mechanism for showing content of both the Service VM and User VM on the
same physical display is called plane-based domain ownership. Under this
-model, both the SOS and UOS are provided a set of HW planes that they can
+model, both the Service VM and User VM are provided a set of HW planes that they can
flip their contents on to. Since each domain provides its content, there
-is no need for any extra composition to be done through the SOS. The display
+is no need for any extra composition to be done through the Service VM. The display
controller handles alpha blending contents of different domains on a
-single pipe. This saves on any complexity on either the SOS or the UOS
+single pipe. This saves on any complexity on either the Service VM or the User VM
SW stack.

It is important to provide only specific planes and have them statically
@@ -689,7 +689,7 @@ show the correct content on them. No other changes are necessary.
While the biggest benefit of this model is that it is extremely simple and
quick to implement, it also has some drawbacks. First, since each domain
is responsible for showing the content on the screen, there is no
-control of the UOS by the SOS. If the UOS is untrusted, this could
+control of the User VM by the Service VM. If the User VM is untrusted, this could
potentially cause some unwanted content to be displayed. Also, there is
no post processing capability, except that provided by the display
controller (for example, scaling, rotation, and so on). So each domain
@@ -834,43 +834,43 @@ Different Schedulers and Their Roles

In the system, there are three different schedulers for the GPU:

-- i915 UOS scheduler
+- i915 User VM scheduler
- Mediator GVT scheduler
-- i915 SOS scheduler
+- i915 Service VM scheduler

-Since UOS always uses the host-based command submission (ELSP) model,
+Since User VM always uses the host-based command submission (ELSP) model,
and it never accesses the GPU or the Graphic Micro Controller (GuC)
directly, its scheduler cannot do any preemption by itself.
The i915 scheduler does ensure batch buffers are
submitted in dependency order, that is, if a compositor had to wait for
an application buffer to finish before its workload can be submitted to
-the GPU, then the i915 scheduler of the UOS ensures that this happens.
+the GPU, then the i915 scheduler of the User VM ensures that this happens.

-The UOS assumes that by submitting its batch buffers to the Execlist
+The User VM assumes that by submitting its batch buffers to the Execlist
Submission Port (ELSP), the GPU will start working on them. However,
the MMIO write to the ELSP is captured by the Hypervisor, which forwards
these requests to the GVT module. GVT then creates a shadow context
-based on this batch buffer and submits the shadow context to the SOS
+based on this batch buffer and submits the shadow context to the Service VM
i915 driver.

However, it is dependent on a second scheduler called the GVT
scheduler. This scheduler is time based and uses a round robin algorithm
-to provide a specific time for each UOS to submit its workload when it
-is considered as a "render owner". The workload of the UOSs that are not
+to provide a specific time for each User VM to submit its workload when it
+is considered as a "render owner". The workload of the User VMs that are not
render owners during a specific time period end up waiting in the
virtual GPU context until the GVT scheduler makes them render owners.
The GVT shadow context submits only one workload at
a time, and once the workload is finished by the GPU, it copies any
context state back to DomU and sends the appropriate interrupts before
-picking up any other workloads from either this UOS or another one. This
+picking up any other workloads from either this User VM or another one. This
also implies that this scheduler does not do any preemption of
workloads.

-Finally, there is the i915 scheduler in the SOS. This scheduler uses the
-GuC or ELSP to do command submission of SOS local content as well as any
-content that GVT is submitting to it on behalf of the UOSs. This
+Finally, there is the i915 scheduler in the Service VM. This scheduler uses the
+GuC or ELSP to do command submission of Service VM local content as well as any
+content that GVT is submitting to it on behalf of the User VMs. This
scheduler uses GuC or ELSP to preempt workloads. GuC has four different
-priority queues, but the SOS i915 driver uses only two of them. One of
+priority queues, but the Service VM i915 driver uses only two of them. One of
them is considered high priority and the other is normal priority with a
GuC rule being that any command submitted on the high priority queue
would immediately try to preempt any workload submitted on the normal
@@ -893,8 +893,8 @@ preemption of lower-priority workload.

Scheduling policies are customizable and left to customers to change if
they are not satisfied with the built-in i915 driver policy, where all
-workloads of the SOS are considered higher priority than those of the
-UOS. This policy can be enforced through an SOS i915 kernel command line
+workloads of the Service VM are considered higher priority than those of the
+User VM. This policy can be enforced through a Service VM i915 kernel command line
parameter, and can replace the default in-order command submission (no
preemption) policy.
@@ -922,7 +922,7 @@ OS and an Android Guest OS.
AcrnGT in kernel
================

-The AcrnGT module in the SOS kernel acts as an adaption layer to connect
+The AcrnGT module in the Service VM kernel acts as an adaptation layer to connect
between GVT-g in the i915, the VHM module, and the ACRN-DM user space
application:
@@ -930,7 +930,7 @@ application:
  services to it, including set and unset trap areas, set and unset
  write-protection pages, etc.

-- It calls the VHM APIs provided by the ACRN VHM module in the SOS
+- It calls the VHM APIs provided by the ACRN VHM module in the Service VM
  kernel, to eventually call into the routines provided by ACRN
  hypervisor through hyper-calls.
@@ -3,8 +3,8 @@
Device Model high-level design
##############################

-Hypervisor Device Model (DM) is a QEMU-like application in SOS
-responsible for creating a UOS VM and then performing devices emulation
+Hypervisor Device Model (DM) is a QEMU-like application in the Service VM
+responsible for creating a User VM and then performing device emulation
based on command line configurations.

.. figure:: images/dm-image75.png
@@ -14,18 +14,18 @@ based on command line configurations.
   Device Model Framework

:numref:`dm-framework` above gives a big picture overview of DM
-framework. There are 3 major subsystems in SOS:
+framework. There are 3 major subsystems in Service VM:

- **Device Emulation**: DM provides backend device emulation routines for
-  frontend UOS device drivers. These routines register their I/O
+  frontend User VM device drivers. These routines register their I/O
  handlers to the I/O dispatcher inside the DM. When the VHM
  assigns any I/O request to the DM, the I/O dispatcher
  dispatches this request to the corresponding device emulation
  routine to do the emulation.

-- I/O Path in SOS:
+- I/O Path in Service VM:

-  - HV initializes an I/O request and notifies VHM driver in SOS
+  - HV initializes an I/O request and notifies VHM driver in Service VM
    through upcall.
  - VHM driver dispatches I/O requests to I/O clients and notifies the
    clients (in this case the client is the DM which is notified
@@ -34,9 +34,9 @@ framework. There are 3 major subsystems in SOS:
  - I/O dispatcher notifies VHM driver the I/O request is completed
    through char device
  - VHM driver notifies HV on the completion through hypercall
-  - DM injects VIRQ to UOS frontend device through hypercall
+  - DM injects VIRQ to User VM frontend device through hypercall

-- VHM: Virtio and Hypervisor Service Module is a kernel module in SOS as a
+- VHM: Virtio and Hypervisor Service Module is a kernel module in Service VM as a
  middle layer to support DM. Refer to :ref:`virtio-APIs` for details

This section introduces how the acrn-dm application is configured and
@@ -136,7 +136,7 @@ DM Initialization

- **Option Parsing**: DM parses options from command line inputs.

-- **VM Create**: DM calls ioctl to SOS VHM, then SOS VHM makes
+- **VM Create**: DM calls ioctl to Service VM VHM, then Service VM VHM makes
  hypercalls to HV to create a VM, it returns a vmid for a
  dedicated VM.
@@ -147,8 +147,8 @@ DM Initialization
  with VHM and HV. Refer to :ref:`hld-io-emulation` and
  :ref:`IO-emulation-in-sos` for more details.

-- **Memory Setup**: UOS memory is allocated from SOS
-  memory. This section of memory will use SOS hugetlbfs to allocate
+- **Memory Setup**: User VM memory is allocated from Service VM
+  memory. This section of memory will use Service VM hugetlbfs to allocate
  linear continuous host physical address for guest memory. It will
  try to get the page size as big as possible to guarantee maximum
  utilization of TLB. It then invokes a hypercall to HV for its EPT
@@ -175,7 +175,7 @@ DM Initialization
  according to acrn-dm command line configuration and derived from
  their default value.

-- **SW Load**: DM prepares UOS VM's SW configuration such as kernel,
+- **SW Load**: DM prepares User VM's SW configuration such as kernel,
  ramdisk, and zeropage, according to these memory locations:

  .. code-block:: c
@@ -186,7 +186,7 @@ DM Initialization
     #define ZEROPAGE_LOAD_OFF(ctx) (ctx->lowmem - 4*KB)
     #define KERNEL_LOAD_OFF(ctx)   (16*MB)

-For example, if the UOS memory is set as 800M size, then **SW Load**
+For example, if the User VM memory is set as 800M size, then **SW Load**
will prepare its ramdisk (if there is) at 0x31c00000 (796M), bootargs at
0x31ffe000 (800M - 8K), kernel entry at 0x31ffe800 (800M - 6K) and zero
page at 0x31fff000 (800M - 4K). The hypervisor will finally run VM based
@@ -277,8 +277,8 @@ VHM
VHM overview
============

-Device Model manages UOS VM by accessing interfaces exported from VHM
-module. VHM module is an SOS kernel driver. The ``/dev/acrn_vhm`` node is
+Device Model manages the User VM by accessing interfaces exported from the VHM
+module. The VHM module is a Service VM kernel driver. The ``/dev/acrn_vhm`` node is
created when the VHM module is initialized. Device Model follows the standard
Linux char device API (ioctl) to access the functionality of VHM.
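
Because VHM is exposed as a char device, calling into it from the DM
follows the usual open/ioctl pattern. A minimal sketch; the request code
is a placeholder, since the actual definitions are listed under the VHM
ioctl interfaces below:

.. code-block:: c

   #include <sys/ioctl.h>
   #include <fcntl.h>
   #include <unistd.h>

   /* Illustrative helper: issue one VHM ioctl request. The real DM
    * keeps the fd open and uses the request macros defined by VHM. */
   static int vhm_call(unsigned long request, void *args)
   {
           int fd = open("/dev/acrn_vhm", O_RDWR);
           int ret;

           if (fd < 0)
                   return -1;
           ret = ioctl(fd, request, args);  /* dispatched to VHM, which
                                             * typically hypercalls into HV */
           close(fd);
           return ret;
   }
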
@@ -287,8 +287,8 @@ hypercall to the hypervisor. There are two exceptions:

- I/O request client management is implemented in VHM.

-- For memory range management of UOS VM, VHM needs to save all memory
-  range info of UOS VM. The subsequent memory mapping update of UOS VM
+- For memory range management of User VM, VHM needs to save all memory
+  range info of User VM. The subsequent memory mapping update of User VM
  needs this information.

.. figure:: images/dm-image108.png
@@ -306,10 +306,10 @@ VHM ioctl interfaces

.. _IO-emulation-in-sos:

-I/O Emulation in SOS
-********************
+I/O Emulation in Service VM
+***************************

-I/O requests from the hypervisor are dispatched by VHM in the SOS kernel
+I/O requests from the hypervisor are dispatched by VHM in the Service VM kernel
to a registered client, responsible for further processing the
I/O access and notifying the hypervisor on its completion.
@@ -317,8 +317,8 @@ Initialization of Shared I/O Request Buffer
===========================================

For each VM, there is a shared 4-KByte memory region used for I/O request
-communication between the hypervisor and SOS. Upon initialization
-of a VM, the DM (acrn-dm) in SOS userland first allocates a 4-KByte
+communication between the hypervisor and Service VM. Upon initialization
+of a VM, the DM (acrn-dm) in Service VM userland first allocates a 4-KByte
page and passes the GPA of the buffer to HV via hypercall. The buffer is
used as an array of 16 I/O request slots with each I/O request being
256 bytes. This array is indexed by vCPU ID. Thus, each vCPU of the VM
@@ -330,7 +330,7 @@ cannot issue multiple I/O requests at the same time.
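
The 4-KByte/16-slot/256-byte layout described above implies a simple
fixed-size array. A sketch of what such a page could look like, with
illustrative field names rather than the actual ACRN definitions:

.. code-block:: c

   #include <stdint.h>

   #define VHM_REQ_SLOT_SIZE  256u
   #define VHM_REQ_SLOT_NUM   16u   /* indexed by vCPU ID */

   struct io_request {
           uint32_t type;       /* port I/O, MMIO, ... */
           uint32_t processed;  /* request state */
           uint64_t addr;       /* guest address being accessed */
           uint64_t size;
           uint64_t data;
           uint8_t  pad[VHM_REQ_SLOT_SIZE - 32];  /* keep each slot 256 B */
   };

   /* One 4-KByte page shared between the hypervisor and the Service VM:
    * 16 slots * 256 bytes = 4096 bytes. */
   struct io_request_page {
           struct io_request slot[VHM_REQ_SLOT_NUM];
   };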

I/O Clients
===========

-An I/O client is either a SOS userland application or a SOS kernel space
+An I/O client is either a Service VM userland application or a Service VM kernel space
module responsible for handling I/O access whose address
falls in a certain range. Each VM has an array of registered I/O
clients which are initialized with a fixed I/O address range, plus a PCI
@@ -389,14 +389,14 @@ Processing I/O Requests
   :align: center
   :name: io-sequence-sos

-   I/O request handling sequence in SOS
+   I/O request handling sequence in Service VM

:numref:`io-sequence-sos` above illustrates the interactions among the
hypervisor, VHM,
and the device model for handling I/O requests. The main interactions
are as follows:

-1. The hypervisor makes an upcall to SOS as an interrupt
+1. The hypervisor makes an upcall to Service VM as an interrupt
   handled by the upcall handler in VHM.

2. The upcall handler schedules the execution of the I/O request
@@ -616,11 +616,11 @@ to destination emulated devices:
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
/* Generate one msi interrupt to UOS, the index parameter indicates
|
||||
/* Generate one msi interrupt to User VM, the index parameter indicates
|
||||
* the msi number from its PCI msi capability. */
|
||||
void pci_generate_msi(struct pci_vdev *pi, int index);
|
||||
|
||||
/* Generate one msix interrupt to UOS, the index parameter indicates
|
||||
/* Generate one msix interrupt to User VM, the index parameter indicates
|
||||
* the msix number from its PCI msix bar. */
|
||||
void pci_generate_msix(struct pci_vdev *pi, int index);
|
||||
|
||||
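
For example, a virtual device that has just finished an emulated transfer
could raise its first interrupt vector like this (a sketch; ``vdev`` is
assumed to be the ``struct pci_vdev *`` the device model already holds,
and ``use_msix`` reflects which mechanism the guest configured):

.. code-block:: c

   #include <stdbool.h>

   /* Sketch: notify the guest using vector 0 of whichever interrupt
    * mechanism (MSI or MSI-X) the guest enabled for this device.
    */
   static void notify_guest(struct pci_vdev *vdev, bool use_msix)
   {
           if (use_msix)
                   pci_generate_msix(vdev, 0);
           else
                   pci_generate_msi(vdev, 0);
   }
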
@@ -984,11 +984,11 @@ potentially error-prone.
ACPI Emulation
--------------

An alternative ACPI resource abstraction option is for the SOS (SOS_VM) to
own all devices and emulate a set of virtual devices for the UOS (POST_LAUNCHED_VM).
An alternative ACPI resource abstraction option is for the Service VM to
own all devices and emulate a set of virtual devices for the User VM (POST_LAUNCHED_VM).
This is the most popular ACPI resource model for virtualization,
as shown in the picture below. ACRN currently
uses device emulation plus some device passthrough for UOS.
uses device emulation plus some device passthrough for User VM.

.. figure:: images/dm-image52.png
   :align: center
@@ -1001,11 +1001,11 @@ different components:
- **Hypervisor** - ACPI is transparent to the Hypervisor, and has no knowledge
  of ACPI at all.

- **SOS** - All ACPI resources are physically owned by SOS, and enumerates
- **Service VM** - All ACPI resources are physically owned by Service VM, and enumerates
  all ACPI tables and devices.

- **UOS** - Virtual ACPI resources, exposed by device model, are owned by
  UOS.
- **User VM** - Virtual ACPI resources, exposed by device model, are owned by
  User VM.

ACPI emulation code of device model is found in
``hw/platform/acpi/acpi.c``
@@ -1095,10 +1095,10 @@ basl_compile for each table. basl_compile does the following:
       basl_end(&io[0], &io[1]);
   }

After handling each entry, virtual ACPI tables are present in UOS
After handling each entry, virtual ACPI tables are present in User VM
memory.

For passthrough dev in UOS, we may need to add some ACPI description
For passthrough dev in User VM, we may need to add some ACPI description
in virtual DSDT table. There is one hook (passthrough_write_dsdt) in
``hw/pci/passthrough.c`` for this. The following source code shows how it
calls different functions to add different contents for each vendor and
@@ -1142,7 +1142,7 @@ device id:
   }

For instance, write_dsdt_urt1 provides ACPI contents for Bluetooth
UART device when passthroughed to UOS. It provides virtual PCI
UART device when passed through to User VM. It provides virtual PCI
device/function as _ADR. With other description, it could be used for
Bluetooth UART enumeration.

@@ -1174,23 +1174,23 @@ Bluetooth UART enumeration.
PM in Device Model
******************

PM module in Device Model emulate the UOS low power state transition.
PM module in Device Model emulates the User VM low power state transition.

Each time UOS writes an ACPI control register to initialize low power
Each time User VM writes an ACPI control register to initiate a low power
state transition, the writing operation is trapped to DM as an I/O
emulation request by the I/O emulation framework.

To emulate UOS S5 entry, DM will destroy I/O request client, release
allocated UOS memory, stop all created threads, destroy UOS VM, and exit
To emulate User VM S5 entry, DM will destroy I/O request client, release
allocated User VM memory, stop all created threads, destroy User VM, and exit
DM. To emulate S5 exit, a fresh DM instance started by the VM manager is used.

To emulate UOS S3 entry, DM pauses the UOS VM, stops the UOS watchdog,
and waits for a resume signal. When the UOS should exit from S3, DM will
get a wakeup signal and reset the UOS VM to emulate the UOS exit from
To emulate User VM S3 entry, DM pauses the User VM, stops the User VM watchdog,
and waits for a resume signal. When the User VM should exit from S3, DM will
get a wakeup signal and reset the User VM to emulate the User VM exit from
S3.

Pass-through in Device Model
****************************

You may refer to :ref:`hv-device-passthrough` for pass-through realization
in device model.

@@ -23,4 +23,5 @@ Hypervisor high-level design
   Console, Shell, and vUART <hv-console>
   Hypercall / VHM upcall <hv-hypercall>
   Compile-time configuration <hv-config>
   RDT support <hv-rdt>
   Split-locked Access handling <hld-splitlock>

@@ -5,7 +5,7 @@ ACRN high-level design overview

ACRN is an open source reference hypervisor (HV) that runs on top of Intel
platforms (APL, KBL, etc.) for heterogeneous scenarios such as the Software Defined
Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotives, or HMI & Real-Time OS for industry. ACRN provides embedded hypervisor vendors with a reference
Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & Real-Time OS for industry. ACRN provides embedded hypervisor vendors with a reference
I/O mediation solution with a permissive license and provides auto makers and
industry users a reference software stack for corresponding use.


@@ -137,7 +137,7 @@ will be dispatched to Device Model and Device Model will emulate the User VM
power state (pause User VM for S3 and power off User VM for S5)

The VM Manager monitors all User VMs. If all active User VMs are in required
power state, VM Manager will notify lifecyle manager of Service VM to start
power state, VM Manager will notify lifecycle manager of Service VM to start
Service VM power state transition. The lifecycle manager of Service VM follows
a very similar process as User VM for power state transition. The difference
is Service VM ACPI register writing is trapped to ACRN HV. And ACRN HV will
@@ -163,10 +163,10 @@ For system power state entry:
1. Service VM receives the S5 request.
2. Lifecycle manager in Service VM notifies User VM1 and the RTVM through
   vUART of the S5 request.
3. Guest lifecycle manager initliaze S5 action. And guest enter S5.
3. Guest lifecycle manager initializes the S5 action, and the guest enters S5.
4. The RTOS cleans up its RT tasks, sends the S5 request response back to the
   Service VM, and the RTVM enters S5.
5. After get response from RTVM and all User VM are shutdown, Sevice VM
5. After getting the response from the RTVM and all User VMs are shut down,
   the Service VM enters S5.
6. OSPM in the ACRN hypervisor checks that all guests are in the S5 state and
   shuts down the whole system.

@@ -228,7 +228,7 @@ UEFI Secure Boot implementations use these keys:
#. Platform Key (PK) is the top-level key in Secure Boot; UEFI supports a single PK,
   which is generally provided by the manufacturer.
#. Key Exchange Key (KEK) is used to sign Signature and Forbidden Signature Database updates.
#. Signature Database (db) contains kyes and/or hashes of allowed EFI binaries.
#. Signature Database (db) contains keys and/or hashes of allowed EFI binaries.

Keys and certificates are in multiple formats:

@@ -386,7 +386,7 @@ three typical solutions exist:
   may even require the hypervisor to flush the TLB. This solution won't
   be used by the ACRN hypervisor.

#. **Use CR0.WP (write-protection) bit.**

   This processor feature allows
   pages to be protected from supervisor-mode write access.

137
doc/developer-guides/hld/hld-splitlock.rst
Normal file
@@ -0,0 +1,137 @@
.. _hld_splitlock:

Handling Split-locked Access in ACRN
####################################

A split lock is any atomic operation whose operand crosses two cache
lines. Because the operation must be atomic, the system locks the bus
while the CPU accesses the two cache lines. Blocking bus access from
other CPUs plus the bus locking protocol overhead degrades overall
system performance.

This document explains Split-locked Access, how to detect it, and how
ACRN handles it.

Split-locked Access Introduction
********************************
Intel-64 and IA32 multiple-processor systems support locked atomic
operations on locations in system memory. For example, the LOCK instruction
prefix can be prepended to the following instructions: ADD, ADC, AND, BTC, BTR, BTS,
CMPXCHG, CMPXCHG8B, CMPXCHG16B, DEC, INC, NEG, NOT, OR, SBB, SUB, XOR, XADD,
and XCHG, when these instructions use memory destination operand forms.
Reading or writing a byte in system memory is always guaranteed to be
atomic; otherwise, these locked atomic operations can impact the system in two
ways:

- **The destination operand is located in the same cache line.**

  Cache coherency protocols ensure that atomic operations can be
  carried out on cached data structures with a cache lock.

- **The destination operand is located in two cache lines.**

  This atomic operation is called a Split-locked Access. For this situation,
  the LOCK# bus signal is asserted to lock the system bus, to ensure
  the operation is atomic. See `Intel 64 and IA-32 Architectures Software Developer's Manual (SDM), Volume 3, (Section 8.1.2 Bus Locking) <https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-3a-3b-3c-and-3d-system-programming-guide>`_.

Split-locked Access can cause unexpected long latency to ordinary memory
operations by other CPUs while the bus is locked. This degraded system
performance can be hard to investigate.

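
For illustration, the following user space sketch (GCC/Clang builtins; the
misaligned cast is deliberate and otherwise bad practice) performs exactly
such an access by placing a 4-byte locked add across a 64-byte cache line
boundary:

.. code-block:: c

   #include <stdio.h>

   /* 64-byte aligned backing store; an int placed at offset 62
    * straddles the boundary between the first two cache lines.
    */
   static _Alignas(64) unsigned char buf[128];

   int main(void)
   {
           int *p = (int *)(buf + 62);   /* misaligned on purpose */

           /* On x86 this becomes a LOCK-prefixed add whose operand
            * crosses two cache lines: a Split-locked Access. With #AC
            * on split lock enabled, it faults instead of executing.
            */
           __atomic_fetch_add(p, 1, __ATOMIC_SEQ_CST);
           printf("value = %d\n", *p);
           return 0;
   }
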
Split-locked Access Detection
*****************************
The `Intel Tremont Microarchitecture
<https://newsroom.intel.com/news/intel-introduces-tremont-microarchitecture>`_
introduced a new CPU capability for detecting Split-locked Access. When
this feature is enabled, an alignment check exception (#AC) with error
code 0 is raised for instructions causing a Split-locked Access. Because
#AC is a fault, the instruction is not executed, giving the #AC handler
an opportunity to decide how to handle this instruction:

- It can allow the instruction to run with LOCK# bus signal, potentially
  impacting performance of other CPUs.
- It can disable LOCK# assertion for split locked access, but this
  improperly makes the instruction non-atomic. (Intel plans to remove this CPU feature
  from upcoming products as documented in
  `SDM, Volume 1, (Section 2.4 PROPOSED REMOVAL FROM UPCOMING PRODUCTS.) <https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-software-developers-manual-volume-1-basic-architecture>`_.)
- It can terminate the software at this instruction.

Feature Enumeration and Control
*******************************
#AC for Split-locked Access feature is enumerated and controlled via CPUID
and MSR registers, as sketched after the list below.

- CPUID.(EAX=0x7, ECX=0):EDX[30], the 30th bit of output value in EDX, indicates
  if the platform has the IA32_CORE_CAPABILITIES MSR.

- The 5th bit of IA32_CORE_CAPABILITIES MSR (0xcf) enumerates whether the CPU
  supports #AC for Split-locked Access (and has the TEST_CTL MSR).

- The 29th bit of TEST_CTL MSR (0x33) controls enabling and disabling #AC for Split-locked
  Access.

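
These checks can be sketched as follows (``rdmsr``/``wrmsr`` are assumed
hypervisor-context helpers, since MSR access is privileged; the CPUID
helper is GCC's ``__get_cpuid_count``):

.. code-block:: c

   #include <stdint.h>
   #include <cpuid.h>

   #define MSR_IA32_CORE_CAPABILITIES  0xcfU
   #define MSR_TEST_CTL                0x33U

   /* Assumed privileged helpers; not defined here. */
   extern uint64_t rdmsr(uint32_t msr);
   extern void wrmsr(uint32_t msr, uint64_t val);

   static int split_lock_ac_supported(void)
   {
           unsigned int eax, ebx, ecx, edx;

           /* CPUID.(EAX=7,ECX=0):EDX[30]: IA32_CORE_CAPABILITIES exists. */
           if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
                   return 0;
           if (!(edx & (1U << 30)))
                   return 0;

           /* IA32_CORE_CAPABILITIES[5]: #AC on split lock is available. */
           return (rdmsr(MSR_IA32_CORE_CAPABILITIES) & (1U << 5)) != 0;
   }

   static void enable_split_lock_ac(void)
   {
           /* TEST_CTL[29] enables #AC on Split-locked Access. */
           wrmsr(MSR_TEST_CTL, rdmsr(MSR_TEST_CTL) | (1ULL << 29));
   }
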
ACRN Handling Split-locked Access
*********************************
Split-locked Access is not expected in the ACRN hypervisor itself, and
should never happen. However, such access could happen inside a VM. ACRN
support for handling split-locked access follows these design principles:

- Always enable #AC on Split-locked Access for the physical processors.

- Present a virtual split lock capability to guests (VMs), and directly
  deliver the alignment check exception (#AC) to the guest. (This
  virtual split-lock capability helps the guest isolate violations from
  user land.)

- Guest write of MSR_TEST_CTL is ignored, and guest read gets the written value.

- Any Split-locked Access in the ACRN hypervisor is a software bug we must fix.

- If a Split-locked Access happens in a guest kernel, the guest may not be able to
  fix the issue gracefully. (The guest may behave differently than the
  native OS.) The real-time (RT) guest must avoid a Split-locked Access
  and consider it a software bug.

Enable Split-Locked Access handling early
=========================================
This feature is enumerated at the Physical CPU (pCPU) pre-initialization
stage, where ACRN detects CPU capabilities. If the pCPU supports this
feature:

- Enable it at each pCPU post-initialization stage.

- ACRN hypervisor presents a virtual emulated TEST_CTL MSR to each
  Virtual CPU (vCPU).
  Setting or clearing TEST_CTL[bit 29] in a vCPU has no effect.

If the pCPU does not have this capability, a vCPU does not have the virtual
TEST_CTL either.

Expected Behavior in ACRN
=========================
The ACRN hypervisor should never trigger a Split-locked Access and it is
not allowed to run with Split-locked Access. If ACRN does trigger a
split-locked access, ACRN reports #AC at the instruction and stops
running. The offending HV instruction is considered a bug that must be
fixed.

Expected Behavior in VM
=======================
If a VM process has a Split-locked Access in user space, it will be
terminated by SIGBUS. When debugging inside a VM, you may find it
triggers an #AC even if alignment checking is disabled.

If a VM kernel has a Split-locked Access, it will hang or oops on an
#AC. A VM kernel may try to disable #AC for Split-locked Access and
continue, but it will fail. The ACRN hypervisor helps identify the
problem by reporting a warning message that the VM tried writing to
the TEST_CTL MSR.


Disable Split-locked Access Detection
=====================================
If the CPU supports Split-locked Access detection, the ACRN hypervisor
uses it to prevent any VM from running split-locked instructions that
could impact system performance. This detection can be disabled
(eventually by using the ACRN configuration tools) for customers not
caring about system performance.
@@ -13,21 +13,21 @@ Shared Buffer is a ring buffer divided into predetermined-size slots. There
are two use scenarios of Sbuf:

- sbuf can serve as a lockless ring buffer to share data from ACRN HV to
  SOS in non-overwritten mode. (Writing will fail if an overrun
  Service VM in non-overwritten mode. (Writing will fail if an overrun
  happens.)
- sbuf can serve as a conventional ring buffer in hypervisor in
  over-written mode. A lock is required to synchronize access by the
  producer and consumer.

Both ACRNTrace and ACRNLog use sbuf as a lockless ring buffer. The Sbuf
is allocated by SOS and assigned to HV via a hypercall. To hold pointers
is allocated by Service VM and assigned to HV via a hypercall. To hold pointers
to sbuf passed down via hypercall, an array ``sbuf[ACRN_SBUF_ID_MAX]``
is defined in per_cpu region of HV, with predefined sbuf id to identify
the usage, such as ACRNTrace, ACRNLog, etc.

For each physical CPU there is a dedicated Sbuf. Only a single producer
is allowed to put data into that Sbuf in HV, and a single consumer is
allowed to get data from Sbuf in SOS. Therefore, no lock is required to
allowed to get data from Sbuf in Service VM. Therefore, no lock is required to
synchronize access by the producer and consumer.

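
The lockless single-producer, single-consumer discipline can be sketched as
follows (a simplified illustration of the idea, not the actual
``shared_buf`` layout or the sbuf APIs):

.. code-block:: c

   #include <stdatomic.h>
   #include <stdint.h>
   #include <string.h>

   #define SLOT_SIZE  64U
   #define SLOT_NUM   256U

   /* Fixed-size slots; fails (rather than overwrites) on overrun. No
    * lock is needed because each index has exactly one writer.
    */
   struct spsc_ring {
           _Atomic uint32_t head;   /* consumer position */
           _Atomic uint32_t tail;   /* producer position */
           uint8_t slot[SLOT_NUM][SLOT_SIZE];
   };

   static int ring_put(struct spsc_ring *r, const void *data)
   {
           uint32_t tail = atomic_load_explicit(&r->tail,
                                                memory_order_relaxed);
           uint32_t next = (tail + 1U) % SLOT_NUM;

           if (next == atomic_load_explicit(&r->head,
                                            memory_order_acquire))
                   return -1;   /* full: non-overwritten mode fails */

           memcpy(r->slot[tail], data, SLOT_SIZE);
           atomic_store_explicit(&r->tail, next, memory_order_release);
           return 0;
   }

   static int ring_get(struct spsc_ring *r, void *data)
   {
           uint32_t head = atomic_load_explicit(&r->head,
                                                memory_order_relaxed);

           if (head == atomic_load_explicit(&r->tail,
                                            memory_order_acquire))
                   return -1;   /* empty */

           memcpy(data, r->slot[head], SLOT_SIZE);
           atomic_store_explicit(&r->head, (head + 1U) % SLOT_NUM,
                                 memory_order_release);
           return 0;
   }
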
sbuf APIs
@@ -39,7 +39,7 @@ The sbuf APIs are defined in ``hypervisor/include/debug/sbuf.h``
ACRN Trace
**********

ACRNTrace is a tool running on the Service OS (SOS) to capture trace
ACRNTrace is a tool running on the Service VM to capture trace
data. It allows developers to add performance profiling trace points at
key locations to get a picture of what is going on inside the
hypervisor. Scripts to analyze the collected trace data are also
@@ -52,8 +52,8 @@ up:
- **ACRNTrace userland app**: Userland application collecting trace data to
  files (Per Physical CPU)

- **SOS Trace Module**: allocates/frees SBufs, creates device for each
  SBuf, sets up sbuf shared between SOS and HV, and provides a dev node for the
- **Service VM Trace Module**: allocates/frees SBufs, creates device for each
  SBuf, sets up sbuf shared between Service VM and HV, and provides a dev node for the
  userland app to retrieve trace data from Sbuf

- **Trace APIs**: provide APIs to generate trace event and insert to Sbuf.
@@ -71,18 +71,18 @@ See ``hypervisor/include/debug/trace.h``
for trace_entry struct and function APIs.


SOS Trace Module
================
Service VM Trace Module
=======================

The SOS trace module is responsible for:
The Service VM trace module is responsible for:

- allocating sbuf in sos memory range for each physical CPU, and assign
- allocating sbuf in Service VM memory range for each physical CPU, and assigning
  the GPA of Sbuf to ``per_cpu sbuf[ACRN_TRACE]``
- creating a misc device for each physical CPU
- providing an mmap operation to map the entire Sbuf to userspace for
  highly flexible and efficient access.

On SOS shutdown, the trace module is responsible to remove misc devices, free
On Service VM shutdown, the trace module is responsible to remove misc devices, free
SBufs, and set ``per_cpu sbuf[ACRN_TRACE]`` to null.

ACRNTrace Application
@@ -98,7 +98,7 @@ readable text, and do analysis.
With a debug build, trace components are initialized at boot
time. After initialization, HV writes trace event data into sbuf
until sbuf is full, which can happen easily if the ACRNTrace app is not
consuming trace data from Sbuf on SOS user space.
consuming trace data from Sbuf on Service VM user space.

Once ACRNTrace is launched, for each physical CPU a consumer thread is
created to periodically read RAW trace data from sbuf and write to a
@@ -122,7 +122,7 @@ ACRN Log
********

acrnlog is a tool used to capture ACRN hypervisor log to files on
SOS filesystem. It can run as an SOS service at boot, capturing two
Service VM filesystem. It can run as a Service VM service at boot, capturing two
kinds of logs:

- Current runtime logs;
@@ -137,9 +137,9 @@ up:

- **ACRN Log app**: Userland application collecting hypervisor log to
  files;
- **SOS ACRN Log Module**: constructs/frees SBufs at reserved memory
- **Service VM ACRN Log Module**: constructs/frees SBufs at reserved memory
  area, creates dev for current/last logs, sets up sbuf shared between
  SOS and HV, and provides a dev node for the userland app to
  Service VM and HV, and provides a dev node for the userland app to
  retrieve logs
- **ACRN log support in HV**: put logs at specified loglevel to Sbuf.

@@ -157,7 +157,7 @@ system:

- log messages with severity level higher than a specified value will
  be put into Sbuf when calling logmsg in hypervisor
- allocate sbuf to accommodate early hypervisor logs before SOS
- allocate sbuf to accommodate early hypervisor logs before Service VM
  can allocate and set up sbuf

There are 6 different loglevels, as shown below. The specified
@@ -181,17 +181,17 @@ of a single log message is 320 bytes. Log messages with a length between
80 and 320 bytes will be separated into multiple sbuf elements. Log
messages with length larger than 320 will be truncated.

For security, SOS allocates sbuf in its memory range and assigns it to
For security, Service VM allocates sbuf in its memory range and assigns it to
the hypervisor.

SOS ACRN Log Module
===================
Service VM ACRN Log Module
==========================

ACRNLog module provides one kernel option ``hvlog=$size@$pbase`` to configure
the size and base address of the hypervisor log buffer. This space will be further divided
into two buffers of equal size: last log buffer and current log buffer.

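
For example (the size and base address here are purely illustrative and
must correspond to a memory region reserved on the specific platform), a
Service VM kernel command line could contain:

.. code-block:: none

   hvlog=2M@0x1FE00000

With such a setting, the 2-MByte space would be split into a 1-MByte last
log buffer and a 1-MByte current log buffer, per the description above.
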
On SOS boot, SOS acrnlog module is responsible to:
On Service VM boot, Service VM acrnlog module is responsible to:

- examine if there are log messages remaining from last crashed
  run by checking the magic number of each sbuf
@@ -211,7 +211,7 @@ current sbuf with magic number ``0x5aa57aa71aa13aa3``, and changes the
magic number of last sbuf to ``0x5aa57aa71aa13aa2``, to distinguish which is
the current/last.

On SOS shutdown, the module is responsible to remove misc devices,
On Service VM shutdown, the module is responsible to remove misc devices,
free SBufs, and set ``per_cpu sbuf[ACRN_TRACE]`` to null.

ACRN Log Application

@@ -30,7 +30,7 @@ service (VBS) APIs, and virtqueue (VQ) APIs, as shown in
- **DM APIs** are exported by the DM, and are mainly used during the
  device initialization phase and runtime. The DM APIs also include
  PCIe emulation APIs because each virtio device is a PCIe device in
  the SOS and UOS.
  the Service VM and User VM.
- **VBS APIs** are mainly exported by the VBS and related modules.
  Generally they are callbacks to be
  registered into the DM.
@@ -111,7 +111,7 @@ Efficient: batching operation is encouraged
high-performance I/O, since notification between FE and BE driver
usually involves an expensive exit of the guest. Therefore batching
operations and notification suppression are highly encouraged if
possible. This will give an efficient implementation for
performance-critical devices.

Standard: virtqueue
@@ -120,7 +120,7 @@ Standard: virtqueue
queue of scatter-gather buffers. There are three important methods on
virtqueues, shown in use in the sketch after this list:

- **add_buf** is for adding a request/response buffer in a virtqueue,
- **get_buf** is for getting a response/request in a virtqueue, and
- **kick** is for notifying the other side for a virtqueue to consume buffers.

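
The canonical request/response cycle built from these three methods looks
roughly like this (the ``vq_*()`` helpers are illustrative stand-ins, not
actual ACRN or virtio API signatures):

.. code-block:: c

   struct vq;   /* opaque virtqueue handle (illustrative) */

   extern int vq_add_buf(struct vq *q, void *buf, unsigned int len,
                         int device_writable);
   extern void *vq_get_buf(struct vq *q, unsigned int *used_len);
   extern void vq_kick(struct vq *q);

   static void submit_and_wait(struct vq *q, void *req, unsigned int len)
   {
           unsigned int used_len;
           void *resp;

           vq_add_buf(q, req, len, 0);   /* add_buf: queue the request  */
           vq_kick(q);                   /* kick: notify the other side */

           /* get_buf: reap the completed buffer. A real FE driver would
            * sleep until the BE raises an interrupt instead of polling,
            * and would batch several add_buf calls per kick.
            */
           while ((resp = vq_get_buf(q, &used_len)) == NULL)
                   ;
           (void)resp;
   }
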
@@ -366,7 +366,7 @@ The workflow can be summarized as:
   irqfd.
2. pass ioeventfd to vhost kernel driver.
3. pass ioeventfd to vhm driver.
4. UOS FE driver triggers ioreq and forwarded to SOS by hypervisor
4. User VM FE driver triggers an ioreq, which is forwarded to the Service VM by the hypervisor.
5. ioreq is dispatched by vhm driver to the related vhm client.
6. ioeventfd vhm client traverses the io_range list and finds the
   corresponding eventfd.
@@ -396,7 +396,7 @@ The workflow can be summarized as:
5. irqfd related logic traverses the irqfd list to retrieve related irq
   information.
6. irqfd related logic injects an interrupt through the vhm interrupt API.
7. interrupt is delivered to UOS FE driver through hypervisor.
7. interrupt is delivered to User VM FE driver through hypervisor.

.. _virtio-APIs:

@@ -542,7 +542,7 @@ VBS APIs
========

The VBS APIs are exported by VBS related modules, including VBS, DM, and
SOS kernel modules. They can be classified into VBS-U and VBS-K APIs
Service VM kernel modules. They can be classified into VBS-U and VBS-K APIs
listed as follows.

VBS-U APIs

@@ -6,7 +6,7 @@ Hostbridge emulation
Overview
********

Hostbridge emulation is based on PCI emulation; however, the hostbridge emulation only sets the PCI configuration space. The device model sets the PCI configuration space for hostbridge in the Service VM ans then exposes it to the User VM to detect the PCI hostbridge.
Hostbridge emulation is based on PCI emulation; however, the hostbridge emulation only sets the PCI configuration space. The device model sets the PCI configuration space for hostbridge in the Service VM and then exposes it to the User VM to detect the PCI hostbridge.

PCI Host Bridge and hierarchy
*****************************

@@ -30,7 +30,7 @@ is active:

.. note:: The console is only available in the debug version of the hypervisor,
   configured at compile time. In the release version, the console is
   disabled and the physical UART is not used by the hypervisor or SOS.
   disabled and the physical UART is not used by the hypervisor or Service VM.

Hypervisor shell
****************
@@ -45,8 +45,8 @@ Virtual UART

Currently UART 16550 is owned by the hypervisor itself and used for
debugging purposes. Properties are configured by hypervisor command
line. Hypervisor emulates a UART device with 0x3F8 address to SOS that
acts as the console of SOS with these features:
line. Hypervisor emulates a UART device with 0x3F8 address to Service VM that
acts as the console of Service VM with these features:

- The vUART is exposed via I/O port 0x3f8.
- Incorporates a 256-byte RX buffer and a 65536-byte TX buffer.
@@ -85,8 +85,8 @@ The workflows are described as follows:
- Characters are read from this sbuf and put to rxFIFO,
  triggered by vuart_console_rx_chars

- A virtual interrupt is sent to SOS, triggered by a read from
  SOS. Characters in rxFIFO are sent to SOS by emulation of
- A virtual interrupt is sent to Service VM, triggered by a read from
  Service VM. Characters in rxFIFO are sent to Service VM by emulation of
  read of register UART16550_RBR

- TX flow:

@@ -4,20 +4,21 @@ Device Passthrough
##################

A critical part of virtualization is virtualizing devices: exposing all
aspects of a device including its I/O, interrupts, DMA, and configuration.
There are three typical device
virtualization methods: emulation, para-virtualization, and passthrough.
All emulation, para-virtualization and passthrough are used in ACRN project. Device
emulation is discussed in :ref:`hld-io-emulation`, para-virtualization is discussed
in :ref:`hld-virtio-devices` and device passthrough will be discussed here.
aspects of a device including its I/O, interrupts, DMA, and
configuration. There are three typical device virtualization methods:
emulation, para-virtualization, and passthrough. All emulation,
para-virtualization and passthrough are used in ACRN project. Device
emulation is discussed in :ref:`hld-io-emulation`, para-virtualization
is discussed in :ref:`hld-virtio-devices` and device passthrough will be
discussed here.

In the ACRN project, device emulation means emulating all existing hardware
resource through a software component device model running in the
Service OS (SOS). Device
emulation must maintain the same SW interface as a native device,
providing transparency to the VM software stack. Passthrough implemented in
hypervisor assigns a physical device to a VM so the VM can access
the hardware device directly with minimal (if any) VMM involvement.
In the ACRN project, device emulation means emulating all existing
hardware resource through a software component device model running in
the Service OS (SOS). Device emulation must maintain the same SW
interface as a native device, providing transparency to the VM software
stack. Passthrough implemented in hypervisor assigns a physical device
to a VM so the VM can access the hardware device directly with minimal
(if any) VMM involvement.

The difference between device emulation and passthrough is shown in
:numref:`emu-passthru-diff`. You can notice device emulation has
@@ -75,7 +76,7 @@ one the following 4 cases:
  to any VM. For now, UART is the only PCI device that can be owned by the hypervisor.
- **Pre-launched VM**: The passthrough devices to be used in a pre-launched VM are
  pre-defined in the VM configuration. These passthrough devices are owned by the
  pre-launched VM after the VM is created. These devices will not be removed
  from the pre-launched VM. There could be pre-launched VM(s) in logical partition
  mode and hybrid mode.
- **Service VM**: All the passthrough devices except these described above (owned by
@@ -143,6 +144,102 @@ interrupt vector after checking the external interrupt request is valid. Transla
physical vector to virtual vector is still needed to be done by hypervisor, which is
also described in the below section :ref:`interrupt-remapping`.

VT-d posted interrupt (PI) enables direct delivery of external interrupts from
passthrough devices to VMs without having to exit to hypervisor, thereby improving
interrupt performance. ACRN uses VT-d posted interrupts if the platform
supports them. VT-d distinguishes between remapped
and posted interrupt modes by bit 15 in the low 64-bit of the IRTE. If cleared, the
entry is remapped; if set, it's posted.
The idea for posted interrupt is to keep a Posted Interrupt Descriptor (PID) in memory.
The PID is a 64-byte data structure that contains several fields:

Posted Interrupt Request (PIR):
  a 256-bit field, one bit per request vector;
  this is where the interrupts are posted;

Suppress Notification (SN):
  determines whether to notify (``SN=0``) or not notify (``SN=1``)
  the CPU for non-urgent interrupts. For ACRN,
  all interrupts are treated as non-urgent. ACRN sets SN=0 during initialization
  and then never changes it at runtime;

Notification Vector (NV):
  the CPU must be notified with an interrupt and this
  field specifies the vector for notification;

Notification Destination (NDST):
  the physical APIC-ID of the destination.
  ACRN does not support vCPU migration; one vCPU always runs on the same pCPU,
  so for ACRN, NDST is never changed after initialization.

Outstanding Notification (ON):
  indicates if a notification event is outstanding

The ACRN scheduler supports vCPU scheduling, where two or more vCPUs can
share the same pCPU using a time sharing technique. One issue emerges
here for the VT-d posted interrupt handling process, where IRQs could happen
when the target vCPU is in a halted state. We need to handle the case
where the running vCPU disrupted by the external interrupt is not the
target vCPU to which the external interrupt should be delivered.

Consider this scenario:

* vCPU0 runs on pCPU0 and then enters a halted state,
* ACRN scheduler now chooses vCPU1 to run on pCPU0.

If an external interrupt from an assigned device destined to vCPU0
happens at this time, we do not want this interrupt to be incorrectly
consumed by vCPU1 currently running on pCPU0. This would happen if we
allocate the same Activation Notification Vector (ANV) to all vCPUs.

To circumvent this issue, ACRN allocates unique ANVs for each vCPU that
belongs to the same pCPU. The ANVs need only be unique within each pCPU,
not across all vCPUs. Since vCPU0's ANV is different from vCPU1's ANV,
if vCPU0 is in a halted state, external interrupts from an assigned
device destined to vCPU0 delivered through the PID will not trigger the
posted interrupt processing. Instead, a VMExit to ACRN happens that can
then process the event, such as waking up the halted vCPU0 and kicking it
to run on pCPU0.

For ACRN, ``CONFIG_MAX_VM_NUM`` vCPUs may be running on top of a pCPU. ACRN
does not support two vCPUs of the same VM running on top of the same
pCPU. This reduces the number of pre-allocated ANVs for posted
interrupts to ``CONFIG_MAX_VM_NUM``, and enables ACRN to avoid switching
between active and wake-up vector values in the posted interrupt
descriptor on vCPU scheduling state changes. ACRN uses the following
formula to assign posted interrupt vectors to vCPUs::

   NV = POSTED_INTR_VECTOR + vcpu->vm->vm_id

where ``POSTED_INTR_VECTOR`` is the starting vector (0xe3) for posted interrupts.

ACRN maintains a per-pCPU vCPU array that stores the pointers to
assigned vCPUs for each pCPU and is indexed by ``vcpu->vm->vm_id``.
When the vCPU is created, ACRN adds the vCPU to the containing pCPU's
vCPU array. When the vCPU is offline, ACRN removes the vCPU from the
related vCPU array.

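
This bookkeeping can be sketched with simplified types (the structures
below are illustrative stand-ins, not the real ACRN definitions):

.. code-block:: c

   #include <stdint.h>

   #define POSTED_INTR_VECTOR  0xe3U   /* starting notification vector */
   #define CONFIG_MAX_VM_NUM   8U      /* illustrative configured value */

   struct acrn_vm   { uint16_t vm_id; };
   struct acrn_vcpu { struct acrn_vm *vm; };

   /* One slot per VM: at most one vCPU of a VM runs on a given pCPU. */
   struct pcpu_pi_state {
           struct acrn_vcpu *vcpu_of_vm[CONFIG_MAX_VM_NUM];
   };

   /* NV = POSTED_INTR_VECTOR + vm_id: unique per vCPU within a pCPU. */
   static inline uint32_t posted_intr_vector(const struct acrn_vcpu *vcpu)
   {
           return POSTED_INTR_VECTOR + vcpu->vm->vm_id;
   }

   /* On a notification with vector v, recover the targeted vCPU. */
   static inline struct acrn_vcpu *
   vcpu_for_vector(struct pcpu_pi_state *s, uint32_t v)
   {
           return s->vcpu_of_vm[v - POSTED_INTR_VECTOR];
   }
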
An example to illustrate our solution:

.. figure:: images/passthru-image50.png
   :align: center

ACRN sets ``SN=0`` during initialization and then never changes it at
runtime. This means posted interrupt notification is never suppressed.
After posting the interrupt in Posted Interrupt Request (PIR), VT-d will
always notify the CPU using the interrupt vector NV, in both root and
non-root mode. With this scheme, if the target vCPU is running under
VMX non-root mode, it will receive the interrupts coming from the
passed-through device without a VMExit (and therefore without any
intervention of the ACRN hypervisor).

If the target vCPU is in a halted state (under VMX non-root mode), a
scheduling request will be raised to wake it up. This is needed to
achieve real time behavior. If an RT-VM is waiting for an event, when
the event is fired (a PI interrupt fires), we need to wake up the VM
immediately.


MMIO Remapping
**************

@@ -229,15 +326,15 @@ hypervisor before it configures the PCI configuration space to enable an
MSI. The hypervisor takes this opportunity to set up a remapping for the
given MSI or MSIX before it is actually enabled by Service VM.

When the UOS needs to access the physical device by passthrough, it uses
When the User VM needs to access the physical device by passthrough, it uses
the following steps:

- UOS gets a virtual interrupt
- User VM gets a virtual interrupt
- VM exit happens and the trapped vCPU is the target where the interrupt
  will be injected.
- Hypervisor will handle the interrupt and translate the vector
  according to ptirq_remapping_info.
- Hypervisor delivers the interrupt to UOS.
- Hypervisor delivers the interrupt to User VM.

When the Service VM needs to use the physical device, the passthrough is also
active because the Service VM is the first VM. The detailed steps are:
@@ -258,7 +355,7 @@ ACPI virtualization is designed in ACRN with these assumptions:

- HV has no knowledge of ACPI,
- Service VM owns all physical ACPI resources,
- UOS sees virtual ACPI resources emulated by device model.
- User VM sees virtual ACPI resources emulated by device model.

Some passthrough devices require physical ACPI table entry for
initialization. The device model will create such device entry based on

@@ -63,7 +63,9 @@ to support this. The ACRN hypervisor also initializes all the interrupt
related modules like IDT, PIC, IOAPIC, and LAPIC.

HV does not own any host devices (except UART). All devices are by
default assigned to the Service VM. Any interrupts received by Guest VM (Service VM or User VM) device drivers are virtual interrupts injected by HV (via vLAPIC).
default assigned to the Service VM. Any interrupts received by VM
(Service VM or User VM) device drivers are virtual interrupts injected
by HV (via vLAPIC).
HV manages a Host-to-Guest mapping. When a native IRQ/interrupt occurs,
HV decides whether this IRQ/interrupt should be forwarded to a VM and
which VM to forward to (if any). Refer to
@@ -357,15 +359,15 @@ IPI vector 0xF3 upcall. The virtual interrupt injection uses IPI vector 0xF0.
0xF3 upcall
  A Guest vCPU exits (VM Exit) due to an EPT violation or I/O instruction trap.
  It requires the Device Module to emulate the MMIO/PortIO instruction.
  However it could be that the Service OS (SOS) vCPU0 is still in non-root
  However it could be that the Service VM vCPU0 is still in non-root
  mode. So an IPI (0xF3 upcall vector) should be sent to the physical CPU0
  (with non-root mode as vCPU0 inside SOS) to force vCPU0 to VM Exit due
  (with non-root mode as vCPU0 inside the Service VM) to force vCPU0 to VM Exit due
  to the external interrupt. The virtual upcall vector is then injected to
  SOS, and the vCPU0 inside SOS then will pick up the IO request and do
  the Service VM, and the vCPU0 inside the Service VM then will pick up the IO request and do
  emulation for other Guest.

0xF0 IPI flow
  If Device Module inside SOS needs to inject an interrupt to other Guest
  If Device Module inside the Service VM needs to inject an interrupt to other Guest
  such as vCPU1, it will issue an IPI first to kick CPU1 (assuming CPU1 is
  running on vCPU1) to root-hv_interrupt-data-apmode. CPU1 will inject the
  interrupt before VM Enter.

@@ -4,7 +4,7 @@ I/O Emulation high-level design
###############################

As discussed in :ref:`intro-io-emulation`, there are multiple ways and
places to handle I/O emulation, including HV, SOS Kernel VHM, and SOS
places to handle I/O emulation, including HV, Service VM Kernel VHM, and Service VM
user-land device model (acrn-dm).

I/O emulation in the hypervisor provides these functionalities:
@@ -12,7 +12,7 @@ I/O emulation in the hypervisor provides these functionalities:
- Maintain lists of port I/O or MMIO handlers in the hypervisor for
  emulating trapped I/O accesses in a certain range.

- Forward I/O accesses to SOS when they cannot be handled by the
- Forward I/O accesses to Service VM when they cannot be handled by the
  hypervisor by any registered handlers.

:numref:`io-control-flow` illustrates the main control flow steps of I/O emulation
@@ -26,7 +26,7 @@ inside the hypervisor:
   access, or ignore the access if the access crosses the boundary.

3. If the range of the I/O access does not overlap the range of any I/O
   handler, deliver an I/O request to SOS.
   handler, deliver an I/O request to Service VM.

.. figure:: images/ioem-image101.png
   :align: center
@@ -92,16 +92,16 @@ following cases exist:
- Otherwise it is implied that the access crosses the boundary of
  multiple devices which the hypervisor does not emulate. Thus
  no handler is called and no I/O request will be delivered to
  SOS. I/O reads get all 1's and I/O writes are dropped.
  Service VM. I/O reads get all 1's and I/O writes are dropped.

- If the range of the I/O access does not overlap with any range of the
  handlers, the I/O access is delivered to SOS as an I/O request
  handlers, the I/O access is delivered to Service VM as an I/O request
  for further processing.

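
These dispatch rules can be condensed into a sketch (the handler table and
function names are invented for illustration, not ACRN structures):

.. code-block:: c

   #include <stdbool.h>
   #include <stddef.h>
   #include <stdint.h>

   struct io_handler {
           uint16_t base;
           uint16_t len;
           uint32_t (*read)(uint16_t port, size_t width);
   };

   /* Returns true if the access was fully handled in the hypervisor;
    * false means an I/O request must be delivered to Service VM.
    */
   static bool emulate_pio_read(const struct io_handler *tbl, size_t n,
                                uint16_t port, size_t width, uint32_t *val)
   {
           for (size_t i = 0; i < n; i++) {
                   const struct io_handler *h = &tbl[i];
                   bool overlap = (port < h->base + h->len) &&
                                  (port + width > h->base);

                   if (!overlap)
                           continue;

                   if (port >= h->base &&
                       port + width <= h->base + h->len)
                           *val = h->read(port, width); /* fully covered */
                   else
                           *val = ~0U; /* crosses a boundary: all 1's */

                   return true;
           }
           return false;
   }
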
I/O Requests
************

An I/O request is delivered to SOS vCPU 0 if the hypervisor does not
An I/O request is delivered to Service VM vCPU 0 if the hypervisor does not
find any handler that overlaps the range of a trapped I/O access. This
section describes the initialization of the I/O request mechanism and
how an I/O access is emulated via I/O requests in the hypervisor.
@@ -109,11 +109,11 @@ how an I/O access is emulated via I/O requests in the hypervisor.
Initialization
==============

For each UOS the hypervisor shares a page with SOS to exchange I/O
For each User VM the hypervisor shares a page with Service VM to exchange I/O
requests. The 4-KByte page consists of 16 256-Byte slots, indexed by
vCPU ID. It is required for the DM to allocate and set up the request
buffer on VM creation, otherwise I/O accesses from UOS cannot be
emulated by SOS, and all I/O accesses not handled by the I/O handlers in
buffer on VM creation, otherwise I/O accesses from User VM cannot be
emulated by Service VM, and all I/O accesses not handled by the I/O handlers in
the hypervisor will be dropped (reads get all 1's).

Refer to the following sections for details on I/O requests and the
@@ -145,7 +145,7 @@ There are four types of I/O requests:


For port I/O accesses, the hypervisor will always deliver an I/O request
of type PIO to SOS. For MMIO accesses, the hypervisor will deliver an
of type PIO to Service VM. For MMIO accesses, the hypervisor will deliver an
I/O request of either MMIO or WP, depending on the mapping of the
accessed address (in GPA) in the EPT of the vCPU. The hypervisor will
never deliver any I/O request of type PCI, but will handle such I/O
@@ -170,11 +170,11 @@ The four states are:

FREE
  The I/O request slot is not used and new I/O requests can be
  delivered. This is the initial state on UOS creation.
  delivered. This is the initial state on User VM creation.

PENDING
  The I/O request slot is occupied with an I/O request pending
  to be processed by SOS.
  to be processed by Service VM.

PROCESSING
  The I/O request has been dispatched to a client but the
@@ -185,19 +185,19 @@ COMPLETE
  has not consumed the results yet.

The contents of an I/O request slot are owned by the hypervisor when the
state of an I/O request slot is FREE or COMPLETE. In such cases SOS can
state of an I/O request slot is FREE or COMPLETE. In such cases Service VM can
only access the state of that slot. Similarly the contents are owned by
SOS when the state is PENDING or PROCESSING, when the hypervisor can
Service VM when the state is PENDING or PROCESSING, when the hypervisor can
only access the state of that slot.

The states are transferred as follows:

1. To deliver an I/O request, the hypervisor takes the slot
   corresponding to the vCPU triggering the I/O access, fills the
   contents, changes the state to PENDING and notifies SOS via
   contents, changes the state to PENDING and notifies Service VM via
   upcall.

2. On upcalls, SOS dispatches each I/O request in the PENDING state to
2. On upcalls, Service VM dispatches each I/O request in the PENDING state to
   clients and changes the state to PROCESSING.

3. The client assigned an I/O request changes the state to COMPLETE
@@ -211,7 +211,7 @@ The states are transferred as follow:
States are accessed using atomic operations to avoid getting unexpected
states on one core when it is written on another.

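
A sketch of the state field and two of its transitions using C11 atomics
(the layout is illustrative; the real slot lives in the shared request
buffer):

.. code-block:: c

   #include <stdatomic.h>
   #include <stdbool.h>

   enum ioreq_state { IOREQ_FREE, IOREQ_PENDING,
                      IOREQ_PROCESSING, IOREQ_COMPLETE };

   struct ioreq {
           _Atomic enum ioreq_state state;
           /* ... request contents ... */
   };

   /* Service VM side: PENDING -> PROCESSING on dispatch. The CAS is a
    * single atomic operation, so another core can never observe a
    * half-written state.
    */
   static bool ioreq_claim(struct ioreq *req)
   {
           enum ioreq_state expected = IOREQ_PENDING;

           return atomic_compare_exchange_strong(&req->state, &expected,
                                                 IOREQ_PROCESSING);
   }

   /* Client side: PROCESSING -> COMPLETE once emulation is done. */
   static void ioreq_complete(struct ioreq *req)
   {
           atomic_store(&req->state, IOREQ_COMPLETE);
   }
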
Note that there is no state to represent a 'failed' I/O request. SOS
Note that there is no state to represent a 'failed' I/O request. Service VM
should return all 1's for reads and ignore writes whenever it cannot
handle the I/O request, and change the state of the request to COMPLETE.

@@ -224,7 +224,7 @@ hypervisor re-enters the vCPU thread every time a vCPU is scheduled back
in, rather than switching to where the vCPU is scheduled out. As a result,
post-work is introduced for this purpose.

The hypervisor pauses a vCPU before an I/O request is delivered to SOS.
The hypervisor pauses a vCPU before an I/O request is delivered to Service VM.
Once the I/O request emulation is completed, a client notifies the
hypervisor by a hypercall. The hypervisor will pick up that request, do
the post-work, and resume the guest vCPU. The post-work takes care of
@@ -236,9 +236,9 @@ updating the vCPU guest state to reflect the effect of the I/O reads.
   Workflow of MMIO I/O request completion

The figure above illustrates the workflow to complete an I/O
request for MMIO. Once the I/O request is completed, SOS makes a
hypercall to notify the hypervisor which resumes the UOS vCPU triggering
the access after requesting post-work on that vCPU. After the UOS vCPU
request for MMIO. Once the I/O request is completed, Service VM makes a
hypercall to notify the hypervisor, which resumes the User VM vCPU triggering
the access after requesting post-work on that vCPU. After the User VM vCPU
resumes, it does the post-work first to update the guest registers if
the access reads an address, changes the state of the corresponding I/O
request slot to FREE, and continues execution of the vCPU.
@@ -255,7 +255,7 @@ similar to the MMIO case, except the post-work is done before resuming
the vCPU. This is because the post-work for port I/O reads needs to update
the general register eax of the vCPU, while the post-work for MMIO reads
needs further emulation of the trapped instruction. This is much more
complex and may impact the performance of SOS.
complex and may impact the performance of the Service VM.

.. _io-structs-interfaces:


@@ -106,7 +106,7 @@ Virtualization architecture
---------------------------

In the virtualization architecture, the IOC Device Model (DM) is
responsible for communication between the UOS and IOC firmware. The IOC
responsible for communication between the User VM and IOC firmware. The IOC
DM communicates with several native CBC char devices and a PTY device.
The native CBC char devices only include ``/dev/cbc-lifecycle``,
``/dev/cbc-signals``, and ``/dev/cbc-raw0`` - ``/dev/cbc-raw11``. Others
@@ -133,7 +133,7 @@ There are five parts in this high-level design:
* Power management involves boot/resume/suspend/shutdown flows
* Emulated CBC commands introduces some commands work flow

IOC mediator has three threads to transfer data between UOS and SOS. The
IOC mediator has three threads to transfer data between User VM and Service VM. The
core thread is responsible for data reception, and Tx and Rx threads are
used for data transmission. Each of the transmission threads has one
data queue as a buffer, so that the IOC mediator can read data from CBC
@@ -154,7 +154,7 @@ char devices and UART DM immediately.
  data comes from a raw channel, the data will be passed forward. Before
  transmitting to the virtual UART interface, all data needs to be
  packed with an address header and link header.
- For Rx direction, the data comes from the UOS. The IOC mediator receives link
- For Rx direction, the data comes from the User VM. The IOC mediator receives link
  data from the virtual UART interface. The data will be unpacked by Core
  thread, and then forwarded to Rx queue, similar to how the Tx direction flow
  is done except that the heartbeat and RTC are only used by the IOC
@@ -176,10 +176,10 @@ IOC mediator has four states and five events for state transfer.
   IOC Mediator - State Transfer

- **INIT state**: This state is the initialized state of the IOC mediator.
  All CBC protocol packets are handled normally. In this state, the UOS
  All CBC protocol packets are handled normally. In this state, the User VM
  has not yet sent an active heartbeat.
- **ACTIVE state**: Enter this state if an HB ACTIVE event is triggered,
  indicating that the UOS state has been active and need to set the bit
  indicating that the User VM state has been active and the IOC mediator needs to set bit
  23 (SoC bit) in the wakeup reason.
- **SUSPENDING state**: Enter this state if a RAM REFRESH event or HB
  INACTIVE event is triggered. The related event handler needs to mask
@@ -219,17 +219,17 @@ The difference between the native and virtualization architectures is
that the IOC mediator needs to re-compute the checksum and reset
priority. Currently, priority is not supported by IOC firmware; the
priority setting by the IOC mediator is based on the priority setting of
the CBC driver. The SOS and UOS use the same CBC driver.
the CBC driver. The Service VM and User VM use the same CBC driver.

Power management virtualization
-------------------------------

In acrn-dm, the IOC power management architecture involves PM DM, IOC
DM, and UART DM modules. PM DM is responsible for UOS power management,
DM, and UART DM modules. PM DM is responsible for User VM power management,
and IOC DM is responsible for heartbeat and wakeup reason flows for IOC
firmware. The heartbeat flow is used to control IOC firmware power state
and wakeup reason flow is used to indicate IOC power state to the OS.
UART DM transfers all IOC data between the SOS and UOS. These modules
UART DM transfers all IOC data between the Service VM and User VM. These modules
complete boot/suspend/resume/shutdown functions.

Boot flow
@@ -243,13 +243,13 @@ Boot flow

   IOC Virtualization - Boot flow

#. Press ignition button for booting.
#. SOS lifecycle service gets a "booting" wakeup reason.
#. SOS lifecycle service notifies wakeup reason to VM Manager, and VM
#. Service VM lifecycle service gets a "booting" wakeup reason.
#. Service VM lifecycle service notifies wakeup reason to VM Manager, and VM
   Manager starts VM.
#. VM Manager sets the VM state to "start".
#. IOC DM forwards the wakeup reason to UOS.
#. PM DM starts UOS.
#. UOS lifecycle gets a "booting" wakeup reason.
#. IOC DM forwards the wakeup reason to User VM.
#. PM DM starts User VM.
#. User VM lifecycle gets a "booting" wakeup reason.

Suspend & Shutdown flow
+++++++++++++++++++++++
@@ -262,23 +262,23 @@ Suspend & Shutdown flow

   IOC Virtualization - Suspend and Shutdown by Ignition

#. Press ignition button to suspend or shutdown.
#. SOS lifecycle service gets a 0x800000 wakeup reason, then keeps
#. Service VM lifecycle service gets a 0x800000 wakeup reason, then keeps
   sending a shutdown delay heartbeat to IOC firmware, and notifies a
   "stop" event to VM Manager.
#. IOC DM forwards the wakeup reason to UOS lifecycle service.
#. SOS lifecycle service sends a "stop" event to VM Manager, and waits for
#. IOC DM forwards the wakeup reason to User VM lifecycle service.
#. Service VM lifecycle service sends a "stop" event to VM Manager, and waits for
   the stop response before timeout.
#. UOS lifecycle service gets a 0x800000 wakeup reason and sends inactive
#. User VM lifecycle service gets a 0x800000 wakeup reason and sends inactive
   heartbeat with suspend or shutdown SUS_STAT to IOC DM.
#. UOS lifecycle service gets a 0x000000 wakeup reason, then enters
#. User VM lifecycle service gets a 0x000000 wakeup reason, then enters
   suspend or shutdown kernel PM flow based on SUS_STAT.
#. PM DM executes UOS suspend/shutdown request based on ACPI.
#. PM DM executes User VM suspend/shutdown request based on ACPI.
#. VM Manager queries each VM state from PM DM. Suspend request maps
   to a paused state and shutdown request maps to a stop state.
#. VM Manager collects all VMs' states, and reports them to SOS lifecycle
#. VM Manager collects all VMs' states, and reports them to Service VM lifecycle
   service.
#. SOS lifecycle sends inactive heartbeat to IOC firmware with
   suspend/shutdown SUS_STAT, based on the SOS' own lifecycle service
#. Service VM lifecycle sends inactive heartbeat to IOC firmware with
   suspend/shutdown SUS_STAT, based on the Service VM's own lifecycle service
   policy.

Resume flow
@@ -297,33 +297,33 @@ the same flow blocks.
For ignition resume flow:

#. Press ignition button to resume.
#. SOS lifecycle service gets an initial wakeup reason from the IOC
#. Service VM lifecycle service gets an initial wakeup reason from the IOC
   firmware. The wakeup reason is 0x000020, from which the ignition button
   bit is set. It then sends active or initial heartbeat to IOC firmware.
#. SOS lifecycle forwards the wakeup reason and sends start event to VM
#. Service VM lifecycle forwards the wakeup reason and sends start event to VM
   Manager. The VM Manager starts to resume VMs.
#. IOC DM gets the wakeup reason from the VM Manager and forwards it to UOS
#. IOC DM gets the wakeup reason from the VM Manager and forwards it to User VM
   lifecycle service.
#. VM Manager sets the VM state to starting for PM DM.
#. PM DM resumes UOS.
#. UOS lifecycle service gets wakeup reason 0x000020, and then sends an initial
   or active heartbeat. The UOS gets wakeup reason 0x800020 after
#. PM DM resumes User VM.
#. User VM lifecycle service gets wakeup reason 0x000020, and then sends an initial
   or active heartbeat. The User VM gets wakeup reason 0x800020 after
   resuming.

For RTC resume flow:

#. RTC timer expires.
#. SOS lifecycle service gets initial wakeup reason from the IOC
#. Service VM lifecycle service gets initial wakeup reason from the IOC
   firmware. The wakeup reason is 0x000200, from which the RTC bit is set.
   It then sends active or initial heartbeat to IOC firmware.
#. SOS lifecycle forwards the wakeup reason and sends start event to VM
#. Service VM lifecycle forwards the wakeup reason and sends start event to VM
   Manager. VM Manager begins resuming VMs.
#. IOC DM gets the wakeup reason from the VM Manager, and forwards it to
   the UOS lifecycle service.
   the User VM lifecycle service.
#. VM Manager sets the VM state to starting for PM DM.
#. PM DM resumes UOS.
#. UOS lifecycle service gets the wakeup reason 0x000200, and sends
   initial or active heartbeat. The UOS gets wakeup reason 0x800200
#. PM DM resumes User VM.
#. User VM lifecycle service gets the wakeup reason 0x000200, and sends
   initial or active heartbeat. The User VM gets wakeup reason 0x800200
   after resuming.

System control data
|
||||
@@ -413,19 +413,19 @@ Currently the wakeup reason bits are supported by sources shown here:

   * - wakeup_button
     - 5
     - Get from IOC FW, forward to User VM

   * - RTC wakeup
     - 9
     - Get from IOC FW, forward to User VM

   * - car door wakeup
     - 11
     - Get from IOC FW, forward to User VM

   * - SoC wakeup
     - 23
     - Emulation (depends on User VM's heartbeat message)

- CBC_WK_RSN_BTN (bit 5): ignition button.
- CBC_WK_RSN_RTC (bit 9): RTC timer.
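As a minimal sketch of how such a wakeup-reason word can be decoded (the
bit positions follow the definitions above; the helper itself is
illustrative and not taken from the ACRN sources):

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   #define CBC_WK_RSN_BTN  (1U << 5)   /* ignition button */
   #define CBC_WK_RSN_RTC  (1U << 9)   /* RTC timer */

   /* Print which wakeup sources are set in a reason word such as 0x000020. */
   static void print_wakeup_reason(uint32_t reason)
   {
       if (reason & CBC_WK_RSN_BTN)
           printf("wakeup: ignition button\n");
       if (reason & CBC_WK_RSN_RTC)
           printf("wakeup: RTC timer\n");
   }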
@@ -522,7 +522,7 @@ definition is as below.
   :align: center

- The RTC command contains a relative time but not an absolute time.
  Service VM lifecycle service will re-compute the time offset before it is
  sent to the IOC firmware.

.. figure:: images/ioc-image10.png
@@ -560,10 +560,10 @@ IOC signal type definitions are as below.
   IOC Mediator - Signal flow

- The IOC backend needs to emulate the channel open/reset/close message, which
  shouldn't be forwarded to the native cbc signal channel. The Service VM
  signal-related services should do the real open/reset/close of the signal channel.
- Every backend should maintain a whitelist for different VMs. The
  whitelist can be stored in the Service VM file system (read only) in the
  future, but currently it is hard coded.

IOC mediator has two whitelist tables, one is used for rx

@@ -582,9 +582,9 @@ new multi signal, which contains the signals in the whitelist.

Raw data
--------

The OEM raw channel is assigned only to a specific User VM, following the OEM
configuration. The IOC Mediator will directly forward all read/write
messages from the IOC firmware to the User VM without any modification.


IOC Mediator Usage
@@ -600,14 +600,14 @@ The "ioc_channel_path" is an absolute path for communication between
IOC mediator and UART DM.

The "lpc_port" is "com1" or "com2"; the IOC mediator needs one unassigned
lpc port for data transfer between the User VM and the Service VM.

The "wakeup_reason" is the IOC mediator boot-up reason; each bit represents
one wakeup reason.

For example, the following options are used to enable the IOC feature; the
initial wakeup reason is the ignition button, and cbc_attach uses ttyS1
for TTY line discipline in the User VM::

   -i /run/acrn/ioc_$vm_name,0x20
   -l com2,/run/acrn/ioc_$vm_name

@@ -96,7 +96,7 @@ After the application processor (AP) receives the IPI CPU startup
interrupt, it uses the MMU page tables created by the BSP. In order to bring
the memory access rights into effect, some other APIs are provided:
enable_paging will enable IA32_EFER.NXE and CR0.WP, enable_smep will
enable CR4.SMEP, and enable_smap will enable CR4.SMAP.
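As an illustrative sketch (not the ACRN source) of what enabling one of
these bits involves, setting CR4.SMEP on x86-64 amounts to a
read-modify-write of CR4:

.. code-block:: c

   /* Illustrative only: set CR4.SMEP (bit 20) on the current processor. */
   static inline void enable_smep_sketch(void)
   {
       unsigned long cr4;

       __asm__ __volatile__("mov %%cr4, %0" : "=r"(cr4));
       cr4 |= (1UL << 20);                    /* CR4.SMEP */
       __asm__ __volatile__("mov %0, %%cr4" : : "r"(cr4));
   }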
:numref:`hv-mem-init` describes the hypervisor memory initialization for the BSP
and APs.

@@ -291,7 +291,8 @@ Virtual MTRR
************

In ACRN, the hypervisor only virtualizes the MTRRs' fixed range (0~1MB).
The HV sets MTRRs of the fixed range as Write-Back for a User VM, and
the Service VM reads native MTRRs of the fixed range set by BIOS.

If the guest physical address is not in the fixed range (0~1MB), the
@@ -380,7 +381,7 @@ VM Exit about EPT

There are two VM exit handlers for EPT violation and EPT
misconfiguration in the hypervisor. EPT page tables are
always configured correctly for the Service and User VMs. If an EPT misconfiguration is
detected, a fatal error is reported by the HV. The hypervisor
uses EPT violation to intercept MMIO access to do device emulation. EPT
violation handling data flow is described in the

@@ -489,7 +490,7 @@ almost all the system memory as shown here:

   :width: 900px
   :name: sos-mem-layout

   Service VM Physical Memory Layout

Host to Guest Mapping
=====================

@@ -521,4 +522,4 @@ must not be accessible by the Service/User VM normal world.

.. figure:: images/mem-image18.png
   :align: center

   User VM Physical Memory Layout with Trusty

@@ -176,7 +176,7 @@ Guest SMP boot flow

The core APIC IDs are reported to the guest using mptable info. SMP boot
flow is similar to sharing mode. Refer to :ref:`vm-startup`
for guest SMP boot flow in ACRN. Partition mode guest startup is the same as
the Service VM startup in sharing mode.

Inter-processor Interrupt (IPI) Handling
========================================

@@ -33,7 +33,7 @@ power state transition:

- Pauses Service VM.
- Waits for all other guests to enter a low power state.
- Offlines all physical APs.
- Saves the context of console, ioapic of Service VM, I/O MMU, lapic of
  Service VM, virtual BSP.
- Saves the context of physical BSP.


@@ -199,7 +199,7 @@ sets up CLOS for VMs and the hypervisor itself per the "vm configuration"(:ref:`

- The RDT capabilities are enumerated on the bootstrap processor (BSP) during
  the pCPU pre-initialize stage. The global data structure ``res_cap_info``
  stores the capabilities of the supported resources.

- If CAT and/or MBA is supported, then set up the masks array on all APs at the
  pCPU post-initialize stage. The mask values are written to

@@ -139,7 +139,7 @@ The main steps include:

VM ID is picked, EPT is initialized, e820 table for this VM is prepared,
I/O bitmap is set up, virtual PIC/IOAPIC/PCI/UART is initialized, EPC for
virtual SGX is prepared, guest PM IO is set up, IOMMU for PT dev support
is enabled, virtual CPUID entries are filled, and vCPUs configured in this VM's
``vm config`` are prepared. For a post-launched User VM, the EPT page table and
e820 table are actually prepared by the DM instead of the hypervisor.

@@ -214,7 +214,7 @@ SW configuration for post-launched User VMs (OVMF SW load as example):

  F-Segment. Refer to :ref:`hld-io-emulation` for details.

- **E820**: the virtual E820 table is built by the DM then passed to
  the virtual bootloader. Refer to :ref:`hld-io-emulation` for details.

- **Entry address**: the DM will copy the User OS kernel (OVMF) image to
  OVMF_NVSTORAGE_OFFSET - normally @(4G - 2M), and set the entry
@@ -241,7 +241,8 @@ Here is initial mode of vCPUs:

+----------------------------------+----------------------------------------------------------+
| VM and Processor Type            | Initial Mode                                             |
+=================+================+==========================================================+
| Service VM      | BSP            | Same as physical BSP, or Real Mode if Service VM boot    |
|                 |                | w/ OVMF                                                  |
|                 +----------------+----------------------------------------------------------+
|                 | AP             | Real Mode                                                |
+-----------------+----------------+----------------------------------------------------------+

@@ -20,7 +20,7 @@ the mapping between physical and virtual interrupts for pass-through

devices. However, a hard RT VM with LAPIC pass-through does own the physical
maskable external interrupts. On its physical CPUs, interrupts are disabled
in VMX root mode, while in VMX non-root mode, physical interrupts will be
delivered to the RT VM directly.

Emulation for devices is inside the Service VM user space device model, i.e.,
acrn-dm. However, for performance consideration, vLAPIC, vIOAPIC, and vPIC

@@ -72,7 +72,7 @@ target VCPU.

Virtual LAPIC
*************

LAPIC is virtualized for all Guest types: Service and User VMs. Given support
by the physical processor, APICv Virtual Interrupt Delivery (VID) is enabled
and will support the Posted-Interrupt feature. Otherwise, it will fall back to
the legacy virtual interrupt injection mode.
@@ -151,7 +151,7 @@ Virtual IOAPIC
**************

vIOAPIC is emulated by the HV when the Guest accesses the MMIO GPA range
0xFEC00000-0xFEC01000. vIOAPIC for the Service VM should match the native HW
IOAPIC pin numbers. vIOAPIC for a guest VM provides 48 pins. As the vIOAPIC is
always associated with vLAPIC, the virtual interrupt injection from
vIOAPIC will finally trigger a request for a vLAPIC event by calling

@@ -248,10 +248,10 @@ devices.

  VM via vLAPIC/vIOAPIC. See :ref:`device-assignment`.

- **For User VM assigned devices**: only PCI devices could be assigned to a
  User VM. For the standard VM and soft RT VM, the virtual interrupt
  injection follows the same way as the Service VM. A virtual interrupt injection
  operation is triggered when a device's physical interrupt occurs. For the
  hard RT VM, the physical interrupts are delivered to the VM directly without
  causing a VM-exit.

- **For User VM emulated devices**: DM is responsible for the

@@ -14,7 +14,7 @@ VM structure

The ``acrn_vm`` structure is defined to manage a VM instance; this structure
maintains a VM's HW resources like vcpu, vpic, vioapic, vuart, and vpci. At
the same time, the ``acrn_vm`` structure also records a bunch of SW information
related to the corresponding VM, like info for the VM identifier, info for the SW
loader, info for memory e820 entries, info for IO/MMIO handlers, info for
platform level cpuid entries, and so on.
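A simplified sketch of the kind of state this structure holds (the field
names below are illustrative, not the actual ACRN definition):

.. code-block:: c

   /* Illustrative sketch only -- see the hypervisor sources for the real acrn_vm. */
   struct acrn_vm_sketch {
       uint16_t vm_id;                  /* VM identifier */
       struct acrn_vcpu *vcpus;         /* per-VM virtual CPUs */
       struct acrn_vpic *vpic;          /* virtual PIC */
       struct acrn_vioapic *vioapic;    /* virtual IOAPIC */
       struct acrn_vuart *vuart;        /* virtual UART */
       struct e820_entry *e820_entries; /* guest memory map */
   };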

@@ -54,10 +54,10 @@ management. Please refer to ACRN power management design for more details.

Post-launched User VMs
======================

The DM takes control of post-launched User VMs' state transitions after the Service VM
boots up, and it calls VM APIs through hypercalls.

Service VM user-level services like Life-Cycle-Service and tools like Acrnd may work
together with the DM to launch or stop a User VM. Please refer to the ACRN tool
introduction for more details.


@@ -290,7 +290,7 @@ Power Management support for S3
*******************************

During platform S3 suspend and resume, the VT-d register values are
lost. ACRN VT-d provides APIs that are called during S3 suspend and resume.

During S3 suspend, some register values are saved in the memory, and
DMAR translation is disabled. During S3 resume, the register values

BIN
doc/developer-guides/hld/images/passthru-image50.png
Normal file
@@ -4,8 +4,8 @@ UART Virtualization
###################

In ACRN, UART virtualization is implemented as a fully-emulated device.
In the Service VM, UART virtualization is implemented in the
hypervisor itself. In the User VM, UART virtualization is
implemented in the Device Model (DM), and is the primary topic of this
document. We'll summarize differences between the hypervisor and DM
implementations at the end of this document.

@@ -93,7 +93,7 @@ A similar virtual UART device is implemented in the hypervisor.

Currently UART16550 is owned by the hypervisor itself and is used for
debugging purposes. (The UART properties are configured by parameters
to the hypervisor command line.) The hypervisor emulates a UART device
with 0x3F8 address to the Service VM and acts as the Service VM console. The general
emulation is the same as used in the device model, with the following
differences:

@@ -110,8 +110,8 @@ differences:

  - Characters are read from the sbuf and put into the rxFIFO,
    triggered by ``vuart_console_rx_chars``

  - A virtual interrupt is sent to the Service VM that triggered the read,
    and characters from the rxFIFO are sent to the Service VM by emulating a read
    of register ``UART16550_RBR``

- TX flow:

@@ -29,8 +29,8 @@ emulation of three components, described here and shown in

  specific User OS with I/O MMU assistance.

- **DRD DM** (Dual Role Device) emulates the PHY MUX control
  logic. The sysfs interface in a User VM is used to trap the switch operation
  into the DM, and the sysfs interface in the Service VM is used to operate on the physical
  registers to switch between the DCI and HCI role.

  On the Intel Apollo Lake platform, the sysfs interface path is

@@ -39,7 +39,7 @@ emulation of three components, described here and shown in

  device mode. Similarly, by echoing ``host``, the usb phy will be
  connected with the xHCI controller as host mode.

An xHCI register access from a User VM will induce an EPT trap from the User VM to
the DM, and the xHCI DM or DRD DM will emulate hardware behaviors to make
the subsystem run.

@@ -94,7 +94,7 @@ DM:

  ports to virtual USB ports. It communicates with
  native USB ports through libusb.

All the USB data buffers from a User VM are in the form of TRBs
(Transfer Request Blocks), according to the xHCI spec. The xHCI DM will fetch
these data buffers when the related xHCI doorbell registers are set.
These data are converted to *struct usb_data_xfer* and, through USB core,

@@ -106,15 +106,15 @@ The device model configuration command syntax for xHCI is as follows::

   -s <slot>,xhci,[bus1-port1,bus2-port2]

- *slot*: virtual PCI slot number in DM
- *bus-port*: specify which physical USB ports need to map to a User VM.

A simple example::

   -s 7,xhci,1-2,2-2

This configuration means the virtual xHCI will appear in PCI slot 7
in the User VM, and any physical USB device attached on 1-2 or 2-2 will be
detected by the User VM and used as expected.

USB DRD virtualization
**********************
@@ -129,7 +129,7 @@ USB DRD (Dual Role Device) emulation works as shown in this figure:

ACRN emulates the DRD hardware logic of an Intel Apollo Lake platform to
support the dual role requirement. The DRD feature is implemented as an xHCI
vendor extended capability. ACRN emulates
the same way, so the native driver can be reused in a User VM. When a User VM DRD
driver reads or writes the related xHCI extended registers, these accesses will
be captured by the xHCI DM. The xHCI DM uses the native DRD-related
sysfs interface to do the Host/Device mode switch operations.

@@ -4,8 +4,8 @@ Virtio-blk
##########

The virtio-blk device is a simple virtual block device. The FE driver
(in the User VM space) places read, write, and other requests onto the
virtqueue, so that the BE driver (in the Service VM space) can process them
accordingly. Communication between the FE and BE is based on the virtio
kick and notify mechanism.

@@ -86,7 +86,7 @@ The device model configuration command syntax for virtio-blk is::

A simple example for virtio-blk:

1. Prepare a file in a Service VM folder::

      dd if=/dev/zero of=test.img bs=1M count=1024
      mkfs.ext4 test.img

@@ -96,15 +96,15 @@ A simple example for virtio-blk:

      -s 9,virtio-blk,/root/test.img

#. Launch the User VM; you can find ``/dev/vdx`` in the User VM.

   The ``x`` in ``/dev/vdx`` is related to the slot number used. If
   you start the DM with two virtio-blks, and the slot numbers are 9 and 10,
   then the device with slot 9 will be recognized as ``/dev/vda``, and
   the device with slot 10 will be ``/dev/vdb``.
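   For instance (a sketch following the syntax above; the second image
   path is hypothetical), starting the DM with two virtio-blk devices in
   slots 9 and 10::

      -s 9,virtio-blk,/root/test.img
      -s 10,virtio-blk,/root/test2.img

   The slot 9 disk then appears as ``/dev/vda`` and the slot 10 disk as
   ``/dev/vdb``.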

#. Mount ``/dev/vdx`` to a folder in the User VM, and then you can access it.

Successful booting of the User VM verifies the correctness of the
device.

@@ -33,7 +33,7 @@ The virtio-console architecture diagram in ACRN is shown below.

Virtio-console is implemented as a virtio legacy device in the ACRN
device model (DM), and is registered as a PCI virtio device to the guest
OS. No changes are required in the frontend Linux virtio-console except
that the guest (User VM) kernel should be built with
``CONFIG_VIRTIO_CONSOLE=y``.

The virtio console FE driver registers a HVC console to the kernel if

@@ -152,7 +152,7 @@ PTY

TTY
===

1. Identify the tty that will be used as the User VM console:

   - If you're connected to your device over the network via ssh, use
     the linux ``tty`` command, and it will report the node (may be
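     For example, in an ssh session the command typically reports a
     pseudo-terminal node (illustrative output; the exact node varies)::

        $ tty
        /dev/pts/1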
@@ -1,60 +1,89 @@
.. _virtio-gpio:

Virtio-gpio
###########

Virtio-gpio provides a virtual GPIO controller, which maps part of the
native GPIOs to a User VM. The User VM can perform GPIO operations
through it, including set/get value, set/get direction, and set
configuration (only Open Source and Open Drain types are currently
supported). GPIOs are quite often used as IRQs, typically for wakeup
events; virtio-gpio supports level and edge interrupt trigger modes.

The virtio-gpio architecture is shown below.

.. figure:: images/virtio-gpio-1.png
   :align: center
   :name: virtio-gpio-1

   Virtio-gpio Architecture

Virtio-gpio is implemented as a virtio legacy device in the ACRN device
model (DM), and is registered as a PCI virtio device to the guest OS. No
changes are required in the frontend Linux virtio-gpio except that the
guest (User VM) kernel should be built with ``CONFIG_VIRTIO_GPIO=y``.

There are three virtqueues used between FE and BE: one for gpio
operations, one for irq requests, and one for irq event notification.

The virtio-gpio FE driver registers a gpiochip and irqchip when it is
probed; the base and number of gpios are generated by the BE. Each
gpiochip or irqchip operation (e.g. get_direction of gpiochip or
irq_set_type of irqchip) will trigger a virtqueue_kick on its own
virtqueue. If some gpio has been set to interrupt mode, the interrupt
events will be handled within the irq virtqueue callback.

GPIO mapping
************

.. figure:: images/virtio-gpio-2.png
   :align: center
   :name: virtio-gpio-2

   GPIO mapping

- Each User VM has only one GPIO chip instance; its number of GPIOs is
  based on the acrn-dm command line, and the GPIO base always starts
  from 0.

- Each GPIO is exclusive; User VMs can't map the same native gpio.

- The maximum number of GPIOs for each acrn-dm is 64.

Usage
*****

Add the following parameters into the command line::

   -s <slot>,virtio-gpio,<@controller_name{offset|name[=mapping_name]:offset|name[=mapping_name]:...}@controller_name{...}...]>

- **controller_name**: Input ``ls /sys/bus/gpio/devices`` to check native
  gpio controller information. Usually, the devices represent the
  controller_name, and you can use it as controller_name directly. You can
  also input ``cat /sys/bus/gpio/device/XXX/dev`` to get the device id that
  can be used to match /dev/XXX, then use XXX as the controller_name. On
  MRB and NUC platforms, the controller_names are gpiochip0, gpiochip1,
  gpiochip2, and gpiochip3.

- **offset|name**: you can use the gpio offset or its name to locate one
  native gpio within the gpio controller.

- **mapping_name**: This is optional; if you want to use a customized
  name for a FE gpio, you can set a new name for a FE virtual gpio.

Example
*******

- Map three native gpios to a User VM; they are native gpiochip0 with
  offsets of 1 and 6, and with the name ``reset``. In the User VM, the
  three gpios have no names, and the base starts from 0::

     -s 10,virtio-gpio,@gpiochip0{1:6:reset}

- Map four native gpios to a User VM: native gpiochip0's gpios with
  offset 1 and offset 6 map to FE virtual gpios with offset 0 and offset
  1 without names, native gpiochip0's gpio with the name ``reset`` maps
  to the FE virtual gpio with offset 2 and its name is ``shutdown``, and
  native gpiochip1's gpio with offset 0 maps to the FE virtual gpio with
  offset 3 and its name is ``reset``::

     -s 10,virtio-gpio,@gpiochip0{1:6:reset=shutdown}@gpiochip1{0=reset}
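Once the User VM is running, the virtual controller can be checked with
standard GPIO tools; for example, with libgpiod installed (the chip label
and line count below are illustrative, assuming the four-gpio mapping
above)::

   # gpiodetect
   gpiochip0 [...] (4 lines)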
@@ -1,118 +1,135 @@
.. _virtio-i2c:

Virtio-i2c
##########

Virtio-i2c provides a virtual I2C adapter that supports mapping multiple
slave devices under multiple native I2C adapters to one virtio I2C
adapter. The address for the slave device is not changed. Virtio-i2c
also provides an interface to add an acpi node for slave devices so that
the slave device driver in the guest OS does not need to change.

:numref:`virtio-i2c-1` below shows the virtio-i2c architecture.

.. figure:: images/virtio-i2c-1.png
   :align: center
   :name: virtio-i2c-1

   Virtio-i2c Architecture

Virtio-i2c is implemented as a virtio legacy device in the ACRN device
model (DM) and is registered as a PCI virtio device to the guest OS. The
Device ID of virtio-i2c is 0x860A and the Sub Device ID is 0xFFF6.

Virtio-i2c uses one **virtqueue** to transfer the I2C msg that is
received from the I2C core layer. Each I2C msg is translated into three
parts:

- Header: includes addr, flags, and len.
- Data buffer: includes the pointer to msg data.
- Status: includes the process results at the backend.

In the backend kick handler, data is obtained from the virtqueue, which
reformats the data to a standard I2C message and then sends it to a
message queue that is maintained in the backend. A worker thread is
created during the initiate phase; it receives the I2C message from the
queue and then calls the I2C APIs to send to the native I2C adapter.

When the request is done, the backend driver updates the results and
notifies the frontend. The msg process flow is shown in
:numref:`virtio-process-flow` below.

.. figure:: images/virtio-i2c-1a.png
   :align: center
   :name: virtio-process-flow

   Message Process Flow

**Usage:**
   -s <slot>,virtio-i2c,<bus>[:<slave_addr>[@<node>]][:<slave_addr>[@<node>]][,<bus>[:<slave_addr>[@<node>]][:<slave_addr>][@<node>]]

bus:
   The bus number for the native I2C adapter; ``2`` means ``/dev/i2c-2``.

slave_addr:
   The address for the native slave devices such as ``1C``, ``2F`` ...

@:
   The prefix for the acpi node.

node:
   The acpi node name supported in the current code. You can find the
   supported name in the acpi_node_table[] from the source code. Currently,
   only ``cam1``, ``cam2``, and ``hdac`` are supported for MRB. These nodes are
   platform-specific.


**Example:**

   -s 19,virtio-i2c,0:70@cam1:2F,4:1C

This adds slave devices 0x70 and 0x2F under the native adapter
/dev/i2c-0, and 0x1C under /dev/i2c-4 to the virtio-i2c adapter. Since
0x70 includes '@cam1', acpi info is also added to it. Since 0x2F and
0x1C have no '@<node>', no acpi info is added to them.


**Simple use case:**

When launched with this cmdline:

   -s 19,virtio-i2c,4:1C

a virtual I2C adapter will appear in the guest OS:

.. code-block:: none

   root@clr-d5f61ae5f5224e59bb1727db3b5f5d4e ~ # ./i2cdetect -y -l
   i2c-3   i2c   DPDDC-A            I2C adapter
   i2c-1   i2c   i915 gmbus dpc     I2C adapter
   i2c-6   i2c   i2c-virtio         I2C adapter   <------
   i2c-4   i2c   DPDDC-B            I2C adapter
   i2c-2   i2c   i915 gmbus misc    I2C adapter
   i2c-0   i2c   i915 gmbus dpb     I2C adapter
   i2c-5   i2c   DPDDC-C            I2C adapter

You can find the slave device 0x1C under the virtio I2C adapter i2c-6:

.. code-block:: none

   root@clr-d5f61ae5f5224e59bb1727db3b5f5d4e ~ # ./i2cdetect -y -r 6
        0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
   00:          -- -- -- -- -- -- -- -- -- -- -- -- --
   10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --   <--------
   20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
   30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
   40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
   50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
   60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
   70: -- -- -- -- -- -- -- --

You can dump the i2c device if it is supported:

.. code-block:: none

   root@clr-d5f61ae5f5224e59bb1727db3b5f5d4e ~ # ./i2cdump -f -y 6 0x1C
   No size specified (using byte-data access)
        0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f    0123456789abcdef
   10: ff ff 00 22 b2 05 00 00 00 00 00 00 00 00 00 00    ..."??..........
   20: 00 00 00 ff ff ff ff ff 00 00 00 ff ff ff ff ff    ................
   30: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff 00    ................
   40: 00 00 00 ff ff ff ff ff ff ff ff ff ff ff ff ff    ................
   50: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff    ................
   60: 00 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00    .?..............
   70: ff ff 00 ff 10 10 ff ff ff ff ff ff ff ff ff ff    ....??..........
   80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff    ................
   90: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff    ................
   a0: ff ff ff ff ff ff f8 ff 00 00 ff ff 00 ff ff ff    ......?.........
   b0: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff    ................
   c0: 00 ff 00 00 ff ff ff 00 00 ff ff ff ff ff ff ff    ................
   d0: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff    ................
   e0: 00 ff 06 00 03 fa 00 ff ff ff ff ff ff ff ff ff    ..?.??..........
   f0: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff    ................

Note that the virtual I2C bus number has no relationship with the native
I2C bus number; it is auto-generated by the guest OS.
@@ -21,8 +21,8 @@ must be built with ``CONFIG_VIRTIO_INPUT=y``.

Two virtqueues are used to transfer input_event between FE and BE. One
is for the input_events from BE to FE, as generated by input hardware
devices in the Service VM. The other is for status changes from FE to BE, as
finally sent to the input hardware device in the Service VM.

At the probe stage of the FE virtio-input driver, a buffer (used to
accommodate 64 input events) is allocated together with the driver data.

@@ -37,7 +37,7 @@ char device and caches it into an internal buffer until an EV_SYN input

event with SYN_REPORT is received. The BE driver then copies all the cached
input events to the event virtqueue, one by one. These events are added by
the FE driver following a notification to the FE driver, implemented
as an interrupt injection to the User VM.

For input events regarding status change, the FE driver allocates a
buffer for an input event and adds it to the status virtqueue followed

@@ -93,7 +93,7 @@ The general command syntax is::

   -s n,virtio-input,/dev/input/eventX[,serial]

- /dev/input/eventX is used to specify the evdev char device node in the
  Service VM.

- "serial" is an optional string. When it is specified, it will be used
  as the Uniq of the guest virtio input device.
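For instance (a hypothetical sketch; the slot number, event node, and
serial string are arbitrary), forwarding a Service VM keyboard device
might look like::

   -s 8,virtio-input,/dev/input/event2,virtio-kbd-01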
@@ -4,7 +4,7 @@ Virtio-net
##########

Virtio-net is the para-virtualization solution used in ACRN for
networking. The ACRN device model emulates virtual NICs for the User VM and the
frontend virtio network driver, simulating the virtual NIC and following
the virtio specification. (Refer to :ref:`introduction` and
:ref:`virtio-hld` for background introductions to ACRN and Virtio.)

@@ -23,7 +23,7 @@ Network Virtualization Architecture

ACRN's network virtualization architecture is shown below in
:numref:`net-virt-arch`, and illustrates the many necessary network
virtualization components that must cooperate for the User VM to send and
receive data from the outside world.

.. figure:: images/network-virt-arch.png

@@ -38,7 +38,7 @@ components are parts of the Linux kernel.)

Let's explore these components further.

Service VM/User VM Network Stack:
   This is the standard Linux TCP/IP stack, currently the most
   feature-rich TCP/IP implementation.

@@ -57,11 +57,11 @@ ACRN Hypervisor:

   bare-metal hardware, and suitable for a variety of IoT and embedded
   device solutions. It fetches and analyzes the guest instructions, puts
   the decoded information into the shared page as an IOREQ, and notifies
   or interrupts the VHM module in the Service VM for processing.

VHM Module:
   The Virtio and Hypervisor Service Module (VHM) is a kernel module in the
   Service VM acting as a middle layer to support the device model
   and hypervisor. The VHM forwards an IOREQ to the virtio-net backend
   driver for processing.

@@ -72,7 +72,7 @@ ACRN Device Model and virtio-net Backend Driver:

Bridge and Tap Device:
   Bridge and Tap are standard virtual network infrastructures. They play
   an important role in communication among the Service VM, the User VM, and the
   outside world.

IGB Driver:

@@ -82,7 +82,7 @@ IGB Driver:

The virtual network card (NIC) is implemented as a virtio legacy device
in the ACRN device model (DM). It is registered as a PCI virtio device
to the guest OS (User VM) and uses the standard virtio-net in the Linux kernel as
its driver (the guest kernel should be built with
``CONFIG_VIRTIO_NET=y``).

@@ -96,7 +96,7 @@ ACRN Virtio-Network Calling Stack

Various components of ACRN network virtualization are shown in the
architecture diagram shown in :numref:`net-virt-arch`. In this section,
we will use User VM data transmission (TX) and reception (RX) examples to
explain step-by-step how these components work together to implement
ACRN network virtualization.

@@ -123,13 +123,13 @@ Initialization in virtio-net Frontend Driver

- Register network driver
- Setup shared virtqueues

ACRN User VM TX FLOW
====================

The following shows the ACRN User VM network TX flow, using TCP as an
example, showing the flow through each layer:

**User VM TCP Layer**

.. code-block:: c

@@ -139,7 +139,7 @@ example, showing the flow through each layer:

   tcp_write_xmit -->
   tcp_transmit_skb -->

**User VM IP Layer**

.. code-block:: c

@@ -153,7 +153,7 @@ example, showing the flow through each layer:

   neigh_output -->
   neigh_resolve_output -->

**User VM MAC Layer**

.. code-block:: c

@@ -165,7 +165,7 @@ example, showing the flow through each layer:

   __netdev_start_xmit -->

**User VM MAC Layer virtio-net Frontend Driver**

.. code-block:: c

@@ -187,7 +187,7 @@ example, showing the flow through each layer:

   pio_instr_vmexit_handler -->
   emulate_io --> // ioreq can't be processed in HV, forward it to VHM
   acrn_insert_request_wait -->
   fire_vhm_interrupt --> // interrupt Service VM, VHM will get notified

**VHM Module**

@@ -216,7 +216,7 @@ example, showing the flow through each layer:

   virtio_net_tap_tx -->
   writev --> // write data to tap device

**Service VM TAP Device Forwarding**

.. code-block:: c

@@ -233,7 +233,7 @@ example, showing the flow through each layer:

   __netif_receive_skb_core -->

**Service VM Bridge Forwarding**

.. code-block:: c

@@ -244,7 +244,7 @@ example, showing the flow through each layer:

   br_forward_finish -->
   br_dev_queue_push_xmit -->

**Service VM MAC Layer**

.. code-block:: c

@@ -256,16 +256,16 @@ example, showing the flow through each layer:

   __netdev_start_xmit -->

**Service VM MAC Layer IGB Driver**

.. code-block:: c

   igb_xmit_frame --> // IGB physical NIC driver xmit function

ACRN User VM RX FLOW
====================

The following shows the ACRN User VM network RX flow, using TCP as an example.
Let's start by receiving a device interrupt. (Note that the hypervisor
will first get notified when receiving an interrupt even in passthrough
cases.)
@@ -288,11 +288,11 @@ cases.)

   do_softirq -->
   ptdev_softirq -->
   vlapic_intr_msi --> // insert the interrupt into Service VM

   start_vcpu --> // VM Entry here, will process the pending interrupts

**Service VM MAC Layer IGB Driver**

.. code-block:: c

@@ -306,7 +306,7 @@ cases.)

   __netif_receive_skb -->
   __netif_receive_skb_core -->

**Service VM Bridge Forwarding**

.. code-block:: c

@@ -317,7 +317,7 @@ cases.)

   br_forward_finish -->
   br_dev_queue_push_xmit -->

**Service VM MAC Layer**

.. code-block:: c

@@ -328,7 +328,7 @@ cases.)

   netdev_start_xmit -->
   __netdev_start_xmit -->

**Service VM MAC Layer TAP Driver**

.. code-block:: c

@@ -339,7 +339,7 @@ cases.)

.. code-block:: c

   virtio_net_rx_callback --> // the tap fd gets notified and this function is invoked
   virtio_net_tap_rx --> // read data from tap, prepare virtqueue, insert interrupt into the User VM
   vq_endchains -->
   vq_interrupt -->
   pci_generate_msi -->

@@ -357,10 +357,10 @@ cases.)

   vmexit_handler --> // vmexit because VMX_EXIT_REASON_VMCALL
   vmcall_vmexit_handler -->
   hcall_inject_msi --> // insert interrupt into User VM
   vlapic_intr_msi -->

**User VM MAC Layer virtio_net Frontend Driver**

.. code-block:: c

@@ -372,7 +372,7 @@ cases.)

   virtnet_receive -->
   receive_buf -->

**User VM MAC Layer**

.. code-block:: c

@@ -382,7 +382,7 @@ cases.)

   __netif_receive_skb -->
   __netif_receive_skb_core -->

**User VM IP Layer**

.. code-block:: c

@@ -393,7 +393,7 @@ cases.)

   ip_local_deliver_finish -->

**User VM TCP Layer**

.. code-block:: c

@@ -410,7 +410,7 @@ How to Use
==========

The network infrastructure shown in :numref:`net-virt-infra` needs to be
prepared in the Service VM before we start. We need to create a bridge and at
least one tap device (two tap devices are needed to create a dual
virtual NIC) and attach a physical NIC and tap device to the bridge.

@@ -419,11 +419,11 @@ virtual NIC) and attach a physical NIC and tap device to the bridge.

   :width: 900px
   :name: net-virt-infra

   Network Infrastructure in the Service VM

You can use Linux commands (e.g. ip, brctl) to create this network (a
sketch follows the file list below). In our case, we use systemd to
automatically create the network by default. You can check the files with
prefix 50- in the Service VM ``/usr/lib/systemd/network/``:

- `50-acrn.netdev <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/misc/acrnbridge/acrn.netdev>`__

@@ -431,7 +431,7 @@ You can check the files with prefix 50- in the SOS

- `50-tap0.netdev <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/misc/acrnbridge/tap0.netdev>`__
- `50-eth.network <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/misc/acrnbridge/eth.network>`__
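As a manual alternative (a sketch only; the bridge, tap, and NIC names
are illustrative and must match your system), the same infrastructure can
be created with standard Linux commands::

   brctl addbr acrn-br0
   ip tuntap add dev tap0 mode tap
   brctl addif acrn-br0 eth0
   brctl addif acrn-br0 tap0
   ip link set dev tap0 up
   ip link set dev acrn-br0 up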
When the Service VM is started, run ``ifconfig`` to show the devices created by
this systemd configuration:

.. code-block:: none

@@ -486,7 +486,7 @@ optional):

   -s 4,virtio-net,<tap_name>,[mac=<XX:XX:XX:XX:XX:XX>]
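For instance (the tap name and MAC address are illustrative)::

   -s 4,virtio-net,tap0,mac=00:16:3E:39:0A:CE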
When the User VM is launched, run ``ifconfig`` to check the network. enp0s4r
is the virtual NIC created by acrn-dm:

.. code-block:: none

@@ -3,7 +3,7 @@

Virtio-rnd
##########

Virtio-rnd provides a virtual hardware random source for the User VM. It
simulates a PCI device following the virtio specification, and is
implemented based on the virtio user mode framework.

Architecture

@@ -15,9 +15,9 @@ components are parts of Linux software or third party tools.

virtio-rnd is implemented as a virtio legacy device in the ACRN device
model (DM), and is registered as a PCI virtio device to the guest OS
(User VM). Tools such as :command:`od` (dump a file in octal or other format) can
be used to read random values from ``/dev/random``. This device file in the
User VM is bound with the frontend virtio-rng driver. (The guest kernel must
be built with ``CONFIG_HW_RANDOM_VIRTIO=y``). The backend
virtio-rnd reads the HW random values from ``/dev/random`` in the Service VM and
sends them to the frontend.
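For example, ``od`` can read a few bytes from this device node
(illustrative output; the actual values are random)::

   # od -vAn -N8 -t x1 /dev/random
    a3 1f 0c 77 52 9e 04 d8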
@@ -35,7 +35,7 @@ Add a pci slot to the device model acrn-dm command line; for example::

   -s <slot_number>,virtio-rnd

Check to see if the frontend virtio_rng driver is available in the User VM:

.. code-block:: console

@@ -29,27 +29,27 @@ Model following the PCI device framework. The following

   Watchdog device flow

The DM in the Service VM treats the watchdog as a passive device.
It receives read/write commands from the watchdog driver, does the
actions, and returns. In ACRN, the commands are from the User VM
watchdog driver.

User VM watchdog work flow
**************************

When the User VM does a read or write operation on the watchdog device's
registers or memory space (Port IO or Memory-mapped I/O), it will trap into
the hypervisor. The hypervisor delivers the operation to the Service VM/DM
through an IPI (inter-processor interrupt) or shared memory, and the DM
dispatches the operation to the watchdog emulation code.

After the DM watchdog finishes emulating the read or write operation, it
then calls ``ioctl`` to the Service VM/kernel (``/dev/acrn_vhm``). VHM will call a
hypercall to trap into the hypervisor to tell it the operation is done, and
the hypervisor will set User VM-related VCPU registers and resume the User VM so the
User VM watchdog driver will get the return values (or return status). The
:numref:`watchdog-workflow` below is a typical operation flow,
from a User VM to the Service VM and back:

.. figure:: images/watchdog-image1.png
   :align: center

@@ -82,18 +82,18 @@ emulation.

The main part in the watchdog emulation is the timer thread. It emulates
the watchdog device timeout management. When it gets the kick action
from the User VM, it resets the timer. If the timer expires before getting a
timely kick action, it will call a DM API to reboot that User VM.

In the User VM launch script, add ``-s xx,wdt-i6300esb`` into the DM parameters.
(xx is the virtual PCI BDF number, as with other PCI devices.)

Make sure the User VM kernel has the I6300ESB driver enabled:
``CONFIG_I6300ESB_WDT=y``. After the User VM boots up, the watchdog device
will be created as node ``/dev/watchdog``, and can be used as a normal
device file.

Usually the User VM needs a watchdog service (daemon) to run in userland and
kick the watchdog periodically. If something prevents the daemon from
kicking the watchdog, for example if the User VM system is hung, the watchdog
will time out and the DM will reboot the User VM.
|
||||
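As a minimal sketch of such a daemon (illustrative shell only, not taken
from the ACRN sources), note that any write to ``/dev/watchdog`` counts as
a kick:

.. code-block:: none

   #!/bin/sh
   # Minimal watchdog kicker sketch: any write to /dev/watchdog resets the
   # emulated i6300esb timer. If this loop ever stops running (e.g. the
   # User VM hangs), the timer expires and the DM reboots the VM.
   while true; do
       echo keepalive > /dev/watchdog
       sleep 10
   done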
@@ -19,7 +19,7 @@ speculative access to data which is available in the Level 1 Data Cache

when the page table entry controlling the virtual address, which is used
for the access, has the Present bit cleared or reserved bits set.

When the processor accesses a linear address, it first looks for a
translation to a physical address in the translation lookaside buffer (TLB).
For an unmapped address this will not provide a physical address, so the
processor performs a table walk of a hierarchical paging structure in

@@ -70,7 +70,7 @@ There is no additional action in ACRN hypervisor.

Guest -> hypervisor Attack
==========================

ACRN always enables EPT for all guests (SOS and UOS), thus a malicious
ACRN always enables EPT for all guests (Service VM and User VM), thus a malicious
guest can directly control guest PTEs to construct an L1TF-based attack
on the hypervisor. Alternatively, if ACRN EPT is not sanitized with some
PTEs (with present bit cleared, or reserved bit set) pointing to valid

@@ -93,7 +93,7 @@ e.g. whether CPU partitioning is used, whether Hyper-Threading is on, etc.

If CPU partitioning is enabled (default policy in ACRN), there is
1:1 mapping between vCPUs and pCPUs, i.e. no sharing of pCPU. There
may be an attack possibility when Hyper-Threading is on, where
logical processors of the same physical core may be allocated to two
different guests. Then one guest may be able to attack the other guest
on the sibling thread due to the shared L1D.
@@ -153,7 +153,7 @@ to current VM (in case of CPU sharing).

Flushing the L1D evicts not only the data which should not be
accessed by a potentially malicious guest, it also flushes the
guest data. Flushing the L1D has a performance impact as the
processor has to bring the flushed guest data back into the L1D,
and actual overhead is proportional to the frequency of vmentry.
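On processors that enumerate the ``flush_l1d`` capability, this flush is a
single write of bit 0 to the ``IA32_FLUSH_CMD`` MSR (0x10b). A rough way to
check for the capability and trigger one flush from a Linux shell, assuming
the ``msr-tools`` package is installed (shown for illustration only):

.. code-block:: none

   $ grep -om1 flush_l1d /proc/cpuinfo
   flush_l1d
   # modprobe msr
   # wrmsr 0x10b 0x1    # L1D_FLUSH: flush the L1 data cache once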
@@ -188,7 +188,7 @@ platform seed. They are critical secrets to serve for guest keystore or

other security usage, e.g. disk encryption, secure storage.

If the critical secret data in ACRN is identified, then such
data can be put into un-cached memory. As the content will
never go to the L1D, it is immune to an L1TF attack.

For example, after getting the physical seed from CSME, before any guest

@@ -240,8 +240,8 @@ Mitigation Recommendations

There is no mitigation required on Apollo Lake based platforms.

The majority use case for ACRN is in a pre-configured environment,
where the whole software stack (from ACRN hypervisor to guest
kernel to SOS root) is tightly controlled by solution provider
where the whole software stack (from ACRN hypervisor to guest
kernel to Service VM root) is tightly controlled by the solution provider
and not allowed for run-time change after sale (guest kernel is
trusted). In that case, the solution provider will make sure that the guest
kernel is up-to-date, including necessary page table sanitization,
@@ -88,20 +88,21 @@ The components are listed as follows.

  virtualization. The vCPU loop module in this component handles VM exit events
  by calling the proper handler in the other components. Hypercalls are
  implemented as a special type of VM exit event. This component is also able to
  inject upcall interrupts to SOS.
  inject upcall interrupts to the Service VM.
* **Device Emulation** This component implements devices that are emulated in
  the hypervisor itself, such as the virtual programmable interrupt controllers,
  including vPIC, vLAPIC and vIOAPIC.
* **Passthru Management** This component manages devices that are passed through
  to specific VMs.
* **Extended Device Emulation** This component implements an I/O request
  mechanism that allow the hypervisor to forward I/O accesses from UOSes to SOS
  mechanism that allows the hypervisor to forward I/O accesses from a User
  VM to the Service VM
  for emulation.
* **VM Management** This component manages the creation, deletion and other
  lifecycle operations of VMs.
* **Hypervisor Initialization** This component invokes the initialization
  subroutines in the other components to bring up the hypervisor and start up
  SOS in sharing mode or all the VMs in partitioning mode.
  Service VM in sharing mode or all the VMs in partitioning mode.

ACRN hypervisor adopts a layered design where higher layers can invoke the
interfaces of lower layers but not vice versa. The only exception is the
@@ -150,7 +150,7 @@ How to build ACRN on Fedora 29?

There is a known issue when attempting to build ACRN on Fedora 29
because of how ``gnu-efi`` is packaged in this Fedora release.
(See the `ACRN GitHub issue
<https://github.com/projectacrn/acrn-hypervisor/issues/2457>`_
for more information.) The following patch to ``/efi-stub/Makefile``
fixes the problem on Fedora 29 development systems (but should
not be used on other Linux distros)::
@@ -43,6 +43,8 @@ skip_download_uos=0

disable_reboot=0
# set default scenario name
scenario=sdc
# swupd config file path
swupd_config=/usr/share/defaults/swupd/config

function upgrade_sos()
{
@@ -60,7 +62,7 @@ function upgrade_sos()

    # set up mirror and proxy url while specified with m and p options
    [[ -n $mirror ]] && echo "Setting swupd mirror to: $mirror" && \
        sed -i 's/#allow_insecure_http=<true\/false>/allow_insecure_http=true/' /usr/share/defaults/swupd/config && \
        sed -i 's/#allow_insecure_http=<true\/false>/allow_insecure_http=true/' $swupd_config && \
        swupd mirror -s $mirror
    [[ -n $proxy ]] && echo "Setting proxy to: $proxy" && export https_proxy=$proxy
@@ -76,12 +78,12 @@ function upgrade_sos()

        echo "Clear Linux version $sos_ver is already installed. Continuing to set up the Service VM..."
    else
        echo "Upgrading the Clear Linux version from $VERSION_ID to $sos_ver ..."
        swupd repair --picky -V $sos_ver 2>/dev/null
        swupd repair -x --picky -V $sos_ver 2>/dev/null
    fi

    # Do the setups if the previous process succeeded.
    if [[ $? -eq 0 ]]; then
        [[ -n $mirror ]] && sed -i 's/#allow_insecure_http=<true\/false>/allow_insecure_http=true/' /usr/share/defaults/swupd/config
        [[ -n $mirror ]] && sed -i 's/#allow_insecure_http=<true\/false>/allow_insecure_http=true/' $swupd_config
        echo "Adding the service-os and systemd-networkd-autostart bundles..."
        swupd bundle-add service-os systemd-networkd-autostart 2>/dev/null
@@ -127,7 +129,7 @@ function upgrade_sos()

    # Rename Clear-linux-iot-lts2018-sos conf to acrn.conf
    conf_directory=/mnt/loader/entries/
    conf=`sed -n 2p /mnt/loader/loader.conf | sed "s/default //"`
    conf=`sed -n 2p /mnt/loader/loader.conf | sed "s/default //" | sed "s/.conf$//"`
    cp -r ${conf_directory}${conf}.conf ${conf_directory}acrn.conf 2>/dev/null || \
        { echo "${conf_directory}${conf}.conf does not exist." && exit 1; }
    sed -i 2"s/$conf/acrn/" /mnt/loader/loader.conf
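The added ``sed "s/.conf$//"`` strips the ``.conf`` suffix from the default
entry name, since the following ``cp`` command re-appends ``.conf`` when
building the path; without it the script would look for a doubled
``*.conf.conf`` file. For example (the entry name below is hypothetical):

.. code-block:: none

   $ echo "default Clear-linux-iot-lts2018-sos-4.19.13.conf" | sed "s/default //" | sed "s/.conf$//"
   Clear-linux-iot-lts2018-sos-4.19.13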
@@ -210,8 +212,14 @@ function upgrade_uos()

    mount ${uos_loop_device}p3 /mnt || { echo "Failed to mount the User VM rootfs partition" && exit 1; }
    mount ${uos_loop_device}p1 /mnt/boot || { echo "Failed to mount the User VM EFI partition" && exit 1; }

    # set up mirror and proxy url while specified with m and p options
    [[ -n $mirror ]] && echo "Setting swupd mirror to: $mirror" && \
        sed -i 's/#allow_insecure_http=<true\/false>/allow_insecure_http=true/' /mnt$swupd_config && \
        swupd mirror -s $mirror --path=/mnt

    echo "Install kernel-iot-lts2018 to $uos_img"
    swupd bundle-add --path=/mnt kernel-iot-lts2018 || { echo "Failed to install kernel-iot-lts2018" && exit 1; }
    swupd bundle-add --path=/mnt kernel-iot-lts2018 || { echo "Failed to install kernel-iot-lts2018" && \
        sync && umount /mnt/boot /mnt && exit 1; }

    echo "Configure kernel-iot-lts2018 as $uos_img default boot kernel"
    uos_kernel_conf=`ls -t /mnt/boot/loader/entries/ | grep Clear-linux-iot-lts2018 | head -n1`
@@ -6,7 +6,10 @@ Build ACRN from Source

Introduction
************

Following a general embedded-system programming model, the ACRN
hypervisor is designed to be customized at build time per hardware
platform and per usage scenario, rather than one binary for all
scenarios.

The hypervisor binary is generated based on Kconfig configuration
settings. Instructions about these settings can be found in
@@ -46,10 +49,16 @@ these steps.

.. _install-build-tools-dependencies:

Step 1: Install build tools and dependencies
********************************************
.. rst-class:: numbered-step

Install build tools and dependencies
************************************

ACRN development is supported on popular Linux distributions, each with
their own way to install development tools. This user guide covers the
different steps to configure and build ACRN natively on your
distribution. Refer to the :ref:`building-acrn-in-docker` user guide for
instructions on how to build ACRN using a container.

.. note::
   ACRN uses ``menuconfig``, a python3 text-based user interface (TUI) for
   configuring hypervisor options, built on python's ``kconfiglib`` library.

@@ -88,7 +97,9 @@ Install the necessary tools for the following systems:

   $ sudo pip3 install kconfiglib

.. note::
   Use ``gcc`` version 7.3.* or higher to avoid gcc compilation
   issues. Follow these instructions to install the ``gcc-7`` package on
   Ubuntu 18.04:

.. code-block:: none
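
   # One common way to install gcc-7 on Ubuntu 18.04 (the package names and
   # priorities here are an assumption, not taken from this guide):
   $ sudo apt-get install gcc-7 g++-7
   $ sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 60 \
        --slave /usr/bin/g++ g++ /usr/bin/g++-7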
@@ -103,8 +114,10 @@ Install the necessary tools for the following systems:

   Verify your version of ``binutils`` with the command ``apt show binutils``.

Step 2: Get the ACRN hypervisor source code
*******************************************
.. rst-class:: numbered-step

Get the ACRN hypervisor source code
***********************************

The `acrn-hypervisor <https://github.com/projectacrn/acrn-hypervisor/>`_
repository contains four main components:

@@ -121,8 +134,10 @@ Enter the following to get the acrn-hypervisor source code:

   $ git clone https://github.com/projectacrn/acrn-hypervisor
Step 3: Build with the ACRN scenario
************************************
.. rst-class:: numbered-step

Build with the ACRN scenario
****************************

Currently, the ACRN hypervisor defines these typical usage scenarios:
@@ -131,19 +146,14 @@ SDC:

   automotive use case that includes one pre-launched Service VM and one
   post-launched User VM.

SDC2:
   SDC2 (Software Defined Cockpit 2) is an extended scenario for an
   automotive SDC system. SDC2 defines one pre-launched Service VM and up
   to three post-launched VMs.

LOGICAL_PARTITION:
   This scenario defines two pre-launched VMs.

INDUSTRY:
   This is a typical scenario for industrial usage with up to four VMs:
   one pre-launched Service VM, one post-launched Standard VM for Human
   interaction (HMI), and one or two post-launched RT VMs for real-time
   control.
   This is a typical scenario for industrial usage with up to eight VMs:
   one pre-launched Service VM, five post-launched Standard VMs (for Human
   interaction etc.), one post-launched RT VM (for real-time control),
   and one Kata container VM.

HYBRID:
   This scenario defines a hybrid use case with three VMs: one
@@ -153,7 +163,7 @@ HYBRID:

Assuming that you are at the top level of the acrn-hypervisor directory, perform the following:

.. note::
   The release version is built by default, 'RELEASE=0' builds the debug version.
   The release version is built by default; ``RELEASE=0`` builds the debug version.

* Build the ``INDUSTRY`` scenario on the ``nuc7i7dnb``:

@@ -161,14 +171,11 @@ Assuming that you are at the top level of the acrn-hypervisor directory, perform

     $ make all BOARD=nuc7i7dnb SCENARIO=industry RELEASE=0

* Build the ``INDUSTRY`` scenario on the ``whl-ipc-i5``:
* Build the ``HYBRID`` scenario on the ``whl-ipc-i5``:

  .. code-block:: none

     $ make all BOARD=whl-ipc-i5 SCENARIO=industry BOARD_FILE=/absolute_path/
       acrn-hypervisor/misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml SCENARIO_FILE=
       /absolute_patch/acrn-hypervisor/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/industry.xml
       RELEASE=0
     $ make all BOARD=whl-ipc-i5 SCENARIO=hybrid RELEASE=0

* Build the ``SDC`` scenario on the ``nuc6cayh``:
@@ -181,8 +188,10 @@ for each scenario.

.. _getting-started-hypervisor-configuration:

Step 4: Build the hypervisor configuration
******************************************
.. rst-class:: numbered-step

Build the hypervisor configuration
**********************************

Modify the hypervisor configuration
===================================

@@ -205,7 +214,8 @@ top level of the acrn-hypervisor directory. The configuration file, named

   $ make defconfig BOARD=nuc6cayh

The BOARD specified is used to select a ``defconfig`` under
``arch/x86/configs/``. The other command line-based options (e.g.
``RELEASE``) have no effect when generating a defconfig.

To modify the hypervisor configurations, you can either edit ``.config``
manually, or you can invoke a TUI-based menuconfig--powered by kconfiglib--by
@@ -218,7 +228,7 @@ configurations and build the hypervisor using the updated ``.config``:

   # Modify the configurations per your needs
   $ cd ../   # Enter top-level folder of acrn-hypervisor source
   $ make menuconfig -C hypervisor BOARD=kbl-nuc-i7 <select industry scenario>
   $ make menuconfig -C hypervisor BOARD=kbl-nuc-i7 <input scenario name>

Note that ``menuconfig`` is python3 only.

@@ -229,8 +239,10 @@ Refer to the help on menuconfig for a detailed guide on the interface:

   $ pydoc3 menuconfig

Step 5: Build the hypervisor, device model, and tools
*****************************************************
.. rst-class:: numbered-step

Build the hypervisor, device model, and tools
*********************************************

Now you can build all these components at once as follows:
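.. code-block:: none

   # Representative invocation only -- reuse the BOARD/SCENARIO values from
   # Step 3 for your own target:
   $ make all BOARD=nuc7i7dnb SCENARIO=industry RELEASE=0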
@@ -252,17 +264,24 @@ If you only need the hypervisor, use this command:

The ``acrn.efi`` binary will be generated at ``./hypervisor/build/acrn.efi``.

As mentioned in :ref:`ACRN Configuration Tool <vm_config_workflow>`, the Board configuration and VM configuration can be imported from XML files.
If you want to build the hypervisor with XML configuration files, specify
the file location as follows:
As mentioned in :ref:`ACRN Configuration Tool <vm_config_workflow>`, the
Board configuration and VM configuration can be imported from XML files.
If you want to build the hypervisor with XML configuration files,
specify the file location as follows (assuming your current directory
is the top level of the acrn-hypervisor directory):

.. code-block:: none

   $ make BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/nuc7i7dnb.xml \
     SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/nuc7i7dnb/industry.xml FIRMWARE=uefi
     SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/nuc7i7dnb/industry.xml FIRMWARE=uefi TARGET_DIR=xxx

Note that the file path must be absolute. Both of the ``BOARD`` and ``SCENARIO`` parameters are not needed because the information is retrieved from the XML file. Adjust the example above to your own environment path.
.. note::
   The ``BOARD`` and ``SCENARIO`` parameters are not needed because the
   information is retrieved from the corresponding ``BOARD_FILE`` and
   ``SCENARIO_FILE`` XML configuration files. The ``TARGET_DIR`` parameter
   specifies what directory is used to store configuration files imported
   from XML files. If ``TARGET_DIR`` is not specified, the original
   configuration files of acrn-hypervisor will be overwritten.

Follow the same instructions to boot and test the images you created from your build.
@@ -221,7 +221,7 @@ Use the ACRN industry out-of-the-box image

   It ensures that the end of the string is properly detected.

#. Reboot the test machine. After the Clear Linux OS boots,
   log in as “root” for the first time.
   log in as ``root`` for the first time.

.. _install_rtvm:
@@ -287,7 +287,15 @@ RT Performance Test

Cyclictest introduction
=======================

The cyclictest is most commonly used for benchmarking RT systems. It is
one of the most frequently used tools for evaluating the relative
performance of real-time systems. Cyclictest accurately and repeatedly
measures the difference between a thread's intended wake-up time and the
time at which it actually wakes up in order to provide statistics about
the system's latencies. It can measure latencies in real-time systems
that are caused by hardware, firmware, and the operating system. The
cyclictest is currently maintained by the Linux Foundation and is part of
the test suite rt-tests.
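For example, a typical short measurement run looks like the following (the
flag values here are illustrative; see the cyclictest man page for details):

.. code-block:: none

   # Lock memory (-m), one thread (-t 1) at RT priority 80 (-p 80),
   # 1000 us wake-up interval (-i 1000), 100000 loops (-l 100000),
   # quiet mode with a summary at the end (-q):
   $ sudo cyclictest -m -t 1 -p 80 -i 1000 -l 100000 -q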
Pre-Configurations
==================

@@ -555,5 +563,3 @@ Passthrough a hard disk to the RTVM

.. code-block:: none

   # /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
BIN
doc/images/ACRN-fall-from-tree-small.png
Normal file
@@ -64,13 +64,6 @@ through an open source platform.

        </a>
        <p>Supported hardware platforms and boards</p>
      </li>
      <li class="grid-item">
        <a href="glossary.html">
          <img alt="" src="_static/images/ACRNlogo80w.png"/>
          <h2>Glossary<br/>of Terms</h2>
        </a>
        <p>Glossary of useful terms</p>
      </li>
    </ul>
@@ -6,7 +6,7 @@ What is ACRN

Introduction to Project ACRN
****************************

ACRN™ is a, flexible, lightweight reference hypervisor, built with
ACRN |trade| is a flexible, lightweight reference hypervisor, built with
real-time and safety-criticality in mind, and optimized to streamline
embedded development through an open source platform. ACRN defines a
device hypervisor reference stack and an architecture for running

@@ -26,7 +26,7 @@ user VM sharing optimizations for IoT and embedded devices.

ACRN Open Source Roadmap 2020
*****************************

Stay informed on what's ahead for ACRN in 2020 by visting the `ACRN 2020 Roadmap <https://projectacrn.org/wp-content/uploads/sites/59/2020/03/ACRN-Roadmap-External-2020.pdf>`_.
Stay informed on what's ahead for ACRN in 2020 by visiting the `ACRN 2020 Roadmap <https://projectacrn.org/wp-content/uploads/sites/59/2020/03/ACRN-Roadmap-External-2020.pdf>`_.

For up-to-date happenings, visit the `ACRN blog <https://projectacrn.org/blog/>`_.
@@ -59,7 +59,7 @@ actions when system critical failures occur.

Shown on the right of :numref:`V2-hl-arch`, the remaining hardware
resources are shared among the service VM and user VMs. The service VM
is similar to Xen’s Dom0, and a user VM is similar to Xen’s DomU. The
is similar to Xen's Dom0, and a user VM is similar to Xen's DomU. The
service VM is the first VM launched by ACRN, if there is no pre-launched
VM. The service VM can access hardware resources directly by running
native drivers and it provides device sharing services to the user VMs

@@ -117,7 +117,7 @@ information about the vehicle, such as:

  fuel or tire pressure;
- showing rear-view and surround-view cameras for parking assistance.

An **In-Vehicle Infotainment (IVI)** system’s capabilities can include:
An **In-Vehicle Infotainment (IVI)** system's capabilities can include:

- navigation systems, radios, and other entertainment systems;
- connection to mobile devices for phone calls, music, and applications

@@ -197,7 +197,7 @@ real-time OS needs, such as VxWorks* or RT-Linux*.

ACRN Industrial Usage Architecture Overview

:numref:`V2-industry-usage-arch` shows ACRN’s block diagram for an
:numref:`V2-industry-usage-arch` shows ACRN's block diagram for an
Industrial usage scenario:

- ACRN boots from the SoC platform, and supports firmware such as the
@@ -616,10 +616,10 @@ ACRN Device model incorporates these three aspects:

   from the User VM device, the I/O dispatcher sends this request to the
   corresponding device emulation routine.

**I/O Path**:
   see `ACRN-io-mediator`_ below

**VHM**:
   The Virtio and Hypervisor Service Module is a kernel module in the
   Service VM acting as a middle layer to support the device model. The VHM
   and its client handling flow is described below:

@@ -747,7 +747,7 @@ Following along with the numbered items in :numref:`io-emulation-path`:

   the module is invoked to execute its processing APIs.
6. After the ACRN device module completes the emulation (port IO 20h access
   in this example), (say uDev1 here), uDev1 puts the result into the
   shared page (in register AL in this example).
7. ACRN device model then returns control to ACRN hypervisor to indicate the
   completion of an IO instruction emulation, typically through VHM/hypercall.
8. The ACRN hypervisor then knows IO emulation is complete, and copies
@@ -12,7 +12,7 @@ Minimum System Requirements for Installing ACRN

+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
| Hardware               | Minimum Requirements              | Recommended                                                                     |
+========================+===================================+=================================================================================+
| Processor              | Compatible x86 64-bit processor   | 2 core with “Intel Hyper Threading Technology” enabled in the BIOS or more core |
| Processor              | Compatible x86 64-bit processor   | 2 core with Intel Hyper Threading Technology enabled in the BIOS or more cores  |
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
| System memory          | 4GB RAM                           | 8GB or more (< 32G)                                                             |
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
@@ -27,6 +27,15 @@ Known Limitations
*****************

Platforms with multiple PCI segments

ACRN assumes the following conditions are satisfied by the platform BIOS
(a quick way to inspect the resulting assignments is shown after this list):

* All the PCI device BARs should be assigned resources, including SR-IOV VF
  BARs if a device supports them.

* Bridge windows for PCI bridge devices, and the resources for the root bus,
  should be programmed with values that enclose the resources used by all
  the downstream devices.

* There should be no conflict in resources among the PCI devices, or between
  PCI devices and other platform devices.
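On Linux, you can quickly dump the BAR and bridge-window assignments the
BIOS made, to spot-check these assumptions (illustrative command only):

.. code-block:: none

   $ sudo lspci -vv | grep -E "^[0-9a-f]|Region|behind bridge"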
Verified Platforms According to ACRN Usage
******************************************
@@ -109,7 +118,7 @@ Verified Hardware Specifications Detail

|  |  | System memory          | - Two DDR3L SO-DIMM sockets                               |
|  |  |                        |   (up to 8 GB, 1866 MHz), 1.35V                           |
|  |  +------------------------+-----------------------------------------------------------+
|  |  | Storage capabilities   | - SDXC slot with UHS-I support on the side                |
|  |  |                        | - One SATA3 port for connection to 2.5" HDD or SSD        |
|  |  |                        |   (up to 9.5 mm thickness)                                |
|  |  +------------------------+-----------------------------------------------------------+
@@ -34,7 +34,7 @@ https://projectacrn.github.io/0.4/. Documentation for the latest

Version 0.4 new features
************************

- :acrn-issue:`1824` - implement "wbinvd" emulation
- :acrn-issue:`1859` - Doc: update GSG guide to avoid issue "black screen"
- :acrn-issue:`1878` - The "Using Ubuntu as the Service OS" tutorial is outdated and needs to be refreshed
- :acrn-issue:`1926` - `kernel-doc` causing `make doc` failure (because of upcoming Perl changes)
@@ -147,7 +147,7 @@ Known Issues

   **Impact:** Failed to use UART for input in corner case.

   **Workaround:** Enter other keys before typing :kbd:`Enter`.

:acrn-issue:`1996` - There is an error log when using "acrnd&" to boot UOS
   An error log is printed when starting acrnd as a background job
   (``acrnd&``) to boot UOS. The UOS still boots up
@@ -34,16 +34,16 @@ https://projectacrn.github.io/0.5/. Documentation for the latest

Version 0.5 new features
************************

**OVMF support initial patches merged in ACRN**:
   To support booting Windows as a Guest OS, we are
   using Open source Virtual Machine Firmware (OVMF).
   Initial patches to support OVMF have been merged in ACRN hypervisor.
   More patches for ACRN and patches upstreaming to OVMF work will continue.

**UP2 board serial port support**:
   This release enables serial port debugging on UP2 boards during SOS and UOS boot.

**One E2E binary to support all UEFI platform**:
   ACRN can support both Apollo Lake (APL) and Kaby Lake (KBL) NUCs.
   Instead of having separate builds, this release offers community
   developers a single end-to-end reference build that supports both
@@ -52,11 +52,11 @@ See :ref:`getting_started` for more information.

**APL UP2 board with SBL firmware**: With this 0.5 release, ACRN
now supports APL UP2 board with slim Bootloader (SBL) firmware.
Slim Bootloader is a modern, flexible, light-weight, open source
reference boot loader with key benefits such as being fast, small,
customizable, and secure. An end-to-end reference build with
ACRN hypervisor, Clear Linux OS as SOS, and Clear Linux OS as UOS has been
verified on UP2/SBL board. See the :ref:`using-sbl-up2` documentation
for step-by-step instructions.

**Document updates**: Several new documents have been added in this release, including:
@@ -68,35 +68,35 @@ for step-by-step instructions.

- :acrn-issue:`892` - Power Management: VMM control
- :acrn-issue:`894` - Power Management: S5
- :acrn-issue:`914` - GPU Passthrough
- :acrn-issue:`1124` - MMU code reshuffle
- :acrn-issue:`1179` - RPMB key passing
- :acrn-issue:`1180` - vFastboot release version 0.9
- :acrn-issue:`1181` - Integrate enabling Crash OS feature as default in VSBL debugversion
- :acrn-issue:`1182` - vSBL to support ACPI customization
- :acrn-issue:`1240` - [APL][IO Mediator] Enable VHOST_NET & VHOST to accelerate guest networking with virtio_net.
- :acrn-issue:`1284` - [DeviceModel]Enable NHLT table in DM for audio passthrough
- :acrn-issue:`1313` - [APL][IO Mediator] Remove unused netmap/vale in virtio-net
- :acrn-issue:`1330` - combine VM creating and ioreq shared page setup
- :acrn-issue:`1364` - [APL][IO Mediator] virtio code reshuffle
- :acrn-issue:`1496` - provide a string convert api and remove banned function for virtio-blk
- :acrn-issue:`1546` - hv: timer: add debug information for add_timer
- :acrn-issue:`1579` - vSBL to Support Ramoops
- :acrn-issue:`1580` - vSBL to support crash mode with vFastboot
- :acrn-issue:`1626` - support x2APIC mode for ACRN guests
- :acrn-issue:`1672` - L1TF mitigation
- :acrn-issue:`1747` - Replace function like macro with inline function
- :acrn-issue:`1821` - Optimize IO request path
- :acrn-issue:`1832` - Add OVMF booting support for booting as an alternative to vSBL.
- :acrn-issue:`1882` - Extend the SOS CMA range from 64M to 128M
- :acrn-issue:`1995` - Support SBL firmware as boot loader on Apollo Lake UP2.
- :acrn-issue:`2011` - support DISCARD command for virtio-blk
- :acrn-issue:`2036` - Update and complete `acrn-dm` parameters description in the user guide and HLD
- :acrn-issue:`2037` - Set correct name for each pthread in DM
- :acrn-issue:`2079` - Replace banned API with permitted API function in acrn device-model
- :acrn-issue:`2120` - Optimize trusty logic to meet MISRA-C rules
- :acrn-issue:`2145` - Reuse linux common virtio header file for virtio
- :acrn-issue:`2170` - For UEFI based hardware platforms, one Clear Linux OS E2E build binary can be used for all platform's installation
- :acrn-issue:`2187` - Complete the cleanup of unbounded APIs usage

Fixed Issues
************
@@ -187,7 +187,7 @@ Known Issues

   **Impact:** Failed to use UART for input in corner case.

   **Workaround:** Enter other keys before typing :kbd:`Enter`.

:acrn-issue:`1996` - There is an error log when using "acrnd&" to boot UOS
   An error log is printed when starting acrnd as a background job
   (``acrnd&``) to boot UOS. The UOS still boots up
@@ -198,7 +198,7 @@ Known Issues

   **Workaround:** None.

:acrn-issue:`2267` - [APLUP2][LaaG]LaaG can't detect 4k monitor
   After launching UOS on APL UP2, 4k monitor cannot be detected.

   **Impact:** UOS has no display with 4k monitor.

@@ -206,7 +206,7 @@ Known Issues

   **Workaround:** None.

:acrn-issue:`2276` - OVMF failed to launch UOS on UP2.
   UP2 failed to launch UOS using OVMF as virtual bootloader with acrn-dm.

   **Impact:** UOS cannot boot up using OVMF

@@ -224,9 +224,9 @@ Known Issues

   **Impact:** Power Management states related operations cannot be used in SOS/UOS on KBLNUC

   **Workaround:** None

:acrn-issue:`2279` - [APLNUC]After exiting UOS with mediator Usb_KeyBoard and Mouse, SOS cannot use the
   Usb_KeyBoard and Mouse
   After exiting UOS with mediator Usb_KeyBoard and Mouse, SOS cannot use the Usb_KeyBoard and Mouse.
   Reproduce Steps as below:

@@ -240,7 +240,7 @@ Known Issues

   4) Exit UOS.

   5) SOS access USB keyboard and mouse.

   **Impact:** SOS cannot use USB keyboard and mouse in such case.
@@ -153,7 +153,7 @@ Known Issues

   **Workaround:** None.

:acrn-issue:`2267` - [APLUP2][LaaG]LaaG can't detect 4k monitor
   After launching UOS on APL UP2, 4k monitor cannot be detected.

   **Impact:** UOS can't display on a 4k monitor.

@@ -161,7 +161,7 @@ Known Issues

   **Workaround:** Use a monitor with less than 4k resolution.

:acrn-issue:`2276` - OVMF failed to launch UOS on UP2.
   UP2 failed to launch UOS using OVMF as virtual bootloader with acrn-dm.

   **Impact:** UOS cannot boot up using OVMF

@@ -172,7 +172,7 @@ Known Issues

   **Impact:** Power Management states related operations cannot be used in SOS/UOS on KBLNUC

   **Workaround:** None

:acrn-issue:`2279` - [APLNUC]After exiting UOS with mediator Usb_KeyBoard and Mouse, SOS cannot use the Usb_KeyBoard and Mouse
   After exiting UOS with mediator Usb_KeyBoard and Mouse, SOS cannot use the Usb_KeyBoard and Mouse.

@@ -188,7 +188,7 @@ Known Issues

   4) Exit UOS.

   5) SOS access USB keyboard and mouse.

   **Impact:** SOS cannot use USB keyboard and mouse in such case.
@@ -208,9 +208,9 @@ Known Issues

   **Workaround:** Remove enable_initial_modeset for UP2 platform. You can apply :acrn-commit:`4b53ed67` to rebuild UP2 images.

:acrn-issue:`2522` - [NUC7i7BNH]After starting ias in SOS, there is no display
   On NUC7i7BNH, after starting IAS in SOS, there is no display if the monitor is
   connected with a TPC to VGA connector.

   **Impact:** Special model [NUC7i7BNH] has no display in SOS.

@@ -221,7 +221,7 @@ Known Issues

   **Impact:** Cannot use ias weston in UOS.

   **Workaround:**

   1) Use weston instead of IAS weston: ``swupd install x11-server``
   2) Use acrn-kernel to rebuild SOS kernel to replace integrated kernel. Confirm that "DRM_FBDEV_EMULATION" related configs in kernel_config_sos are as below:

@@ -240,7 +240,7 @@ Known Issues

   **Impact:** launching UOS hangs, and then no display in UOS.

   **Workaround:** Use acrn-kernel to rebuild SOS kernel to replace the
   integrated kernel. Confirm "DRM_FBDEV_EMULATION" related
   configs in kernel_config_sos are as below:

   .. code-block:: bash

@@ -254,14 +254,14 @@ Known Issues

:acrn-issue:`2527` - [KBLNUC][HV]System will crash when run crashme (SOS/UOS)
   System will crash after a few minutes running stress test crashme tool in SOS/UOS.

   **Impact:** System may crash in some stress situations.

   **Workaround:** None

:acrn-issue:`2528` - [APLUP2] SBL (built by SBL latest code) failed to boot ACRN hypervisor
   SBL built by latest slimbootloader code (HEAD->ad42a2bd6e4a6364358b9c712cb54e821ee7ee42) failed to boot acrn hypervisor.

   **Impact:** UP2 with SBL cannot boot acrn hypervisor.

   **Workaround:** Use SBL built by earlier slimbootloader code (commit id:edc112328cf3e414523162dd75dc3614e42579fe).
   This older version can boot acrn hypervisor normally.
@@ -94,9 +94,9 @@ Fixed Issues Details

- :acrn-issue:`2857` - FAQs for ACRN's memory usage need to be updated
- :acrn-issue:`2971` - PCIE ECFG support for AcrnGT
- :acrn-issue:`2976` - [GVT]don't register memory for gvt in acrn-dm
- :acrn-issue:`2984` - HV will crash if launch two UOS with same UUID
- :acrn-issue:`2991` - Failed to boot normal vm on the pcpu which ever run lapic_pt vm
- :acrn-issue:`3009` - When running new wokload on weston, the last workload animation not disappeared and screen flashed badly.
- :acrn-issue:`3009` - When running new workload on weston, the last workload animation not disappeared and screen flashed badly.
- :acrn-issue:`3028` - virtio gpio line fd not release
- :acrn-issue:`3032` - Dump stack of mem allocation in irq_disabled after using mempool for ACRN VHM
- :acrn-issue:`3050` - FYI: Kconfiglib major version bumped to 11

@@ -129,14 +129,14 @@ Known Issues

   After booting UOS with multiple USB devices plugged in,
   there's a 60% chance that one or more devices are not discovered.

   **Impact:** Cannot use multiple USB devices at same time.

   **Workaround:** Unplug and plug-in the unrecognized device after booting.

-----

:acrn-issue:`1991` - Input not accepted in UART Console for corner case
   Input is useless in UART Console for a corner case, demonstrated with these steps:

   1) Boot to SOS
   2) ssh into the SOS.
@@ -144,18 +144,18 @@ Known Issues

   4) On the host, use ``minicom -D /dev/ttyUSB0``.
   5) Use ``sos_console 0`` to launch SOS.

   **Impact:** Fails to use UART for input.

   **Workaround:** Enter other keys before typing :kbd:`Enter`.

-----

:acrn-issue:`2267` - [APLUP2][LaaG] LaaG can't detect 4k monitor
   After launching UOS on APL UP2, 4k monitor cannot be detected.

   **Impact:** UOS can't display on a 4k monitor.

   **Workaround:** Use a monitor with less than 4k resolution.

-----

@@ -173,18 +173,18 @@ Known Issues

   4) Exit UOS.
   5) SOS tries to access USB keyboard and mouse, and fails.

   **Impact:** SOS cannot use USB keyboard and mouse in such case.

   **Workaround:** Unplug and plug-in the USB keyboard and mouse after exiting UOS.

-----

:acrn-issue:`2753` - UOS cannot resume after suspend by pressing power key
   UOS cannot resume after suspend by pressing power key

   **Impact:** UOS may fail to resume after suspend by pressing the power key.

   **Workaround:** None

-----

@@ -203,7 +203,7 @@ Known Issues

   **Impact:** Launching Zephyr RTOS as a real-time UOS takes too long

   **Workaround:** A different version of Grub is known to work correctly

-----
@@ -239,11 +239,11 @@ Known Issues

:acrn-issue:`3279` - AcrnGT causes display flicker in some situations.
   In the current scaler ownership assignment logic, there's an issue that when SOS disables a plane,
   it will disable the corresponding plane scalers; however, there's no scaler ownership checking there,
   so the scalers owned by UOS may be disabled by SOS by accident.

   **Impact:** AcrnGT causes display flicker in some situations

   **Workaround:** None

-----
@@ -398,7 +398,7 @@ release in May 2019 (click on the CommitID link to see details):

- :acrn-commit:`a3073175` - dm: e820: reserve memory range for EPC resource
- :acrn-commit:`7a915dc3` - hv: vmsr: present sgx related msr to guest
- :acrn-commit:`1724996b` - hv: vcpuid: present sgx capabilities to guest
- :acrn-commit:`65d43728` - hv: vm: build ept for sgx epc reource
- :acrn-commit:`65d43728` - hv: vm: build ept for sgx epc resource
- :acrn-commit:`c078f90d` - hv: vm_config: add epc info in vm config
- :acrn-commit:`245a7320` - hv: sgx: add basic support to init sgx resource for vm
- :acrn-commit:`c5cfd7c2` - vm state: reset vm state to VM_CREATED when reset_vm is called

@@ -410,7 +410,7 @@ release in May 2019 (click on the CommitID link to see details):

- :acrn-commit:`f2fe3547` - HV: remove mptable in vm_config
- :acrn-commit:`26c7e372` - Doc: Add tutorial about using VxWorks as uos
- :acrn-commit:`b10ad4b3` - DM USB: xHCI: refine the logic of CCS bit of PORTSC register
- :acrn-commit:`ae066689` - DM USB: xHCI: re-implement the emulation of extented capabilities
- :acrn-commit:`ae066689` - DM USB: xHCI: re-implement the emulation of extended capabilities
- :acrn-commit:`5f9cd253` - Revert "DM: Get max vcpu per vm from HV instead of hardcode"
- :acrn-commit:`8bca0b1a` - DM: remove unused function mptable_add_oemtbl
- :acrn-commit:`bd3f34e9` - DM: remove unused function vm_get_device_fd

@@ -466,14 +466,14 @@ release in May 2019 (click on the CommitID link to see details):

- :acrn-commit:`90f3ce44` - HV: remove unused UNDEFINED_VM
- :acrn-commit:`73cff9ef` - HV: predefine pci vbar's base address for pre-launched VMs in vm_config
- :acrn-commit:`4cdaa519` - HV: rename vdev_pt_cfgwrite_bar to vdev_pt_write_vbar and some misra-c fix
- :acrn-commit:`aba357dd` - 1. fix cpu family calculation 2. Modifie the parameter 'fl' order
- :acrn-commit:`aba357dd` - 1. fix cpu family calculation 2. Modify the parameter 'fl' order
- :acrn-commit:`238d8bba` - reshuffle init_vm_boot_info
- :acrn-commit:`0018da41` - HV: add missing @pre for some functions
- :acrn-commit:`b9578021` - HV: unify the sharing_mode_cfgwrite and partition_mode_cfgwrite code
- :acrn-commit:`7635a68f` - HV: unify the sharing_mode_cfgread and partition_mode_cfgread code
- :acrn-commit:`19af3bc8` - HV: unify the sharing_mode_vpci_deinit and partition_mode_vpci_deinit code
- :acrn-commit:`3a6c63f2` - HV: unify the sharing_mode_vpci_init and partition_mode_vpci_init code
- :acrn-commit:`f873b843` - HV: cosmetix fix for pci_pt.c
- :acrn-commit:`f873b843` - HV: cosmetic fix for pci_pt.c
- :acrn-commit:`cf48b9c3` - HV: use is_prelaunched_vm/is_hostbridge to check if the code is only for pre-launched VMs
- :acrn-commit:`a97e6e64` - HV: rename sharing_mode_find_vdev_sos to find_vdev_for_sos
- :acrn-commit:`32d1a9da` - HV: move bar emulation initialization code to pci_pt.c
@@ -52,8 +52,8 @@ defined **Usage Scenarios** in this release, including:

* :ref:`Introduction to Project ACRN <introduction>`
* :ref:`Build ACRN from Source <getting-started-building>`
* :ref:`Supported Hardware <hardware>`
* :ref:`Using Hybrid mode on NUC <using_hybrid_mode_on_nuc>`
* :ref:`Launch Two User VMs on NUC using SDC2 Scenario <using_sdc2_mode_on_nuc>`
* Using Hybrid mode on NUC (removed in v1.7)
* Launch Two User VMs on NUC using SDC2 Scenario (removed in v1.7)

New Features Details
********************
@@ -82,10 +82,10 @@ Fixed Issues Details

- :acrn-issue:`3281` - AcrnGT emulation thread causes high cpu usage when shadowing ppgtt
- :acrn-issue:`3283` - New scenario-based configurations lack documentation
- :acrn-issue:`3341` - Documentation on how to run Windows as a Guest (WaaG)
- :acrn-issue:`3370` - vm_console 2 cannot switch to VM2’s console in hybrid mode
- :acrn-issue:`3370` - vm_console 2 cannot switch to VM2's console in hybrid mode
- :acrn-issue:`3374` - Potential interrupt info overwrite in acrn_handle_pending_request
- :acrn-issue:`3379` - DM: Increase hugetlbfs MAX_PATH_LEN from 128 to 256
- :acrn-issue:`3392` - During run UnigenHeaven 3D gfx benchmark in WaaG, RTVM lantency is much long
- :acrn-issue:`3392` - During run UnigenHeaven 3D gfx benchmark in WaaG, RTVM latency is much long
- :acrn-issue:`3466` - Buffer overflow will happen in 'strncmp' when 'n_arg' is 0
- :acrn-issue:`3467` - Potential risk in virtioi_i2c.c & virtio_console.c
- :acrn-issue:`3469` - [APL NUC] Display goes black while booting; when only one display monitor is connected
@@ -102,22 +102,22 @@ Known Issues

   with vpci bar emulation, vpci needs to reinit the physical bar base address to a
   valid address if a device reset is detected.

   **Impact:** Fail to launch Clear Linux Preempt_RT VM with ``reset`` passthru parameter

   **Workaround:** Issue resolved on ACRN tag: ``acrn-2019w33.1-140000p``

-----

:acrn-issue:`3520` - bundle of "VGPU unconformance guest" messages observed for "gvt" in SOS console while using UOS
   After the need_force_wake is not removed in course of submitting VGPU workload,
   it will print a bundle of below messages while the User VM is started.

   | gvt: vgpu1 unconformance guest detected
   | gvt: vgpu1 unconformance mmio 0x2098:0xffffffff,0x0

   **Impact:** Messy and repetitive output from the monitor

   **Workaround:** Need to rebuild and apply the latest Service VM kernel from the ``acrn-kernel`` source code.

-----
@@ -131,35 +131,35 @@ Known Issues

   #) Reboot RTVM and then will restart the whole system
   #) After Service VM boot up, return to step 3

   **Impact:** Cold boot operation is not stable for NUC platform

   **Workaround:** Need to rebuild and apply the latest Service VM kernel from the ``acrn-kernel`` source code.

-----

:acrn-issue:`3576` - Expand default memory from 2G to 4G for WaaG

   **Impact:** More memory size is required from Windows VM

   **Workaround:** Issue resolved on ACRN tag: ``acrn-2019w33.1-140000p``

-----

:acrn-issue:`3609` - Sometimes fail to boot os while repeating the cold boot operation

   **Workaround:** Please refer to the PR information in this git issue

-----

:acrn-issue:`3610` - LaaG hang while run some workloads loop with zephyr idle

   **Workaround:** Revert commit ``bbb891728d82834ec450f6a61792f715f4ec3013`` from the kernel

-----

:acrn-issue:`3611` - OVMF launch UOS fail for Hybrid and industry scenario

   **Workaround:** Please refer to the PR information in this git issue

-----
@@ -237,16 +237,16 @@ release in June 2019 (click on the CommitID link to see details):

- :acrn-commit:`d0f7563d` - Corrected images and formatting
- :acrn-commit:`ce7a126f` - Added 3 SGX images
- :acrn-commit:`01504ecf` - Initial SGX Virt doc upload
- :acrn-commit:`a9c38a5c` - HV:Acrn-hypvervisor Root Directory Clean-up and create misc/ folder for Acrn daemons, services and tools.
- :acrn-commit:`a9c38a5c` - HV:Acrn-hypervisor Root Directory Clean-up and create misc/ folder for Acrn daemons, services and tools.
- :acrn-commit:`555a03db` - HV: add board specific cpu state table to support Px Cx
- :acrn-commit:`cd3b8ed7` - HV: fix MISRA violation of cpu state table
- :acrn-commit:`a092f400` - HV: make the functions void
- :acrn-commit:`d6bf0605` - HV: remove redundant function calling
- :acrn-commit:`c175141c` - dm: bugfix for remote launch guest issue
- :acrn-commit:`4a27d083` - hv: schedule: schedule to idel after SOS resume form S3
- :acrn-commit:`4a27d083` - hv: schedule: schedule to idle after SOS resume form S3
- :acrn-commit:`7b224567` - HV: Remove the mixed usage of inline assembly in wait_sync_change
- :acrn-commit:`baf7d90f` - HV: Refine the usage of monitor/mwait to avoid the possible lockup
- :acrn-commit:`11cf9a4a` - hv: mmu: add hpa2hva_early API for earlt boot
- :acrn-commit:`11cf9a4a` - hv: mmu: add hpa2hva_early API for early boot
- :acrn-commit:`40475e22` - hv: debug: use printf to debug on early boot
- :acrn-commit:`cc47dbe7` - hv: uart: enable early boot uart
- :acrn-commit:`3945bc4c` - dm: array bound and NULL pointer issue fix

@@ -255,7 +255,7 @@ release in June 2019 (click on the CommitID link to see details):

- :acrn-commit:`18ecdc12` - hv: uart: make uart base address more readable
- :acrn-commit:`49e60ae1` - hv: refine handler to 'rdpmc' vmexit
- :acrn-commit:`0887eecd` - doc: remove deprecated sos_bootargs
- :acrn-commit:`2e79501e` - doc:udpate using_partition_mode_on_nuc nuc7i7bnh to nuc7i7dnb
- :acrn-commit:`2e79501e` - doc:update using_partition_mode_on_nuc nuc7i7bnh to nuc7i7dnb
- :acrn-commit:`a7b6fc74` - HV: allow write 0 to MSR_IA32_MCG_STATUS
- :acrn-commit:`3cf1daa4` - HV: move vbar info to board specific pci_devices.h
- :acrn-commit:`ce4d71e0` - vpci: fix coding style issue
@@ -68,7 +68,7 @@ New Features Details

- :acrn-issue:`3497` - Inject exception for invalid vmcall
- :acrn-issue:`3498` - Return extended info in vCPUID leaf 0x40000001
- :acrn-issue:`2934` - Use virtual APIC IDs for Pre-launched VMs
- :acrn-issue:`3459` - dm: support VMs communication with virtio-console
- :acrn-issue:`3190` - DM: handle SIGPIPE signal

Fixed Issues Details
@@ -52,8 +52,8 @@ We recommend that all developers upgrade to this v1.4 release, which

addresses the following security issues that were discovered in previous releases:

Mitigation for Machine Check Error on Page Size Change
Improper invalidation for page table updates by a virtual guest operating system for multiple
Intel |reg| Processors may allow an authenticated user to potentially enable denial of service
of the host system via local access. A malicious guest kernel could trigger this issue, CVE-2018-12207.

AP Trampoline Is Accessible to the Service VM
@@ -152,7 +152,7 @@ Fixed Issues Details

- :acrn-issue:`3853` - [acrn-configuration-tool] Generated Launch script is incorrect when select audio&audio_codec for nuc7i7dnb with Scenario:SDC
- :acrn-issue:`3859` - VM-Manager: the return value of "strtol" is not validated properly
- :acrn-issue:`3863` - [acrn-configuration-tool]WebUI do not select audio&wifi devices by default for apl-mrb with LaunchSetting: sdc_launch_1uos_aaag
- :acrn-issue:`3879` - [acrn-configuration-tool]The “-k" parameter is unnecessary in launch_uos_id2.sh for RTVM.
- :acrn-issue:`3879` - [acrn-configuration-tool]The "-k" parameter is unnecessary in launch_uos_id2.sh for RTVM.
- :acrn-issue:`3880` - [acrn-configuration-tool]"--windows \" missing in launch_uos_id1.sh for waag.
- :acrn-issue:`3900` - [WHL][acrn-configuration-tool]Same bdf in generated whl-ipc-i5.xml.
- :acrn-issue:`3913` - [acrn-configuration-tool]WebUI do not give any prompt when generate launch_script for a new imported board
@@ -178,9 +178,9 @@ Known Issues

- :acrn-issue:`4042` - RTVM UOS result is invalid when run cpu2017 with 3 and 1 core.
- :acrn-issue:`4043` - Windows guest can not get normal IP after passthru Ethernet
- :acrn-issue:`4045` - Adding USB mediator in launch script, it takes a long time to start windows, about 13 minutes.
- :acrn-issue:`4046` - Error info popoup when run 3DMARK11 on Waag
- :acrn-issue:`4046` - Error info pop up when run 3DMARK11 on Waag
- :acrn-issue:`4047` - passthru usb, when WaaG boot at "windows boot manager" menu, the usb keyboard does not work.
- :acrn-issue:`4048` - Scalling the media player while playing a video, then the video playback is not smooth
- :acrn-issue:`4048` - Scaling the media player while playing a video, then the video playback is not smooth
- :acrn-issue:`4049` - Only slot-2 can work in "-s n,passthru,02/00/0 \" for RTVM, other slots are not functional

Change Log
@@ -217,7 +217,7 @@ release in Sep 2019 (click on the CommitID link to see details):

- :acrn-commit:`2d0739bf` - doc: fix error in building_from_source doc
- :acrn-commit:`3b977eef` - doc: clean up the docs in try using acrn table.
- :acrn-commit:`2a3178aa` - doc: Update Using Windows as Guest VM on ACRN
- :acrn-commit:`9bd274ae` - doc:modfiy ubuntu build on 18.04
- :acrn-commit:`9bd274ae` - doc:modify ubuntu build on 18.04
- :acrn-commit:`7d818c82` - doc: Stop using kconfig to make a customized efi.
- :acrn-commit:`67c64522` - dm: fix memory free issue for xhci
- :acrn-commit:`3fb1021d` - Doc: Minor grammatical edits on various files.
@@ -332,7 +332,7 @@ release in Sep 2019 (click on the CommitID link to see details):

- :acrn-commit:`048155d3` - hv: support minimum set of TLFS
- :acrn-commit:`009d835b` - acrn-config: modify board info of block device info
- :acrn-commit:`96dede43` - acrn-config: modify ipu/ipu_i2c device launch config of apl-up2
- :acrn-commit:`001c929d` - acrn-config: correct launch config info for audio/wifi defice of apl-mrb
- :acrn-commit:`001c929d` - acrn-config: correct launch config info for audio/wifi device of apl-mrb
- :acrn-commit:`2a647fa1` - acrn-config: define vm name for Preempt-RT Linux in launch script
- :acrn-commit:`a2430f13` - acrn-config: refine board name with undline_name api
- :acrn-commit:`95b9ba36` - acrn-config: acrn-config: add white list to skip item check
@@ -360,13 +360,13 @@ release in Sep 2019 (click on the CommitID link to see details):

- :acrn-commit:`d8deaa4b` - dm: close filepointer before exiting acrn_load_elf()
- :acrn-commit:`b5f77c07` - doc: add socket console backend for virtio-console
- :acrn-commit:`d3ac30c6` - hv: modify SOS i915 plane setting for hybrid scenario
- :acrn-commit:`c74a197c` - acrn-config: modify SOS i915 plane setting for hybird xmls
- :acrn-commit:`c74a197c` - acrn-config: modify SOS i915 plane setting for hybrid xmls
- :acrn-commit:`e1a2ed17` - hv: fix a bug that tpr threshold is not updated
- :acrn-commit:`afb3608b` - acrn-config: add confirmation for commit of generated source in config app
- :acrn-commit:`8eaee3b0` - acrn-config: add "enable_commit" parameter for config tool
- :acrn-commit:`780a53a1` - tools: acrn-crashlog: refine crash complete code
- :acrn-commit:`43b2327e` - dm: validation for input to public functions
- :acrn-commit:`477f8331` - dm: modify DIR handler reference postion
- :acrn-commit:`477f8331` - dm: modify DIR handler reference position
- :acrn-commit:`de157ab9` - hv: sched: remove runqueue from current schedule logic
- :acrn-commit:`837e4d87` - hv: sched: rename schedule related structs and vars
- :acrn-commit:`89f53a40` - acrn-config: supply optional passthrough device for vm
@@ -379,7 +379,7 @@ release in Sep 2019 (click on the CommitID link to see details):

- :acrn-commit:`44c11ce6` - acrn-config: fix the issue some select boxes disappear after edited
- :acrn-commit:`c7ecdf47` - Corrected number issue in GSG for ACRN Ind Scenario file
- :acrn-commit:`051a8e4a` - doc: update Oracle driver install
- :acrn-commit:`b73b0fc2` - doc: ioc: remove two unuse parts
- :acrn-commit:`b73b0fc2` - doc: ioc: remove two unused parts
- :acrn-commit:`6f7ba36e` - doc: move the "Building ACRN in Docker" user guide
- :acrn-commit:`1794d994` - doc: update doc generation tooling to only work within the $BUILDDIR
- :acrn-commit:`0dac373d` - hv: vpci: remove pci_msi_cap in pci_pdev
@@ -31,7 +31,7 @@ Version 1.5 major features

What's New in v1.5
==================

* Basic CPU sharing: Fairness Round-Robin CPU Scheduling has been added to support basic CPU sharing (the Service VM and WaaG share one CPU core).
* 8th Gen Intel® Core ™ Processors (code name Whiskey Lake) are now supported and validated.
* 8th Gen Intel® Core™ Processors (code name Whiskey Lake) are now supported and validated.
* Overall stability and performance have been improved.
* An offline configuration tool has been created to help developers port ACRN to different hardware boards.
@@ -42,7 +42,7 @@ Many new `reference documents <https://projectacrn.github.io>`_ are available, i

* :ref:`run-kata-containers`
* :ref:`hardware` (Addition of Whiskey Lake information)
* :ref:`cpu_sharing`
* :ref:`using_windows_as_uos` (Update to use ACRNGT GOP to install Windows)

Fixed Issues Details
********************
@@ -70,7 +70,7 @@ Fixed Issues Details

- :acrn-issue:`3993` - trampoline code in hypervisor potentially be accessible to service VM
- :acrn-issue:`4005` - [WHL][Function][WaaG]Fail to create WaaG image using ISO only on WHL
- :acrn-issue:`4007` - V1.3 E2E release binary failed to boot up on KBL NUC with 32G memory.
- :acrn-issue:`4010` - [Community][External]Bootning in blind mode
- :acrn-issue:`4010` - [Community][External]Booting in blind mode
- :acrn-issue:`4012` - Error formatting flag for hypcall_id
- :acrn-issue:`4020` - Refine print string format for 'uint64_t' type value in hypervisor
- :acrn-issue:`4043` - [WHL][Function][WaaG]windows guest can not get normal IP after passthru Ethernet
@@ -90,13 +90,13 @@ Fixed Issues Details

- :acrn-issue:`4135` - [Community][External]Invalid guest vCPUs (0) Ubuntu as SOS.
- :acrn-issue:`4139` - [Community][External]mngr_client_new: Failed to accept from fd 38
- :acrn-issue:`4143` - [acrn-configuration-tool] bus of DRHD scope devices is parsed incorrectly
- :acrn-issue:`4163` - [acrn-configuration-tool] not support: –s n,virtio-input
- :acrn-issue:`4163` - [acrn-configuration-tool] not support: -s n,virtio-input
- :acrn-issue:`4164` - [acrn-configuration-tool] not support: –s n,xhci,1-1:1-2:2-1:2-2
- :acrn-issue:`4164` - [acrn-configuration-tool] not support: -s n,xhci,1-1:1-2:2-1:2-2
- :acrn-issue:`4165` -[WHL][acrn-configuration-tool]Configure epc_section is incorrect
- :acrn-issue:`4172` - [acrn-configuration-tool] not support: –s n,virtio-blk, (/root/part.img---dd if=/dev/zero of=/root/part.img bs=1M count=10 all/part of img, one u-disk device, u-disk as rootfs and the n is special)
- :acrn-issue:`4172` - [acrn-configuration-tool] not support: -s n,virtio-blk, (/root/part.img---dd if=/dev/zero of=/root/part.img bs=1M count=10 all/part of img, one u-disk device, u-disk as rootfs and the n is special)
- :acrn-issue:`4173` - [acrn-configuartion-tool]acrn-config tool not support parse default pci mmcfg base
- :acrn-issue:`4173` - [acrn-configuration-tool]acrn-config tool not support parse default pci mmcfg base
- :acrn-issue:`4175` - acrntrace fixes and improvement
- :acrn-issue:`4185` - [acrn-configuration-tool] not support: –s n,virtio-net, (not set,error net, set 1 net, set multi-net, vhost net)
- :acrn-issue:`4185` - [acrn-configuration-tool] not support: -s n,virtio-net, (not set,error net, set 1 net, set multi-net, vhost net)
- :acrn-issue:`4211` - [kbl nuc] acrn failed to boot when generate hypervisor config source from config app with HT enabled in BIOS
- :acrn-issue:`4212` - [KBL][acrn-configuration-tool][WaaG+RTVM]Need support pm_channel&pm_by_vuart setting for Board:nuc7i7dnb+WaaG&RTVM
- :acrn-issue:`4227` - [ISD][Stability][WaaG][Regression] "Passmark8.0-Graphics3D-DirectX9Complex" test failed on WaaG due to driver error
@@ -104,7 +104,7 @@ Fixed Issues Details

- :acrn-issue:`4229` - Add range check in Kconfig.
- :acrn-issue:`4230` - Remove MAX_VCPUS_PER_VM in Kconfig
- :acrn-issue:`4232` - Set default KATA_VM_NUM to 1 for SDC
- :acrn-issue:`4247` - [acrn-configuration-tool] Generate Scenario for VM0 communites with VM1 is incorrect.
- :acrn-issue:`4247` - [acrn-configuration-tool] Generate Scenario for VM0 communities with VM1 is incorrect.
- :acrn-issue:`4249` - [acrn-configuration-tool]Generated Launchscript but WebUI prompt error msg after we just select passthru-devices:audio_codec
- :acrn-issue:`4255` - [acrn-configuration-tool][nuc7i7dnb][sdc]uos has no ip address
- :acrn-issue:`4260` - [Community][External]webcam switch between 2 UOS.
@@ -142,9 +142,9 @@ release in Nov 2019 (click on the CommitID link to view details):

- :acrn-commit:`29b7aff5` - HV: Use NMI-window exiting to address req missing issue
- :acrn-commit:`d26d8bec` - HV: Don't make NMI injection req when notifying vCPU
- :acrn-commit:`24c2c0ec` - HV: Use NMI to kick lapic-pt vCPU's thread
- :acrn-commit:`23422713` - acrn-config: add `tap\_` perfix for virtio-net
- :acrn-commit:`23422713` - acrn-config: add `tap\_` prefix for virtio-net
- :acrn-commit:`6383394b` - acrn-config: enable log_setting in all vm
- :acrn-commit:`0b44d64d` - acrn-config: check pass-thruogh device for audio/audio_codec
- :acrn-commit:`0b44d64d` - acrn-config: check pass-through device for audio/audio_codec
- :acrn-commit:`75ca1694` - acrn-config: correct vuart1 setting in scenario config
- :acrn-commit:`d52b45c1` - hv:fix crash issue when handling HC_NOTIFY_REQUEST_FINISH
- :acrn-commit:`78139b95` - HV: kconfig: add range check for memory setting
@@ -187,7 +187,7 @@ release in Nov 2019 (click on the CommitID link to view details):

- :acrn-commit:`b39630a8` - hv: sched_iorr: add tick handler and runqueue operations
- :acrn-commit:`f44aa4e4` - hv: sched_iorr: add init functions of sched_iorr
- :acrn-commit:`ed400863` - hv: sched_iorr: Add IO sensitive Round-robin scheduler
- :acrn-commit:`3c8d465a` - acrnboot: correct the calculation of the end boundry of _DYNAMIC region
- :acrn-commit:`3c8d465a` - acrnboot: correct the calculation of the end boundary of _DYNAMIC region
- :acrn-commit:`0bf03b41` - acrntrace: Set FLAG_CLEAR_BUF by default
- :acrn-commit:`9e9e1f61` - acrntrace: Add opt to specify the cpus where we should capture the data
- :acrn-commit:`366f4be4` - acrntrace: Use correct format for total run time
@@ -81,7 +81,7 @@ Fixed Issues Details
********************

- :acrn-issue:`3465` -[SIT][ISD] [AUTO]add reset in"-s 2,passthru,02/00/0 \", rtvm can not launch
- :acrn-issue:`3789` -[Security][apl_sdc_stable]DM:The return value of snprintf is improperly checked.
- :acrn-issue:`3886` -Lapic-pt vcpu notificaton issue
- :acrn-issue:`3886` -Lapic-pt vcpu notification issue
- :acrn-issue:`4032` -Modify License file.
- :acrn-issue:`4042` -[KBL][HV]RTVM UOS result is invalid when run cpu2017 with 3 and 1 core
- :acrn-issue:`4094` -Error parameter for intel_pstate in launch_hard_rt_vm.sh
@@ -92,8 +92,8 @@ Fixed Issues Details

- :acrn-issue:`4230` -Remove MAX_VCPUS_PER_VM in Kconfig
- :acrn-issue:`4253` -[WHL][Function][WaaG]Meet error log and waag can't boot up randomly after allocated 3 cores cpu to waag
- :acrn-issue:`4255` -[acrn-configuration-tool][nuc7i7dnb][sdc]uos has no ip address
- :acrn-issue:`4258` -[Community][External]cyclictest benchmark UOS geting high.
- :acrn-issue:`4258` -[Community][External]cyclictest benchmark UOS getting high.
- :acrn-issue:`4282` -ACRN-DM Pass-tru devices bars prefetchable property isn't consistent with physical bars
- :acrn-issue:`4282` -ACRN-DM Pass-thru devices bars prefetchable property isn't consistent with physical bars
- :acrn-issue:`4286` -[acrn-configuration-tool] Remove VM1.vcpu_affinity.pcuid=3 for VM1 in sdc scenario
- :acrn-issue:`4298` -[ConfigurationTool] mac address is not added to the launch script
- :acrn-issue:`4301` -[WHL][Hybrid] WHL need support Hybrid mode
@@ -102,13 +102,13 @@ Fixed Issues Details

- :acrn-issue:`4325` -Do not wait pcpus offline when lapic pt is disabled.
- :acrn-issue:`4402` -UEFI UP2 board boot APs failed with ACRN hypervisor
- :acrn-issue:`4419` -[WHL][hybrid] SOS can not poweroff & reboot in hybrid mode of WHL board (multiboot2)
- :acrn-issue:`4472` -[WHL][sdc2] HV launch fails with sdc2 screnario which support launching 3 Guest OS
- :acrn-issue:`4472` -[WHL][sdc2] HV launch fails with sdc2 scenario which support launching 3 Guest OS
- :acrn-issue:`4492` -[acrn-configuartion-tool] miss include head file from logical partition
- :acrn-issue:`4492` -[acrn-configuration-tool] miss include head file from logical partition
- :acrn-issue:`4495` -[acrn-configuration-tool] Missing passthru nvme parameter while using WebUI to generate RTVM launch script
Known Issues
************

- :acrn-issue:`4046` - [WHL][Function][WaaG] Error info popoup when run 3DMARK11 on Waag
- :acrn-issue:`4046` - [WHL][Function][WaaG] Error info pop up when run 3DMARK11 on Waag
- :acrn-issue:`4047` - [WHL][Function][WaaG] passthru usb, Windows will hang when reboot it
- :acrn-issue:`4313` - [WHL][VxWorks] Failed to ping when VxWorks passthru network
- :acrn-issue:`4520` - efi-stub could get wrong bootloader name
@@ -157,7 +157,7 @@ release in Dec 2019 (click the CommitID link to see details):

- :acrn-commit:`f78558a4` - dm: add one api for sending shutdown to life_mngr on SOS
- :acrn-commit:`8733abef` - dm:handle shutdown command from UOS
- :acrn-commit:`4fdc2be1` - dm:replace shutdown_uos_thread with a new one
- :acrn-commit:`7e9b7a8c` - dm:set pm-vuart attritutes
- :acrn-commit:`7e9b7a8c` - dm:set pm-vuart attributes
- :acrn-commit:`790614e9` - hv:rename several variables and api for ioapic
- :acrn-commit:`fa74bf40` - hv: vpci: pass through stolen memory and opregion memory for GVT-D
- :acrn-commit:`659e5420` - hv: add static check for CONFIG_HV_RAM_START and CONFIG_HV_RAM_SIZE
@@ -242,7 +242,7 @@ release in Dec 2019 (click the CommitID link to see details):

- :acrn-commit:`520a0222` - HV: re-arch boot component header
- :acrn-commit:`708cae7c` - HV: remove DBG_LEVEL_PARSE
- :acrn-commit:`a46a7b35` - Makefile: Fix build issue if the ld is updated to 2.34
- :acrn-commit:`ad606102` - hv: sched_bvt: add tick hanlder
- :acrn-commit:`ad606102` - hv: sched_bvt: add tick handler
- :acrn-commit:`77c64ecb` - hv: sched_bvt: add pick_next function
- :acrn-commit:`a38f2cc9` - hv: sched_bvt: add wakeup and sleep handler
- :acrn-commit:`e05eb42c` - hv: sched_bvt: add init and deinit function
@@ -251,7 +251,7 @@ release in Dec 2019 (click the CommitID link to see details):

- :acrn-commit:`4adad73c` - hv: mmio: refine mmio access handle lock granularity
- :acrn-commit:`fbe57d9f` - hv: vpci: restrict SOS access assigned PCI device
- :acrn-commit:`9d3d9c3d` - dm: vpci: restrict SOS access assigned PCI device
- :acrn-commit:`e8479f84` - hv: vPCI: remove passthrough PCI device unuse code
- :acrn-commit:`e8479f84` - hv: vPCI: remove passthrough PCI device unused code
- :acrn-commit:`9fa6eff3` - dm: vPCI: remove passthrough PCI device unused code
- :acrn-commit:`dafa3da6` - vPCI: split passthrough PCI device from DM to HV
- :acrn-commit:`aa38ed5b` - dm: vPCI: add assign/deassign PCI device IC APIs
@@ -280,7 +280,7 @@ release in Dec 2019 (click the CommitID link to see details):

- :acrn-commit:`88dfd8d4` - doc: update Kata and ACRN tutorial
- :acrn-commit:`e1eedc99` - Doc: Style updates to Building from Source doc
- :acrn-commit:`1f6c0cd4` - doc: update project's target max LOC
- :acrn-commit:`8f9e4c2d` - Updated grammer in ACRN industry scenario doc
- :acrn-commit:`8f9e4c2d` - Updated grammar in ACRN industry scenario doc
- :acrn-commit:`54e9b562` - doc: Modify CL version from 32030 to 31670
- :acrn-commit:`1b3754aa` - dm:passthrough opregion to uos gpu
- :acrn-commit:`4d882731` - dm:passthrough graphics stolen memory to uos gpu
@@ -322,7 +322,7 @@ release in Dec 2019 (click the CommitID link to see details):

- :acrn-commit:`4303ccb1` - hv: HLT emulation in hypervisor
- :acrn-commit:`a8f6bdd4` - hv: Add vlapic_has_pending_intr of apicv to check pending interrupts
- :acrn-commit:`e3c30336` - hv: vcpu: wait and signal vcpu event support
- :acrn-commit:`1f23fe3f` - hv: sched: simple event implemention
- :acrn-commit:`1f23fe3f` - hv: sched: simple event implementation
- :acrn-commit:`4115dd62` - hv: PAUSE-loop exiting support in hypervisor.
- :acrn-commit:`bfecf30f` - HV: do not offline pcpu when lapic pt disabled.
- :acrn-commit:`c59f12da` - doc: fix wrong Docker container image in tutorial.
@@ -351,7 +351,7 @@ release in Dec 2019 (click the CommitID link to see details):

- :acrn-commit:`58b3a058` - hv: vpci: rename pci_bar to pci_vbar.
- :acrn-commit:`d2089889` - hv: pci: minor fix of coding style about pci_read_cap.
- :acrn-commit:`cdf9d6b3` - (ia) devicemodel: refactor CMD_OPT_LAPIC_PT case branch.
- :acrn-commit:`77c3ce06` - acrn-config: remove uncessary split for `virtio-net`
- :acrn-commit:`77c3ce06` - acrn-config: remove unnecessary split for `virtio-net`
- :acrn-commit:`ce35a005` - acrn-config: add `cpu_sharing` support for launch config.
- :acrn-commit:`3544f7c8` - acrn-config: add `cpu_sharing` info in launch xmls.
- :acrn-commit:`57939730` - HV: search rsdp from e820 acpi reclaim region.
@@ -359,14 +359,14 @@ release in Dec 2019 (click the CommitID link to see details):

- :acrn-commit:`8f9cda18` - DOC: Content edits to CPU Sharing doc.
- :acrn-commit:`651510a8` - acrn-config: add `logger_setting` into launch script.
- :acrn-commit:`7f74e6e9` - acrn-config: refine mount device for virtio-blk.
- :acrn-commit:`fc357a77` - acrn-config: add `tap_` perfix for virtio-net.
- :acrn-commit:`fc357a77` - acrn-config: add `tap_` prefix for virtio-net.
- :acrn-commit:`5b6a33bb` - acrn-config: enable log_setting in all VMs.
- :acrn-commit:`d4bf019d` - Doc: Added Whiskey Lake specs to hardware ref page.
- :acrn-commit:`8a8438df` - remove no support OS parts and add whl build.
- :acrn-commit:`58b3a058` - hv: vpci: rename pci_bar to pci_vbar.
- :acrn-commit:`d2089889` - hv: pci: minor fix of coding style about pci_read_cap.
- :acrn-commit:`cdf9d6b3` - (ia) devicemodel: refactor CMD_OPT_LAPIC_PT case branch.
- :acrn-commit:`77c3ce06` - acrn-config: remove uncessary split for `virtio-net`
- :acrn-commit:`77c3ce06` - acrn-config: remove unnecessary split for `virtio-net`
- :acrn-commit:`ce35a005` - acrn-config: add `cpu_sharing` support for launch config.
- :acrn-commit:`3544f7c8` - acrn-config: add `cpu_sharing` info in launch xmls.
- :acrn-commit:`57939730` - HV: search rsdp from e820 acpi reclaim region.
@@ -374,9 +374,9 @@ release in Dec 2019 (click the CommitID link to see details):

- :acrn-commit:`8f9cda18` - DOC: Content edits to CPU Sharing doc.
- :acrn-commit:`651510a8` - acrn-config: add `logger_setting` into launch script.
- :acrn-commit:`7f74e6e9` - acrn-config: refine mount device for virtio-blk.
- :acrn-commit:`fc357a77` - acrn-config: add `tap_` perfix for virtio-net.
- :acrn-commit:`fc357a77` - acrn-config: add `tap_` prefix for virtio-net.
- :acrn-commit:`5b6a33bb` - acrn-config: enable log_setting in all VMs.
- :acrn-commit:`bb6e28e1` - acrn-config: check pass-thruogh device for audio/audio_codec.
- :acrn-commit:`bb6e28e1` - acrn-config: check pass-through device for audio/audio_codec.
- :acrn-commit:`4234d2e4` - acrn-config: correct vuart1 setting in scenario config.
- :acrn-commit:`d80a0dce` - acrn-config: fix a few formatting issues.
- :acrn-commit:`051f277c` - acrn-config: modify hpa start size value for logical_partition scenario.
@@ -8,3 +8,9 @@ Disallow: /0.5/

Disallow: /0.6/
Disallow: /0.7/
Disallow: /0.8/
Disallow: /1.0/
Disallow: /1.1/
Disallow: /1.2/
Disallow: /1.3/
Disallow: /1.4/
Disallow: /1.5/
@@ -3,3 +3,4 @@ sphinx==1.7.7

docutils==0.14
sphinx_rtd_theme==0.4.0
kconfiglib>=10.2
sphinx-tabs
38
doc/static/acrn-custom.css
vendored
@@ -2,7 +2,7 @@

/* make the page width fill the window */
.wy-nav-content {
    max-width: none;
    max-width: 1100px;
}
/* (temporarily) add an under development tagline to the bread crumb

@@ -256,3 +256,39 @@ kbd

    font-size: 4rem;
    color: #114B4F;
}

/* add a class for multi-column support
 * in docs to replace use of .hlist with
 * a .. rst-class:: rst-columns
 */

.rst-columns2 {
    column-width: 28em;
    column-fill: balance;
}
.rst-columns3, .rst-columns {
    column-width: 18em;
    column-fill: balance;
}
/* numbered "h2" steps */

body {
    counter-reset: step-count;
}

div.numbered-step h2::before {
    counter-increment: step-count;
    content: counter(step-count);
    background: #cccccc;
    border-radius: 0.8em;
    -moz-border-radius: 0.8em;
    -webkit-border-radius: 0.8em;
    color: #ffffff;
    display: inline-block;
    font-weight: bold;
    line-height: 1.6em;
    margin-right: 5px;
    text-align: center;
    width: 1.6em;
}
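
For reference, this ``numbered-step`` class is what the docs apply through an
``rst-class`` directive ahead of a section heading, as the Docker tutorial
later in this update does (the heading text here only illustrates the pattern):

.. code-block:: none

   .. rst-class:: numbered-step

   Install Docker
   **************
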
6
doc/static/acrn-custom.js
vendored
@@ -3,3 +3,9 @@

$(document).ready(function(){
    $( ".icon-home" ).attr("href", "https://projectacrn.org/");
});

/* Global site tag (gtag.js) - Google Analytics */
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-831873-64');
@@ -22,3 +22,5 @@ Follow these getting started guides to give ACRN a try:

   reference/hardware
   getting-started/building-from-source
   getting-started/rt_industry
   tutorials/using_hybrid_mode_on_nuc
   tutorials/using_partition_mode_on_nuc
@@ -1,9 +1,9 @@

.. _acrn-dm_qos:

Enable QoS based on runC Containers
###################################

This document describes how ACRN supports Device-Model Quality of Service (QoS)
based on using runC containers to control the Service VM resources
(CPU, Storage, Memory, Network) by modifying the runC configuration file.

What is QoS

@@ -28,7 +28,7 @@ to the `Open Container Initiative (OCI)

ACRN-DM QoS architecture
************************

In ACRN-DM QoS design, we run the ACRN-DM in a runC container environment.
Every time we start a User VM, we first start a runC container and
then launch the ACRN-DM within that container.
The ACRN-DM QoS can manage these resources for Device-Model:
@@ -108,7 +108,7 @@ How to use ACRN-DM QoS

.. note:: For configuration details, refer to the `Open Containers configuration documentation
   <https://github.com/opencontainers/runtime-spec/blob/master/config.md>`_.

#. Add the User VM by the ``acrnctl add`` command:

   .. code-block:: none

@@ -118,13 +118,13 @@ How to use ACRN-DM QoS

   <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/devicemodel/samples/nuc/launch_uos.sh>`_
   that supports the ``-C`` (``run_container`` function) option.

#. Start the User VM by ``acrnd``:

   .. code-block:: none

      # acrnd -t

#. After the User VM boots, you may use the ``runc list`` command to check the container status in the Service VM:

   .. code-block:: none
@@ -5,8 +5,9 @@ ACRN Configuration Tool

The ACRN configuration tool is designed for System Integrators / Tier 1s to
customize ACRN to meet their own needs. It consists of two tools, the
``Kconfig`` tool and the ``acrn-config`` tool. The latter allows users to
provision VMs via a web interface and configure the hypervisor from XML
files at build time.

Introduction
************
@@ -17,15 +18,14 @@ are discussed in the following sections.

Hypervisor configuration
========================

The hypervisor configuration defines a working scenario and target
board by configuring the hypervisor image features and capabilities such as
setting up the log and the serial port.

The hypervisor configuration uses the ``Kconfig`` mechanism. The configuration
file is located at ``acrn-hypervisor/hypervisor/arch/x86/Kconfig``.

A board-specific ``defconfig`` file, for example
``acrn-hypervisor/hypervisor/arch/x86/configs/$(BOARD).config``
is loaded first; it is the default ``Kconfig`` for the specified board.
@@ -36,7 +36,7 @@ The board configuration stores board-specific settings referenced by the

ACRN hypervisor. This includes **scenario-relevant** information such as
board settings, root device selection, and the kernel cmdline. It also includes
**scenario-irrelevant** hardware-specific information such as ACPI/PCI
and BDF information. The reference board configuration is organized as
``*.c/*.h`` files located in the
``acrn-hypervisor/hypervisor/arch/x86/configs/$(BOARD)/`` folder.

@@ -49,9 +49,9 @@ VMs on each user scenario. It also includes **launch script-based** VM

configuration information, where parameters are passed to the device model
to launch post-launched User VMs.

Scenario based VM configurations are organized as ``*.c/*.h`` files. The
reference scenarios are located in the
``acrn-hypervisor/hypervisor/scenarios/$(SCENARIO)/`` folder.

User VM launch script samples are located in the
``acrn-hypervisor/devicemodel/samples/`` folder.
@@ -100,40 +100,134 @@ and ``scenario`` attributes:

Additional scenario XML elements:

``hv``:
   Specify the global attributes for all VMs.

``RELEASE`` (a child node of ``DEBUG_OPTIONS``):
   Specify whether the final build is for Release or Debug.

``SERIAL_CONSOLE`` (a child node of ``DEBUG_OPTIONS``):
   Specify the host serial device used for hypervisor debugging.

``MEM_LOGLEVEL`` (a child node of ``DEBUG_OPTIONS``):
   Specify the default log level in memory.

``NPK_LOGLEVEL`` (a child node of ``DEBUG_OPTIONS``):
   Specify the default log level for the hypervisor NPK log.

``CONSOLE_LOGLEVEL`` (a child node of ``DEBUG_OPTIONS``):
   Specify the default log level on the serial console.

``LOG_DESTINATION`` (a child node of ``DEBUG_OPTIONS``):
   Specify the bitmap of consoles where logs are printed.

``LOG_BUF_SIZE`` (a child node of ``DEBUG_OPTIONS``):
   Specify the capacity of the log buffer for each physical CPU.

``RELOC`` (a child node of ``FEATURES``):
   Specify whether hypervisor image relocation is enabled on booting.

``SCHEDULER`` (a child node of ``FEATURES``):
   Specify the CPU scheduler used by the hypervisor.
   Supported schedulers are: ``SCHED_NOOP``, ``SCHED_BVT`` and ``SCHED_IORR``.

``MULTIBOOT2`` (a child node of ``FEATURES``):
   Specify whether the ACRN hypervisor image can be booted using the multiboot2 protocol.
   If not set, GRUB's multiboot2 is not available as a boot option.

``HYPERV_ENABLED`` (a child node of ``FEATURES``):
   Specify whether Hyper-V is enabled.

``IOMMU_ENFORCE_SNP`` (a child node of ``FEATURES``):
   Specify whether the IOMMU enforces snoop behavior of DMA operations.

``ACPI_PARSE_ENABLED`` (a child node of ``FEATURES``):
   Specify whether ACPI runtime parsing is enabled.

``L1D_VMENTRY_ENABLED`` (a child node of ``FEATURES``):
   Specify whether the L1 cache flush before VM entry is enabled.

``MCE_ON_PSC_DISABLED`` (a child node of ``FEATURES``):
   Specify whether the software workaround for Machine Check
   Error on Page Size Change is forcibly disabled.

``STACK_SIZE`` (a child node of ``MEMORY``):
   Specify the size of stacks used by physical cores. Each core uses one stack
   for normal operations and another three for specific exceptions.

``HV_RAM_SIZE`` (a child node of ``MEMORY``):
   Specify the size of the RAM region used by the hypervisor.

``LOW_RAM_SIZE`` (a child node of ``MEMORY``):
   Specify the size of the RAM region below address 0x10000, starting from address 0x0.

``SOS_RAM_SIZE`` (a child node of ``MEMORY``):
   Specify the size of the Service OS VM RAM region.

``UOS_RAM_SIZE`` (a child node of ``MEMORY``):
   Specify the size of the User OS VM RAM region.

``PLATFORM_RAM_SIZE`` (a child node of ``MEMORY``):
   Specify the size of the physical platform RAM region.

``IOMMU_BUS_NUM`` (a child node of ``CAPACITIES``):
   Specify the highest PCI bus ID used during IOMMU initialization.

``MAX_IR_ENTRIES`` (a child node of ``CAPACITIES``):
   Specify the maximum number of Interrupt Remapping Entries.

``MAX_IOAPIC_NUM`` (a child node of ``CAPACITIES``):
   Specify the maximum number of IO-APICs.

``MAX_PCI_DEV_NUM`` (a child node of ``CAPACITIES``):
   Specify the maximum number of PCI devices.

``MAX_IOAPIC_LINES`` (a child node of ``CAPACITIES``):
   Specify the maximum number of interrupt lines per IOAPIC.

``MAX_PT_IRQ_ENTRIES`` (a child node of ``CAPACITIES``):
   Specify the maximum number of interrupt sources for PT devices.

``MAX_MSIX_TABLE_NUM`` (a child node of ``CAPACITIES``):
   Specify the maximum number of MSI-X tables per device.

``MAX_EMULATED_MMIO`` (a child node of ``CAPACITIES``):
   Specify the maximum number of emulated MMIO regions.

``GPU_SBDF`` (a child node of ``MISC_CFG``):
   Specify the Segment, Bus, Device, and Function of the GPU.

``UEFI_OS_LOADER_NAME`` (a child node of ``MISC_CFG``):
   Specify the UEFI OS loader name.
``vm``:
   Specify the VM with VMID by its "id" attribute.

``load_order``:
   Specify the VM by its load order: ``PRE_LAUNCHED_VM``, ``SOS_VM`` or ``POST_LAUNCHED_VM``.

``vm_type``:
   Currently supported VM types are:

   - ``SAFETY_VM`` pre-launched Safety VM
   - ``PRE_STD_VM`` pre-launched Standard VM
   - ``SOS_VM`` pre-launched Service VM
   - ``POST_STD_VM`` post-launched Standard VM
   - ``POST_RT_VM`` post-launched realtime capable VM
   - ``KATA_VM`` post-launched Kata Container VM

``name`` (a child node of ``vm``):
   Specify the VM name shown in the hypervisor console command: vm_list.

``uuid``:
   UUID of the VM. It is for internal use and is not configurable.

``guest_flags``:
   Select all applicable flags for the VM:

   - ``GUEST_FLAG_SECURE_WORLD_ENABLED`` specify whether secure world is enabled
   - ``GUEST_FLAG_LAPIC_PASSTHROUGH`` specify whether LAPIC is passed through
   - ``GUEST_FLAG_IO_COMPLETION_POLLING`` specify whether the hypervisor needs
     IO polling to completion
   - ``GUEST_FLAG_HIDE_MTRR`` specify whether to hide MTRR from the VM
   - ``GUEST_FLAG_RT`` specify whether the VM is RT-VM (realtime)

``severity``:
   Severity of the guest VM; the lower severity VM should not impact the higher severity VM.
   The order of severity from high to low is:
   ``SEVERITY_SAFETY_VM``, ``SEVERITY_RTVM``, ``SEVERITY_SOS``, ``SEVERITY_STANDARD_VM``.

``cpu_affinity``:
   List of pCPUs: the guest VM is allowed to create vCPUs from all or a subset of this list.

``base`` (a child node of ``epc_section``):
   SGX EPC section base; must be page aligned.

@@ -158,10 +252,12 @@ Additional scenario XML elements:

   Currently supports ``KERNEL_BZIMAGE`` and ``KERNEL_ZEPHYR``.

``kern_mod`` (a child node of ``os_config``):
   The tag for the kernel image that acts as a multiboot module; it must
   exactly match the module tag in the GRUB multiboot cmdline.

``ramdisk_mod`` (a child node of ``os_config``):
   The tag for the ramdisk image which acts as a multiboot module; it
   must exactly match the module tag in the GRUB multiboot cmdline.

``bootargs`` (a child node of ``os_config``):
   For internal use and is not configurable. Specify the kernel boot arguments

@@ -188,26 +284,26 @@ Additional scenario XML elements:

   vCOM irq.

``target_vm_id`` (a child node of ``vuart1``):
   COM2 is used for VM communications. When it is enabled, specify which
   target VM the current VM connects to.

``target_uart_id`` (a child node of ``vuart1``):
   Target vUART ID that vCOM2 connects to.

``pci_dev_num``:
   PCI devices number of the VM; it is hard-coded for each scenario so it
   is not configurable for now.

``pci_devs``:
   PCI devices list of the VM; it is hard-coded for each scenario so it
   is not configurable for now.

``board_private``:
   Stores scenario-relevant board configuration.

``rootfs`` (a child node of ``board_private``):
   rootfs for the Linux kernel.

``console``:
   ttyS console for the Linux kernel.

``bootargs`` (a child node of ``board_private``):
   Specify kernel boot arguments.
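
To show how these elements nest, here is a minimal hand-written scenario XML
fragment. It is only a sketch: the element names come from the list above,
while the attribute values and the exact set of children shown are
illustrative, not a complete or validated configuration:

.. code-block:: none

   <acrn-config board="nuc7i7dnb" scenario="industry">
       <hv>
           <DEBUG_OPTIONS>
               <SERIAL_CONSOLE>/dev/ttyS0</SERIAL_CONSOLE>
               <CONSOLE_LOGLEVEL>3</CONSOLE_LOGLEVEL>
           </DEBUG_OPTIONS>
           <FEATURES>
               <RELOC>y</RELOC>
               <SCHEDULER>SCHED_BVT</SCHEDULER>
           </FEATURES>
       </hv>
       <vm id="0">
           <load_order>SOS_VM</load_order>
           <name>ACRN SOS VM</name>
           <vuart1>
               <target_vm_id>1</target_vm_id>
               <target_uart_id>1</target_uart_id>
           </vuart1>
       </vm>
   </acrn-config>
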
@@ -222,7 +318,8 @@ The launch XML has an ``acrn-config`` root element as well as

   <acrn-config board="BOARD" scenario="SCENARIO" uos_launcher="UOS_NUMBER">

Attributes of the ``uos_launcher`` specify the number of User VMs that the
current scenario has:

``uos``:
   Specify the User VM with its relative ID to Service VM by the "id" attribute.

@@ -238,26 +335,27 @@ Attributes of the ``uos_launcher`` specify the number of User VMs that the curre

   Specify the User VM memory size in Mbyte.

``gvt_args``:
   GVT arguments for the VM. Set it to ``gvtd`` for GVTd; otherwise it stands
   for GVTg arguments. The GVTg input format: ``low_gm_size high_gm_size fence_sz``.
   The recommendation is ``64 448 8``. Leave it blank to disable the GVT.

``vbootloader``:
   Virtual bootloader type; currently only supports OVMF.

``cpu_sharing``:
   Specify whether the pCPUs listed can be shared with other VMs.

``vuart0``:
   Specify whether the device model emulates the vUART0(vCOM1); refer to
   :ref:`vuart_config` for details. If set to ``Enable``, the vUART0 is
   emulated by the device model; if set to ``Disable``, the vUART0 is
   emulated by the hypervisor if it is configured in the scenario XML.

``poweroff_channel``:
   Specify whether the User VM power off channel is through the IOC,
   Powerbutton, or vUART.

``usb_xhci``:
   USB xHCI mediator configuration. Input format:
   ``bus#-port#[:bus#-port#: ...]``, e.g.: ``1-2:2-4``.
   Refer to :ref:`usb_virtualization` for details.

``passthrough_devices``:
   Select the passthrough device from the lspci list; currently we support:

@@ -274,7 +372,8 @@ Attributes of the ``uos_launcher`` specify the number of User VMs that the curre

``console`` (a child node of ``virtio_devices``):
   The virtio console device setting.
   Input format:
   ``[@]stdio|tty|pty|sock:portname[=portpath][,[@]stdio|tty|pty:portname[=portpath]]``.

.. note::
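
Tying the launch attributes together, a hypothetical launch XML fragment might
look like the following. The element names and the ``64 448 8`` and
``1-2:2-4`` sample values come from the descriptions above; the remaining
values are illustrative only:

.. code-block:: none

   <acrn-config board="BOARD" scenario="SCENARIO" uos_launcher="1">
       <uos id="1">
           <mem_size>2048</mem_size>
           <gvt_args>64 448 8</gvt_args>
           <vbootloader>ovmf</vbootloader>
           <vuart0>Enable</vuart0>
           <usb_xhci>1-2:2-4</usb_xhci>
       </uos>
   </acrn-config>
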
@@ -290,11 +389,12 @@ Configuration tool workflow

Hypervisor configuration workflow
=================================

The hypervisor configuration is based on the ``Kconfig``
mechanism. Begin by creating a board-specific ``defconfig`` file to
set up the default ``Kconfig`` values for the specified board.
Next, configure the hypervisor build options using the ``make
menuconfig`` graphical interface or ``make defconfig`` to generate
a ``.config`` file. The resulting ``.config`` file is
used by the ACRN build process to create a configured scenario- and
board-specific hypervisor image.
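
As a quick sketch of that flow (the board name is only an example, and the
exact make targets may differ between releases):

.. code-block:: none

   # run from the acrn-hypervisor/hypervisor folder
   $ make defconfig BOARD=nuc7i7dnb
   $ make menuconfig BOARD=nuc7i7dnb   # adjust options; writes .config
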
@@ -308,8 +408,8 @@ board-specific hypervisor image.

   menuconfig interface sample

Refer to :ref:`getting-started-hypervisor-configuration` for detailed
configuration steps.

.. _vm_config_workflow:
@@ -402,10 +502,14 @@ The ACRN configuration app is a web user interface application that performs the

- reads board info
- configures and validates scenario settings
- automatically generates source code for board-related configurations and
  scenario-based VM configurations
- configures and validates launch settings
- generates launch scripts for the specified post-launched User VMs.
- dynamically creates a new scenario setting and adds or deletes VM settings
  in scenario settings
- dynamically creates a new launch setting and adds or deletes User VM
  settings in launch settings

Prerequisites
=============
@@ -459,74 +563,127 @@ Instructions

#. Upload the board info you have generated from the ACRN config tool.

#. After board info is uploaded, you will see the board name from the
   Board info list. Select the board name to be configured.

   .. figure:: images/select_board_info.png
      :align: center

#. Load or create the scenario setting by selecting among the following:

   - Choose a scenario from the **Scenario Setting** menu which lists all
     user-defined scenarios for the board you selected in the previous step.

   - Click the **Create a new scenario** from the **Scenario Setting**
     menu to dynamically create a new scenario setting for the current board.

   - Click the **Load a default scenario** from the **Scenario Setting**
     menu, and then select one default scenario setting to load a default
     scenario setting for the current board.

   The default scenario configuration xmls are located at
   ``acrn-hypervisor/misc/acrn-config/xmls/config-xmls/[board]/``.
   We can edit the scenario name when creating or loading a scenario. If the
   current scenario name is duplicated with an existing scenario setting
   name, rename the current scenario name or overwrite the existing one
   after the confirmation message.

   .. figure:: images/choose_scenario.png
      :align: center

   Note that you can also use a customized scenario xml by clicking **Import
   XML**. The configuration app automatically directs to the new scenario
   xml once the import is complete.

#. The configurable items display after one scenario is created/loaded/
   selected. Following is an industry scenario:

   .. figure:: images/configure_scenario.png
      :align: center

   - You can edit these items directly in the text boxes, or you can choose
     single or even multiple items from the drop down list.

   - Read-only items are marked as grey.

   - Hover the mouse pointer over the item to display the description.

#. To dynamically add or delete VMs:

   - Click **Add a VM below** in one VM setting, and then select one VM type
     to add a new VM under the current VM.

   - Click **Remove this VM** in one VM setting to remove the current VM for
     the scenario setting.

   When one VM is added or removed in the scenario setting, the
   configuration app reassigns the VM IDs for the remaining VMs by the
   order of Pre-launched VMs, Service VMs, and Post-launched VMs.

   .. figure:: images/configure_vm_add.png
      :align: center

#. Click **Export XML** to save the scenario xml; you can rename it in the
   pop-up modal.

   .. note::
      All customized scenario xmls will be in user-defined groups which are
      located in ``acrn-hypervisor/misc/acrn-config/xmls/config-xmls/[board]/user_defined/``.

   Before saving the scenario xml, the configuration app validates the
   configurable items. If errors exist, the configuration app lists all
   incorrect configurable items and shows the errors as below:

   .. figure:: images/err_acrn_configuration.png
      :align: center

   After the scenario is saved, the page automatically directs to the saved
   scenario xmls. Delete the configured scenario by clicking **Export XML** -> **Remove**.
#. Click **Generate configuration files** to save the current scenario
   setting and then generate files for the board-related configuration
   source code and the scenario-based VM configuration source code.

   If **Source Path** in the pop-up modal is edited, the source code is
   generated into the edited Source Path relative to ``acrn-hypervisor``;
   otherwise, the source code is generated into the default folders and
   overwrites the old ones. The board-related configuration source
   code is located at
   ``acrn-hypervisor/hypervisor/arch/x86/configs/[board]/`` and the
   scenario-based VM configuration source code is located at
   ``acrn-hypervisor/hypervisor/scenarios/[scenario]/``.

The **Launch Setting** is quite similar to the **Scenario Setting**:

#. Upload board info or select one board as the current board.

#. Load or create one launch setting by selecting among the following:

   - Click **Create a new launch script** from the **Launch Setting** menu.

   - Click **Load a default launch script** from the **Launch Setting** menu.

   - Select one launch setting xml from the menu.

   - Import the local launch setting xml by clicking **Import XML**.

#. Select one scenario for the current launch setting from the **Select Scenario** drop down box.
#. Configure the items for the current launch setting.

#. To dynamically add or remove User VM (UOS) launch scripts:

   - Add a UOS launch script by clicking **Configure an UOS below** for the
     current launch setting.

   - Remove a UOS launch script by clicking **Remove this VM** for the
     current launch setting.

#. Save the current launch setting to the user-defined xml files by
   clicking **Export XML**. The configuration app validates the current
   configuration and lists all incorrect configurable items and shows errors.

#. Click **Generate Launch Script** to save the current launch setting and
   then generate the launch script.

   .. figure:: images/generate_launch_script.png
      :align: center
@@ -1,9 +1,12 @@

.. _acrn_ootb:

Install ACRN Out of the Box
###########################

In this tutorial, we will learn to generate an out-of-the-box (OOTB)
Service VM or a Preempt-RT VM image so that we can use ACRN or RTVM
immediately after installation without any configuration or
modification.

Set up a Build Environment
**************************
@@ -318,7 +321,7 @@ Step 3: Deploy the Service VM image
|
||||
The operation has completed successfully.
|
||||
|
||||
#. Follow these steps to create two partitions on the U disk.
|
||||
Keep 4GB in the first partition and leave free space in the second parition.
|
||||
Keep 4GB in the first partition and leave free space in the second partition.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
@@ -460,7 +463,7 @@ Step 3: Deploy the Service VM image
|
||||
|
||||
# dd if=/mnt/sos-industry.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc
|
||||
|
||||
.. note:: Given the large YAML size setting of over 100G, generating the SOS image and writing it to disk will take some time.
|
||||
.. note:: Given the large YAML size setting of over 100G, generating the Service VM image and writing it to disk will take some time.
|
||||
|
||||
#. Configure the EFI firmware to boot the ACRN hypervisor by default:
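   The exact command falls outside this hunk's context. A typical
   ``efibootmgr`` invocation for this step, assuming the ESP is the first
   partition of ``/dev/sda`` and the hypervisor image was copied to
   ``EFI/acrn/acrn.efi``, would look like:

   .. code-block:: none

      # efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 -L "ACRN"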
@@ -2,8 +2,8 @@

.. _agl-vms:

Running AGL as VMs
##################
Run two AGL images as User VMs
##############################

This document describes how to run two Automotive Grade Linux (AGL)
images as VMs on the ACRN hypervisor. This serves as the baseline for

@@ -67,9 +67,10 @@ The following hardware is used for demo development:

       <https://www.gorite.com/intel-nuc-dawson-canyon-edp-cable-4-lanes>`_
     - Other eDP pin cables work as well
   * - HDMI touch displays
     - `GeChic 1303I
       <https://www.gechic.com/en-portable-touch-monitor-onlap1303i-view.html>`_
     -
     - `GeChic portable touch monitor
       <https://www.gechic.com/en/touch-monitor>`_
     - Tested with 1303I (no longer available), but others such as 1102I should also
       work.
   * - Serial cable
     - `Serial DB9 header cable
       <https://www.gorite.com/serial-db9-header-cable-for-nuc-dawson-canyon>`_
@@ -1,10 +1,12 @@

.. _building-acrn-in-docker:

Building ACRN in Docker
#######################
Build ACRN in Docker
####################

This tutorial shows how to build ACRN in a Clear Linux Docker image.

.. rst-class:: numbered-step

Install Docker
**************

@@ -24,25 +26,45 @@ Install Docker

   choose not to, add `sudo` in front of every `docker` command in
   this tutorial.

.. rst-class:: numbered-step

Get the Docker Image
********************

This tutorial presents two ways to get the Clear Linux Docker image that's needed to build ACRN.
Pick one of these two ways to get the Clear Linux Docker image needed to build ACRN.

Get the Docker Image from Docker Hub
====================================

If you're not working behind a corporate proxy server, you can pull a
pre-built Docker image from Docker Hub to your development machine using
this command:

.. code-block:: none

   $ docker pull acrn/clearlinux-acrn-builder:latest

Build the Docker Image from Dockerfile
======================================

Alternatively, you can build your own local Docker image using the
provided Dockerfile build instructions by following these steps. You'll
need to do this if you're working behind a corporate proxy.

.. note::
   A known `issue <https://github.com/projectacrn/acrn-hypervisor/issues/4560>`_ exists while building the ACRN hypervisor. Refer to `Get the Docker Image from Docker Hub`_ as a temporary way to obtain the Docker Image for the v1.6 release.
   A known `issue
   <https://github.com/projectacrn/acrn-hypervisor/issues/4560>`_ exists
   while building the ACRN hypervisor. Refer to `Get the Docker Image from
   Docker Hub`_ as a temporary way to obtain the Docker Image for the v1.6
   release.

#. Download `Dockerfile <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/getting-started/Dockerfile>`_
   to your development machine.
#. Build the Docker image:

   .. code-block:: none

      $ docker build -t clearlinux-acrn-builder:latest -f <path/to/Dockerfile> .

   if you are behind an HTTP or HTTPS proxy server, use this command instead:
   If you are behind an HTTP proxy server, use this command,
   with your proxy settings, to let docker build know about the proxy
   configuration for the docker image:

   .. code-block:: none

@@ -50,15 +72,14 @@ Build the Docker Image from Dockerfile

      --build-arg HTTPS_PROXY=https://<proxy_host>:<proxy_port> \
      -t clearlinux-acrn-builder:latest -f <path/to/Dockerfile> .
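   For illustration, a complete proxy-aware build (the first line of the
   command sits outside this hunk; the proxy host and port shown here are
   placeholders) typically looks like:

   .. code-block:: none

      $ docker build --build-arg HTTP_PROXY=http://<proxy_host>:<proxy_port> \
          --build-arg HTTPS_PROXY=https://<proxy_host>:<proxy_port> \
          -t clearlinux-acrn-builder:latest -f <path/to/Dockerfile> .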
Otherwise, you can simply use this command:

Get the Docker Image from Docker Hub
====================================
   .. code-block:: none

As an alternative, you can pull a pre-built Docker image from Docker Hub to your development machine. Use this command:
      $ docker build -t clearlinux-acrn-builder:latest -f <path/to/Dockerfile> .

.. code-block:: none

   $ docker pull acrn/clearlinux-acrn-builder:latest
.. rst-class:: numbered-step

Build ACRN from Source in Docker
********************************

@@ -89,6 +110,8 @@ Build ACRN from Source in Docker

   The build artifacts are found in the `build` directory.
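   As a sketch of what the elided build step looks like (the exact mount
   paths and image tag depend on which image you chose above):

   .. code-block:: none

      $ docker run -u $(id -u):$(id -g) --rm -v $PWD:/workspace \
          acrn/clearlinux-acrn-builder:latest \
          bash -c "cd /workspace/acrn-hypervisor && make clean && make"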
.. rst-class:: numbered-step

Build the ACRN Service VM Kernel in Docker
******************************************

@@ -123,6 +146,8 @@ Build the ACRN Service VM Kernel in Docker

   The commands build the bootable kernel image as ``arch/x86/boot/bzImage``,
   and the loadable kernel modules under the ``./out/`` folder.

.. rst-class:: numbered-step

Build the ACRN User VM PREEMPT_RT Kernel in Docker
**************************************************

@@ -157,11 +182,14 @@ Build the ACRN User VM PREEMPT_RT Kernel in Docker

   The commands build the bootable kernel image as ``arch/x86/boot/bzImage``,
   and the loadable kernel modules under the ``./out/`` folder.

.. rst-class:: numbered-step

Build the ACRN documentation
****************************

#. Make sure you have both the ``acrn-hypervisor`` and ``acrn-kernel`` repositories already available in your workspace
   (see steps above for instructions on how to clone them).
#. Make sure you have both the ``acrn-hypervisor`` and ``acrn-kernel``
   repositories already available in your workspace (see steps above for
   instructions on how to clone them).

#. Build the ACRN documentation:

@@ -173,4 +201,3 @@ Build the ACRN documentation

      bash -c "cd acrn-hypervisor && make clean && make doc"

The HTML documentation can be found in ``acrn-hypervisor/build/doc/html``
@@ -1,15 +1,15 @@

.. _build UOS from Clearlinux:
.. _build User VM from Clearlinux:

Building UOS from Clear Linux OS
################################
Build a User VM from the Clear Linux OS
#######################################

This document builds on the :ref:`getting_started`,
and explains how to build UOS from Clear Linux OS.
This document builds on :ref:`getting_started`,
and explains how to build a User VM from Clear Linux OS.

Build UOS image in Clear Linux OS
*********************************
Build User VM image from Clear Linux OS
***************************************

Follow these steps to build a UOS image from Clear Linux OS:
Follow these steps to build a User VM image from the Clear Linux OS:

#. In Clear Linux OS, install ``ister`` (a template-based
   installer for Linux) included in the Clear Linux OS bundle

@@ -22,7 +22,7 @@ Follow these steps to build a UOS image from Clear Linux OS:

      $ sudo swupd bundle-add os-installer

#. After installation is complete, use ``ister.py`` to
   generate the image for UOS with the configuration in
   generate the image for a User VM with the configuration in
   ``uos-image.json``:

   .. code-block:: none

@@ -81,7 +81,7 @@ Follow these steps to build a UOS image from Clear Linux OS:

   ``"Version": "latest"`` for example.

   Here we will use ``"Version": 26550`` for example,
   and the UOS image called ``uos.img`` will be generated
   and the User VM image called ``uos.img`` will be generated
   after successful installation. An example output log is:

   .. code-block:: none

@@ -118,10 +118,10 @@ Follow these steps to build a UOS image from Clear Linux OS:

   Reboot Into Firmware Interface

Start the User OS (UOS)
***********************
Start the User VM
*****************

#. Mount the UOS image and check the UOS kernel:
#. Mount the User VM image and check the User VM kernel:

   .. code-block:: none

@@ -146,10 +146,10 @@ Start the User OS (UOS)

      -k /mnt/usr/lib/kernel/default-iot-lts2018 \

   .. note::
      UOS image ``uos.img`` is in the directory ``~/``
      and UOS kernel ``default-iot-lts2018`` is in ``/mnt/usr/lib/kernel/``.
      User VM image ``uos.img`` is in the directory ``~/``
      and User VM kernel ``default-iot-lts2018`` is in ``/mnt/usr/lib/kernel/``.

#. You are now all set to start the User OS (UOS):
#. You are now all set to start the User OS (User VM):

   .. code-block:: none
@@ -1,59 +1,51 @@

.. _cpu_sharing:

ACRN CPU Sharing
################
Enable CPU Sharing in ACRN
##########################

Introduction
************

The goal of CPU Sharing is to fully utilize the physical CPU resource to
support more virtual machines. Currently, ACRN only supports 1 to 1 mapping
mode between virtual CPUs (vCPUs) and physical CPUs (pCPUs). Because of the
lack of CPU sharing ability, the number of VMs is limited. To support CPU
Sharing, we have introduced a scheduling framework and implemented two simple
small scheduling algorithms to satisfy embedded device requirements. Note
that, CPU Sharing is not available for VMs with local APIC passthrough
(``--lapic_pt`` option).
support more virtual machines. Currently, ACRN only supports 1 to 1
mapping mode between virtual CPUs (vCPUs) and physical CPUs (pCPUs).
Because of the lack of CPU sharing ability, the number of VMs is
limited. To support CPU Sharing, we have introduced a scheduling
framework and implemented two simple small scheduling algorithms to
satisfy embedded device requirements. Note that, CPU Sharing is not
available for VMs with local APIC passthrough (``--lapic_pt`` option).

Scheduling Framework
********************

To satisfy the modularization design concept, the scheduling framework layer
isolates the vCPU layer and scheduler algorithm. It does not have a vCPU
concept so it is only aware of the thread object instance. The thread object
state machine is maintained in the framework. The framework abstracts the
scheduler algorithm object, so this architecture can easily extend to new
scheduler algorithms.
To satisfy the modularization design concept, the scheduling framework
layer isolates the vCPU layer and scheduler algorithm. It does not have
a vCPU concept so it is only aware of the thread object instance. The
thread object state machine is maintained in the framework. The
framework abstracts the scheduler algorithm object, so this architecture
can easily extend to new scheduler algorithms.

.. figure:: images/cpu_sharing_framework.png
   :align: center

The below diagram shows that the vCPU layer invokes APIs provided by scheduling
framework for vCPU scheduling. The scheduling framework also provides some APIs
for schedulers. The scheduler mainly implements some callbacks in an
``acrn_scheduler`` instance for scheduling framework. Scheduling initialization
is invoked in the hardware management layer.
The below diagram shows that the vCPU layer invokes APIs provided by
scheduling framework for vCPU scheduling. The scheduling framework also
provides some APIs for schedulers. The scheduler mainly implements some
callbacks in an ``acrn_scheduler`` instance for scheduling framework.
Scheduling initialization is invoked in the hardware management layer.

.. figure:: images/cpu_sharing_api.png
   :align: center

CPU affinity
vCPU affinity
*************

Currently, we do not support vCPU migration; the assignment of vCPU mapping to
pCPU is fixed at the time the VM is launched. The statically configured
cpu_affinity_bitmap in the VM configuration defines a superset of pCPUs that
the VM is allowed to run on. One bit in this bitmap indicates that one pCPU
could be assigned to this VM, and the bit number is the pCPU ID. A pre-launched
VM is supposed to be launched on exact number of pCPUs that are assigned in
this bitmap. and the vCPU to pCPU mapping is implicitly indicated: vCPU0 maps
to the pCPU with lowest pCPU ID, vCPU1 maps to the second lowest pCPU ID, and
so on.
Currently, we do not support vCPU migration; the assignment of vCPU
mapping to pCPU is statically configured by acrn-dm through
``--cpu_affinity``. Use these rules to configure the vCPU affinity:

For post-launched VMs, acrn-dm could choose to launch a subset of pCPUs that
are defined in cpu_affinity_bitmap by specifying the assigned pCPUs
(``--cpu_affinity`` option). But it can't assign any pCPUs that are not
included in the VM's cpu_affinity_bitmap.
- Only one bit can be set for each affinity item of vCPU.
- vCPUs in the same VM cannot be assigned to the same pCPU.

Here is an example for affinity:
@@ -72,47 +64,59 @@ The thread object contains three states: RUNNING, RUNNABLE, and BLOCKED.

.. figure:: images/cpu_sharing_state.png
   :align: center

After a new vCPU is created, the corresponding thread object is initiated.
The vCPU layer invokes a wakeup operation. After wakeup, the state for the
new thread object is set to RUNNABLE, and then follows its algorithm to
determine whether or not to preempt the current running thread object. If
yes, it turns to the RUNNING state. In RUNNING state, the thread object may
turn back to the RUNNABLE state when it runs out of its timeslice, or it
might yield the pCPU by itself, or be preempted. The thread object under
RUNNING state may trigger sleep to transfer to BLOCKED state.
After a new vCPU is created, the corresponding thread object is
initiated. The vCPU layer invokes a wakeup operation. After wakeup, the
state for the new thread object is set to RUNNABLE, and then follows its
algorithm to determine whether or not to preempt the current running
thread object. If yes, it turns to the RUNNING state. In RUNNING state,
the thread object may turn back to the RUNNABLE state when it runs out
of its timeslice, or it might yield the pCPU by itself, or be preempted.
The thread object under RUNNING state may trigger sleep to transfer to
BLOCKED state.

Scheduler
*********

The below block diagram shows the basic concept for the scheduler. There are
two kinds of scheduler in the diagram: NOOP (No-Operation) scheduler and IORR
(IO sensitive Round-Robin) scheduler.
The below block diagram shows the basic concept for the scheduler. There
are two kinds of schedulers in the diagram: NOOP (No-Operation) scheduler
and BVT (Borrowed Virtual Time) scheduler.

- **No-Operation scheduler**:

  The NOOP (No-operation) scheduler has the same policy as the original 1-1
  mapping previously used; every pCPU can run only two thread objects: one is
  the idle thread, and another is the thread of the assigned vCPU. With this
  scheduler, vCPU works in Work-Conserving mode, which always try to keep
  resource busy, and will run once it is ready. Idle thread can run when the
  vCPU thread is blocked.
  The NOOP (No-operation) scheduler has the same policy as the original
  1-1 mapping previously used; every pCPU can run only two thread objects:
  one is the idle thread, and another is the thread of the assigned vCPU.
  With this scheduler, vCPU works in Work-Conserving mode, which always
  try to keep resource busy, and will run once it is ready. Idle thread
  can run when the vCPU thread is blocked.

- **IO sensitive round-robin scheduler**:
- **Borrowed Virtual Time scheduler**:

  The IORR (IO sensitive round-robin) scheduler is implemented with the per-pCPU
  runqueue and the per-pCPU tick timer; it supports more than one vCPU running
  on a pCPU. It basically schedules thread objects in a round-robin policy and
  supports preemption by timeslice counting.
  BVT (Borrowed Virtual time) is a virtual time based scheduling
  algorithm; it dispatches the runnable thread with the earliest
  effective virtual time.

  TODO: BVT scheduler will be built on top of prioritized scheduling
  mechanism, i.e. higher priority threads get scheduled first, and same
  priority tasks are scheduled per BVT.

  - **Virtual time**: The thread with the earliest effective virtual
    time (EVT) is dispatched first.
  - **Warp**: a latency-sensitive thread is allowed to warp back in
    virtual time to make it appear earlier. It borrows virtual time from
    its future CPU allocation and thus does not disrupt long-term CPU
    sharing.
  - **MCU**: minimum charging unit; the scheduler accounts for running time
    in units of MCU.
  - **Weighted fair sharing**: each runnable thread receives a share of
    the processor in proportion to its weight over a scheduling
    window of some number of MCU.
  - **C**: context switch allowance. Real time by which the current
    thread is allowed to advance beyond another runnable thread with
    equal claim on the CPU. C is similar to the quantum in conventional
    timesharing.
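To make weighted fair sharing concrete, here is a small worked example
(illustrative arithmetic only, not ACRN output): two threads with weights
2 and 1, an MCU of 1 ms, and no warp.

.. code-block:: none

   running thread i for one MCU advances its virtual time by MCU / weight(i)
   the scheduler always dispatches the runnable thread with the smallest EVT

   t = 0 ms : A1 = 0.0  A2 = 0.0   -> run thread 1 (tie; pick either)
   t = 1 ms : A1 = 0.5  A2 = 0.0   -> run thread 2
   t = 2 ms : A1 = 0.5  A2 = 1.0   -> run thread 1
   t = 3 ms : A1 = 1.0  A2 = 1.0   -> repeat

   Over each 3 ms window, thread 1 gets about 2/3 of the pCPU and thread 2
   about 1/3, i.e. shares proportional to their weights.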
- Every thread object has an initial timeslice (ex: 10ms)
- The timeslice is consumed with time and be counted in the context switch
  and tick handler
- If the timeslice is positive or zero, then switch out the current thread
  object and put it to tail of runqueue. Then, pick the next runnable one
  from runqueue to run.
- Threads with an IO request will preempt current running threads on the
  same pCPU.

Scheduler configuration
***********************

@@ -122,180 +126,58 @@ Two places in the code decide the usage for the scheduler.

* The option in Kconfig decides the only scheduler used in runtime.
  ``hypervisor/arch/x86/Kconfig``

  .. literalinclude:: ../../../../hypervisor/arch/x86/Kconfig
     :name: Kconfig for Scheduler
     :caption: Kconfig for Scheduler
     :linenos:
     :lines: 25-52
     :emphasize-lines: 3
     :language: c
  .. code-block:: none

The default scheduler is **SCHED_NOOP**. To use the IORR, change it to
**SCHED_IORR** in the **ACRN Scheduler**.
     config SCHED_BVT
        bool "BVT scheduler"
        help
          BVT (Borrowed Virtual time) is virtual time based scheduling algorithm. It
          dispatches the runnable thread with the earliest effective virtual time.
          TODO: BVT scheduler will be built on top of prioritized scheduling mechanism,
          i.e. higher priority threads get scheduled first, and same priority tasks are
          scheduled per BVT.

* The VM CPU affinities are defined in ``hypervisor/scenarios/<scenario_name>/vm_configurations.h``
  The default scheduler is **SCHED_NOOP**. To use the BVT, change it to
  **SCHED_BVT** in the **ACRN Scheduler**.

  .. literalinclude:: ../../../..//hypervisor/scenarios/industry/vm_configurations.h
     :name: Affinity for VMs
     :caption: Affinity for VMs
     :linenos:
     :lines: 39-45
     :language: c
* The cpu_affinity is configured by the acrn-dm command.

  For example, assign physical CPUs (pCPUs) 1 and 3 to this VM using::

     --cpu_affinity 1,3
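  In a launch script this typically surfaces as one more ``acrn-dm``
  argument; a sketch (all other arguments elided, and the memory size is
  a placeholder):

  .. code-block:: none

     acrn-dm -A -m 2048M -s 0:0,hostbridge \
        --cpu_affinity 1,3 \
        ...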
* vCPU number corresponding to affinity is set in ``hypervisor/scenarios/<scenario_name>/vm_configurations.c`` by the **vcpu_num**

Example
*******

To support below configuration in industry scenario:
Use the following settings to support this configuration in the industry scenario:

+----------+-------+-------+--------+
|pCPU0     |pCPU1  |pCPU2  |pCPU3   |
+==========+=======+=======+========+
|SOS WaaG  |RT Linux       |vxWorks |
+----------+---------------+--------+
+---------+-------+-------+-------+
|pCPU0    |pCPU1  |pCPU2  |pCPU3  |
+=========+=======+=======+=======+
|SOS + WaaG       |RT Linux       |
+-----------------+---------------+

Change the following three files:
- offline pcpu2-3 in SOS.

1. ``hypervisor/arch/x86/Kconfig``
- launch guests.

   .. code-block:: none
- launch WaaG with "--cpu_affinity=0,1"
- launch RT with "--cpu_affinity=2,3"
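As a sketch, offlining pCPU 2 and 3 from the Service VM uses the standard
Linux CPU hotplug interface (run as root in the Service VM before
launching the guests):

.. code-block:: none

   # echo 0 > /sys/devices/system/cpu/cpu2/online
   # echo 0 > /sys/devices/system/cpu/cpu3/online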
   choice
      prompt "ACRN Scheduler"
-default SCHED_NOOP
+default SCHED_IORR
      help
        Select the CPU scheduler to be used by the hypervisor

2. ``hypervisor/scenarios/industry/vm_configurations.h``
After you start all VMs, check the vCPU affinities from the Hypervisor
console with the ``vcpu_list`` command:

   .. code-block:: none
.. code-block:: console

      #define CONFIG_MAX_VM_NUM (4U)
   ACRN:\>vcpu_list

      #define DM_OWNED_GUEST_FLAG_MASK (GUEST_FLAG_SECURE_WORLD_ENABLED | GUEST_FLAG_LAPIC_PASSTHROUGH | \
                  GUEST_FLAG_RT | GUEST_FLAG_IO_COMPLETION_POLLING)

      #define SOS_VM_BOOTARGS SOS_ROOTFS \
         "rw rootwait " \
         "console=tty0 " \
         SOS_CONSOLE \
         "consoleblank=0 " \
         "no_timer_check " \
         "quiet loglevel=3 " \
         "i915.nuclear_pageflip=1 " \
         "i915.avail_planes_per_pipe=0x01010F " \
         "i915.domain_plane_owners=0x011111110000 " \
         "i915.enable_gvt=1 " \
         SOS_BOOTARGS_DIFF

      #define VM1_CONFIG_CPU_AFFINITY (AFFINITY_CPU(0U))
      #define VM2_CONFIG_CPU_AFFINITY (AFFINITY_CPU(1U) | AFFINITY_CPU(2U))
      #define VM3_CONFIG_CPU_AFFINITY (AFFINITY_CPU(3U))

3. ``hypervisor/scenarios/industry/vm_configurations.c``

   .. code-block:: none
      struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] = {
         {
            .load_order = SOS_VM,
            .name = "ACRN SOS VM",
            .uuid = {0xdbU, 0xbbU, 0xd4U, 0x34U, 0x7aU, 0x57U, 0x42U, 0x16U, \
                     0xa1U, 0x2cU, 0x22U, 0x01U, 0xf1U, 0xabU, 0x02U, 0x40U},
            .guest_flags = 0UL,
            .clos = 0U,
            .memory = {
               .start_hpa = 0UL,
               .size = CONFIG_SOS_RAM_SIZE,
            },
            .os_config = {
               .name = "ACRN Service OS",
               .kernel_type = KERNEL_BZIMAGE,
               .kernel_mod_tag = "Linux_bzImage",
               .bootargs = SOS_VM_BOOTARGS
            },
            .vuart[0] = {
               .type = VUART_LEGACY_PIO,
               .addr.port_base = SOS_COM1_BASE,
               .irq = SOS_COM1_IRQ,
            },
            .vuart[1] = {
               .type = VUART_LEGACY_PIO,
               .addr.port_base = SOS_COM2_BASE,
               .irq = SOS_COM2_IRQ,
               .t_vuart.vm_id = 2U,
               .t_vuart.vuart_id = 1U,
            },
            .pci_dev_num = SOS_EMULATED_PCI_DEV_NUM,
            .pci_devs = sos_pci_devs,
         },
         {
            .load_order = POST_LAUNCHED_VM,
            .uuid = {0xd2U, 0x79U, 0x54U, 0x38U, 0x25U, 0xd6U, 0x11U, 0xe8U, \
                     0x86U, 0x4eU, 0xcbU, 0x7aU, 0x18U, 0xb3U, 0x46U, 0x43U},
            .cpu_affinity_bitmap = VM1_CONFIG_CPU_AFFINITY,
            .vuart[0] = {
               .type = VUART_LEGACY_PIO,
               .addr.port_base = COM1_BASE,
               .irq = COM1_IRQ,
            },
            .vuart[1] = {
               .type = VUART_LEGACY_PIO,
               .addr.port_base = INVALID_COM_BASE,
            }
         },
         {
            .load_order = POST_LAUNCHED_VM,
            .uuid = {0x49U, 0x5aU, 0xe2U, 0xe5U, 0x26U, 0x03U, 0x4dU, 0x64U, \
                     0xafU, 0x76U, 0xd4U, 0xbcU, 0x5aU, 0x8eU, 0xc0U, 0xe5U},
            .guest_flags = GUEST_FLAG_HIGHEST_SEVERITY,
            .cpu_affinity_bitmap = VM2_CONFIG_CPU_AFFINITY,
            .vuart[0] = {
               .type = VUART_LEGACY_PIO,
               .addr.port_base = COM1_BASE,
               .irq = COM1_IRQ,
            },
            .vuart[1] = {
               .type = VUART_LEGACY_PIO,
               .addr.port_base = COM2_BASE,
               .irq = COM2_IRQ,
               .t_vuart.vm_id = 0U,
               .t_vuart.vuart_id = 1U,
            },
         },
         {
            .load_order = POST_LAUNCHED_VM,
            .uuid = {0x38U, 0x15U, 0x88U, 0x21U, 0x52U, 0x08U, 0x40U, 0x05U, \
                     0xb7U, 0x2aU, 0x8aU, 0x60U, 0x9eU, 0x41U, 0x90U, 0xd0U},
            .cpu_affinity_bitmap = VM3_CONFIG_CPU_AFFINITY,
            .vuart[0] = {
               .type = VUART_LEGACY_PIO,
               .addr.port_base = COM1_BASE,
               .irq = COM1_IRQ,
            },
            .vuart[1] = {
               .type = VUART_LEGACY_PIO,
               .addr.port_base = INVALID_COM_BASE,
            }
         },
      };
After you start all VMs, check the vCPU affinities from the Hypervisor console:

.. code-block:: none

   ACRN:\>vcpu_list

   VM ID    PCPU ID    VCPU ID    VCPU ROLE    VCPU STATE
   =====    =======    =======    =========    ==========
     0         0          0        PRIMARY      Running
     1         0          0        PRIMARY      Running
     2         1          0        PRIMARY      Running
     2         2          1        SECONDARY    Running
     3         3          0        PRIMARY      Running
   VM ID    PCPU ID    VCPU ID    VCPU ROLE    VCPU STATE    THREAD STATE
   =====    =======    =======    =========    ==========    ==========
     0         0          0        PRIMARY      Running       BLOCKED
     0         1          0        SECONDARY    Running       BLOCKED
     1         0          0        PRIMARY      Running       RUNNING
     1         1          0        SECONDARY    Running       RUNNING
     2         2          0        PRIMARY      Running       RUNNING
     2         3          1        SECONDARY    Running       RUNNING
@@ -21,7 +21,7 @@ An example

As an example, we'll show how to obtain the interrupts of a pass-through USB device.

First, we can get the USB controller BDF number (0:15.0) through the
following command in the SOS console::
following command in the Service VM console::

   lspci | grep "USB controller"

@@ -110,7 +110,7 @@ Then we use the command, on the ACRN console::

   vm_console

to switch to the SOS console. Then we use the command::
to switch to the Service VM console. Then we use the command::

   cat /tmp/acrnlog/acrnlog_cur.0

@@ -125,7 +125,7 @@ and we will see the following log:

ACRN Trace
**********

ACRN trace is a tool running on the Service OS (SOS) to capture trace
ACRN trace is a tool running on the Service VM to capture trace
data. We can use the existing trace information to analyze, and we can
add self-defined tracing to analyze code which we care about.

@@ -135,7 +135,7 @@ Using Existing trace event id to analyze trace

As an example, we can use the existing vm_exit trace to analyze the
reason and times of each vm_exit after we have done some operations.

1. Run the following SOS console command to collect
1. Run the following Service VM console command to collect
   trace data::

      # acrntrace -c

@@ -208,7 +208,7 @@ shown in the following example:

   :ref:`getting-started-building` and :ref:`kbl-nuc-sdc` for
   detailed instructions on how to do that.

5. Now we can use the following command in the SOS console
5. Now we can use the following command in the Service VM console
   to generate acrntrace data into the current directory::

      acrntrace -c

@@ -72,7 +72,7 @@ folder setup for documentation contributions and generation:

The parent projectacrn folder is there because we'll also be creating a
publishing area later in these steps. For API doc generation, we'll also
need the acrn-kernel repo contents in a sibling folder to the
acrn-hypervisor repo contents.
acrn-hypervisor repo contents.

It's best if the acrn-hypervisor
folder is an ssh clone of your personal fork of the upstream project
@@ -1,7 +1,7 @@

.. _enable_laag_secure_boot:

Secure Boot enabling for Clear Linux User VM
############################################
Enable Secure Boot in the Clear Linux User VM
#############################################

Prerequisites
*************
@@ -1,20 +1,22 @@

.. _enable-s5:

Platform S5 Enable Guide
########################
Enable S5 in ACRN
#################

Introduction
************

S5 is one of the `ACPI sleep states <http://acpi.sourceforge.net/documentation/sleep.html>`_ that refers to the system being shut down (although some power may still be supplied to
certain devices). In this document, S5 means the function to shut down the
**User VMs**, **the Service VM**, the hypervisor, and the hardware. In most cases,
directly shutting down the power of a computer system is not advisable because it can
damage some components. It can cause corruption and put the system in an unknown or
unstable state. On ACRN, the User VM must be shut down before powering off the Service VM.
Especially for some use cases, where User VMs could be used in industrial control or other
high safety requirement environment, a graceful system shutdown such as the ACRN S5
function is required.
S5 is one of the `ACPI sleep states <http://acpi.sourceforge.net/documentation/sleep.html>`_
that refers to the system being shut down (although some power may still be
supplied to certain devices). In this document, S5 means the function to
shut down the **User VMs**, **the Service VM**, the hypervisor, and the
hardware. In most cases, directly shutting down the power of a computer
system is not advisable because it can damage some components. It can cause
corruption and put the system in an unknown or unstable state. On ACRN, the
User VM must be shut down before powering off the Service VM. Especially for
some use cases, where User VMs could be used in industrial control or other
high safety requirement environment, a graceful system shutdown such as the
ACRN S5 function is required.

S5 Architecture
***************

@@ -30,14 +32,16 @@ The diagram below shows the overall architecture:

- **Scenario I**:

  The User VM's serial port device (``ttySn``) is emulated in the Device Model, the channel from the Service VM to the User VM:
  The User VM's serial port device (``ttySn``) is emulated in the
  Device Model, the channel from the Service VM to the User VM:

  .. graphviz:: images/s5-scenario-1.dot
     :name: s5-scenario-1

- **Scenario II**:

  The User VM's (like RT-Linux or other RT-VMs) serial port device (``ttySn``) is emulated in the Hypervisor,
  The User VM's (like RT-Linux or other RT-VMs) serial port device
  (``ttySn``) is emulated in the Hypervisor,
  the channel from the Service OS to the User VM:

  .. graphviz:: images/s5-scenario-2.dot

@@ -92,7 +96,7 @@ The procedure for enabling S5 is specific to the particular OS:

.. note:: For RT-Linux, the vUART is emulated in the hypervisor; expose the node as ``/dev/ttySn``.

#. For LaaG and RT-Linux VMs, run the life-cycle manager deamon:
#. For LaaG and RT-Linux VMs, run the life-cycle manager daemon:

   a. Use these commands to build the life-cycle manager daemon, ``life_mngr``.

@@ -116,7 +120,7 @@ The procedure for enabling S5 is specific to the particular OS:

      # systemctl enable life_mngr.service
      # reboot

#. For the WaaG VM, run the life-cycle manager deamon:
#. For the WaaG VM, run the life-cycle manager daemon:

   a) Build the ``life_mngr_win.exe`` application::

@@ -181,12 +185,12 @@ How to test

   .. code-block:: console

      ● life_mngr.service - ACRN lifemngr daemon
      * life_mngr.service - ACRN lifemngr daemon
        Loaded: loaded (/usr/lib/systemd/system/life_mngr.service; enabled; vendor p>
        Active: active (running) since Tue 2019-09-10 07:15:06 UTC; 1min 11s ago
        Main PID: 840 (life_mngr)

   .. note:: For WaaG, we need to close ``windbg`` by using the ``"bcdedit /set debug off`` command
   .. note:: For WaaG, we need to close ``windbg`` by using the ``bcdedit /set debug off`` command
      if you executed the ``bcdedit /set debug on`` when you set up the WaaG, because it occupies the ``COM2``.

#. Use the ``acrnctl stop`` command on the Service VM to trigger S5 to the User VMs:
BIN
doc/tutorials/images/configure_vm_add.png
Normal file
@@ -1,11 +1,11 @@

.. _Increase UOS disk size:
.. _Increase User VM disk size:

Increasing the User OS disk size
################################
Increase the User VM Disk Size
##############################

This document builds on the :ref:`getting_started` and assumes you already have
This document builds on :ref:`getting_started` and assumes you already have
a system with ACRN installed and running correctly. The size of the pre-built
Clear Linux User OS (UOS) virtual disk is typically only 8GB and this may not be
Clear Linux User OS (User VM) virtual disk is typically only 8GB and this may not be
sufficient for some applications. This guide explains a few simple steps to
increase the size of that virtual disk.

@@ -21,7 +21,7 @@ broken down into three steps:

.. note::

   These steps are performed directly on the UOS disk image. The UOS VM **must**
   These steps are performed directly on the User VM disk image. The User VM **must**
   be powered off during this operation.

Increase the virtual disk size

@@ -34,7 +34,7 @@ We will use the ``qemu-img`` tool to increase the size of the virtual disk

   $ sudo swupd bundle-add clr-installer

As an example, let us add 10GB of storage to our virtual disk image called
As an example, let us add 10GB of storage to our virtual disk image called
``uos.img``.
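The command itself falls outside this hunk's context; a typical
invocation, assuming the image sits in the current directory, would be:

.. code-block:: none

   $ qemu-img resize uos.img +10G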
.. code-block:: none

@@ -78,23 +78,23 @@ Here is what the sequence looks like:

   GNU Parted 3.2
   Using /home/gvancuts/uos/uos.img
   Welcome to GNU Parted! Type 'help' to view a list of commands.
   (parted) p
   Warning: Not all of the space available to /home/gvancuts/uos/uos.img appears to be used, you can fix the GPT to use all of the space (an extra 20971520 blocks) or continue with the current setting?
   Fix/Ignore? Fix
   (parted) p
   Warning: Not all of the space available to /home/gvancuts/uos/uos.img appears to be used, you can fix the GPT to use all of the space (an extra 20971520 blocks) or continue with the current setting?
   Fix/Ignore? Fix
   Model: (file)
   Disk /home/gvancuts/uos/uos.img: 19.9GB
   Sector size (logical/physical): 512B/512B
   Partition Table: gpt
   Disk Flags:
   Disk Flags:

   Number  Start   End     Size    File system     Name     Flags
    1      1049kB  537MB   536MB   fat16           primary  boot, esp
    2      537MB   570MB   33.6MB  linux-swap(v1)  primary
    3      570MB   9160MB  8590MB  ext4            primary

   (parted) resizepart 3
   (parted) resizepart 3
   End? [9160MB]? 19.9GB
   (parted) q
   (parted) q

Resize the filesystem
*********************

@@ -112,4 +112,4 @@ partition space.

   $ sudo losetup -d $LOOP_DEV
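For reference, the steps leading up to this detach command follow the
usual loop-device pattern; a sketch, assuming partition 3 of ``uos.img``
holds the root filesystem:

.. code-block:: none

   $ LOOP_DEV=$(sudo losetup -f -P --show uos.img)
   $ sudo e2fsck -f ${LOOP_DEV}p3
   $ sudo resize2fs ${LOOP_DEV}p3
   $ sudo losetup -d $LOOP_DEV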
Congratulations! You have successfully resized the disk, partition, and
filesystem of your User OS.
filesystem of your User VM.

@@ -1,7 +1,7 @@

.. _kbl-nuc-sdc:

Using SDC Mode on the NUC
#########################
Use SDC Mode on the NUC
#######################

The Intel |reg| NUC is the primary tested platform for ACRN development,
and its setup is described below.

@@ -90,7 +90,7 @@ Follow these steps:

#. Open a terminal.

#. Download the ``acrn_quick_setup.sh`` script to set up the Service VM.
#. Download the ``acrn_quick_setup.sh`` script to set up the Service VM.
   (If you don't need a proxy to get the script, skip the ``export`` command.)

   .. code-block:: none

@@ -113,7 +113,7 @@ Follow these steps:

      Service OS setup done!
      Rebooting Service OS to take effects.
      Rebooting.

   .. note::
      This script is using ``/dev/sda1`` as the default EFI System Partition
      (ESP). If the ESP is different based on your hardware, you can specify

@@ -127,7 +127,7 @@ Follow these steps:

      ``sudo sh acrn_quick_setup.sh -s 32080 -e /dev/nvme0n1p1 -d``

#. After the system reboots, log in as the **clear** user. Verify that the Service VM
#. After the system reboots, log in as the **clear** user. Verify that the Service VM
   booted successfully by checking the ``dmesg`` log:

   .. code-block:: console

@@ -180,7 +180,7 @@ Follow these steps:

      clr-a632ec84744d4e02974fe1891130002e login:

#. Log in as root. Specify the new password. Verify that you are running in the User VM
#. Log in as root. Specify the new password. Verify that you are running in the User VM
   by checking the kernel release version or seeing if acrn devices are visible:

   .. code-block:: console

@@ -214,7 +214,7 @@ and User VM manually. Follow these steps:

#. Install Clear Linux on the NUC, log in as the **clear** user,
   and open a terminal window.

#. Disable the auto-update feature. Clear Linux OS is set to automatically update itself.
#. Disable the auto-update feature. Clear Linux OS is set to automatically update itself.
   We recommend that you disable this feature to have more control over when updates happen. Use this command:

   .. code-block:: none

@@ -222,8 +222,8 @@ and User VM manually. Follow these steps:

      $ sudo swupd autoupdate --disable

   .. note::
      When enabled, the Clear Linux OS installer automatically checks for updates and installs the latest version
      available on your system. To use a specific version (such as 32080), enter the following command after the
      When enabled, the Clear Linux OS installer automatically checks for updates and installs the latest version
      available on your system. To use a specific version (such as 32080), enter the following command after the
      installation is complete:

      ``sudo swupd repair --picky -V 32080``

@@ -408,7 +408,7 @@ ACRN Network Bridge

===================

The ACRN bridge has been set up as a part of systemd services for device
communication. The default bridge creates ``acrn_br0`` which is the bridge and ``tap0`` as an initial setup.
communication. The default bridge creates ``acrn_br0`` which is the bridge and ``tap0`` as an initial setup.
The files can be found in ``/usr/lib/systemd/network``. No additional setup is needed since **systemd-networkd** is
automatically enabled after a system restart.

@@ -425,8 +425,8 @@ Set up Reference User VM

   $ cd uos
   $ curl https://download.clearlinux.org/releases/32080/clear/clear-32080-kvm.img.xz -o uos.img.xz

   Note that if you want to use or try out a newer version of Clear Linux OS as the User VM, download the
   latest from `http://download.clearlinux.org/image/`.
   Note that if you want to use or try out a newer version of Clear Linux OS as the User VM, download the
   latest from `http://download.clearlinux.org/image/`.
   Make sure to adjust the steps described below accordingly (image file name and kernel modules version).

#. Uncompress it:

@@ -435,7 +435,7 @@ Set up Reference User VM

      $ unxz uos.img.xz

#. Deploy the User VM kernel modules to the User VM virtual disk image (note that you'll need to
#. Deploy the User VM kernel modules to the User VM virtual disk image (note that you'll need to
   use the same **iot-lts2018** image version number noted in Step 1 above):

   .. code-block:: none
@@ -1,7 +1,7 @@

.. _open_vswitch:

How to enable OVS in ACRN
#########################
Enable OVS in ACRN
##################

Hypervisors need the ability to bridge network traffic between VMs
and with the outside world. This tutorial describes how to
use `Open Virtual Switch (OVS)

@@ -1,7 +1,7 @@

.. _rdt_configuration:

RDT Configuration
#################
Enable RDT Configuration
########################

On x86 platforms that support Intel Resource Director Technology (RDT)
allocation features such as Cache Allocation Technology (CAT) and Memory

@@ -12,9 +12,13 @@ higher priorities VMs (such as RTVMs) are not impacted.

Using RDT includes three steps:

1. Detect and enumerate RDT allocation capabilites on supported resources such as cache and memory bandwidth.
#. Set up resource mask array MSRs (Model-Specific Registers) for each CLOS (Class of Service, which is a resource allocation), basically to limit or allow access to resource usage.
#. Select the CLOS for the CPU associated with the VM that will apply the resource mask on the CP.
1. Detect and enumerate RDT allocation capabilities on supported
   resources such as cache and memory bandwidth.
#. Set up resource mask array MSRs (Model-Specific Registers) for each
   CLOS (Class of Service, which is a resource allocation), basically to
   limit or allow access to resource usage.
#. Select the CLOS for the CPU associated with the VM that will apply
   the resource mask on the CP.

Steps #2 and #3 configure RDT resources for a VM and can be done in two ways:

@@ -24,12 +28,15 @@ Steps #2 and #3 configure RDT resources for a VM and can be done in two ways:

The following sections discuss how to detect, enumerate capabilities, and
configure RDT resources for VMs in the ACRN hypervisor.

For further details, refer to the ACRN RDT high-level design :ref:`hv_rdt` and `Intel 64 and IA-32 Architectures Software Developer's Manual, (Section 17.19 Intel Resource Director Technology Allocation Features) <https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-3a-3b-3c-and-3d-system-programming-guide>`_
For further details, refer to the ACRN RDT high-level design
:ref:`hv_rdt` and `Intel 64 and IA-32 Architectures Software Developer's
Manual, (Section 17.19 Intel Resource Director Technology Allocation Features)
<https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-3a-3b-3c-and-3d-system-programming-guide>`_

.. _rdt_detection_capabilities:

RDT detection and resource capabilites
**************************************
RDT detection and resource capabilities
***************************************

From the ACRN HV debug shell, use ``cpuid`` to detect and identify the
resource capabilities. Use the platform's serial port for the HV shell
(refer to :ref:`getting-started-up2` for setup instructions).

@@ -38,7 +45,7 @@ Check if the platform supports RDT with ``cpuid``. First, run ``cpuid 0x7 0x0``;

RDT. Next, run ``cpuid 0x10 0x0`` and check the EBX [3-1] bits. EBX [bit 1]
indicates that L3 CAT is supported. EBX [bit 2] indicates that L2 CAT is
supported. EBX [bit 3] indicates that MBA is supported. To query the
capabilties of the supported resources, use the bit position as a subleaf
capabilities of the supported resources, use the bit position as a subleaf
index. For example, run ``cpuid 0x10 0x2`` to query the L2 CAT capability.
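In the HV debug shell, the detection sequence sketched above is simply
(register output omitted here):

.. code-block:: none

   ACRN:\>cpuid 0x7 0x0
   ACRN:\>cpuid 0x10 0x0
   ACRN:\>cpuid 0x10 0x2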
.. code-block:: none

@@ -48,10 +55,16 @@ index. For example, run ``cpuid 0x10 0x2`` to query the L2 CAT capability.

L3/L2 bit encoding:

* EAX [bit 4:0] reports the length of the cache mask minus one. For example, a value 0xa means the cache mask is 0x7ff.
* EBX [bit 31:0] reports a bit mask. Each set bit indicates the corresponding unit of the cache allocation that can be used by other entities in the platform (e.g. integrated graphics engine).
* ECX [bit 2] if set, indicates that cache Code and Data Prioritization Technology is supported.
* EDX [bit 15:0] reports the maximum CLOS supported for the resource minus one. For example, a value of 0xf means the max CLOS supported is 0x10.
* EAX [bit 4:0] reports the length of the cache mask minus one. For
  example, a value 0xa means the cache mask is 0x7ff.
* EBX [bit 31:0] reports a bit mask. Each set bit indicates the
  corresponding unit of the cache allocation that can be used by other
  entities in the platform (e.g. integrated graphics engine).
* ECX [bit 2] if set, indicates that cache Code and Data Prioritization
  Technology is supported.
* EDX [bit 15:0] reports the maximum CLOS supported for the resource
  minus one. For example, a value of 0xf means the max CLOS supported
  is 0x10.

.. code-block:: none

@@ -82,7 +95,8 @@ Tuning RDT resources in HV debug shell

This section explains how to configure the RDT resources from the HV debug
shell.

#. Check the PCPU IDs of each VM; the ``vcpu_list`` below shows that VM0 is running on PCPU0, and VM1 is running on PCPU1:
#. Check the PCPU IDs of each VM; the ``vcpu_list`` below shows that VM0 is
   running on PCPU0, and VM1 is running on PCPU1:

   .. code-block:: none

@@ -93,14 +107,24 @@ shell.

      0     0       0       PRIMARY   Running
      1     1       0       PRIMARY   Running

#. Set the resource mask array MSRs for each CLOS with a ``wrmsr <reg_num> <value>``. For example, if you want to restrict VM1 to use the lower 4 ways of LLC cache and you want to allocate the upper 7 ways of LLC to access to VM0, you must first assign a CLOS for each VM (e.g. VM0 is assigned CLOS0 and VM1 CLOS1). Next, resource mask the MSR that corresponds to the CLOS0. In our example, IA32_L3_MASK_BASE + 0 is programmed to 0x7f0. Finally, resource mask the MSR that corresponds to CLOS1. In our example, IA32_L3_MASK_BASE + 1 is set to 0xf.
#. Set the resource mask array MSRs for each CLOS with a ``wrmsr <reg_num> <value>``.
   For example, if you want to restrict VM1 to use the
   lower 4 ways of LLC cache and you want to allocate the upper 7 ways of
   LLC to access to VM0, you must first assign a CLOS for each VM (e.g. VM0
   is assigned CLOS0 and VM1 CLOS1). Next, resource mask the MSR that
   corresponds to the CLOS0. In our example, IA32_L3_MASK_BASE + 0 is
   programmed to 0x7f0. Finally, resource mask the MSR that corresponds to
   CLOS1. In our example, IA32_L3_MASK_BASE + 1 is set to 0xf.

   .. code-block:: none

      ACRN:\>wrmsr -p1 0xc90 0x7f0
      ACRN:\>wrmsr -p1 0xc91 0xf

#. Assign CLOS1 to PCPU1 by programming the MSR IA32_PQR_ASSOC [bit 63:32] (0xc8f) to 0x100000000 to use CLOS1 and assign CLOS0 to PCPU 0 by programming MSR IA32_PQR_ASSOC [bit 63:32] to 0x0. Note that IA32_PQR_ASSOC is per LP MSR and CLOS must be programmed on each LP.
#. Assign CLOS1 to PCPU1 by programming the MSR IA32_PQR_ASSOC [bit 63:32]
   (0xc8f) to 0x100000000 to use CLOS1 and assign CLOS0 to PCPU 0 by
   programming MSR IA32_PQR_ASSOC [bit 63:32] to 0x0. Note that
   IA32_PQR_ASSOC is per LP MSR and CLOS must be programmed on each LP.
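   Following the ``wrmsr`` syntax used above, these two assignments would
   look like this (a sketch derived from the values in this step; the
   elided block in this hunk may differ slightly):

   .. code-block:: none

      ACRN:\>wrmsr -p0 0xc8f 0x0
      ACRN:\>wrmsr -p1 0xc8f 0x100000000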
   .. code-block:: none

@@ -112,7 +136,12 @@ shell.

Configure RDT for VM using VM Configuration
*******************************************

#. RDT on ACRN is enabled by default on supported platforms. This information can be found using an offline tool that generates a platform-specific xml file that helps ACRN identify RDT-supported platforms. This feature can be also be toggled using the CONFIG_RDT_ENABLED flag with the ``make menuconfig`` command. The first step is to clone the ACRN source code (if you haven't already done so):
#. RDT on ACRN is enabled by default on supported platforms. This
   information can be found using an offline tool that generates a
   platform-specific xml file that helps ACRN identify RDT-supported
   platforms. This feature can also be toggled using the
   CONFIG_RDT_ENABLED flag with the ``make menuconfig`` command. The first
   step is to clone the ACRN source code (if you haven't already done so):

   .. code-block:: none

@@ -122,7 +151,9 @@ Configure RDT for VM using VM Configuration

   .. figure:: images/menuconfig-rdt.png
      :align: center

#. The predefined cache masks can be found at ``hypervisor/arch/x86/configs/$(CONFIG_BOARD)/board.c`` for respective boards. For example, apl-up2 can found at ``hypervisor/arch/x86/configs/apl-up2/board.c``.
#. The predefined cache masks can be found at
   ``hypervisor/arch/x86/configs/$(CONFIG_BOARD)/board.c`` for respective boards.
   For example, apl-up2 can be found at ``hypervisor/arch/x86/configs/apl-up2/board.c``.

   .. code-block:: none
      :emphasize-lines: 3,7,11,15

@@ -147,9 +178,17 @@ Configure RDT for VM using VM Configuration

      };

   .. note::
      Users can change the mask values, but the cache mask must have **continuous bits** or a #GP fault can be triggered. Similary, when programming an MBA delay value, be sure to set the value to less than or equal to the MAX delay value.
      Users can change the mask values, but the cache mask must have
      **continuous bits** or a #GP fault can be triggered. Similarly, when
      programming an MBA delay value, be sure to set the value to less than or
      equal to the MAX delay value.

#. Set up the CLOS in the VM config. Follow `RDT detection and resource capabilites`_ to identify the MAX CLOS that can be used. ACRN uses the **the lowest common MAX CLOS** value among all RDT resources to avoid resource misconfigurations. For example, configuration data for the Service VM sharing mode can be found at ``hypervisor/arch/x86/configs/vm_config.c``
#. Set up the CLOS in the VM config. Follow `RDT detection and resource capabilities`_
   to identify the MAX CLOS that can be used. ACRN uses
   **the lowest common MAX CLOS** value among all RDT resources to avoid
   resource misconfigurations. For example, configuration data for the
   Service VM sharing mode can be found at
   ``hypervisor/arch/x86/configs/vm_config.c``

   .. code-block:: none
      :emphasize-lines: 6

@@ -171,9 +210,15 @@ Configure RDT for VM using VM Configuration

      };

   .. note::
      In ACRN, Lower CLOS always means higher priority (clos 0 > clos 1 > clos 2>...clos n). So, carefully program each VM's CLOS accordingly.
      In ACRN, lower CLOS always means higher priority (clos 0 > clos 1 > clos 2 > ... clos n).
      So, carefully program each VM's CLOS accordingly.

#. Careful consideration should be made when assigning vCPU affinity. In a cache isolation configuration, in addition to isolating CAT-capable caches, you must also isolate lower-level caches. In the following example, logical processor #0 and #2 share L1 and L2 caches. In this case, do not assign LP #0 and LP #2 to different VMs that need to do cache isolation. Assign LP #1 and LP #3 with similar consideration:
#. Careful consideration should be made when assigning vCPU affinity. In
   a cache isolation configuration, in addition to isolating CAT-capable
   caches, you must also isolate lower-level caches. In the following
   example, logical processor #0 and #2 share L1 and L2 caches. In this
   case, do not assign LP #0 and LP #2 to different VMs that need to do
   cache isolation. Assign LP #1 and LP #3 with similar consideration:

   .. code-block:: none
      :emphasize-lines: 3

@@ -194,10 +239,15 @@ Configure RDT for VM using VM Configuration

      PU L#2 (P#1)
      PU L#3 (P#3)

#. Bandwidth control is per-core (not per LP), so max delay values of per-LP CLOS is applied to the core. If HT is turned on, don’t place high priority threads on sibling LPs running lower priority threads.
#. Bandwidth control is per-core (not per LP), so max delay values of
   per-LP CLOS is applied to the core. If HT is turned on, don't place high
   priority threads on sibling LPs running lower priority threads.

#. Based on our scenario, build the ACRN hypervisor and copy the artifact ``acrn.efi`` to the
   ``/boot/EFI/acrn`` directory. If needed, update the devicemodel ``acrn-dm`` as well in ``/usr/bin`` directory. see :ref:`getting-started-building` for building instructions.
#. Based on our scenario, build the ACRN hypervisor and copy the
   artifact ``acrn.efi`` to the
   ``/boot/EFI/acrn`` directory. If needed, update the devicemodel
   ``acrn-dm`` as well in ``/usr/bin`` directory. See
   :ref:`getting-started-building` for building instructions.

   .. code-block:: none
@@ -1,12 +1,12 @@
|
||||
.. _rt_performance_tuning:
|
||||
|
||||
Real-Time (RT) Performance Analysis on ACRN
|
||||
###########################################
|
||||
ACRN Real-Time (RT) Performance Analysis
|
||||
########################################
|
||||
|
||||
The document describes the methods to collect trace/data for ACRN Real-Time VM (RTVM)
|
||||
real-time performance analysis. Two parts are included:
|
||||
|
||||
- Method to trace ``vmexit`` occurences for analysis.
|
||||
- Method to trace ``vmexit`` occurrences for analysis.
|
||||
- Method to collect Performance Monitoring Counters information for tuning based on Performance Monitoring Unit, or PMU.
|
||||
|
||||
``vmexit`` analysis for ACRN RT performance
|
||||
@@ -38,11 +38,11 @@ Here is example pseudocode of a cyclictest implementation.
|
||||
.. code-block:: none
|
||||
|
||||
while (!shutdown) {
|
||||
…
|
||||
...
|
||||
clock_nanosleep(&next)
|
||||
clock_gettime(&now)
|
||||
latency = calcdiff(now, next)
|
||||
…
|
||||
...
|
||||
next += interval
|
||||
}
|
||||
|
||||
@@ -65,7 +65,7 @@ Offline analysis
|
||||
|
||||
#. Convert the raw trace data to human readable format.
|
||||
#. Merge the logs in the RTVM and the ACRN hypervisor trace based on timestamps (in TSC).
|
||||
#. Check to see if any ``vmexit`` occured within the critical sections. The pattern is as follows:
|
||||
#. Check to see if any ``vmexit`` occurred within the critical sections. The pattern is as follows:
|
||||
|
||||
.. figure:: images/vm_exits_log.png
|
||||
:align: center
|
||||
@@ -161,7 +161,9 @@ CPU hardware differences in Linux performance measurements and presents a
simple command line interface. Perf is based on the ``perf_events`` interface
exported by recent versions of the Linux kernel.

**PMU** tools is a collection of tools for profile collection and performance analysis on Intel CPUs on top of Linux Perf. Refer to the following links for perf usage:
**PMU** tools is a collection of tools for profile collection and
performance analysis on Intel CPUs on top of Linux Perf. Refer to the
following links for perf usage:

- https://perf.wiki.kernel.org/index.php/Main_Page
- https://perf.wiki.kernel.org/index.php/Tutorial
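
As a minimal sketch (the events and CPU number are examples), counting core
events on the RT core with perf looks like:

.. code-block:: bash

   # count cycles, instructions, and cache misses on CPU 1 for 10 seconds
   sudo perf stat -e cycles,instructions,cache-misses -C 1 -- sleep 10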
@@ -174,7 +176,8 @@ Top-down Micro-Architecture Analysis Method (TMAM)

The Top-down Micro-Architecture Analysis Method (TMAM), based on Top-Down
Characterization methodology, aims to provide an insight into whether you
have made wise choices with your algorithms and data structures. See the
Intel |reg| 64 and IA-32 `Architectures Optimization Reference Manual <http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf>`_,
Intel |reg| 64 and IA-32 `Architectures Optimization Reference Manual
<http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf>`_,
Appendix B.1 for more details on TMAM. Refer to this `technical paper
<https://fd.io/docs/whitepapers/performance_analysis_sw_data_planes_dec21_2017.pdf>`_
which adopts TMAM for systematic performance benchmarking and analysis

@@ -197,4 +200,3 @@ Example: Using Perf to analyze TMAM level 1 on CPU core 1

      S0-C1    1    10.6%    1.5%    3.9%    84.0%

      0.006737123 seconds time elapsed
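
The four percentage columns are the level-1 top-down categories (retiring,
bad speculation, frontend bound, backend bound). A command of roughly this
shape produces such a breakdown, assuming a perf build with top-down
support; the CPU number and workload are examples:

.. code-block:: bash

   # level-1 TMAM breakdown for CPU core 1 while a short workload runs
   sudo perf stat --topdown -C 1 -- sleep 1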
196
doc/tutorials/rtvm_performance_tips.rst
Normal file
@@ -0,0 +1,196 @@

.. _rt_perf_tips_rtvm:

ACRN Real-Time VM Performance Tips
##################################

Background
**********

The ACRN real-time VM (RTVM) is a special type of ACRN post-launched VM.
This document shows how you can configure RTVMs to potentially achieve
near bare-metal performance by configuring certain key technologies and
eliminating the use of VM-exits within RT tasks, thereby avoiding this
common virtualization overhead issue.

Neighbor VMs, such as Service VMs, Human-Machine-Interface (HMI) VMs, or
other real-time VMs, may negatively affect the execution of real-time
tasks on an RTVM. This document also shows technologies used to isolate
potential runtime noise from neighbor VMs.

Here are some key technologies that can significantly improve
RTVM performance:

- LAPIC passthrough with core partitioning.
- PCIe Device Passthrough: Only MSI interrupt-capable PCI devices are
  supported for the RTVM.
- Enable CAT (Cache Allocation Technology)-based cache isolation: The RTVM
  uses a dedicated CLOS (Class of Service). While others may share CLOS,
  the GPU uses a CLOS that will not overlap with the RTVM CLOS.
- PMD virtio: Both the virtio BE and FE work in polling mode so that
  interrupts and notifications between the Service VM and RTVM are not
  needed. All RTVM guest memory is hidden from the Service VM except for
  the virtio queue memory.

This document summarizes tips from issues encountered and
resolved during real-time development and performance tuning.
Mandatory options for an RTVM
*****************************

An RTVM is a post-launched VM with LAPIC passthrough. Pay attention to
these options when you launch an ACRN RTVM:

Tip: Apply the acrn-dm option ``--lapic_pt``
   The LAPIC passthrough feature of ACRN is configured via the
   ``--lapic_pt`` option, but the feature is actually enabled only when the
   LAPIC is switched to X2APIC mode. Both conditions should be met to
   enable an RTVM. The ``--rtvm`` option is automatically attached once
   ``--lapic_pt`` is applied.

Tip: Use virtio polling mode
   Polling mode avoids the VM-exit caused by the frontend sending a
   notification to the backend. We recommend that you pass through a
   physical peripheral device (such as a block or an Ethernet device) to an
   RTVM. If no physical device is available, ACRN supports virtio devices
   and enables polling mode to avoid a VM-exit at the frontend. Enable
   virtio polling mode via the option ``--virtio_poll [polling interval]``.
   A launch sketch combining these options follows.
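
For illustration only, a launch command combining these options might look
like the following; the slot numbers, passthrough BDF, OVMF path, and VM
name are examples to adapt to your setup:

.. code-block:: bash

   # RTVM launch sketch: LAPIC passthrough plus polling-mode virtio console
   acrn-dm -A -m 2048M \
      -s 0:0,hostbridge \
      -s 2,passthru,02/0/0 \
      -s 3,virtio-console,@stdio:stdio_port \
      --virtio_poll 1000000 \
      --lapic_pt \
      --ovmf /usr/share/acrn/bios/OVMF.fd \
      hard_rtvm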
Avoid VM-exit latency
*********************

VM-exit has a significant negative impact on virtualization performance.
A single VM-exit causes a latency of several microseconds or longer,
depending on what's done in VMX-root mode. VM-exits are classified into two
types: those triggered by external CPU events and those triggered by
operations initiated by the vCPU.

ACRN eliminates almost all VM-exits triggered by external events by
using LAPIC passthrough. A few exceptions exist:

- SMI - This brings the processor into the SMM, causing a much longer
  performance impact. The SMI should be handled in the BIOS.

- NMI - ACRN uses NMI for system-level notification.

You should avoid VM-exits triggered by operations initiated by the
vCPU. Refer to the `Intel Software Developer Manuals (SDM)
<https://software.intel.com/en-us/articles/intel-sdm>`_ "Instructions
That Cause VM-exits Unconditionally" (SDM V3, 25.1.2) and "Instructions
That Cause VM-exits Conditionally" (SDM V3, 25.1.3).

Tip: Do not use CPUID in a real-time critical section.
   The CPUID instruction causes VM-exits unconditionally. You should
   detect CPU capability **before** entering an RT-critical section.
   CPUID can be executed at any privilege level to serialize instruction
   execution, and it executes efficiently. It's commonly used as a
   serializing instruction in an application, placed immediately before
   and after RDTSC. Remove the use of CPUID in this case by using RDTSCP
   instead of RDTSC. RDTSCP waits until all previous instructions have
   executed before reading the counter, and the subsequent instructions
   after RDTSCP normally have a data dependency on it, so they must wait
   until RDTSCP has executed.
RDMSR and WRMSR are instructions that cause VM-exits conditionally. On the
ACRN RTVM, most MSRs are not intercepted by the HV, so they won't cause a
VM-exit. But there are exceptions for security considerations:

1) read from APICID and LDR;
2) write to TSC_ADJUST if VMX_TSC_OFFSET_FULL is zero;
   otherwise, read and write to TSC_ADJUST and TSC_DEADLINE;
3) write to ICR.

Tip: Do not use RDMSR to access APICID and LDR in an RT critical section.
   ACRN does not present a physical APICID to a guest, so APICID
   and LDR are virtualized even though the LAPIC is passed through. As a
   result, access to APICID and LDR can cause a VM-exit.

Tip: Guarantee that VMX_TSC_OFFSET_FULL is zero; otherwise, do not access TSC_ADJUST and TSC_DEADLINE in the RT critical section.
   ACRN uses VMX_TSC_OFFSET_FULL as the offset between vTSC_ADJUST and
   pTSC_ADJUST. If VMX_TSC_OFFSET_FULL is zero, intercepting
   TSC_ADJUST and TSC_DEADLINE is not necessary. Otherwise, they should be
   intercepted to guarantee functionality.

Tip: Utilize Preempt-RT Linux mechanisms to reduce ICR accesses from the RT core.
   #. Add ``domain`` to the ``isolcpus`` kernel parameter (``isolcpus=nohz,domain,1``).
   #. Add ``idle=poll`` to the kernel parameters.
   #. Add ``rcu_nocb_poll`` along with ``rcu_nocbs=1`` to the kernel parameters.
   #. Disable logging services such as journald or syslogd if possible.

   The parameters shown above are recommended for the guest Preempt-RT
   Linux. For a UP RTVM, ICR interception is not a problem. But for an SMP
   RTVM, IPIs may be needed between vCPUs. These tips are about reducing
   ICR access (a combined command line is sketched below). The example
   above assumes a dual-core RTVM, where core 0 is a housekeeping core and
   core 1 is a real-time core. The ``domain`` flag provides strong
   isolation of the RT core from the general SMP balancing and scheduling
   algorithms. The parameters ``idle=poll`` and ``rcu_nocb_poll`` can
   prevent the RT core from sending a reschedule IPI to wake up tasks on
   core 0 in most cases. The logging service is disabled because an IPI
   may be issued to the housekeeping core to notify the logging service
   when kernel messages are output on the RT core.
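
   As an illustrative sketch (core numbering follows the example above;
   adapt it to your CPU layout), the combined guest kernel command line
   fragment would be:

   .. code-block:: bash

      # Preempt-RT guest parameters for a dual-core RTVM:
      # core 0 housekeeping, core 1 real-time
      isolcpus=nohz,domain,1 rcu_nocbs=1 rcu_nocb_poll idle=poll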

.. note::
   If an ICR access is inevitable within the RT critical section, be
   aware of the extra 3~4 microsecond latency for each access.

Tip: Create and initialize the RT tasks at the beginning to avoid runtime access to control registers.
   Accessing control registers is another cause of VM-exits. In ACRN,
   access to CR3 and CR8 does not cause a VM-exit. However, writes to CR0
   and CR4 may cause a VM-exit, which would happen at the spawning and
   initialization of a new task.

Isolating the impact of neighbor VMs
************************************

ACRN makes use of several technologies and hardware features to avoid
performance impact on the RTVM by neighbor VMs:

Tip: Do not share CPUs allocated to the RTVM with other RT or non-RT VMs.
   ACRN enables CPU sharing to improve the utilization of CPU resources.
   However, for an RT VM, CPUs should be dedicatedly allocated for
   determinism.

Tip: Use RDT such as CAT and MBA to allocate dedicated resources to the RTVM.
   ACRN enables Intel® Resource Director Technology features such as CAT
   and MBA to isolate the RTVM from components, such as the GPU, that
   compete for shared resources via the memory hierarchy. The availability
   of RDT is hardware-specific. Refer to :ref:`rdt_configuration`.

Tip: Lock the GPU to the lowest feasible frequency.
   A GPU can put a heavy load on the power/memory subsystem. Locking
   the GPU frequency as low as possible can help improve RT performance
   determinism. GPU frequency can usually be locked in the BIOS, but such
   BIOS support is platform-specific.

Miscellaneous
*************

Tip: Disable timer migration on Preempt-RT Linux.
   Because most tasks are affined to the housekeeping core, a timer
   armed by RT tasks might be migrated to the nearest busy CPU for power
   saving. This hurts RT determinism because the timer interrupts raised
   on the housekeeping core need to be resent to the RT core. Timer
   migration can be disabled by the command::

      echo 0 > /proc/sys/kernel/timer_migration

Tip: Add ``mce=off`` to the RT VM kernel parameters.
   This parameter disables the MCE periodic timer and avoids a VM-exit.

Tip: Disable the Intel processor C-State and P-State of the RTVM.
   Power management of a processor can save power, but it can also impact
   RT performance when the power state changes. The C-State and P-State
   PM mechanisms can be disabled by adding ``processor.max_cstate=0
   intel_idle.max_cstate=0 intel_pstate=disable`` to the kernel parameters.

Tip: Exercise caution when setting ``/proc/sys/kernel/sched_rt_runtime_us``.
   Setting ``/proc/sys/kernel/sched_rt_runtime_us`` to ``-1`` can be a
   problem. A value of ``-1`` allows RT tasks to monopolize a CPU, so that
   a mechanism such as ``nohz`` might get no chance to work, which can hurt
   RT performance or even (potentially) lock up a system.
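
   For example, to restore the kernel's default 95% cap (950000 us of every
   1000000 us) at runtime:

   .. code-block:: bash

      # leave 5% of CPU time for non-RT housekeeping work
      sudo sysctl -w kernel.sched_rt_runtime_us=950000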

Tip: Disable the software workaround for Machine Check Error on Page Size Change.
   By default, the software workaround for Machine Check Error on Page Size
   Change is conditionally applied to the models that may be affected by the
   issue. However, the software workaround has a negative impact on
   performance. If all guest OS kernels are trusted, the
   :option:`CONFIG_MCE_ON_PSC_WORKAROUND_DISABLED` option can be set for
   performance.

.. note::
   The tips for Preempt-RT Linux are mostly applicable to other
   Linux-based RT OSes as well, such as Xenomai.
@@ -1,6 +1,6 @@

.. _rtvm_workload_guideline:

Real time VM application design guidelines
Real-Time VM Application Design Guidelines
##########################################

An RTOS developer must be aware of the differences between running applications on a native
@@ -35,7 +35,9 @@ Install Kata Containers

The Kata Containers installation from Clear Linux's official repository does
not work with ACRN at the moment. Therefore, you must install Kata
Containers using the `manual installation <https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_ instructions (using a ``rootfs`` image).
Containers using the `manual installation
<https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_
instructions (using a ``rootfs`` image).

#. Install the build dependencies.

@@ -45,7 +47,8 @@ Containers using the `manual installation <https://github.com/kata-containers/do

#. Install Kata Containers.

   At a high level, the `manual installation <https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_
   At a high level, the `manual installation
   <https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_
   steps are:

   #. Build and install the Kata runtime.
@@ -89,7 +92,7 @@ outputs:

   $ kata-runtime kata-env | awk -v RS= '/\[Hypervisor\]/'
   [Hypervisor]
     MachineType = ""
     Version = "DM version is: 1.5-unstable-”2020w02.5.140000p_261” (daily tag:”2020w02.5.140000p”), build by mockbuild@2020-01-12 08:44:52"
     Version = "DM version is: 1.5-unstable-"2020w02.5.140000p_261" (daily tag:"2020w02.5.140000p"), build by mockbuild@2020-01-12 08:44:52"
     Path = "/usr/bin/acrn-dm"
     BlockDeviceDriver = "virtio-blk"
     EntropySource = "/dev/urandom"
@@ -1,7 +1,7 @@

.. _running_deb_as_serv_vm:

Running Debian as the Service VM
##################################
Run Debian as the Service VM
############################

The `Debian Project <https://www.debian.org/>`_ is an association of individuals who have made common cause to create a `free <https://www.debian.org/intro/free>`_ operating system. The `latest stable Debian release <https://www.debian.org/releases/stable/>`_ is 10.0.
@@ -1,7 +1,7 @@

.. _running_deb_as_user_vm:

Running Debian as the User VM
#############################
Run Debian as the User VM
#########################

Prerequisites
*************

@@ -185,7 +185,7 @@ Re-use and modify the `launch_win.sh` script in order to launch the new Debian 1

   $ sudo cp /mnt/EFI/debian/grubx64.efi /mnt/EFI/boot/bootx64.efi
   $ sync && sudo umount /mnt

#. Launch the Debian VM afer logging in to the Service VM:
#. Launch the Debian VM after logging in to the Service VM:

   .. code-block:: none
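
      # Illustrative only: this elided step runs your modified copy of the
      # launch script (name and options depend on your earlier edits)
      $ sudo ./launch_win.sh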

@@ -1,7 +1,7 @@

.. _running_ubun_as_user_vm:

Running Ubuntu as the User VM
#############################
Run Ubuntu as the User VM
#########################

Prerequisites
*************

@@ -63,9 +63,9 @@ Validated Versions

Build the Ubuntu KVM Image
**************************

This tutorial uses the Ubuntu 18.04 destop ISO as the base image.
This tutorial uses the Ubuntu 18.04 desktop ISO as the base image.

#. Download the `Ubuntu 18.04 destop ISO <http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso?_ga=2.160010942.221344839.1566963570-491064742.1554370503>`_ on your development machine:
#. Download the `Ubuntu 18.04 desktop ISO <http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso?_ga=2.160010942.221344839.1566963570-491064742.1554370503>`_ on your development machine:

#. Install Ubuntu via the virt-manager tool:

@@ -165,7 +165,7 @@ Modify the `launch_win.sh` script in order to launch Ubuntu as the User VM.

   $ sed -i 3"s/$/ i915.enable_conformance_check=0/" /mnt/loader/entries/acrn.conf
   $ sudo sync && sudo umount /mnt && reboot

#. Launch the Ubuntu VM afer logging in to the Service VM:
#. Launch the Ubuntu VM after logging in to the Service VM:

   .. code-block:: none
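
      # Illustrative only: this elided step runs your modified copy of the
      # launch script (name and options depend on your earlier edits)
      $ sudo ./launch_win.sh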

@@ -1,7 +1,7 @@

.. _sgx_virt:

SGX Virtualization
##################
Enable SGX Virtualization
#########################

SGX refers to `Intel® Software Guard Extensions <https://software.intel.com/
en-us/sgx>`_ (Intel® SGX). This is a set of instructions that can be used by

@@ -92,7 +92,7 @@ enable SGX support in the BIOS and in ACRN:

#. Add the EPC config in the VM configuration.

   Apply the patch to enable SGX support in UOS in the SDC scenario:
   Apply the patch to enable SGX support in the User VM in the SDC scenario:

   .. code-block:: bash

@@ -1,7 +1,7 @@

.. _sign_clear_linux_image:

How to sign binaries of the Clear Linux image
#############################################
Sign Clear Linux Image Binaries
###############################

In this tutorial, you will see how to sign the binaries of a Clear Linux image so that you can
boot it through a secure boot enabled OVMF.

@@ -35,4 +35,4 @@ Steps to sign the binaries of the Clear Linux image

   $ sudo sh sign_image.sh $PATH_TO_CLEAR_IMAGE $PATH_TO_KEY $PATH_TO_CERT

#. **clear-xxx-kvm.img.signed** will be generated in the same folder as the original clear-xxx-kvm.img.
@@ -1,8 +1,7 @@

.. _skl-nuc-gpu-passthrough:

GPU Passthrough on Skylake NUC
##############################

Enable GPU Passthrough on the Skylake NUC
#########################################

This community reference release for the Skylake NUC with GPU
passthrough is a one-time snapshot release and is not supported

@@ -21,7 +20,7 @@ Software Configuration

  <https://github.com/projectacrn/acrn-hypervisor/releases/tag/acrn-2018w39.6-140000p>`_
* `acrn-kernel tag acrn-2018w39.6-140000p
  <https://github.com/projectacrn/acrn-kernel/releases/tag/acrn-2018w39.6-140000p>`_
* Clear Linux OS: version: 25130 (UOS and SOS use this version)
* Clear Linux OS: version: 25130 (User VM and Service VM use this version)

Source code patches are provided in `skl-patches-for-acrn.tar file
<../_static/downloads/skl-patches-for-acrn.tar>`_ to work around or add support for

@@ -95,10 +94,10 @@ Please follow the :ref:`kbl-nuc-sdc`, with the following changes:

#. Don't Enable weston service (skip this step found in the NUC's getting
   started guide).

#. Set up Reference UOS by running the modified ``launch_uos.sh`` in
#. Set up Reference User VM by running the modified ``launch_uos.sh`` in
   ``acrn-hypervisor/devicemodel/samples/nuc/launch_uos.sh``

#. After UOS is launched, do these steps to run GFX workloads:
#. After User VM is launched, do these steps to run GFX workloads:

   a) install weston and glmark2::

@@ -1,7 +1,7 @@

.. _sriov_virtualization:

SR-IOV Virtualization
=====================
Enable SR-IOV Virtualization
############################

SR-IOV (Single Root Input/Output Virtualization) can isolate PCIe devices
to improve performance that is similar to bare-metal levels. SR-IOV consists

@@ -10,10 +10,12 @@ extended capability and manages entire physical devices; and VF (Virtual
Function), a "lightweight" PCIe function which is a passthrough device for
VMs.

For details, refer to Chapter 9 of PCI-SIG's `PCI Express Base SpecificationRevision 4.0, Version 1.0 <https://pcisig.com/pci-express-architecture-configuration-space-test-specification-revision-40-version-10>`_.
For details, refer to Chapter 9 of PCI-SIG's
`PCI Express Base Specification Revision 4.0, Version 1.0
<https://pcisig.com/pci-express-architecture-configuration-space-test-specification-revision-40-version-10>`_.

SR-IOV Architectural Overview
-----------------------------
*****************************

.. figure:: images/sriov-image1.png
   :align: center
@@ -31,7 +33,7 @@ SR-IOV Architectural Overview

- **PF** - A PCIe Function that supports the SR-IOV capability
  and is accessible to an SR-PCIM, a VI, or an SI.

- **VF** - A “light-weight” PCIe Function that is directly accessible by an
- **VF** - A "light-weight" PCIe Function that is directly accessible by an
  SI.

SR-IOV Extended Capability
@@ -39,7 +41,7 @@ SR-IOV Extended Capability

The SR-IOV Extended Capability defined here is a PCIe extended
capability that must be implemented in each PF device that supports the
SR-IOV feature. This capability is used to describe and control a PF’s
SR-IOV feature. This capability is used to describe and control a PF's
SR-IOV Capabilities.

.. figure:: images/sriov-image2.png

@@ -84,22 +86,22 @@ SR-IOV Capabilities.

  supported by the PF.

- **System Page Size** - The field that defines the page size the system
  will use to map the VFs’ memory addresses. Software must set the
  will use to map the VFs' memory addresses. Software must set the
  value of the *System Page Size* to one of the page sizes set in the
  *Supported Page Sizes* field.

- **VF BARs** - Fields that must define the VF’s Base Address
- **VF BARs** - Fields that must define the VF's Base Address
  Registers (BARs). These fields behave as normal PCI BARs.

- **VF Migration State Array Offset** - Register that contains a
  PF BAR relative pointer to the VF Migration State Array.

- **VF Migration State Array** – Located using the VF Migration
- **VF Migration State Array** - Located using the VF Migration
  State Array Offset register of the SR-IOV Capability block.

For details, refer to the *PCI Express Base Specification Revision 4.0, Version 1.0 Chapter 9.3.3*.

SR-IOV Architecture In ACRN
SR-IOV Architecture in ACRN
---------------------------

.. figure:: images/sriov-image3.png

@@ -111,7 +113,7 @@ SR-IOV Architecture In ACRN

1. A hypervisor detects a SR-IOV capable PCIe device in the physical PCI
   device enumeration phase.

2. The hypervisor intercepts the PF’s SR-IOV capability and accesses whether
2. The hypervisor intercepts the PF's SR-IOV capability and assesses whether
   to enable/disable VF devices based on the *VF\_ENABLE* state. All
   read/write requests for a PF device pass through to the PF physical
   device.
@@ -122,9 +124,9 @@ SR-IOV Architecture In ACRN

   initialization. The hypervisor uses *Subsystem Vendor ID* to detect the
   SR-IOV VF physical device instead of *Vendor ID* since no valid
   *Vendor ID* exists for the SR-IOV VF physical device. The VF BARs are
   initialized by its associated PF’s SR-IOV capabilities, not PCI
   initialized by its associated PF's SR-IOV capabilities, not PCI
   standard BAR registers. The MSIx mapping base address is also from the
   PF’s SR-IOV capabilities, not PCI standard BAR registers.
   PF's SR-IOV capabilities, not PCI standard BAR registers.

SR-IOV Passthrough VF Architecture In ACRN
------------------------------------------

@@ -144,8 +146,8 @@ SR-IOV Passthrough VF Architecture In ACRN

3. The hypervisor emulates *Device ID/Vendor ID* and *Memory Space Enable
   (MSE)* in the configuration space for an assigned SR-IOV VF device. The
   assigned VF *Device ID* comes from its associated PF’s capability. The
   *Vendor ID* is the same as the PF’s *Vendor ID* and the *MSE* is always
   assigned VF *Device ID* comes from its associated PF's capability. The
   *Vendor ID* is the same as the PF's *Vendor ID* and the *MSE* is always
   set when reading the SR-IOV VF device's *CONTROL* register.

4. The vendor-specific VF driver in the target VM probes the assigned SR-IOV

@@ -180,7 +182,7 @@ The hypervisor intercepts all SR-IOV capability access and checks the

*VF\_ENABLE* state. If *VF\_ENABLE* is set, the hypervisor creates n
virtual devices after 100ms so that VF physical devices have enough time to
be created. The Service VM waits 100ms and then only accesses the first VF
device’s configuration space including *Class Code, Reversion ID, Subsystem
device's configuration space including *Class Code, Revision ID, Subsystem
Vendor ID, Subsystem ID*. The Service VM uses the first VF device
information to initialize subsequent VF devices.

@@ -238,8 +240,10 @@ only support LaaG (Linux as a Guest).

#. Input the ``\ *echo n > /sys/class/net/enp109s0f0/device/sriov\_numvfs*\``
   command in the Service VM to enable n VF devices for the first PF
   device (\ *enp109s0f0)*. The number *n* can’t be more than *TotalVFs*
   which comes from the return value of command ``cat /sys/class/net/enp109s0f0/device/sriov\_totalvfs``. Here we use *n = 2* as an example.
   device (\ *enp109s0f0)*. The number *n* can't be more than *TotalVFs*,
   which comes from the return value of the command
   ``cat /sys/class/net/enp109s0f0/device/sriov\_totalvfs``. Here we
   use *n = 2* as an example (the full sequence is shown below).
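
   For reference (the interface name follows the example above; substitute
   your own PF), the sequence with *n = 2* is:

   .. code-block:: bash

      # query how many VFs this PF supports, then create two of them
      cat /sys/class/net/enp109s0f0/device/sriov_totalvfs
      echo 2 | sudo tee /sys/class/net/enp109s0f0/device/sriov_numvfs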
   .. figure:: images/sriov-image10.png
      :align: center

@@ -267,7 +271,7 @@ only support LaaG (Linux as a Guest).

      iv. *echo "0000:6d:10.0" > /sys/bus/pci/drivers/pci-stub/bind*

   b. Add the SR-IOV VF device parameter (“*-s X, passthru,6d/10/0*\ ”) in
   b. Add the SR-IOV VF device parameter ("*-s X, passthru,6d/10/0*\ ") in
      the launch User VM script

      .. figure:: images/sriov-image12.png

@@ -1,19 +1,19 @@

.. _static_ip:

Using a static IP address
#########################
Set Up a Static IP Address
##########################

When you install ACRN on your system following the :ref:`getting_started`, a
bridge called ``acrn-br0`` will be created and attached to the Ethernet network
When you install ACRN on your system following :ref:`getting_started`, a
bridge called ``acrn-br0`` is created and attached to the Ethernet network
interface of the platform. By default, the bridge gets its network configuration
using DHCP. This guide will explain how to modify the system to use a static IP
using DHCP. This guide explains how to modify the system to use a static IP
address. You need ``root`` privileges to make these changes to the system.

ACRN Network Setup
******************

The ACRN Service OS is based on `Clear Linux OS`_ and it uses `systemd-networkd`_
to set up the Service OS networking. A few files are responsible for setting up the
The ACRN Service VM is based on `Clear Linux OS`_ and it uses `systemd-networkd`_
to set up the Service VM networking. A few files are responsible for setting up the
ACRN bridge (``acrn-br0``), the TAP device (``tap0``), and how these are all
connected. Those files are installed in ``/usr/lib/systemd/network``
on the target device and can also be found under ``misc/acrnbridge`` in the source code.
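
For illustration, a static configuration for the bridge can be supplied by a
drop-in ``.network`` file under ``/etc/systemd/network`` (which
systemd-networkd reads ahead of ``/usr/lib/systemd/network``); the file name
and addresses below are examples:

.. code-block:: bash

   # write an illustrative static configuration for acrn-br0;
   # adapt the addresses to your network, then restart networkd
   sudo tee /etc/systemd/network/acrn-br0.network <<'EOF'
   [Match]
   Name=acrn-br0

   [Network]
   Address=192.168.1.10/24
   Gateway=192.168.1.1
   DNS=192.168.1.1
   EOF
   sudo systemctl restart systemd-networkd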
||||