doc: change UOS/SOS to User VM/Service VM

First pass at updating obsolete usage of "UOS" and "SOS"

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder 2020-04-10 10:44:30 -07:00 committed by David Kinder
parent f5f16f4e64
commit 237997f3f9
49 changed files with 833 additions and 822 deletions

View File

@ -3,7 +3,7 @@
Device Model APIs
#################
This section contains APIs for the SOS Device Model services. Sources
This section contains APIs for the Service VM Device Model services. Sources
for the Device Model are found in the devicemodel folder of the `ACRN
hypervisor GitHub repo`_

View File

@ -35,11 +35,11 @@ background introduction, please refer to:
virtio-echo is implemented as a virtio legacy device in the ACRN device
model (DM), and is registered as a PCI virtio device to the guest OS
(UOS). The virtio-echo software has three parts:
(User VM). The virtio-echo software has three parts:
- **virtio-echo Frontend Driver**: This driver runs in the UOS. It prepares
- **virtio-echo Frontend Driver**: This driver runs in the User VM. It prepares
the RXQ and notifies the backend to receive incoming data when the
UOS starts. Second, it copies the received data from the RXQ to TXQ
User VM starts. Second, it copies the received data from the RXQ to TXQ
and sends them to the backend. After receiving the message that the
transmission is completed, it starts another round of reception
and transmission, and keeps running until a specified number of cycle
@ -71,30 +71,30 @@ Virtualization Overhead Analysis
********************************
Let's analyze the overhead of the VBS-K framework. As we know, the VBS-K
handles notifications in the SOS kernel instead of in the SOS user space
handles notifications in the Service VM kernel instead of in the Service VM user space
DM. This can avoid overhead from switching between kernel space and user
space. Virtqueues are allocated by UOS, and virtqueue information is
space. Virtqueues are allocated by the User VM, and virtqueue information is
passed to the VBS-K backend by the virtio-echo driver in the DM; thus
virtqueues can be shared between UOS and SOS. There is no copy overhead
virtqueues can be shared between the User VM and the Service VM. There is no copy overhead
in this sense. The overhead of the VBS-K framework mainly consists of two
parts: kick overhead and notify overhead.
- **Kick Overhead**: The UOS gets trapped when it executes sensitive
- **Kick Overhead**: The User VM gets trapped when it executes sensitive
instructions that notify the hypervisor first. The notification is
assembled into an IOREQ, saved in a shared IO page, and then
forwarded to the VHM module by the hypervisor. The VHM notifies its
client for this IOREQ; in this case, the client is the vbs-echo
backend driver. Kick overhead is defined as the interval from the
beginning of UOS trap to a specific VBS-K driver e.g. when
beginning of the User VM trap to when a specific VBS-K driver, e.g.,
virtio-echo gets notified.
- **Notify Overhead**: After the data in the virtqueue is processed by the
backend driver, vbs-echo calls the VHM module to inject an interrupt
into the frontend. The VHM then uses the hypercall provided by the
hypervisor, which causes a UOS VMEXIT. The hypervisor finally injects
an interrupt into the vLAPIC of the UOS and resumes it. The UOS
hypervisor, which causes a User VM VMEXIT. The hypervisor finally injects
an interrupt into the vLAPIC of the User VM and resumes it. The User VM
therefore receives the interrupt notification. Notify overhead is
defined as the interval from the beginning of the interrupt injection
to when the UOS starts interrupt processing.
to when the User VM starts interrupt processing.
The overhead of a specific application based on VBS-K includes two
parts: VBS-K framework overhead and application-specific overhead.
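As a rough illustration of how these two intervals can be measured, the
sketch below timestamps each boundary with the CPU timestamp counter. The
sample points and the ``rdtsc()`` helper are illustrative assumptions, not
part of the virtio-echo sources:

.. code-block:: c

   #include <stdint.h>

   /* Read the x86 timestamp counter. */
   static inline uint64_t rdtsc(void)
   {
       uint32_t lo, hi;

       __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
       return ((uint64_t)hi << 32U) | lo;
   }

   /*
    * Hypothetical sample points:
    *   t0 - the User VM trap enters the hypervisor
    *   t1 - the vbs-echo backend driver is notified
    *   t2 - the backend starts injecting the interrupt
    *   t3 - the User VM begins interrupt processing
    *
    * kick overhead   = t1 - t0
    * notify overhead = t3 - t2
    */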

View File

@ -3364,8 +3364,8 @@ The data structure types include struct, union, and enum.
This rule applies to the data structure with all the following properties:
a) The data structure is used by multiple modules;
b) The corresponding resource is exposed to external components, such as SOS or
UOS;
b) The corresponding resource is exposed to external components, such as
the Service VM or a User VM;
c) The name meaning is simplistic or common, such as vcpu or vm.
Compliant example::

View File

@ -17,12 +17,12 @@ the below diagram.
HBA is registered to the PCI system with device id 0x2821 and vendor id
0x8086. Its memory registers are mapped in BAR 5. It only supports 6
ports (refer to ICH8 AHCI). AHCI driver in the Guest OS can access HBA in DM
ports (refer to ICH8 AHCI). The AHCI driver in the User VM can access the HBA in the DM
through the PCI BAR. The HBA can inject MSI interrupts through the PCI
framework.
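The dispatch from a trapped BAR 5 offset to a per-port handler can be
pictured with the following sketch. The constants follow the standard AHCI
register layout, and the handler name is a placeholder rather than the
actual DM routine:

.. code-block:: c

   #include <stdint.h>

   #define AHCI_PORT_BASE  0x100U  /* port registers start at offset 0x100 */
   #define AHCI_PORT_SIZE  0x80U   /* each port occupies 0x80 bytes */

   /* Placeholder for the per-port register emulation routine. */
   void ahci_port_write(int port, uint32_t reg, uint32_t value);

   /* Route a write trapped on BAR 5 to the owning port handler. */
   static void ahci_bar5_write(uint64_t offset, uint32_t value)
   {
       if (offset >= AHCI_PORT_BASE) {
           int port = (int)((offset - AHCI_PORT_BASE) / AHCI_PORT_SIZE);
           uint32_t reg = (uint32_t)((offset - AHCI_PORT_BASE) % AHCI_PORT_SIZE);

           ahci_port_write(port, reg, value);
       }
       /* Offsets below 0x100 address the generic host control registers. */
   }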
When the application in the Guest OS reads data from /dev/sda, the request will
send through the AHCI driver and then the PCI driver. The Guest VM will trap to
When the application in the User VM reads data from /dev/sda, the request will
be sent through the AHCI driver and then the PCI driver. The User VM will trap to the
hypervisor, and the hypervisor will dispatch the request to the DM. According to the
offset in the BAR, the request will be dispatched to the port control handler.
Then the request is parsed into a block I/O request which can be processed
@ -39,6 +39,6 @@ regular file.
For example,
SOS: -s 20,ahci,hd:/dev/mmcblk0p1
Service VM: -s 20,ahci,hd:/dev/mmcblk0p1
UOS: /dev/sda
User VM: /dev/sda

View File

@ -315,7 +315,7 @@ High Level Architecture
***********************
:numref:`gvt-arch` shows the overall architecture of GVT-g, based on the
ACRN hypervisor, with SOS as the privileged VM, and multiple user
ACRN hypervisor, with the Service VM as the privileged VM, and multiple user
guests. A GVT-g device model, working with the ACRN hypervisor,
implements the policies of trap and pass-through. Each guest runs the
native graphics driver and can directly access performance-critical
@ -323,7 +323,7 @@ resources: the Frame Buffer and Command Buffer, with resource
partitioning (as presented later). To protect privileged resources, that
is, the I/O registers and PTEs, corresponding accesses from the graphics
driver in user VMs are trapped and forwarded to the GVT device model in
SOS for emulation. The device model leverages i915 interfaces to access
Service VM for emulation. The device model leverages i915 interfaces to access
the physical GPU.
In addition, the device model implements a GPU scheduler that runs
@ -338,7 +338,7 @@ direct GPU execution. With that, GVT-g can achieve near-native
performance for a VM workload.
In :numref:`gvt-arch`, the yellow GVT device model works as a client on
top of an i915 driver in the SOS. It has a generic Mediated Pass-Through
top of an i915 driver in the Service VM. It has a generic Mediated Pass-Through
(MPT) interface, compatible with all types of hypervisors. For ACRN,
some extra development work is needed for such MPT interfaces. For
example, we need some changes in ACRN-DM to make ACRN compatible with
@ -368,7 +368,7 @@ trap-and-emulation, including MMIO virtualization, interrupt
virtualization, and display virtualization. It also handles and
processes all the requests internally, such as, command scan and shadow,
schedules them in the proper manner, and finally submits to
the SOS i915 driver.
the Service VM i915 driver.
.. figure:: images/APL_GVT-g-DM.png
:width: 800px
@ -446,7 +446,7 @@ interrupts are categorized into three types:
exception to this is the VBlank interrupt. Due to the demands of user
space compositors, such as Wayland, which requires a flip done event
to be synchronized with a VBlank, this interrupt is forwarded from
SOS to UOS when SOS receives it from the hardware.
Service VM to User VM when Service VM receives it from the hardware.
- Event-based GPU interrupts are emulated by the emulation logic. For
example, AUX Channel Interrupt.
@ -524,7 +524,7 @@ later after performing a few basic checks and verifications.
Display Virtualization
----------------------
GVT-g reuses the i915 graphics driver in the SOS to initialize the Display
GVT-g reuses the i915 graphics driver in the Service VM to initialize the Display
Engine, and then manages the Display Engine to show different VM frame
buffers. When two vGPUs have the same resolution, only the frame buffer
locations are switched.
@ -550,7 +550,7 @@ A typical automotive use case is where there are two displays in the car
and each one needs to show one domain's content, with the two domains
being the Instrument cluster and the In Vehicle Infotainment (IVI). As
shown in :numref:`direct-display`, this can be accomplished through the direct
display model of GVT-g, where the SOS and UOS are each assigned all HW
display model of GVT-g, where the Service VM and User VM are each assigned all HW
planes of two different pipes. GVT-g has a concept of display owner on a
per HW plane basis. If it determines that a particular domain is the
owner of a HW plane, then it allows the domain's MMIO register write to
@ -567,15 +567,15 @@ Indirect Display Model
Indirect Display Model
For security or fastboot reasons, it may be determined that the UOS is
For security or fastboot reasons, it may be determined that the User VM is
either not allowed to display its content directly on the HW or it may
be too late before it boots up and displays its content. In such a
scenario, the responsibility of displaying content on all displays lies
with the SOS. One of the use cases that can be realized is to display the
entire frame buffer of the UOS on a secondary display. GVT-g allows for this
model by first trapping all MMIO writes by the UOS to the HW. A proxy
application can then capture the address in GGTT where the UOS has written
its frame buffer and using the help of the Hypervisor and the SOS's i915
with the Service VM. One of the use cases that can be realized is to display the
entire frame buffer of the User VM on a secondary display. GVT-g allows for this
model by first trapping all MMIO writes by the User VM to the HW. A proxy
application can then capture the address in GGTT where the User VM has written
its frame buffer and using the help of the Hypervisor and the Service VM's i915
driver, can convert the Guest Physical Addresses (GPAs) into Host
Physical Addresses (HPAs) before making a texture source or EGL image
out of the frame buffer and then either post processing it further or
@ -585,33 +585,33 @@ GGTT-Based Surface Sharing
--------------------------
One of the major automotive use cases is called "surface sharing". This
use case requires that the SOS accesses an individual surface or a set of
surfaces from the UOS without having to access the entire frame buffer of
the UOS. Unlike the previous two models, where the UOS did not have to do
anything to show its content and therefore a completely unmodified UOS
could continue to run, this model requires changes to the UOS.
use case requires that the Service VM accesses an individual surface or a set of
surfaces from the User VM without having to access the entire frame buffer of
the User VM. Unlike the previous two models, where the User VM did not have to do
anything to show its content and therefore a completely unmodified User VM
could continue to run, this model requires changes to the User VM.
This model can be considered an extension of the indirect display model.
Under the indirect display model, the UOS's frame buffer was temporarily
Under the indirect display model, the User VM's frame buffer was temporarily
pinned by it in video memory, accessed through the Global Graphics
Translation Table (GGTT). This GGTT-based surface sharing model takes this a
step further by having a compositor of the UOS to temporarily pin all
step further by having a compositor of the User VM temporarily pin all
application buffers into GGTT. It then also requires the compositor to
create a metadata table with relevant surface information such as width,
height, and GGTT offset, and flip that in lieu of the frame buffer.
In the SOS, the proxy application knows that the GGTT offset has been
In the Service VM, the proxy application knows that the GGTT offset has been
flipped, maps it, and through it can access the GGTT offset of an
application that it wants to access. It is worth mentioning that in this
model, UOS applications did not require any changes, and only the
model, User VM applications did not require any changes, and only the
compositor, Mesa, and i915 driver had to be modified.
This model has a major benefit and a major limitation. The
benefit is that since it builds on top of the indirect display model,
there are no special drivers necessary for it on either SOS or UOS.
there are no special drivers necessary for it on either Service VM or User VM.
Therefore, any Real Time Operating System (RTOS) that uses
this model can simply do so without having to implement a driver, the
infrastructure for which may not be present in their operating system.
The limitation of this model is that video memory dedicated for a UOS is
The limitation of this model is that video memory dedicated for a User VM is
generally limited to a couple of hundred MBs. This can easily be
exhausted by a few application buffers so the number and size of buffers
is limited. Since it is not a highly-scalable model, in general, Intel
@ -634,24 +634,24 @@ able to share its pages with another driver within one domain.
Application buffers are backed by i915 Graphics Execution Manager
Buffer Objects (GEM BOs). As in GGTT surface
sharing, this model also requires compositor changes. The compositor of
UOS requests i915 to export these application GEM BOs and then passes
the User VM requests i915 to export these application GEM BOs and then passes
them on to a special driver called the Hyper DMA Buf exporter whose job
is to create a scatter gather list of pages mapped by PDEs and PTEs and
export a Hyper DMA Buf ID back to the compositor.
The compositor then shares this Hyper DMA Buf ID with the SOS's Hyper DMA
The compositor then shares this Hyper DMA Buf ID with the Service VM's Hyper DMA
Buf importer driver which then maps the memory represented by this ID in
the SOS. A proxy application in the SOS can then provide the ID of this driver
to the SOS i915, which can create its own GEM BO. Finally, the application
the Service VM. A proxy application in the Service VM can then provide the ID of this driver
to the Service VM i915, which can create its own GEM BO. Finally, the application
can use it as an EGL image and do any post processing required before
either providing it to the SOS compositor or directly flipping it on a
either providing it to the Service VM compositor or directly flipping it on a
HW plane in the compositor's absence.
This model is highly scalable and can be used to share up to 4 GB worth
of pages. It is also not limited to only sharing graphics buffers. Other
buffers for the IPU and others, can also be shared with it. However, it
does require that the SOS port the Hyper DMA Buffer importer driver. Also,
the SOS OS must comprehend and implement the DMA buffer sharing model.
does require that the Service VM port the Hyper DMA Buffer importer driver. Also,
the Service VM must comprehend and implement the DMA buffer sharing model.
For detailed information about this model, please refer to the `Linux
HYPER_DMABUF Driver High Level Design
@ -669,13 +669,13 @@ Plane-Based Domain Ownership
Plane-Based Domain Ownership
Yet another mechanism for showing content of both the SOS and UOS on the
Yet another mechanism for showing content of both the Service VM and User VM on the
same physical display is called plane-based domain ownership. Under this
model, both the SOS and UOS are provided a set of HW planes that they can
model, both the Service VM and User VM are provided a set of HW planes that they can
flip their contents on to. Since each domain provides its content, there
is no need for any extra composition to be done through the SOS. The display
is no need for any extra composition to be done through the Service VM. The display
controller handles alpha blending contents of different domains on a
single pipe. This saves on any complexity on either the SOS or the UOS
single pipe. This avoids extra complexity in either the Service VM or the User VM
SW stack.
It is important to provide only specific planes and have them statically
@ -689,7 +689,7 @@ show the correct content on them. No other changes are necessary.
While the biggest benefit of this model is that it is extremely simple and
quick to implement, it also has some drawbacks. First, since each domain
is responsible for showing the content on the screen, there is no
control of the UOS by the SOS. If the UOS is untrusted, this could
control of the User VM by the Service VM. If the User VM is untrusted, this could
potentially cause some unwanted content to be displayed. Also, there is
no post processing capability, except that provided by the display
controller (for example, scaling, rotation, and so on). So each domain
@ -834,43 +834,43 @@ Different Schedulers and Their Roles
In the system, there are three different schedulers for the GPU:
- i915 UOS scheduler
- i915 User VM scheduler
- Mediator GVT scheduler
- i915 SOS scheduler
- i915 Service VM scheduler
Since UOS always uses the host-based command submission (ELSP) model,
Since User VM always uses the host-based command submission (ELSP) model,
and it never accesses the GPU or the Graphic Micro Controller (GuC)
directly, its scheduler cannot do any preemption by itself.
The i915 scheduler does ensure batch buffers are
submitted in dependency order, that is, if a compositor had to wait for
an application buffer to finish before its workload can be submitted to
the GPU, then the i915 scheduler of the UOS ensures that this happens.
the GPU, then the i915 scheduler of the User VM ensures that this happens.
The UOS assumes that by submitting its batch buffers to the Execlist
The User VM assumes that by submitting its batch buffers to the Execlist
Submission Port (ELSP), the GPU will start working on them. However,
the MMIO write to the ELSP is captured by the Hypervisor, which forwards
these requests to the GVT module. GVT then creates a shadow context
based on this batch buffer and submits the shadow context to the SOS
based on this batch buffer and submits the shadow context to the Service VM
i915 driver.
However, it is dependent on a second scheduler called the GVT
scheduler. This scheduler is time based and uses a round robin algorithm
to provide a specific time for each UOS to submit its workload when it
is considered as a "render owner". The workload of the UOSs that are not
to provide a specific time for each User VM to submit its workload when it
is considered a "render owner". The workloads of the User VMs that are not
render owners during a specific time period end up waiting in the
virtual GPU context until the GVT scheduler makes them render owners.
The GVT shadow context submits only one workload at
a time, and once the workload is finished by the GPU, it copies any
context state back to DomU and sends the appropriate interrupts before
picking up any other workloads from either this UOS or another one. This
picking up any other workloads from either this User VM or another one. This
also implies that this scheduler does not do any preemption of
workloads.
Finally, there is the i915 scheduler in the SOS. This scheduler uses the
GuC or ELSP to do command submission of SOS local content as well as any
content that GVT is submitting to it on behalf of the UOSs. This
Finally, there is the i915 scheduler in the Service VM. This scheduler uses the
GuC or ELSP to do command submission of Service VM local content as well as any
content that GVT is submitting to it on behalf of the User VMs. This
scheduler uses GuC or ELSP to preempt workloads. GuC has four different
priority queues, but the SOS i915 driver uses only two of them. One of
priority queues, but the Service VM i915 driver uses only two of them. One of
them is considered high priority and the other is normal priority with a
GuC rule being that any command submitted on the high priority queue
would immediately try to preempt any workload submitted on the normal
@ -893,8 +893,8 @@ preemption of lower-priority workload.
Scheduling policies are customizable and left to customers to change if
they are not satisfied with the built-in i915 driver policy, where all
workloads of the SOS are considered higher priority than those of the
UOS. This policy can be enforced through an SOS i915 kernel command line
workloads of the Service VM are considered higher priority than those of the
User VM. This policy can be enforced through a Service VM i915 kernel command line
parameter, and can replace the default in-order command submission (no
preemption) policy.
@ -922,7 +922,7 @@ OS and an Android Guest OS.
AcrnGT in kernel
=================
The AcrnGT module in the SOS kernel acts as an adaption layer to connect
The AcrnGT module in the Service VM kernel acts as an adaptation layer to connect
between GVT-g in the i915, the VHM module, and the ACRN-DM user space
application:
@ -930,7 +930,7 @@ application:
services to it, including set and unset trap areas, set and unset
write-protection pages, etc.
- It calls the VHM APIs provided by the ACRN VHM module in the SOS
- It calls the VHM APIs provided by the ACRN VHM module in the Service VM
kernel, to eventually call into the routines provided by ACRN
hypervisor through hypercalls.

View File

@ -3,8 +3,8 @@
Device Model high-level design
##############################
Hypervisor Device Model (DM) is a QEMU-like application in SOS
responsible for creating a UOS VM and then performing devices emulation
The Hypervisor Device Model (DM) is a QEMU-like application in the Service VM
responsible for creating a User VM and then performing device emulation
based on command line configurations.
.. figure:: images/dm-image75.png
@ -14,18 +14,18 @@ based on command line configurations.
Device Model Framework
:numref:`dm-framework` above gives a big picture overview of DM
framework. There are 3 major subsystems in SOS:
framework. There are 3 major subsystems in Service VM:
- **Device Emulation**: DM provides backend device emulation routines for
frontend UOS device drivers. These routines register their I/O
frontend User VM device drivers. These routines register their I/O
handlers to the I/O dispatcher inside the DM. When the VHM
assigns any I/O request to the DM, the I/O dispatcher
dispatches this request to the corresponding device emulation
routine to do the emulation.
- I/O Path in SOS:
- I/O Path in Service VM:
- HV initializes an I/O request and notifies VHM driver in SOS
- HV initializes an I/O request and notifies VHM driver in Service VM
through upcall.
- VHM driver dispatches I/O requests to I/O clients and notifies the
clients (in this case the client is the DM which is notified
@ -34,9 +34,9 @@ framework. There are 3 major subsystems in SOS:
- I/O dispatcher notifies VHM driver that the I/O request is completed
through char device
- VHM driver notifies HV on the completion through hypercall
- DM injects VIRQ to UOS frontend device through hypercall
- DM injects VIRQ to User VM frontend device through hypercall
- VHM: Virtio and Hypervisor Service Module is a kernel module in SOS as a
- VHM: Virtio and Hypervisor Service Module is a kernel module in Service VM as a
middle layer to support DM. Refer to :ref:`virtio-APIs` for details
This section introduces how the acrn-dm application is configured and
@ -136,7 +136,7 @@ DM Initialization
- **Option Parsing**: DM parses options from command line inputs.
- **VM Create**: DM calls ioctl to SOS VHM, then SOS VHM makes
- **VM Create**: DM calls ioctl to Service VM VHM, then Service VM VHM makes
hypercalls to HV to create a VM; it returns a vmid for a
dedicated VM.
@ -147,8 +147,8 @@ DM Initialization
with VHM and HV. Refer to :ref:`hld-io-emulation` and
:ref:`IO-emulation-in-sos` for more details.
- **Memory Setup**: UOS memory is allocated from SOS
memory. This section of memory will use SOS hugetlbfs to allocate
- **Memory Setup**: User VM memory is allocated from Service VM
memory. This section of memory will use Service VM hugetlbfs to allocate
linear continuous host physical address for guest memory. It will
try to get the page size as big as possible to guarantee maximum
utilization of TLB. It then invokes a hypercall to HV for its EPT
@ -175,7 +175,7 @@ DM Initialization
according to acrn-dm command line configuration and derived from
their default value.
- **SW Load**: DM prepares UOS VM's SW configuration such as kernel,
- **SW Load**: DM prepares User VM's SW configuration such as kernel,
ramdisk, and zeropage, according to these memory locations:
.. code-block:: c
@ -186,7 +186,7 @@ DM Initialization
#define ZEROPAGE_LOAD_OFF(ctx) (ctx->lowmem - 4*KB)
#define KERNEL_LOAD_OFF(ctx) (16*MB)
For example, if the UOS memory is set as 800M size, then **SW Load**
For example, if the User VM memory is set to 800M, then **SW Load**
will prepare its ramdisk (if there is one) at 0x31c00000 (796M), bootargs at
0x31ffe000 (800M - 8K), kernel entry at 0x31ffe800 (800M - 6K), and the zero
page at 0x31fff000 (800M - 4K). The hypervisor will finally run VM based
@ -277,8 +277,8 @@ VHM
VHM overview
============
Device Model manages UOS VM by accessing interfaces exported from VHM
module. VHM module is an SOS kernel driver. The ``/dev/acrn_vhm`` node is
Device Model manages the User VM by accessing interfaces exported from the VHM
module. The VHM module is a Service VM kernel driver. The ``/dev/acrn_vhm`` node is
created when VHM module is initialized. Device Model follows the standard
Linux char device API (ioctl) to access the functionality of VHM.
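Conceptually, the DM's use of this node looks like the sketch below. The
request code is a placeholder; the real request definitions come from the
VHM ioctl interface, and most of them end up as hypercalls:

.. code-block:: c

   #include <fcntl.h>
   #include <stdio.h>
   #include <unistd.h>

   int main(void)
   {
       /* Open the char device exported by the VHM kernel module. */
       int fd = open("/dev/acrn_vhm", O_RDWR);

       if (fd < 0) {
           perror("open /dev/acrn_vhm");
           return 1;
       }

       /*
        * A real caller would now issue requests such as VM creation:
        *
        *     ioctl(fd, IC_SOME_REQUEST, &args);
        *
        * where IC_SOME_REQUEST stands in for the actual request codes
        * defined by the VHM driver.
        */

       close(fd);
       return 0;
   }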
@ -287,8 +287,8 @@ hypercall to the hypervisor. There are two exceptions:
- I/O request client management is implemented in VHM.
- For memory range management of UOS VM, VHM needs to save all memory
range info of UOS VM. The subsequent memory mapping update of UOS VM
- For memory range management of User VM, VHM needs to save all memory
range info of User VM. The subsequent memory mapping update of User VM
needs this information.
.. figure:: images/dm-image108.png
@ -306,10 +306,10 @@ VHM ioctl interfaces
.. _IO-emulation-in-sos:
I/O Emulation in SOS
********************
I/O Emulation in Service VM
***************************
I/O requests from the hypervisor are dispatched by VHM in the SOS kernel
I/O requests from the hypervisor are dispatched by VHM in the Service VM kernel
to a registered client, responsible for further processing the
I/O access and notifying the hypervisor on its completion.
@ -317,8 +317,8 @@ Initialization of Shared I/O Request Buffer
===========================================
For each VM, there is a shared 4-KByte memory region used for I/O request
communication between the hypervisor and SOS. Upon initialization
of a VM, the DM (acrn-dm) in SOS userland first allocates a 4-KByte
communication between the hypervisor and Service VM. Upon initialization
of a VM, the DM (acrn-dm) in Service VM userland first allocates a 4-KByte
page and passes the GPA of the buffer to HV via hypercall. The buffer is
used as an array of 16 I/O request slots with each I/O request being
256 bytes. This array is indexed by vCPU ID. Thus, each vCPU of the VM
@ -330,7 +330,7 @@ cannot issue multiple I/O requests at the same time.
I/O Clients
===========
An I/O client is either a SOS userland application or a SOS kernel space
An I/O client is either a Service VM userland application or a Service VM kernel space
module responsible for handling I/O access whose address
falls in a certain range. Each VM has an array of registered I/O
clients which are initialized with a fixed I/O address range, plus a PCI
@ -389,14 +389,14 @@ Processing I/O Requests
:align: center
:name: io-sequence-sos
I/O request handling sequence in SOS
I/O request handling sequence in Service VM
:numref:`io-sequence-sos` above illustrates the interactions among the
hypervisor, VHM,
and the device model for handling I/O requests. The main interactions
are as follows:
1. The hypervisor makes an upcall to SOS as an interrupt
1. The hypervisor makes an upcall to Service VM as an interrupt
handled by the upcall handler in VHM.
2. The upcall handler schedules the execution of the I/O request
@ -616,11 +616,11 @@ to destination emulated devices:
.. code-block:: c
/* Generate one msi interrupt to UOS, the index parameter indicates
/* Generate one msi interrupt to User VM, the index parameter indicates
* the msi number from its PCI msi capability. */
void pci_generate_msi(struct pci_vdev *pi, int index);
/* Generate one msix interrupt to UOS, the index parameter indicates
/* Generate one msix interrupt to User VM, the index parameter indicates
* the msix number from its PCI msix bar. */
void pci_generate_msix(struct pci_vdev *pi, int index);
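For example, an emulated device might raise its MSI once it has finished
servicing a request, along the lines of this sketch (the completion hook
and its name are hypothetical):

.. code-block:: c

   /* Hypothetical completion hook of an emulated PCI device. */
   static void demo_request_done(struct pci_vdev *pi)
   {
       /* ... update the device's status registers first ... */

       /* Raise the device's first MSI vector toward the User VM. */
       pci_generate_msi(pi, 0);
   }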
@ -984,11 +984,11 @@ potentially error-prone.
ACPI Emulation
--------------
An alternative ACPI resource abstraction option is for the SOS (SOS_VM) to
own all devices and emulate a set of virtual devices for the UOS (POST_LAUNCHED_VM).
An alternative ACPI resource abstraction option is for the Service VM to
own all devices and emulate a set of virtual devices for the User VM (POST_LAUNCHED_VM).
This is the most popular ACPI resource model for virtualization,
as shown in the picture below. ACRN currently
uses device emulation plus some device passthrough for UOS.
uses device emulation plus some device passthrough for User VM.
.. figure:: images/dm-image52.png
:align: center
@ -1001,11 +1001,11 @@ different components:
- **Hypervisor** - ACPI is transparent to the Hypervisor, and has no knowledge
of ACPI at all.
- **SOS** - All ACPI resources are physically owned by SOS, and enumerates
- **Service VM** - All ACPI resources are physically owned by the Service VM, which enumerates
all ACPI tables and devices.
- **UOS** - Virtual ACPI resources, exposed by device model, are owned by
UOS.
- **User VM** - Virtual ACPI resources, exposed by device model, are owned by
User VM.
ACPI emulation code of device model is found in
``hw/platform/acpi/acpi.c``
@ -1095,10 +1095,10 @@ basl_compile for each table. basl_compile does the following:
basl_end(&io[0], &io[1]);
}
After handling each entry, virtual ACPI tables are present in UOS
After handling each entry, virtual ACPI tables are present in User VM
memory.
For passthrough dev in UOS, we may need to add some ACPI description
For a passthrough device in the User VM, we may need to add some ACPI description
in the virtual DSDT table. There is one hook (passthrough_write_dsdt) in
``hw/pci/passthrough.c`` for this. The following source code shows how it
calls different functions to add different content for each vendor and
@ -1142,7 +1142,7 @@ device id:
}
For instance, write_dsdt_urt1 provides ACPI contents for Bluetooth
UART device when passthroughed to UOS. It provides virtual PCI
UART device when passed through to the User VM. It provides the virtual PCI
device/function as _ADR. Together with other descriptions, it can be used for
Bluetooth UART enumeration.
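Such a hook typically emits a small ASL fragment. The sketch below shows the
general shape, assuming a printf-style ``dsdt_line()`` helper like the one
used in ``hw/platform/acpi/acpi.c``; the device name and values are only
illustrative:

.. code-block:: c

   /* Illustrative only: emit a minimal DSDT entry for a passthrough UART. */
   static void write_dsdt_example_uart(int slot, int func)
   {
       dsdt_line("Device (URT1)");
       dsdt_line("{");
       dsdt_line("    Name (_ADR, 0x%04X%04X)", slot, func);
       dsdt_line("    Name (_DDN, \"Example passthrough UART\")");
       dsdt_line("}");
   }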
@ -1174,19 +1174,19 @@ Bluetooth UART enumeration.
PM in Device Model
******************
PM module in Device Model emulate the UOS low power state transition.
The PM module in the Device Model emulates the User VM low power state transition.
Each time UOS writes an ACPI control register to initialize low power
Each time the User VM writes an ACPI control register to initiate a low power
state transition, the writing operation is trapped to DM as an I/O
emulation request by the I/O emulation framework.
To emulate UOS S5 entry, DM will destroy I/O request client, release
allocated UOS memory, stop all created threads, destroy UOS VM, and exit
To emulate User VM S5 entry, DM will destroy I/O request client, release
allocated User VM memory, stop all created threads, destroy User VM, and exit
DM. To emulate S5 exit, a fresh DM started by the VM manager is used.
To emulate UOS S3 entry, DM pauses the UOS VM, stops the UOS watchdog,
and waits for a resume signal. When the UOS should exit from S3, DM will
get a wakeup signal and reset the UOS VM to emulate the UOS exit from
To emulate User VM S3 entry, DM pauses the User VM, stops the User VM watchdog,
and waits for a resume signal. When the User VM should exit from S3, DM will
get a wakeup signal and reset the User VM to emulate the User VM exit from
S3.
Pass-through in Device Model

View File

@ -13,21 +13,21 @@ Shared Buffer is a ring buffer divided into predetermined-size slots. There
are two use scenarios of Sbuf:
- sbuf can serve as a lockless ring buffer to share data from ACRN HV to
SOS in non-overwritten mode. (Writing will fail if an overrun
Service VM in non-overwritten mode. (Writing will fail if an overrun
happens.)
- sbuf can serve as a conventional ring buffer in hypervisor in
over-written mode. A lock is required to synchronize access by the
producer and consumer.
Both ACRNTrace and ACRNLog use sbuf as a lockless ring buffer. The Sbuf
is allocated by SOS and assigned to HV via a hypercall. To hold pointers
is allocated by Service VM and assigned to HV via a hypercall. To hold pointers
to sbuf passed down via hypercall, an array ``sbuf[ACRN_SBUF_ID_MAX]``
is defined in per_cpu region of HV, with predefined sbuf id to identify
the usage, such as ACRNTrace, ACRNLog, etc.
For each physical CPU there is a dedicated Sbuf. Only a single producer
is allowed to put data into that Sbuf in HV, and a single consumer is
allowed to get data from Sbuf in SOS. Therefore, no lock is required to
allowed to get data from Sbuf in Service VM. Therefore, no lock is required to
synchronize access by the producer and consumer.
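A minimal sketch of the single-producer/single-consumer discipline in
non-overwritten mode is shown below; the field names are simplified and do
not match the actual ``shared_buf`` definition in ``sbuf.h``:

.. code-block:: c

   #include <stdint.h>

   /* Simplified view of a shared ring buffer with fixed-size slots. */
   struct demo_sbuf {
       uint32_t head;      /* next slot the consumer will read  */
       uint32_t tail;      /* next slot the producer will write */
       uint32_t ele_size;  /* size of one element in bytes      */
       uint32_t ele_num;   /* number of slots in the ring       */
       uint8_t  data[];    /* the slots themselves              */
   };

   /* Producer side (in HV): fails on overrun instead of overwriting. */
   static int demo_sbuf_put(struct demo_sbuf *sbuf, const uint8_t *elem)
   {
       uint32_t next_tail = (sbuf->tail + 1U) % sbuf->ele_num;
       uint32_t i;

       if (next_tail == sbuf->head) {
           return -1;  /* ring is full: drop the element */
       }

       for (i = 0U; i < sbuf->ele_size; i++) {
           sbuf->data[(sbuf->tail * sbuf->ele_size) + i] = elem[i];
       }
       sbuf->tail = next_tail;  /* publish only after the copy completes */
       return 0;
   }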
sbuf APIs
@ -39,7 +39,7 @@ The sbuf APIs are defined in ``hypervisor/include/debug/sbuf.h``
ACRN Trace
**********
ACRNTrace is a tool running on the Service OS (SOS) to capture trace
ACRNTrace is a tool running on the Service VM to capture trace
data. It allows developers to add performance profiling trace points at
key locations to get a picture of what is going on inside the
hypervisor. Scripts to analyze the collected trace data are also
@ -52,8 +52,8 @@ up:
- **ACRNTrace userland app**: Userland application collecting trace data to
files (Per Physical CPU)
- **SOS Trace Module**: allocates/frees SBufs, creates device for each
SBuf, sets up sbuf shared between SOS and HV, and provides a dev node for the
- **Service VM Trace Module**: allocates/frees SBufs, creates device for each
SBuf, sets up sbuf shared between Service VM and HV, and provides a dev node for the
userland app to retrieve trace data from Sbuf
- **Trace APIs**: provide APIs to generate trace events and insert them into the Sbuf.
@ -71,18 +71,18 @@ See ``hypervisor/include/debug/trace.h``
for trace_entry struct and function APIs.
SOS Trace Module
================
Service VM Trace Module
=======================
The SOS trace module is responsible for:
The Service VM trace module is responsible for:
- allocating sbuf in sos memory range for each physical CPU, and assign
- allocating sbuf in the Service VM memory range for each physical CPU, and
assigning the gpa of the Sbuf to ``per_cpu sbuf[ACRN_TRACE]``
- creating a misc device for each physical CPU
- providing an mmap operation to map the entire Sbuf to userspace for
highly flexible and efficient access.
On SOS shutdown, the trace module is responsible to remove misc devices, free
On Service VM shutdown, the trace module is responsible for removing the misc devices, freeing
the SBufs, and setting ``per_cpu sbuf[ACRN_TRACE]`` to null.
ACRNTrace Application
@ -98,7 +98,7 @@ readable text, and do analysis.
With a debug build, trace components are initialized at boot
time. After initialization, HV writes trace event data into sbuf
until sbuf is full, which can happen easily if the ACRNTrace app is not
consuming trace data from Sbuf on SOS user space.
consuming trace data from the Sbuf in Service VM user space.
Once ACRNTrace is launched, for each physical CPU a consumer thread is
created to periodically read RAW trace data from sbuf and write to a
@ -122,7 +122,7 @@ ACRN Log
********
acrnlog is a tool used to capture ACRN hypervisor log to files on
SOS filesystem. It can run as an SOS service at boot, capturing two
Service VM filesystem. It can run as a Service VM service at boot, capturing two
kinds of logs:
- Current runtime logs;
@ -137,9 +137,9 @@ up:
- **ACRN Log app**: Userland application collecting hypervisor log to
files;
- **SOS ACRN Log Module**: constructs/frees SBufs at reserved memory
- **Service VM ACRN Log Module**: constructs/frees SBufs at reserved memory
area, creates dev for current/last logs, sets up sbuf shared between
SOS and HV, and provides a dev node for the userland app to
Service VM and HV, and provides a dev node for the userland app to
retrieve logs
- **ACRN log support in HV**: put logs at specified loglevel to Sbuf.
@ -157,7 +157,7 @@ system:
- log messages with severity level higher than a specified value will
be put into Sbuf when calling logmsg in hypervisor
- allocate sbuf to accommodate early hypervisor logs before SOS
- allocate sbuf to accommodate early hypervisor logs before Service VM
can allocate and set up sbuf
There are 6 different loglevels, as shown below. The specified
@ -181,17 +181,17 @@ of a single log message is 320 bytes. Log messages with a length between
80 and 320 bytes will be separated into multiple sbuf elements. Log
messages with a length larger than 320 bytes will be truncated.
For security, SOS allocates sbuf in its memory range and assigns it to
For security, Service VM allocates sbuf in its memory range and assigns it to
the hypervisor.
SOS ACRN Log Module
===================
Service VM ACRN Log Module
==========================
ACRNLog module provides one kernel option `hvlog=$size@$pbase` to configure
the size and base address of hypervisor log buffer. This space will be further divided
into two buffers with equal size: last log buffer and current log buffer.
On SOS boot, SOS acrnlog module is responsible to:
On Service VM boot, the Service VM acrnlog module is responsible to:
- examine if there are log messages remaining from last crashed
run by checking the magic number of each sbuf
@ -211,7 +211,7 @@ current sbuf with magic number ``0x5aa57aa71aa13aa3``, and changes the
magic number of last sbuf to ``0x5aa57aa71aa13aa2``, to distinguish which is
the current/last.
On SOS shutdown, the module is responsible to remove misc devices,
On Service VM shutdown, the module is responsible for removing the misc devices,
freeing the SBufs, and setting ``per_cpu sbuf[ACRN_TRACE]`` to null.
ACRN Log Application

View File

@ -30,7 +30,7 @@ service (VBS) APIs, and virtqueue (VQ) APIs, as shown in
- **DM APIs** are exported by the DM, and are mainly used during the
device initialization phase and runtime. The DM APIs also include
PCIe emulation APIs because each virtio device is a PCIe device in
the SOS and UOS.
the Service VM and User VM.
- **VBS APIs** are mainly exported by the VBS and related modules.
Generally they are callbacks to be
registered into the DM.
@ -366,7 +366,7 @@ The workflow can be summarized as:
irqfd.
2. pass ioeventfd to vhost kernel driver.
3. pass ioeventfd to vhm driver
4. UOS FE driver triggers ioreq and forwarded to SOS by hypervisor
4. User VM FE driver triggers an ioreq, which is forwarded to the Service VM by the hypervisor
5. ioreq is dispatched by vhm driver to related vhm client.
6. ioeventfd vhm client traverses the io_range list and finds the
corresponding eventfd.
@ -396,7 +396,7 @@ The workflow can be summarized as:
5. irqfd related logic traverses the irqfd list to retrieve related irq
information.
6. irqfd related logic injects an interrupt through the vhm interrupt API.
7. interrupt is delivered to UOS FE driver through hypervisor.
7. The interrupt is delivered to the User VM FE driver through the hypervisor.
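The eventfd objects used in the two workflows above are ordinary Linux
eventfds. A minimal sketch of their creation is shown below; how they are
then registered with the vhost and VHM kernel drivers is omitted, since
those ioctl interfaces are specific to the kernel modules:

.. code-block:: c

   #include <stdio.h>
   #include <sys/eventfd.h>
   #include <unistd.h>

   int main(void)
   {
       /* One eventfd for kick notifications (ioeventfd) ... */
       int ioeventfd = eventfd(0, EFD_NONBLOCK);
       /* ... and one for interrupt notifications (irqfd). */
       int irqfd = eventfd(0, EFD_NONBLOCK);

       if ((ioeventfd < 0) || (irqfd < 0)) {
           perror("eventfd");
           return 1;
       }

       /*
        * In the real flow, these descriptors are handed to the vhost
        * kernel driver and the VHM driver through their ioctls
        * (not shown here).
        */

       close(ioeventfd);
       close(irqfd);
       return 0;
   }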
.. _virtio-APIs:
@ -542,7 +542,7 @@ VBS APIs
========
The VBS APIs are exported by VBS related modules, including VBS, DM, and
SOS kernel modules. They can be classified into VBS-U and VBS-K APIs
Service VM kernel modules. They can be classified into VBS-U and VBS-K APIs
listed as follows.
VBS-U APIs

View File

@ -30,7 +30,7 @@ is active:
.. note:: The console is only available in the debug version of the hypervisor,
configured at compile time. In the release version, the console is
disabled and the physical UART is not used by the hypervisor or SOS.
disabled and the physical UART is not used by the hypervisor or Service VM.
Hypervisor shell
****************
@ -45,8 +45,8 @@ Virtual UART
Currently UART 16550 is owned by the hypervisor itself and used for
debugging purposes. Properties are configured by hypervisor command
line. Hypervisor emulates a UART device with 0x3F8 address to SOS that
acts as the console of SOS with these features:
line. The hypervisor emulates a UART device at address 0x3F8 for the Service VM that
acts as the console of the Service VM with these features:
- The vUART is exposed via I/O port 0x3f8.
- It incorporates a 256-byte RX buffer and a 65536-byte TX buffer.
@ -85,8 +85,8 @@ The workflows are described as follows:
- Characters are read from this sbuf and put to rxFIFO,
triggered by vuart_console_rx_chars
- A virtual interrupt is sent to SOS, triggered by a read from
SOS. Characters in rxFIFO are sent to SOS by emulation of
- A virtual interrupt is sent to Service VM, triggered by a read from
Service VM. Characters in rxFIFO are sent to Service VM by emulation of
read of register UART16550_RBR
- TX flow:

View File

@ -79,7 +79,7 @@ physical CPUs are initially assigned to the Service VM by creating the same
number of virtual CPUs.
When the Service VM boot is finished, it releases the physical CPUs intended
for UOS use.
for User VM use.
Here is an example flow of CPU allocation on a multi-core platform.
@ -93,18 +93,18 @@ Here is an example flow of CPU allocation on a multi-core platform.
CPU management in the Service VM under flexing CPU sharing
==========================================================
As all Service VM CPUs could share with different UOSs, ACRN can still pass-thru
As all Service VM CPUs can be shared with different User VMs, ACRN can still pass-thru
MADT to Service VM, and the Service VM is still able to see all physical CPUs.
But under CPU sharing, the Service VM does not need to offline/release the physical
CPUs intended for UOS use.
CPUs intended for User VM use.
CPU management in UOS
=====================
CPU management in User VM
=========================
From the UOS point of view, CPU management is very simple - when DM does
From the User VM point of view, CPU management is very simple - when DM does
hypercalls to create VMs, the hypervisor will create its virtual CPUs
based on the configuration in this UOS VM's ``vm config``.
based on the configuration in this User VM's ``vm config``.
As mentioned in the previous description, ``vcpu_affinity`` in ``vm config``
tells which physical CPUs a VM's VCPU will use, and the scheduler policy
@ -571,7 +571,7 @@ For a guest vCPU's state initialization:
SW load based on different boot mode
- UOS BSP: DM context initialization through hypercall
- User VM BSP: DM context initialization through hypercall
- If it's AP, then it will always start from real mode, and the start
vector will always come from vlapic INIT-SIPI emulation.
@ -1103,7 +1103,7 @@ APIs to register its IO/MMIO range:
for a hypervisor emulated device needs to first set its corresponding
I/O bitmap to 1.
- For UOS, the default I/O bitmap are all set to 1, which means UOS will trap
- For the User VM, the default I/O bitmap bits are all set to 1, which means the User VM will trap
all I/O port access by default. Adding an I/O handler for a
hypervisor emulated device does not need to change its I/O bitmap.
If the trapped I/O port access does not fall into a hypervisor
@ -1115,7 +1115,7 @@ APIs to register its IO/MMIO range:
default. Adding a MMIO handler for a hypervisor emulated
device needs to first remove its MMIO range from EPT mapping.
- For UOS, EPT only maps its system RAM to the UOS, which means UOS will
- For User VM, EPT only maps its system RAM to the User VM, which means User VM will
trap all MMIO access by default. Adding a MMIO handler for a
hypervisor emulated device does not need to change its EPT mapping.
If the trapped MMIO access does not fall into a hypervisor

View File

@ -229,15 +229,15 @@ hypervisor before it configures the PCI configuration space to enable an
MSI. The hypervisor takes this opportunity to set up a remapping for the
given MSI or MSIX before it is actually enabled by Service VM.
When the UOS needs to access the physical device by passthrough, it uses
When the User VM needs to access the physical device by passthrough, it uses
the following steps:
- UOS gets a virtual interrupt
- User VM gets a virtual interrupt
- VM exit happens and the trapped vCPU is the target where the interrupt
will be injected.
- Hypervisor will handle the interrupt and translate the vector
according to ptirq_remapping_info.
- Hypervisor delivers the interrupt to UOS.
- Hypervisor delivers the interrupt to User VM.
When the Service VM needs to use the physical device, the passthrough is also
active because the Service VM is the first VM. The detailed steps are:
@ -258,7 +258,7 @@ ACPI virtualization is designed in ACRN with these assumptions:
- HV has no knowledge of ACPI,
- Service VM owns all physical ACPI resources,
- UOS sees virtual ACPI resources emulated by device model.
- User VM sees virtual ACPI resources emulated by device model.
Some passthrough devices require a physical ACPI table entry for
initialization. The device model will create such a device entry based on

View File

@ -63,7 +63,9 @@ to support this. The ACRN hypervisor also initializes all the interrupt
related modules like IDT, PIC, IOAPIC, and LAPIC.
HV does not own any host devices (except UART). All devices are by
default assigned to the Service VM. Any interrupts received by Guest VM (Service VM or User VM) device drivers are virtual interrupts injected by HV (via vLAPIC).
default assigned to the Service VM. Any interrupts received by VM
(Service VM or User VM) device drivers are virtual interrupts injected
by HV (via vLAPIC).
HV manages a Host-to-Guest mapping. When a native IRQ/interrupt occurs,
HV decides whether this IRQ/interrupt should be forwarded to a VM and
which VM to forward to (if any). Refer to
@ -357,15 +359,15 @@ IPI vector 0xF3 upcall. The virtual interrupt injection uses IPI vector 0xF0.
0xF3 upcall
A Guest vCPU VM Exit occurs due to an EPT violation or an IO instruction trap.
It requires the Device Module to emulate the MMIO/PortIO instruction.
However it could be that the Service OS (SOS) vCPU0 is still in non-root
However it could be that the Service VM vCPU0 is still in non-root
mode. So an IPI (0xF3 upcall vector) should be sent to the physical CPU0
(with non-root mode as vCPU0 inside SOS) to force vCPU0 to VM Exit due
(with non-root mode as vCPU0 inside the Service VM) to force vCPU0 to VM Exit due
to the external interrupt. The virtual upcall vector is then injected to
SOS, and the vCPU0 inside SOS then will pick up the IO request and do
the Service VM, and the vCPU0 inside the Service VM will then pick up the IO request and do
emulation for other Guest.
0xF0 IPI flow
If Device Module inside SOS needs to inject an interrupt to other Guest
If Device Module inside the Service VM needs to inject an interrupt to other Guest
such as vCPU1, it will first issue an IPI to kick CPU1 (assuming vCPU1 is
running on CPU1) to root mode. CPU1 will inject the
interrupt before VM Enter.

View File

@ -4,7 +4,7 @@ I/O Emulation high-level design
###############################
As discussed in :ref:`intro-io-emulation`, there are multiple ways and
places to handle I/O emulation, including HV, SOS Kernel VHM, and SOS
places to handle I/O emulation, including HV, Service VM Kernel VHM, and Service VM
user-land device model (acrn-dm).
I/O emulation in the hypervisor provides these functionalities:
@ -12,7 +12,7 @@ I/O emulation in the hypervisor provides these functionalities:
- Maintain lists of port I/O or MMIO handlers in the hypervisor for
emulating trapped I/O accesses in a certain range.
- Forward I/O accesses to SOS when they cannot be handled by the
- Forward I/O accesses to Service VM when they cannot be handled by the
hypervisor by any registered handlers.
:numref:`io-control-flow` illustrates the main control flow steps of I/O emulation
@ -26,7 +26,7 @@ inside the hypervisor:
access, or ignore the access if the access crosses the boundary.
3. If the range of the I/O access does not overlap the range of any I/O
handler, deliver an I/O request to SOS.
handler, deliver an I/O request to Service VM.
.. figure:: images/ioem-image101.png
:align: center
@ -92,16 +92,16 @@ following cases exist:
- Otherwise it is implied that the access crosses the boundary of
multiple devices which the hypervisor does not emulate. Thus
no handler is called and no I/O request will be delivered to
SOS. I/O reads get all 1's and I/O writes are dropped.
Service VM. I/O reads get all 1's and I/O writes are dropped.
- If the range of the I/O access does not overlap with any range of the
handlers, the I/O access is delivered to SOS as an I/O request
handlers, the I/O access is delivered to Service VM as an I/O request
for further processing.
I/O Requests
************
An I/O request is delivered to SOS vCPU 0 if the hypervisor does not
An I/O request is delivered to Service VM vCPU 0 if the hypervisor does not
find any handler that overlaps the range of a trapped I/O access. This
section describes the initialization of the I/O request mechanism and
how an I/O access is emulated via I/O requests in the hypervisor.
@ -109,11 +109,11 @@ how an I/O access is emulated via I/O requests in the hypervisor.
Initialization
==============
For each UOS the hypervisor shares a page with SOS to exchange I/O
For each User VM the hypervisor shares a page with Service VM to exchange I/O
requests. The 4-KByte page consists of 16 256-Byte slots, indexed by
vCPU ID. It is required for the DM to allocate and set up the request
buffer on VM creation, otherwise I/O accesses from UOS cannot be
emulated by SOS, and all I/O accesses not handled by the I/O handlers in
buffer on VM creation, otherwise I/O accesses from User VM cannot be
emulated by Service VM, and all I/O accesses not handled by the I/O handlers in
the hypervisor will be dropped (reads get all 1's).
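The layout described above can be pictured with the following sketch; the
slot contents here are placeholders, since the real request structure is
defined by the VHM interface headers and is not reproduced here:

.. code-block:: c

   #include <assert.h>
   #include <stdint.h>

   #define IO_REQUEST_SLOT_SIZE  256U
   #define IO_REQUEST_SLOT_NUM   16U   /* one slot per vCPU ID */

   /* Placeholder for the 256-byte I/O request record. */
   struct demo_io_request {
       uint32_t type;                               /* PIO, MMIO, WP, ... */
       uint32_t state;                              /* FREE, PENDING, ... */
       uint8_t  payload[IO_REQUEST_SLOT_SIZE - 8U];
   };

   /* The shared 4-KByte page is simply an array of 16 such slots. */
   struct demo_io_request_page {
       struct demo_io_request slot[IO_REQUEST_SLOT_NUM];
   };

   static_assert(sizeof(struct demo_io_request) == IO_REQUEST_SLOT_SIZE,
                 "each slot must be exactly 256 bytes");
   static_assert(sizeof(struct demo_io_request_page) == 4096U,
                 "the request page must be exactly 4 KByte");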
Refer to the following sections for details on I/O requests and the
@ -145,7 +145,7 @@ There are four types of I/O requests:
For port I/O accesses, the hypervisor will always deliver an I/O request
of type PIO to SOS. For MMIO accesses, the hypervisor will deliver an
of type PIO to Service VM. For MMIO accesses, the hypervisor will deliver an
I/O request of either MMIO or WP, depending on the mapping of the
accessed address (in GPA) in the EPT of the vCPU. The hypervisor will
never deliver any I/O request of type PCI, but will handle such I/O
@ -170,11 +170,11 @@ The four states are:
FREE
The I/O request slot is not used and new I/O requests can be
delivered. This is the initial state on UOS creation.
delivered. This is the initial state on User VM creation.
PENDING
The I/O request slot is occupied with an I/O request pending
to be processed by SOS.
to be processed by Service VM.
PROCESSING
The I/O request has been dispatched to a client but the
@ -185,19 +185,19 @@ COMPLETE
has not consumed the results yet.
The contents of an I/O request slot are owned by the hypervisor when the
state of an I/O request slot is FREE or COMPLETE. In such cases SOS can
state of an I/O request slot is FREE or COMPLETE. In such cases Service VM can
only access the state of that slot. Similarly the contents are owned by
SOS when the state is PENDING or PROCESSING, when the hypervisor can
Service VM when the state is PENDING or PROCESSING, when the hypervisor can
only access the state of that slot.
The states are transferred as follows:
1. To deliver an I/O request, the hypervisor takes the slot
corresponding to the vCPU triggering the I/O access, fills the
contents, changes the state to PENDING and notifies SOS via
contents, changes the state to PENDING and notifies Service VM via
upcall.
2. On upcalls, SOS dispatches each I/O request in the PENDING state to
2. On upcalls, Service VM dispatches each I/O request in the PENDING state to
clients and changes the state to PROCESSING.
3. The client assigned an I/O request changes the state to COMPLETE
@ -211,7 +211,7 @@ The states are transferred as follow:
States are accessed using atomic operations to avoid getting unexpected
states on one core when they are written on another.
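A minimal sketch of the state field and an atomic state update, using C11
atomics rather than the hypervisor's own primitives, is given below:

.. code-block:: c

   #include <stdatomic.h>
   #include <stdint.h>

   /* The four slot states described above (values are illustrative). */
   enum demo_req_state {
       REQ_STATE_FREE,
       REQ_STATE_PENDING,
       REQ_STATE_PROCESSING,
       REQ_STATE_COMPLETE,
   };

   /* Each slot carries a state field that both sides update atomically. */
   struct demo_req_slot {
       _Atomic uint32_t state;
       /* ... request contents, owned by whichever side the state allows ... */
   };

   /* Example: the hypervisor publishes a new request to the Service VM. */
   static void demo_deliver(struct demo_req_slot *slot)
   {
       /* Fill in the request contents first, then flip the state. */
       atomic_store(&slot->state, REQ_STATE_PENDING);
   }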
Note that there is no state to represent a 'failed' I/O request. SOS
Note that there is no state to represent a 'failed' I/O request. Service VM
should return all 1's for reads and ignore writes whenever it cannot
handle the I/O request, and change the state of the request to COMPLETE.
@ -224,7 +224,7 @@ hypervisor re-enters the vCPU thread every time a vCPU is scheduled back
in, rather than switching to where the vCPU is scheduled out. As a result,
post-work is introduced for this purpose.
The hypervisor pauses a vCPU before an I/O request is delivered to SOS.
The hypervisor pauses a vCPU before an I/O request is delivered to Service VM.
Once the I/O request emulation is completed, a client notifies the
hypervisor by a hypercall. The hypervisor will pick up that request, do
the post-work, and resume the guest vCPU. The post-work takes care of
@ -236,9 +236,9 @@ updating the vCPU guest state to reflect the effect of the I/O reads.
Workflow of MMIO I/O request completion
The figure above illustrates the workflow to complete an I/O
request for MMIO. Once the I/O request is completed, SOS makes a
hypercall to notify the hypervisor which resumes the UOS vCPU triggering
the access after requesting post-work on that vCPU. After the UOS vCPU
request for MMIO. Once the I/O request is completed, Service VM makes a
hypercall to notify the hypervisor, which resumes the User VM vCPU triggering
the access after requesting post-work on that vCPU. After the User VM vCPU
resumes, it does the post-work first to update the guest registers if
the access reads an address, changes the state of the corresponding I/O
request slot to FREE, and continues execution of the vCPU.
@ -255,7 +255,7 @@ similar to the MMIO case, except the post-work is done before resuming
the vCPU. This is because the post-work for port I/O reads needs to update
the general register eax of the vCPU, while the post-work for MMIO reads
needs further emulation of the trapped instruction. This is much more
complex and may impact the performance of SOS.
complex and may impact the performance of the Service VM.
.. _io-structs-interfaces:

View File

@ -106,7 +106,7 @@ Virtualization architecture
---------------------------
In the virtualization architecture, the IOC Device Model (DM) is
responsible for communication between the UOS and IOC firmware. The IOC
responsible for communication between the User VM and IOC firmware. The IOC
DM communicates with several native CBC char devices and a PTY device.
The native CBC char devices only include ``/dev/cbc-lifecycle``,
``/dev/cbc-signals``, and ``/dev/cbc-raw0`` - ``/dev/cbc-raw11``. Others
@ -133,7 +133,7 @@ There are five parts in this high-level design:
* Power management involves boot/resume/suspend/shutdown flows
* Emulated CBC commands introduces some command work flows
IOC mediator has three threads to transfer data between UOS and SOS. The
IOC mediator has three threads to transfer data between User VM and Service VM. The
core thread is responsible for data reception, and Tx and Rx threads are
used for data transmission. Each of the transmission threads has one
data queue as a buffer, so that the IOC mediator can read data from CBC
@ -154,7 +154,7 @@ char devices and UART DM immediately.
data comes from a raw channel, the data will be passed forward. Before
transmitting to the virtual UART interface, all data needs to be
packed with an address header and link header.
- For Rx direction, the data comes from the UOS. The IOC mediator receives link
- For Rx direction, the data comes from the User VM. The IOC mediator receives link
data from the virtual UART interface. The data will be unpacked by Core
thread, and then forwarded to Rx queue, similar to how the Tx direction flow
is done except that the heartbeat and RTC are only used by the IOC
@ -176,10 +176,10 @@ IOC mediator has four states and five events for state transfer.
IOC Mediator - State Transfer
- **INIT state**: This state is the initialized state of the IOC mediator.
All CBC protocol packets are handled normally. In this state, the UOS
All CBC protocol packets are handled normally. In this state, the User VM
has not yet sent an active heartbeat.
- **ACTIVE state**: Enter this state if an HB ACTIVE event is triggered,
indicating that the UOS state has been active and need to set the bit
indicating that the User VM state has been active and needs to set bit
23 (SoC bit) in the wakeup reason.
- **SUSPENDING state**: Enter this state if a RAM REFRESH event or HB
INACTIVE event is triggered. The related event handler needs to mask
@ -219,17 +219,17 @@ The difference between the native and virtualization architectures is
that the IOC mediator needs to re-compute the checksum and reset
priority. Currently, priority is not supported by IOC firmware; the
priority setting by the IOC mediator is based on the priority setting of
the CBC driver. The SOS and UOS use the same CBC driver.
the CBC driver. The Service VM and User VM use the same CBC driver.
Power management virtualization
-------------------------------
In acrn-dm, the IOC power management architecture involves PM DM, IOC
DM, and UART DM modules. PM DM is responsible for UOS power management,
DM, and UART DM modules. PM DM is responsible for User VM power management,
and IOC DM is responsible for heartbeat and wakeup reason flows for IOC
firmware. The heartbeat flow is used to control IOC firmware power state
and wakeup reason flow is used to indicate IOC power state to the OS.
UART DM transfers all IOC data between the SOS and UOS. These modules
UART DM transfers all IOC data between the Service VM and User VM. These modules
complete boot/suspend/resume/shutdown functions.
Boot flow
@ -243,13 +243,13 @@ Boot flow
IOC Virtualization - Boot flow
#. Press ignition button for booting.
#. SOS lifecycle service gets a "booting" wakeup reason.
#. SOS lifecycle service notifies wakeup reason to VM Manager, and VM
#. Service VM lifecycle service gets a "booting" wakeup reason.
#. Service VM lifecycle service notifies wakeup reason to VM Manager, and VM
Manager starts VM.
#. VM Manager sets the VM state to "start".
#. IOC DM forwards the wakeup reason to UOS.
#. PM DM starts UOS.
#. UOS lifecycle gets a "booting" wakeup reason.
#. IOC DM forwards the wakeup reason to User VM.
#. PM DM starts User VM.
#. User VM lifecycle gets a "booting" wakeup reason.
Suspend & Shutdown flow
+++++++++++++++++++++++
@ -262,23 +262,23 @@ Suspend & Shutdown flow
IOC Virtualization - Suspend and Shutdown by Ignition
#. Press ignition button to suspend or shutdown.
#. SOS lifecycle service gets a 0x800000 wakeup reason, then keeps
#. Service VM lifecycle service gets a 0x800000 wakeup reason, then keeps
sending a shutdown delay heartbeat to IOC firmware, and notifies a
"stop" event to VM Manager.
#. IOC DM forwards the wakeup reason to UOS lifecycle service.
#. SOS lifecycle service sends a "stop" event to VM Manager, and waits for
#. IOC DM forwards the wakeup reason to User VM lifecycle service.
#. Service VM lifecycle service sends a "stop" event to VM Manager, and waits for
the stop response before timeout.
#. UOS lifecycle service gets a 0x800000 wakeup reason and sends inactive
#. User VM lifecycle service gets a 0x800000 wakeup reason and sends inactive
heartbeat with suspend or shutdown SUS_STAT to IOC DM.
#. UOS lifecycle service gets a 0x000000 wakeup reason, then enters
#. User VM lifecycle service gets a 0x000000 wakeup reason, then enters
suspend or shutdown kernel PM flow based on SUS_STAT.
#. PM DM executes UOS suspend/shutdown request based on ACPI.
#. PM DM executes User VM suspend/shutdown request based on ACPI.
#. VM Manager queries each VM state from PM DM. Suspend request maps
to a paused state and shutdown request maps to a stop state.
#. VM Manager collects all VMs state, and reports it to SOS lifecycle
#. VM Manager collects all VMs state, and reports it to Service VM lifecycle
service.
#. SOS lifecycle sends inactive heartbeat to IOC firmware with
suspend/shutdown SUS_STAT, based on the SOS' own lifecycle service
#. Service VM lifecycle sends inactive heartbeat to IOC firmware with
suspend/shutdown SUS_STAT, based on the Service VM's own lifecycle service
policy.
Resume flow
@ -297,33 +297,33 @@ the same flow blocks.
For ignition resume flow:
#. Press ignition button to resume.
#. SOS lifecycle service gets an initial wakeup reason from the IOC
#. Service VM lifecycle service gets an initial wakeup reason from the IOC
firmware. The wakeup reason is 0x000020, from which the ignition button
bit is set. It then sends active or initial heartbeat to IOC firmware.
#. SOS lifecycle forwards the wakeup reason and sends start event to VM
#. Service VM lifecycle forwards the wakeup reason and sends start event to VM
Manager. The VM Manager starts to resume VMs.
#. IOC DM gets the wakeup reason from the VM Manager and forwards it to UOS
#. IOC DM gets the wakeup reason from the VM Manager and forwards it to User VM
lifecycle service.
#. VM Manager sets the VM state to starting for PM DM.
#. PM DM resumes UOS.
#. UOS lifecycle service gets wakeup reason 0x000020, and then sends an initial
or active heartbeat. The UOS gets wakeup reason 0x800020 after
#. PM DM resumes User VM.
#. User VM lifecycle service gets wakeup reason 0x000020, and then sends an initial
or active heartbeat. The User VM gets wakeup reason 0x800020 after
resuming.
For RTC resume flow
#. RTC timer expires.
#. SOS lifecycle service gets initial wakeup reason from the IOC
#. Service VM lifecycle service gets initial wakeup reason from the IOC
firmware. The wakeup reason is 0x000200, from which RTC bit is set.
It then sends active or initial heartbeat to IOC firmware.
#. SOS lifecycle forwards the wakeup reason and sends start event to VM
#. Service VM lifecycle forwards the wakeup reason and sends start event to VM
Manager. VM Manager begins resuming VMs.
#. IOC DM gets the wakeup reason from the VM Manager, and forwards it to
the UOS lifecycle service.
the User VM lifecycle service.
#. VM Manager sets the VM state to starting for PM DM.
#. PM DM resumes UOS.
#. UOS lifecycle service gets the wakeup reason 0x000200, and sends
initial or active heartbeat. The UOS gets wakeup reason 0x800200
#. PM DM resumes User VM.
#. User VM lifecycle service gets the wakeup reason 0x000200, and sends
initial or active heartbeat. The User VM gets wakeup reason 0x800200
after resuming.
System control data
@ -413,19 +413,19 @@ Currently the wakeup reason bits are supported by sources shown here:
* - wakeup_button
- 5
- Get from IOC FW, forward to UOS
- Get from IOC FW, forward to User VM
* - RTC wakeup
- 9
- Get from IOC FW, forward to UOS
- Get from IOC FW, forward to User VM
* - car door wakeup
- 11
- Get from IOC FW, forward to UOS
- Get from IOC FW, forward to User VM
* - SoC wakeup
- 23
- Emulation (Depends on UOS's heartbeat message
- Emulation (Depends on User VM's heartbeat message)
- CBC_WK_RSN_BTN (bit 5): ignition button.
- CBC_WK_RSN_RTC (bit 9): RTC timer.
@ -522,7 +522,7 @@ definition is as below.
:align: center
- The RTC command contains a relative time but not an absolute time.
- SOS lifecycle service will re-compute the time offset before it is
- Service VM lifecycle service will re-compute the time offset before it is
sent to the IOC firmware.
.. figure:: images/ioc-image10.png
@ -560,10 +560,10 @@ IOC signal type definitions are as below.
IOC Mediator - Signal flow
- The IOC backend needs to emulate the channel open/reset/close message which
shouldn't be forward to the native cbc signal channel. The SOS signal
shouldn't be forward to the native cbc signal channel. The Service VM signal
related services should do a real open/reset/close signal channel.
- Every backend should maintain a whitelist for different VMs. The
whitelist can be stored in the SOS file system (Read only) in the
whitelist can be stored in the Service VM file system (Read only) in the
future, but currently it is hard coded.
IOC mediator has two whitelist tables, one is used for rx
@ -582,9 +582,9 @@ new multi signal, which contains the signals in the whitelist.
Raw data
--------
OEM raw channel only assigns to a specific UOS following that OEM
The OEM raw channel is assigned only to a specific User VM following the OEM
configuration. The IOC Mediator will directly forward all read/write
message from IOC firmware to UOS without any modification.
messages from the IOC firmware to the User VM without any modification.
IOC Mediator Usage
@ -600,14 +600,14 @@ The "ioc_channel_path" is an absolute path for communication between
IOC mediator and UART DM.
The "lpc_port" is "com1" or "com2", IOC mediator needs one unassigned
lpc port for data transfer between UOS and SOS.
lpc port for data transfer between User VM and Service VM.
The "wakeup_reason" is IOC mediator boot up reason, each bit represents
one wakeup reason.
For example, the following commands are used to enable the IOC feature; the
initial wakeup reason is the ignition button and cbc_attach uses ttyS1
for TTY line discipline in UOS::
for TTY line discipline in User VM::
-i /run/acrn/ioc_$vm_name,0x20
-l com2,/run/acrn/ioc_$vm_name
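
As a worked example of how the "wakeup_reason" value is composed (assuming
the bit numbers listed in the wakeup reason table above), the initial value
``0x20`` used here sets only bit 5, the ignition button::

   bit 5  (ignition button)  ->  1 << 5   = 0x000020
   bit 23 (SoC bit)          ->  1 << 23  = 0x800000
   bits 5 and 23 together    ->  0x800020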


@ -291,7 +291,8 @@ Virtual MTRR
************
In ACRN, the hypervisor only virtualizes MTRRs fixed range (0~1MB).
The HV sets MTRRs of the fixed range as Write-Back for UOS, and the SOS reads
The HV sets MTRRs of the fixed range as Write-Back for a User VM, and
the Service VM reads
native MTRRs of the fixed range set by BIOS.
If the guest physical address is not in the fixed range (0~1MB), the
@ -489,7 +490,7 @@ almost all the system memory as shown here:
:width: 900px
:name: sos-mem-layout
SOS Physical Memory Layout
Service VM Physical Memory Layout
Host to Guest Mapping
=====================
@ -521,4 +522,4 @@ must not be accessible by the Service/User VM normal world.
.. figure:: images/mem-image18.png
:align: center
UOS Physical Memory Layout with Trusty
User VM Physical Memory Layout with Trusty


@ -176,7 +176,7 @@ Guest SMP boot flow
The core APIC IDs are reported to the guest using mptable info. SMP boot
flow is similar to sharing mode. Refer to :ref:`vm-startup`
for guest SMP boot flow in ACRN. Partition mode guests startup is same as
the SOS startup in sharing mode.
the Service VM startup in sharing mode.
Inter-processor Interrupt (IPI) Handling
========================================


@ -241,7 +241,8 @@ Here is initial mode of vCPUs:
+----------------------------------+----------------------------------------------------------+
| VM and Processor Type | Initial Mode |
+=================+================+==========================================================+
| Service VM | BSP | Same as physical BSP, or Real Mode if SOS boot w/ OVMF |
| Service VM | BSP | Same as physical BSP, or Real Mode if Service VM boot |
| | | w/ OVMF |
| +----------------+----------------------------------------------------------+
| | AP | Real Mode |
+-----------------+----------------+----------------------------------------------------------+


@ -151,7 +151,7 @@ Virtual IOAPIC
**************
vIOAPIC is emulated by HV when Guest accesses MMIO GPA range:
0xFEC00000-0xFEC01000. vIOAPIC for SOS should match to the native HW
0xFEC00000-0xFEC01000. The vIOAPIC for the Service VM should match the native HW
IOAPIC Pin numbers. vIOAPIC for guest VM provides 48 pins. As the vIOAPIC is
always associated with vLAPIC, the virtual interrupt injection from
vIOAPIC will finally trigger a request for vLAPIC event by calling


@ -54,10 +54,10 @@ management. Please refer to ACRN power management design for more details.
Post-launched User VMs
======================
DM is taking control of post-launched User VMs' state transition after SOS
DM takes control of post-launched User VMs' state transitions after the Service VM
boots up, and it calls VM APIs through hypercalls.
SOS user level service like Life-Cycle-Service and tool like Acrnd may work
Service VM user-level services like Life-Cycle-Service and tools like Acrnd may work
together with DM to launch or stop a User VM. Please refer to ACRN tool
introduction for more details.


@ -4,8 +4,8 @@ UART Virtualization
###################
In ACRN, UART virtualization is implemented as a fully-emulated device.
In the Service OS (SOS), UART virtualization is implemented in the
hypervisor itself. In the User OS (UOS), UART virtualization is
In the Service VM, UART virtualization is implemented in the
hypervisor itself. In the User VM, UART virtualization is
implemented in the Device Model (DM), and is the primary topic of this
document. We'll summarize differences between the hypervisor and DM
implementations at the end of this document.
@ -93,7 +93,7 @@ A similar virtual UART device is implemented in the hypervisor.
Currently UART16550 is owned by the hypervisor itself and is used for
debugging purposes. (The UART properties are configured by parameters
to the hypervisor command line.) The hypervisor emulates a UART device
with 0x3F8 address to the SOS and acts as the SOS console. The general
with 0x3F8 address to the Service VM and acts as the Service VM console. The general
emulation is the same as used in the device model, with the following
differences:
@ -110,8 +110,8 @@ differences:
- Characters are read from the sbuf and put to rxFIFO,
triggered by ``vuart_console_rx_chars``
- A virtual interrupt is sent to the SOS that triggered the read,
and characters from rxFIFO are sent to the SOS by emulating a read
- A virtual interrupt is sent to the Service VM that triggered the read,
and characters from rxFIFO are sent to the Service VM by emulating a read
of register ``UART16550_RBR``
- TX flow:


@ -29,8 +29,8 @@ emulation of three components, described here and shown in
specific User OS with I/O MMU assistance.
- **DRD DM** (Dual Role Device) emulates the PHY MUX control
logic. The sysfs interface in UOS is used to trap the switch operation
into DM, and the the sysfs interface in SOS is used to operate on the physical
logic. The sysfs interface in a User VM is used to trap the switch operation
into DM, and the sysfs interface in the Service VM is used to operate on the physical
registers to switch between DCI and HCI role.
On Intel Apollo Lake platform, the sysfs interface path is
@ -39,7 +39,7 @@ emulation of three components, described here and shown in
device mode. Similarly, by echoing ``host``, the usb phy will be
connected with xHCI controller as host mode.
An xHCI register access from UOS will induce EPT trap from UOS to
An xHCI register access from a User VM will induce EPT trap from the User VM to
DM, and the xHCI DM or DRD DM will emulate hardware behaviors to make
the subsystem run.
@ -94,7 +94,7 @@ DM:
ports to virtual USB ports. It communicates with
native USB ports through libusb.
All the USB data buffers from UOS (User OS) are in the form of TRB
All the USB data buffers from a User VM are in the form of TRB
(Transfer Request Blocks), according to xHCI spec. xHCI DM will fetch
these data buffers when the related xHCI doorbell registers are set.
These data will be converted to *struct usb_data_xfer* and, through USB core,
@ -106,15 +106,15 @@ The device model configuration command syntax for xHCI is as follows::
-s <slot>,xhci,[bus1-port1,bus2-port2]
- *slot*: virtual PCI slot number in DM
- *bus-port*: specify which physical USB ports need to map to UOS.
- *bus-port*: specify which physical USB ports need to map to a User VM.
A simple example::
-s 7,xhci,1-2,2-2
This configuration means the virtual xHCI will appear in PCI slot 7
in UOS, and any physical USB device attached on 1-2 or 2-2 will be
detected by UOS and used as expected.
in the User VM, and any physical USB device attached on 1-2 or 2-2 will be
detected by the User VM and used as expected.
USB DRD virtualization
**********************
@ -129,7 +129,7 @@ USB DRD (Dual Role Device) emulation works as shown in this figure:
ACRN emulates the DRD hardware logic of an Intel Apollo Lake platform to
support the dual role requirement. The DRD feature is implemented as xHCI
vendor extended capability. ACRN emulates it
the same way, so the native driver can be reused in UOS. When UOS DRD
the same way, so the native driver can be reused in a User VM. When a User VM DRD
driver reads or writes the related xHCI extended registers, these access will
be captured by xHCI DM. xHCI DM uses the native DRD related
sysfs interface to do the Host/Device mode switch operations.


@ -4,8 +4,8 @@ Virtio-blk
##########
The virtio-blk device is a simple virtual block device. The FE driver
(in the UOS space) places read, write, and other requests onto the
virtqueue, so that the BE driver (in the SOS space) can process them
(in the User VM space) places read, write, and other requests onto the
virtqueue, so that the BE driver (in the Service VM space) can process them
accordingly. Communication between the FE and BE is based on the virtio
kick and notify mechanism.
@ -86,7 +86,7 @@ The device model configuration command syntax for virtio-blk is::
A simple example for virtio-blk:
1. Prepare a file in SOS folder::
1. Prepare a file in Service VM folder::
dd if=/dev/zero of=test.img bs=1M count=1024
mkfs.ext4 test.img
@ -96,15 +96,15 @@ A simple example for virtio-blk:
-s 9,virtio-blk,/root/test.img
#. Launch UOS, you can find ``/dev/vdx`` in UOS.
#. Launch User VM, you can find ``/dev/vdx`` in User VM.
The ``x`` in ``/dev/vdx`` is related to the slot number used. If
you start the DM with two virtio-blks, and the slot numbers are 9 and 10,
then the device with slot 9 will be recognized as ``/dev/vda``, and
the device with slot 10 will be ``/dev/vdb``.
#. Mount ``/dev/vdx`` to a folder in the UOS, and then you can access it.
#. Mount ``/dev/vdx`` to a folder in the User VM, and then you can access it.
Successful booting of the User OS verifies the correctness of the
Successful booting of the User VM verifies the correctness of the
device.
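
As an illustration of the slot-number mapping described above (a sketch only;
the second image file name is a placeholder), starting the DM with two
virtio-blk devices could look like this, with slot 9 showing up as ``/dev/vda``
and slot 10 as ``/dev/vdb`` in the User VM::

   -s 9,virtio-blk,/root/test.img
   -s 10,virtio-blk,/root/test2.img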


@ -33,7 +33,7 @@ The virtio-console architecture diagram in ACRN is shown below.
Virtio-console is implemented as a virtio legacy device in the ACRN
device model (DM), and is registered as a PCI virtio device to the guest
OS. No changes are required in the frontend Linux virtio-console except
that the guest (UOS) kernel should be built with
that the guest (User VM) kernel should be built with
``CONFIG_VIRTIO_CONSOLE=y``.
The virtio console FE driver registers a HVC console to the kernel if
@ -152,7 +152,7 @@ PTY
TTY
===
1. Identify your tty that will be used as the UOS console:
1. Identify your tty that will be used as the User VM console:
- If you're connected to your device over the network via ssh, use
the linux ``tty`` command, and it will report the node (may be


@ -4,7 +4,7 @@ Virtio-gpio
###########
virtio-gpio provides a virtual GPIO controller, which will map part of
native GPIOs to UOS, UOS can perform GPIO operations through it,
native GPIOs to a User VM; the User VM can perform GPIO operations through it,
including set/get value, set/get direction, and
set configuration (only Open Source and Open Drain types are currently
supported). GPIOs are quite often used as IRQs, typically for wakeup
@ -21,7 +21,7 @@ The virtio-gpio architecture is shown below
Virtio-gpio is implemented as a virtio legacy device in the ACRN device
model (DM), and is registered as a PCI virtio device to the guest OS. No
changes are required in the frontend Linux virtio-gpio except that the
guest (UOS) kernel should be built with ``CONFIG_VIRTIO_GPIO=y``.
guest (User VM) kernel should be built with ``CONFIG_VIRTIO_GPIO=y``.
There are three virtqueues used between FE and BE, one for gpio
operations, one for irq request and one for irq event notification.
@ -42,24 +42,24 @@ GPIO mapping
GPIO mapping
- Each UOS has only one GPIO chip instance, its number of GPIO is based
on acrn-dm command line and GPIO base always start from 0.
- Each User VM has only one GPIO chip instance; its number of GPIOs is
  based on the acrn-dm command line, and the GPIO base always starts from 0.
- Each GPIO is exclusive, uos can't map the same native gpio.
- Each GPIO is exclusive; User VMs can't map the same native gpio.
- Each acrn-dm maximum number of GPIO is 64.
Usage
*****
add the following parameters into command line::
Add the following parameters into the command line::
-s <slot>,virtio-gpio,<@controller_name{offset|name[=mapping_name]:offset|name[=mapping_name]:...}@controller_name{...}...]>
-s <slot>,virtio-gpio,<@controller_name{offset|name[=mapping_name]:offset|name[=mapping_name]:...}@controller_name{...}...]>
- **controller_name**: Input ``ls /sys/bus/gpio/devices`` to check
native gpio controller information.Usually, the devices represent the
- **controller_name**: Input ``ls /sys/bus/gpio/devices`` to check native
gpio controller information. Usually, the devices represent the
controller_name; you can use it as the controller_name directly. You can
also input "cat /sys/bus/gpio/device/XXX/dev" to get device id that can
also input ``cat /sys/bus/gpio/device/XXX/dev`` to get a device id that can
be used to match /dev/XXX, then use XXX as the controller_name. On MRB
and NUC platforms, the controller_name values are gpiochip0, gpiochip1,
gpiochip2, and gpiochip3.
@ -73,17 +73,17 @@ add the following parameters into command line::
Example
*******
- Map three native gpio to UOS, they are native gpiochip0 with offset
of 1 and 6, and with the name ``reset``. In UOS, the three gpio has
no name, and base from 0 ::
- Map three native gpios to a User VM: they are native gpiochip0 with
  offsets 1 and 6, and with the name ``reset``. In the User VM, the three
  gpios have no names, and the base starts from 0::
-s 10,virtio-gpio,@gpiochip0{1:6:reset}
- Map four native gpio to UOS, native gpiochip0's gpio with offset 1
- Map four native gpios to a User VM: native gpiochip0's gpio with offset 1
and offset 6 map to FE virtual gpio with offset 0 and offset 1
without names, native gpiochip0's gpio with name ``reset`` maps to FE
virtual gpio with offset 2 and its name is ``shutdown``, native
gpiochip1's gpio with offset 0 maps to FE virtual gpio with offset 3 and
its name is ``reset`` ::
without names, native gpiochip0's gpio with name ``reset`` maps to FE
virtual gpio with offset 2 and its name is ``shutdown``, native
gpiochip1's gpio with offset 0 maps to FE virtual gpio with offset 3 and
its name is ``reset``::
-s 10,virtio-gpio,@gpiochip0{1:6:reset=shutdown}@gpiochip1{0=reset}


@ -49,18 +49,18 @@ notifies the frontend. The msg process flow is shown in
-s <slot>,virtio-i2c,<bus>[:<slave_addr>[@<node>]][:<slave_addr>[@<node>]][,<bus>[:<slave_addr>[@<node>]][:<slave_addr>][@<node>]]
bus:
The bus number for the native I2C adapter; ``2`` means ``/dev/i2c-2``.
The bus number for the native I2C adapter; ``2`` means ``/dev/i2c-2``.
slave_addr:
he address for the native slave devices such as ``1C``, ``2F``...
The address for the native slave devices such as ``1C``, ``2F``...
@:
The prefix for the acpi node.
node:
The acpi node name supported in the current code. You can find the
supported name in the ``acpi_node_table[]`` from the source code. Currently,
only ``cam1``, ``cam2``, and ``hdac`` are supported for MRB. These nodes are
supported name in the ``acpi_node_table[]`` from the source code. Currently,
only ``cam1``, ``cam2``, and ``hdac`` are supported for MRB. These nodes are
platform-specific.
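
As a minimal sketch combining the parameters described above (the slot number,
bus number, slave address, and node name here are placeholders chosen to match
those descriptions), a native slave device at address ``1C`` on ``/dev/i2c-2``
could be exposed under the ``cam1`` ACPI node with::

   -s 9,virtio-i2c,2:1C@cam1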


@ -21,8 +21,8 @@ must be built with ``CONFIG_VIRTIO_INPUT=y``.
Two virtqueues are used to transfer input_event between FE and BE. One
is for the input_events from BE to FE, as generated by input hardware
devices in SOS. The other is for status changes from FE to BE, as
finally sent to input hardware device in SOS.
devices in Service VM. The other is for status changes from FE to BE, as
finally sent to input hardware device in Service VM.
At the probe stage of FE virtio-input driver, a buffer (used to
accommodate 64 input events) is allocated together with the driver data.
@ -37,7 +37,7 @@ char device and caches it into an internal buffer until an EV_SYN input
event with SYN_REPORT is received. BE driver then copies all the cached
input events to the event virtqueue, one by one. These events are added by
the FE driver following a notification to FE driver, implemented
as an interrupt injection to UOS.
as an interrupt injection to User VM.
For input events regarding status change, FE driver allocates a
buffer for an input event and adds it to the status virtqueue followed
@ -93,7 +93,7 @@ The general command syntax is::
-s n,virtio-input,/dev/input/eventX[,serial]
- /dev/input/eventX is used to specify the evdev char device node in
SOS.
Service VM.
- "serial" is an optional string. When it is specified it will be used
as the Uniq of guest virtio input device.
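
For example (a sketch; the slot number, event node, and serial string are
placeholders), the Service VM evdev node ``/dev/input/event3`` could be passed
to the guest with an optional Uniq string as follows::

   -s 8,virtio-input,/dev/input/event3,my-serial-0001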


@ -4,7 +4,7 @@ Virtio-net
##########
Virtio-net is the para-virtualization solution used in ACRN for
networking. The ACRN device model emulates virtual NICs for UOS and the
networking. The ACRN device model emulates virtual NICs for User VM and the
frontend virtio network driver, simulating the virtual NIC and following
the virtio specification. (Refer to :ref:`introduction` and
:ref:`virtio-hld` background introductions to ACRN and Virtio.)
@ -23,7 +23,7 @@ Network Virtualization Architecture
ACRN's network virtualization architecture is shown below in
:numref:`net-virt-arch`, and illustrates the many necessary network
virtualization components that must cooperate for the UOS to send and
virtualization components that must cooperate for the User VM to send and
receive data from the outside world.
.. figure:: images/network-virt-arch.png
@ -38,7 +38,7 @@ components are parts of the Linux kernel.)
Let's explore these components further.
SOS/UOS Network Stack:
Service VM/User VM Network Stack:
This is the standard Linux TCP/IP stack, currently the most
feature-rich TCP/IP implementation.
@ -57,11 +57,11 @@ ACRN Hypervisor:
bare-metal hardware, and suitable for a variety of IoT and embedded
device solutions. It fetches and analyzes the guest instructions, puts
the decoded information into the shared page as an IOREQ, and notifies
or interrupts the VHM module in the SOS for processing.
or interrupts the VHM module in the Service VM for processing.
VHM Module:
The Virtio and Hypervisor Service Module (VHM) is a kernel module in the
Service OS (SOS) acting as a middle layer to support the device model
Service VM acting as a middle layer to support the device model
and hypervisor. The VHM forwards an IOREQ to the virtio-net backend
driver for processing.
@ -72,7 +72,7 @@ ACRN Device Model and virtio-net Backend Driver:
Bridge and Tap Device:
Bridge and Tap are standard virtual network infrastructures. They play
an important role in communication among the SOS, the UOS, and the
an important role in communication among the Service VM, the User VM, and the
outside world.
IGB Driver:
@ -82,7 +82,7 @@ IGB Driver:
The virtual network card (NIC) is implemented as a virtio legacy device
in the ACRN device model (DM). It is registered as a PCI virtio device
to the guest OS (UOS) and uses the standard virtio-net in the Linux kernel as
to the guest OS (User VM) and uses the standard virtio-net in the Linux kernel as
its driver (the guest kernel should be built with
``CONFIG_VIRTIO_NET=y``).
@ -96,7 +96,7 @@ ACRN Virtio-Network Calling Stack
Various components of ACRN network virtualization are shown in the
architecture diagram shows in :numref:`net-virt-arch`. In this section,
we will use UOS data transmission (TX) and reception (RX) examples to
we will use User VM data transmission (TX) and reception (RX) examples to
explain step-by-step how these components work together to implement
ACRN network virtualization.
@ -123,13 +123,13 @@ Initialization in virtio-net Frontend Driver
- Register network driver
- Setup shared virtqueues
ACRN UOS TX FLOW
================
ACRN User VM TX FLOW
====================
The following shows the ACRN UOS network TX flow, using TCP as an
The following shows the ACRN User VM network TX flow, using TCP as an
example, showing the flow through each layer:
**UOS TCP Layer**
**User VM TCP Layer**
.. code-block:: c
@ -139,7 +139,7 @@ example, showing the flow through each layer:
tcp_write_xmit -->
tcp_transmit_skb -->
**UOS IP Layer**
**User VM IP Layer**
.. code-block:: c
@ -153,7 +153,7 @@ example, showing the flow through each layer:
neigh_output -->
neigh_resolve_output -->
**UOS MAC Layer**
**User VM MAC Layer**
.. code-block:: c
@ -165,7 +165,7 @@ example, showing the flow through each layer:
__netdev_start_xmit -->
**UOS MAC Layer virtio-net Frontend Driver**
**User VM MAC Layer virtio-net Frontend Driver**
.. code-block:: c
@ -187,7 +187,7 @@ example, showing the flow through each layer:
pio_instr_vmexit_handler -->
emulate_io --> // ioreq can't be processed in HV, forward it to VHM
acrn_insert_request_wait -->
fire_vhm_interrupt --> // interrupt SOS, VHM will get notified
fire_vhm_interrupt --> // interrupt Service VM, VHM will get notified
**VHM Module**
@ -216,7 +216,7 @@ example, showing the flow through each layer:
virtio_net_tap_tx -->
writev --> // write data to tap device
**SOS TAP Device Forwarding**
**Service VM TAP Device Forwarding**
.. code-block:: c
@ -233,7 +233,7 @@ example, showing the flow through each layer:
__netif_receive_skb_core -->
**SOS Bridge Forwarding**
**Service VM Bridge Forwarding**
.. code-block:: c
@ -244,7 +244,7 @@ example, showing the flow through each layer:
br_forward_finish -->
br_dev_queue_push_xmit -->
**SOS MAC Layer**
**Service VM MAC Layer**
.. code-block:: c
@ -256,16 +256,16 @@ example, showing the flow through each layer:
__netdev_start_xmit -->
**SOS MAC Layer IGB Driver**
**Service VM MAC Layer IGB Driver**
.. code-block:: c
igb_xmit_frame --> // IGB physical NIC driver xmit function
ACRN UOS RX FLOW
================
ACRN User VM RX FLOW
====================
The following shows the ACRN UOS network RX flow, using TCP as an example.
The following shows the ACRN User VM network RX flow, using TCP as an example.
Let's start by receiving a device interrupt. (Note that the hypervisor
will first get notified when receiving an interrupt even in passthrough
cases.)
@ -288,11 +288,11 @@ cases.)
do_softirq -->
ptdev_softirq -->
vlapic_intr_msi --> // insert the interrupt into SOS
vlapic_intr_msi --> // insert the interrupt into Service VM
start_vcpu --> // VM Entry here, will process the pending interrupts
**SOS MAC Layer IGB Driver**
**Service VM MAC Layer IGB Driver**
.. code-block:: c
@ -306,7 +306,7 @@ cases.)
__netif_receive_skb -->
__netif_receive_skb_core --
**SOS Bridge Forwarding**
**Service VM Bridge Forwarding**
.. code-block:: c
@ -317,7 +317,7 @@ cases.)
br_forward_finish -->
br_dev_queue_push_xmit -->
**SOS MAC Layer**
**Service VM MAC Layer**
.. code-block:: c
@ -328,7 +328,7 @@ cases.)
netdev_start_xmit -->
__netdev_start_xmit -->
**SOS MAC Layer TAP Driver**
**Service VM MAC Layer TAP Driver**
.. code-block:: c
@ -339,7 +339,7 @@ cases.)
.. code-block:: c
virtio_net_rx_callback --> // the tap fd get notified and this function invoked
virtio_net_tap_rx --> // read data from tap, prepare virtqueue, insert interrupt into the UOS
virtio_net_tap_rx --> // read data from tap, prepare virtqueue, insert interrupt into the User VM
vq_endchains -->
vq_interrupt -->
pci_generate_msi -->
@ -357,10 +357,10 @@ cases.)
vmexit_handler --> // vmexit because VMX_EXIT_REASON_VMCALL
vmcall_vmexit_handler -->
hcall_inject_msi --> // insert interrupt into UOS
hcall_inject_msi --> // insert interrupt into User VM
vlapic_intr_msi -->
**UOS MAC Layer virtio_net Frontend Driver**
**User VM MAC Layer virtio_net Frontend Driver**
.. code-block:: c
@ -372,7 +372,7 @@ cases.)
virtnet_receive -->
receive_buf -->
**UOS MAC Layer**
**User VM MAC Layer**
.. code-block:: c
@ -382,7 +382,7 @@ cases.)
__netif_receive_skb -->
__netif_receive_skb_core -->
**UOS IP Layer**
**User VM IP Layer**
.. code-block:: c
@ -393,7 +393,7 @@ cases.)
ip_local_deliver_finish -->
**UOS TCP Layer**
**User VM TCP Layer**
.. code-block:: c
@ -410,7 +410,7 @@ How to Use
==========
The network infrastructure shown in :numref:`net-virt-infra` needs to be
prepared in the SOS before we start. We need to create a bridge and at
prepared in the Service VM before we start. We need to create a bridge and at
least one tap device (two tap devices are needed to create a dual
virtual NIC) and attach a physical NIC and tap device to the bridge.
@ -419,11 +419,11 @@ virtual NIC) and attach a physical NIC and tap device to the bridge.
:width: 900px
:name: net-virt-infra
Network Infrastructure in SOS
Network Infrastructure in Service VM
You can use Linux commands (e.g. ip, brctl) to create this network. In
our case, we use systemd to automatically create the network by default.
You can check the files with prefix 50- in the SOS
You can check the files with prefix 50- in the Service VM
``/usr/lib/systemd/network/``:
- `50-acrn.netdev <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/misc/acrnbridge/acrn.netdev>`__
@ -431,7 +431,7 @@ You can check the files with prefix 50- in the SOS
- `50-tap0.netdev <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/misc/acrnbridge/tap0.netdev>`__
- `50-eth.network <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/misc/acrnbridge/eth.network>`__
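
If you prefer to create the bridge and tap device manually instead of relying
on the systemd configuration, a minimal sketch using the ``ip`` command could
look like the following (the names ``acrn-br0``, ``tap0``, and ``enp3s0`` are
assumptions; substitute your own bridge, tap, and physical NIC names)::

   ip link add name acrn-br0 type bridge
   ip tuntap add dev tap0 mode tap
   ip link set tap0 master acrn-br0
   ip link set enp3s0 master acrn-br0
   ip link set acrn-br0 up
   ip link set tap0 up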
When the SOS is started, run ``ifconfig`` to show the devices created by
When the Service VM is started, run ``ifconfig`` to show the devices created by
this systemd configuration:
.. code-block:: none
@ -486,7 +486,7 @@ optional):
-s 4,virtio-net,<tap_name>,[mac=<XX:XX:XX:XX:XX:XX>]
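
For instance, assuming the tap device created by the systemd configuration
above is named ``tap0`` (and using a placeholder MAC address), the parameter
could look like::

   -s 4,virtio-net,tap0,mac=00:16:3e:00:00:01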
When the UOS is launched, run ``ifconfig`` to check the network. enp0s4r
When the User VM is launched, run ``ifconfig`` to check the network. enp0s4r
is the virtual NIC created by acrn-dm:
.. code-block:: none


@ -3,7 +3,7 @@
Virtio-rnd
##########
Virtio-rnd provides a virtual hardware random source for the UOS. It simulates a PCI device
Virtio-rnd provides a virtual hardware random source for the User VM. It simulates a PCI device
following the virtio specification, and is implemented based on the virtio user mode framework.
Architecture
@ -15,9 +15,9 @@ components are parts of Linux software or third party tools.
virtio-rnd is implemented as a virtio legacy device in the ACRN device
model (DM), and is registered as a PCI virtio device to the guest OS
(UOS). Tools such as :command:`od` (dump a file in octal or other format) can
(User VM). Tools such as :command:`od` (dump a file in octal or other format) can
be used to read random values from ``/dev/random``. This device file in the
UOS is bound with the frontend virtio-rng driver. (The guest kernel must
User VM is bound with the frontend virtio-rng driver. (The guest kernel must
be built with ``CONFIG_HW_RANDOM_VIRTIO=y``). The backend
virtio-rnd reads the HW random value from ``/dev/random`` in the SOS and sends
them to the frontend.
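
Once the device is configured (see Usage below) and the frontend driver is
bound, a quick functional check from inside the User VM is to dump a few bytes
with ``od`` as mentioned above (the byte count here is arbitrary)::

   od -An -N16 -tx1 /dev/random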
@ -35,7 +35,7 @@ Add a pci slot to the device model acrn-dm command line; for example::
-s <slot_number>,virtio-rnd
Check to see if the frontend virtio_rng driver is available in the UOS:
Check to see if the frontend virtio_rng driver is available in the User VM:
.. code-block:: console


@ -29,27 +29,27 @@ Model following the PCI device framework. The following
Watchdog device flow
The DM in the Service OS (SOS) treats the watchdog as a passive device.
The DM in the Service VM treats the watchdog as a passive device.
It receives read/write commands from the watchdog driver, does the
actions, and returns. In ACRN, the commands are from User OS (UOS)
actions, and returns. In ACRN, the commands are from the User VM
watchdog driver.
UOS watchdog work flow
**********************
User VM watchdog work flow
**************************
When the UOS does a read or write operation on the watchdog device's
When the User VM does a read or write operation on the watchdog device's
registers or memory space (Port IO or Memory map I/O), it will trap into
the hypervisor. The hypervisor delivers the operation to the SOS/DM
the hypervisor. The hypervisor delivers the operation to the Service VM/DM
through IPI (inter-process interrupt) or shared memory, and the DM
dispatches the operation to the watchdog emulation code.
After the DM watchdog finishes emulating the read or write operation, it
then calls ``ioctl`` to the SOS/kernel (``/dev/acrn_vhm``). VHM will call a
then calls ``ioctl`` to the Service VM/kernel (``/dev/acrn_vhm``). VHM will call a
hypercall to trap into the hypervisor to tell it the operation is done, and
the hypervisor will set UOS-related VCPU registers and resume UOS so the
UOS watchdog driver will get the return values (or return status). The
the hypervisor will set User VM-related VCPU registers and resume the User VM so the
User VM watchdog driver will get the return values (or return status). The
:numref:`watchdog-workflow` below is a typical operation flow:
from UOS to SOS and return back:
from a User VM to the Service VM and return back:
.. figure:: images/watchdog-image1.png
:align: center
@ -82,18 +82,18 @@ emulation.
The main part in the watchdog emulation is the timer thread. It emulates
the watchdog device timeout management. When it gets the kick action
from the UOS, it resets the timer. If the timer expires before getting a
timely kick action, it will call DM API to reboot that UOS.
from the User VM, it resets the timer. If the timer expires before getting a
timely kick action, it will call DM API to reboot that User VM.
In the UOS launch script, add: ``-s xx,wdt-i6300esb`` into DM parameters.
In the User VM launch script, add: ``-s xx,wdt-i6300esb`` into DM parameters.
(xx is the virtual PCI BDF number as with other PCI devices)
Make sure the UOS kernel has the I6300ESB driver enabled:
``CONFIG_I6300ESB_WDT=y``. After the UOS boots up, the watchdog device
Make sure the User VM kernel has the I6300ESB driver enabled:
``CONFIG_I6300ESB_WDT=y``. After the User VM boots up, the watchdog device
will be created as node ``/dev/watchdog``, and can be used as a normal
device file.
Usually the UOS needs a watchdog service (daemon) to run in userland and
Usually the User VM needs a watchdog service (daemon) to run in userland and
kick the watchdog periodically. If something prevents the daemon from
kicking the watchdog, for example the UOS system is hung, the watchdog
will timeout and the DM will reboot the UOS.
kicking the watchdog, for example if the User VM system is hung, the watchdog
will time out and the DM will reboot the User VM.
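
As a minimal sketch of the pieces described above (the slot number ``5`` is a
placeholder, not a required value)::

   # DM parameter added to the User VM launch script
   -s 5,wdt-i6300esb

   # inside the User VM, after boot, the emulated watchdog should appear
   ls -l /dev/watchdog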


@ -70,7 +70,7 @@ There is no additional action in ACRN hypervisor.
Guest -> hypervisor Attack
==========================
ACRN always enables EPT for all guests (SOS and UOS), thus a malicious
ACRN always enables EPT for all guests (Service VM and User VM), thus a malicious
guest can directly control guest PTEs to construct an L1TF-based attack
on the hypervisor. Alternatively, if ACRN EPT is not sanitized with some
PTEs (with present bit cleared, or reserved bit set) pointing to valid
@ -241,7 +241,7 @@ There is no mitigation required on Apollo Lake based platforms.
The majority use case for ACRN is in pre-configured environment,
where the whole software stack (from ACRN hypervisor to guest
kernel to SOS root) is tightly controlled by solution provider
kernel to Service VM root) is tightly controlled by solution provider
and not allowed for run-time change after sale (guest kernel is
trusted). In that case solution provider will make sure that guest
kernel is up-to-date including necessary page table sanitization,


@ -88,20 +88,21 @@ The components are listed as follows.
virtualization. The vCPU loop module in this component handles VM exit events
by calling the proper handler in the other components. Hypercalls are
implemented as a special type of VM exit event. This component is also able to
inject upcall interrupts to SOS.
inject upcall interrupts to the Service VM.
* **Device Emulation** This component implements devices that are emulated in
the hypervisor itself, such as the virtual programmable interrupt controllers
including vPIC, vLAPIC and vIOAPIC.
* **Passthru Management** This component manages devices that are passed-through
to specific VMs.
* **Extended Device Emulation** This component implements an I/O request
mechanism that allow the hypervisor to forward I/O accesses from UOSes to SOS
mechanism that allows the hypervisor to forward I/O accesses from a User
VM to the Service VM
for emulation.
* **VM Management** This component manages the creation, deletion and other
lifecycle operations of VMs.
* **Hypervisor Initialization** This component invokes the initialization
subroutines in the other components to bring up the hypervisor and start up
SOS in sharing mode or all the VMs in partitioning mode.
Service VM in sharing mode or all the VMs in partitioning mode.
ACRN hypervisor adopts a layered design where higher layers can invoke the
interfaces of lower layers but not vice versa. The only exception is the


@ -3,7 +3,7 @@
Enable QoS based on runC container
##################################
This document describes how ACRN supports Device-Model Quality of Service (QoS)
based on using runC containers to control the SOS resources
based on using runC containers to control the Service VM resources
(CPU, Storage, Memory, Network) by modifying the runC configuration file.
What is QoS
@ -28,7 +28,7 @@ to the `Open Container Initiative (OCI)
ACRN-DM QoS architecture
************************
In ACRN-DM QoS design, we run the ACRN-DM in a runC container environment.
Every time we start a UOS, we first start a runC container and
Every time we start a User VM, we first start a runC container and
then launch the ACRN-DM within that container.
The ACRN-DM QoS can manage these resources for Device-Model:
@ -108,7 +108,7 @@ How to use ACRN-DM QoS
.. note:: For configuration details, refer to the `Open Containers configuration documentation
<https://github.com/opencontainers/runtime-spec/blob/master/config.md>`_.
#. Add the UOS by ``acrnctl add`` command:
#. Add the User VM by ``acrnctl add`` command:
.. code-block:: none
@ -118,13 +118,13 @@ How to use ACRN-DM QoS
<https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/devicemodel/samples/nuc/launch_uos.sh>`_
that supports the ``-C`` (``run_container`` function) option.
#. Start the UOS by ``acrnd``
#. Start the User VM by ``acrnd``
.. code-block:: none
# acrnd -t
#. After UOS boots, you may use ``runc list`` command to check the container status in SOS:
#. After User VM boots, you may use ``runc list`` command to check the container status in Service VM:
.. code-block:: none


@ -3,7 +3,10 @@
Install ACRN Out-of-the-box
###########################
In this tutorial, we will learn to generate an out-of-the-box (OOTB) Service VM or a Preempt-RT VM image so that we can use ACRN or RTVM immediately after installation without any configuration or modification.
In this tutorial, we will learn to generate an out-of-the-box (OOTB)
Service VM or a Preempt-RT VM image so that we can use ACRN or RTVM
immediately after installation without any configuration or
modification.
Set up a Build Environment
**************************
@ -460,7 +463,7 @@ Step 3: Deploy the Service VM image
# dd if=/mnt/sos-industry.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc
.. note:: Given the large YAML size setting of over 100G, generating the SOS image and writing it to disk will take some time.
.. note:: Given the large YAML size setting of over 100G, generating the Service VM image and writing it to disk will take some time.
#. Configure the EFI firmware to boot the ACRN hypervisor by default:


@ -1,15 +1,15 @@
.. _build UOS from Clearlinux:
.. _build User VM from Clearlinux:
Building UOS from Clear Linux OS
################################
Building User VM from Clear Linux OS
####################################
This document builds on the :ref:`getting_started`,
and explains how to build UOS from Clear Linux OS.
and explains how to build a User VM from Clear Linux OS.
Build UOS image in Clear Linux OS
*********************************
Build User VM image in Clear Linux OS
*************************************
Follow these steps to build a UOS image from Clear Linux OS:
Follow these steps to build a User VM image from Clear Linux OS:
#. In Clear Linux OS, install ``ister`` (a template-based
installer for Linux) included in the Clear Linux OS bundle
@ -22,7 +22,7 @@ Follow these steps to build a UOS image from Clear Linux OS:
$ sudo swupd bundle-add os-installer
#. After installation is complete, use ``ister.py`` to
generate the image for UOS with the configuration in
generate the image for a User VM with the configuration in
``uos-image.json``:
.. code-block:: none
@ -81,7 +81,7 @@ Follow these steps to build a UOS image from Clear Linux OS:
``"Version": "latest"`` for example.
Here we will use ``"Version": 26550`` for example,
and the UOS image called ``uos.img`` will be generated
and the User VM image called ``uos.img`` will be generated
after successful installation. An example output log is:
.. code-block:: none
@ -118,10 +118,10 @@ Follow these steps to build a UOS image from Clear Linux OS:
Reboot Into Firmware Interface
Start the User OS (UOS)
***********************
Start the User VM
*****************
#. Mount the UOS image and check the UOS kernel:
#. Mount the User VM image and check the User VM kernel:
.. code-block:: none
@ -146,10 +146,10 @@ Start the User OS (UOS)
-k /mnt/usr/lib/kernel/default-iot-lts2018 \
.. note::
UOS image ``uos.img`` is in the directory ``~/``
and UOS kernel ``default-iot-lts2018`` is in ``/mnt/usr/lib/kernel/``.
User VM image ``uos.img`` is in the directory ``~/``
and User VM kernel ``default-iot-lts2018`` is in ``/mnt/usr/lib/kernel/``.
#. You are now all set to start the User OS (UOS):
#. You are now all set to start the User VM:
.. code-block:: none


@ -101,11 +101,11 @@ Example
To support below configuration in industry scenario:
+----------+-------+-------+--------+
|pCPU0 |pCPU1 |pCPU2 |pCPU3 |
+==========+=======+=======+========+
|SOS WaaG |RT Linux |vxWorks |
+----------+---------------+--------+
+-----------------+-------+-------+--------+
|pCPU0            |pCPU1  |pCPU2  |pCPU3   |
+=================+=======+=======+========+
|Service VM WaaG  |RT Linux       |vxWorks |
+-----------------+---------------+--------+
Change the following three files:


@ -21,7 +21,7 @@ An example
As an example, we'll show how to obtain the interrupts of a pass-through USB device.
First, we can get the USB controller BDF number (0:15.0) through the
following command in the SOS console::
following command in the Service VM console::
lspci | grep "USB controller"
@ -110,7 +110,7 @@ Then we use the command, on the ACRN console::
vm_console
to switch to the SOS console. Then we use the command::
to switch to the Service VM console. Then we use the command::
cat /tmp/acrnlog/acrnlog_cur.0
@ -125,7 +125,7 @@ and we will see the following log:
ACRN Trace
**********
ACRN trace is a tool running on the Service OS (SOS) to capture trace
ACRN trace is a tool running on the Service VM to capture trace
data. We can use the existing trace information to analyze, and we can
add self-defined tracing to analyze code which we care about.
@ -135,7 +135,7 @@ Using Existing trace event id to analyze trace
As an example, we can use the existing vm_exit trace to analyze the
reason and times of each vm_exit after we have done some operations.
1. Run the following SOS console command to collect
1. Run the following Service VM console command to collect
trace data::
# acrntrace -c
@ -208,7 +208,7 @@ shown in the following example:
:ref:`getting-started-building` and :ref:`kbl-nuc-sdc` for
detailed instructions on how to do that.
5. Now we can use the following command in the SOS console
5. Now we can use the following command in the Service VM console
to generate acrntrace data into the current directory::
acrntrace -c


@ -1,11 +1,11 @@
.. _Increase UOS disk size:
.. _Increase User VM disk size:
Increasing the User OS disk size
Increasing the User VM disk size
################################
This document builds on the :ref:`getting_started` and assumes you already have
a system with ACRN installed and running correctly. The size of the pre-built
Clear Linux User OS (UOS) virtual disk is typically only 8GB and this may not be
Clear Linux User VM virtual disk is typically only 8GB and this may not be
sufficient for some applications. This guide explains a few simple steps to
increase the size of that virtual disk.
@ -21,7 +21,7 @@ broken down into three steps:
.. note::
These steps are performed directly on the UOS disk image. The UOS VM **must**
These steps are performed directly on the User VM disk image. The User VM **must**
be powered off during this operation.
Increase the virtual disk size


@ -92,7 +92,7 @@ enable SGX support in the BIOS and in ACRN:
#. Add the EPC config in the VM configuration.
Apply the patch to enable SGX support in UOS in the SDC scenario:
Apply the patch to enable SGX support in User VM in the SDC scenario:
.. code-block:: bash


@ -21,7 +21,7 @@ Software Configuration
<https://github.com/projectacrn/acrn-hypervisor/releases/tag/acrn-2018w39.6-140000p>`_
* `acrn-kernel tag acrn-2018w39.6-140000p
<https://github.com/projectacrn/acrn-kernel/releases/tag/acrn-2018w39.6-140000p>`_
* Clear Linux OS: version: 25130 (UOS and SOS use this version)
* Clear Linux OS: version: 25130 (User VM and Service VM use this version)
Source code patches are provided in `skl-patches-for-acrn.tar file
<../_static/downloads/skl-patches-for-acrn.tar>`_ to work around or add support for
@ -95,10 +95,10 @@ Please follow the :ref:`kbl-nuc-sdc`, with the following changes:
#. Don't Enable weston service (skip this step found in the NUC's getting
started guide).
#. Set up Reference UOS by running the modified ``launch_uos.sh`` in
#. Set up Reference User VM by running the modified ``launch_uos.sh`` in
``acrn-hypervisor/devicemodel/samples/nuc/launch_uos.sh``
#. After UOS is launched, do these steps to run GFX workloads:
#. After User VM is launched, do these steps to run GFX workloads:
a) install weston and glmark2::


@ -269,7 +269,7 @@ secure monitor to schedule in/out Trusty secure world.
:name: trusty-isolated
As shown in :numref:`trusty-isolated` above, the hypervisor creates an
isolated secure world UOS to support a Trusty OS running in a UOS on
isolated secure world User VM to support a Trusty OS running in a User VM on
ACRN.
:numref:`trusty-lhs-rhs` below shows further implementation details. The RHS
@ -290,7 +290,7 @@ invoked. The WS Hypercall has parameters to specify the services cmd ID
requested from the non-secure world.
In the ACRN hypervisor design of the "one VM, two worlds"
architecture, there is a single UOS/VM structure per-UOS in the
architecture, there is a single User VM structure per User VM in the
Hypervisor, but two vCPU structures that save the LHS/RHS virtual
logical processor states respectively.
@ -311,7 +311,7 @@ implementation, secure storage is built in the RPMB partition in eMMC
Currently the eMMC in the APL SoC platform only has a single RPMB
partition for tamper-resistant and anti-replay secure storage. The
secure storage (RPMB) is virtualized to support multiple guest UOS VMs.
secure storage (RPMB) is virtualized to support multiple guest User VMs.
Although newer generations of flash storage (e.g. UFS 3.0, and NVMe)
support multiple RPMB partitions, this article only discusses the
virtualization solution for single-RPMB flash storage device in APL SoC
@ -328,11 +328,11 @@ high-level architecture.
In :numref:`trusty-rpmb`, the rKey (RPMB AuthKey) is the physical RPMB
authentication key used for data authenticated read/write access between
SOS kernel and physical RPMB controller in eMMC device. The VrKey is the
virtual RPMB authentication key used for authentication between SOS DM
module and its corresponding UOS secure software. Each UOS (if secure
Service VM kernel and physical RPMB controller in eMMC device. The VrKey is the
virtual RPMB authentication key used for authentication between Service VM DM
module and its corresponding User VM secure software. Each User VM (if secure
storage is supported) has its own VrKey, generated randomly when the DM
process starts, and is securely distributed to UOS secure world for each
process starts, and is securely distributed to User VM secure world for each
reboot. The rKey is fixed on a specific platform unless the eMMC is
replaced with another one.
@ -344,20 +344,20 @@ provisioning are out of scope for this document.)
For each reboot, the BIOS/SBL retrieves the rKey from CSE FW (or
generated from a special unique secret that is retrieved from CSE FW),
and SBL hands it off to the ACRN hypervisor, and the hypervisor in turn
sends the key to the SOS kernel.
sends the key to the Service VM kernel.
As an example, secure storage virtualization workflow for data write
access is like this:
#. UOS Secure world (e.g. Trusty) packs the encrypted data and signs it
#. User VM Secure world (e.g. Trusty) packs the encrypted data and signs it
with the vRPMB authentication key (VrKey), and sends the data along
with its signature over the RPMB FE driver in UOS non-secure world.
#. After DM process in SOS receives the data and signature, the vRPMB
with its signature over the RPMB FE driver in User VM non-secure world.
#. After DM process in Service VM receives the data and signature, the vRPMB
module in DM verifies them with the shared secret (vRPMB
authentication key, VrKey),
#. If the verification is successful, the vRPMB module does data address
remapping (remembering that the multiple UOS VMs share a single
physical RPMB partition), and forwards those data to SOS kernel, then
remapping (remembering that the multiple User VMs share a single
physical RPMB partition), and forwards those data to the Service VM kernel, then
kernel packs the data and signs it with the physical RPMB
authentication key (rKey). Eventually, the data and its signature
will be sent to physical eMMC device.
@ -372,17 +372,17 @@ Note that there are some security considerations in this architecture:
- The rKey protection is very critical in this system. If the key is
leaked, an attacker can change/overwrite the data on RPMB, bypassing
the "tamper-resistant & anti-replay" capability.
- Typically, the vRPMB module in DM process of SOS system can filter
data access, i.e. it doesn't allow one UOS to perform read/write
access to the data from another UOS VM.
If the vRPMB module in DM process is compromised, a UOS could
change/overwrite the secure data of other UOSs.
- Typically, the vRPMB module in DM process of Service VM system can filter
data access, i.e. it doesn't allow one User VM to perform read/write
access to the data from another User VM.
If the vRPMB module in DM process is compromised, a User VM could
change/overwrite the secure data of other User VMs.
Keeping SOS system as secure as possible is a very important goal in the
system security design. In practice, the SOS designer and implementer
Keeping the Service VM system as secure as possible is a very important goal in the
system security design. In practice, the Service VM designer and implementer
should obey these following rules (and more):
- Make sure the SOS is a closed system and doesn't allow users to
- Make sure the Service VM is a closed system and doesn't allow users to
install any unauthorized 3rd party software or components.
- External peripherals are constrained.
- Enable kernel-based hardening techniques, e.g., dm-verity (to make


@ -1,10 +1,10 @@
.. _using_agl_as_uos:
Using AGL as the User OS
Using AGL as the User VM
########################
This tutorial describes the steps to run Automotive Grade Linux (AGL)
as the User OS on ACRN hypervisor and the existing issues we still have.
as the User VM on ACRN hypervisor and the existing issues we still have.
We hope the steps documented in this article will help others reproduce the
issues we're seeing, and provide information for further debugging.
We're using an Apollo Lake-based NUC model `NUC6CAYH
@ -26,13 +26,13 @@ standard to enable rapid development of new features and technologies.
For more information about AGL, please visit `AGL's official website
<https://www.automotivelinux.org/>`_.
Steps for using AGL as the UOS
******************************
Steps for using AGL as the User VM
**********************************
#. Follow the instructions found in the :ref:`kbl-nuc-sdc` to
boot "The ACRN Service OS"
#. In SOS, download the release of AGL from https://download.automotivelinux.org/AGL/release/eel/.
#. In Service VM, download the release of AGL from https://download.automotivelinux.org/AGL/release/eel/.
We're using release ``eel_5.1.0`` for our example:
.. code-block:: none
@ -41,7 +41,7 @@ Steps for using AGL as the UOS
$ wget https://download.automotivelinux.org/AGL/release/eel/5.1.0/intel-corei7-64/deploy/images/intel-corei7-64/agl-demo-platform-crosssdk-intel-corei7-64.wic.xz
$ unxz agl-demo-platform-crosssdk-intel-corei7-64.wic.xz
#. Deploy the UOS kernel modules to UOS virtual disk image
#. Deploy the User VM kernel modules to the User VM virtual disk image
.. code-block:: none
@ -72,13 +72,13 @@ Steps for using AGL as the UOS
to match what you have downloaded above.
Likewise, you may need to adjust the kernel file name to ``default-iot-lts2018``.
#. Start the User OS (UOS)
#. Start the User VM
.. code-block:: none
$ sudo /usr/share/acrn/samples/nuc/launch_uos.sh
**Congratulations**, you are now watching the User OS booting up!
**Congratulations**, you are now watching the User VM booting up!
And you should be able to see the console of AGL:

View File

@ -28,8 +28,8 @@ as the two privileged VMs will mount the root filesystems via the SATA controlle
and the USB controller respectively.
This tutorial has been verified on the tagged ACRN v0.6 release.
Build kernel and modules for partition mode UOS
***********************************************
Build kernel and modules for partition mode User VM
***************************************************
#. On your development workstation, clone the ACRN kernel source tree, and
build the Linux kernel image that will be used to boot the privileged VMs:
@ -314,7 +314,7 @@ Enable partition mode in ACRN hypervisor
$ sudo cp build/acrn.32.out /boot
#. Modify the ``/etc/grub.d/40_custom`` file to create a new GRUB entry
that will multi-boot the ACRN hypervisor and the UOS kernel image
that will multi-boot the ACRN hypervisor and the User VM kernel image
Append the following configuration to the ``/etc/grub.d/40_custom`` file:
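The exact entry depends on your setup; as a minimal sketch, assuming the hypervisor
binary was copied to ``/boot/acrn.32.out`` as above and the privileged VM kernel image
is installed as ``/boot/bzImage`` (both names are illustrative), such an entry has
roughly this shape:

.. code-block:: none

   menuentry 'ACRN Partition Mode' {
       insmod part_gpt
       insmod ext2
       echo 'Loading ACRN hypervisor ...'
       multiboot /boot/acrn.32.out
       module /boot/bzImage
   }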

View File

@ -73,7 +73,7 @@ Flash SBL on the UP2
Build ACRN for UP2
******************
In Clear Linux, build out the SOS and LaaG image with these two files:
In Clear Linux, build out the Service VM and LaaG image with these two files:
* create-up2-images.sh
@ -105,7 +105,7 @@ An example of the configuration file ``uos.json``:
of ``"Version": 31030`` for example.
Build SOS and LaaG image:
Build Service VM and LaaG image:
.. code-block:: none
@ -120,10 +120,10 @@ Build SOS and LaaG image:
argument that specifies the directory where your ``acrn-hypervisor`` is found.
When building images, modify the ``--clearlinux-version`` argument
to a specific version (such as 31030). To generate the images of SOS only,
to a specific version (such as 31030). To generate the Service VM images only,
modify the ``--images-type`` argument to ``sos``.
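For illustration, a Service VM-only build using the version mentioned above could be
invoked roughly as follows (any other required arguments, such as the one pointing at
your ``acrn-hypervisor`` directory, are omitted here and depend on your setup):

.. code-block:: none

   $ ./create-up2-images.sh --clearlinux-version 31030 --images-type sos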
This step will generate the images of SOS and LaaG:
This step will generate the images of Service VM and LaaG:
* sos_boot.img
* sos_rootfs.img
@ -144,28 +144,28 @@ which is also in the directory ``~/acrn-hypervisor/doc/tutorials/``.
.. table::
:widths: auto
+------------------------------+---------------------------------------------------+
| Filename | Description |
+==============================+===================================================+
| sos_boot.img | This SOS image contains the ACRN hypervisor and |
| | SOS kernel. |
+------------------------------+---------------------------------------------------+
| sos_rootfs.img | This is the root filesystem image for the SOS. it |
| | contains the Device Models implementation and |
| | SOS user space. |
+------------------------------+---------------------------------------------------+
| partition_desc.bin | This is the binary image for GPT partitions |
+------------------------------+---------------------------------------------------+
| up2_laag.img | This is the root filesystem image for the SOS. |
| | It has an integrated kernel and userspace. |
+------------------------------+---------------------------------------------------+
| flash_LaaG.json | Configuration file for Intel Platform Flash Tool |
| | to flash SOS image + hypervisor/SOS boot image + |
| | SOS userland |
+------------------------------+---------------------------------------------------+
+------------------------------+----------------------------------------------------------+
| Filename | Description |
+==============================+==========================================================+
| sos_boot.img | This Service VM image contains the ACRN hypervisor and |
| | Service VM kernel. |
+------------------------------+----------------------------------------------------------+
| sos_rootfs.img | This is the root filesystem image for the Service VM. It |
| | contains the Device Models implementation and |
| | Service VM user space. |
+------------------------------+----------------------------------------------------------+
| partition_desc.bin | This is the binary image for GPT partitions |
+------------------------------+----------------------------------------------------------+
| up2_laag.img | This is the root filesystem image for the User VM (LaaG). |
| | It has an integrated kernel and userspace. |
+------------------------------+----------------------------------------------------------+
| flash_LaaG.json | Configuration file for Intel Platform Flash Tool |
| | to flash Service VM image + hypervisor/Service VM |
| | boot image + Service VM userland |
+------------------------------+----------------------------------------------------------+
.. note::
In this step, build SOS and LaaG images in Clear Linux rather than Ubuntu.
In this step, build Service VM and LaaG images in Clear Linux rather than Ubuntu.
Download and install flash tool
*******************************
@ -177,8 +177,8 @@ Download and install flash tool
<https://github.com/projectceladon/tools/blob/master/platform_flash_tool_lite/latest/platformflashtoollite_5.8.9.0_linux_x86_64.deb>`_
for example.
SOS and LaaG Installation
*************************
Service VM and LaaG Installation
********************************
#. Connect a USB cable from the debug board to your Ubuntu host machine,
and run the following command to verify that its USB serial port is
@ -274,24 +274,24 @@ SOS and LaaG Installation
#. When the UP2 board is in fastboot mode, you should be able
see the device in the Platform Flash Tool. Select the
file ``flash_LaaG.json`` and modify ``Configuration``
to ``SOS_and_LaaG``. Click ``Start to flash`` to flash images.
to ``SOS_and_LaaG`` (the Service VM + LaaG configuration). Click ``Start to flash`` to flash images.
.. image:: images/platformflashtool_start_to_flash.png
:align: center
Boot to SOS
***********
Boot to Service VM
******************
After flashing, the UP2 board will automatically reboot and
boot to the ACRN hypervisor. Log in to SOS by using the following command:
boot to the ACRN hypervisor. Log in to the Service VM using the following command:
.. image:: images/vm_console_login.png
:align: center
Launch UOS
**********
Launch User VM
**************
Run the ``launch_uos.sh`` script to launch the UOS:
Run the ``launch_uos.sh`` script to launch the User VM:
.. code-block:: none
@ -299,4 +299,4 @@ Run the ``launch_uos.sh`` script to launch the UOS:
$ wget https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/tutorials/launch_uos.sh
$ sudo ./launch_uos.sh -V 1
**Congratulations**, you are now watching the User OS booting up!
**Congratulations**, you are now watching the User VM booting up!

View File

@ -1,15 +1,15 @@
.. _using_vxworks_as_uos:
Using VxWorks* as User OS
Using VxWorks* as User VM
#########################
`VxWorks`_\* is a real-time proprietary OS designed for use in embedded systems requiring real-time, deterministic
performance. This tutorial describes how to run VxWorks as the User OS on the ACRN hypervisor
performance. This tutorial describes how to run VxWorks as the User VM on the ACRN hypervisor
based on Clear Linux 29970 (ACRN tag v1.1).
.. note:: You'll need to be a WindRiver* customer and have purchased VxWorks to follow this tutorial.
Steps for Using VxWorks as User OS
Steps for Using VxWorks as User VM
**********************************
#. Build VxWorks
@ -94,9 +94,9 @@ Steps for Using VxWorks as User OS
#. Follow :ref:`kbl-nuc-sdc` to boot "The ACRN Service OS".
#. Boot VxWorks as User OS.
#. Boot VxWorks as User VM.
On the ACRN SOS, prepare a directory and populate it with VxWorks files.
On the ACRN Service VM, prepare a directory and populate it with VxWorks files.
.. code-block:: none

View File

@ -1,9 +1,9 @@
.. _using_zephyr_as_uos:
Using Zephyr as User OS
Using Zephyr as User VM
#######################
This tutorial describes how to run Zephyr as the User OS on the ACRN hypervisor. We are using
This tutorial describes how to run Zephyr as the User VM on the ACRN hypervisor. We are using
a Kaby Lake-based NUC (model NUC7i5DNHE) in this tutorial.
Other :ref:`ACRN supported platforms <hardware>` should work as well.
@ -13,7 +13,7 @@ Introduction to Zephyr
The Zephyr RTOS is a scalable real-time operating system supporting multiple hardware architectures,
optimized for resource-constrained devices, and built with safety and security in mind.
Steps for Using Zephyr as User OS
Steps for Using Zephyr as User VM
*********************************
#. Build Zephyr
@ -21,7 +21,7 @@ Steps for Using Zephyr as User OS
Follow the `Zephyr Getting Started Guide <https://docs.zephyrproject.org/latest/getting_started/>`_ to
setup the Zephyr development environment.
The build process for ACRN UOS target is similar to other boards. We will build the `Hello World
The build process for the ACRN User VM target is similar to that for other boards. We will build the `Hello World
<https://docs.zephyrproject.org/latest/samples/hello_world/README.html>`_ sample for ACRN:
.. code-block:: none
@ -84,14 +84,14 @@ Steps for Using Zephyr as User OS
$ sudo umount /mnt
You now have a virtual disk image with a bootable Zephyr in ``zephyr.img``. If the Zephyr build system is not
the ACRN SOS, then you will need to transfer this image to the ACRN SOS (via, e.g, a USB stick or network )
the ACRN Service VM, you will need to transfer this image to the ACRN Service VM (e.g., via a USB stick or the network).
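For example, over the network (the user name, IP address, and destination directory
below are placeholders):

.. code-block:: none

   $ scp zephyr.img user@<Service-VM-IP>:~/zephyr/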
#. Follow :ref:`kbl-nuc-sdc` to boot "The ACRN Service OS" based on Clear Linux OS 28620
(ACRN tag: acrn-2019w14.3-140000p)
#. Boot Zephyr as User OS
#. Boot Zephyr as User VM
On the ACRN SOS, prepare a directory and populate it with Zephyr files.
On the ACRN Service VM, prepare a directory and populate it with Zephyr files.
.. code-block:: none
@ -100,7 +100,7 @@ Steps for Using Zephyr as User OS
You will also need to copy the ``zephyr.img`` created in the above section into the ``zephyr`` directory.
Run the ``launch_zephyr.sh`` script to launch Zephyr as UOS.
Run the ``launch_zephyr.sh`` script to launch Zephyr as the User VM.
.. code-block:: none

View File

@ -31,15 +31,18 @@ Console enable list
+-----------------+-----------------------+--------------------+----------------+----------------+
| Scenarios | vm0 | vm1 | vm2 | vm3 |
+=================+=======================+====================+================+================+
| SDC | SOS (vuart enable) | Post-launched | Post-launched | |
| SDC | Service VM | Post-launched | Post-launched | |
| | (vuart enable) | | | |
| | | | | |
+-----------------+-----------------------+--------------------+----------------+----------------+
| SDC2 | SOS (vuart enable) | Post-launched | | Post-launched |
+-----------------+-----------------------+--------------------+----------------+----------------+
| Hybrid | Pre-launched (Zephyr) | SOS (vuart enable) | Post-launched | |
| SDC2 | Service VM | Post-launched | | Post-launched |
| | (vuart enable) | | | |
+-----------------+-----------------------+--------------------+----------------+----------------+
| Industry | SOS (vuart enable) | Post-launched | Post-launched | Post-launched |
| | | | (vuart enable) | |
| Hybrid | Pre-launched (Zephyr) | Service VM | Post-launched | |
| | (vuart enable) | (vuart enable) | | |
+-----------------+-----------------------+--------------------+----------------+----------------+
| Industry | Service VM | Post-launched | Post-launched | Post-launched |
| | (vuart enable) | | (vuart enable) | |
+-----------------+-----------------------+--------------------+----------------+----------------+
| Logic_partition | Pre-launched | Pre-launched RTVM | Post-launched | |
| | (vuart enable) | (vuart enable) | RTVM | |
@ -106,14 +109,14 @@ Communication vUART enable list
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Scenarios | vm0 | vm1 | vm2 | vm3 |
+=================+=======================+====================+=====================+================+
| SDC | SOS | Post-launched | Post-launched | |
| SDC | Service VM | Post-launched | Post-launched | |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| SDC2 | SOS | Post-launched | Post-launched | Post-launched |
| SDC2 | Service VM | Post-launched | Post-launched | Post-launched |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Hybrid | Pre-launched (Zephyr) | SOS | Post-launched | |
| Hybrid | Pre-launched (Zephyr) | Service VM | Post-launched | |
| | (vuart enable COM2) | (vuart enable COM2)| | |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Industry | SOS | Post-launched | Post-launched RTVM | Post-launched |
| Industry | Service VM | Post-launched | Post-launched RTVM | Post-launched |
| | (vuart enable COM2) | | (vuart enable COM2) | |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Logic_partition | Pre-launched | Pre-launched RTVM | | |

View File

@ -4,7 +4,7 @@ Device Model Parameters
#######################
Hypervisor Device Model (DM) is a QEMU-like application in the Service
OS (SOS) responsible for creating a UOS VM and then performing devices
VM responsible for creating a User VM and then performing device
emulation based on command line configurations, as introduced in
:ref:`hld-devicemodel`.
@ -23,7 +23,7 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
default value.
* - :kbd:`-B, --bootargs <bootargs>`
- Set the UOS kernel command line arguments.
- Set the User VM kernel command line arguments.
The maximum length is 1023 characters.
The bootargs string will be passed to the kernel as its cmdline.
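An illustrative setting (the actual arguments depend on your guest image and console
setup):

.. code-block:: none

   -B "root=/dev/vda2 rw rootwait console=ttyS0"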
@ -105,14 +105,14 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
As an example, the following commands are used to enable the IOC feature; the
initial wakeup reason is the ignition button, and cbc_attach uses ttyS1 for
TTY line discipline in UOS::
TTY line discipline in User VM::
-i /run/acrn/ioc_$vm_name,0x20
-l com2,/run/acrn/ioc_$vm_name
* - :kbd:`--intr_monitor <intr_monitor_params>`
- Enable interrupt storm monitor for UOS. Use this option to prevent an interrupt
storm from the UOS.
- Enable interrupt storm monitor for User VM. Use this option to prevent an interrupt
storm from the User VM.
usage: ``--intr_monitor threshold/s probe-period(s) delay_time(ms) delay_duration(ms)``
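A hypothetical setting, mapping values onto the fields above (tune them to your
workload):

.. code-block:: none

   --intr_monitor 10000 10 1 100

i.e., a threshold of 10000 interrupts per second, a 10 second probe period, a 1 ms
delay time, and a 100 ms delay duration.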
@ -129,7 +129,7 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
to normal.
* - :kbd:`-k, --kernel <kernel_image_path>`
- Set the kernel (full path) for the UOS kernel. The maximum path length is
- Set the kernel (full path) for the User VM kernel. The maximum path length is
1023 characters. The DM handles bzImage image format.
usage: ``-k /path/to/your/kernel_image``
@ -138,12 +138,12 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
- (See :kbd:`-i, --ioc_node`)
* - :kbd:`-m, --memsize <memory_size>`
- Setup total memory size for UOS.
- Set up the total memory size for the User VM.
memory_size format is: "<size>{K/k, B/b, M/m, G/g}", and size is an
integer.
usage: ``-m 4g``: set UOS memory to 4 gigabytes.
usage: ``-m 4g``: set User VM memory to 4 gigabytes.
* - :kbd:`--mac_seed <seed_string>`
- Set a platform-unique string as a seed to generate the MAC address.
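A hypothetical example (the seed string itself is arbitrary, as long as it is unique
per platform):

.. code-block:: none

   --mac_seed P1234567-vm1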
@ -166,8 +166,8 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
can be disabled using this option.
* - :kbd:`-r, --ramdisk <ramdisk_image_path>`
- Set the ramdisk (full path) for the UOS. The maximum length is 1023.
The supported ramdisk format depends on your UOS kernel configuration.
- Set the ramdisk (full path) for the User VM. The maximum length is 1023 characters.
The supported ramdisk format depends on your User VM kernel configuration.
usage: ``-r /path/to/your/ramdisk_image``
@ -192,10 +192,10 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
-s 7,xhci,1-2,2-2
This configuration means the virtual xHCI will appear in PCI slot 7
in UOS. Any physical USB device attached on 1-2 (bus 1, port 2) or
2-2 (bus 2, port 2) will be detected by UOS and be used as expected. To
in the User VM. Any physical USB device attached on 1-2 (bus 1, port 2) or
2-2 (bus 2, port 2) will be detected by the User VM and can be used as expected. To
determine which bus and port a USB device is attached to, you can run
``lsusb -t`` in SOS.
``lsusb -t`` in the Service VM.
::
@ -221,7 +221,7 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
* - :kbd:`--vsbl <vsbl_file_path>`
- Virtual Slim bootloader (vSBL) is the virtual bootloader supporting
booting of the UOS on the ACRN hypervisor platform. The vSBL design is
booting of the User VM on the ACRN hypervisor platform. The vSBL design is
derived from Slim Bootloader, which follows a staged design approach
that provides hardware initialization and launching a payload that
provides the boot logic.
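Typical usage, assuming the vSBL binary is installed at the path shown (adjust it to
your system):

.. code-block:: none

   --vsbl /usr/share/acrn/bios/VSBL.bin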
@ -308,7 +308,7 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
``GUEST_FLAG_IO_COMPLETION_POLLING`` mode. This kind of VM is
generally used for soft realtime scenarios (without ``--lapic_pt``) or
hard realtime scenarios (with ``--lapic_pt``). With ``GUEST_FLAG_RT``,
the Service OS (SOS) cannot interfere with this kind of VM when it is
the Service VM cannot interfere with this kind of VM when it is
running. It can only be powered off from inside the VM itself.
By default, this option is not enabled.
@ -343,10 +343,10 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
Example::
for general UOS, like LaaG or WaaG, it need set:
for a general User VM, such as LaaG or WaaG, set:
--pm_notify_channel uart --pm_by_vuart pty,/run/acrn/life_mngr_vm1
-l com2,/run/acrn/life_mngr_vm1
for an RTVM, such as RT-Linux:
--pm_notify_channel uart --pm_by_vuart tty,/dev/ttyS1
For different UOS, it can be configured as needed.
For different User VMs, this can be configured as needed.

View File

@ -89,7 +89,7 @@ vcpu_dumpreg
registers values, etc.
In the following example, we dump the vCPU0 RIP register value and get into
the SOS to search for the currently running function, using these
the Service VM to search for the currently running function, using these
commands::
cat /proc/kallsyms | grep RIP_value
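For instance, assuming the VM of interest is VM 1 and we dump its vCPU 0 in the
hypervisor console (the IDs are illustrative)::

   ACRN:\>vcpu_dumpreg 1 0

Take the RIP value from the output and substitute it for ``RIP_value`` in the
``kallsyms`` search above.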

View File

@ -7,7 +7,7 @@ Generic kernel parameters
*************************
A number of kernel parameters control the behavior of ACRN-based systems. Some
are applicable to the Service OS (SOS) kernel, others to the User OS (UOS)
are applicable to the Service VM kernel, others to the User VM
kernel, and some are applicable to both.
This section focuses on generic parameters from the Linux kernel which are
@ -18,12 +18,12 @@ relevant for configuring or debugging ACRN-based systems.
:widths: 10,10,50,30
* - Parameter
- Used in SOS or UOS
- Used in Service VM or User VM
- Description
- Usage example
* - module_blacklist
- SOS
- Service VM
- A comma-separated list of modules that should not be loaded.
Useful to debug or work
around issues related to specific modules.
@ -32,14 +32,14 @@ relevant for configuring or debugging ACRN-based systems.
module_blacklist=dwc3_pci
* - no_timer_check
- SOS,UOS
- Service VM,User VM
- Disables the code which tests for broken timer IRQ sources.
- ::
no_timer_check
* - console
- SOS,UOS
- Service VM,User VM
- Output console device and options.
``tty<n>``
@ -65,7 +65,7 @@ relevant for configuring or debugging ACRN-based systems.
console=hvc0
* - loglevel
- SOS
- Service VM
- All Kernel messages with a loglevel less than the console loglevel will
be printed to the console. The loglevel can also be changed with
``klogd`` or other programs. The loglevels are defined as follows:
@ -96,7 +96,7 @@ relevant for configuring or debugging ACRN-based systems.
loglevel=7
* - ignore_loglevel
- UOS
- User VM
- Ignoring the loglevel setting will print **all**
kernel messages to the console. Useful for debugging.
We also add it as a printk module parameter, so users
@ -108,7 +108,7 @@ relevant for configuring or debugging ACRN-based systems.
* - log_buf_len
- UOS
- User VM
- Sets the size of the printk ring buffer,
in bytes. n must be a power of two and greater
than the minimal size. The minimal size is defined
@ -121,7 +121,7 @@ relevant for configuring or debugging ACRN-based systems.
log_buf_len=16M
* - consoleblank
- SOS,UOS
- Service VM,User VM
- The console blank (screen saver) timeout in
seconds. Defaults to 600 (10 minutes). A value of 0
disables the blank timer.
@ -130,7 +130,7 @@ relevant for configuring or debugging ACRN-based systems.
consoleblank=0
* - rootwait
- SOS,UOS
- Service VM,User VM
- Wait (indefinitely) for the root device to show up.
Useful for devices that are detected asynchronously
(e.g. USB and MMC devices).
@ -139,7 +139,7 @@ relevant for configuring or debugging ACRN-based systems.
rootwait
* - root
- SOS,UOS
- Service VM,User VM
- Define the root filesystem
``/dev/<disk_name><decimal>``
@ -166,14 +166,14 @@ relevant for configuring or debugging ACRN-based systems.
root=PARTUUID=00112233-4455-6677-8899-AABBCCDDEEFF
* - rw
- SOS,UOS
- Service VM,User VM
- Mount root device read-write on boot
- ::
rw
* - tsc
- UOS
- User VM
- Disable clocksource stability checks for TSC.
Format: <string>, where the only supported value is:
@ -188,7 +188,7 @@ relevant for configuring or debugging ACRN-based systems.
tsc=reliable
* - cma
- SOS
- Service VM
- Sets the size of the kernel global memory area for
contiguous memory allocations, and optionally the
placement constraint by the physical address range of
@ -200,7 +200,7 @@ relevant for configuring or debugging ACRN-based systems.
cma=64M@0
* - hvlog
- SOS
- Service VM
- Reserve memory for the ACRN hypervisor log. The reserved space should not
overlap any other blocks (e.g. hypervisor's reserved space).
- ::
@ -208,7 +208,7 @@ relevant for configuring or debugging ACRN-based systems.
hvlog=2M@0x6de00000
* - memmap
- SOS
- Service VM
- Mark specific memory as reserved.
``memmap=nn[KMG]$ss[KMG]``
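An illustrative value, following the size-at-address pattern of the ``hvlog`` example
above (choose an address that does not overlap other reserved blocks):

.. code-block:: none

   memmap=2M$0x6de00000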
@ -222,7 +222,7 @@ relevant for configuring or debugging ACRN-based systems.
* - ramoops.mem_address
ramoops.mem_size
ramoops.console_size
- SOS
- Service VM
- Ramoops is an oops/panic logger that writes its logs to RAM
before the system crashes. Ramoops uses a predefined memory area
to store the dump. See `Linux Kernel Ramoops oops/panic logger
@ -236,7 +236,7 @@ relevant for configuring or debugging ACRN-based systems.
* - reboot_panic
- SOS
- Service VM
- Reboot in case of panic
The comma-delimited parameters are:
@ -258,7 +258,7 @@ relevant for configuring or debugging ACRN-based systems.
reboot_panic=p,w
* - maxcpus
- UOS
- User VM
- Maximum number of processors that an SMP kernel
will bring up during bootup.
@ -275,14 +275,14 @@ relevant for configuring or debugging ACRN-based systems.
maxcpus=1
* - nohpet
- UOS
- User VM
- Don't use the HPET timer
- ::
nohpet
* - intel_iommu
- UOS
- User VM
- Intel IOMMU driver (DMAR) option
``on``:
@ -314,34 +314,34 @@ section below has more details on a few select parameters.
:widths: 10,10,50,30
* - Parameter
- Used in SOS or UOS
- Used in Service VM or User VM
- Description
- Usage example
* - i915.enable_gvt
- SOS
- Service VM
- Enable Intel GVT-g graphics virtualization support in the host
- ::
i915.enable_gvt=1
* - i915.enable_pvmmio
- SOS, UOS
- Service VM, User VM
- Control Para-Virtualized MMIO (PVMMIO). It batches sequential MMIO writes
into a shared buffer between the SOS and UOS
into a shared buffer between the Service VM and User VM
- ::
i915.enable_pvmmio=0x1F
* - i915.gvt_workload_priority
- SOS
- Define the priority level of UOS graphics workloads
- Service VM
- Define the priority level of User VM graphics workloads
- ::
i915.gvt_workload_priority=1
* - i915.enable_initial_modeset
- SOS
- Service VM
- On MRB, value must be ``1``. On NUC or UP2 boards, value must be
``0``. See :ref:`i915-enable-initial-modeset`.
- ::
@ -350,63 +350,63 @@ section below has more details on a few select parameters.
i915.enable_initial_modeset=0
* - i915.nuclear_pageflip
- SOS,UOS
- Service VM,User VM
- Force enable atomic functionality on platforms that don't have full support yet.
- ::
i915.nuclear_pageflip=1
* - i915.avail_planes_per_pipe
- SOS
- Service VM
- See :ref:`i915-avail-planes-owners`.
- ::
i915.avail_planes_per_pipe=0x01010F
* - i915.domain_plane_owners
- SOS
- Service VM
- See :ref:`i915-avail-planes-owners`.
- ::
i915.domain_plane_owners=0x011111110000
* - i915.domain_scaler_owner
- SOS
- Service VM
- See `i915.domain_scaler_owner`_
- ::
i915.domain_scaler_owner=0x021100
* - i915.enable_guc
- SOS
- Service VM
- Enable GuC load for HuC load.
- ::
i915.enable_guc=0x02
* - i915.avail_planes_per_pipe
- UOS
- User VM
- See :ref:`i915-avail-planes-owners`.
- ::
i915.avail_planes_per_pipe=0x070F00
* - i915.enable_guc
- UOS
- User VM
- Disable GuC
- ::
i915.enable_guc=0
* - i915.enable_hangcheck
- UOS
- User VM
- Disable checking GPU activity to detect hangs.
- ::
i915.enable_hangcheck=0
* - i915.enable_fbc
- UOS
- User VM
- Enable frame buffer compression for power savings
- ::
@ -425,8 +425,8 @@ i915.enable_gvt
This option enables Intel GVT-g graphics virtualization
support in the host. By default, it's not enabled, so we need to add
``i915.enable_gvt=1`` in the SOS kernel command line. This is a Service
OS only parameter, and cannot be enabled in the User OS.
``i915.enable_gvt=1`` in the Service VM kernel command line. This is a Service
VM only parameter and cannot be enabled in the User VM.
i915.enable_pvmmio
------------------
@ -434,8 +434,8 @@ i915.enable_pvmmio
We introduce the feature named **Para-Virtualized MMIO** (PVMMIO)
to improve graphics performance of the GVT-g guest.
This feature batches sequential MMIO writes into a
shared buffer between the Service OS and User OS, and then submits a
para-virtualized command to notify to GVT-g in Service OS. This
shared buffer between the Service VM and User VM, and then submits a
para-virtualized command to notify GVT-g in the Service VM. This
effectively reduces the trap numbers of MMIO operations and improves
overall graphics performance.
@ -455,8 +455,8 @@ The PVMMIO optimization levels are:
* PVMMIO_PPGTT_UPDATE = 0x10 - Use PVMMIO method to update the PPGTT table
of the guest.
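These optimization levels are bit flags, so a value selects a combination of them;
two illustrative settings, based on the ``0x10`` value above and the ``0x1F`` example
shown in the table:

.. code-block:: none

   i915.enable_pvmmio=0x1F
   i915.enable_pvmmio=0x0F

The first value sets all of the level bits up through ``0x10``; the second clears the
``0x10`` bit so that only PVMMIO_PPGTT_UPDATE is left disabled.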
.. note:: This parameter works in both the Service OS and User OS, but
changes to one will affect the other. For example, if either SOS or UOS
.. note:: This parameter works in both the Service VM and User VM, but
changes to one will affect the other. For example, if either Service VM or User VM
disables the PVMMIO_PPGTT_UPDATE feature, this optimization will be
disabled for both.
@ -468,17 +468,17 @@ AcrnGT supports **Prioritized Rendering** as described in the
configuration option controls the priority level of GVT-g guests.
Priority levels range from -1023 to 1023.
The default priority is zero, the same priority as the Service OS. If
The default priority is zero, the same priority as the Service VM. If
the level is less than zero, the guest's priority will be lower than the
Service OS, so graphics preemption will work and the prioritized
Service VM, so graphics preemption will work and the prioritized
rendering feature will be enabled. If the level is greater than zero,
UOS graphics workloads will preempt most of the SOS graphics workloads,
User VM graphics workloads will preempt most of the Service VM graphics workloads,
except for display updating related workloads that use a default highest
priority (1023).
Currently, all UOSes share the same priority.
This is a Service OS only parameters, and does
not work in the User OS.
Currently, all User VMs share the same priority.
This is a Service VM only parameter and does
not work in the User VM.
.. _i915-enable-initial-modeset:
@ -494,14 +494,14 @@ initialized, so users would not be able to see the fb console on screen.
If there is no graphics UI running by default, users will see black
screens displayed.
When ``i915.enable_initial_modeset=0`` in SOS, the plane restriction
When ``i915.enable_initial_modeset=0`` is set in the Service VM, the plane restriction
(also known as plane-based domain ownership) feature will be disabled.
(See the next section and :ref:`plane_restriction` in the ACRN GVT-g
High Level Design for more information about this feature.)
In the current configuration, we will set
``i915.enable_initial_modeset=1`` in SOS and
``i915.enable_initial_modeset=0`` in UOS.
``i915.enable_initial_modeset=1`` in the Service VM and
``i915.enable_initial_modeset=0`` in the User VM.
This parameter is not used on UEFI platforms.
@ -510,9 +510,9 @@ This parameter is not used on UEFI platforms.
i915.avail_planes_per_pipe and i915.domain_plane_owners
-------------------------------------------------------
Both Service OS and User OS are provided a set of HW planes where they
Both the Service VM and the User VM are provided a set of HW planes where they
can display their contents. Since each domain provides its content,
there is no need for any extra composition to be done through SOS.
there is no need for any extra composition to be done through the Service VM.
``i915.avail_planes_per_pipe`` and ``i915.domain_plane_owners`` work
together to provide the plane restriction (or plane-based domain
ownership) feature.
@ -528,9 +528,9 @@ ownership) feature.
The ``i915.domain_plane_owners`` parameter controls the ownership of all
the planes in the system, as shown in :numref:`i915-planes-pipes`. Each
4-bit nibble identifies the domain id owner for that plane and a group
of 4 nibbles represents a pipe. This is a Service OS only configuration
and cannot be modified at runtime. Domain ID 0x0 is for the Service OS,
the User OS use domain IDs from 0x1 to 0xF.
of 4 nibbles represents a pipe. This is a Service VM only configuration
and cannot be modified at runtime. Domain ID 0x0 is for the Service VM,
and User VMs use domain IDs from 0x1 to 0xF.
.. figure:: images/i915-image1.png
:width: 900px
@ -540,8 +540,8 @@ ownership) feature.
i915.domain_plane_owners
For example, if we set ``i915.domain_plane_owners=0x010001101110``, the
plane ownership will be as shown in :numref:`i915-planes-example1` - SOS
(green) owns plane 1A, 1B, 4B, 1C, and 2C, and UOS #1 owns plane 2A, 3A,
plane ownership will be as shown in :numref:`i915-planes-example1` - Service VM
(green) owns planes 1A, 1B, 4B, 1C, and 2C, and User VM #1 owns planes 2A, 3A,
4A, 2B, 3B and 3C.
.. figure:: images/i915-image2.png
@ -553,16 +553,16 @@ ownership) feature.
Some other examples:
* i915.domain_plane_owners=0x022211110000 - SOS (0x0) owns planes on pipe A;
UOS #1 (0x1) owns all planes on pipe B; and UOS #2 (0x2) owns all
* i915.domain_plane_owners=0x022211110000 - Service VM (0x0) owns planes on pipe A;
User VM #1 (0x1) owns all planes on pipe B; and User VM #2 (0x2) owns all
planes on pipe C (since, in the representation in
:numref:`i915-planes-pipes` above, there are only 3 planes attached to
pipe C).
* i915.domain_plane_owners=0x000001110000 - SOS owns all planes on pipe A
and pipe C; UOS #1 owns plane 1, 2 and 3 on pipe B. Plane 4 on pipe B
is owned by the SOS so that if it wants to display notice message, it
can display on top of the UOS.
* i915.domain_plane_owners=0x000001110000 - Service VM owns all planes on pipe A
and pipe C; User VM #1 owns planes 1, 2, and 3 on pipe B. Plane 4 on pipe B
is owned by the Service VM so that if it wants to display a notice message, it
can display on top of the User VM.
* i915.avail_planes_per_pipe
@ -579,8 +579,8 @@ ownership) feature.
i915.avail_planes_per_pipe
For example, if we set ``i915.avail_planes_per_pipe=0x030901`` in SOS
and ``i915.avail_planes_per_pipe=0x04060E`` in UOS, the planes will be as
For example, if we set ``i915.avail_planes_per_pipe=0x030901`` in the Service VM
and ``i915.avail_planes_per_pipe=0x04060E`` in the User VM, the planes will be as
shown in :numref:`i915-avail-planes-example1` and
:numref:`i915-avail-planes-example2`:
@ -589,21 +589,21 @@ ownership) feature.
:align: center
:name: i915-avail-planes-example1
SOS i915.avail_planes_per_pipe
Service VM i915.avail_planes_per_pipe
.. figure:: images/i915-image5.png
:width: 500px
:align: center
:name: i915-avail-planes-example2
UOS i915.avail_planes_per_pipe
User VM i915.avail_planes_per_pipe
``i915.avail_planes_per_pipe`` controls the view of planes from i915 drivers
inside of every domain, and ``i915.domain_plane_owners`` is the global
arbiter controlling which domain can present its content onto the
real hardware. Generally, they are aligned. For example, we can set
``i915.domain_plane_owners=0x011111110000``,
``i915.avail_planes_per_pipe=0x00000F`` in SOS, and
``i915.avail_planes_per_pipe=0x00000F`` in Service VM, and
``i915.avail_planes_per_pipe=0x070F00`` in domain 1, so every domain will
only flip on the planes it owns.
@ -611,9 +611,9 @@ ownership) feature.
not be aligned with the
setting of ``domain_plane_owners``. Consider this example:
``i915.domain_plane_owners=0x011111110000``,
``i915.avail_planes_per_pipe=0x01010F`` in SOS and
``i915.avail_planes_per_pipe=0x01010F`` in Service VM and
``i915.avail_planes_per_pipe=0x070F00`` in domain 1.
With this configuration, SOS will be able to render on plane 1B and
With this configuration, the Service VM will be able to render on plane 1B and
plane 1C; however, the content of plane 1B and plane 1C will not be
flipped onto the real hardware.
@ -636,12 +636,12 @@ guest OS.
As with the parameter ``i915.domain_plane_owners``, each nibble of
``i915.domain_scaler_owner`` represents the domain id that owns the scaler;
every nibble (4 bits) represents a scaler and every group of 2 nibbles
represents a pipe. This is a Service OS only configuration and cannot be
modified at runtime. Domain ID 0x0 is for the Service OS, the User OS
represents a pipe. This is a Service VM only configuration and cannot be
modified at runtime. Domain ID 0x0 is for the Service VM; the User VMs
use domain IDs from 0x1 to 0xF.
For example, if we set ``i915.domain_scaler_owner=0x021100``, the SOS
owns scaler 1A, 2A; UOS #1 owns scaler 1B, 2B; and UOS #2 owns scaler
For example, if we set ``i915.domain_scaler_owner=0x021100``, the Service VM
owns scalers 1A and 2A; User VM #1 owns scalers 1B and 2B; and User VM #2 owns scaler
1C.
i915.enable_hangcheck
@ -651,8 +651,8 @@ This parameter enable detection of a GPU hang. When enabled, the i915
will start a timer to check whether the workload is completed within a specific
time. If not, i915 will treat it as a GPU hang and trigger a GPU reset.
In AcrnGT, the workload in SOS and UOS can be set to different
priorities. If SOS is assigned a higher priority than the UOS, the UOS's
In AcrnGT, the workloads in the Service VM and User VM can be set to different
priorities. If the Service VM is assigned a higher priority than the User VM, the User VM's
workload might not be able to run on the HW on time. This may lead to
the guest i915 triggering a hangcheck and cause a guest GPU reset.
This reset is unnecessary, so we use ``i915.enable_hangcheck=0`` to