doc: change UOS/SOS to User VM/Service VM

First pass at updating obsolete usage of "UOS" and "SOS".

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>

commit 237997f3f9 (parent f5f16f4e64), committed by David Kinder
@@ -17,12 +17,12 @@ the below diagram.

 HBA is registered to the PCI system with device id 0x2821 and vendor id
 0x8086. Its memory registers are mapped in BAR 5. It only supports 6
-ports (refer to ICH8 AHCI). AHCI driver in the Guest OS can access HBA in DM
+ports (refer to ICH8 AHCI). AHCI driver in the User VM can access HBA in DM
 through the PCI BAR. And HBA can inject MSI interrupts through the PCI
 framework.

-When the application in the Guest OS reads data from /dev/sda, the request will
-send through the AHCI driver and then the PCI driver. The Guest VM will trap to
+When the application in the User VM reads data from /dev/sda, the request will
+send through the AHCI driver and then the PCI driver. The User VM will trap to
 hypervisor, and hypervisor dispatch the request to DM. According to the
 offset in the BAR, the request will dispatch to port control handler.
 Then the request is parse to a block I/O request which can be processed

@@ -39,6 +39,6 @@ regular file.

 For example,

-SOS: -s 20,ahci,hd:/dev/mmcblk0p1
+System VM: -s 20,ahci,hd:/dev/mmcblk0p1

-UOS: /dev/sda
+User VM: /dev/sda
@@ -315,7 +315,7 @@ High Level Architecture
 ***********************

 :numref:`gvt-arch` shows the overall architecture of GVT-g, based on the
-ACRN hypervisor, with SOS as the privileged VM, and multiple user
+ACRN hypervisor, with Service VM as the privileged VM, and multiple user
 guests. A GVT-g device model working with the ACRN hypervisor,
 implements the policies of trap and pass-through. Each guest runs the
 native graphics driver and can directly access performance-critical

@@ -323,7 +323,7 @@ resources: the Frame Buffer and Command Buffer, with resource
 partitioning (as presented later). To protect privileged resources, that
 is, the I/O registers and PTEs, corresponding accesses from the graphics
 driver in user VMs are trapped and forwarded to the GVT device model in
-SOS for emulation. The device model leverages i915 interfaces to access
+Service VM for emulation. The device model leverages i915 interfaces to access
 the physical GPU.

 In addition, the device model implements a GPU scheduler that runs

@@ -338,7 +338,7 @@ direct GPU execution. With that, GVT-g can achieve near-native
 performance for a VM workload.

 In :numref:`gvt-arch`, the yellow GVT device model works as a client on
-top of an i915 driver in the SOS. It has a generic Mediated Pass-Through
+top of an i915 driver in the Service VM. It has a generic Mediated Pass-Through
 (MPT) interface, compatible with all types of hypervisors. For ACRN,
 some extra development work is needed for such MPT interfaces. For
 example, we need some changes in ACRN-DM to make ACRN compatible with

@@ -368,7 +368,7 @@ trap-and-emulation, including MMIO virtualization, interrupt
 virtualization, and display virtualization. It also handles and
 processes all the requests internally, such as, command scan and shadow,
 schedules them in the proper manner, and finally submits to
-the SOS i915 driver.
+the Service VM i915 driver.

 .. figure:: images/APL_GVT-g-DM.png
    :width: 800px

@@ -446,7 +446,7 @@ interrupts are categorized into three types:
 exception to this is the VBlank interrupt. Due to the demands of user
 space compositors, such as Wayland, which requires a flip done event
 to be synchronized with a VBlank, this interrupt is forwarded from
-SOS to UOS when SOS receives it from the hardware.
+Service VM to User VM when Service VM receives it from the hardware.

 - Event-based GPU interrupts are emulated by the emulation logic. For
 example, AUX Channel Interrupt.
@@ -524,7 +524,7 @@ later after performing a few basic checks and verifications.
 Display Virtualization
 ----------------------

-GVT-g reuses the i915 graphics driver in the SOS to initialize the Display
+GVT-g reuses the i915 graphics driver in the Service VM to initialize the Display
 Engine, and then manages the Display Engine to show different VM frame
 buffers. When two vGPUs have the same resolution, only the frame buffer
 locations are switched.

@@ -550,7 +550,7 @@ A typical automotive use case is where there are two displays in the car
 and each one needs to show one domain's content, with the two domains
 being the Instrument cluster and the In Vehicle Infotainment (IVI). As
 shown in :numref:`direct-display`, this can be accomplished through the direct
-display model of GVT-g, where the SOS and UOS are each assigned all HW
+display model of GVT-g, where the Service VM and User VM are each assigned all HW
 planes of two different pipes. GVT-g has a concept of display owner on a
 per HW plane basis. If it determines that a particular domain is the
 owner of a HW plane, then it allows the domain's MMIO register write to

@@ -567,15 +567,15 @@ Indirect Display Model

 Indirect Display Model

-For security or fastboot reasons, it may be determined that the UOS is
+For security or fastboot reasons, it may be determined that the User VM is
 either not allowed to display its content directly on the HW or it may
 be too late before it boots up and displays its content. In such a
 scenario, the responsibility of displaying content on all displays lies
-with the SOS. One of the use cases that can be realized is to display the
-entire frame buffer of the UOS on a secondary display. GVT-g allows for this
-model by first trapping all MMIO writes by the UOS to the HW. A proxy
-application can then capture the address in GGTT where the UOS has written
-its frame buffer and using the help of the Hypervisor and the SOS's i915
+with the Service VM. One of the use cases that can be realized is to display the
+entire frame buffer of the User VM on a secondary display. GVT-g allows for this
+model by first trapping all MMIO writes by the User VM to the HW. A proxy
+application can then capture the address in GGTT where the User VM has written
+its frame buffer and using the help of the Hypervisor and the Service VM's i915
 driver, can convert the Guest Physical Addresses (GPAs) into Host
 Physical Addresses (HPAs) before making a texture source or EGL image
 out of the frame buffer and then either post processing it further or
@@ -585,33 +585,33 @@ GGTT-Based Surface Sharing
 --------------------------

 One of the major automotive use case is called "surface sharing". This
-use case requires that the SOS accesses an individual surface or a set of
-surfaces from the UOS without having to access the entire frame buffer of
-the UOS. Unlike the previous two models, where the UOS did not have to do
-anything to show its content and therefore a completely unmodified UOS
-could continue to run, this model requires changes to the UOS.
+use case requires that the Service VM accesses an individual surface or a set of
+surfaces from the User VM without having to access the entire frame buffer of
+the User VM. Unlike the previous two models, where the User VM did not have to do
+anything to show its content and therefore a completely unmodified User VM
+could continue to run, this model requires changes to the User VM.

 This model can be considered an extension of the indirect display model.
-Under the indirect display model, the UOS's frame buffer was temporarily
+Under the indirect display model, the User VM's frame buffer was temporarily
 pinned by it in the video memory access through the Global graphics
 translation table. This GGTT-based surface sharing model takes this a
-step further by having a compositor of the UOS to temporarily pin all
+step further by having a compositor of the User VM to temporarily pin all
 application buffers into GGTT. It then also requires the compositor to
 create a metadata table with relevant surface information such as width,
 height, and GGTT offset, and flip that in lieu of the frame buffer.
-In the SOS, the proxy application knows that the GGTT offset has been
+In the Service VM, the proxy application knows that the GGTT offset has been
 flipped, maps it, and through it can access the GGTT offset of an
 application that it wants to access. It is worth mentioning that in this
-model, UOS applications did not require any changes, and only the
+model, User VM applications did not require any changes, and only the
 compositor, Mesa, and i915 driver had to be modified.

 This model has a major benefit and a major limitation. The
 benefit is that since it builds on top of the indirect display model,
-there are no special drivers necessary for it on either SOS or UOS.
+there are no special drivers necessary for it on either Service VM or User VM.
 Therefore, any Real Time Operating System (RTOS) that use
 this model can simply do so without having to implement a driver, the
 infrastructure for which may not be present in their operating system.
-The limitation of this model is that video memory dedicated for a UOS is
+The limitation of this model is that video memory dedicated for a User VM is
 generally limited to a couple of hundred MBs. This can easily be
 exhausted by a few application buffers so the number and size of buffers
 is limited. Since it is not a highly-scalable model, in general, Intel
@@ -634,24 +634,24 @@ able to share its pages with another driver within one domain.
 Applications buffers are backed by i915 Graphics Execution Manager
 Buffer Objects (GEM BOs). As in GGTT surface
 sharing, this model also requires compositor changes. The compositor of
-UOS requests i915 to export these application GEM BOs and then passes
+User VM requests i915 to export these application GEM BOs and then passes
 them on to a special driver called the Hyper DMA Buf exporter whose job
 is to create a scatter gather list of pages mapped by PDEs and PTEs and
 export a Hyper DMA Buf ID back to the compositor.

-The compositor then shares this Hyper DMA Buf ID with the SOS's Hyper DMA
+The compositor then shares this Hyper DMA Buf ID with the Service VM's Hyper DMA
 Buf importer driver which then maps the memory represented by this ID in
-the SOS. A proxy application in the SOS can then provide the ID of this driver
-to the SOS i915, which can create its own GEM BO. Finally, the application
+the Service VM. A proxy application in the Service VM can then provide the ID of this driver
+to the Service VM i915, which can create its own GEM BO. Finally, the application
 can use it as an EGL image and do any post processing required before
-either providing it to the SOS compositor or directly flipping it on a
+either providing it to the Service VM compositor or directly flipping it on a
 HW plane in the compositor's absence.

 This model is highly scalable and can be used to share up to 4 GB worth
 of pages. It is also not limited to only sharing graphics buffers. Other
 buffers for the IPU and others, can also be shared with it. However, it
-does require that the SOS port the Hyper DMA Buffer importer driver. Also,
-the SOS OS must comprehend and implement the DMA buffer sharing model.
+does require that the Service VM port the Hyper DMA Buffer importer driver. Also,
+the Service VM must comprehend and implement the DMA buffer sharing model.

 For detailed information about this model, please refer to the `Linux
 HYPER_DMABUF Driver High Level Design
@@ -669,13 +669,13 @@ Plane-Based Domain Ownership

 Plane-Based Domain Ownership

-Yet another mechanism for showing content of both the SOS and UOS on the
+Yet another mechanism for showing content of both the Service VM and User VM on the
 same physical display is called plane-based domain ownership. Under this
-model, both the SOS and UOS are provided a set of HW planes that they can
+model, both the Service VM and User VM are provided a set of HW planes that they can
 flip their contents on to. Since each domain provides its content, there
-is no need for any extra composition to be done through the SOS. The display
+is no need for any extra composition to be done through the Service VM. The display
 controller handles alpha blending contents of different domains on a
-single pipe. This saves on any complexity on either the SOS or the UOS
+single pipe. This saves on any complexity on either the Service VM or the User VM
 SW stack.

 It is important to provide only specific planes and have them statically

@@ -689,7 +689,7 @@ show the correct content on them. No other changes are necessary.
 While the biggest benefit of this model is that is extremely simple and
 quick to implement, it also has some drawbacks. First, since each domain
 is responsible for showing the content on the screen, there is no
-control of the UOS by the SOS. If the UOS is untrusted, this could
+control of the User VM by the Service VM. If the User VM is untrusted, this could
 potentially cause some unwanted content to be displayed. Also, there is
 no post processing capability, except that provided by the display
 controller (for example, scaling, rotation, and so on). So each domain
@@ -834,43 +834,43 @@ Different Schedulers and Their Roles

 In the system, there are three different schedulers for the GPU:

-- i915 UOS scheduler
+- i915 User VM scheduler
 - Mediator GVT scheduler
-- i915 SOS scheduler
+- i915 Service VM scheduler

-Since UOS always uses the host-based command submission (ELSP) model,
+Since User VM always uses the host-based command submission (ELSP) model,
 and it never accesses the GPU or the Graphic Micro Controller (GuC)
 directly, its scheduler cannot do any preemption by itself.
 The i915 scheduler does ensure batch buffers are
 submitted in dependency order, that is, if a compositor had to wait for
 an application buffer to finish before its workload can be submitted to
-the GPU, then the i915 scheduler of the UOS ensures that this happens.
+the GPU, then the i915 scheduler of the User VM ensures that this happens.

-The UOS assumes that by submitting its batch buffers to the Execlist
+The User VM assumes that by submitting its batch buffers to the Execlist
 Submission Port (ELSP), the GPU will start working on them. However,
 the MMIO write to the ELSP is captured by the Hypervisor, which forwards
 these requests to the GVT module. GVT then creates a shadow context
-based on this batch buffer and submits the shadow context to the SOS
+based on this batch buffer and submits the shadow context to the Service VM
 i915 driver.

 However, it is dependent on a second scheduler called the GVT
 scheduler. This scheduler is time based and uses a round robin algorithm
-to provide a specific time for each UOS to submit its workload when it
-is considered as a "render owner". The workload of the UOSs that are not
+to provide a specific time for each User VM to submit its workload when it
+is considered as a "render owner". The workload of the User VMs that are not
 render owners during a specific time period end up waiting in the
 virtual GPU context until the GVT scheduler makes them render owners.
 The GVT shadow context submits only one workload at
 a time, and once the workload is finished by the GPU, it copies any
 context state back to DomU and sends the appropriate interrupts before
-picking up any other workloads from either this UOS or another one. This
+picking up any other workloads from either this User VM or another one. This
 also implies that this scheduler does not do any preemption of
 workloads.

-Finally, there is the i915 scheduler in the SOS. This scheduler uses the
-GuC or ELSP to do command submission of SOS local content as well as any
-content that GVT is submitting to it on behalf of the UOSs. This
+Finally, there is the i915 scheduler in the Service VM. This scheduler uses the
+GuC or ELSP to do command submission of Service VM local content as well as any
+content that GVT is submitting to it on behalf of the User VMs. This
 scheduler uses GuC or ELSP to preempt workloads. GuC has four different
-priority queues, but the SOS i915 driver uses only two of them. One of
+priority queues, but the Service VM i915 driver uses only two of them. One of
 them is considered high priority and the other is normal priority with a
 GuC rule being that any command submitted on the high priority queue
 would immediately try to preempt any workload submitted on the normal
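To picture the time-based, round-robin "render owner" policy described above, here is a purely illustrative C sketch; the structure and function names are invented for the example and are not GVT code.

.. code-block:: c

   #include <stdbool.h>

   #define MAX_VGPU 4

   struct vgpu {
       bool active;   /* has a workload queued in its virtual GPU context */
       int  id;
   };

   /* Illustrative only: pick the next render owner in round-robin order.
    * vGPUs that are not the render owner keep their workloads queued in
    * their virtual GPU context until the scheduler rotates to them. */
   static int next_render_owner(struct vgpu vgpus[MAX_VGPU], int current)
   {
       for (int step = 1; step <= MAX_VGPU; step++) {
           int candidate = (current + step) % MAX_VGPU;
           if (vgpus[candidate].active)
               return candidate;
       }
       return current;   /* nothing else to run; keep the current owner */
   }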
@@ -893,8 +893,8 @@ preemption of lower-priority workload.

 Scheduling policies are customizable and left to customers to change if
 they are not satisfied with the built-in i915 driver policy, where all
-workloads of the SOS are considered higher priority than those of the
-UOS. This policy can be enforced through an SOS i915 kernel command line
+workloads of the Service VM are considered higher priority than those of the
+User VM. This policy can be enforced through an Service VM i915 kernel command line
 parameter, and can replace the default in-order command submission (no
 preemption) policy.
@@ -922,7 +922,7 @@ OS and an Android Guest OS.
 AcrnGT in kernel
 =================

-The AcrnGT module in the SOS kernel acts as an adaption layer to connect
+The AcrnGT module in the Service VM kernel acts as an adaption layer to connect
 between GVT-g in the i915, the VHM module, and the ACRN-DM user space
 application:

@@ -930,7 +930,7 @@ application:
 services to it, including set and unset trap areas, set and unset
 write-protection pages, etc.

-- It calls the VHM APIs provided by the ACRN VHM module in the SOS
+- It calls the VHM APIs provided by the ACRN VHM module in the Service VM
 kernel, to eventually call into the routines provided by ACRN
 hypervisor through hyper-calls.
@@ -3,8 +3,8 @@

 Device Model high-level design
 ##############################

-Hypervisor Device Model (DM) is a QEMU-like application in SOS
-responsible for creating a UOS VM and then performing devices emulation
+Hypervisor Device Model (DM) is a QEMU-like application in Service VM
+responsible for creating a User VM and then performing devices emulation
 based on command line configurations.

 .. figure:: images/dm-image75.png

@@ -14,18 +14,18 @@ based on command line configurations.
 Device Model Framework

 :numref:`dm-framework` above gives a big picture overview of DM
-framework. There are 3 major subsystems in SOS:
+framework. There are 3 major subsystems in Service VM:

 - **Device Emulation**: DM provides backend device emulation routines for
-frontend UOS device drivers. These routines register their I/O
+frontend User VM device drivers. These routines register their I/O
 handlers to the I/O dispatcher inside the DM. When the VHM
 assigns any I/O request to the DM, the I/O dispatcher
 dispatches this request to the corresponding device emulation
 routine to do the emulation.

-- I/O Path in SOS:
+- I/O Path in Service VM:

-- HV initializes an I/O request and notifies VHM driver in SOS
+- HV initializes an I/O request and notifies VHM driver in Service VM
 through upcall.
 - VHM driver dispatches I/O requests to I/O clients and notifies the
 clients (in this case the client is the DM which is notified
@@ -34,9 +34,9 @@ framework. There are 3 major subsystems in SOS:
 - I/O dispatcher notifies VHM driver the I/O request is completed
 through char device
 - VHM driver notifies HV on the completion through hypercall
-- DM injects VIRQ to UOS frontend device through hypercall
+- DM injects VIRQ to User VM frontend device through hypercall

-- VHM: Virtio and Hypervisor Service Module is a kernel module in SOS as a
+- VHM: Virtio and Hypervisor Service Module is a kernel module in Service VM as a
 middle layer to support DM. Refer to :ref:`virtio-APIs` for details

 This section introduces how the acrn-dm application is configured and
@@ -136,7 +136,7 @@ DM Initialization

 - **Option Parsing**: DM parse options from command line inputs.

-- **VM Create**: DM calls ioctl to SOS VHM, then SOS VHM makes
+- **VM Create**: DM calls ioctl to Service VM VHM, then Service VM VHM makes
 hypercalls to HV to create a VM, it returns a vmid for a
 dedicated VM.
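The ioctl path mentioned here can be sketched as below. The ``/dev/acrn_vhm`` node is named in this document; the ioctl request code and argument structure are placeholders, since the real definitions live in the VHM UAPI headers shipped with the Service VM kernel.

.. code-block:: c

   #include <fcntl.h>
   #include <stdint.h>
   #include <stdio.h>
   #include <sys/ioctl.h>
   #include <unistd.h>

   /* Placeholder definitions: the real request code and structure come
    * from the VHM ioctl header, not from this sketch. */
   #define IC_CREATE_VM_EXAMPLE  _IOWR('A', 0x10, struct create_vm_example)
   struct create_vm_example {
       uint16_t vmid;           /* filled in on success */
       uint8_t  reserved[62];
   };

   int main(void)
   {
       struct create_vm_example cv = { 0 };
       int fd = open("/dev/acrn_vhm", O_RDWR);

       if (fd < 0)
           return 1;
       if (ioctl(fd, IC_CREATE_VM_EXAMPLE, &cv) == 0)
           printf("created VM, vmid=%u\n", cv.vmid);
       close(fd);
       return 0;
   }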
@@ -147,8 +147,8 @@ DM Initialization
 with VHM and HV. Refer to :ref:`hld-io-emulation` and
 :ref:`IO-emulation-in-sos` for more details.

-- **Memory Setup**: UOS memory is allocated from SOS
-memory. This section of memory will use SOS hugetlbfs to allocate
+- **Memory Setup**: User VM memory is allocated from Service VM
+memory. This section of memory will use Service VM hugetlbfs to allocate
 linear continuous host physical address for guest memory. It will
 try to get the page size as big as possible to guarantee maximum
 utilization of TLB. It then invokes a hypercall to HV for its EPT
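For reference, a minimal sketch of reserving huge-page-backed guest memory from user space; acrn-dm's real implementation goes through hugetlbfs files, but the standard ``MAP_HUGETLB`` mapping below illustrates the same idea of using large pages to reduce TLB pressure.

.. code-block:: c

   #define _GNU_SOURCE
   #include <stdio.h>
   #include <string.h>
   #include <sys/mman.h>

   int main(void)
   {
       size_t len = 256UL << 20;   /* example: 256 MB of guest RAM */

       /* Ask the kernel for an anonymous huge-page mapping; this fails
        * with MAP_FAILED if no huge pages are reserved on the host. */
       void *gpa_base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
       if (gpa_base == MAP_FAILED) {
           perror("mmap");
           return 1;
       }
       memset(gpa_base, 0, len);   /* touch the memory so pages are populated */
       munmap(gpa_base, len);
       return 0;
   }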
@@ -175,7 +175,7 @@ DM Initialization
 according to acrn-dm command line configuration and derived from
 their default value.

-- **SW Load**: DM prepares UOS VM's SW configuration such as kernel,
+- **SW Load**: DM prepares User VM's SW configuration such as kernel,
 ramdisk, and zeropage, according to these memory locations:

 .. code-block:: c
@@ -186,7 +186,7 @@ DM Initialization
    #define ZEROPAGE_LOAD_OFF(ctx) (ctx->lowmem - 4*KB)
    #define KERNEL_LOAD_OFF(ctx)   (16*MB)

-For example, if the UOS memory is set as 800M size, then **SW Load**
+For example, if the User VM memory is set as 800M size, then **SW Load**
 will prepare its ramdisk (if there is) at 0x31c00000 (796M), bootargs at
 0x31ffe000 (800M - 8K), kernel entry at 0x31ffe800(800M - 6K) and zero
 page at 0x31fff000 (800M - 4K). The hypervisor will finally run VM based
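To make the 800M example concrete, the sketch below recomputes those addresses. ``ZEROPAGE_LOAD_OFF`` and ``KERNEL_LOAD_OFF`` are quoted from the excerpt above; the ramdisk and bootargs offsets are assumptions added only so the example reproduces the numbers the text quotes.

.. code-block:: c

   #include <stdio.h>

   #define KB (1024UL)
   #define MB (1024UL * 1024UL)

   struct ctx_example { unsigned long lowmem; };

   /* From the documentation excerpt above. */
   #define ZEROPAGE_LOAD_OFF(ctx)  ((ctx)->lowmem - 4*KB)
   #define KERNEL_LOAD_OFF(ctx)    (16*MB)
   /* Assumed for this example only, to reproduce the 796M / 800M-8K figures. */
   #define RAMDISK_LOAD_OFF(ctx)   ((ctx)->lowmem - 4*MB)
   #define BOOTARGS_LOAD_OFF(ctx)  ((ctx)->lowmem - 8*KB)

   int main(void)
   {
       struct ctx_example ctx = { .lowmem = 800 * MB };

       printf("ramdisk  at 0x%lx\n", RAMDISK_LOAD_OFF(&ctx));   /* 0x31c00000 */
       printf("bootargs at 0x%lx\n", BOOTARGS_LOAD_OFF(&ctx));  /* 0x31ffe000 */
       printf("zeropage at 0x%lx\n", ZEROPAGE_LOAD_OFF(&ctx));  /* 0x31fff000 */
       printf("kernel   at 0x%lx\n", KERNEL_LOAD_OFF(&ctx));    /* 16 MB */
       return 0;
   }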
@@ -277,8 +277,8 @@ VHM
 VHM overview
 ============

-Device Model manages UOS VM by accessing interfaces exported from VHM
-module. VHM module is an SOS kernel driver. The ``/dev/acrn_vhm`` node is
+Device Model manages User VM by accessing interfaces exported from VHM
+module. VHM module is an Service VM kernel driver. The ``/dev/acrn_vhm`` node is
 created when VHM module is initialized. Device Model follows the standard
 Linux char device API (ioctl) to access the functionality of VHM.
@@ -287,8 +287,8 @@ hypercall to the hypervisor. There are two exceptions:

 - I/O request client management is implemented in VHM.

-- For memory range management of UOS VM, VHM needs to save all memory
-range info of UOS VM. The subsequent memory mapping update of UOS VM
+- For memory range management of User VM, VHM needs to save all memory
+range info of User VM. The subsequent memory mapping update of User VM
 needs this information.

 .. figure:: images/dm-image108.png
@@ -306,10 +306,10 @@ VHM ioctl interfaces

 .. _IO-emulation-in-sos:

-I/O Emulation in SOS
-********************
+I/O Emulation in Service VM
+***************************

-I/O requests from the hypervisor are dispatched by VHM in the SOS kernel
+I/O requests from the hypervisor are dispatched by VHM in the Service VM kernel
 to a registered client, responsible for further processing the
 I/O access and notifying the hypervisor on its completion.
@@ -317,8 +317,8 @@ Initialization of Shared I/O Request Buffer
 ===========================================

 For each VM, there is a shared 4-KByte memory region used for I/O request
-communication between the hypervisor and SOS. Upon initialization
-of a VM, the DM (acrn-dm) in SOS userland first allocates a 4-KByte
+communication between the hypervisor and Service VM. Upon initialization
+of a VM, the DM (acrn-dm) in Service VM userland first allocates a 4-KByte
 page and passes the GPA of the buffer to HV via hypercall. The buffer is
 used as an array of 16 I/O request slots with each I/O request being
 256 bytes. This array is indexed by vCPU ID. Thus, each vCPU of the VM
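The buffer layout described above (one 4-KByte page, 16 slots of 256 bytes, indexed by vCPU ID) can be captured in a couple of type definitions. The field names here are illustrative, not the actual VHM structures.

.. code-block:: c

   #include <stdint.h>

   #define IO_REQUEST_SLOTS   16u
   #define IO_REQUEST_SIZE    256u

   /* Illustrative slot layout: one 256-byte entry per vCPU. */
   struct io_request_slot_example {
       uint32_t type;    /* PIO, MMIO, WP or PCI              */
       uint32_t state;   /* FREE/PENDING/PROCESSING/COMPLETE  */
       uint8_t  payload[IO_REQUEST_SIZE - 8];
   };

   /* The whole shared page: indexed by vCPU ID, so a vCPU can have at
    * most one request in flight at a time. */
   struct io_request_page_example {
       struct io_request_slot_example slot[IO_REQUEST_SLOTS];
   };

   _Static_assert(sizeof(struct io_request_slot_example) == IO_REQUEST_SIZE,
                  "slot must be 256 bytes");
   _Static_assert(sizeof(struct io_request_page_example) == 4096,
                  "request buffer must fit exactly one 4-KByte page");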
@@ -330,7 +330,7 @@ cannot issue multiple I/O requests at the same time.
 I/O Clients
 ===========

-An I/O client is either a SOS userland application or a SOS kernel space
+An I/O client is either a Service VM userland application or a Service VM kernel space
 module responsible for handling I/O access whose address
 falls in a certain range. Each VM has an array of registered I/O
 clients which are initialized with a fixed I/O address range, plus a PCI
@@ -389,14 +389,14 @@ Processing I/O Requests
    :align: center
    :name: io-sequence-sos

-I/O request handling sequence in SOS
+I/O request handling sequence in Service VM

 :numref:`io-sequence-sos` above illustrates the interactions among the
 hypervisor, VHM,
 and the device model for handling I/O requests. The main interactions
 are as follows:

-1. The hypervisor makes an upcall to SOS as an interrupt
+1. The hypervisor makes an upcall to Service VM as an interrupt
 handled by the upcall handler in VHM.

 2. The upcall handler schedules the execution of the I/O request
@@ -616,11 +616,11 @@ to destination emulated devices:

 .. code-block:: c

-   /* Generate one msi interrupt to UOS, the index parameter indicates
+   /* Generate one msi interrupt to User VM, the index parameter indicates
     * the msi number from its PCI msi capability. */
    void pci_generate_msi(struct pci_vdev *pi, int index);

-   /* Generate one msix interrupt to UOS, the index parameter indicates
+   /* Generate one msix interrupt to User VM, the index parameter indicates
     * the msix number from its PCI msix bar. */
    void pci_generate_msix(struct pci_vdev *pi, int index);
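A hedged usage sketch for the two interfaces quoted above: an emulated device finishing a transfer raises vector 0 toward the User VM. The device structure and completion function are example scaffolding meant to live inside the DM, not real acrn-dm code.

.. code-block:: c

   /* pci_generate_msi()/pci_generate_msix() are the interfaces quoted
    * above; everything else here is illustrative. */
   struct pci_vdev;                                  /* defined by the DM */
   void pci_generate_msi(struct pci_vdev *pi, int index);
   void pci_generate_msix(struct pci_vdev *pi, int index);

   struct my_vdev_example {
       struct pci_vdev *pci_dev;
       int use_msix;
   };

   static void my_vdev_complete_request(struct my_vdev_example *dev)
   {
       /* ... update the emulated status registers for the finished request ... */

       /* Tell the User VM front-end driver that the request is done. */
       if (dev->use_msix)
           pci_generate_msix(dev->pci_dev, 0);   /* first MSI-X table entry */
       else
           pci_generate_msi(dev->pci_dev, 0);    /* first MSI vector */
   }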
@@ -984,11 +984,11 @@ potentially error-prone.
 ACPI Emulation
 --------------

-An alternative ACPI resource abstraction option is for the SOS (SOS_VM) to
-own all devices and emulate a set of virtual devices for the UOS (POST_LAUNCHED_VM).
+An alternative ACPI resource abstraction option is for the Service VM to
+own all devices and emulate a set of virtual devices for the User VM (POST_LAUNCHED_VM).
 This is the most popular ACPI resource model for virtualization,
 as shown in the picture below. ACRN currently
-uses device emulation plus some device passthrough for UOS.
+uses device emulation plus some device passthrough for User VM.

 .. figure:: images/dm-image52.png
    :align: center
@@ -1001,11 +1001,11 @@ different components:
 - **Hypervisor** - ACPI is transparent to the Hypervisor, and has no knowledge
 of ACPI at all.

-- **SOS** - All ACPI resources are physically owned by SOS, and enumerates
+- **Service VM** - All ACPI resources are physically owned by Service VM, and enumerates
 all ACPI tables and devices.

-- **UOS** - Virtual ACPI resources, exposed by device model, are owned by
-UOS.
+- **User VM** - Virtual ACPI resources, exposed by device model, are owned by
+User VM.

 ACPI emulation code of device model is found in
 ``hw/platform/acpi/acpi.c``
@@ -1095,10 +1095,10 @@ basl_compile for each table. basl_compile does the following:
    basl_end(&io[0], &io[1]);
    }

-After handling each entry, virtual ACPI tables are present in UOS
+After handling each entry, virtual ACPI tables are present in User VM
 memory.

-For passthrough dev in UOS, we may need to add some ACPI description
+For passthrough dev in User VM, we may need to add some ACPI description
 in virtual DSDT table. There is one hook (passthrough_write_dsdt) in
 ``hw/pci/passthrough.c`` for this. The following source code, shows
 calls different functions to add different contents for each vendor and
@@ -1142,7 +1142,7 @@ device id:
    }

 For instance, write_dsdt_urt1 provides ACPI contents for Bluetooth
-UART device when passthroughed to UOS. It provides virtual PCI
+UART device when passthroughed to User VM. It provides virtual PCI
 device/function as _ADR. With other description, it could be used for
 Bluetooth UART enumeration.
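As a sketch of what such a per-device hook looks like, the example below emits a minimal virtual DSDT device entry with an _ADR for a hypothetical passthrough UART. It assumes the ``dsdt_line()``/``dsdt_indent()`` helpers used by acrn-dm's ACPI table code; the device name and hardware ID are made up.

.. code-block:: c

   /* Hypothetical DSDT hook for a passthrough device at virtual slot/func.
    * dsdt_line()/dsdt_indent()/dsdt_unindent() are assumed to be the ACPI
    * table helpers the DM already provides; only the content is new here. */
   void dsdt_line(const char *fmt, ...);
   void dsdt_indent(int levels);
   void dsdt_unindent(int levels);

   static void write_dsdt_example_uart(int slot, int func)
   {
       dsdt_line("Device (URTE)");
       dsdt_line("{");
       dsdt_indent(1);
       /* _ADR encodes the device in the high word, the function in the low word. */
       dsdt_line("Name (_ADR, 0x%04X%04X)", slot, func);
       dsdt_line("Name (_HID, \"EXMP0001\")");   /* placeholder hardware ID */
       dsdt_unindent(1);
       dsdt_line("}");
   }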
@@ -1174,19 +1174,19 @@ Bluetooth UART enumeration.

 PM in Device Model
 ******************

-PM module in Device Model emulate the UOS low power state transition.
+PM module in Device Model emulate the User VM low power state transition.

-Each time UOS writes an ACPI control register to initialize low power
+Each time User VM writes an ACPI control register to initialize low power
 state transition, the writing operation is trapped to DM as an I/O
 emulation request by the I/O emulation framework.

-To emulate UOS S5 entry, DM will destroy I/O request client, release
-allocated UOS memory, stop all created threads, destroy UOS VM, and exit
+To emulate User VM S5 entry, DM will destroy I/O request client, release
+allocated User VM memory, stop all created threads, destroy User VM, and exit
 DM. To emulate S5 exit, a fresh DM start by VM manager is used.

-To emulate UOS S3 entry, DM pauses the UOS VM, stops the UOS watchdog,
-and waits for a resume signal. When the UOS should exit from S3, DM will
-get a wakeup signal and reset the UOS VM to emulate the UOS exit from
+To emulate User VM S3 entry, DM pauses the User VM, stops the User VM watchdog,
+and waits for a resume signal. When the User VM should exit from S3, DM will
+get a wakeup signal and reset the User VM to emulate the User VM exit from
 S3.

 Pass-through in Device Model
@@ -13,21 +13,21 @@ Shared Buffer is a ring buffer divided into predetermined-size slots. There
 are two use scenarios of Sbuf:

 - sbuf can serve as a lockless ring buffer to share data from ACRN HV to
-SOS in non-overwritten mode. (Writing will fail if an overrun
+Service VM in non-overwritten mode. (Writing will fail if an overrun
 happens.)
 - sbuf can serve as a conventional ring buffer in hypervisor in
 over-written mode. A lock is required to synchronize access by the
 producer and consumer.

 Both ACRNTrace and ACRNLog use sbuf as a lockless ring buffer. The Sbuf
-is allocated by SOS and assigned to HV via a hypercall. To hold pointers
+is allocated by Service VM and assigned to HV via a hypercall. To hold pointers
 to sbuf passed down via hypercall, an array ``sbuf[ACRN_SBUF_ID_MAX]``
 is defined in per_cpu region of HV, with predefined sbuf id to identify
 the usage, such as ACRNTrace, ACRNLog, etc.

 For each physical CPU there is a dedicated Sbuf. Only a single producer
 is allowed to put data into that Sbuf in HV, and a single consumer is
-allowed to get data from Sbuf in SOS. Therefore, no lock is required to
+allowed to get data from Sbuf in Service VM. Therefore, no lock is required to
 synchronize access by the producer and consumer.

 sbuf APIs
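The single-producer/single-consumer, non-overwritten behavior described above is easy to capture in a small sketch. This is not the hypervisor's sbuf implementation, only an illustration of why no lock is needed when exactly one side writes ``tail`` and the other writes ``head``.

.. code-block:: c

   #include <stdint.h>
   #include <string.h>

   #define SBUF_ELEM_SIZE  32u
   #define SBUF_ELEM_NUM   64u

   /* Illustrative shared-buffer header: the HV producer is the only
    * writer of 'tail', the Service VM consumer the only writer of 'head'. */
   struct sbuf_example {
       volatile uint32_t head;   /* next element to read  (consumer) */
       volatile uint32_t tail;   /* next element to write (producer) */
       uint8_t data[SBUF_ELEM_NUM][SBUF_ELEM_SIZE];
   };

   /* Non-overwritten mode: fail (return 0) instead of overrunning the reader. */
   static int sbuf_put_example(struct sbuf_example *sb, const void *elem)
   {
       uint32_t next = (sb->tail + 1) % SBUF_ELEM_NUM;

       if (next == sb->head)
           return 0;                      /* full: writing fails */
       memcpy(sb->data[sb->tail], elem, SBUF_ELEM_SIZE);
       sb->tail = next;
       return 1;
   }

   static int sbuf_get_example(struct sbuf_example *sb, void *elem)
   {
       if (sb->head == sb->tail)
           return 0;                      /* empty */
       memcpy(elem, sb->data[sb->head], SBUF_ELEM_SIZE);
       sb->head = (sb->head + 1) % SBUF_ELEM_NUM;
       return 1;
   }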
@@ -39,7 +39,7 @@ The sbuf APIs are defined in ``hypervisor/include/debug/sbuf.h``
 ACRN Trace
 **********

-ACRNTrace is a tool running on the Service OS (SOS) to capture trace
+ACRNTrace is a tool running on the Service VM to capture trace
 data. It allows developers to add performance profiling trace points at
 key locations to get a picture of what is going on inside the
 hypervisor. Scripts to analyze the collected trace data are also

@@ -52,8 +52,8 @@ up:
 - **ACRNTrace userland app**: Userland application collecting trace data to
 files (Per Physical CPU)

-- **SOS Trace Module**: allocates/frees SBufs, creates device for each
-SBuf, sets up sbuf shared between SOS and HV, and provides a dev node for the
+- **Service VM Trace Module**: allocates/frees SBufs, creates device for each
+SBuf, sets up sbuf shared between Service VM and HV, and provides a dev node for the
 userland app to retrieve trace data from Sbuf

 - **Trace APIs**: provide APIs to generate trace event and insert to Sbuf.
@@ -71,18 +71,18 @@ See ``hypervisor/include/debug/trace.h``
 for trace_entry struct and function APIs.

-SOS Trace Module
-================
+Service VM Trace Module
+=======================

-The SOS trace module is responsible for:
+The Service VM trace module is responsible for:

-- allocating sbuf in sos memory range for each physical CPU, and assign
+- allocating sbuf in Service VM memory range for each physical CPU, and assign
 the gpa of Sbuf to ``per_cpu sbuf[ACRN_TRACE]``
 - create a misc device for each physical CPU
 - provide mmap operation to map entire Sbuf to userspace for high
 flexible and efficient access.

-On SOS shutdown, the trace module is responsible to remove misc devices, free
+On Service VM shutdown, the trace module is responsible to remove misc devices, free
 SBufs, and set ``per_cpu sbuf[ACRN_TRACE]`` to null.

 ACRNTrace Application
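From the userland side, retrieving trace data then amounts to opening the per-CPU device node and mapping the Sbuf; a hedged sketch follows. The device path and mapping size are placeholders, since both are decided by the trace module.

.. code-block:: c

   #include <fcntl.h>
   #include <stdio.h>
   #include <sys/mman.h>
   #include <unistd.h>

   int main(void)
   {
       /* Placeholder path: one device node per physical CPU is created by
        * the Service VM trace module; the real name may differ. */
       const char *node = "/dev/acrn_trace_cpu0";
       size_t sbuf_size = 4096 * 4;    /* example size only */
       int fd = open(node, O_RDONLY);

       if (fd < 0) {
           perror("open");
           return 1;
       }
       void *sbuf = mmap(NULL, sbuf_size, PROT_READ, MAP_SHARED, fd, 0);
       if (sbuf == MAP_FAILED) {
           perror("mmap");
           close(fd);
           return 1;
       }
       /* ... parse trace_entry records out of the mapped Sbuf ... */
       munmap(sbuf, sbuf_size);
       close(fd);
       return 0;
   }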
@@ -98,7 +98,7 @@ readable text, and do analysis.
 With a debug build, trace components are initialized at boot
 time. After initialization, HV writes trace event date into sbuf
 until sbuf is full, which can happen easily if the ACRNTrace app is not
-consuming trace data from Sbuf on SOS user space.
+consuming trace data from Sbuf on Service VM user space.

 Once ACRNTrace is launched, for each physical CPU a consumer thread is
 created to periodically read RAW trace data from sbuf and write to a

@@ -122,7 +122,7 @@ ACRN Log
 ********

 acrnlog is a tool used to capture ACRN hypervisor log to files on
-SOS filesystem. It can run as an SOS service at boot, capturing two
+Service VM filesystem. It can run as an Service VM service at boot, capturing two
 kinds of logs:

 - Current runtime logs;
@@ -137,9 +137,9 @@ up:

 - **ACRN Log app**: Userland application collecting hypervisor log to
 files;
-- **SOS ACRN Log Module**: constructs/frees SBufs at reserved memory
+- **Service VM ACRN Log Module**: constructs/frees SBufs at reserved memory
 area, creates dev for current/last logs, sets up sbuf shared between
-SOS and HV, and provides a dev node for the userland app to
+Service VM and HV, and provides a dev node for the userland app to
 retrieve logs
 - **ACRN log support in HV**: put logs at specified loglevel to Sbuf.

@@ -157,7 +157,7 @@ system:

 - log messages with severity level higher than a specified value will
 be put into Sbuf when calling logmsg in hypervisor
-- allocate sbuf to accommodate early hypervisor logs before SOS
+- allocate sbuf to accommodate early hypervisor logs before Service VM
 can allocate and set up sbuf

 There are 6 different loglevels, as shown below. The specified
@@ -181,17 +181,17 @@ of a single log message is 320 bytes. Log messages with a length between
 80 and 320 bytes will be separated into multiple sbuf elements. Log
 messages with length larger then 320 will be truncated.

-For security, SOS allocates sbuf in its memory range and assigns it to
+For security, Service VM allocates sbuf in its memory range and assigns it to
 the hypervisor.

-SOS ACRN Log Module
-===================
+Service VM ACRN Log Module
+==========================

 ACRNLog module provides one kernel option `hvlog=$size@$pbase` to configure
 the size and base address of hypervisor log buffer. This space will be further divided
 into two buffers with equal size: last log buffer and current log buffer.

-On SOS boot, SOS acrnlog module is responsible to:
+On Service VM boot, Service VM acrnlog module is responsible to:

 - examine if there are log messages remaining from last crashed
 run by checking the magic number of each sbuf
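To make the ``hvlog=$size@$pbase`` layout concrete: the reserved region is split into two buffers of ``size/2`` each, and the module tells them apart by the magic numbers mentioned in the following hunk. The sketch below is illustrative; only the two constants and the equal split come from the text, and the assumption that the magic value is the first 64-bit word of each sbuf is the sketch's own.

.. code-block:: c

   #include <stdint.h>

   #define HVLOG_MAGIC_CURRENT  0x5aa57aa71aa13aa3ULL
   #define HVLOG_MAGIC_LAST     0x5aa57aa71aa13aa2ULL

   enum hvlog_buf_kind { HVLOG_BUF_CURRENT, HVLOG_BUF_LAST, HVLOG_BUF_EMPTY };

   /* Illustrative only: classify one of the two size/2 halves by the magic
    * value assumed to sit at the start of its sbuf header. */
   static enum hvlog_buf_kind hvlog_classify(const volatile uint64_t *sbuf_magic)
   {
       switch (*sbuf_magic) {
       case HVLOG_MAGIC_CURRENT:
           return HVLOG_BUF_CURRENT;
       case HVLOG_MAGIC_LAST:
           return HVLOG_BUF_LAST;
       default:
           return HVLOG_BUF_EMPTY;   /* no log left from a previous run */
       }
   }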
@@ -211,7 +211,7 @@ current sbuf with magic number ``0x5aa57aa71aa13aa3``, and changes the
 magic number of last sbuf to ``0x5aa57aa71aa13aa2``, to distinguish which is
 the current/last.

-On SOS shutdown, the module is responsible to remove misc devices,
+On Service VM shutdown, the module is responsible to remove misc devices,
 free SBufs, and set ``per_cpu sbuf[ACRN_TRACE]`` to null.

 ACRN Log Application
@@ -30,7 +30,7 @@ service (VBS) APIs, and virtqueue (VQ) APIs, as shown in
 - **DM APIs** are exported by the DM, and are mainly used during the
 device initialization phase and runtime. The DM APIs also include
 PCIe emulation APIs because each virtio device is a PCIe device in
-the SOS and UOS.
+the Service VM and User VM.
 - **VBS APIs** are mainly exported by the VBS and related modules.
 Generally they are callbacks to be
 registered into the DM.

@@ -366,7 +366,7 @@ The workflow can be summarized as:
 irqfd.
 2. pass ioeventfd to vhost kernel driver.
 3. pass ioevent fd to vhm driver
-4. UOS FE driver triggers ioreq and forwarded to SOS by hypervisor
+4. User VM FE driver triggers ioreq and forwarded to Service VM by hypervisor
 5. ioreq is dispatched by vhm driver to related vhm client.
 6. ioeventfd vhm client traverse the io_range list and find
 corresponding eventfd.
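The ioeventfd/irqfd plumbing in this workflow is built on the ordinary Linux ``eventfd`` primitive: one side writes a counter value to signal, the other side reads it. A minimal standalone illustration (not ACRN code) follows.

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>
   #include <sys/eventfd.h>
   #include <unistd.h>

   int main(void)
   {
       int efd = eventfd(0, 0);
       uint64_t val = 1;

       if (efd < 0) {
           perror("eventfd");
           return 1;
       }
       /* Signal side (e.g. the vhm client that resolved a trapped write). */
       (void)write(efd, &val, sizeof(val));

       /* Wait side (e.g. a vhost worker thread): blocks until signaled. */
       uint64_t out = 0;
       (void)read(efd, &out, sizeof(out));
       printf("eventfd delivered count %llu\n", (unsigned long long)out);

       close(efd);
       return 0;
   }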
@@ -396,7 +396,7 @@ The workflow can be summarized as:
 5. irqfd related logic traverses the irqfd list to retrieve related irq
 information.
 6. irqfd related logic inject an interrupt through vhm interrupt API.
-7. interrupt is delivered to UOS FE driver through hypervisor.
+7. interrupt is delivered to User VM FE driver through hypervisor.

 .. _virtio-APIs:

@@ -542,7 +542,7 @@ VBS APIs
 ========

 The VBS APIs are exported by VBS related modules, including VBS, DM, and
-SOS kernel modules. They can be classified into VBS-U and VBS-K APIs
+Service VM kernel modules. They can be classified into VBS-U and VBS-K APIs
 listed as follows.

 VBS-U APIs
@@ -30,7 +30,7 @@ is active:

 .. note:: The console is only available in the debug version of the hypervisor,
 configured at compile time. In the release version, the console is
-disabled and the physical UART is not used by the hypervisor or SOS.
+disabled and the physical UART is not used by the hypervisor or Service VM.

 Hypervisor shell
 ****************

@@ -45,8 +45,8 @@ Virtual UART

 Currently UART 16550 is owned by the hypervisor itself and used for
 debugging purposes. Properties are configured by hypervisor command
-line. Hypervisor emulates a UART device with 0x3F8 address to SOS that
-acts as the console of SOS with these features:
+line. Hypervisor emulates a UART device with 0x3F8 address to Service VM that
+acts as the console of Service VM with these features:

 - The vUART is exposed via I/O port 0x3f8.
 - Incorporate a 256-byte RX buffer and 65536 TX buffer.
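Because the vUART is a legacy-style 16550 at I/O port 0x3f8, a Service VM user-space test can poke it with the standard x86 port-I/O helpers. The register offsets below are the architectural 16550 ones; whether a particular setup permits raw port access from user space (``ioperm``) is an assumption of this sketch.

.. code-block:: c

   /* x86 Linux only: poll the emulated 16550 at 0x3f8 from user space. */
   #include <stdio.h>
   #include <sys/io.h>

   #define UART_BASE 0x3f8
   #define UART_RBR  (UART_BASE + 0)   /* receive buffer register   */
   #define UART_THR  (UART_BASE + 0)   /* transmit holding register */
   #define UART_LSR  (UART_BASE + 5)   /* line status register      */
   #define LSR_DR    0x01              /* data ready                */
   #define LSR_THRE  0x20              /* transmitter holding empty */

   int main(void)
   {
       if (ioperm(UART_BASE, 8, 1) != 0) {   /* needs CAP_SYS_RAWIO */
           perror("ioperm");
           return 1;
       }
       while (!(inb(UART_LSR) & LSR_THRE))
           ;                                  /* wait until we may transmit */
       outb('A', UART_THR);                   /* write one character to the console */

       if (inb(UART_LSR) & LSR_DR)            /* anything typed on the console? */
           printf("got 0x%02x\n", inb(UART_RBR));
       return 0;
   }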
@@ -85,8 +85,8 @@ The workflows are described as follows:
 - Characters are read from this sbuf and put to rxFIFO,
 triggered by vuart_console_rx_chars

-- A virtual interrupt is sent to SOS, triggered by a read from
-SOS. Characters in rxFIFO are sent to SOS by emulation of
+- A virtual interrupt is sent to Service VM, triggered by a read from
+Service VM. Characters in rxFIFO are sent to Service VM by emulation of
 read of register UART16550_RBR

 - TX flow:
@@ -79,7 +79,7 @@ physical CPUs are initially assigned to the Service VM by creating the same
 number of virtual CPUs.

 When the Service VM boot is finished, it releases the physical CPUs intended
-for UOS use.
+for User VM use.

 Here is an example flow of CPU allocation on a multi-core platform.

@@ -93,18 +93,18 @@ Here is an example flow of CPU allocation on a multi-core platform.
 CPU management in the Service VM under flexing CPU sharing
 ==========================================================

-As all Service VM CPUs could share with different UOSs, ACRN can still pass-thru
+As all Service VM CPUs could share with different User VMs, ACRN can still pass-thru
 MADT to Service VM, and the Service VM is still able to see all physical CPUs.

 But as under CPU sharing, the Service VM does not need offline/release the physical
-CPUs intended for UOS use.
+CPUs intended for User VM use.

-CPU management in UOS
-=====================
+CPU management in User VM
+=========================

-From the UOS point of view, CPU management is very simple - when DM does
+From the User VM point of view, CPU management is very simple - when DM does
 hypercalls to create VMs, the hypervisor will create its virtual CPUs
-based on the configuration in this UOS VM's ``vm config``.
+based on the configuration in this User VM's ``vm config``.

 As mentioned in previous description, ``vcpu_affinity`` in ``vm config``
 tells which physical CPUs a VM's VCPU will use, and the scheduler policy
@@ -571,7 +571,7 @@ For a guest vCPU's state initialization:
 SW load based on different boot mode

-- UOS BSP: DM context initialization through hypercall
+- User VM BSP: DM context initialization through hypercall

 - If it's AP, then it will always start from real mode, and the start
 vector will always come from vlapic INIT-SIPI emulation.
@@ -1103,7 +1103,7 @@ APIs to register its IO/MMIO range:
 for a hypervisor emulated device needs to first set its corresponding
 I/O bitmap to 1.

-- For UOS, the default I/O bitmap are all set to 1, which means UOS will trap
+- For User VM, the default I/O bitmap are all set to 1, which means User VM will trap
 all I/O port access by default. Adding an I/O handler for a
 hypervisor emulated device does not need change its I/O bitmap.
 If the trapped I/O port access does not fall into a hypervisor

@@ -1115,7 +1115,7 @@ APIs to register its IO/MMIO range:
 default. Adding a MMIO handler for a hypervisor emulated
 device needs to first remove its MMIO range from EPT mapping.

-- For UOS, EPT only maps its system RAM to the UOS, which means UOS will
+- For User VM, EPT only maps its system RAM to the User VM, which means User VM will
 trap all MMIO access by default. Adding a MMIO handler for a
 hypervisor emulated device does not need to change its EPT mapping.
 If the trapped MMIO access does not fall into a hypervisor
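The "I/O bitmap all set to 1" default can be made concrete with a little arithmetic: VMX uses two 4-KByte bitmap pages covering ports 0x0000-0xFFFF, one bit per port, and a clear bit lets the guest touch that port without a VM exit. The helper below only shows the indexing; it is not the hypervisor's actual API.

.. code-block:: c

   #include <stdint.h>

   #define IO_BITMAP_BYTES  (2 * 4096u)   /* two 4-KByte pages, ports 0x0000-0xFFFF */

   /* Illustrative helpers: one bit per I/O port; a set bit means "trap". */
   static inline void io_bitmap_trap_port(uint8_t bitmap[IO_BITMAP_BYTES],
                                          uint16_t port, int trap)
   {
       uint32_t byte = port >> 3;
       uint8_t  mask = (uint8_t)(1u << (port & 7u));

       if (trap)
           bitmap[byte] |= mask;     /* default for a User VM: every bit set */
       else
           bitmap[byte] &= (uint8_t)~mask;
   }

   static inline int io_bitmap_is_trapped(const uint8_t bitmap[IO_BITMAP_BYTES],
                                          uint16_t port)
   {
       return (bitmap[port >> 3] >> (port & 7u)) & 1u;
   }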
@@ -229,15 +229,15 @@ hypervisor before it configures the PCI configuration space to enable an
 MSI. The hypervisor takes this opportunity to set up a remapping for the
 given MSI or MSIX before it is actually enabled by Service VM.

-When the UOS needs to access the physical device by passthrough, it uses
+When the User VM needs to access the physical device by passthrough, it uses
 the following steps:

-- UOS gets a virtual interrupt
+- User VM gets a virtual interrupt
 - VM exit happens and the trapped vCPU is the target where the interrupt
 will be injected.
 - Hypervisor will handle the interrupt and translate the vector
 according to ptirq_remapping_info.
-- Hypervisor delivers the interrupt to UOS.
+- Hypervisor delivers the interrupt to User VM.

 When the Service VM needs to use the physical device, the passthrough is also
 active because the Service VM is the first VM. The detail steps are:

@@ -258,7 +258,7 @@ ACPI virtualization is designed in ACRN with these assumptions:

 - HV has no knowledge of ACPI,
 - Service VM owns all physical ACPI resources,
-- UOS sees virtual ACPI resources emulated by device model.
+- User VM sees virtual ACPI resources emulated by device model.

 Some passthrough devices require physical ACPI table entry for
 initialization. The device model will create such device entry based on
@@ -63,7 +63,9 @@ to support this. The ACRN hypervisor also initializes all the interrupt
 related modules like IDT, PIC, IOAPIC, and LAPIC.

 HV does not own any host devices (except UART). All devices are by
-default assigned to the Service VM. Any interrupts received by Guest VM (Service VM or User VM) device drivers are virtual interrupts injected by HV (via vLAPIC).
+default assigned to the Service VM. Any interrupts received by VM
+(Service VM or User VM) device drivers are virtual interrupts injected
+by HV (via vLAPIC).
 HV manages a Host-to-Guest mapping. When a native IRQ/interrupt occurs,
 HV decides whether this IRQ/interrupt should be forwarded to a VM and
 which VM to forward to (if any). Refer to

@@ -357,15 +359,15 @@ IPI vector 0xF3 upcall. The virtual interrupt injection uses IPI vector 0xF0.
 0xF3 upcall
 A Guest vCPU VM Exit exits due to EPT violation or IO instruction trap.
 It requires Device Module to emulate the MMIO/PortIO instruction.
-However it could be that the Service OS (SOS) vCPU0 is still in non-root
+However it could be that the Service VM vCPU0 is still in non-root
 mode. So an IPI (0xF3 upcall vector) should be sent to the physical CPU0
-(with non-root mode as vCPU0 inside SOS) to force vCPU0 to VM Exit due
+(with non-root mode as vCPU0 inside the Service VM) to force vCPU0 to VM Exit due
 to the external interrupt. The virtual upcall vector is then injected to
-SOS, and the vCPU0 inside SOS then will pick up the IO request and do
+the Service VM, and the vCPU0 inside the Service VM then will pick up the IO request and do
 emulation for other Guest.

 0xF0 IPI flow
-If Device Module inside SOS needs to inject an interrupt to other Guest
+If Device Module inside the Service VM needs to inject an interrupt to other Guest
 such as vCPU1, it will issue an IPI first to kick CPU1 (assuming CPU1 is
 running on vCPU1) to root-hv_interrupt-data-apmode. CPU1 will inject the
 interrupt before VM Enter.
@@ -4,7 +4,7 @@ I/O Emulation high-level design
 ###############################

 As discussed in :ref:`intro-io-emulation`, there are multiple ways and
-places to handle I/O emulation, including HV, SOS Kernel VHM, and SOS
+places to handle I/O emulation, including HV, Service VM Kernel VHM, and Service VM
 user-land device model (acrn-dm).

 I/O emulation in the hypervisor provides these functionalities:

@@ -12,7 +12,7 @@ I/O emulation in the hypervisor provides these functionalities:
 - Maintain lists of port I/O or MMIO handlers in the hypervisor for
 emulating trapped I/O accesses in a certain range.

-- Forward I/O accesses to SOS when they cannot be handled by the
+- Forward I/O accesses to Service VM when they cannot be handled by the
 hypervisor by any registered handlers.

 :numref:`io-control-flow` illustrates the main control flow steps of I/O emulation

@@ -26,7 +26,7 @@ inside the hypervisor:
 access, or ignore the access if the access crosses the boundary.

 3. If the range of the I/O access does not overlap the range of any I/O
-handler, deliver an I/O request to SOS.
+handler, deliver an I/O request to Service VM.

 .. figure:: images/ioem-image101.png
    :align: center
@@ -92,16 +92,16 @@ following cases exist:
 - Otherwise it is implied that the access crosses the boundary of
 multiple devices which the hypervisor does not emulate. Thus
 no handler is called and no I/O request will be delivered to
-SOS. I/O reads get all 1's and I/O writes are dropped.
+Service VM. I/O reads get all 1's and I/O writes are dropped.

 - If the range of the I/O access does not overlap with any range of the
-handlers, the I/O access is delivered to SOS as an I/O request
+handlers, the I/O access is delivered to Service VM as an I/O request
 for further processing.

 I/O Requests
 ************

-An I/O request is delivered to SOS vCPU 0 if the hypervisor does not
+An I/O request is delivered to Service VM vCPU 0 if the hypervisor does not
 find any handler that overlaps the range of a trapped I/O access. This
 section describes the initialization of the I/O request mechanism and
 how an I/O access is emulated via I/O requests in the hypervisor.
@@ -109,11 +109,11 @@ how an I/O access is emulated via I/O requests in the hypervisor.
 Initialization
 ==============

-For each UOS the hypervisor shares a page with SOS to exchange I/O
+For each User VM the hypervisor shares a page with Service VM to exchange I/O
 requests. The 4-KByte page consists of 16 256-Byte slots, indexed by
 vCPU ID. It is required for the DM to allocate and set up the request
-buffer on VM creation, otherwise I/O accesses from UOS cannot be
-emulated by SOS, and all I/O accesses not handled by the I/O handlers in
+buffer on VM creation, otherwise I/O accesses from User VM cannot be
+emulated by Service VM, and all I/O accesses not handled by the I/O handlers in
 the hypervisor will be dropped (reads get all 1's).

 Refer to the following sections for details on I/O requests and the

@@ -145,7 +145,7 @@ There are four types of I/O requests:

 For port I/O accesses, the hypervisor will always deliver an I/O request
-of type PIO to SOS. For MMIO accesses, the hypervisor will deliver an
+of type PIO to Service VM. For MMIO accesses, the hypervisor will deliver an
 I/O request of either MMIO or WP, depending on the mapping of the
 accessed address (in GPA) in the EPT of the vCPU. The hypervisor will
 never deliver any I/O request of type PCI, but will handle such I/O
@@ -170,11 +170,11 @@ The four states are:

 FREE
 The I/O request slot is not used and new I/O requests can be
-delivered. This is the initial state on UOS creation.
+delivered. This is the initial state on User VM creation.

 PENDING
 The I/O request slot is occupied with an I/O request pending
-to be processed by SOS.
+to be processed by Service VM.

 PROCESSING
 The I/O request has been dispatched to a client but the

@@ -185,19 +185,19 @@ COMPLETE
 has not consumed the results yet.

 The contents of an I/O request slot are owned by the hypervisor when the
-state of an I/O request slot is FREE or COMPLETE. In such cases SOS can
+state of an I/O request slot is FREE or COMPLETE. In such cases Service VM can
 only access the state of that slot. Similarly the contents are owned by
-SOS when the state is PENDING or PROCESSING, when the hypervisor can
+Service VM when the state is PENDING or PROCESSING, when the hypervisor can
 only access the state of that slot.

 The states are transferred as follow:

 1. To deliver an I/O request, the hypervisor takes the slot
 corresponding to the vCPU triggering the I/O access, fills the
-contents, changes the state to PENDING and notifies SOS via
+contents, changes the state to PENDING and notifies Service VM via
 upcall.

-2. On upcalls, SOS dispatches each I/O request in the PENDING state to
+2. On upcalls, Service VM dispatches each I/O request in the PENDING state to
 clients and changes the state to PROCESSING.

 3. The client assigned an I/O request changes the state to COMPLETE

@@ -211,7 +211,7 @@ The states are transferred as follow:
 States are accessed using atomic operations to avoid getting unexpected
 states on one core when it is written on another.

-Note that there is no state to represent a 'failed' I/O request. SOS
+Note that there is no state to represent a 'failed' I/O request. Service VM
 should return all 1's for reads and ignore writes whenever it cannot
 handle the I/O request, and change the state of the request to COMPLETE.
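The slot lifecycle described above amounts to a small state machine; the sketch below encodes the legal transitions and who performs them. Enum and function names are illustrative, not the hypervisor's definitions.

.. code-block:: c

   /* Illustrative encoding of the I/O request slot lifecycle described above. */
   enum ioreq_state_example {
       IOREQ_FREE,        /* owned by the hypervisor; slot available          */
       IOREQ_PENDING,     /* filled by the hypervisor; waiting for Service VM */
       IOREQ_PROCESSING,  /* dispatched by the Service VM to an I/O client    */
       IOREQ_COMPLETE     /* client finished; hypervisor does post-work       */
   };

   static int ioreq_transition_ok_example(enum ioreq_state_example from,
                                          enum ioreq_state_example to)
   {
       switch (from) {
       case IOREQ_FREE:       return to == IOREQ_PENDING;    /* hypervisor delivers   */
       case IOREQ_PENDING:    return to == IOREQ_PROCESSING; /* Service VM dispatches */
       case IOREQ_PROCESSING: return to == IOREQ_COMPLETE;   /* client finishes       */
       case IOREQ_COMPLETE:   return to == IOREQ_FREE;       /* post-work on the vCPU */
       }
       return 0;
   }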
@@ -224,7 +224,7 @@ hypervisor re-enters the vCPU thread every time a vCPU is scheduled back
|
||||
in, rather than switching to where the vCPU is scheduled out. As a result,
|
||||
post-work is introduced for this purpose.
|
||||
|
||||
The hypervisor pauses a vCPU before an I/O request is delivered to SOS.
|
||||
The hypervisor pauses a vCPU before an I/O request is delivered to Service VM.
|
||||
Once the I/O request emulation is completed, a client notifies the
|
||||
hypervisor by a hypercall. The hypervisor will pick up that request, do
|
||||
the post-work, and resume the guest vCPU. The post-work takes care of
|
||||
@@ -236,9 +236,9 @@ updating the vCPU guest state to reflect the effect of the I/O reads.
|
||||
Workflow of MMIO I/O request completion
|
||||
|
||||
The figure above illustrates the workflow to complete an I/O
|
||||
request for MMIO. Once the I/O request is completed, SOS makes a
|
||||
hypercall to notify the hypervisor which resumes the UOS vCPU triggering
|
||||
the access after requesting post-work on that vCPU. After the UOS vCPU
|
||||
request for MMIO. Once the I/O request is completed, Service VM makes a
|
||||
hypercall to notify the hypervisor which resumes the User VM vCPU triggering
|
||||
the access after requesting post-work on that vCPU. After the User VM vCPU
|
||||
resumes, it does the post-work first to update the guest registers if
|
||||
the access reads an address, changes the state of the corresponding I/O
|
||||
request slot to FREE, and continues execution of the vCPU.
|
||||
@@ -255,7 +255,7 @@ similar to the MMIO case, except the post-work is done before resuming
|
||||
the vCPU. This is because the post-work for port I/O reads needs to update
|
||||
the general register eax of the vCPU, while the post-work for MMIO reads
|
||||
need further emulation of the trapped instruction. This is much more
|
||||
complex and may impact the performance of SOS.
|
||||
complex and may impact the performance of the Service VM.
|
||||
|
||||
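
As a rough sketch of that port I/O read post-work (the helper names below are assumptions for illustration, not the actual hypervisor API), the emulated value only has to be merged into the guest's (R)AX before the vCPU resumes:

.. code-block:: c

   /* Illustrative sketch: merge an emulated port I/O read result into
    * the guest's (R)AX. Accessor names are assumptions. */
   static void pio_read_post_work(struct vcpu *vcpu, uint64_t value, size_t size)
   {
       uint64_t rax = guest_get_rax(vcpu);        /* hypothetical accessor */

       if (size == 4U) {
           rax = value & 0xFFFFFFFFUL;            /* 32-bit loads zero-extend */
       } else {
           uint64_t mask = (1UL << (size * 8U)) - 1UL;
           rax = (rax & ~mask) | (value & mask);
       }
       guest_set_rax(vcpu, rax);                  /* hypothetical accessor */
   }
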
.. _io-structs-interfaces:
|
||||
|
||||
|
@@ -106,7 +106,7 @@ Virtualization architecture
|
||||
---------------------------
|
||||
|
||||
In the virtualization architecture, the IOC Device Model (DM) is
|
||||
responsible for communication between the UOS and IOC firmware. The IOC
|
||||
responsible for communication between the User VM and IOC firmware. The IOC
|
||||
DM communicates with several native CBC char devices and a PTY device.
|
||||
The native CBC char devices only include ``/dev/cbc-lifecycle``,
|
||||
``/dev/cbc-signals``, and ``/dev/cbc-raw0`` - ``/dev/cbc-raw11``. Others
|
||||
@@ -133,7 +133,7 @@ There are five parts in this high-level design:
|
||||
* Power management involves boot/resume/suspend/shutdown flows
|
||||
* Emulated CBC commands introduces some command work flows
|
||||
|
||||
IOC mediator has three threads to transfer data between UOS and SOS. The
|
||||
IOC mediator has three threads to transfer data between User VM and Service VM. The
|
||||
core thread is responsible for data reception, and Tx and Rx threads are
|
||||
used for data transmission. Each of the transmission threads has one
|
||||
data queue as a buffer, so that the IOC mediator can read data from CBC
|
||||
@@ -154,7 +154,7 @@ char devices and UART DM immediately.
|
||||
data comes from a raw channel, the data will be passed forward. Before
|
||||
transmitting to the virtual UART interface, all data needs to be
|
||||
packed with an address header and link header.
|
||||
- For Rx direction, the data comes from the UOS. The IOC mediator receives link
|
||||
- For Rx direction, the data comes from the User VM. The IOC mediator receives link
|
||||
data from the virtual UART interface. The data will be unpacked by Core
|
||||
thread, and then forwarded to Rx queue, similar to how the Tx direction flow
|
||||
is done except that the heartbeat and RTC are only used by the IOC
|
||||
@@ -176,10 +176,10 @@ IOC mediator has four states and five events for state transfer.
|
||||
IOC Mediator - State Transfer
|
||||
|
||||
- **INIT state**: This state is the initialized state of the IOC mediator.
|
||||
All CBC protocol packets are handled normally. In this state, the UOS
|
||||
All CBC protocol packets are handled normally. In this state, the User VM
|
||||
has not yet sent an active heartbeat.
|
||||
- **ACTIVE state**: Enter this state if an HB ACTIVE event is triggered,
|
||||
indicating that the UOS state has been active and need to set the bit
|
||||
indicating that the User VM has become active; the mediator needs to set bit
|
||||
23 (SoC bit) in the wakeup reason.
|
||||
- **SUSPENDING state**: Enter this state if a RAM REFRESH event or HB
|
||||
INACTIVE event is triggered. The related event handler needs to mask
|
||||
@@ -219,17 +219,17 @@ The difference between the native and virtualization architectures is
|
||||
that the IOC mediator needs to re-compute the checksum and reset
|
||||
priority. Currently, priority is not supported by IOC firmware; the
|
||||
priority setting by the IOC mediator is based on the priority setting of
|
||||
the CBC driver. The SOS and UOS use the same CBC driver.
|
||||
the CBC driver. The Service VM and User VM use the same CBC driver.
|
||||
|
||||
Power management virtualization
|
||||
-------------------------------
|
||||
|
||||
In acrn-dm, the IOC power management architecture involves PM DM, IOC
|
||||
DM, and UART DM modules. PM DM is responsible for UOS power management,
|
||||
DM, and UART DM modules. PM DM is responsible for User VM power management,
|
||||
and IOC DM is responsible for heartbeat and wakeup reason flows for IOC
|
||||
firmware. The heartbeat flow is used to control IOC firmware power state
|
||||
and wakeup reason flow is used to indicate IOC power state to the OS.
|
||||
UART DM transfers all IOC data between the SOS and UOS. These modules
|
||||
UART DM transfers all IOC data between the Service VM and User VM. These modules
|
||||
complete boot/suspend/resume/shutdown functions.
|
||||
|
||||
Boot flow
|
||||
@@ -243,13 +243,13 @@ Boot flow
|
||||
IOC Virtualization - Boot flow
|
||||
|
||||
#. Press ignition button for booting.
|
||||
#. SOS lifecycle service gets a "booting" wakeup reason.
|
||||
#. SOS lifecycle service notifies wakeup reason to VM Manager, and VM
|
||||
#. Service VM lifecycle service gets a "booting" wakeup reason.
|
||||
#. Service VM lifecycle service notifies wakeup reason to VM Manager, and VM
|
||||
Manager starts VM.
|
||||
#. VM Manager sets the VM state to "start".
|
||||
#. IOC DM forwards the wakeup reason to UOS.
|
||||
#. PM DM starts UOS.
|
||||
#. UOS lifecycle gets a "booting" wakeup reason.
|
||||
#. IOC DM forwards the wakeup reason to User VM.
|
||||
#. PM DM starts User VM.
|
||||
#. User VM lifecycle gets a "booting" wakeup reason.
|
||||
|
||||
Suspend & Shutdown flow
|
||||
+++++++++++++++++++++++
|
||||
@@ -262,23 +262,23 @@ Suspend & Shutdown flow
|
||||
IOC Virtualization - Suspend and Shutdown by Ignition
|
||||
|
||||
#. Press ignition button to suspend or shutdown.
|
||||
#. SOS lifecycle service gets a 0x800000 wakeup reason, then keeps
|
||||
#. Service VM lifecycle service gets a 0x800000 wakeup reason, then keeps
|
||||
sending a shutdown delay heartbeat to IOC firmware, and notifies a
|
||||
"stop" event to VM Manager.
|
||||
#. IOC DM forwards the wakeup reason to UOS lifecycle service.
|
||||
#. SOS lifecycle service sends a "stop" event to VM Manager, and waits for
|
||||
#. IOC DM forwards the wakeup reason to User VM lifecycle service.
|
||||
#. Service VM lifecycle service sends a "stop" event to VM Manager, and waits for
|
||||
the stop response before timeout.
|
||||
#. UOS lifecycle service gets a 0x800000 wakeup reason and sends inactive
|
||||
#. User VM lifecycle service gets a 0x800000 wakeup reason and sends inactive
|
||||
heartbeat with suspend or shutdown SUS_STAT to IOC DM.
|
||||
#. UOS lifecycle service gets a 0x000000 wakeup reason, then enters
|
||||
#. User VM lifecycle service gets a 0x000000 wakeup reason, then enters
|
||||
suspend or shutdown kernel PM flow based on SUS_STAT.
|
||||
#. PM DM executes UOS suspend/shutdown request based on ACPI.
|
||||
#. PM DM executes User VM suspend/shutdown request based on ACPI.
|
||||
#. VM Manager queries each VM state from PM DM. Suspend request maps
|
||||
to a paused state and shutdown request maps to a stop state.
|
||||
#. VM Manager collects all VMs state, and reports it to SOS lifecycle
|
||||
#. VM Manager collects all VMs state, and reports it to Service VM lifecycle
|
||||
service.
|
||||
#. SOS lifecycle sends inactive heartbeat to IOC firmware with
|
||||
suspend/shutdown SUS_STAT, based on the SOS' own lifecycle service
|
||||
#. Service VM lifecycle sends inactive heartbeat to IOC firmware with
|
||||
suspend/shutdown SUS_STAT, based on the Service VM's own lifecycle service
|
||||
policy.
|
||||
|
||||
Resume flow
|
||||
@@ -297,33 +297,33 @@ the same flow blocks.
|
||||
For ignition resume flow:
|
||||
|
||||
#. Press ignition button to resume.
|
||||
#. SOS lifecycle service gets an initial wakeup reason from the IOC
|
||||
#. Service VM lifecycle service gets an initial wakeup reason from the IOC
|
||||
firmware. The wakeup reason is 0x000020, from which the ignition button
|
||||
bit is set. It then sends active or initial heartbeat to IOC firmware.
|
||||
#. SOS lifecycle forwards the wakeup reason and sends start event to VM
|
||||
#. Service VM lifecycle forwards the wakeup reason and sends start event to VM
|
||||
Manager. The VM Manager starts to resume VMs.
|
||||
#. IOC DM gets the wakeup reason from the VM Manager and forwards it to UOS
|
||||
#. IOC DM gets the wakeup reason from the VM Manager and forwards it to User VM
|
||||
lifecycle service.
|
||||
#. VM Manager sets the VM state to starting for PM DM.
|
||||
#. PM DM resumes UOS.
|
||||
#. UOS lifecycle service gets wakeup reason 0x000020, and then sends an initial
|
||||
or active heartbeat. The UOS gets wakeup reason 0x800020 after
|
||||
#. PM DM resumes User VM.
|
||||
#. User VM lifecycle service gets wakeup reason 0x000020, and then sends an initial
|
||||
or active heartbeat. The User VM gets wakeup reason 0x800020 after
|
||||
resuming.
|
||||
|
||||
For RTC resume flow
|
||||
|
||||
#. RTC timer expires.
|
||||
#. SOS lifecycle service gets initial wakeup reason from the IOC
|
||||
#. Service VM lifecycle service gets initial wakeup reason from the IOC
|
||||
firmware. The wakeup reason is 0x000200, from which RTC bit is set.
|
||||
It then sends active or initial heartbeat to IOC firmware.
|
||||
#. SOS lifecycle forwards the wakeup reason and sends start event to VM
|
||||
#. Service VM lifecycle forwards the wakeup reason and sends start event to VM
|
||||
Manager. VM Manager begins resuming VMs.
|
||||
#. IOC DM gets the wakeup reason from the VM Manager, and forwards it to
|
||||
the UOS lifecycle service.
|
||||
the User VM lifecycle service.
|
||||
#. VM Manager sets the VM state to starting for PM DM.
|
||||
#. PM DM resumes UOS.
|
||||
#. UOS lifecycle service gets the wakeup reason 0x000200, and sends
|
||||
initial or active heartbeat. The UOS gets wakeup reason 0x800200
|
||||
#. PM DM resumes User VM.
|
||||
#. User VM lifecycle service gets the wakeup reason 0x000200, and sends
|
||||
initial or active heartbeat. The User VM gets wakeup reason 0x800200
|
||||
after resuming.
|
||||
|
||||
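
The wakeup reason values quoted in these flows are plain bit masks. A sketch of the relevant bits follows; the positions come from the wakeup-reason description in this section, and the macro names are assumed to mirror the CBC_WK_RSN_* naming used for the button and RTC bits.

.. code-block:: c

   /* Bit positions taken from the wakeup-reason description in this
    * section; shown for illustration only. */
   #define CBC_WK_RSN_BTN  (1U << 5)    /* ignition button -> 0x000020 */
   #define CBC_WK_RSN_RTC  (1U << 9)    /* RTC timer       -> 0x000200 */
   #define CBC_WK_RSN_SOC  (1U << 23)   /* SoC/active bit  -> 0x800000 */

   /* e.g. 0x800200 after an RTC resume = the RTC bit plus the SoC bit
    * that is set once the User VM heartbeat becomes active. */
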
System control data
|
||||
@@ -413,19 +413,19 @@ Currently the wakeup reason bits are supported by sources shown here:
|
||||
|
||||
* - wakeup_button
|
||||
- 5
|
||||
- Get from IOC FW, forward to UOS
|
||||
- Get from IOC FW, forward to User VM
|
||||
|
||||
* - RTC wakeup
|
||||
- 9
|
||||
- Get from IOC FW, forward to UOS
|
||||
- Get from IOC FW, forward to User VM
|
||||
|
||||
* - car door wakeup
|
||||
- 11
|
||||
- Get from IOC FW, forward to UOS
|
||||
- Get from IOC FW, forward to User VM
|
||||
|
||||
* - SoC wakeup
|
||||
- 23
|
||||
- Emulation (Depends on UOS's heartbeat message
|
||||
- Emulation (Depends on User VM's heartbeat message)
|
||||
|
||||
- CBC_WK_RSN_BTN (bit 5): ignition button.
|
||||
- CBC_WK_RSN_RTC (bit 9): RTC timer.
|
||||
@@ -522,7 +522,7 @@ definition is as below.
|
||||
:align: center
|
||||
|
||||
- The RTC command contains a relative time but not an absolute time.
|
||||
- SOS lifecycle service will re-compute the time offset before it is
|
||||
- Service VM lifecycle service will re-compute the time offset before it is
|
||||
sent to the IOC firmware.
|
||||
|
||||
.. figure:: images/ioc-image10.png
|
||||
@@ -560,10 +560,10 @@ IOC signal type definitions are as below.
|
||||
IOC Mediator - Signal flow
|
||||
|
||||
- The IOC backend needs to emulate the channel open/reset/close message which
|
||||
shouldn't be forward to the native cbc signal channel. The SOS signal
|
||||
shouldn't be forwarded to the native cbc signal channel. The Service VM signal
|
||||
related services should do a real open/reset/close signal channel.
|
||||
- Every backend should maintain a whitelist for different VMs. The
|
||||
whitelist can be stored in the SOS file system (Read only) in the
|
||||
whitelist can be stored in the Service VM file system (Read only) in the
|
||||
future, but currently it is hard coded.
|
||||
|
||||
IOC mediator has two whitelist tables, one is used for rx
|
||||
@@ -582,9 +582,9 @@ new multi signal, which contains the signals in the whitelist.
|
||||
Raw data
|
||||
--------
|
||||
|
||||
OEM raw channel only assigns to a specific UOS following that OEM
|
||||
The OEM raw channel is only assigned to a specific User VM following the OEM
|
||||
configuration. The IOC Mediator will directly forward all read/write
|
||||
message from IOC firmware to UOS without any modification.
|
||||
messages from IOC firmware to the User VM without any modification.
|
||||
|
||||
|
||||
IOC Mediator Usage
|
||||
@@ -600,14 +600,14 @@ The "ioc_channel_path" is an absolute path for communication between
|
||||
IOC mediator and UART DM.
|
||||
|
||||
The "lpc_port" is "com1" or "com2", IOC mediator needs one unassigned
|
||||
lpc port for data transfer between UOS and SOS.
|
||||
lpc port for data transfer between User VM and Service VM.
|
||||
|
||||
The "wakeup_reason" is IOC mediator boot up reason, each bit represents
|
||||
one wakeup reason.
|
||||
|
||||
For example, the following commands are used to enable IOC feature, the
|
||||
initial wakeup reason is the ignition button and cbc_attach uses ttyS1
|
||||
for TTY line discipline in UOS::
|
||||
for TTY line discipline in User VM::
|
||||
|
||||
-i /run/acrn/ioc_$vm_name,0x20
|
||||
-l com2,/run/acrn/ioc_$vm_name
|
||||
|
@@ -291,7 +291,8 @@ Virtual MTRR
|
||||
************
|
||||
|
||||
In ACRN, the hypervisor only virtualizes MTRRs fixed range (0~1MB).
|
||||
The HV sets MTRRs of the fixed range as Write-Back for UOS, and the SOS reads
|
||||
The HV sets MTRRs of the fixed range as Write-Back for a User VM, and
|
||||
the Service VM reads
|
||||
native MTRRs of the fixed range set by BIOS.
|
||||
|
||||
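
A minimal sketch of that fixed-range policy follows; it is illustrative only, not the actual ACRN code, and simply reports Write-Back (type 6) for every fixed-range MTRR read from a User VM.

.. code-block:: c

   /* Illustrative only: each byte of a fixed-range MTRR encodes one
    * memory type, and Write-Back is type 0x06, so reporting WB for the
    * whole 0~1MB fixed range means returning all-0x06 bytes. */
   #define MTRR_TYPE_WB  0x06U

   static uint64_t vmtrr_read_fixed_range(void)
   {
       return 0x0606060606060606UL;   /* eight WB entries per register */
   }
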
If the guest physical address is not in the fixed range (0~1MB), the
|
||||
@@ -489,7 +490,7 @@ almost all the system memory as shown here:
|
||||
:width: 900px
|
||||
:name: sos-mem-layout
|
||||
|
||||
SOS Physical Memory Layout
|
||||
Service VM Physical Memory Layout
|
||||
|
||||
Host to Guest Mapping
|
||||
=====================
|
||||
@@ -521,4 +522,4 @@ must not be accessible by the Seervice/User VM normal world.
|
||||
.. figure:: images/mem-image18.png
|
||||
:align: center
|
||||
|
||||
UOS Physical Memory Layout with Trusty
|
||||
User VM Physical Memory Layout with Trusty
|
||||
|
@@ -176,7 +176,7 @@ Guest SMP boot flow
|
||||
The core APIC IDs are reported to the guest using mptable info. SMP boot
|
||||
flow is similar to sharing mode. Refer to :ref:`vm-startup`
|
||||
for guest SMP boot flow in ACRN. Partition mode guests startup is same as
|
||||
the SOS startup in sharing mode.
|
||||
the Service VM startup in sharing mode.
|
||||
|
||||
Inter-processor Interrupt (IPI) Handling
|
||||
========================================
|
||||
|
@@ -241,7 +241,8 @@ Here is initial mode of vCPUs:
|
||||
+----------------------------------+----------------------------------------------------------+
|
||||
| VM and Processor Type | Initial Mode |
|
||||
+=================+================+==========================================================+
|
||||
| Service VM | BSP | Same as physical BSP, or Real Mode if SOS boot w/ OVMF |
|
||||
| Service VM | BSP | Same as physical BSP, or Real Mode if Service VM boot |
|
||||
| | | w/ OVMF |
|
||||
| +----------------+----------------------------------------------------------+
|
||||
| | AP | Real Mode |
|
||||
+-----------------+----------------+----------------------------------------------------------+
|
||||
|
@@ -151,7 +151,7 @@ Virtual IOAPIC
|
||||
**************
|
||||
|
||||
vIOAPIC is emulated by HV when Guest accesses MMIO GPA range:
|
||||
0xFEC00000-0xFEC01000. vIOAPIC for SOS should match to the native HW
|
||||
0xFEC00000-0xFEC01000. vIOAPIC for the Service VM should match the native HW
|
||||
IOAPIC Pin numbers. vIOAPIC for guest VM provides 48 pins. As the vIOAPIC is
|
||||
always associated with vLAPIC, the virtual interrupt injection from
|
||||
vIOAPIC will finally trigger a request for vLAPIC event by calling
|
||||
|
@@ -54,10 +54,10 @@ management. Please refer to ACRN power management design for more details.
|
||||
Post-launched User VMs
|
||||
======================
|
||||
|
||||
DM is taking control of post-launched User VMs' state transition after SOS
|
||||
DM is taking control of post-launched User VMs' state transition after Service VM
|
||||
boot up, and it calls VM APIs through hypercalls.
|
||||
|
||||
SOS user level service like Life-Cycle-Service and tool like Acrnd may work
|
||||
Service VM user level service like Life-Cycle-Service and tool like Acrnd may work
|
||||
together with DM to launch or stop a User VM. Please refer to ACRN tool
|
||||
introduction for more details.
|
||||
|
||||
|
@@ -4,8 +4,8 @@ UART Virtualization
|
||||
###################
|
||||
|
||||
In ACRN, UART virtualization is implemented as a fully-emulated device.
|
||||
In the Service OS (SOS), UART virtualization is implemented in the
|
||||
hypervisor itself. In the User OS (UOS), UART virtualization is
|
||||
In the Service VM, UART virtualization is implemented in the
|
||||
hypervisor itself. In the User VM, UART virtualization is
|
||||
implemented in the Device Model (DM), and is the primary topic of this
|
||||
document. We'll summarize differences between the hypervisor and DM
|
||||
implementations at the end of this document.
|
||||
@@ -93,7 +93,7 @@ A similar virtual UART device is implemented in the hypervisor.
|
||||
Currently UART16550 is owned by the hypervisor itself and is used for
|
||||
debugging purposes. (The UART properties are configured by parameters
|
||||
to the hypervisor command line.) The hypervisor emulates a UART device
|
||||
with 0x3F8 address to the SOS and acts as the SOS console. The general
|
||||
with 0x3F8 address to the Service VM and acts as the Service VM console. The general
|
||||
emulation is the same as used in the device model, with the following
|
||||
differences:
|
||||
|
||||
@@ -110,8 +110,8 @@ differences:
|
||||
- Characters are read from the sbuf and put to rxFIFO,
|
||||
triggered by ``vuart_console_rx_chars``
|
||||
|
||||
- A virtual interrupt is sent to the SOS that triggered the read,
|
||||
and characters from rxFIFO are sent to the SOS by emulating a read
|
||||
- A virtual interrupt is sent to the Service VM that triggered the read,
|
||||
and characters from rxFIFO are sent to the Service VM by emulating a read
|
||||
of register ``UART16550_RBR``
|
||||
|
||||
- TX flow:
|
||||
|
@@ -29,8 +29,8 @@ emulation of three components, described here and shown in
|
||||
specific User OS with I/O MMU assistance.
|
||||
|
||||
- **DRD DM** (Dual Role Device) emulates the PHY MUX control
|
||||
logic. The sysfs interface in UOS is used to trap the switch operation
|
||||
into DM, and the the sysfs interface in SOS is used to operate on the physical
|
||||
logic. The sysfs interface in a User VM is used to trap the switch operation
|
||||
into DM, and the sysfs interface in the Service VM is used to operate on the physical
|
||||
registers to switch between DCI and HCI role.
|
||||
|
||||
On Intel Apollo Lake platform, the sysfs interface path is
|
||||
@@ -39,7 +39,7 @@ emulation of three components, described here and shown in
|
||||
device mode. Similarly, by echoing ``host``, the usb phy will be
|
||||
connected with xHCI controller as host mode.
|
||||
|
||||
An xHCI register access from UOS will induce EPT trap from UOS to
|
||||
An xHCI register access from a User VM will induce EPT trap from the User VM to
|
||||
DM, and the xHCI DM or DRD DM will emulate hardware behaviors to make
|
||||
the subsystem run.
|
||||
|
||||
@@ -94,7 +94,7 @@ DM:
|
||||
ports to virtual USB ports. It communicates with
native USB ports through libusb.
|
||||
|
||||
All the USB data buffers from UOS (User OS) are in the form of TRB
|
||||
All the USB data buffers from a User VM are in the form of TRB
|
||||
(Transfer Request Blocks), according to xHCI spec. xHCI DM will fetch
|
||||
these data buffers when the related xHCI doorbell registers are set.
|
||||
These data will convert to *struct usb_data_xfer* and, through USB core,
|
||||
@@ -106,15 +106,15 @@ The device model configuration command syntax for xHCI is as follows::
|
||||
-s <slot>,xhci,[bus1-port1,bus2-port2]
|
||||
|
||||
- *slot*: virtual PCI slot number in DM
|
||||
- *bus-port*: specify which physical USB ports need to map to UOS.
|
||||
- *bus-port*: specify which physical USB ports need to map to a User VM.
|
||||
|
||||
A simple example::
|
||||
|
||||
-s 7,xhci,1-2,2-2
|
||||
|
||||
This configuration means the virtual xHCI will appear in PCI slot 7
|
||||
in UOS, and any physical USB device attached on 1-2 or 2-2 will be
|
||||
detected by UOS and used as expected.
|
||||
in the User VM, and any physical USB device attached on 1-2 or 2-2 will be
|
||||
detected by a User VM and used as expected.
|
||||
|
||||
USB DRD virtualization
|
||||
**********************
|
||||
@@ -129,7 +129,7 @@ USB DRD (Dual Role Device) emulation works as shown in this figure:
|
||||
ACRN emulates the DRD hardware logic of an Intel Apollo Lake platform to
|
||||
support the dual role requirement. The DRD feature is implemented as xHCI
|
||||
vendor extended capability. ACRN emulates
|
||||
the same way, so the native driver can be reused in UOS. When UOS DRD
|
||||
the same way, so the native driver can be reused in a User VM. When a User VM DRD
|
||||
driver reads or writes the related xHCI extended registers, these access will
|
||||
be captured by xHCI DM. xHCI DM uses the native DRD related
|
||||
sysfs interface to do the Host/Device mode switch operations.
|
||||
|
@@ -4,8 +4,8 @@ Virtio-blk
|
||||
##########
|
||||
|
||||
The virtio-blk device is a simple virtual block device. The FE driver
|
||||
(in the UOS space) places read, write, and other requests onto the
|
||||
virtqueue, so that the BE driver (in the SOS space) can process them
|
||||
(in the User VM space) places read, write, and other requests onto the
|
||||
virtqueue, so that the BE driver (in the Service VM space) can process them
|
||||
accordingly. Communication between the FE and BE is based on the virtio
|
||||
kick and notify mechanism.
|
||||
|
||||
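
For reference, the request the FE places on the virtqueue follows the virtio-blk layout from the virtio specification; it is shown here only to make the FE/BE hand-off concrete.

.. code-block:: c

   /* virtio-blk request header as defined by the virtio specification */
   struct virtio_blk_outhdr {
       uint32_t type;     /* e.g. VIRTIO_BLK_T_IN (read), VIRTIO_BLK_T_OUT (write) */
       uint32_t ioprio;   /* request priority */
       uint64_t sector;   /* offset in 512-byte sectors */
   };

   /* descriptor chain: [ header ][ data buffer(s) ][ 1-byte status written by BE ] */
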
@@ -86,7 +86,7 @@ The device model configuration command syntax for virtio-blk is::
|
||||
|
||||
A simple example for virtio-blk:
|
||||
|
||||
1. Prepare a file in SOS folder::
|
||||
1. Prepare a file in Service VM folder::
|
||||
|
||||
dd if=/dev/zero of=test.img bs=1M count=1024
|
||||
mkfs.ext4 test.img
|
||||
@@ -96,15 +96,15 @@ A simple example for virtio-blk:
|
||||
|
||||
-s 9,virtio-blk,/root/test.img
|
||||
|
||||
#. Launch UOS, you can find ``/dev/vdx`` in UOS.
|
||||
#. Launch User VM, you can find ``/dev/vdx`` in User VM.
|
||||
|
||||
The ``x`` in ``/dev/vdx`` is related to the slot number used. If
|
||||
If you start DM with two virtio-blks, and the slot numbers are 9 and 10,
|
||||
then, the device with slot 9 will be recognized as ``/dev/vda``, and
|
||||
the device with slot 10 will be ``/dev/vdb``
|
||||
|
||||
#. Mount ``/dev/vdx`` to a folder in the UOS, and then you can access it.
|
||||
#. Mount ``/dev/vdx`` to a folder in the User VM, and then you can access it.
|
||||
|
||||
|
||||
Successful booting of the User OS verifies the correctness of the
|
||||
Successful booting of the User VM verifies the correctness of the
|
||||
device.
|
||||
|
@@ -33,7 +33,7 @@ The virtio-console architecture diagram in ACRN is shown below.
|
||||
Virtio-console is implemented as a virtio legacy device in the ACRN
|
||||
device model (DM), and is registered as a PCI virtio device to the guest
|
||||
OS. No changes are required in the frontend Linux virtio-console except
|
||||
that the guest (UOS) kernel should be built with
|
||||
that the guest (User VM) kernel should be built with
|
||||
``CONFIG_VIRTIO_CONSOLE=y``.
|
||||
|
||||
The virtio console FE driver registers a HVC console to the kernel if
|
||||
@@ -152,7 +152,7 @@ PTY
|
||||
TTY
|
||||
===
|
||||
|
||||
1. Identify your tty that will be used as the UOS console:
|
||||
1. Identify your tty that will be used as the User VM console:
|
||||
|
||||
- If you're connected to your device over the network via ssh, use
|
||||
the linux ``tty`` command, and it will report the node (may be
|
||||
|
@@ -1,89 +1,89 @@
|
||||
.. _virtio-gpio:
|
||||
|
||||
Virtio-gpio
|
||||
###########
|
||||
|
||||
virtio-gpio provides a virtual GPIO controller, which will map part of
|
||||
native GPIOs to UOS, UOS can perform GPIO operations through it,
|
||||
including setting values, including set/get value, set/get direction and
|
||||
set configuration (only Open Source and Open Drain types are currently
|
||||
supported). GPIOs quite often be used as IRQs, typically for wakeup
|
||||
events, virtio-gpio supports level and edge interrupt trigger modes.
|
||||
|
||||
The virtio-gpio architecture is shown below
|
||||
|
||||
.. figure:: images/virtio-gpio-1.png
|
||||
:align: center
|
||||
:name: virtio-gpio-1
|
||||
|
||||
Virtio-gpio Architecture
|
||||
|
||||
Virtio-gpio is implemented as a virtio legacy device in the ACRN device
|
||||
model (DM), and is registered as a PCI virtio device to the guest OS. No
|
||||
changes are required in the frontend Linux virtio-gpio except that the
|
||||
guest (UOS) kernel should be built with ``CONFIG_VIRTIO_GPIO=y``.
|
||||
|
||||
There are three virtqueues used between FE and BE, one for gpio
|
||||
operations, one for irq request and one for irq event notification.
|
||||
|
||||
Virtio-gpio FE driver will register a gpiochip and irqchip when it is
|
||||
probed, the base and number of gpio are generated by the BE. Each
|
||||
gpiochip or irqchip operation(e.g. get_direction of gpiochip or
|
||||
irq_set_type of irqchip) will trigger a virtqueue_kick on its own
|
||||
virtqueue. If some gpio has been set to interrupt mode, the interrupt
|
||||
events will be handled within the irq virtqueue callback.
|
||||
|
||||
GPIO mapping
|
||||
************
|
||||
|
||||
.. figure:: images/virtio-gpio-2.png
|
||||
:align: center
|
||||
:name: virtio-gpio-2
|
||||
|
||||
GPIO mapping
|
||||
|
||||
- Each UOS has only one GPIO chip instance, its number of GPIO is based
|
||||
on acrn-dm command line and GPIO base always start from 0.
|
||||
|
||||
- Each GPIO is exclusive, uos can't map the same native gpio.
|
||||
|
||||
- Each acrn-dm maximum number of GPIO is 64.
|
||||
|
||||
Usage
|
||||
*****
|
||||
|
||||
add the following parameters into command line::
|
||||
|
||||
-s <slot>,virtio-gpio,<@controller_name{offset|name[=mapping_name]:offset|name[=mapping_name]:...}@controller_name{...}...]>
|
||||
|
||||
- **controller_name**: Input ``ls /sys/bus/gpio/devices`` to check
|
||||
native gpio controller information.Usually, the devices represent the
|
||||
controller_name, you can use it as controller_name directly. You can
|
||||
also input "cat /sys/bus/gpio/device/XXX/dev" to get device id that can
|
||||
be used to match /dev/XXX, then use XXX as the controller_name. On MRB
|
||||
and NUC platforms, the controller_name are gpiochip0, gpiochip1,
|
||||
gpiochip2.gpiochip3.
|
||||
|
||||
- **offset|name**: you can use gpio offset or its name to locate one
|
||||
native gpio within the gpio controller.
|
||||
|
||||
- **mapping_name**: This is optional, if you want to use a customized
|
||||
name for a FE gpio, you can set a new name for a FE virtual gpio.
|
||||
|
||||
Example
|
||||
*******
|
||||
|
||||
- Map three native gpio to UOS, they are native gpiochip0 with offset
|
||||
of 1 and 6, and with the name ``reset``. In UOS, the three gpio has
|
||||
no name, and base from 0 ::
|
||||
|
||||
-s 10,virtio-gpio,@gpiochip0{1:6:reset}
|
||||
|
||||
- Map four native gpio to UOS, native gpiochip0's gpio with offset 1
|
||||
and offset 6 map to FE virtual gpio with offset 0 and offset 1
|
||||
without names, native gpiochip0's gpio with name ``reset`` maps to FE
|
||||
virtual gpio with offset 2 and its name is ``shutdown``, native
|
||||
gpiochip1's gpio with offset 0 maps to FE virtual gpio with offset 3 and
|
||||
its name is ``reset`` ::
|
||||
|
||||
-s 10,virtio-gpio,@gpiochip0{1:6:reset=shutdown}@gpiochip1{0=reset}
|
||||
.. _virtio-gpio:
|
||||
|
||||
Virtio-gpio
|
||||
###########
|
||||
|
||||
virtio-gpio provides a virtual GPIO controller, which will map part of
|
||||
native GPIOs to a User VM. The User VM can perform GPIO operations through it,
including set/get value, set/get direction, and set configuration (only
Open Source and Open Drain types are currently supported). GPIOs are
quite often used as IRQs, typically for wakeup events, so virtio-gpio
supports level and edge interrupt trigger modes.
|
||||
|
||||
The virtio-gpio architecture is shown below
|
||||
|
||||
.. figure:: images/virtio-gpio-1.png
|
||||
:align: center
|
||||
:name: virtio-gpio-1
|
||||
|
||||
Virtio-gpio Architecture
|
||||
|
||||
Virtio-gpio is implemented as a virtio legacy device in the ACRN device
|
||||
model (DM), and is registered as a PCI virtio device to the guest OS. No
|
||||
changes are required in the frontend Linux virtio-gpio except that the
|
||||
guest (User VM) kernel should be built with ``CONFIG_VIRTIO_GPIO=y``.
|
||||
|
||||
There are three virtqueues used between FE and BE, one for gpio
|
||||
operations, one for irq request and one for irq event notification.
|
||||
|
||||
Virtio-gpio FE driver will register a gpiochip and irqchip when it is
|
||||
probed, the base and number of gpio are generated by the BE. Each
|
||||
gpiochip or irqchip operation(e.g. get_direction of gpiochip or
|
||||
irq_set_type of irqchip) will trigger a virtqueue_kick on its own
|
||||
virtqueue. If some gpio has been set to interrupt mode, the interrupt
|
||||
events will be handled within the irq virtqueue callback.
|
||||
|
||||
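
A rough sketch of one such FE operation follows; the structure, command code, and helper names are illustrative assumptions, not the actual driver symbols.

.. code-block:: c

   /* Illustrative sketch: a gpiochip callback builds a request, places it
    * on the gpio virtqueue, and kicks the BE. Names are assumptions. */
   static int vgpio_get_direction(struct gpio_chip *chip, unsigned int offset)
   {
       struct vgpio_request req = {
           .cmd    = VGPIO_REQ_GET_DIRECTION,   /* assumed command code */
           .offset = offset,
       };

       /* queue the request, virtqueue_kick(), and wait for the BE to
        * answer on the same virtqueue */
       return vgpio_do_request(chip, &req);     /* hypothetical helper */
   }
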
GPIO mapping
|
||||
************
|
||||
|
||||
.. figure:: images/virtio-gpio-2.png
|
||||
:align: center
|
||||
:name: virtio-gpio-2
|
||||
|
||||
GPIO mapping
|
||||
|
||||
- Each User VM has only one GPIO chip instance, its number of GPIO is
|
||||
based on acrn-dm command line and GPIO base always start from 0.
|
||||
|
||||
- Each GPIO is exclusive; a User VM can't map the same native gpio.
|
||||
|
||||
- Each acrn-dm maximum number of GPIO is 64.
|
||||
|
||||
Usage
|
||||
*****
|
||||
|
||||
Add the following parameters into the command line::
|
||||
|
||||
-s <slot>,virtio-gpio,<@controller_name{offset|name[=mapping_name]:offset|name[=mapping_name]:...}@controller_name{...}...]>
|
||||
|
||||
- **controller_name**: Input ``ls /sys/bus/gpio/devices`` to check native
gpio controller information. Usually, the devices represent the
controller_name, and you can use it as the controller_name directly. You can
also input ``cat /sys/bus/gpio/device/XXX/dev`` to get a device id that can
be used to match /dev/XXX, then use XXX as the controller_name. On MRB
and NUC platforms, the controller_name are gpiochip0, gpiochip1,
gpiochip2, and gpiochip3.
|
||||
|
||||
- **offset|name**: you can use gpio offset or its name to locate one
|
||||
native gpio within the gpio controller.
|
||||
|
||||
- **mapping_name**: This is optional, if you want to use a customized
|
||||
name for a FE gpio, you can set a new name for a FE virtual gpio.
|
||||
|
||||
Example
|
||||
*******
|
||||
|
||||
- Map three native gpios to the User VM: two from native gpiochip0 with
offsets 1 and 6, and one with the name ``reset``. In the User VM, the three
gpios have no names, and the base starts from 0::
|
||||
|
||||
-s 10,virtio-gpio,@gpiochip0{1:6:reset}
|
||||
|
||||
- Map four native gpios to the User VM: native gpiochip0's gpios with offset 1
and offset 6 map to FE virtual gpios with offset 0 and offset 1
without names, native gpiochip0's gpio with the name ``reset`` maps to the FE
virtual gpio with offset 2 with the name ``shutdown``, and native
gpiochip1's gpio with offset 0 maps to the FE virtual gpio with offset 3 with
the name ``reset``::
|
||||
|
||||
-s 10,virtio-gpio,@gpiochip0{1:6:reset=shutdown}@gpiochip1{0=reset}
|
||||
|
@@ -1,135 +1,135 @@
|
||||
.. _virtio-i2c:
|
||||
|
||||
Virtio-i2c
|
||||
##########
|
||||
|
||||
Virtio-i2c provides a virtual I2C adapter that supports mapping multiple
|
||||
slave devices under multiple native I2C adapters to one virtio I2C
|
||||
adapter. The address for the slave device is not changed. Virtio-i2c
|
||||
also provides an interface to add an acpi node for slave devices so that
|
||||
the slave device driver in the guest OS does not need to change.
|
||||
|
||||
:numref:`virtio-i2c-1` below shows the virtio-i2c architecture.
|
||||
|
||||
.. figure:: images/virtio-i2c-1.png
|
||||
:align: center
|
||||
:name: virtio-i2c-1
|
||||
|
||||
Virtio-i2c Architecture
|
||||
|
||||
Virtio-i2c is implemented as a virtio legacy device in the ACRN device
|
||||
model (DM) and is registered as a PCI virtio device to the guest OS. The
|
||||
Device ID of virtio-i2c is 0x860A and the Sub Device ID is 0xFFF6.
|
||||
|
||||
Virtio-i2c uses one **virtqueue** to transfer the I2C msg that is
|
||||
received from the I2C core layer. Each I2C msg is translated into three
|
||||
parts:
|
||||
|
||||
- Header: includes addr, flags, and len.
|
||||
- Data buffer: includes the pointer to msg data.
|
||||
- Status: includes the process results at the backend.
|
||||
|
||||
In the backend kick handler, data is obtained from the virtqueue, which
|
||||
reformats the data to a standard I2C message and then sends it to a
|
||||
message queue that is maintained in the backend. A worker thread is
|
||||
created during the initiate phase; it receives the I2C message from the
|
||||
queue and then calls the I2C APIs to send to the native I2C adapter.
|
||||
|
||||
When the request is done, the backend driver updates the results and
|
||||
notifies the frontend. The msg process flow is shown in
|
||||
:numref:`virtio-process-flow` below.
|
||||
|
||||
.. figure:: images/virtio-i2c-1a.png
|
||||
:align: center
|
||||
:name: virtio-process-flow
|
||||
|
||||
Message Process Flow
|
||||
|
||||
**Usage:**
|
||||
-s <slot>,virtio-i2c,<bus>[:<slave_addr>[@<node>]][:<slave_addr>[@<node>]][,<bus>[:<slave_addr>[@<node>]][:<slave_addr>][@<node>]]
|
||||
|
||||
bus:
|
||||
The bus number for the native I2C adapter; ``2`` means ``/dev/i2c-2``.
|
||||
|
||||
slave_addr:
|
||||
he address for the native slave devices such as ``1C``, ``2F``...
|
||||
|
||||
@:
|
||||
The prefix for the acpi node.
|
||||
|
||||
node:
|
||||
The acpi node name supported in the current code. You can find the
|
||||
supported name in the ``acpi_node_table[]`` from the source code. Currently,
|
||||
only ``cam1``, ``cam2``, and ``hdac`` are supported for MRB. These nodes are
|
||||
platform-specific.
|
||||
|
||||
|
||||
**Example:**
|
||||
|
||||
-s 19,virtio-i2c,0:70@cam1:2F,4:1C
|
||||
|
||||
This adds slave devices 0x70 and 0x2F under the native adapter
|
||||
/dev/i2c-0, and 0x1C under /dev/i2c-6 to the virtio-i2c adapter. Since
|
||||
0x70 includes '@cam1', acpi info is also added to it. Since 0x2F and
|
||||
0x1C have '@<node>', no acpi info is added to them.
|
||||
|
||||
|
||||
**Simple use case:**
|
||||
|
||||
When launched with this cmdline:
|
||||
|
||||
-s 19,virtio-i2c,4:1C
|
||||
|
||||
a virtual I2C adapter will appear in the guest OS:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
root@clr-d5f61ae5f5224e59bb1727db3b5f5d4e ~ # ./i2cdetect -y -l
|
||||
i2c-3 i2c DPDDC-A I2C adapter
|
||||
i2c-1 i2c i915 gmbus dpc I2C adapter
|
||||
i2c-6 i2c i2c-virtio I2C adapter <------
|
||||
i2c-4 i2c DPDDC-B I2C adapter
|
||||
i2c-2 i2c i915 gmbus misc I2C adapter
|
||||
i2c-0 i2c i915 gmbus dpb I2C adapter
|
||||
i2c-5 i2c DPDDC-C I2C adapter
|
||||
|
||||
You can find the slave device 0x1C under the virtio I2C adapter i2c-6:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
root@clr-d5f61ae5f5224e59bb1727db3b5f5d4e ~ # ./i2cdetect -y -r 6
|
||||
0 1 2 3 4 5 6 7 8 9 a b c d e f
|
||||
00: -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- -- <--------
|
||||
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
70: -- -- -- -- -- -- -- --
|
||||
|
||||
You can dump the i2c device if it is supported:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
root@clr-d5f61ae5f5224e59bb1727db3b5f5d4e ~ # ./i2cdump -f -y 6 0x1C
|
||||
No size specified (using byte-data access)
|
||||
0 1 2 3 4 5 6 7 8 9 a b c d e f 0123456789abcdef
|
||||
10: ff ff 00 22 b2 05 00 00 00 00 00 00 00 00 00 00 ..."??..........
|
||||
20: 00 00 00 ff ff ff ff ff 00 00 00 ff ff ff ff ff ................
|
||||
30: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff 00 ................
|
||||
40: 00 00 00 ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
50: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
60: 00 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 .?..............
|
||||
70: ff ff 00 ff 10 10 ff ff ff ff ff ff ff ff ff ff ....??..........
|
||||
80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
90: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
a0: ff ff ff ff ff ff f8 ff 00 00 ff ff 00 ff ff ff ......?.........
|
||||
b0: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
c0: 00 ff 00 00 ff ff ff 00 00 ff ff ff ff ff ff ff ................
|
||||
d0: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
e0: 00 ff 06 00 03 fa 00 ff ff ff ff ff ff ff ff ff ..?.??..........
|
||||
f0: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
|
||||
Note that the virtual I2C bus number has no relationship with the native
|
||||
I2C bus number; it is auto-generated by the guest OS.
|
||||
.. _virtio-i2c:
|
||||
|
||||
Virtio-i2c
|
||||
##########
|
||||
|
||||
Virtio-i2c provides a virtual I2C adapter that supports mapping multiple
|
||||
slave devices under multiple native I2C adapters to one virtio I2C
|
||||
adapter. The address for the slave device is not changed. Virtio-i2c
|
||||
also provides an interface to add an acpi node for slave devices so that
|
||||
the slave device driver in the guest OS does not need to change.
|
||||
|
||||
:numref:`virtio-i2c-1` below shows the virtio-i2c architecture.
|
||||
|
||||
.. figure:: images/virtio-i2c-1.png
|
||||
:align: center
|
||||
:name: virtio-i2c-1
|
||||
|
||||
Virtio-i2c Architecture
|
||||
|
||||
Virtio-i2c is implemented as a virtio legacy device in the ACRN device
|
||||
model (DM) and is registered as a PCI virtio device to the guest OS. The
|
||||
Device ID of virtio-i2c is 0x860A and the Sub Device ID is 0xFFF6.
|
||||
|
||||
Virtio-i2c uses one **virtqueue** to transfer the I2C msg that is
|
||||
received from the I2C core layer. Each I2C msg is translated into three
|
||||
parts:
|
||||
|
||||
- Header: includes addr, flags, and len.
|
||||
- Data buffer: includes the pointer to msg data.
|
||||
- Status: includes the process results at the backend.
|
||||
|
||||
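
A minimal layout sketch of those three parts follows; the field and struct names are illustrative, not the actual ACRN definitions.

.. code-block:: c

   /* Illustrative layout of one I2C msg on the virtqueue:
    *   [ header ][ data buffer ][ status ]
    * The BE writes the status after the native transfer finishes. */
   struct vi2c_msg_hdr {
       uint16_t addr;    /* slave address (unchanged by virtio-i2c) */
       uint16_t flags;   /* e.g. read/write                         */
       uint16_t len;     /* length of the data buffer               */
   };
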
In the backend kick handler, data is obtained from the virtqueue, which
|
||||
reformats the data to a standard I2C message and then sends it to a
|
||||
message queue that is maintained in the backend. A worker thread is
|
||||
created during the initiate phase; it receives the I2C message from the
|
||||
queue and then calls the I2C APIs to send to the native I2C adapter.
|
||||
|
||||
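
The native transfer step of that worker thread can be sketched with the standard Linux i2c-dev ioctl, assuming the queued message has already been rebuilt as a ``struct i2c_msg``:

.. code-block:: c

   #include <linux/i2c.h>
   #include <linux/i2c-dev.h>
   #include <sys/ioctl.h>

   /* Sketch only: send one rebuilt message to the native adapter, where
    * adapter_fd is an open /dev/i2c-<bus> matching the 'bus' option. */
   static int native_i2c_xfer(int adapter_fd, struct i2c_msg *msg)
   {
       struct i2c_rdwr_ioctl_data xfer = { .msgs = msg, .nmsgs = 1 };

       return ioctl(adapter_fd, I2C_RDWR, &xfer);
   }
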
When the request is done, the backend driver updates the results and
|
||||
notifies the frontend. The msg process flow is shown in
|
||||
:numref:`virtio-process-flow` below.
|
||||
|
||||
.. figure:: images/virtio-i2c-1a.png
|
||||
:align: center
|
||||
:name: virtio-process-flow
|
||||
|
||||
Message Process Flow
|
||||
|
||||
**Usage:**
|
||||
-s <slot>,virtio-i2c,<bus>[:<slave_addr>[@<node>]][:<slave_addr>[@<node>]][,<bus>[:<slave_addr>[@<node>]][:<slave_addr>][@<node>]]
|
||||
|
||||
bus:
|
||||
The bus number for the native I2C adapter; ``2`` means ``/dev/i2c-2``.
|
||||
|
||||
slave_addr:
|
||||
The address for the native slave devices, such as ``1C``, ``2F``...
|
||||
|
||||
@:
|
||||
The prefix for the acpi node.
|
||||
|
||||
node:
|
||||
The acpi node name supported in the current code. You can find the
|
||||
supported name in the ``acpi_node_table[]`` from the source code. Currently,
only ``cam1``, ``cam2``, and ``hdac`` are supported for MRB. These nodes are
|
||||
platform-specific.
|
||||
|
||||
|
||||
**Example:**
|
||||
|
||||
-s 19,virtio-i2c,0:70@cam1:2F,4:1C
|
||||
|
||||
This adds slave devices 0x70 and 0x2F under the native adapter
|
||||
/dev/i2c-0, and 0x1C under /dev/i2c-6 to the virtio-i2c adapter. Since
|
||||
0x70 includes '@cam1', acpi info is also added to it. Since 0x2F and
|
||||
0x1C have no '@<node>', no acpi info is added to them.
|
||||
|
||||
|
||||
**Simple use case:**
|
||||
|
||||
When launched with this cmdline:
|
||||
|
||||
-s 19,virtio-i2c,4:1C
|
||||
|
||||
a virtual I2C adapter will appear in the guest OS:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
root@clr-d5f61ae5f5224e59bb1727db3b5f5d4e ~ # ./i2cdetect -y -l
|
||||
i2c-3 i2c DPDDC-A I2C adapter
|
||||
i2c-1 i2c i915 gmbus dpc I2C adapter
|
||||
i2c-6 i2c i2c-virtio I2C adapter <------
|
||||
i2c-4 i2c DPDDC-B I2C adapter
|
||||
i2c-2 i2c i915 gmbus misc I2C adapter
|
||||
i2c-0 i2c i915 gmbus dpb I2C adapter
|
||||
i2c-5 i2c DPDDC-C I2C adapter
|
||||
|
||||
You can find the slave device 0x1C under the virtio I2C adapter i2c-6:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
root@clr-d5f61ae5f5224e59bb1727db3b5f5d4e ~ # ./i2cdetect -y -r 6
|
||||
0 1 2 3 4 5 6 7 8 9 a b c d e f
|
||||
00: -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- -- <--------
|
||||
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
|
||||
70: -- -- -- -- -- -- -- --
|
||||
|
||||
You can dump the i2c device if it is supported:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
root@clr-d5f61ae5f5224e59bb1727db3b5f5d4e ~ # ./i2cdump -f -y 6 0x1C
|
||||
No size specified (using byte-data access)
|
||||
0 1 2 3 4 5 6 7 8 9 a b c d e f 0123456789abcdef
|
||||
10: ff ff 00 22 b2 05 00 00 00 00 00 00 00 00 00 00 ..."??..........
|
||||
20: 00 00 00 ff ff ff ff ff 00 00 00 ff ff ff ff ff ................
|
||||
30: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff 00 ................
|
||||
40: 00 00 00 ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
50: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
60: 00 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 .?..............
|
||||
70: ff ff 00 ff 10 10 ff ff ff ff ff ff ff ff ff ff ....??..........
|
||||
80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
90: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
a0: ff ff ff ff ff ff f8 ff 00 00 ff ff 00 ff ff ff ......?.........
|
||||
b0: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
c0: 00 ff 00 00 ff ff ff 00 00 ff ff ff ff ff ff ff ................
|
||||
d0: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
e0: 00 ff 06 00 03 fa 00 ff ff ff ff ff ff ff ff ff ..?.??..........
|
||||
f0: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ................
|
||||
|
||||
Note that the virtual I2C bus number has no relationship with the native
|
||||
I2C bus number; it is auto-generated by the guest OS.
|
||||
|
@@ -21,8 +21,8 @@ must be built with ``CONFIG_VIRTIO_INPUT=y``.
|
||||
|
||||
Two virtqueues are used to transfer input_event between FE and BE. One
|
||||
is for the input_events from BE to FE, as generated by input hardware
|
||||
devices in SOS. The other is for status changes from FE to BE, as
|
||||
finally sent to input hardware device in SOS.
|
||||
devices in Service VM. The other is for status changes from FE to BE, as
|
||||
finally sent to input hardware device in Service VM.
|
||||
|
||||
At the probe stage of FE virtio-input driver, a buffer (used to
|
||||
accommodate 64 input events) is allocated together with the driver data.
|
||||
@@ -37,7 +37,7 @@ char device and caches it into an internal buffer until an EV_SYN input
|
||||
event with SYN_REPORT is received. BE driver then copies all the cached
|
||||
input events to the event virtqueue, one by one. These events are added by
|
||||
the FE driver following a notification to FE driver, implemented
|
||||
as an interrupt injection to UOS.
|
||||
as an interrupt injection to User VM.
|
||||
|
||||
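
A sketch of that caching rule in the BE follows; the structure and helper names are illustrative, while ``EV_SYN`` and ``SYN_REPORT`` are the standard Linux input event codes.

.. code-block:: c

   #include <linux/input.h>

   /* Illustrative sketch: cache evdev events and flush the whole batch
    * to the event virtqueue when a SYN_REPORT arrives. */
   static void be_handle_event(struct vi_backend *vi, const struct input_event *ev)
   {
       vi->cache[vi->cached++] = *ev;

       if (ev->type == EV_SYN && ev->code == SYN_REPORT) {
           flush_cache_to_event_vq(vi);   /* hypothetical helper; ends up as
                                           * an interrupt in the User VM */
           vi->cached = 0;
       }
   }
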
For input events regarding status change, FE driver allocates a
|
||||
buffer for an input event and adds it to the status virtqueue followed
|
||||
@@ -93,7 +93,7 @@ The general command syntax is::
|
||||
-s n,virtio-input,/dev/input/eventX[,serial]
|
||||
|
||||
- /dev/input/eventX is used to specify the evdev char device node in
|
||||
SOS.
|
||||
Service VM.
|
||||
|
||||
- "serial" is an optional string. When it is specified it will be used
|
||||
as the Uniq of guest virtio input device.
|
||||
|
@@ -4,7 +4,7 @@ Virtio-net
|
||||
##########
|
||||
|
||||
Virtio-net is the para-virtualization solution used in ACRN for
|
||||
networking. The ACRN device model emulates virtual NICs for UOS and the
|
||||
networking. The ACRN device model emulates virtual NICs for User VM and the
|
||||
frontend virtio network driver, simulating the virtual NIC and following
|
||||
the virtio specification. (Refer to :ref:`introduction` and
|
||||
:ref:`virtio-hld` background introductions to ACRN and Virtio.)
|
||||
@@ -23,7 +23,7 @@ Network Virtualization Architecture
|
||||
|
||||
ACRN's network virtualization architecture is shown below in
|
||||
:numref:`net-virt-arch`, and illustrates the many necessary network
|
||||
virtualization components that must cooperate for the UOS to send and
|
||||
virtualization components that must cooperate for the User VM to send and
|
||||
receive data from the outside world.
|
||||
|
||||
.. figure:: images/network-virt-arch.png
|
||||
@@ -38,7 +38,7 @@ components are parts of the Linux kernel.)
|
||||
|
||||
Let's explore these components further.
|
||||
|
||||
SOS/UOS Network Stack:
|
||||
Service VM/User VM Network Stack:
|
||||
This is the standard Linux TCP/IP stack, currently the most
|
||||
feature-rich TCP/IP implementation.
|
||||
|
||||
@@ -57,11 +57,11 @@ ACRN Hypervisor:
|
||||
bare-metal hardware, and suitable for a variety of IoT and embedded
|
||||
device solutions. It fetches and analyzes the guest instructions, puts
|
||||
the decoded information into the shared page as an IOREQ, and notifies
|
||||
or interrupts the VHM module in the SOS for processing.
|
||||
or interrupts the VHM module in the Service VM for processing.
|
||||
|
||||
VHM Module:
|
||||
The Virtio and Hypervisor Service Module (VHM) is a kernel module in the
|
||||
Service OS (SOS) acting as a middle layer to support the device model
|
||||
Service VM acting as a middle layer to support the device model
|
||||
and hypervisor. The VHM forwards a IOREQ to the virtio-net backend
|
||||
driver for processing.
|
||||
|
||||
@@ -72,7 +72,7 @@ ACRN Device Model and virtio-net Backend Driver:
|
||||
|
||||
Bridge and Tap Device:
|
||||
Bridge and Tap are standard virtual network infrastructures. They play
|
||||
an important role in communication among the SOS, the UOS, and the
|
||||
an important role in communication among the Service VM, the User VM, and the
|
||||
outside world.
|
||||
|
||||
IGB Driver:
|
||||
@@ -82,7 +82,7 @@ IGB Driver:
|
||||
|
||||
The virtual network card (NIC) is implemented as a virtio legacy device
|
||||
in the ACRN device model (DM). It is registered as a PCI virtio device
|
||||
to the guest OS (UOS) and uses the standard virtio-net in the Linux kernel as
|
||||
to the guest OS (User VM) and uses the standard virtio-net in the Linux kernel as
|
||||
its driver (the guest kernel should be built with
|
||||
``CONFIG_VIRTIO_NET=y``).
|
||||
|
||||
@@ -96,7 +96,7 @@ ACRN Virtio-Network Calling Stack
|
||||
|
||||
Various components of ACRN network virtualization are shown in the
|
||||
architecture diagram shown in :numref:`net-virt-arch`. In this section,
|
||||
we will use UOS data transmission (TX) and reception (RX) examples to
|
||||
we will use User VM data transmission (TX) and reception (RX) examples to
|
||||
explain step-by-step how these components work together to implement
|
||||
ACRN network virtualization.
|
||||
|
||||
@@ -123,13 +123,13 @@ Initialization in virtio-net Frontend Driver
|
||||
- Register network driver
|
||||
- Setup shared virtqueues
|
||||
|
||||
ACRN UOS TX FLOW
|
||||
================
|
||||
ACRN User VM TX FLOW
|
||||
====================
|
||||
|
||||
The following shows the ACRN UOS network TX flow, using TCP as an
|
||||
The following shows the ACRN User VM network TX flow, using TCP as an
|
||||
example, showing the flow through each layer:
|
||||
|
||||
**UOS TCP Layer**
|
||||
**User VM TCP Layer**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -139,7 +139,7 @@ example, showing the flow through each layer:
|
||||
tcp_write_xmit -->
|
||||
tcp_transmit_skb -->
|
||||
|
||||
**UOS IP Layer**
|
||||
**User VM IP Layer**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -153,7 +153,7 @@ example, showing the flow through each layer:
|
||||
neigh_output -->
|
||||
neigh_resolve_output -->
|
||||
|
||||
**UOS MAC Layer**
|
||||
**User VM MAC Layer**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -165,7 +165,7 @@ example, showing the flow through each layer:
|
||||
__netdev_start_xmit -->
|
||||
|
||||
|
||||
**UOS MAC Layer virtio-net Frontend Driver**
|
||||
**User VM MAC Layer virtio-net Frontend Driver**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -187,7 +187,7 @@ example, showing the flow through each layer:
|
||||
pio_instr_vmexit_handler -->
|
||||
emulate_io --> // ioreq cant be processed in HV, forward it to VHM
|
||||
acrn_insert_request_wait -->
|
||||
fire_vhm_interrupt --> // interrupt SOS, VHM will get notified
|
||||
fire_vhm_interrupt --> // interrupt Service VM, VHM will get notified
|
||||
|
||||
**VHM Module**
|
||||
|
||||
@@ -216,7 +216,7 @@ example, showing the flow through each layer:
|
||||
virtio_net_tap_tx -->
|
||||
writev --> // write data to tap device
|
||||
|
||||
**SOS TAP Device Forwarding**
|
||||
**Service VM TAP Device Forwarding**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -233,7 +233,7 @@ example, showing the flow through each layer:
|
||||
__netif_receive_skb_core -->
|
||||
|
||||
|
||||
**SOS Bridge Forwarding**
|
||||
**Service VM Bridge Forwarding**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -244,7 +244,7 @@ example, showing the flow through each layer:
|
||||
br_forward_finish -->
|
||||
br_dev_queue_push_xmit -->
|
||||
|
||||
**SOS MAC Layer**
|
||||
**Service VM MAC Layer**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -256,16 +256,16 @@ example, showing the flow through each layer:
|
||||
__netdev_start_xmit -->
|
||||
|
||||
|
||||
**SOS MAC Layer IGB Driver**
|
||||
**Service VM MAC Layer IGB Driver**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
igb_xmit_frame --> // IGB physical NIC driver xmit function
|
||||
|
||||
ACRN UOS RX FLOW
|
||||
================
|
||||
ACRN User VM RX FLOW
|
||||
====================
|
||||
|
||||
The following shows the ACRN UOS network RX flow, using TCP as an example.
|
||||
The following shows the ACRN User VM network RX flow, using TCP as an example.
|
||||
Let's start by receiving a device interrupt. (Note that the hypervisor
|
||||
will first get notified when receiving an interrupt even in passthrough
|
||||
cases.)
|
||||
@@ -288,11 +288,11 @@ cases.)
|
||||
|
||||
do_softirq -->
|
||||
ptdev_softirq -->
|
||||
vlapic_intr_msi --> // insert the interrupt into SOS
|
||||
vlapic_intr_msi --> // insert the interrupt into Service VM
|
||||
|
||||
start_vcpu --> // VM Entry here, will process the pending interrupts
|
||||
|
||||
**SOS MAC Layer IGB Driver**
|
||||
**Service VM MAC Layer IGB Driver**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -306,7 +306,7 @@ cases.)
|
||||
__netif_receive_skb -->
|
||||
__netif_receive_skb_core --
|
||||
|
||||
**SOS Bridge Forwarding**
|
||||
**Service VM Bridge Forwarding**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -317,7 +317,7 @@ cases.)
|
||||
br_forward_finish -->
|
||||
br_dev_queue_push_xmit -->
|
||||
|
||||
**SOS MAC Layer**
|
||||
**Service VM MAC Layer**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -328,7 +328,7 @@ cases.)
|
||||
netdev_start_xmit -->
|
||||
__netdev_start_xmit -->
|
||||
|
||||
**SOS MAC Layer TAP Driver**
|
||||
**Service VM MAC Layer TAP Driver**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -339,7 +339,7 @@ cases.)
|
||||
.. code-block:: c
|
||||
|
||||
virtio_net_rx_callback --> // the tap fd get notified and this function invoked
|
||||
virtio_net_tap_rx --> // read data from tap, prepare virtqueue, insert interrupt into the UOS
|
||||
virtio_net_tap_rx --> // read data from tap, prepare virtqueue, insert interrupt into the User VM
|
||||
vq_endchains -->
|
||||
vq_interrupt -->
|
||||
pci_generate_msi -->
|
||||
@@ -357,10 +357,10 @@ cases.)
|
||||
|
||||
vmexit_handler --> // vmexit because VMX_EXIT_REASON_VMCALL
|
||||
vmcall_vmexit_handler -->
|
||||
hcall_inject_msi --> // insert interrupt into UOS
|
||||
hcall_inject_msi --> // insert interrupt into User VM
|
||||
vlapic_intr_msi -->
|
||||
|
||||
**UOS MAC Layer virtio_net Frontend Driver**
|
||||
**User VM MAC Layer virtio_net Frontend Driver**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -372,7 +372,7 @@ cases.)
|
||||
virtnet_receive -->
|
||||
receive_buf -->
|
||||
|
||||
**UOS MAC Layer**
|
||||
**User VM MAC Layer**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -382,7 +382,7 @@ cases.)
|
||||
__netif_receive_skb -->
|
||||
__netif_receive_skb_core -->
|
||||
|
||||
**UOS IP Layer**
|
||||
**User VM IP Layer**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -393,7 +393,7 @@ cases.)
|
||||
ip_local_deliver_finish -->
|
||||
|
||||
|
||||
**UOS TCP Layer**
|
||||
**User VM TCP Layer**
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
@@ -410,7 +410,7 @@ How to Use
|
||||
==========
|
||||
|
||||
The network infrastructure shown in :numref:`net-virt-infra` needs to be
|
||||
prepared in the SOS before we start. We need to create a bridge and at
|
||||
prepared in the Service VM before we start. We need to create a bridge and at
|
||||
least one tap device (two tap devices are needed to create a dual
|
||||
virtual NIC) and attach a physical NIC and tap device to the bridge.
|
||||
|
||||

@@ -419,11 +419,11 @@ virtual NIC) and attach a physical NIC and tap device to the bridge.
:width: 900px
:name: net-virt-infra

Network Infrastructure in SOS
Network Infrastructure in Service VM

You can use Linux commands (e.g. ip, brctl) to create this network. In
our case, we use systemd to automatically create the network by default.
You can check the files with prefix 50- in the SOS
You can check the files with prefix 50- in the Service VM
``/usr/lib/systemd/network/``:

- `50-acrn.netdev <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/misc/acrnbridge/acrn.netdev>`__
@@ -431,7 +431,7 @@ You can check the files with prefix 50- in the SOS
- `50-tap0.netdev <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/misc/acrnbridge/tap0.netdev>`__
- `50-eth.network <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/misc/acrnbridge/eth.network>`__

When the SOS is started, run ``ifconfig`` to show the devices created by
When the Service VM is started, run ``ifconfig`` to show the devices created by
this systemd configuration:

.. code-block:: none
@@ -486,7 +486,7 @@ optional):

-s 4,virtio-net,<tap_name>,[mac=<XX:XX:XX:XX:XX:XX>]

When the UOS is launched, run ``ifconfig`` to check the network. enp0s4r
When the User VM is launched, run ``ifconfig`` to check the network. enp0s4r
is the virtual NIC created by acrn-dm:

.. code-block:: none

@@ -3,7 +3,7 @@
Virtio-rnd
##########

Virtio-rnd provides a virtual hardware random source for the UOS. It simulates a PCI device
Virtio-rnd provides a virtual hardware random source for the User VM. It simulates a PCI device
followed by a virtio specification, and is implemented based on the virtio user mode framework.

Architecture
@@ -15,9 +15,9 @@ components are parts of Linux software or third party tools.

virtio-rnd is implemented as a virtio legacy device in the ACRN device
model (DM), and is registered as a PCI virtio device to the guest OS
(UOS). Tools such as :command:`od` (dump a file in octal or other format) can
(User VM). Tools such as :command:`od` (dump a file in octal or other format) can
be used to read random values from ``/dev/random``. This device file in the
UOS is bound with the frontend virtio-rng driver. (The guest kernel must
User VM is bound with the frontend virtio-rng driver. (The guest kernel must
be built with ``CONFIG_HW_RANDOM_VIRTIO=y``). The backend
virtio-rnd reads the HW random value from ``/dev/random`` in the SOS and sends
them to the frontend.
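
As a rough illustration of what happens on either side of this path (the backend reading entropy on the Service VM, or :command:`od` reading it in the User VM), the sketch below simply reads a few bytes from ``/dev/random``; it is not the virtio-rnd backend code itself.

.. code-block:: c

   /*
    * Illustrative only: read a handful of random bytes from /dev/random
    * and print them in hex.
    */
   #include <fcntl.h>
   #include <stdio.h>
   #include <unistd.h>

   int main(void)
   {
       unsigned char buf[16];
       int fd = open("/dev/random", O_RDONLY);

       if (fd < 0) {
           perror("open /dev/random");
           return 1;
       }

       ssize_t n = read(fd, buf, sizeof(buf));
       for (ssize_t i = 0; i < n; i++)
           printf("%02x ", buf[i]);
       printf("\n");

       close(fd);
       return 0;
   }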

@@ -35,7 +35,7 @@ Add a pci slot to the device model acrn-dm command line; for example::

-s <slot_number>,virtio-rnd

Check to see if the frontend virtio_rng driver is available in the UOS:
Check to see if the frontend virtio_rng driver is available in the User VM:

.. code-block:: console

@@ -29,27 +29,27 @@ Model following the PCI device framework. The following

Watchdog device flow

The DM in the Service OS (SOS) treats the watchdog as a passive device.
The DM in the Service VM treats the watchdog as a passive device.
It receives read/write commands from the watchdog driver, does the
actions, and returns. In ACRN, the commands are from User OS (UOS)
actions, and returns. In ACRN, the commands are from User VM
watchdog driver.

UOS watchdog work flow
**********************
User VM watchdog work flow
**************************

When the UOS does a read or write operation on the watchdog device's
When the User VM does a read or write operation on the watchdog device's
registers or memory space (Port IO or Memory map I/O), it will trap into
the hypervisor. The hypervisor delivers the operation to the SOS/DM
the hypervisor. The hypervisor delivers the operation to the Service VM/DM
through IPI (inter-process interrupt) or shared memory, and the DM
dispatches the operation to the watchdog emulation code.
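
The dispatch step can be pictured as a switch on the trapped register offset inside the emulated device's register space. The fragment below is only a schematic of that idea, with made-up register names and helpers; it is not the i6300esb emulation code in acrn-dm.

.. code-block:: c

   /*
    * Schematic only: how a device model typically dispatches a trapped
    * register write to its emulation code. Offsets and helpers are
    * made up for illustration.
    */
   #include <stdint.h>

   #define WDT_REG_RELOAD   0x00   /* hypothetical "kick"/reload register */
   #define WDT_REG_CONFIG   0x04   /* hypothetical enable/timeout register */

   struct wdt_state {
       uint32_t timeout;
       int enabled;
   };

   extern void wdt_timer_kick(struct wdt_state *wdt);     /* reset the timer (placeholder) */
   extern void wdt_timer_config(struct wdt_state *wdt);   /* apply new timeout (placeholder) */

   void wdt_mmio_write(struct wdt_state *wdt, uint64_t offset, uint32_t value)
   {
       switch (offset) {
       case WDT_REG_RELOAD:
           wdt_timer_kick(wdt);          /* guest kicked the watchdog */
           break;
       case WDT_REG_CONFIG:
           wdt->timeout = value;
           wdt->enabled = (value != 0);
           wdt_timer_config(wdt);
           break;
       default:
           break;                        /* ignore writes to unknown registers */
       }
   }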

After the DM watchdog finishes emulating the read or write operation, it
then calls ``ioctl`` to the SOS/kernel (``/dev/acrn_vhm``). VHM will call a
then calls ``ioctl`` to the Service VM/kernel (``/dev/acrn_vhm``). VHM will call a
hypercall to trap into the hypervisor to tell it the operation is done, and
the hypervisor will set UOS-related VCPU registers and resume UOS so the
UOS watchdog driver will get the return values (or return status). The
:numref:`watchdog-workflow` below is a typical operation flow:
from UOS to SOS and return back:
the hypervisor will set User VM-related VCPU registers and resume the User VM so the
User VM watchdog driver will get the return values (or return status). The
:numref:`watchdog-workflow` below is a typical operation flow:
from a User VM to the Service VM and return back:

.. figure:: images/watchdog-image1.png
:align: center
@@ -82,18 +82,18 @@ emulation.

The main part in the watchdog emulation is the timer thread. It emulates
the watchdog device timeout management. When it gets the kick action
from the UOS, it resets the timer. If the timer expires before getting a
timely kick action, it will call DM API to reboot that UOS.
from the User VM, it resets the timer. If the timer expires before getting a
timely kick action, it will call DM API to reboot that User VM.
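
In outline, the timer thread is a deadline check: every kick pushes the deadline out, and missing the deadline triggers a reboot of that User VM. A simplified sketch follows, with placeholder names rather than the actual DM API.

.. code-block:: c

   /*
    * Simplified sketch of the watchdog timer thread. vm_reboot_request()
    * is a placeholder for the DM API that reboots the User VM.
    */
   #include <pthread.h>
   #include <stdatomic.h>
   #include <time.h>
   #include <unistd.h>

   extern void vm_reboot_request(void);         /* placeholder for the DM reboot API */

   static atomic_long deadline;                 /* absolute expiry time, in seconds */
   static long timeout_sec = 30;

   /* Called from the register-write path when the guest kicks the watchdog. */
   void wdt_kick(void)
   {
       atomic_store(&deadline, time(NULL) + timeout_sec);
   }

   void *wdt_timer_thread(void *arg)
   {
       (void)arg;
       wdt_kick();                              /* arm the first period */

       for (;;) {
           sleep(1);
           if (time(NULL) > atomic_load(&deadline)) {
               vm_reboot_request();             /* expired without a timely kick */
               wdt_kick();                      /* re-arm after requesting reboot */
           }
       }
       return NULL;
   }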

In the UOS launch script, add: ``-s xx,wdt-i6300esb`` into DM parameters.
In the User VM launch script, add: ``-s xx,wdt-i6300esb`` into DM parameters.
(xx is the virtual PCI BDF number as with other PCI devices)

Make sure the UOS kernel has the I6300ESB driver enabled:
``CONFIG_I6300ESB_WDT=y``. After the UOS boots up, the watchdog device
Make sure the User VM kernel has the I6300ESB driver enabled:
``CONFIG_I6300ESB_WDT=y``. After the User VM boots up, the watchdog device
will be created as node ``/dev/watchdog``, and can be used as a normal
device file.

Usually the UOS needs a watchdog service (daemon) to run in userland and
Usually the User VM needs a watchdog service (daemon) to run in userland and
kick the watchdog periodically. If something prevents the daemon from
kicking the watchdog, for example the UOS system is hung, the watchdog
will timeout and the DM will reboot the UOS.
kicking the watchdog, for example the User VM system is hung, the watchdog
will timeout and the DM will reboot the User VM.
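
A minimal example of such a daemon on the User VM side, using the standard Linux watchdog interface (this is generic ``/dev/watchdog`` usage, not ACRN-specific code):

.. code-block:: c

   /*
    * Minimal userland watchdog daemon: open /dev/watchdog and kick it
    * periodically with the standard WDIOC_KEEPALIVE ioctl. If this loop
    * ever stops (for example, the system hangs), the emulated watchdog
    * expires and the DM reboots the User VM.
    */
   #include <fcntl.h>
   #include <linux/watchdog.h>
   #include <stdio.h>
   #include <sys/ioctl.h>
   #include <unistd.h>

   int main(void)
   {
       int fd = open("/dev/watchdog", O_WRONLY);

       if (fd < 0) {
           perror("open /dev/watchdog");
           return 1;
       }

       for (;;) {
           ioctl(fd, WDIOC_KEEPALIVE, 0);   /* kick the watchdog */
           sleep(10);                       /* must be shorter than the timeout */
       }

       /* Not reached: the fd is intentionally never closed so the
        * watchdog stays armed for the lifetime of the daemon. */
       return 0;
   }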