doc: Change VHM to HSM in documentation

Mainly does the renaming (VHM -> HSM) in the documentation.

Tracked-On: #6282
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Shuo A Liu 2021-07-02 11:47:31 +08:00 committed by wenlingz
parent 4896faaebb
commit 4c9e9b9e4a
15 changed files with 108 additions and 108 deletions


@ -18,25 +18,25 @@ framework. There are 3 major subsystems in Service VM:
- **Device Emulation**: DM provides backend device emulation routines for
frontend User VM device drivers. These routines register their I/O
handlers to the I/O dispatcher inside the DM. When the HSM
assigns any I/O request to the DM, the I/O dispatcher
dispatches this request to the corresponding device emulation
routine to do the emulation.
- I/O Path in Service VM:
- HV initializes an I/O request and notifies the HSM driver in the Service VM
through an upcall.
- The HSM driver dispatches I/O requests to I/O clients and notifies the
clients (in this case the client is the DM, which is notified
through a char device).
- The DM I/O dispatcher calls the corresponding I/O handlers.
- The I/O dispatcher notifies the HSM driver that the I/O request is completed
through the char device.
- The HSM driver notifies the HV of the completion through a hypercall.
- The DM injects a VIRQ into the User VM frontend device through a hypercall.
- HSM: The Hypervisor Service Module is a kernel module in the Service VM acting as a
middle layer to support the DM. Refer to :ref:`virtio-APIs` for details.
This section introduces how the acrn-dm application is configured and
@ -142,15 +142,15 @@ DM Initialization
- **Option Parsing**: DM parses options from command line inputs.
- **VM Create**: DM calls ioctl to the Service VM HSM, then the Service VM HSM makes
hypercalls to HV to create a VM; it returns a vmid for a
dedicated VM.
- **Set I/O Request Buffer**: the I/O request buffer is a page buffer
allocated by DM for a specific VM in user space. This buffer is
shared between DM, HSM, and HV. **Set I/O Request buffer** calls
an ioctl executing a hypercall to share this unique page buffer
with HSM and HV, as sketched below. Refer to :ref:`hld-io-emulation` and
:ref:`IO-emulation-in-sos` for more details.
- **Memory Setup**: User VM memory is allocated from Service VM
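
A minimal user-space sketch of the **VM Create** and **Set I/O Request Buffer**
steps above, assuming hypothetical ioctl request codes: the names
``HSM_IOCTL_CREATE_VM`` and ``HSM_IOCTL_SET_IOREQ_BUFFER`` and the argument
shapes are placeholders for illustration, not the authoritative HSM ABI
(see `HSM ioctl interfaces`_ for the real definitions).

.. code-block:: c

   #include <fcntl.h>
   #include <stdint.h>
   #include <stdlib.h>
   #include <sys/ioctl.h>
   #include <unistd.h>

   /* Placeholder request codes -- the real ones come from the HSM UAPI header. */
   #define HSM_IOCTL_CREATE_VM        0x0001UL
   #define HSM_IOCTL_SET_IOREQ_BUFFER 0x0002UL
   #define PAGE_SIZE_4K               4096UL

   static int dm_create_vm(void)
   {
       int hsm_fd = open("/dev/acrn_hsm", O_RDWR | O_CLOEXEC);
       if (hsm_fd < 0)
           return -1;

       /* VM Create: HSM turns this ioctl into a create-VM hypercall and
        * hands back an identifier (vmid) for the new, dedicated VM. */
       uint64_t vmid = 0;
       if (ioctl(hsm_fd, HSM_IOCTL_CREATE_VM, &vmid) < 0)
           goto fail;

       /* Set I/O Request Buffer: one page allocated in DM user space, then
        * shared with HSM and HV through another ioctl that hypercalls. */
       void *ioreq_buf = NULL;
       if (posix_memalign(&ioreq_buf, PAGE_SIZE_4K, PAGE_SIZE_4K) != 0)
           goto fail;
       if (ioctl(hsm_fd, HSM_IOCTL_SET_IOREQ_BUFFER, ioreq_buf) < 0)
           goto fail;

       return hsm_fd;

   fail:
       close(hsm_fd);
       return -1;
   }
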
@ -277,33 +277,33 @@ DM Initialization
thread. mevent dispatch will do polling for potential async
events.
HSM
***
HSM Overview
============
Device Model manages the User VM by accessing interfaces exported from the HSM
module. The HSM module is a Service VM kernel driver. The ``/dev/acrn_hsm`` node is
created when the HSM module is initialized. Device Model follows the standard
Linux char device API (ioctl) to access the functionality of HSM.
For most ioctls, HSM converts the ioctl command to a corresponding
hypercall to the hypervisor. There are two exceptions:
- I/O request client management is implemented in HSM.
- For memory range management of the User VM, HSM needs to save all memory
range info of the User VM. The subsequent memory mapping update of the User VM
needs this information.
.. figure:: images/dm-image108.png
:align: center
:name: hsm-arch
Architecture of ACRN HSM
HSM ioctl Interfaces
====================
.. note:: Reference API documents for General interface, VM Management,
@ -315,7 +315,7 @@ VHM ioctl Interfaces
I/O Emulation in Service VM
***************************
I/O requests from the hypervisor are dispatched by HSM in the Service VM kernel
to a registered client, responsible for further processing the
I/O access and notifying the hypervisor on its completion.
@ -347,43 +347,43 @@ acts as the fallback client for any VM.
Each I/O client can be configured to handle the I/O requests in the
client thread context or in a separate kernel thread context.
:numref:`hsm-interaction` shows how an I/O client talks to HSM to register
a handler and process the incoming I/O requests in a kernel thread
specifically created for this purpose.
.. figure:: images/dm-image94.png
:align: center
:name: hsm-interaction
Interaction of in-kernel I/O clients and HSM
- On registration, the client requests a fresh ID, registers a
handler, adds the I/O range (or PCI BDF) to be emulated by this
client, and finally attaches it to HSM, which kicks off
a new kernel thread.
- The kernel thread waits for any I/O request to be handled. When a
pending I/O request is assigned to the client by HSM, the kernel
thread wakes up and calls the registered callback function
to process the request. (This pattern is modeled in the sketch below.)
- Before the client is destroyed, HSM ensures that the kernel
thread exits.
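
The registration and handling pattern above is implemented inside the HSM
kernel module. The following stand-alone pthread program only models that
pattern (a registered callback plus a dedicated thread that sleeps until a
request is assigned, and a destroy path that waits for the thread to exit);
every name in it is invented for the illustration and none of it is ACRN code.

.. code-block:: c

   #include <pthread.h>
   #include <stdbool.h>
   #include <stdio.h>

   /* Toy model of one in-kernel I/O client. */
   struct io_client {
       void (*handler)(unsigned int port);  /* registered callback          */
       pthread_mutex_t lock;
       pthread_cond_t wake;                 /* "request assigned" event     */
       bool pending;                        /* a request is waiting for us  */
       bool destroying;                     /* client is being torn down    */
       unsigned int port;                   /* claimed I/O range (one port) */
   };

   static void *client_thread(void *arg)
   {
       struct io_client *c = arg;

       pthread_mutex_lock(&c->lock);
       for (;;) {
           while (!c->pending && !c->destroying)
               pthread_cond_wait(&c->wake, &c->lock); /* sleep until assigned */
           if (c->pending) {
               c->pending = false;
               pthread_mutex_unlock(&c->lock);
               c->handler(c->port);                   /* run the callback */
               pthread_mutex_lock(&c->lock);
               continue;                              /* drain remaining work */
           }
           break;                                     /* destroying, nothing left */
       }
       pthread_mutex_unlock(&c->lock);
       return NULL;
   }

   static void emulate_port(unsigned int port)
   {
       printf("emulating access to port 0x%x\n", port);
   }

   int main(void)
   {
       struct io_client c = {
           .handler = emulate_port,
           .lock = PTHREAD_MUTEX_INITIALIZER,
           .wake = PTHREAD_COND_INITIALIZER,
           .port = 0x3f8,
       };
       pthread_t tid;

       pthread_create(&tid, NULL, client_thread, &c); /* "attach" starts the thread */

       pthread_mutex_lock(&c.lock);                   /* a request is assigned */
       c.pending = true;
       pthread_cond_signal(&c.wake);
       pthread_mutex_unlock(&c.lock);

       pthread_mutex_lock(&c.lock);                   /* destroy: thread must exit */
       c.destroying = true;
       pthread_cond_signal(&c.wake);
       pthread_mutex_unlock(&c.lock);
       pthread_join(tid, NULL);
       return 0;
   }
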
An I/O client can also handle I/O requests in its own thread context.
:numref:`dm-hsm-interaction` shows the interactions in such a case, using the
device model as an example. No callback is registered on
registration, and the I/O client (device model in the example) attaches
itself to HSM every time it is ready to process additional I/O requests.
Note also that the DM runs in userland and talks to HSM via the ioctl
interface in `HSM ioctl interfaces`_.
.. figure:: images/dm-image99.png
:align: center
:name: dm-hsm-interaction
Interaction of DM and HSM
Refer to `I/O client interfaces`_ for a list of interfaces for developing
I/O clients.
@ -398,12 +398,12 @@ Processing I/O Requests
I/O request handling sequence in Service VM
:numref:`io-sequence-sos` above illustrates the interactions among the
hypervisor, HSM,
and the device model for handling I/O requests. The main interactions
are as follows:
1. The hypervisor makes an upcall to Service VM as an interrupt
handled by the upcall handler in HSM.
2. The upcall handler schedules the execution of the I/O request
dispatcher. If the dispatcher is already running, another round
@ -417,10 +417,10 @@ are as follows:
4. The woken client (the DM in :numref:`io-sequence-sos` above) handles the
assigned I/O requests, updates their state to COMPLETE, and notifies
the HSM of the completion via ioctl, as sketched below. :numref:`dm-io-flow` shows this
flow.
5. The HSM device notifies the hypervisor of the completion via
hypercall.
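
A sketch of step 4 from the DM side follows. The request layout, the state
values, and the completion ioctl (``HSM_IOCTL_NOTIFY_REQUEST_FINISH``) are
illustrative placeholders, not the real shared-buffer ABI; the actual
definitions come from the HSM UAPI header shared by DM, HSM, and HV.

.. code-block:: c

   #include <stdint.h>
   #include <sys/ioctl.h>

   /* Illustrative request record -- not the real ABI. */
   enum ioreq_state { REQ_FREE, REQ_PENDING, REQ_PROCESSING, REQ_COMPLETE };

   struct ioreq {
       volatile uint32_t state;  /* handed back and forth between HV and client */
       uint32_t dir;             /* 0 = read, 1 = write                         */
       uint64_t addr;            /* port or MMIO address                        */
       uint64_t size;
       uint64_t value;           /* result for reads, operand for writes        */
   };

   #define IOREQS_PER_VM                   16
   #define HSM_IOCTL_NOTIFY_REQUEST_FINISH 0x0003UL  /* placeholder code */

   /* Walk the shared page, emulate each pending request, mark it COMPLETE,
    * then tell HSM via ioctl so it can hypercall back into the hypervisor. */
   static void dm_handle_assigned_ioreqs(int hsm_fd, struct ioreq *shared,
                                         uint64_t (*emulate)(const struct ioreq *))
   {
       for (int slot = 0; slot < IOREQS_PER_VM; slot++) {
           struct ioreq *req = &shared[slot];

           if (req->state != REQ_PENDING)
               continue;

           req->state = REQ_PROCESSING;
           uint64_t result = emulate(req);   /* dispatch to device emulation */
           if (req->dir == 0)
               req->value = result;          /* read: data returned to guest */

           req->state = REQ_COMPLETE;
           ioctl(hsm_fd, HSM_IOCTL_NOTIFY_REQUEST_FINISH, (unsigned long)slot);
       }
   }
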
.. figure:: images/dm-image97.png
@ -441,7 +441,7 @@ Emulation of Accesses to PCI Configuration Space
PCI configuration spaces are accessed by writing an address to I/O
port 0xcf8 and then reading or writing I/O port 0xcfc. As the PCI configuration
space of different devices is emulated by different clients, HSM
handles the emulation of accesses to I/O port 0xcf8, caches the BDF of
the device and the offset of the register, and delivers the request to
the client with the same BDF when I/O port 0xcfc is accessed.
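
The address written to 0xcf8 is a plain bit-field encoding of the target
bus/device/function and register offset, so the caching step above amounts to
decoding it. A small self-contained helper (illustrative, not the HSM source):

.. code-block:: c

   #include <stdint.h>

   /* PCI CF8 configuration address: enable bit 31, bus 23:16, device 15:11,
    * function 10:8, dword-aligned register offset 7:2. */
   struct pci_cfg_addr {
       uint8_t bus;
       uint8_t dev;
       uint8_t func;
       uint16_t reg;
       int enabled;
   };

   static struct pci_cfg_addr decode_cf8(uint32_t cf8)
   {
       struct pci_cfg_addr a;

       a.enabled = (cf8 >> 31) & 0x1;
       a.bus     = (cf8 >> 16) & 0xff;
       a.dev     = (cf8 >> 11) & 0x1f;
       a.func    = (cf8 >> 8)  & 0x07;
       a.reg     = cf8 & 0xfc;   /* low two bits come from the 0xcfc offset */
       return a;
   }
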
@ -599,7 +599,7 @@ device's MMIO handler:
CFG SPACE Handler Register
--------------------------
As HSM intercepts the cf8/cfc PIO access for PCI CFG SPACE, the DM only
needs to provide CFG SPACE read/write handlers directly. Such handlers
are defined as shown below. Normally, a device emulation developer
has no need to update this function.


@ -22,7 +22,7 @@ Hypervisor High-Level Design
hv-partitionmode
Power Management <hv-pm>
Console, Shell, and vUART <hv-console>
Hypercall / HSM upcall <hv-hypercall>
Compile-time configuration <hv-config>
RDT support <hv-rdt>
Split-locked Access handling <hld-splitlock>


@ -203,14 +203,14 @@ the port I/O address, size of access, read/write, and target register
into the I/O request in the I/O request buffer (shown in
:numref:`overview-io-emu-path`) and then notify/interrupt the Service VM to process.
The Hypervisor Service Module (HSM) in the Service VM intercepts HV interrupts,
and accesses the I/O request buffer for the port I/O instructions. It will
then check to see if any kernel device claims ownership of the
I/O port. The owning device, if any, executes the requested APIs from a
VM. Otherwise, the HSM module leaves the I/O request in the request buffer
and wakes up the DM thread for processing.
DM follows the same mechanism as HSM. The I/O processing thread of the
DM queries the I/O request buffer to get the PIO instruction details and
checks to see if any (guest) device emulation modules claim ownership of
the I/O port. If yes, the owning module is invoked to execute requested
@ -220,7 +220,7 @@ When the DM completes the emulation (port IO 20h access in this example)
of a device such as uDev1, uDev1 will put the result into the request
buffer (register AL). The DM will then return control to HV,
indicating completion of an IO instruction emulation, typically through
HSM/hypercall. The HV then stores the result to the guest register
context, advances the guest IP to indicate the completion of instruction
execution, and resumes the guest.
@ -299,7 +299,7 @@ Service VM
The Service VM is an important guest OS in the ACRN architecture. It
runs in non-root mode, and contains many critical components, including the VM
manager, the device model (DM), ACRN services, kernel mediation, and the
Hypervisor Service Module (HSM). The DM manages the User VM and
provides device emulation for it. The Service VM also provides services
for system power lifecycle management through the ACRN service and VM manager,
and services for system debugging through ACRN log/trace tools.
@ -311,10 +311,10 @@ DM (Device Model) is a user-level QEMU-like application in the Service VM
responsible for creating the User VM and then performing device emulation
based on command line configurations.
Based on an HSM kernel module, DM interacts with the VM manager to create the User
VM. It then emulates devices through full virtualization on the DM user
level, or para-virtualized based on kernel mediators (such as virtio,
GVT), or passthrough based on kernel HSM APIs.
Refer to :ref:`hld-devicemodel` for more details.
@ -337,16 +337,16 @@ ACRN service provides
system lifecycle management based on IOC polling. It communicates with the
VM manager to handle the User VM state, such as S3 and power-off.
HSM
===
The HSM (Hypervisor Service Module) kernel module is the Service VM kernel driver
supporting User VM management and device emulation. Device Model follows
the standard Linux char device API (ioctl) to access HSM
functionalities. HSM communicates with the ACRN hypervisor through
hypercall or upcall interrupts.
Refer to the HSM chapter for more details.
Kernel Mediators
================
@ -358,7 +358,7 @@ Log/Trace Tools
===============
ACRN Log/Trace tools are user-level applications used to
capture ACRN hypervisor log and trace data. The HSM kernel module provides a
middle layer to support these tools.
Refer to :ref:`hld-trace-log` for more details.


@ -67,8 +67,8 @@ Px/Cx data for User VM P/C-state management:
System block for building vACPI table with Px/Cx data
Some ioctl APIs are defined for the Device Model to query Px/Cx data from
the Service VM HSM. The Hypervisor needs to provide hypercall APIs to transfer
Px/Cx data from the CPU state table to the Service VM HSM.
The build flow is:
@ -76,8 +76,8 @@ The build flow is:
a CPU state table in the Hypervisor. The Hypervisor loads the data after
the system boots.
2) Before launching a User VM, the Device Model queries the Px/Cx data from the Service
VM HSM via the ioctl interface, as sketched below.
3) HSM transmits the query request to the Hypervisor by hypercall.
4) The Hypervisor returns the Px/Cx data.
5) The Device Model builds the virtual ACPI table with these Px/Cx data.
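
A sketch of step 2 as the Device Model might issue it follows. The _PSS field
order is the standard ACPI one, but the ioctl request code
(``HSM_IOCTL_GET_CPU_PX_DATA``) and its argument layout are placeholders for
illustration; the real query interface is defined by the HSM ioctl APIs
mentioned above.

.. code-block:: c

   #include <stdint.h>
   #include <sys/ioctl.h>

   /* One ACPI _PSS package entry (standard field order). */
   struct acpi_pss_entry {
       uint64_t core_frequency;      /* MHz                                   */
       uint64_t power;               /* mW                                    */
       uint64_t transition_latency;  /* usec                                  */
       uint64_t bus_master_latency;  /* usec                                  */
       uint64_t control;             /* value written to request this P-state */
       uint64_t status;              /* value read back to confirm it         */
   };

   #define HSM_IOCTL_GET_CPU_PX_DATA 0x0004UL  /* placeholder code */

   /* Ask HSM (and, via hypercall, the hypervisor's CPU state table) for the
    * Px entries of one virtual CPU so the DM can emit a _PSS object. */
   static int query_px_for_vcpu(int hsm_fd, int32_t vcpu_id,
                                struct acpi_pss_entry *entries, int32_t max_entries)
   {
       struct {
           int32_t vcpu_id;
           int32_t max_entries;
           struct acpi_pss_entry *entries;
       } query = { vcpu_id, max_entries, entries };

       return ioctl(hsm_fd, HSM_IOCTL_GET_CPU_PX_DATA, &query);
   }
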


@ -247,11 +247,11 @@ between the FE and BE driver is through shared memory, in the form of
virtqueues.
On the Service VM side where the BE driver is located, there are several
key components in ACRN, including the device model (DM), the Hypervisor
Service Module (HSM), VBS-U, and user-level vring service API helpers.
DM bridges the FE driver and BE driver since each VBS-U module emulates
a PCIe virtio device. HSM bridges DM and the hypervisor by providing
remote memory map APIs and notification APIs. VBS-U accesses the
virtqueue through the user-level vring service API helpers.
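
The virtqueue that the FE and BE drivers share follows the standard virtio
split-ring layout, so the data the vring service helpers walk is just an array
of descriptors as defined in the virtio specification; the chain-walking
helper below is illustrative, not ACRN code.

.. code-block:: c

   #include <stdint.h>

   #define VRING_DESC_F_NEXT  1  /* buffer continues in the 'next' descriptor */
   #define VRING_DESC_F_WRITE 2  /* buffer is write-only for the device (BE)  */

   /* Split-virtqueue descriptor, as defined by the virtio specification. */
   struct vring_desc {
       uint64_t addr;   /* guest-physical address of the buffer      */
       uint32_t len;    /* buffer length in bytes                    */
       uint16_t flags;  /* VRING_DESC_F_*                            */
       uint16_t next;   /* index of the next descriptor in the chain */
   };

   /* Count the descriptors in one chain starting at 'head'. Mapping the guest
    * memory behind 'addr' into the BE's address space is what HSM's remote
    * memory map API provides. */
   static int chain_length(const struct vring_desc *table, uint16_t head)
   {
       int n = 1;

       while (table[head].flags & VRING_DESC_F_NEXT) {
           head = table[head].next;
           n++;
       }
       return n;
   }
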
@ -332,7 +332,7 @@ can be described as:
1. vhost proxy creates two eventfds per virtqueue: one for kick
(an ioeventfd), the other for call (an irqfd); see the sketch after this list.
2. vhost proxy registers the two eventfds to HSM through the HSM character
device:
a) Ioeventfd is bound with a PIO/MMIO range. If it is a PIO, it is
@ -343,14 +343,14 @@ can be described as:
3. vhost proxy sets the two fds to the vhost kernel through ioctl of the vhost
device.
4. vhost starts polling the kick fd and wakes up when the guest kicks a
virtqueue, which results in an event_signal on the kick fd by the HSM ioeventfd.
5. The vhost device in the kernel signals on the irqfd to notify the guest.
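
Steps 1 and 2 can be sketched in user space as shown here. ``eventfd(2)`` is
the ordinary Linux call; the registration structures and request codes
(``HSM_IOCTL_IOEVENTFD``, ``HSM_IOCTL_IRQFD``) are placeholders, with the real
layouts (for example ``acrn_ioeventfd``) documented under `HSM Eventfd IOCTLs`_ below.

.. code-block:: c

   #include <stdint.h>
   #include <sys/eventfd.h>
   #include <sys/ioctl.h>

   /* Placeholder registration arguments and request codes. */
   struct hsm_ioeventfd_args { int32_t fd; uint64_t addr; uint32_t len; uint32_t pio; };
   struct hsm_irqfd_args     { int32_t fd; uint32_t msi_vector; };

   #define HSM_IOCTL_IOEVENTFD 0x0005UL  /* placeholder */
   #define HSM_IOCTL_IRQFD     0x0006UL  /* placeholder */

   /* Create the kick/call eventfd pair for one virtqueue and hand both to HSM:
    * HSM signals the kick fd when a matching PIO/MMIO ioreq arrives, and
    * injects an interrupt into the guest when the call fd gets signaled. */
   static int register_vq_eventfds(int hsm_fd, uint64_t notify_addr, uint32_t msi_vector)
   {
       int kick_fd = eventfd(0, EFD_CLOEXEC);  /* signaled by the HSM ioeventfd    */
       int call_fd = eventfd(0, EFD_CLOEXEC);  /* signaled by vhost, read by irqfd */

       if (kick_fd < 0 || call_fd < 0)
           return -1;

       struct hsm_ioeventfd_args kick = { kick_fd, notify_addr, 4, 1 };
       struct hsm_irqfd_args call = { call_fd, msi_vector };

       if (ioctl(hsm_fd, HSM_IOCTL_IOEVENTFD, &kick) < 0 ||
           ioctl(hsm_fd, HSM_IOCTL_IRQFD, &call) < 0)
           return -1;

       /* Step 3 would pass the same two fds to the vhost kernel driver. */
       return 0;
   }
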
Ioeventfd Implementation
~~~~~~~~~~~~~~~~~~~~~~~~
The ioeventfd module is implemented in HSM, and can enhance a registered
eventfd to listen to IO requests (PIO/MMIO) from the HSM ioreq module and
signal the eventfd when needed. :numref:`ioeventfd-workflow` shows the
general workflow of ioeventfd.
@ -365,17 +365,17 @@ The workflow can be summarized as:
1. vhost device init. Vhost proxy creates two eventfds, one for ioeventfd and
one for irqfd.
2. Pass the ioeventfd to the vhost kernel driver.
3. Pass the ioeventfd to the HSM driver.
4. The User VM FE driver triggers an ioreq, which is forwarded to the Service VM by the hypervisor.
5. The ioreq is dispatched by the HSM driver to the related HSM client.
6. The ioeventfd HSM client traverses the io_range list and finds the
corresponding eventfd.
7. Trigger the signal on the related eventfd.
Irqfd Implementation
~~~~~~~~~~~~~~~~~~~~
The irqfd module is implemented in HSM, and can enhance a registered
eventfd to inject an interrupt into a guest OS when the eventfd gets
signaled. :numref:`irqfd-workflow` shows the general flow for irqfd.
@ -390,12 +390,12 @@ The workflow can be summarized as:
1. vhost device init. Vhost proxy creates two eventfds, one for ioeventfd and
one for irqfd.
2. Pass the irqfd to the vhost kernel driver.
3. Pass the IRQ fd to the HSM driver.
4. The vhost device driver triggers an IRQ eventfd signal (a plain eventfd
write, demonstrated below) once the related native transfer is completed.
5. irqfd related logic traverses the irqfd list to retrieve the related irq
information.
6. irqfd related logic injects an interrupt through the HSM interrupt API.
7. The interrupt is delivered to the User VM FE driver through the hypervisor.
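
Both ioeventfd and irqfd build on the plain Linux eventfd counter: the
producer writes an 8-byte increment and the consumer's read returns the
accumulated count and resets it. A stand-alone demonstration of just that
primitive (ordinary Linux API, no ACRN code):

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>
   #include <sys/eventfd.h>
   #include <unistd.h>

   int main(void)
   {
       int efd = eventfd(0, 0);

       if (efd < 0)
           return 1;

       /* Producer side (vhost on the call fd, or HSM ioeventfd on the kick fd):
        * add 1 to the counter, waking any waiter. */
       uint64_t one = 1;
       write(efd, &one, sizeof(one));

       /* Consumer side (what the irqfd logic reacts to): read returns the
        * accumulated count and resets it to zero. */
       uint64_t count = 0;
       read(efd, &count, sizeof(count));
       printf("eventfd fired %llu time(s)\n", (unsigned long long)count);

       close(efd);
       return 0;
   }
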
.. _virtio-APIs:
@ -646,7 +646,7 @@ Linux Vhost IOCTLs
This IOCTL is used to set the eventfd which is used by vhost to inject a
virtual interrupt.
HSM Eventfd IOCTLs
------------------
.. doxygenstruct:: acrn_ioeventfd


@ -1,6 +1,6 @@
.. _hv-hypercall:
Hypercall / HSM Upcall
######################
The hypercall/upcall is used to request services between the Guest VM and the hypervisor.


@ -215,7 +215,7 @@ The interrupt vectors are assigned as shown here:
- Posted Interrupt
* - 0xF3
- Hypervisor Callback HSM
* - 0xF4
- Performance Monitoring Interrupt


@ -4,7 +4,7 @@ I/O Emulation High-Level Design
###############################
As discussed in :ref:`intro-io-emulation`, there are multiple ways and
places to handle I/O emulation, including HV, Service VM Kernel HSM, and Service VM
user-land device model (acrn-dm).
I/O emulation in the hypervisor provides these functionalities:


@ -57,12 +57,12 @@ ACRN Hypervisor:
bare-metal hardware, and suitable for a variety of IoT and embedded
device solutions. It fetches and analyzes the guest instructions, puts
the decoded information into the shared page as an IOREQ, and notifies
or interrupts the HSM module in the Service VM for processing.
HSM Module:
The Hypervisor Service Module (HSM) is a kernel module in the
Service VM acting as a middle layer to support the device model
and hypervisor. The HSM forwards an IOREQ to the virtio-net backend
driver for processing.
ACRN Device Model and virtio-net Backend Driver:
@ -185,18 +185,18 @@ example, showing the flow through each layer:
vmexit_handler --> // vmexit because VMX_EXIT_REASON_IO_INSTRUCTION
pio_instr_vmexit_handler -->
emulate_io --> // ioreq can't be processed in HV, forward it to HSM
acrn_insert_request_wait -->
fire_hsm_interrupt --> // interrupt Service VM, HSM will get notified
**HSM Module**
.. code-block:: c
vhm_intr_handler --> // HSM interrupt handler
tasklet_schedule -->
io_req_tasklet -->
acrn_ioreq_distribute_request --> // ioreq can't be processed in HSM, forward it to device DM
acrn_ioreq_notify_client -->
wake_up_interruptible --> // wake up DM to handle ioreq
@ -344,7 +344,7 @@ cases.)
vq_interrupt -->
pci_generate_msi -->
**HSM Module**
.. code-block:: c


@ -44,7 +44,7 @@ through IPI (inter-process interrupt) or shared memory, and the DM
dispatches the operation to the watchdog emulation code.
After the DM watchdog finishes emulating the read or write operation, it
then calls ``ioctl`` to the Service VM kernel (``/dev/acrn_hsm``). HSM will call a
hypercall to trap into the hypervisor to tell it the operation is done, and
the hypervisor will set User VM-related VCPU registers and resume the User VM so the
User VM watchdog driver will get the return values (or return status). The


@ -222,8 +222,8 @@ Glossary of Terms
vGPU
Virtual GPU Instance, created by GVT-g and used by a VM
HSM
Hypervisor Service Module
Virtio-BE
Back-End, VirtIO framework provides front-end driver and back-end driver


@ -597,25 +597,25 @@ ACRN Device model incorporates these three aspects:
**I/O Path**:
see `ACRN-io-mediator`_ below
**HSM**:
The Hypervisor Service Module is a kernel module in the
Service VM acting as a middle layer to support the device model. The HSM
client handling flow is described below:
#. The ACRN hypervisor IOREQ is forwarded to the HSM by an upcall
notification to the Service VM.
#. HSM will mark the IOREQ as "in process" so that the same IOREQ will
not be picked up again. The IOREQ will be sent to the client for handling.
Meanwhile, the HSM is ready for another IOREQ.
#. IOREQ clients are either a Service VM Userland application or a Service VM
Kernel space module. Once the IOREQ is processed and completed, the
Client will issue an IOCTL call to the HSM to notify an IOREQ state
change. The HSM then checks and hypercalls to the ACRN hypervisor
notifying it that the IOREQ has completed.
.. note::
* Userland: dm as ACRN Device Model.
* Kernel space: VBS-K, MPT Service, HSM itself
.. _pass-through:
@ -709,15 +709,15 @@ Following along with the numbered items in :numref:`io-emulation-path`:
the decoded information (including the PIO address, size of access,
read/write, and target register) into the shared page, and
notify/interrupt the Service VM to process.
3. The Hypervisor Service Module (HSM) in the Service VM receives the
interrupt, and queries the IO request ring to get the PIO instruction
details.
4. It checks to see if any kernel device claims
ownership of the IO port: if a kernel module claimed it, the kernel
module is activated to execute its processing APIs. Otherwise, the HSM
module leaves the IO request in the shared page and wakes up the
device model thread to process.
5. The ACRN device model follows the same mechanism as the HSM. The I/O
processing thread of device model queries the IO request ring to get the
PIO instruction details and checks to see if any (guest) device emulation
module claims ownership of the IO port: if a module claimed it,
@ -726,7 +726,7 @@ Following along with the numbered items in :numref:`io-emulation-path`:
in this example), (say uDev1 here), uDev1 puts the result into the
shared page (in register AL in this example).
7. ACRN device model then returns control to ACRN hypervisor to indicate the
completion of an IO instruction emulation, typically through HSM/hypercall.
8. The ACRN hypervisor then knows IO emulation is complete, and copies
the result to the guest register context.
9. The ACRN hypervisor finally advances the guest IP to
@ -844,7 +844,7 @@ Virtio Spec 0.9/1.0. The VBS-U is statically linked with the Device Model,
and communicates with the Device Model through the PCIe interface: PIO/MMIO
or MSI/MSI-X. VBS-U accesses Virtio APIs through the user space ``vring`` service
API helpers. User space ``vring`` service API helpers access the shared ring
through a remote memory map (mmap). HSM maps User VM memory with the help of
the ACRN Hypervisor.
.. figure:: images/virtio-framework-kernel.png
@ -859,9 +859,9 @@ at the right timings, for example. The FE driver sets
VIRTIO_CONFIG_S_DRIVER_OK to avoid unnecessary device configuration
changes while running. VBS-K can access shared rings through the VBS-K
virtqueue APIs. VBS-K virtqueue APIs are similar to VBS-U virtqueue
APIs. VBS-K registers as an HSM client to handle a continuous range of
registers.
There may be one or more HSM clients for each VBS-K, and there can be a
single HSM client for all VBS-Ks as well. VBS-K notifies the FE through HSM
interrupt APIs.


@ -429,7 +429,7 @@ for i in `ls -d /sys/devices/system/cpu/cpu[1-99]`; do
echo 0 > $i/online
online=`cat $i/online`
done
echo $idx > /sys/devices/virtual/misc/acrn_hsm/remove_cpu
fi
done


@ -88,7 +88,7 @@ Set Up and Launch LXC/LXD
b. Run the following commands to configure ``openstack``::
$ lxc config device add openstack eth1 nic name=eth1 nictype=bridged parent=acrn-br0
$ lxc config device add openstack acrn_hsm unix-char path=/dev/acrn_hsm
$ lxc config device add openstack loop-control unix-char path=/dev/loop-control
$ for n in {0..15}; do lxc config device add openstack loop$n unix-block path=/dev/loop$n; done;


@ -127,7 +127,7 @@ Prepare the Script to Create an Image
echo 0 > $i/online
online=`cat $i/online`
done
echo $idx > /sys/devices/virtual/misc/acrn_hsm/remove_cpu
fi
done
launch_win 1