doc: change term of vm0 to sos_vm

Using the term VM0 is easily misread as referring to the VM whose VM id
is 0, whereas VM id 0 could be used by any PRE_LAUNCHED_VM. So replace
VM0 with SOS_VM.

Signed-off-by: Victor Sun <victor.sun@intel.com>
Authored by Victor Sun on 2019-01-28 22:57:16 +08:00; committed by David Kinder
parent 7da9161d7d
commit a01c3cb913
7 changed files with 25 additions and 25 deletions


@@ -981,8 +981,8 @@ potentially error-prone.
 ACPI Emulation
 --------------
-An alternative ACPI resource abstraction option is for the SOS (VM0) to
-own all devices and emulate a set of virtual devices for the UOS (VM1).
+An alternative ACPI resource abstraction option is for the SOS (SOS_VM) to
+own all devices and emulate a set of virtual devices for the UOS (NORMAL_VM).
 This is the most popular ACPI resource model for virtualization,
 as shown in the picture below. ACRN currently
 uses device emulation plus some device passthrough for UOS.


@@ -516,7 +516,7 @@ Host to Guest Mapping
 =====================
 ACRN hypervisor creates Service OS's host (HPA) to guest (GPA) mapping
-(EPT mapping) through the function ``prepare_vm0_memmap_and_e820()``
+(EPT mapping) through the function ``prepare_sos_vm_memmap()``
 when it creates the SOS VM. It follows these rules:
 - Identical mapping
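
As an aside, the identical-mapping rule above can be pictured with a
minimal C sketch; the ``mem_region`` type and ``ept_map_region()``
helper are illustrative stand-ins, not the actual ACRN API::

   #include <stdint.h>
   #include <stdio.h>

   struct mem_region {
       uint64_t base;   /* host physical base address */
       uint64_t size;   /* region size in bytes */
   };

   /* stand-in for the hypervisor's EPT mapping primitive */
   static void ept_map_region(uint64_t gpa, uint64_t hpa, uint64_t size)
   {
       printf("EPT map GPA 0x%llx -> HPA 0x%llx (0x%llx bytes)\n",
              (unsigned long long)gpa, (unsigned long long)hpa,
              (unsigned long long)size);
   }

   /* identical mapping: for the SOS VM, each region's GPA equals its HPA */
   static void map_sos_vm_memory(const struct mem_region *e820, int nr)
   {
       for (int i = 0; i < nr; i++) {
           ept_map_region(e820[i].base, e820[i].base, e820[i].size);
       }
   }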


@@ -151,7 +151,7 @@ The main steps include:
 Hypervisor VM Startup Flow
-SW configuration for Service OS (VM0):
+SW configuration for Service OS (SOS_VM):
 - **ACPI**: HV passes the entire ACPI table from bootloader to Service
 OS directly. Legacy mode is currently supported as the ACPI table
@@ -162,13 +162,13 @@ SW configuration for Service OS (VM0):
 filtered out.
 - **Zero Page**: HV prepares the zero page at the high end of Service
-OS memory which is determined by VM0 guest FIT binary build. The
+OS memory which is determined by SOS_VM guest FIT binary build. The
 zero page includes configuration for ramdisk, bootargs and e820
 entries. The zero page address will be set to "Primary CPU" RSI
 register before VCPU gets run.
 - **Entry address**: HV will copy Service OS kernel image to 0x1000000
-as entry address for VM0's "Primary CPU". This entry address will
+as entry address for SOS_VM's "Primary CPU". This entry address will
 be set to "Primary CPU" RIP register before VCPU gets run.
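
The two register fixups in the bullets above amount to a small piece of
vCPU state setup; a hedged sketch, with ``vcpu_regs`` and the helper
name as assumptions rather than ACRN code::

   #include <stdint.h>

   #define SOS_KERNEL_ENTRY 0x1000000UL   /* entry address quoted in the text */

   /* illustrative subset of the "Primary CPU" register context */
   struct vcpu_regs {
       uint64_t rsi;
       uint64_t rip;
   };

   /* zero_page_gpa: guest address of the zero page prepared at the high
    * end of Service OS memory */
   static void setup_sos_primary_vcpu(struct vcpu_regs *regs,
                                      uint64_t zero_page_gpa)
   {
       regs->rsi = zero_page_gpa;     /* zero page address goes into RSI */
       regs->rip = SOS_KERNEL_ENTRY;  /* kernel entry address goes into RIP */
   }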
 SW configuration for User OS (VMx):


@@ -147,14 +147,14 @@ there is a EPT table for Normal world, and there may be a EPT table for
 Secure World. Secure world can access Normal World's memory, but Normal
 world cannot access Secure World's memory.
-VM0 domain
-VM0 domain is created when the hypervisor creates VM0 for the
+SOS_VM domain
+SOS_VM domain is created when the hypervisor creates VM for the
 Service OS.
-IOMMU uses the EPT table of Normal world of VM0 as the address
-translation structures for the devices in VM0 domain. The Normal world's
-EPT table of VM0 doesn't include the memory resource of the hypervisor
-and Secure worlds if any. So the devices in VM0 domain can't access the
+IOMMU uses the EPT table of Normal world of SOS_VM as the address
+translation structures for the devices in SOS_VM domain. The Normal world's
+EPT table of SOS_VM doesn't include the memory resource of the hypervisor
+and Secure worlds if any. So the devices in SOS_VM domain can't access the
 memory belong to hypervisor or secure worlds.
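
The point of the paragraph above is that the SOS_VM IOMMU domain simply
reuses the Normal World EPT root as its translation table; a minimal
sketch, with ``iommu_domain`` and its field names as assumptions::

   #include <stdint.h>

   /* illustrative IOMMU domain: a VM id plus the root of the address
    * translation structures used for DMA remapping */
   struct iommu_domain {
       uint16_t vm_id;
       uint64_t trans_table_ptr;
   };

   /* because the domain points at the Normal World EPT, devices in it
    * can only reach memory that EPT exposes -- never hypervisor or
    * Secure World memory */
   static void init_sos_vm_domain(struct iommu_domain *dom, uint16_t vm_id,
                                  uint64_t normal_world_eptp)
   {
       dom->vm_id = vm_id;
       dom->trans_table_ptr = normal_world_eptp;
   }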
 Other domains
@@ -252,24 +252,24 @@ be multiple DMAR units on the platform, ACRN allows some of the DMAR
 units to be ignored. If some DMAR unit(s) are marked as ignored, they
 would not be enabled.
-Hypervisor creates VM0 domain using the Normal World's EPT table of VM0
-as address translation table when creating VM0 as Service OS. And all
-PCI devices on the platform are added to VM0 domain. Then enable DMAR
+Hypervisor creates SOS_VM domain using the Normal World's EPT table of SOS_VM
+as address translation table when creating SOS_VM as Service OS. And all
+PCI devices on the platform are added to SOS_VM domain. Then enable DMAR
 translation for DMAR unit(s) if they are not marked as ignored.
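
The per-unit enable rule reduces to a loop that skips ignored units; a
sketch under the assumption of a ``dmar_unit`` descriptor with an
``ignore`` flag (not the real driver structures)::

   #include <stdbool.h>

   struct dmar_unit {
       bool ignore;   /* unit is marked as ignored on this platform */
   };

   /* stand-in for programming a DMAR unit's translation enable bit */
   static void enable_dmar_translation(struct dmar_unit *unit)
   {
       (void)unit;    /* real code would write the unit's control register */
   }

   static void enable_all_dmar_units(struct dmar_unit *units, int nr)
   {
       for (int i = 0; i < nr; i++) {
           if (!units[i].ignore) {
               enable_dmar_translation(&units[i]);
           }
       }
   }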
 Device assignment
 *****************
-All devices are initially added to VM0 domain.
+All devices are initially added to SOS_VM domain.
 To assign a device means to assign the device to an User OS. The device
-is remove from VM0 domain and added to the VM domain related to the User
-OS, which changes the address translation table from EPT of VM0 to EPT
+is remove from SOS_VM domain and added to the VM domain related to the User
+OS, which changes the address translation table from EPT of SOS_VM to EPT
 of User OS for the device.
 To unassign a device means to unassign the device from an User OS. The
 device is remove from the VM domain related to the User OS, then added
-back to VM0 domain, which changes the address translation table from EPT
-of User OS to EPT of VM0 for the device.
+back to SOS_VM domain, which changes the address translation table from EPT
+of User OS to EPT of SOS_VM for the device.
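
Assign and unassign are mirror images: move the device between the
SOS_VM domain and the User OS domain, which swaps the EPT used for its
DMA translation. A sketch reusing the illustrative ``iommu_domain``
above; the attach/detach helpers are hypothetical::

   #include <stdint.h>

   struct iommu_domain;   /* the illustrative type sketched earlier */

   /* hypothetical helpers: update the device's context entry */
   static void remove_device(struct iommu_domain *dom, uint16_t bdf)
   {
       (void)dom; (void)bdf;
   }
   static void add_device(struct iommu_domain *dom, uint16_t bdf)
   {
       (void)dom; (void)bdf;
   }

   /* assign: SOS_VM domain -> User OS domain (EPT of SOS_VM -> EPT of UOS) */
   static void assign_device(struct iommu_domain *sos_dom,
                             struct iommu_domain *uos_dom, uint16_t bdf)
   {
       remove_device(sos_dom, bdf);
       add_device(uos_dom, bdf);
   }

   /* unassign: User OS domain -> back to the SOS_VM domain */
   static void unassign_device(struct iommu_domain *sos_dom,
                               struct iommu_domain *uos_dom, uint16_t bdf)
   {
       remove_device(uos_dom, bdf);
       add_device(sos_dom, bdf);
   }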
 Power Management support for S3
 *******************************


@@ -482,7 +482,7 @@ Device Assignment Management
 ACRN hypervisor provides major device assignment management. Since the
 hypervisor owns all native vectors and IRQs, there must be a mapping
 table to handle the Guest IRQ/Vector to Host IRQ/Vector. Currently we
-assign all devices to VM0 except the UART.
+assign all devices to SOS_VM except the UART.
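
The Guest-to-Host mapping table mentioned here can be pictured as an
array of entries like the following; the struct and field names are
illustrative only, not the hypervisor's actual layout::

   #include <stdint.h>

   /* one entry of the hypothetical Guest IRQ/Vector to Host IRQ/Vector table */
   struct irq_vector_map {
       uint32_t guest_irq;
       uint32_t host_irq;
       uint8_t  guest_vector;   /* programmed by the guest into the device */
       uint8_t  host_vector;    /* allocated and owned by the hypervisor */
   };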
 If a PCI device (with MSI/MSI-x) is assigned to Guest, the User OS will
 program the PCI config space and set the guest vector to this device. A
@@ -504,7 +504,7 @@ vector for the device:
 (vIOAPC) Redirection Table Entries (RTE).
 **Legacy**
-Legacy devices are assigned to VM0.
+Legacy devices are assigned to SOS_VM.
 User OS device assignment is similar to the above, except the User OS
 doesn't call hypercall. Instead, the Guest program PCI configuration


@@ -57,7 +57,7 @@ Setup ``SOS_RAM_SIZE`` = 32G too (The SOS will have the whole resource)
 ::
 config SOS_RAM_SIZE
-hex "Size of the vm0 (SOS) RAM"
+hex "Size of the Service OS (SOS) RAM"
 default 0x200000000 if PLATFORM_SBL
 default 0x800000000 if PLATFORM_UEFI
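
For context, a Kconfig ``hex`` symbol like this typically surfaces in C
as a ``CONFIG_`` macro; a hedged sketch of how the value might be
consumed (the exact macro spelling and use site are assumptions)::

   /* 0x800000000 == 32G, the PLATFORM_UEFI default above */
   #ifndef CONFIG_SOS_RAM_SIZE
   #define CONFIG_SOS_RAM_SIZE 0x800000000UL
   #endif

   /* e.g. bounding the Service OS guest-physical address space */
   #define SOS_GPA_LIMIT CONFIG_SOS_RAM_SIZE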


@@ -62,8 +62,8 @@ A **Rear Seat Entertainment (RSE)** system could run:
 The ACRN hypervisor can support both Linux\* VM and Android\* VM as a
 User OS, with the User OS managed by the ACRN hypervisor. Developers and
 OEMs can use this reference stack to run their own VMs, together with
-IC, IVI, and RSE VMs. The Service OS runs as VM0 (also known as Dom0 in
-other hypervisors) and the User OS runs as VM1, (also known as DomU).
+IC, IVI, and RSE VMs. The Service OS runs as SOS_VM (also known as Dom0 in
+other hypervisors) and the User OS runs as NORMAL_VM, (also known as DomU).
 :numref:`ivi-block` shows an example block diagram of using the ACRN
 hypervisor.