doc: terminology cleanup in HLD startup

- Replace SOS or Service OS with Service VM
- Replace UOS or User OS with User VM
- Clean up some of the grammar

Signed-off-by: Amy Reyes <amy.reyes@intel.com>
This commit is contained in:
Amy Reyes 2021-11-03 11:58:48 -07:00 committed by David Kinder
parent a11c4592c3
commit dfe18717ed


This section is an overview of the ACRN hypervisor startup.
The ACRN hypervisor
compiles to a 32-bit multiboot-compliant ELF file.
The bootloader (ABL/SBL or GRUB) loads the hypervisor according to the
addresses specified in the ELF header. The bootstrap processor (BSP) starts
the hypervisor with an initial state compliant to the multiboot 1
specification, after the
bootloader prepares full configurations including ACPI, E820, etc.
The HV startup has two parts: the native startup followed by
Multiboot Header
****************
The ACRN hypervisor is built with a multiboot header, which presents
``MULTIBOOT_HEADER_MAGIC`` and ``MULTIBOOT_HEADER_FLAGS`` at the beginning
of the image. It sets bit 6 in ``MULTIBOOT_HEADER_FLAGS``, which requests the
bootloader pass memory map information (such as e820 entries) through the
Multiboot Information (MBI) structure.

Native Startup
**************
@ -37,31 +38,31 @@ memory and interrupt initialization as shown in
:numref:`hvstart-nativeflow`. Here is a short
description for the flow:
- **BSP Startup:** The starting point for the bootstrap processor.
- **Relocation**: Relocate the hypervisor image if the hypervisor image
is not placed at the assumed base address.
- **UART Init:** Initialize a pre-configured UART device used
as the base physical console for the HV and Service VM.
- **Memory Init:** Initialize memory type and cache policy, and create
MMU page table mapping for HV.
- **Scheduler Init:** Initialize the scheduler framework, which provides the
capability to switch different threads (such as vcpu vs. idle thread) on a
physical CPU, and to support CPU sharing.
- **Interrupt Init:** Initialize interrupts and exceptions for native HV
including IDT and ``do_IRQ`` infrastructure; a timer interrupt
framework is then built. The native/physical interrupts go
through this ``do_IRQ`` infrastructure and are then distributed to specific
targets (HV or VMs).
- **Start AP:** The BSP kicks the ``INIT-SIPI-SIPI`` IPI sequence to start
the other native APs (application processors). Each AP initializes its
own memory and interrupts, notifies the BSP on completion, and
enters the default idle loop.
- **Shell Init:** Start a command shell for HV accessible via the UART.
CPU
addresses without permission restrictions. The control registers and
some MSRs are set as follows:
- ``cr0``: The following features are enabled: paging, write protection,
protection mode, numeric error, and co-processor monitoring.
- ``cr3``: Refer to the initial state of memory.
- ``cr4``: The following features are enabled: physical address extension,
machine-check, FXSAVE/FXRSTOR, SMEP, VMX operation, and unmasked
SIMD FP exceptions. The other features are disabled.
- ``MSR_IA32_EFER``: Only IA32e mode is enabled.
- ``MSR_IA32_FS_BASE``: The address of the stack canary, used for detecting
stack smashing.
- ``MSR_IA32_TSC_AUX``: A unique logical ID is set for each physical
processor.
- ``stack``: Each physical processor has a separate stack.
Memory
All physical processors are in 64-bit IA32e mode after
Refer to :ref:`physical-interrupt-initialization` for a detailed description of interrupt-related
initial states, including IDT and physical PICs.
After the BSP detects that all APs are up, it continues to enter guest mode.
Likewise, after one AP completes its initialization, it starts entering guest
mode as well. When the BSP and APs enter guest mode, they try to launch
predefined VMs whose vBSP is associated with this physical core. These
predefined VMs are configured in ``vm config`` and may be a
pre-launched Safety VM or Service VM.

.. _vm-startup:
VM Startup
**********
The Service VM or a pre-launched VM is created and launched on the physical
CPU that is configured as its vBSP. Meanwhile, the physical CPUs that are
configured as vAPs for dedicated VMs enter the default idle loop
(refer to :ref:`VCPU_lifecycle` for details), waiting for any vCPU to be
scheduled to them.
:numref:`hvstart-vmflow` illustrates a high-level execution flow of creating and
launching a VM, applicable to pre-launched User VMs, Service VM, and
post-launched User VMs. One major difference in the creation of post-launched
User VMs vs. pre-launched User VMs or Service VM is that the pre-launched User
VMs and Service VM are created by the hypervisor, while post-launched User VMs
are created by the Device Model (DM) in the Service VM. The main steps include:
- **Create VM**: A VM structure is allocated and initialized. A unique
VM ID is picked, EPT is initialized, e820 table for this VM is prepared,
I/O bitmap is set up, virtual PIC/IOAPIC/PCI/UART is initialized, EPC for
virtual SGX is prepared, guest PM IO is set up, IOMMU for PT dev support
is enabled, virtual CPUID entries are filled, and vCPUs configured in this VM's
``vm config`` are prepared. For a post-launched User VM, the EPT page table
and e820 table are prepared by the DM instead of the hypervisor.
- **Prepare vCPUs:** Create the vCPUs, assign the physical processor the vCPU
is pinned to, a unique-per-VM vCPU ID, and a globally unique VPID; initialize
its virtual LAPIC and MTRR; and set up its vCPU thread object for vCPU
scheduling. The vCPU number and affinity are defined in the corresponding
``vm config`` for this VM.
- **Build vACPI:** For the Service VM, the hypervisor customizes a virtual ACPI
table based on the native ACPI table (this is in the TODO). For a
pre-launched User VM, the hypervisor builds a simple ACPI table with
necessary information such as MADT. For a post-launched User VM, the DM
builds its ACPI table dynamically.
- **SW Load:** Prepare each VM's SW configuration according to guest OS
requirements, which may include kernel entry address, ramdisk address,
bootargs, or zero page for launching bzImage. This is done by the hypervisor
for pre-launched User VMs or the Service VM, and the VM will start from the
standard real mode or protected mode, which is not related to the native
environment. For post-launched User VMs, the VM's SW configuration is done by
the DM.
- **Start VM:** The vBSP of this VM's vCPUs is kicked to be scheduled.
physical processors for execution.
- **Init VMCS:** Initialize vCPU's VMCS for its host state, guest
state, execution control, entry control, and exit control. It's
the last configuration before vCPU runs.
- **vCPU thread:** The vCPU is kicked to run. The vBSP of the vCPUs starts
running the kernel image that SW Load configured; any vAP of the vCPUs
waits for the ``INIT-SIPI-SIPI`` IPI sequence triggered by its vBSP.
.. figure:: images/hld-image104.png
SW configuration for Service VM (bzimage SW load as example):
- **ACPI**: HV passes the entire ACPI table from the bootloader to the Service
VM directly. Legacy mode is currently supported as the ACPI table
is loaded at F-Segment.
- **E820**: HV passes the e820 table from the bootloader through the zero page
after the HV reserved (32M, for example) and pre-launched User VM owned
memory is filtered out.
- **Zero Page**: HV prepares the zero page at the high end of Service
VM memory which is determined by the Service VM guest FIT binary build. The
zero page includes configuration for ramdisk, bootargs, and e820
entries. The zero page address will be set to the vBSP RSI register
before the vCPU runs.
- **Entry address**: HV copies the Service VM OS kernel image to
``kernel_load_addr``, which it can get from the ``pref_addr`` field in the
bzimage header. The entry address will be calculated based on
``kernel_load_addr``, and will be set to the vBSP RIP register before the
vCPU runs.
SW configuration for post-launched User VMs (OVMF SW load as example):
- **ACPI**: the DM builds the virtual ACPI table and puts it at the User VM's
F-Segment. Refer to :ref:`hld-io-emulation` for details.
- **E820**: the DM builds the virtual E820 table and passes it to
the virtual bootloader. Refer to :ref:`hld-io-emulation` for details.
- **Entry address**: the DM copies the User VM OS kernel (OVMF) image to
``OVMF_NVSTORAGE_OFFSET``, normally at (4G - 2M), and sets the entry
address to 0xFFFFFFF0. As the vBSP starts running the virtual bootloader
(OVMF) from real mode, its CS base will be set to 0xFFFF0000 and the
RIP register will be set to 0xFFF0.
SW configuration for pre-launched User VMs (raw SW load as example):
- **ACPI**: the hypervisor builds the virtual ACPI table and puts it at
this VM's F-Segment.
- **E820**: the hypervisor builds the virtual E820 table and passes it to
the VM according to different SW loaders. For a raw SW load, it's not
used.
- **Entry address**: the hypervisor copies the User VM OS kernel image to
``kernel_load_addr``, which is set by ``vm config``, and sets the entry
address to ``kernel_entry_addr``, which is also set by ``vm config``.
Here is the initial mode of vCPUs:

+----------------------------------+----------------------------------------------------------+
| VM and Processor Type            | Initial Mode                                             |
+=======================+==========+==========================================================+
| Service VM            | BSP      | Same as physical BSP, or Real Mode if                    |
|                       |          | Service VM boots with OVMF                               |
|                       +----------+----------------------------------------------------------+
|                       | AP       | Real Mode                                                |
+-----------------------+----------+----------------------------------------------------------+
| Post-launched User VM | BSP      | Real Mode                                                |
|                       +----------+----------------------------------------------------------+
|                       | AP       | Real Mode                                                |
+-----------------------+----------+----------------------------------------------------------+
| Pre-launched User VM  | BSP      | Real Mode or Protected Mode                              |
|                       +----------+----------------------------------------------------------+
|                       | AP       | Real Mode                                                |
+-----------------------+----------+----------------------------------------------------------+