doc: spelling and grammar fixes

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder 2020-08-19 17:42:04 -07:00 committed by David Kinder
parent 7602304692
commit e74cf71eb7
26 changed files with 113 additions and 86 deletions

@@ -108,7 +108,7 @@ PVINFO, GTT, DISPLAY, and Execlists, and calls back to the AcrnGT
 module through the :ref:`MPT_interface` ``attach_vgpu``. Then, the
 AcrnGT module sets up an I/O request server and asks to trap the PCI
 configure space of the vGPU (virtual device 0:2:0) via VHM's APIs.
-Finally, the AcrnGT module launches a AcrnGT emulation thread to
+Finally, the AcrnGT module launches an AcrnGT emulation thread to
 listen to I/O trap notifications from HVM and ACRN hypervisor.
 vGPU destroy scenario

@@ -129,7 +129,7 @@ Here's an example showing how to run a VM with:
 DM Initialization
 *****************
-:numref:`dm-boot-flow` shows the overall flow for the DM boot up:
+:numref:`dm-boot-flow` shows the overall flow for the DM boot:
 .. figure:: images/dm-image80.png
    :align: center

@@ -13,7 +13,7 @@ documented in this section.
 usb-virt-hld
 UART virtualization <uart-virt-hld>
-Watchdoc virtualization <watchdog-hld>
+Watchdog virtualization <watchdog-hld>
 AHCI virtualization <ahci-hld>
 GVT-g GPU Virtualization <hld-APL_GVT-g>
 System timer virtualization <system-timer-hld>

@@ -186,7 +186,7 @@ a vCPU with VCPU_PAUSED or VCPU_ZOMBIE state runs in default_idle
 loop. The detail behaviors in vcpu_thread and default_idle threads
 are illustrated in :numref:`hv-vcpu-schedule`:
-- The **vcpu_thread** loop will do the loop of handling vm exits,
+- The **vcpu_thread** loop will do the loop of handling VM exits,
   and pending requests around the VM entry/exit.
   It will also check the reschedule request then schedule out to
   default_idle if necessary. See `vCPU Thread`_ for more details

@@ -251,10 +251,10 @@ The vCPU thread flow is a loop as shown and described below:
 3. VM Enter by calling *start/run_vcpu*, then enter non-root mode to do
    guest execution.
-4. VM Exit from *start/run_vcpu* when guest trigger vm exit reason in
+4. VM Exit from *start/run_vcpu* when guest trigger VM exit reason in
    non-root mode.
-5. Handle vm exit based on specific reason.
+5. Handle VM exit based on specific reason.
 6. Loop back to step 1.

@@ -270,16 +270,16 @@ the vCPU is saved and restored using this structure:
 The vCPU handles runtime context saving by three different
 categories:
-- Always save/restore during vm exit/entry:
+- Always save/restore during VM exit/entry:
-  - These registers must be saved every time vm exit, and restored
-    every time vm entry
+  - These registers must be saved every time VM exit, and restored
+    every time VM entry
   - Registers include: general purpose registers, CR2, and
     IA32_SPEC_CTRL
   - Definition in *vcpu->run_context*
   - Get/Set them through *vcpu_get/set_xxx*
-- On-demand cache/update during vm exit/entry:
+- On-demand cache/update during VM exit/entry:
   - These registers are used frequently. They should be cached from
     VMCS on first time access after a VM exit, and updated to VMCS on

@@ -432,7 +432,7 @@ that will trigger an error message and return without handling:
 - APIC write for APICv
-Details of each vm exit reason handler are described in other sections.
+Details of each VM exit reason handler are described in other sections.
 .. _pending-request-handlers:

@@ -849,7 +849,7 @@ ACRN always enables MSR bitmap in *VMX_PROC_VM_EXEC_CONTROLS* VMX
 execution control field. This bitmap marks the MSRs to cause a VM
 exit upon guest access for both read and write. The VM
 exit reason for reading or writing these MSRs is respectively
-*VMX_EXIT_REASON_RDMSR* or *VMX_EXIT_REASON_WRMSR* and the vm exit
+*VMX_EXIT_REASON_RDMSR* or *VMX_EXIT_REASON_WRMSR* and the VM exit
 handler is *rdmsr_vmexit_handler* or *wrmsr_vmexit_handler*.
 This table shows the predefined MSRs ACRN will trap for all the guests. For

@@ -1002,7 +1002,7 @@ hypervisor on CR writes.
 For ``mov to cr0`` and ``mov to cr4``, ACRN sets
 *cr0_host_mask/cr4_host_mask* into *VMX_CR0_MASK/VMX_CR4_MASK*
-for the bitmask causing vm exit.
+for the bitmask causing VM exit.
 As ACRN always enables ``unrestricted guest`` in
 *VMX_PROC_VM_EXEC_CONTROLS2*, *CR0.PE* and *CR0.PG* can be

@@ -602,7 +602,7 @@ IOC mediator and UART DM.
 The "lpc_port" is "com1" or "com2", IOC mediator needs one unassigned
 lpc port for data transfer between User VM and Service VM.
-The "wakeup_reason" is IOC mediator boot up reason, each bit represents
+The "wakeup_reason" is IOC mediator boot reason, each bit represents
 one wakeup reason.
 For example, the following commands are used to enable IOC feature, the
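
The commands referred to in that last context line fall outside this hunk.
As a hedged illustration only, the ``lpc_port`` and ``wakeup_reason``
parameters typically show up on the ``acrn-dm`` command line roughly as
below; the option spelling, the channel path, and the ``0x20`` reason value
are assumptions modeled on ACRN launch-script examples and may differ
between releases:

.. code-block:: console

   # illustrative only -- enable the IOC mediator for a User VM
   acrn-dm ... \
      -i /run/acrn/ioc_vm1,0x20 \
      -l com2,/run/acrn/ioc_vm1 \
      ...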

@@ -54,10 +54,9 @@ management. Please refer to ACRN power management design for more details.
 Post-launched User VMs
 ======================
-DM is taking control of post-launched User VMs' state transition after Service VM
-boot up, and it calls VM APIs through hypercalls.
+DM takes control of post-launched User VMs' state transition after the Service VM
+boots, by calling VM APIs through hypercalls.
-Service VM user level service like Life-Cycle-Service and tool like Acrnd may work
-together with DM to launch or stop a User VM. Please refer to ACRN tool
-introduction for more details.
+Service VM user level service such as Life-Cycle-Service and tools such
+as Acrnd may work together with DM to launch or stop a User VM. Please
+refer to ACRN tool introduction for more details.

@@ -69,7 +69,7 @@ As shown in the above figure, here are some details about the Trusty boot flow p
 #. Resume to Secure World
 #. Trusty
-   a. Booting up
+   a. Booting
 #. Call ``hcall_world_switch`` to switch back to Normal World if boot completed
 #. ACRN (``hcall_world_switch``)

@@ -20,7 +20,7 @@ supported, primarily because dynamic configuration parsing is restricted in
 the ACRN hypervisor for the following reasons:
 - **Maintain functional safety requirements.** Implementing dynamic parsing
-  introduces dynamic objects, which violates functional safety requirements.
+  introduces dynamic objects, which violate functional safety requirements.
 - **Reduce complexity.** ACRN is a lightweight reference hypervisor, built for
   embedded IoT. As new platforms for embedded systems are rapidly introduced,

@@ -32,9 +32,9 @@ the ACRN hypervisor for the following reasons:
   helps keep the hypervisor's Lines of Code (LOC) in a desirable range (less
   than 40K).
-- **Improve boot up time.** Dynamic parsing at runtime increases the boot
-  up time. Using a build-time configuration and not dynamic parsing
-  helps improve the boot up time of the hypervisor.
+- **Improve boot time.** Dynamic parsing at runtime increases the boot
+  time. Using a build-time configuration and not dynamic parsing
+  helps improve the boot time of the hypervisor.
 Build the ACRN hypervisor, device model, and tools from source by following

@@ -5,9 +5,8 @@ Getting Started
 After reading the :ref:`introduction`, use these guides to get started
 using ACRN in a reference setup. We'll show how to set up your
-development and target hardware, and then how to boot up the ACRN
-hypervisor and the `Clear Linux`_ Service VM and User VM on the Intel
-(EFI) platform.
+development and target hardware, and then how to boot the ACRN
+hypervisor, the Service VM, and a User VM on the Intel platform.
 .. _Clear Linux: https://clearlinux.org

@@ -161,7 +161,7 @@ Service OS
 #. Reboot the system, choose **ACRN Hypervisor**, and launch the Clear Linux OS
    Service VM. If the EFI boot order is not right, use :kbd:`F10`
-   on boot up to enter the EFI menu and choose **ACRN Hypervisor**.
+   on boot to enter the EFI menu and choose **ACRN Hypervisor**.
 #. Install the graphics UI if necessary. Use only one of the two

@@ -155,4 +155,4 @@ Start the User VM
 $ sudo /usr/share/acrn/samples/nuc/launch_uos.sh
-You are now watching the User OS booting up!
+You are now watching the User OS booting!

@@ -52,7 +52,7 @@ ACRN Log
 ACRN log provides console log and mem log for a user to analyze.
 We can use console log to debug directly, while mem log is a userland tool
-used to capture a ACRN hypervisor log.
+used to capture an ACRN hypervisor log.
 Turn on the logging info
 ========================
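
As a hedged aside (not part of this hunk), the console and memory log
levels discussed in this section are typically adjusted from the ACRN
hypervisor shell with the ``loglevel`` command described near the end of
this commit; the prompt string and the values ``3 5`` below are purely
illustrative:

.. code-block:: console

   ACRN:\>loglevel 3 5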

@@ -384,7 +384,7 @@ partition. Follow these steps:
 Clear Linux OS (Clear-linux-native.5.4.11-890)
 Reboot Into Firmware Interface
-#. After booting up the ACRN hypervisor, the Service OS launches
+#. After booting the ACRN hypervisor, the Service OS launches
    automatically by default, and the Clear Linux OS desktop show with the **clear** user (or you can login remotely with an "ssh" client).
    If there is any issue which makes the GNOME desktop not successfully display,, then the system will go to the shell console.

@@ -62,7 +62,7 @@ supports the User VM network.
 How to use OVS bridge
 *********************
-#. Disable acrn network configuration::
+#. Disable the ACRN network configuration::
    # cd /usr/lib/systemd/network/
    # mv 50-acrn.network 50-acrn.network_bak

@@ -106,12 +106,13 @@ How to use OVS bridge
 Example for VLAN network based on OVS in ACRN
 *********************************************
 We will use the OVS bridge VLAN feature to support network isolation
-between VMs. :numref:`ovs-example1` shows an example with four VMs in two hosts,
-with the hosts directly connected by a network cable. The VMs are interconnected
-through statically configured IP addresses, and use VLAN id to put VM1 of
-HOST1 and VM1 of HOST2 into a VLAN. Similarly, VM2 of HOST1 and VM2 of
-HOST2 are put into a VLAN. In this configuration, the VM1s can communicate with each other,
-and VM2s can directly communicate with each other, but VM1s and VM2s cannot connect.
+between VMs. :numref:`ovs-example1` shows an example with four VMs in
+two hosts, with the hosts directly connected by a network cable. The VMs
+are interconnected through statically configured IP addresses, and use
+VLAN id to put VM1 of HOST1 and VM1 of HOST2 into a VLAN. Similarly, VM2
+of HOST1 and VM2 of HOST2 are put into a VLAN. In this configuration,
+the VM1s can communicate with each other, and VM2s can directly
+communicate with each other, but VM1s and VM2s cannot connect.
 .. figure:: images/example-of-OVS-usage.png
    :align: center

@@ -135,7 +136,7 @@ Follow these steps to set up OVS networks on both HOSTs:
    # sed -i "s/virtio-net,tap0/virtio-net,tap2/" <2nd launch_uos script>
    # reboot
-#. Configure the static IP address on both HOSTs and it's VMs::
+#. Configure the static IP address on both HOSTs and its VMs::
    # <HOST_1 Service VM>:
    # ifconfig ovs-br0 192.168.1.100

@@ -151,5 +152,5 @@ Follow these steps to set up OVS networks on both HOSTs:
    # <HOST_2 User VM2>:
    # ifconfig enp0s4 192.168.1.202
-#. After that, it will succeed to ``ping`` from VM1 of HOST1 to VM1 of HOST2,
-   but fail to ``ping`` from VM1 of HOST1 to VM2 of HOST2.
+#. After that, a ``ping`` from VM1 of HOST1 to **VM1** of HOST2 will succeed,
+   but a ``ping`` from VM1 of HOST1 to **VM2** of HOST2 will fail.
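
For illustration, that last check can be run with ``ping`` from inside VM1
of HOST1. The ``192.168.1.201`` address for VM1 of HOST2 is a hypothetical
value following the addressing pattern above (it is not shown in this
hunk); ``192.168.1.202`` is the VM2 address that is shown:

.. code-block:: console

   # ping -c 3 192.168.1.201    # VM1 of HOST2 -- same VLAN, expected to succeed
   # ping -c 3 192.168.1.202    # VM2 of HOST2 -- different VLAN, expected to fail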

@@ -3,17 +3,29 @@
 Run Debian as the Service VM
 ############################
-The `Debian Project <https://www.debian.org/>`_ is an association of individuals who have made common cause to create a `free <https://www.debian.org/intro/free>`_ operating system. The `latest stable Debian release <https://www.debian.org/releases/stable/>`_ is 10.0.
+The `Debian Project <https://www.debian.org/>`_ is an association of
+individuals who have made common cause to create a `free
+<https://www.debian.org/intro/free>`_ operating system. The `latest
+stable Debian release <https://www.debian.org/releases/stable/>`_ is
+10.0.
-This tutorial describes how to use Debian 10.0 instead of `Clear Linux OS <https://clearlinux.org>`_ as the Service VM with the ACRN hypervisor.
+This tutorial describes how to use Debian 10.0 instead of `Clear Linux
+OS <https://clearlinux.org>`_ as the Service VM with the ACRN
+hypervisor.
 Prerequisites
 *************
 Use the following instructions to install Debian.
-- Navigate to `Debian 10 iso <https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/>`_. Select and download **debian-10.1.0-amd64-netinst.iso** (scroll down to the bottom of the page).
+- Navigate to `Debian 10 iso
+  <https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/>`_.
+  Select and download **debian-10.1.0-amd64-netinst.iso** (scroll down to
+  the bottom of the page).
-- Follow the `Debian installation guide <https://www.debian.org/releases/stable/amd64/index.en.html>`_ to install it on your NUC; we are using an Intel Kaby Lake NUC (NUC7i7DNHE) in this tutorial.
+- Follow the `Debian installation guide
+  <https://www.debian.org/releases/stable/amd64/index.en.html>`_ to
+  install it on your NUC; we are using an Intel Kaby Lake NUC (NUC7i7DNHE)
+  in this tutorial.
 - :ref:`install-build-tools-dependencies` for ACRN.
 - Update to the latest iASL (required by the ACRN Device Model):
@@ -55,7 +67,12 @@ Install ACRN on the Debian VM
 $ sudo make install
 #. Install the hypervisor.
-   The ACRN Device Model and tools were installed as part of a previous step. However, make install does not install the hypervisor (acrn.efi) on your EFI System Partition (ESP), nor does it configure your EFI firmware to boot it automatically. Follow the steps below to perform these operations and complete the ACRN installation. Note that we are using a SATA disk in this section.
+   The ACRN Device Model and tools were installed as part of a previous
+   step. However, make install does not install the hypervisor (acrn.efi)
+   on your EFI System Partition (ESP), nor does it configure your EFI
+   firmware to boot it automatically. Follow the steps below to perform
+   these operations and complete the ACRN installation. Note that we are
+   using a SATA disk in this section.
 a. Add the ACRN hypervisor (as the root user):
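
The commands behind that step sit outside this hunk. As a hedged sketch
only, registering the hypervisor with the EFI firmware is typically done
with ``efibootmgr``; the install source path, the disk ``/dev/sda``, the
partition number ``1``, and the loader path below are assumptions that
must match where ``acrn.efi`` actually lives on your system:

.. code-block:: console

   # cp /usr/lib/acrn/acrn.efi /boot/efi/EFI/acrn/        # assumed install and ESP paths
   # efibootmgr -c -L "ACRN Hypervisor" -d /dev/sda -p 1 -l "\EFI\acrn\acrn.efi"
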
@@ -106,7 +123,9 @@ Install ACRN on the Debian VM
 $ sudo update-grub
 $ sudo reboot
-You should see the Grub menu with the new "ACRN Debian Service VM" entry. Select it and proceed to booting the platform. The system will start the Debian Desktop and you can now log in (as before).
+You should see the Grub menu with the new "ACRN Debian Service VM"
+entry. Select it and proceed to booting the platform. The system will
+start the Debian Desktop and you can now log in (as before).
 #. Log in to the Debian Service VM and check the ACRN status:

@@ -2,16 +2,21 @@
 Using GRUB to boot ACRN
 #######################
 `GRUB <http://www.gnu.org/software/grub/>`_ is a multiboot boot loader
 used by many popular Linux distributions. It also supports booting the
-ACRN hypervisor.
-See `<http://www.gnu.org/software/grub/grub-download.html>`_
-to get the latest GRUB source code and `<https://www.gnu.org/software/grub/grub-documentation.html>`_
-for detailed documentation.
+ACRN hypervisor. See
+`<http://www.gnu.org/software/grub/grub-download.html>`_ to get the
+latest GRUB source code and
+`<https://www.gnu.org/software/grub/grub-documentation.html>`_ for
+detailed documentation.
-The ACRN hypervisor can boot from `multiboot protocol <http://www.gnu.org/software/grub/manual/multiboot/multiboot.html>`_
-or `multiboot2 protocol <http://www.gnu.org/software/grub/manual/multiboot2/multiboot.html>`_.
-Comparing with multiboot protocol, the multiboot2 protocol adds UEFI support.
+The ACRN hypervisor can boot from `multiboot protocol
+<http://www.gnu.org/software/grub/manual/multiboot/multiboot.html>`_ or
+`multiboot2 protocol
+<http://www.gnu.org/software/grub/manual/multiboot2/multiboot.html>`_.
+Comparing with multiboot protocol, the multiboot2 protocol adds UEFI
+support.
 The multiboot protocol is supported by the ACRN hypervisor natively.
 The multiboot2 protocol is supported when ``CONFIG_MULTIBOOT2`` is

@@ -49,9 +54,11 @@ higher.
 Here's an example using Ubuntu to load ACRN on a scenario with two
 pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):
-#. Copy ACRN hypervisor binary ``acrn.32.out`` (or ``acrn.bin``) and the pre-launched VM kernel images to ``/boot/``;
+#. Copy ACRN hypervisor binary ``acrn.32.out`` (or ``acrn.bin``) and the
+   pre-launched VM kernel images to ``/boot/``;
-#. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu visible when booting:
+#. Modify the ``/etc/default/grub`` file as follows to make the GRUB
+   menu visible when booting:
 .. code-block:: none

@@ -158,7 +165,8 @@ Here we provide another simple method to build GRUB in efi application format:
    gfxterm_background gfxterm_menu legacycfg video_bochs video_cirrus \
    video_colors video_fb videoinfo video net tftp
-   This will build a ``grub_x86_64.efi`` binary in the current directory, copy it to ``/EFI/boot/`` folder
+   This will build a ``grub_x86_64.efi`` binary in the current
+   directory, copy it to ``/EFI/boot/`` folder
    on the EFI partition (it is typically mounted under ``/boot/efi/`` folder on rootfs).
 #. Create ``/EFI/boot/grub.cfg`` file containing the following:

@@ -188,6 +196,7 @@ Here we provide another simple method to build GRUB in efi application format:
    module2 /boot/kernel4vm1 yyyyyy $(VM1 bootargs)
    }
-#. Copy ACRN binary and guest kernel images to the GRUB-configured folder, e.g. ``/boot/`` folder on ``/dev/sda3/``;
+#. Copy the ACRN binary and guest kernel images to the GRUB-configured
+   folder, e.g. ``/boot/`` folder on ``/dev/sda3/``;
 #. Run ``/EFI/boot/grub_x86_64.efi`` in the EFI shell.
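
As a hedged illustration of that final step: from the EFI shell, switch to
the file-system mapping that holds the EFI partition and run the binary.
``FS1:`` is an assumption; on your board the mapping may be ``FS0:`` or
another name:

.. code-block:: console

   Shell> fs1:
   FS1:\> cd EFI\boot
   FS1:\EFI\boot\> grub_x86_64.efi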

@@ -86,10 +86,10 @@ Hybrid Scenario Startup Checking
 a. Use the ``vm_console 0`` to switch to VM0 (Zephyr) console. It will display **Hello world! acrn**.
 #. Enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
 #. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console.
-#. Verify that the VM1's Service VM can boot up and you can log in.
+#. Verify that the VM1's Service VM can boot and you can log in.
 #. ssh to VM1 and launch the post-launched VM2 using the ACRN device model launch script.
 #. Go to the Service VM console, and enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
 #. Use the ``vm_console 2`` command to switch to the VM2 (User VM) console.
-#. Verify that VM2 can boot up and you can log in.
+#. Verify that VM2 can boot and you can log in.
 Refer to the :ref:`acrnshell` for more information about available commands.

@@ -241,10 +241,10 @@ Logical partition scenario startup checking
 properly:
 #. Use the ``vm_console 0`` to switch to VM0's console.
-#. The VM0's Clear Linux OS should boot up and log in.
+#. The VM0's Clear Linux OS should boot and log in.
 #. Use a :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
 #. Use the ``vm_console 1`` to switch to VM1's console.
-#. The VM1's Clear Linux OS should boot up and log in.
+#. The VM1's Clear Linux OS should boot and log in.
 Refer to the :ref:`ACRN hypervisor shell user guide <acrnshell>`
 for more information about available commands.

@@ -299,4 +299,4 @@ Run the ``launch_uos.sh`` script to launch the User VM:
 $ wget https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/tutorials/launch_uos.sh
 $ sudo ./launch_uos.sh -V 1
-**Congratulations**, you are now watching the User VM booting up!
+**Congratulations**, you are now watching the User VM booting!

@@ -316,14 +316,14 @@ You are now all set to start the User VM:
 sudo /usr/share/acrn/samples/nuc/launch_uos.sh
-**Congratulations**, you are now watching the User VM booting up!
+**Congratulations**, you are now watching the User VM booting!
 .. _enable-network-sharing-user-vm:
 Enable network sharing
 **********************
-After booting up the Service VM and User VM, network sharing must be enabled
+After booting the Service VM and User VM, network sharing must be enabled
 to give network access to the Service VM by enabling the TAP and networking
 bridge in the Service VM. The following script example shows how to set
 this up (verified in Ubuntu 16.04 and 18.04 as the Service VM).
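
The script itself is not part of this hunk. As a hedged sketch only, the
TAP device and bridge it sets up can be created with standard ``iproute2``
commands like the following; the ``acrn-br0``, ``tap0``, and ``eth0``
names are assumptions, not necessarily the ones used by the ACRN sample
script:

.. code-block:: console

   # run in the Service VM as root
   ip link add name acrn-br0 type bridge
   ip tuntap add dev tap0 mode tap
   ip link set tap0 master acrn-br0
   ip link set dev tap0 up
   ip link set dev acrn-br0 up
   ip link set eth0 master acrn-br0    # attach the physical uplink to the bridge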

@@ -112,7 +112,7 @@ Steps for Using VxWorks as User VM
 $ sudo ./launch_vxworks.sh
-Then VxWorks will boot up automatically. You will see the prompt.
+Then VxWorks will boot automatically. You will see the prompt.
 .. code-block:: console

@@ -110,7 +110,7 @@ Install Windows 10 by GVT-g
 #. Run ``install_win.sh``. When you see the UEFI shell, input **exit**.
-#. Select **Boot Manager** and boot up from Win10 ISO.
+#. Select **Boot Manager** and boot from Win10 ISO.
 #. When the display reads **Press any key to boot from CD or DVD** on the
    monitor, press any key in the terminal on the **Host** side.

@@ -111,7 +111,7 @@ Steps for Using Zephyr as User VM
 $ sudo ./launch_zephyr.sh
-Then Zephyr will boot up automatically. You will see a console message from the hello_world sample application:
+Then Zephyr will boot automatically. You will see a console message from the hello_world sample application:
 .. code-block:: console

@@ -520,7 +520,7 @@ to import PK, KEK, and DB into OVMF, Ubuntu 16.04 used.
 -hda fat:hda-contents \
 -net none
-After boot up, you can see the UEFI shell.
+After booting, you can see the UEFI shell.
 .. image:: images/waag_secure_boot_image5.png
    :align: center

@@ -93,7 +93,7 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
 mediator and UART DM.
 - ``lpc_port`` is com1 or com2. IOC mediator needs one unassigned lpc
   port for data transfer between User OS and Service OS.
-- ``wakeup_reason`` is IOC mediator boot up reason, where each bit represents
+- ``wakeup_reason`` is IOC mediator boot reason, where each bit represents
   one wakeup reason.
 Currently the wakeup reason bits supported by IOC firmware are:

@@ -43,7 +43,7 @@ The ACRN hypervisor shell supports the following commands:
 logging for the console, memory and npk
 * Give (up to) three parameters between ``0`` (none) and ``6`` (verbose)
   to set the loglevel for the console, memory, and npk (in
-  that order). If less than three parameters are given, the
+  that order). If fewer than three parameters are given, the
   loglevels for the remaining areas will not be changed
 * - cpuid <leaf> [subleaf]
   - Display the CPUID leaf [subleaf], in hexadecimal