commit dd0fe54141 (parent d54347a054)

doc: Spelling and grammar tweaks

Did a partial run of ACRN documents through Acrolinx to catch additional
spelling and grammar fixes missed during regular reviews.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
@@ -22,4 +22,4 @@ documented in this section.
 Hostbridge emulation <hostbridge-virt-hld>
 AT keyboard controller emulation <atkbdc-virt-hld>
 Split Device Model <split-dm>
-Shared memory based inter-vm communication <ivshmem-hld>
+Shared memory based inter-VM communication <ivshmem-hld>

@@ -5,7 +5,7 @@ ACRN high-level design overview

 ACRN is an open source reference hypervisor (HV) that runs on top of Intel
 platforms (APL, KBL, etc) for heterogeneous scenarios such as the Software Defined
-Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & Real-Time OS for industry. ACRN provides embedded hypervisor vendors with a reference
+Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & real-time OS for industry. ACRN provides embedded hypervisor vendors with a reference
 I/O mediation solution with a permissive license and provides auto makers and
 industry users a reference software stack for corresponding use.

@@ -124,7 +124,7 @@ ACRN 2.0
 ========

 ACRN 2.0 is extending ACRN to support pre-launched VM (mainly for safety VM)
-and Real-Time (RT) VM.
+and real-time (RT) VM.

 :numref:`overview-arch2.0` shows the architecture of ACRN 2.0; the main difference
 compared to ACRN 1.0 is that:

@@ -1016,7 +1016,7 @@ access is like this:
 #. If the verification is successful in eMMC RPMB controller, then the
 data will be written into storage device.

-This work flow of authenticated data read is very similar to this flow
+This workflow of authenticated data read is very similar to this flow
 above, but in reverse order.

 Note that there are some security considerations in this design:

@@ -358,7 +358,7 @@ general workflow of ioeventfd.
 :align: center
 :name: ioeventfd-workflow

-ioeventfd general work flow
+ioeventfd general workflow

 The workflow can be summarized as:

@@ -13,7 +13,7 @@ SoC and back, as well as signals the SoC uses to control onboard
 peripherals.

 .. note::
-NUC and UP2 platforms do not support IOC hardware, and as such, IOC
+Intel NUC and UP2 platforms do not support IOC hardware, and as such, IOC
 virtualization is not supported on these platforms.

 The main purpose of IOC virtualization is to transfer data between

@@ -131,7 +131,7 @@ There are five parts in this high-level design:
 * State transfer introduces IOC mediator work states
 * CBC protocol illustrates the CBC data packing/unpacking
 * Power management involves boot/resume/suspend/shutdown flows
-* Emulated CBC commands introduces some commands work flow
+* Emulated CBC commands introduces some commands workflow

 IOC mediator has three threads to transfer data between User VM and Service VM. The
 core thread is responsible for data reception, and Tx and Rx threads are

@@ -57,8 +57,8 @@ configuration and copies them to the corresponding guest memory.
 .. figure:: images/partition-image18.png
 :align: center

-ACRN set-up for guests
-**********************
+ACRN setup for guests
+*********************

 Cores
 =====

@@ -39,7 +39,7 @@ resource allocator.) The user can check the cache capabilities such as cache
 mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
 and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
 CLOS ID, to select a cache mask to take effect. These configurations can be
-done in scenario xml file under ``FEATURES`` section as shown in the below example.
+done in scenario XML file under ``FEATURES`` section as shown in the below example.
 ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes
 to enforce the settings.

@@ -52,7 +52,7 @@ to enforce the settings.
 <CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>

 Once the cache mask is set of each individual CPU, the respective CLOS ID
-needs to be set in the scenario xml file under ``VM`` section. If user desires
+needs to be set in the scenario XML file under ``VM`` section. If user desires
 to use CDP feature, CDP_ENABLED should be set to ``y``.

 .. code-block:: none

@@ -106,7 +106,7 @@ that corresponds to each CLOS and then setting IA32_PQR_ASSOC MSR with CLOS
 users can check the MBA capabilities such as mba delay values and
 max supported CLOS as described in :ref:`rdt_detection_capabilities` and
 then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
-These configurations can be done in scenario xml file under ``FEATURES`` section
+These configurations can be done in scenario XML file under ``FEATURES`` section
 as shown in the below example. ACRN uses VMCS MSR loads on every VM Entry/VM Exit
 for non-root and root modes to enforce the settings.

@@ -120,7 +120,7 @@ for non-root and root modes to enforce the settings.
 <MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>

 Once the cache mask is set of each individual CPU, the respective CLOS ID
-needs to be set in the scenario xml file under ``VM`` section.
+needs to be set in the scenario XML file under ``VM`` section.

 .. code-block:: none
 :emphasize-lines: 2

@@ -15,7 +15,7 @@ Inter-VM Communication Overview
 :align: center
 :name: ivshmem-architecture-overview

-ACRN shared memory based inter-vm communication architecture
+ACRN shared memory based inter-VM communication architecture

 There are two ways ACRN can emulate the ``ivshmem`` device:

@@ -86,7 +86,7 @@ I/O ports definition::
 RTC emulation
 =============

-ACRN supports RTC (Real-Time Clock) that can only be accessed through
+ACRN supports RTC (real-time clock) that can only be accessed through
 I/O ports (0x70 and 0x71).

 0x70 is used to access CMOS address register and 0x71 is used to access

@@ -61,7 +61,7 @@ Add the following parameters into the command line::
 controller_name, you can use it as controller_name directly. You can
 also input ``cat /sys/bus/gpio/device/XXX/dev`` to get device id that can
 be used to match /dev/XXX, then use XXX as the controller_name. On MRB
-and NUC platforms, the controller_name are gpiochip0, gpiochip1,
+and Intel NUC platforms, the controller_name are gpiochip0, gpiochip1,
 gpiochip2.gpiochip3.

 - **offset|name**: you can use gpio offset or its name to locate one
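For readers of the hunk above: the gpiochip controller names this passage refers to can be enumerated on the target system. A minimal sketch, assuming a Linux kernel with the GPIO character-device interface; the chip name ``gpiochip0`` and the sysfs path are illustrative, and the exact layout may vary by kernel version:

.. code-block:: bash

   # List the GPIO controller character devices the kernel has registered.
   ls /dev/gpiochip*

   # Read a controller's major:minor device number to match it to /dev/XXX.
   cat /sys/bus/gpio/devices/gpiochip0/dev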
@@ -34,8 +34,8 @@ It receives read/write commands from the watchdog driver, does the
 actions, and returns. In ACRN, the commands are from User VM
 watchdog driver.

-User VM watchdog work flow
-**************************
+User VM watchdog workflow
+*************************

 When the User VM does a read or write operation on the watchdog device's
 registers or memory space (Port IO or Memory map I/O), it will trap into

@@ -77,7 +77,7 @@ PTEs (with present bit cleared, or reserved bit set) pointing to valid
 host PFNs, a malicious guest may use those EPT PTEs to construct an attack.

 A special aspect of L1TF in the context of virtualization is symmetric
-multi threading (SMT), e.g. Intel |reg| Hyper-Threading Technology.
+multi threading (SMT), e.g. Intel |reg| Hyper-threading Technology.
 Logical processors on the affected physical cores share the L1 Data Cache
 (L1D). This fact could make more variants of L1TF-based attack, e.g.
 a malicious guest running on one logical processor can attack the data which

@@ -88,11 +88,11 @@ Guest -> guest Attack
 =====================

 The possibility of guest -> guest attack varies on specific configuration,
-e.g. whether CPU partitioning is used, whether Hyper-Threading is on, etc.
+e.g. whether CPU partitioning is used, whether Hyper-threading is on, etc.

 If CPU partitioning is enabled (default policy in ACRN), there is
 1:1 mapping between vCPUs and pCPUs i.e. no sharing of pCPU. There
-may be an attack possibility when Hyper-Threading is on, where
+may be an attack possibility when Hyper-threading is on, where
 logical processors of same physical core may be allocated to two
 different guests. Then one guest may be able to attack the other guest
 on sibling thread due to shared L1D.

@@ -221,7 +221,7 @@ This mitigation is always enabled.
 Core-based scheduling
 =====================

-If Hyper-Threading is enabled, it's important to avoid running
+If Hyper-threading is enabled, it's important to avoid running
 sensitive context (if containing security data which a given VM
 has no permission to access) on the same physical core that runs
 said VM. It requires scheduler enhancement to enable core-based

@@ -265,9 +265,9 @@ requirements:
 - Doing 5) is not feasible, or
 - CPU sharing is enabled (in the future)

-If Hyper-Threading is enabled, there is no available mitigation
+If Hyper-threading is enabled, there is no available mitigation
 option before core scheduling is planned. User should understand
-the security implication and only turn on Hyper-Threading
+the security implication and only turn on Hyper-threading
 when the potential risk is acceptable to their usage.

 Mitigation Status

@@ -566,7 +566,7 @@ The following table shows some use cases of module level configuration design:
 - This module is used to virtualize part of LAPIC functionalities.
 It can be done via APICv or software emulation depending on CPU
 capabilities.
-For example, KBL NUC doesn't support virtual-interrupt delivery, while
+For example, KBL Intel NUC doesn't support virtual-interrupt delivery, while
 other platforms support it.
 - If a function pointer is used, the prerequisite is
 "hv_operation_mode == OPERATIONAL".

@@ -31,8 +31,8 @@ details:
 * :option:`CONFIG_UOS_RAM_SIZE`
 * :option:`CONFIG_HV_RAM_SIZE`

-For example, if the NUC's physical memory size is 32G, you may follow these steps
-to make the new uefi ACRN hypervisor, and then deploy it onto the NUC board to boot
+For example, if the Intel NUC's physical memory size is 32G, you may follow these steps
+to make the new UEFI ACRN hypervisor, and then deploy it onto the Intel NUC to boot
 the ACRN Service VM with the 32G memory size.

 #. Use ``make menuconfig`` to change the ``RAM_SIZE``::

@@ -54,7 +54,7 @@ distribution.

 .. note::
 ACRN uses ``menuconfig``, a python3 text-based user interface (TUI)
-for configuring hypervisor options and using python's ``kconfiglib``
+for configuring hypervisor options and using Python's ``kconfiglib``
 library.

 Install the necessary tools for the following systems:

@@ -34,7 +34,7 @@ Hardware Connection
 Connect the WHL Maxtang with the appropriate external devices.

 #. Connect the WHL Maxtang board to a monitor via an HDMI cable.
-#. Connect the mouse, keyboard, ethernet cable, and power supply cable to
+#. Connect the mouse, keyboard, Ethernet cable, and power supply cable to
 the WHL Maxtang board.
 #. Insert the Ubuntu 18.04 USB boot disk into the USB port.

@@ -55,7 +55,7 @@ Install Ubuntu on the SATA disk
 #. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.
 #. Power on the machine, then press F11 to select the USB disk as the boot
 device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
-label depends on the brand/make of the USB stick.
+label depends on the brand/make of the USB drive.
 #. Install the Ubuntu OS.
 #. Select **Something else** to create the partition.

@@ -72,7 +72,7 @@ Install Ubuntu on the SATA disk
 #. Complete the Ubuntu installation on ``/dev/sda``.

 This Ubuntu installation will be modified later (see `Build and Install the RT kernel for the Ubuntu User VM`_)
-to turn it into a Real-Time User VM (RTVM).
+to turn it into a real-time User VM (RTVM).

 Install the Ubuntu Service VM on the NVMe disk
 ==============================================

@@ -87,7 +87,7 @@ Install Ubuntu on the NVMe disk
 #. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.
 #. Power on the machine, then press F11 to select the USB disk as the boot
 device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
-label depends on the brand/make of the USB stick.
+label depends on the brand/make of the USB drive.
 #. Install the Ubuntu OS.
 #. Select **Something else** to create the partition.

@@ -103,7 +103,7 @@ Install Ubuntu on the NVMe disk

 #. Complete the Ubuntu installation and reboot the system.

-.. note:: Set **acrn** as the username for the Ubuntu Service VM.
+.. note:: Set ``acrn`` as the username for the Ubuntu Service VM.


 Build and Install ACRN on Ubuntu

@@ -287,7 +287,7 @@ BIOS settings of GVT-d for WaaG
 -------------------------------

 .. note::
-Skip this step if you are using a Kaby Lake (KBL) NUC.
+Skip this step if you are using a Kaby Lake (KBL) Intel NUC.

 Go to **Chipset** -> **System Agent (SA) Configuration** -> **Graphics
 Configuration** and make the following settings:

@@ -441,7 +441,7 @@ Recommended BIOS settings for RTVM
 .. csv-table::
 :widths: 15, 30, 10

-"Hyper-Threading", "Intel Advanced Menu -> CPU Configuration", "Disabled"
+"Hyper-threading", "Intel Advanced Menu -> CPU Configuration", "Disabled"
 "Intel VMX", "Intel Advanced Menu -> CPU Configuration", "Enable"
 "Speed Step", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
 "Speed Shift", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"

@@ -458,7 +458,7 @@ Recommended BIOS settings for RTVM
 "Delay Enable DMI ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
 "DMI Link ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
 "Aggressive LPM Support", "Intel Advanced Menu -> PCH-IO Configuration -> SATA And RST Configuration", "Disabled"
-"USB Periodic Smi", "Intel Advanced Menu -> LEGACY USB Configuration", "Disabled"
+"USB Periodic SMI", "Intel Advanced Menu -> LEGACY USB Configuration", "Disabled"
 "ACPI S3 Support", "Intel Advanced Menu -> ACPI Settings", "Disabled"
 "Native ASPM", "Intel Advanced Menu -> ACPI Settings", "Disabled"

@@ -522,13 +522,13 @@ this, follow the below steps to allocate all housekeeping tasks to core 0:
 # Move all rcu tasks to core 0.
 for i in `pgrep rcu`; do taskset -pc 0 $i; done

-# Change realtime attribute of all rcu tasks to SCHED_OTHER and priority 0
+# Change real-time attribute of all rcu tasks to SCHED_OTHER and priority 0
 for i in `pgrep rcu`; do chrt -v -o -p 0 $i; done

-# Change realtime attribute of all tasks on core 1 to SCHED_OTHER and priority 0
+# Change real-time attribute of all tasks on core 1 to SCHED_OTHER and priority 0
 for i in `pgrep /1`; do chrt -v -o -p 0 $i; done

-# Change realtime attribute of all tasks to SCHED_OTHER and priority 0
+# Change real-time attribute of all tasks to SCHED_OTHER and priority 0
 for i in `ps -A -o pid`; do chrt -v -o -p 0 $i; done

 echo disabling timer migration
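A note on the housekeeping script in the hunk above: the effect of the ``taskset`` and ``chrt`` calls can be spot-checked afterwards. A minimal sketch; the PID ``12345`` is a placeholder for any task the script moved:

.. code-block:: bash

   # Confirm the task is now pinned to core 0.
   taskset -pc 12345    # expect: "pid 12345's current affinity list: 0"

   # Confirm the task now runs as SCHED_OTHER at priority 0.
   chrt -p 12345        # expect: SCHED_OTHER policy, priority 0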
@@ -668,7 +668,8 @@ Passthrough a hard disk to RTVM
 --ovmf /usr/share/acrn/bios/OVMF.fd \
 hard_rtvm

-#. Upon deployment completion, launch the RTVM directly onto your WHL NUC:
+#. Upon deployment completion, launch the RTVM directly onto your WHL
+   Intel NUC:

 .. code-block:: none

@@ -69,7 +69,7 @@ through the Device Model. Currently, the service VM is based on Linux,
 but it can also use other operating systems as long as the ACRN Device
 Model is ported into it. A user VM can be Ubuntu*, Android*,
 Windows* or VxWorks*. There is one special user VM, called a
-post-launched Real-Time VM (RTVM), designed to run a hard real-time OS,
+post-launched real-time VM (RTVM), designed to run a hard real-time OS,
 such as Zephyr*, VxWorks*, or Xenomai*. Because of its real-time capability, RTVM
 can be used for soft programmable logic controller (PLC), inter-process
 communication (IPC), or Robotics applications.

@@ -130,7 +130,7 @@ In total, up to 7 post-launched User VMs can be started:
 - 5 regular User VMs,
 - One `Kata Containers <https://katacontainers.io>`_ User VM (see
 :ref:`run-kata-containers` for more details), and
-- One Real-Time VM (RTVM).
+- One real-time VM (RTVM).

 In this example, one post-launched User VM provides Human Machine Interface
 (HMI) capability, another provides Artificial Intelligence (AI) capability, some

@@ -157,15 +157,15 @@ Industrial usage scenario:
 with tools such as Kubernetes*.
 - The HMI Application OS can be Windows* or Linux*. Windows is dominant
 in Industrial HMI environments.
-- ACRN can support a soft Real-time OS such as preempt-rt Linux for
-soft-PLC control, or a hard Real-time OS that offers less jitter.
+- ACRN can support a soft real-time OS such as preempt-rt Linux for
+soft-PLC control, or a hard real-time OS that offers less jitter.

 Automotive Application Scenarios
 ================================

 As shown in :numref:`V2-SDC-scenario`, the ACRN hypervisor can be used
-for building Automotive Software Defined Cockpit (SDC) and In-Vehicle
-Experience (IVE) solutions.
+for building Automotive Software Defined Cockpit (SDC) and in-vehicle
+experience (IVE) solutions.

 .. figure:: images/ACRN-V2-SDC-scenario.png
 :width: 600px

@@ -177,12 +177,12 @@ Experience (IVE) solutions.
 As a reference implementation, ACRN provides the basis for embedded
 hypervisor vendors to build solutions with a reference I/O mediation
 solution. In this scenario, an automotive SDC system consists of the
-Instrument Cluster (IC) system running in the Service VM and the In-Vehicle
-Infotainment (IVI) system is running the post-launched User VM. Additionally,
+instrument cluster (IC) system running in the Service VM and the in-vehicle
+infotainment (IVI) system is running the post-launched User VM. Additionally,
 one could modify the SDC scenario to add more post-launched User VMs that can
-host Rear Seat Entertainment (RSE) systems (not shown on the picture).
+host rear seat entertainment (RSE) systems (not shown on the picture).

-An **Instrument Cluster (IC)** system is used to show the driver operational
+An **instrument cluster (IC)** system is used to show the driver operational
 information about the vehicle, such as:

 - the speed, fuel level, trip mileage, and other driving information of

@@ -191,14 +191,14 @@ information about the vehicle, such as:
 fuel or tire pressure;
 - showing rear-view and surround-view cameras for parking assistance.

-An **In-Vehicle Infotainment (IVI)** system's capabilities can include:
+An **in-vehicle infotainment (IVI)** system's capabilities can include:

 - navigation systems, radios, and other entertainment systems;
 - connection to mobile devices for phone calls, music, and applications
 via voice recognition;
 - control interaction by gesture recognition or touch.

-A **Rear Seat Entertainment (RSE)** system could run:
+A **rear seat entertainment (RSE)** system could run:

 - entertainment system;
 - virtual office;

@@ -265,9 +265,9 @@ application scenario needs.
 - Post-launched VM
 -

-* - Hybrid Real-Time Usage Config
+* - Hybrid real-time Usage Config
 - Hybrid RT
-- Pre-launched VM (Real-Time VM)
+- Pre-launched VM (real-time VM)
 - Service VM
 - Post-launched VM
 -

@@ -284,8 +284,8 @@ Here are block diagrams for each of these four scenarios.
 SDC scenario
 ============

-In this SDC scenario, an Instrument Cluster (IC) system runs with the
-Service VM and an In-Vehicle Infotainment (IVI) system runs in a user
+In this SDC scenario, an instrument cluster (IC) system runs with the
+Service VM and an in-vehicle infotainment (IVI) system runs in a user
 VM.

 .. figure:: images/ACRN-V2-SDC-scenario.png

@@ -300,10 +300,10 @@ Industry scenario

 In this Industry scenario, the Service VM provides device sharing capability for
 a Windows-based HMI User VM. One post-launched User VM can run a Kata Container
-application. Another User VM supports either hard or soft Real-time OS
+application. Another User VM supports either hard or soft real-time OS
 applications. Up to five additional post-launched User VMs support functions
-such as Human Machine Interface (HMI), Artificial Intelligence (AI), Computer
-Vision, etc.
+such as human/machine interface (HMI), artificial intelligence (AI), computer
+vision, etc.

 .. figure:: images/ACRN-Industry.png
 :width: 600px

@@ -326,10 +326,10 @@ non-real-time tasks.

 Hybrid scenario

-Hybrid Real-Time (RT) scenario
+Hybrid real-time (RT) scenario
 ==============================

-In this Hybrid Real-Time (RT) scenario, a pre-launched RTVM is started by the
+In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by the
 hypervisor. The Service VM runs a post-launched User VM that runs non-safety or
 non-real-time tasks.

@@ -458,7 +458,7 @@ all types of Virtual Machines (VMs) represented:
 - Pre-launched Service VM
 - Post-launched User VM
 - Kata Container VM (post-launched)
-- Real-Time VM (RTVM)
+- real-time VM (RTVM)

 The Service VM owns most of the devices including the platform devices, and
 provides I/O mediation. The notable exceptions are the devices assigned to the

@@ -557,11 +557,11 @@ emulation:

 * The third variation on hypervisor-based device emulation is
 **paravirtualized (PV) drivers**. In this model introduced by the `XEN
-project`_, the hypervisor includes the physical drivers, and each guest
+Project`_, the hypervisor includes the physical drivers, and each guest
 operating system includes a hypervisor-aware driver that works in
 concert with the hypervisor drivers.

-.. _XEN project:
+.. _XEN Project:
 https://wiki.xenproject.org/wiki/Understanding_the_Virtualization_Spectrum

 In the device emulation models discussed above, there's a price to pay

@@ -717,7 +717,7 @@ Following along with the numbered items in :numref:`io-emulation-path`:
 module is activated to execute its processing APIs. Otherwise, the VHM
 module leaves the IO request in the shared page and wakes up the
 device model thread to process.
-5. The ACRN device model follow the same mechanism as the VHM. The I/O
+5. The ACRN device model follows the same mechanism as the VHM. The I/O
 processing thread of device model queries the IO request ring to get the
 PIO instruction details and checks to see if any (guest) device emulation
 module claims ownership of the IO port: if a module claimed it,

@@ -813,7 +813,8 @@ here:
 The BE drivers only need to parse the virtqueue structures to obtain
 the requests and get the requests done. Virtqueue organization is
 specific to the User OS. In the implementation of Virtio in Linux, the
-virtqueue is implemented as a ring buffer structure called vring.
+virtqueue is implemented as a ring buffer structure called
+``vring``.

 In ACRN, the virtqueue APIs can be leveraged
 directly so users don't need to worry about the details of the

@@ -841,8 +842,8 @@ space as shown in :numref:`virtio-framework-userland`:
 In the Virtio user-land framework, the implementation is compatible with
 Virtio Spec 0.9/1.0. The VBS-U is statically linked with the Device Model,
 and communicates with the Device Model through the PCIe interface: PIO/MMIO
-or MSI/MSIx. VBS-U accesses Virtio APIs through the user space vring service
-API helpers. User space vring service API helpers access shared ring
+or MSI/MSIx. VBS-U accesses Virtio APIs through the user space ``vring`` service
+API helpers. User space ``vring`` service API helpers access shared ring
 through a remote memory map (mmap). VHM maps User VM memory with the help of
 ACRN Hypervisor.

@@ -12,7 +12,7 @@ Minimum System Requirements for Installing ACRN
 +------------------------+-----------------------------------+---------------------------------------------------------------------------------+
 | Hardware | Minimum Requirements | Recommended |
 +========================+===================================+=================================================================================+
-| Processor | Compatible x86 64-bit processor | 2 core with Intel Hyper Threading Technology enabled in the BIOS or more cores |
+| Processor | Compatible x86 64-bit processor | 2 core with Intel Hyper-threading Technology enabled in the BIOS or more cores |
 +------------------------+-----------------------------------+---------------------------------------------------------------------------------+
 | System memory | 4GB RAM | 8GB or more (< 32G) |
 +------------------------+-----------------------------------+---------------------------------------------------------------------------------+

@@ -29,7 +29,7 @@ Platforms with multiple PCI segments

 ACRN assumes the following conditions are satisfied from the Platform BIOS

-* All the PCI device BARs should be assigned resources, including SR-IOv VF BARs if a device supports.
+* All the PCI device BARs should be assigned resources, including SR-IOV VF BARs if a device supports.

 * Bridge windows for PCI bridge devices and the resources for root bus, should be programmed with values
 that enclose resources used by all the downstream devices.
@@ -98,16 +98,16 @@ For general instructions setting up ACRN on supported hardware platforms, visit
 | | | | | | |
 +--------------------------------+-------------------------+-----------+-----------+-------------+------------+
 | | **Kaby Lake** | | `NUC7i5BNH`_ | V | | | |
-| | (Codename: Baby Canyon) | | (Board: NUC7i5BNB) | | | | |
+| | (Code name: Baby Canyon) | | (Board: NUC7i5BNB) | | | | |
 +--------------------------------+-------------------------+-----------+-----------+-------------+------------+
 | | **Kaby Lake** | | `NUC7i7BNH`_ | V | | | |
-| | (Codename: Baby Canyon) | | (Board: NUC7i7BNB | | | | |
+| | (Code name: Baby Canyon) | | (Board: NUC7i7BNB | | | | |
 +--------------------------------+-------------------------+-----------+-----------+-------------+------------+
 | | **Kaby Lake** | | `NUC7i5DNH`_ | V | | | |
-| | (Codename: Dawson Canyon) | | (Board: NUC7i5DNB) | | | | |
+| | (Code name: Dawson Canyon) | | (Board: NUC7i5DNB) | | | | |
 +--------------------------------+-------------------------+-----------+-----------+-------------+------------+
 | | **Kaby Lake** | | `NUC7i7DNH`_ | V | V | V | V |
-| | (Codename: Dawson Canyon) | | (Board: NUC7i7DNB) | | | | |
+| | (Code name: Dawson Canyon) | | (Board: NUC7i7DNB) | | | | |
 +--------------------------------+-------------------------+-----------+-----------+-------------+------------+
 | | **Whiskey Lake** | | `WHL-IPC-I5`_ | V | V | V | V |
 | | | | (Board: WHL-IPC-I5) | | | | |

@@ -141,8 +141,8 @@ Verified Hardware Specifications Detail
 | | | UP2 - x5-E3940 | | - Intel® Atom ™ x5-E3940 (4C4T) |
 | | | | (up to 1.8GHz)/x7-E3950 (4C4T, up to 2.0GHz) |
 | | +------------------------+-----------------------------------------------------------+
-| | | Graphics | - 2GB ( single channel) LPDDR4 |
-| | | | - 4GB/8GB ( dual channel) LPDDR4 |
+| | | Graphics | - 2GB (single channel) LPDDR4 |
+| | | | - 4GB/8GB (dual channel) LPDDR4 |
 | | +------------------------+-----------------------------------------------------------+
 | | | System memory | - Intel® Gen 9 HD, supporting 4K Codec |
 | | | | Decode and Encode for HEVC4, H.264, VP8 |

@@ -152,16 +152,16 @@ Verified Hardware Specifications Detail
 | | | Serial Port | - Yes |
 +--------------------------------+------------------------+------------------------+-----------------------------------------------------------+
 | | **Kaby Lake** | | NUC7i5BNH | Processor | - Intel® Core™ i5-7260U CPU @ 2.20GHz (2C4T) |
-| | (Codename: Baby Canyon) | | (Board: NUC7i5BNB) | | |
+| | (Code name: Baby Canyon) | | (Board: NUC7i5BNB) | | |
 | | +------------------------+-----------------------------------------------------------+
-| | | Graphics | - Intel® Iris™ Plus Graphics 640 |
+| | | Graphics | - Intel® Iris® Plus Graphics 640 |
 | | | | - One HDMI\* 2.0 port with 4K at 60 Hz |
 | | | | - Thunderbolt™ 3 port with support for USB\* 3.1 |
 | | | | Gen 2, DisplayPort\* 1.2 and 40 Gb/s Thunderbolt |
 | | +------------------------+-----------------------------------------------------------+
 | | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2133 MHz), 1.2V |
 | | +------------------------+-----------------------------------------------------------+
-| | | Storage capabilities | - Micro SDXC slot with UHS-I support on the side |
+| | | Storage capabilities | - microSDXC slot with UHS-I support on the side |
 | | | | - One M.2 connector supporting 22x42 or 22x80 M.2 SSD |
 | | | | - One SATA3 port for connection to 2.5" HDD or SSD |
 | | | | (up to 9.5 mm thickness) |

@@ -169,16 +169,16 @@ Verified Hardware Specifications Detail
 | | | Serial Port | - Yes |
 +--------------------------------+------------------------+------------------------+-----------------------------------------------------------+
 | | **Kaby Lake** | | NUC7i7BNH | Processor | - Intel® Core™ i7-7567U CPU @ 3.50GHz (2C4T) |
-| | (Codename: Baby Canyon) | | (Board: NUC7i7BNB) | | |
+| | (Code name: Baby Canyon) | | (Board: NUC7i7BNB) | | |
 | | +------------------------+-----------------------------------------------------------+
-| | | Graphics | - Intel® Iris™ Plus Graphics 650 |
+| | | Graphics | - Intel® Iris® Plus Graphics 650 |
 | | | | - One HDMI\* 2.0 port with 4K at 60 Hz |
 | | | | - Thunderbolt™ 3 port with support for USB\* 3.1 Gen 2, |
 | | | | DisplayPort\* 1.2 and 40 Gb/s Thunderbolt |
 | | +------------------------+-----------------------------------------------------------+
 | | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2133 MHz), 1.2V |
 | | +------------------------+-----------------------------------------------------------+
-| | | Storage capabilities | - Micro SDXC slot with UHS-I support on the side |
+| | | Storage capabilities | - microSDXC slot with UHS-I support on the side |
 | | | | - One M.2 connector supporting 22x42 or 22x80 M.2 SSD |
 | | | | - One SATA3 port for connection to 2.5" HDD or SSD |
 | | | | (up to 9.5 mm thickness) |

@@ -186,7 +186,7 @@ Verified Hardware Specifications Detail
 | | | Serial Port | - No |
 +--------------------------------+------------------------+------------------------+-----------------------------------------------------------+
 | | **Kaby Lake** | | NUC7i5DNH | Processor | - Intel® Core™ i5-7300U CPU @ 2.64GHz (2C4T) |
-| | (Codename: Dawson Canyon) | | (Board: NUC7i5DNB) | | |
+| | (Code name: Dawson Canyon) | | (Board: NUC7i5DNB) | | |
 | | +------------------------+-----------------------------------------------------------+
 | | | Graphics | - Intel® HD Graphics 620 |
 | | | | - Two HDMI\* 2.0a ports supporting 4K at 60 Hz |

@@ -209,7 +209,7 @@ Verified Hardware Specifications Detail
 | | +------------------------+-----------------------------------------------------------+
 | | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V |
 | | +------------------------+-----------------------------------------------------------+
-| | | Storage capabilities | - One M.2 connector for WIFI |
+| | | Storage capabilities | - One M.2 connector for Wi-Fi |
 | | | | - One M.2 connector for 3G/4G module, supporting |
 | | | | LTE Category 6 and above |
 | | | | - One M.2 connector for 2242 SSD |

@@ -225,7 +225,7 @@ Verified Hardware Specifications Detail
 | | +------------------------+-----------------------------------------------------------+
 | | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V |
 | | +------------------------+-----------------------------------------------------------+
-| | | Storage capabilities | - One M.2 connector for WIFI |
+| | | Storage capabilities | - One M.2 connector for Wi-Fi |
 | | | | - One M.2 connector for 3G/4G module, supporting |
 | | | | LTE Category 6 and above |
 | | | | - One M.2 connector for 2242 SSD |
@@ -6,7 +6,7 @@ ACRN v1.0 (May 2019)
 We are pleased to announce the release of ACRN version 1.0, a key
 Project ACRN milestone focused on automotive Software-Defined Cockpit
 (SDC) use cases and introducing additional architecture enhancements for
-more IOT usages, such as Industrial.
+more IoT usages, such as Industrial.

 This v1.0 release is a production-ready reference solution for SDC
 usages that require multiple VMs and rich I/O mediation for device

@@ -44,7 +44,7 @@ We have many new `reference documents available <https://projectacrn.github.io>`

 * Getting Started Guide for Industry scenario
 * :ref:`ACRN Configuration Tool Manual <acrn_configuration_tool>`
-* :ref:`Trace and Data Collection for ACRN Real-Time(RT) Performance Tuning <rt_performance_tuning>`
+* :ref:`Trace and Data Collection for ACRN real-time (RT) Performance Tuning <rt_performance_tuning>`
 * Building ACRN in Docker
 * :ref:`Running Ubuntu as the User VM <running_ubun_as_user_vm>`
 * :ref:`Running Debian as the User VM <running_deb_as_user_vm>`

@@ -39,7 +39,7 @@ What's New in v1.6

 - The ACRN hypervisor allows a SRIOV-capable PCI device's Virtual Functions (VFs) to be allocated to any VM.

-- The ACRN Service VM supports the SRIOV ethernet device (through the PF driver), and ensures that the SRIOV VF device is able to be assigned (passthrough) to a post-launched VM (launched by ACRN-DM).
+- The ACRN Service VM supports the SRIOV Ethernet device (through the PF driver), and ensures that the SRIOV VF device is able to be assigned (passthrough) to a post-launched VM (launched by ACRN-DM).

 * CPU sharing enhancement - Halt/Pause emulation

@@ -118,14 +118,14 @@ ACRN supports Open Virtual Machine Firmware (OVMF) as a virtual boot
 loader for the Service VM to launch post-launched VMs such as Windows,
 Linux, VxWorks, or Zephyr RTOS. Secure boot is also supported.

-Post-launched Real-Time VM Support
+Post-launched real-time VM Support
 ==================================

 ACRN supports a post-launched RTVM, which also uses partitioned hardware
 resources to ensure adequate real-time performance, as required for
 industrial use cases.

-Real-Time VM Performance Optimizations
+Real-time VM Performance Optimizations
 ======================================

 ACRN 2.0 improves RTVM performance with these optimizations:

@@ -165,7 +165,7 @@ Large selection of OSs for User VMs
 ===================================

 ACRN now supports Windows* 10, Android*, Ubuntu*, Xenomai, VxWorks*,
-Real-Time Linux*, and Zephyr* RTOS. ACRN's Windows support now conforms
+real-time Linux*, and Zephyr* RTOS. ACRN's Windows support now conforms
 to the Microsoft* Hypervisor Top-Level Functional Specification (TLFS).
 ACRN 2.0 also improves overall Windows as a Guest (WaaG) stability and
 performance.

@@ -13,7 +13,7 @@ Introduction
 ************

 ACRN includes three types of configurations: Hypervisor, Board, and VM. Each
-are discussed in the following sections.
+is discussed in the following sections.

 Hypervisor configuration
 ========================

@@ -52,7 +52,7 @@ to launch post-launched User VMs.
 Scenario based VM configurations are organized as ``*.c/*.h`` files. The
 reference scenarios are located in the
 ``misc/vm_configs/scenarios/$(SCENARIO)/`` folder.
-The board specific configurations on this scenario is stored in the
+The board-specific configurations on this scenario are stored in the
 ``misc/vm_configs/scenarios/$(SCENARIO)/$(BOARD)/`` folder.

 User VM launch script samples are located in the

@@ -242,7 +242,7 @@ Additional scenario XML elements:
 - ``PRE_STD_VM`` pre-launched Standard VM
 - ``SOS_VM`` pre-launched Service VM
 - ``POST_STD_VM`` post-launched Standard VM
-- ``POST_RT_VM`` post-launched realtime capable VM
+- ``POST_RT_VM`` post-launched real-time capable VM
 - ``KATA_VM`` post-launched Kata Container VM

 ``name`` (a child node of ``vm``):

@@ -257,7 +257,7 @@ Additional scenario XML elements:
 - ``GUEST_FLAG_IO_COMPLETION_POLLING`` specify whether the hypervisor needs
 IO polling to completion
 - ``GUEST_FLAG_HIDE_MTRR`` specify whether to hide MTRR from the VM
-- ``GUEST_FLAG_RT`` specify whether the VM is RT-VM (realtime)
+- ``GUEST_FLAG_RT`` specify whether the VM is RT-VM (real-time)

 ``cpu_affinity``:
 List of pCPU: the guest VM is allowed to create vCPU from all or a subset of this list.

@@ -289,7 +289,7 @@ Additional scenario XML elements:
 exactly match the module tag in the GRUB multiboot cmdline.

 ``ramdisk_mod`` (a child node of ``os_config``):
-The tag for the ramdisk image which acts as a multiboot module; it
+The tag for the ramdisk image, which acts as a multiboot module; it
 must exactly match the module tag in the GRUB multiboot cmdline.

 ``bootargs`` (a child node of ``os_config``):

@@ -375,7 +375,7 @@ current scenario has:
 ``ZEPHYR`` or ``VXWORKS``.

 ``rtos_type``:
-Specify the User VM Realtime capability: Soft RT, Hard RT, or none of them.
+Specify the User VM Real-time capability: Soft RT, Hard RT, or none of them.

 ``mem_size``:
 Specify the User VM memory size in Mbyte.

@@ -413,7 +413,7 @@ current scenario has:
 ``passthrough_devices``:
 Select the passthrough device from the lspci list. Currently we support:
 usb_xdci, audio, audio_codec, ipu, ipu_i2c, cse, wifi, Bluetooth, sd_card,
-ethernet, wifi, sata, and nvme.
+Ethernet, wifi, sata, and nvme.

 ``network`` (a child node of ``virtio_devices``):
 The virtio network device setting.

@@ -431,7 +431,7 @@ current scenario has:
 .. note::

 The ``configurable`` and ``readonly`` attributes are used to mark
-whether the items is configurable for users. When ``configurable="0"``
+whether the item is configurable for users. When ``configurable="0"``
 and ``readonly="true"``, the item is not configurable from the web
 interface. When ``configurable="0"``, the item does not appear on the
 interface.

@@ -599,7 +599,7 @@ Instructions
 because the app needs to download some JavaScript files.

 .. note:: The ACRN configuration app is supported on Chrome, Firefox,
-and MS Edge. Do not use IE.
+and Microsoft Edge. Do not use Internet Explorer.

 The website is shown below:

@@ -624,7 +624,7 @@ Instructions

 #. Load or create the scenario setting by selecting among the following:

-- Choose a scenario from the **Scenario Setting** menu which lists all
+- Choose a scenario from the **Scenario Setting** menu that lists all
 user-defined scenarios for the board you selected in the previous step.

 - Click the **Create a new scenario** from the **Scenario Setting**

@@ -644,9 +644,9 @@ Instructions
 .. figure:: images/choose_scenario.png
 :align: center

-Note that you can also use a customized scenario xml by clicking **Import
+Note that you can also use a customized scenario XML by clicking **Import
 XML**. The configuration app automatically directs to the new scenario
-xml once the import is complete.
+XML once the import is complete.

 #. The configurable items display after one scenario is created/loaded/
 selected. Following is an industry scenario:

@@ -655,9 +655,9 @@ Instructions
 :align: center

 - You can edit these items directly in the text boxes, or you can choose
-single or even multiple items from the drop down list.
+single or even multiple items from the drop-down list.

-- Read-only items are marked as grey.
+- Read-only items are marked as gray.

 - Hover the mouse pointer over the item to display the description.

@@ -679,7 +679,7 @@ Instructions
 pop-up model.

 .. note::
-All customized scenario xmls will be in user-defined groups which are
+All customized scenario xmls will be in user-defined groups, which are
 located in ``misc/vm_configs/xmls/config-xmls/[board]/user_defined/``.

 Before saving the scenario xml, the configuration app validates the

@@ -698,8 +698,8 @@ Instructions

 If **Source Path** in the pop-up model is edited, the source code is
 generated into the edited Source Path relative to ``acrn-hypervisor``;
-otherwise, the source code is generated into default folders and
-overwrite the old ones. The board-related configuration source
+otherwise, source code is generated into default folders and
+overwrites the old ones. The board-related configuration source
 code is located at
 ``misc/vm_configs/boards/[board]/`` and the
 scenario-based VM configuration source code is located at

@@ -715,11 +715,11 @@ The **Launch Setting** is quite similar to the **Scenario Setting**:

 - Click **Load a default launch script** from the **Launch Setting** menu.

-- Select one launch setting xml from the menu.
+- Select one launch setting XML file from the menu.

-- Import the local launch setting xml by clicking **Import XML**.
+- Import the local launch setting XML file by clicking **Import XML**.

-#. Select one scenario for the current launch setting from the **Select Scenario** drop down box.
+#. Select one scenario for the current launch setting from the **Select Scenario** drop-down box.

 #. Configure the items for the current launch setting.

@@ -731,7 +731,7 @@ The **Launch Setting** is quite similar to the **Scenario Setting**:
 - Remove a UOS launch script by clicking **Remove this VM** for the
 current launch setting.

-#. Save the current launch setting to the user-defined xml files by
+#. Save the current launch setting to the user-defined XML files by
 clicking **Export XML**. The configuration app validates the current
 configuration and lists all incorrect configurable items and shows errors.

@@ -18,8 +18,8 @@ This setup was tested with the following configuration,
 - Platforms Tested: ApolloLake, KabyLake, CoffeeLake


-Pre-Requisites
-**************
+Prerequisites
+*************
 1. Make sure the platform supports Intel VMX as well as VT-d technologies. On ubuntu18.04, this
 can be checked by installing ``cpu-checker`` tool. If the output displays **KVM acceleration can be used**
 the platform supports it.
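The check described at the end of the hunk above can be run as follows; a minimal sketch assuming an Ubuntu 18.04 host, where ``kvm-ok`` ships with the ``cpu-checker`` package:

.. code-block:: bash

   sudo apt install cpu-checker
   sudo kvm-ok
   # Expected on a VMX-capable platform:
   #   INFO: /dev/kvm exists
   #   KVM acceleration can be used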
@@ -186,7 +186,7 @@ shown in the following example:
 Formats:
 0x00000005: event id for trace test

-%(cpu)d: corresponding cpu index with 'decimal' format
+%(cpu)d: corresponding CPU index with 'decimal' format

 %(event)016x: corresponding event id with 'hex' format

@@ -151,7 +151,7 @@ Depending on your Linux version, install the needed tools:

 sudo dnf install doxygen python3-pip python3-wheel make graphviz

-And for any of these Linux environments, install the remaining python-based
+And for any of these Linux environments, install the remaining Python-based
 tools:

 .. code-block:: bash

@@ -160,7 +160,7 @@ tools:
 pip3 install --user -r scripts/requirements.txt

 Add ``$HOME/.local/bin`` to the front of your ``PATH`` so the system will
-find expected versions of python utilities such as ``sphinx-build`` and
+find expected versions of Python utilities such as ``sphinx-build`` and
 ``breathe``:

 .. code-block:: bash

@@ -304,7 +304,7 @@ Sphinx/Breathe, we've added a post-processing filter on the output of
 the documentation build process to check for "expected" messages from the
 generation process output.

-The output from the Sphinx build is processed by the python script
+The output from the Sphinx build is processed by the Python script
 ``scripts/filter-known-issues.py`` together with a set of filter
 configuration files in the ``.known-issues/doc`` folder. (This
 filtering is done as part of the ``Makefile``.)

@@ -31,7 +31,7 @@ where
 For example, to set up a shared memory of 2 megabytes, use ``2``
 instead of ``shm_size``. The two communicating VMs must define the same size.

-.. note:: This device can be used with Real-Time VM (RTVM) as well.
+.. note:: This device can be used with real-time VM (RTVM) as well.

 ivshmem hv-land usage
 *********************
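To make the ``shm_size`` sentence in the hunk above concrete, here is a hypothetical ``acrn-dm`` fragment. The slot number and region name are placeholders, the parameter form ``-s <slot>,ivshmem,<name>,<size_in_MB>`` is an assumption, and the exact ivshmem syntax should be checked against the dm-land documentation for your ACRN release:

.. code-block:: bash

   # Expose a 2 MB ivshmem region named shm_region_1 on virtual PCI slot 6.
   acrn-dm ... \
      -s 6,ivshmem,shm_region_1,2 \
      ...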
@@ -23,8 +23,8 @@ Verified version
 *****************

 - ACRN-hypervisor tag: **acrn-2020w17.4-140000p**
-- ACRN-Kernel (Service VM kernel): **master** branch, commit id **095509221660daf82584ebdd8c50ea0078da3c2d**
-- ACRN-EDK2 (OVMF): **ovmf-acrn** branch, commit id **0ff86f6b9a3500e4c7ea0c1064e77d98e9745947**
+- ACRN-Kernel (Service VM kernel): **master** branch, commit ID **095509221660daf82584ebdd8c50ea0078da3c2d**
+- ACRN-EDK2 (OVMF): **ovmf-acrn** branch, commit ID **0ff86f6b9a3500e4c7ea0c1064e77d98e9745947**

 Prerequisites
 *************

@@ -117,12 +117,12 @@ Steps

 git clone https://github.com/projectacrn/acrn-edk2.git

-#. Fetch the vbt and gop drivers.
+#. Fetch the VBT and GOP drivers.

-Fetch the **vbt** and **gop** drivers from the board manufacturer
+Fetch the **VBT** and **GOP** drivers from the board manufacturer
 according to your CPU model name.

-#. Add the **vbt** and **gop** drivers to the OVMF:
+#. Add the **VBT** and **GOP** drivers to the OVMF:

 ::

@@ -70,7 +70,7 @@ Build ACRN with Pre-Launched RT Mode

 The ACRN VM configuration framework can easily configure resources for
 Pre-Launched VMs. On Whiskey Lake WHL-IPC-I5, to passthrough SATA and
-ethernet 03:00.0 devices to the Pre-Launched RT VM, build ACRN with:
+Ethernet 03:00.0 devices to the Pre-Launched RT VM, build ACRN with:

 .. code-block:: none

@@ -144,9 +144,9 @@ Configure RDT for VM using VM Configuration

 #. RDT hardware feature is enabled by default on supported platforms. This
 information can be found using an offline tool that generates a
-platform-specific xml file that helps ACRN identify RDT-supported
+platform-specific XML file that helps ACRN identify RDT-supported
 platforms. RDT on ACRN is enabled by configuring the ``FEATURES``
-sub-section of the scenario xml file as in the below example. For
+sub-section of the scenario XML file as in the below example. For
 details on building ACRN with scenario refer to :ref:`build-with-acrn-scenario`.

 .. code-block:: none

@@ -163,7 +163,7 @@ Configure RDT for VM using VM Configuration
 <MBA_DELAY desc="Memory Bandwidth Allocation delay value"></MBA_DELAY>
 </RDT>

-#. Once RDT is enabled in the scenario xml file, the next step is to program
+#. Once RDT is enabled in the scenario XML file, the next step is to program
 the desired cache mask or/and the MBA delay value as needed in the
 scenario file. Each cache mask or MBA delay configuration corresponds
 to a CLOS ID. For example, if the maximum supported CLOS ID is 4, then 4

@@ -1,9 +1,9 @@
 .. _rt_performance_tuning:

-ACRN Real-Time (RT) Performance Analysis
+ACRN Real-time (RT) Performance Analysis
 ########################################

-The document describes the methods to collect trace/data for ACRN Real-Time VM (RTVM)
+The document describes the methods to collect trace/data for ACRN real-time VM (RTVM)
 real-time performance analysis. Two parts are included:

 - Method to trace ``vmexit`` occurrences for analysis.

@@ -1,6 +1,6 @@
 .. _rt_perf_tips_rtvm:

-ACRN Real-Time VM Performance Tips
+ACRN Real-time VM Performance Tips
 ##################################

 Background

@@ -50,7 +50,7 @@ Tip: Apply the acrn-dm option ``--lapic_pt``
 Tip: Use virtio polling mode
 Polling mode prevents the frontend of the VM-exit from sending a
 notification to the backend. We recommend that you passthrough a
-physical peripheral device (such as block or an ethernet device), to an
+physical peripheral device (such as block or an Ethernet device), to an
 RTVM. If no physical device is available, ACRN supports virtio devices
 and enables polling mode to avoid a VM-exit at the frontend. Enable
 virtio polling mode via the option ``--virtio_poll [polling interval]``.
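For the option named at the end of the hunk above, a sketch of how it might appear in an RTVM launch script; the interval value is illustrative (assumed to be in nanoseconds, so 1000000 is 1 ms), not a tuning recommendation:

.. code-block:: bash

   # Enable virtio polling mode with a 1 ms polling interval.
   acrn-dm ... \
      --virtio_poll 1000000 \
      ...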
@ -1,6 +1,6 @@
|
||||
.. _rtvm_workload_guideline:
|
||||
|
||||
Real-Time VM Application Design Guidelines
|
||||
Real-time VM Application Design Guidelines
|
||||
##########################################
|
||||
|
||||
An RTOS developer must be aware of the differences between running applications on a native
|
||||
|
@ -18,7 +18,7 @@ Prerequisites
|
||||
|
||||
#. Refer to the :ref:`ACRN supported hardware <hardware>`.
|
||||
#. For a default prebuilt ACRN binary in the E2E package, you must have 4
|
||||
CPU cores or enable "CPU Hyper-Threading" in order to have 4 CPU threads for 2 CPU cores.
|
||||
CPU cores or enable "CPU Hyper-threading" in order to have 4 CPU threads for 2 CPU cores.
|
||||
#. Follow the :ref:`rt_industry_ubuntu_setup` to set up the ACRN Service VM
|
||||
based on Ubuntu.
|
||||
#. This tutorial is validated on the following configurations:
|
||||
|
@ -24,7 +24,7 @@ Use the following instructions to install Debian.
|
||||
the bottom of the page).
|
||||
- Follow the `Debian installation guide
|
||||
<https://www.debian.org/releases/stable/amd64/index.en.html>`_ to
|
||||
install it on your NUC; we are using an Intel Kaby Lake NUC (NUC7i7DNHE)
|
||||
install it on your Intel NUC; we are using a Kaby Lake Intel NUC (NUC7i7DNHE)
|
||||
in this tutorial.
|
||||
- :ref:`install-build-tools-dependencies` for ACRN.
|
||||
- Update to the latest iASL (required by the ACRN Device Model):
|
||||
|
@ -11,18 +11,18 @@ Intel NUC Kit. If you have not, refer to the following instructions:
|
||||
|
||||
- Install a `Clear Linux OS
|
||||
<https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
|
||||
on your NUC kit.
|
||||
on your Intel NUC kit.
|
||||
- Follow the instructions at XXX to set up the
|
||||
Service VM automatically on your NUC kit. Follow steps 1 - 4.
|
||||
Service VM automatically on your Intel NUC kit. Follow steps 1 - 4.
|
||||
|
||||
.. important:: need updated instructions that aren't Clear Linux dependent
|
||||
|
||||
We are using Intel Kaby Lake NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.
|
||||
We are using a Kaby Lake Intel NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.
|
||||
|
||||
Before you start this tutorial, make sure the KVM tools are installed on the
|
||||
development machine and set **IGD Aperture Size to 512** in the BIOS
|
||||
settings (refer to :numref:`intel-bios-deb`). Connect two monitors to your
|
||||
NUC:
|
||||
Intel NUC:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
@ -47,7 +47,7 @@ Hardware Configurations
|
||||
| | | Graphics | - UHD Graphics 620 |
|
||||
| | | | - Two HDMI 2.0a ports supporting 4K at 60 Hz |
|
||||
| | +----------------------+----------------------------------------------+
|
||||
| | | System memory | - 8GiB SODIMM DDR4 2400 MHz |
|
||||
| | | System memory | - 8GiB SO-DIMM DDR4 2400 MHz |
|
||||
| | +----------------------+----------------------------------------------+
|
||||
| | | Storage capabilities | - 1TB WDC WD10SPZX-22Z |
|
||||
+--------------------------+----------------------+----------------------+----------------------------------------------+
|
||||
@ -97,7 +97,7 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian
|
||||
|
||||
#. Right-click **QEMU/KVM** and select **New**.
|
||||
|
||||
a. Choose **Local install media (ISO image or CDROM)** and then click
|
||||
a. Choose **Local install media (ISO image or CD-ROM)** and then click
|
||||
**Forward**. A **Create a new virtual machine** box displays, as shown
|
||||
in :numref:`newVM-debian` below.
|
||||
|
||||
@ -119,7 +119,7 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian
|
||||
#. Rename the image if you desire. You must check the **customize
|
||||
configuration before install** option before you finish all stages.
|
||||
|
||||
#. Verify that you can see the Overview screen as set up, as shown in :numref:`debian10-setup` below:
|
||||
#. Verify that you can see the Overview screen as set up, shown in :numref:`debian10-setup` below:
|
||||
|
||||
.. figure:: images/debian-uservm-3.png
|
||||
:align: center
|
||||
@ -127,14 +127,14 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian
|
||||
|
||||
Debian Setup Overview
|
||||
|
||||
#. Complete the Debian installation. Verify that you have set up a vda
|
||||
#. Complete the Debian installation. Verify that you have set up a VDA
|
||||
disk partition, as shown in :numref:`partition-vda` below:
|
||||
|
||||
.. figure:: images/debian-uservm-4.png
|
||||
:align: center
|
||||
:name: partition-vda
|
||||
|
||||
Virtual Disk (vda) partition
|
||||
Virtual Disk (VDA) partition
|
||||
|
||||
#. Upon installation completion, the KVM image is created in the
|
||||
``/var/lib/libvirt/images`` folder. Convert the ``qcow2`` format to ``img``
|
||||
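A typical conversion command, assuming the KVM image was created as
``debian.qcow2`` (the source file name is illustrative), looks like this:

.. code-block:: none

   $ cd /var/lib/libvirt/images
   $ qemu-img convert -f qcow2 -O raw debian.qcow2 debian.img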
@ -154,7 +154,7 @@ Re-use and modify the `launch_win.sh` script in order to launch the new Debian 1
|
||||
"/dev/sda1" mentioned below with "/dev/nvme0n1p1" if you are using an
|
||||
NVMe drive.
|
||||
|
||||
1. Copy the debian.img to your NUC:
|
||||
1. Copy the ``debian.img`` to your Intel NUC:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
|
@ -11,9 +11,9 @@ Intel NUC Kit. If you have not, refer to the following instructions:
|
||||
|
||||
- Install a `Clear Linux OS
|
||||
<https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
|
||||
on your NUC kit.
|
||||
on your Intel NUC kit.
|
||||
- Follow the instructions at XXX to set up the
|
||||
Service VM automatically on your NUC kit. Follow steps 1 - 4.
|
||||
Service VM automatically on your Intel NUC kit. Follow steps 1 - 4.
|
||||
|
||||
.. important:: need updated instructions that aren't Clear Linux
|
||||
dependent
|
||||
@ -21,7 +21,7 @@ Intel NUC Kit. If you have not, refer to the following instructions:
|
||||
Before you start this tutorial, make sure the KVM tools are installed on the
|
||||
development machine and set **IGD Aperture Size to 512** in the BIOS
|
||||
settings (refer to :numref:`intel-bios-ubun`). Connect two monitors to your
|
||||
NUC:
|
||||
Intel NUC:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
@ -46,7 +46,7 @@ Hardware Configurations
|
||||
| | | Graphics | - UHD Graphics 620 |
|
||||
| | | | - Two HDMI 2.0a ports supporting 4K at 60 Hz |
|
||||
| | +----------------------+----------------------------------------------+
|
||||
| | | System memory | - 8GiB SODIMM DDR4 2400 MHz |
|
||||
| | | System memory | - 8GiB SO-DIMM DDR4 2400 MHz |
|
||||
| | +----------------------+----------------------------------------------+
|
||||
| | | Storage capabilities | - 1TB WDC WD10SPZX-22Z |
|
||||
+--------------------------+----------------------+----------------------+----------------------------------------------+
|
||||
@ -147,7 +147,7 @@ Modify the ``launch_win.sh`` script in order to launch Ubuntu as the User VM.
|
||||
``/dev/sda1`` mentioned below with ``/dev/nvme0n1p1`` if you are
|
||||
using an SSD.
|
||||
|
||||
1. Copy the ``uos.img`` to your NUC:
|
||||
1. Copy the ``uos.img`` to your Intel NUC:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
|
@ -135,7 +135,7 @@ CPUID Leaf 12H
|
||||
**Intel SGX Capability Enumeration**
|
||||
|
||||
* CPUID_12H.0.EAX[0] SGX1: If 1, indicates that Intel SGX supports the
|
||||
collection of SGX1 leaf functions.If is_sgx_supported and the section count
|
||||
collection of SGX1 leaf functions. If is_sgx_supported and the section count
|
||||
is initialized for the VM, this bit will be set.
|
||||
* CPUID_12H.0.EAX[1] SGX2: If 1, indicates that Intel SGX supports the
|
||||
collection of SGX2 leaf functions. If hardware supports it and SGX is enabled
|
||||
@ -149,7 +149,7 @@ CPUID Leaf 12H
|
||||
Extended feature (same structure as XCR0).
|
||||
|
||||
The hypervisor may change the allow-1 setting of XFRM in ATTRIBUTES for a VM.
|
||||
If some feature is disabled for the VM, the bit is also cleared, eg. MPX.
|
||||
If some feature is disabled for the VM, the bit is also cleared, e.g. MPX.
|
||||
|
||||
**Intel SGX EPC Enumeration**
|
||||
|
||||
|
@ -61,7 +61,7 @@ Space Layout Randomization), and stack overflow protector.
|
||||
|
||||
There are a couple of built-in Trusted Apps running in user mode of
|
||||
Trusty OS. However, an OEM can add more Trusted Apps in Trusty OS to
|
||||
serve any other customized security services.For security reasons and
|
||||
serve any other customized security services. For security reasons and
|
||||
for serving early-boot time security requests (e.g. disk decryption),
|
||||
Trusty OS and Apps are typically started before Normal world OS.
|
||||
|
||||
@ -102,7 +102,7 @@ malware detection.
|
||||
|
||||
In embedded products such as an automotive IVI system, the most important
|
||||
security services requested by customers are keystore and secure
|
||||
storage. In this article we will focus on these two services.
|
||||
storage. In this article, we will focus on these two services.
|
||||
|
||||
Keystore
|
||||
========
|
||||
@ -126,14 +126,14 @@ and are permanently bound to the key, ensuring the key cannot be used in
|
||||
any other way.
|
||||
|
||||
In addition to the list above, there is one more service that Keymaster
|
||||
implementations provide, but which is not exposed as an API: Random
|
||||
implementations provide, but is not exposed as an API: Random
|
||||
number generation. This is used internally for generation of keys,
|
||||
Initialization Vectors (IVs), random padding, and other elements of
|
||||
secure protocols that require randomness.
|
||||
|
||||
Using Android as an example, Keystore functions are explained in greater
|
||||
detail in this `Android keymaster functions document
|
||||
<https://source.android.com/security/keystore/implementer-ref>`_
|
||||
<https://source.android.com/security/keystore/implementer-ref>`_.
|
||||
|
||||
.. figure:: images/trustyacrn-image3.png
|
||||
:align: center
|
||||
@ -161,7 +161,7 @@ You can read the `eMMC/UFS JEDEC specification
|
||||
to understand that.
|
||||
|
||||
This secure storage can provide data confidentiality, integrity, and
|
||||
anti-replay protection.Confidentiality is guaranteed by data encryption
|
||||
anti-replay protection. Confidentiality is guaranteed by data encryption
|
||||
with a root key derived from the platform chipset's unique key/secret.
|
||||
|
||||
The RPMB partition is a fixed-size partition (128KB ~ 16MB) in eMMC (or UFS)
|
||||
@ -178,11 +178,11 @@ key). See `Android Key and ID Attestation
|
||||
for details.
|
||||
|
||||
In Trusty, the secure storage architecture is shown in the figure below.
|
||||
In the secure world, there is a SS (Secure Storage) TA, which has an
|
||||
In the secure world, there is an SS (Secure Storage) TA, which has an
|
||||
RPMB authentication key (AuthKey, an HMAC key) and uses this AuthKey to
|
||||
talk with the RPMB controller in the eMMC device. Since the eMMC device
|
||||
is controlled by the normal world driver, Trusty needs to send an RPMB data
|
||||
frame ( encrypted by hardware-backed unique encryption key and signed by
|
||||
frame (encrypted by hardware-backed unique encryption key and signed by
|
||||
AuthKey) over Trusty IPC channel to Trusty SS proxy daemon, which then
|
||||
forwards RPMB data frame to physical RPMB partition in eMMC.
|
||||
|
||||
@ -260,7 +260,7 @@ One-VM, Two-Worlds
|
||||
==================
|
||||
|
||||
As previously mentioned, Trusty Secure Monitor could be any
|
||||
hypervisor. In the ACRN project the ACRN hypervisor will behave as the
|
||||
hypervisor. In the ACRN project, the ACRN hypervisor will behave as the
|
||||
secure monitor to schedule in/out Trusty secure world.
|
||||
|
||||
.. figure:: images/trustyacrn-image4.png
|
||||
@ -364,7 +364,7 @@ access is like this:
|
||||
#. If the verification is successful in the eMMC RPMB controller, the
|
||||
data will be written into the storage device.
|
||||
|
||||
The work flow of authenticated data read is very similar to this flow
|
||||
The workflow of authenticated data read is very similar to this flow
|
||||
above in reverse order.
|
||||
|
||||
Note that there are some security considerations in this architecture:
|
||||
@ -383,7 +383,7 @@ system security design. In practice, the Service VM designer and implementer
|
||||
should obey these following rules (and more):
|
||||
|
||||
- Make sure the Service VM is a closed system and doesn't allow users to
|
||||
install any unauthorized 3rd party software or components.
|
||||
install any unauthorized third-party software or components.
|
||||
- External peripherals are constrained.
|
||||
- Enable kernel-based hardening techniques, e.g., dm-verity (to ensure
|
||||
the integrity of DM and vBIOS/vOSloaders), kernel module signing,
|
||||
|
@ -104,9 +104,9 @@ pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):
|
||||
``kernel_mod_tag`` of VM1 in the
|
||||
``misc/vm_configs/scenarios/$(SCENARIO)/vm_configurations.c`` file.
|
||||
|
||||
The guest kernel command line arguments is configured in the
|
||||
The guest kernel command-line arguments are configured in the
|
||||
hypervisor source code by default if no ``$(VMx bootargs)`` is present.
|
||||
If ``$(VMx bootargs)`` is present, the default command line arguments
|
||||
If ``$(VMx bootargs)`` is present, the default command-line arguments
|
||||
are overridden by the ``$(VMx bootargs)`` parameters.
|
||||
|
||||
The ``$(Service VM bootargs)`` parameter in the multiboot command
|
||||
|
@ -19,7 +19,8 @@ Prerequisites
|
||||
*************
|
||||
- Use the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_.
|
||||
- Connect to the serial port as described in :ref:`Connecting to the serial port <connect_serial_port>`.
|
||||
- Install Ubuntu 18.04 on your SATA device or on the NVME disk of your NUC.
|
||||
- Install Ubuntu 18.04 on your SATA device or on the NVMe disk of your
|
||||
Intel NUC.
|
||||
|
||||
Update Ubuntu GRUB
|
||||
******************
|
||||
@ -49,11 +50,11 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
|
||||
|
||||
.. note:: The module ``/boot/zephyr.bin`` is the VM0 (Zephyr) kernel file.
|
||||
The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the
|
||||
``kernel_mod_tag`` of VM0 which is configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
|
||||
``kernel_mod_tag`` of VM0, which is configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
|
||||
file. The multiboot module ``/boot/bzImage`` is the Service VM kernel
|
||||
file. The param ``yyyyyy`` is the bzImage tag and must exactly match the
|
||||
``kernel_mod_tag`` of VM1 in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
|
||||
file. The kernel command line arguments used to boot the Service VM are
|
||||
file. The kernel command-line arguments used to boot the Service VM are
|
||||
located in the header file ``misc/vm_configs/scenarios/hybrid/vm_configurations.h``
|
||||
and are configured by the ``SOS_VM_BOOTARGS`` macro.
|
||||
The module ``/boot/ACPI_VM0.bin`` is the binary of ACPI tables for pre-launched VM0 (Zephyr).
|
||||
@ -73,8 +74,8 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
|
||||
|
||||
$ sudo update-grub
|
||||
|
||||
#. Reboot the NUC. Select the **ACRN hypervisor Hybrid Scenario** entry to boot
|
||||
the ACRN hypervisor on the NUC's display. The GRUB loader will boot the
|
||||
#. Reboot the Intel NUC. Select the **ACRN hypervisor Hybrid Scenario** entry to boot
|
||||
the ACRN hypervisor on the Intel NUC's display. The GRUB loader will boot the
|
||||
hypervisor, and the hypervisor will start the VMs automatically.
|
||||
|
||||
Hybrid Scenario Startup Checking
|
||||
@ -86,7 +87,7 @@ Hybrid Scenario Startup Checking
|
||||
|
||||
#. Use these steps to verify all VMs are running properly:
|
||||
|
||||
a. Use the ``vm_console 0`` to switch to VM0 (Zephyr) console. It will display **Hello world! acrn**.
|
||||
a. Use the ``vm_console 0`` to switch to VM0 (Zephyr) console. It will display ``Hello world! acrn``.
|
||||
#. Enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
|
||||
#. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console.
|
||||
#. Verify that VM1's Service VM can boot and that you can log in.
|
||||
|
@ -24,7 +24,7 @@ Prerequisites
|
||||
* NVMe disk
|
||||
* SATA disk
|
||||
* Storage device with USB interface (such as USB Flash
|
||||
or SATA disk connected with a USB3.0 SATA converter).
|
||||
or SATA disk connected with a USB 3.0 SATA converter).
|
||||
* Disable **Intel Hyper-Threading Technology** in the BIOS to avoid
|
||||
interference from logical cores for the logical partition scenario.
|
||||
* In the logical partition scenario, two VMs (running Ubuntu OS)
|
||||
@ -57,7 +57,8 @@ Update kernel image and modules of pre-launched VM
|
||||
|
||||
The last two commands build the bootable kernel image as
|
||||
``arch/x86/boot/bzImage``, and loadable kernel modules under the ``./out/``
|
||||
folder. Copy these files to a removable disk for installing on the NUC later.
|
||||
folder. Copy these files to a removable disk for installing on the
|
||||
Intel NUC later.
|
||||
|
||||
#. The current ACRN logical partition scenario implementation requires a
|
||||
multi-boot capable bootloader to boot both the ACRN hypervisor and the
|
||||
@ -68,10 +69,10 @@ Update kernel image and modules of pre-launched VM
|
||||
default, the GRUB bootloader is installed on the EFI System Partition
|
||||
(ESP) that's used to bootstrap the ACRN hypervisor.
|
||||
|
||||
#. After installing the Ubuntu OS, power off the NUC. Attach the
|
||||
SATA disk and storage device with the USB interface to the NUC. Power on
|
||||
the NUC and make sure it boots the Ubuntu OS from the NVMe SSD. Plug in
|
||||
the removable disk with the kernel image into the NUC and then copy the
|
||||
#. After installing the Ubuntu OS, power off the Intel NUC. Attach the
|
||||
SATA disk and storage device with the USB interface to the Intel NUC. Power on
|
||||
the Intel NUC and make sure it boots the Ubuntu OS from the NVMe SSD. Plug in
|
||||
the removable disk with the kernel image into the Intel NUC and then copy the
|
||||
loadable kernel modules built in Step 1 to the ``/lib/modules/`` folder
|
||||
on both the mounted SATA disk and storage device with USB interface. For
|
||||
example, assuming the SATA disk and storage device with USB interface are
|
||||
@ -101,8 +102,8 @@ Update ACRN hypervisor image
|
||||
|
||||
#. Before building the ACRN hypervisor, find the I/O address of the serial
|
||||
port and the PCI BDF addresses of the SATA controller and the USB
|
||||
controllers on the NUC. Enter the following command to get the
|
||||
I/O addresses of the serial port. The NUC supports one serial port, **ttyS0**.
|
||||
controllers on the Intel NUC. Enter the following command to get the
|
||||
I/O addresses of the serial port. The Intel NUC supports one serial port, **ttyS0**.
|
||||
Connect the serial port to the development workstation in order to access
|
||||
the ACRN serial console to switch between pre-launched VMs:
|
||||
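For example, commands like the following can be used to find the serial
port I/O address and the controller BDF addresses (output varies by
platform):

.. code-block:: none

   $ dmesg | grep ttyS0
   $ lspci | grep -i sata
   $ lspci | grep -i usb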
|
||||
@ -173,7 +174,7 @@ Update ACRN hypervisor image
|
||||
|
||||
#. Copy ``acrn.bin``, ``ACPI_VM1.bin`` and ``ACPI_VM0.bin`` to a removable disk.
|
||||
|
||||
#. Plug the removable disk into the NUC's USB port.
|
||||
#. Plug the removable disk into the Intel NUC's USB port.
|
||||
|
||||
#. Copy the ``acrn.bin``, ``ACPI_VM0.bin``, and ``ACPI_VM1.bin`` from the removable disk to the ``/boot``
|
||||
directory.
|
||||
@ -204,7 +205,7 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image
|
||||
.. note::
|
||||
Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter)
|
||||
(or use the device node directly) of the root partition (e.g., ``/dev/nvme0n1p2``). Hint: use ``sudo blkid``.
|
||||
The kernel command line arguments used to boot the pre-launched VMs is
|
||||
The kernel command-line arguments used to boot the pre-launched VMs are
|
||||
located in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.h`` header file
|
||||
and are configured by ``VMx_CONFIG_OS_BOOTARG_*`` macros (where x is the VM ID number and ``*`` are arguments).
|
||||
The multiboot2 module param ``XXXXXX`` is the bzImage tag and must exactly match the ``kernel_mod_tag``
|
||||
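For example, the ``sudo blkid`` output for the root partition might look
like this (the UUID and PARTUUID values are illustrative):

.. code-block:: none

   /dev/nvme0n1p2: UUID="3cb5b2f4-4b9a-4eea-b6e5-9e51f2d55bd8" TYPE="ext4" PARTUUID="0613cad6-02"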
@ -231,9 +232,9 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image
|
||||
|
||||
$ sudo update-grub
|
||||
|
||||
#. Reboot the NUC. Select the **ACRN hypervisor Logical Partition
|
||||
#. Reboot the Intel NUC. Select the **ACRN hypervisor Logical Partition
|
||||
Scenario** entry to boot the logical partition of the ACRN hypervisor on
|
||||
the NUC's display. The GRUB loader will boot the hypervisor, and the
|
||||
the Intel NUC's display. The GRUB loader will boot the hypervisor, and the
|
||||
hypervisor will automatically start the two pre-launched VMs.
|
||||
|
||||
Logical partition scenario startup checking
|
||||
|
@ -1,22 +1,22 @@
|
||||
.. _connect_serial_port:
|
||||
|
||||
Using the Serial Port on KBL NUC
|
||||
================================
|
||||
Using the Serial Port on KBL Intel NUC
|
||||
======================================
|
||||
|
||||
You can enable the serial console on the
|
||||
`KBL NUC <https://www.amazon.com/Intel-Business-Mini-Technology-BLKNUC7i7DNH1E/dp/B07CCQ8V4R>`_
|
||||
(NUC7i7DNH). The KBL NUC has a serial port header you can
|
||||
expose with a serial DB9 header cable. (The NUC has a punch out hole for
|
||||
`KBL Intel NUC <https://www.amazon.com/Intel-Business-Mini-Technology-BLKNUC7i7DNH1E/dp/B07CCQ8V4R>`_
|
||||
(NUC7i7DNH). The KBL Intel NUC has a serial port header you can
|
||||
expose with a serial DB9 header cable. (The Intel NUC has a punch out hole for
|
||||
mounting the serial connector.)
|
||||
|
||||
.. figure:: images/NUC-serial-port.jpg
|
||||
|
||||
KBL NUC with populated serial port punchout
|
||||
KBL Intel NUC with populated serial port punchout
|
||||
|
||||
You can `purchase
|
||||
<https://www.amazon.com/dp/B07BV1W6N8/ref=cm_sw_r_cp_ep_dp_wYm0BbABD5AK6>`_
|
||||
such a cable or you can build it yourself;
|
||||
refer to the `KBL NUC product specification
|
||||
refer to the `KBL Intel NUC product specification
|
||||
<https://www.intel.com/content/dam/support/us/en/documents/mini-pcs/nuc-kits/NUC7i7DN_TechProdSpec.pdf>`_
|
||||
as shown below:
|
||||
|
||||
|
@ -7,7 +7,7 @@ Run VxWorks as the User VM
|
||||
performance. This tutorial describes how to run VxWorks as the User VM on the ACRN hypervisor
|
||||
based on Clear Linux 29970 (ACRN tag v1.1).
|
||||
|
||||
.. note:: You'll need to be a WindRiver* customer and have purchased VxWorks to follow this tutorial.
|
||||
.. note:: You'll need to be a Wind River* customer and have purchased VxWorks to follow this tutorial.
|
||||
|
||||
Steps for Using VxWorks as User VM
|
||||
**********************************
|
||||
@ -15,7 +15,7 @@ Steps for Using VxWorks as User VM
|
||||
#. Build VxWorks
|
||||
|
||||
Follow the `VxWorks Getting Started Guide <https://docs.windriver.com/bundle/vxworks_7_tutorial_kernel_application_workbench_sr0610/page/rbu1422461642318.html>`_
|
||||
to setup the VxWorks development environment and build the VxWorks Image.
|
||||
to set up the VxWorks development environment and build the VxWorks Image.
|
||||
|
||||
.. note::
|
||||
The following kernel configuration should be **excluded**:
|
||||
@ -31,7 +31,7 @@ Steps for Using VxWorks as User VM
|
||||
* CONSOLE_BAUD_RATE = 115200
|
||||
* SYS_CLK_RATE_MAX = 1000
|
||||
|
||||
#. Build GRUB2 BootLoader Image
|
||||
#. Build GRUB2 bootloader image
|
||||
|
||||
We use grub-2.02 as the bootloader of VxWorks in this tutorial; other versions may also work.
|
||||
|
||||
@ -95,7 +95,7 @@ Steps for Using VxWorks as User VM
|
||||
#. Follow XXX to boot the ACRN Service VM.
|
||||
|
||||
.. important:: need instructions from deleted document (using sdc
|
||||
mode on the NUC)
|
||||
mode on the Intel NUC)
|
||||
|
||||
#. Boot VxWorks as User VM.
|
||||
|
||||
@ -107,7 +107,7 @@ Steps for Using VxWorks as User VM
|
||||
$ cp /usr/share/acrn/samples/nuc/launch_vxworks.sh .
|
||||
|
||||
You will also need to copy the ``VxWorks.img`` created in the VxWorks build environment into directory
|
||||
``vxworks`` (via, e.g. a USB stick or network).
|
||||
``vxworks`` (via, e.g. a USB drive or network).
|
||||
|
||||
Run the ``launch_vxworks.sh`` script to launch VxWorks as the User VM.
|
||||
|
||||
@ -134,7 +134,7 @@ Steps for Using VxWorks as User VM
|
||||
|
||||
->
|
||||
|
||||
Finally, you can type ``help`` to check whether the VxWorks works well.
|
||||
Finally, you can type ``help`` to see available VxWorks commands.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
|
@ -46,7 +46,7 @@ Download Win10 ISO and drivers
|
||||
|
||||
- Select **ISO-LTSC** and click **Continue**.
|
||||
- Complete the required info. Click **Continue**.
|
||||
- Select the language and **x86 64 bit**. Click **Download ISO** and save as ``windows10-LTSC-17763.iso``.
|
||||
- Select the language and **x86 64-bit**. Click **Download ISO** and save as ``windows10-LTSC-17763.iso``.
|
||||
|
||||
#. Download the `Intel DCH Graphics Driver
|
||||
<https://downloadmirror.intel.com/29074/a08/igfx_win10_100.7212.zip>`__.
|
||||
@ -57,8 +57,8 @@ Download Win10 ISO and drivers
|
||||
- Select **Download Package**. Key in **Oracle Linux 7.6** and click
|
||||
**Search**.
|
||||
- Click **DLP: Oracle Linux 7.6** to add to your Cart.
|
||||
- Click **Checkout** which is located at the top-right corner.
|
||||
- Under **Platforms/Language**, select **x86 64 bit**. Click **Continue**.
|
||||
- Click **Checkout**, which is located at the top-right corner.
|
||||
- Under **Platforms/Language**, select **x86 64-bit**. Click **Continue**.
|
||||
- Check **I accept the terms in the license agreement**. Click **Continue**.
|
||||
- From the list, right-click the item labeled **Oracle VirtIO Drivers
|
||||
Version for Microsoft Windows 1.x.x, yy MB**, and then **Save link as
|
||||
@ -129,8 +129,8 @@ Install Windows 10 by GVT-g
|
||||
.. figure:: images/windows_install_4.png
|
||||
:align: center
|
||||
|
||||
#. Click **Browser** and go to the drive that includes the virtio win
|
||||
drivers. Select **all** under **vio\\w10\\amd64**. Install the
|
||||
#. Click **Browser** and go to the drive that includes the virtio
|
||||
Windows drivers. Select **all** under **vio\\w10\\amd64**. Install the
|
||||
following drivers into the image:
|
||||
|
||||
- Virtio-balloon
|
||||
@ -201,7 +201,7 @@ ACRN Windows verified feature list
|
||||
|
||||
"IO Devices", "Virtio block as the boot device", "Working"
|
||||
, "AHCI as the boot device", "Working"
|
||||
, "AHCI cdrom", "Working"
|
||||
, "AHCI CD-ROM", "Working"
|
||||
, "Virtio network", "Working"
|
||||
, "Virtio input - mouse", "Working"
|
||||
, "Virtio input - keyboard", "Working"
|
||||
@ -235,7 +235,7 @@ Explanation for acrn-dm popular command lines
|
||||
You may need to change 0/2/0 to match the BDF of the VGA controller on your platform.
|
||||
|
||||
* **-s 3,ahci,hd:/root/img/win10.img**:
|
||||
This is the hard disk onto which to install Windows 10.
|
||||
This is the hard disk where Windows 10 should be installed.
|
||||
Make sure that the slot ID **3** points to your win10 img path.
|
||||
|
||||
* **-s 4,virtio-net,tap0**:
|
||||
@ -253,11 +253,11 @@ Explanation for acrn-dm popular command lines
|
||||
# cat /proc/bus/input/devices | grep mouse
|
||||
|
||||
* **-s 7,ahci,cd:/root/img/Windows10.iso**:
|
||||
This is the IOS image used to install Windows 10. It appears as a cdrom
|
||||
This is the ISO image used to install Windows 10. It appears as a CD-ROM
|
||||
device. Make sure that the slot ID **7** points to your win10 ISO path.
|
||||
|
||||
* **-s 8,ahci,cd:/root/img/winvirtio.iso**:
|
||||
This is cdrom device to install the virtio Windows driver. Make sure it points to your VirtIO ISO path.
|
||||
This is the CD-ROM device used to install the virtio Windows driver. Make sure it points to your VirtIO ISO path.
|
||||
|
||||
* **-s 9,passthru,0/14/0**:
|
||||
This passes through the USB controller to Windows.
|
||||
|
@ -1,11 +1,11 @@
|
||||
.. _using_xenomai_as_uos:
|
||||
|
||||
Run Xenomai as the User VM OS (Real-Time VM)
|
||||
Run Xenomai as the User VM OS (Real-time VM)
|
||||
############################################
|
||||
|
||||
`Xenomai`_ is a versatile real-time framework that provides support to user space applications that are seamlessly integrated into Linux environments.
|
||||
|
||||
This tutorial describes how to run Xenomai as the User VM OS (Real-Time VM) on the ACRN hypervisor.
|
||||
This tutorial describes how to run Xenomai as the User VM OS (real-time VM) on the ACRN hypervisor.
|
||||
|
||||
.. _Xenomai: https://gitlab.denx.de/Xenomai/xenomai/-/wikis/home
|
||||
|
||||
@ -60,21 +60,21 @@ Launch the RTVM
|
||||
|
||||
#. Prepare a dedicated disk (NVMe or SATA) for the RTVM; in this example, we use ``/dev/sda``.
|
||||
|
||||
a. Download the Preempt-RT VM image:
|
||||
a. Download the Preempt-RT VM image::
|
||||
|
||||
$ wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w01.1-140000p/preempt-rt-32030.img.xz
|
||||
$ wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w01.1-140000p/preempt-rt-32030.img.xz
|
||||
|
||||
#. Decompress the xz image:
|
||||
#. Decompress the xz image::
|
||||
|
||||
$ xz -d preempt-rt-32030.img.xz
|
||||
$ xz -d preempt-rt-32030.img.xz
|
||||
|
||||
#. Burn the Preempt-RT VM image onto the SATA disk:
|
||||
#. Burn the Preempt-RT VM image onto the SATA disk::
|
||||
|
||||
$ sudo dd if=preempt-rt-32030.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc
|
||||
$ sudo dd if=preempt-rt-32030.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc
|
||||
|
||||
#. Launch the RTVM via our script. Indicate the location of the root partition (sda3 in our example) and the kernel tarball::
|
||||
|
||||
$ sudo /usr/share/acrn/samples/nuc/launch_xenomai.sh -b /dev/sda3 -k /path/to/linux-4.19.59-xenomai-3.1-acrn+-x86.tar.gz
|
||||
$ sudo /usr/share/acrn/samples/nuc/launch_xenomai.sh -b /dev/sda3 -k /path/to/linux-4.19.59-xenomai-3.1-acrn+-x86.tar.gz
|
||||
|
||||
#. Verify that a login prompt displays::
|
||||
|
||||
@ -95,5 +95,6 @@ Launch the RTVM
|
||||
Install the Xenomai libraries and tools
|
||||
***************************************
|
||||
|
||||
To build and install Xenomai tools or its libraries in the RVTM, refer to the official `Xenomai documentation <https://gitlab.denx.de/Xenomai/xenomai/-/wikis/Installing_Xenomai_3#library-install>`_.
|
||||
To build and install Xenomai tools or its libraries in the RTVM, refer to the official
|
||||
`Xenomai documentation <https://gitlab.denx.de/Xenomai/xenomai/-/wikis/Installing_Xenomai_3#library-install>`_.
|
||||
Note that the current supported version is Xenomai-3.1 with the 4.19.59 kernel.
|
||||
|
@ -8,7 +8,7 @@ collaboration project that helps developers create custom Linux-based
|
||||
systems. The project provides a flexible set of tools and a space where
|
||||
embedded developers worldwide can share technologies, software stacks,
|
||||
configurations, and best practices used to create tailored Linux images
|
||||
for embedded and IOT devices, or anywhere a customized Linux OS is
|
||||
for embedded and IoT devices, or anywhere a customized Linux OS is
|
||||
needed.
|
||||
|
||||
Yocto Project layers support the inclusion of technologies, hardware
|
||||
|
@ -4,7 +4,7 @@ Run Zephyr as the User VM
|
||||
#########################
|
||||
|
||||
This tutorial describes how to run Zephyr as the User VM on the ACRN hypervisor. We are using
|
||||
Kaby Lake-based NUC (model NUC7i5DNHE) in this tutorial.
|
||||
Kaby Lake-based Intel NUC (model NUC7i5DNHE) in this tutorial.
|
||||
Other :ref:`ACRN supported platforms <hardware>` should work as well.
|
||||
|
||||
.. note::
|
||||
@ -24,7 +24,7 @@ Steps for Using Zephyr as User VM
|
||||
#. Build Zephyr
|
||||
|
||||
Follow the `Zephyr Getting Started Guide <https://docs.zephyrproject.org/latest/getting_started/>`_ to
|
||||
setup the Zephyr development environment.
|
||||
set up the Zephyr development environment.
|
||||
|
||||
The build process for the ACRN User VM target is similar to that for other boards. We will build the `Hello World
|
||||
<https://docs.zephyrproject.org/latest/samples/hello_world/README.html>`_ sample for ACRN:
|
||||
@ -40,8 +40,8 @@ Steps for Using Zephyr as User VM
|
||||
|
||||
#. Build grub2 bootloader image
|
||||
|
||||
We can build grub2 bootloader for Zephyr using ``boards/x86/common/scripts/build_grub.sh``
|
||||
which locate in `Zephyr Sourcecode <https://github.com/zephyrproject-rtos/zephyr>`_.
|
||||
We can build the grub2 bootloader for Zephyr using ``boards/x86/common/scripts/build_grub.sh``
|
||||
found in the `Zephyr source code <https://github.com/zephyrproject-rtos/zephyr>`_.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
@ -89,13 +89,14 @@ Steps for Using Zephyr as User VM
|
||||
$ sudo umount /mnt
|
||||
|
||||
You now have a virtual disk image with a bootable Zephyr in ``zephyr.img``. If the Zephyr build system is not
|
||||
the ACRN Service VM, then you will need to transfer this image to the ACRN Service VM (via, e.g, a USB stick or network )
|
||||
the ACRN Service VM, then you will need to transfer this image to the
|
||||
ACRN Service VM (via, e.g., a USB drive or network).
|
||||
|
||||
#. Follow XXX to boot "The ACRN Service OS" based on Clear Linux OS 28620
|
||||
(ACRN tag: acrn-2019w14.3-140000p)
|
||||
|
||||
.. important:: need to remove reference to Clear Linux and reference
|
||||
to deleted document (use SDC mode on the NUC)
|
||||
to deleted document (use SDC mode on the Intel NUC)
|
||||
|
||||
#. Boot Zephyr as User VM
|
||||
|
||||
|
@ -6,10 +6,13 @@ Enable vUART Configurations
|
||||
Introduction
|
||||
============
|
||||
|
||||
The virtual universal asynchronous receiver-transmitter (vUART) supports two functions: one is the console, the other is communication. vUART only works on a single function.
|
||||
The virtual universal asynchronous receiver-transmitter (vUART) supports
|
||||
two functions: one is the console, the other is communication. vUART
|
||||
can serve only one of these functions at a time.
|
||||
|
||||
Currently, only two vUART configurations are added to the
|
||||
``misc/vm_configs/scenarios/<xxx>/vm_configuration.c`` file, but you can change the value in it.
|
||||
``misc/vm_configs/scenarios/<xxx>/vm_configuration.c`` file, but you can
|
||||
change the value in it.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
@ -22,9 +25,9 @@ Currently, only two vUART configurations are added to the
|
||||
.addr.port_base = INVALID_COM_BASE,
|
||||
}
|
||||
|
||||
**vuart[0]** is initiated as the **console** port.
|
||||
``vuart[0]`` is initiated as the **console** port.
|
||||
|
||||
**vuart[1]** is initiated as a **communication** port.
|
||||
``vuart[1]`` is initiated as a **communication** port.
|
||||
|
||||
Console enable list
|
||||
===================
|
||||
@ -48,12 +51,15 @@ Console enable list
|
||||
How to configure a console port
|
||||
===============================
|
||||
|
||||
To enable the console port for a VM, change only the ``port_base`` and ``irq``. If the irq number is already in use in your system (``cat /proc/interrupt``), choose another irq number. If you set the ``.irq =0``, the vuart will work in polling mode.
|
||||
To enable the console port for a VM, change only the ``port_base`` and
|
||||
``irq``. If the IRQ number is already in use in your system (``cat
|
||||
/proc/interrupts``), choose another IRQ number. If you set ``.irq = 0``,
|
||||
the vUART will work in polling mode.
|
||||
|
||||
- COM1_BASE (0x3F8) + COM1_IRQ(4)
|
||||
- COM2_BASE (0x2F8) + COM2_IRQ(3)
|
||||
- COM3_BASE (0x3E8) + COM3_IRQ(6)
|
||||
- COM4_BASE (0x2E8) + COM4_IRQ(7)
|
||||
- ``COM1_BASE (0x3F8) + COM1_IRQ(4)``
|
||||
- ``COM2_BASE (0x2F8) + COM2_IRQ(3)``
|
||||
- ``COM3_BASE (0x3E8) + COM3_IRQ(6)``
|
||||
- ``COM4_BASE (0x2E8) + COM4_IRQ(7)``
|
||||
|
||||
Example:
|
||||
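As a sketch, a console vUART entry in ``vm_configuration.c`` might look
like this (the field names follow the structure shown above; the values
are illustrative):

.. code-block:: none

   .vuart[0] = {
           .type = VUART_LEGACY_PIO,
           .addr.port_base = COM1_BASE,
           .irq = COM1_IRQ,
   },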
|
||||
@ -70,11 +76,12 @@ How to configure a communication port
|
||||
|
||||
To enable the communication port, configure ``vuart[1]`` in the two VMs that want to communicate.
|
||||
|
||||
The port_base and irq should differ from the ``vuart[0]`` in the same VM.
|
||||
The ``port_base`` and IRQ should differ from those of ``vuart[0]`` in the same VM.
|
||||
|
||||
**t_vuart.vm_id** is the target VM's vm_id, starting from 0 (0 means VM0).
|
||||
|
||||
**t_vuart.vuart_id** is the target vuart index in the target VM. start from 1. (1 means vuart[1])
|
||||
**t_vuart.vuart_id** is the target vuart index in the target VM, starting
|
||||
from 1 (1 means ``vuart[1]``).
|
||||
|
||||
Example:
|
||||
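As a sketch, a communication vUART entry in VM0's configuration that
targets ``vuart[1]`` of VM1 might look like this (the values are
illustrative):

.. code-block:: none

   .vuart[1] = {
           .type = VUART_LEGACY_PIO,
           .addr.port_base = COM2_BASE,
           .irq = COM2_IRQ,
           .t_vuart.vm_id = 1U,
           .t_vuart.vuart_id = 1U,
   },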
|
||||
@ -120,16 +127,19 @@ Communication vUART enable list
|
||||
Launch script
|
||||
=============
|
||||
|
||||
- *-s 1:0,lpc -l com1,stdio*
|
||||
This option is only needed for WaaG and VxWorks (and also when using OVMF). They depend on the ACPI table, and only ``acrn-dm`` can provide the ACPI table for UART.
|
||||
- ``-s 1:0,lpc -l com1,stdio``
|
||||
This option is only needed for WaaG and VxWorks (and also when using
|
||||
OVMF). They depend on the ACPI table, and only ``acrn-dm`` can provide
|
||||
the ACPI table for UART.
|
||||
|
||||
- *-B " ....,console=ttyS0, ..."*
|
||||
- ``-B " ....,console=ttyS0, ..."``
|
||||
Add this to the kernel-based system.
|
||||
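Putting the two options together, the relevant fragment of a launch
script might look like this (the other ``acrn-dm`` arguments are omitted
and the root device is illustrative):

.. code-block:: none

   acrn-dm ... \
      -s 1:0,lpc -l com1,stdio \
      -B "root=/dev/vda1 rw rootwait console=ttyS0" \
      ...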
|
||||
Test the communication port
|
||||
===========================
|
||||
|
||||
After you have configured the communication port in hypervisor, you can access the corresponding port. For example, in Clear Linux:
|
||||
After you have configured the communication port in the hypervisor, you can
|
||||
access the corresponding port. For example, in Clear Linux:
|
||||
|
||||
1. With ``echo`` and ``cat``
|
||||
|
||||
@ -137,20 +147,26 @@ After you have configured the communication port in hypervisor, you can access t
|
||||
|
||||
On VM2: ``# echo "test test" > /dev/ttyS1``
|
||||
|
||||
you can find the message from VM1 ``/dev/ttyS1``.
|
||||
You can then read the message from VM1's ``/dev/ttyS1``.
|
||||
|
||||
If you are not sure which port is the communication port, you can run ``dmesg | grep ttyS`` under the Linux shell to check the base address. If it matches what you have set in the ``vm_configuration.c`` file, it is the correct port.
|
||||
If you are not sure which port is the communication port, you can run
|
||||
``dmesg | grep ttyS`` under the Linux shell to check the base address.
|
||||
If it matches what you have set in the ``vm_configuration.c`` file, it
|
||||
is the correct port.
|
||||
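A matching ``dmesg`` line looks similar to the following; the I/O
address and IRQ should correspond to your ``vm_configuration.c``
settings (the values shown are illustrative):

.. code-block:: none

   [    1.442789] serial8250: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A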
|
||||
|
||||
#. With minicom
|
||||
|
||||
Run ``minicom -D /dev/ttyS1`` on both VM1 and VM2 and enter ``test`` in VM1's minicom. The message should appear in VM2's minicom. Disable flow control in minicom.
|
||||
Run ``minicom -D /dev/ttyS1`` on both VM1 and VM2 and enter ``test``
|
||||
in VM1's minicom. The message should appear in VM2's minicom. Disable
|
||||
flow control in minicom.
|
||||
|
||||
|
||||
#. Limitations
|
||||
|
||||
- The message cannot be longer than 256 bytes.
|
||||
- This cannot be used to transfer files because flow control is not supported so data may be lost.
|
||||
- This cannot be used to transfer files because flow control is
|
||||
not supported so data may be lost.
|
||||
|
||||
vUART design
|
||||
============
|
||||
@ -178,19 +194,23 @@ This adds ``com1 (0x3f8)`` and ``com2 (0x2f8)`` modules in the Guest VM, includi
|
||||
|
||||
**Data Flows**
|
||||
|
||||
Three different data flows exist based on how the post-launched VM is started, as shown in the diagram below.
|
||||
Three different data flows exist based on how the post-launched VM is
|
||||
started, as shown in the diagram below:
|
||||
|
||||
Figure 1 data flow: The post-launched VM is started with the vUART enabled in the hypervisor configuration file only.
|
||||
|
||||
Figure 2 data flow: The post-launched VM is started with the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio`` only.
|
||||
|
||||
Figure 3 data flow: The post-launched VM is started with both vUART enabled and the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio``.
|
||||
* Figure 1 data flow: The post-launched VM is started with the vUART
|
||||
enabled in the hypervisor configuration file only.
|
||||
* Figure 2 data flow: The post-launched VM is started with the
|
||||
``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio`` only.
|
||||
* Figure 3 data flow: The post-launched VM is started with both vUART
|
||||
enabled and the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio``.
|
||||
|
||||
.. figure:: images/vuart-config-post-launch.png
|
||||
:align: center
|
||||
:name: Post-Launched VMs
|
||||
|
||||
.. note::
|
||||
For operating systems such as VxWorks and Windows that depend on the ACPI table to probe the uart driver, adding the vuart configuration in the hypervisor is not sufficient. Currently, we recommend that you use the configuration in the figure 3 data flow. This may be refined in the future.
|
||||
|
||||
|
||||
For operating systems such as VxWorks and Windows that depend on the
|
||||
ACPI table to probe the UART driver, adding the vUART configuration in
|
||||
the hypervisor is not sufficient. Currently, we recommend that you use
|
||||
the configuration in the figure 3 data flow. This may be refined in the
|
||||
future.
|
||||
|
@ -23,7 +23,7 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
|
||||
default value.
|
||||
|
||||
* - :kbd:`-B, --bootargs <bootargs>`
|
||||
- Set the User VM kernel command line arguments.
|
||||
- Set the User VM kernel command-line arguments.
|
||||
The maximum length is 1023.
|
||||
The bootargs string will be passed to the kernel as its cmdline.
|
||||
|
||||
@ -326,16 +326,16 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
|
||||
- This option is to create a VM with the local APIC (LAPIC) passed-through.
|
||||
With this option, a VM is created with ``LAPIC_PASSTHROUGH`` and
|
||||
``IO_COMPLETION_POLLING`` mode. This option is typically used for hard
|
||||
realtime scenarios.
|
||||
real-time scenarios.
|
||||
|
||||
By default, this option is not enabled.
|
||||
|
||||
* - :kbd:`--rtvm`
|
||||
- This option is used to create a VM with realtime attributes.
|
||||
- This option is used to create a VM with real-time attributes.
|
||||
With this option, a VM is created with ``GUEST_FLAG_RT`` and
|
||||
``GUEST_FLAG_IO_COMPLETION_POLLING`` mode. This kind of VM is
|
||||
generally used for soft realtime scenarios (without ``--lapic_pt``) or
|
||||
hard realtime scenarios (with ``--lapic_pt``). With ``GUEST_FLAG_RT``,
|
||||
generally used for soft real-time scenarios (without ``--lapic_pt``) or
|
||||
hard real-time scenarios (with ``--lapic_pt``). With ``GUEST_FLAG_RT``,
|
||||
the Service VM cannot interfere with this kind of VM when it is
|
||||
running. It can only be powered off from inside the VM itself.
|
||||
|
||||
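As a sketch, these options combine in an ``acrn-dm`` invocation for a
hard real-time VM roughly as follows (the slot assignments, image path,
and VM name are illustrative):

.. code-block:: none

   acrn-dm -m 2048M \
      -s 0:0,hostbridge \
      -s 1:0,lpc -l com1,stdio \
      -s 3,virtio-blk,/root/rtvm.img \
      --rtvm --lapic_pt \
      -B "root=/dev/vda1 rw rootwait console=ttyS0" \
      hard_rtvm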
|
@ -13,7 +13,7 @@ The ACRN hypervisor supports the following parameter:
|
||||
+=================+=============================+========================================================================================+
|
||||
| | disabled | This disables the serial port completely. |
|
||||
| +-----------------------------+----------------------------------------------------------------------------------------+
|
||||
| uart= | bdf@<BDF value> | This sets the PCI serial port based on its BDF. e.g. bdf@0:18.1 |
|
||||
| ``uart=`` | bdf@<BDF value> | This sets the PCI serial port based on its BDF. e.g. ``bdf@0:18.1`` |
|
||||
| +-----------------------------+----------------------------------------------------------------------------------------+
|
||||
| | port@<port address> | This sets the serial port address. |
|
||||
+-----------------+-----------------------------+----------------------------------------------------------------------------------------+
|
||||
@ -21,14 +21,14 @@ The ACRN hypervisor supports the following parameter:
|
||||
The Generic hypervisor parameters are specified in the GRUB multiboot/multiboot2 command.
|
||||
For example:
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 5
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 5
|
||||
|
||||
menuentry 'Boot ACRN hypervisor from multiboot' {
|
||||
insmod part_gpt
|
||||
insmod ext2
|
||||
echo 'Loading ACRN hypervisor ...'
|
||||
multiboot --quirk-modules-after-kernel /boot/acrn.32.out uart=bdf@0:18.1
|
||||
module /boot/bzImage Linux_bzImage
|
||||
module /boot/bzImage2 Linux_bzImage2
|
||||
}
|
||||
menuentry 'Boot ACRN hypervisor from multiboot' {
|
||||
insmod part_gpt
|
||||
insmod ext2
|
||||
echo 'Loading ACRN hypervisor ...'
|
||||
multiboot --quirk-modules-after-kernel /boot/acrn.32.out uart=bdf@0:18.1
|
||||
module /boot/bzImage Linux_bzImage
|
||||
module /boot/bzImage2 Linux_bzImage2
|
||||
}
|
||||
|
@ -22,7 +22,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
- Description
|
||||
- Usage example
|
||||
|
||||
* - module_blacklist
|
||||
* - ``module_blacklist``
|
||||
- Service VM
|
||||
- A comma-separated list of modules that should not be loaded.
|
||||
Useful to debug or work
|
||||
@ -31,14 +31,14 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
module_blacklist=dwc3_pci
|
||||
|
||||
* - no_timer_check
|
||||
* - ``no_timer_check``
|
||||
- Service VM,User VM
|
||||
- Disables the code which tests for broken timer IRQ sources.
|
||||
- ::
|
||||
|
||||
no_timer_check
|
||||
|
||||
* - console
|
||||
* - ``console``
|
||||
- Service VM,User VM
|
||||
- Output console device and options.
|
||||
|
||||
@ -64,7 +64,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
console=ttyS0
|
||||
console=hvc0
|
||||
|
||||
* - loglevel
|
||||
* - ``loglevel``
|
||||
- Service VM
|
||||
- All Kernel messages with a loglevel less than the console loglevel will
|
||||
be printed to the console. The loglevel can also be changed with
|
||||
@ -95,7 +95,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
loglevel=7
|
||||
|
||||
* - ignore_loglevel
|
||||
* - ``ignore_loglevel``
|
||||
- User VM
|
||||
- Ignoring loglevel setting will print **all**
|
||||
kernel messages to the console. Useful for debugging.
|
||||
@ -107,7 +107,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
ignore_loglevel
|
||||
|
||||
|
||||
* - log_buf_len
|
||||
* - ``log_buf_len``
|
||||
- User VM
|
||||
- Sets the size of the printk ring buffer,
|
||||
in bytes. n must be a power of two and greater
|
||||
@ -120,7 +120,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
log_buf_len=16M
|
||||
|
||||
* - consoleblank
|
||||
* - ``consoleblank``
|
||||
- Service VM,User VM
|
||||
- The console blank (screen saver) timeout in
|
||||
seconds. Defaults to 600 (10 minutes). A value of 0
|
||||
@ -129,7 +129,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
consoleblank=0
|
||||
|
||||
* - rootwait
|
||||
* - ``rootwait``
|
||||
- Service VM,User VM
|
||||
- Wait (indefinitely) for root device to show up.
|
||||
Useful for devices that are detected asynchronously
|
||||
@ -138,7 +138,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
rootwait
|
||||
|
||||
* - root
|
||||
* - ``root``
|
||||
- Service VM,User VM
|
||||
- Define the root filesystem
|
||||
|
||||
@ -165,14 +165,14 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
root=/dev/vda2
|
||||
root=PARTUUID=00112233-4455-6677-8899-AABBCCDDEEFF
|
||||
|
||||
* - rw
|
||||
* - ``rw``
|
||||
- Service VM,User VM
|
||||
- Mount root device read-write on boot
|
||||
- Mount root device read/write on boot
|
||||
- ::
|
||||
|
||||
rw
|
||||
|
||||
* - tsc
|
||||
* - ``tsc``
|
||||
- User VM
|
||||
- Disable clocksource stability checks for TSC.
|
||||
|
||||
@ -180,14 +180,14 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
``reliable``:
|
||||
Mark TSC clocksource as reliable, and disables clocksource
|
||||
verification at runtime, and the stability checks done at bootup.
|
||||
verification at runtime, and the stability checks done at boot.
|
||||
Used to enable high-resolution timer mode on older hardware, and in
|
||||
virtualized environments.
|
||||
- ::
|
||||
|
||||
tsc=reliable
|
||||
|
||||
* - cma
|
||||
* - ``cma``
|
||||
- Service VM
|
||||
- Sets the size of the kernel global memory area for
|
||||
contiguous memory allocations, and optionally the
|
||||
@ -199,7 +199,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
cma=64M@0
|
||||
|
||||
* - hvlog
|
||||
* - ``hvlog``
|
||||
- Service VM
|
||||
- Sets the guest physical address and size of the dedicated hypervisor
|
||||
log ring buffer between the hypervisor and Service VM.
|
||||
@ -216,26 +216,26 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
You should enable ASLR on the SOS. This ensures that when guest Linux
|
||||
relocates its kernel image, it will avoid this buffer address.
|
||||
|
||||
|
||||
- ::
|
||||
|
||||
hvlog=2M@0xe00000
|
||||
|
||||
* - memmap
|
||||
* - ``memmap``
|
||||
- Service VM
|
||||
- Mark specific memory as reserved.
|
||||
|
||||
``memmap=nn[KMG]$ss[KMG]``
|
||||
Region of memory to be reserved is from ``ss`` to ``ss+nn``,
|
||||
using ``K``, ``M``, and ``G`` representing Kilobytes, Megabytes, and
|
||||
Gigabytes, respectively.
|
||||
using ``K``, ``M``, and ``G`` representing kilobytes, megabytes, and
|
||||
gigabytes, respectively.
|
||||
- ::
|
||||
|
||||
memmap=0x400000$0xa00000
|
||||
|
||||
* - ramoops.mem_address
|
||||
ramoops.mem_size
|
||||
ramoops.console_size
|
||||
* - ``ramoops.mem_address``
|
||||
``ramoops.mem_size``
|
||||
``ramoops.console_size``
|
||||
- Service VM
|
||||
- Ramoops is an oops/panic logger that writes its logs to RAM
|
||||
before the system crashes. Ramoops uses a predefined memory area
|
||||
@ -252,21 +252,21 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
ramoops.console_size=0x200000
|
||||
|
||||
|
||||
* - reboot_panic
|
||||
* - ``reboot_panic``
|
||||
- Service VM
|
||||
- Reboot in case of panic
|
||||
|
||||
The comma-delimited parameters are:
|
||||
|
||||
reboot_mode:
|
||||
``w`` (warm), ``s`` (soft), ``c`` (cold), or ``g`` (gpio)
|
||||
``w`` (warm), ``s`` (soft), ``c`` (cold), or ``g`` (GPIO)
|
||||
|
||||
reboot_type:
|
||||
``b`` (bios), ``a`` (acpi), ``k`` (kbd), ``t`` (triple), ``e`` (efi),
|
||||
or ``p`` (pci)
|
||||
``b`` (BIOS), ``a`` (ACPI), ``k`` (kbd), ``t`` (triple), ``e`` (EFI),
|
||||
or ``p`` (PCI)
|
||||
|
||||
reboot_cpu:
|
||||
``s###`` (smp, and processor number to be used for rebooting)
|
||||
``s###`` (SMP, and processor number to be used for rebooting)
|
||||
|
||||
reboot_force:
|
||||
``f`` (force), or not specified.
|
||||
@ -274,17 +274,17 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
reboot_panic=p,w
|
||||
|
||||
* - maxcpus
|
||||
* - ``maxcpus``
|
||||
- User VM
|
||||
- Maximum number of processors that an SMP kernel
|
||||
will bring up during bootup.
|
||||
will bring up during boot.
|
||||
|
||||
``maxcpus=n`` where n >= 0 limits
|
||||
the kernel to bring up ``n`` processors during system bootup.
|
||||
the kernel to bring up ``n`` processors during system boot.
|
||||
Giving n=0 is a special case, equivalent to ``nosmp``, which
|
||||
also disables the I/O APIC.
|
||||
|
||||
After bootup, you can bring up additional plugged CPUs by executing
|
||||
After booting, you can bring up additional plugged CPUs by executing
|
||||
|
||||
``echo 1 > /sys/devices/system/cpu/cpuX/online``
|
||||
- ::
|
||||
@ -298,7 +298,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
nohpet
|
||||
|
||||
* - intel_iommu
|
||||
* - ``intel_iommu``
|
||||
- User VM
|
||||
- Intel IOMMU driver (DMAR) option
|
||||
|
||||
@ -351,7 +351,7 @@ section below has more details on a few select parameters.
|
||||
|
||||
* - i915.enable_initial_modeset
|
||||
- Service VM
|
||||
- On MRB, value must be ``1``. On NUC or UP2 boards, value must be
|
||||
- On MRB, value must be ``1``. On Intel NUC or UP2 boards, value must be
|
||||
``0``. See :ref:`i915-enable-initial-modeset`.
|
||||
- ::
|
||||
|
||||
|
@ -14,8 +14,8 @@ acrn-kernel, install them on your target system, and boot running ACRN.
|
||||
|
||||
.. rst-class:: numbered-step
|
||||
|
||||
Set up Pre-requisites
|
||||
*********************
|
||||
Set up prerequisites
|
||||
********************
|
||||
|
||||
Your development system should be running Ubuntu
|
||||
18.04 and be connected to the internet. (You'll be installing software
|
||||
@ -66,7 +66,7 @@ Here's the default ``release.json`` configuration:
|
||||
Run the package-building script
|
||||
*******************************
|
||||
|
||||
The ``install_uSoS.py`` python script does all the work to install
|
||||
The ``install_uSoS.py`` Python script does all the work to install
|
||||
needed tools (such as make, gnu-efi, libssl-dev, libpciaccess-dev,
|
||||
uuid-dev, and more). It also verifies that tool versions (such as the
|
||||
gcc compiler) are appropriate (as configured in the ``release.json``
|
||||
@ -89,7 +89,7 @@ When done, it creates two Debian packages:
|
||||
* ``acrn_kernel_deb_package.deb`` with the ACRN-patched Linux kernel.
|
||||
|
||||
You'll need to copy these two files onto your target system, either via
|
||||
the network or simply by using a thumbdrive.
|
||||
the network or simply by using a USB drive.
|
||||
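For example, to copy them over the network (the host name and
destination path are illustrative):

.. code-block:: none

   $ scp acrn_deb_package.deb acrn_kernel_deb_package.deb user@target:~/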
|
||||
|
||||
.. rst-class:: numbered-step
|
||||
@ -112,7 +112,7 @@ Install Debian packages on your target system
|
||||
*********************************************
|
||||
|
||||
Copy the Debian packages you created on your development system, for
|
||||
example, using a thumbdrive. Then install the ACRN Debian package::
|
||||
example, using a USB drive. Then install the ACRN Debian package::
|
||||
|
||||
sudo dpkg -i acrn_deb_package.deb
|
||||
|
||||
@ -228,4 +228,4 @@ by looking at the dmesg log:
|
||||
|
||||
4.python3 compile_iasl.py
|
||||
=========================
|
||||
this scriptrs is help compile iasl and cp to /usr/sbin
|
||||
This script helps compile ``iasl`` and copies it to ``/usr/sbin``.
|
||||
|
@ -29,7 +29,7 @@ The ``ACRN-Crashlog`` tool depends on the following libraries
|
||||
- libblkid
|
||||
- e2fsprogs
|
||||
|
||||
Refer to the :ref:`getting_started` for instructions on how to set-up your
|
||||
Refer to the :ref:`getting_started` for instructions on how to set up your
|
||||
build environment, and follow the instructions below to build and configure the
|
||||
``ACRN-Crashlog`` tool.
|
||||
|
||||
@ -163,7 +163,7 @@ telemetrics-client on the system:
|
||||
of the telemetrics-client: it runs as a daemon autostarted when the system
|
||||
boots, and sends the crashlog path to the telemetrics-client that records
|
||||
events of interest and reports them to the backend using ``telemd``, the
|
||||
telemetrics daemon. The work flow of ``acrnprobe`` and
|
||||
telemetrics daemon. The workflow of ``acrnprobe`` and
|
||||
telemetrics-client is shown in :numref:`crashlog-workflow`:
|
||||
|
||||
.. graphviz:: images/crashlog-workflow.dot
|
||||
@ -217,7 +217,7 @@ The source code structure:
|
||||
like ``ipanic``, ``pstore``, etc. For the log on AaaG, it's collected by
|
||||
monitoring changes of related folders on the sos image, like
|
||||
``/data/logs/``. ``acrnprobe`` also provides a flexible way to allow users to
|
||||
configure which crash or event they want to collect through the xml file
|
||||
configure which crash or event they want to collect through the XML file
|
||||
easily.
|
||||
- ``common``: some utils for logs, command and string.
|
||||
- ``data``: configuration file, service files and shell script.
|
||||
|
@ -42,7 +42,7 @@ Architecture
|
||||
Terms
|
||||
=====
|
||||
|
||||
- channel :
|
||||
channel
|
||||
Channel represents a way of detecting the system's events. There are 3
|
||||
channels:
|
||||
|
||||
@ -50,33 +50,33 @@ Terms
|
||||
+ polling: run a detecting job with fixed time interval.
|
||||
+ inotify: monitor the change of file or dir.
|
||||
|
||||
- trigger :
|
||||
trigger
|
||||
Essentially, trigger represents one section of content. It could be
|
||||
a file's content, a directory's content, or a memory's content which can be
|
||||
obtained. By monitoring it ``acrnprobe`` could detect certain events which
|
||||
happened in the system.
|
||||
a file's content, a directory's content, or a memory's content, which can be
|
||||
obtained. By monitoring it, ``acrnprobe`` could detect certain events
|
||||
that happened in the system.
|
||||
|
||||
- crash :
|
||||
crash
|
||||
A subtype of event. It often corresponds to a crash of programs, system, or
|
||||
hypervisor. ``acrnprobe`` detects it and reports it as ``CRASH``.
|
||||
|
||||
- info :
|
||||
info
|
||||
A subtype of event. ``acrnprobe`` detects it and reports it as ``INFO``.
|
||||
|
||||
- event queue :
|
||||
event queue
|
||||
There is a global queue to receive all events detected.
|
||||
Generally, events are enqueued in a channel and dequeued in the event handler.
|
||||
|
||||
- event handler :
|
||||
event handler
|
||||
The event handler is a thread that handles events detected by a channel.
|
||||
It's awakened by an enqueued event.
|
||||
|
||||
- sender :
|
||||
sender
|
||||
The sender corresponds to an exit point for an event.
|
||||
There are two senders:
|
||||
|
||||
+ Crashlog is responsible for collecting logs and saving it locally.
|
||||
+ Telemd is responsible for sending log records to telemetrics client.
|
||||
+ ``crashlog`` is responsible for collecting logs and saving them locally.
|
||||
+ ``telemd`` is responsible for sending log records to the telemetrics client.
|
||||
|
||||
Description
|
||||
===========
|
||||
@ -86,30 +86,30 @@ As a log collection mechanism to record critical events on the platform,
|
||||
|
||||
1. detect event
|
||||
|
||||
From experience, the occurrence of an system event is usually accompanied
|
||||
From experience, the occurrence of a system event is usually accompanied
|
||||
by some effects. The effects could be a generated file, an error message in
|
||||
kernel's log, or a system reboot. To get these effects, for some of them we
|
||||
can monitor a directory, for other of them we might need to do a detection
|
||||
can monitor a directory, for others, we might need to do detection
|
||||
in a time loop.
|
||||
*So we implement the channel, which represents a common method of detection.*
|
||||
So we implement the channel, which represents a common method of detection.
|
||||
|
||||
2. analyze event and determine the event type
|
||||
|
||||
Generally, a specific effect correspond to a particular type of events.
|
||||
Generally, a specific effect corresponds to a particular type of events.
|
||||
However, it is the icing on the cake for analyzing the detailed event types
|
||||
according to some phenomena. *Crash reclassify is implemented for this
|
||||
purpose.*
|
||||
according to some phenomena. Crash reclassifying is implemented for this
|
||||
purpose.
|
||||
|
||||
3. collect information for detected events
|
||||
|
||||
This is for debug purpose. Events without information are meaningless,
|
||||
and developers need to use this information to improve their system. *Sender
|
||||
crashlog is implemented for this purpose.*
|
||||
and developers need to use this information to improve their system. Sender
|
||||
``crashlog`` is implemented for this purpose.
|
||||
|
||||
4. archive these information as logs, and generate records
|
||||
|
||||
There must be a central place to tell the user what happened in the system.
|
||||
*Sender telemd is implemented for this purpose.*
|
||||
Sender ``telemd`` is implemented for this purpose.
|
||||
|
||||
Diagram
|
||||
=======
|
||||
@ -172,7 +172,7 @@ Source files
|
||||
This file provides the function to get the system reboot reason from the kernel
|
||||
command line.
|
||||
- android_events.c
|
||||
Sync events detected by android crashlog.
|
||||
Sync events detected by Android ``crashlog``.
|
||||
- loop.c
|
||||
This file provides interfaces to read from an image.
|
||||
|
||||
|
@ -81,7 +81,7 @@ Other properties
|
||||
|
||||
- ``inherit``:
|
||||
Specify a parent for a certain crash.
|
||||
The child crash will inherit all configurations from the specified (by id)
|
||||
The child crash will inherit all configurations from the specified (by ID)
|
||||
crash. These inherited configurations could be overwritten by new ones.
|
||||
Also, this property helps build the crash tree in ``acrnprobe``.
|
||||
- ``expression``:
|
||||
@ -90,7 +90,7 @@ Other properties
|
||||
Crash tree in acrnprobe
|
||||
***********************
|
||||
|
||||
There could be a parent-child relationship between crashes. Refer to the
|
||||
There could be a parent/child relationship between crashes. Refer to the
|
||||
diagrams below: crash B and D are the children of crash A, because crash B and
|
||||
D inherit from crash A, and crash C is the child of crash B.
|
||||
|
||||
@ -260,10 +260,10 @@ Example:
|
||||
* ``channel``:
|
||||
The ``channel`` name to get the virtual machine events.
|
||||
* ``interval``:
|
||||
Time interval in seconds of polling vm's image.
|
||||
Time interval in seconds for polling the VM's image.
|
||||
* ``syncevent``:
|
||||
Event type that ``acrnprobe`` will synchronize from the virtual machine's ``crashlog``.
|
||||
User could specify different types by id. The event type can also be
|
||||
User could specify different types by ID. The event type can also be
|
||||
indicated by ``type/subtype``.
|
||||
|
||||
Log
|
||||
@ -369,6 +369,6 @@ Example:
|
||||
The name of the channel that info uses.
|
||||
* ``log``:
|
||||
The log to be collected. The value is the configured name in log module. User
|
||||
could specify different logs by id.
|
||||
could specify different logs by ID.
|
||||
|
||||
.. _`XML standard`: http://www.w3.org/TR/REC-xml
|
||||
|
@ -23,7 +23,7 @@ the client. The client is responsible for collecting crash information
|
||||
and saving it in the crashlog file. After the saving work is done, the
|
||||
client notifies the server, and the server cleans up.
|
||||
|
||||
The work flow diagram:
|
||||
The workflow diagram:
|
||||
|
||||
::
|
||||
|
||||
|
@ -78,7 +78,7 @@ Options:
|
||||
- input file name
|
||||
|
||||
* - :kbd:`-o, --ofile=string`
|
||||
- output filename
|
||||
- output file name
|
||||
|
||||
* - :kbd:`-f, --frequency=unsigned_int`
|
||||
- TSC frequency in MHz
|
||||
@ -155,7 +155,7 @@ data to your Linux system, and run the analysis tool.
|
||||
-o /home/xxxx/trace_data/20171115-101605/cpu0 --vm_exit --irq
|
||||
|
||||
- The analysis report is written to stdout, or to a CSV file if
|
||||
a filename is specified using ``-o filename``.
|
||||
a file name is specified using ``-o filename``.
|
||||
- The scripts require Python3.
|
||||
|
||||
Build and Install
|
||||
|