doc: update release_1.6 with master docs
Another update for the release_1.6 branch with changed docs from the master branch.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
This commit is contained in:
parent 5632deadc9
commit ad3d39a274

doc/asa.rst | 16
@@ -3,6 +3,22 @@
Security Advisory
#################

+Addressed in ACRN v1.6.1
+************************
+
+We recommend that all developers upgrade to this v1.6.1 release (or later), which
+addresses the following security issue that was discovered in previous releases:
+
+------
+
+- Service VM kernel Crashes When Fuzzing HC_ASSIGN_PCIDEV and HC_DEASSIGN_PCIDEV
+  A NULL pointer dereference, due to an invalid address for the PCI device to be
+  assigned or de-assigned, may result in a kernel crash. The return value of
+  ``pci_find_bus()`` shall be validated before use in ``update_assigned_vf_state()``.
+
+**Affected Release:** v1.6.
+
Addressed in ACRN v1.6
**********************

@@ -65,6 +65,7 @@ Enable ACRN Features

   tutorials/run_kata_containers
   tutorials/trustyACRN
   tutorials/rtvm_workload_design_guideline
+  tutorials/setup_openstack_libvirt

Debug
*****

@@ -6,9 +6,9 @@ Getting Started Guide for ACRN Industry Scenario
Verified version
****************

-- Clear Linux version: **32680**
-- ACRN-hypervisor tag: **v1.6 (acrn-2020w12.5-140000p)**
-- ACRN-Kernel (Service VM kernel): **4.19.97-104.iot-lts2018-sos**
+- Clear Linux version: **33050**
+- ACRN-hypervisor tag: **v1.6.1 (acrn-2020w18.4-140000p)**
+- ACRN-Kernel (Service VM kernel): **4.19.120-108.iot-lts2018-sos**

Prerequisites
*************

@@ -19,13 +19,13 @@ for the RTVM.

- Intel Whiskey Lake (aka WHL) NUC platform with two disks inside
  (refer to :ref:`the tables <hardware_setup>` for detailed information).
-- `com1` is the serial port on WHL NUC.
+- **com1** is the serial port on WHL NUC.
  If you are still using the KBL NUC and trying to enable the serial port on it, navigate to the
  :ref:`troubleshooting section <connect_serial_port>` that discusses how to prepare the cable.
-- Follow the steps below to install Clear Linux OS (ver: 32680) onto the NVMe disk of the WHL NUC.
+- Follow the steps below to install Clear Linux OS (ver: 33050) onto the NVMe disk of the WHL NUC.

.. _Clear Linux OS Server image:
-   https://download.clearlinux.org/releases/32680/clear/clear-32680-live-server.iso
+   https://download.clearlinux.org/releases/33050/clear/clear-33050-live-server.iso

#. Create a bootable USB drive on Linux*:

@@ -47,7 +47,7 @@ for the RTVM.
#. Unmount all the ``/dev/sdc`` partitions and burn the image onto the USB drive::

      $ umount /dev/sdc* 2>/dev/null
-     $ sudo dd if=./clear-32680-live-server.iso of=/dev/sdc oflag=sync status=progress bs=4M
+     $ sudo dd if=./clear-33050-live-server.iso of=/dev/sdc oflag=sync status=progress bs=4M

#. Plug in the USB drive to the WHL NUC and boot from USB.
#. Launch the Clear Linux OS installer boot menu.

@@ -131,7 +131,7 @@ Use the pre-installed industry ACRN hypervisor

   .. code-block:: none

-     $ sudo ./acrn_quick_setup.sh -s 32680 -d -e /dev/nvme0n1p1 -i
+     $ sudo ./acrn_quick_setup.sh -s 33050 -d -e /dev/nvme0n1p1 -i

   .. note:: The ``-i`` option selects the industry scenario EFI image, e.g. ``acrn.nuc7i7dnb.industry.efi``.
      For detailed usage of the ``acrn_quick_setup.sh`` script, refer to the :ref:`quick setup ACRN guide <quick-setup-guide>`

@@ -145,7 +145,7 @@ Use the pre-installed industry ACRN hypervisor

      BootCurrent: 0005
      Timeout: 1 seconds
      BootOrder: 0000,0003,0005,0001,0004
-     Boot0000* ACRN HD(1,GPT,cb72266b-c83d-4c56-99e3-3e7d2f4bc175,0x800,0x47000)/File(\EFI\acrn\acrn.efi)u.a.r.t.=.d.i.s.a.b.l.e.d.
+     Boot0000* ACRN HD(1,GPT,cb72266b-c83d-4c56-99e3-3e7d2f4bc175,0x800,0x47000)/File(\EFI\acrn\acrn.efi)u.a.r.t.=.d.i.s.a.b.l.e.d. .
      Boot0001* UEFI OS HD(1,GPT,335d53f0-50c1-4b0a-b58e-3393dc0389a4,0x800,0x47000)/File(\EFI\BOOT\BOOTX64.EFI)..BO
      Boot0003* Linux bootloader HD(3,GPT,af681d62-3a96-43fb-92fc-e98e850f867f,0xc1800,0x1dc31800)/File(\EFI\org.clearlinux\bootloaderx64.efi)
      Boot0004* Hard Drive BBS(HD,,0x0)..GO..NO........o.K.I.N.G.S.T.O.N. .R.B.U.S.N.S.8.1.8.0.S.3.1.2.8.G.J...................A..........................>..Gd-.;.A..MQ..L.0.5.2.0.B.6.6.7.2.8.F.F.3.D.1.0. . . . .......BO..NO........m.F.O.R.E.S.E.E. .2.5.6.G.B. .S.S.D...................A......................................0..Gd-.;.A..MQ..L.J.2.7.1.0.0.R.0.0.0.9.6.9.......BO

@@ -191,17 +191,17 @@ Use the ACRN industry out-of-the-box image

   .. code-block:: none

-     # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w12.5-140000p/sos-industry-32680.img.xz
+     # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w18.4-140000p/sos-industry-33050.img.xz

   .. note:: You may also follow :ref:`set_up_ootb_service_vm` to build the image by yourself.

#. Decompress the .xz image::

-     # xz -d sos-industry-32680.img.xz
+     # xz -d sos-industry-33050.img.xz

#. Burn the Service VM image onto the NVMe disk::

-     # dd if=sos-industry-32680.img of=/dev/nvme0n1 bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc
+     # dd if=sos-industry-33050.img of=/dev/nvme0n1 bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc

#. Configure the EFI firmware to boot the ACRN hypervisor by default:

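   One plausible way to create such a boot entry with ``efibootmgr`` (a sketch,
   assuming the EFI System Partition is partition 1 of ``/dev/nvme0n1``; adjust
   to your disk layout)::

      # efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/nvme0n1 -p 1 -L "ACRN" -u "uart=disabled"
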
@@ -238,17 +238,17 @@ build the ACRN kernel for the Service VM, and then :ref:`passthrough the SATA disk

   .. code-block:: none

-     # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w12.5-140000p/preempt-rt-32680.img.xz
+     # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w18.4-140000p/preempt-rt-33050.img.xz

   .. note:: You may also follow :ref:`set_up_ootb_rtvm` to build the Preempt-RT VM image by yourself.

#. Decompress the xz image::

-     # xz -d preempt-rt-32680.img.xz
+     # xz -d preempt-rt-33050.img.xz

#. Burn the Preempt-RT VM image onto the SATA disk::

-     # dd if=preempt-rt-32680.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc
+     # dd if=preempt-rt-33050.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc

#. Modify the script to use the virtio device.

@@ -397,7 +397,7 @@ Run cyclictest

#. Install the ``cyclictest`` tool::

-     # swupd bundle-add dev-utils
+     # swupd bundle-add dev-utils --skip-diskspace-check

#. Use the following command to start cyclictest::

@@ -410,9 +410,138 @@ Run cyclictest

   :-m: lock current and future memory allocations
   :-N: print results in ns instead of us (default us)
   :-D 1h: run for 1 hour; you can change it to other values
-   :-q: quiee mode; print a summary only on exit
+   :-q: quiet mode; print a summary only on exit
   :-H 30000 --histfile=test.log: dump the latency histogram to a local file

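Assembled from the options above, the full command looks roughly like this
(a sketch; tune the duration and histogram bounds to your needs):

.. code-block:: none

   # cyclictest -m -N -D 1h -q -H 30000 --histfile=test.log
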
Launch additional User VMs
**************************

With the :ref:`CPU sharing <cpu_sharing>` feature enabled, the Industry
scenario supports a maximum of 6 post-launched VMs: 1 post-launched
Real-Time VM (Preempt-RT, VxWorks\*, Xenomai\*) and 5 post-launched standard VMs
(Clear Linux\*, Android\*, Windows\*).

Follow the steps below to launch those post-launched standard VMs.

Prepare the launch scripts
==========================

#. Install :ref:`dependencies <install-build-tools-dependencies>` on your workspace
   and get the acrn-hypervisor source code::

      $ git clone https://github.com/projectacrn/acrn-hypervisor

#. Generate launch scripts with the :ref:`acrn-configuration-tool <acrn_configuration_tool>`::

      $ cd acrn-hypervisor
      $ export board_file=$PWD/misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml
      $ export scenario_file=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/industry.xml
      $ export launch_file=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/industry_launch_6uos.xml
      $ python misc/acrn-config/launch_config/launch_cfg_gen.py --board $board_file --scenario $scenario_file --launch $launch_file --uosid 0

#. The launch scripts are generated in the
   ``misc/acrn-config/xmls/config-xmls/whl-ipc-i5/output`` directory.

   The launch scripts are:
   +-------------------+--------------------+---------------------+
   | For Windows:      | For Preempt-RT:    | For other VMs:      |
   +===================+====================+=====================+
   | launch_uos_id1.sh | launch_uos_id2.sh  | | launch_uos_id3.sh |
   |                   |                    | | launch_uos_id4.sh |
   |                   |                    | | launch_uos_id5.sh |
   |                   |                    | | launch_uos_id6.sh |
   +-------------------+--------------------+---------------------+

#. Copy those files to your WHL board::

      $ scp -r misc/acrn-config/xmls/config-xmls/whl-ipc-i5/output <board address>:~/

Launch Windows VM
=================

#. Follow this :ref:`guide <using_windows_as_uos>` to prepare the Windows image file,
   update the Service VM kernel, and then reboot with the new ``acrngt.conf``.

#. Modify the ``launch_uos_id1.sh`` script as follows, and then launch the Windows VM
   as one of the post-launched standard VMs:

   .. code-block:: none
      :emphasize-lines: 4,6

      acrn-dm -A -m $mem_size -s 0:0,hostbridge -U d2795438-25d6-11e8-864e-cb7a18b34643 \
         --windows \
         $logger_setting \
         -s 5,virtio-blk,<your win img directory>/win10-ltsc.img \
         -s 6,virtio-net,tap_WaaG \
         -s 2,passthru,0/2/0,gpu \
         --ovmf /usr/share/acrn/bios/OVMF.fd \
         -s 1:0,lpc \
         -l com1,stdio \
         $boot_audio_option \
         $vm_name
      }

   .. note:: ``-s 2,passthru,0/2/0,gpu`` launches the Windows VM in GVT-d mode, which
      passes the VGA controller through to Windows. You may find more details in
      :ref:`using_windows_as_uos`.

Launch other standard VMs
=========================

If you want to launch other VMs such as Clear Linux\* or Android\*, use one of these scripts:
``launch_uos_id3.sh``, ``launch_uos_id4.sh``, ``launch_uos_id5.sh``,
``launch_uos_id6.sh``.

Here is an example that launches a Clear Linux VM:

#. Download the Clear Linux KVM image::

      $ cd ~/output && curl https://cdn.download.clearlinux.org/releases/33050/clear/clear-33050-kvm.img.xz -o clearlinux.img.xz
      $ unxz clearlinux.img.xz

#. Modify the ``launch_uos_id3.sh`` script to launch the Clear Linux VM:

   .. code-block:: none
      :emphasize-lines: 1,2,3,5,16,17,23

      #echo ${passthru_vpid["gpu"]} > /sys/bus/pci/drivers/pci-stub/new_id
      #echo ${passthru_bdf["gpu"]} > /sys/bus/pci/devices/${passthru_bdf["gpu"]}/driver/unbind
      #echo ${passthru_bdf["gpu"]} > /sys/bus/pci/drivers/pci-stub/bind
      echo 100 > /sys/bus/usb/drivers/usb-storage/module/parameters/delay_use
      mem_size=200M
      #interrupt storm monitor for pass-through devices, params order:
      #threshold/s,probe-period(s),intr-inject-delay-time(ms),delay-duration(ms)
      intr_storm_monitor="--intr_monitor 10000,10,1,100"

      #logger_setting, format: logger_name,level; like following
      logger_setting="--logger_setting console,level=4;kmsg,level=3;disk,level=5"

      acrn-dm -A -m $mem_size -s 0:0,hostbridge -U 615db82a-e189-4b4f-8dbb-d321343e4ab3 \
         --mac_seed $mac_seed \
         $logger_setting \
         -s 2,virtio-blk,./clearlinux.img \
         -s 3,virtio-console,@stdio:stdio_port \
         --ovmf /usr/share/acrn/bios/OVMF.fd \
         -s 8,virtio-hyper_dmabuf \
         $intr_storm_monitor \
         $vm_name
      }
      launch_clearlinux 3

   .. note::

      Remove the ``-s 2,passthru,0/2/0,gpu`` parameter before you launch the Clear Linux VM,
      because the VGA controller is already passed through to the Windows
      VM and is no longer visible to other VMs.

      Before launching VMs, check the available free memory using ``free -m``
      and update the ``mem_size`` value accordingly.

      If you run multiple Clear Linux User VMs, also make sure the VM names don't
      conflict with each other; otherwise, change the number in the last line of
      the script, such as ``launch_clearlinux 3``.

Troubleshooting
***************

@@ -452,8 +581,9 @@ or an `RS232 DB9 female/female (NULL modem) cross-over cable
<https://www.amazon.com/SF-Cable-Null-Modem-RS232/dp/B006W0I3BA>`_
to connect to your host system.

-Note that If you want to use the RS232 DB9 female/female cable, choose the ``cross-over``
-type rather than ``straight-through`` type.
+Note that if you want to use the RS232 DB9 female/female cable, choose
+the **cross-over**
+type rather than the **straight-through** type.

.. _efi image not exist:

@@ -524,7 +654,7 @@ Passthrough a hard disk to the RTVM

      passthru_vpid=(
      ["eth"]="8086 156f"
-     ["sata"]="8086 9d03"
+     ["sata"]="8086 9dd3"
      ["nvme"]="8086 f1a6"
      )
      passthru_bdf=(
@@ -544,8 +674,9 @@ Passthrough a hard disk to the RTVM
      #echo ${passthru_bdf["nvme"]} > /sys/bus/pci/drivers/pci-stub/bind

   .. code-block:: none
-      :emphasize-lines: 4
+      :emphasize-lines: 5

        --lapic_pt \
        --rtvm \
        --virtio_poll 1000000 \
+       -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \

@@ -37,21 +37,29 @@ Scheduling initialization is invoked in the hardware management layer.

.. figure:: images/cpu_sharing_api.png
   :align: center

-vCPU affinity
+CPU affinity
*************

-Currently, we do not support vCPU migration; the assignment of vCPU
-mapping to pCPU is statically configured by acrn-dm through
-``--cpu_affinity``. Use these rules to configure the vCPU affinity:
+Currently, we do not support vCPU migration; the assignment of vCPU mapping to
+pCPU is fixed at the time the VM is launched. The statically configured
+cpu_affinity in the VM configuration defines a superset of the pCPUs that
+the VM is allowed to run on. Each bit in this bitmap indicates that one pCPU
+could be assigned to this VM, and the bit number is the pCPU ID. A pre-launched
+VM is launched on exactly the pCPUs assigned in this bitmap. The vCPU-to-pCPU
+mapping is implicit: vCPU0 maps to the pCPU with the lowest pCPU ID, vCPU1 maps
+to the second-lowest pCPU ID, and so on.
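
As a concrete illustration (with hypothetical values): a cpu_affinity bitmap of
``0x3`` (binary ``0011``) allows the VM to run on pCPU0 and pCPU1 only, and the
implicit mapping is then:

.. code-block:: none

   cpu_affinity bitmap = 0x3      bits 0 and 1 set -> pCPU0, pCPU1
   vCPU0 -> pCPU0                 lowest pCPU ID
   vCPU1 -> pCPU1                 second-lowest pCPU ID
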
-- Only one bit can be set for each affinity item of vCPU.
-- vCPUs in the same VM cannot be assigned to the same pCPU.
+For post-launched VMs, acrn-dm can choose to launch on a subset of the pCPUs
+defined in cpu_affinity by specifying the assigned pCPUs
+(``--cpu_affinity`` option), but it cannot assign any pCPUs that are not
+included in the VM's cpu_affinity.

Here is an example for affinity:

- VM0: 2 vCPUs, pinned to pCPU0 and pCPU1
-- VM1: 2 vCPUs, pinned to pCPU2 and pCPU3
-- VM2: 2 vCPUs, pinned to pCPU0 and pCPU1
+- VM1: 2 vCPUs, pinned to pCPU0 and pCPU1
+- VM2: 2 vCPUs, pinned to pCPU2 and pCPU3

.. figure:: images/cpu_sharing_affinity.png
   :align: center

@@ -119,15 +127,20 @@ and BVT (Borrowed Virtual Time) scheduler.


Scheduler configuration
***********************

-Two places in the code decide the usage for the scheduler.
-
* The option in Kconfig decides the only scheduler used at runtime:
  ``hypervisor/arch/x86/Kconfig``

  .. code-block:: none

     choice
        prompt "ACRN Scheduler"
        default SCHED_BVT
        help
           Select the CPU scheduler to be used by the hypervisor

     config SCHED_BVT
        bool "BVT scheduler"
        help
@@ -137,14 +150,19 @@ Two places in the code decide the usage for the scheduler.
           i.e. higher priority threads get scheduled first, and same priority tasks are
           scheduled per BVT.

-The default scheduler is **SCHED_NOOP**. To use the BVT, change it to
-**SCHED_BVT** in the **ACRN Scheduler**.
+The default scheduler is **SCHED_BVT**.
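
To change the selection, one approach (a sketch, assuming the standard Kconfig
tooling shipped in the hypervisor tree) is:

.. code-block:: none

   $ cd hypervisor
   $ make menuconfig      # open "ACRN Scheduler" and pick a scheduler
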
-* The cpu_affinity is configured by acrn-dm command.
+* The cpu_affinity could be configured by one of these approaches:

-  For example, assign physical CPUs (pCPUs) 1 and 3 to this VM using::
-
-     --cpu_affinity 1,3
+  - Without the ``--cpu_affinity`` option in acrn-dm. This launches the user VM
+    on all the pCPUs that are included in the statically configured cpu_affinity_bitmap.
+
+  - With the ``--cpu_affinity`` option in acrn-dm. This launches the user VM on
+    a subset of the configured cpu_affinity_bitmap pCPUs.
+
+    For example, assign physical CPUs 0 and 1 to this VM::
+
+       --cpu_affinity 0,1


Example
@@ -152,32 +170,35 @@ Example

Use the following settings to support this configuration in the industry scenario:

-+---------+-------+-------+-------+
-|pCPU0    |pCPU1  |pCPU2  |pCPU3  |
-+=========+=======+=======+=======+
-|SOS + WaaG       |RT Linux       |
-+-----------------+---------------+
++---------+--------+-------+-------+
+|pCPU0    |pCPU1   |pCPU2  |pCPU3  |
++=========+========+=======+=======+
+|Service VM + WaaG |RT Linux       |
++------------------+---------------+

-- offline pcpu2-3 in SOS.
+- offline pcpu2-3 in Service VM.

- launch guests.

-  - launch WaaG with "--cpu_affinity=0,1"
-  - launch RT with "--cpu_affinity=2,3"
+  - launch WaaG with "--cpu_affinity 0,1"
+  - launch RT with "--cpu_affinity 2,3"

-After you start all VMs, check the vCPU affinities from the Hypervisor
+After you start all VMs, check the CPU affinities from the Hypervisor
console with the ``vcpu_list`` command:

-.. code-block:: console
+.. code-block:: none

   ACRN:\>vcpu_list

-   VM ID    PCPU ID    VCPU ID    VCPU ROLE    VCPU STATE    THREAD STATE
-   =====    =======    =======    =========    ==========    ============
-     0         0          0       PRIMARY      Running       BLOCKED
-     0         1          0       SECONDARY    Running       BLOCKED
-     1         0          0       PRIMARY      Running       RUNNING
-     1         1          0       SECONDARY    Running       RUNNING
-     2         2          0       PRIMARY      Running       RUNNING
-     2         3          1       SECONDARY    Running       RUNNING
+   VM ID    PCPU ID    VCPU ID    VCPU ROLE    VCPU STATE    THREAD STATE
+   =====    =======    =======    =========    ==========    ============
+     0         0          0       PRIMARY      Running       RUNNING
+     0         1          1       SECONDARY    Running       RUNNING
+     1         0          0       PRIMARY      Running       RUNNABLE
+     1         1          1       SECONDARY    Running       BLOCKED
+     2         2          0       PRIMARY      Running       RUNNING
+     2         3          1       SECONDARY    Running       RUNNING

Note: the THREAD STATE values are instantaneous; they can change at any time.

Binary file not shown. (Image changed: 67 KiB before, 46 KiB after.)

doc/tutorials/setup_openstack_libvirt.rst | 343 (new file)
@@ -0,0 +1,343 @@
.. _setup_openstack_libvirt:

Configure ACRN using OpenStack and libvirt
##########################################

Introduction
************

This document provides instructions for setting up libvirt to configure
ACRN. We use OpenStack to drive libvirt, and we install OpenStack in a container
to avoid crashing your system and to take advantage of easy
snapshots/restores so that you can quickly roll back your system in the
event of a setup failure. (Only install OpenStack directly on Ubuntu if
you have a dedicated testing machine.) This setup uses LXC/LXD on
Ubuntu 16.04 or 18.04.

Install ACRN
************

#. Install ACRN using Ubuntu 16.04 or 18.04 as its Service VM. Refer to
   :ref:`Ubuntu Service OS`.

#. Make the acrn-kernel using the `kernel_config_uefi_sos
   <https://raw.githubusercontent.com/projectacrn/acrn-kernel/master/kernel_config_uefi_sos>`_
   configuration file (from the ``acrn-kernel`` repo).

#. Add the following kernel boot arg to give the Service VM more loop
   devices. Refer to the `Kernel Boot Parameters
   <https://wiki.ubuntu.com/Kernel/KernelBootParameters>`_ documentation::

      max_loop=16

#. Boot the Service VM with this new ``acrn-kernel`` using the ACRN
   hypervisor.

#. Use the ``losetup -a`` command to verify that Ubuntu's snap service is **not**
   using all available loop devices. Typically, OpenStack needs at least 4
   available loop devices. Follow the `snaps guide
   <https://maslosoft.com/kb/how-to-clean-old-snaps/>`_ to clean up old
   snap revisions if you're running out of loop devices (see the quick check
   after this list).

#. Make sure the networking bridge ``acrn-br0`` is created. If not,
   create it using the instructions in
   :ref:`Enable network sharing <enable-network-sharing-user-vm>`.
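
A quick way to check loop-device availability (a sketch; ``losetup -f`` prints
the first unused loop device):

.. code-block:: none

   $ losetup -a    # loop devices currently in use
   $ losetup -f    # first loop device that is still free
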
Set up and launch LXC/LXD
*************************

1. Set up the LXC/LXD Linux container engine using these `instructions
   <https://ubuntu.com/tutorials/tutorial-setting-up-lxd-1604>`_ provided
   by Ubuntu (for release 16.04).

   Refer to the following additional information for the setup
   procedure:

   - Disregard ZFS utils (we're not going to use the ZFS storage
     backend).
   - Answer ``dir`` (not ``zfs``) when prompted for the name of the storage backend to use.
   - Set up ``lxdbr0`` as instructed.
   - Before launching a container, make sure ``lxc-checkconfig | grep missing`` does not show any missing
     kernel features.

2. Create an Ubuntu 18.04 container named **openstack**::

      $ lxc init ubuntu:18.04 openstack

3. Export the kernel interfaces necessary to launch a Service VM in the
   **openstack** container:

   a. Edit the **openstack** config file using the command::

         $ lxc config edit openstack

      In the editor, add the following lines under **config**:

      .. code-block:: none

         linux.kernel_modules: iptable_nat, ip6table_nat, ebtables, openvswitch
         raw.lxc: |-
           lxc.cgroup.devices.allow = c 10:237 rwm
           lxc.cgroup.devices.allow = b 7:* rwm
           lxc.cgroup.devices.allow = c 243:0 rwm
           lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0
           lxc.mount.auto=proc:rw sys:rw cgroup:rw
         security.nesting: "true"
         security.privileged: "true"

      Save and exit the editor.

   b. Run the following commands to configure the **openstack** container::

         $ lxc config device add openstack eth1 nic name=eth1 nictype=bridged parent=acrn-br0
         $ lxc config device add openstack acrn_vhm unix-char path=/dev/acrn_vhm
         $ lxc config device add openstack loop-control unix-char path=/dev/loop-control
         $ for n in {0..15}; do lxc config device add openstack loop$n unix-block path=/dev/loop$n; done;

4. Launch the **openstack** container::

      $ lxc start openstack

5. Log in to the **openstack** container::

      $ lxc exec openstack -- su -l

6. Let ``systemd`` manage **eth1** in the container, with **eth0** as the
   default route:

   Edit ``/etc/netplan/50-cloud-init.yaml``:

   .. code-block:: none

      network:
          version: 2
          ethernets:
              eth0:
                  dhcp4: true
              eth1:
                  dhcp4: true
                  dhcp4-overrides:
                      route-metric: 200

7. Log out and restart the **openstack** container::

      $ lxc restart openstack

8. Log in to the **openstack** container again::

      $ lxc exec openstack -- su -l

9. If needed, set up the proxy inside the **openstack** container via
   ``/etc/environment`` and make sure ``no_proxy`` is properly set up.
   Both IP addresses assigned to **eth0** and
   **eth1** and their subnets must be included. For example::

      no_proxy=xcompany.com,.xcompany.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16

10. Add a new user named **stack** and set permissions::

       $ sudo useradd -s /bin/bash -d /opt/stack -m stack
       $ echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

11. Log out and restart the **openstack** container::

       $ lxc restart openstack

The **openstack** container is now properly configured for OpenStack.
Use the ``lxc list`` command to verify that both **eth0** and **eth1**
appear in the container.

Set up ACRN prerequisites inside the container
**********************************************

1. Log in to the **openstack** container as the **stack** user::

      $ lxc exec openstack -- su -l stack

2. Download and compile ACRN's source code. Refer to :ref:`getting-started-building`.

   .. note::
      All tools and build dependencies must be installed before you run the first ``make`` command.

   .. code-block:: none

      $ git clone https://github.com/projectacrn/acrn-hypervisor
      $ cd acrn-hypervisor
      $ git checkout v1.6.1
      $ make
      $ cd misc/acrn-manager/; make

   Install only the user-space components: acrn-dm, acrnctl, and acrnd.

3. Download, compile, and install ``iasl``. Refer to :ref:`Prepare the User VM <prepare-UOS>`.

Set up libvirt
**************

1. Install the required packages::

      $ sudo apt install libdevmapper-dev libnl-route-3-dev libnl-3-dev python \
        automake autoconf autopoint libtool xsltproc libxml2-utils gettext

2. Download libvirt/ACRN::

      $ git clone https://github.com/projectacrn/acrn-libvirt.git

3. Build and install libvirt::

      $ cd acrn-libvirt
      $ ./autogen.sh --prefix=/usr --disable-werror --with-test-suite=no \
        --with-qemu=no --with-openvz=no --with-vmware=no --with-phyp=no \
        --with-vbox=no --with-lxc=no --with-uml=no --with-esx=no

      $ make
      $ sudo make install

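   As an optional sanity check that the build landed in the expected prefix
   (assuming ``--prefix=/usr`` as above)::

      $ libvirtd --version
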
4. Edit and enable these options in ``/etc/libvirt/libvirtd.conf``::

      unix_sock_ro_perms = "0777"
      unix_sock_rw_perms = "0777"
      unix_sock_admin_perms = "0777"

5. Restart the libvirt daemon::

      $ sudo systemctl daemon-reload

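   ``daemon-reload`` only reloads systemd unit files; if ``libvirtd`` was
   already running, you may also need to restart the service itself for the
   new socket permissions to take effect::

      $ sudo systemctl restart libvirtd
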
Set up OpenStack
****************

Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://docs.openstack.org/devstack/>`_.

1. Use the latest maintenance branch **stable/train** to ensure OpenStack
   stability::

      $ git clone https://opendev.org/openstack/devstack.git -b stable/train

2. Go to the devstack directory (``cd devstack``) and apply the following
   patch:

   ``0001-devstack-installation-for-acrn.patch``

3. Edit ``lib/nova_plugins/hypervisor-libvirt``:

   Change ``xen_hvmloader_path`` to the location of your OVMF image
   file. A stock image is included in the ACRN source tree
   (``devicemodel/bios/OVMF.fd``).

4. Create a ``devstack/local.conf`` file as shown below (setting the password as
   appropriate):

   .. code-block:: none

      [[local|localrc]]
      PUBLIC_INTERFACE=eth1

      ADMIN_PASSWORD=<password>
      DATABASE_PASSWORD=<password>
      RABBIT_PASSWORD=<password>
      SERVICE_PASSWORD=<password>

      ENABLE_KSM=False
      VIRT_DRIVER=libvirt
      LIBVIRT_TYPE=acrn
      DEBUG_LIBVIRT=True
      DEBUG_LIBVIRT_COREDUMPS=True
      USE_PYTHON3=True

   .. note::
      Now is a great time to take a snapshot of the container using ``lxc
      snapshot``. If the OpenStack installation fails, manually rolling back
      to the previous state can be difficult. Currently, no step exists to
      reliably restart OpenStack after restarting the container.

5. Install OpenStack by executing ``./stack.sh`` from the ``devstack``
   directory::

      $ cd devstack
      $ ./stack.sh

   The installation should take about 20-30 minutes. Upon successful
   installation, the installer reports the URL of OpenStack's management
   interface. This URL is accessible from the native Ubuntu.

   .. code-block:: console

      ...

      Horizon is now available at http://<IP_address>/dashboard

      ...

      2020-04-09 01:21:37.504 | stack.sh completed in 1755 seconds.

6. Verify that libvirtd is active and running using the
   ``systemctl status libvirtd.service`` command.

7. Set up SNAT for OpenStack instances to connect to the external network.

   a. Inside the container, use the ``ip a`` command to identify the ``br-ex`` bridge
      interface. ``br-ex`` should have two IPs. One should be visible to
      the native Ubuntu's ``acrn-br0`` interface (e.g. inet 192.168.1.104/24).
      The other is internal to OpenStack (e.g. inet 172.24.4.1/24). The
      latter corresponds to the public network in OpenStack.

   b. Set up SNAT to establish a link between ``acrn-br0`` and OpenStack.
      For example::

         $ sudo iptables -t nat -A POSTROUTING -s 172.24.4.1/24 -o br-ex -j SNAT --to-source 192.168.1.104

Final Steps
***********

1. Create OpenStack instances.

   - OpenStack logs to the systemd journal.
   - libvirt logs to ``/var/log/libvirt/libvirtd.log``.

   You can now use the URL to manage OpenStack from your native Ubuntu as
   **admin**, using the password you set in the ``local.conf`` file when you
   set up OpenStack earlier.

2. Create a router between **public** (external network) and **shared**
   (internal network) using `OpenStack's network instructions
   <https://docs.openstack.org/openstackdocstheme/latest/demo/create_and_manage_networks.html>`_.

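   As a rough CLI sketch of the same operation (names such as ``router1`` and
   ``shared-subnet`` are hypothetical; the linked instructions are
   authoritative)::

      $ openstack router create router1
      $ openstack router set router1 --external-gateway public
      $ openstack router add subnet router1 shared-subnet
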
3. Launch an ACRN instance using `OpenStack's launch instructions
   <https://docs.openstack.org/horizon/latest/user/launch-instances.html>`_.

   - Use the Clear Linux Cloud Guest as the image (qcow2 format):
     https://clearlinux.org/downloads
   - Skip **Create Key Pair** as it's not supported by Clear Linux.
   - Select **No** for **Create New Volume** when selecting the instance
     boot source image.
   - Use **shared** as the instance's network.

4. After the instance is created, use the hypervisor console to verify that
   it is running (``vm_list``).

5. Ping the instance inside the container using the instance's floating IP
   address.

6. Clear Linux prohibits root SSH login by default. Use libvirt's ``virsh``
   console to configure the instance. Inside the container, run::

      $ sudo virsh -c acrn:///system
      list                      # you should see the instance listed as running
      console <instance_name>

7. Log in to the Clear Linux instance and set up root SSH. Refer to
   the Clear Linux instructions on `enabling root login
   <https://docs.01.org/clearlinux/latest/guides/network/openssh-server.html#enable-root-login>`_.

   a. If needed, set up the proxy inside the instance.
   b. Configure ``systemd-resolved`` to use the correct DNS server.
   c. Install ping: ``swupd bundle-add clr-network-troubleshooter``.

The ACRN instance should now be able to ping ``acrn-br0`` and another
ACRN instance. It should also be accessible inside the container via SSH
and its floating IP address.

The ACRN instance can be deleted via the OpenStack management interface.

For more advanced CLI usage, refer to this `OpenStack cheat sheet
<https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html>`_.

@@ -24,6 +24,11 @@ Prerequisites
   <https://www.intel.com/content/dam/support/us/en/documents/mini-pcs/nuc-kits/NUC7i5DN_TechProdSpec.pdf>`_
   to identify the USB 3.0 port header on the main board.

+.. note::
+   This document uses the (default) SDC scenario. If you use a different
+   scenario, you will need a serial port connection to your platform to see its
+   console, or you must change the configuration of the User VM that will run Celadon.
+
Build Celadon from source
*************************

@@ -31,8 +36,10 @@ Build Celadon from source
   <https://01.org/projectceladon/documentation/getting-started/build-source>`_ guide
   to set up the Celadon project source code.

-.. note:: The master branch is based on the Google Android 10 pre-Production Early Release.
-   Use the following command to specify a stable Celadon branch based on the Google Android 9 source code in order to apply those patches in the :ref:`ACRN patch list`::
+.. note:: The master branch is based on the Google Android 10
+   pre-Production Early Release. Use the following command to specify a
+   stable Celadon branch based on the Google Android 9 source code in order
+   to apply those patches in the :ref:`ACRN patch list`::

      $ repo init -u https://github.com/projectceladon/manifest.git -b celadon/p/mr0/master -m stable-build/ww201925_H.xml

@@ -318,6 +318,7 @@ You are now all set to start the User VM:

**Congratulations**, you are now watching the User VM booting up!

+.. _enable-network-sharing-user-vm:

Enable network sharing
**********************

@@ -7,6 +7,11 @@ This tutorial describes how to run Zephyr as the User VM on the ACRN hypervisor.
   Kaby Lake-based NUC (model NUC7i5DNHE) in this tutorial.
   Other :ref:`ACRN supported platforms <hardware>` should work as well.

+.. note::
+   This tutorial uses the (default) SDC scenario. If you use a different
+   scenario, you will need a serial port connection to your platform to see
+   Zephyr console output.
+
Introduction to Zephyr
**********************

|
@ -201,11 +201,25 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
* - hvlog
|
||||
- Service VM
|
||||
- Reserve memory for the ACRN hypervisor log. The reserved space should not
|
||||
overlap any other blocks (e.g. hypervisor's reserved space).
|
||||
- Sets the guest physical address and size of the dedicated hypervisor
|
||||
log ring buffer between the hypervisor and Service VM.
|
||||
A ``memmap`` parameter is also required to reserve the specified memory
|
||||
from the guest VM.
|
||||
|
||||
If hypervisor relocation is disabled, verify that
|
||||
:option:`CONFIG_HV_RAM_START` and :option:`CONFIG_HV_RAM_SIZE`
|
||||
does not overlap with the hypervisor's reserved buffer space allocated
|
||||
in the Service VM. Service VM GPA and HPA are a 1:1 mapping.
|
||||
|
||||
If hypervisor relocation is enabled, reserve the memory below 256MB,
|
||||
since hypervisor could be relocated anywhere between 256MB and 4GB.
|
||||
|
||||
You should enable ASLR on SOS. This ensures that when guest Linux is
|
||||
relocating kernel image, it will avoid this buffer address.
|
||||
|
||||
- ::
|
||||
|
||||
hvlog=2M@0x6de00000
|
||||
hvlog=2M@0xe00000
|
||||
|
||||
* - memmap
|
||||
- Service VM
|
||||
@@ -217,7 +231,7 @@ relevant for configuring or debugging ACRN-based systems.
    Gigabytes, respectively.
  - ::

-       memmap=0x400000$0x6da00000
+       memmap=0x400000$0xa00000

* - ramoops.mem_address
    ramoops.mem_size
@@ -228,9 +242,12 @@ relevant for configuring or debugging ACRN-based systems.
    to store the dump. See `Linux Kernel Ramoops oops/panic logger
    <https://www.kernel.org/doc/html/v4.19/admin-guide/ramoops.html#ramoops-oops-panic-logger>`_
    for details.
+
+    This buffer should not overlap with hypervisor reserved memory or the
+    guest kernel image. See ``hvlog``.
  - ::

-       ramoops.mem_address=0x6da00000
+       ramoops.mem_address=0xa00000
        ramoops.mem_size=0x400000
        ramoops.console_size=0x200000
