doc: edit agl-vms.rst adjust to v1.3

hongliang 2019-11-11 13:41:24 +08:00 committed by deb-intel
parent da469d9e3e
commit 1d8f16a2e9


@@ -24,9 +24,9 @@ meter, the In-Vehicle Infotainment (IVI) system, and the rear seat
entertainment (RSE). For the software, there are three VMs running on
top of ACRN:
-* Clear Linux OS runs as the service OS (SOS) to control the cluster meter,
-* an AGL instance runs as a user OS (UOS) controlling the IVI display, and
-* a second AGL UOS controls the RSE display.
+* Clear Linux OS runs as the service OS (Service VM) to control the cluster meter,
+* an AGL instance runs as a user OS (User VM) controlling the IVI display, and
+* a second AGL User VM controls the RSE display.
:numref:`agl-demo-setup` shows the hardware and display images of a
running demo:
@@ -58,7 +58,7 @@ Here is the hardware used for the demo development:
* `Tested components and peripherals
<http://compatibleproducts.intel.com/ProductDetails?prodSearch=True&searchTerm=NUC7i7DNHE#>`_,
* 16GB RAM, and
-* 250GB SSD
+* 120GB SATA SSD
* - eDP display
- `Sharp LQ125T1JX05
<http://www.panelook.com/LQ125T1JX05-E_SHARP_12.5_LCM_overview_35649.html>`_
@@ -109,25 +109,25 @@ The demo setup uses these software components and versions:
- Version
- Link
* - ACRN hypervisor
-  - 0.3
+  - 1.3
- `ACRN project <https://github.com/projectacrn/acrn-hypervisor>`_
* - Clear Linux OS
-  - 26200
+  - 31080
- `Clear Linux OS installer image
-    <https://download.clearlinux.org/releases/26200/clear/clear-26200-installer.img.xz>`_
+    <https://download.clearlinux.org/releases/31080/clear/clear-31080-kvm.img.xz>`_
* - AGL
- Funky Flounder (6.02)
- `intel-corei7-x64 image
<https://download.automotivelinux.org/AGL/release/flounder/6.0.2/intel-corei7-64/deploy/images/intel-corei7-64/agl-demo-platform-crosssdk-intel-corei7-64-20181112133144.rootfs.wic.xz>`_
* - acrn-kernel
-  - revision acrn-2018w49.3-140000p
+  - revision acrn-2019w39.1-140000p
- `acrn-kernel <https://github.com/projectacrn/acrn-kernel>`_
Service OS
==========
#. Download the compressed Clear Linux OS installer image from
-   https://download.clearlinux.org/releases/26200/clear/clear-26200-installer.img.xz
+   https://download.clearlinux.org/releases/31080/clear/clear-31080-live-server.img.xz
and follow the `Clear Linux OS installation guide
<https://clearlinux.org/documentation/clear-linux/get-started/bare-metal-install-server>`_
as a starting point for installing Clear Linux OS onto your platform.
@@ -145,19 +145,24 @@ Service OS
# swupd autoupdate --disable
-#. This demo setup uses a specific release version (26200) of Clear
+#. This demo setup uses a specific release version (31080) of Clear
Linux OS which has been verified to work with ACRN. In case you
unintentionally update or change the Clear Linux OS version, you can
fix it again using::
-      # swupd verify --fix --picky -m 26200
+      # swupd verify --fix --picky -m 31080
-#. Use the ``swupd bundle-add`` command and add needed Clear Linux
-   OS bundles::
-
-      # swupd bundle-add openssh-server sudo network-basic \
-        kernel-iot-lts2018 os-clr-on-clr os-core-dev \
-        python3-basic dfu-util dtc
+#. Use `acrn_quick_setup.sh <https://github.com/projectacrn/acrn-hypervisor/blob/84c2b8819f479c5e6f4641490ff4bf6004f112d1/doc/getting-started/acrn_quick_setup.sh>`_
+   to automatically install ACRN::
+
+      # sh acrn_quick_setup.sh -s 31080 -i
+
+#. After installation, the system will automatically start.
+
+#. Reboot the system, choose "ACRN Hypervisor" and launch the Clear Linux OS
+   Service VM. If the EFI boot order is not right, use :kbd:`F10`
+   on boot up to enter the EFI menu and choose "ACRN Hypervisor".
#. Install the graphics UI if necessary. Use only one of the two
options listed below (this guide uses the first GNOME on Wayland option)::
@@ -184,56 +189,7 @@ Service OS
screen, click on the setting button and choose "GNOME on Wayland". Then
choose the <username> and enter the password to log in.
-#. Build ACRN. In this demo we use the ACRN v0.3 release.
-   Open a terminal window in Clear Linux OS desktop, create a workspace,
-   install needed tools, clone the ACRN Hypervisor repo source, and build ACRN::
-
-      $ mkdir workspace
-      $ cd workspace
-      $ pip3 install kconfiglib
-      $ git clone https://github.com/projectacrn/acrn-hypervisor
-      $ git checkout tags/v0.3
-      $ make PLATFORM=uefi
-      $ sudo make install
-
-#. Install and enable ACRN::
-
-      $ sudo mount /dev/sda1 /mnt
-      $ sudo mkdir /mnt/EFI/acrn
-      $ sudo cp /usr/lib/acrn/acrn.efi /mnt/EFI/acrn/
-      $ efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 \
-        -L "ACRN Hypervisor" \
-        -u "bootloader=\EFI\org.clearlinux\bootloaderx64.efi uart=port@0x3f8"
-      $ sudo cp /usr/share/acrn/samples/nuc/acrn.conf /mnt/loader/entries/
-      $ sudo vi /mnt/loader/entries/acrn.conf
-
-   Modify the acrn.conf file as shown below and save it::
-
-      title The ACRN Service OS
-      linux /EFI/org.clearlinux/kernel-org.clearlinux.iot-lts2018-sos.4.19.0-19
-      options pci_devices_ignore=(0:18:1) console=tty0 console=ttyS0
-        root=/dev/sda3 rw rootwait ignore_loglevel no_timer_check consoleblank=0
-        i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x00000F i915.domain_plane_owners=0x022211110000
-        i915.enable_gvt=1 i915.enable_guc=0 hvlog=2M@0x1FE00000
-
-#. Set a longer timeout::
-
-      $ sudo clr-boot-manager set-timeout 20
-      $ sudo clr-boot-manager update
-
-#. Reboot the system, choose "ACRN Hypervisor" and launch Clear Linux OS
-   SOS. If the EFI boot order is not right, use :kbd:`F10`
-   on boot up to enter the EFI menu and choose "ACRN Hypervisor".
-Building ACRN kernel for AGL (UOS)
-==================================
+Building ACRN kernel for AGL (User VM)
+======================================
In this demo, we use acrn-kernel as the baseline for development for AGL.
@@ -243,25 +199,26 @@ In this demo, we use acrn-kernel as the baseline for development for AGL.
$ cd workspace
$ git clone https://github.com/projectacrn/acrn-kernel
-$ git checkout tags/acrn-2018w49.3-140000p
-$ make menuconfig
+$ git checkout tags/acrn-2019w39.1-140000p
+$ cp kernel_config_uos .config
+$ vi .config
+$ make olddefconfig
-Load the **kernel_uos_config** for the UOS kernel build, and verify
+Load the **.config** for the User VM kernel build, and verify
the following config options are on::
-CONFIG_LOCAL_VERSION="-uos"
+CONFIG_LOCALVERSION="-uos"
CONFIG_SECURITY_SMACK=y
CONFIG_SECURITY_SMACK_BRINGUP=y
CONFIG_DEFAULT_SECURITY_SMACK=y
CONFIG_EXT4_FS=y
-CONFIG_EXT4_USE_FOR_EXT23=y
+CONFIG_EXT4_USE_FOR_EXT2=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_MODULES is not set
-CONFIG_CAN
-CONFIG_CAN_VCAN
-CONFIG_CAN_SLCAN
+CONFIG_CAN=y
+CONFIG_CAN_VCAN=y
+CONFIG_CAN_SLCAN=y
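Before building, it can be worth double-checking that the options above actually made it into ``.config`` (``make olddefconfig`` silently drops symbols the tree does not know). A small helper sketch, not part of the original guide — the function name is illustrative and the option list mirrors the ones listed above:

```shell
# check_uos_config FILE — report whether the required User VM kernel
# options are enabled in the given kernel config file.
check_uos_config() {
    file="$1"
    rc=0
    for opt in CONFIG_LOCALVERSION CONFIG_SECURITY_SMACK \
               CONFIG_EXT4_FS CONFIG_CAN CONFIG_CAN_VCAN CONFIG_CAN_SLCAN; do
        # An enabled option appears as OPT=y (or OPT="value");
        # a disabled one is absent or commented "# OPT is not set".
        if grep -q "^${opt}=" "$file"; then
            echo "ok: ${opt}"
        else
            echo "MISSING: ${opt}"
            rc=1
        fi
    done
    return $rc
}
```

Run as ``check_uos_config .config`` from the kernel source tree; a non-zero exit status flags a missing option.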
#. Build the kernel::
@@ -302,139 +259,139 @@ Setting up AGLs
the following content::
#!/bin/bash
-function launch_agl()
+set -x
+offline_path="/sys/class/vhm/acrn_vhm"
+# Check the device file of /dev/acrn_hsm to determine the offline_path
+if [ -e "/dev/acrn_hsm" ]; then
+    offline_path="/sys/class/acrn/acrn_hsm"
+fi
+function launch_clear()
{
-vm_name=vm$1
+mac=$(cat /sys/class/net/e*/address)
+vm_name=vm$1
+mac_seed=${mac:9:8}-${vm_name}
-#check if the vm is running or not
-vm_ps=$(pgrep -a -f acrn-dm)
-result=$(echo $vm_ps | grep "${vm_name}")
-if [[ "$result" != "" ]]; then
-    echo "$vm_name is running, can't create twice!"
-    exit
-fi
+#check if the vm is running or not
+vm_ps=$(pgrep -a -f acrn-dm)
+result=$(echo $vm_ps | grep -w "${vm_name}")
+if [[ "$result" != "" ]]; then
+    echo "$vm_name is running, can't create twice!"
+    exit
+fi
-# create a unique tap device for each VM
-tap=tap2
-tap_exist=$(ip a | grep "$tap" | awk '{print $1}')
-if [ "$tap_exist"x != "x" ]; then
-    echo "tap device existed, reuse $tap"
-else
-    ip tuntap add dev $tap mode tap
-fi
-# if acrn-br0 exists, add VM's unique tap device under it
-br_exist=$(ip a | grep acrn-br0 | awk '{print $1}')
-if [ "$br_exist"x != "x" -a "$tap_exist"x = "x" ]; then
-    echo "acrn-br0 bridge already exists, adding new tap device to it..."
-    ip link set "$tap" master acrn-br0
-    ip link set dev "$tap" down
-    ip link set dev "$tap" up
-fi
-#for memsize setting
-mem_size=2048M
+#logger_setting, format: logger_name,level; like following
+logger_setting="--logger_setting console,level=4;kmsg,level=3"
+#for memsize setting
+mem_size=2048M
-acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio \
-  -s 2,pci-gvt -G "$3" \
-  -s 5,virtio-console,@pty:pty_port \
-  -s 6,virtio-hyper_dmabuf \
-  -s 3,virtio-blk,/root/agl_ivi.wic \
-  -s 4,virtio-net,$tap \
-  -s 7,xhci,1-4 \
-  -k /root/bzImage-4.19.0-uos \
-  -B "root=/dev/vda2 rw rootwait maxcpus=$2 nohpet console=tty0 console=hvc0 \
-  console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
-  consoleblank=0 tsc=reliable i915.avail_planes_per_pipe=$4 \
-  i915.enable_hangcheck=0 i915.nuclear_pageflip=1 i915.enable_guc_loading=0 \
-  i915.enable_guc_submission=0 i915.enable_guc=0" $vm_name
+acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge \
+  -s 2,pci-gvt -G "$3" \
+  -s 3,virtio-blk,/root/agl-ivi.wic \
+  -s 4,virtio-net,tap0 \
+  -s 5,virtio-console,@stdio:stdio_port \
+  -s 6,virtio-hyper_dmabuf \
+  -s 7,xhci,1-4 \
+  $logger_setting \
+  --mac_seed $mac_seed \
+  -k /root/bzImage-4.19.0-uos \
+  -B "root=/dev/vda2 rw rootwait maxcpus=$2 nohpet console=tty0 console=hvc0 \
+  console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
+  consoleblank=0 tsc=reliable i915.avail_planes_per_pipe=$4 \
+  i915.enable_hangcheck=0 i915.nuclear_pageflip=1 i915.enable_guc_loading=0 \
+  i915.enable_guc_submission=0 i915.enable_guc=0" $vm_name
}
-# offline SOS CPUs except BSP before launching the UOS
-for i in `ls -d /sys/devices/system/cpu/cpu[2-99]`; do
-    online=`cat $i/online`
-    idx=`echo $i | tr -cd "[2-99]"`
-    echo cpu$idx online=$online
-    if [ "$online" = "1" ]; then
-        echo 0 > $i/online
-        echo $idx > /sys/class/vhm/acrn_vhm/offline_cpu
-    fi
+# offline Service VM CPUs except BSP before launching the User VM
+for i in `ls -d /sys/devices/system/cpu/cpu[1-99]`; do
+    online=`cat $i/online`
+    idx=`echo $i | tr -cd "[1-99]"`
+    echo cpu$idx online=$online
+    if [ "$online" = "1" ]; then
+        echo 0 > $i/online
+        # during boot time, cpu hotplug may be disabled by pci_device_probe during a pci module insmod
+        while [ "$online" = "1" ]; do
+            sleep 1
+            echo 0 > $i/online
+            online=`cat $i/online`
+        done
+        echo $idx > ${offline_path}/offline_cpu
+    fi
done

-launch_agl 1 1 "64 448 8" 0x000F00 agl
+launch_clear 1 1 "64 448 8" 0x000F00 agl
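Two details in the script above are easy to miss: ``mac_seed`` is derived from the host MAC address with bash substring expansion, and ``grep -w`` (word match) prevents a false duplicate hit between similarly named VMs. A standalone sketch, outside the guide itself — the MAC value here is made up for illustration; the real script reads it from ``/sys/class/net/e*/address``:

```shell
mac="00:1a:2b:3c:4d:5e"
vm_name=vm1

# The script's ${mac:9:8} is bash substring expansion: 8 characters
# starting at offset 9, i.e. the last three octets of the MAC.
# Shown here with the POSIX equivalent, cut -c10-17 (1-based columns).
mac_seed=$(echo "$mac" | cut -c10-17)-${vm_name}
echo "$mac_seed"    # -> 3c:4d:5e-vm1

# Why the script uses grep -w: a plain match on "vm1" also hits "vm11",
# wrongly reporting the VM as already running.
vm_ps="4242 acrn-dm vm11"
echo "$vm_ps" | grep -q "vm1" && echo "plain grep: false positive"
echo "$vm_ps" | grep -qw "vm1" || echo "grep -w: no duplicate"
```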
-#. Create the ``launch_rse.sh`` script for the AGL RSE VM with this
-   content::
+#. Create the ``launch_rse.sh`` script for the AGL RSE VM with this content::
#!/bin/bash
-function launch_agl()
+set -x
+offline_path="/sys/class/vhm/acrn_vhm"
+# Check the device file of /dev/acrn_hsm to determine the offline_path
+if [ -e "/dev/acrn_hsm" ]; then
+    offline_path="/sys/class/acrn/acrn_hsm"
+fi
+function launch_clear()
{
-vm_name=vm$1
+mac=$(cat /sys/class/net/e*/address)
+vm_name=vm$1
+mac_seed=${mac:9:8}-${vm_name}
-#check if the vm is running or not
-vm_ps=$(pgrep -a -f acrn-dm)
-result=$(echo $vm_ps | grep "${vm_name}")
-if [[ "$result" != "" ]]; then
-    echo "$vm_name is running, can't create twice!"
-    exit
-fi
+#check if the vm is running or not
+vm_ps=$(pgrep -a -f acrn-dm)
+result=$(echo $vm_ps | grep -w "${vm_name}")
+if [[ "$result" != "" ]]; then
+    echo "$vm_name is running, can't create twice!"
+    exit
+fi
+#logger_setting, format: logger_name,level; like following
+logger_setting="--logger_setting console,level=4;kmsg,level=3"
-# create a unique tap device for each VM
-tap=tap1
-tap_exist=$(ip a | grep "$tap" | awk '{print $1}')
-if [ "$tap_exist"x != "x" ]; then
-    echo "tap device existed, reuse $tap"
-else
-    ip tuntap add dev $tap mode tap
-fi
-# if acrn-br0 exists, add VM's unique tap device under it
-br_exist=$(ip a | grep acrn-br0 | awk '{print $1}')
-if [ "$br_exist"x != "x" -a "$tap_exist"x = "x" ]; then
-    echo "acrn-br0 bridge already exists, adding new tap device to it..."
-    ip link set "$tap" master acrn-br0
-    ip link set dev "$tap" down
-    ip link set dev "$tap" up
-fi
-#for memsize setting
-mem_size=2048M
+#for memsize setting
+mem_size=2048M
-acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio \
+acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
   -s 2,pci-gvt -G "$3" \
-  -s 5,virtio-console,@pty:pty_port \
+  -s 5,virtio-console,@stdio:stdio_port \
   -s 6,virtio-hyper_dmabuf \
-  -s 3,virtio-blk,/root/agl_rse.wic \
-  -s 4,virtio-net,tap1 \
+  -s 3,virtio-blk,/root/agl-rse.wic \
+  -s 4,virtio-net,tap0 \
   -s 7,xhci,1-5 \
+  $logger_setting \
+  --mac_seed $mac_seed \
   -k /root/bzImage-4.19.0-uos \
   -B "root=/dev/vda2 rw rootwait maxcpus=$2 nohpet console=tty0 console=hvc0 \
   console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
   consoleblank=0 tsc=reliable i915.avail_planes_per_pipe=$4 \
   i915.enable_hangcheck=0 i915.nuclear_pageflip=1 i915.enable_guc_loading=0 \
   i915.enable_guc_submission=0 i915.enable_guc=0" $vm_name
}
-# offline SOS CPUs except BSP before launching the UOS
-for i in `ls -d /sys/devices/system/cpu/cpu[2-99]`; do
-    online=`cat $i/online`
-    idx=`echo $i | tr -cd "[2-99]"`
-    echo cpu$idx online=$online
-    if [ "$online" = "1" ]; then
-        echo 0 > $i/online
-        echo $idx > /sys/class/vhm/acrn_vhm/offline_cpu
-    fi
+# offline Service VM CPUs except BSP before launching the User VM
+for i in `ls -d /sys/devices/system/cpu/cpu[1-99]`; do
+    online=`cat $i/online`
+    idx=`echo $i | tr -cd "[1-99]"`
+    echo cpu$idx online=$online
+    if [ "$online" = "1" ]; then
+        echo 0 > $i/online
+        # during boot time, cpu hotplug may be disabled by pci_device_probe during a pci module insmod
+        while [ "$online" = "1" ]; do
+            sleep 1
+            echo 0 > $i/online
+            online=`cat $i/online`
+        done
+        echo $idx > ${offline_path}/offline_cpu
+    fi
done

-launch_agl 2 1 "64 448 8" 0x070000 agl
+launch_clear 2 1 "64 448 8" 0x070000 agl
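The CPU-offlining loop in both launch scripts can be dry-run against a mock sysfs tree to see what it does without root access or a live hypervisor. A sketch under stated assumptions: the scratch directory stands in for ``/sys/devices/system/cpu`` and for the hypervisor's ``offline_cpu`` node, and the mock appends indices so every offlined CPU is recorded (the real sysfs node is simply written to each time):

```shell
#!/bin/sh
# Build a mock sysfs layout: cpuN/online files plus an offline_cpu sink.
base=$(mktemp -d)
mkdir -p "$base/cpu1" "$base/cpu2" "$base/cpu3"
echo 1 > "$base/cpu1/online"
echo 1 > "$base/cpu2/online"
echo 0 > "$base/cpu3/online"   # already offline, should be skipped
: > "$base/offline_cpu"

# Same shape as the loop in the launch scripts, pointed at the mock tree.
for i in "$base"/cpu[1-9]; do
    online=$(cat "$i/online")
    idx=$(basename "$i" | tr -cd "0-9")
    echo "cpu$idx online=$online"
    if [ "$online" = "1" ]; then
        echo 0 > "$i/online"                 # request offline
        echo "$idx" >> "$base/offline_cpu"   # hand the index to the mock node
    fi
done
cat "$base/offline_cpu"   # -> 1 and 2, each on its own line
```

Only cpu1 and cpu2 end up in ``offline_cpu``; cpu3 is skipped because it was already offline, which mirrors how the real loop leaves the BSP and already-offline CPUs alone.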
#. Launch the AGL IVI VM::
@@ -528,4 +485,4 @@ Setting up AGLs
Congratulations! You've successfully launched the demo system. It should
look similar to :numref:`agl-demo-setup` at the beginning of this
document. AGL as IVI and RSE work independently on top
-of ACRN and you can interact with them via the touch screen.
+of ACRN and you can interact with them via the mouse.