Doc: update document to remove CL Service VM dependency

Some documents refer to Clear Linux as the Service VM; update them
to the Ubuntu Service VM.

Signed-off-by: fuzhongl <fuzhong.liu@intel.com>
Reviewed-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
This commit is contained in:
fuzhongl 2020-11-12 10:08:14 +08:00 committed by David Kinder
parent b37442564b
commit bb4fcae1b8
9 changed files with 37 additions and 44 deletions


@ -48,6 +48,9 @@ Connect the WHL Maxtang with the appropriate external devices.
.. rst-class:: numbered-step
.. _install-ubuntu-rtvm-sata:
Install the Ubuntu User VM (RTVM) on the SATA disk
**************************************************
@ -79,6 +82,8 @@ to turn it into a real-time User VM (RTVM).
.. rst-class:: numbered-step
.. _install-ubuntu-Service VM-NVMe:
Install the Ubuntu Service VM on the NVMe disk
**********************************************


@ -27,6 +27,7 @@ The diagram below shows the overall architecture:
.. figure:: images/s5_overall_architecture.png
:align: center
:name: s5-architecture
S5 overall architecture
@ -160,22 +161,20 @@ The procedure for enabling S5 is specific to the particular OS:
How to test
***********
As described in :ref:`vuart_config`, two vUARTs are defined in
pre-defined ACRN scenarios: vUART0/ttyS0 for the console and
vUART1/ttyS1 for S5-related communication (as shown in :ref:`s5-architecture`).
.. note:: The :ref:`CBC <IOC_virtualization_hld>` tools and service installed by
the `software-defined-cockpit
<https://github.com/clearlinux/clr-bundles/blob/master/bundles/software-defined-cockpit>`_ bundle
will conflict with the vUART and hence need to be masked.
For a Yocto Project (Poky) or Ubuntu rootfs, the ``serial-getty``
service for ``ttyS1`` conflicts with the S5-related communication
use of ``vUART1``. Eliminate the conflict by masking the service
so that it cannot be started, either automatically or manually,
using this command
::
systemctl mask cbc_attach
systemctl mask cbc_thermal_fuse
systemctl mask cbc_thermald
systemctl mask cbc_lifecycle.service
Or::
ps -ef|grep cbc; kill -9 cbc_pid
systemctl mask serial-getty@ttyS1.service
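The masking step above can be double-checked afterwards; this is a sketch assuming a systemd-based rootfs where the S5 vUART appears as ``ttyS1``:

```shell
# Mask the getty on ttyS1 so nothing else opens the S5 vUART
sudo systemctl mask serial-getty@ttyS1.service

# A masked unit is symlinked to /dev/null; "is-enabled" reports "masked"
systemctl is-enabled serial-getty@ttyS1.service
```

If the service was already running when you masked it, stop it once with ``systemctl stop serial-getty@ttyS1.service``; masking only prevents future starts.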
#. Refer to the :ref:`enable_s5` section to set up the S5 environment for the User VMs.


@ -50,8 +50,7 @@ install Ubuntu on the NVMe drive, and use grub to launch the Service VM.
Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe
===================================================================
.. important:: Need to add instructions to download the RTVM image and burn it to the
SATA drive.
Follow the :ref:`install-ubuntu-rtvm-sata` guide to install the RT rootfs on the SATA drive.
The kernel should
be on the NVMe drive along with GRUB. You'll need to copy the RT kernel
@ -94,6 +93,7 @@ like this:
multiboot2 /EFI/BOOT/acrn.bin
module2 /EFI/BOOT/bzImage_RT RT_bzImage
module2 /EFI/BOOT/bzImage Linux_bzImage
module2 /boot/ACPI_VM0.bin ACPI_VM0
}
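For reference, the fragment above sits inside a GRUB menu entry; a complete entry might look like this sketch (the entry title, ``search`` UUID, and module-loading lines are placeholders to adapt to your system; only the ``multiboot2``/``module2`` lines come from the text above):

```shell
menuentry "ACRN Pre-Launched RT" {
    # Locate the boot partition holding the ACRN and kernel images
    search --no-floppy --fs-uuid --set "<UUID-of-boot-partition>"
    # Load the ACRN hypervisor first, then the RT kernel,
    # the Service VM kernel, and the ACPI tables for VM0
    multiboot2 /EFI/BOOT/acrn.bin
    module2 /EFI/BOOT/bzImage_RT RT_bzImage
    module2 /EFI/BOOT/bzImage Linux_bzImage
    module2 /boot/ACPI_VM0.bin ACPI_VM0
}
```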
Reboot the system, and it will boot into Pre-Launched RT mode.


@ -9,13 +9,10 @@ Prerequisites
This tutorial assumes you have already set up the ACRN Service VM on an
Intel NUC Kit. If you have not, refer to the following instructions:
- Install a `Clear Linux OS
<https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
on your Intel NUC kit.
- Follow the instructions at XXX to set up the
Service VM automatically on your Intel NUC kit. Follow steps 1 - 4.
.. important:: need updated instructions that aren't Clear Linux dependent
- Install an `Ubuntu 18.04 desktop ISO
<http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso?_ga=2.160010942.221344839.1566963570-491064742.1554370503>`_
on your board.
- Follow the :ref:`install-ubuntu-Service VM-NVMe` guide to set up the Service VM.
We are using a Kaby Lake Intel NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.
@ -63,9 +60,9 @@ Hardware Configurations
Validated Versions
==================
- **Clear Linux version:** 30920
- **ACRN hypervisor tag:** acrn-2019w36.2-140000p
- **Service VM Kernel version:** 4.19.68-84.iot-lts2018-sos
- **Ubuntu version:** 18.04
- **ACRN hypervisor tag:** v2.2
- **Service VM Kernel version:** v2.2
Build the Debian KVM Image
**************************


@ -9,14 +9,11 @@ Prerequisites
This tutorial assumes you have already set up the ACRN Service VM on an
Intel NUC Kit. If you have not, refer to the following instructions:
- Install a `Clear Linux OS
<https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
on your Intel NUC kit.
- Follow the instructions at XXX to set up the
Service VM automatically on your Intel NUC kit. Follow steps 1 - 4.
- Install an `Ubuntu 18.04 desktop ISO
<http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso?_ga=2.160010942.221344839.1566963570-491064742.1554370503>`_
on your board.
- Follow the :ref:`install-ubuntu-Service VM-NVMe` instructions to set up the Service VM.
.. important:: need updated instructions that aren't Clear Linux
dependent
Before you start this tutorial, make sure the KVM tools are installed on the
development machine and set **IGD Aperture Size to 512** in the BIOS
@ -62,9 +59,9 @@ Hardware Configurations
Validated Versions
==================
- **Clear Linux version:** 30920
- **ACRN hypervisor tag:** acrn-2019w36.2-140000p
- **Service VM Kernel version:** 4.19.68-84.iot-lts2018-sos
- **Ubuntu version:** 18.04
- **ACRN hypervisor tag:** v2.2
- **Service VM Kernel version:** v2.2
.. _build-the-ubuntu-kvm-image:


@ -5,7 +5,7 @@ Run VxWorks as the User VM
`VxWorks`_\* is a real-time proprietary OS designed for use in embedded systems requiring real-time, deterministic
performance. This tutorial describes how to run VxWorks as the User VM on the ACRN hypervisor
based on Clear Linux 29970 (ACRN tag v1.1).
based on Ubuntu Service VM (ACRN tag v2.0).
.. note:: You'll need to be a Wind River* customer and have purchased VxWorks to follow this tutorial.
@ -92,10 +92,8 @@ Steps for Using VxWorks as User VM
You now have a virtual disk image with bootable VxWorks in ``VxWorks.img``.
#. Follow XXX to boot the ACRN Service VM.
#. Follow :ref:`install-ubuntu-Service VM-NVMe` to boot the ACRN Service VM.
.. important:: need instructions from deleted document (using SDC
mode on the Intel NUC)
#. Boot VxWorks as User VM.


@ -92,11 +92,9 @@ Steps for Using Zephyr as User VM
the ACRN Service VM, then you will need to transfer this image to the
ACRN Service VM (via, e.g., a USB drive or network).
#. Follow XXX to boot "The ACRN Service OS" based on Clear Linux OS 28620
(ACRN tag: acrn-2019w14.3-140000p)
#. Follow :ref:`install-ubuntu-Service VM-NVMe`
to boot "The ACRN Service OS" based on Ubuntu OS (ACRN tag: v2.2)
.. important:: need to remove reference to Clear Linux and reference
to deleted document (use SDC mode on the Intel NUC)
#. Boot Zephyr as User VM


@ -139,7 +139,7 @@ Test the communication port
===========================
After you have configured the communication port in the hypervisor, you can
access the corresponding port. For example, in Clear Linux:
access the corresponding port. For example, on a Linux OS:
1. With ``echo`` and ``cat``
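As a sketch, assuming the communication vUART shows up as ``/dev/ttyS1`` in both VMs (the device name depends on your vUART configuration), the port can be exercised like this:

```shell
# In VM A: listen on the vUART and print whatever arrives
cat /dev/ttyS1

# In VM B (separate shell): send a string across the channel;
# it should appear in VM A's terminal
echo "hello from VM B" > /dev/ttyS1
```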


@ -179,7 +179,6 @@ This time when you boot your target system you'll see some new options:
Advanced options for Ubuntu
System setup
*ACRN multiboot2
ACRN efi
If your target system has a serial port active, you can simply hit
:kbd:`return` (or wait for the timeout) to boot with this