doc: apply edits to SDC2 scenario doc

Update merged PR #3463 with format and wording improvements.
(Not linked yet into the document navigation)

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
This commit is contained in:
David B. Kinder 2019-08-02 12:42:11 -07:00 committed by wenlingz
parent da744ac35f
commit defac8d195
2 changed files with 115 additions and 107 deletions


@@ -22,6 +22,8 @@ Configuration Tutorials
   tutorials/using_partition_mode_on_nuc
   tutorials/using_partition_mode_on_up2

.. tutorials/using_sdc2_mode_on_nuc

User VM Tutorials
*****************


@@ -1,162 +1,168 @@

:orphan:

.. _using_sdc2_mode_on_nuc:

Launch Two User VMs on NUC using SDC2 Scenario
##############################################

Starting with the ACRN v1.2 release, the ACRN hypervisor supports a new
Software Defined Cockpit scenario, SDC2, where up to three User VMs,
potentially running different OSes, can be launched from the Service VM.

This tutorial provides step-by-step instructions for enabling the SDC2
scenario on an Intel NUC and activating two post-launched User VMs: one
running Clear Linux, the other Ubuntu. The same process can be applied
to launch a third Linux VM as well.

ACRN Service VM Setup
*********************

Follow the steps in :ref:`getting-started-apl-nuc` to set up ACRN on an
Intel NUC. The target device must be capable of launching a Clear Linux
User VM as a starting point.

Re-build ACRN UEFI Executable
*****************************

The prebuilt ACRN UEFI executable ``acrn.efi`` is compiled for the SDC
scenario by default, which supports only one post-launched VM. To
activate additional post-launched VMs, you need to enable the SDC2
scenario and rebuild the UEFI executable using the following steps:

#. Refer to :ref:`getting-started-building` to set up the development
   environment for re-compiling the UEFI executable from the ACRN
   source tree.

#. Enter the ``hypervisor`` directory under the ACRN source tree and use
   menuconfig to reconfigure the ACRN hypervisor for the SDC2 scenario.
   The following example starts with the configuration for the
   ``kbl-nuc-i7`` board as a template; you can specify another board
   type that is closer to your target system:

   .. code-block:: bash

      $ cd hypervisor/
      $ make defconfig BOARD=kbl-nuc-i7
      $ make menuconfig

   .. figure:: images/sdc2-defconfig.png
      :align: center
      :width: 600px
      :name: Reconfigure the ACRN hypervisor

#. Select the ``Software Defined Cockpit 2`` option for the
   **ACRN Scenario** configuration:

   .. figure:: images/sdc2-selected.png
      :align: center
      :width: 600px
      :name: Select the SDC2 scenario option

#. Press :kbd:`D` to save the minimal configuration to a default file
   ``defconfig``, then press :kbd:`Q` to quit the menuconfig script.

   .. figure:: images/sdc2-save-mini-config.png
      :align: center
      :width: 600px
      :name: Save the customized configurations

#. Create a new BOARD configuration (say, ``mydevice``) with the SDC2
   scenario you just enabled. Replace the following ``kbl-nuc-i7``
   soft-linked target with the board type you specified in the previous
   step (if different):

   .. code-block:: bash

      $ cp defconfig arch/x86/configs/mydevice.config
      $ ln -s kbl-nuc-i7 arch/x86/configs/mydevice

#. Go to the root of the ACRN source tree to build the ACRN UEFI
   executable with the customized configuration:

   .. code-block:: bash

      $ cd ..
      $ make FIRMWARE=uefi BOARD=mydevice

#. Copy the generated ``acrn.efi`` executable to the ESP partition.
   (You may need to mount the ESP partition if it's not mounted.)

   .. code-block:: bash

      $ sudo mount /dev/sda1 /boot
      $ sudo cp build/hypervisor/acrn.efi /boot/EFI/acrn/acrn.efi

#. Reboot the ACRN hypervisor and the Service VM.
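
As a quick sanity check of the menuconfig steps above, you can grep the
saved ``defconfig`` for the scenario setting before rebuilding. This is
a minimal sketch: ``CONFIG_SDC2`` is an assumed Kconfig symbol name
(check your generated file for the actual spelling), and a stand-in file
is used here so the snippet is self-contained:

```shell
# Sanity check: confirm the saved defconfig selected the SDC2 scenario.
# CONFIG_SDC2 is an assumed symbol name; a stand-in file replaces
# hypervisor/defconfig so this snippet runs anywhere.
printf 'CONFIG_SDC2=y\n' > /tmp/defconfig-example
if grep -q '^CONFIG_SDC2=y' /tmp/defconfig-example; then
    echo "SDC2 scenario enabled"
fi
```

If the symbol is missing from your real ``defconfig``, re-run
``make menuconfig`` and save the configuration again before building.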

Launch User VMs with predefined UUIDs
*************************************

In the SDC2 scenario, each User VM launched by the ACRN device model
``acrn-dm`` must use one of the following UUIDs:

* ``d2795438-25d6-11e8-864e-cb7a18b34643``
* ``495ae2e5-2603-4d64-af76-d4bc5a8ec0e5``
* ``38158821-5208-4005-b72a-8a609e4190d0``
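
Since each User VM must use a distinct entry from this fixed list, a
launch script can map a VM index to its UUID. The following is a
hypothetical helper, not part of the stock ``launch_uos.sh``; the
``vm_index`` variable and the mapping are illustrative:

```shell
# Hypothetical helper: map a VM index (1-3) to its predefined SDC2 UUID.
vm_index=2
case "$vm_index" in
    1) uuid="d2795438-25d6-11e8-864e-cb7a18b34643" ;;
    2) uuid="495ae2e5-2603-4d64-af76-d4bc5a8ec0e5" ;;
    3) uuid="38158821-5208-4005-b72a-8a609e4190d0" ;;
    *) echo "SDC2 supports at most three User VMs" >&2; exit 1 ;;
esac
echo "$uuid"   # pass to acrn-dm as: -U "$uuid"
```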

Add the ``-U`` parameter to the ``launch_uos.sh`` script to attach to a
specific VM through the ``acrn-dm`` command. For example, the following
code snippet is used to launch VM1:

.. code-block:: none
   :emphasize-lines: 9

   acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio \
      -s 2,pci-gvt -G "$3" \
      -s 5,virtio-console,@pty:pty_port \
      -s 6,virtio-hyper_dmabuf \
      -s 3,virtio-blk,clear-27550-kvm.img \
      -s 4,virtio-net,tap0 \
      $logger_setting \
      --mac_seed $mac_seed \
      -U d2795438-25d6-11e8-864e-cb7a18b34643 \
      -k /usr/lib/kernel/default-iot-lts2018 \
      -B "root=/dev/vda3 rw rootwait maxcpus=$2 nohpet console=tty0 console=hvc0 \
      console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
      consoleblank=0 tsc=reliable i915.avail_planes_per_pipe=$4 \
      i915.enable_hangcheck=0 i915.nuclear_pageflip=1 i915.enable_guc_loading=0 \
      i915.enable_guc_submission=0 i915.enable_guc=0" $vm_name

Likewise, the following code snippet specifies a different UUID and a
different network tap device ``tap1`` to launch VM2 and connect it to
the network:

.. code-block:: none
   :emphasize-lines: 2,6,10

   acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio \
      -s 2,pci-gvt -G "$3" \
      -s 5,virtio-console,@pty:pty_port \
      -s 6,virtio-hyper_dmabuf \
      -s 3,virtio-blk,ubuntu-16.04.img \
      -s 4,virtio-net,tap1 \
      -s 7,virtio-rnd \
      $logger_setting \
      --mac_seed $mac_seed \
      -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
      -k /usr/lib/kernel/default-iot-lts2018 \
      -B "root=/dev/vda rw rootwait maxcpus=$2 nohpet console=tty0 console=hvc0 \
      console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
      consoleblank=0 tsc=reliable i915.avail_planes_per_pipe=$4 \
      i915.enable_hangcheck=0 i915.nuclear_pageflip=1 i915.enable_guc_loading=0 \
      i915.enable_guc_submission=0 i915.enable_guc=0" $vm_name
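
Because the launched VM must use one of the three predefined UUIDs, a
launch script can also validate its ``-U`` argument up front. This guard
is a hypothetical addition to the launch scripts, not part of the stock
versions:

```shell
# Hypothetical pre-flight check: verify the UUID passed to acrn-dm -U
# is one of the three SDC2 predefined values listed above.
uuid="495ae2e5-2603-4d64-af76-d4bc5a8ec0e5"
allowed="d2795438-25d6-11e8-864e-cb7a18b34643
495ae2e5-2603-4d64-af76-d4bc5a8ec0e5
38158821-5208-4005-b72a-8a609e4190d0"
if printf '%s\n' "$allowed" | grep -qx "$uuid"; then
    echo "UUID accepted"
else
    echo "error: $uuid is not a predefined SDC2 UUID" >&2
    exit 1
fi
```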

.. note::
   The i915 GPU supports three hardware pipes to drive the displays;
   however, only certain products are designed with the circuitry needed
   to connect three external displays. On a system supporting two
   external displays, because the primary display is assigned to the
   Service VM at boot time, you may remove the ``-s 2,pci-gvt -G "$3"``
   options in one of the previous VM-launching example scripts to
   completely disable the GVT-g feature for that VM. Refer to
   :ref:`APL_GVT-g-hld` for detailed information.

Here's a screenshot of the resulting launch of the Clear Linux and
Ubuntu User VMs, with a Clear Linux Service VM:

.. figure:: images/sdc2-launch-2-laag.png
   :align: center
   :name: Launching two User VMs, running Clear Linux and Ubuntu