mirror of
https://github.com/projectacrn/acrn-hypervisor.git
synced 2025-06-24 14:33:38 +00:00
doc: remove docs referencing Clear Linux
ACRN 2.1 supports two virtual boot modes: deprivilege boot mode and direct boot mode. The deprivilege boot mode's main purpose is to support booting a Clear Linux Service VM with UEFI service support, but it creates scalability problems when porting ACRN to new Intel platforms. For the 2.2 release, deprivilege mode is removed and only direct boot is supported. With this change we've removed support for Clear Linux as the Service VM, which impacts over 50 ACRN documents. This PR removes documents we don't intend to update and fixes the broken links that would result from references to these deleted docs.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
commit 54975e4629
parent fc1fc0eb8d
@@ -32,9 +32,7 @@ Service VM Tutorials
 .. toctree::
    :maxdepth: 1

    tutorials/using_ubuntu_as_sos
    tutorials/running_deb_as_serv_vm
-   tutorials/cl_servicevm

 User VM Tutorials
 *****************
@@ -44,15 +42,12 @@ User VM Tutorials
 .. toctree::
    :maxdepth: 1

-   tutorials/building_uos_from_clearlinux
    tutorials/using_windows_as_uos
    tutorials/running_ubun_as_user_vm
    tutorials/running_deb_as_user_vm
    tutorials/using_xenomai_as_uos
-   tutorials/using_celadon_as_uos
    tutorials/using_vxworks_as_uos
    tutorials/using_zephyr_as_uos
-   tutorials/agl-vms

 Enable ACRN Features
 ********************
@@ -62,12 +57,9 @@ Enable ACRN Features
 .. toctree::
    :maxdepth: 1

-   tutorials/acrn-dm_QoS
-   tutorials/open_vswitch
    tutorials/sgx_virtualization
    tutorials/vuart_configuration
    tutorials/rdt_configuration
-   tutorials/using_sbl_on_up2
    tutorials/waag-secure-boot
    tutorials/enable_s5
    tutorials/cpu_sharing
@@ -89,23 +81,7 @@ Debug
 .. toctree::
    :maxdepth: 1

    tutorials/using_serial_port
    tutorials/debug
    tutorials/realtime_performance_tuning
    tutorials/rtvm_performance_tips
-
-Additional Tutorials
-********************
-
-.. rst-class:: rst-columns2
-
-.. toctree::
-   :maxdepth: 1
-
-   tutorials/up2
-   tutorials/building_acrn_in_docker
-   tutorials/acrn_ootb
-   tutorials/static-ip
-   tutorials/increase-uos-disk-size
-   tutorials/sign_clear_linux_image
-   tutorials/enable_laag_secure_boot
-   tutorials/kbl-nuc-sdc
@@ -50,8 +50,7 @@ Install build tools and dependencies
 ACRN development is supported on popular Linux distributions, each with
 their own way to install development tools. This user guide covers the
 different steps to configure and build ACRN natively on your
-distribution. Refer to the :ref:`building-acrn-in-docker` user guide for
-instructions on how to build ACRN using a container.
+distribution.

 .. note::
    ACRN uses ``menuconfig``, a python3 text-based user interface (TUI)
@@ -65,7 +65,7 @@ development team for Software-Defined Cockpit (SDC), Industrial Usage
 https://up-shop.org/home/270-up-squared.html


-For general instructions setting up ACRN on supported hardware platforms, visit the :ref:`rt_industry_setup` page.
+For general instructions setting up ACRN on supported hardware platforms, visit the :ref:`rt_industry_ubuntu_setup` page.


 +--------------------------------+-------------------------+-----------+-----------+-------------+------------+
@@ -56,8 +56,7 @@ Slim Bootloader is a modern, flexible, light-weight, open source
 reference boot loader with key benefits such as being fast, small,
 customizable, and secure. An end-to-end reference build with
 ACRN hypervisor, Clear Linux OS as SOS, and Clear Linux OS as UOS has been
-verified on UP2/SBL board. See the :ref:`using-sbl-up2` documentation
-for step-by-step instructions.
+verified on UP2/SBL board.

 **Document updates**: Several new documents have been added in this release, including:
@@ -30,8 +30,7 @@ with a specific release: generated v0.6 documents can be found at
 https://projectacrn.github.io/0.6/. Documentation for the latest
 (master) branch is found at https://projectacrn.github.io/latest/.

-ACRN v0.6 requires Clear Linux OS version 27600. Please follow the
-instructions in the :ref:`kbl-nuc-sdc`.
+ACRN v0.6 requires Clear Linux OS version 27600.

 Version 0.6 new features
 ************************
@@ -44,7 +43,7 @@ published a tutorial. More patches for ACRN real time support will continue.

 **Document updates**: Several new documents have been added in this release, including:

-* :ref:`Running Automotive Grade Linux as a VM <agl-vms>`
+* Running Automotive Grade Linux as a VM
 * Using PREEMPT_RT-Linux for real-time UOS
 * :ref:`Frequently Asked Questions <faq>`
 * :ref:`An introduction to Trusty and Security services on ACRN
@@ -30,8 +30,7 @@ with a specific release: generated v0.7 documents can be found at
 https://projectacrn.github.io/0.7/. Documentation for the latest
 (master) branch is found at https://projectacrn.github.io/latest/.

-ACRN v0.7 requires Clear Linux OS version 28260. Please follow the
-instructions in the :ref:`kbl-nuc-sdc`.
+ACRN v0.7 requires Clear Linux OS version 28260.

 Version 0.7 new features
 ************************
@@ -30,8 +30,7 @@ with a specific release: generated v0.8 documents can be found at
 https://projectacrn.github.io/0.8/. Documentation for the latest
 (master) branch is found at https://projectacrn.github.io/latest/.

-ACRN v0.8 requires Clear Linux OS version 28600. Please follow the
-instructions in the :ref:`kbl-nuc-sdc`.
+ACRN v0.8 requires Clear Linux OS version 28600.

 Version 0.8 new features
 ************************
@@ -31,8 +31,7 @@ or use Git clone and checkout commands::
 The project's online technical documentation is also tagged to correspond
 with a specific release: generated v1.0 documents can be found at https://projectacrn.github.io/1.0/.
 Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
-ACRN v1.0 requires Clear Linux* OS version 29070. Please follow the
-instructions in the :ref:`kbl-nuc-sdc`.
+ACRN v1.0 requires Clear Linux* OS version 29070.

 Version 1.0 major features
 **************************
@@ -50,7 +49,7 @@ Slim Bootloader is a modern, flexible, light-weight,
 open source reference bootloader that is also fast, small,
 customizable, and secure. An end-to-end reference build has been verified
 on UP2/SBL board using ACRN hypervisor, Clear Linux OS as SOS, and Clear
-Linux OS as UOS. See the :ref:`using-sbl-up2` for step-by-step instructions.
+Linux OS as UOS.

 Enable post-launched RTVM support for real-time UOS in ACRN
 ===========================================================
@@ -204,7 +203,7 @@ We have many reference documents `available

 * :ref:`Enable GVT-d in ACRN <gpu-passthrough>`
 * :ref:`Device Model Parameters <acrn-dm_parameters>`
-* :ref:`Running Automotive Grade Linux as a VM <agl-vms>`
+* Running Automotive Grade Linux as a VM
 * Using PREEMPT_RT-Linux for real-time UOS
 * :ref:`Frequently Asked Questions <faq>`
 * :ref:`An introduction to Trusty and Security services on ACRN <trusty-security-services>`
@@ -22,8 +22,7 @@ or use Git clone and checkout commands::
 The project's online technical documentation is also tagged to correspond
 with a specific release: generated v1.1 documents can be found at https://projectacrn.github.io/1.1/.
 Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
-ACRN v1.1 requires Clear Linux* OS version 29970. Please follow the
-instructions in the :ref:`kbl-nuc-sdc`.
+ACRN v1.1 requires Clear Linux* OS version 29970.

 Version 1.1 major features
 **************************
@@ -48,8 +47,8 @@ We have many `reference documents available <https://projectacrn.github.io>`_, i
 * Update: Using PREEMPT_RT-Linux for real-time UOS
 * :ref:`Zephyr RTOS as Guest OS <using_zephyr_as_uos>`
 * :ref:`Using VxWorks* as User OS <using_vxworks_as_uos>`
-* :ref:`How to enable OVS in ACRN <open_vswitch>`
-* :ref:`Enable QoS based on runC container <acrn-dm_qos>`
+* How to enable OVS in ACRN
+* Enable QoS based on runC container
 * :ref:`Using partition mode on NUC <using_partition_mode_on_nuc>`

 New Features Details
@@ -22,8 +22,7 @@ or use Git clone and checkout commands::
 The project's online technical documentation is also tagged to correspond
 with a specific release: generated v1.2 documents can be found at https://projectacrn.github.io/1.2/.
 Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
-ACRN v1.2 requires Clear Linux* OS version 30690. Please follow the
-instructions in the :ref:`kbl-nuc-sdc`.
+ACRN v1.2 requires Clear Linux* OS version 30690.

 Version 1.2 major features
 **************************
@@ -42,8 +41,8 @@ Document updates
 We have many `reference documents available <https://projectacrn.github.io>`_, including:

 * :ref:`Using Windows as User VM <using_windows_as_uos>`
-* :ref:`How to sign binaries of the Clear Linux image <sign_clear_linux_image>`
-* :ref:`Using Celadon as User VM <using_celadon_as_uos>`
+* How to sign binaries of the Clear Linux image
+* Using Celadon as User VM
 * :ref:`SGX Virtualization <sgx_virt>`

 We also updated the following documents based on the newly
@@ -22,8 +22,7 @@ or use Git clone and checkout commands::
 The project's online technical documentation is also tagged to correspond
 with a specific release: generated v1.3 documents can be found at https://projectacrn.github.io/1.3/.
 Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
-ACRN v1.3 requires Clear Linux* OS version 31080. Please follow the
-instructions in the :ref:`kbl-nuc-sdc`.
+ACRN v1.3 requires Clear Linux* OS version 31080.

 Version 1.3 major features
 **************************
@@ -43,10 +42,10 @@ Document updates
 ================
 We have many new `reference documents available <https://projectacrn.github.io>`_, including:

-* :ref:`Getting Started Guide for Industry scenario <rt_industry_setup>`
+* Getting Started Guide for Industry scenario
 * :ref:`ACRN Configuration Tool Manual <acrn_configuration_tool>`
 * :ref:`Trace and Data Collection for ACRN Real-Time(RT) Performance Tuning <rt_performance_tuning>`
-* :ref:`Building ACRN in Docker <building-acrn-in-docker>`
+* Building ACRN in Docker
 * :ref:`Running Ubuntu as the User VM <running_ubun_as_user_vm>`
 * :ref:`Running Debian as the User VM <running_deb_as_user_vm>`
 * :ref:`Running Debian as the Service VM <running_deb_as_serv_vm>`
@@ -22,8 +22,7 @@ or use Git clone and checkout commands::
 The project's online technical documentation is also tagged to correspond
 with a specific release: generated v1.4 documents can be found at https://projectacrn.github.io/1.4/.
 Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
-ACRN v1.4 requires Clear Linux* OS version 31670. Follow the
-instructions in the :ref:`rt_industry_setup`.
+ACRN v1.4 requires Clear Linux* OS version 31670.

 Version 1.4 major features
 **************************
@@ -41,7 +40,7 @@ Many new `reference documents <https://projectacrn.github.io>`_ are available, i

 * :ref:`ACRN high-level design <hld>` documents.
 * :ref:`enable-s5`
-* :ref:`enable_laag_secure_boot`
+* Enable Secure Boot in the Clear Linux User VM
 * :ref:`How-to-enable-secure-boot-for-windows`
 * :ref:`asa`

@@ -22,8 +22,7 @@ or use Git clone and checkout commands::
 The project's online technical documentation is also tagged to correspond
 with a specific release: generated v1.5 documents can be found at https://projectacrn.github.io/1.5/.
 Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
-ACRN v1.5 requires Clear Linux* OS version 32030. Follow the
-instructions in the :ref:`rt_industry_setup`.
+ACRN v1.5 requires Clear Linux* OS version 32030.

 Version 1.5 major features
 **************************
@@ -23,8 +23,7 @@ The project's online technical documentation is also tagged to correspond
 with a specific release: generated v1.6.1 documents can be found at
 https://projectacrn.github.io/1.6.1/.
 Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
-ACRN v1.6.1 requires Clear Linux OS version 33050. Follow the
-instructions in the :ref:`rt_industry_setup`.
+ACRN v1.6.1 requires Clear Linux OS version 33050.

 Version 1.6.1 major features
 ****************************
@@ -61,7 +60,7 @@ Many new and updated `reference documents <https://projectacrn.github.io>`_ are
 * :ref:`hv-device-passthrough`
 * :ref:`cpu_sharing`
 * :ref:`getting-started-building`
-* :ref:`rt_industry_setup`
+* Run Clear Linux as the Service VM
 * :ref:`using_windows_as_uos`

 We recommend that all developers upgrade to ACRN release v1.6.1.
@@ -22,8 +22,7 @@ or use Git clone and checkout commands::
 The project's online technical documentation is also tagged to correspond
 with a specific release: generated v1.6 documents can be found at https://projectacrn.github.io/1.6/.
 Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
-ACRN v1.6 requires Clear Linux OS version 32680. Follow the
-instructions in the :ref:`rt_industry_setup`.
+ACRN v1.6 requires Clear Linux OS version 32680.

 Version 1.6 major features
 **************************
@@ -250,7 +250,7 @@ Many new and updated `reference documents <https://projectacrn.github.io>`_ are

 * :ref:`using_zephyr_as_uos`
 * :ref:`running_deb_as_user_vm`
-* :ref:`using_celadon_as_uos`
+* Run Celadon as the User VM
 * :ref:`using_windows_as_uos`
 * :ref:`using_vxworks_as_uos`
 * :ref:`using_xenomai_as_uos`
@@ -259,7 +259,7 @@ Many new and updated `reference documents <https://projectacrn.github.io>`_ are

 .. rst-class:: rst-columns2

-* :ref:`open_vswitch`
+* Enable OVS in ACRN
 * :ref:`rdt_configuration`
 * :ref:`sriov_virtualization`
 * :ref:`cpu_sharing`
@@ -268,7 +268,7 @@ Many new and updated `reference documents <https://projectacrn.github.io>`_ are
 * :ref:`enable-s5`
 * :ref:`vuart_config`
 * :ref:`sgx_virt`
-* :ref:`acrn-dm_qos`
+* Enable QoS based on runC Containers
 * :ref:`setup_openstack_libvirt`
 * :ref:`acrn_on_qemu`
 * :ref:`gpu-passthrough`
@@ -1,133 +0,0 @@
.. _acrn-dm_qos:

Enable QoS based on runC Containers
###################################

This document describes how ACRN supports Device-Model Quality of Service (QoS)
based on using runC containers to control the Service VM resources
(CPU, Storage, Memory, Network) by modifying the runC configuration file.

What is QoS
***********

Traditionally, Quality of Service (QoS) is the description or measurement
of the overall performance of a service, such as a `computer network
<https://en.wikipedia.org/wiki/Computer_network>`_ or a `cloud computing
<https://en.wikipedia.org/wiki/Cloud_computing>`_ service,
particularly the performance as seen by the users of the network.

What is a runC container
************************

Containers are an abstraction at the application layer that packages code
and dependencies together. Multiple containers can run on the same machine
and share the OS kernel with other containers, each running as an
isolated process in user space. `runC
<https://github.com/opencontainers/runc>`_, a lightweight universal container runtime,
is a command-line tool for spawning and running containers according
to the `Open Container Initiative (OCI)
<https://www.opencontainers.org/>`_ specification.

ACRN-DM QoS architecture
************************

In the ACRN-DM QoS design, we run the ACRN-DM in a runC container environment.
Every time we start a User VM, we first start a runC container and
then launch the ACRN-DM within that container.
ACRN-DM QoS can manage these resources for the Device Model:

- CPU utilization
- Memory amount/limitation
- I/O bandwidth
- Network throughput

.. figure:: images/acrn-dm_qos_architecture.png
   :align: center

   ACRN-DM QoS architecture

ACRN-QoS CPU utilization example
********************************

In the runC ``config.json`` we set the CPU resource as shown below for VM0 and VM1:

.. code-block:: none

   "cpu": {
      "shares": 1024,
      "quota": 1000000,
      "period": 500000,
      "realtimeRuntime": 950000,
      "realtimePeriod": 1000000,
      "mems": "0-7"
   },

In this example the `cpu.shares
<https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/sec-cpu>`_
value is 1024, so the VM0 and VM1 device model
CPU utilization is ``1024 / (1024 + 1024 + 1024) = 33%``, which means
the maximal CPU resource for VM0 or VM1 is 33% of the entire CPU resource.
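The shares arithmetic above generalizes to any number of containers: each container's ceiling is its weight divided by the sum of all weights. A minimal sketch (illustrative only, not part of the original tutorial; container names are hypothetical) of that calculation:

```python
# Illustrative helper: map each runC container's cpu.shares weight
# to its maximum fraction of total CPU time.
def cpu_utilization(shares):
    """Map {container: cpu.shares} to {container: fraction of total CPU}."""
    total = sum(shares.values())
    return {name: value / total for name, value in shares.items()}

# Service VM device model plus two User VM device models, all weighted 1024,
# as in the example above: each may use up to 33% of the CPU.
util = cpu_utilization({"sos": 1024, "vm0": 1024, "vm1": 1024})
print(round(util["vm0"] * 100))  # → 33
```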
.. figure:: images/cpu_utilization_image.png
   :align: center

   CPU utilization image

How to use ACRN-DM QoS
**********************

#. Follow :ref:`kbl-nuc-sdc` to boot "The ACRN Service OS" based on Clear Linux 29970 (ACRN tag v1.1).

#. Add these parameters to the ``runC.json`` file:

   .. code-block:: none

      # vim /usr/share/acrn/samples/nuc/runC.json

   .. code-block:: none

      "linux": {
         "resources": {
            "memory": {
               "limit": 536870912,
               "reservation": 536870912,
               "swap": 536870912,
               "kernel": -1,
               "kernelTCP": -1,
               "swappiness": 0,
               "disableOOMKiller": false
            },
            "cpu": {
               "shares": 1024,
               "quota": 1000000,
               "period": 500000,
               "mems": "0-7"
            },
            "devices": [
               {
                  "allow": true,
                  "access": "rwm"
               }
            ]
         },

   .. note:: For configuration details, refer to the `Open Containers configuration documentation
      <https://github.com/opencontainers/runtime-spec/blob/master/config.md>`_.
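The raw numbers in this fragment are easier to read in human units. A small sketch (assuming standard cgroup CFS semantics, where quota divided by period gives the number of CPUs' worth of runtime) that decodes the values above:

```python
# Decode the runC resource limits from the sample config above.
memory_limit = 536870912                 # bytes
cpu_quota, cpu_period = 1000000, 500000  # microseconds

mem_mib = memory_limit / (1024 ** 2)     # bytes -> MiB
cpus = cpu_quota / cpu_period            # CFS quota/period = CPUs' worth of time

print(f"memory limit: {mem_mib:.0f} MiB")  # → memory limit: 512 MiB
print(f"cpu bandwidth: {cpus:.1f} CPUs")   # → cpu bandwidth: 2.0 CPUs
```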
#. Add the User VM with the ``acrnctl add`` command:

   .. code-block:: none

      # acrnctl add launch_uos.sh -C

   .. note:: You can download an `example launch_uos.sh script
      <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/devicemodel/samples/nuc/launch_uos.sh>`_
      that supports the ``-C`` (``run_container`` function) option.

#. Start the User VM with ``acrnd``:

   .. code-block:: none

      # acrnd -t

#. After the User VM boots, you may use the ``runc list`` command to check the container status in the Service VM:

   .. code-block:: none

      # runc list
      ID     PID     STATUS    BUNDLE                              CREATED                        OWNER
      vm1    1686    running   /usr/share/acrn/conf/add/runc/vm1   2019-06-27T08:16:40.9039293Z   #0
@@ -1,578 +0,0 @@
.. _acrn_ootb:

Install ACRN Out of the Box
###########################

In this tutorial, we will learn to generate an out-of-the-box (OOTB)
Service VM or a Preempt-RT VM image so that we can use ACRN or the RTVM
immediately after installation without any configuration or
modification.

Set up a Build Environment
**************************

#. Follow the `Clear Linux OS installation guide
   <https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
   to install a native Clear Linux OS on a development machine.

#. Log in to the Clear Linux OS and install these bundles::

      $ sudo swupd bundle-add clr-installer vim network-basic

.. _set_up_ootb_service_vm:
Generate a Service VM image
***************************

Step 1: Create a Service VM YAML file and script
================================================

**Scenario 1: ACRN SDC**

#. Create the ACRN SDC ``service-os.yaml`` file:

   .. code-block:: none

      $ mkdir -p ~/service-os && cd ~/service-os
      $ vim service-os.yaml

   Update the ``service-os.yaml`` file to:

   .. code-block:: bash
      :emphasize-lines: 51

      block-devices: [
         {name: "bdevice", file: "sos.img"}
      ]

      targetMedia:
      - name: ${bdevice}
        size: "108.54G"
        type: disk
        children:
        - name: ${bdevice}1
          fstype: vfat
          mountpoint: /boot
          size: "512M"
          type: part
        - name: ${bdevice}2
          fstype: swap
          size: "32M"
          type: part
        - name: ${bdevice}3
          fstype: ext4
          mountpoint: /
          size: "108G"
          type: part

      bundles: [
         bootloader,
         editors,
         network-basic,
         openssh-server,
         os-core,
         os-core-update,
         sysadmin-basic,
         systemd-networkd-autostart,
         service-os
      ]

      autoUpdate: false
      postArchive: false
      postReboot: false
      telemetry: false
      hostname: clr-sos

      keyboard: us
      language: en_US.UTF-8
      kernel: kernel-iot-lts2018-sos

      post-install: [
         {cmd: "${yamlDir}/service-os-post.sh ${chrootDir}"},
      ]

      version: 32030

   .. note:: Update the version value to your target Clear Linux version.

#. Create the ACRN SDC ``service-os-post.sh`` script:

   .. code-block:: none

      $ vim service-os-post.sh

   Update the ``service-os-post.sh`` script to:

   .. code-block:: bash

      #!/bin/bash
      # Copyright (C) 2019 Intel Corporation.
      # SPDX-License-Identifier: BSD-3-Clause

      # ACRN SOS Image Post Install steps

      set -ex

      CHROOTPATH=$1

      # acrn.efi path
      acrn_efi_path="$CHROOTPATH/usr/lib/acrn/acrn.nuc7i7dnb.sdc.efi"

      # copy acrn.efi to the EFI partition
      mkdir -p "$CHROOTPATH/boot/EFI/acrn" || exit 1
      cp "$acrn_efi_path" "$CHROOTPATH/boot/EFI/acrn/acrn.efi" || exit 1

      # create loader.conf
      echo "Add default (5 seconds) boot wait time"
      echo "timeout 5" >> "$CHROOTPATH/boot/loader/loader.conf" || exit 1

      chroot $CHROOTPATH systemd-machine-id-setup
      chroot $CHROOTPATH systemctl enable getty@tty1.service

      echo "Welcome to the Clear Linux* ACRN SOS image!

      Please login as root for the first time!

      " >> $1/etc/issue

      exit 0

   Grant execute permission to the script:

   .. code-block:: none

      $ chmod a+x service-os-post.sh
**Scenario 2: ACRN INDUSTRY**

#. Create the ACRN INDUSTRY ``service-os-industry.yaml`` file:

   .. code-block:: none

      $ mkdir -p ~/service-os-industry && cd ~/service-os-industry
      $ vim service-os-industry.yaml

   Update the ``service-os-industry.yaml`` file to:

   .. code-block:: bash
      :emphasize-lines: 52

      block-devices: [
         {name: "bdevice", file: "sos-industry.img"}
      ]

      targetMedia:
      - name: ${bdevice}
        size: "108.54G"
        type: disk
        children:
        - name: ${bdevice}1
          fstype: vfat
          mountpoint: /boot
          size: "512M"
          type: part
        - name: ${bdevice}2
          fstype: swap
          size: "32M"
          type: part
        - name: ${bdevice}3
          fstype: ext4
          mountpoint: /
          size: "108G"
          type: part

      bundles: [
         bootloader,
         editors,
         network-basic,
         openssh-server,
         os-core,
         os-core-update,
         sysadmin-basic,
         systemd-networkd-autostart,
         service-os
      ]

      autoUpdate: false
      postArchive: false
      postReboot: false
      telemetry: false
      hostname: clr-sos

      keyboard: us
      language: en_US.UTF-8
      kernel: kernel-iot-lts2018-sos

      post-install: [
         {cmd: "${yamlDir}/service-os-industry-post.sh ${chrootDir}"},
      ]

      version: 32030

   .. note:: Update the version value to your target Clear Linux version.

#. Create the ``service-os-industry-post.sh`` script:

   .. code-block:: none

      $ vim service-os-industry-post.sh

   Update the ``service-os-industry-post.sh`` script to:

   .. code-block:: bash

      #!/bin/bash
      # Copyright (C) 2019 Intel Corporation.
      # SPDX-License-Identifier: BSD-3-Clause

      # ACRN SOS Image Post Install steps

      set -ex

      CHROOTPATH=$1

      # acrn.nuc7i7dnb.industry.efi path
      acrn_industry_efi_path="$CHROOTPATH/usr/lib/acrn/acrn.nuc7i7dnb.industry.efi"

      # copy acrn.efi to the EFI partition
      mkdir -p "$CHROOTPATH/boot/EFI/acrn" || exit 1
      cp "$acrn_industry_efi_path" "$CHROOTPATH/boot/EFI/acrn/acrn.efi" || exit 1

      # create loader.conf
      echo "Add default (5 seconds) boot wait time"
      echo "timeout 5" >> "$CHROOTPATH/boot/loader/loader.conf" || exit 1

      chroot $CHROOTPATH systemd-machine-id-setup
      chroot $CHROOTPATH systemctl enable getty@tty1.service

      echo "Welcome to the Clear Linux* ACRN SOS Industry image!

      Please login as root for the first time!

      " >> $1/etc/issue

      exit 0

   Grant execute permission to the script:

   .. code-block:: none

      $ chmod a+x service-os-industry-post.sh
Step 2: Build the Service VM image
==================================

Use clr-installer to build the Service VM image.

**Scenario 1: ACRN SDC**

.. code-block:: none

   $ cd ~/service-os
   $ sudo clr-installer -c service-os.yaml

The ``sos.img`` will be generated in the current directory.

**Scenario 2: ACRN INDUSTRY**

.. code-block:: none

   $ cd ~/service-os-industry
   $ sudo clr-installer -c service-os-industry.yaml

The ``sos-industry.img`` will be generated in the current directory.
.. _deploy_ootb_service_vm:

Step 3: Deploy the Service VM image
===================================

#. Prepare a USB disk with at least 8GB of capacity. Begin by formatting the USB disk:

   .. code-block:: none

      # sudo gdisk /dev/sdb
      GPT fdisk (gdisk) version 1.0.3

      Partition table scan:
        MBR: protective
        BSD: not present
        APM: not present
        GPT: present

      Found valid GPT with protective MBR; using GPT.

      Command (? for help): o
      This option deletes all partitions and creates a new protective MBR.
      Proceed? (Y/N): Y

      Command (? for help): w

      Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
      PARTITIONS!!

      Do you want to proceed? (Y/N): Y
      OK; writing new GUID partition table (GPT) to /dev/sdb.
      The operation has completed successfully.

#. Follow these steps to create two partitions on the USB disk.
   Keep 4GB in the first partition and leave the free space in the second partition.

   .. code-block:: none

      # sudo gdisk /dev/sdb
      GPT fdisk (gdisk) version 1.0.3

      Partition table scan:
        MBR: protective
        BSD: not present
        APM: not present
        GPT: present

      Found valid GPT with protective MBR; using GPT.

      Command (? for help): n
      Partition number (1-128, default 1):
      First sector (34-15249374, default = 2048) or {+-}size{KMGTP}:
      Last sector (2048-15249374, default = 15249374) or {+-}size{KMGTP}: +4G
      Current type is 'Linux filesystem'
      Hex code or GUID (L to show codes, Enter = 8300):
      Changed type of partition to 'Linux filesystem'

      Command (? for help): n
      Partition number (2-128, default 2):
      First sector (34-15249374, default = 8390656) or {+-}size{KMGTP}:
      Last sector (8390656-15249374, default = 15249374) or {+-}size{KMGTP}:
      Current type is 'Linux filesystem'
      Hex code or GUID (L to show codes, Enter = 8300):
      Changed type of partition to 'Linux filesystem'

      Command (? for help): p
      Disk /dev/sdb: 15249408 sectors, 7.3 GiB
      Model: USB FLASH DRIVE
      Sector size (logical/physical): 512/512 bytes
      Disk identifier (GUID): 8C6BF21D-521A-49D5-8BC8-5B319FAF3F91
      Partition table holds up to 128 entries
      Main partition table begins at sector 2 and ends at sector 33
      First usable sector is 34, last usable sector is 15249374
      Partitions will be aligned on 2048-sector boundaries
      Total free space is 2014 sectors (1007.0 KiB)

      Number  Start (sector)    End (sector)  Size       Code  Name
         1            2048         8390655   4.0 GiB    8300  Linux filesystem
         2         8390656        15249374   3.3 GiB    8300  Linux filesystem

      Command (? for help): w

      Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
      PARTITIONS!!

      Do you want to proceed? (Y/N): Y
      OK; writing new GUID partition table (GPT) to /dev/sdb.
      The operation has completed successfully.
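The sector numbers gdisk reports follow directly from the 512-byte sector size. A quick sketch (illustrative only, not part of the original tutorial) that reproduces the first partition's boundaries from the session above:

```python
# Reproduce gdisk's partition boundaries for the session above:
# a 4 GiB first partition starting at sector 2048, with 512-byte sectors.
SECTOR = 512
first_start = 2048
first_sectors = 4 * 1024**3 // SECTOR        # number of sectors in 4 GiB
first_end = first_start + first_sectors - 1  # inclusive end sector

second_start = first_end + 1                 # the second partition begins here

print(first_end)     # → 8390655, matching the gdisk "p" output
print(second_start)  # → 8390656
```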
|
||||
#. Download and install a bootable Clear Linux on the U disk:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ wget https://download.clearlinux.org/releases/32030/clear/clear-32030-live-server.iso
|
||||
$ sudo dd if=clear-32030-live-server.iso of=/dev/sdb1 bs=4M oflag=sync status=progress

#. Format the second partition and copy ``sos.img`` or ``sos-industry.img``
   to the U disk:

   .. code-block:: none

      $ sudo mkfs.ext4 /dev/sdb2
      $ sudo mount /dev/sdb2 /mnt

   - ACRN SDC scenario:

     .. code-block:: none

        $ cp ~/service-os/sos.img /mnt
        $ sync && umount /mnt

   - ACRN INDUSTRY scenario:

     .. code-block:: none

        $ cp ~/service-os-industry/sos-industry.img /mnt
        $ sync && umount /mnt

#. Unplug the U disk from the development machine and plug it in to your
   test machine.

#. Reboot the test machine and boot from the USB drive.

#. Log in to the live Clear Linux OS with the "root" account and
   mount the second partition of the U disk:

   .. code-block:: none

      # mount /dev/sdb2 /mnt

#. Format the disk onto which the Service VM image will be installed:

   .. code-block:: none

      # gdisk /dev/sda
      GPT fdisk (gdisk) version 1.0.3

      Partition table scan:
        MBR: protective
        BSD: not present
        APM: not present
        GPT: present

      Found valid GPT with protective MBR; using GPT.

      Command (? for help): o
      This option deletes all partitions and creates a new protective MBR.
      Proceed? (Y/N): Y

      Command (? for help): w

      Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
      PARTITIONS!!

      Do you want to proceed? (Y/N): Y
      OK; writing new GUID partition table (GPT) to /dev/sda.
      The operation has completed successfully.

#. Delete any stale ACRN entries from the EFI boot manager:

   .. code-block:: none

      # efibootmgr | grep ACRN | cut -d'*' -f1 | cut -d't' -f2 | xargs -i efibootmgr -b {} -B
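   The pipeline works on ``efibootmgr`` output lines such as
   ``Boot0003* ACRN``: the first ``cut`` drops everything after the ``*``,
   the second drops the ``Boot`` prefix, and the resulting entry number is
   fed to ``efibootmgr -b <num> -B`` for deletion. A standalone sketch of
   the text-processing part (the sample line is illustrative, not real
   output from your machine):

   .. code-block:: shell

      line="Boot0003* ACRN"
      num=$(echo "$line" | grep ACRN | cut -d'*' -f1 | cut -d't' -f2)
      echo "$num"    # prints 0003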

#. Write the Service VM image to the disk:

   - ACRN SDC scenario:

     .. code-block:: none

        # dd if=/mnt/sos.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc

   - ACRN INDUSTRY scenario:

     .. code-block:: none

        # dd if=/mnt/sos-industry.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc

   .. note:: Because the YAML file configures a large image size
      (over 100 GB), generating the Service VM image and writing it to
      disk will take some time.

#. Configure the EFI firmware to boot the ACRN hypervisor by default:

   .. code-block:: none

      # efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 -L "ACRN"

#. Unplug the U disk and reboot the test machine. After Clear Linux OS
   boots, log in as "root" for the first time.

.. _set_up_ootb_rtvm:

Generate a User VM Preempt-RT image
***********************************

Step 1: Create a Preempt-RT image YAML file
===========================================

#. Create the ``preempt-rt.yaml`` file:

   .. code-block:: none

      $ mkdir -p ~/preempt-rt && cd ~/preempt-rt
      $ vim preempt-rt.yaml

   Update the ``preempt-rt.yaml`` file to:

   .. code-block:: bash
      :emphasize-lines: 46

      block-devices: [
         {name: "bdevice", file: "preempt-rt.img"}
      ]

      targetMedia:
      - name: ${bdevice}
        size: "8.54G"
        type: disk
        children:
        - name: ${bdevice}1
          fstype: vfat
          mountpoint: /boot
          size: "512M"
          type: part
        - name: ${bdevice}2
          fstype: swap
          size: "32M"
          type: part
        - name: ${bdevice}3
          fstype: ext4
          mountpoint: /
          size: "8G"
          type: part

      bundles: [
         bootloader,
         editors,
         network-basic,
         openssh-server,
         os-core,
         os-core-update,
         sysadmin-basic,
         systemd-networkd-autostart
      ]

      autoUpdate: false
      postArchive: false
      postReboot: false
      telemetry: false
      hostname: clr-preempt-rt

      keyboard: us
      language: en_US.UTF-8
      kernel: kernel-lts2018-preempt-rt

      version: 32030

   .. note:: Update the ``version`` value to your target Clear Linux
      version.

Step 2: Build a User VM Preempt-RT image
========================================

.. code-block:: none

   $ sudo clr-installer -c preempt-rt.yaml

The ``preempt-rt.img`` file is generated in the current directory.

.. _deploy_ootb_rtvm:

Step 3: Deploy the User VM Preempt-RT image
===========================================

#. Log in to the Service VM and copy ``preempt-rt.img`` from the
   development machine:

   .. code-block:: none

      $ mkdir -p preempt-rt && cd preempt-rt
      $ scp <development username>@<development machine ip>:<path to preempt-rt.img> .

#. Write ``preempt-rt.img`` to disk:

   .. code-block:: none

      $ sudo dd if=<path to preempt-rt.img> of=/dev/nvme0n1 bs=4M oflag=sync status=progress

#. Launch the Preempt-RT User VM:

   .. code-block:: none

      $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh

@ -1,488 +0,0 @@

.. highlight:: none

.. _agl-vms:

Run two AGL images as User VMs
##############################

This document describes how to run two Automotive Grade Linux (AGL)
images as VMs on the ACRN hypervisor. This serves as the baseline for
developing the hypervisor version of the `AGL CES demo
<https://www.youtube.com/watch?v=3Bv501INyKY>`_ using open-source
technologies.

.. figure:: images/agl-demo-concept.jpg
   :align: center
   :width: 500px
   :name: agl-demo-concept

   Demo concept

:numref:`agl-demo-concept` shows the AGL demo system configuration. The
hardware is an Intel Kaby Lake NUC and three displays for the cluster
meter, the In-Vehicle Infotainment (IVI) system, and the rear seat
entertainment (RSE). For software, three VMs run on top of ACRN:

* Clear Linux OS runs as the service OS (Service VM) to control the cluster meter.
* An AGL instance runs as a user OS (User VM) to control the IVI display.
* A second AGL User VM controls the RSE display.

:numref:`agl-demo-setup` shows the hardware and display images of a
running demo:

.. figure:: images/agl-demo-setup.jpg
   :align: center
   :width: 400px
   :name: agl-demo-setup

   Demo in action

Hardware Setup
**************

The following hardware is used for demo development:

.. list-table:: Demo Hardware
   :header-rows: 1

   * - Name
     - Link
     - Notes
   * - NUC
     - Kaby Lake `NUC7i7DNHE
       <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_
     -
       * `Specifications
         <https://www.intel.com/content/dam/support/us/en/documents/mini-pcs/nuc-kits/NUC7i7DN_TechProdSpec.pdf>`_
       * `Tested components and peripherals
         <http://compatibleproducts.intel.com/ProductDetails?prodSearch=True&searchTerm=NUC7i7DNHE#>`_
       * 16GB RAM
       * 120GB SATA SSD
   * - eDP display
     - `Sharp LQ125T1JX05
       <http://www.panelook.com/LQ125T1JX05-E_SHARP_12.5_LCM_overview_35649.html>`_
     -
   * - eDP cable
     - `eDP 40 pin cable
       <https://www.gorite.com/intel-nuc-dawson-canyon-edp-cable-4-lanes>`_
     - Other eDP pin cables work as well
   * - HDMI touch displays
     - `GeChic portable touch monitor
       <https://www.gechic.com/en/touch-monitor>`_
     - Tested with 1303I (no longer available), but others such as 1102I
       should also work.
   * - Serial cable
     - `Serial DB9 header cable
       <https://www.gorite.com/serial-db9-header-cable-for-nuc-dawson-canyon>`_
       or `RS232 lid
       <https://www.gorite.com/intel-nuc-rs232-lid-for-7th-gen-dawson-canyon-nuc>`_
     -

Connect Hardware
================

Learn how to connect an eDP display to the NUC using an eDP cable, as
shown in :numref:`agl-cables`, by following the `NUC specification
<https://www.intel.com/content/dam/support/us/en/documents/mini-pcs/nuc-kits/NUC7i7DN_TechProdSpec.pdf>`_.

.. figure:: images/agl-cables.jpg
   :align: center
   :name: agl-cables

   USB and display cable connections

As shown in :numref:`agl-cables`, connect HDMI cables and USB cables
(for touch) to the touch displays for the IVI and RSE. Note that if the USB
port for touch is changed, the USB bus-port number in the AGL launch script
must be changed accordingly.

Software Setup
**************

The demo setup uses these software components and versions:

.. list-table:: Demo Software
   :header-rows: 1

   * - Name
     - Version
     - Link
   * - ACRN hypervisor
     - 1.3
     - `ACRN project <https://github.com/projectacrn/acrn-hypervisor>`_
   * - Clear Linux OS
     - 31080
     - `Clear Linux OS installer image
       <https://download.clearlinux.org/releases/31080/clear/clear-31080-kvm.img.xz>`_
   * - AGL
     - Funky Flounder (6.0.2)
     - `intel-corei7-x64 image
       <https://mirrors.edge.kernel.org/AGL/release/flounder/6.0.2/intel-corei7-64/deploy/images/intel-corei7-64/agl-demo-platform-crosssdk-intel-corei7-64-20200318141526.rootfs.wic.xz>`_
   * - acrn-kernel
     - revision acrn-2019w39.1-140000p
     - `acrn-kernel <https://github.com/projectacrn/acrn-kernel>`_

Service OS
==========

#. Download the compressed Clear Linux OS installer image from
   https://download.clearlinux.org/releases/31080/clear/clear-31080-live-server.img.xz
   and follow the `Clear Linux OS installation guide
   <https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
   as a starting point for installing Clear Linux OS onto your platform.
   Follow the recommended options for choosing an Automatic installation
   type, and use the platform's storage as the target device for
   installation (overwriting the existing data and creating three
   partitions on the platform's storage drive).

#. After installation is complete, boot into Clear Linux OS, log in as
   root, and set a password.

#. Clear Linux OS is set to automatically update itself. We recommend that
   you disable this feature to have more control over when the updates
   happen. Use this command (as root) to disable the autoupdate feature::

      # swupd autoupdate --disable

#. This demo setup uses a specific release version (31080) of Clear
   Linux OS which has been verified to work with ACRN. In case you
   unintentionally update or change the Clear Linux OS version, you can
   fix it again using::

      # swupd verify --fix --picky -m 31080

#. Use `acrn_quick_setup.sh <https://github.com/projectacrn/acrn-hypervisor/blob/84c2b8819f479c5e6f4641490ff4bf6004f112d1/doc/getting-started/acrn_quick_setup.sh>`_
   to automatically install ACRN::

      # sh acrn_quick_setup.sh -s 31080 -i

#. After installation, the system will automatically start.

#. Reboot the system, choose **ACRN Hypervisor**, and launch the Clear Linux OS
   Service VM. If the EFI boot order is not right, use :kbd:`F10`
   on boot to enter the EFI menu and choose **ACRN Hypervisor**.

#. Install the graphics UI if necessary. Use only one of the two
   options listed below (this guide uses the GNOME on Wayland option)::

      # swupd bundle-add desktop desktop-autostart # GNOME and Weston

   or::

      # swupd bundle-add software-defined-cockpit # IAS shell for IVI (optional)

#. Create a new user and allow the user to use sudo::

      # useradd <username>
      # passwd <username>
      # usermod -G wheel -a <username>

#. Reboot the system::

      # reboot

#. The system will reboot to the graphical interface (GDM). From the login
   screen, click **Settings** and choose **GNOME on Wayland**. Then
   choose the <username> and enter the password to log in.

Build ACRN kernel for AGL (User VM)
===================================

In this demo, we use acrn-kernel as the baseline for AGL development.

#. Create a workspace, get the kernel source code, and configure the kernel
   settings with::

      $ cd workspace
      $ git clone https://github.com/projectacrn/acrn-kernel
      $ cd acrn-kernel
      $ git checkout tags/acrn-2019w39.1-140000p
      $ cp kernel_config_uos .config
      $ vi .config
      $ make olddefconfig

#. Load the ``.config`` for the User VM kernel build, and verify
   that the following config options are on::

      CONFIG_LOCALVERSION="-uos"
      CONFIG_SECURITY_SMACK=y
      CONFIG_SECURITY_SMACK_BRINGUP=y
      CONFIG_DEFAULT_SECURITY_SMACK=y
      CONFIG_EXT4_FS=y
      CONFIG_EXT4_USE_FOR_EXT2=y
      CONFIG_EXT4_FS_POSIX_ACL=y
      CONFIG_EXT4_FS_SECURITY=y
      CONFIG_CAN=y
      CONFIG_CAN_VCAN=y
      CONFIG_CAN_SLCAN=y
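   A quick way to confirm the options is to grep them from ``.config``
   before building. A minimal sketch (run in the kernel source directory;
   the list below is a subset of the options above):

   .. code-block:: shell

      # Report any required option that is not enabled in .config
      for opt in CONFIG_SECURITY_SMACK CONFIG_EXT4_FS CONFIG_CAN CONFIG_CAN_VCAN; do
          grep -q "^${opt}=y" .config || echo "missing: ${opt}"
      done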

#. Build the kernel::

      $ make -j 4
      $ sudo make modules_install
      $ sudo cp arch/x86/boot/bzImage /root/bzImage-4.19.0-uos

Set up AGLs
===========

#. Download the AGL Funky Flounder image::

      $ sudo su
      # cd /root
      # wget https://download.automotivelinux.org/AGL/release/flounder/6.0.2/intel-corei7-64/deploy/images/intel-corei7-64/agl-demo-platform-crosssdk-intel-corei7-64-20181112133144.rootfs.wic.xz
      # unxz agl-demo-platform-crosssdk-intel-corei7-64-20181112133144.rootfs.wic.xz
      # cp agl-demo-platform-crosssdk-intel-corei7-64-20181112133144.rootfs.wic agl-ivi.wic
      # cp agl-demo-platform-crosssdk-intel-corei7-64-20181112133144.rootfs.wic agl-rse.wic

#. Copy the User VM kernel modules into the AGL images::

      # losetup -f -P --show agl-ivi.wic
      # mount /dev/loop0p2 /mnt
      # cp -r /lib/modules/4.19.0-uos /mnt/lib/modules/
      # sync
      # umount /mnt
      # losetup -f -P --show agl-rse.wic
      # mount /dev/loop1p2 /mnt
      # cp -r /lib/modules/4.19.0-uos /mnt/lib/modules/
      # sync
      # umount /mnt

#. Create the ``launch_ivi.sh`` script for the AGL IVI VM (e.g., with vi) with
   the following content::

      #!/bin/bash
      set -x

      offline_path="/sys/class/vhm/acrn_vhm"

      # Check for the device file /dev/acrn_hsm to determine the offline_path
      if [ -e "/dev/acrn_hsm" ]; then
          offline_path="/sys/class/acrn/acrn_hsm"
      fi

      function launch_clear()
      {
          mac=$(cat /sys/class/net/e*/address)
          vm_name=vm$1
          mac_seed=${mac:9:8}-${vm_name}

          # Check whether the VM is already running
          vm_ps=$(pgrep -a -f acrn-dm)
          result=$(echo $vm_ps | grep -w "${vm_name}")
          if [[ "$result" != "" ]]; then
              echo "$vm_name is running, can't create twice!"
              exit
          fi

          # logger_setting format: logger_name,level
          logger_setting="--logger_setting console,level=4;kmsg,level=3"

          # Memory size setting
          mem_size=2048M

          acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge \
            -s 2,pci-gvt -G "$3" \
            -s 3,virtio-blk,/root/agl-ivi.wic \
            -s 4,virtio-net,tap0 \
            -s 5,virtio-console,@stdio:stdio_port \
            -s 6,virtio-hyper_dmabuf \
            -s 7,xhci,1-4 \
            $logger_setting \
            --mac_seed $mac_seed \
            -k /root/bzImage-4.19.0-uos \
            -B "root=/dev/vda2 rw rootwait maxcpus=$2 nohpet console=tty0 console=hvc0 \
            console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
            consoleblank=0 tsc=reliable i915.avail_planes_per_pipe=$4 \
            i915.enable_hangcheck=0 i915.nuclear_pageflip=1 i915.enable_guc_loading=0 \
            i915.enable_guc_submission=0 i915.enable_guc=0" $vm_name
      }

      # Offline all Service VM CPUs except the BSP before launching the User VM
      for i in `ls -d /sys/devices/system/cpu/cpu[1-99]`; do
          online=`cat $i/online`
          idx=`echo $i | tr -cd "[1-99]"`
          echo cpu$idx online=$online
          if [ "$online" = "1" ]; then
              echo 0 > $i/online
              # During boot, CPU hotplug may be temporarily disabled by
              # pci_device_probe while a PCI module is being inserted
              while [ "$online" = "1" ]; do
                  sleep 1
                  echo 0 > $i/online
                  online=`cat $i/online`
              done
              echo $idx > ${offline_path}/offline_cpu
          fi
      done

      launch_clear 1 1 "64 448 8" 0x000F00 agl
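   In the script, ``--mac_seed`` is derived from the host NIC's MAC address
   so each VM gets a stable, unique seed: ``${mac:9:8}`` takes the last
   three octets, and the VM name is appended. A standalone illustration
   (the MAC value here is made up):

   .. code-block:: shell

      mac="00:11:22:33:44:55"         # example; the script reads it from sysfs
      vm_name=vm1
      mac_seed=${mac:9:8}-${vm_name}  # 8 characters starting at offset 9
      echo "$mac_seed"                # prints 33:44:55-vm1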

#. Create the ``launch_rse.sh`` script for the AGL RSE VM with this content::

      #!/bin/bash
      set -x

      offline_path="/sys/class/vhm/acrn_vhm"

      # Check for the device file /dev/acrn_hsm to determine the offline_path
      if [ -e "/dev/acrn_hsm" ]; then
          offline_path="/sys/class/acrn/acrn_hsm"
      fi

      function launch_clear()
      {
          mac=$(cat /sys/class/net/e*/address)
          vm_name=vm$1
          mac_seed=${mac:9:8}-${vm_name}

          # Check whether the VM is already running
          vm_ps=$(pgrep -a -f acrn-dm)
          result=$(echo $vm_ps | grep -w "${vm_name}")
          if [[ "$result" != "" ]]; then
              echo "$vm_name is running, can't create twice!"
              exit
          fi

          # logger_setting format: logger_name,level
          logger_setting="--logger_setting console,level=4;kmsg,level=3"

          # Memory size setting
          mem_size=2048M

          acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
            -s 2,pci-gvt -G "$3" \
            -s 3,virtio-blk,/root/agl-rse.wic \
            -s 4,virtio-net,tap0 \
            -s 5,virtio-console,@stdio:stdio_port \
            -s 6,virtio-hyper_dmabuf \
            -s 7,xhci,1-5 \
            $logger_setting \
            --mac_seed $mac_seed \
            -k /root/bzImage-4.19.0-uos \
            -B "root=/dev/vda2 rw rootwait maxcpus=$2 nohpet console=tty0 console=hvc0 \
            console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
            consoleblank=0 tsc=reliable i915.avail_planes_per_pipe=$4 \
            i915.enable_hangcheck=0 i915.nuclear_pageflip=1 i915.enable_guc_loading=0 \
            i915.enable_guc_submission=0 i915.enable_guc=0" $vm_name
      }

      # Offline all Service VM CPUs except the BSP before launching the User VM
      for i in `ls -d /sys/devices/system/cpu/cpu[1-99]`; do
          online=`cat $i/online`
          idx=`echo $i | tr -cd "[1-99]"`
          echo cpu$idx online=$online
          if [ "$online" = "1" ]; then
              echo 0 > $i/online
              # During boot, CPU hotplug may be temporarily disabled by
              # pci_device_probe while a PCI module is being inserted
              while [ "$online" = "1" ]; do
                  sleep 1
                  echo 0 > $i/online
                  online=`cat $i/online`
              done
              echo $idx > ${offline_path}/offline_cpu
          fi
      done

      launch_clear 2 1 "64 448 8" 0x070000 agl

#. Launch the AGL IVI VM::

      # chmod a+x launch_ivi.sh
      # ./launch_ivi.sh

#. Settings for the IVI screen

   After booting, the IVI image will be accessible via the console.
   Log in as root, and use an editor to modify ``/etc/xdg/weston/weston.ini``
   to change the ``[output]`` orientation as shown below.

   .. code-block:: none
      :emphasize-lines: 11-13

      [core]
      shell=ivi-shell.so
      backend=drm-backend.so
      require-input=false
      modules=systemd-notify.so

      # A display is connected to HDMI-A-1 and needs to be rotated 90 degrees
      # to have a proper orientation of the homescreen. For example, the 'eGalax'
      # display used in some instances.

      [output]
      name=HDMI-A-1
      transform=270

      [id-agent]
      default-id-offset=1000

      [ivi-shell]
      ivi-input-module=ivi-input-controller.so
      ivi-module=ivi-controller.so
      id-agent-module=simple-id-agent.so

      [shell]
      locking=true
      panel-position=none

   .. note:: Reboot for the changes to take effect.

#. Launch the AGL RSE VM

   Open a new terminal::

      $ sudo su
      # cd /root
      # chmod a+x launch_rse.sh
      # ./launch_rse.sh

#. Settings for the RSE screen

   After booting, the RSE image will be accessible via the console.
   Log in as root, and use an editor to modify ``/etc/xdg/weston/weston.ini``
   to change the ``[output]`` orientation as shown below.

   .. code-block:: none
      :emphasize-lines: 11-13

      [core]
      shell=ivi-shell.so
      backend=drm-backend.so
      require-input=false
      modules=systemd-notify.so

      # A display is connected to HDMI-A-3 and needs to be rotated 90 degrees
      # to have a proper orientation of the homescreen. For example, the 'eGalax'
      # display used in some instances.

      [output]
      name=HDMI-A-3
      transform=270

      [id-agent]
      default-id-offset=1000

      [ivi-shell]
      ivi-input-module=ivi-input-controller.so
      ivi-module=ivi-controller.so
      id-agent-module=simple-id-agent.so

      [shell]
      locking=true
      panel-position=none

   .. note:: Reboot for the changes to take effect.

You have successfully launched the demo system. It should
look similar to :numref:`agl-demo-setup` at the beginning of this
document. AGL as IVI and RSE work independently on top
of ACRN, and you can interact with them via the mouse.

@ -1,196 +0,0 @@

.. _building-acrn-in-docker:

Build ACRN in Docker
####################

This tutorial shows how to build ACRN in a Clear Linux Docker image.

.. rst-class:: numbered-step

Install Docker
**************

#. Install Docker according to the `Docker installation instructions
   <https://docs.docker.com/install/>`_.
#. If you are behind an HTTP or HTTPS proxy server, follow the
   `HTTP/HTTPS proxy instructions <https://docs.docker.com/config/daemon/systemd/#httphttps-proxy>`_
   to set up an HTTP/HTTPS proxy to pull the Docker image, and
   `Configure Docker to use a proxy server <https://docs.docker.com/network/proxy/>`_
   to set up an HTTP/HTTPS proxy for the Docker container.
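
   For the daemon side, the linked instructions come down to a systemd
   drop-in that exports proxy variables to ``dockerd``. A sketch, assuming
   a proxy at ``proxy.example.com:8080`` (substitute your own host, port,
   and exclusions):

   .. code-block:: none

      # /etc/systemd/system/docker.service.d/http-proxy.conf
      [Service]
      Environment="HTTP_PROXY=http://proxy.example.com:8080"
      Environment="HTTPS_PROXY=http://proxy.example.com:8080"
      Environment="NO_PROXY=localhost,127.0.0.1"

   After creating the file, reload and restart the daemon with
   ``sudo systemctl daemon-reload && sudo systemctl restart docker``.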
#. Docker requires root privileges by default.
   Follow these `additional steps <https://docs.docker.com/install/linux/linux-postinstall/>`_
   to enable a non-root user.

   .. note::

      Performing these post-installation steps is not required. If you
      choose not to, add ``sudo`` in front of every ``docker`` command in
      this tutorial.

.. rst-class:: numbered-step

Get the Docker Image
********************

Pick one of these two ways to get the Clear Linux Docker image needed to build ACRN.

Get the Docker Image from Docker Hub
====================================

If you're not working behind a corporate proxy server, you can pull a
pre-built Docker image from Docker Hub to your development machine using
this command:

.. code-block:: none

   $ docker pull acrn/clearlinux-acrn-builder:latest

Build the Docker Image from a Dockerfile
========================================

Alternatively, you can build your own local Docker image using the
provided Dockerfile build instructions by following these steps. You'll
need this if you're working behind a corporate proxy.

#. Download the `Dockerfile <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/getting-started/Dockerfile>`_
   to your development machine.
#. Build the Docker image:

   If you are behind an HTTP proxy server, use this command,
   with your proxy settings, to let ``docker build`` know about the proxy
   configuration for the Docker image:

   .. code-block:: none

      $ docker build --build-arg HTTP_PROXY=http://<proxy_host>:<proxy_port> \
        --build-arg HTTPS_PROXY=https://<proxy_host>:<proxy_port> \
        -t clearlinux-acrn-builder:latest -f <path/to/Dockerfile> .

   Otherwise, you can simply use this command:

   .. code-block:: none

      $ docker build -t clearlinux-acrn-builder:latest -f <path/to/Dockerfile> .

.. rst-class:: numbered-step

Build ACRN from Source in Docker
********************************

#. Clone the acrn-hypervisor repo:

   .. code-block:: none

      $ mkdir -p ~/workspace && cd ~/workspace
      $ git clone https://github.com/projectacrn/acrn-hypervisor
      $ cd acrn-hypervisor

#. Build the acrn-hypervisor with the default configuration (the Software
   Defined Cockpit [SDC] configuration):

   For the Docker image built from the Dockerfile, use this command to build ACRN:

   .. code-block:: none

      $ docker run -u`id -u`:`id -g` --rm -v $PWD:/workspace \
        clearlinux-acrn-builder:latest bash -c "make clean && make"

   For the Docker image downloaded from Docker Hub, use this command to build ACRN:

   .. code-block:: none

      $ docker run -u`id -u`:`id -g` --rm -v $PWD:/workspace \
        acrn/clearlinux-acrn-builder:latest bash -c "make clean && make"

   The build artifacts are found in the ``build`` directory.

.. rst-class:: numbered-step

Build the ACRN Service VM Kernel in Docker
******************************************

#. Clone the acrn-kernel repo:

   .. code-block:: none

      $ mkdir -p ~/workspace && cd ~/workspace
      $ git clone https://github.com/projectacrn/acrn-kernel
      $ cd acrn-kernel

#. Build the ACRN Service VM kernel:

   For the Docker image built from the Dockerfile, use this command:

   .. code-block:: none

      $ cp kernel_config_sos .config
      $ docker run -u`id -u`:`id -g` --rm -v $PWD:/workspace \
        clearlinux-acrn-builder:latest \
        bash -c "make clean && make olddefconfig && make && make modules_install INSTALL_MOD_PATH=out/"

   For the Docker image downloaded from Docker Hub, use this command:

   .. code-block:: none

      $ cp kernel_config_sos .config
      $ docker run -u`id -u`:`id -g` --rm -v $PWD:/workspace \
        acrn/clearlinux-acrn-builder:latest \
        bash -c "make clean && make olddefconfig && make && make modules_install INSTALL_MOD_PATH=out/"

   The commands build the bootable kernel image as ``arch/x86/boot/bzImage``
   and the loadable kernel modules under the ``./out/`` folder.

.. rst-class:: numbered-step

Build the ACRN User VM PREEMPT_RT Kernel in Docker
**************************************************

#. Clone the preempt-rt kernel repo:

   .. code-block:: none

      $ mkdir -p ~/workspace && cd ~/workspace
      $ git clone -b 4.19/preempt-rt https://github.com/projectacrn/acrn-kernel preempt-rt
      $ cd preempt-rt

#. Build the ACRN User VM PREEMPT_RT kernel:

   For the Docker image built from the Dockerfile, use this command:

   .. code-block:: none

      $ cp x86-64_defconfig .config
      $ docker run -u`id -u`:`id -g` --rm -v $PWD:/workspace \
        clearlinux-acrn-builder:latest \
        bash -c "make clean && make olddefconfig && make && make modules_install INSTALL_MOD_PATH=out/"

   For the Docker image downloaded from Docker Hub, use this command:

   .. code-block:: none

      $ cp x86-64_defconfig .config
      $ docker run -u`id -u`:`id -g` --rm -v $PWD:/workspace \
        acrn/clearlinux-acrn-builder:latest \
        bash -c "make clean && make olddefconfig && make && make modules_install INSTALL_MOD_PATH=out/"

   The commands build the bootable kernel image as ``arch/x86/boot/bzImage``
   and the loadable kernel modules under the ``./out/`` folder.

.. rst-class:: numbered-step

Build the ACRN documentation
****************************

#. Make sure you have both the ``acrn-hypervisor`` and ``acrn-kernel``
   repositories already available in your workspace (see the steps above for
   instructions on how to clone them).

#. Build the ACRN documentation:

   .. code-block:: none

      $ cd ~/workspace
      $ docker run -u`id -u`:`id -g` --rm -v $PWD:/workspace \
        acrn/clearlinux-acrn-builder:latest \
        bash -c "cd acrn-hypervisor && make clean && make doc"

   The HTML documentation can be found in ``acrn-hypervisor/build/doc/html``.

@ -1,158 +0,0 @@

.. _build User VM from Clearlinux:

Build a User VM from the Clear Linux OS
#######################################

This document builds on :ref:`getting_started`
and explains how to build a User VM from Clear Linux OS.

Build a User VM image from Clear Linux OS
*****************************************

Follow these steps to build a User VM image from Clear Linux OS:

#. In Clear Linux OS, install ``ister`` (a template-based
   installer for Linux) included in the Clear Linux OS bundle
   ``os-installer``.
   For more information about ``ister``,
   visit https://github.com/bryteise/ister.

   .. code-block:: none

      $ sudo swupd bundle-add os-installer

#. After installation is complete, use ``ister.py`` to
   generate the image for a User VM with the configuration in
   ``uos-image.json``:

   .. code-block:: none

      $ cd ~
      $ sudo ister.py -t uos-image.json

   An example of the configuration file ``uos-image.json``:

   .. code-block:: none

      {
          "DestinationType" : "virtual",
          "PartitionLayout" : [ { "disk" : "uos.img",
                                  "partition" : 1,
                                  "size" : "512M",
                                  "type" : "EFI" },
                                { "disk" : "uos.img",
                                  "partition" : 2,
                                  "size" : "1G",
                                  "type" : "swap" },
                                { "disk" : "uos.img",
                                  "partition" : 3,
                                  "size" : "8G",
                                  "type" : "linux" } ],
          "FilesystemTypes" : [ { "disk" : "uos.img",
                                  "partition" : 1,
                                  "type" : "vfat" },
                                { "disk" : "uos.img",
                                  "partition" : 2,
                                  "type" : "swap" },
                                { "disk" : "uos.img",
                                  "partition" : 3,
                                  "type" : "ext4" } ],
          "PartitionMountPoints" : [ { "disk" : "uos.img",
                                       "partition" : 1,
                                       "mount" : "/boot" },
                                     { "disk" : "uos.img",
                                       "partition" : 3,
                                       "mount" : "/" } ],
          "Version": "latest",
          "Bundles": ["bootloader",
                      "editors",
                      "kernel-iot-lts2018",
                      "network-basic",
                      "os-core-update",
                      "os-core",
                      "openssh-server",
                      "sysadmin-basic"]
      }

   .. note::
      To generate an image with a specific version,
      modify the ``"Version"`` argument; for example, set
      ``"Version": 26550`` instead of ``"Version": "latest"``.
|
||||
|
||||
Here we will use ``"Version": 26550`` for example,
|
||||
and the User VM image called ``uos.img`` will be generated
|
||||
after successful installation. An example output log is:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
Reading configuration
|
||||
Validating configuration
|
||||
Creating virtual disk
|
||||
Creating partitions
|
||||
Mapping loop device
|
||||
Creating file systems
|
||||
Setting up mount points
|
||||
Starting swupd. May take several minutes
|
||||
Installing 9 bundles (and dependencies)...
|
||||
Verifying version 26550
|
||||
Downloading packs...
|
||||
|
||||
Extracting emacs pack for version 26550
|
||||
|
||||
Extracting vim pack for version 26550
|
||||
...
|
||||
Cleaning up
|
||||
Successful installation
|
||||
|
||||
#. On your target device, boot the system and select "The ACRN Service OS", as shown below:
|
||||
|
||||
.. code-block:: console
|
||||
:emphasize-lines: 1
|
||||
|
||||
=> The ACRN Service OS
|
||||
Clear Linux OS for Intel Architecture (Clear-linux-iot-lts2018-4.19.0-19)
|
||||
Clear Linux OS for Intel Architecture (Clear-linux-iot-lts2018-sos-4.19.0-19)
|
||||
Clear Linux OS for Intel Architecture (Clear-linux-native.4.19.1-654)
|
||||
EFI Default Loader
|
||||
Reboot Into Firmware Interface
|
||||
|
||||
|
||||
Start the User VM
|
||||
*****************
|
||||
|
||||
#. Mount the User VM image and check the User VM kernel:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
# losetup -r -f -P --show ~/uos.img
|
||||
# mount /dev/loop0p3 /mnt
|
||||
|
||||
# ls -l /mnt/usr/lib/kernel/
|
||||
|
||||
cmdline-4.19.0-26.iot-lts2018
|
||||
config-4.19.0-26.iot-lts2018
|
||||
default-iot-lts2018 -> org.clearlinux.iot-lts2018.4.19.0-26
|
||||
install.d
|
||||
org.clearlinux.iot-lts2018.4.19.0-26
|
||||
|
||||
#. Adjust the ``/usr/share/acrn/samples/nuc/launch_uos.sh``
|
||||
script to match your installation.
|
||||
These are the couple of lines you need to modify:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
-s 3,virtio-blk,~/uos.img \
|
||||
-k /mnt/usr/lib/kernel/default-iot-lts2018 \
|
||||
|
||||
.. note::
|
||||
User VM image ``uos.img`` is in the directory ``~/``
|
||||
and User VM kernel ``default-iot-lts2018`` is in ``/mnt/usr/lib/kernel/``.
|
||||
|
||||
#. You are now all set to start the User OS (User VM):
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ sudo /usr/share/acrn/samples/nuc/launch_uos.sh
|
||||
|
||||
You are now watching the User OS booting!
|
@@ -1,697 +0,0 @@
.. _rt_industry_setup:
.. _clear_service_vm:

Run Clear Linux as the Service VM
#################################

Verified version
****************

- Clear Linux\* version: **33050**
- ACRN-hypervisor tag: **v1.6.1 (acrn-2020w18.4-140000p)**
- ACRN-Kernel (Service VM kernel): **4.19.120-108.iot-lts2018-sos**

Prerequisites
*************

The example below is based on the Intel Whiskey Lake NUC platform with two
disks, an NVMe disk for the Clear Linux-based Service VM and a SATA disk
for the RTVM.

- Intel Whiskey Lake (aka WHL) NUC platform with two disks inside
  (refer to :ref:`the tables <hardware_setup>` for detailed information).
- **com1** is the serial port on the WHL NUC.
  If you are still using the KBL NUC and trying to enable the serial port on it, navigate to the
  :ref:`troubleshooting section <connect_serial_port>` that discusses how to prepare the cable.
- Follow the steps below to install Clear Linux OS (ver: 33050) onto the NVMe disk of the WHL NUC.

.. _Clear Linux OS Server image:
   https://download.clearlinux.org/releases/33050/clear/clear-33050-live-server.iso

#. Create a bootable USB drive on Linux*:

   a. Download the `Clear Linux OS Server image`_.
   #. Plug in the USB drive.
   #. Use the ``lsblk`` command line to identify the USB drive:

      .. code-block:: console
         :emphasize-lines: 6,7

         $ lsblk | grep sd*
         sda      8:0    0 931.5G  0 disk
         ├─sda1   8:1    0   512M  0 part /boot/efi
         ├─sda2   8:2    0 930.1G  0 part /
         └─sda3   8:3    0   977M  0 part [SWAP]
         sdc      8:32   1  57.3G  0 disk
         └─sdc1   8:33   1  57.3G  0 part

   #. Unmount all the ``/dev/sdc`` partitions and burn the image onto the USB drive::

         $ umount /dev/sdc* 2>/dev/null
         $ sudo dd if=./clear-33050-live-server.iso of=/dev/sdc oflag=sync status=progress bs=4M

#. Plug in the USB drive to the WHL NUC and boot from USB.
#. Launch the Clear Linux OS installer boot menu.
#. With Clear Linux OS highlighted, press :kbd:`Enter`.
#. Log in with your root account and new password.
#. Run the installer using the following command::

      # clr-installer

#. From the Main menu, select :kbd:`Configure Installation Media` and set
   :kbd:`Destructive Installation` to the NVMe disk.
#. Select :kbd:`Manage User` and choose :kbd:`Add New User`.
#. Select :kbd:`Telemetry` and use :kbd:`Tab` to highlight your choice.
#. Press :kbd:`A` to show the :kbd:`Advanced` options.
#. Select :kbd:`Select additional bundles` and add the
   **network-basic** and **user-basic** bundles.
#. Select :kbd:`Automatic OS Updates` and choose :kbd:`No [Disable]`.
#. Select :kbd:`Install`.
#. Select :kbd:`Confirm Install` in the :kbd:`Confirm Installation` window to start the installation.

.. _step-by-step instructions:
   https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html

.. note:: Refer to these `step-by-step instructions`_ from the Clear Linux OS installation guide.

.. _hardware_setup:

Hardware Setup
==============

.. table:: Hardware Setup
   :widths: auto
   :name: Hardware Setup

   +----------------------+-------------------+----------------------+-----------------------------------------------------------+
   | Platform (Intel x86) | Product/kit name  | Hardware             | Descriptions                                              |
   +======================+===================+======================+===========================================================+
   | Whiskey Lake         | WHL-IPC-I7        | Processor            | - Intel |reg| Core |trade| i7-8565U CPU @ 1.80GHz         |
   |                      |                   +----------------------+-----------------------------------------------------------+
   |                      |                   | Graphics             | - UHD Graphics 620                                        |
   |                      |                   |                      | - ONE HDMI\* 1.4a ports supporting 4K at 60 Hz            |
   |                      |                   +----------------------+-----------------------------------------------------------+
   |                      |                   | System memory        | - 8GiB SODIMM DDR4 2400 MHz [1]_                          |
   |                      |                   +----------------------+-----------------------------------------------------------+
   |                      |                   | Storage capabilities | - SATA: 128G KINGSTON RBUSNS8                             |
   |                      |                   |                      | - NVMe: 256G Intel Corporation SSD Pro 7600p/760p/E 6100p |
   +----------------------+-------------------+----------------------+-----------------------------------------------------------+

.. [1] The maximum supported memory size for ACRN is 16GB. If you are using
   32GB memory, follow the :ref:`config_32GB_memory` instructions to make
   a customized ACRN hypervisor that can support 32GB memory. For more
   detailed information about how to build ACRN
   from the source code, refer to this :ref:`guide <getting-started-building>`.

Set up the ACRN Hypervisor for industry scenario
************************************************

The ACRN industry scenario environment can be set up in several ways. The
two listed below are recommended:

- :ref:`Using the pre-installed industry ACRN hypervisor <use pre-installed industry efi>`
- :ref:`Using the ACRN industry out-of-the-box image <use industry ootb image>`

.. _use pre-installed industry efi:

Use the pre-installed industry ACRN hypervisor
==============================================

.. note:: Skip this section if you choose :ref:`Using the ACRN industry out-of-the-box image <use industry ootb image>`.

#. Boot Clear Linux from the NVMe disk.

#. Log in and download the ACRN quick setup script:

   .. code-block:: none

      $ wget https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/getting-started/acrn_quick_setup.sh
      $ sudo chmod +x acrn_quick_setup.sh

#. Run the script to set up the Service VM:

   .. code-block:: none

      $ sudo ./acrn_quick_setup.sh -s 33050 -d -e /dev/nvme0n1p1 -i

   .. note:: The ``-i`` option selects the industry scenario EFI image, e.g. ``acrn.nuc7i7dnb.industry.efi``.
      For detailed usage of the ``acrn_quick_setup.sh`` script, refer to the :ref:`quick setup ACRN guide <quick-setup-guide>`
      or simply type ``./acrn_quick_setup.sh -h``.

#. Use the ``efibootmgr -v`` command to check the ACRN boot order:

   .. code-block:: none
      :emphasize-lines: 3,4

      BootCurrent: 0005
      Timeout: 1 seconds
      BootOrder: 0000,0003,0005,0001,0004
      Boot0000* ACRN  HD(1,GPT,cb72266b-c83d-4c56-99e3-3e7d2f4bc175,0x800,0x47000)/File(\EFI\acrn\acrn.efi)u.a.r.t.=.d.i.s.a.b.l.e.d. .
      Boot0001* UEFI OS  HD(1,GPT,335d53f0-50c1-4b0a-b58e-3393dc0389a4,0x800,0x47000)/File(\EFI\BOOT\BOOTX64.EFI)..BO
      Boot0003* Linux bootloader  HD(3,GPT,af681d62-3a96-43fb-92fc-e98e850f867f,0xc1800,0x1dc31800)/File(\EFI\org.clearlinux\bootloaderx64.efi)
      Boot0004* Hard Drive  BBS(HD,,0x0)..GO..NO........o.K.I.N.G.S.T.O.N. .R.B.U.S.N.S.8.1.8.0.S.3.1.2.8.G.J...................A..........................>..Gd-.;.A..MQ..L.0.5.2.0.B.6.6.7.2.8.F.F.3.D.1.0. . . . .......BO..NO........m.F.O.R.E.S.E.E. .2.5.6.G.B. .S.S.D...................A......................................0..Gd-.;.A..MQ..L.J.2.7.1.0.0.R.0.0.0.9.6.9.......BO
      Boot0005* UEFI OS  HD(1,GPT,cb72266b-c83d-4c56-99e3-3e7d2f4bc175,0x800,0x47000)/File(\EFI\BOOT\BOOTX64.EFI)..BO

   .. note:: Ensure that ACRN is first in the boot order, or use the
      ``efibootmgr -o 1`` command to move it to the first position. If you need to enable the serial port, run the following command before rebooting:

      ``efibootmgr -c -l '\EFI\acrn\acrn.efi' -d /dev/nvme0n1 -p 1 -L ACRN -u "uart=port@0x3f8 "``

      Note the extra space at the end of the EFI command-line options
      string. This is a workaround for a current `efi-stub bootloader name
      issue <https://github.com/projectacrn/acrn-hypervisor/issues/4520>`_.
      It ensures that the end of the string is properly detected.

#. Reboot the WHL NUC.

#. Use the ``dmesg`` command to ensure that the Service VM boots:

   .. code-block:: console
      :emphasize-lines: 2

      $ sudo dmesg | grep ACRN
      [ 0.000000] Hypervisor detected: ACRN
      [ 1.252840] ACRNTrace: Initialized acrn trace module with 4 cpu
      [ 1.253291] ACRN HVLog: Failed to init last hvlog devs, errno -19
      [ 1.253292] ACRN HVLog: Initialized hvlog module with 4

.. note:: If you want to log in to the Service VM with root privileges, use ``sudo passwd`` to create a root user
   so that you can log in as root on the next reboot.

.. _use industry ootb image:

Use the ACRN industry out-of-the-box image
==========================================

.. note:: If you followed the section above to set up the Service VM, jump to the next
   :ref:`section <install_rtvm>`.

#. Boot Clear Linux from the SATA disk.

#. Download the Service VM industry image:

   .. code-block:: none

      # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/v1.6.1/sos-industry-33050.img.xz

   .. note:: You may also follow :ref:`set_up_ootb_service_vm` to build the image yourself.

#. Decompress the .xz image::

      # xz -d sos-industry-33050.img.xz

#. Burn the Service VM image onto the NVMe disk::

      # dd if=sos-industry-33050.img of=/dev/nvme0n1 bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc

#. Configure the EFI firmware to boot the ACRN hypervisor by default::

      # efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/nvme0n1 -p 1 -L "ACRN" -u "uart=disabled "

   Or use the following command to enable the serial port::

      # efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/nvme0n1 -p 1 -L "ACRN" -u "uart=port@0x3f8 "

   .. note:: Note the extra space at the end of the EFI command-line options
      strings above. This is a workaround for a current `efi-stub bootloader
      name issue <https://github.com/projectacrn/acrn-hypervisor/issues/4520>`_.
      It ensures that the end of the string is properly detected.

#. Reboot the test machine. After the Clear Linux OS boots,
   log in as ``root`` for the first time.

.. _install_rtvm:

Install and launch the Preempt-RT VM
************************************

In this section, we use :ref:`virtio-blk` to launch the Preempt-RT VM.
If you need better performance, follow :ref:`building-acrn-in-docker` to
build the ACRN kernel for the Service VM, and then :ref:`passthrough the SATA disk <passthru rtvm>` to launch the Preempt-RT VM.

#. Log in to the Service VM with root privileges.

#. Download the Preempt-RT VM image:

   .. code-block:: none

      # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/v1.6.1/preempt-rt-33050.img.xz

   .. note:: You may also follow :ref:`set_up_ootb_rtvm` to build the Preempt-RT VM image yourself.

#. Decompress the .xz image::

      # xz -d preempt-rt-33050.img.xz

#. Burn the Preempt-RT VM image onto the SATA disk::

      # dd if=preempt-rt-33050.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc

#. Modify the ``/usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh`` script to use the virtio device:

   .. code-block:: none

      # NVME pass-through
      #echo ${passthru_vpid["nvme"]} > /sys/bus/pci/drivers/pci-stub/new_id
      #echo ${passthru_bdf["nvme"]} > /sys/bus/pci/devices/${passthru_bdf["nvme"]}/driver/unbind
      #echo ${passthru_bdf["nvme"]} > /sys/bus/pci/drivers/pci-stub/bind

   .. code-block:: none
      :emphasize-lines: 6

      /usr/bin/acrn-dm -A -m $mem_size -s 0:0,hostbridge \
         --lapic_pt \
         --rtvm \
         --virtio_poll 1000000 \
         -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
         -s 2,virtio-blk,/dev/sda \
         -s 3,virtio-console,@stdio:stdio_port \
         $pm_channel $pm_by_vuart \
         --ovmf /usr/share/acrn/bios/OVMF.fd \
         hard_rtvm

      }

#. Upon deployment completion, launch the RTVM directly onto your WHL NUC::

      # /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh

RT Performance Test
*******************

.. _cyclictest:

Cyclictest introduction
=======================

Cyclictest is the benchmark most commonly used to evaluate the relative
performance of real-time systems. It accurately and repeatedly
measures the difference between a thread's intended wake-up time and the
time at which it actually wakes up, in order to provide statistics about
the system's latencies. It can measure latencies caused by hardware,
firmware, and the operating system. Cyclictest is maintained by the
Linux Foundation and is part of the rt-tests test suite.

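The measurement principle can be illustrated with a short Python sketch. This is only an illustration, not a substitute for cyclictest, which is a C program using high-resolution timers and real-time scheduling classes; the interval and loop-count defaults below are arbitrary:

```python
import time

def measure_wakeup_latency(interval_us: int = 1000, loops: int = 200) -> dict:
    """Sleep for a fixed interval repeatedly and record how late each
    wake-up is relative to the intended wake-up time (in microseconds)."""
    latencies = []
    next_wakeup = time.monotonic()
    for _ in range(loops):
        next_wakeup += interval_us / 1e6
        delay = next_wakeup - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        # Actual wake-up time minus intended wake-up time = latency.
        latencies.append((time.monotonic() - next_wakeup) * 1e6)
    return {
        "min": min(latencies),
        "avg": sum(latencies) / len(latencies),
        "max": max(latencies),
    }

if __name__ == "__main__":
    stats = measure_wakeup_latency()
    print("Min/Avg/Max wake-up latency (us): "
          f"{stats['min']:.1f}/{stats['avg']:.1f}/{stats['max']:.1f}")
```

On a non-RT kernel the maximum latency reported by a loop like this can be orders of magnitude larger than the average, which is exactly the jitter that a Preempt-RT kernel is designed to bound.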

Pre-Configurations
==================

Firmware update on the NUC
--------------------------

If you need to update to the latest UEFI firmware for the NUC hardware,
follow these `BIOS Update Instructions
<https://www.intel.com/content/www/us/en/support/articles/000005636.html>`__
for downloading and flashing an updated BIOS for the NUC.

Recommended BIOS settings
-------------------------

.. csv-table::
   :widths: 15, 30, 10

   "Hyper-Threading", "Intel Advanced Menu -> CPU Configuration", "Disabled"
   "Intel VMX", "Intel Advanced Menu -> CPU Configuration", "Enable"
   "Speed Step", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
   "Speed Shift", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
   "C States", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
   "RC6", "Intel Advanced Menu -> Power & Performance -> GT - Power Management", "Disabled"
   "GT freq", "Intel Advanced Menu -> Power & Performance -> GT - Power Management", "Lowest"
   "SA GV", "Intel Advanced Menu -> Memory Configuration", "Fixed High"
   "VT-d", "Intel Advanced Menu -> System Agent Configuration", "Enable"
   "Gfx Low Power Mode", "Intel Advanced Menu -> System Agent Configuration -> Graphics Configuration", "Disabled"
   "DMI spine clock gating", "Intel Advanced Menu -> System Agent Configuration -> DMI/OPI Configuration", "Disabled"
   "PCH Cross Throttling", "Intel Advanced Menu -> PCH-IO Configuration", "Disabled"
   "Legacy IO Low Latency", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Enabled"
   "PCI Express Clock Gating", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
   "Delay Enable DMI ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
   "DMI Link ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
   "Aggressive LPM Support", "Intel Advanced Menu -> PCH-IO Configuration -> SATA And RST Configuration", "Disabled"
   "USB Periodic Smi", "Intel Advanced Menu -> LEGACY USB Configuration", "Disabled"
   "ACPI S3 Support", "Intel Advanced Menu -> ACPI Settings", "Disabled"
   "Native ASPM", "Intel Advanced Menu -> ACPI Settings", "Disabled"

.. note:: BIOS settings depend on the platform and BIOS version; some may not be applicable.

Configure RDT
-------------

In addition to setting the CAT configuration via HV commands, we allow
developers to add CAT configurations to the VM config and configure
automatically at the time of RTVM creation. Refer to :ref:`rdt_configuration`
for details on RDT configuration and :ref:`hv_rdt` for details on the RDT
high-level design.

Set up the core allocation for the RTVM
---------------------------------------

In our recommended configuration, two cores are allocated to the RTVM:
core 0 for housekeeping and core 1 for RT tasks. In order to achieve
this, follow the steps below to allocate all housekeeping tasks to core 0:

#. Launch the RTVM::

      # /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh

#. Log in to the RTVM as root and run the following script:

   .. code-block:: bash

      #!/bin/bash
      # Copyright (C) 2019 Intel Corporation.
      # SPDX-License-Identifier: BSD-3-Clause
      # Move all IRQs to core 0.
      for i in `cat /proc/interrupts | grep '^ *[0-9]*[0-9]:' | awk {'print $1'} | sed 's/:$//' `;
      do
          echo setting $i to affine for core zero
          echo 1 > /proc/irq/$i/smp_affinity
      done

      # Move all rcu tasks to core 0.
      for i in `pgrep rcu`; do taskset -pc 0 $i; done

      # Change the realtime attribute of all rcu tasks to SCHED_OTHER and priority 0
      for i in `pgrep rcu`; do chrt -v -o -p 0 $i; done

      # Change the realtime attribute of all tasks on core 1 to SCHED_OTHER and priority 0
      for i in `pgrep /1`; do chrt -v -o -p 0 $i; done

      # Change the realtime attribute of all tasks to SCHED_OTHER and priority 0
      for i in `ps -A -o pid`; do chrt -v -o -p 0 $i; done

      echo disabling timer migration
      echo 0 > /proc/sys/kernel/timer_migration

.. note:: You can ignore any error messages printed while the script runs.
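To verify that the IRQ-affinity part of the script took effect, you can decode each ``smp_affinity`` mask and list any IRQs still allowed on cores other than core 0. This is a sketch; note that some IRQs (such as the per-CPU timer) cannot be migrated and may legitimately remain:

```python
import glob

def cpus_in_mask(mask: str) -> set:
    """Decode a /proc/irq/N/smp_affinity hex mask (e.g. '1' or
    '00000000,00000001') into the set of CPU numbers it covers."""
    value = int(mask.replace(",", ""), 16)
    return {cpu for cpu in range(value.bit_length()) if value >> cpu & 1}

def unpinned_irqs() -> list:
    """Return (irq, cpus) entries whose affinity is not just core 0."""
    result = []
    for path in glob.glob("/proc/irq/*/smp_affinity"):
        try:
            with open(path) as f:
                cpus = cpus_in_mask(f.read().strip())
        except OSError:
            continue  # some per-IRQ files are not readable
        if cpus != {0}:
            result.append((path.split("/")[3], sorted(cpus)))
    return result

if __name__ == "__main__":
    leftovers = unpinned_irqs()
    print("IRQs not pinned to core 0:", leftovers or "none")
```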

Run cyclictest
==============

#. Refer to the :ref:`troubleshooting section <enabling the network on RTVM>` below, which discusses how to enable the network connection for the RTVM.

#. Launch the RTVM and log in as root.

#. Install the ``cyclictest`` tool::

      # swupd bundle-add dev-utils --skip-diskspace-check

#. Use the following command to start cyclictest::

      # cyclictest -a 1 -p 80 -m -N -D 1h -q -H 30000 --histfile=test.log

   Parameter descriptions:

   :-a 1: bind the RT task to core 1
   :-p 80: set the priority of the highest-priority thread
   :-m: lock current and future memory allocations
   :-N: print results in ns instead of us (default us)
   :-D 1h: run for 1 hour; you can change this to other values
   :-q: quiet mode; print a summary only on exit
   :-H 30000 --histfile=test.log: dump the latency histogram to a local file
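Once the run completes, you can summarize ``test.log`` offline. The sketch below assumes the common ``--histfile`` layout — comment lines starting with ``#``, then one line per latency bucket (in microseconds) followed by a sample count per measured core; adjust the parsing if your cyclictest version writes a different format:

```python
def parse_histfile(lines):
    """Parse cyclictest --histfile output into {bucket_us: [per-core counts]}."""
    hist = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank and comment/summary lines
        fields = line.split()
        hist[int(fields[0])] = [int(n) for n in fields[1:]]
    return hist

def max_latency(hist):
    """Highest latency bucket that recorded at least one sample, or None."""
    populated = [bucket for bucket, counts in hist.items() if sum(counts) > 0]
    return max(populated) if populated else None

if __name__ == "__main__":
    try:
        with open("test.log") as f:
            hist = parse_histfile(f)
        print("worst-case observed latency (us):", max_latency(hist))
    except FileNotFoundError:
        print("test.log not found; run cyclictest with --histfile first")
```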

Launch additional User VMs
**************************

With the :ref:`CPU sharing <cpu_sharing>` feature enabled, the Industry
scenario supports a maximum of six post-launched VMs: one post-launched
real-time VM (Preempt-RT, VxWorks\*, or Xenomai\*) and five post-launched
standard VMs (Clear Linux\*, Android\*, or Windows\*).

Follow the steps below to launch the post-launched standard VMs.

Prepare the launch scripts
==========================

#. Install the :ref:`dependencies <install-build-tools-dependencies>` on your workspace
   and get the acrn-hypervisor source code::

      $ git clone https://github.com/projectacrn/acrn-hypervisor

#. Generate the launch scripts with the :ref:`acrn-configuration-tool <acrn_configuration_tool>`::

      $ cd acrn-hypervisor
      $ export board_file=$PWD/misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml
      $ export scenario_file=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/industry.xml
      $ export launch_file=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/industry_launch_6uos.xml
      $ python misc/acrn-config/launch_config/launch_cfg_gen.py --board $board_file --scenario $scenario_file --launch $launch_file --uosid 0

#. The launch scripts are generated in the
   ``misc/acrn-config/xmls/config-xmls/whl-ipc-i5/output`` directory.

   The launch scripts are:

   +-------------------+--------------------+---------------------+
   | For Windows:      | For Preempt-RT:    | For other VMs:      |
   +===================+====================+=====================+
   | launch_uos_id1.sh | launch_uos_id2.sh  | | launch_uos_id3.sh |
   |                   |                    | | launch_uos_id4.sh |
   |                   |                    | | launch_uos_id5.sh |
   |                   |                    | | launch_uos_id6.sh |
   +-------------------+--------------------+---------------------+

#. Copy those files to your WHL board::

      $ scp -r misc/acrn-config/xmls/config-xmls/whl-ipc-i5/output <board address>:~/

Launch Windows VM
=================

#. Follow this :ref:`guide <using_windows_as_uos>` to prepare the Windows image file,
   update the Service VM kernel, and then reboot with the new ``acrngt.conf``.

#. Modify the ``launch_uos_id1.sh`` script as follows, then launch the Windows VM as one of the post-launched standard VMs:

   .. code-block:: none
      :emphasize-lines: 4,6

      acrn-dm -A -m $mem_size -s 0:0,hostbridge -U d2795438-25d6-11e8-864e-cb7a18b34643 \
         --windows \
         $logger_setting \
         -s 5,virtio-blk,<your win img directory>/win10-ltsc.img \
         -s 6,virtio-net,tap_WaaG \
         -s 2,passthru,0/2/0,gpu \
         --ovmf /usr/share/acrn/bios/OVMF.fd \
         -s 1:0,lpc \
         -l com1,stdio \
         $boot_audio_option \
         $vm_name
      }

   .. note:: ``-s 2,passthru,0/2/0,gpu`` means the Windows VM is launched in GVT-d mode, which passes
      the VGA controller through to Windows. You can find more details in :ref:`using_windows_as_uos`.

Launch other standard VMs
=========================

If you want to launch other VMs such as Clear Linux\* or Android\*, use one of these scripts:
``launch_uos_id3.sh``, ``launch_uos_id4.sh``, ``launch_uos_id5.sh``, or
``launch_uos_id6.sh``.

Here is an example that launches a Clear Linux VM:

#. Download the Clear Linux KVM image::

      $ cd ~/output && curl https://cdn.download.clearlinux.org/releases/33050/clear/clear-33050-kvm.img.xz -o clearlinux.img.xz
      $ unxz clearlinux.img.xz

#. Modify the ``launch_uos_id3.sh`` script to launch the Clear Linux VM:

   .. code-block:: none
      :emphasize-lines: 1,2,3,5,16,17,23

      #echo ${passthru_vpid["gpu"]} > /sys/bus/pci/drivers/pci-stub/new_id
      #echo ${passthru_bdf["gpu"]} > /sys/bus/pci/devices/${passthru_bdf["gpu"]}/driver/unbind
      #echo ${passthru_bdf["gpu"]} > /sys/bus/pci/drivers/pci-stub/bind
      echo 100 > /sys/bus/usb/drivers/usb-storage/module/parameters/delay_use
      mem_size=200M
      #interrupt storm monitor for pass-through devices, params order:
      #threshold/s,probe-period(s),intr-inject-delay-time(ms),delay-duration(ms)
      intr_storm_monitor="--intr_monitor 10000,10,1,100"

      #logger_setting, format: logger_name,level; like following
      logger_setting="--logger_setting console,level=4;kmsg,level=3;disk,level=5"

      acrn-dm -A -m $mem_size -s 0:0,hostbridge -U 615db82a-e189-4b4f-8dbb-d321343e4ab3 \
         --mac_seed $mac_seed \
         $logger_setting \
         -s 2,virtio-blk,./clearlinux.img \
         -s 3,virtio-console,@stdio:stdio_port \
         --ovmf /usr/share/acrn/bios/OVMF.fd \
         -s 8,virtio-hyper_dmabuf \
         $intr_storm_monitor \
         $vm_name
      }
      launch_clearlinux 3

   .. note::

      Remove the ``-s 2,passthru,0/2/0,gpu`` parameter before you launch the Clear Linux VM,
      because the VGA controller is already passed through to the Windows
      VM and is no longer visible to other VMs.

      Before launching VMs, check the available free memory using ``free -m``
      and update the ``mem_size`` value accordingly.

      If you run multiple Clear Linux User VMs, also make sure
      the VM names don't conflict with each other, or change the number in
      the last line of the script, such as ``launch_clearlinux 3``.

Troubleshooting
***************

.. _connect_serial_port:

Use serial port on KBL NUC
==========================

You can enable the serial console on the
`KBL NUC <https://www.amazon.com/Intel-Business-Mini-Technology-BLKNUC7i7DNH1E/dp/B07CCQ8V4R>`_
(NUC7i7DNH). The KBL NUC has a serial port header you can
expose with a serial DB9 header cable. You can build this cable yourself;
refer to the `KBL NUC product specification
<https://www.intel.com/content/dam/support/us/en/documents/mini-pcs/nuc-kits/NUC7i7DN_TechProdSpec.pdf>`_
as shown below:

.. figure:: images/KBL-serial-port-header.png
   :scale: 80

   KBL serial port header details

.. figure:: images/KBL-serial-port-header-to-RS232-cable.jpg
   :scale: 80

   KBL `serial port header to RS232 cable
   <https://www.amazon.com/dp/B07BV1W6N8/ref=cm_sw_r_cp_ep_dp_wYm0BbABD5AK6>`_

Or you can `purchase
<https://www.amazon.com/dp/B07BV1W6N8/ref=cm_sw_r_cp_ep_dp_wYm0BbABD5AK6>`_
such a cable.

You'll also need an `RS232 DB9 female to USB cable
<https://www.amazon.com/Adapter-Chipset-CableCreation-Converter-Register/dp/B0769DVQM1>`_,
or an `RS232 DB9 female/female (NULL modem) cross-over cable
<https://www.amazon.com/SF-Cable-Null-Modem-RS232/dp/B006W0I3BA>`_
to connect to your host system.

Note that if you want to use the RS232 DB9 female/female cable, choose
the **cross-over** type rather than the **straight-through** type.

.. _efi image not exist:

EFI image doesn't exist
=======================

You might see this error message if you run the ``acrn_quick_setup.sh`` script
on an older Clear Linux OS (earlier than version 31470):

.. code-block:: console

   /usr/lib/acrn/acrn.wl10.industry.efi doesn't exist.
   Use one of these efi images from /usr/lib/acrn.
   ------
   /usr/lib/acrn/acrn.nuc7i7dnb.industry.efi
   ------
   Copy the efi image to /usr/lib/acrn/acrn.wl10.industry.efi, then run the script again.

To fix it, copy the existing EFI image to ``/usr/lib/acrn/acrn.wl10.industry.efi`` and
then run the script again::

   $ sudo cp /usr/lib/acrn/acrn.nuc7i7dnb.industry.efi /usr/lib/acrn/acrn.wl10.industry.efi
   $ sudo ./acrn_quick_setup.sh -s <target OS version> -d -e <target EFI partition> -i

.. _enabling the network on RTVM:

Enabling the network on RTVM
============================

If you need to access the internet, you must add the following command line to the
``launch_hard_rt_vm.sh`` script before launching it:

.. code-block:: none
   :emphasize-lines: 8

   /usr/bin/acrn-dm -A -m $mem_size -s 0:0,hostbridge \
      --lapic_pt \
      --rtvm \
      --virtio_poll 1000000 \
      -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
      -s 2,passthru,02/0/0 \
      -s 3,virtio-console,@stdio:stdio_port \
      -s 8,virtio-net,tap0 \
      $pm_channel $pm_by_vuart \
      --ovmf /usr/share/acrn/bios/OVMF.fd \
      hard_rtvm
   }

.. _passthru rtvm:
|
||||
|
||||
Passthrough a hard disk to the RTVM
|
||||
===================================
|
||||
|
||||
#. Use the ``lspci`` command to ensure that the correct SATA device IDs will
|
||||
be used for the passthrough before launching the script:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
# lspci -nn | grep -i sata
|
||||
00:17.0 SATA controller [0106]: Intel Corporation Cannon Point-LP SATA Controller [AHCI Mode] [8086:9dd3] (rev 30)
|
||||
|
||||
#. Modify the script to use the correct SATA device IDs and bus number:

   .. code-block:: none
      :emphasize-lines: 5, 10

      # vim /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh

      passthru_vpid=(
      ["eth"]="8086 156f"
      ["sata"]="8086 9dd3"
      ["nvme"]="8086 f1a6"
      )
      passthru_bdf=(
      ["eth"]="0000:00:1f.6"
      ["sata"]="0000:00:17.0"
      ["nvme"]="0000:02:00.0"
      )

      # SATA pass-through
      echo ${passthru_vpid["sata"]} > /sys/bus/pci/drivers/pci-stub/new_id
      echo ${passthru_bdf["sata"]} > /sys/bus/pci/devices/${passthru_bdf["sata"]}/driver/unbind
      echo ${passthru_bdf["sata"]} > /sys/bus/pci/drivers/pci-stub/bind

      # NVME pass-through
      #echo ${passthru_vpid["nvme"]} > /sys/bus/pci/drivers/pci-stub/new_id
      #echo ${passthru_bdf["nvme"]} > /sys/bus/pci/devices/${passthru_bdf["nvme"]}/driver/unbind
      #echo ${passthru_bdf["nvme"]} > /sys/bus/pci/drivers/pci-stub/bind

   .. code-block:: none
      :emphasize-lines: 5

         --lapic_pt \
         --rtvm \
         --virtio_poll 1000000 \
         -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
         -s 2,passthru,00/17/0 \
         -s 3,virtio-console,@stdio:stdio_port \
         -s 8,virtio-net,tap0 \
         $pm_channel $pm_by_vuart \
         --ovmf /usr/share/acrn/bios/OVMF.fd \
         hard_rtvm

      }

#. Upon deployment completion, launch the RTVM directly onto your WHL NUC:

   .. code-block:: none

      # /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
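The ``passthru_vpid`` and ``passthru_bdf`` values used in the launch script can be read straight off the ``lspci -nn`` output shown earlier. The following is a small illustrative sketch (the saved sample line and helper variables are not part of the ACRN scripts, just a parsing aid):

```shell
# Hypothetical helper: derive the BDF and the "vendor device" pair that
# the launch script's passthru arrays expect from a saved `lspci -nn` line.
line='00:17.0 SATA controller [0106]: Intel Corporation Cannon Point-LP SATA Controller [AHCI Mode] [8086:9dd3] (rev 30)'

# The BDF is the first field; prefix the PCI domain to match the script's format.
bdf="0000:$(echo "$line" | cut -d' ' -f1)"

# The vendor:device pair is the last [xxxx:xxxx] tag on the line
# (the class code tag [0106] has no colon, so it does not match).
vid_did=$(echo "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tail -1 | tr -d '[]' | tr ':' ' ')

echo "$bdf"      # 0000:00:17.0
echo "$vid_did"  # 8086 9dd3
```

Running this against your own ``lspci -nn | grep -i sata`` output gives the two strings to paste into ``passthru_bdf["sata"]`` and ``passthru_vpid["sata"]``.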
@@ -90,8 +90,8 @@ noted above. For example, add the following code into function
 shell_cmd_help added information

 Once you have instrumented the code, you need to rebuild the hypervisor and
-install it on your platform. Refer to :ref:`getting-started-building` and
-:ref:`kbl-nuc-sdc` for detailed instructions on how to do that.
+install it on your platform. Refer to :ref:`getting-started-building`
+for detailed instructions on how to do that.

 We set console log level to 5, and mem log level to 2 through the
 command::
@@ -205,7 +205,7 @@ shown in the following example:

 4. After we have inserted the trace code addition, we need to rebuild
    the ACRN hypervisor and install it on the platform. Refer to
-   :ref:`getting-started-building` and :ref:`kbl-nuc-sdc` for
+   :ref:`getting-started-building` for
    detailed instructions on how to do that.

 5. Now we can use the following command in the Service VM console
@@ -1,279 +0,0 @@

.. _enable_laag_secure_boot:

Enable Secure Boot in the Clear Linux User VM
#############################################

Prerequisites
*************

- ACRN Service VM is installed on the KBL NUC.
- ACRN OVMF version is v1.2 or above (:acrn-issue:`3506`).
- ACRN DM supports OVMF write-back (:acrn-issue:`3413`).
- ``efi-tools`` and ``sbsigntools`` are installed in the Service VM::

     # swupd bundle-add os-clr-on-clr

Validated versions
******************

- **Clear Linux version:** 31080
- **ACRN-hypervisor tag:** v1.3
- **ACRN-Kernel (Service VM kernel):** 4.19.73-92.iot-lts2018-sos
- **OVMF version:** v1.3

Prepare keys (PK/KEK/DB)
************************

Generate keys
=============

.. _Ubuntu-KeyGeneration:
   https://wiki.ubuntu.com/UEFI/SecureBoot/KeyManagement/KeyGeneration

.. _Windows-secure-boot-key-creation-and-management-guidance:
   https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/windows-secure-boot-key-creation-and-management-guidance

For production use, refer to `Ubuntu-KeyGeneration`_ or
`Windows-secure-boot-key-creation-and-management-guidance`_ for key generation
and management.

For testing, the keys can be created on the KBL NUC with these commands:

.. code-block:: none

   $ openssl req -new -x509 -newkey rsa:2048 -subj "/CN=test platform key/" -keyout PK.key -out PK.crt -days 3650 -nodes -sha256
   $ openssl req -new -x509 -newkey rsa:2048 -subj "/CN=test key-exchange-key/" -keyout KEK.key -out KEK.crt -days 3650 -nodes -sha256
   $ openssl req -new -x509 -newkey rsa:2048 -subj "/CN=test signing key/" -keyout db.key -out db.crt -days 3650 -nodes -sha256
   $ cert-to-efi-sig-list -g "$(uuidgen)" PK.crt PK.esl
   $ sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth
   $ cert-to-efi-sig-list -g "$(uuidgen)" KEK.crt KEK.esl
   $ sign-efi-sig-list -a -k PK.key -c PK.crt KEK KEK.esl KEK.auth
   $ cert-to-efi-sig-list -g "$(uuidgen)" db.crt db.esl
   $ sign-efi-sig-list -a -k KEK.key -c KEK.crt db db.esl db.auth
   $ openssl x509 -outform DER -in PK.crt -out PK.der
   $ openssl x509 -outform DER -in KEK.crt -out KEK.der
   $ openssl x509 -outform DER -in db.crt -out db.der

The keys to be enrolled in the UEFI BIOS: **PK.der**, **KEK.der**, **db.der**.
The keys to sign the bootloader or kernel: **db.key**, **db.crt**.

Create virtual disk to hold the keys
====================================

Follow these commands to create a virtual disk and copy the keys
generated above:

.. code-block:: none

   $ sudo dd if=/dev/zero of=$PWD/hdd_keys.img bs=1024 count=10240
   $ mkfs.msdos hdd_keys.img
   $ sudo losetup -D
   $ sudo losetup -f -P --show $PWD/hdd_keys.img
   $ sudo mount /dev/loop0 /mnt
   $ sudo cp PK.der KEK.der db.der /mnt
   $ sync
   $ sudo umount /mnt
   $ sudo losetup -d /dev/loop0

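As a side note, the ``dd`` invocation above sizes the key disk with block-size and block-count arguments; the resulting file size is just their product (a sketch, with the values taken from the command itself):

```shell
# dd if=/dev/zero bs=1024 count=10240 writes bs*count bytes:
bs=1024
count=10240
size=$((bs * count))
echo "$size"   # 10485760 bytes = 10 MiB, plenty for three .der files
```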
Enroll keys in OVMF
===================

#. Customize the ``launch_uos.sh`` script to boot with the virtual disk
   that contains the keys for enrollment:

   .. code-block:: none
      :emphasize-lines: 6,7,9

      $ cp /usr/share/acrn/samples/nuc/launch_uos.sh ./launch_virtual_disk.sh
      $ sudo vim ./launch_virtual_disk.sh

      acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge \
         -s 2,pci-gvt -G "$3" \
         -l com1,stdio \
         -s 5,virtio-console,@pty:pty_port \
         -s 6,virtio-hyper_dmabuf \
         -s 3,virtio-blk,./hdd_keys.img \
         -s 4,virtio-net,tap0 \
         -s 7,virtio-rnd \
         --ovmf w,/usr/share/acrn/bios/OVMF.fd \
         $pm_channel $pm_by_vuart $pm_vuart_node \
         $logger_setting \
         --mac_seed $mac_seed \
         $vm_name
      }

#. Launch the customized script to enroll keys::

      $ sudo ./launch_virtual_disk.sh

#. Type the ``exit`` command in the UEFI shell.

   .. figure:: images/exit_uefi_shell.png

#. Select **Device Manager** \-\-> **Secure Boot Configuration**.

   .. figure:: images/secure_boot_config_1.png

   .. figure:: images/secure_boot_config_2.png

   .. figure:: images/secure_boot_config_3.png

#. Select **Secure Boot Mode** \-\-> **Custom Mode** \-\-> **Custom Secure Boot Options**.

   .. figure:: images/select_custom_mode.png

   .. figure:: images/enable_custom_boot.png

#. Enroll keys:

   a. Enroll PK: Select **PK Options** \-\-> **Enroll PK** \-\->
      **Enroll PK Using File** \-\-> **VOLUME** \-\-> PK.der \-\-> **Commit Changes and Exit**
   #. Enroll KEK (similar to PK): Select **KEK Options** \-\-> **Enroll KEK** \-\->
      **Enroll KEK Using File** \-\-> **VOLUME** \-\-> KEK.der \-\-> **Commit Changes and Exit**
   #. Enroll Signatures (similar to PK): Select **DB Options** \-\-> **Enroll Signature** \-\->
      **Enroll Signature Using File** \-\-> **VOLUME** \-\-> db.der \-\-> **Commit Changes and Exit**

   Example for enrolling the PK file:

   .. figure:: images/enroll_pk_key_1.png

   .. figure:: images/enroll_pk_key_2.png

   .. figure:: images/enroll_pk_key_3.png

   .. figure:: images/enroll_pk_key_4.png

   .. figure:: images/enroll_pk_key_5.png

   .. figure:: images/enroll_pk_key_6.png

#. Press :kbd:`ESC` to go back to the **Secure Boot Configuration** interface.

   Now the **Current Secure Boot State** is **Enabled** and the **Attempt Secure Boot** option is selected.

   .. figure:: images/secure_boot_enabled.png

#. Go back to the UEFI GUI main interface and select **Reset** to perform a formal
   reset/shutdown so that the key enrollment takes effect on the next boot.

   .. figure:: images/reset_in_bios.png

#. Type ``reset -s`` to shut down the guest in the UEFI shell.

   .. figure:: images/reset_in_uefi_shell.png

Sign the Clear Linux image
**************************

Follow these commands to sign the Clear Linux VM binaries.

#. Download and decompress the Clear Linux image::

      $ wget https://download.clearlinux.org/releases/31080/clear/clear-31080-kvm.img.xz
      $ unxz clear-31080-kvm.img.xz

#. Download the script to sign the image::

      $ wget https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/scripts/sign_image.sh

#. Run the script to sign the image.

   .. code-block:: none

      $ sudo sh sign_image.sh clear-31080-kvm.img db.key db.crt
      /mnt/EFI/BOOT/BOOTX64.EFI
      warning: data remaining[93184 vs 105830]: gaps between PE/COFF sections?
      warning: data remaining[93184 vs 105832]: gaps between PE/COFF sections?
      Signing Unsigned original image
      sign /mnt/EFI/BOOT/BOOTX64.EFI succeed
      /mnt/EFI/org.clearlinux/bootloaderx64.efi
      warning: data remaining[1065472 vs 1196031]: gaps between PE/COFF sections?
      warning: data remaining[1065472 vs 1196032]: gaps between PE/COFF sections?
      Signing Unsigned original image
      sign /mnt/EFI/org.clearlinux/bootloaderx64.efi succeed
      /mnt/EFI/org.clearlinux/kernel-org.clearlinux.kvm.5.2.17-389
      Signing Unsigned original image
      sign /mnt/EFI/org.clearlinux/kernel-org.clearlinux.kvm.5.2.17-389 succeed
      /mnt/EFI/org.clearlinux/loaderx64.efi
      warning: data remaining[93184 vs 105830]: gaps between PE/COFF sections?
      warning: data remaining[93184 vs 105832]: gaps between PE/COFF sections?
      Signing Unsigned original image
      sign /mnt/EFI/org.clearlinux/loaderx64.efi succeed

#. You will get the signed Clear Linux image: ``clear-31080-kvm.img.signed``

Boot Clear Linux signed image
*****************************

#. Modify the ``launch_uos.sh`` script to use the signed image.

   .. code-block:: none
      :emphasize-lines: 5,6,8

      $ sudo vim /usr/share/acrn/samples/nuc/launch_uos.sh

      acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge \
         -s 2,pci-gvt -G "$3" \
         -l com1,stdio \
         -s 5,virtio-console,@pty:pty_port \
         -s 6,virtio-hyper_dmabuf \
         -s 3,virtio-blk,./clear-31080-kvm.img.signed \
         -s 4,virtio-net,tap0 \
         -s 7,virtio-rnd \
         --ovmf /usr/share/acrn/bios/OVMF.fd \
         $pm_channel $pm_by_vuart $pm_vuart_node \
         $logger_setting \
         --mac_seed $mac_seed \
         $vm_name
      }

#. You may see the UEFI shell boot by default.

   .. figure:: images/uefi_shell_boot_default.png

#. Type ``exit`` to enter the BIOS configuration.

#. Navigate to the **Boot Manager** and select **UEFI Misc Device** to
   boot the signed Clear Linux image.

#. Log in as root and use ``dmesg`` to check the secure boot status on
   the User VM.

   .. code-block:: none
      :emphasize-lines: 2

      root@clr-763e953a125f4bda94dd2efbab77f776 ~ # dmesg | grep Secure
      [    0.001330] Secure boot enabled

@@ -29,7 +29,8 @@ Verified version
 Prerequisites
 *************

-Follow :ref:`these instructions <kbl-nuc-sdc>` to set up the ACRN Service VM.
+Follow :ref:`these instructions <rt_industry_ubuntu_setup>` to set up
+Ubuntu as the ACRN Service VM.

 Supported hardware platform
 ***************************
BIN doc/tutorials/images/NUC-serial-port.jpg (new file, 11 KiB; binary file not shown)
@@ -1,115 +0,0 @@

.. _Increase User VM disk size:

Increase the User VM Disk Size
##############################

This document builds on :ref:`getting_started` and assumes you already have
a system with ACRN installed and running correctly. The size of the pre-built
Clear Linux User OS (User VM) virtual disk is typically only 8GB, which may not be
sufficient for some applications. This guide explains the few simple steps needed to
increase the size of that virtual disk.

This document is largely inspired by Clear Linux's `Increase virtual disk size
of a Clear Linux* OS image
<https://docs.01.org/clearlinux/latest/guides/maintenance/increase-virtual-disk-size.html>`_
tutorial. The process can be
broken down into three steps:

1. Increase the virtual disk (``uos.img``) size
#. Resize the ``rootfs`` partition
#. Resize the filesystem

.. note::

   These steps are performed directly on the User VM disk image. The User VM **must**
   be powered off during this operation.

Increase the virtual disk size
******************************

We will use the ``qemu-img`` tool to increase the size of the virtual disk
(``uos.img``) file. On a Clear Linux system, you can install this tool using:

.. code-block:: none

   $ sudo swupd bundle-add clr-installer

As an example, let us add 10GB of storage to our virtual disk image called
``uos.img``.

.. code-block:: none

   $ qemu-img resize -f raw uos.img +10G

.. note::

   Replace ``uos.img`` by the actual name of your virtual disk file if you
   deviated from the :ref:`getting_started`.

.. note::

   You can choose any increment for the additional storage space. Check the
   ``qemu-img resize`` help for more information.

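For a *raw* image, ``qemu-img resize`` effectively just grows the backing file. A minimal sketch of the same effect, using ``truncate`` on a small temporary file (MB-scale stand-ins for the GB values above; this is an illustration, not a replacement for ``qemu-img``):

```shell
# Stand-in demo: grow a raw image file and verify the new size.
demo=$(mktemp)
truncate -s 8M "$demo"     # pretend this is the 8GB uos.img
truncate -s +10M "$demo"   # same idea as: qemu-img resize -f raw uos.img +10G
size=$(stat -c %s "$demo") # GNU stat: file size in bytes
echo "$size"               # 18874368 = 18 MiB
rm -f "$demo"
```

The new space is unallocated until the partition and filesystem are resized, which is what the next two sections do.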
Resize the ``rootfs`` partition
*******************************

The next step is to modify the ``rootfs`` partition (in Clear Linux, it is
partition 3) to use the additional space available. We will use the ``parted``
tool and perform these steps:

* Enter the ``parted`` tool
* Press ``p`` to print the partition tables
* A warning will be displayed, enter ``Fix``
* Enter ``resizepart 3``
* Enter the size of the disk (``19.9GB`` in our example)
* Enter ``q`` to quit the tool

Here is what the sequence looks like:

.. code-block:: none

   $ parted uos.img

.. code-block:: console
   :emphasize-lines: 5,7,9,19,20

   WARNING: You are not superuser.  Watch out for permissions.
   GNU Parted 3.2
   Using /home/gvancuts/uos/uos.img
   Welcome to GNU Parted! Type 'help' to view a list of commands.
   (parted) p
   Warning: Not all of the space available to /home/gvancuts/uos/uos.img appears to be used, you can fix the GPT to use all of the space (an extra 20971520 blocks) or continue with the current setting?
   Fix/Ignore? Fix
   Model:  (file)
   Disk /home/gvancuts/uos/uos.img: 19.9GB
   Sector size (logical/physical): 512B/512B
   Partition Table: gpt
   Disk Flags:

   Number  Start   End     Size    File system     Name     Flags
    1      1049kB  537MB   536MB   fat16           primary  boot, esp
    2      537MB   570MB   33.6MB  linux-swap(v1)  primary
    3      570MB   9160MB  8590MB  ext4            primary

   (parted) resizepart 3
   End?  [9160MB]? 19.9GB
   (parted) q

Resize the filesystem
*********************

The final step is to resize the ``rootfs`` filesystem to use the entire
partition space.

.. code-block:: none

   $ LOOP_DEV=`sudo losetup -f -P --show uos.img`
   $ PART_DEV=$LOOP_DEV
   $ PART_DEV+="p3"
   $ sudo e2fsck -f $PART_DEV
   $ sudo resize2fs -p $PART_DEV
   $ sudo losetup -d $LOOP_DEV

Congratulations! You have successfully resized the disk, partition, and
filesystem of your User VM.

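The ``PART_DEV`` construction above follows the kernel's loop-partition naming: ``losetup -P`` exposes partition N of ``/dev/loopX`` as ``/dev/loopXpN``. A POSIX-shell sketch of the same string handling (the ``/dev/loop0`` value is assumed for illustration; on a real system it comes from ``losetup``):

```shell
# losetup -P exposes partition N of /dev/loopX as /dev/loopXpN.
LOOP_DEV=/dev/loop0          # real usage: LOOP_DEV=$(sudo losetup -f -P --show uos.img)
PART_DEV="${LOOP_DEV}p3"     # partition 3 holds the Clear Linux rootfs
echo "$PART_DEV"             # /dev/loop0p3
```

Note the ``PART_DEV+="p3"`` form in the snippet above is a bash-ism; the ``"${LOOP_DEV}p3"`` form here works in any POSIX shell.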
@@ -1,479 +0,0 @@

.. _kbl-nuc-sdc:

Use SDC Mode on the NUC
#######################

The Intel |reg| NUC is the primary tested platform for ACRN development,
and its setup is described below.

Validated Version
*****************

- Clear Linux version: **32080**
- ACRN-hypervisor tag: **acrn-2012020w02.5.140000p**
- ACRN-Kernel (Service VM kernel): **4.19.94-102.iot-lts2018-sos**

Software Setup
**************

.. _set-up-CL:

Set up a Clear Linux Operating System
=====================================

We begin by installing Clear Linux as the development OS on the NUC.
The Clear Linux release includes an ``acrn.nuc7i7dnb.sdc.efi`` hypervisor application
that will be added to the EFI partition (by the quick setup script or
manually, as described below).

.. note::

   Refer to the ACRN :ref:`release_notes` for the Clear Linux OS
   version number tested with a specific ACRN release. Adjust the
   instructions below to reference the appropriate version number of Clear
   Linux OS (we use version 32080 as an example).

#. Download the Clear Linux OS installer image from
   https://download.clearlinux.org/releases/31470/clear/clear-31470-live-server.iso
   and follow the `Clear Linux OS Installation Guide
   <https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
   as a starting point for installing the Clear Linux OS onto your platform.
   Follow the recommended options for choosing an :kbd:`Advanced options`
   installation type, and use the platform's storage as the target device
   for installation (overwriting the existing data).

   When setting up Clear Linux on your NUC:

   #. Launch the Clear Linux OS installer boot menu.
   #. With Clear Linux OS highlighted, select :kbd:`Enter`.
   #. Log in with your root account and new password.
   #. Run the installer using the following command::

         $ clr-installer

   #. From the Main menu, select :kbd:`Configure Installation Media` and set
      :kbd:`Destructive Installation` to your desired hard disk.
   #. Select :kbd:`Telemetry` and use :kbd:`Tab` to highlight your choice.
   #. Press :kbd:`A` to show the :kbd:`Advanced` options.
   #. Select :kbd:`Select additional bundles` and add bundles for
      **network-basic** and **user-basic**.
   #. Select :kbd:`Manager User` to add an administrative user :kbd:`clear` and
      password.
   #. Select :kbd:`Install`.
   #. Select :kbd:`Confirm Install` in the :kbd:`Confirm Installation` window to start the installation.

#. After installation is complete, boot into Clear Linux OS, and log in as
   :kbd:`clear` (using the password you set earlier).

.. _quick-setup-guide:

Use the script to set up ACRN automatically
===========================================

We provide an `acrn_quick_setup.sh
<https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/getting-started/acrn_quick_setup.sh>`_
script in the ACRN GitHub repo to quickly and automatically set up the Service VM and
User VM, and generate a customized script for launching the User VM.

This script requires the Clear Linux version number you'd like to set up
for the ACRN Service VM and User VM. The specified version must be greater than or
equal to the Clear Linux version currently installed on the NUC. You can see
your current Clear Linux version with this command::

   $ cat /etc/os-release

The following instructions use Clear Linux version 32080. Specify the Clear Linux version you want to use.

Follow these steps:

#. Install and log in to Clear Linux.

#. Open a terminal.

#. Download the ``acrn_quick_setup.sh`` script to set up the Service VM.
   (If you don't need a proxy to get the script, skip the ``export`` command.)

   .. code-block:: none

      $ export https_proxy=https://myproxy.mycompany.com:port
      $ cd ~
      $ wget https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/getting-started/acrn_quick_setup.sh
      $ sudo sh acrn_quick_setup.sh -s 32080

#. This output means the script ran successfully.

   .. code-block:: console

      Check ACRN efi boot event
      Clean all ACRN efi boot event
      Check linux bootloader event
      Clean all Linux bootloader event
      Add new ACRN efi boot event, uart is disabled by default.
      + efibootmgr -c -l '\EFI\acrn\acrn.efi' -d /dev/sda -p 1 -L ACRN -u uart=disabled
      Service OS setup done!
      Rebooting Service OS to take effects.
      Rebooting.

   .. note::
      This script uses ``/dev/sda1`` as the default EFI System Partition
      (ESP). If the ESP is different based on your hardware, you can specify
      it using the ``-e`` option. For example, to set up the Service VM on an NVMe
      SSD, you could specify:

      ``sudo sh acrn_quick_setup.sh -s 32080 -e /dev/nvme0n1p1``

      If you don't need to reboot automatically after setting up the Service VM, you
      can specify the ``-d`` parameter (don't reboot).

      ``sudo sh acrn_quick_setup.sh -s 32080 -e /dev/nvme0n1p1 -d``

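The installed version that the script compares against comes from ``/etc/os-release``. A sketch of extracting it (the sample file contents below are assumed for illustration; on a real Clear Linux system you would read the actual file):

```shell
# Parse VERSION_ID from sample /etc/os-release contents.
os_release='NAME="Clear Linux OS"
VERSION=1
ID=clear-linux-os
VERSION_ID=32080'

version=$(printf '%s\n' "$os_release" | sed -n 's/^VERSION_ID=//p')
echo "$version"   # 32080
```

If the number you pass with ``-s`` is lower than this value, the setup cannot proceed, because ``swupd`` only moves forward to equal or newer releases.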
#. After the system reboots, log in as the **clear** user. Verify that the Service VM
   booted successfully by checking the ``dmesg`` log:

   .. code-block:: console

      $ sudo dmesg | grep ACRN
      Password:
      [    0.000000] Hypervisor detected: ACRN
      [    1.252840] ACRNTrace: Initialized acrn trace module with 4 cpu
      [    1.253291] ACRN HVLog: Failed to init last hvlog devs, errno -19
      [    1.253292] ACRN HVLog: Initialized hvlog module with 4 cpu

#. Continue by setting up a Guest OS using the ``acrn_quick_setup.sh``
   script with the ``-u`` option (and the same Clear Linux version
   number):

   .. code-block:: console

      $ sudo sh acrn_quick_setup.sh -u 32080
      Password:
      Upgrading User VM...
      Downloading User VM image: https://download.clearlinux.org/releases/32080/clear/clear-32080-kvm.img.xz
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
       14  248M   14 35.4M    0     0   851k      0  0:04:57  0:00:42  0:04:15  293k

   After the download is complete, you'll get this output.

   .. code-block:: console

      Unxz User VM image: clear-32080-kvm.img.xz
      Get User VM image: clear-32080-kvm.img
      Upgrade User VM done...
      Now you can run this command to start User VM...
      $ sudo /root/launch_uos_32080.sh

#. Launch the User VM using the customized ``launch_uos_32080.sh`` script (with sudo):

   .. code-block:: console

      [  3.658689] Adding 33788k swap on /dev/vda2.  Priority:-2 extents:1 across:33788k
      [  4.034712] random: dbus-daemon: uninitialized urandom read (12 bytes read)
      [  4.101122] random: tallow: uninitialized urandom read (4 bytes read)
      [  4.119713] random: dbus-daemon: uninitialized urandom read (12 bytes read)
      [  4.223296] virtio_net virtio1 enp0s4: renamed from eth0
      [  4.342645] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
      [  4.560662] IPv6: ADDRCONF(NETDEV_UP): enp0s4: link is not ready
      Unhandled ps2 mouse command 0xe1
      [  4.725622] IPv6: ADDRCONF(NETDEV_CHANGE): enp0s4: link becomes ready
      [  5.114339] input: PS/2 Generic Mouse as /devices/platform/i8042/serio1/input/input3

      clr-a632ec84744d4e02974fe1891130002e login:

#. Log in as root. Specify the new password. Verify that you are running in the User VM
   by checking the kernel release version or seeing if acrn devices are visible:

   .. code-block:: console

      # uname -r
      4.19.94-102.iot-lts2018-sos
      # ls /dev/acrn*
      ls: cannot access '/dev/acrn*': No such file or directory

   The User VM does not have ``/dev/acrn*`` devices. If you are in the Service VM,
   you will see results such as these:

   .. code-block:: console

      $ uname -r
      4.19.94-102.iot-lts2018-sos
      $ ls /dev/acrn*
      /dev/acrn_hvlog_cur_0  /dev/acrn_hvlog_cur_2  /dev/acrn_trace_0  /dev/acrn_trace_2  /dev/acrn_vhm
      /dev/acrn_hvlog_cur_1  /dev/acrn_hvlog_cur_3  /dev/acrn_trace_1  /dev/acrn_trace_3

You have successfully set up Clear Linux as the Service and User VM and started up a User VM.

.. _manual-setup-guide:

Manually Set Up ACRN
====================

Instead of using the quick setup script, you can also set up ACRN, the Service VM,
and a User VM manually. Follow these steps:

#. Install Clear Linux on the NUC, log in as the **clear** user,
   and open a terminal window.

#. Disable the auto-update feature. Clear Linux OS is set to automatically update itself.
   We recommend that you disable this feature to have more control over when updates happen. Use this command:

   .. code-block:: none

      $ sudo swupd autoupdate --disable

   .. note::
      When enabled, the Clear Linux OS installer automatically checks for updates and installs the latest version
      available on your system. To use a specific version (such as 32080), enter the following command after the
      installation is complete:

      ``sudo swupd repair --picky -V 32080``

#. If you have an older version of Clear Linux OS already installed
   on your hardware, use this command to upgrade the Clear Linux OS
   to version 32080 (or newer):

   .. code-block:: none

      $ sudo swupd update -V 32080   # or newer version

#. Use the ``sudo swupd bundle-add`` command to add these Clear Linux OS bundles:

   .. code-block:: none

      $ sudo swupd bundle-add service-os systemd-networkd-autostart

   +----------------------------+-------------------------------------------+
   | Bundle                     | Description                               |
   +============================+===========================================+
   | service-os                 | Adds the acrn hypervisor, acrn            |
   |                            | devicemodel, and Service OS kernel        |
   +----------------------------+-------------------------------------------+
   | systemd-networkd-autostart | Enables systemd-networkd as the default   |
   |                            | network manager                           |
   +----------------------------+-------------------------------------------+

.. _add-acrn-to-efi:

Add the ACRN hypervisor to the EFI Partition
============================================

In order to boot the ACRN Service VM on the platform, you must add it to the EFI
partition. Follow these steps:

#. Mount the EFI partition and verify you have the following files:

   .. code-block:: none

      $ sudo ls -1 /boot/EFI/org.clearlinux
      bootloaderx64.efi
      freestanding-00-intel-ucode.cpio
      freestanding-i915-firmware.cpio.xz
      kernel-org.clearlinux.iot-lts2018-sos.4.19.94-102
      kernel-org.clearlinux.native.5.4.11-890
      loaderx64.efi

   .. note::
      On the Clear Linux OS, the EFI System Partition (e.g. ``/dev/sda1``)
      is mounted under ``/boot`` by default. The Clear Linux project releases
      updates often, sometimes twice a day, so make note of the specific kernel
      versions (iot-lts2018) listed on your system, as you will need them later.

      The EFI System Partition (ESP) may be different based on your hardware.
      It will typically be something like ``/dev/mmcblk0p1`` on platforms
      that have an on-board eMMC, or ``/dev/nvme0n1p1`` if your system has
      non-volatile storage media attached via a PCI Express (PCIe) bus
      (NVMe).

#. Add the ``acrn.nuc7i7dnb.sdc.efi`` hypervisor application (included in the Clear
   Linux OS release) to the EFI partition. Use these commands:

   .. code-block:: none

      $ sudo mkdir /boot/EFI/acrn
      $ sudo cp /usr/lib/acrn/acrn.nuc7i7dnb.sdc.efi /boot/EFI/acrn/acrn.efi

#. Configure the EFI firmware to boot the ACRN hypervisor by default.

   The ACRN hypervisor (``acrn.efi``) is an EFI executable that's
   loaded directly by the platform EFI firmware. It then loads the
   Service OS bootloader. Use the ``efibootmgr`` utility to configure the EFI
   firmware and add a new entry that loads the ACRN hypervisor.

   .. code-block:: none

      $ sudo efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 -L "ACRN"

   .. note::

      Be aware that a Clear Linux OS update that includes a kernel upgrade will
      reset the boot option changes you just made. A Clear Linux OS update could
      happen automatically (if you have not disabled it as described above),
      if you later install a new bundle to your system, or simply if you
      decide to trigger an update manually. Whenever that happens,
      double-check the platform boot order using ``efibootmgr -v`` and
      modify it if needed.

The ACRN hypervisor (``acrn.efi``) accepts two command-line parameters
|
||||
that tweak its behavior:
|
||||
|
||||
1. ``bootloader=``: this sets the EFI executable to be loaded once the hypervisor
|
||||
is up and running. This is typically the bootloader of the Service OS.
|
||||
The default value is to use the Clear Linux OS bootloader, i.e.:
|
||||
``\EFI\org.clearlinux\bootloaderx64.efi``.
|
||||
#. ``uart=``: this tells the hypervisor where the serial port (UART) is found or
|
||||
whether it should be disabled. There are three forms for this parameter:
|
||||
|
||||
#. ``uart=disabled``: this disables the serial port completely.
|
||||
#. ``uart=bdf@<BDF value>``: this sets the PCI serial port based on its BDF.
|
||||
For example, use ``bdf@0:18.1`` for a BDF of 0:18.1 ttyS1.
|
||||
#. ``uart=port@<port address>``: this sets the serial port address.
|
||||
|
||||
.. note::
|
||||
|
||||
``uart=port@<port address>`` is required if you want to enable the serial console.
|
||||
Run ``dmesg |grep ttyS0`` to get port address from the output, and then
|
||||
add the ``uart`` parameter into the ``efibootmgr`` command.

Here is a more complete example of how to configure the EFI firmware to load the ACRN
hypervisor and set these parameters:

.. code-block:: none

   $ sudo efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 -L "ACRN NUC Hypervisor" \
     -u "uart=disabled"

Here is an example of how to enable a serial console for the KBL NUC:

.. code-block:: none

   $ sudo efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 -L "ACRN NUC Hypervisor" \
     -u "uart=port@0x3f8"

#. Add a timeout period for Systemd-Boot to wait; otherwise, it will not
   present the boot menu and will always boot the base Clear Linux OS:

   .. code-block:: none

      $ sudo clr-boot-manager set-timeout 5
      $ sudo clr-boot-manager update

#. Set the ``kernel-iot-lts2018`` kernel as the default kernel:

   .. code-block:: none

      $ sudo clr-boot-manager list-kernels
      * org.clearlinux.native.5.4.11-890
        org.clearlinux.iot-lts2018-sos.4.19.94-102

   Set the default kernel from ``org.clearlinux.native.5.4.11-890`` to
   ``org.clearlinux.iot-lts2018-sos.4.19.94-102``:

   .. code-block:: none

      $ sudo clr-boot-manager set-kernel org.clearlinux.iot-lts2018-sos.4.19.94-102
      $ sudo clr-boot-manager list-kernels
        org.clearlinux.native.5.4.11-890
      * org.clearlinux.iot-lts2018-sos.4.19.94-102

#. Reboot and wait until the boot menu is displayed, as shown below:

   .. code-block:: console
      :emphasize-lines: 1
      :caption: ACRN Service OS Boot Menu

      Clear Linux OS (Clear-linux-iot-lts2018-sos-4.19.94-102)
      Clear Linux OS (Clear-linux-native.5.4.11-890)
      Reboot Into Firmware Interface

#. After booting the ACRN hypervisor, the Service OS launches
   automatically by default, and the Clear Linux OS desktop is shown with the
   **clear** user logged in (or you can log in remotely with an SSH client).
   If the GNOME desktop fails to display, the system falls back to the shell console.

#. From the SSH client, log in as the **clear** user. Use the password you set
   previously when you installed the Clear Linux OS.

#. After rebooting the system, check that the ACRN hypervisor is running properly with:

   .. code-block:: none

      $ sudo dmesg | grep ACRN
      [ 0.000000] Hypervisor detected: ACRN
      [ 1.253093] ACRNTrace: Initialized acrn trace module with 4 cpu
      [ 1.253535] ACRN HVLog: Failed to init last hvlog devs, errno -19
      [ 1.253536] ACRN HVLog: Initialized hvlog module with 4 cpu

If you see log information similar to this, the ACRN hypervisor is running properly
and you can start deploying a User OS. If not, verify that the EFI boot options and
Service VM kernel settings are correct (as described above).
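For provisioning automation, the same check can be scripted. The ``acrn_detected`` helper below is hypothetical (not part of ACRN) and simply greps for the detection line shown above:

```shell
# Return success if the kernel log reports the ACRN hypervisor.
# Feed it real output with: sudo dmesg | acrn_detected
acrn_detected() {
    grep -q "Hypervisor detected: ACRN"
}

# Exercise the helper with the sample log line from above
if echo "[ 0.000000] Hypervisor detected: ACRN" | acrn_detected; then
    echo "ACRN is running"
fi
```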

ACRN Network Bridge
===================

The ACRN bridge is set up as a part of the systemd services for device
communication. The default setup creates ``acrn-br0`` as the bridge and ``tap0``
as an initial TAP device.
The files can be found in ``/usr/lib/systemd/network``. No additional setup is
needed since **systemd-networkd** is automatically enabled after a system restart.

Set up Reference User VM
========================

#. On your platform, download the pre-built reference Clear Linux OS User VM
   image version 32080 (or newer) into your (root) home directory:

   .. code-block:: none

      $ cd ~
      $ mkdir uos
      $ cd uos
      $ curl https://download.clearlinux.org/releases/32080/clear/clear-32080-kvm.img.xz -o uos.img.xz

   Note that if you want to use or try out a newer version of Clear Linux OS as
   the User VM, download the latest from https://download.clearlinux.org/image/.
   Make sure to adjust the steps described below accordingly (image file name and
   kernel modules version).

#. Uncompress it:

   .. code-block:: none

      $ unxz uos.img.xz

#. Deploy the User VM kernel modules to the User VM virtual disk image (note that you'll need to
   use the same **iot-lts2018** image version number noted in Step 1 above):

   .. code-block:: none

      $ sudo losetup -f -P --show uos.img
      $ sudo mount /dev/loop0p3 /mnt
      $ sudo mount /dev/loop0p1 /mnt/boot
      $ sudo swupd bundle-add --path=/mnt kernel-iot-lts2018
      $ uos_kernel_conf=`ls -t /mnt/boot/loader/entries/ | grep Clear-linux-iot-lts2018 | head -n1`
      $ uos_kernel=${uos_kernel_conf%.conf}
      $ echo "default $uos_kernel" | sudo tee /mnt/boot/loader/loader.conf
      $ sudo umount /mnt/boot
      $ sudo umount /mnt
      $ sync

#. Edit and run the ``launch_uos.sh`` script to launch the User VM.

   A sample `launch_uos.sh
   <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/devicemodel/samples/nuc/launch_uos.sh>`__
   is included in the Clear Linux OS release, and
   is also available in the ``acrn-hypervisor/devicemodel`` GitHub repo (in the samples
   folder) as shown here:

   .. literalinclude:: ../../../../devicemodel/samples/nuc/launch_uos.sh
      :caption: devicemodel/samples/nuc/launch_uos.sh
      :language: bash

   By default, the script is located in the ``/usr/share/acrn/samples/nuc/``
   directory. You can run it to launch the User OS:

   .. code-block:: none

      $ cd /usr/share/acrn/samples/nuc/
      $ sudo ./launch_uos.sh

#. You have successfully booted the ACRN hypervisor, Service VM, and User VM:

   .. figure:: images/gsg-successful-boot.png
      :align: center

      Successful boot
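The ``loader.conf`` step in this guide simply picks the newest **iot-lts2018** boot entry and strips its ``.conf`` suffix. The same selection logic can be exercised on hypothetical entry file names (on the real system, the list comes from ``ls -t /mnt/boot/loader/entries/``):

```shell
# Hypothetical entry list; really produced by: ls -t /mnt/boot/loader/entries/
entries="Clear-linux-iot-lts2018-sos-4.19.94-102.conf
Clear-linux-native-5.4.11-890.conf"

# Keep the first (newest) iot-lts2018 entry and drop the .conf suffix
uos_kernel_conf=$(echo "$entries" | grep Clear-linux-iot-lts2018 | head -n1)
uos_kernel=${uos_kernel_conf%.conf}
echo "default $uos_kernel"
```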
@ -1,156 +0,0 @@
.. _open_vswitch:

Enable OVS in ACRN
##################
Hypervisors need the ability to bridge network traffic between VMs
and with the outside world. This tutorial describes how to
use the `Open vSwitch (OVS)
<https://www.openvswitch.org/>`_ bridge in ACRN for this purpose.

.. note::
   OVS is provided as part of the ``service-os``
   bundle. Use Clear Linux OS version ``29660``.

What is OVS
***********
Open vSwitch (OVS) is an open-source implementation of
a distributed virtual multilayer switch that provides a switching
stack for hardware virtualization environments. OVS supports multiple
protocols and standards used in computer networks. For more detailed
OVS information, refer to `what-is-ovs
<http://docs.openvswitch.org/en/latest/intro/what-is-ovs/#what-is-open-vswitch>`_.

Why OVS
*******
Open vSwitch is targeted at multi-server virtualization deployments,
a landscape not well suited for ACRN's built-in L2 switch (the `Linux bridge
<https://wiki.linuxfoundation.org/networking/bridge>`_).
These environments are often characterized by highly dynamic end-points,
the maintenance of logical abstractions, and (sometimes) integration with
or offloading to special-purpose switching hardware.
For more on why Open vSwitch is used, refer to `why-ovs
<http://docs.openvswitch.org/en/latest/intro/why-ovs/>`_.

.. _enable_ovs_in_ACRN:

How to enable OVS in ACRN
*************************
The OVS service is included with the Clear Linux ``service-os`` bundle.

After booting the ACRN Service OS, disable the Clear Linux
autoupdate feature before setting up the OVS bridge to
prevent autoupdate from restoring the default bridge after
a system update::

   # swupd autoupdate --disable

You can then start the OVS service with the command::

   # systemctl start openvswitch

To start OVS automatically after a reboot, also run::

   # systemctl enable openvswitch

The default ``acrn-br0`` bridge is created by the Service VM ``systemd`` and
supports the User VM network.

.. figure:: images/default-acrn-network.png
   :align: center

   Default ACRN Network

How to use OVS bridge
*********************
#. Disable the ACRN network configuration::

      # cd /usr/lib/systemd/network/
      # mv 50-acrn.network 50-acrn.network_bak

#. Modify ``50-eth.network`` to enable DHCP on the OVS bridge:

   .. code-block:: none

      [Match]
      Name=ovs-br0

      [Network]
      DHCP=ipv4

#. Create the OVS bridge and the ``tap1`` network interface::

      # ovs-vsctl add-br ovs-br0
      # ip tuntap add dev tap1 mode tap
      # ip link set dev tap1 down
      # ip link set dev tap1 up

#. Add ``eno1`` and ``tap1`` into the OVS bridge::

      # ovs-vsctl add-port ovs-br0 eno1
      # ovs-vsctl add-port ovs-br0 tap1

#. Modify the ``launch_uos.sh`` script to use the ``tap1`` device before launching the User VM:

   .. code-block:: none

      # sed -i "s/virtio-net,tap0/virtio-net,tap1/" /usr/share/acrn/samples/nuc/launch_uos.sh

   .. note::
      If you set up the User VM via `acrn_quick_setup.sh
      <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/getting-started/acrn_quick_setup.sh>`_,
      then replace ``/usr/share/acrn/samples/nuc/launch_uos.sh`` with ``/root/launch_uos_<version>.sh``
      in the ``sed`` command above.

#. After rebooting the host, the User VM and Service VM network traffic flows through ``ovs-br0``.
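If you need more than one User VM, the tap-creation and port-add steps repeat per VM. A small generator sketch (the ``gen_ovs_cmds`` helper is hypothetical, not part of ACRN) prints the commands to run:

```shell
# Print the tap/OVS commands needed for N User VMs (tap1..tapN).
# This only emits the commands; pipe the output to sh as root to apply them.
gen_ovs_cmds() {
    n=$1
    i=1
    while [ "$i" -le "$n" ]; do
        echo "ip tuntap add dev tap$i mode tap"
        echo "ip link set dev tap$i up"
        echo "ovs-vsctl add-port ovs-br0 tap$i"
        i=$((i + 1))
    done
}

gen_ovs_cmds 2
```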

Example for VLAN network based on OVS in ACRN
*********************************************
We will use the OVS bridge VLAN feature to support network isolation
between VMs. :numref:`ovs-example1` shows an example with four VMs in
two hosts, with the hosts directly connected by a network cable. The VMs
are interconnected through statically configured IP addresses, and
VLAN IDs are used to put VM1 of HOST1 and VM1 of HOST2 into one VLAN.
Similarly, VM2 of HOST1 and VM2 of HOST2 are put into another VLAN.
In this configuration, the VM1s can communicate with each other, and the
VM2s can directly communicate with each other, but the VM1s and VM2s
cannot communicate with each other.

.. figure:: images/example-of-OVS-usage.png
   :align: center
   :name: ovs-example1

   An example of OVS usage in ACRN

Follow these steps to set up OVS networks on both HOSTs:

#. Set up ``ovs-br0`` instead of ``acrn-br0`` (refer to the previous section,
   :ref:`enable_ovs_in_ACRN`, for details).

#. Add ``eno1`` and ``tap<VM number>`` into the OVS bridge:

   .. code-block:: none

      # ovs-vsctl add-port ovs-br0 eno1
      # ovs-vsctl add-port ovs-br0 tap1 tag=101
      # ovs-vsctl add-port ovs-br0 tap2 tag=102
      # sed -i "s/virtio-net,tap0/virtio-net,tap1/" <1st launch_uos script>
      # sed -i "s/virtio-net,tap0/virtio-net,tap2/" <2nd launch_uos script>
      # reboot

#. Configure the static IP addresses on both HOSTs and their VMs::

      # <HOST_1 Service VM>:
      # ifconfig ovs-br0 192.168.1.100
      # <HOST_1 User VM1>:
      # ifconfig enp0s4 192.168.1.101
      # <HOST_1 User VM2>:
      # ifconfig enp0s4 192.168.1.102
      #
      # <HOST_2 Service VM>:
      # ifconfig ovs-br0 192.168.1.200
      # <HOST_2 User VM1>:
      # ifconfig enp0s4 192.168.1.201
      # <HOST_2 User VM2>:
      # ifconfig enp0s4 192.168.1.202

#. After that, a ``ping`` from VM1 of HOST1 to **VM1** of HOST2 will succeed,
   but a ``ping`` from VM1 of HOST1 to **VM2** of HOST2 will fail.
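The address plan in the listing follows a simple pattern: host number × 100 plus the VM number (0 for the Service VM). A tiny helper (hypothetical, for illustration only) captures it:

```shell
# Compute the static IP used above for a given <host> and <vm> index
# (vm index 0 means the Service VM).
vm_ip() {
    host=$1
    vm=$2
    echo "192.168.1.$((host * 100 + vm))"
}

vm_ip 1 0   # HOST_1 Service VM
vm_ip 2 2   # HOST_2 User VM2
```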
@ -44,15 +44,16 @@ kernels are loaded as multiboot modules. The ACRN hypervisor, Service
 VM, and Pre-Launched RT kernel images are all located on the NVMe drive.
 We recommend installing Ubuntu on the NVMe drive as the Service VM OS,
 which also has the required GRUB image to launch Pre-Launched RT mode.
-Refer to :ref:`Run Ubuntu as the Service VM <Ubuntu Service OS>`, to
+Refer to :ref:`rt_industry_ubuntu_setup`, to
 install Ubuntu on the NVMe drive, and use grub to launch the Service VM.

 Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe
 ===================================================================

-The Pre-Launched Preempt RT Linux use Clearlinux as rootfs. Refer to
-:ref:`Burn the Preempt-RT VM image onto the SATA disk <install_rtvm>` to
-download the RTVM image and burn it to the SATA drive. The Kernel should
+.. important:: Need to add instructions to download the RTVM image and burn it to the
+   SATA drive.
+
+The Kernel should
 be on the NVMe drive along with GRUB. You'll need to copy the RT kernel
 to the NVMe drive. Once you have successfully installed and booted
 Ubuntu from the NVMe drive, you'll then need to copy the RT kernel from
@ -38,8 +38,7 @@ Manual, (Section 17.19 Intel Resource Director Technology Allocation Features)
 RDT detection and resource capabilities
 ***************************************
 From the ACRN HV debug shell, use ``cpuid`` to detect and identify the
-resource capabilities. Use the platform's serial port for the HV shell
-(refer to :ref:`getting-started-up2` for setup instructions).
+resource capabilities. Use the platform's serial port for the HV shell.

 Check if the platform supports RDT with ``cpuid``. First, run ``cpuid 0x7 0x0``; the return value EBX [bit 15] is set to 1 if the platform supports
 RDT. Next, run ``cpuid 0x10 0x0`` and check the EBX [3-1] bits. EBX [bit 1]
@ -147,4 +147,6 @@ Install ACRN on the Debian VM
    $ sudo systemctl enable systemd-networkd
    $ sudo systemctl start systemd-networkd

-#. Follow :ref:`prepare-UOS` to start a User VM.
+#. Prepare and Start a User VM.
+
+   .. important:: Need instructions for this.
@ -12,9 +12,11 @@ Intel NUC Kit. If you have not, refer to the following instructions:
 - Install a `Clear Linux OS
   <https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
   on your NUC kit.
-- Follow the instructions at :ref:`quick-setup-guide` to set up the
+- Follow the instructions at XXX to set up the
   Service VM automatically on your NUC kit. Follow steps 1 - 4.

+  .. important:: need updated instructions that aren't Clear Linux dependent
+
 We are using an Intel Kaby Lake NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.

 Before you start this tutorial, make sure the KVM tools are installed on the
@ -12,9 +12,12 @@ Intel NUC Kit. If you have not, refer to the following instructions:
 - Install a `Clear Linux OS
   <https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
   on your NUC kit.
-- Follow the instructions at :ref:`quick-setup-guide` to set up the
+- Follow the instructions at XXX to set up the
   Service VM automatically on your NUC kit. Follow steps 1 - 4.

+  .. important:: need updated instructions that aren't Clear Linux
+     dependent
+
 Before you start this tutorial, make sure the KVM tools are installed on the
 development machine and set **IGD Aperture Size to 512** in the BIOS
 settings (refer to :numref:`intel-bios-ubun`). Connect two monitors to your
@ -17,8 +17,10 @@ Ubuntu 16.04 or 18.04.
 Install ACRN
 ************

-#. Install ACRN using Ubuntu 16.04 or 18.04 as its Service VM. Refer to
-   :ref:`Ubuntu Service OS`.
+#. Install ACRN using Ubuntu 16.04 or 18.04 as its Service VM.
+
+   .. important:: Need instructions from deleted document (using ubuntu
+      as SOS)

 #. Make the acrn-kernel using the `kernel_config_uefi_sos
    <https://raw.githubusercontent.com/projectacrn/acrn-kernel/master/kernel_config_uefi_sos>`_
@ -38,8 +40,10 @@ Install ACRN
    <https://maslosoft.com/kb/how-to-clean-old-snaps/>`_ to clean up old
    snap revisions if you're running out of loop devices.
 #. Make sure the networking bridge ``acrn-br0`` is created. If not,
-   create it using the instructions in
-   :ref:`Enable network sharing <enable-network-sharing-user-vm>`.
+   create it using the instructions in XXX.
+
+   .. important:: need instructions from deleted document (using ubuntu
+      as SOS)

 Set up and launch LXC/LXD
 *************************
@ -168,7 +172,10 @@ Set up ACRN prerequisites inside the container
 Install only the user-space components: acrn-dm, acrnctl, and acrnd

-3. Download, compile, and install ``iasl``. Refer to :ref:`Prepare the User VM <prepare-UOS>`.
+3. Download, compile, and install ``iasl``. Refer to XXX.
+
+   .. important:: need instructions from deleted document (using ubuntu
+      as SOS)

 Set up libvirt
 **************
@ -1,38 +0,0 @@
.. _sign_clear_linux_image:

Sign Clear Linux Image Binaries
###############################

In this tutorial, you will learn how to sign the binaries of a Clear Linux image so that you can
boot it through a secure-boot-enabled OVMF.

Prerequisites
*************
* Install **sbsigntool** on Ubuntu (verified on 18.04)::

     $ sudo apt install sbsigntool

* Download and extract the Clear Linux image from the `release <https://cdn.download.clearlinux.org/releases/>`_::

     $ export https_proxy=<your https proxy>:<port>
     $ wget https://cdn.download.clearlinux.org/releases/29880/clear/clear-29880-kvm.img.xz
     $ unxz clear-29880-kvm.img.xz

* Download the script `sign_image.sh
  <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/scripts/sign_image.sh>`_ on Ubuntu.

Steps to sign the binaries of the Clear Linux image
***************************************************
#. Follow the `KeyGeneration <https://wiki.ubuntu.com/UEFI/SecureBoot/KeyManagement/KeyGeneration>`_ instructions to generate
   the key and certificate that will be used to sign the binaries.

#. Get these files from the previous step:

   * archive-subkey-private.key
   * archive-subkey-public.crt

#. Use the script to sign binaries in the Clear Linux image::

      $ sudo sh sign_image.sh $PATH_TO_CLEAR_IMAGE $PATH_TO_KEY $PATH_TO_CERT

#. **clear-xxx-kvm.img.signed** will be generated in the same folder as the original **clear-xxx-kvm.img**.
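Under the hood, signing relies on ``sbsign`` from the **sbsigntool** package. The exact per-binary options used inside ``sign_image.sh`` are an assumption here (check the script itself); the helper below is a dry-run sketch that only prints the command it would run:

```shell
# Print the sbsign command for one EFI binary (dry run; remove the echo to execute).
# Key/cert names come from the key-generation step above; the binary name is an example.
sign_cmd() {
    key=$1; cert=$2; bin=$3
    echo "sbsign --key $key --cert $cert --output ${bin}.signed $bin"
}

sign_cmd archive-subkey-private.key archive-subkey-public.crt bootx64.efi
```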
@ -1,78 +0,0 @@
.. _static_ip:

Set Up a Static IP Address
##########################

When you install ACRN on your system following :ref:`getting_started`, a
bridge called ``acrn-br0`` is created and attached to the Ethernet network
interface of the platform. By default, the bridge gets its network configuration
using DHCP. This guide explains how to modify the system to use a static IP
address. You need ``root`` privileges to make these changes to the system.

ACRN Network Setup
******************

The ACRN Service VM is based on `Clear Linux OS`_ and it uses `systemd-networkd`_
to set up the Service VM networking. A few files are responsible for setting up the
ACRN bridge (``acrn-br0``), the TAP device (``tap0``), and how these are all
connected. Those files are installed in ``/usr/lib/systemd/network``
on the target device and can also be found under ``misc/acrnbridge`` in the source code.

Setting up the static IP address
********************************

You can set up a static IP address by copying the
``/usr/lib/systemd/network/50-eth.network`` file to the
``/etc/systemd/network/`` directory. You can create this directory and
copy the file with the following commands:

.. code-block:: none

   mkdir -p /etc/systemd/network
   cp /usr/lib/systemd/network/50-eth.network /etc/systemd/network

Modify the ``[Network]`` section in the
``/etc/systemd/network/50-eth.network`` file you just created.
This is the content of the file used in ACRN by default.

.. literalinclude:: ../../../../misc/acrnbridge/eth.network
   :caption: misc/acrnbridge/eth.network
   :emphasize-lines: 5

Edit the file to remove the line highlighted above and add your network settings to
that ``[Network]`` section. You will typically need to add the ``Address=``, ``Gateway=``,
and ``DNS=`` parameters there. Many more parameters can be used, but
detailing them is beyond the scope of this document. For an extensive list,
visit the official `systemd-network`_ page.

This is an example of what a typical ``[Network]`` section would look like, specifying
a static IP address:

.. code-block:: none

   [Network]
   Address=192.168.1.87/24
   Gateway=192.168.1.254
   DNS=192.168.1.254

Activate the new configuration
******************************

You do not need to reboot the machine after making these changes to the system;
restarting the ``systemd-networkd`` service as follows (run as ``root``) will suffice:

.. code-block:: none

   systemctl daemon-reload
   systemctl restart systemd-networkd

If you encounter connectivity issues after following this guide, contact us on the
`ACRN-users mailing list`_ or file an issue in `ACRN hypervisor issues`_. Provide the details
of the configuration you are trying to set up, the modifications you have made to your system, and
the output of ``journalctl -b -u systemd-networkd`` so we can best assist you.

.. _systemd-networkd: https://www.freedesktop.org/software/systemd/man/systemd-networkd.service.html
.. _Clear Linux OS: https://clearlinux.org
.. _systemd-network: https://www.freedesktop.org/software/systemd/man/systemd.network.html
.. _ACRN-users mailing list: https://lists.projectacrn.org/g/acrn-users
.. _ACRN hypervisor issues: https://github.com/projectacrn/acrn-hypervisor/issues
@ -1,128 +0,0 @@
.. _getting-started-up2:

Getting Started Guide for the UP2 Board
#######################################

Hardware setup
**************

The `UP Squared board <http://www.up-board.org/upsquared/specifications/>`_ (UP2) is
an x86 maker board based on the Intel Apollo Lake platform. The UP boards
are used in IoT applications, industrial automation, digital signage, and more.

The UP2 features Intel `Celeron N3350
<https://ark.intel.com/products/95598/Intel-Celeron-Processor-N3350-2M-Cache-up-to-2_4-GHz>`_
and Intel `Pentium N4200
<https://ark.intel.com/products/95592/Intel-Pentium-Processor-N4200-2M-Cache-up-to-2_5-GHz>`_
SoCs. Both have been confirmed to work with ACRN.

Connecting to the serial port
=============================

The UP2 board has two serial ports. The following figure shows the UP2 board's
40-pin HAT connector we'll be using, as documented in the `UP2 Datasheet
<https://up-board.org/wp-content/uploads/datasheets/UP-Square-DatasheetV0.5.pdf>`_.

.. image:: images/the-bottom-side-of-UP2-board.png
   :align: center

We'll access the serial port through the I/O pins in the
40-pin HAT connector using a `USB TTL serial cable
<http://www.ftdichip.com/Products/USBTTLSerial.htm>`_;
this example shows a connection using a ``PL2303TA USB to TTL serial cable``:

.. image:: images/USB-to-TTL-serial-cable.png
   :align: center

Connect pin 6 (``Ground``), pin 8 (``UART_TXD``), and pin 10 (``UART_RXD``) of the HAT
connector to the ``GND``, ``RX``, and ``TX`` pins, respectively, of your
USB serial cable. Plug the USB TTL serial cable into your PC and use a
console emulation tool such as ``minicom`` or ``putty`` to communicate
with the UP2 board for debugging.

.. image:: images/the-connection-of-serial-port.png
   :align: center

Software setup
**************

Setting up the ACRN hypervisor (and associated components) on the UP2
board is no different than on other hardware platforms, so follow
the instructions provided in :ref:`rt_industry_setup`, with
the additional information below.

There are a few parameters specific to the UP2 board that differ from
what is referenced in the :ref:`rt_industry_setup` section:

1. Serial port settings
#. Storage device name

You will need to keep these in mind in a few places:

* When mounting the EFI System Partition (ESP)

  .. code-block:: none

     # mount /dev/mmcblk0p1 /mnt

* When adjusting the ``acrn.conf`` file

  * Set the ``root=`` parameter using the ``PARTUUID`` or device name directly

* When configuring the EFI firmware to boot the ACRN hypervisor by default

  .. code-block:: none

     # efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/mmcblk0 -p 1 -L "ACRN Hypervisor" \
       -u "bootloader=\EFI\org.clearlinux\bootloaderx64.efi uart=bdf@0:18.1"

.. note::
   There have been reports that the UP2 EFI firmware does not always keep
   these settings during a reboot. Make sure to always double-check the
   settings if ACRN is not running correctly. There is no reliable way to
   set this boot order, so you may want to remove other, unused boot entries
   and also change the boot order (``-o`` option).

UP2 serial port setting
=======================

The serial port (ttyS1) in the 40-pin HAT connector is located at serial PCI BDF ``0:18.1``.
You can check this in the ``lspci`` output from the initial Clear Linux installation.
You can also use ``dmesg | grep tty`` to get its IRQ information for the console setting;
update the SOS bootargs ``console=ttyS1`` in ``acrn.conf`` to match that console setting.

.. code-block:: none

   # lspci | grep UART
   00:18.0 . Series HSUART Controller #1 (rev 0b)
   00:18.1 . Series HSUART Controller #2 (rev 0b)

   # dmesg | grep tty
   dw-apb-uart.8: ttyS0 at MMIO 0x91524000 (irq = 4, base_baud = 115200) is a 16550A
   dw-apb-uart.9: ttyS1 at MMIO 0x91522000 (irq = 5, base_baud = 115200) is a 16550A

The second entry, associated with ``00:18.1`` at IRQ 5, is the one on the 40-pin HAT connector.

UP2 block device
================

The UP2 board has an on-board eMMC device. The device name to be used
throughout the :ref:`getting_started` guide therefore is ``/dev/mmcblk0``
(and ``/dev/mmcblk0pX`` for any partition).

The UUID of the partition ``/dev/mmcblk0p3`` can be found with:

.. code-block:: none

   # blkid /dev/mmcblk0p3

.. note::
   You can also use the device name directly, e.g.: ``root=/dev/mmcblk0p3``
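To build the ``root=`` parameter from the ``PARTUUID``, you can capture the ``blkid`` value directly. The UUID below is a made-up example; on the board, substitute ``$(blkid -s PARTUUID -o value /dev/mmcblk0p3)``:

```shell
# Hypothetical PARTUUID; on the UP2 get it with:
#   blkid -s PARTUUID -o value /dev/mmcblk0p3
partuuid="12345678-abcd-4ef0-9876-0123456789ab"
echo "root=PARTUUID=${partuuid}"
```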

Running the hypervisor
**********************

Now that the hypervisor and Service OS have been installed on your UP2 board,
you can proceed with the rest of the instructions in
:ref:`kbl-nuc-sdc` and install the User OS (UOS).
@ -1,116 +0,0 @@
.. _using_celadon_as_uos:

Run Celadon as the User VM
##########################

`Celadon <https://01.org/projectceladon/>`_ is an open source Android software reference stack
for Intel architecture. It builds upon a vanilla Android stack and incorporates open-sourced components
that are optimized for the hardware. This tutorial describes how to run Celadon as the User VM
on the ACRN hypervisor. We are using the Kaby Lake-based NUC (model NUC7i7DNHE) in this tutorial.

Prerequisites
*************

* Ubuntu 18.04 with at least 150 GB of free disk space.
* Intel Kaby Lake NUC7ixDNHE (Reference Platforms: :ref:`ACRN supported platforms <hardware>`).
* BIOS version 0059 or later firmware should be flashed on the NUC system,
  and the ``Device Mode`` option selected in the USB category of the Devices tab
  in order to enable the USB device function through the internal USB 3.0 port header.
* Two HDMI monitors.
* A USB dongle (e.g. `Dawson Canyon USB 3.0 female
  to 10-pin header cable <https://www.gorite.com/dawson-canyon-usb-3-0-female-to-10-pin-header-cable>`_)
  is optional if you plan to use the ``adb`` and ``fastboot`` tools in the Celadon User OS for debugging.
  Refer to the `Technical Product Specification
  <https://www.intel.com/content/dam/support/us/en/documents/mini-pcs/nuc-kits/NUC7i5DN_TechProdSpec.pdf>`_
  to identify the USB 3.0 port header on the main board.

.. note::
   This document uses the (default) SDC scenario. If you use a different
   scenario, you will need a serial port connection to your platform to see its
   console, or you must change the configuration of the User VM that will run Celadon.

Build Celadon from source
*************************

#. Follow the instructions in the `Build Celadon from source
   <https://01.org/projectceladon/documentation/getting-started/build-source>`__ guide
   to set up the Celadon project source code.

   .. note:: The main branch is based on the Google Android 10
      pre-Production Early Release. Use the following command to specify a
      stable Celadon branch based on the Google Android 9 source code in order
      to apply the patches in the :ref:`ACRN patch list`::

         $ repo init -u https://github.com/projectceladon/manifest.git -b celadon/p/mr0/master -m stable-build/ww201925_H.xml

#. Select the Celadon build target::

      $ cd <Celadon project directory>
      $ source build/envsetup.sh
      $ lunch cel_apl-userdebug

   .. note:: You can run ``lunch`` with no arguments to manually choose your Celadon build variants.

#. Download these additional patches and apply each one individually with the following command::

      $ git apply <patch-filename>

   .. table:: ACRN patch list
      :widths: auto
      :name: ACRN patch list

      +--------------------------------------------------------------------+-------------------------------------------+
      | Patch link                                                         | Description                               |
      +====================================================================+===========================================+
      | https://github.com/projectceladon/device-androidia/pull/458        | kernel config: Add the support of ACRN    |
      +--------------------------------------------------------------------+-------------------------------------------+
      | https://github.com/projectceladon/device-androidia-mixins/pull/293 | graphic/mesa: Add the support of ACRN     |
      +--------------------------------------------------------------------+-------------------------------------------+
      | https://github.com/projectceladon/device-androidia/pull/441        | cel_apl: use ttyS0 instead of ttyUSB0     |
      +--------------------------------------------------------------------+-------------------------------------------+
      | https://github.com/projectceladon/device-androidia/pull/439        | Disable trusty and pstore                 |
      +--------------------------------------------------------------------+-------------------------------------------+

   .. note:: If the ``git apply`` command shows an error, you may need to modify
      the source code manually instead.

#. Build the Celadon image::

      $ device/intel/mixins/mixin-update
      $ make SPARSE_IMG=true gptimage -j $(nproc)

   .. note:: Replace the ``$(nproc)`` argument with the number of processor threads on your workstation
      in order to build the source code with parallel tasks. The Celadon gptimage will be
      generated at ``out/target/product/cel_apl/cel_apl_gptimage.img``.
|
||||
|
||||
Steps for Using Celadon as the User VM
|
||||
**************************************
|
||||
|
||||
#. Follow :ref:`kbl-nuc-sdc` to boot the ACRN Service VM based on Clear Linux 29880.
|
||||
|
||||
#. Prepare dependencies on your NUC::
|
||||
|
||||
# mkdir ~/celadon && cd ~/celadon
|
||||
# cp /usr/share/acrn/samples/nuc/launch_win.sh ./launch_android.sh
|
||||
# sed -i "s/win10-ltsc/android/" launch_android.sh
|
||||
# scp <cel_apl_gptimage.img from your host> ./android.img
|
||||
# sh launch_android.sh
|
||||
|
||||
#. You will see the shell console from the terminal and the Celadon GUI on the secondary monitor
|
||||
after the system boots. You can check the build info using the ``getprop`` command in the shell console:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
console:/ $
|
||||
console:/ $ getprop | grep finger
|
||||
[ro.bootimage.build.fingerprint]: [cel_apl/cel_apl/cel_apl:9/PPR2.181005.003.A1/rui06241613:userdebug/test-keys]
|
||||
[ro.build.fingerprint]: [cel_apl/cel_apl/cel_apl:9/PPR2.181005.003.A1/rui06241613:userdebug/test-keys]
|
||||
[ro.vendor.build.fingerprint]: [cel_apl/cel_apl/cel_apl:9/PPR2.181005.003.A1/rui06241613:userdebug/test-keys]
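The ``sed`` command in the preparation step simply renames the VM inside the copied launch script from ``win10-ltsc`` to ``android``. A minimal standalone sketch of that substitution, run against a scratch file rather than the real script (the file content here is a made-up stand-in):

```shell
# Create a stand-in for the copied launch script (hypothetical content).
cat > /tmp/launch_android.sh <<'EOF'
vm_name=win10-ltsc
acrn-dm -s 3,virtio-blk,./win10-ltsc.img $vm_name
EOF

# Same in-place substitution as in the setup steps above.
sed -i "s/win10-ltsc/android/" /tmp/launch_android.sh

# Every occurrence is rewritten; none of the old name remains.
grep android /tmp/launch_android.sh
```

The same pattern works for renaming the backing image referenced inside the script, which is why the guide copies the Windows sample script and only patches the name.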
.. figure:: images/Celadon_home.png
   :width: 700px
   :align: center

.. figure:: images/Celadon_apps.png
   :width: 700px
   :align: center
@ -1,302 +0,0 @@
.. _using-sbl-up2:

Enable SBL on the UP2 Board
###########################

This document builds on :ref:`getting-started-up2` and explains how to use
SBL instead of UEFI to boot the UP2 board.

Slim Bootloader (SBL) is an open-source boot firmware solution, built from
the ground up to be secure, lightweight, and highly optimized, while
leveraging robust tools and libraries from the EDK II framework. For more
information about booting ACRN with SBL, see
`<https://slimbootloader.github.io/how-tos/boot-acrn.html>`_.

.. image:: images/sbl_boot_flow_UP2.png
   :align: center

The figure above shows the verified boot sequence with SBL on the UP2 (an
Intel Architecture platform); the boot process proceeds as follows:

#. SBL verifies and boots the ACRN hypervisor and Service OS kernel.
#. The Service OS kernel verifies and loads the ACRN Device Model and vSBL.
#. vSBL starts the User-side verified boot process.

Prerequisites
*************

The following hardware and software are required to use SBL on an UP2 board:

* UP2 kit (`Model N3350 <https://up-shop.org/up-boards/94-up-squared-celeron-duo-core-4gb-memory32gb-emmc.html>`_)
* `USB 2.0 Pin Header Cable <https://up-shop.org/up-peripherals/110-usb-20-pin-header-cable.html>`_ for debug UART output
* USB to TTL Serial Cable (PL2303TA, for example) for debug UART output
* 3-pin male-to-male jumper (Dupont) wires for debug UART output
* Micro USB OTG cable for flashing
* Linux host
* Internet access

.. image:: images/up2_sbl_connections.png
   :align: center

The connections between the USB to TTL Serial Cable and the USB 2.0 Pin
Header Cable should be:

.. image:: images/up2_sbl_cables_connections.png
   :align: center

Build SBL
*********

Follow the `Building <https://slimbootloader.github.io/supported-hardware/up2.html#building>`_
and `Stitching <https://slimbootloader.github.io/supported-hardware/up2.html#stitching>`_
steps from `<https://slimbootloader.github.io/supported-hardware/up2.html>`_ to generate the
BIOS binary file ``<SBL_IFWI_IMAGE>``, the new IFWI image with SBL in the BIOS region.

Flash SBL on the UP2
********************

#. Download the appropriate BIOS update for the `UP2 Board <https://downloads.up-community.org>`_.
#. Insert an empty USB flash drive into your PC and format it as FAT32.
#. Decompress the BIOS zip file onto the formatted drive.
#. Attach the USB disk and a keyboard to the board and power it on.
#. During boot, press :kbd:`F7` to enter the UEFI BIOS boot menu.
#. Navigate through the menus and select ``Built-in EFI shell``.
#. Take note of which filesystem number ``fs*`` your USB drive is mapped to.
#. Switch to that filesystem, e.g. ``fs1:``. (Don't forget the colon.)
#. Navigate to the path where you decompressed the update (the ``cd`` and
   ``ls`` commands are available here, as in a Unix shell) and flash the new
   IFWI image with the ``Fpt`` tool:

   .. code-block:: none

      Fpt_3.1.50.2222.efi -f <SBL_IFWI_IMAGE> -y

Build ACRN for UP2
******************

In Clear Linux, build the Service VM and LaaG images with these two files:

* create-up2-images.sh

  .. code-block:: none

     $ wget https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/tutorials/create-up2-images.sh

* uos.json

  An example of the configuration file ``uos.json``:

  .. code-block:: none

     {
         "DestinationType" : "virtual",
         "PartitionLayout" : [ { "disk" : "clearlinux.img", "partition" : 1, "size" : "100M", "type" : "EFI" },
                               { "disk" : "clearlinux.img", "partition" : 2, "size" : "10G", "type" : "linux" } ],
         "FilesystemTypes" : [ { "disk" : "clearlinux.img", "partition" : 1, "type" : "vfat" },
                               { "disk" : "clearlinux.img", "partition" : 2, "type" : "ext4" } ],
         "PartitionMountPoints" : [ { "disk" : "clearlinux.img", "partition" : 1, "mount" : "/boot" },
                                    { "disk" : "clearlinux.img", "partition" : 2, "mount" : "/" } ],
         "Version": 31030,
         "Bundles": ["kernel-iot-lts2018", "openssh-server", "x11-server", "os-core", "os-core-update"]
     }

  .. note::
     To generate an image for a specific release, change the ``"Version"``
     argument, e.g. ``"Version": 3****`` instead of ``"Version": 31030``.
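The ``"Version"`` field can also be bumped with a one-line ``sed`` edit instead of editing the file by hand. A minimal sketch, run against a trimmed-down stand-in for ``uos.json`` (the target release number 31670 is just an example):

```shell
# Write a trimmed-down stand-in for uos.json (fields abbreviated).
cat > /tmp/uos.json <<'EOF'
{
    "DestinationType" : "virtual",
    "Version": 31030,
    "Bundles": ["kernel-iot-lts2018", "os-core"]
}
EOF

# Replace whatever release number is present with 31670.
sed -i 's/"Version": [0-9]*/"Version": 31670/' /tmp/uos.json

grep '"Version"' /tmp/uos.json
```

Keeping this in a script makes it easy to regenerate ``uos.json`` for each Clear Linux release you test against.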
Build the Service VM and LaaG images:

.. code-block:: none

   $ sudo -s
   # chmod +x create-up2-images.sh
   # ./create-up2-images.sh --images-type all --clearlinux-version 31030 --laag-json uos.json

.. note::
   You must have root privileges to run ``create-up2-images.sh``.

   If you want to build with your own ``acrn-hypervisor``, add the
   ``--acrn-code-path`` argument that specifies the directory where your
   ``acrn-hypervisor`` is found.

   When building images, set the ``--clearlinux-version`` argument to a
   specific release (such as 31030). To generate the Service VM images only,
   set the ``--images-type`` argument to ``sos``.

This step generates the Service VM and LaaG images:

* sos_boot.img
* sos_rootfs.img
* up2_laag.img

Build the binary image ``partition_desc.bin`` for the GPT partitions,
changing the partition layout in ``partition_desc.ini`` first if needed:

.. code-block:: none

   $ cd ~/acrn-hypervisor/doc/tutorials/
   $ sudo -s
   # python2 gpt_ini2bin.py partition_desc.ini > partition_desc.bin

You also need the configuration file ``flash_LaaG.json`` for flashing, which
is in the same ``~/acrn-hypervisor/doc/tutorials/`` directory.

.. table::
   :widths: auto

   +------------------------------+----------------------------------------------------------+
   | Filename                     | Description                                              |
   +==============================+==========================================================+
   | sos_boot.img                 | This Service VM image contains the ACRN hypervisor and   |
   |                              | Service VM kernel.                                       |
   +------------------------------+----------------------------------------------------------+
   | sos_rootfs.img               | This is the root filesystem image for the Service VM. It |
   |                              | contains the Device Model implementation and the         |
   |                              | Service VM user space.                                   |
   +------------------------------+----------------------------------------------------------+
   | partition_desc.bin           | This is the binary image for the GPT partitions.         |
   +------------------------------+----------------------------------------------------------+
   | up2_laag.img                 | This is the root filesystem image for the User VM (LaaG).|
   |                              | It has an integrated kernel and userspace.               |
   +------------------------------+----------------------------------------------------------+
   | flash_LaaG.json              | Configuration file for the Intel Platform Flash Tool     |
   |                              | to flash the Service VM image + hypervisor/Service VM    |
   |                              | boot image + Service VM userland.                        |
   +------------------------------+----------------------------------------------------------+

.. note::
   In this step, build the Service VM and LaaG images in Clear Linux rather
   than Ubuntu.

Download and install flash tool
*******************************

#. Download the Intel Platform Flash Tool Lite from
   `<https://github.com/projectceladon/tools/tree/master/platform_flash_tool_lite/latest/>`_.

#. On an Ubuntu host, install `platformflashtoollite_5.8.9.0_linux_x86_64.deb
   <https://github.com/projectceladon/tools/blob/master/platform_flash_tool_lite/latest/platformflashtoollite_5.8.9.0_linux_x86_64.deb>`_,
   for example.

Service VM and LaaG Installation
********************************

#. Connect a USB cable from the debug board to your Ubuntu host machine, and
   run the following command to verify that its USB serial port is discovered
   and shows up under ``/dev``:

   .. code-block:: none

      $ ls /dev/ttyUSB*
      /dev/ttyUSB0

#. Connect to the board via ``minicom`` using ``/dev/ttyUSB0``. For example:

   .. code-block:: none

      $ sudo minicom -D /dev/ttyUSB0

   .. note::
      Verify that the minicom serial port settings are 115200 8N1 and that
      both HW and SW flow control are turned off.

#. When the following console log displays, press any key to enter the
   command shell:

   .. code-block:: none

      ====================Os Loader====================


      Press any key within 2 second(s) to enter the command shell

      Shell>

#. Swap the boot sequence so that ``DevType: MEM`` is at ``Idx:0``:

   .. code-block:: none

      Shell> boot
      Boot options (in HEX):

      Idx|ImgType|DevType|DevNum|Flags|HwPart|FsType|SwPart|File/Lbaoffset
        0|      0|  MMC  |  0   |  0  |  0   |  RAW |  1   | 0x0
        1|      4|  MEM  |  0   |  0  |  0   |  RAW |  0   | 0x0

      SubCommand:
        s   -- swap boot order by index
        a   -- modify all boot options one by one
        q   -- quit boot option change
        idx -- modify the boot option specified by idx (0 to 0x1)
      s
      Updated the Boot Option List
      Boot options (in HEX):

      Idx|ImgType|DevType|DevNum|Flags|HwPart|FsType|SwPart|File/Lbaoffset
        0|      4|  MEM  |  0   |  0  |  0   |  RAW |  0   | 0x0
        1|      0|  MMC  |  0   |  0  |  0   |  RAW |  1   | 0x0
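The ``s`` subcommand simply swaps the first two entries of the boot option list, promoting the ``MEM`` (fastboot) entry ahead of ``MMC``. As a plain-shell illustration of that reordering (this is not SBL code, just a model of the effect):

```shell
# Model the two boot options as a bash array, MMC first as in the
# original listing.
boot_order=("MMC" "MEM")

# Swap index 0 and index 1, as the 's' subcommand does.
tmp=${boot_order[0]}
boot_order[0]=${boot_order[1]}
boot_order[1]=$tmp

echo "${boot_order[@]}"   # MEM MMC
```

After the swap, the board boots from ``MEM`` first, which is what drops it into fastboot mode on the next reboot.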
#. Exit and reboot to fastboot mode:

   .. code-block:: none

      Shell> exit

      ...

      40E0 | 175118 ms |    158 ms | Kernel setup
      40F0 | 175144 ms |     26 ms | FSP ReadyToBoot/EndOfFirmware notify
      4100 | 175144 ms |      0 ms | TPM IndicateReadyToBoot
      ------+------------+------------+----------------------------------

      Starting MB Kernel ...

      abl cmd 00: console=ttyS0,115200
      abl cmd 00 length: 20
      abl cmd 01: fw_boottime=175922
      abl cmd 01 length: 18
      boot target: 1
      target=1
      Enter fastboot mode ...
      Start Send HECI Message: EndOfPost
      HECI sec_mode 00000000
      GetSeCMode successful
      GEN_END_OF_POST size is 4
      uefi_call_wrapper(SendwACK) = 0
      Group    =000000FF
      Command  =0000000C
      IsRespone=00000001
      Result   =00000000
      RequestedActions =00000000
      USB for fastboot transport layer selected

#. When the UP2 board is in fastboot mode, you should be able to see the
   device in the Platform Flash Tool. Select the ``flash_LaaG.json`` file,
   set ``Configuration`` to ``Service VM_and_LaaG``, and click
   ``Start to flash`` to flash the images.

   .. image:: images/platformflashtool_start_to_flash.png
      :align: center

Boot to Service VM
******************

After flashing, the UP2 board automatically reboots and boots to the ACRN
hypervisor. Log in to the Service VM console as shown:

.. image:: images/vm_console_login.png
   :align: center

Launch User VM
**************

Run the ``launch_uos.sh`` script to launch the User VM:

.. code-block:: none

   $ cd ~
   $ wget https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/tutorials/launch_uos.sh
   $ sudo ./launch_uos.sh -V 1

**Congratulations**, you are now watching the User VM booting!
43 doc/tutorials/using_serial_port.rst Normal file
@ -0,0 +1,43 @@
.. _connect_serial_port:

Using the Serial Port on KBL NUC
================================

You can enable the serial console on the
`KBL NUC <https://www.amazon.com/Intel-Business-Mini-Technology-BLKNUC7i7DNH1E/dp/B07CCQ8V4R>`_
(NUC7i7DNH). The KBL NUC has a serial port header that you can
expose with a serial DB9 header cable. (The NUC has a punch-out hole for
mounting the serial connector.)

.. figure:: images/NUC-serial-port.jpg

   KBL NUC with populated serial port punch-out

You can `purchase
<https://www.amazon.com/dp/B07BV1W6N8/ref=cm_sw_r_cp_ep_dp_wYm0BbABD5AK6>`_
such a cable, or you can build it yourself;
refer to the `KBL NUC product specification
<https://www.intel.com/content/dam/support/us/en/documents/mini-pcs/nuc-kits/NUC7i7DN_TechProdSpec.pdf>`_
as shown below:

.. figure:: images/KBL-serial-port-header.png
   :scale: 80

   KBL serial port header details


.. figure:: images/KBL-serial-port-header-to-RS232-cable.jpg
   :scale: 80

   KBL `serial port header to RS232 cable
   <https://www.amazon.com/dp/B07BV1W6N8/ref=cm_sw_r_cp_ep_dp_wYm0BbABD5AK6>`_


You'll also need an `RS232 DB9 female to USB cable
<https://www.amazon.com/Adapter-Chipset-CableCreation-Converter-Register/dp/B0769DVQM1>`_,
or an `RS232 DB9 female/female (null modem) cross-over cable
<https://www.amazon.com/SF-Cable-Null-Modem-RS232/dp/B006W0I3BA>`_
to connect to your host system.

Note that if you want to use the RS232 DB9 female/female cable, choose
the **cross-over** type rather than the **straight-through** type.
@ -1,364 +0,0 @@
.. _Ubuntu Service OS:

Run Ubuntu as the Service VM
############################

This document builds on the :ref:`getting_started` series and explains how
to use Ubuntu instead of `Clear Linux OS`_ as the Service VM with the ACRN
hypervisor. (Note that different OSes can be used for the Service and User
VMs.) In the following instructions, we build on material described in
:ref:`kbl-nuc-sdc`.

Install Ubuntu (natively)
*************************

Ubuntu 18.04.1 LTS is used throughout this document; older versions
such as 16.04 also work.

* Download Ubuntu 18.04 from the `Ubuntu 18.04.1 LTS (Bionic Beaver) page
  <http://releases.ubuntu.com/18.04.1/>`_ and select the `ubuntu-18.04.1-desktop-amd64.iso
  <http://releases.ubuntu.com/18.04.1/ubuntu-18.04.1-desktop-amd64.iso>`_ image.

* Follow Ubuntu's `online instructions <https://tutorials.ubuntu.com/tutorial/tutorial-install-ubuntu-desktop>`_
  to install it on your device.

  .. note::
     Configure your device's proxy settings to have full internet access.

* While not strictly required, enabling SSH gives you a very useful
  mechanism for accessing the Service VM remotely or while running one or
  more User VMs. Follow these steps to enable it on the Ubuntu Service VM:

  .. code-block:: none

     sudo apt-get install openssh-server
     sudo service ssh status
     sudo service ssh start

* If you plan to SSH into Ubuntu as root, you must also set the following in
  ``/etc/ssh/sshd_config``:

  .. code-block:: none

     PermitRootLogin yes
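That ``sshd_config`` change can also be scripted. A minimal sketch that flips the directive in place, run here against a scratch copy rather than the real ``/etc/ssh/sshd_config``:

```shell
# Scratch copy standing in for /etc/ssh/sshd_config.
cat > /tmp/sshd_config <<'EOF'
#PermitRootLogin prohibit-password
PasswordAuthentication yes
EOF

# Uncomment the directive (if commented) and force it to "yes".
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /tmp/sshd_config

grep PermitRootLogin /tmp/sshd_config
```

Remember that ``sshd`` only rereads its configuration on restart, so the change takes effect after restarting the SSH service.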
Install ACRN
************

ACRN components are distributed in source form, so you must download the
source code, build it, and install it on your device.

1. Install the build tools and dependencies.

   Follow the instructions found in :ref:`getting-started-building` to
   install all the build tools and dependencies on your system.

#. Clone the `Project ACRN <https://github.com/projectacrn/acrn-hypervisor>`_
   code repository.

   Enter the following:

   .. code-block:: none

      cd ~
      git clone https://github.com/projectacrn/acrn-hypervisor
      cd acrn-hypervisor
      git checkout acrn-2019w47.1-140000p

   .. note::
      We clone the git repository above, but it is also possible to download
      the tarball for any specific tag or release from the `Project ACRN
      Github release page <https://github.com/projectacrn/acrn-hypervisor/releases>`_.

#. Build and install ACRN.

   Here is the short version of how to build and install ACRN from source:

   .. code-block:: none

      cd ~/acrn-hypervisor
      make
      sudo make install

   For more details, refer to :ref:`getting-started-building`.

#. Install the hypervisor.

   The ACRN device model and tools were installed as part of the previous
   step. However, ``make install`` does not install the hypervisor
   (``acrn.efi``) on your EFI System Partition (ESP), nor does it configure
   your EFI firmware to boot it automatically. Follow the steps below to
   perform these operations and complete the ACRN installation.

   #. Check the contents of your ESP (as ``root``):

      .. code-block:: none

         ls /boot/efi/EFI/ubuntu/

      You should see the following output:

      .. code-block:: none

         fw  fwupx64.efi  grub.cfg  grubx64.efi  MokManager.efi  shimx64.efi

   #. Install the hypervisor (``acrn.efi``):

      .. code-block:: none

         sudo mkdir /boot/efi/EFI/acrn/
         sudo cp ~/acrn-hypervisor/build/hypervisor/acrn.efi /boot/efi/EFI/acrn/

   #. Configure the EFI firmware to boot the ACRN hypervisor by default:

      .. code-block:: none

         # For SATA
         sudo efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 \
            -L "ACRN Hypervisor" -u "bootloader=\EFI\ubuntu\grubx64.efi "
         # For NVMe
         sudo efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/nvme0n1 -p 1 \
            -L "ACRN Hypervisor" -u "bootloader=\EFI\ubuntu\grubx64.efi "

      .. note::
         Note the extra space at the end of the EFI command-line options
         strings above. This is a workaround for a current `efi-stub
         bootloader name issue <https://github.com/projectacrn/acrn-hypervisor/issues/4520>`_.
         It ensures that the end of the string is properly detected.
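The only difference between the SATA and NVMe invocations is the ``-d`` disk argument. A small helper sketch that builds the command string for a given boot disk (``build_efibootmgr_cmd`` is a hypothetical name, and the command is printed rather than executed so nothing touches your firmware variables):

```shell
# Print (do not run) the efibootmgr invocation for a given boot disk.
build_efibootmgr_cmd() {
    disk="$1"   # e.g. /dev/sda or /dev/nvme0n1
    printf 'efibootmgr -c -l "\\EFI\\acrn\\acrn.efi" -d %s -p 1 -L "ACRN Hypervisor" -u "bootloader=\\EFI\\ubuntu\\grubx64.efi "\n' "$disk"
}

build_efibootmgr_cmd /dev/sda
build_efibootmgr_cmd /dev/nvme0n1
```

Reviewing the printed command before piping it to ``sudo sh`` is a cheap way to avoid creating a boot entry that points at the wrong disk.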
   #. Verify that "ACRN Hypervisor" was added and that it will boot first:

      .. code-block:: none

         sudo efibootmgr -v

      You can also verify it by entering the EFI firmware at boot (using
      :kbd:`F10`).

   #. You can change the boot order at any time using ``efibootmgr -o``:

      .. code-block:: none

         sudo efibootmgr -o xxx,xxx,xxx

Install the Service VM kernel
*****************************

Download the latest Service VM kernel.

1. The latest Service VM kernel from the latest Clear Linux OS release is
   located at
   https://download.clearlinux.org/releases/current/clear/x86_64/os/Packages/. Look for the following ``.rpm`` file:
   ``linux-iot-lts2018-sos-<kernel-version>-<build-version>.x86_64.rpm``.

   While we recommend using the current (latest) Clear Linux OS release, you
   can download a specific Clear Linux release from an area with that
   release number, such as the following:
   https://download.clearlinux.org/releases/31670/clear/x86_64/os/Packages/linux-iot-lts2018-sos-4.19.78-98.x86_64.rpm

#. Download and extract the latest Service VM kernel (this guide uses 31670
   as the current example):

   .. code-block:: none

      sudo mkdir ~/sos-kernel-build
      cd ~/sos-kernel-build
      wget https://download.clearlinux.org/releases/31670/clear/x86_64/os/Packages/linux-iot-lts2018-sos-4.19.78-98.x86_64.rpm
      sudo apt-get install rpm2cpio
      rpm2cpio linux-iot-lts2018-sos-4.19.78-98.x86_64.rpm | cpio -idmv

#. Install the Service VM kernel and its drivers (modules):

   .. code-block:: none

      sudo cp -r ~/sos-kernel-build/usr/lib/modules/4.19.78-98.iot-lts2018-sos/ /lib/modules/
      sudo mkdir /boot/acrn/
      sudo cp ~/sos-kernel-build/usr/lib/kernel/org.clearlinux.iot-lts2018-sos.4.19.78-98 /boot/acrn/

#. Configure Grub to load the Service VM kernel:

   * Modify the ``/etc/grub.d/40_custom`` file to create a new Grub entry
     that will boot the Service VM kernel.

     .. code-block:: none

        menuentry 'ACRN Ubuntu Service VM' --id ubuntu-service-vm {
           recordfail
           load_video
           insmod gzio
           insmod part_gpt
           insmod ext2
           linux /boot/acrn/org.clearlinux.iot-lts2018-sos.4.19.78-98 pci_devices_ignore=(0:18:1) console=tty0 console=ttyS0 root=PARTUUID=<UUID of rootfs partition> rw rootwait ignore_loglevel no_timer_check consoleblank=0 i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0 hvlog=2M@0x1FE00000
        }

     .. note::
        Adjust this to use your partition UUID (``PARTUUID``) for the
        ``root=`` parameter (or use the device node directly).

        Adjust the kernel name if you used a different RPM file as the
        source of your Service VM kernel.

        The command line for the kernel in ``/etc/grub.d/40_custom``
        must be entered as a single line, not as multiple lines.
        Otherwise, the kernel will fail to boot.

   * Modify the ``/etc/default/grub`` file to make the Grub menu visible
     when booting and to make it load the Service VM kernel by default.
     Modify the lines shown below:

     .. code-block:: none

        GRUB_DEFAULT=ubuntu-service-vm
        #GRUB_TIMEOUT_STYLE=hidden
        GRUB_TIMEOUT=3

   * Update Grub on your system:

     .. code-block:: none

        sudo update-grub

#. Reboot the system.

   You should see the Grub menu with the new ACRN ``ubuntu-service-vm``
   entry. Select it and proceed to booting the platform. The system will
   start the Ubuntu Desktop and you can log in as before.

.. note::
   If you don't see the Grub menu after rebooting the system (and you are
   not booting into the ACRN hypervisor), enter the EFI firmware at boot
   (using :kbd:`F10`) and manually select ``ACRN Hypervisor``.

   If you see a black screen on the first reboot after installing the
   ACRN hypervisor, wait a few moments and the Ubuntu desktop will
   display.
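The ``<UUID of rootfs partition>`` placeholder in the ``40_custom`` menu entry must be filled in with the real PARTUUID of your root partition (for instance, from ``blkid -s PARTUUID -o value /dev/sda2``). A minimal sketch of that substitution, run against a scratch copy of the entry with a made-up UUID:

```shell
# Scratch copy of the kernel line with the placeholder still present.
cat > /tmp/40_custom_snippet <<'EOF'
linux /boot/acrn/org.clearlinux.iot-lts2018-sos.4.19.78-98 root=PARTUUID=<UUID of rootfs partition> rw rootwait
EOF

# Made-up PARTUUID standing in for the output of:
#   blkid -s PARTUUID -o value /dev/sda2
partuuid="12345678-9abc-def0-1234-56789abcdef0"

sed -i "s|<UUID of rootfs partition>|$partuuid|" /tmp/40_custom_snippet

grep "PARTUUID=$partuuid" /tmp/40_custom_snippet
```

Using ``|`` as the ``sed`` delimiter avoids having to escape the ``/`` characters that appear in kernel paths.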
To verify that the hypervisor is effectively running, check ``dmesg``. The
typical output of a successful installation resembles the following:

.. code-block:: none

   dmesg | grep ACRN
   [    0.000000] Hypervisor detected: ACRN
   [    0.862942] ACRN HVLog: acrn_hvlog_init

.. _prepare-UOS:

Prepare the User VM
*******************

For the User VM, we use the same `Clear Linux OS`_ release version as for
the Service VM.

* Download the Clear Linux OS image from `<https://download.clearlinux.org>`_:

  .. code-block:: none

     cd ~
     wget https://download.clearlinux.org/releases/31670/clear/clear-31670-kvm.img.xz
     unxz clear-31670-kvm.img.xz

* Download the "linux-iot-lts2018" kernel:

  .. code-block:: none

     sudo mkdir ~/uos-kernel-build
     cd ~/uos-kernel-build
     wget https://download.clearlinux.org/releases/31670/clear/x86_64/os/Packages/linux-iot-lts2018-4.19.78-98.x86_64.rpm
     rpm2cpio linux-iot-lts2018-4.19.78-98.x86_64.rpm | cpio -idmv

* Update the User VM kernel modules:

  .. code-block:: none

     sudo losetup -f -P --show ~/clear-31670-kvm.img
     sudo mount /dev/loop0p3 /mnt
     sudo cp -r ~/uos-kernel-build/usr/lib/modules/4.19.78-98.iot-lts2018/ /mnt/lib/modules/
     sudo cp -r ~/uos-kernel-build/usr/lib/kernel /lib/modules/
     sudo umount /mnt
     sync

  If you encounter a permission issue, follow these steps:

  .. code-block:: none

     sudo chmod 777 /dev/acrn_vhm

* Build and install the ``iasl`` tool (and the packages it needs):

  .. code-block:: none

     sudo apt update
     sudo apt install m4 bison flex zlib1g-dev
     cd ~
     wget https://acpica.org/sites/acpica/files/acpica-unix-20191018.tar.gz
     tar zxvf acpica-unix-20191018.tar.gz
     cd acpica-unix-20191018
     make clean && make iasl
     sudo cp ./generate/unix/bin/iasl /usr/sbin/

* Adjust the ``launch_uos.sh`` script:

  You need to adjust the ``/usr/share/acrn/samples/nuc/launch_uos.sh``
  script to match your installation. Modify the following line:

  .. code-block:: none

     -s 3,virtio-blk,/root/clear-31670-kvm.img \

  .. note::
     The User VM image can be stored in a directory other than ``~/``.
     Remember to also modify the image path in ``launch_uos.sh``.

Start the User VM
*****************

You are now all set to start the User VM:

.. code-block:: none

   sudo /usr/share/acrn/samples/nuc/launch_uos.sh

**Congratulations**, you are now watching the User VM booting!

.. _enable-network-sharing-user-vm:

Enable network sharing
**********************

After booting the Service VM and User VM, network sharing must be enabled
to give the User VM network access through the Service VM, by enabling the
TAP device and networking bridge in the Service VM. The following script
example shows how to set this up (verified with Ubuntu 16.04 and 18.04 as
the Service VM).

.. code-block:: none

   #!/bin/bash
   # Set up the bridge for the User VM network
   br=$(brctl show | grep acrn-br0)
   br=${br:0:8}
   ip tuntap add dev tap0 mode tap

   # If the bridge does not exist yet, create it
   if [ "$br"x != "acrn-br0"x ]; then
     brctl addbr acrn-br0
     brctl addif acrn-br0 enp3s0
     ifconfig enp3s0 0
     dhclient acrn-br0
   fi

   # Add the TAP device to the bridge
   brctl addif acrn-br0 tap0
   ip link set dev tap0 up

.. note::
   The Service VM network interface is called ``enp3s0`` in the script
   above. Adjust the script if your system uses a different name (e.g.,
   ``eno1``).
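The bridge-existence test in the script above just compares the first characters of the ``brctl show`` output against the bridge name. A standalone sketch of that check using canned output, so no ``brctl`` (or root privileges) is needed to try it:

```shell
# Canned line standing in for: brctl show | grep acrn-br0
sample="acrn-br0		8000.001320fc32bf	no		enp3s0"

# "acrn-br0" is 8 characters, so take the first 8 of the matched line.
br=${sample:0:8}

if [ "$br"x != "acrn-br0"x ]; then
    echo "bridge missing: would create acrn-br0"
else
    echo "bridge exists: skip creation"
fi
```

The trailing ``x`` in the comparison is a traditional shell idiom that keeps the test well-formed even if the variable expands to an empty string.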
Enable the USB keyboard and mouse
*********************************

Refer to :ref:`kbl-nuc-sdc` for instructions on enabling the USB keyboard
and mouse for the User VM.


.. _Clear Linux OS: https://clearlinux.org
@ -92,7 +92,10 @@ Steps for Using VxWorks as User VM

You now have a virtual disk image with bootable VxWorks in ``VxWorks.img``.

#. Follow :ref:`kbl-nuc-sdc` to boot the ACRN Service VM.
#. Follow XXX to boot the ACRN Service VM.

   .. important:: need instructions from deleted document (using SDC
      mode on the NUC)

#. Boot VxWorks as the User VM.
@ -91,9 +91,12 @@ Steps for Using Zephyr as User VM

You now have a virtual disk image with a bootable Zephyr in ``zephyr.img``.
If the Zephyr build system is not the ACRN Service VM, then you will need to
transfer this image to the ACRN Service VM (via, e.g., a USB stick or network).

#. Follow :ref:`kbl-nuc-sdc` to boot "The ACRN Service OS" based on Clear Linux OS 28620
#. Follow XXX to boot "The ACRN Service OS" based on Clear Linux OS 28620
   (ACRN tag: acrn-2019w14.3-140000p)

   .. important:: need to remove reference to Clear Linux and reference
      to deleted document (use SDC mode on the NUC)

#. Boot Zephyr as the User VM.

   On the ACRN Service VM, prepare a directory and populate it with Zephyr files.
@ -56,8 +56,7 @@ container::

.. note:: You can download an `example launch_uos.sh script
   <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/devicemodel/samples/nuc/launch_uos.sh>`_
   that supports the ``-C`` (``run_container`` function) option. You may refer to :ref:`acrn-dm_qos`
   for more details about this option.
   that supports the ``-C`` (``run_container`` function) option.

Note that the launch script must only launch one UOS instance.
The VM name is important. ``acrnctl`` searches VMs by their