diff --git a/doc/develop.rst b/doc/develop.rst
index 757390954..5298fd3be 100644
--- a/doc/develop.rst
+++ b/doc/develop.rst
@@ -3,6 +3,44 @@
Advanced Guides
###############
+Advanced Scenario Tutorials
+*********************************
+
+.. rst-class:: rst-columns2
+
+.. toctree::
+ :maxdepth: 1
+
+ tutorials/using_hybrid_mode_on_nuc
+ tutorials/using_partition_mode_on_nuc
+
+Service VM Tutorials
+********************
+
+.. rst-class:: rst-columns2
+
+.. toctree::
+ :maxdepth: 1
+
+ tutorials/running_deb_as_serv_vm
+ tutorials/using_yp
+
+.. _develop_acrn_user_vm:
+
+User VM Tutorials
+*****************
+
+.. rst-class:: rst-columns2
+
+.. toctree::
+ :maxdepth: 1
+
+ tutorials/using_windows_as_uos
+ tutorials/running_ubun_as_user_vm
+ tutorials/running_deb_as_user_vm
+ tutorials/using_xenomai_as_uos
+ tutorials/using_vxworks_as_uos
+ tutorials/using_zephyr_as_uos
Configuration and Tools
***********************
@@ -24,33 +62,7 @@ Configuration and Tools
misc/debug_tools/**
misc/services/acrn_manager/**
-Service VM Tutorials
-********************
-
-.. rst-class:: rst-columns2
-
-.. toctree::
- :maxdepth: 1
-
- tutorials/running_deb_as_serv_vm
- tutorials/using_yp
-
-User VM Tutorials
-*****************
-
-.. rst-class:: rst-columns2
-
-.. toctree::
- :maxdepth: 1
-
- tutorials/using_windows_as_uos
- tutorials/running_ubun_as_user_vm
- tutorials/running_deb_as_user_vm
- tutorials/using_xenomai_as_uos
- tutorials/using_vxworks_as_uos
- tutorials/using_zephyr_as_uos
-
-Enable ACRN Features
+Advanced Features
********************
.. rst-class:: rst-columns2
diff --git a/doc/getting-started/building-from-source.rst b/doc/getting-started/building-from-source.rst
deleted file mode 100644
index fbe84247e..000000000
--- a/doc/getting-started/building-from-source.rst
+++ /dev/null
@@ -1,266 +0,0 @@
-.. _getting-started-building:
-
-Build ACRN From Source
-######################
-
-Following a general embedded-system programming model, the ACRN
-hypervisor is designed to be customized at build time per hardware
-platform and per usage scenario, rather than one binary for all
-scenarios.
-
-The hypervisor binary is generated based on configuration settings in XML
-files. Instructions about customizing these settings can be found in
-:ref:`getting-started-hypervisor-configuration`.
-
-One binary for all platforms and all usage scenarios is not
-supported. Dynamic configuration parsing is not used in
-the ACRN hypervisor for these reasons:
-
-- **Maintain functional safety requirements.** Implementing dynamic parsing
- introduces dynamic objects, which violate functional safety requirements.
-
-- **Reduce complexity.** ACRN is a lightweight reference hypervisor, built for
- embedded IoT. As new platforms for embedded systems are rapidly introduced,
- support for one binary could require more and more complexity in the
- hypervisor, which is something we strive to avoid.
-
-- **Maintain small footprint.** Implementing dynamic parsing introduces
- hundreds or thousands of lines of code. Avoiding dynamic parsing
- helps keep the hypervisor's Lines of Code (LOC) in a desirable range (less
- than 40K).
-
-- **Improve boot time.** Dynamic parsing at runtime increases the boot
- time. Using a build-time configuration and not dynamic parsing
- helps improve the boot time of the hypervisor.
-
-
-Build the ACRN hypervisor, device model, and tools from source by following
-these steps.
-
-.. contents::
- :local:
- :depth: 1
-
-.. _install-build-tools-dependencies:
-
-.. rst-class:: numbered-step
-
-Install Build Tools and Dependencies
-************************************
-
-ACRN development is supported on popular Linux distributions, each with their
-own way to install development tools. This user guide covers the steps to
-configure and build ACRN natively on **Ubuntu 18.04 or newer**.
-
-The following commands install the necessary tools for configuring and building
-ACRN.
-
- .. code-block:: none
-
- sudo apt install gcc \
- git \
- make \
- libssl-dev \
- libpciaccess-dev \
- uuid-dev \
- libsystemd-dev \
- libevent-dev \
- libxml2-dev \
- libxml2-utils \
- libusb-1.0-0-dev \
- python3 \
- python3-pip \
- libblkid-dev \
- e2fslibs-dev \
- pkg-config \
- libnuma-dev \
- liblz4-tool \
- flex \
- bison \
- xsltproc \
- clang-format
-
- sudo pip3 install lxml xmlschema defusedxml
-
- wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz
- tar zxvf acpica-unix-20210105.tar.gz
- cd acpica-unix-20210105
- make clean && make iasl
- sudo cp ./generate/unix/bin/iasl /usr/sbin/
-
-.. rst-class:: numbered-step
-
-Get the ACRN Hypervisor Source Code
-***********************************
-
-The `ACRN hypervisor `_
-repository contains four main components:
-
-1. The ACRN hypervisor code is in the ``hypervisor`` directory.
-#. The ACRN device model code is in the ``devicemodel`` directory.
-#. The ACRN debug tools source code is in the ``misc/debug_tools`` directory.
-#. The ACRN online services source code is in the ``misc/services`` directory.
-
-Enter the following to get the ACRN hypervisor source code:
-
-.. code-block:: none
-
- git clone https://github.com/projectacrn/acrn-hypervisor
-
-
-.. _build-with-acrn-scenario:
-
-.. rst-class:: numbered-step
-
-Build With the ACRN Scenario
-****************************
-
-Currently, the ACRN hypervisor defines these typical usage scenarios:
-
-SDC:
- The SDC (Software Defined Cockpit) scenario defines a simple
- automotive use case that includes one pre-launched Service VM and one
- post-launched User VM.
-
-LOGICAL_PARTITION:
- This scenario defines two pre-launched VMs.
-
-INDUSTRY:
- This scenario is an example for industrial usage with up to eight VMs:
- one pre-launched Service VM, five post-launched Standard VMs (for Human
- interaction etc.), one post-launched RT VMs (for real-time control),
- and one Kata Container VM.
-
-HYBRID:
- This scenario defines a hybrid use case with three VMs: one
- pre-launched Safety VM, one pre-launched Service VM, and one post-launched
- Standard VM.
-
-HYBRID_RT:
- This scenario defines a hybrid use case with three VMs: one
- pre-launched RTVM, one pre-launched Service VM, and one post-launched
- Standard VM.
-
-XML configuration files for these scenarios on supported boards are available
-under the ``misc/config_tools/data`` directory.
-
-Assuming that you are at the top level of the ``acrn-hypervisor`` directory, perform
-the following to build the hypervisor, device model, and tools:
-
-.. note::
- The debug version is built by default. To build a release version,
- build with ``RELEASE=y`` explicitly, regardless of whether a previous
- build exists.
-
-* Build the debug version of ``INDUSTRY`` scenario on the ``nuc7i7dnb``:
-
- .. code-block:: none
-
- make BOARD=nuc7i7dnb SCENARIO=industry
-
-* Build the release version of ``HYBRID`` scenario on the ``whl-ipc-i5``:
-
- .. code-block:: none
-
- make BOARD=whl-ipc-i5 SCENARIO=hybrid RELEASE=y
-
-* Build the release version of ``HYBRID_RT`` scenario on the ``whl-ipc-i7``
- (hypervisor only):
-
- .. code-block:: none
-
- make BOARD=whl-ipc-i7 SCENARIO=hybrid_rt RELEASE=y hypervisor
-
-* Build the release version of the device model and tools:
-
- .. code-block:: none
-
- make RELEASE=y devicemodel tools
-
-You can also build ACRN with your customized scenario:
-
-* Build with your own scenario configuration on the ``nuc11tnbi5``, assuming the
- scenario is defined in ``/path/to/scenario.xml``:
-
- .. code-block:: none
-
- make BOARD=nuc11tnbi5 SCENARIO=/path/to/scenario.xml
-
-* Build with your own board and scenario configuration, assuming the board and
- scenario XML files are ``/path/to/board.xml`` and ``/path/to/scenario.xml``:
-
- .. code-block:: none
-
- make BOARD=/path/to/board.xml SCENARIO=/path/to/scenario.xml
-
-.. note::
- ACRN uses XML files to summarize board characteristics and scenario
- settings. The ``BOARD`` and ``SCENARIO`` variables accept board/scenario
- names as well as paths to XML files. When board/scenario names are given, the
- build system searches for XML files with the same names under
- ``misc/config_tools/data/``. When paths (absolute or relative) to the XML
- files are given, the build system uses the files pointed at. If relative
- paths are used, they are considered relative to the current working
- directory.
-
-See the :ref:`hardware` document for information about platform needs for each
-scenario. For more instructions to customize scenarios, see
-:ref:`getting-started-hypervisor-configuration` and
-:ref:`acrn_configuration_tool`.
-
-The build results are found in the ``build`` directory. You can specify
-a different build directory by setting the ``O`` ``make`` parameter,
-for example: ``make O=build-nuc``.
-
-To query the board, scenario, and build type of an existing build, the
-``hvshowconfig`` target will help.
-
- .. code-block:: none
-
- $ make BOARD=tgl-rvp SCENARIO=hybrid_rt hypervisor
- ...
- $ make hvshowconfig
- Build directory: /path/to/acrn-hypervisor/build/hypervisor
- This build directory is configured with the settings below.
- - BOARD = tgl-rvp
- - SCENARIO = hybrid_rt
- - RELEASE = n
-
-.. _getting-started-hypervisor-configuration:
-
-.. rst-class:: numbered-step
-
-Modify the Hypervisor Configuration
-***********************************
-
-The ACRN hypervisor is built with scenario encoded in an XML file (referred to
-as the scenario XML hereinafter). The scenario XML of a build can be found at
-``/hypervisor/.scenario.xml``, where ```` is the name of the build
-directory. You can make further changes to this file to adjust to your specific
-requirements. Another ``make`` will rebuild the hypervisor using the updated
-scenario XML.
-
-The following commands show how to customize manually the scenario XML based on
-the predefined ``INDUSTRY`` scenario for ``nuc7i7dnb`` and rebuild the
-hypervisor. The ``hvdefconfig`` target generates the configuration files without
-building the hypervisor, allowing users to tweak the configurations.
-
-.. code-block:: none
-
- make BOARD=nuc7i7dnb SCENARIO=industry hvdefconfig
- vim build/hypervisor/.scenario.xml
- #(Modify the XML file per your needs)
- make
-
-.. note::
- A hypervisor build remembers the board and scenario previously
- configured. Thus, there is no need to duplicate BOARD and SCENARIO in the
- second ``make`` above.
-
-While the scenario XML files can be changed manually, we recommend you use the
-ACRN web-based configuration app that provides valid options and descriptions
-of the configuration entries. Refer to :ref:`acrn_config_tool_ui` for more
-instructions.
-
-Descriptions of each configuration entry in scenario XML files are also
-available at :ref:`scenario-config-options`.
diff --git a/doc/getting-started/getting-started.rst b/doc/getting-started/getting-started.rst
index 4c7e9c8eb..715ffbb75 100644
--- a/doc/getting-started/getting-started.rst
+++ b/doc/getting-started/getting-started.rst
@@ -1,648 +1,765 @@
.. _gsg:
.. _rt_industry_ubuntu_setup:
+.. _getting-started-building:
Getting Started Guide
#####################
-.. contents::
- :local:
- :depth: 1
+This guide will help you get started with ACRN. We'll show how to prepare a
+build environment on your development computer. Then we'll walk through the
+steps to set up a simple ACRN configuration on a target system. The
+configuration is based on the ACRN predefined **industry** scenario and consists
+of an ACRN hypervisor, Service VM, and one User VM, as illustrated in this
+figure:
-Introduction
-************
+.. image:: ./images/gsg_scenario.png
+ :scale: 80%
-This document describes the various steps to set up a system based on the following components:
-
-- ACRN: Industry scenario
-- Service VM OS: Ubuntu (running off the NVMe storage device)
-- Real-Time VM (RTVM) OS: Ubuntu modified to use a PREEMPT-RT kernel (running off the
- SATA storage device)
-- Post-launched User VM OS: Windows
-
-Verified Version
-****************
-
-- Ubuntu version: **18.04**
-- GCC version: **7.5**
-- ACRN-hypervisor branch: **release_2.5 (v2.5)**
-- ACRN-Kernel (Service VM kernel): **release_2.5 (v2.5)**
-- RT kernel for Ubuntu User OS: **4.19/preempt-rt (4.19.72-rt25)**
-- HW: Intel NUC 11 Pro Kit NUC11TNHi5 (`NUC11TNHi5
- `_)
-
-.. note:: This NUC is based on the
- `NUC11TNBi5 board `_.
- The ``BOARD`` parameter that is used to build ACRN for this NUC is therefore ``nuc11tnbi5``.
+Throughout this guide, you will be introduced to some of the tools, processes,
+and components of the ACRN project. Let's get started.
Prerequisites
-*************
+**************
-- VMX/VT-D are enabled and secure boot is disabled in the BIOS
-- Ubuntu 18.04 boot-able USB disk
-- Monitors with HDMI interface (DP interface is optional)
-- USB keyboard and mouse
-- Ethernet cables
+You will need two machines: a development computer and a target system. The
+development computer is where you configure and build ACRN and your application.
+The target system is where you deploy and run ACRN and your application.
+
+.. image:: ./images/gsg_host_target.png
+ :scale: 60%
+
+Before you begin, make sure your machines have the following prerequisites:
+
+**Development computer**:
+
+* Hardware specifications
+
+ - A PC with Internet access
+
+* Software specifications
+
+ - Ubuntu Desktop 18.04 or newer
+ (ACRN development is not supported on Windows.)
+
+**Target system**:
+
+* Hardware specifications
+
+ - Target board (see :ref:`hardware_tested`)
+ - USB keyboard and mouse
+ - Monitor
+ - Ethernet cable and Internet access
+ - Serial-to-USB cable to view the ACRN and VM console (optional)
+ - Ubuntu 18.04 bootable USB disk (see the Ubuntu documentation for
+ instructions on creating one)
+ - A second USB disk with minimum 1GB capacity to copy files between the
+ development computer and target system
+ - Local storage device (NVMe or SATA drive, for example)
.. rst-class:: numbered-step
-Hardware Connection
+Set Up the Hardware
*******************
-Connect the NUC11TNHi5 with the appropriate external devices.
+To set up the hardware environment:
-#. Connect the NUC11TNHi5 NUC to a monitor via an HDMI cable.
-#. Connect the mouse, keyboard, Ethernet cable, and power supply cable to
- the NUC11TNHi5 board.
-#. Insert the Ubuntu 18.04 USB boot disk into the USB port.
+#. Connect the mouse, keyboard, monitor, and power supply cable to the target
+ system.
- .. figure:: images/rt-ind-ubun-hw-1.png
- :scale: 15
+#. Connect the target system to the LAN with the Ethernet cable.
- .. figure:: images/rt-ind-ubun-hw-2.png
- :scale: 15
+#. (Optional) Connect the serial cable between the target and development
+ computer to view the ACRN and VM console (for an example, see :ref:`connect_serial_port`).
+
+Example of a target system with cables connected:
+
+.. image:: ./images/gsg_nuc.png
+ :scale: 25%
.. rst-class:: numbered-step
-
-.. _install-ubuntu-rtvm-sata:
-
-Install the Ubuntu User VM (RTVM) on the SATA Disk
-**************************************************
-
-.. note:: The NUC11TNHi5 NUC contains both an NVMe and SATA disk.
- Before you install the Ubuntu User VM on the SATA disk, either
- remove the NVMe disk or delete its blocks.
-
-#. Insert the Ubuntu USB boot disk into the NUC11TNHi5 machine.
-#. Power on the machine, then press F10 to select the USB disk as the boot
- device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
- label depends on the brand/make of the USB drive.
-#. Install the Ubuntu OS.
-#. Select **Something else** to create the partition.
-
- .. figure:: images/native-ubuntu-on-SATA-1.png
-
-#. Configure the ``/dev/sda`` partition. Refer to the diagram below:
-
- .. figure:: images/native-ubuntu-on-SATA-3.png
-
- a. Select the ``/dev/sda`` partition, not ``/dev/nvme0p1``.
- b. Select ``/dev/sda`` **ATA KINGSTON SA400S3** as the device for the
- bootloader installation. Note that the label depends on the SATA disk used.
-
-#. Complete the Ubuntu installation on ``/dev/sda``.
-
-This Ubuntu installation will be modified later (see `Build and Install the RT kernel for the Ubuntu User VM`_)
-to turn it into a real-time User VM (RTVM).
-
-.. rst-class:: numbered-step
-
-.. _install-ubuntu-Service VM-NVMe:
-
-Install the Ubuntu Service VM on the NVMe Disk
-**********************************************
-
-.. note:: Before you install the Ubuntu Service VM on the NVMe disk, please
- remove the SATA disk.
-
-#. Insert the Ubuntu USB boot disk into the NUC11TNHi5 machine.
-#. Power on the machine, then press F10 to select the USB disk as the boot
- device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
- label depends on the brand/make of the USB drive.
-#. Install the Ubuntu OS.
-#. Select **Something else** to create the partition.
-
- .. figure:: images/native-ubuntu-on-NVME-1.png
-
-#. Configure the ``/dev/nvme0n1`` partition. Refer to the diagram below:
-
- .. figure:: images/native-ubuntu-on-NVME-3.png
-
- a. Select the ``/dev/nvme0n1`` partition, not ``/dev/sda``.
- b. Select ``/dev/nvme0n1`` **Lenovo SL700 PCI-E M.2 256G** as the device for the
- bootloader installation. Note that the label depends on the NVMe disk used.
-
-#. Complete the Ubuntu installation and reboot the system.
-
- .. note:: Set ``acrn`` as the username for the Ubuntu Service VM.
-
-
-.. rst-class:: numbered-step
-
-.. _build-and-install-acrn-on-ubuntu:
-
-Build and Install ACRN on Ubuntu
+Prepare the Development Computer
********************************
-Pre-Steps
-=========
-
-#. Set the network configuration, proxy, etc.
-#. Update Ubuntu:
-
- .. code-block:: none
-
- $ sudo -E apt update
-
-#. Create a work folder:
-
- .. code-block:: none
-
- $ mkdir /home/acrn/work
-
-Build the ACRN Hypervisor on Ubuntu
-===================================
-
-#. Install the necessary libraries:
-
- .. code-block:: none
-
- $ sudo apt install gcc \
- git \
- make \
- libssl-dev \
- libpciaccess-dev \
- uuid-dev \
- libsystemd-dev \
- libevent-dev \
- libxml2-dev \
- libxml2-utils \
- libusb-1.0-0-dev \
- python3 \
- python3-pip \
- libblkid-dev \
- e2fslibs-dev \
- pkg-config \
- libnuma-dev \
- liblz4-tool \
- flex \
- bison \
- xsltproc \
- clang-format
-
- $ sudo pip3 install lxml xmlschema defusedxml
-
-#. Starting with the ACRN v2.2 release, we use the ``iasl`` tool to
- compile an offline ACPI binary for pre-launched VMs while building ACRN,
- so we need to install the ``iasl`` tool in the ACRN build environment.
-
- Follow these steps to install ``iasl`` (and its dependencies) and
- then update the ``iasl`` binary with a newer version not available
- in Ubuntu 18.04:
-
- .. code-block:: none
-
- $ cd /home/acrn/work
- $ wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz
- $ tar zxvf acpica-unix-20210105.tar.gz
- $ cd acpica-unix-20210105
- $ make clean && make iasl
- $ sudo cp ./generate/unix/bin/iasl /usr/sbin/
-
-#. Get the ACRN source code:
-
- .. code-block:: none
-
- $ cd /home/acrn/work
- $ git clone https://github.com/projectacrn/acrn-hypervisor
- $ cd acrn-hypervisor
-
-#. Switch to the v2.5 version:
-
- .. code-block:: none
-
- $ git checkout v2.5
-
-#. Build ACRN:
-
- .. code-block:: none
-
- $ make BOARD=nuc11tnbi5 SCENARIO=industry
- $ sudo make install
- $ sudo mkdir -p /boot/acrn
- $ sudo cp build/hypervisor/acrn.bin /boot/acrn/
-
-.. _build-and-install-ACRN-kernel:
-
-Build and Install the ACRN Kernel
-=================================
-
-#. Build the Service VM kernel from the ACRN repo:
-
- .. code-block:: none
-
- $ cd /home/acrn/work/
- $ git clone https://github.com/projectacrn/acrn-kernel
- $ cd acrn-kernel
-
-#. Switch to the 5.4 kernel:
-
- .. code-block:: none
-
- $ git checkout v2.5
- $ cp kernel_config_uefi_sos .config
- $ make olddefconfig
- $ make all
-
-Install the Service VM Kernel and Modules
-=========================================
-
-.. code-block:: none
-
- $ sudo make modules_install
- $ sudo cp arch/x86/boot/bzImage /boot/bzImage
-
-.. _gsg_update_grub:
-
-Update Grub for the Ubuntu Service VM
-=====================================
-
-#. Update the ``/etc/grub.d/40_custom`` file as shown below.
-
- .. note::
- Enter the command line for the kernel in ``/etc/grub.d/40_custom`` as
- a single line and not as multiple lines. Otherwise, the kernel will
- fail to boot.
-
- .. code-block:: none
-
- menuentry "ACRN Multiboot Ubuntu Service VM" --id ubuntu-service-vm {
- load_video
- insmod gzio
- insmod part_gpt
- insmod ext2
-
- search --no-floppy --fs-uuid --set 9bd58889-add7-410c-bdb7-1fbc2af9b0e1
- echo 'loading ACRN...'
- multiboot2 /boot/acrn/acrn.bin root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"
- module2 /boot/bzImage Linux_bzImage
- }
-
- .. note::
- Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter)
- (or use the device node directly) of the root partition (e.g.
- ``/dev/nvme0n1p2``). Hint: use ``sudo blkid ``.
-
- Update the kernel name if you used a different name as the source
- for your Service VM kernel.
-
- Add the ``menuentry`` at the bottom of :file:`40_custom`, keep the
- ``exec tail`` line at the top intact.
-
-#. Modify the ``/etc/default/grub`` file to make the Grub menu visible when
- booting and make it load the Service VM kernel by default. Modify the
- lines shown below:
-
- .. code-block:: none
-
- GRUB_DEFAULT=ubuntu-service-vm
- #GRUB_TIMEOUT_STYLE=hidden
- GRUB_TIMEOUT=5
- GRUB_CMDLINE_LINUX="text"
-
-#. Update Grub on your system:
-
- .. code-block:: none
-
- $ sudo update-grub
-
-Enable Network Sharing for the User VM
-======================================
-
-In the Ubuntu Service VM, enable network sharing for the User VM:
-
-.. code-block:: none
-
- $ sudo systemctl enable systemd-networkd
- $ sudo systemctl start systemd-networkd
-
-
-Reboot the System
-=================
-
-Reboot the system. You should see the Grub menu with the new **ACRN
-ubuntu-service-vm** entry. Select it and proceed to booting the platform. The
-system will start Ubuntu and you can now log in (as before).
-
-To verify that the hypervisor is effectively running, check ``dmesg``. The
-typical output of a successful installation resembles the following:
-
-.. code-block:: none
-
- $ dmesg | grep ACRN
- [ 0.000000] Hypervisor detected: ACRN
- [ 0.862942] ACRN HVLog: acrn_hvlog_init
-
-
-Additional Settings in the Service VM
-=====================================
-
-Build and Install the RT Kernel for the Ubuntu User VM
-------------------------------------------------------
-
-Follow these instructions to build the RT kernel.
-
-#. Clone the RT kernel source code:
-
- .. note::
- This guide assumes you are doing this within the Service VM. This
- **acrn-kernel** repository was already cloned under ``/home/acrn/work``
- earlier on so you can just ``cd`` into it and perform the ``git checkout``
- directly.
-
- .. code-block:: none
-
- $ git clone https://github.com/projectacrn/acrn-kernel
- $ cd acrn-kernel
- $ git checkout origin/4.19/preempt-rt
- $ make mrproper
-
- .. note::
- The ``make mrproper`` is to make sure there is no ``.config`` file
- left from any previous build (e.g. the one for the Service VM kernel).
-
-#. Build the kernel:
-
- .. code-block:: none
-
- $ cp x86-64_defconfig .config
- $ make olddefconfig
- $ make targz-pkg
-
-#. Copy the kernel and modules:
-
- .. code-block:: none
-
- $ sudo mount /dev/sda2 /mnt
- $ sudo cp arch/x86/boot/bzImage /mnt/boot/
- $ sudo tar -zxvf linux-4.19.72-rt25-x86.tar.gz -C /mnt/
- $ sudo cd ~ && sudo umount /mnt && sync
+To set up the ACRN build environment on the development computer:
+
+#. On the development computer, run the following command to confirm that Ubuntu
+ Desktop 18.04 or newer is running:
+
+ .. code-block:: bash
+
+ cat /etc/os-release
+
+ If you have an older version, see the Ubuntu documentation for how to
+ install a newer OS on the development computer.
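+
+ For example, on an Ubuntu 18.04.5 system the output begins like this (your
+ exact version strings may differ); ``VERSION_ID`` is the value to check:
+
+ .. code-block:: console
+
+    NAME="Ubuntu"
+    VERSION="18.04.5 LTS (Bionic Beaver)"
+    ID=ubuntu
+    ID_LIKE=debian
+    PRETTY_NAME="Ubuntu 18.04.5 LTS"
+    VERSION_ID="18.04"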
+
+#. Update Ubuntu with any outstanding patches, and install the necessary ACRN
+ build tools and dependencies:
+
+ .. code-block:: bash
+
+ sudo apt update
+ sudo apt upgrade -y
+ sudo apt install gcc \
+ git \
+ make \
+ libssl-dev \
+ libpciaccess-dev \
+ uuid-dev \
+ libsystemd-dev \
+ libevent-dev \
+ libxml2-dev \
+ libxml2-utils \
+ libusb-1.0-0-dev \
+ python3 \
+ python3-pip \
+ libblkid-dev \
+ e2fslibs-dev \
+ pkg-config \
+ libnuma-dev \
+ liblz4-tool \
+ flex \
+ bison \
+ xsltproc \
+ clang-format
+ sudo pip3 install lxml xmlschema
+
+#. Install the iASL compiler/disassembler. ACPI provides advanced power
+ management, device discovery, and configuration, and the ACRN build uses
+ ``iasl`` to compile ACPI tables:
+
+ .. code-block:: bash
+
+ mkdir ~/acrn-work
+ cd ~/acrn-work
+ wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz
+ tar zxvf acpica-unix-20210105.tar.gz
+ cd acpica-unix-20210105
+ make clean && make iasl
+ sudo cp ./generate/unix/bin/iasl /usr/sbin
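+
+ As an optional sanity check (a small sketch, assuming ``/usr/sbin`` is on
+ your ``PATH``), confirm the shell now finds the freshly copied binary;
+ ``iasl`` prints a version banner (showing 20210105) along with its usage
+ help when run with no arguments:
+
+ .. code-block:: bash
+
+    # Show where iasl resolved and the first lines of its banner
+    which iasl
+    iasl 2>&1 | head -n 3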
+
+#. Get the ACRN hypervisor and kernel source code:
+
+ .. code-block:: bash
+
+ cd ~/acrn-work
+ git clone https://github.com/projectacrn/acrn-hypervisor
+ cd acrn-hypervisor
+ git checkout release_2.6
+
+ cd ..
+ git clone https://github.com/projectacrn/acrn-kernel
+ cd acrn-kernel
+ git checkout release_2.6
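+
+ Optionally, confirm both working trees are on the expected ``release_2.6``
+ branch (a minimal check; ``git status`` prints the current branch on its
+ first line):
+
+ .. code-block:: bash
+
+    # Both commands should report the release_2.6 branch
+    git -C ~/acrn-work/acrn-hypervisor status --branch --short | head -n 1
+    git -C ~/acrn-work/acrn-kernel status --branch --short | head -n 1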
.. rst-class:: numbered-step
-Launch the RTVM
-***************
+Prepare the Target and Generate a Board Configuration File
+***************************************************************
-Grub in the Ubuntu User VM (RTVM) needs to be configured to use the new RT
-kernel that was just built and installed on the rootfs. Follow these steps to
-perform this operation.
+A **board configuration file** is an XML file that stores hardware-specific
+information extracted from the target system. The file is used to configure
+the ACRN hypervisor, because each hypervisor instance is specific to your
+target hardware.
-Update the Grub File
-====================
-
-#. Reboot into the Ubuntu User VM located on the SATA drive and log on.
-
-#. Update the ``/etc/grub.d/40_custom`` file as shown below.
-
- .. note::
- Enter the command line for the kernel in ``/etc/grub.d/40_custom`` as
- a single line and not as multiple lines. Otherwise, the kernel will
- fail to boot.
-
- .. code-block:: none
-
- menuentry "ACRN Ubuntu User VM" --id ubuntu-user-vm {
- load_video
- insmod gzio
- insmod part_gpt
- insmod ext2
-
- search --no-floppy --fs-uuid --set b2ae4879-c0b6-4144-9d28-d916b578f2eb
- echo 'loading ACRN...'
-
- linux /boot/bzImage root=PARTUUID= rw rootwait nohpet console=hvc0 console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M consoleblank=0 clocksource=tsc tsc=reliable x2apic_phys processor.max_cstate=0 intel_idle.max_cstate=0 intel_pstate=disable mce=ignore_ce audit=0 isolcpus=nohz,domain,1 nohz_full=1 rcu_nocbs=1 nosoftlockup idle=poll irqaffinity=0
- }
-
- .. note::
- Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter)
- (or use the device node directly) of the root partition (e.g. ``/dev/sda2).
- Hint: use ``sudo blkid /dev/sda*``.
-
- Update the kernel name if you used a different name as the source
- for your Service VM kernel.
-
- Add the ``menuentry`` at the bottom of :file:`40_custom`, keep the
- ``exec tail`` line at the top intact.
-
-#. Modify the ``/etc/default/grub`` file to make the grub menu visible when
- booting and make it load the RT kernel by default. Modify the
- lines shown below:
-
- .. code-block:: none
-
- GRUB_DEFAULT=ubuntu-user-vm
- #GRUB_TIMEOUT_STYLE=hidden
- GRUB_TIMEOUT=5
-
-#. Update Grub on your system:
-
- .. code-block:: none
-
- $ sudo update-grub
-
-#. Reboot into the Ubuntu Service VM
-
-Launch the RTVM
-===============
-
- .. code-block:: none
-
- $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+You use the **board inspector tool** to generate the board
+configuration file.
.. note::
- If using a KBL NUC, the script must be adapted to match the BDF on the actual HW platform
-Recommended Kernel Cmdline for RTVM
------------------------------------
+ Whenever you change the configuration of the board, such as BIOS settings,
+ additional memory, or PCI devices, you must
+ generate a new board configuration file.
-.. code-block:: none
+Install OS on the Target
+============================
- root=PARTUUID= rw rootwait nohpet console=hvc0 console=ttyS0 \
- no_timer_check ignore_loglevel log_buf_len=16M consoleblank=0 \
- clocksource=tsc tsc=reliable x2apic_phys processor.max_cstate=0 \
- intel_idle.max_cstate=0 intel_pstate=disable mce=ignore_ce audit=0 \
- isolcpus=nohz,domain,1 nohz_full=1 rcu_nocbs=1 nosoftlockup idle=poll \
- irqaffinity=0
+The target system needs Ubuntu 18.04 to run the board inspector tool.
+To install Ubuntu 18.04:
-Configure RDT
--------------
+#. Insert the Ubuntu bootable USB disk into the target system.
-In addition to setting the CAT configuration via HV commands, we allow
-developers to add CAT configurations to the VM config and configure
-automatically at the time of RTVM creation. Refer to :ref:`rdt_configuration`
-for details on RDT configuration and :ref:`hv_rdt` for details on RDT
-high-level design.
+#. Power on the target system, and select the USB disk as the boot device in
+ the UEFI menu. Note that the USB disk label presented in the boot options
+ depends on the brand/make of the USB drive. (If that option isn't
+ available, first configure the BIOS to boot off the USB device.)
-Set Up the Core Allocation for the RTVM
----------------------------------------
+#. After selecting the language and keyboard layout, select the **Normal
+ installation** and **Download updates while installing Ubuntu** (downloading
+ updates requires the target to have an Internet connection).
-In our recommended configuration, two cores are allocated to the RTVM:
-core 0 for housekeeping and core 1 for RT tasks. In order to achieve
-this, follow the below steps to allocate all housekeeping tasks to core 0:
+ .. image:: ./images/gsg_ubuntu_install_01.png
-#. Prepare the RTVM launch script
+#. Use the checkboxes to choose whether you'd like to install Ubuntu alongside
+ another operating system, or delete your existing operating system and
+ replace it with Ubuntu:
- Follow the `Passthrough a hard disk to RTVM`_ section to make adjustments to
- the ``/usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh`` launch script.
+ .. image:: ./images/gsg_ubuntu_install_02.jpg
+ :scale: 85%
-#. Launch the RTVM:
+#. Complete the Ubuntu installation, creating a new user account ``acrn`` and
+ setting a password.
- .. code-block:: none
+The next section shows how to configure the BIOS settings.
- $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+Configure Target BIOS Settings
+===============================
-#. Log in to the RTVM as root and run the script as below:
+#. Boot your target and enter the BIOS configuration editor.
- .. code-block:: none
+ Tip: While your target is booting, you'll briefly see an option to enter
+ the BIOS configuration editor, typically by pressing :kbd:`F2` before the
+ GRUB menu (or Ubuntu login screen) appears.
- #!/bin/bash
- # Copyright (C) 2019 Intel Corporation.
- # SPDX-License-Identifier: BSD-3-Clause
- # Move all IRQs to core 0.
- for i in `cat /proc/interrupts | grep '^ *[0-9]*[0-9]:' | awk {'print $1'} | sed 's/:$//' `;
- do
- echo setting $i to affine for core zero
- echo 1 > /proc/irq/$i/smp_affinity
- done
+#. Configure these BIOS settings:
- # Move all rcu tasks to core 0.
- for i in `pgrep rcu`; do taskset -pc 0 $i; done
+ * Enable **VMX** (Virtual Machine Extensions, which provide hardware
+ assist for CPU virtualization).
+ * Enable **VT-d** (Intel Virtualization Technology for Directed I/O, which
+ provides additional support for managing I/O virtualization).
+ * Disable **Secure Boot**. This simplifies the steps for this example.
- # Change real-time attribute of all rcu tasks to SCHED_OTHER and priority 0
- for i in `pgrep rcu`; do chrt -v -o -p 0 $i; done
+ The names and locations of the BIOS settings differ depending on the target
+ hardware and BIOS version. You can search for the items in the BIOS
+ configuration editor.
- # Change real-time attribute of all tasks on core 1 to SCHED_OTHER and priority 0
- for i in `pgrep /1`; do chrt -v -o -p 0 $i; done
+ For example, on a Tiger Lake NUC, quickly press :kbd:`F2` while the system
+ is booting. (If the GRUB menu or Ubuntu login screen
+ appears, press :kbd:`CTRL` + :kbd:`ALT` + :kbd:`DEL` to reboot again and
+ press :kbd:`F2` sooner.) The settings are in the following paths:
- # Change real-time attribute of all tasks to SCHED_OTHER and priority 0
- for i in `ps -A -o pid`; do chrt -v -o -p 0 $i; done
+ * **System Agent (SA) Configuration** > **VT-d** > **Enabled**
+ * **CPU Configuration** > **VMX** > **Enabled**
+ * **Boot** > **Secure Boot** > **Secure Boot** > **Disabled**
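+
+ After saving these settings and booting back into Ubuntu, a quick sanity
+ check (a sketch, not part of the official steps) is to confirm the
+ processor advertises the VMX feature. Note this shows CPU capability; VMX
+ must also remain enabled in the firmware for ACRN to boot:
+
+ .. code-block:: bash
+
+    # Prints the number of logical CPUs whose flags include vmx;
+    # a non-zero count means the processor supports VMX.
+    grep -cw vmx /proc/cpuinfo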
- echo disabling timer migration
- echo 0 > /proc/sys/kernel/timer_migration
+#. Set other BIOS settings, such as Hyper-Threading, depending on the needs
+ of your application.
- .. note:: Ignore the error messages that might appear while the script is
- running.
+Generate a Board Configuration File
+=========================================
-Run Cyclictest
---------------
+#. On the target system, install the board inspector dependencies:
-#. Refer to the :ref:`troubleshooting section `
- below that discusses how to enable the network connection for RTVM.
+ .. code-block:: bash
-#. Launch the RTVM and log in as root.
+ sudo apt install cpuid msr-tools pciutils dmidecode python3 python3-pip
+ sudo modprobe msr
+ sudo pip3 install lxml
-#. Install the ``rt-tests`` tool:
+#. Configure the GRUB kernel command line as follows:
- .. code-block:: none
+ a. Edit the ``grub`` file. The following command uses ``vi``, but you
+ can use any text editor.
- sudo apt install rt-tests
+ .. code-block:: bash
-#. Use the following command to start cyclictest:
+ sudo vi /etc/default/grub
- .. code-block:: none
+ #. Find the line starting with ``GRUB_CMDLINE_LINUX_DEFAULT`` and append:
- sudo cyclictest -a 1 -p 80 -m -N -D 1h -q -H 30000 --histfile=test.log
+ .. code-block:: bash
+ idle=nomwait intel_idle.max_cstate=0 intel_pstate=disable
- Parameter descriptions:
+ Example:
- :-a 1: to bind the RT task to core 1
- :-p 80: to set the priority of the highest prio thread
- :-m: lock current and future memory allocations
- :-N: print results in ns instead of us (default us)
- :-D 1h: to run for 1 hour, you can change it to other values
- :-q: quiet mode; print a summary only on exit
- :-H 30000 --histfile=test.log: dump the latency histogram to a local file
+ .. code-block:: bash
+
+ GRUB_CMDLINE_LINUX_DEFAULT="quiet splash idle=nomwait intel_idle.max_cstate=0 intel_pstate=disable"
+
+ These settings allow the board inspector tool to
+ gather important information about the board.
+
+ #. Save and close the file.
+
+ #. Update GRUB and reboot the system:
+
+ .. code-block:: bash
+
+ sudo update-grub
+ reboot
+
+#. Copy the board inspector tool folder from the development computer to the
+ target via USB disk as follows:
+
+ a. Move to the development computer.
+
+ #. On the development computer, insert the USB disk that you intend to
+ use to copy files.
+
+ #. Ensure that there is only one USB disk inserted by running the
+ following command:
+
+ .. code-block:: bash
+
+ ls /media/${USER}
+
+ Confirm that one disk name appears. You'll use that disk name in
+ the following steps.
+
+ #. Copy the board inspector tool folder from the acrn-hypervisor source code to the USB disk:
+
+ .. code-block:: bash
+
+ cd ~/acrn-work/
+ disk="/media/$USER/"$(ls /media/$USER)
+ cp -r acrn-hypervisor/misc/config_tools/board_inspector/ $disk/
+ sync && sudo umount $disk
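+
+ The ``disk=`` one-liner above assumes exactly one volume is mounted
+ under ``/media/$USER``. A slightly more defensive variant (a
+ hypothetical helper, not part of this guide's scripts) fails loudly
+ if that assumption doesn't hold:
+
+ .. code-block:: bash
+
+    # Fail loudly unless exactly one USB disk is mounted
+    count=$(ls /media/"$USER" | wc -l)
+    if [ "$count" -ne 1 ]; then
+        echo "Expected one disk under /media/$USER, found $count" >&2
+    else
+        disk="/media/$USER/"$(ls /media/"$USER")
+        echo "Using $disk"
+    fi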
+
+ #. Insert the USB disk into the target system.
+
+ #. Copy the board inspector tool from the USB disk to the target:
+
+ .. code-block:: bash
+
+ mkdir -p ~/acrn-work
+ disk="/media/$USER/"$(ls /media/$USER)
+ cp -r $disk/board_inspector ~/acrn-work
+
+#. On the target, run ``board_inspector.py`` (the board inspector tool) to generate
+ the board configuration file. This example uses the parameter ``my_board``
+ as the file name.
+
+ .. code-block:: bash
+
+ cd ~/acrn-work/board_inspector/
+ sudo python3 board_inspector.py my_board
+
+#. Confirm that the board configuration file ``my_board.xml`` was generated
+ in the current directory.
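+
+ As an optional check (a sketch reusing the ``lxml`` package installed with
+ the board inspector dependencies), verify the file exists and parses as
+ well-formed XML:
+
+ .. code-block:: bash
+
+    # Run from ~/acrn-work/board_inspector/, where my_board.xml was written
+    ls -l my_board.xml
+    python3 -c "from lxml import etree; etree.parse('my_board.xml'); print('my_board.xml is well-formed XML')"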
+
+#. Copy ``my_board.xml`` from the target to the development computer
+ via USB disk as follows:
+
+ a. Make sure the USB disk is connected to the target.
+
+ #. Copy ``my_board.xml`` to the USB disk:
+
+ .. code-block:: bash
+
+ disk="/media/$USER/"$(ls /media/$USER)
+ cp ~/acrn-work/board_inspector/my_board.xml $disk/
+ sync && sudo umount $disk
+
+ #. Insert the USB disk into the development computer.
+
+ #. Copy ``my_board.xml`` from the USB disk to the development computer:
+
+ .. code-block:: bash
+
+ disk="/media/$USER/"$(ls /media/$USER)
+ cp $disk/my_board.xml ~/acrn-work
+ sudo umount $disk
.. rst-class:: numbered-step
-Launch the Windows VM
-*********************
+Generate a Scenario Configuration File and Launch Script
+*********************************************************
-Follow this :ref:`guide ` to prepare the Windows
-image file and then reboot.
+A **scenario configuration file** is an XML file that holds the parameters of
+a specific ACRN configuration, such as the number of VMs that can be run,
+their attributes, and the resources they have access to.
-Troubleshooting
+A **launch script** is a shell script that is used to create a User VM.
+
+You use the **ACRN configuration editor** to generate scenario configuration
+files and launch scripts.
+
+To generate a scenario configuration file and launch script:
+
+#. On the development computer, install ACRN configuration editor dependencies:
+
+ .. code-block:: bash
+
+ cd ~/acrn-work/acrn-hypervisor/misc/config_tools/config_app
+ sudo pip3 install -r requirements
+
+#. Launch the ACRN configuration editor:
+
+ .. code-block:: bash
+
+ python3 acrn_configurator.py
+
+#. Your web browser should open the configuration editor page automatically,
+ or you may need to open the URL shown in the terminal output manually.
+
+ .. note::
+
+ The ACRN configuration editor is supported on Chrome and Firefox.
+
+ The browser-based configuration editor interface:
+
+ .. image:: ./images/gsg_config_01.png
+
+#. Click the **Import Board info** button and browse to the board
+ configuration file ``my_board.xml`` that you generated earlier. When it is
+ successfully imported, the board information appears.
+ Example:
+
+ .. image:: ./images/gsg_config_board.png
+
+#. Generate the scenario configuration file:
+
+ a. Click the **Scenario Setting** menu on the top banner of the UI and select
+ **Load a default scenario**. Example:
+
+ .. image:: ./images/gsg_config_scenario_default.png
+
+ #. In the dialog box, select **industry** as the default scenario setting and click **OK**.
+
+ .. image:: ./images/gsg_config_scenario_load.png
+
+ #. The scenario's configurable items appear. Feel free to look through all
+ the available configuration settings used in this sample scenario. This
+ is where you can change the sample scenario to meet your application's
+ particular needs. But for now, leave them as they're set in the
+ sample.
+
+ #. Click the **Export XML** button to save the scenario configuration file
+ that will be used in the build process.
+
+ #. In the dialog box, keep the default name as is. Type
+ ``/home/<username>/acrn-work`` in the Scenario XML Path field. In the
+ following example, ``acrn`` is the username. Click **Submit** to save the
+ file.
+
+ .. image:: ./images/gsg_config_scenario_save.png
+
+ #. Confirm that ``industry.xml`` appears in the directory ``/home/<username>/acrn-work``.
+
+#. Generate the launch script:
+
+ a. Click the **Launch Setting** menu on the top banner of the UI and select
+ **Load a default launch script**.
+
+ .. image:: ./images/gsg_config_launch_default.png
+
+ #. In the dialog box, select **industry_launch_6uos** as the default launch
+ setting and click **OK**.
+
+ .. image:: ./images/gsg_config_launch_load.png
+
+ #. Click the **Generate Launch Script** button.
+
+ .. image:: ./images/gsg_config_launch_generate.png
+
+ #. In the dialog box, type ``/home/<username>/acrn-work/`` in the Source Path
+ field. In the following example, ``acrn`` is the username. Click **Submit**
+ to save the script.
+
+ .. image:: ./images/gsg_config_launch_save.png
+
+ #. Confirm that ``launch_uos_id3.sh`` appears in the directory
+ ``/home/<username>/acrn-work/my_board/output/``.
+
+#. Close the browser and press :kbd:`CTRL` + :kbd:`C` to terminate the
+ ``acrn_configurator.py`` program running in the terminal window.
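+
+Before moving on, you can optionally confirm the generated files are where
+the build steps expect them. ``xmllint`` (from ``libxml2-utils``, installed
+earlier with the build dependencies) gives a quick well-formedness check of
+the exported scenario XML; a minimal sketch:
+
+.. code-block:: bash
+
+   ls ~/acrn-work/industry.xml ~/acrn-work/my_board/output/launch_uos_id3.sh
+   xmllint --noout ~/acrn-work/industry.xml && echo "industry.xml parses cleanly"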
+
+.. rst-class:: numbered-step
+
+Build ACRN
***************
-.. _enabling the network on the RTVM:
+#. On the development computer, build the ACRN hypervisor:
-Enabling the Network on the RTVM
-================================
+ .. code-block:: bash
-If you need to access the internet, you must add the following command line
-to the ``launch_hard_rt_vm.sh`` script before launching it:
+ cd ~/acrn-work/acrn-hypervisor
+ make -j $(nproc) BOARD=~/acrn-work/my_board.xml SCENARIO=~/acrn-work/industry.xml
+ make targz-pkg
-.. code-block:: none
- :emphasize-lines: 8
+ The build typically takes a few minutes. By default, the build results are
+ found in the ``build`` directory. The ``targz-pkg`` target also creates a
+ compressed tar file to make copying files to the target easier.
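+
+ If you later need to check which board, scenario, and build type an
+ existing build directory was configured with, the build system's
+ ``hvshowconfig`` target reports the ``BOARD``, ``SCENARIO``, and
+ ``RELEASE`` values it recorded:
+
+ .. code-block:: bash
+
+    # Run from the top of the acrn-hypervisor tree
+    make hvshowconfig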
- acrn-dm -A -m $mem_size -s 0:0,hostbridge \
- --lapic_pt \
- --rtvm \
- --virtio_poll 1000000 \
- -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
- -s 2,passthru,00/17/0 \
- -s 3,virtio-console,@stdio:stdio_port \
- -s 8,virtio-net,tap0 \
- --ovmf /usr/share/acrn/bios/OVMF.fd \
- hard_rtvm
+#. Build the ACRN kernel for the Service VM:
-.. _passthru to rtvm:
+ .. code-block:: bash
-Passthrough a Hard Disk to RTVM
-===============================
+ cd ~/acrn-work/acrn-kernel
+ cp kernel_config_uefi_sos .config
+ make olddefconfig
+ make -j $(nproc) targz-pkg
-#. Use the ``lspci`` command to ensure that the correct SATA device IDs will
- be used for the passthrough before launching the script:
+ The build can take 1-3 hours depending on the performance of your development
+ computer and network.
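+
+ When the build finishes, the ``targz-pkg`` target leaves a compressed
+ kernel package at the top of the kernel tree; confirm it's there (the
+ version string in the name should match the one used in the copy steps
+ below):
+
+ .. code-block:: bash
+
+    ls ~/acrn-work/acrn-kernel/*.tar.gz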
- .. code-block:: none
+#. Copy all the necessary files generated on the development computer to the
+ target system by USB disk as follows:
- # lspci -nn | grep -i sata
- 00:17.0 SATA controller [0106]: Intel Corporation Device [8086:a0d3] (rev 20)
+ a. Insert the USB disk into the development computer and run these commands:
-#. Modify the script to use the correct SATA device IDs and bus number:
+ .. code-block:: bash
- .. code-block:: none
+ disk="/media/$USER/"$(ls /media/$USER)
+ sudo cp ~/acrn-work/acrn-kernel/linux-5.10.47-acrn-sos-x86.tar.gz $disk/
+ sudo cp ~/acrn-work/acrn-hypervisor/build/hypervisor/acrn.bin $disk/
+ sudo cp ~/acrn-work/my_board/output/launch_uos_id3.sh $disk/
+ sudo cp ~/acrn-work/acpica-unix-20210105/generate/unix/bin/iasl $disk/
+ sudo cp ~/acrn-work/acrn-hypervisor/build/acrn-2.6-unstable.tar.gz $disk/
+ sync && sudo umount $disk/
- # vim /usr/share/acrn/launch_hard_rt_vm.sh
+ #. Insert the USB disk you just used into the target system and run these commands:
- passthru_vpid=(
- ["eth"]="8086 15f2"
- ["sata"]="8086 a0d3"
- ["nvme"]="126f 2263"
- )
- passthru_bdf=(
- ["eth"]="0000:58:00.0"
- ["sata"]="0000:00:17.0"
- ["nvme"]="0000:01:00.0"
- )
+ .. code-block:: bash
- # SATA pass-through
- echo ${passthru_vpid["sata"]} > /sys/bus/pci/drivers/pci-stub/new_id
- echo ${passthru_bdf["sata"]} > /sys/bus/pci/devices/${passthru_bdf["sata"]}/driver/unbind
- echo ${passthru_bdf["sata"]} > /sys/bus/pci/drivers/pci-stub/bind
+ disk="/media/$USER/"$(ls /media/$USER)
+ sudo cp $disk/linux-5.10.47-acrn-sos-x86.tar.gz ~/acrn-work
+ sudo cp $disk/acrn-2.6-unstable.tar.gz ~/acrn-work
+ cd ~/acrn-work
+ sudo tar -zxvf linux-5.10.47-acrn-sos-x86.tar.gz -C /
+ sudo tar -zxvf acrn-2.6-unstable.tar.gz -C /
+ sudo mkdir -p /boot/acrn/
+ sudo cp $disk/acrn.bin /boot/acrn
+ sudo cp $disk/launch_uos_id3.sh ~/acrn-work
+ sudo cp $disk/iasl /usr/sbin/
+ sudo umount $disk/
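+
+ As a quick check that everything landed where the next steps expect it
+ (the file names follow the ones used above):
+
+ .. code-block:: bash
+
+    ls -l /boot/acrn/acrn.bin /boot/vmlinuz-5.10.47-acrn-sos /usr/bin/acrn-dm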
- # NVME pass-through
- #echo ${passthru_vpid["nvme"]} > /sys/bus/pci/drivers/pci-stub/new_id
- #echo ${passthru_bdf["nvme"]} > /sys/bus/pci/devices/${passthru_bdf["nvme"]}/driver/unbind
- #echo ${passthru_bdf["nvme"]} > /sys/bus/pci/drivers/pci-stub/bind
+.. rst-class:: numbered-step
- .. code-block:: none
- :emphasize-lines: 5
+Install ACRN
+************
- --lapic_pt \
- --rtvm \
- --virtio_poll 1000000 \
- -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
- -s 2,passthru,00/17/0 \
- -s 3,virtio-console,@stdio:stdio_port \
- -s 8,virtio-net,tap0 \
+In the following steps, you will configure GRUB on the target system.
+
+#. On the target, find the root file system (rootfs) device name by using the ``lsblk`` command:
+
+ .. code-block:: console
+ :emphasize-lines: 24
+
+ ~$ lsblk
+ NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+ loop0 7:0 0 255.6M 1 loop /snap/gnome-3-34-1804/36
+ loop1 7:1 0 62.1M 1 loop /snap/gtk-common-themes/1506
+ loop2 7:2 0 2.5M 1 loop /snap/gnome-calculator/884
+ loop3 7:3 0 241.4M 1 loop /snap/gnome-3-38-2004/70
+ loop4 7:4 0 61.8M 1 loop /snap/core20/1081
+ loop5 7:5 0 956K 1 loop /snap/gnome-logs/100
+ loop6 7:6 0 2.2M 1 loop /snap/gnome-system-monitor/148
+ loop7 7:7 0 2.4M 1 loop /snap/gnome-calculator/748
+ loop8 7:8 0 29.9M 1 loop /snap/snapd/8542
+ loop9 7:9 0 32.3M 1 loop /snap/snapd/12704
+ loop10 7:10 0 65.1M 1 loop /snap/gtk-common-themes/1515
+ loop11 7:11 0 219M 1 loop /snap/gnome-3-34-1804/72
+ loop12 7:12 0 55.4M 1 loop /snap/core18/2128
+ loop13 7:13 0 55.5M 1 loop /snap/core18/2074
+ loop14 7:14 0 2.5M 1 loop /snap/gnome-system-monitor/163
+ loop15 7:15 0 704K 1 loop /snap/gnome-characters/726
+ loop16 7:16 0 276K 1 loop /snap/gnome-characters/550
+ loop17 7:17 0 548K 1 loop /snap/gnome-logs/106
+ loop18 7:18 0 243.9M 1 loop /snap/gnome-3-38-2004/39
+ nvme0n1 259:0 0 119.2G 0 disk
+ ├─nvme0n1p1 259:1 0 512M 0 part /boot/efi
+ └─nvme0n1p2 259:2 0 118.8G 0 part /
+
+ As highlighted, you're looking for the device name associated with the
+ partition mounted at ``/``, in this case ``nvme0n1p2``.
+
+#. Run the ``blkid`` command to get the UUID and PARTUUID for the rootfs device
+ (replace the ``nvme0n1p2`` name with the name shown for the rootfs on your system):
+
+ .. code-block:: bash
+
+ sudo blkid /dev/nvme0n1p2
+
+ In the output, look for the UUID and PARTUUID (example below). You will need
+ them in the next step.
+
+ .. code-block:: console
+
+ /dev/nvme0n1p2: UUID="3cac5675-e329-4ca1-b346-0a3e65f99016" TYPE="ext4" PARTUUID="03db7f45-8a6c-454b-adf7-30343d82c4f4"
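+
+ If you'd rather capture the two values without copying them by hand,
+ ``blkid`` can print a single field (replace the device name with yours):
+
+ .. code-block:: bash
+
+    sudo blkid -s UUID -o value /dev/nvme0n1p2
+    sudo blkid -s PARTUUID -o value /dev/nvme0n1p2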
+
+#. Add the ACRN Service VM to the GRUB boot menu:
+
+ a. Edit the GRUB 40_custom file. The following command uses ``vi``, but
+ you can use any text editor.
+
+ .. code-block:: bash
+
+ sudo vi /etc/grub.d/40_custom
+
+ #. Add the following text at the end of the file. Replace ``<UUID>`` and
+ ``<PARTUUID>`` with the values from the previous step.
+
+ .. code-block:: bash
+ :emphasize-lines: 6,8
+
+ menuentry "ACRN Multiboot Ubuntu Service VM" --id ubuntu-service-vm {
+ load_video
+ insmod gzio
+ insmod part_gpt
+ insmod ext2
+ search --no-floppy --fs-uuid --set <UUID>
+ echo 'loading ACRN...'
+ multiboot2 /boot/acrn/acrn.bin root=PARTUUID="<PARTUUID>"
+ module2 /boot/vmlinuz-5.10.47-acrn-sos Linux_bzImage
+ }
+
+ #. Save and close the file.
+
+#. Make the GRUB menu visible when
+ booting and make it load the Service VM kernel by default:
+
+ a. Edit the ``grub`` file:
+
+ .. code-block:: bash
+
+ sudo vi /etc/default/grub
+
+ #. Edit these items:
+
+ .. code-block:: bash
+
+ GRUB_DEFAULT=ubuntu-service-vm
+ #GRUB_TIMEOUT_STYLE=hidden
+ GRUB_TIMEOUT=5
+ GRUB_CMDLINE_LINUX="text"
+
+ #. Save and close the file.
+
+#. Update GRUB and reboot the system:
+
+ .. code-block:: bash
+
+ sudo update-grub
+ reboot
+
+#. Confirm that you see the GRUB menu with the "ACRN Multiboot Ubuntu Service
+ VM" entry. Select it and proceed to booting ACRN. (It may be autoselected, in
+ which case it will boot with this option automatically in 5 seconds.)
+
+ .. code-block:: console
+ :emphasize-lines: 8
+
+ GNU GRUB version 2.04
+ ────────────────────────────────────────────────────────────────────────────────
+ Ubuntu
+ Advanced options for Ubuntu
+ Ubuntu 18.04.05 LTS (18.04) (on /dev/nvme0n1p2)
+ Advanced options for Ubuntu 18.04.05 LTS (18.04) (on /dev/nvme0n1p2)
+ System setup
+ *ACRN Multiboot Ubuntu Service VM
+
+.. rst-class:: numbered-step
+
+Run ACRN and the Service VM
+******************************
+
+When the ACRN hypervisor starts to boot, its console log is displayed on the
+serial port (if you connected the optional serial cable). The hypervisor then
+boots the Service VM automatically.
+
+#. On the target, log in to the Service VM.
+
+#. Verify that the hypervisor is running by checking ``dmesg`` in
+ the Service VM:
+
+ .. code-block:: bash
+
+ dmesg | grep ACRN
+
+ You should see "Hypervisor detected: ACRN" in the output. Example output of a
+ successful installation:
+
+ .. code-block:: console
+
+ [ 0.000000] Hypervisor detected: ACRN
+ [ 0.862942] ACRN HVLog: acrn_hvlog_init
+
+.. rst-class:: numbered-step
+
+Launch the User VM
+*******************
+
+#. A User VM image is required on the target system before you can launch
+ the VM. The following steps use an Ubuntu image:
+
+ a. Go to the official Ubuntu website and download an ISO image of the
+ Ubuntu 18.04 desktop release.
+
+ #. Put the ISO file in the path ``~/acrn-work/`` on the target system.
+
+#. Open the launch script in a text editor. The following command uses
+ ``vi``, but you can use any text editor.
+
+ .. code-block:: bash
+
+ vi ~/acrn-work/launch_uos_id3.sh
+
+#. Look for the line that contains the term ``virtio-blk`` and replace
+ the existing image file path with your ISO image file path.
+ In the following example, the
+ ISO image file path is ``/home/acrn/acrn-work/ubuntu-18.04.5-desktop-amd64.iso``.
+
+ .. code-block:: bash
+ :emphasize-lines: 4
+
+ acrn-dm -A -m $mem_size -s 0:0,hostbridge -U 615db82a-e189-4b4f-8dbb-d321343e4ab3 \
+ --mac_seed $mac_seed \
+ $logger_setting \
+ -s 7,virtio-blk,/home/acrn/acrn-work/ubuntu-18.04.5-desktop-amd64.iso \
+ -s 8,virtio-net,tap_YaaG3 \
+ -s 6,virtio-console,@stdio:stdio_port \
--ovmf /usr/share/acrn/bios/OVMF.fd \
- hard_rtvm
+ -s 31:0,lpc \
+ $vm_name
-#. Upon deployment completion, launch the RTVM directly onto your NUC11TNHi5:
+#. Save and close the file.
- .. code-block:: none
+#. Launch the User VM:
- $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+ .. code-block:: bash
+
+ sudo chmod +x ~/acrn-work/launch_uos_id3.sh
+ sudo chmod +x /usr/bin/acrn-dm
+ sudo chmod +x /usr/sbin/iasl
+ sudo ~/acrn-work/launch_uos_id3.sh
+
+#. Confirm that you see the console of the User VM on the Service VM's terminal
+ (on the monitor connected to the target system). Example:
+
+ .. code-block:: console
+
+ Ubuntu 18.04.5 LTS ubuntu hvc0
+
+ ubuntu login:
+
+#. Log in to the User VM. For the Ubuntu 18.04 ISO, the user is ``ubuntu``, and
+ there's no password.
+
+#. Confirm that you see output similar to this example:
+
+ .. code-block:: console
+
+ Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-42-generic x86_64)
+
+ * Documentation: https://help.ubuntu.com
+ * Management: https://landscape.canonical.com
+ * Support: https://ubuntu.com/advantage
+
+ 0 packages can be updated.
+ 0 updates are security updates.
+
+ Your Hardware Enablement Stack (HWE) is supported until April 2023.
+
+ The programs included with the Ubuntu system are free software;
+ the exact distribution terms for each program are described in the
+ individual files in /usr/share/doc/*/copyright.
+
+ Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
+ applicable law.
+
+ To run a command as administrator (user "root"), use "sudo <command>".
+ See "man sudo_root" for details.
+
+ ubuntu@ubuntu:~$
+
+The guest VM has launched successfully. You have completed this ACRN setup.
+
+Next Steps
+**************
+
+:ref:`overview_dev` describes the ACRN configuration process, with links to additional details.
diff --git a/doc/getting-started/images/acrn_terms.png b/doc/getting-started/images/acrn_terms.png
new file mode 100644
index 000000000..6a6936b5f
Binary files /dev/null and b/doc/getting-started/images/acrn_terms.png differ
diff --git a/doc/getting-started/images/gsg_config_01.png b/doc/getting-started/images/gsg_config_01.png
new file mode 100644
index 000000000..f0e6e473f
Binary files /dev/null and b/doc/getting-started/images/gsg_config_01.png differ
diff --git a/doc/getting-started/images/gsg_config_board.png b/doc/getting-started/images/gsg_config_board.png
new file mode 100644
index 000000000..fd31b5aaa
Binary files /dev/null and b/doc/getting-started/images/gsg_config_board.png differ
diff --git a/doc/getting-started/images/gsg_config_board.psd b/doc/getting-started/images/gsg_config_board.psd
new file mode 100644
index 000000000..a013cf1a5
Binary files /dev/null and b/doc/getting-started/images/gsg_config_board.psd differ
diff --git a/doc/getting-started/images/gsg_config_launch_default.png b/doc/getting-started/images/gsg_config_launch_default.png
new file mode 100644
index 000000000..b778f31f4
Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_default.png differ
diff --git a/doc/getting-started/images/gsg_config_launch_default.psd b/doc/getting-started/images/gsg_config_launch_default.psd
new file mode 100644
index 000000000..916c89bad
Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_default.psd differ
diff --git a/doc/getting-started/images/gsg_config_launch_generate.png b/doc/getting-started/images/gsg_config_launch_generate.png
new file mode 100644
index 000000000..f49cd8dd3
Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_generate.png differ
diff --git a/doc/getting-started/images/gsg_config_launch_generate.psd b/doc/getting-started/images/gsg_config_launch_generate.psd
new file mode 100644
index 000000000..c3480fe93
Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_generate.psd differ
diff --git a/doc/getting-started/images/gsg_config_launch_load.png b/doc/getting-started/images/gsg_config_launch_load.png
new file mode 100644
index 000000000..a1a33aefe
Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_load.png differ
diff --git a/doc/getting-started/images/gsg_config_launch_load.psd b/doc/getting-started/images/gsg_config_launch_load.psd
new file mode 100644
index 000000000..bfed04e8a
Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_load.psd differ
diff --git a/doc/getting-started/images/gsg_config_launch_save.png b/doc/getting-started/images/gsg_config_launch_save.png
new file mode 100644
index 000000000..c09badc88
Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_save.png differ
diff --git a/doc/getting-started/images/gsg_config_launch_save.psd b/doc/getting-started/images/gsg_config_launch_save.psd
new file mode 100644
index 000000000..951e1bc3e
Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_save.psd differ
diff --git a/doc/getting-started/images/gsg_config_scenario_default.png b/doc/getting-started/images/gsg_config_scenario_default.png
new file mode 100644
index 000000000..2cf12f7b4
Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_default.png differ
diff --git a/doc/getting-started/images/gsg_config_scenario_default.psd b/doc/getting-started/images/gsg_config_scenario_default.psd
new file mode 100644
index 000000000..acc32d2b6
Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_default.psd differ
diff --git a/doc/getting-started/images/gsg_config_scenario_load.png b/doc/getting-started/images/gsg_config_scenario_load.png
new file mode 100644
index 000000000..ffda6c601
Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_load.png differ
diff --git a/doc/getting-started/images/gsg_config_scenario_load.psd b/doc/getting-started/images/gsg_config_scenario_load.psd
new file mode 100644
index 000000000..310cebec6
Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_load.psd differ
diff --git a/doc/getting-started/images/gsg_config_scenario_save.png b/doc/getting-started/images/gsg_config_scenario_save.png
new file mode 100644
index 000000000..b35b4ed1d
Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_save.png differ
diff --git a/doc/getting-started/images/gsg_config_scenario_save.psd b/doc/getting-started/images/gsg_config_scenario_save.psd
new file mode 100644
index 000000000..ccdec023c
Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_save.psd differ
diff --git a/doc/getting-started/images/gsg_host_target.png b/doc/getting-started/images/gsg_host_target.png
new file mode 100644
index 000000000..e6419e672
Binary files /dev/null and b/doc/getting-started/images/gsg_host_target.png differ
diff --git a/doc/getting-started/images/gsg_nuc.png b/doc/getting-started/images/gsg_nuc.png
new file mode 100644
index 000000000..7904483cc
Binary files /dev/null and b/doc/getting-started/images/gsg_nuc.png differ
diff --git a/doc/getting-started/images/gsg_overview_image_sources.pptx b/doc/getting-started/images/gsg_overview_image_sources.pptx
new file mode 100644
index 000000000..091d92602
Binary files /dev/null and b/doc/getting-started/images/gsg_overview_image_sources.pptx differ
diff --git a/doc/getting-started/images/gsg_rootfs.png b/doc/getting-started/images/gsg_rootfs.png
new file mode 100644
index 000000000..80fd00dfb
Binary files /dev/null and b/doc/getting-started/images/gsg_rootfs.png differ
diff --git a/doc/getting-started/images/gsg_scenario.png b/doc/getting-started/images/gsg_scenario.png
new file mode 100644
index 000000000..373de0fbb
Binary files /dev/null and b/doc/getting-started/images/gsg_scenario.png differ
diff --git a/doc/getting-started/images/gsg_ubuntu_install_01.png b/doc/getting-started/images/gsg_ubuntu_install_01.png
new file mode 100644
index 000000000..09910b902
Binary files /dev/null and b/doc/getting-started/images/gsg_ubuntu_install_01.png differ
diff --git a/doc/getting-started/images/gsg_ubuntu_install_02.jpg b/doc/getting-started/images/gsg_ubuntu_install_02.jpg
new file mode 100644
index 000000000..85e215c22
Binary files /dev/null and b/doc/getting-started/images/gsg_ubuntu_install_02.jpg differ
diff --git a/doc/getting-started/images/gsg_ubuntu_install_02.png b/doc/getting-started/images/gsg_ubuntu_install_02.png
new file mode 100644
index 000000000..47e9f2cec
Binary files /dev/null and b/doc/getting-started/images/gsg_ubuntu_install_02.png differ
diff --git a/doc/getting-started/images/gsg_vm_iso.png b/doc/getting-started/images/gsg_vm_iso.png
new file mode 100644
index 000000000..a2cabc7bb
Binary files /dev/null and b/doc/getting-started/images/gsg_vm_iso.png differ
diff --git a/doc/getting-started/images/gsg_vm_ubuntu_launch.png b/doc/getting-started/images/gsg_vm_ubuntu_launch.png
new file mode 100644
index 000000000..f78bfb727
Binary files /dev/null and b/doc/getting-started/images/gsg_vm_ubuntu_launch.png differ
diff --git a/doc/getting-started/images/icon_host.png b/doc/getting-started/images/icon_host.png
new file mode 100644
index 000000000..e542f3103
Binary files /dev/null and b/doc/getting-started/images/icon_host.png differ
diff --git a/doc/getting-started/images/icon_light.png b/doc/getting-started/images/icon_light.png
new file mode 100644
index 000000000..2551d8dab
Binary files /dev/null and b/doc/getting-started/images/icon_light.png differ
diff --git a/doc/getting-started/images/icon_target.png b/doc/getting-started/images/icon_target.png
new file mode 100644
index 000000000..1cde93259
Binary files /dev/null and b/doc/getting-started/images/icon_target.png differ
diff --git a/doc/getting-started/images/overview_flow.png b/doc/getting-started/images/overview_flow.png
new file mode 100644
index 000000000..e522f9530
Binary files /dev/null and b/doc/getting-started/images/overview_flow.png differ
diff --git a/doc/getting-started/images/overview_host_target.png b/doc/getting-started/images/overview_host_target.png
new file mode 100644
index 000000000..dfc64f3d7
Binary files /dev/null and b/doc/getting-started/images/overview_host_target.png differ
diff --git a/doc/getting-started/overview_dev.rst b/doc/getting-started/overview_dev.rst
new file mode 100644
index 000000000..70e13e95e
--- /dev/null
+++ b/doc/getting-started/overview_dev.rst
@@ -0,0 +1,309 @@
+.. _overview_dev:
+
+Configuration and Development Overview
+######################################
+
+This overview is for developers who are new or relatively new to ACRN. It will
+help you get familiar with ACRN basics: its components and the general process
+for building an ACRN hypervisor.
+
+The overview covers the process at an abstract and universal level.
+
+* Abstract: the overall structure rather than detailed instructions
+* Universal: applicable to most use cases
+
+Although the overview describes the process as a series of steps, it's intended
+to be a summary, not a step-by-step guide. Throughout the overview, you will see
+links to the :ref:`gsg` for first-time setup instructions. Links to advanced
+guides and additional information are also provided.
+
+.. _overview_dev_dev_env:
+
+Development Environment
+***********************
+
+The recommended development environment for ACRN consists of two machines:
+
+* **Development computer** where you configure and build ACRN images
+* **Target system** where you install and run ACRN images
+
+.. image:: ./images/overview_host_target.png
+ :scale: 60%
+
+ACRN requires a serial output from the target system to the development computer
+for :ref:`debugging and system messaging `. If your target doesn't
+have a serial output, :ref:`here are some tips for connecting a serial output
+`.
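+
+For example, assuming the target's serial port is wired to a USB serial
+adapter that appears as ``/dev/ttyUSB0`` on the development computer, you can
+view the target's output with a terminal program such as ``picocom``. This is
+a sketch; your device name and baud rate may differ:
+
+.. code-block:: bash
+
+   # Hypothetical device name; check dmesg to find your serial adapter
+   sudo picocom -b 115200 /dev/ttyUSB0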
+
+You will need a way to copy the built ACRN images from the development computer
+to the target system. A USB drive is recommended.
+
+General Process for Building an ACRN Hypervisor
+***********************************************
+
+The general process for configuring and building an ACRN hypervisor is
+illustrated in the following figure. Additional details follow.
+
+.. image:: ./images/overview_flow.png
+
+.. _overview_dev_hw_scenario:
+
+|icon_light| Step 1: Select Hardware and Scenario
+*************************************************
+
+.. |icon_light| image:: ./images/icon_light.png
+ :scale: 75%
+
+ACRN configuration is hardware- and scenario-specific. You will need to learn
+about supported ACRN hardware and scenarios, and select the right ones for your
+needs.
+
+Select Your Hardware
+====================
+
+ACRN supports certain Intel processors. Development kits are widely available.
+See :ref:`hardware`.
+
+.. _overview_dev_select_scenario:
+
+Select Your Scenario
+====================
+
+A :ref:`scenario ` is a specific ACRN configuration, such as
+the type and number of VMs that can be run, their attributes, and the resources
+they have access to.
+
+This image shows an example of an ACRN scenario to illustrate the types of VMs
+that ACRN offers:
+
+.. image:: ./images/acrn_terms.png
+ :scale: 75%
+
+ACRN offers three types of VMs:
+
+* **Pre-launched User VMs**: These VMs run independently of other VMs and own
+ dedicated hardware resources, such as a CPU core, memory, and I/O devices.
+ Other VMs may not even be aware of the existence of pre-launched VMs. The
+ configuration of these VMs is static and must be defined at build time. They
+ are well-suited for safety-critical applications.
+
+* **Service VM**: This VM is required for scenarios that have post-launched VMs.
+ It controls post-launched VMs and provides device sharing services to them.
+ ACRN supports one Service VM.
+
+* **Post-launched User VMs**: These VMs share hardware resources. Unlike
+  pre-launched VMs, their configuration can be changed at runtime. They are
+ well-suited for non-safety applications, including human machine interface
+ (HMI), artificial intelligence (AI), computer vision, real-time, and others.
+
+The names "pre-launched" and "post-launched" refer to the boot order of these
+VMs. The ACRN hypervisor launches the pre-launched VMs first, then launches the
+Service VM. The Service VM launches the post-launched VMs.
+
+Due to the static configuration of pre-launched VMs, they are recommended only
+if you need complete isolation from the rest of the system. Most use cases can
+meet their requirements without pre-launched VMs. Even if your application has
+stringent real-time requirements, start by testing the application on a
+post-launched VM before considering a pre-launched VM.
+
+To help accelerate the configuration process, ACRN offers the following
+:ref:`predefined scenarios `:
+
+* **Shared scenario:** A configuration in which the VMs share resources
+ (post-launched).
+
+* **Partitioned scenario:** A configuration in which the VMs are isolated from
+ each other and don't share resources (pre-launched).
+
+* **Hybrid scenario:** A configuration that has both pre-launched and
+ post-launched VMs.
+
+ACRN provides predefined configuration files and documentation to help you set
+up these scenarios.
+
+* New ACRN users should start with the shared scenario, as described in the :ref:`gsg`.
+
+* The other predefined scenarios are more complex. The :ref:`develop_acrn`
+ provide setup instructions.
+
+You can copy the predefined configuration files and customize them for your use
+case, as described later in :ref:`overview_dev_config_editor`.
+
+|icon_host| Step 2: Prepare the Development Computer
+****************************************************
+
+.. |icon_host| image:: ./images/icon_host.png
+ :scale: 75%
+
+Your development computer requires certain dependencies to configure and build
+ACRN:
+
+* Ubuntu OS
+* Build tools
+* ACRN hypervisor source code
+* If your scenario has a Service VM: ACRN kernel source code
+
+The :ref:`gsg` provides step-by-step instructions for setting up your
+development computer.
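+
+As a rough sketch of what that setup involves (the :ref:`gsg` has the
+authoritative package list and versions, and the clone destination is your
+choice):
+
+.. code-block:: bash
+
+   # Illustrative only; see the Getting Started Guide for the full package list
+   sudo apt update && sudo apt install -y git make gcc
+   git clone https://github.com/projectacrn/acrn-hypervisor.git
+   # Only needed if your scenario has a Service VM:
+   git clone https://github.com/projectacrn/acrn-kernel.git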
+
+In the next step, :ref:`overview_dev_board_config`, you will need the board
+inspector tool found in the ACRN hypervisor source code to collect information
+about the target hardware and generate a board configuration file.
+
+.. _overview_dev_board_config:
+
+|icon_target| Step 3: Generate a Board Configuration File
+*********************************************************
+
+.. |icon_target| image:: ./images/icon_target.png
+ :scale: 75%
+
+A **board configuration file** is an XML file that stores hardware-specific
+information extracted from the target system. It describes the capacity of
+hardware resources (such as processors and memory), platform power states,
+available devices, and BIOS settings. The file is used to configure the ACRN
+hypervisor, because each hypervisor instance is specific to your target
+hardware.
+
+The **board inspector tool** ``board_inspector.py`` enables you to generate a board
+configuration file on the target system. The following sections provide an
+overview and important information to keep in mind when using the tool.
+
+Configure BIOS Settings
+=======================
+
+You must configure all of your target's BIOS settings before running the board
+inspector tool, because the tool records the current BIOS settings in the board
+configuration file.
+
+Some BIOS settings are required by ACRN. The :ref:`gsg` provides a list of the
+settings.
+
+Use the Board Inspector to Generate a Board Configuration File
+==============================================================
+
+The board inspector tool requires certain dependencies to be present on the
+target system:
+
+* Ubuntu OS
+* Tools and kernel command-line options that allow the board inspector to
+ collect information about the target hardware
+
+After setting up the dependencies, you run the board inspector from the command line.
+The tool generates a board configuration file specific to your hardware.
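+
+A typical invocation looks something like this, run from the directory where
+the tool is located (``my_board`` is a hypothetical board name of your
+choosing):
+
+.. code-block:: bash
+
+   # Run on the target system; writes my_board.xml to the current directory
+   sudo python3 board_inspector.py my_board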
+
+.. important:: Whenever you change the configuration of the board, such as BIOS
+ settings or PCI ports, you must generate a new board configuration file.
+
+The :ref:`gsg` provides step-by-step instructions for using the tool. For more
+information about the tool, see :ref:`acrn_config_workflow`.
+
+.. _overview_dev_config_editor:
+
+|icon_host| Step 4: Generate a Scenario Configuration File and Launch Scripts
+*****************************************************************************
+
+As described in :ref:`overview_dev_select_scenario`, a scenario is a specific
+ACRN configuration, such as the number of VMs that can be run, their attributes,
+and the resources they have access to. These parameters are saved in a
+**scenario configuration file** in XML format.
+
+A **launch script** is a shell script that is used to create a post-launched VM.
+
+The **configuration editor tool** ``acrn_configurator.py`` is a web-based user interface that
+runs on your development computer. It enables you to customize, validate, and
+generate scenario configuration files and launch scripts. The following sections
+provide an overview and important information to keep in mind when using the
+tool.
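+
+For reference, the tool is typically started from the ACRN hypervisor source
+tree and then used through a browser. The path below is an assumption and may
+differ between releases:
+
+.. code-block:: bash
+
+   # Launch the web-based configuration editor (path may vary by release)
+   cd acrn-hypervisor/misc/config_tools/config_app
+   python3 acrn_configurator.py
+   # Then browse to http://127.0.0.1:5001 on the development computer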
+
+Generate a Scenario Configuration File
+======================================
+
+Before using the configuration editor tool to generate a scenario configuration
+file, be sure you have the board configuration file that you generated in
+:ref:`overview_dev_board_config`. The tool needs the board configuration file to
+validate that your custom scenario is supported by the target hardware.
+
+You can use the tool to create a new scenario configuration file or modify an
+existing one, such as a predefined scenario described in
+:ref:`overview_dev_hw_scenario`. The tool's GUI enables you to edit the
+configurable items in the file, such as adding VMs, modifying VM attributes, or
+deleting VMs. The tool validates your inputs against your board configuration
+file. After validation is successful, the tool generates your custom scenario
+configuration file.
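+
+To give a feel for the output, the scenario configuration file is plain XML.
+A hypothetical fragment describing a post-launched VM might look like this
+(element names and structure vary between ACRN releases; consult your
+release's schema):
+
+.. code-block:: xml
+
+   <!-- Hypothetical fragment; not a complete scenario file -->
+   <vm id="1">
+       <vm_type>POST_STD_VM</vm_type>
+   </vm>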
+
+Generate Launch Scripts
+=======================
+
+Before using the configuration editor tool to generate a launch script, be sure
+you have your board configuration file and scenario configuration file. The tool
+needs both files to validate your launch script configuration.
+
+The process of customizing launch scripts is similar to the process of
+customizing scenario configuration files. You can choose to create a new launch
+script or modify an existing one. You can then use the GUI to edit the
+configurable parameters. The tool validates your inputs against your board
+configuration file and scenario configuration file. After validation is
+successful, the tool generates your custom launch script.
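+
+At its core, a generated launch script ends with an ``acrn-dm`` command line
+that defines the VM. A simplified, hypothetical example follows; the memory
+size, device slots, image path, and VM name are placeholders, and real
+generated scripts contain many more options:
+
+.. code-block:: bash
+
+   # Hypothetical acrn-dm invocation created by a launch script
+   acrn-dm -m 1024M \
+      -s 0:0,hostbridge \
+      -s 3,virtio-blk,/home/acrn/my_vm.img \
+      --ovmf /usr/share/acrn/bios/OVMF.fd \
+      my_post_launched_vm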
+
+.. note::
+   The configuration editor may not show all editable parameters for scenario
+   configuration files and launch scripts. You can edit any remaining
+   parameters manually. See :ref:`acrn_config_data`.
+
+The :ref:`gsg` walks you through a simple example of using the tool. For more
+information about the tool, see :ref:`acrn_config_tool_ui`.
+
+|icon_host| Step 5: Build ACRN
+******************************
+
+The ACRN hypervisor source code provides a makefile to build the ACRN hypervisor
+binary and associated components. In the ``make`` command, you need to specify
+your board configuration file and scenario configuration file. The build
+typically takes a few minutes.
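+
+For example, the hypervisor build boils down to a single ``make`` invocation
+along these lines (the XML file names and locations are placeholders):
+
+.. code-block:: bash
+
+   # Build the hypervisor with your board and scenario configuration files
+   cd acrn-hypervisor
+   make BOARD=~/acrn-work/my_board.xml SCENARIO=~/acrn-work/scenario.xml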
+
+If your scenario has a Service VM, you also need to build the ACRN kernel for
+the Service VM. The ACRN kernel source code provides a predefined configuration
+file and a makefile to build the ACRN kernel binary and associated components.
+The build can take 1-3 hours depending on the performance of your development
+computer and network.
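+
+The kernel build follows the usual Linux kernel workflow. Roughly, and
+assuming the predefined configuration file name used by the ACRN kernel
+source (the :ref:`gsg` has the exact steps):
+
+.. code-block:: bash
+
+   # Rough sketch of the Service VM kernel build
+   cd acrn-kernel
+   cp kernel_config_uefi_sos .config   # predefined kernel configuration
+   make olddefconfig
+   make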
+
+The :ref:`gsg` provides step-by-step instructions.
+
+For more information about the kernel, see :ref:`kernel-parameters`.
+
+.. _overview_dev_install:
+
+|icon_target| Step 6: Install and Run ACRN
+******************************************
+
+The last step is to make final changes to the target system configuration and
+then boot ACRN.
+
+At a high level, you will:
+
+* Copy the built ACRN hypervisor files, kernel files, and launch scripts from
+ the development computer to the target.
+
+* Configure GRUB to boot the ACRN hypervisor, pre-launched VMs, and Service VM
+  (a sketch of a GRUB menu entry appears after this list). Reboot the target,
+  and launch ACRN.
+
+* If your scenario contains a post-launched VM, install an OS image for the
+ post-launched VM and run the launch script you created in
+ :ref:`overview_dev_config_editor`.
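+
+For illustration, a GRUB menu entry for ACRN boots the hypervisor image with
+``multiboot2`` and passes the kernels to it as modules. A simplified,
+hypothetical entry might look like this (file names, the filesystem UUID, and
+kernel parameters are placeholders):
+
+.. code-block:: none
+
+   menuentry "ACRN Multiboot Ubuntu Service VM" {
+      load_video
+      insmod gzio
+      insmod part_gpt
+      insmod ext2
+      search --no-floppy --fs-uuid --set "UUID-OF-ROOT-PARTITION"
+      multiboot2 /boot/acrn.bin
+      module2 /boot/bzImage Linux_bzImage
+   }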
+
+For a basic example, see the :ref:`gsg`.
+
+For details about GRUB, see :ref:`using_grub`.
+
+For more complex examples of post-launched VMs, see the
+:ref:`develop_acrn_user_vm`.
+
+Next Steps
+**********
+
+* To get ACRN up and running for the first time, see the :ref:`gsg` for
+ step-by-step instructions.
+
+* If you have already completed the :ref:`gsg`, see the :ref:`develop_acrn` for
+ more information about complex scenarios, advanced features, and debugging.
diff --git a/doc/getting-started/roscube/roscube-gsg.rst b/doc/getting-started/roscube/roscube-gsg.rst
index 6b2dde0ed..1fc92d173 100644
--- a/doc/getting-started/roscube/roscube-gsg.rst
+++ b/doc/getting-started/roscube/roscube-gsg.rst
@@ -1,3 +1,5 @@
+:orphan:
+
.. _roscube-gsg:
Getting Started Guide for ACRN Industry Scenario With ROScube-I
diff --git a/doc/introduction/index.rst b/doc/introduction/index.rst
index 72370f3a5..4b8c4c9f6 100644
--- a/doc/introduction/index.rst
+++ b/doc/introduction/index.rst
@@ -23,15 +23,6 @@ partitioning hypervisors. The ACRN hypervisor architecture partitions
the system into different functional domains, with carefully selected
user VM sharing optimizations for IoT and embedded devices.
-ACRN Open Source Roadmap
-************************
-
-Stay informed on what's ahead for ACRN by visiting the
-`ACRN Project Roadmap `_ on the
-projectacrn.org website.
-
-For up-to-date happenings, visit the `ACRN blog `_.
-
ACRN High-Level Architecture
****************************
diff --git a/doc/reference/hardware.rst b/doc/reference/hardware.rst
index 18d8e9baa..a1e9bbe22 100644
--- a/doc/reference/hardware.rst
+++ b/doc/reference/hardware.rst
@@ -38,6 +38,7 @@ ACRN assumes the following conditions are satisfied from the Platform BIOS:
* There should be no conflict in resources among the PCI devices or with other platform devices.
+.. _hardware_tested:
Tested Platforms by ACRN Release
********************************
diff --git a/doc/try.rst b/doc/try.rst
index ec6891f3a..0f145adf1 100644
--- a/doc/try.rst
+++ b/doc/try.rst
@@ -3,21 +3,19 @@
Getting Started
###############
-After reading the :ref:`introduction`, use these guides to get started
+After reading the :ref:`introduction`, use these documents to get started
using ACRN in a reference setup. We'll show how to set up your
development and target hardware, and then how to boot the ACRN
-hypervisor, the Service VM, and a User VM on the Intel platform.
+hypervisor, the Service VM, and a User VM on a supported Intel target platform.
-ACRN is supported on platforms listed in :ref:`hardware`.
-
-Follow these getting started guides to give ACRN a try:
.. toctree::
:maxdepth: 1
reference/hardware
+ getting-started/overview_dev
getting-started/getting-started
- getting-started/building-from-source
- getting-started/roscube/roscube-gsg
- tutorials/using_hybrid_mode_on_nuc
- tutorials/using_partition_mode_on_nuc
+
+After getting familiar with ACRN development, check out the
+:ref:`develop_acrn` for information about more advanced scenarios and for
+enabling ACRN's advanced capabilities.
diff --git a/doc/tutorials/acrn-secure-boot-with-efi-stub.rst b/doc/tutorials/acrn-secure-boot-with-efi-stub.rst
index b619001f6..179fbdac7 100644
--- a/doc/tutorials/acrn-secure-boot-with-efi-stub.rst
+++ b/doc/tutorials/acrn-secure-boot-with-efi-stub.rst
@@ -57,7 +57,7 @@ Building
Build Dependencies
==================
-- Build Tools and Dependencies described in the :ref:`getting-started-building` guide
+- Build Tools and Dependencies described in the :ref:`gsg`
- ``gnu-efi`` package
- Service VM Kernel ``bzImage``
- pre-launched RTVM Kernel ``bzImage``
diff --git a/doc/tutorials/acrn_configuration_tool.rst b/doc/tutorials/acrn_configuration_tool.rst
index 90874205c..b8cb5c40c 100644
--- a/doc/tutorials/acrn_configuration_tool.rst
+++ b/doc/tutorials/acrn_configuration_tool.rst
@@ -142,7 +142,7 @@ toolset.
.. note:: Refer to :ref:`acrn_config_tool_ui` for more details on
the configuration editor.
-#. Build with your XML files. Refer to :ref:`getting-started-building` to build
+#. Build with your XML files. Refer to :ref:`gsg` to build
the ACRN hypervisor with your XML files on the host machine.
#. Deploy VMs and run ACRN hypervisor on the target board.
@@ -398,9 +398,6 @@ The ACRN configuration editor provides a web-based user interface for the follow
Prerequisites
=============
-.. _get acrn repo guide:
- https://projectacrn.github.io/latest/getting-started/building-from-source.html#get-the-acrn-hypervisor-source-code
-
- Clone the ACRN hypervisor repo
.. code-block:: bash
diff --git a/doc/tutorials/acrn_on_qemu.rst b/doc/tutorials/acrn_on_qemu.rst
index 243357f0c..913ee1edc 100644
--- a/doc/tutorials/acrn_on_qemu.rst
+++ b/doc/tutorials/acrn_on_qemu.rst
@@ -124,7 +124,7 @@ Install ACRN Hypervisor
.. important:: All the steps below are performed **inside** the Service VM guest that we built in the
previous section.
-#. Install the ACRN build tools and dependencies following the :ref:`install-build-tools-dependencies`
+#. Install the ACRN build tools and dependencies following the :ref:`gsg`
#. Clone ACRN repo and check out the ``v2.5`` tag.
@@ -141,7 +141,7 @@ Install ACRN Hypervisor
make BOARD=qemu SCENARIO=sdc
- For more details, refer to :ref:`getting-started-building`.
+ For more details, refer to :ref:`gsg`.
#. Install the ACRN Device Model and tools
@@ -156,7 +156,7 @@ Install ACRN Hypervisor
sudo cp build/hypervisor/acrn.32.out /boot
#. Clone and configure the Service VM kernel repository following the instructions at
- :ref:`build-and-install-ACRN-kernel` and using the ``v2.5`` tag. The User VM (L2 guest)
+ :ref:`gsg` and using the ``v2.5`` tag. The User VM (L2 guest)
uses the ``virtio-blk`` driver to mount the rootfs. This driver is included in the default
kernel configuration as of the ``v2.5`` tag.
diff --git a/doc/tutorials/debug.rst b/doc/tutorials/debug.rst
index 13fa5f35f..fe7ff263c 100644
--- a/doc/tutorials/debug.rst
+++ b/doc/tutorials/debug.rst
@@ -90,7 +90,7 @@ noted above. For example, add the following code into function
shell_cmd_help added information
Once you have instrumented the code, you need to rebuild the hypervisor and
-install it on your platform. Refer to :ref:`getting-started-building`
+install it on your platform. Refer to :ref:`gsg`
for detailed instructions on how to do that.
We set console log level to 5, and mem log level to 2 through the
@@ -205,8 +205,7 @@ shown in the following example:
4. After we have inserted the trace code addition, we need to rebuild
the ACRN hypervisor and install it on the platform. Refer to
- :ref:`getting-started-building` for
- detailed instructions on how to do that.
+ :ref:`gsg` for detailed instructions on how to do that.
5. Now we can use the following command in the Service VM console
to generate acrntrace data into the current directory::
diff --git a/doc/tutorials/enable_ivshmem.rst b/doc/tutorials/enable_ivshmem.rst
index 09faaa9f5..fe152edd6 100644
--- a/doc/tutorials/enable_ivshmem.rst
+++ b/doc/tutorials/enable_ivshmem.rst
@@ -37,7 +37,7 @@ steps:
communication and separate it with ``:``. For example, the
communication between VM0 and VM2, it can be written as ``0:2``
-- Build with the XML configuration, refer to :ref:`getting-started-building`.
+- Build with the XML configuration, refer to :ref:`gsg`.
Ivshmem DM-Land Usage
*********************
diff --git a/doc/tutorials/nvmx_virtualization.rst b/doc/tutorials/nvmx_virtualization.rst
index 5ea2b2f45..145983478 100644
--- a/doc/tutorials/nvmx_virtualization.rst
+++ b/doc/tutorials/nvmx_virtualization.rst
@@ -196,7 +196,7 @@ with these settings:
Since CPU sharing is disabled, you may need to delete all ``POST_STD_VM`` and ``KATA_VM`` VMs
from the scenario configuration file, which may share pCPU with the Service OS VM.
-#. Follow instructions in :ref:`getting-started-building` and build with this XML configuration.
+#. Follow instructions in :ref:`gsg` and build with this XML configuration.
Prepare for Service VM Kernel and rootfs
@@ -209,7 +209,7 @@ Instructions on how to boot Ubuntu as the Service VM can be found in
The Service VM kernel needs to be built from the ``acrn-kernel`` repo, and some changes
to the kernel ``.config`` are needed.
Instructions on how to build and install the Service VM kernel can be found
-in :ref:`Build and Install the ACRN Kernel `.
+in :ref:`gsg`.
Here is a summary of how to modify and build the kernel:
diff --git a/doc/tutorials/pre-launched-rt.rst b/doc/tutorials/pre-launched-rt.rst
index 4687c34f5..59951743d 100644
--- a/doc/tutorials/pre-launched-rt.rst
+++ b/doc/tutorials/pre-launched-rt.rst
@@ -50,7 +50,7 @@ install Ubuntu on the NVMe drive, and use grub to launch the Service VM.
Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe
===================================================================
-Follow the :ref:`install-ubuntu-rtvm-sata` guide to install RT rootfs on SATA drive.
+Follow the :ref:`gsg` to install the RT rootfs on the SATA drive.
The Kernel should
be on the NVMe drive along with GRUB. You'll need to copy the RT kernel
@@ -82,8 +82,8 @@ Add Pre-Launched RT Kernel Image to GRUB Config
===============================================
The last step is to modify the GRUB configuration file to load the Pre-Launched
-kernel. (For more information about this, see :ref:`Update Grub for the Ubuntu Service VM
-` section in the :ref:`gsg`.) The grub config file will look something
+kernel. (For more information about this, see
+the :ref:`gsg`.) The GRUB config file will look something
like this:
.. code-block:: none
diff --git a/doc/tutorials/rdt_configuration.rst b/doc/tutorials/rdt_configuration.rst
index 5931d514d..433a980d1 100644
--- a/doc/tutorials/rdt_configuration.rst
+++ b/doc/tutorials/rdt_configuration.rst
@@ -149,7 +149,7 @@ Configure RDT for VM Using VM Configuration
platform-specific XML file that helps ACRN identify RDT-supported
platforms. RDT on ACRN is enabled by configuring the ``FEATURES``
sub-section of the scenario XML file as in the below example. For
- details on building ACRN with a scenario, refer to :ref:`build-with-acrn-scenario`.
+ details on building ACRN with a scenario, refer to :ref:`gsg`.
.. code-block:: none
:emphasize-lines: 6
@@ -249,7 +249,7 @@ Configure RDT for VM Using VM Configuration
per-LP CLOS is applied to the core. If HT is turned on, don't place high
priority threads on sibling LPs running lower priority threads.
-#. Based on our scenario, build and install ACRN. See :ref:`build-with-acrn-scenario`
+#. Based on our scenario, build and install ACRN. See :ref:`gsg`
for building and installing instructions.
#. Restart the platform.
diff --git a/doc/tutorials/running_deb_as_serv_vm.rst b/doc/tutorials/running_deb_as_serv_vm.rst
index 12fe71006..c8f378b18 100644
--- a/doc/tutorials/running_deb_as_serv_vm.rst
+++ b/doc/tutorials/running_deb_as_serv_vm.rst
@@ -30,7 +30,7 @@ Use the following instructions to install Debian.
`_ to
install it on your board; we are using a Kaby Lake Intel NUC (NUC7i7DNHE)
in this tutorial.
-- :ref:`install-build-tools-dependencies` for ACRN.
+- Follow the :ref:`gsg` to install the ACRN build tools and dependencies.
- Update to the newer iASL:
.. code-block:: bash
diff --git a/doc/tutorials/running_deb_as_user_vm.rst b/doc/tutorials/running_deb_as_user_vm.rst
index d9c0da07d..8780b3f8d 100644
--- a/doc/tutorials/running_deb_as_user_vm.rst
+++ b/doc/tutorials/running_deb_as_user_vm.rst
@@ -12,7 +12,7 @@ Intel NUC Kit. If you have not, refer to the following instructions:
- Install a `Ubuntu 18.04 desktop ISO
`_
on your board.
-- Follow the instructions :ref:`install-ubuntu-Service VM-NVMe` guide to setup the Service VM.
+- Follow the instructions in the :ref:`gsg` to set up the Service VM.
We are using a Kaby Lake Intel NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.
diff --git a/doc/tutorials/running_ubun_as_user_vm.rst b/doc/tutorials/running_ubun_as_user_vm.rst
index a29974b18..4d4d39c81 100644
--- a/doc/tutorials/running_ubun_as_user_vm.rst
+++ b/doc/tutorials/running_ubun_as_user_vm.rst
@@ -12,7 +12,7 @@ Intel NUC Kit. If you have not, refer to the following instructions:
- Install a `Ubuntu 18.04 desktop ISO
`_
on your board.
-- Follow the instructions :ref:`install-ubuntu-Service VM-NVMe` to set up the Service VM.
+- Follow the instructions in :ref:`gsg` to set up the Service VM.
Before you start this tutorial, make sure the KVM tools are installed on the
diff --git a/doc/tutorials/setup_openstack_libvirt.rst b/doc/tutorials/setup_openstack_libvirt.rst
index f713f59ab..fb8123356 100644
--- a/doc/tutorials/setup_openstack_libvirt.rst
+++ b/doc/tutorials/setup_openstack_libvirt.rst
@@ -18,7 +18,7 @@ Install ACRN
************
#. Install ACRN using Ubuntu 20.04 as its Service VM. Refer to
- :ref:`Build and Install ACRN on Ubuntu `.
+ :ref:`gsg`.
#. Make the acrn-kernel using the `kernel_config_uefi_sos
`_
@@ -37,9 +37,8 @@ Install ACRN
available loop devices. Follow the `snaps guide
`_ to clean up old
snap revisions if you're running out of loop devices.
-#. Make sure the networking bridge ``acrn-br0`` is created. If not,
- create it using the instructions in
- :ref:`Build and Install ACRN on Ubuntu `.
+#. Make sure the networking bridge ``acrn-br0`` is created. See
+ :ref:`hostbridge_virt_hld` for more information.
Set Up and Launch LXC/LXD
*************************
@@ -155,7 +154,7 @@ Set Up ACRN Prerequisites Inside the Container
$ lxc exec openstack -- su -l stack
-2. Download and compile ACRN's source code. Refer to :ref:`getting-started-building`.
+2. Download and compile ACRN's source code. Refer to :ref:`gsg`.
.. note::
All tools and build dependencies must be installed before you run the first ``make`` command.
diff --git a/doc/tutorials/using_hybrid_mode_on_nuc.rst b/doc/tutorials/using_hybrid_mode_on_nuc.rst
index bf34e1e77..cd3dd497d 100644
--- a/doc/tutorials/using_hybrid_mode_on_nuc.rst
+++ b/doc/tutorials/using_hybrid_mode_on_nuc.rst
@@ -57,7 +57,7 @@ Prepare the Zephyr kernel that you will run in VM0 later.
Set-up ACRN on your device
**************************
-- Follow the instructions in :Ref:`getting-started-building` to build ACRN using the
+- Follow the instructions in :ref:`gsg` to build ACRN using the
``hybrid`` scenario. Here is the build command-line for the `Intel NUC Kit NUC7i7DNHE `_::
make BOARD=nuc7i7dnb SCENARIO=hybrid
diff --git a/doc/tutorials/using_partition_mode_on_nuc.rst b/doc/tutorials/using_partition_mode_on_nuc.rst
index 2de64a402..e997e74e3 100644
--- a/doc/tutorials/using_partition_mode_on_nuc.rst
+++ b/doc/tutorials/using_partition_mode_on_nuc.rst
@@ -141,7 +141,7 @@ Update ACRN Hypervisor Image
#. Clone the ACRN source code and configure the build options.
- Refer to :ref:`getting-started-building` to set up the ACRN build
+ Refer to :ref:`gsg` to set up the ACRN build
environment on your development workstation.
Clone the ACRN source code and check out to the tag v2.4:
diff --git a/doc/tutorials/using_vxworks_as_uos.rst b/doc/tutorials/using_vxworks_as_uos.rst
index eed4259b1..0a1505d60 100644
--- a/doc/tutorials/using_vxworks_as_uos.rst
+++ b/doc/tutorials/using_vxworks_as_uos.rst
@@ -92,7 +92,7 @@ Steps for Using VxWorks as User VM
You now have a virtual disk image with bootable VxWorks in ``VxWorks.img``.
-#. Follow :ref:`install-ubuntu-Service VM-NVMe` to boot the ACRN Service VM.
+#. Follow :ref:`gsg` to boot the ACRN Service VM.
#. Boot VxWorks as User VM.
diff --git a/doc/tutorials/using_zephyr_as_uos.rst b/doc/tutorials/using_zephyr_as_uos.rst
index 36f4004ff..7e16bf493 100644
--- a/doc/tutorials/using_zephyr_as_uos.rst
+++ b/doc/tutorials/using_zephyr_as_uos.rst
@@ -92,7 +92,7 @@ Steps for Using Zephyr as User VM
the ACRN Service VM, then you will need to transfer this image to the
ACRN Service VM (via, e.g, a USB drive or network)
-#. Follow :ref:`install-ubuntu-Service VM-NVMe`
+#. Follow :ref:`gsg`
to boot "The ACRN Service OS" based on Ubuntu OS (ACRN tag: v2.2)