doc: DX update for GSG

- Update the Getting Started material with a DX-inspired rewrite and
  simplification.
- Remove duplicate and out-of-date "Building from Source"
  document, deferring to the new GSG.
- Add a development overview document.
- Move other GSGs to the advanced guides section.
- Update links in other documents to aim at the new GSG.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Amy Reyes <amy.reyes@intel.com>
This commit is contained in:
David B. Kinder 2021-08-19 16:28:08 -07:00 committed by David Kinder
parent adcf51e5f5
commit 50094fb88b
57 changed files with 1056 additions and 897 deletions

@@ -3,6 +3,44 @@
Advanced Guides
###############
Advanced Scenario Tutorials
*********************************
.. rst-class:: rst-columns2
.. toctree::
:maxdepth: 1
tutorials/using_hybrid_mode_on_nuc
tutorials/using_partition_mode_on_nuc
Service VM Tutorials
********************
.. rst-class:: rst-columns2
.. toctree::
:maxdepth: 1
tutorials/running_deb_as_serv_vm
tutorials/using_yp
.. _develop_acrn_user_vm:
User VM Tutorials
*****************
.. rst-class:: rst-columns2
.. toctree::
:maxdepth: 1
tutorials/using_windows_as_uos
tutorials/running_ubun_as_user_vm
tutorials/running_deb_as_user_vm
tutorials/using_xenomai_as_uos
tutorials/using_vxworks_as_uos
tutorials/using_zephyr_as_uos
Configuration and Tools
***********************
@@ -24,33 +62,7 @@ Configuration and Tools
misc/debug_tools/**
misc/services/acrn_manager/**
Service VM Tutorials
********************
.. rst-class:: rst-columns2
.. toctree::
:maxdepth: 1
tutorials/running_deb_as_serv_vm
tutorials/using_yp
User VM Tutorials
*****************
.. rst-class:: rst-columns2
.. toctree::
:maxdepth: 1
tutorials/using_windows_as_uos
tutorials/running_ubun_as_user_vm
tutorials/running_deb_as_user_vm
tutorials/using_xenomai_as_uos
tutorials/using_vxworks_as_uos
tutorials/using_zephyr_as_uos
Enable ACRN Features
Advanced Features
********************
.. rst-class:: rst-columns2

@@ -1,266 +0,0 @@
.. _getting-started-building:
Build ACRN From Source
######################
Following a general embedded-system programming model, the ACRN
hypervisor is designed to be customized at build time per hardware
platform and per usage scenario, rather than one binary for all
scenarios.
The hypervisor binary is generated based on configuration settings in XML
files. Instructions about customizing these settings can be found in
:ref:`getting-started-hypervisor-configuration`.
One binary for all platforms and all usage scenarios is not
supported. Dynamic configuration parsing is not used in
the ACRN hypervisor for these reasons:
- **Maintain functional safety requirements.** Implementing dynamic parsing
introduces dynamic objects, which violate functional safety requirements.
- **Reduce complexity.** ACRN is a lightweight reference hypervisor, built for
embedded IoT. As new platforms for embedded systems are rapidly introduced,
support for one binary could require more and more complexity in the
hypervisor, which is something we strive to avoid.
- **Maintain small footprint.** Implementing dynamic parsing introduces
hundreds or thousands of lines of code. Avoiding dynamic parsing
helps keep the hypervisor's Lines of Code (LOC) in a desirable range (less
than 40K).
- **Improve boot time.** Dynamic parsing at runtime increases the boot
time. Using a build-time configuration and not dynamic parsing
helps improve the boot time of the hypervisor.
Build the ACRN hypervisor, device model, and tools from source by following
these steps.
.. contents::
:local:
:depth: 1
.. _install-build-tools-dependencies:
.. rst-class:: numbered-step
Install Build Tools and Dependencies
************************************
ACRN development is supported on popular Linux distributions, each with its
own way to install development tools. This user guide covers the steps to
configure and build ACRN natively on **Ubuntu 18.04 or newer**.
The following commands install the necessary tools for configuring and building
ACRN.
.. code-block:: none
sudo apt install gcc \
git \
make \
libssl-dev \
libpciaccess-dev \
uuid-dev \
libsystemd-dev \
libevent-dev \
libxml2-dev \
libxml2-utils \
libusb-1.0-0-dev \
python3 \
python3-pip \
libblkid-dev \
e2fslibs-dev \
pkg-config \
libnuma-dev \
liblz4-tool \
flex \
bison \
xsltproc \
clang-format
sudo pip3 install lxml xmlschema defusedxml
wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz
tar zxvf acpica-unix-20210105.tar.gz
cd acpica-unix-20210105
make clean && make iasl
sudo cp ./generate/unix/bin/iasl /usr/sbin/
.. rst-class:: numbered-step
Get the ACRN Hypervisor Source Code
***********************************
The `ACRN hypervisor <https://github.com/projectacrn/acrn-hypervisor/>`_
repository contains four main components:
1. The ACRN hypervisor code is in the ``hypervisor`` directory.
#. The ACRN device model code is in the ``devicemodel`` directory.
#. The ACRN debug tools source code is in the ``misc/debug_tools`` directory.
#. The ACRN online services source code is in the ``misc/services`` directory.
Enter the following to get the ACRN hypervisor source code:
.. code-block:: none
git clone https://github.com/projectacrn/acrn-hypervisor
.. _build-with-acrn-scenario:
.. rst-class:: numbered-step
Build With the ACRN Scenario
****************************
Currently, the ACRN hypervisor defines these typical usage scenarios:
SDC:
The SDC (Software Defined Cockpit) scenario defines a simple
automotive use case that includes one pre-launched Service VM and one
post-launched User VM.
LOGICAL_PARTITION:
This scenario defines two pre-launched VMs.
INDUSTRY:
This scenario is an example for industrial usage with up to eight VMs:
one pre-launched Service VM, five post-launched Standard VMs (for human
interaction, etc.), one post-launched RT VM (for real-time control),
and one Kata Container VM.
HYBRID:
This scenario defines a hybrid use case with three VMs: one
pre-launched Safety VM, one pre-launched Service VM, and one post-launched
Standard VM.
HYBRID_RT:
This scenario defines a hybrid use case with three VMs: one
pre-launched RTVM, one pre-launched Service VM, and one post-launched
Standard VM.
XML configuration files for these scenarios on supported boards are available
under the ``misc/config_tools/data`` directory.
Assuming that you are at the top level of the ``acrn-hypervisor`` directory, perform
the following to build the hypervisor, device model, and tools:
.. note::
The debug version is built by default. To build a release version,
build with ``RELEASE=y`` explicitly, regardless of whether a previous
build exists.
* Build the debug version of the ``INDUSTRY`` scenario on the ``nuc7i7dnb``:
.. code-block:: none
make BOARD=nuc7i7dnb SCENARIO=industry
* Build the release version of the ``HYBRID`` scenario on the ``whl-ipc-i5``:
.. code-block:: none
make BOARD=whl-ipc-i5 SCENARIO=hybrid RELEASE=y
* Build the release version of the ``HYBRID_RT`` scenario on the ``whl-ipc-i7``
(hypervisor only):
.. code-block:: none
make BOARD=whl-ipc-i7 SCENARIO=hybrid_rt RELEASE=y hypervisor
* Build the release version of the device model and tools:
.. code-block:: none
make RELEASE=y devicemodel tools
You can also build ACRN with your customized scenario:
* Build with your own scenario configuration on the ``nuc11tnbi5``, assuming the
scenario is defined in ``/path/to/scenario.xml``:
.. code-block:: none
make BOARD=nuc11tnbi5 SCENARIO=/path/to/scenario.xml
* Build with your own board and scenario configuration, assuming the board and
scenario XML files are ``/path/to/board.xml`` and ``/path/to/scenario.xml``:
.. code-block:: none
make BOARD=/path/to/board.xml SCENARIO=/path/to/scenario.xml
.. note::
ACRN uses XML files to summarize board characteristics and scenario
settings. The ``BOARD`` and ``SCENARIO`` variables accept board/scenario
names as well as paths to XML files. When board/scenario names are given, the
build system searches for XML files with the same names under
``misc/config_tools/data/``. When paths (absolute or relative) to the XML
files are given, the build system uses the files pointed at. If relative
paths are used, they are considered relative to the current working
directory.
See the :ref:`hardware` document for information about platform needs for each
scenario. For more instructions to customize scenarios, see
:ref:`getting-started-hypervisor-configuration` and
:ref:`acrn_configuration_tool`.
The build results are found in the ``build`` directory. You can specify
a different build directory by setting the ``make`` parameter ``O``,
for example: ``make O=build-nuc``.
To query the board, scenario, and build type of an existing build, use the
``hvshowconfig`` target:
.. code-block:: none
$ make BOARD=tgl-rvp SCENARIO=hybrid_rt hypervisor
...
$ make hvshowconfig
Build directory: /path/to/acrn-hypervisor/build/hypervisor
This build directory is configured with the settings below.
- BOARD = tgl-rvp
- SCENARIO = hybrid_rt
- RELEASE = n
.. _getting-started-hypervisor-configuration:
.. rst-class:: numbered-step
Modify the Hypervisor Configuration
***********************************
The ACRN hypervisor is built with a scenario encoded in an XML file (referred to
as the scenario XML hereinafter). The scenario XML of a build can be found at
``<build>/hypervisor/.scenario.xml``, where ``<build>`` is the name of the build
directory. You can make further changes to this file to adjust to your specific
requirements. Another ``make`` will rebuild the hypervisor using the updated
scenario XML.
The following commands show how to manually customize the scenario XML based on
the predefined ``INDUSTRY`` scenario for ``nuc7i7dnb`` and rebuild the
hypervisor. The ``hvdefconfig`` target generates the configuration files without
building the hypervisor, allowing users to tweak the configurations.
.. code-block:: none
make BOARD=nuc7i7dnb SCENARIO=industry hvdefconfig
vim build/hypervisor/.scenario.xml
#(Modify the XML file per your needs)
make
.. note::
A hypervisor build remembers the board and scenario previously
configured. Thus, there is no need to repeat ``BOARD`` and ``SCENARIO`` in the
second ``make`` above.
While the scenario XML files can be changed manually, we recommend you use the
ACRN web-based configuration app that provides valid options and descriptions
of the configuration entries. Refer to :ref:`acrn_config_tool_ui` for more
instructions.
Descriptions of each configuration entry in scenario XML files are also
available at :ref:`scenario-config-options`.

(One large file diff suppressed. Several new binary image files were added; their contents are not shown.)

@@ -0,0 +1,309 @@
.. _overview_dev:
Configuration and Development Overview
######################################
This overview is for developers who are new or relatively new to ACRN. It will
help you get familiar with ACRN basics: the ACRN components and the general
process for building an ACRN hypervisor.
The overview covers the process at an abstract and universal level.
* Abstract: the overall structure rather than detailed instructions
* Universal: applicable to most use cases
Although the overview describes the process as a series of steps, it's intended
to be a summary, not a step-by-step guide. Throughout the overview, you will see
links to the :ref:`gsg` for first-time setup instructions. Links to advanced
guides and additional information are also provided.
.. _overview_dev_dev_env:
Development Environment
***********************
The recommended development environment for ACRN consists of two machines:
* **Development computer** where you configure and build ACRN images
* **Target system** where you install and run ACRN images
.. image:: ./images/overview_host_target.png
:scale: 60%
ACRN requires a serial output from the target system to the development computer
for :ref:`debugging and system messaging <acrn-debug>`. If your target doesn't
have a serial output, :ref:`here are some tips for connecting a serial output
<connect_serial_port>`.
You will need a way to copy the built ACRN images from the development computer
to the target system. A USB drive is recommended.
General Process for Building an ACRN Hypervisor
***********************************************
The general process for configuring and building an ACRN hypervisor is
illustrated in the following figure. Additional details follow.
.. image:: ./images/overview_flow.png
.. _overview_dev_hw_scenario:
|icon_light| Step 1: Select Hardware and Scenario
*************************************************
.. |icon_light| image:: ./images/icon_light.png
:scale: 75%
ACRN configuration is hardware and scenario specific. You will need to learn
about supported ACRN hardware and scenarios, and select the right ones for your
needs.
Select Your Hardware
====================
ACRN supports certain Intel processors. Development kits are widely available.
See :ref:`hardware`.
.. _overview_dev_select_scenario:
Select Your Scenario
====================
A :ref:`scenario <usage-scenarios>` is a specific ACRN configuration, such as
the type and number of VMs that can be run, their attributes, and the resources
they have access to.
This image shows an example of an ACRN scenario to illustrate the types of VMs
that ACRN offers:
.. image:: ./images/acrn_terms.png
:scale: 75%
ACRN offers three types of VMs:
* **Pre-launched User VMs**: These VMs run independently of other VMs and own
dedicated hardware resources, such as a CPU core, memory, and I/O devices.
Other VMs may not even be aware of the existence of pre-launched VMs. The
configuration of these VMs is static and must be defined at build time. They
are well-suited for safety-critical applications.
* **Service VM**: This VM is required for scenarios that have post-launched VMs.
It controls post-launched VMs and provides device sharing services to them.
ACRN supports one Service VM.
* **Post-launched User VMs**: These VMs share hardware resources. Unlike
pre-launched VMs, you can change the configuration at run-time. They are
well-suited for non-safety applications, including human machine interface
(HMI), artificial intelligence (AI), computer vision, real-time, and others.
The names "pre-launched" and "post-launched" refer to the boot order of these
VMs. The ACRN hypervisor launches the pre-launched VMs first, then launches the
Service VM. The Service VM launches the post-launched VMs.
Due to the static configuration of pre-launched VMs, they are recommended only
if you need complete isolation from the rest of the system. Most use cases can
meet their requirements without pre-launched VMs. Even if your application has
stringent real-time requirements, start by testing the application on a
post-launched VM before considering a pre-launched VM.
To help accelerate the configuration process, ACRN offers the following
:ref:`predefined scenarios <usage-scenarios>`:
* **Shared scenario:** A configuration in which the VMs share resources
(post-launched).
* **Partitioned scenario:** A configuration in which the VMs are isolated from
each other and don't share resources (pre-launched).
* **Hybrid scenario:** A configuration that has both pre-launched and
post-launched VMs.
ACRN provides predefined configuration files and documentation to help you set
up these scenarios.
* New ACRN users start with the shared scenario, as described in the :ref:`gsg`.
* The other predefined scenarios are more complex. The :ref:`develop_acrn`
provide setup instructions.
You can copy the predefined configuration files and customize them for your use
case, as described later in :ref:`overview_dev_config_editor`.
|icon_host| Step 2: Prepare the Development Computer
****************************************************
.. |icon_host| image:: ./images/icon_host.png
:scale: 75%
Your development computer requires certain dependencies to configure and build
ACRN:
* Ubuntu OS
* Build tools
* ACRN hypervisor source code
* If your scenario has a Service VM: ACRN kernel source code
The :ref:`gsg` provides step-by-step instructions for setting up your
development computer.
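As a rough sketch, the setup follows the same pattern as the former Build ACRN
From Source instructions (the package list below reflects that document and may
change between releases; the :ref:`gsg` is authoritative):

.. code-block:: none

   # Install build tools and dependencies on Ubuntu
   sudo apt install gcc git make libssl-dev libpciaccess-dev uuid-dev \
        libsystemd-dev libevent-dev libxml2-dev libxml2-utils \
        libusb-1.0-0-dev python3 python3-pip libblkid-dev e2fslibs-dev \
        pkg-config libnuma-dev liblz4-tool flex bison xsltproc clang-format
   sudo pip3 install lxml xmlschema defusedxml

   # Get the ACRN hypervisor source code
   git clone https://github.com/projectacrn/acrn-hypervisor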
In the next step, :ref:`overview_dev_board_config`, you will need the board
inspector tool found in the ACRN hypervisor source code to collect information
about the target hardware and generate a board configuration file.
.. _overview_dev_board_config:
|icon_target| Step 3: Generate a Board Configuration File
*********************************************************
.. |icon_target| image:: ./images/icon_target.png
:scale: 75%
A **board configuration file** is an XML file that stores hardware-specific
information extracted from the target system. It describes the capacity of
hardware resources (such as processors and memory), platform power states,
available devices, and BIOS settings. The file is used to configure the ACRN
hypervisor, because each hypervisor instance is specific to your target
hardware.
The **board inspector tool** ``board_inspector.py`` enables you to generate a board
configuration file on the target system. The following sections provide an
overview and important information to keep in mind when using the tool.
Configure BIOS Settings
=======================
You must configure all of your target's BIOS settings before running the board
inspector tool, because the tool records the current BIOS settings in the board
configuration file.
Some BIOS settings are required by ACRN. The :ref:`gsg` provides a list of the
settings.
Use the Board Inspector to Generate a Board Configuration File
==============================================================
The board inspector tool requires certain dependencies to be present on the
target system:
* Ubuntu OS
* Tools and kernel command-line options that allow the board inspector to
collect information about the target hardware
After setting up the dependencies, you run the board inspector from the command line.
The tool generates a board configuration file specific to your hardware.
.. important:: Whenever you change the configuration of the board, such as BIOS
settings or PCI ports, you must generate a new board configuration file.
The :ref:`gsg` provides step-by-step instructions for using the tool. For more
information about the tool, see :ref:`acrn_config_workflow`.
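As an illustration only (the directory layout and invocation shown here are
assumptions based on this source tree; the board name ``my_board`` is a
placeholder), a board inspector run on the target looks roughly like:

.. code-block:: none

   # Run on the target system; my_board is a placeholder board name
   cd acrn-hypervisor/misc/config_tools/board_inspector
   sudo python3 board_inspector.py my_board
   # The tool writes the board configuration file (for example, my_board.xml)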
.. _overview_dev_config_editor:
|icon_host| Step 4: Generate a Scenario Configuration File and Launch Scripts
*****************************************************************************
As described in :ref:`overview_dev_select_scenario`, a scenario is a specific
ACRN configuration, such as the number of VMs that can be run, their attributes,
and the resources they have access to. These parameters are saved in a
**scenario configuration file** in XML format.
A **launch script** is a shell script that is used to create a post-launched VM.
The **configuration editor tool** ``acrn_configurator.py`` is a web-based user interface that
runs on your development computer. It enables you to customize, validate, and
generate scenario configuration files and launch scripts. The following sections
provide an overview and important information to keep in mind when using the
tool.
Generate a Scenario Configuration File
======================================
Before using the configuration editor tool to generate a scenario configuration
file, be sure you have the board configuration file that you generated in
:ref:`overview_dev_board_config`. The tool needs the board configuration file to
validate that your custom scenario is supported by the target hardware.
You can use the tool to create a new scenario configuration file or modify an
existing one, such as a predefined scenario described in
:ref:`overview_dev_hw_scenario`. The tool's GUI enables you to edit the
configurable items in the file, such as adding VMs, modifying VM attributes, or
deleting VMs. The tool validates your inputs against your board configuration
file. After validation is successful, the tool generates your custom scenario
configuration file.
Generate Launch Scripts
=======================
Before using the configuration editor tool to generate a launch script, be sure
you have your board configuration file and scenario configuration file. The tool
needs both files to validate your launch script configuration.
The process of customizing launch scripts is similar to the process of
customizing scenario configuration files. You can choose to create a new launch
script or modify an existing one. You can then use the GUI to edit the
configurable parameters. The tool validates your inputs against your board
configuration file and scenario configuration file. After validation is
successful, the tool generates your custom launch script.
.. note::
The configuration editor may not show all editable
parameters for scenario configuration files and launch scripts. You can edit
the parameters manually. See :ref:`acrn_config_data`.
The :ref:`gsg` walks you through a simple example of using the tool. For more
information about the tool, see :ref:`acrn_config_tool_ui`.
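As a sketch (the path and invocation are assumptions based on this source tree;
see :ref:`acrn_config_tool_ui` for the authoritative steps), launching the tool
looks roughly like:

.. code-block:: none

   # Run on the development computer
   cd acrn-hypervisor/misc/config_tools/config_app
   python3 acrn_configurator.py
   # Open the URL the tool prints in a local web browser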
|icon_host| Step 5: Build ACRN
******************************
The ACRN hypervisor source code provides a makefile to build the ACRN hypervisor
binary and associated components. In the ``make`` command, you need to specify
your board configuration file and scenario configuration file. The build
typically takes a few minutes.
If your scenario has a Service VM, you also need to build the ACRN kernel for
the Service VM. The ACRN kernel source code provides a predefined configuration
file and a makefile to build the ACRN kernel binary and associated components.
The build can take 1-3 hours depending on the performance of your development
computer and network.
The :ref:`gsg` provides step-by-step instructions.
For more information about the kernel, see :ref:`kernel-parameters`.
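In sketch form, using the XML-file forms of the ``BOARD`` and ``SCENARIO``
variables (the paths are placeholders for your own configuration files):

.. code-block:: none

   cd acrn-hypervisor
   # Debug build by default; add RELEASE=y for a release build
   make BOARD=/path/to/board.xml SCENARIO=/path/to/scenario.xml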
.. _overview_dev_install:
|icon_target| Step 6: Install and Run ACRN
******************************************
The last step is to make final changes to the target system configuration and
then boot ACRN.
At a high level, you will:
* Copy the built ACRN hypervisor files, kernel files, and launch scripts from
the development computer to the target.
* Configure GRUB to boot the ACRN hypervisor, pre-launched VMs, and Service VM.
Reboot the target, and launch ACRN.
* If your scenario contains a post-launched VM, install an OS image for the
post-launched VM and run the launch script you created in
:ref:`overview_dev_config_editor`.
For a basic example, see the :ref:`gsg`.
For details about GRUB, see :ref:`using_grub`.
For more complex examples of post-launched VMs, see the
:ref:`develop_acrn_user_vm`.
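As a hedged sketch of the GRUB step (the file names, module tags, and UUID are
placeholders; your actual entry depends on your build and is covered in
:ref:`using_grub`):

.. code-block:: none

   menuentry "ACRN Multiboot" {
      load_video
      insmod gzio
      insmod part_gpt
      insmod ext2
      search --no-floppy --fs-uuid --set <UUID-of-boot-partition>
      echo 'Loading ACRN hypervisor ...'
      multiboot2 /boot/acrn.bin
      module2 /boot/bzImage Linux_bzImage
   }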
Next Steps
**********
* To get ACRN up and running for the first time, see the :ref:`gsg` for
step-by-step instructions.
* If you have already completed the :ref:`gsg`, see the :ref:`develop_acrn` for
more information about complex scenarios, advanced features, and debugging.

@@ -1,3 +1,5 @@
:orphan:
.. _roscube-gsg:
Getting Started Guide for ACRN Industry Scenario With ROScube-I

@@ -23,15 +23,6 @@ partitioning hypervisors. The ACRN hypervisor architecture partitions
the system into different functional domains, with carefully selected
user VM sharing optimizations for IoT and embedded devices.
ACRN Open Source Roadmap
************************
Stay informed on what's ahead for ACRN by visiting the
`ACRN Project Roadmap <https://projectacrn.org/#resources>`_ on the
projectacrn.org website.
For up-to-date happenings, visit the `ACRN blog <https://projectacrn.org/blog/>`_.
ACRN High-Level Architecture
****************************

@@ -38,6 +38,7 @@ ACRN assumes the following conditions are satisfied from the Platform BIOS:
* There should be no conflict in resources among the PCI devices or with other platform devices.
.. _hardware_tested:
Tested Platforms by ACRN Release
********************************

@@ -3,21 +3,19 @@
Getting Started
###############
After reading the :ref:`introduction`, use these guides to get started
After reading the :ref:`introduction`, use these documents to get started
using ACRN in a reference setup. We'll show how to set up your
development and target hardware, and then how to boot the ACRN
hypervisor, the Service VM, and a User VM on the Intel platform.
hypervisor, the Service VM, and a User VM on a supported Intel target platform.
ACRN is supported on platforms listed in :ref:`hardware`.
Follow these getting started guides to give ACRN a try:
.. toctree::
:maxdepth: 1
reference/hardware
getting-started/overview_dev
getting-started/getting-started
getting-started/building-from-source
getting-started/roscube/roscube-gsg
tutorials/using_hybrid_mode_on_nuc
tutorials/using_partition_mode_on_nuc
After getting familiar with ACRN development, check out these
:ref:`develop_acrn` for information about more-advanced scenarios and enabling
ACRN advanced capabilities.

@@ -57,7 +57,7 @@ Building
Build Dependencies
==================
- Build Tools and Dependencies described in the :ref:`getting-started-building` guide
- Build Tools and Dependencies described in the :ref:`gsg` guide
- ``gnu-efi`` package
- Service VM Kernel ``bzImage``
- pre-launched RTVM Kernel ``bzImage``

@@ -142,7 +142,7 @@ toolset.
.. note:: Refer to :ref:`acrn_config_tool_ui` for more details on
the configuration editor.
#. Build with your XML files. Refer to :ref:`getting-started-building` to build
#. Build with your XML files. Refer to :ref:`gsg` to build
the ACRN hypervisor with your XML files on the host machine.
#. Deploy VMs and run ACRN hypervisor on the target board.
@@ -398,9 +398,6 @@ The ACRN configuration editor provides a web-based user interface for the follow
Prerequisites
=============
.. _get acrn repo guide:
https://projectacrn.github.io/latest/getting-started/building-from-source.html#get-the-acrn-hypervisor-source-code
- Clone the ACRN hypervisor repo
.. code-block:: bash

@@ -124,7 +124,7 @@ Install ACRN Hypervisor
.. important:: All the steps below are performed **inside** the Service VM guest that we built in the
previous section.
#. Install the ACRN build tools and dependencies following the :ref:`install-build-tools-dependencies`
#. Install the ACRN build tools and dependencies following the :ref:`gsg`
#. Clone ACRN repo and check out the ``v2.5`` tag.
@@ -141,7 +141,7 @@ Install ACRN Hypervisor
make BOARD=qemu SCENARIO=sdc
For more details, refer to :ref:`getting-started-building`.
For more details, refer to :ref:`gsg`.
#. Install the ACRN Device Model and tools
@@ -156,7 +156,7 @@ Install ACRN Hypervisor
sudo cp build/hypervisor/acrn.32.out /boot
#. Clone and configure the Service VM kernel repository following the instructions at
:ref:`build-and-install-ACRN-kernel` and using the ``v2.5`` tag. The User VM (L2 guest)
:ref:`gsg` and using the ``v2.5`` tag. The User VM (L2 guest)
uses the ``virtio-blk`` driver to mount the rootfs. This driver is included in the default
kernel configuration as of the ``v2.5`` tag.

@@ -90,7 +90,7 @@ noted above. For example, add the following code into function
shell_cmd_help added information
Once you have instrumented the code, you need to rebuild the hypervisor and
install it on your platform. Refer to :ref:`getting-started-building`
install it on your platform. Refer to :ref:`gsg`
for detailed instructions on how to do that.
We set console log level to 5, and mem log level to 2 through the
@@ -205,8 +205,7 @@ shown in the following example:
4. After we have inserted the trace code addition, we need to rebuild
the ACRN hypervisor and install it on the platform. Refer to
:ref:`getting-started-building` for
detailed instructions on how to do that.
:ref:`gsg` for detailed instructions on how to do that.
5. Now we can use the following command in the Service VM console
to generate acrntrace data into the current directory::

@@ -37,7 +37,7 @@ steps:
communication and separate it with ``:``. For example, the
communication between VM0 and VM2, it can be written as ``0:2``
- Build with the XML configuration, refer to :ref:`getting-started-building`.
- Build with the XML configuration, refer to :ref:`gsg`.
Ivshmem DM-Land Usage
*********************

@@ -196,7 +196,7 @@ with these settings:
Since CPU sharing is disabled, you may need to delete all ``POST_STD_VM`` and ``KATA_VM`` VMs
from the scenario configuration file, which may share pCPU with the Service OS VM.
#. Follow instructions in :ref:`getting-started-building` and build with this XML configuration.
#. Follow instructions in :ref:`gsg` and build with this XML configuration.
Prepare for Service VM Kernel and rootfs
@@ -209,7 +209,7 @@ Instructions on how to boot Ubuntu as the Service VM can be found in
The Service VM kernel needs to be built from the ``acrn-kernel`` repo, and some changes
to the kernel ``.config`` are needed.
Instructions on how to build and install the Service VM kernel can be found
in :ref:`Build and Install the ACRN Kernel <build-and-install-ACRN-kernel>`.
in :ref:`gsg`.
Here is a summary of how to modify and build the kernel:

@@ -50,7 +50,7 @@ install Ubuntu on the NVMe drive, and use grub to launch the Service VM.
Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe
===================================================================
Follow the :ref:`install-ubuntu-rtvm-sata` guide to install RT rootfs on SATA drive.
Follow the :ref:`gsg` to install RT rootfs on SATA drive.
The Kernel should
be on the NVMe drive along with GRUB. You'll need to copy the RT kernel
@@ -82,8 +82,8 @@ Add Pre-Launched RT Kernel Image to GRUB Config
===============================================
The last step is to modify the GRUB configuration file to load the Pre-Launched
kernel. (For more information about this, see :ref:`Update Grub for the Ubuntu Service VM
<gsg_update_grub>` section in the :ref:`gsg`.) The grub config file will look something
kernel. (For more information about this, see
the :ref:`gsg`.) The grub config file will look something
like this:
.. code-block:: none

@@ -149,7 +149,7 @@ Configure RDT for VM Using VM Configuration
platform-specific XML file that helps ACRN identify RDT-supported
platforms. RDT on ACRN is enabled by configuring the ``FEATURES``
sub-section of the scenario XML file as in the below example. For
details on building ACRN with a scenario, refer to :ref:`build-with-acrn-scenario`.
details on building ACRN with a scenario, refer to :ref:`gsg`.
.. code-block:: none
:emphasize-lines: 6
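The emphasized scenario-XML snippet is cut off in this hunk. As an
illustrative sketch (element names and values here are recalled from the
scenario schema, not from this patch, and should be verified against a
generated scenario file), the ``FEATURES`` sub-section enabling RDT looks
roughly like:

```xml
<FEATURES>
    <!-- other feature settings elided -->
    <RDT>
        <RDT_ENABLED>y</RDT_ENABLED>   <!-- turn on resource allocation -->
        <CDP_ENABLED>n</CDP_ENABLED>   <!-- code/data prioritization off -->
        <CLOS_MASK>0xff</CLOS_MASK>    <!-- capacity bitmask per CLOS -->
    </RDT>
</FEATURES>
```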
@@ -249,7 +249,7 @@ Configure RDT for VM Using VM Configuration
per-LP CLOS is applied to the core. If HT is turned on, don't place high
priority threads on sibling LPs running lower priority threads.
#. Based on our scenario, build and install ACRN. See :ref:`build-with-acrn-scenario`
#. Based on our scenario, build and install ACRN. See :ref:`gsg`
for building and installing instructions.
#. Restart the platform.

View File

@@ -30,7 +30,7 @@ Use the following instructions to install Debian.
<https://www.debian.org/releases/stable/amd64/index.en.html>`_ to
install it on your board; we are using a Kaby Lake Intel NUC (NUC7i7DNHE)
in this tutorial.
- :ref:`install-build-tools-dependencies` for ACRN.
- :ref:`gsg` for ACRN.
- Update to the newer iASL:
.. code-block:: bash

View File

@@ -12,7 +12,7 @@ Intel NUC Kit. If you have not, refer to the following instructions:
- Install a `Ubuntu 18.04 desktop ISO
<http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso?_ga=2.160010942.221344839.1566963570-491064742.1554370503>`_
on your board.
- Follow the instructions :ref:`install-ubuntu-Service VM-NVMe` guide to setup the Service VM.
- Follow the instructions in the :ref:`gsg` to set up the Service VM.
We are using a Kaby Lake Intel NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.

View File

@@ -12,7 +12,7 @@ Intel NUC Kit. If you have not, refer to the following instructions:
- Install a `Ubuntu 18.04 desktop ISO
<http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso?_ga=2.160010942.221344839.1566963570-491064742.1554370503>`_
on your board.
- Follow the instructions :ref:`install-ubuntu-Service VM-NVMe` to set up the Service VM.
- Follow the instructions in :ref:`gsg` to set up the Service VM.
Before you start this tutorial, make sure the KVM tools are installed on the

View File

@@ -18,7 +18,7 @@ Install ACRN
************
#. Install ACRN using Ubuntu 20.04 as its Service VM. Refer to
:ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`.
:ref:`gsg`.
#. Make the acrn-kernel using the `kernel_config_uefi_sos
<https://raw.githubusercontent.com/projectacrn/acrn-kernel/master/kernel_config_uefi_sos>`_
@@ -37,9 +37,8 @@ Install ACRN
available loop devices. Follow the `snaps guide
<https://maslosoft.com/kb/how-to-clean-old-snaps/>`_ to clean up old
snap revisions if you're running out of loop devices.
#. Make sure the networking bridge ``acrn-br0`` is created. If not,
create it using the instructions in
:ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`.
#. Make sure the networking bridge ``acrn-br0`` is created. See
:ref:`hostbridge_virt_hld` for more information.
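Since this hunk now points readers at the HLD rather than step-by-step bridge
setup, here is a minimal sketch of one way to make an ``acrn-br0`` bridge
persistent with systemd-networkd (the file paths, the interface name ``eno1``,
and the choice of systemd-networkd are assumptions for illustration, not taken
from the GSG):

```ini
# /etc/systemd/network/acrn-br0.netdev -- hypothetical example:
# defines the bridge device itself
[NetDev]
Name=acrn-br0
Kind=bridge

# /etc/systemd/network/uplink.network -- hypothetical example:
# enslaves the physical NIC (adjust "eno1" to your interface) to the bridge
[Match]
Name=eno1

[Network]
Bridge=acrn-br0
```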
Set Up and Launch LXC/LXD
*************************
@@ -155,7 +154,7 @@ Set Up ACRN Prerequisites Inside the Container
$ lxc exec openstack -- su -l stack
2. Download and compile ACRN's source code. Refer to :ref:`getting-started-building`.
2. Download and compile ACRN's source code. Refer to :ref:`gsg`.
.. note::
All tools and build dependencies must be installed before you run the first ``make`` command.

View File

@@ -57,7 +57,7 @@ Prepare the Zephyr kernel that you will run in VM0 later.
Set-up ACRN on your device
**************************
- Follow the instructions in :Ref:`getting-started-building` to build ACRN using the
- Follow the instructions in :ref:`gsg` to build ACRN using the
``hybrid`` scenario. Here is the build command-line for the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_::
make BOARD=nuc7i7dnb SCENARIO=hybrid

View File

@@ -141,7 +141,7 @@ Update ACRN Hypervisor Image
#. Clone the ACRN source code and configure the build options.
Refer to :ref:`getting-started-building` to set up the ACRN build
Refer to :ref:`gsg` to set up the ACRN build
environment on your development workstation.
Clone the ACRN source code and check out to the tag v2.4:

View File

@@ -92,7 +92,7 @@ Steps for Using VxWorks as User VM
You now have a virtual disk image with bootable VxWorks in ``VxWorks.img``.
#. Follow :ref:`install-ubuntu-Service VM-NVMe` to boot the ACRN Service VM.
#. Follow :ref:`gsg` to boot the ACRN Service VM.
#. Boot VxWorks as User VM.

View File

@@ -92,7 +92,7 @@ Steps for Using Zephyr as User VM
the ACRN Service VM, then you will need to transfer this image to the
ACRN Service VM (via, e.g., a USB drive or network).
#. Follow :ref:`install-ubuntu-Service VM-NVMe`
#. Follow :ref:`gsg`
to boot "The ACRN Service OS" based on Ubuntu (ACRN tag: v2.2)