doc: DX update for GSG

- Update the Getting Started material with a DX-inspired rewrite and
  simplification.
- Remove duplicate and out-of-date "Building from Source"
  document, deferring to the new GSG.
- Add a development overview document.
- Move other GSGs to the advanced guides section.
- Update links in other documents to point to the new GSG.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Amy Reyes <amy.reyes@intel.com>
David B. Kinder
2021-08-19 16:28:08 -07:00
committed by David Kinder
parent adcf51e5f5
commit 50094fb88b
57 changed files with 1056 additions and 897 deletions

View File

@@ -57,7 +57,7 @@ Building
Build Dependencies
==================
- Build Tools and Dependencies described in the :ref:`getting-started-building` guide
- Build Tools and Dependencies described in the :ref:`gsg` guide
- ``gnu-efi`` package
- Service VM Kernel ``bzImage``
- pre-launched RTVM Kernel ``bzImage``

View File

@@ -142,7 +142,7 @@ toolset.
.. note:: Refer to :ref:`acrn_config_tool_ui` for more details on
the configuration editor.
#. Build with your XML files. Refer to :ref:`getting-started-building` to build
#. Build with your XML files. Refer to :ref:`gsg` to build
the ACRN hypervisor with your XML files on the host machine.
#. Deploy VMs and run ACRN hypervisor on the target board.
@@ -398,9 +398,6 @@ The ACRN configuration editor provides a web-based user interface for the follow
Prerequisites
=============
.. _get acrn repo guide:
https://projectacrn.github.io/latest/getting-started/building-from-source.html#get-the-acrn-hypervisor-source-code
- Clone the ACRN hypervisor repo
.. code-block:: bash

View File

@@ -124,7 +124,7 @@ Install ACRN Hypervisor
.. important:: All the steps below are performed **inside** the Service VM guest that we built in the
previous section.
#. Install the ACRN build tools and dependencies following the :ref:`install-build-tools-dependencies`
#. Install the ACRN build tools and dependencies following the :ref:`gsg`
#. Clone ACRN repo and check out the ``v2.5`` tag.
@@ -141,7 +141,7 @@ Install ACRN Hypervisor
make BOARD=qemu SCENARIO=sdc
For more details, refer to :ref:`getting-started-building`.
For more details, refer to :ref:`gsg`.
#. Install the ACRN Device Model and tools
@@ -156,7 +156,7 @@ Install ACRN Hypervisor
sudo cp build/hypervisor/acrn.32.out /boot
#. Clone and configure the Service VM kernel repository following the instructions at
:ref:`build-and-install-ACRN-kernel` and using the ``v2.5`` tag. The User VM (L2 guest)
:ref:`gsg` and using the ``v2.5`` tag. The User VM (L2 guest)
uses the ``virtio-blk`` driver to mount the rootfs. This driver is included in the default
kernel configuration as of the ``v2.5`` tag.

View File

@@ -90,7 +90,7 @@ noted above. For example, add the following code into function
shell_cmd_help added information
Once you have instrumented the code, you need to rebuild the hypervisor and
install it on your platform. Refer to :ref:`getting-started-building`
install it on your platform. Refer to :ref:`gsg`
for detailed instructions on how to do that.
We set the console log level to 5 and the mem log level to 2 through the
@@ -205,8 +205,7 @@ shown in the following example:
4. After we have inserted the trace code addition, we need to rebuild
the ACRN hypervisor and install it on the platform. Refer to
:ref:`getting-started-building` for
detailed instructions on how to do that.
:ref:`gsg` for detailed instructions on how to do that.
5. Now we can use the following command in the Service VM console
to generate acrntrace data into the current directory::

View File

@@ -37,7 +37,7 @@ steps:
communication and separate it with ``:``. For example, the
communication between VM0 and VM2 can be written as ``0:2``.
- Build with the XML configuration, refer to :ref:`getting-started-building`.
- Build with the XML configuration, refer to :ref:`gsg`.
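As a quick illustration (not part of the ACRN tooling), the ``A:B`` communication spec described above can be split into its two VM IDs with plain shell parameter expansion:

```shell
# Illustrative only: split a "<vm>:<vm>" communication spec, such as the
# "0:2" example above, into its two VM IDs.
spec="0:2"
vm_a="${spec%%:*}"   # portion before the ':'  -> "0"
vm_b="${spec##*:}"   # portion after the ':'   -> "2"
echo "ivshmem channel between VM${vm_a} and VM${vm_b}"
```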
Ivshmem DM-Land Usage
*********************

View File

@@ -196,7 +196,7 @@ with these settings:
Since CPU sharing is disabled, you may need to delete all ``POST_STD_VM`` and ``KATA_VM`` VMs
from the scenario configuration file, which may share pCPU with the Service OS VM.
#. Follow instructions in :ref:`getting-started-building` and build with this XML configuration.
#. Follow instructions in :ref:`gsg` and build with this XML configuration.
Prepare for Service VM Kernel and rootfs
@@ -209,7 +209,7 @@ Instructions on how to boot Ubuntu as the Service VM can be found in
The Service VM kernel needs to be built from the ``acrn-kernel`` repo, and some changes
to the kernel ``.config`` are needed.
Instructions on how to build and install the Service VM kernel can be found
in :ref:`Build and Install the ACRN Kernel <build-and-install-ACRN-kernel>`.
in :ref:`gsg`.
Here is a summary of how to modify and build the kernel:

View File

@@ -50,7 +50,7 @@ install Ubuntu on the NVMe drive, and use grub to launch the Service VM.
Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe
===================================================================
Follow the :ref:`install-ubuntu-rtvm-sata` guide to install RT rootfs on SATA drive.
Follow the :ref:`gsg` to install RT rootfs on SATA drive.
The kernel should
be on the NVMe drive along with GRUB. You'll need to copy the RT kernel
@@ -82,8 +82,8 @@ Add Pre-Launched RT Kernel Image to GRUB Config
===============================================
The last step is to modify the GRUB configuration file to load the Pre-Launched
kernel. (For more information about this, see :ref:`Update Grub for the Ubuntu Service VM
<gsg_update_grub>` section in the :ref:`gsg`.) The grub config file will look something
kernel. (For more information about this, see
the :ref:`gsg`.) The grub config file will look something
like this:
.. code-block:: none
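The body of this code block is truncated in the diff. As a hedged sketch only, a GRUB entry that boots ACRN with a pre-launched RT kernel typically looks something like the following (the UUID and all file names are placeholders to adapt to your system):

```text
menuentry "ACRN hypervisor (hypothetical example)" {
    insmod part_gpt
    insmod ext2
    search --no-floppy --fs-uuid --set=root <UUID-of-boot-partition>
    multiboot2 /boot/acrn.bin
    module2 /boot/rt-bzImage RT_bzImage
    module2 /boot/bzImage Linux_bzImage
}
```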

View File

@@ -149,7 +149,7 @@ Configure RDT for VM Using VM Configuration
platform-specific XML file that helps ACRN identify RDT-supported
platforms. RDT on ACRN is enabled by configuring the ``FEATURES``
sub-section of the scenario XML file as in the below example. For
details on building ACRN with a scenario, refer to :ref:`build-with-acrn-scenario`.
details on building ACRN with a scenario, refer to :ref:`gsg`.
.. code-block:: none
:emphasize-lines: 6
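The example block itself is cut off in this diff. As an assumption-laden sketch of what a ``FEATURES`` sub-section enabling RDT might contain (element names and values vary by ACRN release and board file):

```xml
<!-- Hypothetical sketch only: check your release's scenario schema
     for the exact element names and valid values. -->
<FEATURES>
    <RDT>
        <RDT_ENABLED>y</RDT_ENABLED>
        <CDP_ENABLED>n</CDP_ENABLED>
        <CLOS_MASK>0xff</CLOS_MASK>
    </RDT>
</FEATURES>
```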
@@ -249,7 +249,7 @@ Configure RDT for VM Using VM Configuration
per-LP CLOS is applied to the core. If HT is turned on, don't place high
priority threads on sibling LPs running lower priority threads.
#. Based on our scenario, build and install ACRN. See :ref:`build-with-acrn-scenario`
#. Based on our scenario, build and install ACRN. See :ref:`gsg`
for building and installing instructions.
#. Restart the platform.

View File

@@ -30,7 +30,7 @@ Use the following instructions to install Debian.
<https://www.debian.org/releases/stable/amd64/index.en.html>`_ to
install it on your board; we are using a Kaby Lake Intel NUC (NUC7i7DNHE)
in this tutorial.
- :ref:`install-build-tools-dependencies` for ACRN.
- :ref:`gsg` for ACRN.
- Update to the newer iASL:
.. code-block:: bash

View File

@@ -12,7 +12,7 @@ Intel NUC Kit. If you have not, refer to the following instructions:
- Install a `Ubuntu 18.04 desktop ISO
<http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso?_ga=2.160010942.221344839.1566963570-491064742.1554370503>`_
on your board.
- Follow the instructions :ref:`install-ubuntu-Service VM-NVMe` guide to setup the Service VM.
- Follow the instructions in the :ref:`gsg` to set up the Service VM.
We are using a Kaby Lake Intel NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.

View File

@@ -12,7 +12,7 @@ Intel NUC Kit. If you have not, refer to the following instructions:
- Install a `Ubuntu 18.04 desktop ISO
<http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso?_ga=2.160010942.221344839.1566963570-491064742.1554370503>`_
on your board.
- Follow the instructions :ref:`install-ubuntu-Service VM-NVMe` to set up the Service VM.
- Follow the instructions in :ref:`gsg` to set up the Service VM.
Before you start this tutorial, make sure the KVM tools are installed on the

View File

@@ -18,7 +18,7 @@ Install ACRN
************
#. Install ACRN using Ubuntu 20.04 as its Service VM. Refer to
:ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`.
:ref:`gsg`.
#. Make the acrn-kernel using the `kernel_config_uefi_sos
<https://raw.githubusercontent.com/projectacrn/acrn-kernel/master/kernel_config_uefi_sos>`_
@@ -37,9 +37,8 @@ Install ACRN
available loop devices. Follow the `snaps guide
<https://maslosoft.com/kb/how-to-clean-old-snaps/>`_ to clean up old
snap revisions if you're running out of loop devices.
#. Make sure the networking bridge ``acrn-br0`` is created. If not,
create it using the instructions in
:ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`.
#. Make sure the networking bridge ``acrn-br0`` is created. See
:ref:`hostbridge_virt_hld` for more information.
Set Up and Launch LXC/LXD
*************************
@@ -155,7 +154,7 @@ Set Up ACRN Prerequisites Inside the Container
$ lxc exec openstack -- su -l stack
2. Download and compile ACRN's source code. Refer to :ref:`getting-started-building`.
2. Download and compile ACRN's source code. Refer to :ref:`gsg`.
.. note::
All tools and build dependencies must be installed before you run the first ``make`` command.

View File

@@ -57,7 +57,7 @@ Prepare the Zephyr kernel that you will run in VM0 later.
Set-up ACRN on your device
**************************
- Follow the instructions in :Ref:`getting-started-building` to build ACRN using the
- Follow the instructions in :ref:`gsg` to build ACRN using the
``hybrid`` scenario. Here is the build command-line for the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_::
make BOARD=nuc7i7dnb SCENARIO=hybrid

View File

@@ -141,7 +141,7 @@ Update ACRN Hypervisor Image
#. Clone the ACRN source code and configure the build options.
Refer to :ref:`getting-started-building` to set up the ACRN build
Refer to :ref:`gsg` to set up the ACRN build
environment on your development workstation.
Clone the ACRN source code and check out to the tag v2.4:

View File

@@ -92,7 +92,7 @@ Steps for Using VxWorks as User VM
You now have a virtual disk image with bootable VxWorks in ``VxWorks.img``.
#. Follow :ref:`install-ubuntu-Service VM-NVMe` to boot the ACRN Service VM.
#. Follow :ref:`gsg` to boot the ACRN Service VM.
#. Boot VxWorks as User VM.

View File

@@ -92,7 +92,7 @@ Steps for Using Zephyr as User VM
the ACRN Service VM, then you will need to transfer this image to the
ACRN Service VM (via, e.g., a USB drive or network)
#. Follow :ref:`install-ubuntu-Service VM-NVMe`
#. Follow :ref:`gsg`
to boot "The ACRN Service OS" based on Ubuntu OS (ACRN tag: v2.2)