.. _running_deb_as_serv_vm:

Running Debian as the Service VM
################################

The `Debian Project <https://www.debian.org/>`_ is an association of individuals who have made common cause to create a `free <https://www.debian.org/intro/free>`_ operating system. The `latest stable Debian release <https://www.debian.org/releases/stable/>`_ is 10.0.

This tutorial describes how to use Debian 10.0 instead of `Clear Linux OS <https://clearlinux.org>`_ as the Service VM with the ACRN hypervisor.

Prerequisites
*************

Use the following instructions to install Debian:

- Navigate to the `Debian 10 ISO <https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/>`_ page. Select and download **debian-10.1.0-amd64-netinst.iso** (scroll down to the bottom of the page).
- Follow the `Debian installation guide <https://www.debian.org/releases/stable/amd64/index.en.html>`_ to install it on your NUC; we are using an Intel Kaby Lake NUC (NUC7i7DNHE) in this tutorial.
- Install the necessary development tools; refer to :ref:`install-build-tools-dependencies` for ACRN.
- Update to the latest iASL (required by the ACRN Device Model):

  .. code-block:: bash

     $ sudo apt update
     $ sudo apt install m4 bison flex zlib1g-dev
     $ cd ~
     $ wget https://acpica.org/sites/acpica/files/acpica-unix-20190816.tar.gz
     $ tar zxvf acpica-unix-20190816.tar.gz
     $ cd acpica-unix-20190816
     $ make clean && make iasl
     $ sudo cp ./generate/unix/bin/iasl /usr/sbin/
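
  You can confirm which ``iasl`` binary will be picked up; a quick sanity check (assuming ``/usr/sbin`` is on your ``PATH``):

  .. code-block:: bash

     $ which iasl    # should print /usr/sbin/iasl
     $ iasl -v       # should report the ACPICA version, e.g. 20190816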

Validated Versions
******************

- **Debian version:** 10.0 (buster)
- **ACRN hypervisor tag:** acrn-2019w35.1-140000p
- **Debian Service VM Kernel version:** 4.19.68-84.iot-lts2018-sos

Install ACRN on the Debian VM
*****************************

1. Clone the `Project ACRN <https://github.com/projectacrn/acrn-hypervisor>`_ code repository:

   .. code-block:: bash

      $ cd ~
      $ git clone https://github.com/projectacrn/acrn-hypervisor
      $ cd acrn-hypervisor
      $ git checkout acrn-2019w35.1-140000p

#. Build and install ACRN:

   .. code-block:: bash

      $ make BOARD=nuc7i7dnb FIRMWARE=uefi
      $ sudo make install

#. Install the hypervisor.

   The ACRN Device Model and tools were installed as part of a previous step. However, ``make install`` does not install the hypervisor (``acrn.efi``) on your EFI System Partition (ESP), nor does it configure your EFI firmware to boot it automatically. Follow the steps below to perform these operations and complete the ACRN installation. Note that we are using a SATA disk in this section.

   a. Add the ACRN hypervisor (as the root user):

      .. code-block:: bash

         $ sudo mkdir /boot/efi/EFI/acrn/
         $ sudo cp ~/acrn-hypervisor/build/hypervisor/acrn.efi /boot/efi/EFI/acrn/
         $ sudo efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 -L "ACRN Hypervisor" -u "bootloader=\EFI\debian\grubx64.efi "
         $ sudo efibootmgr -v    # shows output as below
         Timeout: 1 seconds
         BootOrder: 0009,0003,0004,0007,0005,0006,0001,0008,0002,0000
         Boot0000* ACRN VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
         Boot0001* ACRN VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
         Boot0002* debian VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
         Boot0003* UEFI : INTEL SSDPEKKW256G8 : PART 0 : OS Bootloader PciRoot(0x0)/Pci(0x1d,0x0)/Pci(0x0,0x0)/NVMe(0x1,00-00-00-00-00-00-00-00)/HD(1,GPT,89d38801-d55b-4bf6-be05-79a5a7b87e66,0x800,0x47000)..BO
         Boot0004* UEFI : INTEL SSDPEKKW256G8 : PART 3 : OS Bootloader PciRoot(0x0)/Pci(0x1d,0x0)/Pci(0x0,0x0)/NVMe(0x1,00-00-00-00-00-00-00-00)/HD(4,GPT,550e1da5-6533-4e64-8d3f-0beadfb20d33,0x1c6da800,0x47000)..BO
         Boot0005* UEFI : LAN : PXE IP4 Intel(R) Ethernet Connection I219-LM PciRoot(0x0)/Pci(0x1f,0x6)/MAC(54b2030f4b84,0)/IPv4(0.0.0.00.0.0.0,0,0)..BO
         Boot0006* UEFI : LAN : PXE IP6 Intel(R) Ethernet Connection I219-LM PciRoot(0x0)/Pci(0x1f,0x6)/MAC(54b2030f4b84,0)/IPv6([::]:<->[::]:,0,0)..BO
         Boot0007* UEFI : Built-in EFI Shell VenMedia(5023b95c-db26-429b-a648-bd47664c8012)..BO
         Boot0008* Linux bootloader VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
         Boot0009* ACRN Hypervisor HD(1,GPT,94597852-7166-4216-b0f1-cef5fd1f2349,0x800,0x100000)/File(\EFI\acrn\acrn.efi)b.o.o.t.l.o.a.d.e.r.=.\.E.F.I.\.d.e.b.i.a.n.\.g.r.u.b.x.6.4...e.f.i.

      .. note::
         Note the extra space at the end of the EFI command-line options
         string above. This is a workaround for a current `efi-stub
         bootloader name issue <https://github.com/projectacrn/acrn-hypervisor/issues/4520>`_.
         It ensures that the end of the string is properly detected.
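
      If earlier attempts left duplicate boot entries behind (such as ``Boot0000``/``Boot0001`` in the output above), you can remove a stale entry by its number; a hedged example using standard ``efibootmgr`` options:

      .. code-block:: bash

         $ sudo efibootmgr -b 0001 -B    # delete entry Boot0001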

   b. Install the Service VM kernel and reboot:

      .. code-block:: bash

         $ mkdir ~/sos-kernel && cd ~/sos-kernel
         $ wget https://download.clearlinux.org/releases/30930/clear/x86_64/os/Packages/linux-iot-lts2018-sos-4.19.68-84.x86_64.rpm
         $ sudo apt install rpm2cpio
         $ rpm2cpio linux-iot-lts2018-sos-4.19.68-84.x86_64.rpm | cpio -idmv
         $ sudo cp -r ~/sos-kernel/usr/lib/modules/4.19.68-84.iot-lts2018-sos /lib/modules/
         $ sudo mkdir /boot/acrn/
         $ sudo cp ~/sos-kernel/usr/lib/kernel/org.clearlinux.iot-lts2018-sos.4.19.68-84 /boot/acrn/
         $ sudo vi /etc/grub.d/40_custom
         <Add the menu entry below>
         menuentry 'ACRN Debian Service VM' {
                 recordfail
                 load_video
                 insmod gzio
                 insmod part_gpt
                 insmod ext2

                 linux /boot/acrn/org.clearlinux.iot-lts2018-sos.4.19.68-84 console=tty0 console=ttyS0 root=/dev/sda2 rw rootwait ignore_loglevel no_timer_check consoleblank=0 i915.nuclear_pageflip=1 i915.avail_planes_per_pipe=0x01010F i915.domain_plane_owners=0x011111110000 i915.enable_gvt=1 i915.enable_guc=0 hvlog=2M@0x1FE00000 memmap=2M\$0x1FE00000
         }
         $ sudo vi /etc/default/grub
         <Set the default GRUB entry to the ACRN Debian Service VM entry>
         GRUB_DEFAULT=5
         $ sudo update-grub
         $ sudo reboot
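
      The value ``5`` assumes the new entry is the sixth item in the generated GRUB menu. If your menu differs, GRUB also accepts the menu entry title instead of an index; a sketch, not part of the original instructions:

      .. code-block:: bash

         GRUB_DEFAULT="ACRN Debian Service VM"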

      You should see the GRUB menu with the new "ACRN Debian Service VM" entry. Select it and proceed to boot the platform. The system will start the Debian desktop, and you can log in as before.

#. Log in to the Debian Service VM and check the ACRN status:

   .. code-block:: bash

      $ dmesg | grep ACRN
      [    0.000000] Hypervisor detected: ACRN
      [    0.981476] ACRNTrace: Initialized acrn trace module with 4 cpu
      [    0.982837] ACRN HVLog: Failed to init last hvlog devs, errno -19
      [    0.983023] ACRN HVLog: Initialized hvlog module with 4 cpu

      $ uname -a
      Linux debian 4.19.68-84.iot-lts2018-sos #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64 GNU/Linux

#. Enable network sharing to give network access to the User VM:

   .. code-block:: bash

      $ sudo systemctl enable systemd-networkd
      $ sudo systemctl start systemd-networkd
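
   To confirm that ``systemd-networkd`` is up and managing your links, you can list them (the exact output depends on your hardware):

   .. code-block:: bash

      $ networkctl list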

#. Follow :ref:`prepare-UOS` to start a User VM.
.. _running_deb_as_user_vm:

Running Debian as the User VM
#############################

Prerequisites
*************

This tutorial assumes you have already set up the ACRN Service VM on an
Intel NUC Kit. If you have not, refer to the following instructions:

- Install a `Clear Linux OS <https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_ on your NUC kit.
- Follow steps 1 - 4 of the instructions at :ref:`quick-setup-guide` to set up the Service VM automatically on your NUC kit.

We are using an Intel Kaby Lake NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.

Before you start this tutorial, make sure the KVM tools are installed on the
development machine and set **IGD Aperture Size to 512** in the BIOS
settings (refer to :numref:`intel-bios-deb`). Connect two monitors to your
NUC:

.. code-block:: none

   $ sudo apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virt-manager ovmf

.. figure:: images/debian-uservm-0.png
   :align: center
   :name: intel-bios-deb

   Intel Visual BIOS
We installed these KVM tools on Ubuntu 18.04; refer to the table below for our hardware configurations.

Hardware Configurations
=======================

+--------------------------+------------------+----------------------+----------------------------------------------+
| Platform (Intel x86)     | Product/Kit Name | Hardware             | Description                                  |
+==========================+==================+======================+==============================================+
| Kaby Lake                | NUC7i7DNH        | Processor            | - Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz   |
|                          |                  +----------------------+----------------------------------------------+
|                          |                  | Graphics             | - UHD Graphics 620                           |
|                          |                  |                      | - Two HDMI 2.0a ports supporting 4K at 60 Hz |
|                          |                  +----------------------+----------------------------------------------+
|                          |                  | System memory        | - 8GiB SODIMM DDR4 2400 MHz                  |
|                          |                  +----------------------+----------------------------------------------+
|                          |                  | Storage capabilities | - 1TB WDC WD10SPZX-22Z                       |
+--------------------------+------------------+----------------------+----------------------------------------------+
| PC (development machine) |                  | Processor            | - Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz    |
|                          |                  +----------------------+----------------------------------------------+
|                          |                  | System memory        | - 2GiB DIMM DDR3 Synchronous 1333 MHz x 4    |
|                          |                  +----------------------+----------------------------------------------+
|                          |                  | Storage capabilities | - 1TB WDC WD10JPLX-00M                       |
+--------------------------+------------------+----------------------+----------------------------------------------+

Validated Versions
==================

- **Clear Linux version:** 30920
- **ACRN hypervisor tag:** acrn-2019w36.2-140000p
- **Service VM Kernel version:** 4.19.68-84.iot-lts2018-sos
Build the Debian KVM Image
**************************

This tutorial describes how to build a Debian 10 KVM image. The next few
steps detail how to use the Debian CD-ROM (ISO) image to install Debian
10 onto a virtual disk.

#. Download the Debian ISO on your development machine:

   .. code-block:: none

      $ mkdir ~/debian10 && cd ~/debian10
      $ wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-10.0.0-amd64-netinst.iso
#. Install the Debian ISO via the virt-manager tool:

   .. code-block:: none

      $ sudo virt-manager

#. Verify that you can see the main menu as shown in :numref:`vmmanager-debian` below.

   .. figure:: images/debian-uservm-1.png
      :align: center
      :name: vmmanager-debian

      Virtual Machine Manager
#. Right-click **QEMU/KVM** and select **New**.

   a. Choose **Local install media (ISO image or CDROM)** and then click **Forward**. A **Create a new virtual machine** box displays, as shown in :numref:`newVM-debian` below.

      .. figure:: images/debian-uservm-2.png
         :align: center
         :name: newVM-debian

         Create a New Virtual Machine

   b. Choose **Use ISO image** and click **Browse** - **Browse Local**. Select the ISO that you downloaded in Step 1 above.

   c. Choose the **OS type:** Linux, **Version:** Debian Stretch and then click **Forward**.

   d. Select **Forward** if you do not need to make customized CPU settings.

   e. Choose **Create a disk image for the virtual machine**. Set the storage to 20 GB or more if necessary and click **Forward**.

   f. Rename the image if you desire. You must check the **customize configuration before install** option before you finish all stages.
#. Verify that you can see the Overview screen as set up, as shown in :numref:`debian10-setup` below:

   .. figure:: images/debian-uservm-3.png
      :align: center
      :name: debian10-setup

      Debian Setup Overview

#. Complete the Debian installation. Verify that you have set up a vda disk partition, as shown in :numref:`partition-vda` below:

   .. figure:: images/debian-uservm-4.png
      :align: center
      :name: partition-vda

      Virtual Disk (vda) partition
#. Upon installation completion, the KVM image is created in the ``/var/lib/libvirt/images`` folder. Convert the ``qcow2`` image to a raw ``img`` **as the root user**:

   .. code-block:: none

      $ cd ~/debian10
      $ qemu-img convert -f qcow2 -O raw /var/lib/libvirt/images/debian10.qcow2 debian10.img
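
   A quick way to sanity-check the conversion (``qemu-img`` reports the format and virtual size of the resulting image):

   .. code-block:: none

      $ qemu-img info debian10.img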

Launch the Debian Image as the User VM
**************************************

Re-use and modify the ``launch_win.sh`` script in order to launch the new Debian 10 User VM.

.. note:: This tutorial assumes SATA is the default boot drive; replace "/dev/sda1" mentioned below with "/dev/nvme0n1p1" if you are using an NVMe drive.
1. Copy debian10.img to your NUC:

   .. code-block:: none

      # scp ~/debian10/debian10.img user_name@ip_address:~/debian10.img

#. Log in to the ACRN Service VM, and create a launch script from the existing script:

   .. code-block:: none

      $ cd ~
      $ cp /usr/share/acrn/samples/nuc/launch_win.sh ./launch_debian.sh
      $ sed -i "s/win10-ltsc.img/debian10.img/" launch_debian.sh

#. Assign USB ports to the Debian VM in order to use the mouse and keyboard before the launch:

   .. code-block:: none

      $ vim launch_debian.sh

      <Add the line below as an acrn-dm parameter>
      -s 7,xhci,1-2:1-3:1-4:1-5 \

   .. note:: This assigns all four USB ports (two front and two rear) to the User VM. If you want to assign only the front USB ports, use ``-s 7,xhci,1-2:1-3 \`` instead. Refer to :ref:`acrn-dm_parameters` for more information.

#. Modify acrn.conf and reboot the Service VM to assign the Pipe A monitor to the Debian VM and the Pipe B monitor to the Service VM:

   .. code-block:: none

      $ sudo mount /dev/sda1 /mnt
      $ sudo sed -i "s/0x01010F/0x010101/" /mnt/loader/entries/acrn.conf
      $ sudo sed -i "s/0x011111110000/0x011100001111/" /mnt/loader/entries/acrn.conf
      $ sudo sed -i "3s/$/ i915.enable_conformance_check=0/" /mnt/loader/entries/acrn.conf
      $ sudo sync && sudo umount /mnt && reboot

#. Copy grubx64.efi to bootx64.efi:

   .. code-block:: none

      $ sudo losetup -f -P --show ~/debian10.img
      $ sudo mount /dev/loop0p1 /mnt
      $ sudo mkdir -p /mnt/EFI/boot
      $ sudo cp /mnt/EFI/debian/grubx64.efi /mnt/EFI/boot/bootx64.efi
      $ sync && sudo umount /mnt
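
   The ``--show`` option prints the loop device that was attached (``/dev/loop0`` above). Once you are done, you can detach it so the image is not left bound to a loop device:

   .. code-block:: none

      $ sudo losetup -d /dev/loop0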

#. Launch the Debian VM after logging in to the Service VM:

   .. code-block:: none

      $ sudo ./launch_debian.sh
#. View the Debian desktop on the secondary monitor, as shown in :numref:`debian-display2` below:

   .. figure:: images/debian-uservm-5.png
      :align: center
      :name: debian-display1

   .. figure:: images/debian-uservm-6.png
      :align: center
      :name: debian-display2

      The Debian desktop appears on the secondary monitor (bottom image)
Enable the ttyS0 Console on the Debian VM
*****************************************

After the Debian VM reboots, follow the steps below to enable the ttyS0 console so you can make command-line entries directly from it.

1. Log in to the Debian user interface and launch **Terminal** from the Application list.

#. Add "console=ttyS0,115200" to the grub file in the terminal:

   .. code-block:: none

      $ sudo vim /etc/default/grub
      <Add console=ttyS0,115200>
      GRUB_CMDLINE_LINUX="console=ttyS0,115200"
      $ sudo update-grub
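
   You can double-check that the change landed before moving on (a quick verification, not part of the original steps):

   .. code-block:: none

      $ grep GRUB_CMDLINE_LINUX /etc/default/grub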

#. Add ``virtio_console`` to ``/etc/initramfs-tools/modules``. **Power off** the Debian VM after ``initramfs`` is updated (note that ``tee`` is used because a plain ``sudo echo ... >>`` redirection would run without root privileges and fail):

   .. code-block:: none

      $ echo "virtio_console" | sudo tee -a /etc/initramfs-tools/modules
      $ sudo update-initramfs -u
      $ sudo poweroff

#. Log in to the Service VM and modify the launch script to add the ``virtio-console`` parameter to the Device Model for the Debian VM:

   .. code-block:: none

      $ vim ~/launch_debian.sh
      <Add the line below to the acrn-dm command line>
      -s 9,virtio-console,@stdio:stdio_port \
#. Launch Debian using the modified script. Verify that you see the console output shown in :numref:`console-output-debian` below:

   .. figure:: images/debian-uservm-7.png
      :align: center
      :name: console-output-debian

      Debian VM console output
.. _running_ubun_as_user_vm:

Running Ubuntu as the User VM
#############################

Prerequisites
*************

This tutorial assumes you have already set up the ACRN Service VM on an
Intel NUC Kit. If you have not, refer to the following instructions:

- Install a `Clear Linux OS <https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_ on your NUC kit.
- Follow steps 1 - 4 of the instructions at :ref:`quick-setup-guide` to set up the Service VM automatically on your NUC kit.
Before you start this tutorial, make sure the KVM tools are installed on the
development machine and set **IGD Aperture Size to 512** in the BIOS
settings (refer to :numref:`intel-bios-ubun`). Connect two monitors to your
NUC:

.. code-block:: none

   $ sudo apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virt-manager ovmf

.. figure:: images/ubuntu-uservm-0.png
   :align: center
   :name: intel-bios-ubun

   Intel Visual BIOS
We installed these KVM tools on Ubuntu 18.04; refer to the table below for our hardware configurations.

Hardware Configurations
=======================

+--------------------------+------------------+----------------------+----------------------------------------------+
| Platform (Intel x86)     | Product/Kit Name | Hardware             | Description                                  |
+==========================+==================+======================+==============================================+
| Kaby Lake                | NUC7i7DNH        | Processor            | - Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz   |
|                          |                  +----------------------+----------------------------------------------+
|                          |                  | Graphics             | - UHD Graphics 620                           |
|                          |                  |                      | - Two HDMI 2.0a ports supporting 4K at 60 Hz |
|                          |                  +----------------------+----------------------------------------------+
|                          |                  | System memory        | - 8GiB SODIMM DDR4 2400 MHz                  |
|                          |                  +----------------------+----------------------------------------------+
|                          |                  | Storage capabilities | - 1TB WDC WD10SPZX-22Z                       |
+--------------------------+------------------+----------------------+----------------------------------------------+
| PC (development machine) |                  | Processor            | - Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz    |
|                          |                  +----------------------+----------------------------------------------+
|                          |                  | System memory        | - 2GiB DIMM DDR3 Synchronous 1333 MHz x 4    |
|                          |                  +----------------------+----------------------------------------------+
|                          |                  | Storage capabilities | - 1TB WDC WD10JPLX-00M                       |
+--------------------------+------------------+----------------------+----------------------------------------------+

Validated Versions
==================

- **Clear Linux version:** 30920
- **ACRN hypervisor tag:** acrn-2019w36.2-140000p
- **Service VM Kernel version:** 4.19.68-84.iot-lts2018-sos
Build the Ubuntu KVM Image
**************************

This tutorial uses the Ubuntu 18.04 desktop ISO as the base image.

#. Download the `Ubuntu 18.04 desktop ISO <http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso>`_ on your development machine.
#. Install Ubuntu via the virt-manager tool:

   .. code-block:: none

      $ sudo virt-manager

#. Verify that you can see the main menu as shown in :numref:`vmmanager-ubun` below.

   .. figure:: images/ubuntu-uservm-1.png
      :align: center
      :name: vmmanager-ubun

      Virtual Machine Manager
#. Right-click **QEMU/KVM** and select **New**.

   a. Choose **Local install media (ISO image or CDROM)** and then click **Forward**. A **Create a new virtual machine** box displays, as shown in :numref:`newVM-ubun` below.

      .. figure:: images/ubuntu-uservm-2.png
         :align: center
         :name: newVM-ubun

         Create a New Virtual Machine

   b. Choose **Use ISO image** and click **Browse** - **Browse Local**. Select the ISO that you downloaded in Step 1 above.

   c. Choose the **OS type:** Linux, **Version:** Ubuntu 18.04 LTS and then click **Forward**.

   d. Select **Forward** if you do not need to make customized CPU settings.

   e. Choose **Create a disk image for the virtual machine**. Set the storage to 20 GB or more if necessary and click **Forward**.

   f. Rename the image if you desire. You must check the **customize configuration before install** option before you finish all stages.
#. Verify that you can see the Overview screen as set up, as shown in :numref:`ubun-setup` below:

   .. figure:: images/ubuntu-uservm-3.png
      :align: center
      :name: ubun-setup

      Ubuntu Setup Overview
#. Complete the Ubuntu installation. Verify that you have set up the disk partition as follows:

   - /dev/vda1: EFI System Partition
   - /dev/vda2: File System Partition

#. Upon installation completion, click **Restart Now** to make sure the Ubuntu OS boots successfully.

#. The KVM image is created in the ``/var/lib/libvirt/images`` folder. Convert the ``qcow2`` image to a raw ``img`` **as the root user**:

   .. code-block:: none

      $ cd ~ && mkdir ubuntu_images && cd ubuntu_images
      $ sudo qemu-img convert -f qcow2 -O raw /var/lib/libvirt/images/ubuntu18.04.qcow2 uos.img

Launch the Ubuntu Image as the User VM
**************************************

Modify the ``launch_win.sh`` script in order to launch Ubuntu as the User VM.

.. note:: This tutorial assumes SATA is the default boot drive; replace "/dev/sda1" mentioned below with "/dev/nvme0n1p1" if you are using an NVMe drive.
1. Copy uos.img to your NUC:

   .. code-block:: none

      # scp ~/ubuntu_images/uos.img user_name@ip_address:~/uos.img

#. Log in to the ACRN Service VM, and create a launch script from the existing script:

   .. code-block:: none

      $ cd ~
      $ cp /usr/share/acrn/samples/nuc/launch_win.sh ./launch_ubuntu.sh
      $ sed -i "s/win10-ltsc.img/uos.img/" launch_ubuntu.sh
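
   You can confirm the substitution before going further; this quick check should print the line that now references ``uos.img``:

   .. code-block:: none

      $ grep uos.img launch_ubuntu.sh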

#. Assign USB ports to the Ubuntu VM in order to use the mouse and keyboard before the launch:

   .. code-block:: none

      $ vim launch_ubuntu.sh

      <Add the line below as an acrn-dm parameter>
      -s 7,xhci,1-2:1-3:1-4:1-5 \

   .. note:: This assigns all four USB ports (two front and two rear) to the User VM. If you want to assign only the front USB ports, use ``-s 7,xhci,1-2:1-3 \`` instead. Refer to :ref:`acrn-dm_parameters` for more information.

#. Modify acrn.conf and reboot the Service VM (these edits assign the Pipe A monitor to the User VM and the Pipe B monitor to the Service VM):

   .. code-block:: none

      $ sudo mount /dev/sda1 /mnt
      $ sudo sed -i "s/0x01010F/0x010101/" /mnt/loader/entries/acrn.conf
      $ sudo sed -i "s/0x011111110000/0x011100001111/" /mnt/loader/entries/acrn.conf
      $ sudo sed -i "3s/$/ i915.enable_conformance_check=0/" /mnt/loader/entries/acrn.conf
      $ sudo sync && sudo umount /mnt && reboot

#. Launch the Ubuntu VM after logging in to the Service VM:

   .. code-block:: none

      $ sudo sh launch_ubuntu.sh
#. View the Ubuntu desktop on the secondary monitor, as shown in :numref:`ubun-display1` below:

   .. figure:: images/ubuntu-uservm-4.png
      :align: center
      :name: ubun-display1

      The Ubuntu desktop on the secondary monitor
Enable the Ubuntu Console instead of the User Interface
*******************************************************

After the Ubuntu VM reboots, follow the steps below to enable the Ubuntu VM console so you can make command-line entries directly from it.

1. Log in to the Ubuntu user interface and launch **Terminal** from the Application list.

#. Add "console=ttyS0,115200" to the grub file in the terminal:

   .. code-block:: none

      $ sudo vim /etc/default/grub
      <Add console=ttyS0,115200>
      GRUB_CMDLINE_LINUX="console=ttyS0,115200"
      $ sudo update-grub
      $ sudo poweroff

#. Modify the launch script to enable ``virtio-console`` for the Ubuntu VM:

   .. code-block:: none

      $ vim ~/launch_ubuntu.sh
      <Add the line below to the acrn-dm command line>
      -s 9,virtio-console,@stdio:stdio_port \
#. Log in to the Service VM and launch Ubuntu. Verify that you see the console output shown in :numref:`console-output-ubun` below:

   .. figure:: images/ubuntu-uservm-5.png
      :align: center
      :name: console-output-ubun

      Ubuntu VM console output
.. _vuart_config:

vUART Configuration
###################

Introduction
============

The virtual universal asynchronous receiver-transmitter (vUART) supports two functions: one is the console, and the other is communication. A vUART can serve only one of these functions at a time.

Currently, only two vUARTs are defined per VM in the ``hypervisor/scenarios/<xxx>/vm_configuration.c`` file, but you can change their settings there.

.. code-block:: none

   .vuart[0] = {
           .type = VUART_LEGACY_PIO,
           .addr.port_base = INVALID_COM_BASE,
   },
   .vuart[1] = {
           .type = VUART_LEGACY_PIO,
           .addr.port_base = INVALID_COM_BASE,
   }

**vuart[0]** is initiated as the **console** port.

**vuart[1]** is initiated as a **communication** port.

Console enable list
===================

+-----------------+-----------------------+--------------------+----------------+----------------+
| Scenarios       | vm0                   | vm1                | vm2            | vm3            |
+=================+=======================+====================+================+================+
| SDC             | SOS (vuart enable)    | Post-launched      | Post-launched  |                |
+-----------------+-----------------------+--------------------+----------------+----------------+
| SDC2            | SOS (vuart enable)    | Post-launched      |                | Post-launched  |
+-----------------+-----------------------+--------------------+----------------+----------------+
| Hybrid          | Pre-launched (Zephyr) | SOS (vuart enable) | Post-launched  |                |
|                 | (vuart enable)        |                    |                |                |
+-----------------+-----------------------+--------------------+----------------+----------------+
| Industry        | SOS (vuart enable)    | Post-launched      | Post-launched  | Post-launched  |
|                 |                       |                    | (vuart enable) |                |
+-----------------+-----------------------+--------------------+----------------+----------------+
| Logic_partition | Pre-launched          | Pre-launched RTVM  | Post-launched  |                |
|                 | (vuart enable)        | (vuart enable)     | RTVM           |                |
+-----------------+-----------------------+--------------------+----------------+----------------+

How to configure a console port
===============================

To enable the console port for a VM, change only the ``port_base`` and ``irq`` values. If the IRQ number is already in use in your system (check ``cat /proc/interrupts``), choose another IRQ number. If you set ``.irq = 0``, the vUART will work in polling mode.

- COM1_BASE (0x3F8) + COM1_IRQ(4)
- COM2_BASE (0x2F8) + COM2_IRQ(3)
- COM3_BASE (0x3E8) + COM3_IRQ(6)
- COM4_BASE (0x2E8) + COM4_IRQ(7)

Example:

.. code-block:: none

   .vuart[0] = {
           .type = VUART_LEGACY_PIO,
           .addr.port_base = COM1_BASE,
           .irq = COM1_IRQ,
   },
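
For comparison, a polling-mode variant differs only in the ``.irq`` field. This sketch simply applies the ``.irq = 0`` rule described above; it is not one of the stock configurations:

.. code-block:: none

   .vuart[0] = {
           .type = VUART_LEGACY_PIO,
           .addr.port_base = COM1_BASE,
           .irq = 0U,    /* 0 selects polling mode */
   },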

How to configure a communication port
=====================================

To enable the communication port, configure ``vuart[1]`` in the two VMs that want to communicate.

The ``port_base`` and ``irq`` values must differ from those of ``vuart[0]`` in the same VM.

**t_vuart.vm_id** is the target VM's vm_id, starting from 0 (0 means VM0).

**t_vuart.vuart_id** is the target vUART index in the target VM, starting from 1 (1 means vuart[1]).
Example:

.. code-block:: none

   /* VM0 */
   ...
   /* VM1 */
   .vuart[1] = {
           .type = VUART_LEGACY_PIO,
           .addr.port_base = COM2_BASE,
           .irq = COM2_IRQ,
           .t_vuart.vm_id = 2U,
           .t_vuart.vuart_id = 1U,
   },
   ...
   /* VM2 */
   .vuart[1] = {
           .type = VUART_LEGACY_PIO,
           .addr.port_base = COM2_BASE,
           .irq = COM2_IRQ,
           .t_vuart.vm_id = 1U,
           .t_vuart.vuart_id = 1U,
   },

Communication vUART enable list
===============================

+-----------------+-----------------------+--------------------+---------------------+----------------+
| Scenarios       | vm0                   | vm1                | vm2                 | vm3            |
+=================+=======================+====================+=====================+================+
| SDC             | SOS                   | Post-launched      | Post-launched       |                |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| SDC2            | SOS                   | Post-launched      | Post-launched       | Post-launched  |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Hybrid          | Pre-launched (Zephyr) | SOS                | Post-launched       |                |
|                 | (vuart enable COM2)   | (vuart enable COM2)|                     |                |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Industry        | SOS                   | Post-launched      | Post-launched RTVM  | Post-launched  |
|                 | (vuart enable COM2)   |                    | (vuart enable COM2) |                |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Logic_partition | Pre-launched          | Pre-launched RTVM  |                     |                |
+-----------------+-----------------------+--------------------+---------------------+----------------+

Launch script
=============

- ``-s 1:0,lpc -l com1,stdio``

  This option is only needed for WaaG and VxWorks (and also when using OVMF). These OSes depend on the ACPI table, and only ``acrn-dm`` can provide the ACPI entries for the UART.

- ``-B " ....,console=ttyS0, ..."``

  Add this option for kernel-based (Linux) guests so that the kernel console output is directed to the vUART; a complete sketch of an ``acrn-dm`` invocation follows this list.

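For context, here is a minimal sketch of how these options fit into a complete ``acrn-dm`` invocation. The memory size, slot numbers, image path, kernel path, and VM name below are illustrative assumptions, not values from a validated launch script:

.. code-block:: bash

   # Illustrative acrn-dm launch (placeholder paths, sizes, and names):
   acrn-dm -A -m 2048M \
     -s 0:0,hostbridge \
     -s 1:0,lpc -l com1,stdio \
     -s 3,virtio-blk,/home/acrn/uos.img \
     -k /usr/lib/kernel/default-iot-lts2018 \
     -B "root=/dev/vda1 rw rootwait console=tty0 console=ttyS0" \
     vm1
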
Test the communication port
===========================

After you have configured the communication port in the hypervisor, you can access the corresponding port from the guest OS. For example, in Clear Linux:

1. With ``echo`` and ``cat``

   On VM1: ``# cat /dev/ttyS1``

   On VM2: ``# echo "test test" > /dev/ttyS1``

   The message appears on VM1's ``/dev/ttyS1``.

   If you are not sure which port is the communication port, run ``dmesg | grep ttyS`` in the Linux shell and check the base address. If it matches what you set in the ``vm_configuration.c`` file, it is the correct port.

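   If the test characters arrive garbled or not at all, putting the port into raw mode first can help (a suggested precaution assuming a standard Linux ``stty``; this step is not part of the original instructions):

   .. code-block:: bash

      # Run on both VMs: disable echo and line processing on the vUART tty.
      stty -F /dev/ttyS1 raw -echo

      # Then repeat the test:
      cat /dev/ttyS1                    # on VM1
      echo "test test" > /dev/ttyS1     # on VM2
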
#. With ``minicom``

   Run ``minicom -D /dev/ttyS1`` on both VM1 and VM2, and disable flow control in minicom on both sides. Enter ``test`` in VM1's minicom; the message should appear in VM2's minicom.

#. Limitations

   - A message cannot be longer than 256 bytes.
   - The communication port cannot be used to transfer files: flow control is not supported, so data may be lost.

vUART design
============

**Console vUART**

.. figure:: images/vuart-config-1.png
   :align: center
   :name: console-vuart

**Communication vUART (between VM0 and VM1)**

.. figure:: images/vuart-config-2.png
   :align: center
   :name: communication-vuart

COM port configurations for Post-Launched VMs
=============================================

For a post-launched VM, the ``acrn-dm`` command line also provides a COM port configuration:

``-s 1:0,lpc -l com1,stdio``

This adds the ``com1 (0x3f8)`` and ``com2 (0x2f8)`` modules to the Guest VM, including the ACPI information for these two ports.

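The backend of each LPC COM port does not have to be ``stdio``. As a sketch (the device path below is an illustrative placeholder), ``com2`` can instead be attached to a character device on the Service VM side:

.. code-block:: bash

   # com1 stays on stdio for interactive use; com2 is backed by a
   # host-side character device (placeholder path).
   -s 1:0,lpc -l com1,stdio -l com2,/run/acrn/vuart_vm1
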
**Data Flows**

Three different data flows exist, based on how the post-launched VM is started, as shown in the diagram below.

Figure 1 data flow: The post-launched VM is started with the vUART enabled in the hypervisor configuration file only.

Figure 2 data flow: The post-launched VM is started with the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio`` only.

Figure 3 data flow: The post-launched VM is started with both the vUART enabled and the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio``.

.. figure:: images/vuart-config-post-launch.png
   :align: center
   :name: Post-Launched VMs

.. note::
   For operating systems such as VxWorks and Windows that depend on the ACPI table to probe the UART driver, adding the vUART configuration in the hypervisor is not sufficient. Currently, we recommend that you use the configuration shown in the Figure 3 data flow; see the combined sketch below. This may be refined in the future.

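To summarize the recommended Figure 3 data flow, here is a minimal sketch combining the two pieces shown earlier (the hypervisor-side vUART entry and the ``acrn-dm`` options); it restates settings already covered above rather than introducing new ones:

.. code-block:: bash

   # 1) Hypervisor side: enable the vUART for the post-launched VM in
   #    hypervisor/scenarios/<xxx>/vm_configuration.c:
   #        .vuart[0] = {
   #                .type = VUART_LEGACY_PIO,
   #                .addr.port_base = COM1_BASE,
   #                .irq = COM1_IRQ,
   #        },
   #
   # 2) Device model side: add the LPC COM ports on the acrn-dm cmdline:
   #        -s 1:0,lpc -l com1,stdio
   #
   # 3) For Linux guests, direct the kernel console to the vUART:
   #        -B "....,console=ttyS0,..."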