doc: update OpenStack and libvirt tutorial

Update the tutorial on how to use OpenStack and libvirt:

* Use Ubuntu 20.04 as the host and the 'lxd' snap
* Use the Ubuntu Cloud image (instead of the Clear Cloud image)
* Delete a screenshot that wasn't in use

Tracked-On: #5564
Signed-off-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
@@ -11,22 +11,22 @@ ACRN. We use OpenStack to use libvirt and we'll install OpenStack in a container
 to avoid crashing your system and to take advantage of easy
 snapshots/restores so that you can quickly roll back your system in the
 event of setup failure. (You should only install OpenStack directly on Ubuntu if
 you have a dedicated testing machine.) This setup utilizes LXC/LXD on
-Ubuntu 18.04.
+Ubuntu 20.04.
 
 Install ACRN
 ************
 
-#. Install ACRN using Ubuntu 18.04 as its Service VM. Refer to
+#. Install ACRN using Ubuntu 20.04 as its Service VM. Refer to
    :ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`.
 
 #. Make the acrn-kernel using the `kernel_config_uefi_sos
    <https://raw.githubusercontent.com/projectacrn/acrn-kernel/master/kernel_config_uefi_sos>`_
    configuration file (from the ``acrn-kernel`` repo).
 
-#. Add the following kernel boot arg to give the Service VM more memory
-   and more loop devices. Refer to `Kernel Boot Parameters
-   <https://wiki.ubuntu.com/Kernel/KernelBootParameters>`_ documentation::
+#. Append the following kernel boot arguments to the ``multiboot2`` line in
+   :file:`/etc/grub.d/40_custom` and run ``sudo update-grub`` before rebooting the system.
+   This gives the Service VM more memory and more loop devices::
 
       hugepagesz=1G hugepages=10 max_loop=16
 
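
For reference, a minimal sketch of what the resulting :file:`/etc/grub.d/40_custom`
entry could look like after this edit. The menuentry title, the binary names, and
the ``<existing args>`` placeholder are illustrative assumptions; keep whatever
your installation already uses::

   menuentry "ACRN hypervisor" {
      # ... existing insmod/search lines kept as-is ...
      multiboot2 /boot/acrn.bin <existing args> hugepagesz=1G hugepages=10 max_loop=16
      module2 /boot/bzImage Linux_bzImage
   }
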
@@ -44,34 +44,25 @@ Install ACRN
 Set up and launch LXC/LXD
 *************************
 
-1. Set up the LXC/LXD Linux container engine using these `instructions
-   <https://ubuntu.com/tutorials/tutorial-setting-up-lxd-1604>`_ provided
-   by Ubuntu.
+1. Set up the LXC/LXD Linux container engine::
 
-   Refer to the following additional information for the setup
-   procedure:
+      $ sudo snap install lxd
+      $ lxd init --auto
 
-   - Disregard ZFS utils (we're not going to use the ZFS storage
-     backend).
-   - Answer ``dir`` (and not ``zfs``) when prompted for the name of the storage backend to use.
-   - Set up ``lxdbr0`` as instructed.
-   - Before launching a container, install lxc-utils by ``apt-get install lxc-utils``,
-     make sure ``lxc-checkconfig | grep missing`` does not show any missing kernel features
-     except ``CONFIG_NF_NAT_IPV4`` and ``CONFIG_NF_NAT_IPV6``, which
-     were renamed in recent kernels.
+   Use all default values if running ``lxd init`` in interactive mode.
 
 2. Create an Ubuntu 18.04 container named ``openstack``::
 
       $ lxc init ubuntu:18.04 openstack
 
 3. Export the kernel interfaces necessary to launch a Service VM in the
    ``openstack`` container:
 
    a. Edit the ``openstack`` config file using the command::
 
         $ lxc config edit openstack
 
-      In the editor, add the following lines under **config**:
+      In the editor, add the following lines in the **config** section:
 
       .. code-block:: none
 
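
If you would rather pin the choices the old instructions spelled out (the ``dir``
storage backend, the ``lxdbr0`` bridge) than rely on the defaults, ``lxd init``
can take them non-interactively; a sketch, assuming a snap-installed lxd recent
enough to accept this flag::

   $ lxd init --auto --storage-backend dir
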
@@ -82,11 +73,18 @@ Set up and launch LXC/LXD
       lxc.cgroup.devices.allow = c 243:0 rwm
       lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0
       lxc.mount.auto=proc:rw sys:rw cgroup:rw
+      lxc.apparmor.profile=unconfined
       security.nesting: "true"
       security.privileged: "true"
 
      Save and exit the editor.
 
+      .. note::
+
+         Make sure to respect the indentation so as to keep these options within
+         the **config** section. It is a good idea after saving your changes
+         to check that they have been correctly recorded (``lxc config show openstack``).
+
    b. Run the following commands to configure ``openstack``::
 
         $ lxc config device add openstack eth1 nic name=eth1 nictype=bridged parent=acrn-br0
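
The check suggested in the new note could look like this (output abridged; the
exact keys and their ordering depend on your lxd version)::

   $ lxc config show openstack
   ...
   config:
     ...
     security.nesting: "true"
     security.privileged: "true"
   ...
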
@@ -135,14 +133,16 @@ Set up and launch LXC/LXD
 
       no_proxy=xcompany.com,.xcompany.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16
 
-10. Add a new user named **stack** and set permissions::
+10. Add a new user named **stack** and set permissions:
 
-       $ useradd -s /bin/bash -d /opt/stack -m stack
-       $ echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
+    .. code-block:: none
+
+       # useradd -s /bin/bash -d /opt/stack -m stack
+       # echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
 
 11. Log off and restart the ``openstack`` container::
 
        $ lxc restart openstack
 
 The ``openstack`` container is now properly configured for OpenStack.
 Use the ``lxc list`` command to verify that both **eth0** and **eth1**
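
A mock-up of what a correctly configured container might show in ``lxc list``
(the addresses and columns will differ depending on your lxd version and
network setup)::

   $ lxc list
   +-----------+---------+----------------------+------+-----------+-----------+
   |   NAME    |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
   +-----------+---------+----------------------+------+-----------+-----------+
   | openstack | RUNNING | 10.0.3.10 (eth0)     |      | CONTAINER | 0         |
   |           |         | 192.168.1.100 (eth1) |      |           |           |
   +-----------+---------+----------------------+------+-----------+-----------+
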
@@ -162,17 +162,20 @@ Set up ACRN prerequisites inside the container
 
    .. code-block:: none
 
+      $ cd ~
       $ git clone https://github.com/projectacrn/acrn-hypervisor
       $ cd acrn-hypervisor
-      $ git checkout v2.3
+      $ git checkout v2.4
       $ make
-      $ cd misc/acrn-manager/; make
+      $ sudo make devicemodel-install
+      $ sudo cp build/misc/debug_tools/acrnd /usr/bin/
+      $ sudo cp build/misc/debug_tools/acrnctl /usr/bin/
 
    Install only the user-space components: ``acrn-dm``, ``acrnctl``, and
-   ``acrnd``
+   ``acrnd`` as shown above.
 
-3. Download, compile, and install ``iasl``. Refer to
-   :ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`.
+   .. note:: Use the tag that matches the version of the ACRN hypervisor (``acrn.bin``)
+      that runs on your system.
 
 Set up libvirt
 **************
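
A quick sanity check that the three user-space components landed on the
``PATH``; the ``/usr/bin`` locations are an assumption based on the ``cp``
commands above and the default ``devicemodel-install`` prefix::

   $ which acrn-dm acrnctl acrnd
   /usr/bin/acrn-dm
   /usr/bin/acrnctl
   /usr/bin/acrnd
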
@@ -186,6 +189,7 @@ Set up libvirt
 
 2. Download libvirt/ACRN::
 
+      $ cd ~
       $ git clone https://github.com/projectacrn/acrn-libvirt.git
 
 3. Build and install libvirt::
@@ -200,6 +204,9 @@ Set up libvirt
       $ make
       $ sudo make install
 
+   .. note:: The ``dev-acrn-v6.1.0`` branch is used in this tutorial. It is
+      the default branch.
+
 4. Edit and enable these options in ``/etc/libvirt/libvirtd.conf``::
 
       unix_sock_ro_perms = "0777"
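
To confirm you are on the branch the new note refers to (``--show-current``
needs Git 2.22 or later)::

   $ cd ~/acrn-libvirt
   $ git branch --show-current
   dev-acrn-v6.1.0
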
@@ -219,20 +226,20 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://
 1. Use the latest maintenance branch **stable/train** to ensure OpenStack
    stability::
 
-      $ git clone https://opendev.org/openstack/devstack.git -b stable/train
+      $ cd ~
+      $ git clone https://opendev.org/openstack/devstack.git -b stable/train
 
-2. Go into the ``devstack`` directory, download an ACRN patch from
-   :acrn_raw:`doc/tutorials/0001-devstack-installation-for-acrn.patch`,
-   and apply it ::
+2. Go into the ``devstack`` directory and apply
+   :file:`doc/tutorials/0001-devstack-installation-for-acrn.patch`::
 
       $ cd devstack
-      $ git apply 0001-devstack-installation-for-acrn.patch
+      $ git apply ~/acrn-hypervisor/doc/tutorials/0001-devstack-installation-for-acrn.patch
 
 3. Edit ``lib/nova_plugins/hypervisor-libvirt``:
 
    Change ``xen_hvmloader_path`` to the location of your OVMF image
-   file. A stock image is included in the ACRN source tree
-   (``devicemodel/bios/OVMF.fd``).
+   file: ``/usr/share/acrn/bios/OVMF.fd``, or use the stock image that is
+   included in the ACRN source tree (``devicemodel/bios/OVMF.fd``).
 
 4. Create a ``devstack/local.conf`` file as shown below (setting the
    passwords as appropriate):
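
If you want to preview what the patch touches before applying it,
``git apply --stat`` prints the diffstat without modifying the tree, and
``git apply --check`` dry-runs it::

   $ git apply --stat ~/acrn-hypervisor/doc/tutorials/0001-devstack-installation-for-acrn.patch
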
@@ -256,6 +263,7 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://
    USE_PYTHON3=True
 
 .. note::
+
    Now is a great time to take a snapshot of the container using ``lxc
    snapshot``. If the OpenStack installation fails, manually rolling back
    to the previous state can be difficult. Currently, no step exists to
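
The snapshot the note recommends is a one-liner on the host; the snapshot name
below is arbitrary::

   $ lxc snapshot openstack pre-devstack
   $ lxc info openstack    # the Snapshots section should list pre-devstack
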
@@ -263,7 +271,7 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://
 
 5. Install OpenStack::
 
-      execute ./stack.sh in devstack/
+      $ ./stack.sh
 
    The installation should take about 20-30 minutes. Upon successful
    installation, the installer reports the URL of OpenStack's management
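
If ``stack.sh`` fails partway through, restoring the snapshot from the host is
usually easier than cleaning up by hand (assuming the ``pre-devstack`` name
suggested earlier)::

   $ lxc restore openstack pre-devstack
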
@@ -298,15 +306,11 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://
 Configure and create OpenStack Instance
 ***************************************
 
-We'll be using the Clear Linux Cloud Guest as the OS image (qcow2
-format). Download the Cloud Guest image from
-https://clearlinux.org/downloads and uncompress it, for example::
+We'll be using the Ubuntu 20.04 (Focal) Cloud image as the OS image (qcow2
+format). Download the Cloud image from https://cloud-images.ubuntu.com/releases/focal,
+for example::
 
-   $ wget https://cdn.download.clearlinux.org/releases/33110/clear/clear-33110-cloudguest.img.xz
-   $ unxz clear-33110-cloudguest.img.xz
-
-This will leave you with the uncompressed OS image
-``clear-33110-cloudguest.img`` we'll use later.
+   $ wget https://cloud-images.ubuntu.com/releases/focal/release-20210201/ubuntu-20.04-server-cloudimg-amd64.img
 
 Use the OpenStack management interface URL reported in a previous step
 to finish setting up the network and configure and create an OpenStack
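
If ``qemu-utils`` is installed, you can verify that the downloaded file really
is in qcow2 format before uploading it (output abridged)::

   $ qemu-img info ubuntu-20.04-server-cloudimg-amd64.img
   image: ubuntu-20.04-server-cloudimg-amd64.img
   file format: qcow2
   ...
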
@@ -395,7 +399,7 @@ instance.
       :width: 1200px
       :name: os-06-create-image
 
-   Browse for and select the Clear Linux Cloud Guest image file we
+   Browse for and select the Ubuntu Cloud image file we
    downloaded earlier:
 
    .. figure:: images/OpenStack-06a-create-image-browse.png
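
The same upload can also be done from the command line; a sketch, assuming the
DevStack credentials have been sourced (for example ``source openrc admin admin``
in the ``devstack`` directory)::

   $ openstack image create --disk-format qcow2 --container-format bare \
        --file ubuntu-20.04-server-cloudimg-amd64.img Ubuntu20.04
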
@@ -405,31 +409,30 @@ instance.
 
    .. figure:: images/OpenStack-06b-create-image-select.png
       :align: center
-      :width: 1200px
       :name: os-06b-create-image
 
-   Give the image a name (**acrnImage**), select the **QCOW2 - QEMU
+   Give the image a name (**Ubuntu20.04**), select the **QCOW2 - QEMU
    Emulator** format, and click on **Create Image**:
 
    .. figure:: images/OpenStack-06e-create-image.png
       :align: center
-      :width: 1200px
+      :width: 900px
       :name: os-063-create-image
 
    This will take a few minutes to complete.
 
 #. Next, click on the **Admin / Computer / Flavors** tabs and then the
    **+Create Flavor** button. This is where you'll define a machine flavor name
-   (**acrn4vcpu**), and specify its resource requirements: the number of vCPUs (**4**), RAM size
-   (**256MB**), and root disk size (**2GB**):
+   (**UbuntuCloud**), and specify its resource requirements: the number of vCPUs (**2**), RAM size
+   (**512MB**), and root disk size (**4GB**):
 
    .. figure:: images/OpenStack-07a-create-flavor.png
       :align: center
-      :width: 1200px
+      :width: 700px
       :name: os-07a-create-flavor
 
    Click on **Create Flavor** and you'll return to see a list of
-   available flavors plus the new one you created (**acrn4vcpu**):
+   available flavors plus the new one you created (**UbuntuCloud**):
 
    .. figure:: images/OpenStack-07b-flavor-created.png
       :align: center
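
An illustrative CLI equivalent of the flavor created in the screenshots above
(RAM in MB, disk in GB)::

   $ openstack flavor create --vcpus 2 --ram 512 --disk 4 UbuntuCloud
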
@@ -499,36 +502,36 @@ instance.
 
 #. Now we're ready to launch an instance. Go to **Project / Compute /
    Instance**, click on the **Launch Instance** button, give it a name
-   (**acrn4vcpuVM**) and click **Next**:
+   (**UbuntuOnACRN**) and click **Next**:
 
    .. figure:: images/OpenStack-10a-launch-instance-name.png
       :align: center
-      :width: 1200px
+      :width: 900px
       :name: os-10a-launch
 
    Select **No** for "Create New Volume", and click the up-arrow button
-   for uploaded (**acrnImage**) image as the "Available source" for this
+   for the uploaded (**Ubuntu20.04**) image as the "Available source" for this
    instance:
 
    .. figure:: images/OpenStack-10b-no-new-vol-select-allocated.png
       :align: center
-      :width: 1200px
+      :width: 900px
       :name: os-10b-launch
 
    Click **Next**, and select the machine flavor you created earlier
-   (**acrn4vcpu**):
+   (**UbuntuCloud**):
 
    .. figure:: images/OpenStack-10c-select-flavor.png
       :align: center
-      :width: 1200px
+      :width: 900px
       :name: os-10c-launch
 
-   Click on **>** next to the Allocated **acrn4vcpu** flavor and see
+   Click on **>** next to the Allocated **UbuntuCloud** flavor and see
    details about your choice:
 
    .. figure:: images/OpenStack-10d-flavor-selected.png
       :align: center
-      :width: 1200px
+      :width: 900px
       :name: os-10d-launch
 
    Click on the **Networks** tab, and select the internal **shared**
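
As with images and flavors, the launch can be scripted; a hedged CLI equivalent
of the steps above, using the **shared** network selected later in this step::

   $ openstack server create --image Ubuntu20.04 --flavor UbuntuCloud \
        --network shared UbuntuOnACRN
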
@@ -574,7 +577,7 @@ instance.
 
    .. figure:: images/OpenStack-11a-manage-floating-ip.png
       :align: center
-      :width: 1200px
+      :width: 700px
       :name: os-11a-running
 
    Select **public** pool, and click on **Allocate IP**:
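
The floating-IP steps also have CLI counterparts (illustrative; substitute the
address that ``floating ip create`` allocates)::

   $ openstack floating ip create public
   $ openstack server add floating ip UbuntuOnACRN <allocated-ip>
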
@@ -625,26 +628,5 @@ running:
 * Ping the instance inside the container using the instance's floating IP
   address.
 
-* Clear Linux prohibits root SSH login by default. Use libvirt's ``virsh``
-  console to configure the instance. Inside the container, using::
-
-     $ sudo virsh -c acrn:///system
-     list                  # you should see the instance listed as running
-     console <instance_name>
-
-  Log in to the Clear Linux instance and set up the root SSH. Refer to
-  the Clear Linux instructions on `enabling root login
-  <https://docs.01.org/clearlinux/latest/guides/network/openssh-server.html#enable-root-login>`_.
-
-  - If needed, set up the proxy inside the instance.
-  - Configure ``systemd-resolved`` to use the correct DNS server.
-  - Install ping: ``swupd bundle-add clr-network-troubleshooter``.
-
-  The ACRN instance should now be able to ping ``acrn-br0`` and another
-  ACRN instance. It should also be accessible inside the container via SSH
-  and its floating IP address.
-
-The ACRN instance can be deleted via the OpenStack management interface.
-
 For more advanced CLI usage, refer to this `OpenStack cheat sheet
 <https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html>`_.
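
Although the Clear Linux-specific ``virsh`` console walkthrough was removed, the
ACRN libvirt connection URI it used still works as a quick sanity check inside
the container; a mocked-up example (the instance name is whatever Nova
generated)::

   $ sudo virsh -c acrn:///system list
    Id   Name                State
   ----------------------------------
    1    instance-00000001   running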