Merge pull request #984 from bergwolf/prepare-release

backport 2.0-dev commits to stable-2.0.0
Xu Wang 2020-10-18 13:46:16 +08:00 committed by GitHub
commit 49776f76bf
118 changed files with 5383 additions and 3475 deletions

.github/workflows/snap.yaml

@ -0,0 +1,25 @@
name: snap CI
on:
pull_request:
paths:
- "**/Makefile"
- "**/*.go"
- "**/*.mk"
- "**/*.rs"
- "**/*.sh"
- "**/*.toml"
- "**/*.yaml"
- "**/*.yml"
jobs:
test:
runs-on: ubuntu-20.04
steps:
- name: Check out
uses: actions/checkout@v2
- name: Install Snapcraft
uses: samuelmeuli/action-snapcraft@v1
- name: Build snap
run: |
snapcraft -d snap --destructive-mode


@ -41,11 +41,14 @@
 * [Connect to debug console](#connect-to-debug-console)
 * [Traditional debug console setup](#traditional-debug-console-setup)
 * [Create a custom image containing a shell](#create-a-custom-image-containing-a-shell)
-* [Create a debug systemd service](#create-a-debug-systemd-service)
 * [Build the debug image](#build-the-debug-image)
 * [Configure runtime for custom debug image](#configure-runtime-for-custom-debug-image)
+* [Connect to the virtual machine using the debug console](#connect-to-the-virtual-machine-using-the-debug-console)
+* [Enabling debug console for QEMU](#enabling-debug-console-for-qemu)
+* [Enabling debug console for cloud-hypervisor / firecracker](#enabling-debug-console-for-cloud-hypervisor--firecracker)
 * [Create a container](#create-a-container)
 * [Connect to the virtual machine using the debug console](#connect-to-the-virtual-machine-using-the-debug-console)
-* [Obtain details of the image](#obtain-details-of-the-image)
 * [Capturing kernel boot logs](#capturing-kernel-boot-logs)

 # Warning
@ -75,6 +78,11 @@ You need to install the following to build Kata Containers components:
   To view the versions of go known to work, see the `golang` entry in the
   [versions database](../versions.yaml).

+- [rust](https://www.rust-lang.org/tools/install)
+
+  To view the versions of rust known to work, see the `rust` entry in the
+  [versions database](../versions.yaml).

 - `make`.
 - `gcc` (required for building the shim and runtime).
@ -247,6 +255,15 @@ $ sudo systemctl restart systemd-journald
 >
 > - You should only do this step if you are testing with the latest version of the agent.

+The rust agent is built statically linked with `musl`. To configure this:
+
+```
+rustup target add x86_64-unknown-linux-musl
+sudo ln -s /usr/bin/g++ /bin/musl-g++
+```
+
+To build the agent:

 ```
 $ go get -d -u github.com/kata-containers/kata-containers
 $ cd $GOPATH/src/github.com/kata-containers/kata-containers/src/agent && make
@ -288,9 +305,9 @@ You MUST choose one of `alpine`, `centos`, `clearlinux`, `debian`, `euleros`, `f
 > - You should only do this step if you are testing with the latest version of the agent.

 ```
-$ sudo install -o root -g root -m 0550 -t ${ROOTFS_DIR}/bin ../../agent/kata-agent
-$ sudo install -o root -g root -m 0440 ../../agent/kata-agent.service ${ROOTFS_DIR}/usr/lib/systemd/system/
-$ sudo install -o root -g root -m 0440 ../../agent/kata-containers.target ${ROOTFS_DIR}/usr/lib/systemd/system/
+$ sudo install -o root -g root -m 0550 -t ${ROOTFS_DIR}/bin ../../../src/agent/target/x86_64-unknown-linux-musl/release/kata-agent
+$ sudo install -o root -g root -m 0440 ../../../src/agent/kata-agent.service ${ROOTFS_DIR}/usr/lib/systemd/system/
+$ sudo install -o root -g root -m 0440 ../../../src/agent/kata-containers.target ${ROOTFS_DIR}/usr/lib/systemd/system/
 ```

 ### Build a rootfs image
@ -526,35 +543,6 @@ $ export ROOTFS_DIR=${GOPATH}/src/github.com/kata-containers/kata-containers/too
 $ script -fec 'sudo -E GOPATH=$GOPATH USE_DOCKER=true EXTRA_PKGS="bash coreutils" ./rootfs.sh centos'
 ```

-#### Create a debug systemd service
-
-Create the service file that starts the shell in the rootfs directory:
-
-```
-$ cat <<EOT | sudo tee ${ROOTFS_DIR}/lib/systemd/system/kata-debug.service
-[Unit]
-Description=Kata Containers debug console
-
-[Service]
-Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
-StandardInput=tty
-StandardOutput=tty
-# Must be disabled to allow the job to access the real console
-PrivateDevices=no
-Type=simple
-ExecStart=/bin/bash
-Restart=always
-EOT
-```
-
-**Note**: You might need to adjust the `ExecStart=` path.
-
-Add a dependency to start the debug console:
-
-```
-$ sudo sed -i '$a Requires=kata-debug.service' ${ROOTFS_DIR}/lib/systemd/system/kata-containers.target
-```
#### Build the debug image

Follow the instructions in the [Build a rootfs image](#build-a-rootfs-image)
@ -595,10 +583,55 @@ $ sudo crictl run -r kata container.yaml pod.yaml

#### Connect to the virtual machine using the debug console
The steps required to enable the debug console for QEMU differ slightly from
those for firecracker / cloud-hypervisor.
##### Enabling debug console for QEMU
Add `agent.debug_console` to the guest kernel command line to allow the agent process to start a debug console.
```
-$ id=$(sudo crictl pods --no-trunc -q)
-$ console="/var/run/vc/vm/${id}/console.sock"
-$ sudo socat "stdin,raw,echo=0,escape=0x11" "unix-connect:${console}"
+$ sudo sed -i -e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 agent.debug_console"/g' "${kata_configuration_file}"
```
Here `kata_configuration_file` could point to `/etc/kata-containers/configuration.toml`
or `/usr/share/defaults/kata-containers/configuration.toml`
or `/opt/kata/share/defaults/kata-containers/configuration-{hypervisor}.toml`, if
you installed Kata Containers using `kata-deploy`.
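To double-check which configuration file is actually being loaded, one option is to query the runtime environment (a sketch; it assumes your version's `kata-env` output includes a `Runtime.Config` section listing the path, and the `grep` pattern is ours):

```bash
$ sudo kata-runtime kata-env | grep -A 2 'Runtime.Config'
```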
##### Enabling debug console for cloud-hypervisor / firecracker
A slightly different configuration is required for firecracker and cloud-hypervisor.
Firecracker and cloud-hypervisor don't have a UNIX socket connected to `/dev/console`.
Hence, the kernel command line option `agent.debug_console` will not work for them.
These hypervisors support `hybrid vsocks`, which can be used for communication
between the host and the guest. The kernel command line option `agent.debug_console_vport`
was added to allow developers to specify on which `vsock` port the debug console should be connected.
Add the parameter `agent.debug_console_vport=1026` to the kernel command line
as shown below:
```
sudo sed -i -e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 agent.debug_console_vport=1026"/g' "${kata_configuration_file}"
```
> **Note** Ports 1024 and 1025 are reserved for communication with the agent
> and gathering of agent logs respectively.
Next, connect to the debug console. The `vsock` socket paths vary slightly between
cloud-hypervisor and firecracker.
In case of cloud-hypervisor, connect to the `vsock` as shown:
```
$ sudo su -c 'cd /var/run/vc/vm/{sandbox_id}/root/ && socat stdin unix-connect:clh.sock'
CONNECT 1026
```
**Note**: You need to type `CONNECT 1026` and press `RETURN` key after entering the `socat` command.
For firecracker, connect to the `hvsock` as shown:
```
$ sudo su -c 'cd /var/run/vc/firecracker/{sandbox_id}/root/ && socat stdin unix-connect:kata.hvsock'
CONNECT 1026
```

**Note**: You need to press the `RETURN` key to see the shell prompt.


@ -1,185 +1,140 @@
 * [Introduction](#introduction)
-* [Unsupported scenarios](#unsupported-scenarios)
-* [Maintenance Warning](#maintenance-warning)
-* [Upgrade from Clear Containers](#upgrade-from-clear-containers)
-* [Stop all running Clear Container instances](#stop-all-running-clear-container-instances)
-* [Configuration migration](#configuration-migration)
-* [Remove Clear Containers packages](#remove-clear-containers-packages)
-* [Fedora](#fedora)
-* [Ubuntu](#ubuntu)
-* [Disable old container manager configuration](#disable-old-container-manager-configuration)
-* [Install Kata Containers](#install-kata-containers)
-* [Create a Kata Container](#create-a-kata-container)
-* [Upgrade from runV](#upgrade-from-runv)
+* [Maintenance warning](#maintenance-warning)
+* [Determine current version](#determine-current-version)
+* [Determine latest version](#determine-latest-version)
+* [Configuration changes](#configuration-changes)
 * [Upgrade Kata Containers](#upgrade-kata-containers)
-* [Appendices](#appendices)
-* [Assets](#assets)
-* [Guest kernel](#guest-kernel)
-* [Image](#image)
-* [Determining asset versions](#determining-asset-versions)
+* [Upgrade native distribution packaged version](#upgrade-native-distribution-packaged-version)
+* [Static installation](#static-installation)
+* [Determine if you are using a static installation](#determine-if-you-are-using-a-static-installation)
+* [Remove a static installation](#remove-a-static-installation)
+* [Upgrade a static installation](#upgrade-a-static-installation)
+* [Custom assets](#custom-assets)
 # Introduction

-This document explains how to upgrade from
-[Clear Containers](https://github.com/clearcontainers) and [runV](https://github.com/hyperhq/runv) to
-[Kata Containers](https://github.com/kata-containers) and how to upgrade an existing
-Kata Containers system to the latest version.
+This document outlines the options for upgrading from a
+[Kata Containers 1.x release](https://github.com/kata-containers/runtime/releases) to a
+[Kata Containers 2.x release](https://github.com/kata-containers/kata-containers/releases).

-# Unsupported scenarios
-
-Upgrading a Clear Containers system on the following distributions is **not**
-supported since the installation process for these distributions makes use of
-unpackaged components:
-
-- [CentOS](https://github.com/clearcontainers/runtime/blob/master/docs/centos-installation-guide.md)
-- [BCLinux](https://github.com/clearcontainers/runtime/blob/master/docs/bclinux-installation-guide.md)
-- [RHEL](https://github.com/clearcontainers/runtime/blob/master/docs/rhel-installation-guide.md)
-- [SLES](https://github.com/clearcontainers/runtime/blob/master/docs/sles-installation-guide.md)
-
-Additionally, upgrading
-[Clear Linux](https://github.com/clearcontainers/runtime/blob/master/docs/clearlinux-installation-guide.md)
-is not supported as Kata Containers packages do not yet exist.
-
-# Maintenance Warning
-
-The Clear Containers codebase is no longer being developed. Only new releases
-will be considered for significant bug fixes.
-
-The main development focus is now on Kata Containers. All Clear Containers
-users are encouraged to switch to Kata Containers.
-
-# Upgrade from Clear Containers
-
-Since Kata Containers can co-exist on the same system as Clear Containers, if
-you already have Clear Containers installed, the upgrade process is simply to
-install Kata Containers. However, since Clear Containers is
-[no longer being actively developed](#maintenance-warning),
-you are encouraged to remove Clear Containers from your systems.
-
-## Stop all running Clear Container instances
-
-Assuming a Docker\* system, to stop all currently running Clear Containers:
-
-```
-$ for container in $(sudo docker ps -q); do sudo docker stop $container; done
-```
-
-## Configuration migration
-
-The automatic migration of
-[Clear Containers configuration](https://github.com/clearcontainers/runtime#configuration) to
-[Kata Containers configuration](../src/runtime/README.md#configuration) is
-not supported.
-
-If you have made changes to your Clear Containers configuration, you should
-review those changes and decide whether to manually apply those changes to the
-Kata Containers configuration.
-
-> **Note**: This step must be completed before continuing to
-> [remove the Clear Containers packages](#remove-clear-containers-packages) since doing so will
-> *delete the default Clear Containers configuration file from your system*.
-
-## Remove Clear Containers packages
-
-> **Warning**: If you have modified your
-> [Clear Containers configuration](https://github.com/clearcontainers/runtime#configuration),
-> you might want to make a safe copy of the configuration file before removing the
-> packages since doing so will *delete the default configuration file*
-
-### Fedora
-
-```
-$ sudo -E dnf remove cc-runtime\* cc-proxy\* cc-shim\* linux-container clear-containers-image qemu-lite cc-ksm-throttler
-$ sudo rm /etc/yum.repos.d/home:clearcontainers:clear-containers-3.repo
-```
-
-### Ubuntu
-
-```
-$ sudo apt-get purge cc-runtime\* cc-proxy\* cc-shim\* linux-container clear-containers-image qemu-lite cc-ksm-throttler
-$ sudo rm /etc/apt/sources.list.d/clear-containers.list
-```
-
-## Disable old container manager configuration
-
-Assuming a Docker installation, remove the docker configuration for Clear
-Containers:
-
-```
-$ sudo rm /etc/systemd/system/docker.service.d/clear-containers.conf
-```
+# Maintenance warning
+
+Kata Containers 2.x is the new focus for the Kata Containers development
+community.
+
+Although Kata Containers 1.x releases will continue to be published for a
+period of time, once a stable release for Kata Containers 2.x is published,
+Kata Containers 1.x stable users should consider switching to the Kata 2.x
+release.
+
+See the [stable branch strategy documentation](Stable-Branch-Strategy.md) for
+further details.
+
+# Determine current version
+
+To display the current Kata Containers version, run one of the following:
+
+```bash
+$ kata-runtime --version
+$ containerd-shim-kata-v2 --version
+```
+
+# Determine latest version
+
+Kata Containers 2.x releases are published on the
+[Kata Containers GitHub releases page](https://github.com/kata-containers/kata-containers/releases).
+
+Alternatively, if you are using Kata Containers version 1.12.0 or newer, you
+can check for newer releases using the command line:
+
+```bash
+$ kata-runtime kata-check --check-version-only
+```
+
+There are various other related options. Run `kata-runtime kata-check --help`
+for further details.
+
+# Configuration changes
+
+The [Kata Containers 2.x configuration file](/src/runtime/README.md#configuration)
+is compatible with the
+[Kata Containers 1.x configuration file](https://github.com/kata-containers/runtime/blob/master/README.md#configuration).
+
+However, if you have created a local configuration file
+(`/etc/kata-containers/configuration.toml`), this will mask the newer Kata
+Containers 2.x configuration file.
+
+Since Kata Containers 2.x introduces a number of new options and changes
+some default values, we recommend that you disable the local configuration
+file (by moving or renaming it) until you have reviewed the changes to the
+official configuration file and applied them to your local file if required.
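For example, one way to set the local file aside (a sketch assuming the default path mentioned above):

```bash
$ sudo mv /etc/kata-containers/configuration.toml /etc/kata-containers/configuration.toml.kata-1.x
```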
-## Install Kata Containers
-
-Follow one of the [installation guides](install).
-
-## Create a Kata Container
-
-```
-$ sudo docker run -ti busybox sh
-```
-
-# Upgrade from runV
-
-runV and Kata Containers can run together on the same system without affecting each other, as long as they are
-not configured to use the same container root storage. Currently, runV defaults to `/run/runv` and Kata Containers
-defaults to `/var/run/kata-containers`.
-
-Now, to upgrade from runV you need to fresh install Kata Containers by following one of
-the [installation guides](install).
-
 # Upgrade Kata Containers

+## Upgrade native distribution packaged version
+
 As shown in the
 [installation instructions](install),
 Kata Containers provide binaries for popular distributions in their native
 packaging formats. This allows Kata Containers to be upgraded using the
 standard package management tools for your distribution.

-# Appendices
-
-## Assets
-
-Kata Containers requires additional resources to create a virtual machine
-container. These resources are called
-[Kata Containers assets](./design/architecture.md#assets),
-which comprise a guest kernel and a root filesystem or initrd image. This
-section describes when these components are updated.
-
-Since the official assets are packaged, they are automatically upgraded when
-new package versions are published.
-
-> **Warning**: Note that if you use custom assets (by modifying the
-> [Kata Runtime configuration file](../src/runtime/README.md#configuration)),
-> it is your responsibility to ensure they are updated as necessary.
-
-### Guest kernel
-
-The `kata-linux-container` package contains a Linux\* kernel based on the
-latest vanilla version of the
-[long-term kernel](https://www.kernel.org/)
-plus a small number of
-[patches](../tools/packaging/kernel).
-
-The `Longterm` branch is only updated with
-[important bug fixes](https://www.kernel.org/category/releases.html)
-meaning this package is only updated when necessary.
-
-The guest kernel package is updated when a new long-term kernel is released
-and when any patch updates are required.
-
-### Image
-
-The `kata-containers-image` package is updated only when critical updates are
-available for the packages used to create it, such as:
-
-- systemd
-- [Kata Containers Agent](../src/agent)
-
-### Determining asset versions
-
-To see which versions of the assets are being used:
-
-```
-$ kata-runtime kata-env
-```
+> **Note:**
+>
+> Users should prefer the distribution packaged version of Kata Containers
+> unless they understand the implications of a manual installation.
+
+## Static installation
+
+> **Note:**
+>
+> Unless you are an advanced user, if you are using a static installation of
+> Kata Containers, we recommend you remove it and install a
+> [native distribution packaged version](#upgrade-native-distribution-packaged-version)
+> instead.
+
+### Determine if you are using a static installation
+
+If the following command displays the output "static", you are using a static
+version of Kata Containers:
+
+```bash
+$ ls /opt/kata/bin/kata-runtime &>/dev/null && echo static
+```
+
+### Remove a static installation
+
+Static installations are installed in `/opt/kata/`, so to uninstall simply
+remove this directory.
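For illustration, removing such an installation could look like this (a sketch; double-check the path before deleting anything):

```bash
$ sudo rm -rf /opt/kata/
```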
+### Upgrade a static installation
+
+If you understand the implications of using a static installation, to upgrade,
+first
+[remove the existing static installation](#remove-a-static-installation), then
+[install the latest release](#determine-latest-version).
+
+See the
+[manual installation documentation](install/README.md#manual-installation)
+for details on how to automatically install and configure a static release
+with containerd.
+
+# Custom assets
+
+> **Note:**
+>
+> This section only applies to advanced users who have built their own guest
+> kernel or image.
+
+If you are using custom
+[guest assets](design/architecture.md#guest-assets),
+you must upgrade them to work with Kata Containers 2.x since Kata
+Containers 1.x assets will **not** work.
+
+See the following for further details:
+
+- [Guest kernel documentation](/tools/packaging/kernel)
+- [Guest image and initrd documentation](/tools/osbuilder)
+
+The official assets are packaged meaning they are automatically included in
+new releases.


@ -13,7 +13,6 @@
 - [Runtime](#runtime)
 - [Configuration](#configuration)
 - [Networking](#networking)
-  - [CNM](#cnm)
 - [Network Hotplug](#network-hotplug)
 - [Storage](#storage)
 - [Kubernetes support](#kubernetes-support)
@ -157,66 +156,31 @@ In order to do so, container engines will usually add one end of a virtual
 ethernet (`veth`) pair into the container networking namespace. The other end of
 the `veth` pair is added to the host networking namespace.

-This is a very namespace-centric approach as many hypervisors (in particular QEMU)
-cannot handle `veth` interfaces. Typically, `TAP` interfaces are created for VM
-connectivity.
+This is a very namespace-centric approach as many hypervisors/VMMs cannot handle `veth`
+interfaces. Typically, `TAP` interfaces are created for VM connectivity.

 To overcome the incompatibility between typical container engines' expectations
 and virtual machines, Kata Containers networking transparently connects `veth`
-interfaces with `TAP` ones using MACVTAP:
+interfaces with `TAP` ones using Traffic Control:

 ![Kata Containers networking](arch-images/network.png)

+With a TC filter in place, a redirection is created between the container network and the
+virtual machine. As an example, the CNI may create a device, `eth0`, in the container's network
+namespace, which is a VETH device. Kata Containers will create a tap device for the VM, `tap0_kata`,
+and set up a TC redirection filter to mirror traffic from `eth0`'s ingress to `tap0_kata`'s egress,
+and a second to mirror traffic from `tap0_kata`'s ingress to `eth0`'s egress.
+
+Kata Containers maintains support for MACVTAP, which was an earlier implementation used in Kata. TC-filter
+is the default because it allows for simpler configuration, better CNI plugin compatibility, and performance
+on par with MACVTAP.
+
+Kata Containers has deprecated support for bridge due to its poor performance relative to TC-filter and MACVTAP.
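For illustration only, the cross-connect described above could be created manually with `tc` along these lines (a sketch; the interface names `eth0` and `tap0_kata` are the ones from the example above):

```bash
# Mirror packets arriving on the container-side veth (eth0) to the VM tap device.
$ sudo tc qdisc add dev eth0 handle ffff: ingress
$ sudo tc filter add dev eth0 parent ffff: protocol all u32 match u8 0 0 \
    action mirred egress redirect dev tap0_kata

# Mirror packets arriving on the tap device back to the veth.
$ sudo tc qdisc add dev tap0_kata handle ffff: ingress
$ sudo tc filter add dev tap0_kata parent ffff: protocol all u32 match u8 0 0 \
    action mirred egress redirect dev eth0
```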
 Kata Containers supports both
 [CNM](https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model)
 and [CNI](https://github.com/containernetworking/cni) for networking management.

-### CNM
-
-![High-level CNM Diagram](arch-images/CNM_overall_diagram.png)
-
-__CNM lifecycle__
-
-1. `RequestPool`
-2. `CreateNetwork`
-3. `RequestAddress`
-4. `CreateEndPoint`
-5. `CreateContainer`
-6. Create `config.json`
-7. Create PID and network namespace
-8. `ProcessExternalKey`
-9. `JoinEndPoint`
-10. `LaunchContainer`
-11. Launch
-12. Run container
-
-![Detailed CNM Diagram](arch-images/CNM_detailed_diagram.png)
-
-__Runtime network setup with CNM__
-
-1. Read `config.json`
-2. Create the network namespace
-3. Call the `prestart` hook (from inside the netns)
-4. Scan network interfaces inside netns and get the name of the interface
-   created by prestart hook
-5. Create bridge, TAP, and link all together with network interface previously
-   created
-
 ### Network Hotplug

 Kata Containers has developed a set of network sub-commands and APIs to add, list and


@ -6,6 +6,7 @@
 * [Advanced Topics](#advanced-topics)

 ## Kubernetes Integration
+- [Run Kata containers with `crictl`](run-kata-with-crictl.md)
 - [Run Kata Containers with Kubernetes](run-kata-with-k8s.md)
 - [How to use Kata Containers and Containerd](containerd-kata.md)
 - [How to use Kata Containers and CRI (containerd plugin) with Kubernetes](how-to-use-k8s-with-cri-containerd-and-kata.md)


@ -0,0 +1,19 @@
{
"metadata": {
"name": "busybox-container",
"namespace": "test.kata"
},
"image": {
"image": "docker.io/library/busybox:latest"
},
"command": [
"sleep",
"9999"
],
"args": [],
"working_dir": "/",
"log_path": "",
"stdin": false,
"stdin_once": false,
"tty": false
}


@ -0,0 +1,20 @@
{
"metadata": {
"name": "busybox-pod",
"uid": "busybox-pod",
"namespace": "test.kata"
},
"hostname": "busybox_host",
"log_directory": "",
"dns_config": {
},
"port_mappings": [],
"resources": {
},
"labels": {
},
"annotations": {
},
"linux": {
}
}


@ -0,0 +1,39 @@
{
"metadata": {
"name": "redis-client",
"namespace": "test.kata"
},
"image": {
"image": "docker.io/library/redis:6.0.8-alpine"
},
"command": [
"tail", "-f", "/dev/null"
],
"envs": [
{
"key": "PATH",
"value": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
},
{
"key": "TERM",
"value": "xterm"
}
],
"labels": {
"tier": "backend"
},
"annotations": {
"pod": "redis-client-pod"
},
"log_path": "",
"stdin": false,
"stdin_once": false,
"tty": false,
"linux": {
"resources": {
"memory_limit_in_bytes": 524288000
},
"security_context": {
}
}
}


@ -0,0 +1,28 @@
{
"metadata": {
"name": "redis-client-pod",
"uid": "test-redis-client-pod",
"namespace": "test.kata"
},
"hostname": "redis-client",
"log_directory": "",
"dns_config": {
"searches": [
"8.8.8.8"
]
},
"port_mappings": [],
"resources": {
"cpu": {
"limits": 1,
"requests": 1
}
},
"labels": {
"tier": "backend"
},
"annotations": {
},
"linux": {
}
}


@ -0,0 +1,36 @@
{
"metadata": {
"name": "redis-server",
"namespace": "test.kata"
},
"image": {
"image": "docker.io/library/redis:6.0.8-alpine"
},
"envs": [
{
"key": "PATH",
"value": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
},
{
"key": "TERM",
"value": "xterm"
}
],
"labels": {
"tier": "backend"
},
"annotations": {
"pod": "redis-server-pod"
},
"log_path": "",
"stdin": false,
"stdin_once": false,
"tty": false,
"linux": {
"resources": {
"memory_limit_in_bytes": 524288000
},
"security_context": {
}
}
}


@ -0,0 +1,28 @@
{
"metadata": {
"name": "redis-server-pod",
"uid": "test-redis-server-pod",
"namespace": "test.kata"
},
"hostname": "redis-server",
"log_directory": "",
"dns_config": {
"searches": [
"8.8.8.8"
]
},
"port_mappings": [],
"resources": {
"cpu": {
"limits": 1,
"requests": 1
}
},
"labels": {
"tier": "backend"
},
"annotations": {
},
"linux": {
}
}


@ -0,0 +1,150 @@
# Working with `crictl`
* [What's `cri-tools`](#whats-cri-tools)
* [Use `crictl` to run Pods in Kata Containers](#use-crictl-to-run-pods-in-kata-containers)
* [Run `busybox` Pod](#run-busybox-pod)
* [Run pod sandbox with config file](#run-pod-sandbox-with-config-file)
* [Create container in the pod sandbox with config file](#create-container-in-the-pod-sandbox-with-config-file)
* [Start container](#start-container)
* [Run `redis` Pod](#run-redis-pod)
* [Create `redis-server` Pod](#create-redis-server-pod)
* [Create `redis-client` Pod](#create-redis-client-pod)
* [Check `redis` server is working](#check-redis-server-is-working)
## What's `cri-tools`
[`cri-tools`](https://github.com/kubernetes-sigs/cri-tools) provides debugging and validation tools for Kubelet Container Runtime Interface (CRI).
`cri-tools` includes two tools: `crictl` and `critest`. `crictl` is the CLI for the Kubelet CRI. In this document, we show how to use `crictl` to run Pods with Kata Containers.
> **Note:** `cri-tools` is intended for debugging and validation purposes only; do not use it to run production workloads.
> **Note:** For how to install and configure `cri-tools` with CRI runtimes such as `containerd` or CRI-O, please refer to the other [howtos](./README.md).
## Use `crictl` to run Pods in Kata Containers
Sample config files in this document can be found [here](./data/crictl/).
### Run `busybox` Pod
#### Run pod sandbox with config file
```bash
$ sudo crictl runp -r kata sandbox_config.json
16a62b035940f9c7d79fd53e93902d15ad21f7f9b3735f1ac9f51d16539b836b
$ sudo crictl pods
POD ID CREATED STATE NAME NAMESPACE ATTEMPT
16a62b035940f 21 seconds ago Ready busybox-pod 0
```
#### Create container in the pod sandbox with config file
```bash
$ sudo crictl create 16a62b035940f container_config.json sandbox_config.json
e6ca0e0f7f532686236b8b1f549e4878e4fe32ea6b599a5d684faf168b429202
```
List containers and check the container is in `Created` state:
```bash
$ sudo crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
e6ca0e0f7f532 docker.io/library/busybox:latest 19 seconds ago Created busybox-container 0 16a62b035940f
```
#### Start container
```bash
$ sudo crictl start e6ca0e0f7f532
e6ca0e0f7f532
```
List containers and we can see that the container state has changed from `Created` to `Running`:
```bash
$ sudo crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
e6ca0e0f7f532 docker.io/library/busybox:latest About a minute ago Running busybox-container 0 16a62b035940f
```
Finally, we can `exec` into the `busybox` container:
```bash
$ sudo crictl exec -it e6ca0e0f7f532 sh
```
And run commands in it:
```
/ # hostname
busybox_host
/ # id
uid=0(root) gid=0(root)
```
### Run `redis` Pod
In this example, we will create two Pods: one for the `redis` server and another for the `redis` client.
#### Create `redis-server` Pod
It's also possible to start a container with a single command:
```bash
$ sudo crictl run -r kata redis_server_container_config.json redis_server_sandbox_config.json
bb36e05c599125842c5193909c4de186b1cee3818f5d17b951b6a0422681ce4b
```
#### Create `redis-client` Pod
```bash
$ sudo crictl run -r kata redis_client_container_config.json redis_client_sandbox_config.json
e344346c5414e3f51f97f20b2262e0b7afe457750e94dc0edb109b94622fc693
```
After the new containers start, we can check the running Pods and containers.
```bash
$ sudo crictl pods
POD ID CREATED STATE NAME NAMESPACE ATTEMPT
469d08a7950e3 30 seconds ago Ready redis-client-pod 0
02c12fdb08219 About a minute ago Ready redis-server-pod 0
$ sudo crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
e344346c5414e docker.io/library/redis:6.0.8-alpine 35 seconds ago Running redis-client 0 469d08a7950e3
bb36e05c59912 docker.io/library/redis:6.0.8-alpine About a minute ago Running redis-server 0 02c12fdb08219
```
#### Check `redis` server is working
To connect to the `redis-server`, we first need to get its IP address.
```bash
$ server=$(sudo crictl inspectp 02c12fdb08219 | jq .status.network.ip | tr -d '"' )
$ echo $server
172.19.0.118
```
Launch `redis-cli` in the new Pod and connect to the server running at `172.19.0.118`.
```bash
$ sudo crictl exec -it e344346c5414e redis-cli -h $server
172.19.0.118:6379> get test-key
(nil)
172.19.0.118:6379> set test-key test-value
OK
172.19.0.118:6379> get test-key
"test-value"
```
Then, back on the `redis-server`, check that `test-key` is set on the server.
```bash
$ sudo crictl exec -it bb36e05c59912 redis-cli get test-key
"test-val"
```
The returned `test-value` is the value just set by `redis-cli` in the `redis-client` Pod.
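When you are done experimenting, the Pods can be stopped and removed with the `crictl` pod subcommands (a sketch; the pod IDs are the ones from the examples above):

```bash
$ sudo crictl stopp 469d08a7950e3 02c12fdb08219
$ sudo crictl rmp 469d08a7950e3 02c12fdb08219
```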


@ -1,98 +1,82 @@
 # Kata Containers installation user guides

-- [Kata Containers installation user guides](#kata-containers-installation-user-guides)
-- [Prerequisites](#prerequisites)
-- [Packaged installation methods](#packaged-installation-methods)
-- [Official packages](#official-packages)
-- [Automatic Installation](#automatic-installation)
-- [Snap Installation](#snap-installation)
-- [Scripted Installation](#scripted-installation)
-- [Manual Installation](#manual-installation)
-- [Build from source installation](#build-from-source-installation)
-- [Installing on a Cloud Service Platform](#installing-on-a-cloud-service-platform)
-- [Further information](#further-information)
+* [Kata Containers installation user guides](#kata-containers-installation-user-guides)
+* [Prerequisites](#prerequisites)
+* [Legacy installation](#legacy-installation)
+* [Packaged installation methods](#packaged-installation-methods)
+* [Official packages](#official-packages)
+* [Snap Installation](#snap-installation)
+* [Automatic Installation](#automatic-installation)
+* [Manual Installation](#manual-installation)
+* [Build from source installation](#build-from-source-installation)
+* [Installing on a Cloud Service Platform](#installing-on-a-cloud-service-platform)
+* [Further information](#further-information)

 The following is an overview of the different installation methods available. All of these methods equally result
 in a system configured to run Kata Containers.

 ## Prerequisites

 Kata Containers requires nested virtualization or bare metal.
 See the
-[hardware requirements](../../src/runtime/README.md#hardware-requirements)
+[hardware requirements](/src/runtime/README.md#hardware-requirements)
 to see if your system is capable of running Kata Containers.

+## Legacy installation
+
+If you wish to install a legacy 1.x version of Kata Containers, see
+[the Kata Containers 1.x installation documentation](https://github.com/kata-containers/documentation/tree/master/install/).
+
 ## Packaged installation methods

 > **Notes:**
 >
 > - Packaged installation methods use your distribution's native package format (such as RPM or DEB).
+> - You are strongly encouraged to choose an installation method that provides
+>   automatic updates, to ensure you benefit from security updates and bug fixes.

-| Installation method                                  | Description                                                                               | Distributions supported        |
-|------------------------------------------------------|-------------------------------------------------------------------------------------------|--------------------------------|
-| [Automatic](#automatic-installation)                 | Run a single command to install a full system                                             |                                |
-| [Using snap](#snap-installation)                     | Easy to install and automatic updates                                                     | any distro that supports snapd |
-| [Using official distro packages](#official-packages) | Kata packages provided by Linux distributions official repositories                       |                                |
-| [Scripted](#scripted-installation)                   | Generates an installation script which will result in a working system when executed      |                                |
-| [Manual](#manual-installation)                       | Allows the user to read a brief document and execute the specified commands step-by-step  |                                |
+| Installation method                                  | Description                                                          | Automatic updates | Use case                                                  |
+|------------------------------------------------------|----------------------------------------------------------------------|-------------------|-----------------------------------------------------------|
+| [Using official distro packages](#official-packages) | Kata packages provided by Linux distributions official repositories  | yes               | Recommended for most users.                               |
+| [Using snap](#snap-installation)                     | Easy to install                                                      | yes               | Good alternative to official distro packages.             |
+| [Automatic](#automatic-installation)                 | Run a single command to install a full system                        | **No!**           | For those wanting the latest release quickly.             |
+| [Manual](#manual-installation)                       | Follow a guide step-by-step to install a working system              | **No!**           | For those who want the latest release with more control.  |
+| [Build from source](#build-from-source-installation) | Build the software components manually                               | **No!**           | Power users and developers only.                          |

 ### Official packages

 Kata packages are provided by official distribution repositories for:

-| Distribution (link to packages)                           | Versions                                                                        | Contacts |
-|-----------------------------------------------------------|----------------------------------------------------------------------------------|----------|
-| [CentOS](centos-installation-guide.md)                    | 8                                                                                |          |
-| [Fedora](fedora-installation-guide.md)                    | 32, Rawhide                                                                      |          |
-| [SUSE Linux Enterprise (SLE)](sle-installation-guide.md)  | SLE 15 SP1, 15 SP2                                                               |          |
-| [openSUSE](opensuse-installation-guide.md)                | [Leap 15.1](opensuse-leap-15.1-installation-guide.md)<br>Leap 15.2, Tumbleweed   |          |
+| Distribution (link to installation guide)                 | Minimum versions                                                                 |
+|-----------------------------------------------------------|-----------------------------------------------------------------------------------|
+| [CentOS](centos-installation-guide.md)                    | 8                                                                                 |
+| [Fedora](fedora-installation-guide.md)                    | 32, Rawhide                                                                       |
+| [openSUSE](opensuse-installation-guide.md)                | [Leap 15.1](opensuse-leap-15.1-installation-guide.md)<br>Leap 15.2, Tumbleweed    |
+| [SUSE Linux Enterprise (SLE)](sle-installation-guide.md)  | SLE 15 SP1, 15 SP2                                                                |

-### Automatic Installation
-
-[Use `kata-manager`](installing-with-kata-manager.md) to automatically install Kata packages.
+> **Note:**
+>
+> All users are encouraged to use the official distribution versions of Kata
+> Containers unless they understand the implications of alternative methods.

 ### Snap Installation

-> **Note:** The snap installation is available for all distributions which support `snapd`.
-
 [![Get it from the Snap Store](https://snapcraft.io/static/images/badges/en/snap-store-black.svg)](https://snapcraft.io/kata-containers)

 [Use snap](snap-installation-guide.md) to install Kata Containers from https://snapcraft.io.

-### Scripted Installation
-
-[Use `kata-doc-to-script`](installing-with-kata-doc-to-script.md) to generate installation scripts that can be reviewed before they are executed.
+### Automatic Installation
+
+[Use `kata-manager`](/utils/README.md) to automatically install a working Kata Containers system.

 ### Manual Installation

-Manual installation instructions are available for [these distributions](#packaged-installation-methods) and document how to:
-
-1. Add the Kata Containers repository to your distro package manager, and import the packages signing key.
-2. Install the Kata Containers packages.
-3. Install a supported container manager.
-4. Configure the container manager to use Kata Containers as the default OCI runtime. Or, for Kata Containers 1.5.0 or above, configure the
-   `io.containerd.kata.v2` to be the runtime shim (see [containerd runtime v2 (shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2)
-   and [How to use Kata Containers and CRI (containerd plugin) with Kubernetes](../how-to/how-to-use-k8s-with-cri-containerd-and-kata.md)).
-
-> **Notes on upgrading**:
-> - If you are installing Kata Containers on a system that already has Clear Containers or `runv` installed,
->   first read [the upgrading document](../Upgrading.md).
-
-> **Notes on releases**:
-> - [This download server](http://download.opensuse.org/repositories/home:/katacontainers:/releases:/)
->   hosts the Kata Containers packages built by OBS for all the supported architectures.
->   Packages are available for the latest and stable releases (more info [here](../Stable-Branch-Strategy.md)).
->
-> - The following guides apply to the latest Kata Containers release
->   (a.k.a. `master` release).
->
-> - When choosing a stable release, replace all `master` occurrences in the URLs
->   with a `stable-x.y` version available on the [download server](http://download.opensuse.org/repositories/home:/katacontainers:/releases:/).
-
-> **Notes on packages source verification**:
-> - The Kata packages hosted on the download server are signed with GPG to ensure integrity and authenticity.
->
-> - The public key used to sign packages is available [at this link](https://raw.githubusercontent.com/kata-containers/tests/master/data/rpm-signkey.pub); the fingerprint is `9FDC0CB6 3708CF80 3696E2DC D0B37B82 6063F3ED`.
->
-> - Only trust the signing key and fingerprint listed in the previous bullet point. Do not disable GPG checks,
->   otherwise packages source and authenticity is not guaranteed.
+Follow the [containerd installation guide](container-manager/containerd/containerd-install.md).

 ## Build from source installation

 > **Notes:**
 >
 > - Power users who decide to build from sources should be aware of the
@ -104,6 +88,7 @@ who are comfortable building software from source to use the latest component
 versions. This is not recommended for normal users.

 ## Installing on a Cloud Service Platform

 * [Amazon Web Services (AWS)](aws-installation-guide.md)
 * [Google Compute Engine (GCE)](gce-installation-guide.md)
 * [Microsoft Azure](azure-installation-guide.md)
@ -111,6 +96,7 @@ versions. This is not recommended for normal users.
 * [VEXXHOST OpenStack Cloud](vexxhost-installation-guide.md)

 ## Further information

 * The [upgrading document](../Upgrading.md).
 * The [developer guide](../Developer-Guide.md).
 * The [runtime documentation](../../src/runtime/README.md).


@ -0,0 +1,128 @@
# Install Kata Containers with containerd
> **Note:**
>
> - If Kata Containers and / or containerd are packaged by your distribution,
> we recommend you install these versions to ensure they are updated when
> new releases are available.
> **Warning:**
>
> - These instructions install the **newest** versions of Kata Containers and
> containerd from binary release packages. These versions may not have been
> tested with your distribution version.
>
> - Since your package manager is not being used, it is **your**
> responsibility to ensure these packages are kept up-to-date when new
> versions are released.
>
> - If you decide to proceed and install a Kata Containers release, you can
> still check for the latest version of Kata Containers by running
> `kata-runtime kata-check --only-list-releases`.
>
> - These instructions will not work for Fedora 31 and higher since those
> distribution versions only support cgroups version 2 by default. However,
> Kata Containers currently requires cgroups version 1 (on the host side). See
> https://github.com/kata-containers/kata-containers/issues/927 for further
> details.
## Install Kata Containers
> **Note:**
>
> If your distribution packages Kata Containers, we recommend you install that
> version. If it does not, or you wish to perform a manual installation,
> continue with the steps below.
- Download a release from:
- https://github.com/kata-containers/kata-containers/releases
Note that Kata Containers uses [semantic versioning](https://semver.org) so
you should install a version that does *not* include a dash ("-"), since this
indicates a pre-release version.
- Unpack the downloaded archive.
Kata Containers packages use a `/opt/kata/` prefix so either add that to
your `PATH`, or create symbolic links for the following commands. The
advantage of using symbolic links is that the `systemd(1)` configuration file
for containerd will not need to be modified to allow the daemon to find this
binary (see the [section on installing containerd](#install-containerd) below).
| Command | Description |
|-|-|
| `/opt/kata/bin/containerd-shim-kata-v2` | The main Kata 2.x binary |
| `/opt/kata/bin/kata-collect-data.sh` | Data collection script used for [raising issues](https://github.com/kata-containers/kata-containers/issues) |
| `/opt/kata/bin/kata-runtime` | Utility command |
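For illustration, the symbolic link approach could look like this (a sketch; it assumes `/usr/local/bin` is in the containerd daemon's `PATH`):

```bash
$ sudo ln -s /opt/kata/bin/containerd-shim-kata-v2 /usr/local/bin/containerd-shim-kata-v2
$ sudo ln -s /opt/kata/bin/kata-runtime /usr/local/bin/kata-runtime
$ sudo ln -s /opt/kata/bin/kata-collect-data.sh /usr/local/bin/kata-collect-data.sh
```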
- Check installation by showing version details:
```bash
$ kata-runtime --version
```
## Install containerd
> **Note:**
>
> If your distribution packages containerd, we recommend you install that
> version. If it does not, or you wish to perform a manual installation,
> continue with the steps below.
- Download a release from:
- https://github.com/containerd/containerd/releases
- Unpack the downloaded archive.
- Configure containerd
- Download the standard `systemd(1)` service file and install to
`/etc/systemd/system/`:
- https://raw.githubusercontent.com/containerd/containerd/master/containerd.service
> **Notes:**
>
> - You will need to reload the systemd configuration after installing this
> file.
>
> - If you have not created a symbolic link for
> `/opt/kata/bin/containerd-shim-kata-v2`, you will need to modify this
> file to ensure the containerd daemon's `PATH` contains `/opt/kata/`.
> See the `Environment=` command in `systemd.exec(5)` for further
> details.
- Add the Kata Containers configuration to the containerd configuration file:
```toml
[plugins]
[plugins.cri]
[plugins.cri.containerd]
default_runtime_name = "kata"
[plugins.cri.containerd.runtimes.kata]
runtime_type = "io.containerd.kata.v2"
```
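  By default, containerd reads its configuration from `/etc/containerd/config.toml`; create the file if it does not already exist.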
> **Note:**
>
> The containerd daemon needs to be able to find the
> `containerd-shim-kata-v2` binary to allow Kata Containers to be created.
- Start the containerd service.
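  On systemd based systems, starting the service could look like this (a sketch; it assumes the service file installed above):

  ```bash
  $ sudo systemctl daemon-reload
  $ sudo systemctl enable --now containerd
  ```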
## Test the installation
You are now ready to run Kata Containers. You can perform a simple test by
running the following commands:
```bash
$ image="docker.io/library/busybox:latest"
$ sudo ctr image pull "$image"
$ sudo ctr run --runtime "io.containerd.kata.v2" --rm -t "$image" test-kata uname -r
```
The last command above shows details of the kernel version running inside the
container, which will likely be different to the host kernel version.


@ -1,47 +0,0 @@
# Installing with `kata-doc-to-script`
* [Introduction](#introduction)
* [Packages Installation](#packages-installation)
* [Docker Installation and Setup](#docker-installation-and-setup)
## Introduction
Use [these installation instructions](README.md#packaged-installation-methods) together with
[`kata-doc-to-script`](https://github.com/kata-containers/tests/blob/master/.ci/kata-doc-to-script.sh)
to generate installation bash scripts.
> Note:
> - Only the Docker container manager installation can be scripted. For other setups you must
> install and configure the container manager manually.
## Packages Installation
```bash
$ source /etc/os-release
$ curl -fsSL -O https://raw.githubusercontent.com/kata-containers/documentation/master/install/${ID}-installation-guide.md
$ bash -c "$(curl -fsSL https://raw.githubusercontent.com/kata-containers/tests/master/.ci/kata-doc-to-script.sh) ${ID}-installation-guide.md ${ID}-install.sh"
```
For example, if your distribution is CentOS, the previous example will generate a runnable shell script called `centos-install.sh`.
To proceed with the installation, run:
```bash
$ source /etc/os-release
$ bash "./${ID}-install.sh"
```
## Docker Installation and Setup
```bash
$ source /etc/os-release
$ curl -fsSL -O https://raw.githubusercontent.com/kata-containers/documentation/master/install/docker/${ID}-docker-install.md
$ bash -c "$(curl -fsSL https://raw.githubusercontent.com/kata-containers/tests/master/.ci/kata-doc-to-script.sh) ${ID}-docker-install.md ${ID}-docker-install.sh"
```
For example, if your distribution is CentOS, this will generate a runnable shell script called `centos-docker-install.sh`.
To proceed with the Docker installation, run:
```bash
$ source /etc/os-release
$ bash "./${ID}-docker-install.sh"
```


@ -1,47 +0,0 @@
# Installing with `kata-manager`
* [Introduction](#introduction)
* [Full Installation](#full-installation)
* [Install the Kata packages only](#install-the-kata-packages-only)
* [Further Information](#further-information)
## Introduction
`kata-manager` automates the Kata Containers installation procedure documented for [these Linux distributions](README.md#packaged-installation-methods).
> **Note**:
> - `kata-manager` requires `curl` and `sudo` installed on your system.
>
> - Full installation mode is only available for Docker container manager. For other setups, you
> can still use `kata-manager` to [install Kata package](#install-the-kata-packages-only), and then setup your container manager manually.
>
> - You can run `kata-manager` in dry run mode by passing the `-n` flag. Dry run mode allows you to review the
> commands that `kata-manager` would run, without doing any change to your system.
## Full Installation
This command does the following:
1. Installs Kata Containers packages
2. Installs Docker
3. Configures Docker to use the Kata OCI runtime by default
```bash
$ bash -c "$(curl -fsSL https://raw.githubusercontent.com/kata-containers/tests/master/cmd/kata-manager/kata-manager.sh) install-docker-system"
```
<!--
You can ignore the content of this comment.
(test code run by test-install-docs.sh to validate code blocks this document)
```bash
$ bash -c "$(curl -fsSL https://raw.githubusercontent.com/kata-containers/tests/master/cmd/kata-manager/kata-manager.sh) remove-packages"
```
-->
## Install the Kata packages only
Use the following command to only install Kata Containers packages.
```bash
$ bash -c "$(curl -fsSL https://raw.githubusercontent.com/kata-containers/tests/master/cmd/kata-manager/kata-manager.sh) install-packages"
```
## Further Information
For more information on what `kata-manager` can do, refer to the [`kata-manager` page](https://github.com/kata-containers/tests/blob/master/cmd/kata-manager).


@ -182,12 +182,6 @@ impl<D> RuntimeLevelFilter<D> {
             level: Mutex::new(level),
         }
     }
-
-    fn set_level(&self, level: slog::Level) {
-        let mut log_level = self.level.lock().unwrap();
-        *log_level = level;
-    }
 }

 impl<D> Drain for RuntimeLevelFilter<D>


@ -224,7 +224,7 @@ parts:
     after: [godeps, runtime]
     build-packages:
       - gcc
-      - python
+      - python3
       - zlib1g-dev
       - libcap-ng-dev
       - libglib2.0-dev
@ -284,7 +284,7 @@ parts:
       done

       # Only x86_64 supports libpmem
-      [ "$(uname -m)" = "x86_64" ] && sudo apt-get --no-install-recommends install -y apt-utils ca-certificates libpmem-dev
+      [ "$(uname -m)" = "x86_64" ] && sudo apt-get --no-install-recommends install -y apt-utils ca-certificates libpmem-dev libseccomp-dev

       configure_hypervisor=${kata_dir}/tools/packaging/scripts/configure-hypervisor.sh
       chmod +x ${configure_hypervisor}

@ -46,6 +46,13 @@ ifeq ($(ARCH), ppc64le)
 $(warning "WARNING: powerpc64le-unknown-linux-musl target is unavailable")
 endif

+EXTRA_RUSTFLAGS :=
+ifeq ($(ARCH), aarch64)
+override EXTRA_RUSTFLAGS = -C link-arg=-lgcc
+$(warning "WARNING: aarch64-musl needs extra symbols from libgcc")
+endif
+
 TRIPLE = $(ARCH)-unknown-linux-$(LIBC)
 TARGET_PATH = target/$(TRIPLE)/$(BUILD_TYPE)/$(TARGET)
@ -106,10 +113,10 @@ default: $(TARGET) show-header
 $(TARGET): $(GENERATED_CODE) $(TARGET_PATH)

 $(TARGET_PATH): $(SOURCES) | show-summary
-	@cargo build --target $(TRIPLE) --$(BUILD_TYPE)
+	@RUSTFLAGS="$(EXTRA_RUSTFLAGS) --deny warnings" cargo build --target $(TRIPLE) --$(BUILD_TYPE)

 optimize: $(SOURCES) | show-summary show-header
-	@RUSTFLAGS='-C link-arg=-s' cargo build --target $(TRIPLE) --$(BUILD_TYPE)
+	@RUSTFLAGS="-C link-arg=-s $(EXTRA_RUSTFLAGS) --deny warnings" cargo build --target $(TRIPLE) --$(BUILD_TYPE)

 show-header:
 	@printf "%s - version %s (commit %s)\n\n" "$(TARGET)" "$(VERSION)" "$(COMMIT_MSG)"


@ -126,13 +126,12 @@ pub fn drop_privileges(cfd_log: RawFd, caps: &LinuxCapabilities) -> Result<()> {
     )
     .map_err(|e| anyhow!(e.to_string()))?;

-    if let Err(_) = caps::set(
+    let _ = caps::set(
         None,
         CapSet::Ambient,
         to_capshashset(cfd_log, caps.ambient.as_ref()),
-    ) {
-        log_child!(cfd_log, "failed to set ambient capability");
-    }
+    )
+    .map_err(|_| log_child!(cfd_log, "failed to set ambient capability"));

     Ok(())
 }


@@ -3,7 +3,7 @@
 // SPDX-License-Identifier: Apache-2.0
 //
-use cgroups::blkio::{BlkIo, BlkIoController, BlkIoData, IoService};
+use cgroups::blkio::{BlkIoController, BlkIoData, IoService};
 use cgroups::cpu::CpuController;
 use cgroups::cpuacct::CpuAcctController;
 use cgroups::cpuset::CpuSetController;
@@ -15,18 +15,18 @@ use cgroups::memory::MemController;
 use cgroups::pid::PidController;
 use cgroups::{
     BlkIoDeviceResource, BlkIoDeviceThrottleResource, Cgroup, CgroupPid, Controller,
-    DeviceResource, DeviceResources, HugePageResource, MaxValue, NetworkPriority,
+    DeviceResource, HugePageResource, MaxValue, NetworkPriority,
 };
 use crate::cgroups::Manager as CgroupManager;
 use crate::container::DEFAULT_DEVICES;
-use anyhow::{anyhow, Context, Error, Result};
+use anyhow::{anyhow, Context, Result};
 use lazy_static;
 use libc::{self, pid_t};
 use nix::errno::Errno;
 use oci::{
     LinuxBlockIO, LinuxCPU, LinuxDevice, LinuxDeviceCgroup, LinuxHugepageLimit, LinuxMemory,
-    LinuxNetwork, LinuxPids, LinuxResources, LinuxThrottleDevice, LinuxWeightDevice,
+    LinuxNetwork, LinuxPids, LinuxResources,
 };
 use protobuf::{CachedSize, RepeatedField, SingularPtrField, UnknownFields};
@@ -34,7 +34,6 @@ use protocols::agent::{
     BlkioStats, BlkioStatsEntry, CgroupStats, CpuStats, CpuUsage, HugetlbStats, MemoryData,
     MemoryStats, PidsStats, ThrottlingData,
 };
-use regex::Regex;
 use std::collections::HashMap;
 use std::fs;
 use std::path::Path;
@@ -91,7 +90,7 @@ impl CgroupManager for Manager {
         let h = cgroups::hierarchies::auto();
         let h = Box::new(&*h);
         let cg = load_or_create(h, &self.cpath);
-        cg.add_task(CgroupPid::from(pid as u64));
+        cg.add_task(CgroupPid::from(pid as u64))?;
         Ok(())
     }
@@ -194,10 +193,10 @@ impl CgroupManager for Manager {
         let freezer_controller: &FreezerController = cg.controller_of().unwrap();
         match state {
             FreezerState::Thawed => {
-                freezer_controller.thaw();
+                freezer_controller.thaw()?;
             }
             FreezerState::Frozen => {
-                freezer_controller.freeze();
+                freezer_controller.freeze()?;
             }
             _ => {
                 return Err(nix::Error::Sys(Errno::EINVAL).into());
@@ -230,7 +229,7 @@ impl CgroupManager for Manager {
 }
 fn set_network_resources(
-    cg: &cgroups::Cgroup,
+    _cg: &cgroups::Cgroup,
     network: &LinuxNetwork,
     res: &mut cgroups::Resources,
 ) -> Result<()> {
@@ -259,7 +258,7 @@ fn set_network_resources(
 }
 fn set_devices_resources(
-    cg: &cgroups::Cgroup,
+    _cg: &cgroups::Cgroup,
    device_resources: &Vec<LinuxDeviceCgroup>,
     res: &mut cgroups::Resources,
 ) -> Result<()> {
@@ -267,18 +266,21 @@ fn set_devices_resources(
     let mut devices = vec![];
     for d in device_resources.iter() {
-        let dev = linux_device_group_to_cgroup_device(&d);
-        devices.push(dev);
+        if let Some(dev) = linux_device_group_to_cgroup_device(&d) {
+            devices.push(dev);
+        }
     }
     for d in DEFAULT_DEVICES.iter() {
-        let dev = linux_device_to_cgroup_device(&d);
-        devices.push(dev);
+        if let Some(dev) = linux_device_to_cgroup_device(&d) {
+            devices.push(dev);
+        }
     }
     for d in DEFAULT_ALLOWED_DEVICES.iter() {
-        let dev = linux_device_group_to_cgroup_device(&d);
-        devices.push(dev);
+        if let Some(dev) = linux_device_group_to_cgroup_device(&d) {
+            devices.push(dev);
+        }
     }
     res.devices.update_values = true;
@@ -288,7 +290,7 @@ fn set_devices_resources(
 }
 fn set_hugepages_resources(
-    cg: &cgroups::Cgroup,
+    _cg: &cgroups::Cgroup,
     hugepage_limits: &Vec<LinuxHugepageLimit>,
     res: &mut cgroups::Resources,
 ) -> Result<()> {
@@ -363,11 +365,11 @@ fn set_cpu_resources(cg: &cgroups::Cgroup, cpu: &LinuxCPU) -> Result<()> {
     let cpuset_controller: &CpuSetController = cg.controller_of().unwrap();
     if !cpu.cpus.is_empty() {
-        cpuset_controller.set_cpus(&cpu.cpus);
+        cpuset_controller.set_cpus(&cpu.cpus)?;
     }
     if !cpu.mems.is_empty() {
-        cpuset_controller.set_mems(&cpu.mems);
+        cpuset_controller.set_mems(&cpu.mems)?;
     }
     let cpu_controller: &CpuController = cg.controller_of().unwrap();
@@ -379,11 +381,12 @@ fn set_cpu_resources(cg: &cgroups::Cgroup, cpu: &LinuxCPU) -> Result<()> {
             shares
         };
         if shares != 0 {
-            cpu_controller.set_shares(shares);
+            cpu_controller.set_shares(shares)?;
         }
     }
-    cpu_controller.set_cfs_quota_and_period(cpu.quota, cpu.period);
+    set_resource!(cpu_controller, set_cfs_quota, cpu, quota);
+    set_resource!(cpu_controller, set_cfs_period, cpu, period);
     set_resource!(cpu_controller, set_rt_runtime, cpu, realtime_runtime);
     set_resource!(cpu_controller, set_rt_period_us, cpu, realtime_period);
@@ -465,26 +468,32 @@ fn build_blk_io_device_throttle_resource(
     blk_io_device_throttle_resources
 }
-fn linux_device_to_cgroup_device(d: &LinuxDevice) -> DeviceResource {
-    let dev_type = DeviceType::from_char(d.r#type.chars().next()).unwrap();
+fn linux_device_to_cgroup_device(d: &LinuxDevice) -> Option<DeviceResource> {
+    let dev_type = match DeviceType::from_char(d.r#type.chars().next()) {
+        Some(t) => t,
+        None => return None,
+    };
-    let mut permissions = vec![
+    let permissions = vec![
         DevicePermissions::Read,
         DevicePermissions::Write,
         DevicePermissions::MkNod,
     ];
-    DeviceResource {
+    Some(DeviceResource {
         allow: true,
         devtype: dev_type,
         major: d.major,
         minor: d.minor,
         access: permissions,
-    }
+    })
 }
-fn linux_device_group_to_cgroup_device(d: &LinuxDeviceCgroup) -> DeviceResource {
-    let dev_type = DeviceType::from_char(d.r#type.chars().next()).unwrap();
+fn linux_device_group_to_cgroup_device(d: &LinuxDeviceCgroup) -> Option<DeviceResource> {
+    let dev_type = match DeviceType::from_char(d.r#type.chars().next()) {
+        Some(t) => t,
+        None => return None,
+    };
     let mut permissions: Vec<DevicePermissions> = vec![];
     for p in d.access.chars().collect::<Vec<char>>() {
@@ -496,13 +505,13 @@ fn linux_device_group_to_cgroup_device(d: &LinuxDeviceCgroup) -> DeviceResource
         }
     }
-    DeviceResource {
+    Some(DeviceResource {
         allow: d.allow,
         devtype: dev_type,
         major: d.major.unwrap_or(0),
         minor: d.minor.unwrap_or(0),
         access: permissions,
-    }
+    })
 }
 // split space separated values into an vector of u64
@@ -518,7 +527,7 @@ fn lines_to_map(content: &str) -> HashMap<String, u64> {
         .lines()
         .map(|x| x.split_whitespace().collect::<Vec<&str>>())
         .filter(|x| x.len() == 2 && x[1].parse::<u64>().is_ok())
-        .fold(HashMap::new(), |mut hm, mut x| {
+        .fold(HashMap::new(), |mut hm, x| {
             hm.insert(x[0].to_string(), x[1].parse::<u64>().unwrap());
             hm
         })
@@ -1059,7 +1068,7 @@ impl Manager {
             info!(sl!(), "updating cpuset for path {:?}", &r_path);
             let cg = load_or_create(h, &r_path);
             let cpuset_controller: &CpuSetController = cg.controller_of().unwrap();
-            cpuset_controller.set_cpus(cpuset_cpus);
+            cpuset_controller.set_cpus(cpuset_cpus)?;
         }
         Ok(())
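The device-conversion functions above now return `Option` so that a malformed device type skips the entry instead of panicking on `unwrap()`. A minimal standalone sketch of the same pattern, with simplified types (not the cgroups crate's own):

```
#[derive(Debug)]
enum DevType { Char, Block }

#[derive(Debug)]
struct DevRule { devtype: DevType, major: i64, minor: i64 }

// Returning None lets callers filter out entries with an unknown type char,
// where the old code would have panicked via unwrap().
fn to_rule(kind: &str, major: i64, minor: i64) -> Option<DevRule> {
    let devtype = match kind.chars().next()? {
        'c' => DevType::Char,
        'b' => DevType::Block,
        _ => return None,
    };
    Some(DevRule { devtype, major, minor })
}

fn main() {
    let specs = [("c", 1, 3), ("x", 0, 0), ("b", 8, 0)];
    // filter_map mirrors the if-let loops in set_devices_resources().
    let rules: Vec<DevRule> = specs
        .iter()
        .filter_map(|(k, ma, mi)| to_rule(k, *ma, *mi))
        .collect();
    println!("{:?}", rules); // the "x" entry is dropped
}
```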

View File

@@ -3,11 +3,9 @@
 // SPDX-License-Identifier: Apache-2.0
 //
-// use crate::configs::{FreezerState, Config};
 use anyhow::{anyhow, Result};
 use oci::LinuxResources;
 use protocols::agent::CgroupStats;
-use std::collections::HashMap;
 use cgroups::freezer::FreezerState;

View File

@@ -366,128 +366,3 @@ impl IfPrioMap {
         format!("{} {}", self.interface, self.priority)
     }
 }
-
-/*
-impl Config {
-    fn new(opts: &CreateOpts) -> Result<Self> {
-        if opts.spec.is_none() {
-            return Err(ErrorKind::ErrorCode("invalid createopts!".into()));
-        }
-
-        let root = unistd::getcwd().chain_err(|| "cannot getwd")?;
-        let root = root.as_path().canonicalize().chain_err(||
-            "cannot resolve root into absolute path")?;
-        let mut root = root.into();
-        let cwd = root.clone();
-
-        let spec = opts.spec.as_ref().unwrap();
-        if spec.root.is_none() {
-            return Err(ErrorKind::ErrorCode("no root".into()));
-        }
-
-        let rootfs = PathBuf::from(&spec.root.as_ref().unwrap().path);
-        if rootfs.is_relative() {
-            root = format!("{}/{}", root, rootfs.into());
-        }
-
-        // handle annotations
-        let mut label = spec.annotations
-            .iter()
-            .map(|(key, value)| format!("{}={}", key, value)).collect();
-
-        label.push(format!("bundle={}", cwd));
-
-        let mut config = Config {
-            rootfs: root,
-            no_pivot_root: opts.no_pivot_root,
-            readonlyfs: spec.root.as_ref().unwrap().readonly,
-            hostname: spec.hostname.clone(),
-            labels: label,
-            no_new_keyring: opts.no_new_keyring,
-            rootless_euid: opts.rootless_euid,
-            rootless_cgroups: opts.rootless_cgroups,
-        };
-
-        config.mounts = Vec::new();
-        for m in &spec.mounts {
-            config.mounts.push(Mount::new(&cwd, &m)?);
-        }
-
-        config.devices = create_devices(&spec)?;
-        config.cgroups = Cgroups::new(&opts)?;
-
-        if spec.linux.as_ref().is_none() {
-            return Err(ErrorKind::ErrorCode("no linux configuration".into()));
-        }
-        let linux = spec.linux.as_ref().unwrap();
-
-        let propagation = MOUNTPROPAGATIONMAPPING.get(linux.rootfs_propagation);
-        if propagation.is_none() {
-            Err(ErrorKind::ErrorCode("rootfs propagation not support".into()));
-        }
-
-        config.root_propagation = propagation.unwrap();
-
-        if config.no_pivot_root && (config.root_propagation & MSFlags::MSPRIVATE != 0) {
-            return Err(ErrorKind::ErrorCode("[r]private is not safe without pivot root".into()));
-        }
-
-        // handle namespaces
-        let m: HashMap<String, String> = HashMap::new();
-        for ns in &linux.namespaces {
-            if NAMESPACEMAPPING.get(&ns.r#type.as_str()).is_none() {
-                return Err(ErrorKind::ErrorCode("namespace don't exist".into()));
-            }
-
-            if m.get(&ns.r#type).is_some() {
-                return Err(ErrorKind::ErrorCode(format!("duplicate ns {}", ns.r#type)));
-            }
-
-            m.insert(ns.r#type, ns.path);
-        }
-
-        if m.contains_key(oci::NETWORKNAMESPACE) {
-            let path = m.get(oci::NETWORKNAMESPACE).unwrap();
-            if path == "" {
-                config.networks = vec![Network {
-                    r#type: "loopback",
-                }];
-            }
-        }
-
-        if m.contains_key(oci::USERNAMESPACE) {
-            setup_user_namespace(&spec, &mut config)?;
-        }
-
-        config.namespaces = m.iter().map(|(key, value)| Namespace {
-            r#type: key,
-            path: value,
-        }).collect();
-
-        config.mask_paths = linux.mask_paths;
-        config.readonly_path = linux.readonly_path;
-        config.mount_label = linux.mount_label;
-        config.sysctl = linux.sysctl;
-        config.seccomp = None;
-        config.intelrdt = None;
-
-        if spec.process.is_some() {
-            let process = spec.process.as_ref().unwrap();
-            config.oom_score_adj = process.oom_score_adj;
-            config.process_label = process.selinux_label.clone();
-            if process.capabilities.as_ref().is_some() {
-                let cap = process.capabilities.as_ref().unwrap();
-                config.capabilities = Some(Capabilities {
-                    ..cap
-                })
-            }
-        }
-        config.hooks = None;
-        config.version = spec.version;
-        Ok(config)
-    }
-}
-
-impl Mount {
-    fn new(cwd: &str, m: &oci::Mount) -> Result<Self> {
-    }
-}
-*/

View File

@@ -3,35 +3,32 @@
 // SPDX-License-Identifier: Apache-2.0
 //
-use anyhow::{anyhow, Context, Result};
 use dirs;
 use lazy_static;
-use libc::pid_t;
 use oci::{Hook, Linux, LinuxNamespace, LinuxResources, POSIXRlimit, Spec};
-use oci::{LinuxDevice, LinuxIDMapping};
 use serde_json;
-use std::clone::Clone;
 use std::ffi::{CStr, CString};
 use std::fmt;
-use std::fmt::Display;
 use std::fs;
 use std::os::unix::io::RawFd;
 use std::path::{Path, PathBuf};
-use std::process::Command;
 use std::time::SystemTime;
-// use crate::sync::Cond;
+
+use anyhow::{anyhow, bail, Context, Result};
+use libc::pid_t;
+use oci::{LinuxDevice, LinuxIDMapping};
+use std::clone::Clone;
+use std::fmt::Display;
+use std::process::{Child, Command};
 
 use cgroups::freezer::FreezerState;
 
-use crate::process::Process;
-// use crate::intelrdt::Manager as RdtManager;
+use crate::capabilities::{self, CAPSMAP};
+use crate::cgroups::fs::Manager as FsManager;
+use crate::cgroups::Manager;
 use crate::log_child;
+use crate::process::Process;
 use crate::specconv::CreateOpts;
 use crate::sync::*;
-// use crate::stats::Stats;
-use crate::capabilities::{self, CAPSMAP};
-use crate::cgroups::fs::{self as fscgroup, Manager as FsManager};
-use crate::cgroups::Manager;
 use crate::{mount, validator};
 use protocols::agent::StatsContainerResponse;
@@ -55,7 +52,7 @@ use std::io::BufRead;
 use std::io::BufReader;
 use std::os::unix::io::FromRawFd;
-use slog::{debug, info, o, Logger};
+use slog::{info, o, Logger};
 const STATE_FILENAME: &'static str = "state.json";
 const EXEC_FIFO_FILENAME: &'static str = "exec.fifo";
@@ -214,11 +211,6 @@ pub struct BaseState {
     init_process_pid: i32,
     #[serde(default)]
     init_process_start: u64,
-    /*
-    #[serde(default)]
-    created: SystemTime,
-    config: Config,
-    */
 }
 pub trait BaseContainer {
@@ -280,12 +272,8 @@ pub struct SyncPC {
 }
 pub trait Container: BaseContainer {
-    // fn checkpoint(&self, opts: &CriuOpts) -> Result<()>;
-    // fn restore(&self, p: &Process, opts: &CriuOpts) -> Result<()>;
     fn pause(&mut self) -> Result<()>;
     fn resume(&mut self) -> Result<()>;
-    // fn notify_oom(&self) -> Result<(Sender, Receiver)>;
-    // fn notify_memory_pressure(&self, lvl: PressureLevel) -> Result<(Sender, Receiver)>;
 }
 impl Container for LinuxContainer {
@@ -332,14 +320,11 @@ impl Container for LinuxContainer {
 pub fn init_child() {
     let cwfd = std::env::var(CWFD_FD).unwrap().parse::<i32>().unwrap();
     let cfd_log = std::env::var(CLOG_FD).unwrap().parse::<i32>().unwrap();
-    match do_init_child(cwfd) {
-        Ok(_) => (),
-        Err(e) => {
-            log_child!(cfd_log, "child exit: {:?}", e);
-            write_sync(cwfd, SYNC_FAILED, format!("{:?}", e).as_str());
-            return;
-        }
-    }
+
+    let _ = do_init_child(cwfd).map_err(|e| {
+        log_child!(cfd_log, "child exit: {:?}", e);
+        let _ = write_sync(cwfd, SYNC_FAILED, format!("{:?}", e).as_str());
+    });
 }
 fn do_init_child(cwfd: RawFd) -> Result<()> {
@@ -364,7 +349,7 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
     let buf = read_sync(crfd)?;
     let process_str = std::str::from_utf8(&buf)?;
-    let mut oci_process: oci::Process = serde_json::from_str(process_str)?;
+    let oci_process: oci::Process = serde_json::from_str(process_str)?;
     log_child!(cfd_log, "notify parent to send cgroup manager");
     write_sync(cwfd, SYNC_SUCCESS, "")?;
@@ -385,7 +370,7 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
     let linux = spec.linux.as_ref().unwrap();
     // get namespace vector to join/new
-    let nses = get_namespaces(&linux)?;
+    let nses = get_namespaces(&linux);
     let mut userns = false;
     let mut to_new = CloneFlags::empty();
@@ -404,19 +389,17 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
                 to_new.set(*s, true);
             }
         } else {
-            let fd = match fcntl::open(ns.path.as_str(), OFlag::O_CLOEXEC, Mode::empty()) {
-                Ok(v) => v,
-                Err(e) => {
+            let fd =
+                fcntl::open(ns.path.as_str(), OFlag::O_CLOEXEC, Mode::empty()).map_err(|e| {
                     log_child!(
                         cfd_log,
                         "cannot open type: {} path: {}",
                         ns.r#type.clone(),
                         ns.path.clone()
                     );
-                    log_child!(cfd_log, "error is : {}", e.as_errno().unwrap().desc());
-                    return Err(e.into());
-                }
-            };
+                    log_child!(cfd_log, "error is : {:?}", e.as_errno());
+                    e
+                })?;
             if *s != CloneFlags::CLONE_NEWPID {
                 to_join.push((*s, fd));
@@ -442,6 +425,23 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
         setrlimit(rl)?;
     }
+    //
+    // Make the process non-dumpable, to avoid various race conditions that
+    // could cause processes in namespaces we're joining to access host
+    // resources (or potentially execute code).
+    //
+    // However, if the number of namespaces we are joining is 0, we are not
+    // going to be switching to a different security context. Thus setting
+    // ourselves to be non-dumpable only breaks things (like rootless
+    // containers), which is the recommendation from the kernel folks.
+    //
+    // Ref: https://github.com/opencontainers/runc/commit/50a19c6ff828c58e5dab13830bd3dacde268afe5
+    //
+    if !nses.is_empty() {
+        prctl::set_dumpable(false)
+            .map_err(|e| anyhow!(e).context("set process non-dumpable failed"))?;
+    }
+
     if userns {
         log_child!(cfd_log, "enter new user namespace");
         sched::unshare(CloneFlags::CLONE_NEWUSER)?;
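The dumpable check added in the hunk above gates a single `prctl` call; a hedged standalone sketch of the same logic (the `prctl` and `anyhow` crates are assumed, as in the agent):

```
// While the process is dumpable, processes in the namespaces being joined
// could ptrace it and reach host resources; making it non-dumpable closes
// that window. When no namespaces are joined there is no security-context
// switch, so the call is skipped to keep rootless containers working.
fn harden_before_setns(joining_namespaces: bool) -> anyhow::Result<()> {
    if joining_namespaces {
        prctl::set_dumpable(false)
            .map_err(|e| anyhow::anyhow!("set process non-dumpable failed: {}", e))?;
    }
    Ok(())
}
```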
@@ -468,17 +468,20 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
         }
         log_child!(cfd_log, "join namespace {:?}", s);
-        if let Err(e) = sched::setns(fd, s) {
+        sched::setns(fd, s).or_else(|e| {
             if s == CloneFlags::CLONE_NEWUSER {
                 if e.as_errno().unwrap() != Errno::EINVAL {
-                    write_sync(cwfd, SYNC_FAILED, format!("{:?}", e).as_str());
-                    return Err(e.into());
+                    let _ = write_sync(cwfd, SYNC_FAILED, format!("{:?}", e).as_str());
+                    return Err(e);
                 }
+
+                Ok(())
             } else {
-                write_sync(cwfd, SYNC_FAILED, format!("{:?}", e).as_str());
-                return Err(e.into());
+                let _ = write_sync(cwfd, SYNC_FAILED, format!("{:?}", e).as_str());
+                Err(e)
             }
-        }
+        })?;
         unistd::close(fd)?;
         if s == CloneFlags::CLONE_NEWUSER {
@@ -550,20 +553,19 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
     if guser.additional_gids.len() > 0 {
         setgroups(guser.additional_gids.as_slice()).map_err(|e| {
-            write_sync(
+            let _ = write_sync(
                 cwfd,
                 SYNC_FAILED,
                 format!("setgroups failed: {:?}", e).as_str(),
             );
             e
         })?;
     }
     // NoNewPeiviledges, Drop capabilities
     if oci_process.no_new_privileges {
-        if let Err(_) = prctl::set_no_new_privileges(true) {
-            return Err(anyhow!("cannot set no new privileges"));
-        }
+        prctl::set_no_new_privileges(true).map_err(|_| anyhow!("cannot set no new privileges"))?;
     }
     if oci_process.capabilities.is_some() {
@@ -586,7 +588,7 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
         fifofd = std::env::var(FIFO_FD)?.parse::<i32>().unwrap();
     }
-    //cleanup the env inherited from parent
+    // cleanup the env inherited from parent
     for (key, _) in env::vars() {
         env::remove_var(key);
     }
@@ -595,7 +597,6 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
     for e in env.iter() {
         let v: Vec<&str> = e.splitn(2, "=").collect();
         if v.len() != 2 {
-            //info!(logger, "incorrect env config!");
             continue;
         }
         env::set_var(v[0], v[1]);
@@ -611,20 +612,15 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
     let exec_file = Path::new(&args[0]);
     log_child!(cfd_log, "process command: {:?}", &args);
     if !exec_file.exists() {
-        match find_file(exec_file) {
-            Some(_) => (),
-            None => {
-                return Err(anyhow!("the file {} is not exist", &args[0]));
-            }
-        }
+        find_file(exec_file).ok_or_else(|| anyhow!("the file {} is not exist", &args[0]))?;
     }
     // notify parent that the child's ready to start
     write_sync(cwfd, SYNC_SUCCESS, "")?;
     log_child!(cfd_log, "ready to run exec");
-    unistd::close(cfd_log);
-    unistd::close(crfd);
-    unistd::close(cwfd);
+    let _ = unistd::close(cfd_log);
+    let _ = unistd::close(crfd);
+    let _ = unistd::close(cwfd);
     if oci_process.terminal {
         unistd::setsid()?;
@@ -739,7 +735,6 @@ impl BaseContainer for LinuxContainer {
             return Err(anyhow!("exec fifo exists"));
         }
         unistd::mkfifo(fifo_file.as_str(), Mode::from_bits(0o622).unwrap())?;
-        // defer!(fs::remove_file(&fifo_file)?);
         fifofd = fcntl::open(
             fifo_file.as_str(),
@@ -762,7 +757,9 @@ impl BaseContainer for LinuxContainer {
         let st = self.oci_state()?;
         let (pfd_log, cfd_log) = unistd::pipe().context("failed to create pipe")?;
-        fcntl::fcntl(pfd_log, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC));
+
+        let _ = fcntl::fcntl(pfd_log, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC))
+            .map_err(|e| warn!(logger, "fcntl pfd log FD_CLOEXEC {:?}", e));
         let child_logger = logger.new(o!("action" => "child process log"));
         let log_handler = thread::spawn(move || {
@@ -791,54 +788,57 @@ impl BaseContainer for LinuxContainer {
         info!(logger, "exec fifo opened!");
         let (prfd, cwfd) = unistd::pipe().context("failed to create pipe")?;
         let (crfd, pwfd) = unistd::pipe().context("failed to create pipe")?;
-        fcntl::fcntl(prfd, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC));
-        fcntl::fcntl(pwfd, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC));
+
+        let _ = fcntl::fcntl(prfd, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC))
+            .map_err(|e| warn!(logger, "fcntl prfd FD_CLOEXEC {:?}", e));
+        let _ = fcntl::fcntl(pwfd, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC))
+            .map_err(|e| warn!(logger, "fcntl pwfd FD_COLEXEC {:?}", e));
         defer!({
-            unistd::close(prfd);
-            unistd::close(pwfd);
+            let _ = unistd::close(prfd).map_err(|e| warn!(logger, "close prfd {:?}", e));
+            let _ = unistd::close(pwfd).map_err(|e| warn!(logger, "close pwfd {:?}", e));
         });
-        let mut child_stdin = std::process::Stdio::null();
-        let mut child_stdout = std::process::Stdio::null();
-        let mut child_stderr = std::process::Stdio::null();
-        let mut stdin = -1;
-        let mut stdout = -1;
-        let mut stderr = -1;
+        let child_stdin: std::process::Stdio;
+        let child_stdout: std::process::Stdio;
+        let child_stderr: std::process::Stdio;
         if tty {
-            let pseduo = pty::openpty(None, None)?;
-            p.term_master = Some(pseduo.master);
-            fcntl::fcntl(pseduo.master, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC));
-            fcntl::fcntl(pseduo.slave, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC));
+            let pseudo = pty::openpty(None, None)?;
+            p.term_master = Some(pseudo.master);
+            let _ = fcntl::fcntl(pseudo.master, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC))
+                .map_err(|e| warn!(logger, "fnctl pseudo.master {:?}", e));
+            let _ = fcntl::fcntl(pseudo.slave, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC))
+                .map_err(|e| warn!(logger, "fcntl pseudo.slave {:?}", e));
-            child_stdin = unsafe { std::process::Stdio::from_raw_fd(pseduo.slave) };
-            child_stdout = unsafe { std::process::Stdio::from_raw_fd(pseduo.slave) };
-            child_stderr = unsafe { std::process::Stdio::from_raw_fd(pseduo.slave) };
+            child_stdin = unsafe { std::process::Stdio::from_raw_fd(pseudo.slave) };
+            child_stdout = unsafe { std::process::Stdio::from_raw_fd(pseudo.slave) };
+            child_stderr = unsafe { std::process::Stdio::from_raw_fd(pseudo.slave) };
         } else {
-            stdin = p.stdin.unwrap();
-            stdout = p.stdout.unwrap();
-            stderr = p.stderr.unwrap();
+            let stdin = p.stdin.unwrap();
+            let stdout = p.stdout.unwrap();
+            let stderr = p.stderr.unwrap();
             child_stdin = unsafe { std::process::Stdio::from_raw_fd(stdin) };
             child_stdout = unsafe { std::process::Stdio::from_raw_fd(stdout) };
             child_stderr = unsafe { std::process::Stdio::from_raw_fd(stderr) };
         }
-        let old_pid_ns = match fcntl::open(PID_NS_PATH, OFlag::O_CLOEXEC, Mode::empty()) {
-            Ok(v) => v,
-            Err(e) => {
+        let old_pid_ns =
+            fcntl::open(PID_NS_PATH, OFlag::O_CLOEXEC, Mode::empty()).map_err(|e| {
                 error!(
                     logger,
                     "cannot open pid ns path: {} with error: {:?}", PID_NS_PATH, e
                 );
-                return Err(e.into());
-            }
-        };
+                e
+            })?;
         //restore the parent's process's pid namespace.
         defer!({
-            sched::setns(old_pid_ns, CloneFlags::CLONE_NEWPID);
-            unistd::close(old_pid_ns);
+            let _ = sched::setns(old_pid_ns, CloneFlags::CLONE_NEWPID)
+                .map_err(|e| warn!(logger, "settns CLONE_NEWPID {:?}", e));
+            let _ = unistd::close(old_pid_ns)
+                .map_err(|e| warn!(logger, "close old pid namespace {:?}", e));
        });
         let pidns = get_pid_namespace(&self.logger, linux)?;
@@ -868,7 +868,7 @@ impl BaseContainer for LinuxContainer {
             child = child.env(FIFO_FD, format!("{}", fifofd));
         }
-        let mut child = child.spawn()?;
+        let child = child.spawn()?;
         unistd::close(crfd)?;
         unistd::close(cwfd)?;
@@ -880,29 +880,28 @@ impl BaseContainer for LinuxContainer {
         }
         if p.init {
-            unistd::close(fifofd);
+            let _ = unistd::close(fifofd).map_err(|e| warn!(logger, "close fifofd {:?}", e));
         }
         info!(logger, "child pid: {}", p.pid);
-        match join_namespaces(
+        join_namespaces(
             &logger,
             &spec,
             &p,
             self.cgroup_manager.as_ref().unwrap(),
             &st,
-            &mut child,
             pwfd,
             prfd,
-        ) {
-            Ok(_) => (),
-            Err(e) => {
-                error!(logger, "create container process error {:?}", e);
-                // kill the child process.
-                signal::kill(Pid::from_raw(p.pid), Some(Signal::SIGKILL));
-                return Err(e);
-            }
-        };
+        )
+        .map_err(|e| {
+            error!(logger, "create container process error {:?}", e);
+            // kill the child process.
+            let _ = signal::kill(Pid::from_raw(p.pid), Some(Signal::SIGKILL))
+                .map_err(|e| warn!(logger, "signal::kill joining namespaces {:?}", e));
+
+            e
+        })?;
         info!(logger, "entered namespaces!");
@@ -912,7 +911,9 @@ impl BaseContainer for LinuxContainer {
         let (exit_pipe_r, exit_pipe_w) = unistd::pipe2(OFlag::O_CLOEXEC)
             .context("failed to create pipe")
             .map_err(|e| {
-                signal::kill(Pid::from_raw(child.id() as i32), Some(Signal::SIGKILL));
+                let _ = signal::kill(Pid::from_raw(child.id() as i32), Some(Signal::SIGKILL))
+                    .map_err(|e| warn!(logger, "signal::kill creating pipe {:?}", e));
                 e
             })?;
@@ -926,7 +927,9 @@ impl BaseContainer for LinuxContainer {
         self.processes.insert(p.pid, p);
         info!(logger, "wait on child log handler");
-        log_handler.join();
+        let _ = log_handler
+            .join()
+            .map_err(|e| warn!(logger, "joining log handler {:?}", e));
         info!(logger, "create process completed");
         return Ok(());
     }
@@ -1027,25 +1030,22 @@ fn do_exec(args: &[String]) -> ! {
         .collect();
     let a: Vec<&CStr> = sa.iter().map(|s| s.as_c_str()).collect();
-    if let Err(e) = unistd::execvp(p.as_c_str(), a.as_slice()) {
-        // info!(logger, "execve failed!!!");
-        // info!(logger, "binary: {:?}, args: {:?}, envs: {:?}", p, a, env);
-        match e {
-            nix::Error::Sys(errno) => {
-                std::process::exit(errno as i32);
-            }
-            _ => std::process::exit(-2),
-        }
-    }
+    let _ = unistd::execvp(p.as_c_str(), a.as_slice()).map_err(|e| match e {
+        nix::Error::Sys(errno) => {
+            std::process::exit(errno as i32);
+        }
+        _ => std::process::exit(-2),
+    });
+
     unreachable!()
 }
 fn update_namespaces(logger: &Logger, spec: &mut Spec, init_pid: RawFd) -> Result<()> {
-    let linux = match spec.linux.as_mut() {
-        None => return Err(anyhow!("Spec didn't container linux field")),
-        Some(l) => l,
-    };
+    info!(logger, "updating namespaces");
+    let linux = spec
+        .linux
+        .as_mut()
+        .ok_or_else(|| anyhow!("Spec didn't contain linux field"))?;
     let namespaces = linux.namespaces.as_mut_slice();
     for namespace in namespaces.iter_mut() {
@@ -1072,19 +1072,18 @@ fn get_pid_namespace(logger: &Logger, linux: &Linux) -> Result<Option<RawFd>> {
             return Ok(None);
         }
-        let fd = match fcntl::open(ns.path.as_str(), OFlag::O_CLOEXEC, Mode::empty()) {
-            Ok(v) => v,
-            Err(e) => {
+        let fd =
+            fcntl::open(ns.path.as_str(), OFlag::O_CLOEXEC, Mode::empty()).map_err(|e| {
                 error!(
                     logger,
                     "cannot open type: {} path: {}",
                     ns.r#type.clone(),
                     ns.path.clone()
                 );
-                error!(logger, "error is : {}", e.as_errno().unwrap().desc());
-                return Err(e.into());
-            }
-        };
+                error!(logger, "error is : {:?}", e.as_errno());
+
+                e
+            })?;
         return Ok(Some(fd));
     }
@@ -1094,24 +1093,21 @@ fn get_pid_namespace(logger: &Logger, linux: &Linux) -> Result<Option<RawFd>> {
 }
 fn is_userns_enabled(linux: &Linux) -> bool {
-    for ns in &linux.namespaces {
-        if ns.r#type == "user" && ns.path == "" {
-            return true;
-        }
-    }
-    false
+    linux
+        .namespaces
+        .iter()
+        .any(|ns| ns.r#type == "user" && ns.path == "")
 }
-fn get_namespaces(linux: &Linux) -> Result<Vec<LinuxNamespace>> {
-    let mut ns: Vec<LinuxNamespace> = Vec::new();
-    for i in &linux.namespaces {
-        ns.push(LinuxNamespace {
-            r#type: i.r#type.clone(),
-            path: i.path.clone(),
-        });
-    }
-    Ok(ns)
+fn get_namespaces(linux: &Linux) -> Vec<LinuxNamespace> {
+    linux
+        .namespaces
+        .iter()
+        .map(|ns| LinuxNamespace {
+            r#type: ns.r#type.clone(),
+            path: ns.path.clone(),
+        })
+        .collect()
 }
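The two rewrites above replace manual loops with iterator adapters; since neither body can fail, `get_namespaces()` also drops its `Result` wrapper. A standalone sketch with a simplified namespace type (not the oci crate's):

```
#[derive(Clone)]
struct Ns { kind: String, path: String }

// Equivalent of is_userns_enabled(): any() short-circuits like the old loop.
fn userns_enabled(nses: &[Ns]) -> bool {
    nses.iter().any(|ns| ns.kind == "user" && ns.path.is_empty())
}

// Equivalent of get_namespaces(): map/collect cannot fail, so no Result.
fn clone_namespaces(nses: &[Ns]) -> Vec<Ns> {
    nses.iter().cloned().collect()
}

fn main() {
    let nses = vec![Ns { kind: "user".into(), path: "".into() }];
    assert!(userns_enabled(&nses));
    assert_eq!(clone_namespaces(&nses).len(), 1);
}
```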
 fn join_namespaces(
@@ -1120,7 +1116,6 @@ fn join_namespaces(
     p: &Process,
     cm: &FsManager,
     st: &OCIState,
-    _child: &mut Child,
     pwfd: RawFd,
     prfd: RawFd,
 ) -> Result<()> {
@@ -1137,7 +1132,6 @@ fn join_namespaces(
     info!(logger, "wait child received oci spec");
-    // child.try_wait()?;
     read_sync(prfd)?;
     info!(logger, "send oci process from parent to child");
@@ -1150,7 +1144,7 @@ fn join_namespaces(
     let cm_str = serde_json::to_string(cm)?;
     write_sync(pwfd, SYNC_DATA, cm_str.as_str())?;
-    //wait child setup user namespace
+    // wait child setup user namespace
     info!(logger, "wait child setup user namespace");
     read_sync(prfd)?;
@@ -1209,7 +1203,7 @@ fn join_namespaces(
     read_sync(prfd)?;
     info!(logger, "get ready to run poststart hook!");
-    //run poststart hook
+    // run poststart hook
     if spec.hooks.is_some() {
         info!(logger, "poststart hook");
         let hooks = spec.hooks.as_ref().unwrap();
@@ -1226,36 +1220,30 @@ fn join_namespaces(
 }
 fn write_mappings(logger: &Logger, path: &str, maps: &[LinuxIDMapping]) -> Result<()> {
-    let mut data = String::new();
-    for m in maps {
-        if m.size == 0 {
-            continue;
-        }
-
-        let val = format!("{} {} {}\n", m.container_id, m.host_id, m.size);
-        data = data + &val;
-    }
+    let data = maps
+        .iter()
+        .filter(|m| m.size != 0)
+        .map(|m| format!("{} {} {}\n", m.container_id, m.host_id, m.size))
+        .collect::<Vec<_>>()
+        .join("");
     info!(logger, "mapping: {}", data);
     if !data.is_empty() {
         let fd = fcntl::open(path, OFlag::O_WRONLY, Mode::empty())?;
         defer!(unistd::close(fd).unwrap());
-        match unistd::write(fd, data.as_bytes()) {
-            Ok(_) => {}
-            Err(e) => {
-                info!(logger, "cannot write mapping");
-                return Err(e.into());
-            }
-        }
+        unistd::write(fd, data.as_bytes()).map_err(|e| {
+            info!(logger, "cannot write mapping");
+            e
+        })?;
     }
     Ok(())
 }
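The filter/map/join pipeline introduced above can be run standalone on sample ID mappings; field names follow the OCI spec, but this is an illustration, not the agent's code path:

```
struct IdMap { container_id: u32, host_id: u32, size: u32 }

fn main() {
    let maps = [
        IdMap { container_id: 0, host_id: 1000, size: 1 },
        IdMap { container_id: 1, host_id: 100000, size: 0 }, // skipped: size 0
    ];
    // One line per mapping with non-zero size, in "container host size" form.
    let data = maps
        .iter()
        .filter(|m| m.size != 0)
        .map(|m| format!("{} {} {}\n", m.container_id, m.host_id, m.size))
        .collect::<Vec<_>>()
        .join("");
    assert_eq!(data, "0 1000 1\n");
}
```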
 fn setid(uid: Uid, gid: Gid) -> Result<()> {
     // set uid/gid
-    if let Err(e) = prctl::set_keep_capabilities(true) {
-        bail!(anyhow!(e).context("set keep capabilities returned"));
-    };
+    prctl::set_keep_capabilities(true)
+        .map_err(|e| anyhow!(e).context("set keep capabilities returned"))?;
     {
         unistd::setresgid(gid, gid, gid)?;
     }
@@ -1267,9 +1255,9 @@ fn setid(uid: Uid, gid: Gid) -> Result<()> {
         capabilities::reset_effective()?;
     }
-    if let Err(e) = prctl::set_keep_capabilities(false) {
-        bail!(anyhow!(e).context("set keep capabilities returned"));
-    };
+    prctl::set_keep_capabilities(false)
+        .map_err(|e| anyhow!(e).context("set keep capabilities returned"))?;
     Ok(())
 }
@@ -1287,13 +1275,13 @@ impl LinuxContainer {
         // validate oci spec
         validator::validate(&config)?;
-        if let Err(e) = fs::create_dir_all(root.as_str()) {
+        fs::create_dir_all(root.as_str()).map_err(|e| {
             if e.kind() == std::io::ErrorKind::AlreadyExists {
-                return Err(e).context(format!("container {} already exists", id.as_str()));
+                return anyhow!(e).context(format!("container {} already exists", id.as_str()));
             }
-            return Err(e).context(format!("fail to create container directory {}", root));
-        }
+
+            anyhow!(e).context(format!("fail to create container directory {}", root))
+        })?;
         unistd::chown(
             root.as_str(),
@@ -1428,7 +1416,6 @@ fn set_sysctls(sysctls: &HashMap<String, String>) -> Result<()> {
     Ok(())
 }
-use std::error::Error as StdError;
 use std::io::Read;
 use std::os::unix::process::ExitStatusExt;
 use std::process::Stdio;
@@ -1448,7 +1435,6 @@ fn execute_hook(logger: &Logger, h: &Hook, st: &OCIState) -> Result<()> {
     let args = h.args.clone();
     let envs = h.env.clone();
     let state = serde_json::to_string(st)?;
-    // state.push_str("\n");
     let (rfd, wfd) = unistd::pipe2(OFlag::O_CLOEXEC)?;
     defer!({
@@ -1468,9 +1454,6 @@ fn execute_hook(logger: &Logger, h: &Hook, st: &OCIState) -> Result<()> {
         info!(logger, "hook child: {} status: {}", child, status);
-        // let _ = wait::waitpid(_ch,
-        //     Some(WaitPidFlag::WEXITED | WaitPidFlag::__WALL));
-
         if status != 0 {
             if status == -libc::ETIMEDOUT {
                 return Err(anyhow!(nix::Error::from_errno(Errno::ETIMEDOUT)));
@@ -1511,7 +1494,7 @@ fn execute_hook(logger: &Logger, h: &Hook, st: &OCIState) -> Result<()> {
             .spawn()
             .unwrap();
-        //send out our pid
+        // send out our pid
         tx.send(child.id() as libc::pid_t).unwrap();
         info!(logger, "hook grand: {}", child.id());
@@ -1530,7 +1513,7 @@ fn execute_hook(logger: &Logger, h: &Hook, st: &OCIState) -> Result<()> {
             .unwrap()
             .read_to_string(&mut out)
             .unwrap();
-        info!(logger, "{}", out.as_str());
+        info!(logger, "child stdout: {}", out.as_str());
         match child.wait() {
             Ok(exit) => {
                 let code: i32 = if exit.success() {
@@ -1549,7 +1532,7 @@ fn execute_hook(logger: &Logger, h: &Hook, st: &OCIState) -> Result<()> {
                 info!(
                     logger,
                     "wait child error: {} {}",
-                    e.description(),
+                    e,
                     e.raw_os_error().unwrap()
                 );
@@ -1600,8 +1583,6 @@ fn execute_hook(logger: &Logger, h: &Hook, st: &OCIState) -> Result<()> {
             SYNC_DATA,
             std::str::from_utf8(&status.to_be_bytes()).unwrap_or_default(),
         );
-        // let _ = wait::waitpid(Pid::from_raw(pid),
-        //     Some(WaitPidFlag::WEXITED | WaitPidFlag::__WALL));
         std::process::exit(0);
     }
 }

View File

@@ -15,7 +15,6 @@
 #[macro_use]
 #[cfg(test)]
 extern crate serial_test;
-#[macro_use]
 extern crate serde;
 extern crate serde_json;
 #[macro_use]
@@ -37,13 +36,6 @@ extern crate oci;
 extern crate path_absolutize;
 extern crate regex;
-// Convenience macro to obtain the scope logger
-macro_rules! sl {
-    () => {
-        slog_scope::logger().new(o!("subsystem" => "rustjail"))
-    };
-}
-
 pub mod capabilities;
 pub mod cgroups;
 pub mod container;
@@ -77,7 +69,6 @@ use protocols::oci::{
     Root as grpcRoot, Spec as grpcSpec,
 };
 use std::collections::HashMap;
-use std::mem::MaybeUninit;
 pub fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
     let console_size = if p.ConsoleSize.is_some() {
@@ -99,7 +90,12 @@ pub fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
             username: u.Username.clone(),
         }
     } else {
-        unsafe { MaybeUninit::zeroed().assume_init() }
+        ociUser {
+            uid: 0,
+            gid: 0,
+            additional_gids: vec![],
+            username: String::from(""),
+        }
     };
     let capabilities = if p.Capabilities.is_some() {
@@ -144,11 +140,6 @@ pub fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
     }
 }
-fn process_oci_to_grpc(_p: ociProcess) -> grpcProcess {
-    // dont implement it for now
-    unsafe { MaybeUninit::zeroed().assume_init() }
-}
-
 fn root_grpc_to_oci(root: &grpcRoot) -> ociRoot {
     ociRoot {
         path: root.Path.clone(),
@@ -156,10 +147,6 @@ fn root_grpc_to_oci(root: &grpcRoot) -> ociRoot {
     }
 }
-fn root_oci_to_grpc(_root: &ociRoot) -> grpcRoot {
-    unsafe { MaybeUninit::zeroed().assume_init() }
-}
-
 fn mount_grpc_to_oci(m: &grpcMount) -> ociMount {
     ociMount {
         destination: m.destination.clone(),
@@ -169,10 +156,6 @@ fn mount_grpc_to_oci(m: &grpcMount) -> ociMount {
     }
 }
-fn mount_oci_to_grpc(_m: &ociMount) -> grpcMount {
-    unsafe { MaybeUninit::zeroed().assume_init() }
-}
-
 use oci::Hook as ociHook;
 use protocols::oci::Hook as grpcHook;
@@ -203,10 +186,6 @@ fn hooks_grpc_to_oci(h: &grpcHooks) -> ociHooks {
     }
 }
-fn hooks_oci_to_grpc(_h: &ociHooks) -> grpcHooks {
-    unsafe { MaybeUninit::zeroed().assume_init() }
-}
-
 use oci::{
     LinuxDevice as ociLinuxDevice, LinuxIDMapping as ociLinuxIDMapping,
     LinuxIntelRdt as ociLinuxIntelRdt, LinuxNamespace as ociLinuxNamespace,
@@ -573,17 +552,8 @@ pub fn grpc_to_oci(grpc: &grpcSpec) -> ociSpec {
     }
 }
-pub fn oci_to_grpc(_oci: &ociSpec) -> grpcSpec {
-    unsafe { MaybeUninit::zeroed().assume_init() }
-}
-
 #[cfg(test)]
 mod tests {
-    #[test]
-    fn it_works() {
-        assert_eq!(2 + 2, 4);
-    }
-
     #[allow(unused_macros)]
     #[macro_export]
     macro_rules! skip_if_not_root {

View File

@@ -7,7 +7,9 @@ use anyhow::{anyhow, bail, Context, Error, Result};
 use libc::uid_t;
 use nix::errno::Errno;
 use nix::fcntl::{self, OFlag};
-use nix::mount::{self, MntFlags, MsFlags};
+#[cfg(not(test))]
+use nix::mount;
+use nix::mount::{MntFlags, MsFlags};
 use nix::sys::stat::{self, Mode, SFlag};
 use nix::unistd::{self, Gid, Uid};
 use nix::NixPath;
@@ -111,6 +113,7 @@ lazy_static! {
 }
 #[inline(always)]
+#[allow(unused_variables)]
 fn mount<P1: ?Sized + NixPath, P2: ?Sized + NixPath, P3: ?Sized + NixPath, P4: ?Sized + NixPath>(
     source: Option<&P1>,
     target: &P2,
@@ -125,6 +128,7 @@ fn mount<P1: ?Sized + NixPath, P2: ?Sized + NixPath, P3: ?Sized + NixPath, P4: ?
 }
 #[inline(always)]
+#[allow(unused_variables)]
 fn umount2<P: ?Sized + NixPath>(
     target: &P,
     flags: MntFlags,
@@ -201,6 +205,21 @@ pub fn init_rootfs(
             check_proc_mount(m)?;
         }
+        // If the destination already exists and is not a directory, we bail
+        // out This is to avoid mounting through a symlink or similar -- which
+        // has been a "fun" attack scenario in the past.
+        if m.r#type == "proc" || m.r#type == "sysfs" {
+            if let Ok(meta) = fs::symlink_metadata(&m.destination) {
+                if !meta.is_dir() {
+                    return Err(anyhow!(
+                        "Mount point {} must be ordinary directory: got {:?}",
+                        m.destination,
+                        meta.file_type()
+                    ));
+                }
+            }
+        }
+
         mount_from(cfd_log, &m, &rootfs, flags, &data, "")?;
         // bind mount won't change mount options, we need remount to make mount options
         // effective.
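The destination check added above relies on `symlink_metadata()` not following symlinks, so a symlink planted at a `proc` or `sysfs` mount point is rejected rather than silently mounted through. A standalone sketch of the same check (the `anyhow` crate is assumed, as in the agent):

```
use std::fs;

fn must_be_plain_dir(dest: &str) -> anyhow::Result<()> {
    // symlink_metadata() returns the link itself, not its target, so a
    // symlink here fails the is_dir() test and the mount is refused.
    if let Ok(meta) = fs::symlink_metadata(dest) {
        if !meta.is_dir() {
            anyhow::bail!(
                "Mount point {} must be ordinary directory: got {:?}",
                dest,
                meta.file_type()
            );
        }
    }
    Ok(())
}
```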
@@ -388,20 +407,17 @@ fn mount_cgroups(
             if key != base {
                 let src = format!("{}/{}", m.destination.as_str(), key);
-                match unix::fs::symlink(destination.as_str(), &src[1..]) {
-                    Err(e) => {
-                        log_child!(
-                            cfd_log,
-                            "symlink: {} {} err: {}",
-                            key,
-                            destination.as_str(),
-                            e.to_string()
-                        );
-
-                        return Err(e.into());
-                    }
-                    Ok(_) => {}
-                }
+                unix::fs::symlink(destination.as_str(), &src[1..]).map_err(|e| {
+                    log_child!(
+                        cfd_log,
+                        "symlink: {} {} err: {}",
+                        key,
+                        destination.as_str(),
+                        e.to_string()
+                    );
+
+                    e
+                })?;
             }
         }
@@ -421,6 +437,7 @@ fn mount_cgroups(
     Ok(())
 }
+#[allow(unused_variables)]
 fn pivot_root<P1: ?Sized + NixPath, P2: ?Sized + NixPath>(
     new_root: &P1,
     put_old: &P2,
@@ -553,6 +570,7 @@ fn parse_mount_table() -> Result<Vec<Info>> {
 }
 #[inline(always)]
+#[allow(unused_variables)]
 fn chroot<P: ?Sized + NixPath>(path: &P) -> Result<(), nix::Error> {
     #[cfg(not(test))]
     return unistd::chroot(path);
@@ -594,24 +612,23 @@ pub fn ms_move_root(rootfs: &str) -> Result<bool> {
             MsFlags::MS_SLAVE | MsFlags::MS_REC,
             None::<&str>,
         )?;
-        match umount2(abs_mount_point, MntFlags::MNT_DETACH) {
-            Ok(_) => (),
-            Err(e) => {
-                if e.ne(&nix::Error::from(Errno::EINVAL)) && e.ne(&nix::Error::from(Errno::EPERM)) {
-                    return Err(anyhow!(e));
-                }
-
-                // If we have not privileges for umounting (e.g. rootless), then
-                // cover the path.
-                mount(
-                    Some("tmpfs"),
-                    abs_mount_point,
-                    Some("tmpfs"),
-                    MsFlags::empty(),
-                    None::<&str>,
-                )?;
-            }
-        }
+        umount2(abs_mount_point, MntFlags::MNT_DETACH).or_else(|e| {
+            if e.ne(&nix::Error::from(Errno::EINVAL)) && e.ne(&nix::Error::from(Errno::EPERM)) {
+                return Err(anyhow!(e));
+            }
+
+            // If we have not privileges for umounting (e.g. rootless), then
+            // cover the path.
+            mount(
+                Some("tmpfs"),
+                abs_mount_point,
+                Some("tmpfs"),
+                MsFlags::empty(),
+                None::<&str>,
+            )?;
+
+            Ok(())
+        })?;
     }
     mount(
@@ -668,18 +685,14 @@ fn mount_from(
         Path::new(&dest)
     };
-    // let _ = fs::create_dir_all(&dir);
-    match fs::create_dir_all(&dir) {
-        Ok(_) => {}
-        Err(e) => {
-            log_child!(
-                cfd_log,
-                "creat dir {}: {}",
-                dir.to_str().unwrap(),
-                e.to_string()
-            );
-        }
-    }
+    let _ = fs::create_dir_all(&dir).map_err(|e| {
+        log_child!(
+            cfd_log,
+            "creat dir {}: {}",
+            dir.to_str().unwrap(),
+            e.to_string()
+        )
+    });
     // make sure file exists so we can bind over it
     if src.is_file() {
@@ -696,31 +709,26 @@ fn mount_from(
         }
     };
-    match stat::stat(dest.as_str()) {
-        Ok(_) => {}
-        Err(e) => {
-            log_child!(
-                cfd_log,
-                "dest stat error. {}: {}",
-                dest.as_str(),
-                e.as_errno().unwrap().desc()
-            );
-        }
-    }
+    let _ = stat::stat(dest.as_str()).map_err(|e| {
+        log_child!(
+            cfd_log,
+            "dest stat error. {}: {:?}",
+            dest.as_str(),
+            e.as_errno()
+        )
+    });
-    match mount(
+    mount(
         Some(src.as_str()),
         dest.as_str(),
         Some(m.r#type.as_str()),
         flags,
         Some(d.as_str()),
-    ) {
-        Ok(_) => {}
-        Err(e) => {
-            log_child!(cfd_log, "mount error: {}", e.as_errno().unwrap().desc());
-            return Err(e.into());
-        }
-    }
+    )
+    .map_err(|e| {
+        log_child!(cfd_log, "mount error: {:?}", e.as_errno());
+        e
+    })?;
     if flags.contains(MsFlags::MS_BIND)
         && flags.intersects(
@@ -732,24 +740,17 @@ fn mount_from(
             | MsFlags::MS_SLAVE),
         )
     {
-        match mount(
+        mount(
             Some(dest.as_str()),
             dest.as_str(),
             None::<&str>,
             flags | MsFlags::MS_REMOUNT,
             None::<&str>,
-        ) {
-            Err(e) => {
-                log_child!(
-                    cfd_log,
-                    "remout {}: {}",
-                    dest.as_str(),
-                    e.as_errno().unwrap().desc()
-                );
-                return Err(e.into());
-            }
-            Ok(_) => {}
-        }
+        )
+        .map_err(|e| {
+            log_child!(cfd_log, "remout {}: {:?}", dest.as_str(), e.as_errno());
+            e
+        })?;
     }
     Ok(())
 }
@@ -891,8 +892,6 @@ fn mask_path(path: &str) -> Result<()> {
         return Err(nix::Error::Sys(Errno::EINVAL).into());
     }
-    //info!("{}", path);
-
     match mount(
         Some("/dev/null"),
         path,
@@ -908,7 +907,6 @@ fn mask_path(path: &str) -> Result<()> {
         }
         Err(e) => {
-            //info!("{}: {}", path, e.as_errno().unwrap().desc());
             return Err(e.into());
         }
@@ -923,8 +921,6 @@ fn readonly_path(path: &str) -> Result<()> {
         return Err(nix::Error::Sys(Errno::EINVAL).into());
     }
-    //info!("{}", path);
-
     match mount(
         Some(&path[1..]),
         path,
@@ -942,7 +938,6 @@ fn readonly_path(path: &str) -> Result<()> {
         }
         Err(e) => {
-            //info!("{}: {}", path, e.as_errno().unwrap().desc());
             return Err(e.into());
         }
@@ -1004,8 +999,8 @@ mod tests {
         // there is no spec.mounts, but should pass
         let ret = init_rootfs(stdout_fd, &spec, &cpath, &mounts, true);
         assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
-        let ret = fs::remove_dir_all(rootfs.path().join("dev"));
-        let ret = fs::create_dir(rootfs.path().join("dev"));
+        let _ = fs::remove_dir_all(rootfs.path().join("dev"));
+        let _ = fs::create_dir(rootfs.path().join("dev"));
         // Adding bad mount point to spec.mounts
         spec.mounts.push(oci::Mount {
@@ -1023,8 +1018,8 @@ mod tests {
             ret
         );
         spec.mounts.pop();
-        let ret = fs::remove_dir_all(rootfs.path().join("dev"));
-        let ret = fs::create_dir(rootfs.path().join("dev"));
+        let _ = fs::remove_dir_all(rootfs.path().join("dev"));
+        let _ = fs::create_dir(rootfs.path().join("dev"));
         // mounting a cgroup
         spec.mounts.push(oci::Mount {
@@ -1037,8 +1032,8 @@ mod tests {
         let ret = init_rootfs(stdout_fd, &spec, &cpath, &mounts, true);
         assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
         spec.mounts.pop();
-        let ret = fs::remove_dir_all(rootfs.path().join("dev"));
-        let ret = fs::create_dir(rootfs.path().join("dev"));
+        let _ = fs::remove_dir_all(rootfs.path().join("dev"));
+        let _ = fs::create_dir(rootfs.path().join("dev"));
         // mounting /dev
         spec.mounts.push(oci::Mount {
@@ -1179,8 +1174,8 @@ mod tests {
         let tempdir = tempdir().unwrap();
         let olddir = unistd::getcwd().unwrap();
-        defer!(unistd::chdir(&olddir););
-        unistd::chdir(tempdir.path());
+        defer!(let _ = unistd::chdir(&olddir););
+        let _ = unistd::chdir(tempdir.path());
         let dev = oci::LinuxDevice {
             path: "/fifo".to_string(),

View File

@@ -3,24 +3,17 @@
 // SPDX-License-Identifier: Apache-2.0
 //
-// use std::process::{Stdio, Command, ExitStatus};
 use libc::pid_t;
 use std::fs::File;
 use std::os::unix::io::RawFd;
 use std::sync::mpsc::Sender;
-// use crate::configs::{Capabilities, Rlimit};
-// use crate::cgroups::Manager as CgroupManager;
-// use crate::intelrdt::Manager as RdtManager;
 use nix::fcntl::{fcntl, FcntlArg, OFlag};
 use nix::sys::signal::{self, Signal};
-use nix::sys::socket::{self, AddressFamily, SockFlag, SockType};
 use nix::sys::wait::{self, WaitStatus};
 use nix::unistd::{self, Pid};
 use nix::Result;
-use nix::Error;
 use oci::Process as OCIProcess;
 use slog::Logger;
@@ -33,8 +26,6 @@ pub struct Process {
     pub exit_pipe_r: Option<RawFd>,
     pub exit_pipe_w: Option<RawFd>,
     pub extra_files: Vec<File>,
-    // pub caps: Capabilities,
-    // pub rlimits: Vec<Rlimit>,
     pub term_master: Option<RawFd>,
     pub tty: bool,
     pub parent_stdin: Option<RawFd>,
@@ -151,11 +142,11 @@ mod tests {
     #[test]
     fn test_create_extended_pipe() {
         // Test the default
-        let (r, w) = create_extended_pipe(OFlag::O_CLOEXEC, 0).unwrap();
+        let (_r, _w) = create_extended_pipe(OFlag::O_CLOEXEC, 0).unwrap();
         // Test setting to the max size
         let max_size = get_pipe_max_size();
-        let (r, w) = create_extended_pipe(OFlag::O_CLOEXEC, max_size).unwrap();
+        let (_, w) = create_extended_pipe(OFlag::O_CLOEXEC, max_size).unwrap();
         let actual_size = get_pipe_size(w);
         assert_eq!(max_size, actual_size);
     }

View File

@@ -4,8 +4,6 @@
 //
 use oci::Spec;
-// use crate::configs::namespaces;
-// use crate::configs::device::Device;
 #[derive(Debug)]
 pub struct CreateOpts {
@@ -17,143 +15,3 @@ pub struct CreateOpts {
     pub rootless_euid: bool,
     pub rootless_cgroup: bool,
 }
-
-/*
-const WILDCARD: i32 = -1;
-
-lazy_static! {
-    static ref NAEMSPACEMAPPING: HashMap<&'static str, &'static str> = {
-        let mut m = HashMap::new();
-        m.insert(oci::PIDNAMESPACE, namespaces::NEWPID);
-        m.insert(oci::NETWORKNAMESPACE, namespaces::NEWNET);
-        m.insert(oci::UTSNAMESPACE, namespaces::NEWUTS);
-        m.insert(oci::MOUNTNAMESPACE, namespaces::NEWNS);
-        m.insert(oci::IPCNAMESPACE, namespaces::NEWIPC);
-        m.insert(oci::USERNAMESPACE, namespaces::NEWUSER);
-        m.insert(oci::CGROUPNAMESPACE, namespaces::NEWCGROUP);
-        m
-    };
-
-    static ref MOUNTPROPAGATIONMAPPING: HashMap<&'static str, MsFlags> = {
-        let mut m = HashMap::new();
-        m.insert("rprivate", MsFlags::MS_PRIVATE | MsFlags::MS_REC);
-        m.insert("private", MsFlags::MS_PRIVATE);
-        m.insert("rslave", MsFlags::MS_SLAVE | MsFlags::MS_REC);
-        m.insert("slave", MsFlags::MS_SLAVE);
-        m.insert("rshared", MsFlags::MS_SHARED | MsFlags::MS_REC);
-        m.insert("shared", MsFlags::MS_SHARED);
-        m.insert("runbindable", MsFlags::MS_UNBINDABLE | MsFlags::MS_REC);
-        m.insert("unbindable", MsFlags::MS_UNBINDABLE);
-        m
-    };
-
-    static ref ALLOWED_DEVICES: Vec<Device> = {
-        let mut m = Vec::new();
-        m.push(Device {
-            r#type: 'c',
-            major: WILDCARD,
-            minor: WILDCARD,
-            permissions: "m",
-            allow: true,
-        });
-
-        m.push(Device {
-            r#type: 'b',
-            major: WILDCARD,
-            minor: WILDCARD,
-            permissions: "m",
-            allow: true,
-        });
-
-        m.push(Device {
-            r#type: 'c',
-            path: "/dev/null".to_string(),
-            major: 1,
-            minor: 3,
-            permissions: "rwm",
-            allow: true,
-        });
-
-        m.push(Device {
-            r#type: 'c',
-            path: String::from("/dev/random"),
-            major: 1,
-            minor: 8,
-            permissions: "rwm",
-            allow: true,
-        });
-
-        m.push(Device {
-            r#type: 'c',
-            path: String::from("/dev/full"),
-            major: 1,
-            minor: 7,
-            permissions: "rwm",
-            allow: true,
-        });
-
-        m.push(Device {
-            r#type: 'c',
-            path: String::from("/dev/tty"),
-            major: 5,
-            minor: 0,
-            permissions: "rwm",
-            allow: true,
-        });
-
-        m.push(Device {
-            r#type: 'c',
-            path: String::from("/dev/zero"),
-            major: 1,
-            minor: 5,
-            permissions: "rwm",
-            allow: true,
-        });
-
-        m.push(Device {
-            r#type: 'c',
-            path: String::from("/dev/urandom"),
-            major: 1,
-            minor: 9,
-            permissions: "rwm",
-            allow: true,
-        });
-
-        m.push(Device {
-            r#type: 'c',
-            path: String::from("/dev/console"),
-            major: 5,
-            minor: 1,
-            permissions: "rwm",
-            allow: true,
-        });
-
-        m.push(Device {
-            r#type: 'c',
-            path: String::from(""),
-            major: 136,
-            minor: WILDCARD,
-            permissions: "rwm",
-            allow: true,
-        });
-
-        m.push(Device {
-            r#type: 'c',
-            path: String::from(""),
-            major: 5,
-            minor: 2,
-            permissions: "rwm",
-            allow: true,
-        });
-
-        m.push(Device {
-            r#type: 'c',
-            path: String::from(""),
-            major: 10,
-            minor: 200,
-            permissions: "rwm",
-            allow: true,
-        });
-
-        m
-    };
-}
-*/

View File

@@ -23,7 +23,8 @@ macro_rules! log_child {
         let lfd = $fd;
         let mut log_str = format_args!($($arg)+).to_string();
         log_str.push('\n');
-        write_count(lfd, log_str.as_bytes(), log_str.len());
+        // Ignore error writing to the logger, not much we can do
+        let _ = write_count(lfd, log_str.as_bytes(), log_str.len());
     })
 }
@@ -142,21 +143,15 @@ pub fn write_sync(fd: RawFd, msg_type: i32, data_str: &str) -> Result<()> {
         },
         SYNC_DATA => {
             let length: i32 = data_str.len() as i32;
-            match write_count(fd, &length.to_be_bytes(), MSG_SIZE) {
-                Ok(_count) => (),
-                Err(e) => {
-                    unistd::close(fd)?;
-                    return Err(anyhow!(e).context("error in send message to process"));
-                }
-            }
+            write_count(fd, &length.to_be_bytes(), MSG_SIZE).or_else(|e| {
+                unistd::close(fd)?;
+                Err(anyhow!(e).context("error in send message to process"))
+            })?;
-            match write_count(fd, data_str.as_bytes(), data_str.len()) {
-                Ok(_count) => (),
-                Err(e) => {
-                    unistd::close(fd)?;
-                    return Err(anyhow!(e).context("error in send message to process"));
-                }
-            }
+            write_count(fd, data_str.as_bytes(), data_str.len()).or_else(|e| {
+                unistd::close(fd)?;
+                Err(anyhow!(e).context("error in send message to process"))
+            })?;
         }
         _ => (),
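The `or_else()` form above lets the failure path run cleanup (closing the fd) and attach context before the error propagates. A hedged standalone sketch with stand-ins for `write_count()` and `unistd::close()` (the `anyhow` crate is assumed):

```
use anyhow::{anyhow, Result};

// Stand-ins for the agent's write_count() and unistd::close().
fn write_count(fd: i32, data: &[u8]) -> std::io::Result<usize> {
    let _ = fd;
    Ok(data.len())
}
fn close(fd: i32) -> Result<()> {
    let _ = fd;
    Ok(())
}

// On write failure: close the fd, then propagate the error with context.
fn send(fd: i32, data: &[u8]) -> Result<()> {
    write_count(fd, data).map(|_| ()).or_else(|e| {
        close(fd)?;
        Err(anyhow!(e).context("error in send message to process"))
    })?;
    Ok(())
}

fn main() {
    assert!(send(3, b"hello").is_ok());
}
```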

View File

@@ -8,7 +8,6 @@ use anyhow::{anyhow, Result};
 use lazy_static;
 use nix::errno::Errno;
 use oci::{LinuxIDMapping, LinuxNamespace, Spec};
-use protobuf::RepeatedField;
 use std::collections::HashMap;
 use std::path::{Component, PathBuf};
@@ -226,7 +225,8 @@ fn rootless_euid_mapping(oci: &Spec) -> Result<()> {
         return Err(anyhow!(nix::Error::from_errno(Errno::EINVAL)));
     }
-    if linux.gid_mappings.len() == 0 || linux.gid_mappings.len() == 0 {
+    if linux.uid_mappings.len() == 0 || linux.gid_mappings.len() == 0 {
+        // rootless containers requires at least one UID/GID mapping
         return Err(anyhow!(nix::Error::from_errno(Errno::EINVAL)));
     }
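The hunk above also fixes a real bug: the old condition tested `gid_mappings` twice, so an empty `uid_mappings` list slipped through. A standalone illustration of what the corrected check enforces (simplified struct, not the oci crate's):

```
struct IdMapping { container_id: u32, host_id: u32, size: u32 }

// A rootless container needs at least one UID *and* one GID mapping.
fn mappings_ok(uids: &[IdMapping], gids: &[IdMapping]) -> bool {
    !uids.is_empty() && !gids.is_empty()
}

fn main() {
    let uids = vec![IdMapping { container_id: 0, host_id: 1000, size: 1 }];
    let gids = vec![IdMapping { container_id: 0, host_id: 1000, size: 1 }];
    assert!(mappings_ok(&uids, &gids));
    assert!(!mappings_ok(&[], &gids)); // the old check missed this case
}
```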

View File

@@ -40,6 +40,36 @@ pub struct agentConfig {
     pub unified_cgroup_hierarchy: bool,
 }
 
+// parse_cmdline_param parses commandline parameters.
+macro_rules! parse_cmdline_param {
+    // commandline flags, without func to parse the option values
+    ($param:ident, $key:ident, $field:expr) => {
+        if $param.eq(&$key) {
+            $field = true;
+            continue;
+        }
+    };
+    // commandline options, with func to parse the option values
+    ($param:ident, $key:ident, $field:expr, $func:ident) => {
+        if $param.starts_with(format!("{}=", $key).as_str()) {
+            let val = $func($param)?;
+            $field = val;
+            continue;
+        }
+    };
+    // commandline options, with func to parse the option values, and a match
+    // func to validate the values
+    ($param:ident, $key:ident, $field:expr, $func:ident, $guard:expr) => {
+        if $param.starts_with(format!("{}=", $key).as_str()) {
+            let val = $func($param)?;
+            if $guard(val) {
+                $field = val;
+            }
+            continue;
+        }
+    };
+}
+
 impl agentConfig {
     pub fn new() -> agentConfig {
         agentConfig {
@@ -60,51 +90,49 @@ impl agentConfig {
         let params: Vec<&str> = cmdline.split_ascii_whitespace().collect();
         for param in params.iter() {
             // parse cmdline flags
-            if param.eq(&DEBUG_CONSOLE_FLAG) {
-                self.debug_console = true;
-            }
-
-            if param.eq(&DEV_MODE_FLAG) {
-                self.dev_mode = true;
-            }
+            parse_cmdline_param!(param, DEBUG_CONSOLE_FLAG, self.debug_console);
+            parse_cmdline_param!(param, DEV_MODE_FLAG, self.dev_mode);
 
             // parse cmdline options
-            if param.starts_with(format!("{}=", LOG_LEVEL_OPTION).as_str()) {
-                let level = get_log_level(param)?;
-                self.log_level = level;
-            }
+            parse_cmdline_param!(param, LOG_LEVEL_OPTION, self.log_level, get_log_level);
 
-            if param.starts_with(format!("{}=", HOTPLUG_TIMOUT_OPTION).as_str()) {
-                let hotplugTimeout = get_hotplug_timeout(param)?;
-                // ensure the timeout is a positive value
-                if hotplugTimeout.as_secs() > 0 {
-                    self.hotplug_timeout = hotplugTimeout;
-                }
-            }
+            // ensure the timeout is a positive value
+            parse_cmdline_param!(
+                param,
+                HOTPLUG_TIMOUT_OPTION,
+                self.hotplug_timeout,
+                get_hotplug_timeout,
+                |hotplugTimeout: time::Duration| hotplugTimeout.as_secs() > 0
+            );
 
-            if param.starts_with(format!("{}=", DEBUG_CONSOLE_VPORT_OPTION).as_str()) {
-                let port = get_vsock_port(param)?;
-                if port > 0 {
-                    self.debug_console_vport = port;
-                }
-            }
+            // vsock ports should be positive values
+            parse_cmdline_param!(
+                param,
+                DEBUG_CONSOLE_VPORT_OPTION,
+                self.debug_console_vport,
+                get_vsock_port,
+                |port| port > 0
+            );
 
-            if param.starts_with(format!("{}=", LOG_VPORT_OPTION).as_str()) {
-                let port = get_vsock_port(param)?;
-                if port > 0 {
-                    self.log_vport = port;
-                }
-            }
+            parse_cmdline_param!(
+                param,
+                LOG_VPORT_OPTION,
+                self.log_vport,
+                get_vsock_port,
+                |port| port > 0
+            );
 
-            if param.starts_with(format!("{}=", CONTAINER_PIPE_SIZE_OPTION).as_str()) {
-                let container_pipe_size = get_container_pipe_size(param)?;
-                self.container_pipe_size = container_pipe_size
-            }
+            parse_cmdline_param!(
+                param,
+                CONTAINER_PIPE_SIZE_OPTION,
+                self.container_pipe_size,
+                get_container_pipe_size
+            );
 
-            if param.starts_with(format!("{}=", UNIFIED_CGROUP_HIERARCHY_OPTION).as_str()) {
-                let b = get_bool_value(param, false);
-                self.unified_cgroup_hierarchy = b;
-            }
+            parse_cmdline_param!(
+                param,
+                UNIFIED_CGROUP_HIERARCHY_OPTION,
+                self.unified_cgroup_hierarchy,
+                get_bool_value
+            );
         }
 
         if let Ok(addr) = env::var(SERVER_ADDR_ENV_VAR) {
@@ -185,32 +213,26 @@ fn get_hotplug_timeout(param: &str) -> Result<time::Duration> {
     Ok(time::Duration::from_secs(value.unwrap()))
 }
 
-fn get_bool_value(param: &str, default: bool) -> bool {
+fn get_bool_value(param: &str) -> Result<bool> {
     let fields: Vec<&str> = param.split("=").collect();
 
     if fields.len() != 2 {
-        return default;
+        return Ok(false);
     }
 
     let v = fields[1];
 
-    // bool
-    let t: std::result::Result<bool, std::str::ParseBoolError> = v.parse();
-    if t.is_ok() {
-        return t.unwrap();
-    }
-
-    // integer
-    let i: std::result::Result<u64, std::num::ParseIntError> = v.parse();
-    if i.is_err() {
-        return default;
-    }
-
-    // only `0` returns false, otherwise returns true
-    match i.unwrap() {
-        0 => false,
-        _ => true,
-    }
+    // first try to parse as a bool value
+    v.parse::<bool>().or_else(|_err1| {
+        // then try to parse as an integer value
+        v.parse::<u64>().or_else(|_err2| Ok(0)).and_then(|v| {
+            // only `0` returns false, otherwise returns true
+            Ok(match v {
+                0 => false,
+                _ => true,
+            })
+        })
+    })
 }
 
 fn get_container_pipe_size(param: &str) -> Result<i32> {
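The hunks above collapse a chain of repetitive `if param.starts_with(...)` blocks into the `parse_cmdline_param!` macro, with an optional guard closure to validate values. A minimal, self-contained sketch of the same flag/option/guard pattern — the `Config` struct and parameter names here are hypothetical, not the agent's real API:

```
#[derive(Debug, Default)]
struct Config {
    debug_console: bool,
    log_vport: i32,
}

fn parse(cmdline: &str, cfg: &mut Config) {
    for param in cmdline.split_ascii_whitespace() {
        // flag form: presence alone sets the field
        if param == "agent.debug_console" {
            cfg.debug_console = true;
            continue;
        }
        // option-with-guard form: parse the value, keep it only if valid
        if let Some(v) = param.strip_prefix("agent.log_vport=") {
            if let Ok(port) = v.parse::<i32>() {
                if port > 0 {
                    cfg.log_vport = port;
                }
            }
            continue;
        }
    }
}

fn main() {
    let mut cfg = Config::default();
    parse("agent.debug_console agent.log_vport=1025", &mut cfg);
    assert!(cfg.debug_console);
    assert_eq!(cfg.log_vport, 1025);
}
```

The macro buys the same behaviour without hand-writing one such block per option: each invocation expands to a match-and-`continue` just like the ones in this sketch.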

View File

@@ -28,8 +28,15 @@ macro_rules! sl {
 
 const VM_ROOTFS: &str = "/";
 
+struct DevIndexEntry {
+    idx: usize,
+    residx: Vec<usize>,
+}
+
+struct DevIndex(HashMap<String, DevIndexEntry>);
+
 // DeviceHandler is the type of callback to be defined to handle every type of device driver.
-type DeviceHandler = fn(&Device, &mut Spec, &Arc<Mutex<Sandbox>>) -> Result<()>;
+type DeviceHandler = fn(&Device, &mut Spec, &Arc<Mutex<Sandbox>>, &DevIndex) -> Result<()>;
 
 // DeviceHandlerList lists the supported drivers.
 #[cfg_attr(rustfmt, rustfmt_skip)]
@@ -130,17 +137,14 @@ fn get_device_name(sandbox: &Arc<Mutex<Sandbox>>, dev_addr: &str) -> Result<String> {
     info!(sl!(), "Waiting on channel for device notification\n");
     let hotplug_timeout = AGENT_CONFIG.read().unwrap().hotplug_timeout;
 
-    let dev_name = match rx.recv_timeout(hotplug_timeout) {
-        Ok(name) => name,
-        Err(_) => {
-            GLOBAL_DEVICE_WATCHER.lock().unwrap().remove_entry(dev_addr);
-            return Err(anyhow!(
-                "Timeout reached after {:?} waiting for device {}",
-                hotplug_timeout,
-                dev_addr
-            ));
-        }
-    };
+    let dev_name = rx.recv_timeout(hotplug_timeout).map_err(|_| {
+        GLOBAL_DEVICE_WATCHER.lock().unwrap().remove_entry(dev_addr);
+        anyhow!(
+            "Timeout reached after {:?} waiting for device {}",
+            hotplug_timeout,
+            dev_addr
+        )
+    })?;
 
     Ok(format!("{}/{}", SYSTEM_DEV_PATH, &dev_name))
 }
@@ -194,7 +198,7 @@ fn scan_scsi_bus(scsi_addr: &str) -> Result<()> {
 // the same device in the list of devices provided through the OCI spec.
 // This is needed to update information about minor/major numbers that cannot
 // be predicted from the caller.
-fn update_spec_device_list(device: &Device, spec: &mut Spec) -> Result<()> {
+fn update_spec_device_list(device: &Device, spec: &mut Spec, devidx: &DevIndex) -> Result<()> {
     let major_id: c_uint;
     let minor_id: c_uint;
@@ -207,10 +211,10 @@ fn update_spec_device_list(device: &Device, spec: &mut Spec, devidx: &DevIndex) -> Result<()> {
         ));
     }
 
-    let linux = match spec.linux.as_mut() {
-        None => return Err(anyhow!("Spec didn't container linux field")),
-        Some(l) => l,
-    };
+    let linux = spec
+        .linux
+        .as_mut()
+        .ok_or_else(|| anyhow!("Spec didn't container linux field"))?;
 
     if !Path::new(&device.vm_path).exists() {
         return Err(anyhow!("vm_path:{} doesn't exist", device.vm_path));
@@ -228,44 +232,44 @@ fn update_spec_device_list(device: &Device, spec: &mut Spec, devidx: &DevIndex) -> Result<()> {
         "got the device: dev_path: {}, major: {}, minor: {}\n", &device.vm_path, major_id, minor_id
     );
 
-    let devices = linux.devices.as_mut_slice();
-    for dev in devices.iter_mut() {
-        if dev.path == device.container_path {
-            let host_major = dev.major;
-            let host_minor = dev.minor;
-
-            dev.major = major_id as i64;
-            dev.minor = minor_id as i64;
-
-            info!(
-                sl!(),
-                "change the device from major: {} minor: {} to vm device major: {} minor: {}",
-                host_major,
-                host_minor,
-                major_id,
-                minor_id
-            );
-
-            // Resources must be updated since they are used to identify the
-            // device in the devices cgroup.
-            if let Some(res) = linux.resources.as_mut() {
-                let ds = res.devices.as_mut_slice();
-                for d in ds.iter_mut() {
-                    if d.major == Some(host_major) && d.minor == Some(host_minor) {
-                        d.major = Some(major_id as i64);
-                        d.minor = Some(minor_id as i64);
-
-                        info!(
-                            sl!(),
-                            "set resources for device major: {} minor: {}\n", major_id, minor_id
-                        );
-                    }
-                }
-            }
-        }
-    }
-
-    Ok(())
+    if let Some(idxdata) = devidx.0.get(device.container_path.as_str()) {
+        let dev = &mut linux.devices[idxdata.idx];
+        let host_major = dev.major;
+        let host_minor = dev.minor;
+
+        dev.major = major_id as i64;
+        dev.minor = minor_id as i64;
+
+        info!(
+            sl!(),
+            "change the device from major: {} minor: {} to vm device major: {} minor: {}",
+            host_major,
+            host_minor,
+            major_id,
+            minor_id
+        );
+
+        // Resources must be updated since they are used to identify
+        // the device in the devices cgroup.
+        for ridx in &idxdata.residx {
+            // unwrap is safe, because residx would be empty if there
+            // were no resources
+            let res = &mut linux.resources.as_mut().unwrap().devices[*ridx];
+            res.major = Some(major_id as i64);
+            res.minor = Some(minor_id as i64);
+
+            info!(
+                sl!(),
+                "set resources for device major: {} minor: {}\n", major_id, minor_id
+            );
+        }
+
+        Ok(())
+    } else {
+        Err(anyhow!(
+            "Should have found a matching device {} in the spec",
+            device.vm_path
+        ))
+    }
 }
 
 // device.Id should be the predicted device name (vda, vdb, ...)
@@ -274,12 +278,13 @@ fn virtiommio_blk_device_handler(
     device: &Device,
     spec: &mut Spec,
     _sandbox: &Arc<Mutex<Sandbox>>,
+    devidx: &DevIndex,
 ) -> Result<()> {
     if device.vm_path == "" {
         return Err(anyhow!("Invalid path for virtio mmio blk device"));
     }
 
-    update_spec_device_list(device, spec)
+    update_spec_device_list(device, spec, devidx)
 }
 
 // device.Id should be the PCI address in the format "bridgeAddr/deviceAddr".
@@ -289,6 +294,7 @@ fn virtio_blk_device_handler(
     device: &Device,
     spec: &mut Spec,
     sandbox: &Arc<Mutex<Sandbox>>,
+    devidx: &DevIndex,
 ) -> Result<()> {
     let mut dev = device.clone();
 
@@ -298,7 +304,7 @@ fn virtio_blk_device_handler(
         dev.vm_path = get_pci_device_name(sandbox, &device.id)?;
     }
 
-    update_spec_device_list(&dev, spec)
+    update_spec_device_list(&dev, spec, devidx)
 }
 
 // device.Id should be the SCSI address of the disk in the format "scsiID:lunID"
@@ -306,22 +312,49 @@ fn virtio_scsi_device_handler(
     device: &Device,
     spec: &mut Spec,
     sandbox: &Arc<Mutex<Sandbox>>,
+    devidx: &DevIndex,
 ) -> Result<()> {
     let mut dev = device.clone();
     dev.vm_path = get_scsi_device_name(sandbox, &device.id)?;
 
-    update_spec_device_list(&dev, spec)
+    update_spec_device_list(&dev, spec, devidx)
 }
 
 fn virtio_nvdimm_device_handler(
     device: &Device,
     spec: &mut Spec,
     _sandbox: &Arc<Mutex<Sandbox>>,
+    devidx: &DevIndex,
 ) -> Result<()> {
     if device.vm_path == "" {
         return Err(anyhow!("Invalid path for nvdimm device"));
     }
 
-    update_spec_device_list(device, spec)
+    update_spec_device_list(device, spec, devidx)
 }
+
+impl DevIndex {
+    fn new(spec: &Spec) -> DevIndex {
+        let mut map = HashMap::new();
+
+        for linux in spec.linux.as_ref() {
+            for (i, d) in linux.devices.iter().enumerate() {
+                let mut residx = Vec::new();
+
+                for linuxres in linux.resources.as_ref() {
+                    for (j, r) in linuxres.devices.iter().enumerate() {
+                        if r.r#type == d.r#type
+                            && r.major == Some(d.major)
+                            && r.minor == Some(d.minor)
+                        {
+                            residx.push(j);
+                        }
+                    }
+                }
+                map.insert(d.path.clone(), DevIndexEntry { idx: i, residx });
+            }
+        }
+        DevIndex(map)
+    }
+}
 
 pub fn add_devices(
@@ -329,14 +362,21 @@ pub fn add_devices(
     spec: &mut Spec,
     sandbox: &Arc<Mutex<Sandbox>>,
 ) -> Result<()> {
+    let devidx = DevIndex::new(spec);
+
     for device in devices.iter() {
-        add_device(device, spec, sandbox)?;
+        add_device(device, spec, sandbox, &devidx)?;
     }
 
     Ok(())
 }
 
-fn add_device(device: &Device, spec: &mut Spec, sandbox: &Arc<Mutex<Sandbox>>) -> Result<()> {
+fn add_device(
+    device: &Device,
+    spec: &mut Spec,
+    sandbox: &Arc<Mutex<Sandbox>>,
+    devidx: &DevIndex,
+) -> Result<()> {
     // log before validation to help with debugging gRPC protocol version differences.
     info!(sl!(), "device-id: {}, device-type: {}, device-vm-path: {}, device-container-path: {}, device-options: {:?}",
         device.id, device.field_type, device.vm_path, device.container_path, device.options);
 
@@ -355,7 +395,7 @@ fn add_device(
     match DEVICEHANDLERLIST.get(device.field_type.as_str()) {
         None => Err(anyhow!("Unknown device type {}", device.field_type)),
-        Some(dev_handler) => dev_handler(device, spec, sandbox),
+        Some(dev_handler) => dev_handler(device, spec, sandbox, devidx),
     }
 }
@@ -368,10 +408,10 @@ pub fn update_device_cgroup(spec: &mut Spec) -> Result<()> {
     let major = stat::major(rdev) as i64;
     let minor = stat::minor(rdev) as i64;
 
-    let linux = match spec.linux.as_mut() {
-        None => return Err(anyhow!("Spec didn't container linux field")),
-        Some(l) => l,
-    };
+    let linux = spec
+        .linux
+        .as_mut()
+        .ok_or_else(|| anyhow!("Spec didn't container linux field"))?;
 
     if linux.resources.is_none() {
         linux.resources = Some(LinuxResources::default());
@@ -413,4 +453,263 @@ mod tests {
         assert_eq!(devices[0].major, Some(major));
         assert_eq!(devices[0].minor, Some(minor));
     }
#[test]
fn test_update_spec_device_list() {
let (major, minor) = (7, 2);
let mut device = Device::default();
let mut spec = Spec::default();
// container_path empty
let devidx = DevIndex::new(&spec);
let res = update_spec_device_list(&device, &mut spec, &devidx);
assert!(res.is_err());
device.container_path = "/dev/null".to_string();
// linux is empty
let devidx = DevIndex::new(&spec);
let res = update_spec_device_list(&device, &mut spec, &devidx);
assert!(res.is_err());
spec.linux = Some(Linux::default());
// linux.devices is empty
let devidx = DevIndex::new(&spec);
let res = update_spec_device_list(&device, &mut spec, &devidx);
assert!(res.is_err());
spec.linux.as_mut().unwrap().devices = vec![oci::LinuxDevice {
path: "/dev/null2".to_string(),
major,
minor,
..oci::LinuxDevice::default()
}];
// vm_path empty
let devidx = DevIndex::new(&spec);
let res = update_spec_device_list(&device, &mut spec, &devidx);
assert!(res.is_err());
device.vm_path = "/dev/null".to_string();
// guest and host path are not the same
let devidx = DevIndex::new(&spec);
let res = update_spec_device_list(&device, &mut spec, &devidx);
assert!(res.is_err(), "device={:?} spec={:?}", device, spec);
spec.linux.as_mut().unwrap().devices[0].path = device.container_path.clone();
// spec.linux.resources is empty
let devidx = DevIndex::new(&spec);
let res = update_spec_device_list(&device, &mut spec, &devidx);
assert!(res.is_ok());
// update both devices and cgroup lists
spec.linux.as_mut().unwrap().devices = vec![oci::LinuxDevice {
path: device.container_path.clone(),
major,
minor,
..oci::LinuxDevice::default()
}];
spec.linux.as_mut().unwrap().resources = Some(oci::LinuxResources {
devices: vec![oci::LinuxDeviceCgroup {
major: Some(major),
minor: Some(minor),
..oci::LinuxDeviceCgroup::default()
}],
..oci::LinuxResources::default()
});
let devidx = DevIndex::new(&spec);
let res = update_spec_device_list(&device, &mut spec, &devidx);
assert!(res.is_ok());
}
#[test]
fn test_update_spec_device_list_guest_host_conflict() {
let null_rdev = fs::metadata("/dev/null").unwrap().rdev();
let zero_rdev = fs::metadata("/dev/zero").unwrap().rdev();
let full_rdev = fs::metadata("/dev/full").unwrap().rdev();
let host_major_a = stat::major(null_rdev) as i64;
let host_minor_a = stat::minor(null_rdev) as i64;
let host_major_b = stat::major(zero_rdev) as i64;
let host_minor_b = stat::minor(zero_rdev) as i64;
let mut spec = Spec {
linux: Some(Linux {
devices: vec![
oci::LinuxDevice {
path: "/dev/a".to_string(),
r#type: "c".to_string(),
major: host_major_a,
minor: host_minor_a,
..oci::LinuxDevice::default()
},
oci::LinuxDevice {
path: "/dev/b".to_string(),
r#type: "c".to_string(),
major: host_major_b,
minor: host_minor_b,
..oci::LinuxDevice::default()
},
],
resources: Some(LinuxResources {
devices: vec![
oci::LinuxDeviceCgroup {
r#type: "c".to_string(),
major: Some(host_major_a),
minor: Some(host_minor_a),
..oci::LinuxDeviceCgroup::default()
},
oci::LinuxDeviceCgroup {
r#type: "c".to_string(),
major: Some(host_major_b),
minor: Some(host_minor_b),
..oci::LinuxDeviceCgroup::default()
},
],
..LinuxResources::default()
}),
..Linux::default()
}),
..Spec::default()
};
let devidx = DevIndex::new(&spec);
let dev_a = Device {
container_path: "/dev/a".to_string(),
vm_path: "/dev/zero".to_string(),
..Device::default()
};
let guest_major_a = stat::major(zero_rdev) as i64;
let guest_minor_a = stat::minor(zero_rdev) as i64;
let dev_b = Device {
container_path: "/dev/b".to_string(),
vm_path: "/dev/full".to_string(),
..Device::default()
};
let guest_major_b = stat::major(full_rdev) as i64;
let guest_minor_b = stat::minor(full_rdev) as i64;
let specdevices = &spec.linux.as_ref().unwrap().devices;
assert_eq!(host_major_a, specdevices[0].major);
assert_eq!(host_minor_a, specdevices[0].minor);
assert_eq!(host_major_b, specdevices[1].major);
assert_eq!(host_minor_b, specdevices[1].minor);
let specresources = spec.linux.as_ref().unwrap().resources.as_ref().unwrap();
assert_eq!(Some(host_major_a), specresources.devices[0].major);
assert_eq!(Some(host_minor_a), specresources.devices[0].minor);
assert_eq!(Some(host_major_b), specresources.devices[1].major);
assert_eq!(Some(host_minor_b), specresources.devices[1].minor);
let res = update_spec_device_list(&dev_a, &mut spec, &devidx);
assert!(res.is_ok());
let specdevices = &spec.linux.as_ref().unwrap().devices;
assert_eq!(guest_major_a, specdevices[0].major);
assert_eq!(guest_minor_a, specdevices[0].minor);
assert_eq!(host_major_b, specdevices[1].major);
assert_eq!(host_minor_b, specdevices[1].minor);
let specresources = spec.linux.as_ref().unwrap().resources.as_ref().unwrap();
assert_eq!(Some(guest_major_a), specresources.devices[0].major);
assert_eq!(Some(guest_minor_a), specresources.devices[0].minor);
assert_eq!(Some(host_major_b), specresources.devices[1].major);
assert_eq!(Some(host_minor_b), specresources.devices[1].minor);
let res = update_spec_device_list(&dev_b, &mut spec, &devidx);
assert!(res.is_ok());
let specdevices = &spec.linux.as_ref().unwrap().devices;
assert_eq!(guest_major_a, specdevices[0].major);
assert_eq!(guest_minor_a, specdevices[0].minor);
assert_eq!(guest_major_b, specdevices[1].major);
assert_eq!(guest_minor_b, specdevices[1].minor);
let specresources = spec.linux.as_ref().unwrap().resources.as_ref().unwrap();
assert_eq!(Some(guest_major_a), specresources.devices[0].major);
assert_eq!(Some(guest_minor_a), specresources.devices[0].minor);
assert_eq!(Some(guest_major_b), specresources.devices[1].major);
assert_eq!(Some(guest_minor_b), specresources.devices[1].minor);
}
#[test]
fn test_update_spec_device_list_char_block_conflict() {
let null_rdev = fs::metadata("/dev/null").unwrap().rdev();
let guest_major = stat::major(null_rdev) as i64;
let guest_minor = stat::minor(null_rdev) as i64;
let host_major: i64 = 99;
let host_minor: i64 = 99;
let mut spec = Spec {
linux: Some(Linux {
devices: vec![
oci::LinuxDevice {
path: "/dev/char".to_string(),
r#type: "c".to_string(),
major: host_major,
minor: host_minor,
..oci::LinuxDevice::default()
},
oci::LinuxDevice {
path: "/dev/block".to_string(),
r#type: "b".to_string(),
major: host_major,
minor: host_minor,
..oci::LinuxDevice::default()
},
],
resources: Some(LinuxResources {
devices: vec![
LinuxDeviceCgroup {
r#type: "c".to_string(),
major: Some(host_major),
minor: Some(host_minor),
..LinuxDeviceCgroup::default()
},
LinuxDeviceCgroup {
r#type: "b".to_string(),
major: Some(host_major),
minor: Some(host_minor),
..LinuxDeviceCgroup::default()
},
],
..LinuxResources::default()
}),
..Linux::default()
}),
..Spec::default()
};
let devidx = DevIndex::new(&spec);
let dev = Device {
container_path: "/dev/char".to_string(),
vm_path: "/dev/null".to_string(),
..Device::default()
};
let specresources = spec.linux.as_ref().unwrap().resources.as_ref().unwrap();
assert_eq!(Some(host_major), specresources.devices[0].major);
assert_eq!(Some(host_minor), specresources.devices[0].minor);
assert_eq!(Some(host_major), specresources.devices[1].major);
assert_eq!(Some(host_minor), specresources.devices[1].minor);
let res = update_spec_device_list(&dev, &mut spec, &devidx);
assert!(res.is_ok());
// Only the char device, not the block device should be updated
let specresources = spec.linux.as_ref().unwrap().resources.as_ref().unwrap();
assert_eq!(Some(guest_major), specresources.devices[0].major);
assert_eq!(Some(guest_minor), specresources.devices[0].minor);
assert_eq!(Some(host_major), specresources.devices[1].major);
assert_eq!(Some(host_minor), specresources.devices[1].minor);
}
}
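The `DevIndex` change above swaps the per-device rescan of `linux.devices` (and the nested scan of the cgroup resource list) for a one-time index keyed by container path. A simplified sketch of the idea — the types here are minimal stand-ins, not the real OCI structs:

```
use std::collections::HashMap;

#[derive(Debug)]
struct LinuxDevice {
    path: String,
    major: i64,
    minor: i64,
}

// Build the index once: container path -> position in the device list.
fn index_devices(devices: &[LinuxDevice]) -> HashMap<String, usize> {
    devices
        .iter()
        .enumerate()
        .map(|(i, d)| (d.path.clone(), i))
        .collect()
}

fn main() {
    let mut devices = vec![
        LinuxDevice { path: "/dev/a".into(), major: 1, minor: 3 },
        LinuxDevice { path: "/dev/b".into(), major: 1, minor: 5 },
    ];
    let idx = index_devices(&devices);

    // Update "/dev/b" with the major/minor discovered inside the guest:
    // a lookup instead of a rescan for every hotplugged device.
    if let Some(&i) = idx.get("/dev/b") {
        devices[i].major = 254;
        devices[i].minor = 0;
    }
    assert_eq!(devices[1].major, 254);
}
```

Indexing by path (rather than matching on host major/minor inside the loop) is also what lets the char/block conflict test above pass: two spec entries sharing numbers but differing in type no longer collide.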

View File

@@ -25,7 +25,6 @@ extern crate scopeguard;
 #[macro_use]
 extern crate slog;
-#[macro_use]
 extern crate netlink;
 
 use crate::netlink::{RtnlHandle, NETLINK_ROUTE};
@@ -129,7 +128,6 @@ fn main() -> Result<()> {
     // support vsock log
     let (rfd, wfd) = unistd::pipe2(OFlag::O_CLOEXEC)?;
-    let writer = unsafe { File::from_raw_fd(wfd) };
 
     let agentConfig = AGENT_CONFIG.clone();
@@ -514,14 +512,12 @@ fn run_debug_console_shell(logger: &Logger, shell: &str, socket_fd: RawFd) -> Result<()> {
             let args: Vec<&CStr> = vec![];
 
             // run shell
-            if let Err(e) = unistd::execvp(cmd.as_c_str(), args.as_slice()) {
-                match e {
-                    nix::Error::Sys(errno) => {
-                        std::process::exit(errno as i32);
-                    }
-                    _ => std::process::exit(-2),
-                }
-            }
+            let _ = unistd::execvp(cmd.as_c_str(), args.as_slice()).map_err(|e| match e {
+                nix::Error::Sys(errno) => {
+                    std::process::exit(errno as i32);
+                }
+                _ => std::process::exit(-2),
+            });
         }
 
         Ok(ForkResult::Parent { child: child_pid }) => {
@@ -638,8 +634,6 @@ fn run_debug_console_shell(logger: &Logger, shell: &str, socket_fd: RawFd) -> Result<()> {
 #[cfg(test)]
 mod tests {
     use super::*;
-    use std::fs::File;
-    use std::io::Write;
     use tempfile::tempdir;
 
     #[test]

View File

@@ -125,7 +125,7 @@ lazy_static! {
 // type of storage driver.
 type StorageHandler = fn(&Logger, &Storage, Arc<Mutex<Sandbox>>) -> Result<String>;
 
-// StorageHandlerList lists the supported drivers.
+// STORAGEHANDLERLIST lists the supported drivers.
 #[cfg_attr(rustfmt, rustfmt_skip)]
 lazy_static! {
     pub static ref STORAGEHANDLERLIST: HashMap<&'static str, StorageHandler> = {
@@ -251,10 +251,7 @@ fn ephemeral_storage_handler(
         return Ok("".to_string());
     }
 
-    if let Err(err) = fs::create_dir_all(Path::new(&storage.mount_point)) {
-        return Err(err.into());
-    }
-
+    fs::create_dir_all(Path::new(&storage.mount_point))?;
     common_storage_handler(logger, storage)?;
 
     Ok("".to_string())
@@ -449,21 +446,17 @@ pub fn add_storages(
             "subsystem" => "storage",
             "storage-type" => handler_name.to_owned()));
 
-        let handler = match STORAGEHANDLERLIST.get(&handler_name.as_str()) {
-            None => {
-                return Err(anyhow!(
-                    "Failed to find the storage handler {}",
-                    storage.driver.to_owned()
-                ));
-            }
-            Some(f) => f,
-        };
+        let handler = STORAGEHANDLERLIST
+            .get(&handler_name.as_str())
+            .ok_or_else(|| {
+                anyhow!(
+                    "Failed to find the storage handler {}",
+                    storage.driver.to_owned()
+                )
+            })?;
 
-        let mount_point = match handler(&logger, &storage, sandbox.clone()) {
-            // Todo need to rollback the mounted storage if err met.
-            Err(e) => return Err(e),
-            Ok(m) => m,
-        };
+        // Todo need to rollback the mounted storage if err met.
+        let mount_point = handler(&logger, &storage, sandbox.clone())?;
 
         if mount_point.len() > 0 {
             mount_list.push(mount_point);
@@ -482,15 +475,18 @@ fn mount_to_rootfs(logger: &Logger, m: &INIT_MOUNT) -> Result<()> {
     fs::create_dir_all(Path::new(m.dest)).context("could not create directory")?;
 
-    if let Err(err) = bare_mount.mount() {
+    bare_mount.mount().or_else(|e| {
         if m.src != "dev" {
-            return Err(err.into());
+            return Err(e);
         }
 
         error!(
             logger,
             "Could not mount filesystem from {} to {}", m.src, m.dest
         );
-    }
+
+        Ok(())
+    })?;
 
     Ok(())
 }
@@ -510,7 +506,7 @@ pub fn get_mount_fs_type(mount_point: &str) -> Result<String> {
     get_mount_fs_type_from_file(PROC_MOUNTSTATS, mount_point)
 }
 
-// get_mount_fs_type returns the FS type corresponding to the passed mount point and
+// get_mount_fs_type_from_file returns the FS type corresponding to the passed mount point and
 // any error encountered.
 pub fn get_mount_fs_type_from_file(mount_file: &str, mount_point: &str) -> Result<String> {
     if mount_point == "" {
@@ -643,7 +639,7 @@ pub fn cgroups_mount(logger: &Logger, unified_cgroup_hierarchy: bool) -> Result<()> {
     // Enable memory hierarchical account.
     // For more information see https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
-    online_device("/sys/fs/cgroup/memory//memory.use_hierarchy")?;
+    online_device("/sys/fs/cgroup/memory/memory.use_hierarchy")?;
     Ok(())
 }
@@ -654,15 +650,14 @@ pub fn remove_mounts(mounts: &Vec<String>) -> Result<()> {
     Ok(())
 }
 
-// ensureDestinationExists will recursively create a given mountpoint. If directories
-// are created, their permissions are initialized to mountPerm
+// ensure_destination_exists will recursively create a given mountpoint. If directories
+// are created, their permissions are initialized to mountPerm(0755)
 fn ensure_destination_exists(destination: &str, fs_type: &str) -> Result<()> {
     let d = Path::new(destination);
     if !d.exists() {
-        let dir = match d.parent() {
-            Some(d) => d,
-            None => return Err(anyhow!("mount destination {} doesn't exist", destination)),
-        };
+        let dir = d
+            .parent()
+            .ok_or_else(|| anyhow!("mount destination {} doesn't exist", destination))?;
         if !dir.exists() {
             fs::create_dir_all(dir).context(format!("create dir all failed on {:?}", dir))?;
         }
@@ -1088,7 +1083,7 @@ mod tests {
     #[test]
     fn test_get_cgroup_v2_mounts() {
-        let dir = tempdir().expect("failed to create tmpdir");
+        let _ = tempdir().expect("failed to create tmpdir");
         let drain = slog::Discard;
         let logger = slog::Logger::root(drain, o!());
         let result = get_cgroup_mounts(&logger, "", true);
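Several of the mount.rs hunks above converge on the same idiom: `ok_or_else()` turns a failed map lookup into an `anyhow` error, and `?` replaces the old match-and-return boilerplate. A minimal sketch, assuming only the `anyhow` crate — the handler table below is a stand-in for `STORAGEHANDLERLIST`, not the agent's real one:

```
use anyhow::{anyhow, Result};
use std::collections::HashMap;

type Handler = fn(&str) -> Result<String>;

fn mount_ephemeral(mount_point: &str) -> Result<String> {
    Ok(format!("mounted {}", mount_point))
}

fn dispatch(handlers: &HashMap<&str, Handler>, driver: &str, mp: &str) -> Result<String> {
    // Missing driver becomes an error value instead of a nested match.
    let handler = handlers
        .get(driver)
        .ok_or_else(|| anyhow!("Failed to find the storage handler {}", driver))?;
    handler(mp)
}

fn main() -> Result<()> {
    let mut handlers: HashMap<&str, Handler> = HashMap::new();
    handlers.insert("ephemeral", mount_ephemeral);

    println!("{}", dispatch(&handlers, "ephemeral", "/run/kata/shm")?);
    assert!(dispatch(&handlers, "unknown", "/x").is_err());
    Ok(())
}
```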

View File

@@ -3,15 +3,15 @@
 // SPDX-License-Identifier: Apache-2.0
 //
 
+use anyhow::{anyhow, Result};
 use nix::mount::MsFlags;
 use nix::sched::{unshare, CloneFlags};
 use nix::unistd::{getpid, gettid};
 use std::fmt;
 use std::fs;
 use std::fs::File;
-use std::os::unix::io::AsRawFd;
 use std::path::{Path, PathBuf};
-use std::thread;
+use std::thread::{self};
 
 use crate::mount::{BareMount, FLAGS};
 use slog::Logger;
@@ -75,12 +75,10 @@ impl Namespace {
         self
     }
 
-    // setup_persistent_ns creates persistent namespace without switching to it.
+    // setup creates persistent namespace without switching to it.
     // Note, pid namespaces cannot be persisted.
-    pub fn setup(mut self) -> Result<Self, String> {
-        if let Err(err) = fs::create_dir_all(&self.persistent_ns_dir) {
-            return Err(err.to_string());
-        }
+    pub fn setup(mut self) -> Result<Self> {
+        fs::create_dir_all(&self.persistent_ns_dir)?;
 
         let ns_path = PathBuf::from(&self.persistent_ns_dir);
         let ns_type = self.ns_type.clone();
@@ -88,33 +86,23 @@ impl Namespace {
         let new_ns_path = ns_path.join(&ns_type.get());
 
-        if let Err(err) = File::create(new_ns_path.as_path()) {
-            return Err(err.to_string());
-        }
+        File::create(new_ns_path.as_path())?;
 
         self.path = new_ns_path.clone().into_os_string().into_string().unwrap();
         let hostname = self.hostname.clone();
 
-        let new_thread = thread::spawn(move || {
+        let new_thread = thread::spawn(move || -> Result<()> {
             let origin_ns_path = get_current_thread_ns_path(&ns_type.get());
 
-            let _origin_ns_fd = match File::open(Path::new(&origin_ns_path)) {
-                Err(err) => return Err(err.to_string()),
-                Ok(file) => file.as_raw_fd(),
-            };
+            File::open(Path::new(&origin_ns_path))?;
 
             // Create a new netns on the current thread.
             let cf = ns_type.get_flags().clone();
 
-            if let Err(err) = unshare(cf) {
-                return Err(err.to_string());
-            }
+            unshare(cf)?;
 
             if ns_type == NamespaceType::UTS && hostname.is_some() {
-                match nix::unistd::sethostname(hostname.unwrap()) {
-                    Err(err) => return Err(err.to_string()),
-                    Ok(_) => (),
-                }
+                nix::unistd::sethostname(hostname.unwrap())?;
             }
 
             // Bind mount the new namespace from the current thread onto the mount point to persist it.
             let source: &str = origin_ns_path.as_str();
@@ -131,23 +119,21 @@ impl Namespace {
             };
 
             let bare_mount = BareMount::new(source, destination, "none", flags, "", &logger);
-            if let Err(err) = bare_mount.mount() {
-                return Err(format!(
-                    "Failed to mount {} to {} with err:{:?}",
-                    source, destination, err
-                ));
-            }
+            bare_mount.mount().map_err(|e| {
+                anyhow!(
+                    "Failed to mount {} to {} with err:{:?}",
+                    source,
+                    destination,
+                    e
+                )
+            })?;
 
             Ok(())
         });
 
-        match new_thread.join() {
-            Ok(t) => match t {
-                Err(err) => return Err(err),
-                Ok(()) => (),
-            },
-            Err(err) => return Err(format!("Failed to join thread {:?}!", err)),
-        }
+        new_thread
+            .join()
+            .map_err(|e| anyhow!("Failed to join thread {:?}!", e))??;
 
         Ok(self)
     }
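`setup()` now returns `anyhow::Result` all the way through and ends with a double `?` on the thread join: `join()` yields `Result<Result<()>, Box<dyn Any + Send>>`, so the first `?` surfaces a panicked thread and the second surfaces the closure's own error. A compilable sketch of that tail, assuming only the `anyhow` crate:

```
use anyhow::{anyhow, Result};
use std::thread;

fn run() -> Result<()> {
    // The closure returns Result<()>, so the JoinHandle wraps a Result.
    let handle = thread::spawn(move || -> Result<()> {
        // work that can fail goes here
        Ok(())
    });

    handle
        .join()
        .map_err(|e| anyhow!("Failed to join thread {:?}!", e))??;

    Ok(())
}

fn main() -> Result<()> {
    run()
}
```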

View File

@@ -3,15 +3,13 @@
 // SPDX-License-Identifier: Apache-2.0
 //
 
-use anyhow::{anyhow, Context, Result};
-use nix::mount::{self, MntFlags, MsFlags};
+use anyhow::{anyhow, Result};
+use nix::mount::{self, MsFlags};
 use protocols::types::{Interface, Route};
 use slog::Logger;
 use std::collections::HashMap;
 use std::fs;
 
-use crate::Sandbox;
-
 const KATA_GUEST_SANDBOX_DNS_FILE: &str = "/run/kata-containers/sandbox/resolv.conf";
 const GUEST_DNS_FILE: &str = "/etc/resolv.conf";

File diff suppressed because it is too large

View File

@@ -7,10 +7,8 @@
 use crate::linux_abi::*;
 use crate::mount::{get_mount_fs_type, remove_mounts, TYPEROOTFS};
 use crate::namespace::Namespace;
-use crate::namespace::NSTYPEPID;
 use crate::network::Network;
 use anyhow::{anyhow, Context, Result};
-use cgroups;
 use libc::pid_t;
 use netlink::{RtnlHandle, NETLINK_ROUTE};
 use oci::{Hook, Hooks};
@@ -145,16 +143,10 @@ impl Sandbox {
     // It's assumed that caller is calling this method after
     // acquiring a lock on sandbox.
     pub fn unset_and_remove_sandbox_storage(&mut self, path: &str) -> Result<()> {
-        match self.unset_sandbox_storage(path) {
-            Ok(res) => {
-                if res {
-                    return self.remove_sandbox_storage(path);
-                }
-            }
-            Err(err) => {
-                return Err(err);
-            }
+        if self.unset_sandbox_storage(path)? {
+            return self.remove_sandbox_storage(path);
         }
 
         Ok(())
     }
@@ -168,23 +160,17 @@ impl Sandbox {
     pub fn setup_shared_namespaces(&mut self) -> Result<bool> {
         // Set up shared IPC namespace
-        self.shared_ipcns = match Namespace::new(&self.logger).as_ipc().setup() {
-            Ok(ns) => ns,
-            Err(err) => {
-                return Err(anyhow!(err).context("Failed to setup persistent IPC namespace"));
-            }
-        };
+        self.shared_ipcns = Namespace::new(&self.logger)
+            .as_ipc()
+            .setup()
+            .context("Failed to setup persistent IPC namespace")?;
 
         // // Set up shared UTS namespace
-        self.shared_utsns = match Namespace::new(&self.logger)
+        self.shared_utsns = Namespace::new(&self.logger)
             .as_uts(self.hostname.as_str())
             .setup()
-        {
-            Ok(ns) => ns,
-            Err(err) => {
-                return Err(anyhow!(err).context("Failed to setup persistent UTS namespace"));
-            }
-        };
+            .context("Failed to setup persistent UTS namespace")?;
 
         Ok(true)
     }
@@ -318,10 +304,9 @@ impl Sandbox {
             thread::spawn(move || {
                 for event in rx {
                     info!(logger, "got an OOM event {:?}", event);
-                    match tx.send(container_id.clone()) {
-                        Err(err) => error!(logger, "failed to send message: {:?}", err),
-                        Ok(_) => {}
-                    }
+                    let _ = tx
+                        .send(container_id.clone())
+                        .map_err(|e| error!(logger, "failed to send message: {:?}", e));
                 }
             });
         }

View File

@@ -99,14 +99,14 @@ impl Uevent {
         let online_path = format!("{}/{}/online", SYSFS_DIR, &self.devpath);
         // It's a memory hot-add event.
         if online_path.starts_with(SYSFS_MEMORY_ONLINE_PATH) {
-            if let Err(e) = online_device(online_path.as_ref()) {
-                error!(
-                    *logger,
-                    "failed to online device";
-                    "device" => &self.devpath,
-                    "error" => format!("{}", e),
-                );
-            }
+            let _ = online_device(online_path.as_ref()).map_err(|e| {
+                error!(
+                    *logger,
+                    "failed to online device";
+                    "device" => &self.devpath,
+                    "error" => format!("{}", e),
+                )
+            });
             return;
         }
     }
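This is the same "log and ignore" idiom adopted in sandbox.rs and main.rs above: `map_err` performs the logging side effect and `let _ =` explicitly discards the result so the unused-`Result` lint stays quiet. A dependency-free sketch, with `eprintln!` standing in for slog's `error!`:

```
fn flaky_send(v: i32) -> Result<i32, String> {
    if v % 2 == 0 {
        Ok(v)
    } else {
        Err(format!("cannot send odd value {}", v))
    }
}

fn main() {
    for v in 0..4 {
        // Failure is logged but deliberately not propagated.
        let _ = flaky_send(v).map_err(|e| eprintln!("failed to send message: {:?}", e));
    }
}
```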

View File

@@ -83,7 +83,6 @@ QEMUBINDIR := $(PREFIXDEPS)/bin
 CLHBINDIR := $(PREFIXDEPS)/bin
 FCBINDIR := $(PREFIXDEPS)/bin
 ACRNBINDIR := $(PREFIXDEPS)/bin
-VIRTIOFSDBINDIR := $(PREFIXDEPS)/bin
 SYSCONFDIR := /etc
 LOCALSTATEDIR := /var
@@ -95,6 +94,13 @@ COLLECT_SCRIPT = data/kata-collect-data.sh
 COLLECT_SCRIPT_SRC = $(COLLECT_SCRIPT).in
 
 GENERATED_FILES += $(COLLECT_SCRIPT)
+GENERATED_VARS = \
+		VERSION \
+		CONFIG_ACRN_IN \
+		CONFIG_QEMU_IN \
+		CONFIG_CLH_IN \
+		CONFIG_FC_IN \
+		$(USER_VARS)
 SCRIPTS += $(COLLECT_SCRIPT)
 SCRIPTS_DIR := $(BINDIR)
@@ -120,25 +126,30 @@ HYPERVISOR_FC = firecracker
 JAILER_FC = jailer
 HYPERVISOR_QEMU = qemu
 HYPERVISOR_CLH = cloud-hypervisor
-HYPERVISOR_QEMU_VIRTIOFS = qemu-virtiofs
 
 # Determines which hypervisor is specified in $(CONFIG_FILE).
 DEFAULT_HYPERVISOR ?= $(HYPERVISOR_QEMU)
 
 # List of hypervisors this build system can generate configuration for.
-HYPERVISORS := $(HYPERVISOR_ACRN) $(HYPERVISOR_FC) $(HYPERVISOR_QEMU) $(HYPERVISOR_QEMU_VIRTIOFS) $(HYPERVISOR_CLH)
+HYPERVISORS := $(HYPERVISOR_ACRN) $(HYPERVISOR_FC) $(HYPERVISOR_QEMU) $(HYPERVISOR_CLH)
 
 QEMUPATH := $(QEMUBINDIR)/$(QEMUCMD)
+QEMUVALIDHYPERVISORPATHS := [\"$(QEMUPATH)\"]
 
-QEMUVIRTIOFSPATH := $(QEMUBINDIR)/$(QEMUVIRTIOFSCMD)
+QEMUVALIDVIRTIOFSPATHS := $(QEMUBINDIR)/$(QEMUVIRTIOFSCMD)
 
 CLHPATH := $(CLHBINDIR)/$(CLHCMD)
+CLHVALIDHYPERVISORPATHS := [\"$(CLHBINDIR)/$(CLHCMD)\"]
 
 FCPATH = $(FCBINDIR)/$(FCCMD)
+FCVALIDPATHS = [\"$(FCPATH)\"]
 FCJAILERPATH = $(FCBINDIR)/$(FCJAILERCMD)
+FCVALIDJAILERPATHS = [\"$(FCJAILERPATH)\"]
 
 ACRNPATH := $(ACRNBINDIR)/$(ACRNCMD)
+ACRNVALIDHYPERVISORPATHS := [\"$(ACRNPATH)\"]
 ACRNCTLPATH := $(ACRNBINDIR)/$(ACRNCTLCMD)
+ACRNVALIDCTLPATHS := [\"$(ACRNCTLPATH)\"]
 
 SHIMCMD := $(BIN_PREFIX)-shim
 SHIMPATH := $(PKGLIBEXECDIR)/$(SHIMCMD)
@@ -161,6 +172,7 @@ DEFMEMSZ := 2048
 DEFMEMSLOTS := 10
 #Default number of bridges
 DEFBRIDGES := 1
+DEFENABLEANNOTATIONS := []
 DEFDISABLEGUESTSECCOMP := true
 #Default experimental features enabled
 DEFAULTEXPFEATURES := []
@@ -171,21 +183,26 @@ DEFENTROPYSOURCE := /dev/urandom
 DEFDISABLEBLOCK := false
 DEFSHAREDFS := virtio-9p
 DEFSHAREDFS_QEMU_VIRTIOFS := virtio-fs
-DEFVIRTIOFSDAEMON := $(VIRTIOFSDBINDIR)/virtiofsd
+DEFVIRTIOFSDAEMON := $(LIBEXECDIR)/kata-qemu/virtiofsd
+DEFVALIDVIRTIOFSDAEMONPATHS := [\"$(DEFVIRTIOFSDAEMON)\"]
 # Default DAX mapping cache size in MiB
-DEFVIRTIOFSCACHESIZE := 1024
+#if value is 0, DAX is not enabled
+DEFVIRTIOFSCACHESIZE := 0
 DEFVIRTIOFSCACHE ?= auto
 # Format example:
 #   [\"-o\", \"arg1=xxx,arg2\", \"-o\", \"hello world\", \"--arg3=yyy\"]
 #
 # see `virtiofsd -h` for possible options.
 # Make sure you quote args.
-DEFVIRTIOFSEXTRAARGS ?= []
+DEFVIRTIOFSEXTRAARGS ?= [\"--thread-pool-size=1\"]
 DEFENABLEIOTHREADS := false
 DEFENABLEMEMPREALLOC := false
 DEFENABLEHUGEPAGES := false
 DEFENABLEVHOSTUSERSTORE := false
 DEFVHOSTUSERSTOREPATH := $(PKGRUNDIR)/vhost-user
+DEFVALIDVHOSTUSERSTOREPATHS := [\"$(DEFVHOSTUSERSTOREPATH)\"]
+DEFFILEMEMBACKEND := ""
+DEFVALIDFILEMEMBACKENDS := [\"$(DEFFILEMEMBACKEND)\"]
 DEFENABLESWAP := false
 DEFENABLEDEBUG := false
 DEFDISABLENESTINGCHECKS := false
@@ -245,28 +262,6 @@ ifneq (,$(QEMUCMD))
     KERNELPATH = $(KERNELDIR)/$(KERNELNAME)
 endif
 
-ifneq (,$(QEMUVIRTIOFSCMD))
-    KNOWN_HYPERVISORS += $(HYPERVISOR_QEMU_VIRTIOFS)
-
-    CONFIG_FILE_QEMU_VIRTIOFS = configuration-qemu-virtiofs.toml
-    CONFIG_QEMU_VIRTIOFS = $(CLI_DIR)/config/$(CONFIG_FILE_QEMU_VIRTIOFS)
-    CONFIG_QEMU_VIRTIOFS_IN = $(CONFIG_QEMU_VIRTIOFS).in
-
-    CONFIG_PATH_QEMU_VIRTIOFS = $(abspath $(CONFDIR)/$(CONFIG_FILE_QEMU_VIRTIOFS))
-    CONFIG_PATHS += $(CONFIG_PATH_QEMU_VIRTIOFS)
-
-    SYSCONFIG_QEMU_VIRTIOFS = $(abspath $(SYSCONFDIR)/$(CONFIG_FILE_QEMU_VIRTIOFS))
-    SYSCONFIG_PATHS += $(SYSCONFIG_QEMU_VIRTIOFS)
-
-    CONFIGS += $(CONFIG_QEMU_VIRTIOFS)
-
-    # qemu-specific options (all should be suffixed by "_QEMU")
-    DEFBLOCKSTORAGEDRIVER_QEMU_VIRTIOFS := virtio-fs
-    DEFNETWORKMODEL_QEMU := tcfilter
-    KERNELNAMEVIRTIOFS = $(call MAKE_KERNEL_VIRTIOFS_NAME,$(KERNELTYPE))
-    KERNELVIRTIOFSPATH = $(KERNELDIR)/$(KERNELNAMEVIRTIOFS)
-endif
-
 ifneq (,$(CLHCMD))
     KNOWN_HYPERVISORS += $(HYPERVISOR_CLH)
@@ -384,16 +379,28 @@ SHAREDIR := $(SHAREDIR)
 # list of variables the user may wish to override
 USER_VARS += ARCH
 USER_VARS += BINDIR
+USER_VARS += CONFIG_ACRN_IN
+USER_VARS += CONFIG_CLH_IN
+USER_VARS += CONFIG_FC_IN
 USER_VARS += CONFIG_PATH
+USER_VARS += CONFIG_QEMU_IN
 USER_VARS += DESTDIR
 USER_VARS += DEFAULT_HYPERVISOR
+USER_VARS += DEFENABLEMSWAP
 USER_VARS += ACRNCMD
 USER_VARS += ACRNCTLCMD
 USER_VARS += ACRNPATH
+USER_VARS += ACRNVALIDHYPERVISORPATHS
 USER_VARS += ACRNCTLPATH
+USER_VARS += ACRNVALIDCTLPATHS
+USER_VARS += CLHPATH
+USER_VARS += CLHVALIDHYPERVISORPATHS
+USER_VARS += FIRMWAREPATH_CLH
 USER_VARS += FCCMD
 USER_VARS += FCPATH
+USER_VARS += FCVALIDHYPERVISORPATHS
 USER_VARS += FCJAILERPATH
+USER_VARS += FCVALIDJAILERPATHS
 USER_VARS += SYSCONFIG
 USER_VARS += IMAGENAME
 USER_VARS += IMAGEPATH
@@ -405,6 +412,11 @@ USER_VARS += KERNELTYPE
 USER_VARS += KERNELTYPE_FC
 USER_VARS += KERNELTYPE_ACRN
 USER_VARS += KERNELTYPE_CLH
+USER_VARS += KERNELPATH_ACRN
+USER_VARS += KERNELPATH
+USER_VARS += KERNELPATH_CLH
+USER_VARS += KERNELPATH_FC
+USER_VARS += KERNELVIRTIOFSPATH
 USER_VARS += FIRMWAREPATH
 USER_VARS += MACHINEACCELERATORS
 USER_VARS += CPUFEATURES
@@ -417,15 +429,22 @@ USER_VARS += PKGLIBDIR
 USER_VARS += PKGLIBEXECDIR
 USER_VARS += PKGRUNDIR
 USER_VARS += PREFIX
+USER_VARS += PROJECT_BUG_URL
 USER_VARS += PROJECT_NAME
+USER_VARS += PROJECT_ORG
 USER_VARS += PROJECT_PREFIX
+USER_VARS += PROJECT_TAG
 USER_VARS += PROJECT_TYPE
+USER_VARS += PROJECT_URL
 USER_VARS += NETMONPATH
 USER_VARS += QEMUBINDIR
 USER_VARS += QEMUCMD
 USER_VARS += QEMUPATH
+USER_VARS += QEMUVALIDHYPERVISORPATHS
 USER_VARS += QEMUVIRTIOFSCMD
 USER_VARS += QEMUVIRTIOFSPATH
+USER_VARS += QEMUVALIDVIRTIOFSPATHS
+USER_VARS += RUNTIME_NAME
 USER_VARS += SHAREDIR
 USER_VARS += SHIMPATH
 USER_VARS += SYSCONFDIR
@@ -436,6 +455,7 @@ USER_VARS += DEFMEMSZ
 USER_VARS += DEFMEMSLOTS
 USER_VARS += DEFBRIDGES
 USER_VARS += DEFNETWORKMODEL_ACRN
+USER_VARS += DEFNETWORKMODEL_CLH
 USER_VARS += DEFNETWORKMODEL_FC
 USER_VARS += DEFNETWORKMODEL_QEMU
 USER_VARS += DEFDISABLEGUESTSECCOMP
@@ -444,18 +464,22 @@ USER_VARS += DEFDISABLEBLOCK
 USER_VARS += DEFBLOCKSTORAGEDRIVER_ACRN
 USER_VARS += DEFBLOCKSTORAGEDRIVER_FC
 USER_VARS += DEFBLOCKSTORAGEDRIVER_QEMU
-USER_VARS += DEFBLOCKSTORAGEDRIVER_QEMU_VIRTIOFS
 USER_VARS += DEFSHAREDFS
 USER_VARS += DEFSHAREDFS_QEMU_VIRTIOFS
 USER_VARS += DEFVIRTIOFSDAEMON
+USER_VARS += DEFVALIDVIRTIOFSDAEMONPATHS
 USER_VARS += DEFVIRTIOFSCACHESIZE
 USER_VARS += DEFVIRTIOFSCACHE
 USER_VARS += DEFVIRTIOFSEXTRAARGS
+USER_VARS += DEFENABLEANNOTATIONS
 USER_VARS += DEFENABLEIOTHREADS
 USER_VARS += DEFENABLEMEMPREALLOC
 USER_VARS += DEFENABLEHUGEPAGES
 USER_VARS += DEFENABLEVHOSTUSERSTORE
 USER_VARS += DEFVHOSTUSERSTOREPATH
+USER_VARS += DEFVALIDVHOSTUSERSTOREPATHS
+USER_VARS += DEFFILEMEMBACKEND
+USER_VARS += DEFVALIDFILEMEMBACKENDS
 USER_VARS += DEFENABLESWAP
 USER_VARS += DEFENABLEDEBUG
 USER_VARS += DEFDISABLENESTINGCHECKS
@@ -597,84 +621,7 @@ GENERATED_FILES += $(CONFIGS)
 $(GENERATED_FILES): %: %.in $(MAKEFILE_LIST) VERSION .git-commit
 	$(QUIET_GENERATE)$(SED) \
 		-e "s|@COMMIT@|$(shell cat .git-commit)|g" \
-		-e "s|@VERSION@|$(VERSION)|g" \
-e "s|@CONFIG_ACRN_IN@|$(CONFIG_ACRN_IN)|g" \
-e "s|@CONFIG_QEMU_IN@|$(CONFIG_QEMU_IN)|g" \
-e "s|@CONFIG_QEMU_VIRTIOFS_IN@|$(CONFIG_QEMU_VIRTIOFS_IN)|g" \
-e "s|@CONFIG_CLH_IN@|$(CONFIG_CLH_IN)|g" \
-e "s|@CONFIG_FC_IN@|$(CONFIG_FC_IN)|g" \
-e "s|@CONFIG_PATH@|$(CONFIG_PATH)|g" \
-e "s|@FCPATH@|$(FCPATH)|g" \
-e "s|@FCJAILERPATH@|$(FCJAILERPATH)|g" \
-e "s|@ACRNPATH@|$(ACRNPATH)|g" \
-e "s|@ACRNCTLPATH@|$(ACRNCTLPATH)|g" \
-e "s|@CLHPATH@|$(CLHPATH)|g" \
-e "s|@SYSCONFIG@|$(SYSCONFIG)|g" \
-e "s|@IMAGEPATH@|$(IMAGEPATH)|g" \
-e "s|@KERNELPATH_ACRN@|$(KERNELPATH_ACRN)|g" \
-e "s|@KERNELPATH_FC@|$(KERNELPATH_FC)|g" \
-e "s|@KERNELPATH_CLH@|$(KERNELPATH_CLH)|g" \
-e "s|@KERNELPATH@|$(KERNELPATH)|g" \
-e "s|@KERNELVIRTIOFSPATH@|$(KERNELVIRTIOFSPATH)|g" \
-e "s|@INITRDPATH@|$(INITRDPATH)|g" \
-e "s|@FIRMWAREPATH@|$(FIRMWAREPATH)|g" \
-e "s|@MACHINEACCELERATORS@|$(MACHINEACCELERATORS)|g" \
-e "s|@CPUFEATURES@|$(CPUFEATURES)|g" \
-e "s|@FIRMWAREPATH_CLH@|$(FIRMWAREPATH_CLH)|g" \
-e "s|@DEFMACHINETYPE_CLH@|$(DEFMACHINETYPE_CLH)|g" \
-e "s|@KERNELPARAMS@|$(KERNELPARAMS)|g" \
-e "s|@LOCALSTATEDIR@|$(LOCALSTATEDIR)|g" \
-e "s|@PKGLIBEXECDIR@|$(PKGLIBEXECDIR)|g" \
-e "s|@PKGRUNDIR@|$(PKGRUNDIR)|g" \
-e "s|@NETMONPATH@|$(NETMONPATH)|g" \
-e "s|@PROJECT_BUG_URL@|$(PROJECT_BUG_URL)|g" \
-e "s|@PROJECT_ORG@|$(PROJECT_ORG)|g" \
-e "s|@PROJECT_URL@|$(PROJECT_URL)|g" \
-e "s|@PROJECT_NAME@|$(PROJECT_NAME)|g" \
-e "s|@PROJECT_TAG@|$(PROJECT_TAG)|g" \
-e "s|@PROJECT_TYPE@|$(PROJECT_TYPE)|g" \
-e "s|@QEMUPATH@|$(QEMUPATH)|g" \
-e "s|@QEMUVIRTIOFSPATH@|$(QEMUVIRTIOFSPATH)|g" \
-e "s|@RUNTIME_NAME@|$(TARGET)|g" \
-e "s|@MACHINETYPE@|$(MACHINETYPE)|g" \
-e "s|@SHIMPATH@|$(SHIMPATH)|g" \
-e "s|@DEFVCPUS@|$(DEFVCPUS)|g" \
-e "s|@DEFMAXVCPUS@|$(DEFMAXVCPUS)|g" \
-e "s|@DEFMAXVCPUS_ACRN@|$(DEFMAXVCPUS_ACRN)|g" \
-e "s|@DEFMEMSZ@|$(DEFMEMSZ)|g" \
-e "s|@DEFMEMSLOTS@|$(DEFMEMSLOTS)|g" \
-e "s|@DEFBRIDGES@|$(DEFBRIDGES)|g" \
-e "s|@DEFNETWORKMODEL_ACRN@|$(DEFNETWORKMODEL_ACRN)|g" \
-e "s|@DEFNETWORKMODEL_CLH@|$(DEFNETWORKMODEL_CLH)|g" \
-e "s|@DEFNETWORKMODEL_FC@|$(DEFNETWORKMODEL_FC)|g" \
-e "s|@DEFNETWORKMODEL_QEMU@|$(DEFNETWORKMODEL_QEMU)|g" \
-e "s|@DEFDISABLEGUESTSECCOMP@|$(DEFDISABLEGUESTSECCOMP)|g" \
-e "s|@DEFAULTEXPFEATURES@|$(DEFAULTEXPFEATURES)|g" \
-e "s|@DEFDISABLEBLOCK@|$(DEFDISABLEBLOCK)|g" \
-e "s|@DEFBLOCKSTORAGEDRIVER_ACRN@|$(DEFBLOCKSTORAGEDRIVER_ACRN)|g" \
-e "s|@DEFBLOCKSTORAGEDRIVER_FC@|$(DEFBLOCKSTORAGEDRIVER_FC)|g" \
-e "s|@DEFBLOCKSTORAGEDRIVER_QEMU@|$(DEFBLOCKSTORAGEDRIVER_QEMU)|g" \
-e "s|@DEFBLOCKSTORAGEDRIVER_QEMU_VIRTIOFS@|$(DEFBLOCKSTORAGEDRIVER_QEMU_VIRTIOFS)|g" \
-e "s|@DEFSHAREDFS@|$(DEFSHAREDFS)|g" \
-e "s|@DEFSHAREDFS_QEMU_VIRTIOFS@|$(DEFSHAREDFS_QEMU_VIRTIOFS)|g" \
-e "s|@DEFVIRTIOFSDAEMON@|$(DEFVIRTIOFSDAEMON)|g" \
-e "s|@DEFVIRTIOFSCACHESIZE@|$(DEFVIRTIOFSCACHESIZE)|g" \
-e "s|@DEFVIRTIOFSCACHE@|$(DEFVIRTIOFSCACHE)|g" \
-e "s|@DEFVIRTIOFSEXTRAARGS@|$(DEFVIRTIOFSEXTRAARGS)|g" \
-e "s|@DEFENABLEIOTHREADS@|$(DEFENABLEIOTHREADS)|g" \
-e "s|@DEFENABLEMEMPREALLOC@|$(DEFENABLEMEMPREALLOC)|g" \
-e "s|@DEFENABLEHUGEPAGES@|$(DEFENABLEHUGEPAGES)|g" \
-e "s|@DEFENABLEVHOSTUSERSTORE@|$(DEFENABLEVHOSTUSERSTORE)|g" \
-e "s|@DEFVHOSTUSERSTOREPATH@|$(DEFVHOSTUSERSTOREPATH)|g" \
-e "s|@DEFENABLEMSWAP@|$(DEFENABLESWAP)|g" \
-e "s|@DEFENABLEDEBUG@|$(DEFENABLEDEBUG)|g" \
-e "s|@DEFDISABLENESTINGCHECKS@|$(DEFDISABLENESTINGCHECKS)|g" \
-e "s|@DEFMSIZE9P@|$(DEFMSIZE9P)|g" \
-e "s|@DEFHOTPLUGVFIOONROOTBUS@|$(DEFHOTPLUGVFIOONROOTBUS)|g" \
-e "s|@DEFPCIEROOTPORT@|$(DEFPCIEROOTPORT)|g" \
-e "s|@DEFENTROPYSOURCE@|$(DEFENTROPYSOURCE)|g" \
-e "s|@DEFSANDBOXCGROUPONLY@|$(DEFSANDBOXCGROUPONLY)|g" \
-e "s|@FEATURE_SELINUX@|$(FEATURE_SELINUX)|g" \
+		$(foreach v,$(GENERATED_VARS),-e "s|@$v@|$($v)|g") \
 		$< > $@
 
 generate-config: $(CONFIGS)
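The `$(foreach ...)` rule above collapses roughly eighty hand-written `-e "s|@VAR@|value|g"` expressions into one loop over `GENERATED_VARS`, so adding a template variable becomes a one-line change. A Rust sketch of the same token substitution, with purely illustrative variable names and values:

```
// Each (name, value) pair replaces an @NAME@ token in the template,
// mirroring what the Makefile's generated sed expressions do.
fn render(template: &str, vars: &[(&str, &str)]) -> String {
    let mut out = template.to_string();
    for &(name, value) in vars {
        out = out.replace(&format!("@{}@", name), value);
    }
    out
}

fn main() {
    let generated_vars = [
        ("VERSION", "2.0.0"),
        ("QEMUPATH", "/usr/bin/qemu-system-x86_64"),
    ];
    let toml = "path = \"@QEMUPATH@\"\n# generated for @VERSION@\n";
    print!("{}", render(toml, &generated_vars));
}
```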

View File

@ -16,6 +16,22 @@ ctlpath = "@ACRNCTLPATH@"
kernel = "@KERNELPATH_ACRN@" kernel = "@KERNELPATH_ACRN@"
image = "@IMAGEPATH@" image = "@IMAGEPATH@"
# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for io.katacontainers.config.hypervisor.path"
enable_annotations = @DEFENABLEANNOTATIONS@
# List of valid annotations values for the hypervisor
# Each member of the list is a path pattern as described by glob(3).
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: @ACRNVALIDHYPERVISORPATHS@
valid_hypervisor_paths = @ACRNVALIDHYPERVISORPATHS@
# List of valid annotations values for ctlpath
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: @ACRNVALIDCTLPATHS@
valid_ctlpaths = @ACRNVALIDCTLPATHS@
# Optional space-separated list of options to pass to the guest kernel. # Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having # For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc. # trouble running pre-2.15 glibc.

View File

@ -15,6 +15,17 @@ path = "@CLHPATH@"
kernel = "@KERNELPATH_CLH@" kernel = "@KERNELPATH_CLH@"
image = "@IMAGEPATH@" image = "@IMAGEPATH@"
# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for io.katacontainers.config.hypervisor.path"
enable_annotations = @DEFENABLEANNOTATIONS@
# List of valid annotations values for the hypervisor
# Each member of the list is a path pattern as described by glob(3).
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: @CLHVALIDHYPERVISORPATHS@
valid_hypervisor_paths = @CLHVALIDHYPERVISORPATHS@
# Optional space-separated list of options to pass to the guest kernel. # Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having # For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc. # trouble running pre-2.15 glibc.
@@ -62,6 +73,11 @@ default_memory = @DEFMEMSZ@
 
 # Path to vhost-user-fs daemon.
 virtio_fs_daemon = "@DEFVIRTIOFSDAEMON@"
 
+# List of valid annotations values for the virtiofs daemon
+# The default if not set is empty (all annotations rejected.)
+# Your distribution recommends: @DEFVALIDVIRTIOFSDAEMONPATHS@
+valid_virtio_fs_daemon_paths = @DEFVALIDVIRTIOFSDAEMONPATHS@
+
 # Default size of DAX cache in MiB
 virtio_fs_cache_size = @DEFVIRTIOFSCACHESIZE@
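These `valid_*_paths` keys form an allow-list consulted when an annotation asks to override a binary path. An illustrative sketch of the allow-list check, not the runtime's actual code — the real implementation matches glob(3) patterns, while exact matching keeps this sketch dependency-free:

```
// Accept a path requested via annotation only if it appears in the
// configured allow-list (stand-in for valid_hypervisor_paths).
fn annotation_path_allowed(requested: &str, valid_paths: &[&str]) -> bool {
    valid_paths.iter().any(|p| *p == requested)
}

fn main() {
    let valid = ["/usr/bin/qemu-system-x86_64"];
    assert!(annotation_path_allowed("/usr/bin/qemu-system-x86_64", &valid));
    assert!(!annotation_path_allowed("/tmp/evil-qemu", &valid));
}
```

An empty list rejects every annotation, which is why the comments call out the "all annotations rejected" default.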

View File

@@ -12,6 +12,20 @@
 [hypervisor.firecracker]
 path = "@FCPATH@"
+kernel = "@KERNELPATH_FC@"
+image = "@IMAGEPATH@"
+
+# List of valid annotation names for the hypervisor
+# Each member of the list is a regular expression, which is the base name
+# of the annotation, e.g. "path" for io.katacontainers.config.hypervisor.path"
+enable_annotations = @DEFENABLEANNOTATIONS@
+
+# List of valid annotations values for the hypervisor
+# Each member of the list is a path pattern as described by glob(3).
+# The default if not set is empty (all annotations rejected.)
+# Your distribution recommends: @FCVALIDHYPERVISORPATHS@
+valid_hypervisor_paths = @FCVALIDHYPERVISORPATHS@
 
 # Path for the jailer specific to firecracker
 # If the jailer path is not set kata will launch firecracker
 # without a jail. If the jailer is set firecracker will be
@@ -19,8 +33,13 @@ path = "@FCPATH@"
 # This is disabled by default as additional setup is required
 # for this feature today.
 #jailer_path = "@FCJAILERPATH@"
-kernel = "@KERNELPATH_FC@"
-image = "@IMAGEPATH@"
+
+# List of valid jailer path values for the hypervisor
+# Each member of the list can be a regular expression
+# The default if not set is empty (all annotations rejected.)
+# Your distribution recommends: @FCVALIDJAILERPATHS@
+valid_jailer_paths = @FCVALIDJAILERPATHS@
 
 # Optional space-separated list of options to pass to the guest kernel.
 # For example, use `kernel_params = "vsyscall=emulate"` if you are having
@@ -87,10 +106,10 @@ default_memory = @DEFMEMSZ@
 #memory_offset = 0
 
 # Disable block device from being used for a container's rootfs.
 # In case of a storage driver like devicemapper where a container's
 # root file system is backed by a block device, the block device is passed
 # directly to the hypervisor for performance reasons.
 # This flag prevents the block device from being passed to the hypervisor,
 # 9pfs is used instead to pass the rootfs.
 disable_block_device_use = @DEFDISABLEBLOCK@
@ -126,7 +145,7 @@ block_device_driver = "@DEFBLOCKSTORAGEDRIVER_FC@"
# Enabling this will result in the VM memory # Enabling this will result in the VM memory
# being allocated using huge pages. # being allocated using huge pages.
# This is useful when you want to use vhost-user network # This is useful when you want to use vhost-user network
# stacks within the container. This will automatically # stacks within the container. This will automatically
# result in memory preallocation # result in memory preallocation
#enable_hugepages = true #enable_hugepages = true
View File
@ -1,446 +0,0 @@
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "@CONFIG_QEMU_VIRTIOFS_IN@"
# XXX: Project:
# XXX: Name: @PROJECT_NAME@
# XXX: Type: @PROJECT_TYPE@
[hypervisor.qemu]
path = "@QEMUVIRTIOFSPATH@"
kernel = "@KERNELVIRTIOFSPATH@"
image = "@IMAGEPATH@"
machine_type = "@MACHINETYPE@"
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = "@KERNELPARAMS@"
# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = "@FIRMWAREPATH@"
# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators="@MACHINEACCELERATORS@"
# CPU features
# comma-separated list of cpu features to pass to the cpu
# For example, `cpu_features = "pmu=off,vmx=off"`
cpu_features="@CPUFEATURES@"
# Default number of vCPUs per SB/VM:
# unspecified or 0 --> will be set to @DEFVCPUS@
# < 0 --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores
default_vcpus = 1
# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
# NOTICE: on arm platform with gicv2 interrupt controller, set it to 8.
default_maxvcpus = @DEFMAXVCPUS@
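The clamping rules for `default_vcpus` can be summarized in a hedged Go sketch; `effectiveVCPUs` is a hypothetical helper, the real runtime additionally honors the KVM vCPU limit mentioned above, and `runtime.NumCPU` reports logical CPUs rather than physical cores:

```
package main

import (
	"fmt"
	"runtime"
)

// effectiveVCPUs applies the documented defaulting rules:
// 0/unspecified -> configured default, < 0 -> all cores,
// values above the core count are clamped to the core count.
func effectiveVCPUs(configured int32, defVCPUs int32) int32 {
	cores := int32(runtime.NumCPU())
	switch {
	case configured == 0:
		return defVCPUs
	case configured < 0 || configured > cores:
		return cores
	default:
		return configured
	}
}

func main() {
	fmt.Println(effectiveVCPUs(0, 1))   // falls back to the default
	fmt.Println(effectiveVCPUs(-1, 1))  // all cores
	fmt.Println(effectiveVCPUs(999, 1)) // clamped to the core count
}
```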
# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to @DEFBRIDGES@
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
default_bridges = @DEFBRIDGES@
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to @DEFMEMSZ@ MiB.
default_memory = @DEFMEMSZ@
#
# Default memory slots per SB/VM.
# If unspecified then it will be set to @DEFMEMSLOTS@.
# This will determine the number of times memory can be hot-added to the sandbox/VM.
#memory_slots = @DEFMEMSLOTS@
# The size in MiB will be added to the hypervisor's maximum memory.
# It is the memory address space for the NVDIMM device.
# If the block storage driver (block_device_driver) is set to "nvdimm",
# memory_offset should be set to the size of the block device.
# Default 0
#memory_offset = 0
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:
# - virtio-fs (default)
# - virtio-9p
shared_fs = "@DEFSHAREDFS_QEMU_VIRTIOFS@"
# Path to vhost-user-fs daemon.
virtio_fs_daemon = "@DEFVIRTIOFSDAEMON@"
# Default size of DAX cache in MiB
virtio_fs_cache_size = @DEFVIRTIOFSCACHESIZE@
# Extra args for virtiofsd daemon
#
# Format example:
# ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = @DEFVIRTIOFSEXTRAARGS@
# Cache mode:
#
# - none
# Metadata, data, and pathname lookup are not cached in guest. They are
# always fetched from host and any changes are immediately pushed to host.
#
# - auto
# Metadata and pathname lookup cache expires after a configured amount of
# time (default is 1 second). Data is cached while the file is open (close
# to open consistency).
#
# - always
# Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "@DEFVIRTIOFSCACHE@"
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "@DEFBLOCKSTORAGEDRIVER_QEMU@"
# Specifies whether cache-related options will be set for block devices.
# Default false
#block_device_cache_set = true
# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true
# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true
# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = @DEFENABLEIOTHREADS@
# Enable preallocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true
# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory preallocation
#enable_hugepages = true
# Enable vhost-user storage device, default false
# Enabling this will result in some Linux reserved block type
# major range 240-254 being chosen to represent vhost-user devices.
enable_vhost_user_store = @DEFENABLEVHOSTUSERSTORE@
# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices live.
vhost_user_store_path = "@DEFVHOSTUSERSTOREPATH@"
# Enable vIOMMU, default false
# Enabling this will result in the VM having a vIOMMU device
# This will also add the following options to the kernel's
# command line: intel_iommu=on,iommu=pt
#enable_iommu = true
# Enable IOMMU_PLATFORM, default false
# Enabling this will result in the VM device having iommu_platform=on set
#enable_iommu_platform = true
# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""
# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available.
#
# Default false
#enable_debug = true
# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true
# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = @DEFMSIZE9P@
# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default false
#disable_image_nvdimm = true
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true
# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs in ring 0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VM's boot time will increase, possibly leading to
# startup timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "@DEFENTROPYSOURCE@"
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
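As an illustration of the scanning behavior described above, here is a minimal sketch, assuming a hypothetical `findHooks` helper rather than the agent's real code, that collects executable files from a per-type hook subdirectory in lexicographical order:

```
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

// findHooks returns the executable files directly under dir, sorted by name.
func findHooks(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var hooks []string
	for _, e := range entries {
		info, err := e.Info()
		if err != nil || e.IsDir() {
			continue
		}
		// Keep only files with an execute bit set.
		if info.Mode()&0o111 != 0 {
			hooks = append(hooks, filepath.Join(dir, e.Name()))
		}
	}
	sort.Strings(hooks)
	return hooks, nil
}

func main() {
	for _, kind := range []string{"prestart", "poststart", "poststop"} {
		hooks, err := findHooks(filepath.Join("/usr/share/oci/hooks", kind))
		if err != nil {
			continue // the directory may simply not exist
		}
		fmt.Println(kind, hooks)
	}
}
```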
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speed up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true
# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"
# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before they are used.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when it gets a
# request from a client.
# Factory grpccache is the VMCache client. It will request a gRPC-format
# VM and convert it back to a VM. If the VMCache function is enabled,
# kata-runtime will request a VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0
# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
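The VMCache split described above boils down to a simple decision in the runtime; the sketch below is purely illustrative, and `CacheClient` with its `GetBaseVM` method is a hypothetical stand-in for the real gRPC client generated from protocols/cache/cache.proto:

```
package main

import "fmt"

// CacheClient is a hypothetical client that would talk to the VMCache
// server over the configured Unix socket (vm_cache_endpoint).
type CacheClient struct{ endpoint string }

// GetBaseVM stands in for the RPC that fetches one pre-created VM
// in gRPC format.
func (c *CacheClient) GetBaseVM() (string, error) {
	return "vm-from-" + c.endpoint, nil // placeholder for the real call
}

// newVM mirrors the documented behavior: with vm_cache_number > 0 the
// factory grpccache client asks the VMCache server for a VM; otherwise
// the VM is created from scratch.
func newVM(vmCacheNumber int) (string, error) {
	if vmCacheNumber > 0 {
		c := &CacheClient{endpoint: "/var/run/kata-containers/cache.sock"}
		return c.GetBaseVM()
	}
	return "fresh-vm", nil
}

func main() {
	vm, _ := newVM(1)
	fmt.Println(vm)
}
```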
[agent.@PROJECT_TYPE@]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true
# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicitly with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
# setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
# will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
# full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"
# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
# - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# The container will not be started when:
# * A kernel module is specified and the modprobe command is not installed in the guest
# or it fails to load the module.
# * The module is not available in the guest or it doesn't meet the guest kernel
# requirements, like architecture and version.
#
kernel_modules=[]
# Enable debug console.
# If enabled, the user can connect to the guest OS running inside the hypervisor
# through the "kata-runtime exec <sandbox-id>" command
#debug_console_enabled = true
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of additional
# networks being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "@NETMONPATH@"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to
# the container network interface
# Options:
#
# - bridged (Deprecated)
# Uses a linux bridge to interconnect the container interface to
# the VM. Works for most cases except macvlan and ipvlan.
# ***NOTE: This feature has been deprecated with plans to remove this
# feature in the future. Please use other network models listed below.
#
# - macvtap
# Used when the Container network interface can be bridged using
# macvtap.
#
# - none
# Used when customizing the network. Only creates a tap device. No veth pair.
#
# - tcfilter
# Uses tc filter rules to redirect traffic from the network interface
# provided by plugin to a tap interface connected to the VM.
#
internetworking_model="@DEFNETWORKMODEL_QEMU@"
# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=@DEFDISABLEGUESTSECCOMP@
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have potential impacts on your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
# If enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=@DEFSANDBOXCGROUPONLY@
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=@DEFAULTEXPFEATURES@
# If enabled, the user can run pprof tools with the shim v2 process through kata-monitor.
# (default: false)
# EnablePprof = true
View File
@ -16,6 +16,17 @@ kernel = "@KERNELPATH@"
image = "@IMAGEPATH@" image = "@IMAGEPATH@"
machine_type = "@MACHINETYPE@" machine_type = "@MACHINETYPE@"
# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for "io.katacontainers.config.hypervisor.path"
enable_annotations = @DEFENABLEANNOTATIONS@
# List of valid annotation values for the hypervisor
# Each member of the list is a path pattern as described by glob(3).
# The default if not set is empty (all annotations rejected).
# Your distribution recommends: @QEMUVALIDHYPERVISORPATHS@
valid_hypervisor_paths = @QEMUVALIDHYPERVISORPATHS@
# Optional space-separated list of options to pass to the guest kernel. # Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having # For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc. # trouble running pre-2.15 glibc.
@ -101,21 +112,26 @@ default_memory = @DEFMEMSZ@
#enable_virtio_mem = true #enable_virtio_mem = true
# Disable block device from being used for a container's rootfs. # Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's # In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed # root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. # directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor, # This flag prevents the block device from being passed to the hypervisor,
# 9pfs is used instead to pass the rootfs. # 9pfs is used instead to pass the rootfs.
disable_block_device_use = @DEFDISABLEBLOCK@ disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type: # Shared file system type:
# - virtio-9p (default) # - virtio-9p (default)
# - virtio-fs # - virtio-fs
shared_fs = "@DEFSHAREDFS@" shared_fs = "@DEFSHAREDFS_QEMU_VIRTIOFS@"
# Path to vhost-user-fs daemon. # Path to vhost-user-fs daemon.
virtio_fs_daemon = "@DEFVIRTIOFSDAEMON@" virtio_fs_daemon = "@DEFVIRTIOFSDAEMON@"
# List of valid annotation values for the virtiofs daemon
# The default if not set is empty (all annotations rejected).
# Your distribution recommends: @DEFVALIDVIRTIOFSDAEMONPATHS@
valid_virtio_fs_daemon_paths = @DEFVALIDVIRTIOFSDAEMONPATHS@
# Default size of DAX cache in MiB # Default size of DAX cache in MiB
virtio_fs_cache_size = @DEFVIRTIOFSCACHESIZE@ virtio_fs_cache_size = @DEFVIRTIOFSCACHESIZE@
@ -180,7 +196,7 @@ enable_iothreads = @DEFENABLEIOTHREADS@
# Enabling this will result in the VM memory # Enabling this will result in the VM memory
# being allocated using huge pages. # being allocated using huge pages.
# This is useful when you want to use vhost-user network # This is useful when you want to use vhost-user network
# stacks within the container. This will automatically # stacks within the container. This will automatically
# result in memory preallocation # result in memory preallocation
#enable_hugepages = true #enable_hugepages = true
@ -205,11 +221,21 @@ vhost_user_store_path = "@DEFVHOSTUSERSTOREPATH@"
# Enabling this will result in the VM device having iommu_platform=on set # Enabling this will result in the VM device having iommu_platform=on set
#enable_iommu_platform = true #enable_iommu_platform = true
# List of valid annotation values for the vhost user store path
# The default if not set is empty (all annotations rejected).
# Your distribution recommends: @DEFVALIDVHOSTUSERSTOREPATHS@
valid_vhost_user_store_paths = @DEFVALIDVHOSTUSERSTOREPATHS@
# Enable file based guest memory support. The default is an empty string which # Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled # will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder. # automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled. # This option will be ignored if VM templating is enabled.
#file_mem_backend = "" #file_mem_backend = "@DEFFILEMEMBACKEND@"
# List of valid annotation values for the file_mem_backend annotation
# The default if not set is empty (all annotations rejected).
# Your distribution recommends: @DEFVALIDFILEMEMBACKENDS@
valid_file_mem_backends = @DEFVALIDFILEMEMBACKENDS@
# Enable swap of vm memory. Default false. # Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true # The behaviour is undefined if mem_prealloc is also set to true
@ -217,17 +243,17 @@ vhost_user_store_path = "@DEFVHOSTUSERSTOREPATH@"
# This option changes the default hypervisor and kernel parameters # This option changes the default hypervisor and kernel parameters
# to enable debug output where available. # to enable debug output where available.
# #
# Default false # Default false
#enable_debug = true #enable_debug = true
# Disable the customizations done in the runtime when it detects # Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime # that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal. # behaving as it would when running on bare metal.
# #
#disable_nesting_checks = true #disable_nesting_checks = true
# This is the msize used for 9p shares. It is the number of bytes # This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload. # used for 9p packet payload.
#msize_9p = @DEFMSIZE9P@ #msize_9p = @DEFMSIZE9P@
@ -236,9 +262,9 @@ vhost_user_store_path = "@DEFVHOSTUSERSTOREPATH@"
# Default is false # Default is false
#disable_image_nvdimm = true #disable_image_nvdimm = true
# VFIO devices are hotplugged on a bridge by default. # VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with # Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on # a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type. # a bridge. This value is valid for "pc" machine type.
# Default false # Default false
#hotplug_vfio_on_root_bus = true #hotplug_vfio_on_root_bus = true
@ -251,7 +277,7 @@ vhost_user_store_path = "@DEFVHOSTUSERSTOREPATH@"
#pcie_root_port = 2 #pcie_root_port = 2
# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off # If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs in ring 0) for network I/O performance. # security (vhost-net runs in ring 0) for network I/O performance.
#disable_vhost_net = true #disable_vhost_net = true
# #
View File
@ -22,9 +22,9 @@ func shimConfig(config *shim.Config) {
func main() { func main() {
if len(os.Args) == 2 && os.Args[1] == "--version" { if len(os.Args) == 2 && os.Args[1] == "--version" {
fmt.Printf("%s containerd shim: id: %q, version: %s, commit: %v\n", project, types.KataRuntimeName, version, commit) fmt.Printf("%s containerd shim: id: %q, version: %s, commit: %v\n", project, types.DefaultKataRuntimeName, version, commit)
os.Exit(0) os.Exit(0)
} }
shim.Run(types.KataRuntimeName, containerdshim.New, shimConfig) shim.Run(types.DefaultKataRuntimeName, containerdshim.New, shimConfig)
} }
View File
@ -317,12 +317,12 @@ func TestCreateContainerConfigFail(t *testing.T) {
MockID: testSandboxID, MockID: testSandboxID,
} }
testingImpl.CreateContainerFunc = func(ctx context.Context, sandboxID string, containerConfig vc.ContainerConfig) (vc.VCSandbox, vc.VCContainer, error) { sandbox.CreateContainerFunc = func(conf vc.ContainerConfig) (vc.VCContainer, error) {
return sandbox, &vcmock.Container{}, nil return &vcmock.Container{}, nil
} }
defer func() { defer func() {
testingImpl.CreateContainerFunc = nil sandbox.CreateContainerFunc = nil
}() }()
tmpdir, err := ioutil.TempDir("", "") tmpdir, err := ioutil.TempDir("", "")
View File
@ -7,12 +7,11 @@
package containerdshim package containerdshim
import ( import (
"context"
"testing" "testing"
"github.com/containerd/cgroups" "github.com/containerd/cgroups"
"github.com/containerd/containerd/namespaces"
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers" vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/vcmock"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
) )
@ -37,18 +36,21 @@ func TestStatNetworkMetric(t *testing.T) {
}, },
} }
testingImpl.StatsContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStats, error) { sandbox := &vcmock.Sandbox{
MockID: testSandboxID,
}
sandbox.StatsContainerFunc = func(contID string) (vc.ContainerStats, error) {
return vc.ContainerStats{ return vc.ContainerStats{
NetworkStats: mockNetwork, NetworkStats: mockNetwork,
}, nil }, nil
} }
defer func() { defer func() {
testingImpl.StatsContainerFunc = nil sandbox.StatsContainerFunc = nil
}() }()
ctx := namespaces.WithNamespace(context.Background(), "UnitTest") resp, err := sandbox.StatsContainer(testContainerID)
resp, err := testingImpl.StatsContainer(ctx, testSandboxID, testContainerID)
assert.NoError(err) assert.NoError(err)
metrics := statsToMetrics(&resp) metrics := statsToMetrics(&resp)
View File
@ -28,14 +28,14 @@ func TestPauseContainerSuccess(t *testing.T) {
MockID: testSandboxID, MockID: testSandboxID,
} }
testingImpl.PauseContainerFunc = func(ctx context.Context, sandboxID, containerID string) error { sandbox.PauseContainerFunc = func(contID string) error {
return nil return nil
} }
defer func() { defer func() {
testingImpl.PauseContainerFunc = nil sandbox.PauseContainerFunc = nil
}() }()
testingImpl.StatusContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStatus, error) { sandbox.StatusContainerFunc = func(contID string) (vc.ContainerStatus, error) {
return vc.ContainerStatus{ return vc.ContainerStatus{
ID: testContainerID, ID: testContainerID,
Annotations: make(map[string]string), Annotations: make(map[string]string),
@ -45,7 +45,7 @@ func TestPauseContainerSuccess(t *testing.T) {
}, nil }, nil
} }
defer func() { defer func() {
testingImpl.StatusContainerFunc = nil sandbox.StatusContainerFunc = nil
}() }()
s := &service{ s := &service{
@ -76,14 +76,14 @@ func TestPauseContainerFail(t *testing.T) {
MockID: testSandboxID, MockID: testSandboxID,
} }
testingImpl.PauseContainerFunc = func(ctx context.Context, sandboxID, containerID string) error { sandbox.PauseContainerFunc = func(contID string) error {
return nil return nil
} }
defer func() { defer func() {
testingImpl.PauseContainerFunc = nil sandbox.PauseContainerFunc = nil
}() }()
testingImpl.StatusContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStatus, error) { sandbox.StatusContainerFunc = func(contID string) (vc.ContainerStatus, error) {
return vc.ContainerStatus{ return vc.ContainerStatus{
ID: testContainerID, ID: testContainerID,
Annotations: make(map[string]string), Annotations: make(map[string]string),
@ -93,7 +93,7 @@ func TestPauseContainerFail(t *testing.T) {
}, nil }, nil
} }
defer func() { defer func() {
testingImpl.StatusContainerFunc = nil sandbox.StatusContainerFunc = nil
}() }()
s := &service{ s := &service{
@ -119,14 +119,14 @@ func TestResumeContainerSuccess(t *testing.T) {
MockID: testSandboxID, MockID: testSandboxID,
} }
testingImpl.ResumeContainerFunc = func(ctx context.Context, sandboxID, containerID string) error { sandbox.ResumeContainerFunc = func(contID string) error {
return nil return nil
} }
defer func() { defer func() {
testingImpl.ResumeContainerFunc = nil sandbox.ResumeContainerFunc = nil
}() }()
testingImpl.StatusContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStatus, error) { sandbox.StatusContainerFunc = func(contID string) (vc.ContainerStatus, error) {
return vc.ContainerStatus{ return vc.ContainerStatus{
ID: testContainerID, ID: testContainerID,
Annotations: make(map[string]string), Annotations: make(map[string]string),
@ -137,7 +137,7 @@ func TestResumeContainerSuccess(t *testing.T) {
} }
defer func() { defer func() {
testingImpl.StatusContainerFunc = nil sandbox.StatusContainerFunc = nil
}() }()
s := &service{ s := &service{
@ -168,13 +168,13 @@ func TestResumeContainerFail(t *testing.T) {
MockID: testSandboxID, MockID: testSandboxID,
} }
testingImpl.ResumeContainerFunc = func(ctx context.Context, sandboxID, containerID string) error { sandbox.ResumeContainerFunc = func(contID string) error {
return nil return nil
} }
defer func() { defer func() {
testingImpl.ResumeContainerFunc = nil sandbox.ResumeContainerFunc = nil
}() }()
testingImpl.StatusContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStatus, error) { sandbox.StatusContainerFunc = func(contID string) (vc.ContainerStatus, error) {
return vc.ContainerStatus{ return vc.ContainerStatus{
ID: testContainerID, ID: testContainerID,
Annotations: make(map[string]string), Annotations: make(map[string]string),
@ -184,7 +184,7 @@ func TestResumeContainerFail(t *testing.T) {
}, nil }, nil
} }
defer func() { defer func() {
testingImpl.StatusContainerFunc = nil sandbox.StatusContainerFunc = nil
}() }()
s := &service{ s := &service{
View File
@ -28,7 +28,7 @@ func TestStartStartSandboxSuccess(t *testing.T) {
MockID: testSandboxID, MockID: testSandboxID,
} }
testingImpl.StatusContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStatus, error) { sandbox.StatusContainerFunc = func(contID string) (vc.ContainerStatus, error) {
return vc.ContainerStatus{ return vc.ContainerStatus{
ID: sandbox.ID(), ID: sandbox.ID(),
Annotations: map[string]string{ Annotations: map[string]string{
@ -38,7 +38,7 @@ func TestStartStartSandboxSuccess(t *testing.T) {
} }
defer func() { defer func() {
testingImpl.StatusContainerFunc = nil sandbox.StatusContainerFunc = nil
}() }()
s := &service{ s := &service{
@ -58,12 +58,12 @@ func TestStartStartSandboxSuccess(t *testing.T) {
ID: testSandboxID, ID: testSandboxID,
} }
testingImpl.StartSandboxFunc = func(ctx context.Context, sandboxID string) (vc.VCSandbox, error) { sandbox.StartFunc = func() error {
return sandbox, nil return nil
} }
defer func() { defer func() {
testingImpl.StartSandboxFunc = nil sandbox.StartFunc = nil
}() }()
ctx := namespaces.WithNamespace(context.Background(), "UnitTest") ctx := namespaces.WithNamespace(context.Background(), "UnitTest")
@ -79,7 +79,7 @@ func TestStartMissingAnnotation(t *testing.T) {
MockID: testSandboxID, MockID: testSandboxID,
} }
testingImpl.StatusContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStatus, error) { sandbox.StatusContainerFunc = func(contID string) (vc.ContainerStatus, error) {
return vc.ContainerStatus{ return vc.ContainerStatus{
ID: sandbox.ID(), ID: sandbox.ID(),
Annotations: map[string]string{}, Annotations: map[string]string{},
@ -87,7 +87,7 @@ func TestStartMissingAnnotation(t *testing.T) {
} }
defer func() { defer func() {
testingImpl.StatusContainerFunc = nil sandbox.StatusContainerFunc = nil
}() }()
s := &service{ s := &service{
@ -107,12 +107,12 @@ func TestStartMissingAnnotation(t *testing.T) {
ID: testSandboxID, ID: testSandboxID,
} }
testingImpl.StartSandboxFunc = func(ctx context.Context, sandboxID string) (vc.VCSandbox, error) { sandbox.StartFunc = func() error {
return sandbox, nil return nil
} }
defer func() { defer func() {
testingImpl.StartSandboxFunc = nil sandbox.StartFunc = nil
}() }()
_, err = s.Start(s.ctx, reqStart) _, err = s.Start(s.ctx, reqStart)
@ -135,7 +135,7 @@ func TestStartStartContainerSucess(t *testing.T) {
}, },
} }
testingImpl.StatusContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStatus, error) { sandbox.StatusContainerFunc = func(contID string) (vc.ContainerStatus, error) {
return vc.ContainerStatus{ return vc.ContainerStatus{
ID: testContainerID, ID: testContainerID,
Annotations: map[string]string{ Annotations: map[string]string{
@ -145,15 +145,15 @@ func TestStartStartContainerSucess(t *testing.T) {
} }
defer func() { defer func() {
testingImpl.StatusContainerFunc = nil sandbox.StatusContainerFunc = nil
}() }()
testingImpl.StartContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.VCContainer, error) { sandbox.StartContainerFunc = func(contID string) (vc.VCContainer, error) {
return sandbox.MockContainers[0], nil return sandbox.MockContainers[0], nil
} }
defer func() { defer func() {
testingImpl.StartContainerFunc = nil sandbox.StartContainerFunc = nil
}() }()
s := &service{ s := &service{
View File
@ -179,6 +179,7 @@ github.com/juju/errors v0.0.0-20180806074554-22422dad46e1/go.mod h1:W54LbzXuIE0b
github.com/juju/loggo v0.0.0-20190526231331-6e530bcce5d8/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U= github.com/juju/loggo v0.0.0-20190526231331-6e530bcce5d8/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U=
github.com/juju/testing v0.0.0-20190613124551-e81189438503/go.mod h1:63prj8cnj0tU0S9OHjGJn+b1h0ZghCndfnbQolrYTwA= github.com/juju/testing v0.0.0-20190613124551-e81189438503/go.mod h1:63prj8cnj0tU0S9OHjGJn+b1h0ZghCndfnbQolrYTwA=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kata-containers/kata-containers v0.0.0-20201013034856-c88820454d08 h1:yk9fzLKb9RmV9xuT5mkJw4owk/K0rX5cusm2ukEEDro=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00= github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk= github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
View File
@ -82,7 +82,7 @@ func (ka *KataMonitor) getSandboxes() (map[string]string, error) {
namespacedCtx := namespaces.WithNamespace(ctx, namespace) namespacedCtx := namespaces.WithNamespace(ctx, namespace)
// only list Kata Containers pods/containers // only list Kata Containers pods/containers
containers, err := client.ContainerService().List(namespacedCtx, containers, err := client.ContainerService().List(namespacedCtx,
"runtime.name=="+types.KataRuntimeName+`,labels."io.cri-containerd.kind"==sandbox`) "runtime.name~="+types.KataRuntimeNameRegexp+`,labels."io.cri-containerd.kind"==sandbox`)
if err != nil { if err != nil {
return err return err
} }
View File
@ -8,6 +8,7 @@ package katamonitor
import ( import (
"context" "context"
"fmt" "fmt"
"regexp"
"sync" "sync"
"github.com/containerd/containerd" "github.com/containerd/containerd"
@ -97,6 +98,11 @@ func (sc *sandboxCache) startEventsListener(addr string) error {
`topic=="/containers/delete"`, `topic=="/containers/delete"`,
} }
runtimeNameRegexp, err := regexp.Compile(types.KataRuntimeNameRegexp)
if err != nil {
return err
}
eventsCh, errCh := eventsClient.Subscribe(ctx, eventFilters...) eventsCh, errCh := eventsClient.Subscribe(ctx, eventFilters...)
for { for {
var e *events.Envelope var e *events.Envelope
@ -138,7 +144,7 @@ func (sc *sandboxCache) startEventsListener(addr string) error {
} }
// skip non-kata containers // skip non-kata containers
if cc.Runtime.Name != types.KataRuntimeName { if !runtimeNameRegexp.MatchString(cc.Runtime.Name) {
continue continue
} }
View File
@ -71,9 +71,12 @@ type factory struct {
type hypervisor struct { type hypervisor struct {
Path string `toml:"path"` Path string `toml:"path"`
HypervisorPathList []string `toml:"valid_hypervisor_paths"`
JailerPath string `toml:"jailer_path"` JailerPath string `toml:"jailer_path"`
JailerPathList []string `toml:"valid_jailer_paths"`
Kernel string `toml:"kernel"` Kernel string `toml:"kernel"`
CtlPath string `toml:"ctlpath"` CtlPath string `toml:"ctlpath"`
CtlPathList []string `toml:"valid_ctlpaths"`
Initrd string `toml:"initrd"` Initrd string `toml:"initrd"`
Image string `toml:"image"` Image string `toml:"image"`
Firmware string `toml:"firmware"` Firmware string `toml:"firmware"`
@ -85,6 +88,7 @@ type hypervisor struct {
EntropySource string `toml:"entropy_source"` EntropySource string `toml:"entropy_source"`
SharedFS string `toml:"shared_fs"` SharedFS string `toml:"shared_fs"`
VirtioFSDaemon string `toml:"virtio_fs_daemon"` VirtioFSDaemon string `toml:"virtio_fs_daemon"`
VirtioFSDaemonList []string `toml:"valid_virtio_fs_daemon_paths"`
VirtioFSCache string `toml:"virtio_fs_cache"` VirtioFSCache string `toml:"virtio_fs_cache"`
VirtioFSExtraArgs []string `toml:"virtio_fs_extra_args"` VirtioFSExtraArgs []string `toml:"virtio_fs_extra_args"`
VirtioFSCacheSize uint32 `toml:"virtio_fs_cache_size"` VirtioFSCacheSize uint32 `toml:"virtio_fs_cache_size"`
@ -93,6 +97,7 @@ type hypervisor struct {
BlockDeviceCacheNoflush bool `toml:"block_device_cache_noflush"` BlockDeviceCacheNoflush bool `toml:"block_device_cache_noflush"`
EnableVhostUserStore bool `toml:"enable_vhost_user_store"` EnableVhostUserStore bool `toml:"enable_vhost_user_store"`
VhostUserStorePath string `toml:"vhost_user_store_path"` VhostUserStorePath string `toml:"vhost_user_store_path"`
VhostUserStorePathList []string `toml:"valid_vhost_user_store_paths"`
NumVCPUs int32 `toml:"default_vcpus"` NumVCPUs int32 `toml:"default_vcpus"`
DefaultMaxVCPUs uint32 `toml:"default_maxvcpus"` DefaultMaxVCPUs uint32 `toml:"default_maxvcpus"`
MemorySize uint32 `toml:"default_memory"` MemorySize uint32 `toml:"default_memory"`
@ -108,6 +113,7 @@ type hypervisor struct {
IOMMU bool `toml:"enable_iommu"` IOMMU bool `toml:"enable_iommu"`
IOMMUPlatform bool `toml:"enable_iommu_platform"` IOMMUPlatform bool `toml:"enable_iommu_platform"`
FileBackedMemRootDir string `toml:"file_mem_backend"` FileBackedMemRootDir string `toml:"file_mem_backend"`
FileBackedMemRootList []string `toml:"valid_file_mem_backends"`
Swap bool `toml:"enable_swap"` Swap bool `toml:"enable_swap"`
Debug bool `toml:"enable_debug"` Debug bool `toml:"enable_debug"`
DisableNestingChecks bool `toml:"disable_nesting_checks"` DisableNestingChecks bool `toml:"disable_nesting_checks"`
@ -118,6 +124,7 @@ type hypervisor struct {
GuestHookPath string `toml:"guest_hook_path"` GuestHookPath string `toml:"guest_hook_path"`
RxRateLimiterMaxRate uint64 `toml:"rx_rate_limiter_max_rate"` RxRateLimiterMaxRate uint64 `toml:"rx_rate_limiter_max_rate"`
TxRateLimiterMaxRate uint64 `toml:"tx_rate_limiter_max_rate"` TxRateLimiterMaxRate uint64 `toml:"tx_rate_limiter_max_rate"`
EnableAnnotations []string `toml:"enable_annotations"`
} }
type runtime struct { type runtime struct {
@ -527,7 +534,9 @@ func newFirecrackerHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
return vc.HypervisorConfig{ return vc.HypervisorConfig{
HypervisorPath: hypervisor, HypervisorPath: hypervisor,
HypervisorPathList: h.HypervisorPathList,
JailerPath: jailer, JailerPath: jailer,
JailerPathList: h.JailerPathList,
KernelPath: kernel, KernelPath: kernel,
InitrdPath: initrd, InitrdPath: initrd,
ImagePath: image, ImagePath: image,
@ -550,6 +559,7 @@ func newFirecrackerHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
GuestHookPath: h.guestHookPath(), GuestHookPath: h.guestHookPath(),
RxRateLimiterMaxRate: rxRateLimiterMaxRate, RxRateLimiterMaxRate: rxRateLimiterMaxRate,
TxRateLimiterMaxRate: txRateLimiterMaxRate, TxRateLimiterMaxRate: txRateLimiterMaxRate,
EnableAnnotations: h.EnableAnnotations,
}, nil }, nil
} }
@ -628,6 +638,7 @@ func newQemuHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
return vc.HypervisorConfig{ return vc.HypervisorConfig{
HypervisorPath: hypervisor, HypervisorPath: hypervisor,
HypervisorPathList: h.HypervisorPathList,
KernelPath: kernel, KernelPath: kernel,
InitrdPath: initrd, InitrdPath: initrd,
ImagePath: image, ImagePath: image,
@ -647,6 +658,7 @@ func newQemuHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
DisableBlockDeviceUse: h.DisableBlockDeviceUse, DisableBlockDeviceUse: h.DisableBlockDeviceUse,
SharedFS: sharedFS, SharedFS: sharedFS,
VirtioFSDaemon: h.VirtioFSDaemon, VirtioFSDaemon: h.VirtioFSDaemon,
VirtioFSDaemonList: h.VirtioFSDaemonList,
VirtioFSCacheSize: h.VirtioFSCacheSize, VirtioFSCacheSize: h.VirtioFSCacheSize,
VirtioFSCache: h.defaultVirtioFSCache(), VirtioFSCache: h.defaultVirtioFSCache(),
VirtioFSExtraArgs: h.VirtioFSExtraArgs, VirtioFSExtraArgs: h.VirtioFSExtraArgs,
@ -655,6 +667,7 @@ func newQemuHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
IOMMU: h.IOMMU, IOMMU: h.IOMMU,
IOMMUPlatform: h.getIOMMUPlatform(), IOMMUPlatform: h.getIOMMUPlatform(),
FileBackedMemRootDir: h.FileBackedMemRootDir, FileBackedMemRootDir: h.FileBackedMemRootDir,
FileBackedMemRootList: h.FileBackedMemRootList,
Mlock: !h.Swap, Mlock: !h.Swap,
Debug: h.Debug, Debug: h.Debug,
DisableNestingChecks: h.DisableNestingChecks, DisableNestingChecks: h.DisableNestingChecks,
@ -670,9 +683,11 @@ func newQemuHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
DisableVhostNet: h.DisableVhostNet, DisableVhostNet: h.DisableVhostNet,
EnableVhostUserStore: h.EnableVhostUserStore, EnableVhostUserStore: h.EnableVhostUserStore,
VhostUserStorePath: h.vhostUserStorePath(), VhostUserStorePath: h.vhostUserStorePath(),
VhostUserStorePathList: h.VhostUserStorePathList,
GuestHookPath: h.guestHookPath(), GuestHookPath: h.guestHookPath(),
RxRateLimiterMaxRate: rxRateLimiterMaxRate, RxRateLimiterMaxRate: rxRateLimiterMaxRate,
TxRateLimiterMaxRate: txRateLimiterMaxRate, TxRateLimiterMaxRate: txRateLimiterMaxRate,
EnableAnnotations: h.EnableAnnotations,
}, nil }, nil
} }
@ -715,25 +730,28 @@ func newAcrnHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
} }
return vc.HypervisorConfig{ return vc.HypervisorConfig{
HypervisorPath: hypervisor, HypervisorPath: hypervisor,
KernelPath: kernel, HypervisorPathList: h.HypervisorPathList,
ImagePath: image, KernelPath: kernel,
HypervisorCtlPath: hypervisorctl, ImagePath: image,
FirmwarePath: firmware, HypervisorCtlPath: hypervisorctl,
KernelParams: vc.DeserializeParams(strings.Fields(kernelParams)), HypervisorCtlPathList: h.CtlPathList,
NumVCPUs: h.defaultVCPUs(), FirmwarePath: firmware,
DefaultMaxVCPUs: h.defaultMaxVCPUs(), KernelParams: vc.DeserializeParams(strings.Fields(kernelParams)),
MemorySize: h.defaultMemSz(), NumVCPUs: h.defaultVCPUs(),
MemSlots: h.defaultMemSlots(), DefaultMaxVCPUs: h.defaultMaxVCPUs(),
EntropySource: h.GetEntropySource(), MemorySize: h.defaultMemSz(),
DefaultBridges: h.defaultBridges(), MemSlots: h.defaultMemSlots(),
HugePages: h.HugePages, EntropySource: h.GetEntropySource(),
Mlock: !h.Swap, DefaultBridges: h.defaultBridges(),
Debug: h.Debug, HugePages: h.HugePages,
DisableNestingChecks: h.DisableNestingChecks, Mlock: !h.Swap,
BlockDeviceDriver: blockDriver, Debug: h.Debug,
DisableVhostNet: h.DisableVhostNet, DisableNestingChecks: h.DisableNestingChecks,
GuestHookPath: h.guestHookPath(), BlockDeviceDriver: blockDriver,
DisableVhostNet: h.DisableVhostNet,
GuestHookPath: h.guestHookPath(),
EnableAnnotations: h.EnableAnnotations,
}, nil }, nil
} }
@ -786,6 +804,7 @@ func newClhHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
return vc.HypervisorConfig{ return vc.HypervisorConfig{
HypervisorPath: hypervisor, HypervisorPath: hypervisor,
HypervisorPathList: h.HypervisorPathList,
KernelPath: kernel, KernelPath: kernel,
InitrdPath: initrd, InitrdPath: initrd,
ImagePath: image, ImagePath: image,
@ -804,11 +823,13 @@ func newClhHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
DisableBlockDeviceUse: h.DisableBlockDeviceUse, DisableBlockDeviceUse: h.DisableBlockDeviceUse,
SharedFS: sharedFS, SharedFS: sharedFS,
VirtioFSDaemon: h.VirtioFSDaemon, VirtioFSDaemon: h.VirtioFSDaemon,
VirtioFSDaemonList: h.VirtioFSDaemonList,
VirtioFSCacheSize: h.VirtioFSCacheSize, VirtioFSCacheSize: h.VirtioFSCacheSize,
VirtioFSCache: h.VirtioFSCache, VirtioFSCache: h.VirtioFSCache,
MemPrealloc: h.MemPrealloc, MemPrealloc: h.MemPrealloc,
HugePages: h.HugePages, HugePages: h.HugePages,
FileBackedMemRootDir: h.FileBackedMemRootDir, FileBackedMemRootDir: h.FileBackedMemRootDir,
FileBackedMemRootList: h.FileBackedMemRootList,
Mlock: !h.Swap, Mlock: !h.Swap,
Debug: h.Debug, Debug: h.Debug,
DisableNestingChecks: h.DisableNestingChecks, DisableNestingChecks: h.DisableNestingChecks,
@ -822,6 +843,7 @@ func newClhHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
PCIeRootPort: h.PCIeRootPort, PCIeRootPort: h.PCIeRootPort,
DisableVhostNet: true, DisableVhostNet: true,
VirtioFSExtraArgs: h.VirtioFSExtraArgs, VirtioFSExtraArgs: h.VirtioFSExtraArgs,
EnableAnnotations: h.EnableAnnotations,
}, nil }, nil
} }
View File
@ -1,4 +1,4 @@
// Copyright (c) 2020 Ant Financial // Copyright (c) 2020 Ant Group
// //
// SPDX-License-Identifier: Apache-2.0 // SPDX-License-Identifier: Apache-2.0
// //
@ -6,6 +6,7 @@
package types package types
const ( const (
KataRuntimeName = "io.containerd.kata.v2" DefaultKataRuntimeName = "io.containerd.kata.v2"
KataRuntimeNameRegexp = `io\.containerd\.kata.*\.v2`
ContainerdRuntimeTaskPath = "io.containerd.runtime.v2.task" ContainerdRuntimeTaskPath = "io.containerd.runtime.v2.task"
) )
View File
@ -0,0 +1,30 @@
// Copyright (c) 2020 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
package types
import (
"regexp"
"testing"
"github.com/stretchr/testify/assert"
)
func TestKataRuntimeNameRegexp(t *testing.T) {
assert := assert.New(t)
runtimeNameRegexp, err := regexp.Compile(KataRuntimeNameRegexp)
assert.NoError(err)
// valid Kata Containers runtime names
assert.Equal(true, runtimeNameRegexp.MatchString("io.containerd.kata.v2"))
assert.Equal(true, runtimeNameRegexp.MatchString("io.containerd.kataclh.v2"))
assert.Equal(true, runtimeNameRegexp.MatchString("io.containerd.kata-clh.v2"))
assert.Equal(true, runtimeNameRegexp.MatchString("io.containerd.kata.1.2.3-clh.4.v2"))
// invalid Kata Containers runtime names
assert.Equal(false, runtimeNameRegexp.MatchString("io2containerd.kata.v2"))
assert.Equal(false, runtimeNameRegexp.MatchString("io.c3ontainerd.kata.v2"))
assert.Equal(false, runtimeNameRegexp.MatchString("io.containerd.runc.v1"))
}
View File
@ -420,15 +420,12 @@ func (clh *cloudHypervisor) hotplugAddBlockDevice(drive *config.BlockDrive) erro
" using '%v' but only support '%v'", clh.config.BlockDeviceDriver, config.VirtioBlock) " using '%v' but only support '%v'", clh.config.BlockDeviceDriver, config.VirtioBlock)
} }
var err error
cl := clh.client() cl := clh.client()
ctx, cancel := context.WithTimeout(context.Background(), clhHotPlugAPITimeout*time.Second) ctx, cancel := context.WithTimeout(context.Background(), clhHotPlugAPITimeout*time.Second)
defer cancel() defer cancel()
_, _, err := cl.VmmPingGet(ctx)
if err != nil {
return openAPIClientError(err)
}
driveID := clhDriveIndexToID(drive.Index) driveID := clhDriveIndexToID(drive.Index)
// Explicitly set PCIAddr to NULL, so that VirtPath can be used // Explicitly set PCIAddr to NULL, so that VirtPath can be used
@ -457,12 +454,7 @@ func (clh *cloudHypervisor) hotPlugVFIODevice(device config.VFIODev) error {
ctx, cancel := context.WithTimeout(context.Background(), clhHotPlugAPITimeout*time.Second) ctx, cancel := context.WithTimeout(context.Background(), clhHotPlugAPITimeout*time.Second)
defer cancel() defer cancel()
_, _, err := cl.VmmPingGet(ctx) _, _, err := cl.VmAddDevicePut(ctx, chclient.VmAddDevice{Path: device.SysfsDev, Id: device.ID})
if err != nil {
return openAPIClientError(err)
}
_, _, err = cl.VmAddDevicePut(ctx, chclient.VmAddDevice{Path: device.SysfsDev})
if err != nil { if err != nil {
err = fmt.Errorf("Failed to hotplug device %+v %s", device, openAPIClientError(err)) err = fmt.Errorf("Failed to hotplug device %+v %s", device, openAPIClientError(err))
} }
@ -506,6 +498,20 @@ func (clh *cloudHypervisor) hotplugRemoveBlockDevice(drive *config.BlockDrive) e
return err return err
} }
func (clh *cloudHypervisor) hotplugRemoveVfioDevice(device *config.VFIODev) error {
cl := clh.client()
ctx, cancel := context.WithTimeout(context.Background(), clhHotPlugAPITimeout*time.Second)
defer cancel()
_, err := cl.VmRemoveDevicePut(ctx, chclient.VmRemoveDevice{Id: device.ID})
if err != nil {
err = fmt.Errorf("failed to hotplug remove vfio device %+v %s", device, openAPIClientError(err))
}
return err
}
func (clh *cloudHypervisor) hotplugRemoveDevice(devInfo interface{}, devType deviceType) (interface{}, error) { func (clh *cloudHypervisor) hotplugRemoveDevice(devInfo interface{}, devType deviceType) (interface{}, error) {
span, _ := clh.trace("hotplugRemoveDevice") span, _ := clh.trace("hotplugRemoveDevice")
defer span.Finish() defer span.Finish()
@ -513,6 +519,8 @@ func (clh *cloudHypervisor) hotplugRemoveDevice(devInfo interface{}, devType dev
switch devType { switch devType {
case blockDev: case blockDev:
return nil, clh.hotplugRemoveBlockDevice(devInfo.(*config.BlockDrive)) return nil, clh.hotplugRemoveBlockDevice(devInfo.(*config.BlockDrive))
case vfioDev:
return nil, clh.hotplugRemoveVfioDevice(devInfo.(*config.VFIODev))
default: default:
clh.Logger().WithFields(log.Fields{"devInfo": devInfo, clh.Logger().WithFields(log.Fields{"devInfo": devInfo,
"deviceType": devType}).Error("hotplugRemoveDevice: unsupported device") "deviceType": devType}).Error("hotplugRemoveDevice: unsupported device")
View File
@ -1139,6 +1139,12 @@ func (c *Container) update(resources specs.LinuxResources) error {
if q := cpu.Quota; q != nil && *q != 0 { if q := cpu.Quota; q != nil && *q != 0 {
c.config.Resources.CPU.Quota = q c.config.Resources.CPU.Quota = q
} }
if cpu.Cpus != "" {
c.config.Resources.CPU.Cpus = cpu.Cpus
}
if cpu.Mems != "" {
c.config.Resources.CPU.Mems = cpu.Mems
}
} }
if c.config.Resources.Memory == nil { if c.config.Resources.Memory == nil {
@ -1159,6 +1165,14 @@ func (c *Container) update(resources specs.LinuxResources) error {
} }
} }
// There currently isn't a notion of cpusets.cpus or mems being tracked
// inside of the guest. Make sure we clear these before asking the agent to update
// the container's cgroups.
if resources.CPU != nil {
resources.CPU.Mems = ""
resources.CPU.Cpus = ""
}
return c.sandbox.agent.updateContainer(c.sandbox, *c, resources) return c.sandbox.agent.updateContainer(c.sandbox, *c, resources)
} }
View File
@ -275,12 +275,21 @@ type HypervisorConfig struct {
// HypervisorPath is the hypervisor executable host path. // HypervisorPath is the hypervisor executable host path.
HypervisorPath string HypervisorPath string
// HypervisorPathList is the list of hypervisor path names allowed in annotations
HypervisorPathList []string
// HypervisorCtlPathList is the list of hypervisor control path names allowed in annotations
HypervisorCtlPathList []string
// HypervisorCtlPath is the hypervisor ctl executable host path. // HypervisorCtlPath is the hypervisor ctl executable host path.
HypervisorCtlPath string HypervisorCtlPath string
// JailerPath is the jailer executable host path. // JailerPath is the jailer executable host path.
JailerPath string JailerPath string
// JailerPathList is the list of jailer path names allowed in annotations
JailerPathList []string
// BlockDeviceDriver specifies the driver to be used for block device // BlockDeviceDriver specifies the driver to be used for block device
// either VirtioSCSI or VirtioBlock with the default driver being defaultBlockDriver // either VirtioSCSI or VirtioBlock with the default driver being defaultBlockDriver
BlockDeviceDriver string BlockDeviceDriver string
@ -309,6 +318,9 @@ type HypervisorConfig struct {
// VirtioFSDaemon is the virtio-fs vhost-user daemon path // VirtioFSDaemon is the virtio-fs vhost-user daemon path
VirtioFSDaemon string VirtioFSDaemon string
// VirtioFSDaemonList is the list of valid virtiofs names for annotations
VirtioFSDaemonList []string
// VirtioFSCache cache mode for fs version cache or "none" // VirtioFSCache cache mode for fs version cache or "none"
VirtioFSCache string VirtioFSCache string
@ -318,6 +330,9 @@ type HypervisorConfig struct {
// File based memory backend root directory // File based memory backend root directory
FileBackedMemRootDir string FileBackedMemRootDir string
// FileBackedMemRootList is the list of valid root directory values for annotations
FileBackedMemRootList []string
// customAssets is a map of assets. // customAssets is a map of assets.
// Each value in that map takes precedence over the configured assets. // Each value in that map takes precedence over the configured assets.
// For example, if there is a value for the "kernel" key in this map, // For example, if there is a value for the "kernel" key in this map,
@ -400,6 +415,9 @@ type HypervisorConfig struct {
// related folders, sockets and device nodes should be. // related folders, sockets and device nodes should be.
VhostUserStorePath string VhostUserStorePath string
// VhostUserStorePathList is the list of valid values for vhost-user paths
VhostUserStorePathList []string
// GuestHookPath is the path within the VM that will be used for 'drop-in' hooks // GuestHookPath is the path within the VM that will be used for 'drop-in' hooks
GuestHookPath string GuestHookPath string
@ -415,6 +433,9 @@ type HypervisorConfig struct {
// TxRateLimiterMaxRate is used to control network I/O outbound bandwidth on VM level. // TxRateLimiterMaxRate is used to control network I/O outbound bandwidth on VM level.
TxRateLimiterMaxRate uint64 TxRateLimiterMaxRate uint64
// Enable annotations by name
EnableAnnotations []string
} }
// vcpu mapping from vcpu number to thread number // vcpu mapping from vcpu number to thread number
View File
@ -212,8 +212,11 @@ func (s *Sandbox) dumpConfig(ss *persistapi.SandboxState) {
MachineAccelerators: sconfig.HypervisorConfig.MachineAccelerators,
CPUFeatures: sconfig.HypervisorConfig.CPUFeatures,
HypervisorPath: sconfig.HypervisorConfig.HypervisorPath,
HypervisorPathList: sconfig.HypervisorConfig.HypervisorPathList,
HypervisorCtlPath: sconfig.HypervisorConfig.HypervisorCtlPath,
HypervisorCtlPathList: sconfig.HypervisorConfig.HypervisorCtlPathList,
JailerPath: sconfig.HypervisorConfig.JailerPath,
JailerPathList: sconfig.HypervisorConfig.JailerPathList,
BlockDeviceDriver: sconfig.HypervisorConfig.BlockDeviceDriver,
HypervisorMachineType: sconfig.HypervisorConfig.HypervisorMachineType,
MemoryPath: sconfig.HypervisorConfig.MemoryPath,
@ -221,6 +224,7 @@ func (s *Sandbox) dumpConfig(ss *persistapi.SandboxState) {
EntropySource: sconfig.HypervisorConfig.EntropySource,
SharedFS: sconfig.HypervisorConfig.SharedFS,
VirtioFSDaemon: sconfig.HypervisorConfig.VirtioFSDaemon,
VirtioFSDaemonList: sconfig.HypervisorConfig.VirtioFSDaemonList,
VirtioFSCache: sconfig.HypervisorConfig.VirtioFSCache,
VirtioFSExtraArgs: sconfig.HypervisorConfig.VirtioFSExtraArgs[:],
BlockDeviceCacheSet: sconfig.HypervisorConfig.BlockDeviceCacheSet,
@ -232,6 +236,7 @@ func (s *Sandbox) dumpConfig(ss *persistapi.SandboxState) {
MemPrealloc: sconfig.HypervisorConfig.MemPrealloc,
HugePages: sconfig.HypervisorConfig.HugePages,
FileBackedMemRootDir: sconfig.HypervisorConfig.FileBackedMemRootDir,
FileBackedMemRootList: sconfig.HypervisorConfig.FileBackedMemRootList,
Realtime: sconfig.HypervisorConfig.Realtime,
Mlock: sconfig.HypervisorConfig.Mlock,
DisableNestingChecks: sconfig.HypervisorConfig.DisableNestingChecks,
@ -243,10 +248,12 @@ func (s *Sandbox) dumpConfig(ss *persistapi.SandboxState) {
DisableVhostNet: sconfig.HypervisorConfig.DisableVhostNet,
EnableVhostUserStore: sconfig.HypervisorConfig.EnableVhostUserStore,
VhostUserStorePath: sconfig.HypervisorConfig.VhostUserStorePath,
VhostUserStorePathList: sconfig.HypervisorConfig.VhostUserStorePathList,
GuestHookPath: sconfig.HypervisorConfig.GuestHookPath,
VMid: sconfig.HypervisorConfig.VMid,
RxRateLimiterMaxRate: sconfig.HypervisorConfig.RxRateLimiterMaxRate,
TxRateLimiterMaxRate: sconfig.HypervisorConfig.TxRateLimiterMaxRate,
EnableAnnotations: sconfig.HypervisorConfig.EnableAnnotations,
}
ss.Config.KataAgentConfig = &persistapi.KataAgentConfig{
@ -473,8 +480,11 @@ func loadSandboxConfig(id string) (*SandboxConfig, error) {
MachineAccelerators: hconf.MachineAccelerators,
CPUFeatures: hconf.CPUFeatures,
HypervisorPath: hconf.HypervisorPath,
HypervisorPathList: hconf.HypervisorPathList,
HypervisorCtlPath: hconf.HypervisorCtlPath,
HypervisorCtlPathList: hconf.HypervisorCtlPathList,
JailerPath: hconf.JailerPath,
JailerPathList: hconf.JailerPathList,
BlockDeviceDriver: hconf.BlockDeviceDriver,
HypervisorMachineType: hconf.HypervisorMachineType,
MemoryPath: hconf.MemoryPath,
@ -482,6 +492,7 @@ func loadSandboxConfig(id string) (*SandboxConfig, error) {
EntropySource: hconf.EntropySource,
SharedFS: hconf.SharedFS,
VirtioFSDaemon: hconf.VirtioFSDaemon,
VirtioFSDaemonList: hconf.VirtioFSDaemonList,
VirtioFSCache: hconf.VirtioFSCache,
VirtioFSExtraArgs: hconf.VirtioFSExtraArgs[:],
BlockDeviceCacheSet: hconf.BlockDeviceCacheSet,
@ -493,6 +504,7 @@ func loadSandboxConfig(id string) (*SandboxConfig, error) {
MemPrealloc: hconf.MemPrealloc,
HugePages: hconf.HugePages,
FileBackedMemRootDir: hconf.FileBackedMemRootDir,
FileBackedMemRootList: hconf.FileBackedMemRootList,
Realtime: hconf.Realtime,
Mlock: hconf.Mlock,
DisableNestingChecks: hconf.DisableNestingChecks,
@ -504,10 +516,12 @@ func loadSandboxConfig(id string) (*SandboxConfig, error) {
DisableVhostNet: hconf.DisableVhostNet,
EnableVhostUserStore: hconf.EnableVhostUserStore,
VhostUserStorePath: hconf.VhostUserStorePath,
VhostUserStorePathList: hconf.VhostUserStorePathList,
GuestHookPath: hconf.GuestHookPath,
VMid: hconf.VMid,
RxRateLimiterMaxRate: hconf.RxRateLimiterMaxRate,
TxRateLimiterMaxRate: hconf.TxRateLimiterMaxRate,
EnableAnnotations: hconf.EnableAnnotations,
}
sconfig.AgentConfig = KataAgentConfig{

View File

@ -60,12 +60,22 @@ type HypervisorConfig struct {
// HypervisorPath is the hypervisor executable host path.
HypervisorPath string
// HypervisorPathList is the list of hypervisor paths names allowed in annotations
HypervisorPathList []string
// HypervisorCtlPath is the hypervisor ctl executable host path.
HypervisorCtlPath string
// HypervisorCtlPathList is the list of hypervisor control paths names allowed in annotations
HypervisorCtlPathList []string
// JailerPath is the jailer executable host path.
JailerPath string
// JailerPathList is the list of jailer paths names allowed in annotations
JailerPathList []string
// BlockDeviceDriver specifies the driver to be used for block device
// either VirtioSCSI or VirtioBlock with the default driver being defaultBlockDriver
BlockDeviceDriver string
@ -94,6 +104,9 @@ type HypervisorConfig struct {
// VirtioFSDaemon is the virtio-fs vhost-user daemon path
VirtioFSDaemon string
// VirtioFSDaemonList is the list of valid virtiofs names for annotations
VirtioFSDaemonList []string
// VirtioFSCache cache mode for fs version cache or "none"
VirtioFSCache string
@ -103,6 +116,9 @@ type HypervisorConfig struct {
// File based memory backend root directory
FileBackedMemRootDir string
// FileBackedMemRootList is the list of valid root directories values for annotations
FileBackedMemRootList []string
// BlockDeviceCacheSet specifies cache-related options will be set to block devices or not.
BlockDeviceCacheSet bool
@ -173,6 +189,9 @@ type HypervisorConfig struct {
// related folders, sockets and device nodes should be.
VhostUserStorePath string
// VhostUserStorePathList is the list of valid values for vhost-user paths
VhostUserStorePathList []string
// GuestHookPath is the path within the VM that will be used for 'drop-in' hooks
GuestHookPath string
@ -185,6 +204,9 @@ type HypervisorConfig struct {
// TxRateLimiterMaxRate is used to control network I/O outbound bandwidth on VM level.
TxRateLimiterMaxRate uint64
// Enable annotations by name
EnableAnnotations []string
}
// KataAgentConfig is a structure storing information needed

View File

@ -28,6 +28,7 @@ const (
//
// Assets
//
KataAnnotationHypervisorPrefix = kataAnnotHypervisorPrefix
// KernelPath is a sandbox annotation for passing a per container path pointing at the kernel needed to boot the container VM.
KernelPath = kataAnnotHypervisorPrefix + "kernel"
@ -44,6 +45,9 @@ const (
// JailerPath is a sandbox annotation for passing a per container path pointing at the jailer that will constrain the container VM.
JailerPath = kataAnnotHypervisorPrefix + "jailer_path"
// CtlPath is a sandbox annotation for passing a per container path pointing at the acrn ctl binary
CtlPath = kataAnnotHypervisorPrefix + "ctlpath"
// FirmwarePath is a sandbox annotation for passing a per container path pointing at the guest firmware that will run the container VM.
FirmwarePath = kataAnnotHypervisorPrefix + "firmware"
@ -211,7 +215,7 @@ const (
TxRateLimiterMaxRate = kataAnnotHypervisorPrefix + "tx_rate_limiter_max_rate"
)
// Agent related annotations
// Runtime related annotations
const (
kataAnnotRuntimePrefix = kataConfAnnotationsPrefix + "runtime."
@ -235,6 +239,7 @@ const (
DisableNewNetNs = kataAnnotRuntimePrefix + "disable_new_netns"
)
// Agent related annotations
const (
kataAnnotAgentPrefix = kataConfAnnotationsPrefix + "agent."

View File

@ -331,3 +331,17 @@ func (m *Manager) RemoveDevice(device string) error {
m.Unlock()
return fmt.Errorf("device %v not found in the cgroup", device)
}
func (m *Manager) SetCPUSet(cpuset, memset string) error {
cgroups, err := m.GetCgroups()
if err != nil {
return err
}
m.Lock()
cgroups.CpusetCpus = cpuset
cgroups.CpusetMems = memset
m.Unlock()
return m.Apply()
}
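As a side note, a minimal sketch of how the new `SetCPUSet` helper is meant to be called; the `pinSandbox` wrapper, the CPU and memory-node values, and the use of the `cpuset` package introduced later in this PR are illustrative assumptions, not part of the change:

```go
// pinSandbox is a hypothetical caller of SetCPUSet. Both arguments use the
// Linux cpuset list format described in cpuset(7), e.g. "0-3" and "0".
func pinSandbox(mgr *Manager) error {
	cpus := cpuset.NewCPUSet(0, 1, 2, 3).String() // "0-3"
	mems := cpuset.NewCPUSet(0).String()          // "0"
	return mgr.SetCPUSet(cpus, mems)
}
```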

View File

@ -13,7 +13,7 @@ YQ := $(shell command -v yq 2> /dev/null)
generate-client-code: clean-generated-code
docker run --rm \
--user $$(id -u):$$(id -g) \
-v $${PWD}:/local openapitools/openapi-generator-cli generate \
-v $${PWD}:/local openapitools/openapi-generator-cli:v4.3.1 generate \
-i /local/cloud-hypervisor.yaml \
-g go \
-o /local/client

View File

@ -1,69 +0,0 @@
.gitignore
.openapi-generator-ignore
.travis.yml
README.md
api/openapi.yaml
api_default.go
client.go
configuration.go
docs/CmdLineConfig.md
docs/ConsoleConfig.md
docs/CpuTopology.md
docs/CpusConfig.md
docs/DefaultApi.md
docs/DeviceConfig.md
docs/DiskConfig.md
docs/FsConfig.md
docs/InitramfsConfig.md
docs/KernelConfig.md
docs/MemoryConfig.md
docs/MemoryZoneConfig.md
docs/NetConfig.md
docs/NumaConfig.md
docs/NumaDistance.md
docs/PciDeviceInfo.md
docs/PmemConfig.md
docs/RestoreConfig.md
docs/RngConfig.md
docs/SgxEpcConfig.md
docs/VmAddDevice.md
docs/VmConfig.md
docs/VmInfo.md
docs/VmRemoveDevice.md
docs/VmResize.md
docs/VmResizeZone.md
docs/VmSnapshotConfig.md
docs/VmmPingResponse.md
docs/VsockConfig.md
git_push.sh
go.mod
go.sum
model_cmd_line_config.go
model_console_config.go
model_cpu_topology.go
model_cpus_config.go
model_device_config.go
model_disk_config.go
model_fs_config.go
model_initramfs_config.go
model_kernel_config.go
model_memory_config.go
model_memory_zone_config.go
model_net_config.go
model_numa_config.go
model_numa_distance.go
model_pci_device_info.go
model_pmem_config.go
model_restore_config.go
model_rng_config.go
model_sgx_epc_config.go
model_vm_add_device.go
model_vm_config.go
model_vm_info.go
model_vm_remove_device.go
model_vm_resize.go
model_vm_resize_zone.go
model_vm_snapshot_config.go
model_vmm_ping_response.go
model_vsock_config.go
response.go

View File

@ -518,7 +518,7 @@ components:
VmCounters:
additionalProperties:
additionalProperties:
format: uint64
format: int64
type: integer
type: object
type: object
@ -828,7 +828,7 @@
default: false
type: boolean
host_numa_node:
format: uint32
format: int32
type: integer
hotplug_size:
format: int64
@ -896,7 +896,7 @@
default: false
type: boolean
balloon_size:
format: uint64
format: int64
type: integer
zones:
items:
@ -1158,7 +1158,7 @@
size: 8
properties:
size:
format: uint64
format: int64
type: integer
prefault:
default: false
@ -1172,10 +1172,10 @@
destination: 3
properties:
destination:
format: uint32
format: int32
type: integer
distance:
format: uint8
format: int32
type: integer
required:
- destination
@ -1197,11 +1197,11 @@
guest_numa_id: 9
properties:
guest_numa_id:
format: uint32
format: int32
type: integer
cpus:
items:
format: uint8
format: int32
type: integer
type: array
distances:
@ -1248,9 +1248,16 @@
VmAddDevice:
example:
path: path
iommu: false
id: id
properties:
path:
type: string
iommu:
default: false
type: boolean
id:
type: string
type: object
VmRemoveDevice:
example:

View File

@ -14,7 +14,6 @@ import (
_ioutil "io/ioutil"
_nethttp "net/http"
_neturl "net/url"
_bytes "bytes"
)
// Linger please
@ -73,7 +72,6 @@ func (a *DefaultApiService) BootVM(ctx _context.Context) (*_nethttp.Response, er
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -140,7 +138,6 @@ func (a *DefaultApiService) CreateVM(ctx _context.Context, vmConfig VmConfig) (*
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -204,7 +201,6 @@ func (a *DefaultApiService) DeleteVM(ctx _context.Context) (*_nethttp.Response,
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -268,7 +264,6 @@ func (a *DefaultApiService) PauseVM(ctx _context.Context) (*_nethttp.Response, e
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -332,7 +327,6 @@ func (a *DefaultApiService) RebootVM(ctx _context.Context) (*_nethttp.Response,
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -396,7 +390,6 @@ func (a *DefaultApiService) ResumeVM(ctx _context.Context) (*_nethttp.Response,
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -460,7 +453,6 @@ func (a *DefaultApiService) ShutdownVM(ctx _context.Context) (*_nethttp.Response
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -524,7 +516,6 @@ func (a *DefaultApiService) ShutdownVMM(ctx _context.Context) (*_nethttp.Respons
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -593,7 +584,6 @@ func (a *DefaultApiService) VmAddDevicePut(ctx _context.Context, vmAddDevice VmA
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarReturnValue, localVarHTTPResponse, err
}
@ -671,7 +661,6 @@ func (a *DefaultApiService) VmAddDiskPut(ctx _context.Context, diskConfig DiskCo
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarReturnValue, localVarHTTPResponse, err
}
@ -749,7 +738,6 @@ func (a *DefaultApiService) VmAddFsPut(ctx _context.Context, fsConfig FsConfig)
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarReturnValue, localVarHTTPResponse, err
}
@ -827,7 +815,6 @@ func (a *DefaultApiService) VmAddNetPut(ctx _context.Context, netConfig NetConfi
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarReturnValue, localVarHTTPResponse, err
}
@ -905,7 +892,6 @@ func (a *DefaultApiService) VmAddPmemPut(ctx _context.Context, pmemConfig PmemCo
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarReturnValue, localVarHTTPResponse, err
}
@ -983,7 +969,6 @@ func (a *DefaultApiService) VmAddVsockPut(ctx _context.Context, vsockConfig Vsoc
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarReturnValue, localVarHTTPResponse, err
}
@ -1011,16 +996,16 @@ func (a *DefaultApiService) VmAddVsockPut(ctx _context.Context, vsockConfig Vsoc
/*
VmCountersGet Get counters from the VM
* @param ctx _context.Context - for authentication, logging, cancellation, deadlines, tracing, etc. Passed from http.Request or context.Background().
@return map[string]map[string]int32
@return map[string]map[string]int64
*/
func (a *DefaultApiService) VmCountersGet(ctx _context.Context) (map[string]map[string]int32, *_nethttp.Response, error) {
func (a *DefaultApiService) VmCountersGet(ctx _context.Context) (map[string]map[string]int64, *_nethttp.Response, error) {
var (
localVarHTTPMethod = _nethttp.MethodGet
localVarPostBody interface{}
localVarFormFileName string
localVarFileName string
localVarFileBytes []byte
localVarReturnValue map[string]map[string]int32
localVarReturnValue map[string]map[string]int64
)
// create path and map variables
@ -1058,7 +1043,6 @@ func (a *DefaultApiService) VmCountersGet(ctx _context.Context) (map[string]map[
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarReturnValue, localVarHTTPResponse, err
}
@ -1133,7 +1117,6 @@ func (a *DefaultApiService) VmInfoGet(ctx _context.Context) (VmInfo, *_nethttp.R
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarReturnValue, localVarHTTPResponse, err
}
@ -1209,7 +1192,6 @@ func (a *DefaultApiService) VmRemoveDevicePut(ctx _context.Context, vmRemoveDevi
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -1276,7 +1258,6 @@ func (a *DefaultApiService) VmResizePut(ctx _context.Context, vmResize VmResize)
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -1343,7 +1324,6 @@ func (a *DefaultApiService) VmResizeZonePut(ctx _context.Context, vmResizeZone V
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -1410,7 +1390,6 @@ func (a *DefaultApiService) VmRestorePut(ctx _context.Context, restoreConfig Res
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -1477,7 +1456,6 @@ func (a *DefaultApiService) VmSnapshotPut(ctx _context.Context, vmSnapshotConfig
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
@ -1543,7 +1521,6 @@ func (a *DefaultApiService) VmmPingGet(ctx _context.Context) (VmmPingResponse, *
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarReturnValue, localVarHTTPResponse, err
}

View File

@ -36,7 +36,7 @@ import (
)
var (
jsonCheck = regexp.MustCompile(`(?i:(?:application|text)/(?:vnd\.[^;]+\+)?(?:problem\+)?json)`)
jsonCheck = regexp.MustCompile(`(?i:(?:application|text)/(?:vnd\.[^;]+\+)?json)`)
xmlCheck = regexp.MustCompile(`(?i:(?:application|text)/xml)`)
)

View File

@ -451,7 +451,7 @@ No authorization required
## VmCountersGet
> map[string]map[string]int32 VmCountersGet(ctx, )
> map[string]map[string]int64 VmCountersGet(ctx, )
Get counters from the VM
@ -461,7 +461,7 @@ This endpoint does not need any parameter.
### Return type
[**map[string]map[string]int32**](map.md)
[**map[string]map[string]int64**](map.md)
### Authorization

View File

@ -12,7 +12,7 @@ Name | Type | Description | Notes
**Shared** | **bool** | | [optional] [default to false]
**Hugepages** | **bool** | | [optional] [default to false]
**Balloon** | **bool** | | [optional] [default to false]
**BalloonSize** | **int32** | | [optional]
**BalloonSize** | **int64** | | [optional]
**Zones** | [**[]MemoryZoneConfig**](MemoryZoneConfig.md) | | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -4,7 +4,7 @@
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Size** | **int32** | |
**Size** | **int64** | |
**Prefault** | **bool** | | [optional] [default to false]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -5,6 +5,8 @@
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Path** | **string** | | [optional]
**Iommu** | **bool** | | [optional] [default to false]
**Id** | **string** | | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -18,6 +18,6 @@ type MemoryConfig struct {
Shared bool `json:"shared,omitempty"`
Hugepages bool `json:"hugepages,omitempty"`
Balloon bool `json:"balloon,omitempty"`
BalloonSize int32 `json:"balloon_size,omitempty"`
BalloonSize int64 `json:"balloon_size,omitempty"`
Zones []MemoryZoneConfig `json:"zones,omitempty"`
}

View File

@ -10,6 +10,6 @@
package openapi
// SgxEpcConfig struct for SgxEpcConfig
type SgxEpcConfig struct {
Size int32 `json:"size"`
Size int64 `json:"size"`
Prefault bool `json:"prefault,omitempty"`
}

View File

@ -11,4 +11,6 @@ package openapi
// VmAddDevice struct for VmAddDevice
type VmAddDevice struct {
Path string `json:"path,omitempty"`
Iommu bool `json:"iommu,omitempty"`
Id string `json:"id,omitempty"`
}
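For reference, a small self-contained sketch of what the regenerated model serializes to; the struct below is a local copy of the generated one above, and the device path and id values are made up:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// VmAddDevice is a local copy of the generated openapi model shown above.
type VmAddDevice struct {
	Path  string `json:"path,omitempty"`
	Iommu bool   `json:"iommu,omitempty"`
	Id    string `json:"id,omitempty"`
}

func main() {
	body, err := json.Marshal(VmAddDevice{
		Path:  "/sys/bus/pci/devices/0000:00:04.0", // hypothetical host device path
		Iommu: true,
		Id:    "vfio0", // hypothetical device id
	})
	if err != nil {
		panic(err)
	}
	// Prints: {"path":"/sys/bus/pci/devices/0000:00:04.0","iommu":true,"id":"vfio0"}
	fmt.Println(string(body))
}
```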

View File

@ -368,7 +368,7 @@ components:
type: object
additionalProperties:
type: integer
format: uint64
format: int64
PciDeviceInfo:
required:
@ -492,7 +492,7 @@ components:
default: false
host_numa_node:
type: integer
format: uint32
format: int32
hotplug_size:
type: integer
format: int64
@ -532,7 +532,7 @@ components:
default: false
balloon_size:
type: integer
format: uint64
format: int64
zones:
type: array
items:
@ -741,7 +741,7 @@ components:
properties:
size:
type: integer
format: uint64
format: int64
prefault:
type: boolean
default: false
@ -754,10 +754,10 @@ components:
properties:
destination:
type: integer
format: uint32
format: int32
distance:
type: integer
format: uint8
format: int32
NumaConfig:
required:
@ -766,12 +766,12 @@ components:
properties:
guest_numa_id:
type: integer
format: uint32
format: int32
cpus:
type: array
items:
type: integer
format: uint8
format: int32
distances:
type: array
items:
@ -811,6 +811,11 @@ components:
properties:
path:
type: string
iommu:
type: boolean
default: false
id:
type: string
VmRemoveDevice:
type: object

View File

@ -0,0 +1,296 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Copyright (c) 2017 The Kubernetes Authors
// SPDX-License-Identifier: Apache-2.0
package cpuset
import (
"bytes"
"fmt"
"reflect"
"sort"
"strconv"
"strings"
)
// Builder is a mutable builder for CPUSet. Functions that mutate instances
// of this type are not thread-safe.
type Builder struct {
result CPUSet
done bool
}
// NewBuilder returns a mutable CPUSet builder.
func NewBuilder() Builder {
return Builder{
result: CPUSet{
elems: map[int]struct{}{},
},
}
}
// Add adds the supplied elements to the result. Calling Add after calling
// Result has no effect.
func (b Builder) Add(elems ...int) {
if b.done {
return
}
for _, elem := range elems {
b.result.elems[elem] = struct{}{}
}
}
// Result returns the result CPUSet containing all elements that were
// previously added to this builder. Subsequent calls to Add have no effect.
func (b Builder) Result() CPUSet {
b.done = true
return b.result
}
// CPUSet is a thread-safe, immutable set-like data structure for CPU IDs.
type CPUSet struct {
elems map[int]struct{}
}
// NewCPUSet returns a new CPUSet containing the supplied elements.
func NewCPUSet(cpus ...int) CPUSet {
b := NewBuilder()
for _, c := range cpus {
b.Add(c)
}
return b.Result()
}
// Size returns the number of elements in this set.
func (s CPUSet) Size() int {
return len(s.elems)
}
// IsEmpty returns true if there are zero elements in this set.
func (s CPUSet) IsEmpty() bool {
return s.Size() == 0
}
// Contains returns true if the supplied element is present in this set.
func (s CPUSet) Contains(cpu int) bool {
_, found := s.elems[cpu]
return found
}
// Equals returns true if the supplied set contains exactly the same elements
// as this set (s IsSubsetOf s2 and s2 IsSubsetOf s).
func (s CPUSet) Equals(s2 CPUSet) bool {
return reflect.DeepEqual(s.elems, s2.elems)
}
// Filter returns a new CPU set that contains all of the elements from this
// set that match the supplied predicate, without mutating the source set.
func (s CPUSet) Filter(predicate func(int) bool) CPUSet {
b := NewBuilder()
for cpu := range s.elems {
if predicate(cpu) {
b.Add(cpu)
}
}
return b.Result()
}
// FilterNot returns a new CPU set that contains all of the elements from this
// set that do not match the supplied predicate, without mutating the source
// set.
func (s CPUSet) FilterNot(predicate func(int) bool) CPUSet {
b := NewBuilder()
for cpu := range s.elems {
if !predicate(cpu) {
b.Add(cpu)
}
}
return b.Result()
}
// IsSubsetOf returns true if the supplied set contains all the elements of this set.
func (s CPUSet) IsSubsetOf(s2 CPUSet) bool {
result := true
for cpu := range s.elems {
if !s2.Contains(cpu) {
result = false
break
}
}
return result
}
// Union returns a new CPU set that contains all of the elements from this
// set and all of the elements from the supplied set, without mutating
// either source set.
func (s CPUSet) Union(s2 CPUSet) CPUSet {
b := NewBuilder()
for cpu := range s.elems {
b.Add(cpu)
}
for cpu := range s2.elems {
b.Add(cpu)
}
return b.Result()
}
// UnionAll returns a new CPU set that contains all of the elements from this
// set and all of the elements from the supplied sets, without mutating
// either source set.
func (s CPUSet) UnionAll(s2 []CPUSet) CPUSet {
b := NewBuilder()
for cpu := range s.elems {
b.Add(cpu)
}
for _, cs := range s2 {
for cpu := range cs.elems {
b.Add(cpu)
}
}
return b.Result()
}
// Intersection returns a new CPU set that contains all of the elements
// that are present in both this set and the supplied set, without mutating
// either source set.
func (s CPUSet) Intersection(s2 CPUSet) CPUSet {
return s.Filter(func(cpu int) bool { return s2.Contains(cpu) })
}
// Difference returns a new CPU set that contains all of the elements that
// are present in this set and not the supplied set, without mutating either
// source set.
func (s CPUSet) Difference(s2 CPUSet) CPUSet {
return s.FilterNot(func(cpu int) bool { return s2.Contains(cpu) })
}
// ToSlice returns a slice of integers that contains all elements from
// this set.
func (s CPUSet) ToSlice() []int {
result := []int{}
for cpu := range s.elems {
result = append(result, cpu)
}
sort.Ints(result)
return result
}
// ToSliceNoSort returns a slice of integers that contains all elements from
// this set, in no particular order.
func (s CPUSet) ToSliceNoSort() []int {
result := []int{}
for cpu := range s.elems {
result = append(result, cpu)
}
return result
}
// String returns a new string representation of the elements in this CPU set
// in canonical linux CPU list format.
//
// See: http://man7.org/linux/man-pages/man7/cpuset.7.html#FORMATS
func (s CPUSet) String() string {
if s.IsEmpty() {
return ""
}
elems := s.ToSlice()
type rng struct {
start int
end int
}
ranges := []rng{{elems[0], elems[0]}}
for i := 1; i < len(elems); i++ {
lastRange := &ranges[len(ranges)-1]
// if this element is adjacent to the high end of the last range
if elems[i] == lastRange.end+1 {
// then extend the last range to include this element
lastRange.end = elems[i]
continue
}
// otherwise, start a new range beginning with this element
ranges = append(ranges, rng{elems[i], elems[i]})
}
// construct string from ranges
var result bytes.Buffer
for _, r := range ranges {
if r.start == r.end {
result.WriteString(strconv.Itoa(r.start))
} else {
result.WriteString(fmt.Sprintf("%d-%d", r.start, r.end))
}
result.WriteString(",")
}
return strings.TrimRight(result.String(), ",")
}
// Parse constructs a new CPUSet from a Linux CPU list formatted string.
//
// See: http://man7.org/linux/man-pages/man7/cpuset.7.html#FORMATS
func Parse(s string) (CPUSet, error) {
b := NewBuilder()
// Handle empty string.
if s == "" {
return b.Result(), nil
}
// Split CPU list string:
// "0-5,34,46-48 => ["0-5", "34", "46-48"]
ranges := strings.Split(s, ",")
for _, r := range ranges {
boundaries := strings.Split(r, "-")
if len(boundaries) == 1 {
// Handle ranges that consist of only one element like "34".
elem, err := strconv.Atoi(boundaries[0])
if err != nil {
return NewCPUSet(), err
}
b.Add(elem)
} else if len(boundaries) == 2 {
// Handle multi-element ranges like "0-5".
start, err := strconv.Atoi(boundaries[0])
if err != nil {
return NewCPUSet(), err
}
end, err := strconv.Atoi(boundaries[1])
if err != nil {
return NewCPUSet(), err
}
// Add all elements to the result.
// e.g. "0-5", "46-48" => [0, 1, 2, 3, 4, 5, 46, 47, 48].
for e := start; e <= end; e++ {
b.Add(e)
}
}
}
return b.Result(), nil
}
// Clone returns a copy of this CPU set.
func (s CPUSet) Clone() CPUSet {
b := NewBuilder()
for elem := range s.elems {
b.Add(elem)
}
return b.Result()
}
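To see the vendored package in action, a short usage sketch; the import path is an assumption based on where the file lands in the repository and may differ:

```go
package main

import (
	"fmt"

	// Assumed vendoring path for the cpuset package above; adjust as needed.
	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/cpuset"
)

func main() {
	// Parse a Linux CPU list string into an immutable CPUSet.
	online, err := cpuset.Parse("0-3,8")
	if err != nil {
		panic(err)
	}
	reserved := cpuset.NewCPUSet(0, 1)

	// Set operations return new sets; the receivers are never mutated.
	free := online.Difference(reserved)

	fmt.Println(free.String())               // "2-3,8"
	fmt.Println(free.Contains(8))            // true
	fmt.Println(reserved.IsSubsetOf(online)) // true
}
```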

View File

@ -0,0 +1,348 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Copyright (c) 2017 The Kubernetes Authors
// SPDX-License-Identifier: Apache-2.0
package cpuset
import (
"reflect"
"testing"
)
func TestCPUSetBuilder(t *testing.T) {
b := NewBuilder()
elems := []int{1, 2, 3, 4, 5}
for _, elem := range elems {
b.Add(elem)
}
result := b.Result()
for _, elem := range elems {
if !result.Contains(elem) {
t.Fatalf("expected cpuset to contain element %d: [%v]", elem, result)
}
}
if len(elems) != result.Size() {
t.Fatalf("expected cpuset %s to have the same size as %v", result, elems)
}
}
func TestCPUSetSize(t *testing.T) {
testCases := []struct {
cpuset CPUSet
expected int
}{
{NewCPUSet(), 0},
{NewCPUSet(5), 1},
{NewCPUSet(1, 2, 3, 4, 5), 5},
}
for _, c := range testCases {
actual := c.cpuset.Size()
if actual != c.expected {
t.Fatalf("expected: %d, actual: %d, cpuset: [%v]", c.expected, actual, c.cpuset)
}
}
}
func TestCPUSetIsEmpty(t *testing.T) {
testCases := []struct {
cpuset CPUSet
expected bool
}{
{NewCPUSet(), true},
{NewCPUSet(5), false},
{NewCPUSet(1, 2, 3, 4, 5), false},
}
for _, c := range testCases {
actual := c.cpuset.IsEmpty()
if actual != c.expected {
t.Fatalf("expected: %t, IsEmpty() returned: %t, cpuset: [%v]", c.expected, actual, c.cpuset)
}
}
}
func TestCPUSetContains(t *testing.T) {
testCases := []struct {
cpuset CPUSet
mustContain []int
mustNotContain []int
}{
{NewCPUSet(), []int{}, []int{1, 2, 3, 4, 5}},
{NewCPUSet(5), []int{5}, []int{1, 2, 3, 4}},
{NewCPUSet(1, 2, 4, 5), []int{1, 2, 4, 5}, []int{0, 3, 6}},
}
for _, c := range testCases {
for _, elem := range c.mustContain {
if !c.cpuset.Contains(elem) {
t.Fatalf("expected cpuset to contain element %d: [%v]", elem, c.cpuset)
}
}
for _, elem := range c.mustNotContain {
if c.cpuset.Contains(elem) {
t.Fatalf("expected cpuset not to contain element %d: [%v]", elem, c.cpuset)
}
}
}
}
func TestCPUSetEqual(t *testing.T) {
shouldEqual := []struct {
s1 CPUSet
s2 CPUSet
}{
{NewCPUSet(), NewCPUSet()},
{NewCPUSet(5), NewCPUSet(5)},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(1, 2, 3, 4, 5)},
}
shouldNotEqual := []struct {
s1 CPUSet
s2 CPUSet
}{
{NewCPUSet(), NewCPUSet(5)},
{NewCPUSet(5), NewCPUSet()},
{NewCPUSet(), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet()},
{NewCPUSet(5), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(5)},
}
for _, c := range shouldEqual {
if !c.s1.Equals(c.s2) {
t.Fatalf("expected cpusets to be equal: s1: [%v], s2: [%v]", c.s1, c.s2)
}
}
for _, c := range shouldNotEqual {
if c.s1.Equals(c.s2) {
t.Fatalf("expected cpusets to not be equal: s1: [%v], s2: [%v]", c.s1, c.s2)
}
}
}
func TestCPUSetIsSubsetOf(t *testing.T) {
shouldBeSubset := []struct {
s1 CPUSet
s2 CPUSet
}{
// A set is a subset of itself
{NewCPUSet(), NewCPUSet()},
{NewCPUSet(5), NewCPUSet(5)},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(1, 2, 3, 4, 5)},
// Empty set is a subset of every set
{NewCPUSet(), NewCPUSet(5)},
{NewCPUSet(), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(5), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(1, 2, 3), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(4, 5), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(2, 3), NewCPUSet(1, 2, 3, 4, 5)},
}
shouldNotBeSubset := []struct {
s1 CPUSet
s2 CPUSet
}{}
for _, c := range shouldBeSubset {
if !c.s1.IsSubsetOf(c.s2) {
t.Fatalf("expected s1 to be a subset of s2: s1: [%v], s2: [%v]", c.s1, c.s2)
}
}
for _, c := range shouldNotBeSubset {
if c.s1.IsSubsetOf(c.s2) {
t.Fatalf("expected s1 to not be a subset of s2: s1: [%v], s2: [%v]", c.s1, c.s2)
}
}
}
func TestCPUSetUnionAll(t *testing.T) {
testCases := []struct {
s1 CPUSet
s2 CPUSet
s3 CPUSet
expected CPUSet
}{
{NewCPUSet(), NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(4, 5), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(), NewCPUSet(4), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(1, 5), NewCPUSet(1, 2, 3, 4, 5)},
}
for _, c := range testCases {
s := []CPUSet{}
s = append(s, c.s2)
s = append(s, c.s3)
result := c.s1.UnionAll(s)
if !result.Equals(c.expected) {
t.Fatalf("expected the union of s1 and s2 to be [%v] (got [%v]), s1: [%v], s2: [%v]", c.expected, result, c.s1, c.s2)
}
}
}
func TestCPUSetUnion(t *testing.T) {
testCases := []struct {
s1 CPUSet
s2 CPUSet
expected CPUSet
}{
{NewCPUSet(), NewCPUSet(), NewCPUSet()},
{NewCPUSet(), NewCPUSet(5), NewCPUSet(5)},
{NewCPUSet(5), NewCPUSet(), NewCPUSet(5)},
{NewCPUSet(5), NewCPUSet(5), NewCPUSet(5)},
{NewCPUSet(), NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(5), NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(5), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(1, 2), NewCPUSet(3, 4, 5), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(1, 2, 3), NewCPUSet(3, 4, 5), NewCPUSet(1, 2, 3, 4, 5)},
}
for _, c := range testCases {
result := c.s1.Union(c.s2)
if !result.Equals(c.expected) {
t.Fatalf("expected the union of s1 and s2 to be [%v] (got [%v]), s1: [%v], s2: [%v]", c.expected, result, c.s1, c.s2)
}
}
}
func TestCPUSetIntersection(t *testing.T) {
testCases := []struct {
s1 CPUSet
s2 CPUSet
expected CPUSet
}{
{NewCPUSet(), NewCPUSet(), NewCPUSet()},
{NewCPUSet(), NewCPUSet(5), NewCPUSet()},
{NewCPUSet(5), NewCPUSet(), NewCPUSet()},
{NewCPUSet(5), NewCPUSet(5), NewCPUSet(5)},
{NewCPUSet(), NewCPUSet(1, 2, 3, 4, 5), NewCPUSet()},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(), NewCPUSet()},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(5), NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(5)},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(5), NewCPUSet(5)},
{NewCPUSet(1, 2), NewCPUSet(3, 4, 5), NewCPUSet()},
{NewCPUSet(1, 2, 3), NewCPUSet(3, 4, 5), NewCPUSet(3)},
}
for _, c := range testCases {
result := c.s1.Intersection(c.s2)
if !result.Equals(c.expected) {
t.Fatalf("expected the intersection of s1 and s2 to be [%v] (got [%v]), s1: [%v], s2: [%v]", c.expected, result, c.s1, c.s2)
}
}
}
func TestCPUSetDifference(t *testing.T) {
testCases := []struct {
s1 CPUSet
s2 CPUSet
expected CPUSet
}{
{NewCPUSet(), NewCPUSet(), NewCPUSet()},
{NewCPUSet(), NewCPUSet(5), NewCPUSet()},
{NewCPUSet(5), NewCPUSet(), NewCPUSet(5)},
{NewCPUSet(5), NewCPUSet(5), NewCPUSet()},
{NewCPUSet(), NewCPUSet(1, 2, 3, 4, 5), NewCPUSet()},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(), NewCPUSet(1, 2, 3, 4, 5)},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(1, 2, 3, 4, 5), NewCPUSet()},
{NewCPUSet(5), NewCPUSet(1, 2, 3, 4, 5), NewCPUSet()},
{NewCPUSet(1, 2, 3, 4, 5), NewCPUSet(5), NewCPUSet(1, 2, 3, 4)},
{NewCPUSet(1, 2), NewCPUSet(3, 4, 5), NewCPUSet(1, 2)},
{NewCPUSet(1, 2, 3), NewCPUSet(3, 4, 5), NewCPUSet(1, 2)},
}
for _, c := range testCases {
result := c.s1.Difference(c.s2)
if !result.Equals(c.expected) {
t.Fatalf("expected the difference of s1 and s2 to be [%v] (got [%v]), s1: [%v], s2: [%v]", c.expected, result, c.s1, c.s2)
}
}
}
func TestCPUSetToSlice(t *testing.T) {
testCases := []struct {
set CPUSet
expected []int
}{
{NewCPUSet(), []int{}},
{NewCPUSet(5), []int{5}},
{NewCPUSet(1, 2, 3, 4, 5), []int{1, 2, 3, 4, 5}},
}
for _, c := range testCases {
result := c.set.ToSlice()
if !reflect.DeepEqual(result, c.expected) {
t.Fatalf("expected set as slice to be [%v] (got [%v]), s: [%v]", c.expected, result, c.set)
}
}
}
func TestCPUSetString(t *testing.T) {
testCases := []struct {
set CPUSet
expected string
}{
{NewCPUSet(), ""},
{NewCPUSet(5), "5"},
{NewCPUSet(1, 2, 3, 4, 5), "1-5"},
{NewCPUSet(1, 2, 3, 5, 6, 8), "1-3,5-6,8"},
}
for _, c := range testCases {
result := c.set.String()
if result != c.expected {
t.Fatalf("expected set as string to be %s (got \"%s\"), s: [%v]", c.expected, result, c.set)
}
}
}
func TestParse(t *testing.T) {
testCases := []struct {
cpusetString string
expected CPUSet
}{
{"", NewCPUSet()},
{"5", NewCPUSet(5)},
{"1,2,3,4,5", NewCPUSet(1, 2, 3, 4, 5)},
{"1-5", NewCPUSet(1, 2, 3, 4, 5)},
{"1-2,3-5", NewCPUSet(1, 2, 3, 4, 5)},
}
for _, c := range testCases {
result, err := Parse(c.cpusetString)
if err != nil {
t.Fatalf("expected error not to have occurred: %v", err)
}
if !result.Equals(c.expected) {
t.Fatalf("expected string \"%s\" to parse as [%v] (got [%v])", c.cpusetString, c.expected, result)
}
}
}

View File

@ -10,6 +10,7 @@ import (
"errors"
"fmt"
"path/filepath"
"regexp"
goruntime "runtime"
"strconv"
"strings"
@ -181,15 +182,44 @@ func containerMounts(spec specs.Spec) []vc.Mount {
return mnts
}
func contains(s []string, e string) bool {
for _, a := range s {
if a == e {
func contains(strings []string, toFind string) bool {
for _, candidate := range strings {
if candidate == toFind {
return true
}
}
return false
}
func regexpContains(regexps []string, toMatch string) bool {
for _, candidate := range regexps {
if matched, _ := regexp.MatchString(candidate, toMatch); matched {
return true
}
}
return false
}
func checkPathIsInGlobs(globs []string, path string) bool {
for _, glob := range globs {
filenames, _ := filepath.Glob(glob)
for _, a := range filenames {
if path == a {
return true
}
}
}
return false
}
// Check if an annotation name is valid: names under the given prefix must match the
// regexp list (after the prefix is stripped); names under other prefixes always pass.
func checkAnnotationNameIsValid(list []string, name string, prefix string) bool {
if strings.HasPrefix(name, prefix) {
return regexpContains(list, strings.TrimPrefix(name, prefix))
}
return true
}
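To make the new filtering rules concrete, a hedged sketch of how the helpers above combine; the prefix literal and list contents are illustrative, and note that the enabled entries are regular expressions matched unanchored against the annotation name with the prefix stripped:

```go
func exampleAnnotationChecks() {
	// Hypothetical prefix; the real value comes from vcAnnotations.
	prefix := "io.katacontainers.config.hypervisor."
	enabled := []string{"virtio_fs_daemon", "kernel_params"}

	// The suffix matches an enabled regexp, so the annotation is accepted.
	fmt.Println(checkAnnotationNameIsValid(enabled, prefix+"virtio_fs_daemon", prefix)) // true

	// No enabled regexp matches "firmware", so the annotation is rejected.
	fmt.Println(checkAnnotationNameIsValid(enabled, prefix+"firmware", prefix)) // false

	// Names under other prefixes are not this check's concern; they pass.
	fmt.Println(checkAnnotationNameIsValid(enabled, "io.kubernetes.cri.sandbox-id", prefix)) // true

	// Path values are checked separately: the glob is expanded against the
	// host filesystem, so this is true only if /usr/libexec/virtiofsd exists.
	fmt.Println(checkPathIsInGlobs([]string{"/usr/libexec/virtiofsd*"}, "/usr/libexec/virtiofsd"))
}
```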
func newLinuxDeviceInfo(d specs.LinuxDevice) (*config.DeviceInfo, error) {
allowedDeviceTypes := []string{"c", "b", "u", "p"}
@ -322,13 +352,18 @@ func SandboxID(spec specs.Spec) (string, error) {
return "", fmt.Errorf("Could not find sandbox ID")
}
func addAnnotations(ocispec specs.Spec, config *vc.SandboxConfig) error {
func addAnnotations(ocispec specs.Spec, config *vc.SandboxConfig, runtime RuntimeConfig) error {
for key := range ocispec.Annotations {
if !checkAnnotationNameIsValid(runtime.HypervisorConfig.EnableAnnotations, key, vcAnnotations.KataAnnotationHypervisorPrefix) {
return fmt.Errorf("annotation %v is not enabled", key)
}
}
addAssetAnnotations(ocispec, config)
if err := addHypervisorConfigOverrides(ocispec, config); err != nil {
if err := addHypervisorConfigOverrides(ocispec, config, runtime); err != nil {
return err
}
if err := addRuntimeConfigOverrides(ocispec, config); err != nil {
if err := addRuntimeConfigOverrides(ocispec, config, runtime); err != nil {
return err
}
@ -353,20 +388,18 @@ func addAssetAnnotations(ocispec specs.Spec, config *vc.SandboxConfig) {
for _, a := range assetAnnotations {
value, ok := ocispec.Annotations[a]
if !ok {
continue
if ok {
config.Annotations[a] = value
}
config.Annotations[a] = value
}
}
func addHypervisorConfigOverrides(ocispec specs.Spec, config *vc.SandboxConfig) error {
func addHypervisorConfigOverrides(ocispec specs.Spec, config *vc.SandboxConfig, runtime RuntimeConfig) error {
if err := addHypervisorCPUOverrides(ocispec, config); err != nil {
return err
}
if err := addHypervisorMemoryOverrides(ocispec, config); err != nil {
if err := addHypervisorMemoryOverrides(ocispec, config, runtime); err != nil {
return err
}
@ -374,7 +407,7 @@ func addHypervisorConfigOverrides(ocispec specs.Spec, config *vc.SandboxConfig)
return err
}
if err := addHypervisporVirtioFsOverrides(ocispec, config); err != nil {
if err := addHypervisorVirtioFsOverrides(ocispec, config, runtime); err != nil {
return err
}
@ -382,15 +415,8 @@ func addHypervisorConfigOverrides(ocispec specs.Spec, config *vc.SandboxConfig)
return err
}
if value, ok := ocispec.Annotations[vcAnnotations.KernelParams]; ok {
if value != "" {
params := vc.DeserializeParams(strings.Fields(value))
for _, param := range params {
if err := config.HypervisorConfig.AddKernelParam(param); err != nil {
return fmt.Errorf("Error adding kernel parameters in annotation kernel_params : %v", err)
}
}
}
if err := addHypervisorPathOverrides(ocispec, config, runtime); err != nil {
return err
}
if value, ok := ocispec.Annotations[vcAnnotations.MachineType]; ok {
@ -405,6 +431,13 @@ func addHypervisorConfigOverrides(ocispec specs.Spec, config *vc.SandboxConfig)
}
}
if value, ok := ocispec.Annotations[vcAnnotations.VhostUserStorePath]; ok {
if !checkPathIsInGlobs(runtime.HypervisorConfig.VhostUserStorePathList, value) {
return fmt.Errorf("vhost store path %v required from annotation is not valid", value)
}
config.HypervisorConfig.VhostUserStorePath = value
}
if value, ok := ocispec.Annotations[vcAnnotations.GuestHookPath]; ok {
if value != "" {
config.HypervisorConfig.GuestHookPath = value
@ -446,7 +479,42 @@ func addHypervisorConfigOverrides(ocispec specs.Spec, config *vc.SandboxConfig)
return nil
}
func addHypervisorMemoryOverrides(ocispec specs.Spec, sbConfig *vc.SandboxConfig) error {
func addHypervisorPathOverrides(ocispec specs.Spec, config *vc.SandboxConfig, runtime RuntimeConfig) error {
if value, ok := ocispec.Annotations[vcAnnotations.HypervisorPath]; ok {
if !checkPathIsInGlobs(runtime.HypervisorConfig.HypervisorPathList, value) {
return fmt.Errorf("hypervisor %v required from annotation is not valid", value)
}
config.HypervisorConfig.HypervisorPath = value
}
if value, ok := ocispec.Annotations[vcAnnotations.JailerPath]; ok {
if !checkPathIsInGlobs(runtime.HypervisorConfig.JailerPathList, value) {
return fmt.Errorf("jailer %v required from annotation is not valid", value)
}
config.HypervisorConfig.JailerPath = value
}
if value, ok := ocispec.Annotations[vcAnnotations.CtlPath]; ok {
if !checkPathIsInGlobs(runtime.HypervisorConfig.HypervisorCtlPathList, value) {
return fmt.Errorf("hypervisor control %v required from annotation is not valid", value)
}
config.HypervisorConfig.HypervisorCtlPath = value
}
if value, ok := ocispec.Annotations[vcAnnotations.KernelParams]; ok {
if value != "" {
params := vc.DeserializeParams(strings.Fields(value))
for _, param := range params {
if err := config.HypervisorConfig.AddKernelParam(param); err != nil {
return fmt.Errorf("Error adding kernel parameters in annotation kernel_params : %v", err)
}
}
}
}
return nil
}
func addHypervisorMemoryOverrides(ocispec specs.Spec, sbConfig *vc.SandboxConfig, runtime RuntimeConfig) error {
 	if value, ok := ocispec.Annotations[vcAnnotations.DefaultMemory]; ok {
 		memorySz, err := strconv.ParseUint(value, 10, 32)
 		if err != nil {

@@ -510,6 +578,9 @@ func addHypervisorMemoryOverrides(ocispec specs.Spec, sbConfig *vc.SandboxConfig
 	}

 	if value, ok := ocispec.Annotations[vcAnnotations.FileBackedMemRootDir]; ok {
+		if !checkPathIsInGlobs(runtime.HypervisorConfig.FileBackedMemRootList, value) {
+			return fmt.Errorf("file_mem_backend value %v required from annotation is not valid", value)
+		}
 		sbConfig.HypervisorConfig.FileBackedMemRootDir = value
 	}

@@ -646,7 +717,7 @@ func addHypervisorBlockOverrides(ocispec specs.Spec, sbConfig *vc.SandboxConfig)
 	return nil
 }

-func addHypervisporVirtioFsOverrides(ocispec specs.Spec, sbConfig *vc.SandboxConfig) error {
+func addHypervisorVirtioFsOverrides(ocispec specs.Spec, sbConfig *vc.SandboxConfig, runtime RuntimeConfig) error {
 	if value, ok := ocispec.Annotations[vcAnnotations.SharedFS]; ok {
 		supportedSharedFS := []string{config.Virtio9P, config.VirtioFS}
 		valid := false

@@ -663,6 +734,9 @@ func addHypervisporVirtioFsOverrides(ocispec specs.Spec, sbConfig *vc.SandboxCon
 	}

 	if value, ok := ocispec.Annotations[vcAnnotations.VirtioFSDaemon]; ok {
+		if !checkPathIsInGlobs(runtime.HypervisorConfig.VirtioFSDaemonList, value) {
+			return fmt.Errorf("virtiofs daemon %v required from annotation is not valid", value)
+		}
 		sbConfig.HypervisorConfig.VirtioFSDaemon = value
 	}

@@ -730,7 +804,7 @@ func addHypervisporNetworkOverrides(ocispec specs.Spec, sbConfig *vc.SandboxConf
 	return nil
 }

-func addRuntimeConfigOverrides(ocispec specs.Spec, sbConfig *vc.SandboxConfig) error {
+func addRuntimeConfigOverrides(ocispec specs.Spec, sbConfig *vc.SandboxConfig, runtime RuntimeConfig) error {
 	if value, ok := ocispec.Annotations[vcAnnotations.DisableGuestSeccomp]; ok {
 		disableGuestSeccomp, err := strconv.ParseBool(value)
 		if err != nil {

@@ -870,7 +944,7 @@ func SandboxConfig(ocispec specs.Spec, runtime RuntimeConfig, bundlePath, cid, c
 		Experimental: runtime.Experimental,
 	}

-	if err := addAnnotations(ocispec, &sandboxConfig); err != nil {
+	if err := addAnnotations(ocispec, &sandboxConfig, runtime); err != nil {
 		return vc.SandboxConfig{}, err
 	}
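Every override above is double-gated: the annotation name must first pass the runtime's `EnableAnnotations` patterns, and path-valued annotations must additionally match a path allow-list via `checkPathIsInGlobs`. The name gate itself is not shown in this diff; a hypothetical sketch of it, consistent with the tests in the next file (the `io.katacontainers.config.` prefix and the helper name are assumptions):

```go
// Hypothetical sketch, not the PR's actual wiring: reject any Kata
// configuration annotation whose name fails the EnableAnnotations
// allow-list before applying overrides.
func checkAnnotationsEnabled(ocispec specs.Spec, runtime RuntimeConfig) error {
	const prefix = "io.katacontainers.config." // assumed annotation prefix
	for key := range ocispec.Annotations {
		if !strings.HasPrefix(key, prefix) {
			continue // not a Kata configuration annotation
		}
		name := strings.TrimPrefix(key, prefix)
		if !regexpContains(runtime.HypervisorConfig.EnableAnnotations, name) {
			return fmt.Errorf("annotation %v is not enabled", key)
		}
	}
	return nil
}
```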

View File

@@ -676,7 +676,25 @@ func TestAddAssetAnnotations(t *testing.T) {
 		Annotations: expectedAnnotations,
 	}

-	addAnnotations(ocispec, &config)
+	runtimeConfig := RuntimeConfig{
+		HypervisorType: vc.QemuHypervisor,
+		Console:        consolePath,
+	}
+
+	// Try annotations without enabling them first
+	err := addAnnotations(ocispec, &config, runtimeConfig)
+	assert.Error(err)
+	assert.Exactly(map[string]string{}, config.Annotations)
+
+	// Check if annotation not enabled correctly
+	runtimeConfig.HypervisorConfig.EnableAnnotations = []string{"nonexistent"}
+	err = addAnnotations(ocispec, &config, runtimeConfig)
+	assert.Error(err)
+
+	// Check that it works if all annotation are enabled
+	runtimeConfig.HypervisorConfig.EnableAnnotations = []string{".*"}
+	err = addAnnotations(ocispec, &config, runtimeConfig)
+	assert.NoError(err)
 	assert.Exactly(expectedAnnotations, config.Annotations)
 }

@@ -700,9 +718,14 @@ func TestAddAgentAnnotations(t *testing.T) {
 		ContainerPipeSize: 1024,
 	}

+	runtimeConfig := RuntimeConfig{
+		HypervisorType: vc.QemuHypervisor,
+		Console:        consolePath,
+	}
+
 	ocispec.Annotations[vcAnnotations.KernelModules] = strings.Join(expectedAgentConfig.KernelModules, KernelModulesSeparator)
 	ocispec.Annotations[vcAnnotations.AgentContainerPipeSize] = "1024"
-	addAnnotations(ocispec, &config)
+	addAnnotations(ocispec, &config, runtimeConfig)
 	assert.Exactly(expectedAgentConfig, config.AgentConfig)
 }

@@ -722,8 +745,13 @@ func TestContainerPipeSizeAnnotation(t *testing.T) {
 		ContainerPipeSize: 0,
 	}

+	runtimeConfig := RuntimeConfig{
+		HypervisorType: vc.QemuHypervisor,
+		Console:        consolePath,
+	}
+
 	ocispec.Annotations[vcAnnotations.AgentContainerPipeSize] = "foo"
-	err := addAnnotations(ocispec, &config)
+	err := addAnnotations(ocispec, &config, runtimeConfig)
 	assert.Error(err)
 	assert.Exactly(expectedAgentConfig, config.AgentConfig)
 }

@@ -752,8 +780,16 @@ func TestAddHypervisorAnnotations(t *testing.T) {
 		},
 	}

+	runtimeConfig := RuntimeConfig{
+		HypervisorType: vc.QemuHypervisor,
+		Console:        consolePath,
+	}
+	runtimeConfig.HypervisorConfig.EnableAnnotations = []string{".*"}
+	runtimeConfig.HypervisorConfig.FileBackedMemRootList = []string{"/dev/shm*"}
+	runtimeConfig.HypervisorConfig.VirtioFSDaemonList = []string{"/bin/*ls*"}
+
 	ocispec.Annotations[vcAnnotations.KernelParams] = "vsyscall=emulate iommu=on"
-	addHypervisorConfigOverrides(ocispec, &config)
+	addHypervisorConfigOverrides(ocispec, &config, runtimeConfig)
 	assert.Exactly(expectedHyperConfig, config.HypervisorConfig)

 	ocispec.Annotations[vcAnnotations.DefaultVCPUs] = "1"

@@ -774,7 +810,7 @@ func TestAddHypervisorAnnotations(t *testing.T) {
 	ocispec.Annotations[vcAnnotations.BlockDeviceCacheDirect] = "true"
 	ocispec.Annotations[vcAnnotations.BlockDeviceCacheNoflush] = "true"
 	ocispec.Annotations[vcAnnotations.SharedFS] = "virtio-fs"
-	ocispec.Annotations[vcAnnotations.VirtioFSDaemon] = "/home/virtiofsd"
+	ocispec.Annotations[vcAnnotations.VirtioFSDaemon] = "/bin/false"
 	ocispec.Annotations[vcAnnotations.VirtioFSCache] = "/home/cache"
 	ocispec.Annotations[vcAnnotations.Msize9p] = "512"
 	ocispec.Annotations[vcAnnotations.MachineType] = "q35"

@@ -791,7 +827,7 @@ func TestAddHypervisorAnnotations(t *testing.T) {
 	ocispec.Annotations[vcAnnotations.RxRateLimiterMaxRate] = "10000000"
 	ocispec.Annotations[vcAnnotations.TxRateLimiterMaxRate] = "10000000"

-	addAnnotations(ocispec, &config)
+	addAnnotations(ocispec, &config, runtimeConfig)
 	assert.Equal(config.HypervisorConfig.NumVCPUs, uint32(1))
 	assert.Equal(config.HypervisorConfig.DefaultMaxVCPUs, uint32(1))
 	assert.Equal(config.HypervisorConfig.MemorySize, uint32(1024))

@@ -810,7 +846,7 @@ func TestAddHypervisorAnnotations(t *testing.T) {
 	assert.Equal(config.HypervisorConfig.BlockDeviceCacheDirect, true)
 	assert.Equal(config.HypervisorConfig.BlockDeviceCacheNoflush, true)
 	assert.Equal(config.HypervisorConfig.SharedFS, "virtio-fs")
-	assert.Equal(config.HypervisorConfig.VirtioFSDaemon, "/home/virtiofsd")
+	assert.Equal(config.HypervisorConfig.VirtioFSDaemon, "/bin/false")
 	assert.Equal(config.HypervisorConfig.VirtioFSCache, "/home/cache")
 	assert.Equal(config.HypervisorConfig.Msize9p, uint32(512))
 	assert.Equal(config.HypervisorConfig.HypervisorMachineType, "q35")

@@ -828,16 +864,77 @@ func TestAddHypervisorAnnotations(t *testing.T) {
 	// In case an absurd large value is provided, the config value if not over-ridden
 	ocispec.Annotations[vcAnnotations.DefaultVCPUs] = "655536"
-	err := addAnnotations(ocispec, &config)
+	err := addAnnotations(ocispec, &config, runtimeConfig)
 	assert.Error(err)

 	ocispec.Annotations[vcAnnotations.DefaultVCPUs] = "-1"
-	err = addAnnotations(ocispec, &config)
+	err = addAnnotations(ocispec, &config, runtimeConfig)
 	assert.Error(err)

 	ocispec.Annotations[vcAnnotations.DefaultVCPUs] = "1"
 	ocispec.Annotations[vcAnnotations.DefaultMaxVCPUs] = "-1"
-	err = addAnnotations(ocispec, &config)
+	err = addAnnotations(ocispec, &config, runtimeConfig)
+	assert.Error(err)
+
+	ocispec.Annotations[vcAnnotations.DefaultMaxVCPUs] = "1"
+	ocispec.Annotations[vcAnnotations.DefaultMemory] = fmt.Sprintf("%d", vc.MinHypervisorMemory+1)
+	assert.Error(err)
+}
func TestAddProtectedHypervisorAnnotations(t *testing.T) {
assert := assert.New(t)
config := vc.SandboxConfig{
Annotations: make(map[string]string),
}
ocispec := specs.Spec{
Annotations: make(map[string]string),
}
runtimeConfig := RuntimeConfig{
HypervisorType: vc.QemuHypervisor,
Console: consolePath,
}
ocispec.Annotations[vcAnnotations.KernelParams] = "vsyscall=emulate iommu=on"
err := addAnnotations(ocispec, &config, runtimeConfig)
assert.Error(err)
assert.Exactly(vc.HypervisorConfig{}, config.HypervisorConfig)
// Enable annotations
runtimeConfig.HypervisorConfig.EnableAnnotations = []string{".*"}
ocispec.Annotations[vcAnnotations.FileBackedMemRootDir] = "/dev/shm"
ocispec.Annotations[vcAnnotations.VirtioFSDaemon] = "/bin/false"
config.HypervisorConfig.FileBackedMemRootDir = "do-not-touch"
config.HypervisorConfig.VirtioFSDaemon = "dangerous-daemon"
err = addAnnotations(ocispec, &config, runtimeConfig)
assert.Error(err)
assert.Equal(config.HypervisorConfig.FileBackedMemRootDir, "do-not-touch")
assert.Equal(config.HypervisorConfig.VirtioFSDaemon, "dangerous-daemon")
// Now enable them and check again
runtimeConfig.HypervisorConfig.FileBackedMemRootList = []string{"/dev/*m"}
runtimeConfig.HypervisorConfig.VirtioFSDaemonList = []string{"/bin/*ls*"}
err = addAnnotations(ocispec, &config, runtimeConfig)
assert.NoError(err)
assert.Equal(config.HypervisorConfig.FileBackedMemRootDir, "/dev/shm")
assert.Equal(config.HypervisorConfig.VirtioFSDaemon, "/bin/false")
// In case an absurd large value is provided, the config value if not over-ridden
ocispec.Annotations[vcAnnotations.DefaultVCPUs] = "655536"
err = addAnnotations(ocispec, &config, runtimeConfig)
assert.Error(err)
ocispec.Annotations[vcAnnotations.DefaultVCPUs] = "-1"
err = addAnnotations(ocispec, &config, runtimeConfig)
assert.Error(err)
ocispec.Annotations[vcAnnotations.DefaultVCPUs] = "1"
ocispec.Annotations[vcAnnotations.DefaultMaxVCPUs] = "-1"
err = addAnnotations(ocispec, &config, runtimeConfig)
 	assert.Error(err)

 	ocispec.Annotations[vcAnnotations.DefaultMaxVCPUs] = "1"
@@ -856,18 +953,82 @@ func TestAddRuntimeAnnotations(t *testing.T) {
 		Annotations: make(map[string]string),
 	}

+	runtimeConfig := RuntimeConfig{
+		HypervisorType: vc.QemuHypervisor,
+		Console:        consolePath,
+	}
+
 	ocispec.Annotations[vcAnnotations.DisableGuestSeccomp] = "true"
 	ocispec.Annotations[vcAnnotations.SandboxCgroupOnly] = "true"
 	ocispec.Annotations[vcAnnotations.DisableNewNetNs] = "true"
 	ocispec.Annotations[vcAnnotations.InterNetworkModel] = "macvtap"
-	addAnnotations(ocispec, &config)
+	addAnnotations(ocispec, &config, runtimeConfig)

 	assert.Equal(config.DisableGuestSeccomp, true)
 	assert.Equal(config.SandboxCgroupOnly, true)
 	assert.Equal(config.NetworkConfig.DisableNewNetNs, true)
 	assert.Equal(config.NetworkConfig.InterworkingModel, vc.NetXConnectMacVtapModel)
 }
func TestRegexpContains(t *testing.T) {
assert := assert.New(t)
type testData struct {
regexps []string
toMatch string
expected bool
}
data := []testData{
{[]string{}, "", false},
{[]string{}, "nonempty", false},
{[]string{"simple"}, "simple", true},
{[]string{"simple"}, "some_simple_text", true},
{[]string{"simple"}, "simp", false},
{[]string{"one", "two"}, "one", true},
{[]string{"one", "two"}, "two", true},
{[]string{"o*"}, "oooo", true},
{[]string{"o*"}, "oooa", true},
{[]string{"^o*$"}, "oooa", false},
}
for _, d := range data {
matched := regexpContains(d.regexps, d.toMatch)
assert.Equal(d.expected, matched, "%+v", d)
}
}
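The `regexpContains` helper exercised by this table is defined elsewhere in the package. A minimal sketch consistent with the cases above: each pattern is tried unanchored, so "simple" matches inside "some_simple_text" unless the pattern anchors itself with `^...$`.

```go
package oci

import "regexp"

// Hypothetical reconstruction for illustration; the real helper is not
// part of this hunk.
func regexpContains(regexps []string, toMatch string) bool {
	for _, r := range regexps {
		// MatchString is unanchored, so a pattern may match any substring.
		if matched, _ := regexp.MatchString(r, toMatch); matched {
			return true
		}
	}
	return false
}
```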
func TestCheckPathIsInGlobs(t *testing.T) {
assert := assert.New(t)
type testData struct {
globs []string
toMatch string
expected bool
}
data := []testData{
{[]string{}, "", false},
{[]string{}, "nonempty", false},
{[]string{"simple"}, "simple", false},
{[]string{"simple"}, "some_simple_text", false},
{[]string{"/bin/ls"}, "/bin/ls", true},
{[]string{"/bin/ls", "/bin/false"}, "/bin/ls", true},
{[]string{"/bin/ls", "/bin/false"}, "/bin/false", true},
{[]string{"/bin/ls", "/bin/false"}, "/bin/bar", false},
{[]string{"/bin/*ls*"}, "/bin/ls", true},
{[]string{"/bin/*ls*"}, "/bin/false", true},
{[]string{"bin/ls"}, "/bin/ls", false},
{[]string{"./bin/ls"}, "/bin/ls", false},
{[]string{"*/bin/ls"}, "/bin/ls", false},
}
for _, d := range data {
matched := checkPathIsInGlobs(d.globs, d.toMatch)
assert.Equal(d.expected, matched, "%+v", d)
}
}
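`checkPathIsInGlobs` is likewise defined outside this diff. The positive cases above only use paths such as `/bin/ls` and `/bin/false` that exist on a typical test machine, which suggests a filesystem-backed glob expansion rather than pure pattern matching; a plausible sketch under that assumption:

```go
package oci

import "path/filepath"

// Hypothetical reconstruction for illustration. filepath.Glob only
// returns paths that actually exist and match, so relative globs and
// plain words such as "simple" never match anything here.
func checkPathIsInGlobs(globs []string, path string) bool {
	for _, glob := range globs {
		matches, err := filepath.Glob(glob)
		if err != nil {
			continue // malformed pattern: treat as no match
		}
		for _, match := range matches {
			if match == path {
				return true
			}
		}
	}
	return false
}
```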
 func TestIsCRIOContainerManager(t *testing.T) {
 	assert := assert.New(t)

View File

@@ -18,14 +18,7 @@ package vcmock

 import (
 	"context"
 	"fmt"
-	"syscall"

 	vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
-	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/device/api"
-	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/device/config"
-	pbTypes "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/agent/protocols"
-	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/types"
-	specs "github.com/opencontainers/runtime-spec/specs-go"
 	"github.com/sirupsen/logrus"
 )

@@ -56,240 +49,6 @@ func (m *VCMock) CreateSandbox(ctx context.Context, sandboxConfig vc.SandboxConf
 	return nil, fmt.Errorf("%s: %s (%+v): sandboxConfig: %v", mockErrorPrefix, getSelf(), m, sandboxConfig)
 }
// DeleteSandbox implements the VC function of the same name.
func (m *VCMock) DeleteSandbox(ctx context.Context, sandboxID string) (vc.VCSandbox, error) {
if m.DeleteSandboxFunc != nil {
return m.DeleteSandboxFunc(ctx, sandboxID)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
// FetchSandbox implements the VC function of the same name.
func (m *VCMock) FetchSandbox(ctx context.Context, sandboxID string) (vc.VCSandbox, error) {
if m.FetchSandboxFunc != nil {
return m.FetchSandboxFunc(ctx, sandboxID)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
// StartSandbox implements the VC function of the same name.
func (m *VCMock) StartSandbox(ctx context.Context, sandboxID string) (vc.VCSandbox, error) {
if m.StartSandboxFunc != nil {
return m.StartSandboxFunc(ctx, sandboxID)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
// StopSandbox implements the VC function of the same name.
func (m *VCMock) StopSandbox(ctx context.Context, sandboxID string, force bool) (vc.VCSandbox, error) {
if m.StopSandboxFunc != nil {
return m.StopSandboxFunc(ctx, sandboxID, force)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
// RunSandbox implements the VC function of the same name.
func (m *VCMock) RunSandbox(ctx context.Context, sandboxConfig vc.SandboxConfig) (vc.VCSandbox, error) {
if m.RunSandboxFunc != nil {
return m.RunSandboxFunc(ctx, sandboxConfig)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxConfig: %v", mockErrorPrefix, getSelf(), m, sandboxConfig)
}
// ListSandbox implements the VC function of the same name.
func (m *VCMock) ListSandbox(ctx context.Context) ([]vc.SandboxStatus, error) {
if m.ListSandboxFunc != nil {
return m.ListSandboxFunc(ctx)
}
return nil, fmt.Errorf("%s: %s", mockErrorPrefix, getSelf())
}
// StatusSandbox implements the VC function of the same name.
func (m *VCMock) StatusSandbox(ctx context.Context, sandboxID string) (vc.SandboxStatus, error) {
if m.StatusSandboxFunc != nil {
return m.StatusSandboxFunc(ctx, sandboxID)
}
return vc.SandboxStatus{}, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
// CreateContainer implements the VC function of the same name.
func (m *VCMock) CreateContainer(ctx context.Context, sandboxID string, containerConfig vc.ContainerConfig) (vc.VCSandbox, vc.VCContainer, error) {
if m.CreateContainerFunc != nil {
return m.CreateContainerFunc(ctx, sandboxID, containerConfig)
}
return nil, nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerConfig: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerConfig)
}
// DeleteContainer implements the VC function of the same name.
func (m *VCMock) DeleteContainer(ctx context.Context, sandboxID, containerID string) (vc.VCContainer, error) {
if m.DeleteContainerFunc != nil {
return m.DeleteContainerFunc(ctx, sandboxID, containerID)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerID: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerID)
}
// StartContainer implements the VC function of the same name.
func (m *VCMock) StartContainer(ctx context.Context, sandboxID, containerID string) (vc.VCContainer, error) {
if m.StartContainerFunc != nil {
return m.StartContainerFunc(ctx, sandboxID, containerID)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerID: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerID)
}
// StopContainer implements the VC function of the same name.
func (m *VCMock) StopContainer(ctx context.Context, sandboxID, containerID string) (vc.VCContainer, error) {
if m.StopContainerFunc != nil {
return m.StopContainerFunc(ctx, sandboxID, containerID)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerID: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerID)
}
// EnterContainer implements the VC function of the same name.
func (m *VCMock) EnterContainer(ctx context.Context, sandboxID, containerID string, cmd types.Cmd) (vc.VCSandbox, vc.VCContainer, *vc.Process, error) {
if m.EnterContainerFunc != nil {
return m.EnterContainerFunc(ctx, sandboxID, containerID, cmd)
}
return nil, nil, nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerID: %v, cmd: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerID, cmd)
}
// StatusContainer implements the VC function of the same name.
func (m *VCMock) StatusContainer(ctx context.Context, sandboxID, containerID string) (vc.ContainerStatus, error) {
if m.StatusContainerFunc != nil {
return m.StatusContainerFunc(ctx, sandboxID, containerID)
}
return vc.ContainerStatus{}, fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerID: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerID)
}
// StatsContainer implements the VC function of the same name.
func (m *VCMock) StatsContainer(ctx context.Context, sandboxID, containerID string) (vc.ContainerStats, error) {
if m.StatsContainerFunc != nil {
return m.StatsContainerFunc(ctx, sandboxID, containerID)
}
return vc.ContainerStats{}, fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerID: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerID)
}
// StatsSandbox implements the VC function of the same name.
func (m *VCMock) StatsSandbox(ctx context.Context, sandboxID string) (vc.SandboxStats, []vc.ContainerStats, error) {
if m.StatsContainerFunc != nil {
return m.StatsSandboxFunc(ctx, sandboxID)
}
return vc.SandboxStats{}, []vc.ContainerStats{}, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
// KillContainer implements the VC function of the same name.
func (m *VCMock) KillContainer(ctx context.Context, sandboxID, containerID string, signal syscall.Signal, all bool) error {
if m.KillContainerFunc != nil {
return m.KillContainerFunc(ctx, sandboxID, containerID, signal, all)
}
return fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerID: %v, signal: %v, all: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerID, signal, all)
}
// ProcessListContainer implements the VC function of the same name.
func (m *VCMock) ProcessListContainer(ctx context.Context, sandboxID, containerID string, options vc.ProcessListOptions) (vc.ProcessList, error) {
if m.ProcessListContainerFunc != nil {
return m.ProcessListContainerFunc(ctx, sandboxID, containerID, options)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerID: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerID)
}
// UpdateContainer implements the VC function of the same name.
func (m *VCMock) UpdateContainer(ctx context.Context, sandboxID, containerID string, resources specs.LinuxResources) error {
if m.UpdateContainerFunc != nil {
return m.UpdateContainerFunc(ctx, sandboxID, containerID, resources)
}
return fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerID: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerID)
}
// PauseContainer implements the VC function of the same name.
func (m *VCMock) PauseContainer(ctx context.Context, sandboxID, containerID string) error {
if m.PauseContainerFunc != nil {
return m.PauseContainerFunc(ctx, sandboxID, containerID)
}
return fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerID: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerID)
}
// ResumeContainer implements the VC function of the same name.
func (m *VCMock) ResumeContainer(ctx context.Context, sandboxID, containerID string) error {
if m.ResumeContainerFunc != nil {
return m.ResumeContainerFunc(ctx, sandboxID, containerID)
}
return fmt.Errorf("%s: %s (%+v): sandboxID: %v, containerID: %v", mockErrorPrefix, getSelf(), m, sandboxID, containerID)
}
// AddDevice implements the VC function of the same name.
func (m *VCMock) AddDevice(ctx context.Context, sandboxID string, info config.DeviceInfo) (api.Device, error) {
if m.AddDeviceFunc != nil {
return m.AddDeviceFunc(ctx, sandboxID, info)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
// AddInterface implements the VC function of the same name.
func (m *VCMock) AddInterface(ctx context.Context, sandboxID string, inf *pbTypes.Interface) (*pbTypes.Interface, error) {
if m.AddInterfaceFunc != nil {
return m.AddInterfaceFunc(ctx, sandboxID, inf)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
// RemoveInterface implements the VC function of the same name.
func (m *VCMock) RemoveInterface(ctx context.Context, sandboxID string, inf *pbTypes.Interface) (*pbTypes.Interface, error) {
if m.RemoveInterfaceFunc != nil {
return m.RemoveInterfaceFunc(ctx, sandboxID, inf)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
// ListInterfaces implements the VC function of the same name.
func (m *VCMock) ListInterfaces(ctx context.Context, sandboxID string) ([]*pbTypes.Interface, error) {
if m.ListInterfacesFunc != nil {
return m.ListInterfacesFunc(ctx, sandboxID)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
// UpdateRoutes implements the VC function of the same name.
func (m *VCMock) UpdateRoutes(ctx context.Context, sandboxID string, routes []*pbTypes.Route) ([]*pbTypes.Route, error) {
if m.UpdateRoutesFunc != nil {
return m.UpdateRoutesFunc(ctx, sandboxID, routes)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
// ListRoutes implements the VC function of the same name.
func (m *VCMock) ListRoutes(ctx context.Context, sandboxID string) ([]*pbTypes.Route, error) {
if m.ListRoutesFunc != nil {
return m.ListRoutesFunc(ctx, sandboxID)
}
return nil, fmt.Errorf("%s: %s (%+v): sandboxID: %v", mockErrorPrefix, getSelf(), m, sandboxID)
}
 func (m *VCMock) CleanupContainer(ctx context.Context, sandboxID, containerID string, force bool) error {
 	if m.CleanupContainerFunc != nil {
 		return m.CleanupContainerFunc(ctx, sandboxID, containerID, true)
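For orientation, this is the pattern the surviving mock supports: a test installs a stub in the exported `*Func` field, calls through the interface method, then resets the field so calls fail with a mock error again. A usage sketch assuming the package's import path, not code from this PR:

```go
package vcmock_test

import (
	"context"
	"testing"

	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/vcmock"
	"github.com/stretchr/testify/assert"
)

func TestCleanupContainerStub(t *testing.T) {
	m := &vcmock.VCMock{}

	// Install a stub: with the hook set, the mock delegates to it.
	m.CleanupContainerFunc = func(ctx context.Context, sandboxID, containerID string, force bool) error {
		return nil
	}
	err := m.CleanupContainer(context.Background(), "test-sandbox", "test-container", false)
	assert.NoError(t, err)

	// Reset: with the hook nil, the mock returns a mock-prefixed error.
	m.CleanupContainerFunc = nil
	err = m.CleanupContainer(context.Background(), "test-sandbox", "test-container", false)
	assert.True(t, vcmock.IsMockError(err))
}
```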

View File

@@ -8,13 +8,10 @@ package vcmock

 import (
 	"context"
 	"reflect"
-	"syscall"
 	"testing"

 	vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
 	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/factory"
-	pbTypes "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/agent/protocols"
-	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/types"
 	"github.com/sirupsen/logrus"
 	"github.com/stretchr/testify/assert"
 )

@@ -143,514 +140,6 @@ func TestVCMockCreateSandbox(t *testing.T) {
 	assert.True(IsMockError(err))
 }
func TestVCMockDeleteSandbox(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.DeleteSandboxFunc)
ctx := context.Background()
_, err := m.DeleteSandbox(ctx, testSandboxID)
assert.Error(err)
assert.True(IsMockError(err))
m.DeleteSandboxFunc = func(ctx context.Context, sandboxID string) (vc.VCSandbox, error) {
return &Sandbox{}, nil
}
sandbox, err := m.DeleteSandbox(ctx, testSandboxID)
assert.NoError(err)
assert.Equal(sandbox, &Sandbox{})
// reset
m.DeleteSandboxFunc = nil
_, err = m.DeleteSandbox(ctx, testSandboxID)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockListSandbox(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.ListSandboxFunc)
ctx := context.Background()
_, err := m.ListSandbox(ctx)
assert.Error(err)
assert.True(IsMockError(err))
m.ListSandboxFunc = func(ctx context.Context) ([]vc.SandboxStatus, error) {
return []vc.SandboxStatus{}, nil
}
sandboxes, err := m.ListSandbox(ctx)
assert.NoError(err)
assert.Equal(sandboxes, []vc.SandboxStatus{})
// reset
m.ListSandboxFunc = nil
_, err = m.ListSandbox(ctx)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockRunSandbox(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.RunSandboxFunc)
ctx := context.Background()
_, err := m.RunSandbox(ctx, vc.SandboxConfig{})
assert.Error(err)
assert.True(IsMockError(err))
m.RunSandboxFunc = func(ctx context.Context, sandboxConfig vc.SandboxConfig) (vc.VCSandbox, error) {
return &Sandbox{}, nil
}
sandbox, err := m.RunSandbox(ctx, vc.SandboxConfig{})
assert.NoError(err)
assert.Equal(sandbox, &Sandbox{})
// reset
m.RunSandboxFunc = nil
_, err = m.RunSandbox(ctx, vc.SandboxConfig{})
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockStartSandbox(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.StartSandboxFunc)
ctx := context.Background()
_, err := m.StartSandbox(ctx, testSandboxID)
assert.Error(err)
assert.True(IsMockError(err))
m.StartSandboxFunc = func(ctx context.Context, sandboxID string) (vc.VCSandbox, error) {
return &Sandbox{}, nil
}
sandbox, err := m.StartSandbox(ctx, testSandboxID)
assert.NoError(err)
assert.Equal(sandbox, &Sandbox{})
// reset
m.StartSandboxFunc = nil
_, err = m.StartSandbox(ctx, testSandboxID)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockStatusSandbox(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.StatusSandboxFunc)
ctx := context.Background()
_, err := m.StatusSandbox(ctx, testSandboxID)
assert.Error(err)
assert.True(IsMockError(err))
m.StatusSandboxFunc = func(ctx context.Context, sandboxID string) (vc.SandboxStatus, error) {
return vc.SandboxStatus{}, nil
}
sandbox, err := m.StatusSandbox(ctx, testSandboxID)
assert.NoError(err)
assert.Equal(sandbox, vc.SandboxStatus{})
// reset
m.StatusSandboxFunc = nil
_, err = m.StatusSandbox(ctx, testSandboxID)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockStopSandbox(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.StopSandboxFunc)
ctx := context.Background()
_, err := m.StopSandbox(ctx, testSandboxID, false)
assert.Error(err)
assert.True(IsMockError(err))
m.StopSandboxFunc = func(ctx context.Context, sandboxID string, force bool) (vc.VCSandbox, error) {
return &Sandbox{}, nil
}
sandbox, err := m.StopSandbox(ctx, testSandboxID, false)
assert.NoError(err)
assert.Equal(sandbox, &Sandbox{})
// reset
m.StopSandboxFunc = nil
_, err = m.StopSandbox(ctx, testSandboxID, false)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockCreateContainer(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.CreateContainerFunc)
ctx := context.Background()
config := vc.ContainerConfig{}
_, _, err := m.CreateContainer(ctx, testSandboxID, config)
assert.Error(err)
assert.True(IsMockError(err))
m.CreateContainerFunc = func(ctx context.Context, sandboxID string, containerConfig vc.ContainerConfig) (vc.VCSandbox, vc.VCContainer, error) {
return &Sandbox{}, &Container{}, nil
}
sandbox, container, err := m.CreateContainer(ctx, testSandboxID, config)
assert.NoError(err)
assert.Equal(sandbox, &Sandbox{})
assert.Equal(container, &Container{})
// reset
m.CreateContainerFunc = nil
_, _, err = m.CreateContainer(ctx, testSandboxID, config)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockDeleteContainer(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.DeleteContainerFunc)
ctx := context.Background()
_, err := m.DeleteContainer(ctx, testSandboxID, testContainerID)
assert.Error(err)
assert.True(IsMockError(err))
m.DeleteContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.VCContainer, error) {
return &Container{}, nil
}
container, err := m.DeleteContainer(ctx, testSandboxID, testContainerID)
assert.NoError(err)
assert.Equal(container, &Container{})
// reset
m.DeleteContainerFunc = nil
_, err = m.DeleteContainer(ctx, testSandboxID, testContainerID)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockEnterContainer(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.EnterContainerFunc)
ctx := context.Background()
cmd := types.Cmd{}
_, _, _, err := m.EnterContainer(ctx, testSandboxID, testContainerID, cmd)
assert.Error(err)
assert.True(IsMockError(err))
m.EnterContainerFunc = func(ctx context.Context, sandboxID, containerID string, cmd types.Cmd) (vc.VCSandbox, vc.VCContainer, *vc.Process, error) {
return &Sandbox{}, &Container{}, &vc.Process{}, nil
}
sandbox, container, process, err := m.EnterContainer(ctx, testSandboxID, testContainerID, cmd)
assert.NoError(err)
assert.Equal(sandbox, &Sandbox{})
assert.Equal(container, &Container{})
assert.Equal(process, &vc.Process{})
// reset
m.EnterContainerFunc = nil
_, _, _, err = m.EnterContainer(ctx, testSandboxID, testContainerID, cmd)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockKillContainer(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.KillContainerFunc)
ctx := context.Background()
sig := syscall.SIGTERM
for _, all := range []bool{true, false} {
err := m.KillContainer(ctx, testSandboxID, testContainerID, sig, all)
assert.Error(err)
assert.True(IsMockError(err))
}
m.KillContainerFunc = func(ctx context.Context, sandboxID, containerID string, signal syscall.Signal, all bool) error {
return nil
}
for _, all := range []bool{true, false} {
err := m.KillContainer(ctx, testSandboxID, testContainerID, sig, all)
assert.NoError(err)
}
// reset
m.KillContainerFunc = nil
for _, all := range []bool{true, false} {
err := m.KillContainer(ctx, testSandboxID, testContainerID, sig, all)
assert.Error(err)
assert.True(IsMockError(err))
}
}
func TestVCMockStartContainer(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.StartContainerFunc)
ctx := context.Background()
_, err := m.StartContainer(ctx, testSandboxID, testContainerID)
assert.Error(err)
assert.True(IsMockError(err))
m.StartContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.VCContainer, error) {
return &Container{}, nil
}
container, err := m.StartContainer(ctx, testSandboxID, testContainerID)
assert.NoError(err)
assert.Equal(container, &Container{})
// reset
m.StartContainerFunc = nil
_, err = m.StartContainer(ctx, testSandboxID, testContainerID)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockStatusContainer(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.StatusContainerFunc)
ctx := context.Background()
_, err := m.StatusContainer(ctx, testSandboxID, testContainerID)
assert.Error(err)
assert.True(IsMockError(err))
m.StatusContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStatus, error) {
return vc.ContainerStatus{}, nil
}
status, err := m.StatusContainer(ctx, testSandboxID, testContainerID)
assert.NoError(err)
assert.Equal(status, vc.ContainerStatus{})
// reset
m.StatusContainerFunc = nil
_, err = m.StatusContainer(ctx, testSandboxID, testContainerID)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockStatsContainer(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.StatsContainerFunc)
ctx := context.Background()
_, err := m.StatsContainer(ctx, testSandboxID, testContainerID)
assert.Error(err)
assert.True(IsMockError(err))
m.StatsContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStats, error) {
return vc.ContainerStats{}, nil
}
stats, err := m.StatsContainer(ctx, testSandboxID, testContainerID)
assert.NoError(err)
assert.Equal(stats, vc.ContainerStats{})
// reset
m.StatsContainerFunc = nil
_, err = m.StatsContainer(ctx, testSandboxID, testContainerID)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockStopContainer(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.StopContainerFunc)
ctx := context.Background()
_, err := m.StopContainer(ctx, testSandboxID, testContainerID)
assert.Error(err)
assert.True(IsMockError(err))
m.StopContainerFunc = func(ctx context.Context, sandboxID, containerID string) (vc.VCContainer, error) {
return &Container{}, nil
}
container, err := m.StopContainer(ctx, testSandboxID, testContainerID)
assert.NoError(err)
assert.Equal(container, &Container{})
// reset
m.StopContainerFunc = nil
_, err = m.StopContainer(ctx, testSandboxID, testContainerID)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockProcessListContainer(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
assert.Nil(m.ProcessListContainerFunc)
options := vc.ProcessListOptions{
Format: "json",
Args: []string{"-ef"},
}
ctx := context.Background()
_, err := m.ProcessListContainer(ctx, testSandboxID, testContainerID, options)
assert.Error(err)
assert.True(IsMockError(err))
processList := vc.ProcessList("hi")
m.ProcessListContainerFunc = func(ctx context.Context, sandboxID, containerID string, options vc.ProcessListOptions) (vc.ProcessList, error) {
return processList, nil
}
pList, err := m.ProcessListContainer(ctx, testSandboxID, testContainerID, options)
assert.NoError(err)
assert.Equal(pList, processList)
// reset
m.ProcessListContainerFunc = nil
_, err = m.ProcessListContainer(ctx, testSandboxID, testContainerID, options)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockFetchSandbox(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
config := &vc.SandboxConfig{}
assert.Nil(m.FetchSandboxFunc)
ctx := context.Background()
_, err := m.FetchSandbox(ctx, config.ID)
assert.Error(err)
assert.True(IsMockError(err))
m.FetchSandboxFunc = func(ctx context.Context, id string) (vc.VCSandbox, error) {
return &Sandbox{}, nil
}
sandbox, err := m.FetchSandbox(ctx, config.ID)
assert.NoError(err)
assert.Equal(sandbox, &Sandbox{})
// reset
m.FetchSandboxFunc = nil
_, err = m.FetchSandbox(ctx, config.ID)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockPauseContainer(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
config := &vc.SandboxConfig{}
assert.Nil(m.PauseContainerFunc)
ctx := context.Background()
err := m.PauseContainer(ctx, config.ID, config.ID)
assert.Error(err)
assert.True(IsMockError(err))
m.PauseContainerFunc = func(ctx context.Context, sid, cid string) error {
return nil
}
err = m.PauseContainer(ctx, config.ID, config.ID)
assert.NoError(err)
// reset
m.PauseContainerFunc = nil
err = m.PauseContainer(ctx, config.ID, config.ID)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockResumeContainer(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
config := &vc.SandboxConfig{}
assert.Nil(m.ResumeContainerFunc)
ctx := context.Background()
err := m.ResumeContainer(ctx, config.ID, config.ID)
assert.Error(err)
assert.True(IsMockError(err))
m.ResumeContainerFunc = func(ctx context.Context, sid, cid string) error {
return nil
}
err = m.ResumeContainer(ctx, config.ID, config.ID)
assert.NoError(err)
// reset
m.ResumeContainerFunc = nil
err = m.ResumeContainer(ctx, config.ID, config.ID)
assert.Error(err)
assert.True(IsMockError(err))
}
 func TestVCMockSetVMFactory(t *testing.T) {
 	assert := assert.New(t)

@@ -682,137 +171,54 @@ func TestVCMockSetVMFactory(t *testing.T) {
 	assert.Equal(factoryTriggered, 1)
 }

-func TestVCMockAddInterface(t *testing.T) {
+func TestVCMockCleanupContainer(t *testing.T) {
 	assert := assert.New(t)

 	m := &VCMock{}
-	config := &vc.SandboxConfig{}
-	assert.Nil(m.AddInterfaceFunc)
+	assert.Nil(m.CleanupContainerFunc)

 	ctx := context.Background()
-	_, err := m.AddInterface(ctx, config.ID, nil)
+	err := m.CleanupContainer(ctx, testSandboxID, testContainerID, false)
 	assert.Error(err)
 	assert.True(IsMockError(err))

-	m.AddInterfaceFunc = func(ctx context.Context, sid string, inf *pbTypes.Interface) (*pbTypes.Interface, error) {
-		return nil, nil
+	m.CleanupContainerFunc = func(ctx context.Context, sandboxID, containerID string, force bool) error {
+		return nil
 	}

-	_, err = m.AddInterface(ctx, config.ID, nil)
+	err = m.CleanupContainer(ctx, testSandboxID, testContainerID, false)
 	assert.NoError(err)

 	// reset
-	m.AddInterfaceFunc = nil
+	m.CleanupContainerFunc = nil

-	_, err = m.AddInterface(ctx, config.ID, nil)
+	err = m.CleanupContainer(ctx, testSandboxID, testContainerID, false)
 	assert.Error(err)
 	assert.True(IsMockError(err))
 }

-func TestVCMockRemoveInterface(t *testing.T) {
+func TestVCMockForceCleanupContainer(t *testing.T) {
 	assert := assert.New(t)

 	m := &VCMock{}
-	config := &vc.SandboxConfig{}
-	assert.Nil(m.RemoveInterfaceFunc)
+	assert.Nil(m.CleanupContainerFunc)

 	ctx := context.Background()
-	_, err := m.RemoveInterface(ctx, config.ID, nil)
+	err := m.CleanupContainer(ctx, testSandboxID, testContainerID, true)
 	assert.Error(err)
 	assert.True(IsMockError(err))

-	m.RemoveInterfaceFunc = func(ctx context.Context, sid string, inf *pbTypes.Interface) (*pbTypes.Interface, error) {
-		return nil, nil
+	m.CleanupContainerFunc = func(ctx context.Context, sandboxID, containerID string, force bool) error {
+		return nil
 	}

-	_, err = m.RemoveInterface(ctx, config.ID, nil)
+	err = m.CleanupContainer(ctx, testSandboxID, testContainerID, true)
 	assert.NoError(err)

 	// reset
-	m.RemoveInterfaceFunc = nil
+	m.CleanupContainerFunc = nil

-	_, err = m.RemoveInterface(ctx, config.ID, nil)
+	err = m.CleanupContainer(ctx, testSandboxID, testContainerID, true)
 	assert.Error(err)
 	assert.True(IsMockError(err))
 }
func TestVCMockListInterfaces(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
config := &vc.SandboxConfig{}
assert.Nil(m.ListInterfacesFunc)
ctx := context.Background()
_, err := m.ListInterfaces(ctx, config.ID)
assert.Error(err)
assert.True(IsMockError(err))
m.ListInterfacesFunc = func(ctx context.Context, sid string) ([]*pbTypes.Interface, error) {
return nil, nil
}
_, err = m.ListInterfaces(ctx, config.ID)
assert.NoError(err)
// reset
m.ListInterfacesFunc = nil
_, err = m.ListInterfaces(ctx, config.ID)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockUpdateRoutes(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
config := &vc.SandboxConfig{}
assert.Nil(m.UpdateRoutesFunc)
ctx := context.Background()
_, err := m.UpdateRoutes(ctx, config.ID, nil)
assert.Error(err)
assert.True(IsMockError(err))
m.UpdateRoutesFunc = func(ctx context.Context, sid string, routes []*pbTypes.Route) ([]*pbTypes.Route, error) {
return nil, nil
}
_, err = m.UpdateRoutes(ctx, config.ID, nil)
assert.NoError(err)
// reset
m.UpdateRoutesFunc = nil
_, err = m.UpdateRoutes(ctx, config.ID, nil)
assert.Error(err)
assert.True(IsMockError(err))
}
func TestVCMockListRoutes(t *testing.T) {
assert := assert.New(t)
m := &VCMock{}
config := &vc.SandboxConfig{}
assert.Nil(m.ListRoutesFunc)
ctx := context.Background()
_, err := m.ListRoutes(ctx, config.ID)
assert.Error(err)
assert.True(IsMockError(err))
m.ListRoutesFunc = func(ctx context.Context, sid string) ([]*pbTypes.Route, error) {
return nil, nil
}
_, err = m.ListRoutes(ctx, config.ID)
assert.NoError(err)
// reset
m.ListRoutesFunc = nil
_, err = m.ListRoutes(ctx, config.ID)
-	assert.Error(err)
-	assert.True(IsMockError(err))
-}

View File

@@ -88,34 +88,5 @@ type VCMock struct {
 	SetFactoryFunc    func(ctx context.Context, factory vc.Factory)
 	CreateSandboxFunc func(ctx context.Context, sandboxConfig vc.SandboxConfig) (vc.VCSandbox, error)
DeleteSandboxFunc func(ctx context.Context, sandboxID string) (vc.VCSandbox, error)
ListSandboxFunc func(ctx context.Context) ([]vc.SandboxStatus, error)
FetchSandboxFunc func(ctx context.Context, sandboxID string) (vc.VCSandbox, error)
RunSandboxFunc func(ctx context.Context, sandboxConfig vc.SandboxConfig) (vc.VCSandbox, error)
StartSandboxFunc func(ctx context.Context, sandboxID string) (vc.VCSandbox, error)
StatusSandboxFunc func(ctx context.Context, sandboxID string) (vc.SandboxStatus, error)
StatsContainerFunc func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStats, error)
StatsSandboxFunc func(ctx context.Context, sandboxID string) (vc.SandboxStats, []vc.ContainerStats, error)
StopSandboxFunc func(ctx context.Context, sandboxID string, force bool) (vc.VCSandbox, error)
CreateContainerFunc func(ctx context.Context, sandboxID string, containerConfig vc.ContainerConfig) (vc.VCSandbox, vc.VCContainer, error)
DeleteContainerFunc func(ctx context.Context, sandboxID, containerID string) (vc.VCContainer, error)
EnterContainerFunc func(ctx context.Context, sandboxID, containerID string, cmd types.Cmd) (vc.VCSandbox, vc.VCContainer, *vc.Process, error)
KillContainerFunc func(ctx context.Context, sandboxID, containerID string, signal syscall.Signal, all bool) error
StartContainerFunc func(ctx context.Context, sandboxID, containerID string) (vc.VCContainer, error)
StatusContainerFunc func(ctx context.Context, sandboxID, containerID string) (vc.ContainerStatus, error)
StopContainerFunc func(ctx context.Context, sandboxID, containerID string) (vc.VCContainer, error)
ProcessListContainerFunc func(ctx context.Context, sandboxID, containerID string, options vc.ProcessListOptions) (vc.ProcessList, error)
UpdateContainerFunc func(ctx context.Context, sandboxID, containerID string, resources specs.LinuxResources) error
PauseContainerFunc func(ctx context.Context, sandboxID, containerID string) error
ResumeContainerFunc func(ctx context.Context, sandboxID, containerID string) error
AddDeviceFunc func(ctx context.Context, sandboxID string, info config.DeviceInfo) (api.Device, error)
AddInterfaceFunc func(ctx context.Context, sandboxID string, inf *pbTypes.Interface) (*pbTypes.Interface, error)
RemoveInterfaceFunc func(ctx context.Context, sandboxID string, inf *pbTypes.Interface) (*pbTypes.Interface, error)
ListInterfacesFunc func(ctx context.Context, sandboxID string) ([]*pbTypes.Interface, error)
UpdateRoutesFunc func(ctx context.Context, sandboxID string, routes []*pbTypes.Route) ([]*pbTypes.Route, error)
ListRoutesFunc func(ctx context.Context, sandboxID string) ([]*pbTypes.Route, error)
 	CleanupContainerFunc func(ctx context.Context, sandboxID, containerID string, force bool) error
 }

View File

@@ -39,6 +39,7 @@ import (
 	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/annotations"
 	vccgroups "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/cgroups"
 	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/compatoci"
+	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/cpuset"
 	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/rootless"
 	vcTypes "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/types"
 	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/types"

@@ -564,15 +565,25 @@ func (s *Sandbox) createCgroupManager() error {
 	}

 	spec := s.GetPatchedOCISpec()
-	if spec != nil {
+	if spec != nil && spec.Linux != nil {
 		cgroupPath = spec.Linux.CgroupsPath

 		// Kata relies on the cgroup parent created and configured by the container
-		// engine, but sometimes the sandbox cgroup is not configured and the container
-		// may have access to all the resources, hence the runtime must constrain the
-		// sandbox and update the list of devices with the devices hotplugged in the
-		// hypervisor.
-		resources = *spec.Linux.Resources
+		// engine by default. The exception is for devices whitelist as well as sandbox-level
+		// CPUSet.
+		if spec.Linux.Resources != nil {
+			resources.Devices = spec.Linux.Resources.Devices
+
+			if spec.Linux.Resources.CPU != nil {
+				resources.CPU = &specs.LinuxCPU{
+					Cpus: spec.Linux.Resources.CPU.Cpus,
+				}
+			}
+		}
+
+		//TODO: in Docker or Podman use case, it is reasonable to set a constraint. Need to add a flag
+		// to allow users to configure Kata to constrain CPUs and Memory in this alternative
+		// scenario. See https://github.com/kata-containers/runtime/issues/2811
 	}

 	if s.devManager != nil {

@@ -1215,7 +1226,7 @@ func (s *Sandbox) CreateContainer(contConfig ContainerConfig) (VCContainer, erro
 		}
 	}()

-	// Sandbox is reponsable to update VM resources needed by Containers
+	// Sandbox is responsible to update VM resources needed by Containers
 	// Update resources after having added containers to the sandbox, since
 	// container status is required to know if more resources should be added.
 	err = s.updateResources()

@@ -1329,6 +1340,11 @@ func (s *Sandbox) DeleteContainer(containerID string) (VCContainer, error) {
 		}
 	}

+	// update the sandbox cgroup
+	if err = s.cgroupsUpdate(); err != nil {
+		return nil, err
+	}
+
 	if err = s.storeSandbox(); err != nil {
 		return nil, err
 	}

@@ -1866,11 +1882,12 @@ func (s *Sandbox) AddDevice(info config.DeviceInfo) (api.Device, error) {
 	return b, nil
 }

-// updateResources will calculate the resources required for the virtual machine, and
-// adjust the virtual machine sizing accordingly. For a given sandbox, it will calculate the
-// number of vCPUs required based on the sum of container requests, plus default CPUs for the VM.
-// Similar is done for memory. If changes in memory or CPU are made, the VM will be updated and
-// the agent will online the applicable CPU and memory.
+// updateResources will:
+// - calculate the resources required for the virtual machine, and adjust the virtual machine
+//   sizing accordingly. For a given sandbox, it will calculate the number of vCPUs required based
+//   on the sum of container requests, plus default CPUs for the VM. Similar is done for memory.
+//   If changes in memory or CPU are made, the VM will be updated and the agent will online the
+//   applicable CPU and memory.
 func (s *Sandbox) updateResources() error {
 	if s == nil {
 		return errors.New("sandbox is nil")
@@ -1880,7 +1897,10 @@ func (s *Sandbox) updateResources() error {
 		return fmt.Errorf("sandbox config is nil")
 	}

-	sandboxVCPUs := s.calculateSandboxCPUs()
+	sandboxVCPUs, err := s.calculateSandboxCPUs()
+	if err != nil {
+		return err
+	}
 	// Add default vcpus for sandbox
 	sandboxVCPUs += s.hypervisor.hypervisorConfig().NumVCPUs

@@ -1942,8 +1962,9 @@ func (s *Sandbox) calculateSandboxMemory() int64 {
 	return memorySandbox
 }

-func (s *Sandbox) calculateSandboxCPUs() uint32 {
+func (s *Sandbox) calculateSandboxCPUs() (uint32, error) {
 	mCPU := uint32(0)
+	cpusetCount := int(0)

 	for _, c := range s.config.Containers {
 		// Do not hot add again non-running containers resources

@@ -1957,9 +1978,22 @@ func (s *Sandbox) calculateSandboxCPUs() uint32 {
 			mCPU += utils.CalculateMilliCPUs(*cpu.Quota, *cpu.Period)
 		}

+			set, err := cpuset.Parse(cpu.Cpus)
+			if err != nil {
+				return 0, nil
+			}
+			cpusetCount += set.Size()
 		}
 	}

-	return utils.CalculateVCpusFromMilliCpus(mCPU)
+	// If we aren't being constrained, then we could have two scenarios:
+	// 1. BestEffort QoS: no proper support today in Kata.
+	// 2. We could be constrained only by CPUSets. Check for this:
+	if mCPU == 0 && cpusetCount > 0 {
+		return uint32(cpusetCount), nil
+	}
+
+	return utils.CalculateVCpusFromMilliCpus(mCPU), nil
 }
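A quick check on the arithmetic: one container with quota 4000 and period 1000 contributes 4000 milli-CPUs, i.e. 4 vCPUs, which is what the `1-constrained` case in `TestCalculateSandboxCPUs` further down expects. The two `utils` helpers are not shown in this diff; a hedged sketch of the behavior they would need for those tests to pass:

```go
// Assumed behavior of utils.CalculateMilliCPUs: quota/period scaled to
// thousandths of a CPU; negative quota or zero period yields 0.
func calculateMilliCPUs(quota int64, period uint64) uint32 {
	if quota >= 0 && period != 0 {
		return uint32((uint64(quota) * 1000) / period)
	}
	return 0
}

// Assumed behavior of utils.CalculateVCpusFromMilliCpus: partial CPUs
// round up to a whole vCPU, so 4000 mCPU -> 4 and 4001 mCPU -> 5.
func calculateVCpusFromMilliCpus(mCPU uint32) uint32 {
	return (mCPU + 999) / 1000
}
```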
 // GetHypervisorType is used for getting Hypervisor name currently used.

@@ -1975,9 +2009,18 @@ func (s *Sandbox) GetHypervisorType() string {
 func (s *Sandbox) cgroupsUpdate() error {

 	// If Kata is configured for SandboxCgroupOnly, the VMM and its processes are already
-	// in the Kata sandbox cgroup (inherited). No need to move threads/processes, and we should
-	// rely on parent's cgroup CPU/memory values
+	// in the Kata sandbox cgroup (inherited). Check to see if sandbox cpuset needs to be
+	// updated.
 	if s.config.SandboxCgroupOnly {
+		cpuset, memset, err := s.getSandboxCPUSet()
+		if err != nil {
+			return err
+		}
+
+		if err := s.cgroupMgr.SetCPUSet(cpuset, memset); err != nil {
+			return err
+		}
+
 		return nil
 	}

@@ -2275,3 +2318,31 @@ func (s *Sandbox) GetOOMEvent() (string, error) {
 func (s *Sandbox) GetAgentURL() (string, error) {
 	return s.agent.getAgentURL()
 }
// getSandboxCPUSet returns the union of each of the sandbox's containers' CPU sets'
// cpus and mems as a string in canonical linux CPU/mems list format
func (s *Sandbox) getSandboxCPUSet() (string, string, error) {
if s.config == nil {
return "", "", nil
}
cpuResult := cpuset.NewCPUSet()
memResult := cpuset.NewCPUSet()
for _, ctr := range s.config.Containers {
if ctr.Resources.CPU != nil {
currCPUSet, err := cpuset.Parse(ctr.Resources.CPU.Cpus)
if err != nil {
return "", "", fmt.Errorf("unable to parse CPUset.cpus for container %s: %v", ctr.ID, err)
}
cpuResult = cpuResult.Union(currCPUSet)
currMemSet, err := cpuset.Parse(ctr.Resources.CPU.Mems)
if err != nil {
return "", "", fmt.Errorf("unable to parse CPUset.mems for container %s: %v", ctr.ID, err)
}
memResult = memResult.Union(currMemSet)
}
}
return cpuResult.String(), memResult.String(), nil
}
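The union logic relies on the `cpuset` package's `Parse`/`Union`/`String` round-trip (the import added at the top of this file); a short illustration with values borrowed from the `3 cpusets` test case below:

```go
package main

import (
	"fmt"

	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/cpuset"
)

func main() {
	// Values borrowed from the "3 cpusets" test case below.
	a, _ := cpuset.Parse("0-3")
	b, _ := cpuset.Parse("5-7")
	c, _ := cpuset.Parse("1")

	// Union is order-independent and de-duplicates; String renders the
	// result in canonical Linux cpulist form.
	fmt.Println(a.Union(b).Union(c).String()) // "0-3,5-7"
}
```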

View File

@@ -106,12 +106,18 @@ func TestCreateMockSandbox(t *testing.T) {
 func TestCalculateSandboxCPUs(t *testing.T) {
 	sandbox := &Sandbox{}
 	sandbox.config = &SandboxConfig{}
 	unconstrained := newTestContainerConfigNoop("cont-00001")
-	constrained := newTestContainerConfigNoop("cont-00001")
+	constrained := newTestContainerConfigNoop("cont-00002")
+	unconstrainedCpusets0_1 := newTestContainerConfigNoop("cont-00003")
+	unconstrainedCpusets2 := newTestContainerConfigNoop("cont-00004")
+	constrainedCpusets0_7 := newTestContainerConfigNoop("cont-00005")
 	quota := int64(4000)
 	period := uint64(1000)
 	constrained.Resources.CPU = &specs.LinuxCPU{Period: &period, Quota: &quota}
+	unconstrainedCpusets0_1.Resources.CPU = &specs.LinuxCPU{Cpus: "0-1"}
+	unconstrainedCpusets2.Resources.CPU = &specs.LinuxCPU{Cpus: "2"}
+	constrainedCpusets0_7.Resources.CPU = &specs.LinuxCPU{Period: &period, Quota: &quota, Cpus: "0-7"}

 	tests := []struct {
 		name       string
 		containers []ContainerConfig

@@ -123,11 +129,14 @@ func TestCalculateSandboxCPUs(t *testing.T) {
 		{"2-constrained", []ContainerConfig{constrained, constrained}, 8},
 		{"3-mix-constraints", []ContainerConfig{unconstrained, constrained, constrained}, 8},
 		{"3-constrained", []ContainerConfig{constrained, constrained, constrained}, 12},
+		{"unconstrained-1-cpuset", []ContainerConfig{unconstrained, unconstrained, unconstrainedCpusets0_1}, 2},
+		{"unconstrained-2-cpuset", []ContainerConfig{unconstrainedCpusets0_1, unconstrainedCpusets2}, 3},
+		{"constrained-cpuset", []ContainerConfig{constrainedCpusets0_7}, 4},
 	}
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
 			sandbox.config.Containers = tt.containers
-			got := sandbox.calculateSandboxCPUs()
+			got, _ := sandbox.calculateSandboxCPUs()
 			assert.Equal(t, got, tt.want)
 		})
 	}

@@ -1419,3 +1428,127 @@ func TestSandbox_SetupSandboxCgroup(t *testing.T) {
 		})
 	}
 }
func getContainerConfigWithCPUSet(cpuset, memset string) ContainerConfig {
return ContainerConfig{
Resources: specs.LinuxResources{
CPU: &specs.LinuxCPU{
Cpus: cpuset,
Mems: memset,
},
},
}
}
func getSimpleSandbox(cpusets, memsets [3]string) *Sandbox {
sandbox := Sandbox{}
sandbox.config = &SandboxConfig{
Containers: []ContainerConfig{
getContainerConfigWithCPUSet(cpusets[0], memsets[0]),
getContainerConfigWithCPUSet(cpusets[1], memsets[1]),
getContainerConfigWithCPUSet(cpusets[2], memsets[2]),
},
}
return &sandbox
}
func TestGetSandboxCpuSet(t *testing.T) {
tests := []struct {
name string
cpusets [3]string
memsets [3]string
cpuResult string
memResult string
wantErr bool
}{
{
"single, no cpuset",
[3]string{"", "", ""},
[3]string{"", "", ""},
"",
"",
false,
},
{
"single cpuset",
[3]string{"0", "", ""},
[3]string{"", "", ""},
"0",
"",
false,
},
{
"two duplicate cpuset",
[3]string{"0", "0", ""},
[3]string{"", "", ""},
"0",
"",
false,
},
{
"3 cpusets",
[3]string{"0-3", "5-7", "1"},
[3]string{"", "", ""},
"0-3,5-7",
"",
false,
},
{
"weird, but should be okay",
[3]string{"0-3", "99999", ""},
[3]string{"", "", ""},
"0-3,99999",
"",
false,
},
{
"two, overlapping cpuset",
[3]string{"0-3", "1-2", ""},
[3]string{"", "", ""},
"0-3",
"",
false,
},
{
"garbage, should fail",
[3]string{"7 beard-seconds", "Audrey + 7", "Elliott - 17"},
[3]string{"", "", ""},
"",
"",
true,
},
{
"cpuset and memset",
[3]string{"0-3", "1-2", ""},
[3]string{"0", "1", "0-1"},
"0-3",
"0-1",
false,
},
{
"memset",
[3]string{"0-3", "1-2", ""},
[3]string{"0", "3", ""},
"0-3",
"0,3",
false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := getSimpleSandbox(tt.cpusets, tt.memsets)
res, _, err := s.getSandboxCPUSet()
if (err != nil) != tt.wantErr {
t.Errorf("getSandboxCPUSet() error = %v, wantErr %v", err, tt.wantErr)
}
if res != tt.cpuResult {
t.Errorf("getSandboxCPUSet() result = %s, wanted result %s", res, tt.cpuResult)
}
})
}
}
View File
@ -6,7 +6,7 @@
default: build default: build
build: build:
cargo build -v RUSTFLAGS="--deny warnings" cargo build -v
clean: clean:
cargo clean cargo clean
View File
@ -1,20 +1,5 @@
# This file is automatically @generated by Cargo. # This file is automatically @generated by Cargo.
# It is not intended for manual editing. # It is not intended for manual editing.
[[package]]
name = "addr2line"
version = "0.12.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "602d785912f476e480434627e8732e6766b760c045bbf897d9dfaa9f4fbd399c"
dependencies = [
"gimli",
]
[[package]]
name = "adler32"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "567b077b825e468cc974f0020d4082ee6e03132512f207ef1a02fd5d00d1f32d"
[[package]] [[package]]
name = "aho-corasick" name = "aho-corasick"
version = "0.7.13" version = "0.7.13"
@ -35,9 +20,9 @@ dependencies = [
[[package]] [[package]]
name = "anyhow" name = "anyhow"
version = "1.0.31" version = "1.0.32"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85bb70cc08ec97ca5450e6eba421deeea5f172c0fc61f78b5357b2a8e8be195f" checksum = "6b602bfe940d21c130f3895acd65221e8a61270debe89d628b9cb4e3ccb8569b"
[[package]] [[package]]
name = "arc-swap" name = "arc-swap"
@ -74,20 +59,6 @@ version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f8aac770f1885fd7e387acedd76065302551364496e46b3dd00860b2f8359b9d" checksum = "f8aac770f1885fd7e387acedd76065302551364496e46b3dd00860b2f8359b9d"
[[package]]
name = "backtrace"
version = "0.3.49"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "05100821de9e028f12ae3d189176b41ee198341eb8f369956407fea2f5cc666c"
dependencies = [
"addr2line",
"cfg-if",
"libc",
"miniz_oxide",
"object",
"rustc-demangle",
]
[[package]] [[package]]
name = "base64" name = "base64"
version = "0.11.0" version = "0.11.0"
@ -140,6 +111,17 @@ version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4785bdd1c96b2a846b2bd7cc02e86b6b3dbf14e7e53446c4f54c92a361040822" checksum = "4785bdd1c96b2a846b2bd7cc02e86b6b3dbf14e7e53446c4f54c92a361040822"
[[package]]
name = "cgroups"
version = "0.1.1-alpha.0"
source = "git+https://github.com/kata-containers/cgroups-rs?branch=stable-0.1.1#8717524f2c95aacd30768b6f0f7d7f2fddef5cac"
dependencies = [
"libc",
"log",
"nix 0.18.0",
"regex",
]
[[package]] [[package]]
name = "chrono" name = "chrono"
version = "0.4.11" version = "0.4.11"
@ -240,7 +222,6 @@ version = "0.12.2"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d371106cc88ffdfb1eabd7111e432da544f16f3e2d7bf1dfe8bf575f1df045cd" checksum = "d371106cc88ffdfb1eabd7111e432da544f16f3e2d7bf1dfe8bf575f1df045cd"
dependencies = [ dependencies = [
"backtrace",
"version_check", "version_check",
] ]
@ -267,12 +248,6 @@ dependencies = [
"wasi", "wasi",
] ]
[[package]]
name = "gimli"
version = "0.21.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bcc8e0c9bce37868955864dbecd2b1ab2bdf967e6f28066d65aaac620444b65c"
[[package]] [[package]]
name = "hermit-abi" name = "hermit-abi"
version = "0.1.14" version = "0.1.14"
@ -282,6 +257,12 @@ dependencies = [
"libc", "libc",
] ]
[[package]]
name = "hex"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "644f9158b2f133fd50f5fb3242878846d9eb792e445c893805ff0e3824006e35"
[[package]] [[package]]
name = "humantime" name = "humantime"
version = "2.0.1" version = "2.0.1"
@ -299,7 +280,9 @@ name = "kata-agent-ctl"
version = "0.0.1" version = "0.0.1"
dependencies = [ dependencies = [
"anyhow", "anyhow",
"byteorder",
"clap", "clap",
"hex",
"humantime", "humantime",
"lazy_static", "lazy_static",
"libc", "libc",
@ -325,9 +308,9 @@ checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
[[package]] [[package]]
name = "libc" name = "libc"
version = "0.2.71" version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9457b06509d27052635f90d6466700c65095fdf75409b3fbdd903e988b886f49" checksum = "2448f6066e80e3bfc792e9c98bf705b4b0fc6e8ef5b43e5889aff0eaa9c58743"
[[package]] [[package]]
name = "log" name = "log"
@ -361,15 +344,6 @@ version = "2.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3728d817d99e5ac407411fa471ff9800a778d88a24685968b36824eaf4bee400" checksum = "3728d817d99e5ac407411fa471ff9800a778d88a24685968b36824eaf4bee400"
[[package]]
name = "miniz_oxide"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "791daaae1ed6889560f8c4359194f56648355540573244a5448a83ba1ecc7435"
dependencies = [
"adler32",
]
[[package]] [[package]]
name = "nix" name = "nix"
version = "0.16.1" version = "0.16.1"
@ -396,6 +370,18 @@ dependencies = [
"void", "void",
] ]
[[package]]
name = "nix"
version = "0.18.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "83450fe6a6142ddd95fb064b746083fc4ef1705fe81f64a64e1d4b39f54a1055"
dependencies = [
"bitflags",
"cc",
"cfg-if",
"libc",
]
[[package]] [[package]]
name = "num-integer" name = "num-integer"
version = "0.1.43" version = "0.1.43"
@ -415,12 +401,6 @@ dependencies = [
"autocfg", "autocfg",
] ]
[[package]]
name = "object"
version = "0.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1ab52be62400ca80aa00285d25253d7f7c437b7375c4de678f5405d3afe82ca5"
[[package]] [[package]]
name = "oci" name = "oci"
version = "0.1.0" version = "0.1.0"
@ -463,7 +443,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "059a34f111a9dee2ce1ac2826a68b24601c4298cfeb1a587c3cb493d5ab46f52" checksum = "059a34f111a9dee2ce1ac2826a68b24601c4298cfeb1a587c3cb493d5ab46f52"
dependencies = [ dependencies = [
"libc", "libc",
"nix 0.17.0", "nix 0.18.0",
] ]
[[package]] [[package]]
@ -594,6 +574,15 @@ version = "0.6.18"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26412eb97c6b088a6997e05f69403a802a92d520de2f8e63c2b65f9e0f47c4e8" checksum = "26412eb97c6b088a6997e05f69403a802a92d520de2f8e63c2b65f9e0f47c4e8"
[[package]]
name = "remove_dir_all"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3acd125665422973a33ac9d3dd2df85edad0f4ae9b00dafb1a05e43a9f5ef8e7"
dependencies = [
"winapi",
]
[[package]] [[package]]
name = "rust-argon2" name = "rust-argon2"
version = "0.7.0" version = "0.7.0"
@ -606,19 +595,14 @@ dependencies = [
"crossbeam-utils", "crossbeam-utils",
] ]
[[package]]
name = "rustc-demangle"
version = "0.1.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4c691c0e608126e00913e33f0ccf3727d5fc84573623b8d65b2df340b5201783"
[[package]] [[package]]
name = "rustjail" name = "rustjail"
version = "0.1.0" version = "0.1.0"
dependencies = [ dependencies = [
"anyhow",
"caps", "caps",
"cgroups",
"dirs", "dirs",
"error-chain",
"lazy_static", "lazy_static",
"libc", "libc",
"nix 0.17.0", "nix 0.17.0",
@ -635,6 +619,7 @@ dependencies = [
"serde_json", "serde_json",
"slog", "slog",
"slog-scope", "slog-scope",
"tempfile",
] ]
[[package]] [[package]]
@ -759,6 +744,20 @@ version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f764005d11ee5f36500a149ace24e00e3da98b0158b3e2d53a7495660d3f4d60" checksum = "f764005d11ee5f36500a149ace24e00e3da98b0158b3e2d53a7495660d3f4d60"
[[package]]
name = "tempfile"
version = "3.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a6e24d9338a0a5be79593e2fa15a648add6138caa803e2d5bc782c371732ca9"
dependencies = [
"cfg-if",
"libc",
"rand",
"redox_syscall",
"remove_dir_all",
"winapi",
]
[[package]] [[package]]
name = "textwrap" name = "textwrap"
version = "0.11.0" version = "0.11.0"
View File
@ -17,6 +17,8 @@ oci = { path = "../../src/agent/oci" }
clap = "2.33.0" clap = "2.33.0"
lazy_static = "1.4.0" lazy_static = "1.4.0"
anyhow = "1.0.31" anyhow = "1.0.31"
hex = "0.4.2"
byteorder = "1.3.4"
logging = { path = "../../pkg/logging" } logging = { path = "../../pkg/logging" }
slog = "2.5.2" slog = "2.5.2"
View File
@ -6,7 +6,7 @@
default: build default: build
build: build:
cargo build -v RUSTFLAGS="--deny warnings" cargo build -v
clean: clean:
cargo clean cargo clean
View File
@ -3,6 +3,11 @@
* [Overview](#overview) * [Overview](#overview)
* [Audience and environment](#audience-and-environment) * [Audience and environment](#audience-and-environment)
* [Full details](#full-details) * [Full details](#full-details)
* [Code summary](#code-summary)
* [Running the tool](#running-the-tool)
* [Prerequisites](#prerequisites)
* [Connect to a real Kata Container](#connect-to-a-real-kata-container)
* [Run the tool and the agent in the same environment](#run-the-tool-and-the-agent-in-the-same-environment)
## Overview ## Overview
@ -37,3 +42,80 @@ To see some examples, run:
```sh ```sh
$ cargo run -- examples $ cargo run -- examples
``` ```
## Code summary
The table below summarises where to look to learn more about this tool,
the agent protocol, and the client and server implementations.
| Description | File | Example RPC or function | Example summary |
|-|-|-|-|
| Protocol buffers definition of the Kata Containers Agent API protocol | [`agent.proto`](../../src/agent/protocols/protos/agent.proto) | `CreateContainer` | API to create a Kata container. |
| Agent Control (client) API calls | [`src/client.rs`](src/client.rs) | `agent_cmd_container_create()` | Agent Control tool function that calls the `CreateContainer` API. |
| Agent (server) API implementations | [`rpc.rs`](../../src/agent/src/rpc.rs) | `create_container()` | Server function that implements the `CreateContainer` API. |
## Running the tool
### Prerequisites
It is necessary to create an OCI bundle to use the tool. The simplest method
is:
```sh
$ bundle_dir="bundle"
$ rootfs_dir="$bundle_dir/rootfs"
$ image="busybox"
$ mkdir -p "$rootfs_dir" && (cd "$bundle_dir" && runc spec)
$ sudo docker export $(sudo docker create "$image") | tar -C "$rootfs_dir" -xvf -
```
### Connect to a real Kata Container
1. Start a Kata Container
1. Establish the VSOCK guest CID number for the virtual machine:
Assuming you are running a single QEMU based Kata Container, you can look
at the program arguments to find the (randomly-generated) `guest-cid=` option
value:
```sh
$ guest_cid=$(ps -ef | grep qemu-system-x86_64 | egrep -o "guest-cid=[^,][^,]*" | cut -d= -f2)
```
1. Run the tool to connect to the agent:
```sh
$ cargo run -- -l debug connect --bundle-dir "${bundle_dir}" --server-address "vsock://${guest_cid}:1024" -c Check -c GetGuestDetails
```
This example makes two API calls:
- It runs `Check` to see if the agent's RPC server is serving.
- It then runs `GetGuestDetails` to establish some details of the
environment the agent is running in.
### Run the tool and the agent in the same environment
> **Warnings:**
>
> - This method is **only** for testing and development!
> - Only continue if you are using a non-critical system
> (such as a freshly installed VM environment).
1. Start the agent, specifying a local socket for it to communicate on:
```sh
$ sudo KATA_AGENT_SERVER_ADDR=unix:///tmp/foo.socket target/x86_64-unknown-linux-musl/release/kata-agent
```
1. Run the tool in the same environment:
```sh
$ cargo run -- -l debug connect --server-address "unix://@/tmp/foo.socket" --bundle-dir "$bundle_dir" -c Check -c GetGuestDetails
```
> **Note:**
>
> The `@` in the server address is required - it denotes an abstract
> socket which the agent requires (see `unix(7)`).
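
A hedged sketch of how such an abstract-namespace address can be built with the `nix` crate the client already imports (`connect_abstract` is a hypothetical helper; the exact `nix` API details are an assumption for illustration, not copied from the tool):

```rust
use nix::sys::socket::{connect, socket, AddressFamily, SockAddr, SockFlag, SockType, UnixAddr};
use std::os::unix::io::RawFd;

// Connect to an abstract unix socket such as "@/tmp/foo.socket".
// The leading '@' is only a naming convention: the kernel identifies the
// socket by name and no filesystem entry is created (see unix(7)).
fn connect_abstract(address: &str) -> nix::Result<RawFd> {
    let name = address.trim_start_matches('@');
    let fd = socket(AddressFamily::Unix, SockType::Stream, SockFlag::empty(), None)?;
    let addr = SockAddr::Unix(UnixAddr::new_abstract(name.as_bytes())?);
    connect(fd, &addr)?;
    Ok(fd)
}
```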
View File
@ -8,6 +8,7 @@
use crate::types::{Config, Options}; use crate::types::{Config, Options};
use crate::utils; use crate::utils;
use anyhow::{anyhow, Result}; use anyhow::{anyhow, Result};
use byteorder::ByteOrder;
use nix::sys::socket::{connect, socket, AddressFamily, SockAddr, SockFlag, SockType, UnixAddr}; use nix::sys::socket::{connect, socket, AddressFamily, SockAddr, SockFlag, SockType, UnixAddr};
use protocols::agent::*; use protocols::agent::*;
use protocols::agent_ttrpc::*; use protocols::agent_ttrpc::*;
@ -75,6 +76,11 @@ const DEFAULT_PS_FORMAT: &str = "json";
const ERR_API_FAILED: &str = "API failed"; const ERR_API_FAILED: &str = "API failed";
static AGENT_CMDS: &'static [AgentCmd] = &[ static AGENT_CMDS: &'static [AgentCmd] = &[
AgentCmd {
name: "AddARPNeighbors",
st: ServiceType::Agent,
fp: agent_cmd_sandbox_add_arp_neighbors,
},
AgentCmd { AgentCmd {
name: "Check", name: "Check",
st: ServiceType::Health, st: ServiceType::Health,
@ -85,6 +91,16 @@ static AGENT_CMDS: &'static [AgentCmd] = &[
st: ServiceType::Health, st: ServiceType::Health,
fp: agent_cmd_health_version, fp: agent_cmd_health_version,
}, },
AgentCmd {
name: "CloseStdin",
st: ServiceType::Agent,
fp: agent_cmd_container_close_stdin,
},
AgentCmd {
name: "CopyFile",
st: ServiceType::Agent,
fp: agent_cmd_sandbox_copy_file,
},
AgentCmd { AgentCmd {
name: "CreateContainer", name: "CreateContainer",
st: ServiceType::Agent, st: ServiceType::Agent,
@ -106,9 +122,19 @@ static AGENT_CMDS: &'static [AgentCmd] = &[
fp: agent_cmd_container_exec, fp: agent_cmd_container_exec,
}, },
AgentCmd { AgentCmd {
name: "GuestDetails", name: "GetGuestDetails",
st: ServiceType::Agent, st: ServiceType::Agent,
fp: agent_cmd_sandbox_guest_details, fp: agent_cmd_sandbox_get_guest_details,
},
AgentCmd {
name: "GetMetrics",
st: ServiceType::Agent,
fp: agent_cmd_sandbox_get_metrics,
},
AgentCmd {
name: "GetOOMEvent",
st: ServiceType::Agent,
fp: agent_cmd_sandbox_get_oom_event,
}, },
AgentCmd { AgentCmd {
name: "ListInterfaces", name: "ListInterfaces",
@ -125,11 +151,36 @@ static AGENT_CMDS: &'static [AgentCmd] = &[
st: ServiceType::Agent, st: ServiceType::Agent,
fp: agent_cmd_container_list_processes, fp: agent_cmd_container_list_processes,
}, },
AgentCmd {
name: "MemHotplugByProbe",
st: ServiceType::Agent,
fp: agent_cmd_sandbox_mem_hotplug_by_probe,
},
AgentCmd {
name: "OnlineCPUMem",
st: ServiceType::Agent,
fp: agent_cmd_sandbox_online_cpu_mem,
},
AgentCmd { AgentCmd {
name: "PauseContainer", name: "PauseContainer",
st: ServiceType::Agent, st: ServiceType::Agent,
fp: agent_cmd_container_pause, fp: agent_cmd_container_pause,
}, },
AgentCmd {
name: "ReadStderr",
st: ServiceType::Agent,
fp: agent_cmd_container_read_stderr,
},
AgentCmd {
name: "ReadStdout",
st: ServiceType::Agent,
fp: agent_cmd_container_read_stdout,
},
AgentCmd {
name: "ReseedRandomDev",
st: ServiceType::Agent,
fp: agent_cmd_sandbox_reseed_random_dev,
},
AgentCmd { AgentCmd {
name: "RemoveContainer", name: "RemoveContainer",
st: ServiceType::Agent, st: ServiceType::Agent,
@ -140,6 +191,11 @@ static AGENT_CMDS: &'static [AgentCmd] = &[
st: ServiceType::Agent, st: ServiceType::Agent,
fp: agent_cmd_container_resume, fp: agent_cmd_container_resume,
}, },
AgentCmd {
name: "SetGuestDateTime",
st: ServiceType::Agent,
fp: agent_cmd_sandbox_set_guest_date_time,
},
AgentCmd { AgentCmd {
name: "SignalProcess", name: "SignalProcess",
st: ServiceType::Agent, st: ServiceType::Agent,
@ -165,6 +221,16 @@ static AGENT_CMDS: &'static [AgentCmd] = &[
st: ServiceType::Agent, st: ServiceType::Agent,
fp: agent_cmd_sandbox_tracing_stop, fp: agent_cmd_sandbox_tracing_stop,
}, },
AgentCmd {
name: "TtyWinResize",
st: ServiceType::Agent,
fp: agent_cmd_container_tty_win_resize,
},
AgentCmd {
name: "UpdateContainer",
st: ServiceType::Agent,
fp: agent_cmd_sandbox_update_container,
},
AgentCmd { AgentCmd {
name: "UpdateInterface", name: "UpdateInterface",
st: ServiceType::Agent, st: ServiceType::Agent,
@ -180,6 +246,11 @@ static AGENT_CMDS: &'static [AgentCmd] = &[
st: ServiceType::Agent, st: ServiceType::Agent,
fp: agent_cmd_container_wait_process, fp: agent_cmd_container_wait_process,
}, },
AgentCmd {
name: "WriteStdin",
st: ServiceType::Agent,
fp: agent_cmd_container_write_stdin,
},
]; ];
static BUILTIN_CMDS: &'static [BuiltinCmd] = &[ static BUILTIN_CMDS: &'static [BuiltinCmd] = &[
@ -684,6 +755,8 @@ fn agent_cmd_health_check(
// value unused // value unused
req.set_service("".to_string()); req.set_service("".to_string());
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = health let reply = health
.check(&req, cfg.timeout_nano) .check(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -707,6 +780,8 @@ fn agent_cmd_health_version(
// value unused // value unused
req.set_service("".to_string()); req.set_service("".to_string());
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = health let reply = health
.version(&req, cfg.timeout_nano) .version(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -729,6 +804,8 @@ fn agent_cmd_sandbox_create(
let sid = utils::get_option("sid", options, args); let sid = utils::get_option("sid", options, args);
req.set_sandbox_id(sid); req.set_sandbox_id(sid);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.create_sandbox(&req, cfg.timeout_nano) .create_sandbox(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -748,6 +825,8 @@ fn agent_cmd_sandbox_destroy(
) -> Result<()> { ) -> Result<()> {
let req = DestroySandboxRequest::default(); let req = DestroySandboxRequest::default();
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.destroy_sandbox(&req, cfg.timeout_nano) .destroy_sandbox(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -778,6 +857,8 @@ fn agent_cmd_container_create(
req.set_exec_id(exec_id); req.set_exec_id(exec_id);
req.set_OCI(grpc_spec); req.set_OCI(grpc_spec);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.create_container(&req, cfg.timeout_nano) .create_container(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -801,6 +882,8 @@ fn agent_cmd_container_remove(
req.set_container_id(cid); req.set_container_id(cid);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.remove_container(&req, cfg.timeout_nano) .remove_container(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -838,6 +921,8 @@ fn agent_cmd_container_exec(
req.set_exec_id(exec_id); req.set_exec_id(exec_id);
req.set_process(process); req.set_process(process);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.exec_process(&req, cfg.timeout_nano) .exec_process(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -861,6 +946,8 @@ fn agent_cmd_container_stats(
req.set_container_id(cid); req.set_container_id(cid);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.stats_container(&req, cfg.timeout_nano) .stats_container(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -884,6 +971,8 @@ fn agent_cmd_container_pause(
req.set_container_id(cid); req.set_container_id(cid);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.pause_container(&req, cfg.timeout_nano) .pause_container(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -907,6 +996,8 @@ fn agent_cmd_container_resume(
req.set_container_id(cid); req.set_container_id(cid);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.resume_container(&req, cfg.timeout_nano) .resume_container(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -930,6 +1021,8 @@ fn agent_cmd_container_start(
req.set_container_id(cid); req.set_container_id(cid);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.start_container(&req, cfg.timeout_nano) .start_container(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -940,7 +1033,7 @@ fn agent_cmd_container_start(
Ok(()) Ok(())
} }
fn agent_cmd_sandbox_guest_details( fn agent_cmd_sandbox_get_guest_details(
cfg: &Config, cfg: &Config,
client: &AgentServiceClient, client: &AgentServiceClient,
_health: &HealthClient, _health: &HealthClient,
@ -951,6 +1044,8 @@ fn agent_cmd_sandbox_guest_details(
req.set_mem_block_size(true); req.set_mem_block_size(true);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.get_guest_details(&req, cfg.timeout_nano) .get_guest_details(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -981,6 +1076,8 @@ fn agent_cmd_container_list_processes(
req.set_container_id(cid); req.set_container_id(cid);
req.set_format(list_format); req.set_format(list_format);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.list_processes(&req, cfg.timeout_nano) .list_processes(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -1006,6 +1103,8 @@ fn agent_cmd_container_wait_process(
req.set_container_id(cid); req.set_container_id(cid);
req.set_exec_id(exec_id); req.set_exec_id(exec_id);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.wait_process(&req, cfg.timeout_nano) .wait_process(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -1041,6 +1140,8 @@ fn agent_cmd_container_signal_process(
req.set_exec_id(exec_id); req.set_exec_id(exec_id);
req.set_signal(signum as u32); req.set_signal(signum as u32);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.signal_process(&req, cfg.timeout_nano) .signal_process(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -1060,6 +1161,8 @@ fn agent_cmd_sandbox_tracing_start(
) -> Result<()> { ) -> Result<()> {
let req = StartTracingRequest::default(); let req = StartTracingRequest::default();
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.start_tracing(&req, cfg.timeout_nano) .start_tracing(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -1079,6 +1182,8 @@ fn agent_cmd_sandbox_tracing_stop(
) -> Result<()> { ) -> Result<()> {
let req = StopTracingRequest::default(); let req = StopTracingRequest::default();
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.stop_tracing(&req, cfg.timeout_nano) .stop_tracing(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -1098,6 +1203,7 @@ fn agent_cmd_sandbox_update_interface(
) -> Result<()> { ) -> Result<()> {
let req = UpdateInterfaceRequest::default(); let req = UpdateInterfaceRequest::default();
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.update_interface(&req, cfg.timeout_nano) .update_interface(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -1105,9 +1211,6 @@ fn agent_cmd_sandbox_update_interface(
// FIXME: Implement 'UpdateInterface' fully. // FIXME: Implement 'UpdateInterface' fully.
eprintln!("FIXME: 'UpdateInterface' not fully implemented"); eprintln!("FIXME: 'UpdateInterface' not fully implemented");
// let if = ...;
// req.set_interface(if);
info!(sl!(), "response received"; info!(sl!(), "response received";
"response" => format!("{:?}", reply)); "response" => format!("{:?}", reply));
@ -1123,6 +1226,8 @@ fn agent_cmd_sandbox_update_routes(
) -> Result<()> { ) -> Result<()> {
let req = UpdateRoutesRequest::default(); let req = UpdateRoutesRequest::default();
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.update_routes(&req, cfg.timeout_nano) .update_routes(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -1130,9 +1235,6 @@ fn agent_cmd_sandbox_update_routes(
// FIXME: Implement 'UpdateRoutes' fully. // FIXME: Implement 'UpdateRoutes' fully.
eprintln!("FIXME: 'UpdateRoutes' not fully implemented"); eprintln!("FIXME: 'UpdateRoutes' not fully implemented");
// let routes = ...;
// req.set_routes(routes);
info!(sl!(), "response received"; info!(sl!(), "response received";
"response" => format!("{:?}", reply)); "response" => format!("{:?}", reply));
@ -1148,6 +1250,8 @@ fn agent_cmd_sandbox_list_interfaces(
) -> Result<()> { ) -> Result<()> {
let req = ListInterfacesRequest::default(); let req = ListInterfacesRequest::default();
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.list_interfaces(&req, cfg.timeout_nano) .list_interfaces(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -1167,6 +1271,8 @@ fn agent_cmd_sandbox_list_routes(
) -> Result<()> { ) -> Result<()> {
let req = ListRoutesRequest::default(); let req = ListRoutesRequest::default();
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client let reply = client
.list_routes(&req, cfg.timeout_nano) .list_routes(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?; .map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
@ -1177,9 +1283,531 @@ fn agent_cmd_sandbox_list_routes(
Ok(()) Ok(())
} }
fn agent_cmd_container_tty_win_resize(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
options: &mut Options,
args: &str,
) -> Result<()> {
let mut req = TtyWinResizeRequest::default();
let cid = utils::get_option("cid", options, args);
let exec_id = utils::get_option("exec_id", options, args);
req.set_container_id(cid);
req.set_exec_id(exec_id);
let rows_str = utils::get_option("row", options, args);
if rows_str != "" {
let rows = rows_str
.parse::<u32>()
.map_err(|e| anyhow!(e).context("invalid row size"))?;
req.set_row(rows);
}
let cols_str = utils::get_option("column", options, args);
if cols_str != "" {
let cols = cols_str
.parse::<u32>()
.map_err(|e| anyhow!(e).context("invalid column size"))?;
req.set_column(cols);
}
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.tty_win_resize(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_container_close_stdin(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
options: &mut Options,
args: &str,
) -> Result<()> {
let mut req = CloseStdinRequest::default();
let cid = utils::get_option("cid", options, args);
let exec_id = utils::get_option("exec_id", options, args);
req.set_container_id(cid);
req.set_exec_id(exec_id);
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.close_stdin(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_container_read_stdout(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
options: &mut Options,
args: &str,
) -> Result<()> {
let mut req = ReadStreamRequest::default();
let cid = utils::get_option("cid", options, args);
let exec_id = utils::get_option("exec_id", options, args);
req.set_container_id(cid);
req.set_exec_id(exec_id);
let length_str = utils::get_option("len", options, args);
if length_str != "" {
let length = length_str
.parse::<u32>()
.map_err(|e| anyhow!(e).context("invalid length"))?;
req.set_len(length);
}
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.read_stdout(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_container_read_stderr(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
options: &mut Options,
args: &str,
) -> Result<()> {
let mut req = ReadStreamRequest::default();
let cid = utils::get_option("cid", options, args);
let exec_id = utils::get_option("exec_id", options, args);
req.set_container_id(cid);
req.set_exec_id(exec_id);
let length_str = utils::get_option("len", options, args);
if length_str != "" {
let length = length_str
.parse::<u32>()
.map_err(|e| anyhow!(e).context("invalid length"))?;
req.set_len(length);
}
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.read_stderr(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_container_write_stdin(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
options: &mut Options,
args: &str,
) -> Result<()> {
let mut req = WriteStreamRequest::default();
let cid = utils::get_option("cid", options, args);
let exec_id = utils::get_option("exec_id", options, args);
let str_data = utils::get_option("data", options, args);
let data = utils::str_to_bytes(&str_data)?;
req.set_container_id(cid);
req.set_exec_id(exec_id);
req.set_data(data.to_vec());
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.write_stdin(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_sandbox_get_metrics(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
_options: &mut Options,
_args: &str,
) -> Result<()> {
let req = GetMetricsRequest::default();
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.get_metrics(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_sandbox_get_oom_event(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
_options: &mut Options,
_args: &str,
) -> Result<()> {
let req = GetOOMEventRequest::default();
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.get_oom_event(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_sandbox_copy_file(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
options: &mut Options,
args: &str,
) -> Result<()> {
let mut req = CopyFileRequest::default();
let path = utils::get_option("path", options, args);
if path != "" {
req.set_path(path);
}
let file_size_str = utils::get_option("file_size", options, args);
if file_size_str != "" {
let file_size = file_size_str
.parse::<i64>()
.map_err(|e| anyhow!(e).context("invalid file_size"))?;
req.set_file_size(file_size);
}
let file_mode_str = utils::get_option("file_mode", options, args);
if file_mode_str != "" {
let file_mode = file_mode_str
.parse::<u32>()
.map_err(|e| anyhow!(e).context("invalid file_mode"))?;
req.set_file_mode(file_mode);
}
let dir_mode_str = utils::get_option("dir_mode", options, args);
if dir_mode_str != "" {
let dir_mode = dir_mode_str
.parse::<u32>()
.map_err(|e| anyhow!(e).context("invalid dir_mode"))?;
req.set_dir_mode(dir_mode);
}
let uid_str = utils::get_option("uid", options, args);
if uid_str != "" {
let uid = uid_str
.parse::<i32>()
.map_err(|e| anyhow!(e).context("invalid uid"))?;
req.set_uid(uid);
}
let gid_str = utils::get_option("gid", options, args);
if gid_str != "" {
let gid = gid_str
.parse::<i32>()
.map_err(|e| anyhow!(e).context("invalid gid"))?;
req.set_gid(gid);
}
let offset_str = utils::get_option("offset", options, args);
if offset_str != "" {
let offset = offset_str
.parse::<i64>()
.map_err(|e| anyhow!(e).context("invalid offset"))?;
req.set_offset(offset);
}
let data_str = utils::get_option("data", options, args);
if data_str != "" {
let data = utils::str_to_bytes(&data_str)?;
req.set_data(data.to_vec());
}
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.copy_file(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_sandbox_reseed_random_dev(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
options: &mut Options,
args: &str,
) -> Result<()> {
let mut req = ReseedRandomDevRequest::default();
let str_data = utils::get_option("data", options, args);
let data = utils::str_to_bytes(&str_data)?;
req.set_data(data.to_vec());
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.reseed_random_dev(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_sandbox_online_cpu_mem(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
options: &mut Options,
args: &str,
) -> Result<()> {
let mut req = OnlineCPUMemRequest::default();
let wait_str = utils::get_option("wait", options, args);
if wait_str != "" {
let wait = wait_str
.parse::<bool>()
.map_err(|e| anyhow!(e).context("invalid wait bool"))?;
req.set_wait(wait);
}
let nb_cpus_str = utils::get_option("nb_cpus", options, args);
if nb_cpus_str != "" {
let nb_cpus = nb_cpus_str
.parse::<u32>()
.map_err(|e| anyhow!(e).context("invalid nb_cpus value"))?;
req.set_nb_cpus(nb_cpus);
}
let cpu_only_str = utils::get_option("cpu_only", options, args);
if cpu_only_str != "" {
let cpu_only = cpu_only_str
.parse::<bool>()
.map_err(|e| anyhow!(e).context("invalid cpu_only bool"))?;
req.set_cpu_only(cpu_only);
}
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.online_cpu_mem(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_sandbox_set_guest_date_time(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
options: &mut Options,
args: &str,
) -> Result<()> {
let mut req = SetGuestDateTimeRequest::default();
let secs_str = utils::get_option("sec", options, args);
if secs_str != "" {
let secs = secs_str
.parse::<i64>()
.map_err(|e| anyhow!(e).context("invalid seconds"))?;
req.set_Sec(secs);
}
let usecs_str = utils::get_option("usec", options, args);
if usecs_str != "" {
let usecs = usecs_str
.parse::<i64>()
.map_err(|e| anyhow!(e).context("invalid useconds"))?;
req.set_Usec(usecs);
}
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.set_guest_date_time(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_sandbox_add_arp_neighbors(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
_options: &mut Options,
_args: &str,
) -> Result<()> {
let req = AddARPNeighborsRequest::default();
// FIXME: Implement fully.
eprintln!("FIXME: 'AddARPNeighbors' not fully implemented");
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.add_arp_neighbors(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_sandbox_update_container(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
options: &mut Options,
args: &str,
) -> Result<()> {
let mut req = UpdateContainerRequest::default();
let cid = utils::get_option("cid", options, args);
req.set_container_id(cid);
// FIXME: Implement fully
eprintln!("FIXME: 'UpdateContainer' not fully implemented");
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.update_container(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
fn agent_cmd_sandbox_mem_hotplug_by_probe(
cfg: &Config,
client: &AgentServiceClient,
_health: &HealthClient,
options: &mut Options,
args: &str,
) -> Result<()> {
let mut req = MemHotplugByProbeRequest::default();
// Expected to be a comma separated list of hex addresses
let addr_list = utils::get_option("memHotplugProbeAddr", options, args);
if addr_list != "" {
let addrs: Vec<u64> = addr_list
// Convert into a list of string values.
.split(",")
// Convert each string element into a u8 array of bytes, ignoring
// those elements that fail the conversion.
.filter_map(|s| hex::decode(s.trim_start_matches("0x")).ok())
// "Stretch" the u8 byte slice into one of length 8
// (to allow each 8 byte chunk to be converted into a u64).
.map(|mut v| -> Vec<u8> {
v.resize(8, 0x0);
v
})
// Convert the slice of u8 bytes into a u64
.map(|b| byteorder::LittleEndian::read_u64(&b))
.collect();
req.set_memHotplugProbeAddr(addrs);
}
debug!(sl!(), "sending request"; "request" => format!("{:?}", req));
let reply = client
.mem_hotplug_by_probe(&req, cfg.timeout_nano)
.map_err(|e| anyhow!("{:?}", e).context(ERR_API_FAILED))?;
info!(sl!(), "response received";
"response" => format!("{:?}", reply));
Ok(())
}
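
The address conversion above does three things per element: hex-decode, pad to eight bytes, then read the result as a little-endian `u64`. A standalone sketch of that pipeline (assuming the `hex` and `byteorder` crates added in this change; `probe_addrs` is a hypothetical name). Note that the little-endian read means multi-byte textual values come out byte-reversed:

```rust
use byteorder::ByteOrder;

// Mirror of the conversion used by MemHotplugByProbe above.
fn probe_addrs(addr_list: &str) -> Vec<u64> {
    addr_list
        .split(',')
        // Decode each element's hex digits into raw bytes; drop elements
        // that are not valid (even-length) hex.
        .filter_map(|s| hex::decode(s.trim().trim_start_matches("0x")).ok())
        // Pad to 8 bytes so each chunk can be read as a u64.
        .map(|mut v| {
            v.resize(8, 0x0);
            v
        })
        // Interpret the padded bytes little-endian.
        .map(|b| byteorder::LittleEndian::read_u64(&b))
        .collect()
}

fn main() {
    assert_eq!(probe_addrs("0x01"), vec![0x01]);
    // Bytes decode in textual order but are read little-endian, so
    // "0x0102" yields 0x0201; invalid elements ("bogus") are skipped.
    assert_eq!(probe_addrs("0x0102,bogus"), vec![0x0201]);
}
```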
#[inline] #[inline]
fn builtin_cmd_repeat(_cfg: &Config, _options: &mut Options, _args: &str) -> (Result<()>, bool) { fn builtin_cmd_repeat(_cfg: &Config, _options: &mut Options, _args: &str) -> (Result<()>, bool) {
// XXX: NOP implementation. Due to the way repeat has to work, providing // XXX: NOP implementation. Due to the way repeat has to work, providing a
// handler like this is "too late" to be useful. However, a handler // handler like this is "too late" to be useful. However, a handler
// is required as "repeat" is a valid command. // is required as "repeat" is a valid command.
// //
View File
@ -65,7 +65,7 @@ fn make_examples_text(program_name: &str) -> String {
- Query the agent environment: - Query the agent environment:
$ {program} connect --server-address "{vsock_server_address}" --cmd GuestDetails $ {program} connect --server-address "{vsock_server_address}" --cmd GetGuestDetails
- List all available (built-in and Kata Agent API) commands: - List all available (built-in and Kata Agent API) commands:
@ -85,7 +85,7 @@ fn make_examples_text(program_name: &str) -> String {
- Query guest details forever: - Query guest details forever:
$ {program} connect --server-address "{vsock_server_address}" --repeat -1 --cmd GuestDetails $ {program} connect --server-address "{vsock_server_address}" --repeat -1 --cmd GetGuestDetails
- Send a 'SIGUSR1' signal to a container process: - Send a 'SIGUSR1' signal to a container process:
View File
@ -8,8 +8,7 @@ use anyhow::{anyhow, Result};
use oci::{Process as ociProcess, Root as ociRoot, Spec as ociSpec}; use oci::{Process as ociProcess, Root as ociRoot, Spec as ociSpec};
use protocols::oci::{ use protocols::oci::{
Box as grpcBox, Linux as grpcLinux, LinuxCapabilities as grpcLinuxCapabilities, Box as grpcBox, Linux as grpcLinux, LinuxCapabilities as grpcLinuxCapabilities,
POSIXRlimit as grpcPOSIXRlimit, Process as grpcProcess, Root as grpcRoot, Spec as grpcSpec, Process as grpcProcess, Root as grpcRoot, Spec as grpcSpec, User as grpcUser,
User as grpcUser,
}; };
use rand::Rng; use rand::Rng;
use slog::{debug, warn}; use slog::{debug, warn};
@ -409,3 +408,17 @@ pub fn get_grpc_spec(options: &mut Options, cid: &str) -> Result<grpcSpec> {
Ok(oci_to_grpc(&bundle_dir, cid, &oci_spec)?) Ok(oci_to_grpc(&bundle_dir, cid, &oci_spec)?)
} }
pub fn str_to_bytes(s: &str) -> Result<Vec<u8>> {
let prefix = "hex:";
if s.starts_with(prefix) {
let hex_str = s.trim_start_matches(prefix);
let decoded = hex::decode(hex_str).map_err(|e| anyhow!(e))?;
Ok(decoded)
} else {
Ok(s.as_bytes().to_vec())
}
}
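
A short usage sketch of `str_to_bytes` (a self-contained copy of the helper above; assumes the `anyhow` and `hex` crates from the tool's `Cargo.toml`):

```rust
use anyhow::{anyhow, Result};

pub fn str_to_bytes(s: &str) -> Result<Vec<u8>> {
    let prefix = "hex:";
    if s.starts_with(prefix) {
        let hex_str = s.trim_start_matches(prefix);
        let decoded = hex::decode(hex_str).map_err(|e| anyhow!(e))?;
        Ok(decoded)
    } else {
        Ok(s.as_bytes().to_vec())
    }
}

fn main() -> Result<()> {
    // "hex:"-prefixed input is decoded from hex digits...
    assert_eq!(str_to_bytes("hex:4142")?, vec![0x41, 0x42]); // bytes of "AB"
    // ...anything else passes through as raw UTF-8 bytes.
    assert_eq!(str_to_bytes("AB")?, b"AB".to_vec());
    Ok(())
}
```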
View File
@ -559,6 +559,7 @@ EOT
[ "$ARCH" == "aarch64" ] && export PATH=$OLD_PATH && rm -rf /usr/local/musl [ "$ARCH" == "aarch64" ] && export PATH=$OLD_PATH && rm -rf /usr/local/musl
popd popd
else else
mkdir -p ${AGENT_DIR}
cp ${AGENT_SOURCE_BIN} ${AGENT_DEST} cp ${AGENT_SOURCE_BIN} ${AGENT_DEST}
OK "cp ${AGENT_SOURCE_BIN} ${AGENT_DEST}" OK "cp ${AGENT_SOURCE_BIN} ${AGENT_DEST}"
fi fi
View File
@ -66,7 +66,7 @@ function run_test() {
cmd="kubectl get pods | grep $busybox_pod | grep Completed" cmd="kubectl get pods | grep $busybox_pod | grep Completed"
wait_time=120 wait_time=120
configurations=("nginx-deployment-qemu" "nginx-deployment-qemu-virtiofs" "nginx-deployment-clh") configurations=("nginx-deployment-qemu" "nginx-deployment-clh")
for deployment in "${configurations[@]}"; do for deployment in "${configurations[@]}"; do
# start the kata pod: # start the kata pod:
kubectl apply -f "$YAMLPATH/examples/${deployment}.yaml" kubectl apply -f "$YAMLPATH/examples/${deployment}.yaml"
View File
@ -1,20 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment-qemu-virtiofs
spec:
selector:
matchLabels:
app: nginx
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
runtimeClassName: kata-qemu-virtiofs
containers:
- name: nginx
image: nginx:1.14
ports:
- containerPort: 80
View File
@ -16,7 +16,6 @@ containerd_conf_file_backup="${containerd_conf_file}.bak"
shims=( shims=(
"fc" "fc"
"qemu" "qemu"
"qemu-virtiofs"
"clh" "clh"
) )
View File
@ -63,25 +63,25 @@ $ ./build-kernel.sh -v 4.19.86 -g nvidia -f -d setup
> **Note** > **Note**
> - `-v 4.19.86`: Specify the guest kernel version. > - `-v 4.19.86`: Specify the guest kernel version.
> - `-g nvidia`: To build a guest kernel supporting Nvidia GPU. > - `-g nvidia`: To build a guest kernel supporting Nvidia GPU.
> - `-f`: The .config file is forced to be generated even if the kernel directory already exists. > - `-f`: The `.config` file is forced to be generated even if the kernel directory already exists.
> - `-d`: Enable bash debug mode. > - `-d`: Enable bash debug mode.
## Setup kernel source code ## Setup kernel source code
```bash ```bash
$ go get -d -u github.com/kata-containers/packaging $ go get -d -u github.com/kata-containers/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/packaging/kernel $ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kernel
$ ./build-kernel.sh setup $ ./build-kernel.sh setup
``` ```
The script `./build-kernel.sh` tries to apply the patches from The script `./build-kernel.sh` tries to apply the patches from
`${GOPATH}/src/github.com/kata-containers/packaging/kernel/patches/` when it `${GOPATH}/src/github.com/kata-containers/kata-containers/tools/packaging/kernel/patches/` when it
sets up a kernel. If you want to add a source modification, add a patch in this sets up a kernel. If you want to add a source modification, add a patch in this
directory. directory.
The script also adds a kernel config file from The script also adds a kernel config file from
`${GOPATH}/src/github.com/kata-containers/packaging/kernel/configs/` to `.config` `${GOPATH}/src/github.com/kata-containers/kata-containers/tools/packaging/kernel/configs/` to `.config`
in the kernel source code. You can modify it as needed. in the kernel source code. You can modify it as needed.
## Build the kernel ## Build the kernel
@ -106,7 +106,7 @@ $ ./build-kernel.sh install
Kata Containers packaging repository holds the kernel configs and patches. The Kata Containers packaging repository holds the kernel configs and patches. The
config and patches can work for many versions, but we only test the config and patches can work for many versions, but we only test the
kernel version defined in the [runtime versions file][runtime-versions-file]. kernel version defined in the [Kata Containers versions file][kata-containers-versions-file].
For further details, see [the kernel configuration documentation](configs). For further details, see [the kernel configuration documentation](configs).
@ -115,33 +115,33 @@ For further details, see [the kernel configuration documentation](configs).
The Kata Containers CI scripts install the kernel from [CI cache The Kata Containers CI scripts install the kernel from [CI cache
job][cache-job] or build from sources. job][cache-job] or build from sources.
If the kernel defined in the [runtime versions file][runtime-versions-file] is If the kernel defined in the [Kata Containers versions file][kata-containers-versions-file] is
built and cached with the latest kernel config and patches, it installs. built and cached with the latest kernel config and patches, it installs.
Otherwise, the kernel is built from source. Otherwise, the kernel is built from source.
The Kata kernel version is a mix of the kernel version defined in the [runtime The Kata kernel version is a mix of the kernel version defined in the [Kata Containers
versions file][runtime-versions-file] and the file `kata_config_version`. This versions file][kata-containers-versions-file] and the file `kata_config_version`. This
helps to identify if a kernel build has the latest recommended helps to identify if a kernel build has the latest recommended
configuration. configuration.
Example: Example:
```bash ```bash
# From https://github.com/kata-containers/runtime/blob/master/versions.yaml # From https://github.com/kata-containers/kata-containers/blob/2.0-dev/versions.yaml
$ kernel_version_in_versions_file=4.10.1 $ kernel_version_in_versions_file=5.4.60
# From https://github.com/kata-containers/packaging/blob/master/kernel/kata_config_version # From https://github.com/kata-containers/kata-containers/blob/2.0-dev/tools/packaging/kernel/kata_config_version
$ kata_config_version=25 $ kata_config_version=83
$ latest_kernel_version=${kernel_version_in_versions_file}-${kata_config_version} $ latest_kernel_version=${kernel_version_in_versions_file}-${kata_config_version}
``` ```
The resulting version is 4.10.1-25; this helps identify whether or not the kernel The resulting version is 5.4.60-83; this helps identify whether or not the kernel
configs are up-to-date on a CI version. configs are up-to-date on a CI version.
## Contribute ## Contribute
To make Kata kernel changes, these are the places to contribute: To make Kata kernel changes, these are the places to contribute:
1. [Kata runtime versions file][runtime-versions-file]: This file points to the 1. [Kata Containers versions file][kata-containers-versions-file]: This file points to the
recommended versions to be used by Kata. To update the kernel version send a recommended versions to be used by Kata. To update the kernel version send a
pull request to update that version. The Kata CI will run all the use cases pull request to update that version. The Kata CI will run all the use cases
and verify it works. and verify it works.
@ -174,7 +174,7 @@ In this case, the PR you submit needs to be tested together with a patch from
another Kata Containers repository. To do this you have to specify which another Kata Containers repository. To do this you have to specify which
repository and which pull request [it depends on][depends-on-docs]. repository and which pull request [it depends on][depends-on-docs].
[runtime-versions-file]: https://github.com/kata-containers/runtime/blob/master/versions.yaml [kata-containers-versions-file]: ../../../versions.yaml
[patches-dir]: https://github.com/kata-containers/packaging/tree/master/kernel/patches [patches-dir]: patches
[depends-on-docs]: https://github.com/kata-containers/tests/blob/master/README.md#breaking-compatibility [depends-on-docs]: https://github.com/kata-containers/tests/blob/master/README.md#breaking-compatibility
[cache-job]: http://jenkins.katacontainers.io/job/image-nightly-x86_64/ [cache-job]: http://jenkins.katacontainers.io/job/image-nightly-x86_64/
View File
@ -1,3 +0,0 @@
# virtio-fs support
CONFIG_VIRTIO_FS=y
CONFIG_FUSE_FS=y
Some files were not shown because too many files have changed in this diff