- runtime: tracing: Change method for adding tags (bcf3e82)
- logging: Enable agent debug output for release builds (a239a38)
- osbuilder: build image-builder image from Fedora 34 (375ad2b)
- runtime: Enhancement for Makefile (b468dc5)
- agent: Use dup3 system call in unit tests of seccomp (1aaa059)
- agent: "Revert agent: Disable seccomp feature on aarch64 temporarily" (1e331f7)
- agent: refactor process IO processing (9d3ec58)
- runtime: make sure the "Shutdown" trace span have a correct end (3f21af9)
- runtime: add fast-test to let test exit on error (9b270d7)
- ci/install_libseccomp: use a temporary work directory (98b4406)
- ci/install_libseccomp: Fix fail when DESTDIR is set (338ac87)
- virtcontainers: api: update the functions in the api.md docs (23496f9)
- release: Upload libseccomp sources with notice to release page (e610fc8)
- runtime: Remove comments about unsupported features in config for clh (7e40195)
- agent-ctl: Add stub for AddSwap API (82de838)
- agent-ctl: Update for Hybrid VSOCK (d1bcf10)
- forwarder: Remove quotes from socket path in doc (e66d047)
- virtcontainers: simplify read-only mount handling (bdf4824)
- tools/packaging: Add options for VFIO to guest kernel (c509a20)
- agent-ctl: Implement Linux OCI spec handling (42add7f)
- agent: Disable seccomp feature on aarch64 temporarily (5dfedc2)
- docs: Add explanation about seccomp (45e7c2c)
- static-checks: Add step for installing libseccomp (a3647e3)
- osbuilder: Set up libseccomp library (3be50ad)
- agent: Add support for Seccomp (4280415)
- agent: Fix the configuration sample file (b0bc71f)
- ci: test-kata-deploy: Get rid of slash-command-action action (309dae6)
- virtcontainers: check that both initrd and image are not set (a10cfff)
- forwarder: Fix changing log level (6abccb9)
- forwarder: Drop privileges when using hybrid VSOCK (bf00b8d)
- agent-ctl: improve the oci_to_grpc code (b67fa9e)
- forwarder: Make explicit root check (e377578)
- forwarder: Fix docs socket path (5f30633)
- virtcontainers: delete duplicated notify in watchHypervisor function (5f5eca6)
- agent: do not return error but print it if task wait failed (d2a7b6f)
- packaging/static-build: s390x fixes (6cc8000)
- cli: Show available guest protection in env output (2063b13)
- virtcontainers: Add func AvailableGuestProtections (a13e2f7)
- agent: Handle uevent remove actions (34273da)
- runtime/device: Allow VFIO devices to be presented to guest as VFIO devices (68696e0)
- runtime: Add parameter to constrainGRPCSpec to control VFIO handling (d9e2e9e)
- runtime: Rename constraintGRPCSpec to improve grammar (57ab408)
- runtime: Introduce "vfio_mode" config variable and annotation (730b9c4)
- agent/device: Create device nodes for VFIO devices (175f9b0)
- rustjail: Allow container devices in subdirectories (9891efc)
- rustjail: Correct sanity checks on device path (d6b62c0)
- rustjail: Change mknod_dev() and bind_dev() to take relative device path (2680c0b)
- rustjail: Provide useful context on device node creation errors (42b92b2)
- agent/device: Allow container devname to differ from the host (827a41f)
- agent/device: Refactor update_spec_device_list() (8ceadcc)
- agent/device: Sanity check guest IOMMU groups (ff59db7)
- agent/device: Add function to get IOMMU group for a PCI device (13b06a3)
- agent/device: Rebind VFIO devices to VFIO driver inside guest (e22bd78)
- agent/device: Add helper function for binding a guest device to a driver (b40eedc)
- rustjail: Consistent coding style of LinuxDevice type (57c0f93)
- agent: fix race condition when test watcher (1a96b8b)
- template: disable template unit test on arm (43b13a4)
- runtime: DefaultMaxVCPUs should not greater than defaultMaxQemuVCPUs (c59c367)
- runtime: current vcpu number should be limited (fa92251)
- runtime: kernel version with '+' as suffix panic in parse (52268d0)
- hypervisor: Expose the hypervisor itself (a72bed5)
- hypervisor: update tests based on createSandbox->CreateVM change (f434bcb)
- hypervisor: createSandbox is CreateVM (76f1ce9)
- hypervisor: startSandbox is StartVM (fd24a69)
- hypervisor: waitSandbox is waitVM (a6385c8)
- hypervisor: stopSandbox is StopVM (f989078)
- hypervisor: resumeSandbox is ResumeVM (73b4f27)
- hypervisor: saveSandbox is SaveVM (7308610)
- hypervisor: pauseSandbox is nothing but PauseVM (8f78e1c)
- hypervisor: The SandboxConsole is the VM's console (4d47aee)
- hypervisor: Export generic interface methods (6baf258)
- hypervisor: Minimal exports of generic hypervisor internal fields (37fa453)
- osbuilder: Update QAT driver in Dockerfile (8030b6c)
- virtcontainers: clh: Re-generate the client code (8296754)
- versions: Upgrade to Cloud Hypervisor v19.0 (2b13944)
- docs: Fix outdated links (4f75ccb)
- docs: use-cases: Update Intel SGX use case (4f018b5)
- runtime: delete useless src/runtime/cli/exit.go (7a80aeb)
- docs: Moving from EOT to EOF (09a5e03)
- docs: Write tracing documentation (b625f62)
- runtime: delete cri containerd plugin from versions.yaml (24fff57)
- snap: make curl commands consistent (2b9f79c)
- snap: add cloud-hypervisor and experimental kernel (273a1a9)
- runtime: optimize test code (76f16fd)
- runtime: use containerd package instead of cri-containerd (6d55b1b)
- docs: use containerd to replace cri-containerd (ed02bc9)
- packaging: add containerd to versions.yaml (50da26d)
- osbuilder: Call detect_rust_version() right before install_rust.sh (b4fadc9)
- docs: Updating Developer Guide re qemu-img (b8e69ce)
- versions: Add libseccomp and gperf version (17a8c5c)
- runtime: Fix random failure for TestIoCopy (f34f67d)
- osbuilder: Specify version when installing Rust (135a080)
- osbuilder: Pass CI env to container agent build (eb5dd76)
- osbuilder: Re-enable building the agent in Docker (bcffa26)
- tracing: Fix typo in "package" tag name (e61f5e2)
- runtime: Show socket path in kata-env output (5b3a349)
- trace-forwarder: Support Hybrid VSOCK (e42bc05)
- kata-deploy: add .dockerignore file (321be0f)
- tracing: Remove trace mode and trace type (7d0b616)
- agent: Do not fail when trying to adding existing routes (3f95469)
- runtime: logging: Add variable for syslog tag (adc9e0b)
- runtime: fix two bugs in rootless hypervisor (51cbe14)
- runtime: Add option "disable_seccomp" to config hypervisor.clh (98b7350)
- virtcontainers: clh: Enable the `seccomp` feature (46720c6)
- runtime: set tags for trace span (d789b42)
- package: assign proper value to redefined_string (4d7ddff)
- utils: kata-manager: Update kata-manager.sh for new containerd config (f5172d1)
- cli: Fix outdated kata-runtime bash completion (d45c86d)
- versions: Update CRI-O to its 1.22 release (c4a6426)
- versions: Update k8s & critools to v1.22 (881b996)
- agent: Make wording of error message match CRI-O test suite
# `kata-deploy`

`kata-deploy` provides a Dockerfile, which contains all of the binaries and artifacts required to run Kata Containers, as well as reference DaemonSets, which can be utilized to install Kata Containers on a running Kubernetes cluster.

Note: installation through DaemonSets successfully installs `katacontainers.io/kata-runtime` on a node only if it uses either the containerd or CRI-O CRI shim.
## Kubernetes quick start

### Install Kata on a running Kubernetes cluster

#### Installing the latest image

The latest image refers to pre-release and release candidate content. For stable releases, please use the "stable" instructions.
```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
```
#### Installing the stable image

The stable image refers to the latest stable release's content.

Note that if you use a tagged version of the repository, the stable image does match that version. For instance, if you use the 2.2.1 tagged version of the `kata-deploy.yaml` file, then version 2.2.1 of the Kata runtime will be deployed.

```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
```
For your k3s cluster, do:

```sh
$ GO111MODULE=auto go get github.com/kata-containers/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy
$ kubectl apply -k kata-deploy/overlays/k3s
```
### Ensure kata-deploy is ready

```sh
$ kubectl -n kube-system wait --timeout=10m --for=condition=Ready -l name=kata-deploy pod
```
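As a quick extra check (not spelled out in the steps above), you can also confirm which nodes `kata-deploy` has finished configuring; as described in the details section below, the DaemonSet labels each such node with `katacontainers.io/kata-runtime=true`:

```sh
# Pods of the kata-deploy DaemonSet live in kube-system.
$ kubectl -n kube-system get pods -l name=kata-deploy -o wide

# Nodes that have been configured carry the kata-runtime label.
$ kubectl get nodes -l katacontainers.io/kata-runtime=true
```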
### Run a sample workload

Workloads specify the runtime they'd like to utilize by setting the appropriate `runtimeClass` object within the Pod specification. The `runtimeClass` examples provided define a node selector to match the node label `katacontainers.io/kata-runtime:"true"`, which will ensure the workload is only scheduled on a node that has Kata Containers installed.

`runtimeClass` is a built-in type in Kubernetes. To apply each Kata Containers `runtimeClass`:

```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
```
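As a small sanity check (not part of the original instructions), list the registered runtime classes and confirm the three Kata entries are present:

```sh
# Should list kata-clh, kata-fc, and kata-qemu among the cluster's runtime classes.
$ kubectl get runtimeclass
```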
The following YAML snippet shows how to specify a workload should use Kata with Cloud Hypervisor:

```yaml
spec:
  template:
    spec:
      runtimeClassName: kata-clh
```

The following YAML snippet shows how to specify a workload should use Kata with Firecracker:

```yaml
spec:
  template:
    spec:
      runtimeClassName: kata-fc
```

The following YAML snippet shows how to specify a workload should use Kata with QEMU:

```yaml
spec:
  template:
    spec:
      runtimeClassName: kata-qemu
```
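To show where `runtimeClassName` sits in a complete manifest, here is a minimal, illustrative Deployment (the name, labels, and `nginx` image are hypothetical examples, not taken from the Kata repository) that runs under `kata-qemu`:

```sh
# Illustrative only: apart from runtimeClassName, nothing here is Kata-specific.
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-kata              # hypothetical example name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-kata
  template:
    metadata:
      labels:
        app: nginx-kata
    spec:
      runtimeClassName: kata-qemu   # run this Pod with the Kata QEMU runtime class
      containers:
      - name: nginx
        image: nginx:latest
EOF
```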
To run an example with `kata-clh`:

```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-clh.yaml
```

To run an example with `kata-fc`:

```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-fc.yaml
```

To run an example with `kata-qemu`:

```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-qemu.yaml
```

The following removes the test pods:

```sh
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-clh.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-fc.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-qemu.yaml
```
### Remove Kata from the Kubernetes cluster

#### Removing the latest image

```sh
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
$ kubectl -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
```

After ensuring kata-deploy has been deleted, clean up the cluster:

```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
```

The cleanup DaemonSet will run a single time, cleaning up the node label, which makes it difficult to check in an automated fashion. This process should take, at most, 5 minutes.
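If you want to verify that step by hand, one possible sketch is to wait until no node carries the label any more (the selector below matches any value of the label, including `cleanup`):

```sh
# Returns no nodes once the cleanup DaemonSet has removed the label everywhere.
$ kubectl get nodes -l katacontainers.io/kata-runtime

# Or poll until the label is gone, checking every 10 seconds.
$ until [ -z "$(kubectl get nodes -l katacontainers.io/kata-runtime -o name)" ]; do sleep 10; done
```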
After that, let's delete the cleanup DaemonSet, the added RBAC and runtime classes:

```sh
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
```
#### Removing the stable image

```sh
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
$ kubectl -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
```

After ensuring kata-deploy has been deleted, clean up the cluster:

```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stable.yaml
```

The cleanup DaemonSet will run a single time, cleaning up the node label, which makes it difficult to check in an automated fashion. This process should take, at most, 5 minutes.
After that, let's delete the cleanup DaemonSet, the added RBAC and runtime classes:

```sh
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stable.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
```
## `kata-deploy` details

### Dockerfile

The Dockerfile used to create the container image deployed in the DaemonSet is provided here. This image contains all the necessary artifacts for running Kata Containers, all of which are pulled from the Kata Containers release page.

Host artifacts:
- `cloud-hypervisor`, `firecracker`, `qemu-system-x86_64`, and supporting binaries
- `containerd-shim-kata-v2`
- `kata-collect-data.sh`
- `kata-runtime`

Virtual Machine artifacts:
- `kata-containers.img` and `kata-containers-initrd.img`: pulled from Kata GitHub releases page
- `vmlinuz.container` and `vmlinuz-virtiofs.container`: pulled from Kata GitHub releases page
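To spot-check that these artifacts actually landed on a node, the sketch below assumes the `/opt/kata` install prefix used by the static Kata packages; treat the paths as an assumption rather than something stated above:

```sh
# Run on a node where kata-deploy has completed; paths assume the /opt/kata prefix.
$ ls /opt/kata/bin                      # expect kata-runtime, containerd-shim-kata-v2, hypervisor binaries
$ ls /opt/kata/share/kata-containers    # expect kata-containers*.img and vmlinuz*.container
```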
### DaemonSets and RBAC

Two DaemonSets are introduced for `kata-deploy`, as well as an RBAC to facilitate applying labels to the nodes.

#### Kata deploy

This DaemonSet installs the necessary Kata binaries, configuration files, and virtual machine artifacts on the node. Once installed, the DaemonSet adds a node label `katacontainers.io/kata-runtime=true` and reconfigures either CRI-O or containerd to register three `runtimeClasses`: `kata-clh` (for Cloud Hypervisor isolation), `kata-qemu` (for QEMU isolation), and `kata-fc` (for Firecracker isolation). As a final step the DaemonSet restarts either CRI-O or containerd. Upon deletion, the DaemonSet removes the Kata binaries and VM artifacts and updates the node label to `katacontainers.io/kata-runtime=cleanup`.
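For a concrete idea of what "reconfigures containerd" looks like, a node set up this way gains extra runtime handler entries in its containerd CRI configuration. The section name and `runtime_type` value shown below follow containerd's CRI plugin config format and are an illustration, not a quote from the kata-deploy scripts:

```sh
# Illustrative spot-check on a containerd node after kata-deploy has run.
$ grep -A 1 'runtimes.kata-qemu' /etc/containerd/config.toml
# Expect a section shaped roughly like:
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
#     runtime_type = "io.containerd.kata-qemu.v2"
```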
#### Kata cleanup

This DaemonSet runs if the node has the label `katacontainers.io/kata-runtime=cleanup`. It removes the `katacontainers.io/kata-runtime` label and restarts either the CRI-O or containerd `systemd` service. These resets cannot be executed during the `preStopHook` of the Kata installer DaemonSet, which is what necessitates this final cleanup DaemonSet.