Kata Containers
Welcome to Kata Containers!
This repository is the home of the Kata Containers code for the 2.0 and newer releases.
If you want to learn about Kata Containers, visit the main Kata Containers website.
Introduction
Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs.
License
The code is licensed under the Apache 2.0 license. See the license file for further details.
Platform support
Kata Containers currently runs on 64-bit systems supporting the following technologies:
Architecture | Virtualization technology
---|---
`x86_64`, `amd64` | Intel VT-x, AMD SVM
`aarch64` ("`arm64`") | ARM Hyp
`ppc64le` | IBM Power
`s390x` | IBM Z & LinuxONE SIE
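On `x86_64` hosts, a quick informal way to confirm that one of the virtualization technologies listed above is present is to look for the `vmx` (Intel VT-x) or `svm` (AMD SVM) CPU flags. This is only a rough sanity check; the `kata-runtime check` command described in the next section is the authoritative test:

```bash
# Count logical CPUs that advertise Intel VT-x (vmx) or AMD SVM (svm).
# A result of 0 means hardware virtualization is unavailable or disabled in firmware.
$ grep -cE 'vmx|svm' /proc/cpuinfo
```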
Hardware requirements
The Kata Containers runtime provides a command to determine if your host system is capable of running and creating a Kata Container:
$ kata-runtime check
Notes:

- This command runs a number of checks, including connecting to the network to determine if a newer release of Kata Containers is available on GitHub. If you do not wish this check to run, add the `--no-network-checks` option.
- By default, only a brief success / failure message is printed. If more details are needed, the `--verbose` flag can be used to display the list of all the checks performed.
- If the command is run as the `root` user, additional checks are run (including checking if another incompatible hypervisor is running). When running as `root`, network checks are automatically disabled.
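For example, combining the options above gives a more detailed check that skips the GitHub release lookup. Exact flag placement can vary slightly between releases, so treat this as a sketch rather than a guaranteed invocation:

```bash
# Run the host capability checks with full detail and without network access.
$ kata-runtime check --verbose --no-network-checks
```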
Getting started
See the installation documentation.
Documentation
See the official documentation including:
Configuration
Kata Containers uses a single configuration file which contains a number of sections for various parts of the Kata Containers system including the runtime, the agent and the hypervisor.
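As a rough illustration, the section headers of that file can be listed directly. The path below is a commonly used default location and is an assumption here; packaged installations may ship the file elsewhere (for example under `/etc/kata-containers/`):

```bash
# List the top-level sections of the Kata Containers configuration file.
# NOTE: the path is an assumed default; adjust it for your installation.
$ grep '^\[' /usr/share/defaults/kata-containers/configuration.toml
```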
Hypervisors
See the hypervisors document and the Hypervisor specific configuration details.
Community
To learn more about the project, its community and governance, see the community repository. This is the first place to go if you wish to contribute to the project.
Getting help
See the community section for ways to contact us.
Raising issues
Please raise an issue in this repository.
Note: If you are reporting a security issue, please follow the vulnerability reporting process.
Developers
See the developer guide.
Components
Main components
The table below lists the core parts of the project:
Component | Type | Description
---|---|---
`runtime` | core | Main component run by a container manager and providing a containerd shimv2 runtime implementation.
`runtime-rs` | core | The Rust version of the runtime.
`agent` | core | Management process running inside the virtual machine / POD that sets up the container environment.
`libraries` | core | Library crates shared by multiple Kata Containers components or published to crates.io.
`dragonball` | core | An optional built-in VMM that brings an out-of-the-box Kata Containers experience with optimizations for container workloads.
`documentation` | documentation | Documentation common to all components (such as design and install documentation).
`tests` | tests | Excludes unit tests, which live with the main code.
Additional components
The table below lists the remaining parts of the project:
Component | Type | Description
---|---|---
`packaging` | infrastructure | Scripts and metadata for producing packaged binaries (components, hypervisors, kernel and rootfs).
`kernel` | kernel | Linux kernel used by the hypervisor to boot the guest image. Patches are stored here.
`osbuilder` | infrastructure | Tool to create "mini O/S" rootfs and initrd images and kernel for the hypervisor.
`agent-ctl` | utility | Tool that provides low-level access for testing the agent.
`trace-forwarder` | utility | Agent tracing helper.
`runk` | utility | Standard OCI container runtime based on the agent.
`ci` | CI | Continuous Integration configuration files and scripts.
`katacontainers.io` | | Source for the katacontainers.io site.
Packaging and releases
Kata Containers is now available natively for most distributions. However, packaging scripts and metadata are still used to generate snap and GitHub releases. See the components section for further details.
Glossary of Terms
See the glossary of terms related to Kata Containers.